Spring is making a late arrival in New York this year, and the delay is beginning to take its toll. Last week, as I prepared to drive back from the balmy air of Virginia to New York, I decided to take the top off my Jeep for a taste of spring. Despite knowing it was a rainy day back home, I took my chances, figuring I could judge how far to push it before pulling over to put the roof back on.
Naturally, I sought the advice of my smartphone’s digital assistant. After all, this seemed like the perfect task for a product that everyone tells me is clever and packed with intelligence, albeit of the artificial variety. Silly me. What I thought was a simple question, “When will I encounter rain on my journey?” (or, better still, “Warn me 15 minutes before I encounter rain”), left my digital assistant confused and, sadly, referring me to a variety of websites that could not answer it. Besides, I was driving, and hands-free was my only option. I did not want a website; I wanted an answer. In the end, my digital assistant let me down… and I may have gotten a little damp on the ride home.
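To be clear, I wasn’t asking for magic. Answering my question is, at heart, a matter of joining my route with a forecast. The sketch below (in Python, with made-up names like Waypoint, first_rain and warn_time, and a forecast lookup that you supply yourself) shows roughly the kind of computation I had in mind; it’s purely illustrative and says nothing about how any real assistant actually works.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, Optional, Sequence


@dataclass
class Waypoint:
    """A point along the drive, with the time I expect to reach it."""
    lat: float
    lon: float
    eta: datetime


# A forecast here is just a function: (lat, lon, time) -> chance of rain, 0.0 to 1.0.
# A real assistant would call a weather service; this sketch takes whatever you pass in.
Forecast = Callable[[float, float, datetime], float]


def first_rain(route: Sequence[Waypoint], forecast: Forecast,
               threshold: float = 0.5) -> Optional[Waypoint]:
    """Walk the route in order and return the first point where rain looks likely."""
    for point in route:
        if forecast(point.lat, point.lon, point.eta) >= threshold:
            return point
    return None


def warn_time(route: Sequence[Waypoint], forecast: Forecast,
              lead: timedelta = timedelta(minutes=15)) -> Optional[datetime]:
    """When to tell the driver to pull over: 'lead' minutes before the rain starts."""
    hit = first_rain(route, forecast)
    return hit.eta - lead if hit else None
```

That’s it: a route, a forecast, and a 15-minute head start. The hard parts (knowing my route, fetching a good forecast) are things a phone already does inside its maps and weather apps; the assistant just never puts them together into an answer.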
A couple of years ago, I probably wouldn’t have expected a useful result from a digital assistant, but I’ve been slowly persuaded to think bigger. We’ve gone from positioning these ‘assistants’ as simply a voice interface to something rather more intelligent that, when asked a question, will provide an answer.
I would posit that a couple of different issues lead to consumer disappointment. The first is the over-hyped artificial intelligence now promised in almost every tech device. The second, I suspect, is the need for a cleaner break from the ‘old’ technology inside the smartphone. The smartphone earned its place in our hearts by providing access to the internet via websites and apps. As a result, the new assistants tend to offer up those sources as legitimate answers, rather than recognizing that you want the answer itself, immediately.
I expect big changes in the next year or so, as digital assistants migrate from smartphone-based solutions to a plethora of devices, such as wearables, that don’t have large screens. Without those screens, the creator mindset will have to change significantly. Indeed, one could argue that we’re already seeing that: Alexa and Google Home were originally designed around a speaker, not a screen, yet both keep the safety net of a smartphone app for more complex questions. To continue to evolve, manufacturers and developers will need to eliminate the reliance on a screen – even as a fallback – and enable digital assistants to operate without one.
Who wins in this space is by no means a done deal (even though Alexa has a head start in third-party integration), and failure to fix these issues will relegate any digital assistant to mediocre use beyond the simplest of tasks. But maybe that’s okay in the near term, as long as we all stop over-selling the products and promising the impossible. After all, we are very much programmed to work around our phones’ limitations, rather than the other way around. Take, for example, the man I encountered at a Starbucks on the drive back to New York. He was busily restarting his phone and re-entering his password so that he could purchase a coffee using an app. Cash or credit would’ve been far quicker and easier... but where’s the fun in that?