The following is a post that I drafted on 9-Jan 2017, more than a year ago. Given the reported excitement over Alexa at CES 2018, I thought it would be a good idea to publish it and to see how things have changed in a year. I am actually a bit surprised at how relevant my observations remain today. I have left it as it was a year ago, and hence there are parts that are obviously unfinished.
One of the more interesting things in tech this month was the presence of Alexa-enabled products at CES. Although this seems to have taken quite a few people by surprise, it’s actually quite easy to see how this happened, and why other aspects of an AI-enabled interface (answering questions via Google’s Knowledge Graph, voice control of your smartphone, etc.) have not seen anywhere near the same excitement. One critical caveat, though: so far we have only seen excitement from home appliance vendors, not from consumers; we have not yet seen real demand.
The following is a summary of my thoughts on this.
Who has the capability for a voice UI?
- More than a year ago, I wrote a comment on Techpinions that “Amazon, by offering Echo voice to other companies, is essentially making it the new AWS. This could result in even small startups using sophisticated voice technology”. This is essentially what we are seeing at CES. We are seeing many startups and non-tech companies integrate with Alexa to provide a voice UI to their products.
- Essentially, nobody needs in-house voice recognition technology anymore to provide a voice UI to control their product. Anybody can do it. Importantly, since it will probably be difficult to significantly differentiate on voice, voice UI itself will become a commodity.
Who needs a voice UI?
- One huge problem with AI, and new category products in general, is that very few people need them. Impressive as Google Now may be, people have not yet shown much excitement over an assistant that peeks into your email and notifies you of the few events and appointments that it has managed to decipher correctly. There has been little need for assistants that try to learn everything about you and predict what you may want.
- It turns out that the people who wanted a voice UI the most weren’t end customers. The ones with the most use for a voice UI were vendors; they have always wanted something new and shiny to lure consumers into buying a new TV, a new refrigerator, a new microwave, etc. The perfect example would be 3D and 4K TVs, the darlings of CES a few years ago that ended up being duds.
- What Amazon has provided is a promising (but as yet unproven) solution. It offers hope for the eternal question: how do you get customers to spend a few more dollars on your commodity product? Home appliance vendors have forever been adding small incremental features to lure consumers into replacing their 5-year-old devices, and to get them to buy higher-priced models rather than the low-end offerings: wider TVs, 3D, 4K, Internet connectivity, lower energy consumption, SD slots for viewing photos, etc. Despite being duds in the consumer market, features like 3D and 4K were actually big topics at previous CESs.
How it adds up
- Amazon has basically provided a way to add a voice-controlled interface to otherwise mundane devices, with very little effort and investment. Alexa-enabled devices can now tout headline-grabbing titles like “AI-powered washing machine” and pretend to be something that’s genuinely useful and innovative.
- Importantly, we have to remember that the consumer is completely absent from this argument. The need for these AI-powered devices comes from the vendors, not the end users. Like 3D/4K TVs, the excitement could be very short-lived.
- Therefore, we have to keep in mind that this excitement is completely independent of how well Alexa’s AI performs.
- More than anything, this tells me how quick adoption of a product can be when it hits a clear need. Conversely, it shows how hard it is to get the mainstream to share enthusiasm for a product without one. Here, we must keep in mind that AI is not the key, because a need for it has not been demonstrated. The key is the ability to be buzzword compliant with minimum effort.
- One point of discussion is whether Amazon has gained a significant head start, or whether Google or Apple could easily catch up if and when they open up their ecosystems. As mentioned above, the motive for home appliance vendors to be HomeKit/Siri or Google Home compatible is very strong. As long as Apple or Google provide an easy way to be compatible, vendors will most likely rush in just as they have done for Amazon. Keep in mind that making your appliance compatible is probably much easier than developing an independent app, something that vendors still do.
- Regarding the above, it’s also important to note that the replacement cycle for home appliances is very long, so early leads are unlikely to translate into a large share of the installed base.
- The smart home has hardly taken off, and some pundits have attributed this to the lack of a central controller. If that is truly the case, then we might see a surge in smart home appliance adoption thanks to Alexa. However, we must also be aware that adoption of Alexa-compatible devices alone is unlikely to be a reliable measure of smart home adoption, unless there is a clear price difference (with 3D TVs, even people who did not want 3D bought them). We must be careful in how we interpret the numbers.
- The broad question here is what AI is really good for, and whether the popularity of Alexa at CES gives us a hint. Why would people want AI at all, and what level of AI would be necessary to do the job? Although far from decisive, if the main job of Alexa ends up being to control home appliances through voice, without inferring or predicting your needs by peeking into your daily habits, email and calendar events, then that would suggest that a glorified, voice-controlled macro library is what people want and need. That is, unless the task is complex, humans can tell computers what they need, and there isn’t a clear need for computers to be clever. If this is indeed the case, then Google’s Knowledge Graph and huge repositories of users’ private information may not be as useful as one might think.
- Automation is already used to determine the best washing conditions for a specific load of laundry, and current AI may not significantly improve on that.
AI does not have a job nor does it solve a need. Therefore it cannot be a product. It is only a technology.
Alexa at CES shows how it might be a product, or rather what AI might be useful for. Of course, ultimately the user will decide.
Alexa is something that will help drive sales of high-end commodity products.
4 thoughts on “AI Is Not A Product And What Alexa Taught Us At CES 2017”
I think both voice UI and smart assistant have a strong threshold effect, compounded by weak discoverability and iffy feedback. When they work fine, they’re… magical. I love dictating my texts and telling my TV to put on the news while I’m cooking. The issue is, if this works only 95% of the time, it’s no longer magical, it’s a hassle, all the more so because when something doesn’t work, it’s hard to be sure if the phrasing is wrong, if the mike is too far, if background noises/speech confused the service… Sometimes it’s even hard to realize something went wrong, “OK Google” has a way to trigger on its own, it sometimes dials the wrong person or the wrong phone for the right person… and there’s no way to know things went wrong but to look at the screen, if there is a screen.
I’m worried the voice UI / smart assistant thing will end up as big a mess as the touchscreens and fancy UIs that have been ruining our appliances for the last decade. There’s no in-between: either my microwave has 2 knobs for strength and duration, or it lets me tell it to “boil this” and “cook that”, and it had better know that I like my microwaved carrots slightly undercooked, and that by boiling I mean bring to a simmer. As of today, Cortana is still an idiot that understands “set an alarm in an hour” as “set an alarm at the next hour on the dot”. Maybe I should try other wordings (“in 60 minutes?”), but I just dumped it. “Ok Google” on a phone can barely do anything offline (it can call, not sure it can text, I might have had the wording wrong).
So making it easy to tack voice onto any gizmo sounds like giving random OEMs plenty of rope to hang themselves with. Even the 4 main players aren’t there yet with very basic stuff. Making devices voice-controlled sounds like it lowers barriers to usage, until one realizes we now have to memorize 10 commands for umpteen gizmos, and can never be sure of their current state and compliance to our orders. We’ll probably get there eventually, in about the same time frame as home computers got there … with mobile phones.
Thanks and I totally agree on your points.
I find it quite amusing that we’re still saying the same things as last year, and nothing has really changed. I might write a post declaring “peak Alexa” soon, maybe a month or two after the HomePod goes on sale. It’s not that I think the HomePod will be a great hit, but Apple sometimes has really interesting ideas that make you think again.
Actually, I was thinking of that, starting small w/ voice-controlled media playback is probably smart at this stage. I just checked, I can tell gAssistant to “play Instead by Madeleine Peyroux”, it even asks me which music player it should use and correctly feeds the song to third-party players.
Maybe we need an anthropomorphic personification. Not a disembodied UI, but a cuddly Aibo or a snarky sprite or a rolling BB-8 as the gateway to the AI. It’ll make interactions (I’m talking to you because I’m looking at you) and feedback (I’m sure I can get it to do an eye roll ^^) flow more naturally. Plus they might be able to display/project stuff, which is often quicker.
That makes the problem and solution much more complex and long-term, and Google Glass-level wrong for not even naming their assistant.