I took Amazon's Whole Foods bait the other day and bought an Echo and a Dot while buying groceries (yup... weird). Here are my first impressions after about a week's worth of use.
I set Alexa up in a new space that doesn't have much ambient noise; no kiddos, no pets (barking). I'm extremely bearish on voice-based computer interaction in real-world, day-to-day environments. I don't think it will ultimately work, for two reasons. One: background/ambient/adjacent noise pervades the bulk of life, and machines can't filter it out (we're not even close on this front). Two: unlike intrusive technologies to date, audio/voice is intrusive and active enough that, socially and culturally, I think the behavioral shifts required for mass adoption are too abrasive. If you and I are hanging out having a conversation, it's one thing for me to pull out a screen and mess with it while passively paying attention to what you're saying, and quite another for me to full-stop pause our interaction with a visual or verbal cue, engage a computer (another entity, really), then re-engage with you. It's awful... you can try it today with Siri.
That's another post though. Onto Alexa.
I went with Alexa over Google Home because the number of Alexa integrations dwarfs Google's, and I'm all about integrations. Voice recognition feels as good as Google's though.
Getting things set up was really simple, and I appreciated the delayed software-update approach. Of course there's a s/w update (there always is w/ IoT devices), but there's nothing worse than being forced into one immediately upon setting something up for the first time. Nice touch, Amazon... I hope all devices move to this delayed-initial-update approach.
You add capabilities/integrations to Alexa by adding what they call Skills. Just think "extensions" or "integrations."
One of the Skills I added required inputting an API token over voice. That was interesting. Imagine verbally telling a computer "A56D8F2298OG9234SHE." The instructions suggested I use the NATO phonetic alphabet, so I went and learned it, and then "input" my token. It took a few tries, but I got it in there. This particular Skill required some other settings configuration, so I went on to say things like "SET UNITS Imperial." Configuring software using voice is just wild.
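Reading a token aloud really just amounts to a character-to-word mapping. Here's a minimal sketch of that spelling step in Python; the `spell_token` helper is mine, purely for illustration, not part of any Skill:

```python
# NATO phonetic alphabet: one code word per letter; digits are read as-is.
NATO = {
    "A": "Alfa", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo",
    "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett",
    "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar",
    "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "Xray",
    "Y": "Yankee", "Z": "Zulu",
}

def spell_token(token: str) -> str:
    """Return the spoken spelling of a token, character by character."""
    words = []
    for ch in token.upper():
        words.append(NATO.get(ch, ch))  # anything not a letter falls through unchanged
    return " ".join(words)

print(spell_token("A56D"))  # → "Alfa 5 6 Delta"
```

In practice you'd read these words to Alexa one at a time, which is exactly why a 19-character token takes a few tries.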
I added my car manufacturer's Skill, and now I can interact with my car via Alexa. This has actually been useful. Instead of firing up the iOS app for my car, I can just say things like "Alexa, ask to lock my car," "Alexa, ask to start climate control," etc.
Home automation stuff is fun too. "Alexa, lock the front door." "Alexa, turn on the kettle." "Alexa, turn on the Phonograph." (The first two Skills are made possible by Wink, the last by Logitech Harmony.)
I am weary of pulling out my mobile device to do all of my home automation stuff. In general I'm just sick of screens and remotes, so starting to do things via voice is a welcome reprieve. This brings me to the more interesting part of this post.
The degree to which my brain has been wired for visual/reading input and kinesthetic (keyboard/touch-screen) output is more significant than I realized, and pushing myself to use voice has provided the contrast to really perceive it.
With voice, there is no multitasking; everything is serial. There are no other open tabs to use as background or reference while you do your task. With voice, you have to have all the information in your head before engaging. So much of our online/computerized world today allows us to simply copy/paste (metaphorically speaking) our way through life.
Alexa is causing me to use memory in ways I haven't had to in a long time. It even got me to learn the NATO phonetic alphabet so I could better speak to the computer.
Curious to see where this all goes.