I’m trying a little experiment. This is a blog post written entirely using Siri. I am a big believer in the potential for voice and other forms of human-computer interaction to completely change the way we do things.
I’m going to try to write this post without making any edits are modifications I’ve arty cheated once as this has stopped working lithotomist finished with my post because iPods and I typically don’t think that’s enough but anyway we’ll see what happens.
My last two attempts at dictation just failed. Let me keep going. I think there are three things that this experience is getting me to think about.
At this point, this whole effort completely fell apart. Siri couldn’t understand me, and it seemed like I could talk for minutes before I got an “I don’t understand you” fail message and basically have nothing at all dictated. Wow, that was a brutally weak effort.
Anyway, what I wanted to say was that this experiment makes me wonder about a few things.
1. I wonder in what contexts voice control will really be native. I can see lots of use cases in professional contexts and for folks who might be physically impaired. But I wonder how many areas of normal life this will fit into. The car is one obvious one that I’ve blogged about before, and I think it’s really promising. I could see it working for tasks that are very bite-sized, like a tweet or a personal reminder. But are there going to be other mainstream use cases that I’m not thinking of that will seem completely obvious in a few years? I think so, I’m just not sure what they are.
2. I think the biggest limitation that I can’t really get my head around is the issue of background noise. I really can’t imagine a realistic solution for this, but if there isn’t a solution, I think that dramatically limits a) whether voice control will be broadly applicable and b) whether voice will be a default with little or no other alternatives.
3. The more interesting question this brings up is how our patterns of thought will evolve if voice control does become a standard thing, especially for longer-form work/commands. Even in writing this post, I realize how much of my thought process occurs in starts and stops, tests and revisions. It’s only gotten more so with personal computing. I remember that when I used to hand-write letters or papers, there was that much more friction in editing something, and thus I was much more measured in my thinking. But digitally, my process is much more one of throwing stuff on the screen and editing later. Using speech recognition harkens back to that older behavior, and I wonder if that will be a detriment to its adoption or a positive thing. We’ll see. My gut also says that forcing my brain to think in a more measured and intentional fashion is probably a positive thing, although I suspect a neuroscientist would do a much better job explaining why.