To understand a bit more about how people decide whether they want to use a voice-activated or touchscreen interface, I spoke with Alex Rudnicky, an expert in human-computer interaction at Carnegie Mellon University. The process he described is something like a marketplace, with voice and touchscreen interfaces competing for a user’s attention: The user weighs a host of factors, some circumstantial (am I in a crowded bar that will make it hard for Siri to hear me?) and others about the inherent nature of the task (Siri may prove better at “find a cheap Chinese restaurant within a mile” than Yelp’s interface, which requires you to sort through many options to nail that request).
This description fits well with some of Parks Associates’ qualitative findings, in which many people said they use Siri when driving or when their hands are otherwise full, John Barrett of Parks Associates told me. Calculating the costs and benefits of each, a user will go with whatever’s easier, even if, Rudnicky says, they find the AI “annoying.” (Of course, this raises the question of just what it means for AI to be “annoying.” It seems that the robotic qualities people find annoying might cease to be so if the AI were to just do its job effectively and quickly.)
Beyond these sorts of functional concerns, users may also weigh cultural norms. A study done by human-computer interaction experts at Carnegie Mellon and SUNY Buffalo found that people preferred to give feedback to a robot via text rather than vocally (pdf), perhaps because they became sheepish talking to a robot around strangers. But if you’ve ever taken a public bus or subway in recent years, you know how fragile such inhibitions are. Rudnicky said we can expect self-consciousness about talking to robots to melt away over the next few years, much as it has for talking on a cellphone.