Article originally published on LinkedIn on 5 December 2018
Has anyone done any research on people’s physiological reactions when their phone calls are answered by an interactive voice response (IVR) system vs. a person? Personally, I’m sure “voice misrecognition” systems raise my blood pressure – like facing a gauntlet of inevitable miscues.
The reason behind this frequent frustration is a problem common to all automated interaction based on statistics and algorithms (autocorrect is another aggravating example): systems are trained on typical interactions, but for some of us at least, routine business tends to be done over the internet, and the phone is reserved for unusual questions or troubleshooting issues that don’t fit the usual categories. The potential for misunderstanding human voice input complicates things further. For these situations, IVRs seem set up to fail – non-typical interaction requests plus less familiar terms. (“Representative!”)
For those who’ve invited an “artificial intelligence” (AI) system (Alexa, Cortana, Siri, Google Assistant) into their home, maybe the comfort level with such voice interaction is higher, or maybe the range of tasks and interaction requirements of those systems is designed to be more open-ended than that of an IVR.
But that only widens what seems to be an interesting area of investigation. Physiological response to IVR and voice AI could be studied across the full sequence of a computerized voice interaction – from knowing it’s coming, to initiating the communication, through the exchanges themselves, and afterwards. What would the results tell us about the design, use, limitations, and unintended effects of intelligent automated voice interaction?