Voice user interface (VUI) design is advancing rapidly and growing in popularity. Digital personal assistants such as Amazon's Alexa, Apple's Siri, Google Now, and Microsoft's Cortana are constantly evolving as they compete to be the best voice assistant on the market.
Since the launch of Amazon's Echo voice assistant device in December 2014, approximately 8.2 million units have been sold, and the use of voice search continues to scale. MindMeld's 2016 Internet Trends Report states that 60% of people began using voice search within the past year, and 41% within the past six months.
BCC Research predicts that the global market for voice recognition technologies will grow from $104.4 billion in 2016 to $184.9 billion in 2021, an annual growth rate of 12.1%.
This wave is being driven by technological advances in deep learning, which let developers build systems with outstanding accuracy for tasks like speech recognition, language understanding, and image analysis.
In 2016, Microsoft announced that its most recent speech-recognition system had reached parity with human transcribers in identifying human speech.
Voice technology is advancing so quickly that it is changing the way we interact with our devices. While many common UX design methods still apply (user research, persona creation, user flows, prototyping, usability testing, and iterative design), a few key differences for voice UIs should be noted.
If you're planning your first voice user interface design project, here are five essential tips to guide you:
Conversational – Talking vs typing
It's essential that a voice UI recognizes natural speech and accepts a broad range of inputs. Typing and speaking the same request are quite different: rather than a few keywords, people use complete sentences or questions.
Picture a Sunday morning when you type "brunch nearby" into your phone: a list of relevant places appears on your screen. But when you talk to a voice service, you're more likely to ask, "Alexa, what are the best places for brunch nearby?" To succeed, the machine must be capable of recognizing and reacting to thousands of different commands.
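As a minimal sketch of the idea, many different phrasings can be funneled into one intent before any action is taken. The patterns and intent names below are illustrative assumptions, not any real assistant's API:

```python
import re
from typing import Optional

# Illustrative patterns only: a real system would use a trained NLU model,
# but the principle is the same: many utterances map to one intent.
INTENT_PATTERNS = {
    "find_food": [
        r"\bbrunch\b.*\bnearby\b",
        r"\bbest places?\b.*\bbrunch\b",
        r"\bwhere\b.*\b(eat|brunch|lunch)\b",
    ],
}

def match_intent(utterance: str) -> Optional[str]:
    """Return the first intent whose pattern matches the utterance."""
    text = utterance.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return None
```

Both the terse typed query and the conversational spoken one land on the same intent, which is what lets the interface feel natural rather than command-driven.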
Make recognition intuitive
Nobody wants to learn a hundred commands to execute particular tasks. Be mindful not to create a system that is complex, unfriendly, and takes too long to become familiar with.
Machines should be capable of remembering us and becoming more productive with each use. Suppose you ask your device for directions:
"Alexa, can you give me directions home?"
“Sure, where is your home?”
“You know where my home is!”
“I’m sorry, you’ll need to repeat that.”
This exchange creates a disappointing experience that is neither satisfying nor successful. If the system had retained your home address, it could have provided directions immediately, perhaps as a brief voice response paired with a visual element like a map. Delivering an experience like this is rewarding and satisfying. As with graphical user interfaces (GUIs), designers must get intuitive design right.
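The fix in the example above is simply persistence: the assistant keeps a small user profile between requests. A minimal sketch, with hypothetical names (no real assistant SDK is being shown):

```python
# Hypothetical user-profile store: remembered facts survive across requests,
# so the assistant never asks twice for the same information.
class UserProfile:
    def __init__(self):
        self._facts = {}

    def remember(self, key, value):
        self._facts[key] = value

    def recall(self, key):
        return self._facts.get(key)

def handle_directions_request(profile):
    """Ask for the home address only if it was never provided before."""
    home = profile.recall("home_address")
    if home is None:
        return "Sure, where is your home?"
    return f"Starting directions to {home}."
```

The first request prompts the user once; every request after that goes straight to directions, which is the "more productive with each use" behavior the section describes.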
Approachability – Analyze what users need
Two essential things make voice interactions successful: the device recognizing the person speaking, and the speaker understanding the device. Designers must always account for potential speech impediments, auditory impairments, and every factor that can influence the interaction, such as cognitive disorders. Even language, accent, or tone of voice affects how the device interprets a speaker.
As a designer, be deliberate about where and how you use voice so that anyone can use the product, regardless of how they speak or how they listen.
Consider the user's surroundings
Talking to your phone over the noise of a loud, busy train is one example of why it's necessary to recognize how different conditions influence the type of interface you design. If the primary use case is driving, voice is an excellent choice: the user's hands and eyes are busy, but their voice and ears are not. If the app is used somewhere noisy, it's better to rely on a visual interface, since surrounding noise makes voice recognition and hearing more challenging.
If your app is used both at home and on public transport, it is essential to provide an option to switch between a voice and a visual interface.
Feedback – two-way interaction
In a normal conversation, we let each other know we're engaged by nodding, smiling, and other gestures. The same should be delivered to users who interact with your machine. It's essential to respect this in your design, so the user knows their device is switched on and paying attention.
The system must always inform users about what is happening. Furthermore, it's critical to consider how your user will know that the system is listening, in a non-invasive manner. You can signal this with a sound effect or an indicator light, as you prefer.
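Conceptually, this is a mapping from system state to a feedback cue the user can perceive. The states and cue names below are hypothetical, just to make the idea concrete:

```python
# Hypothetical state-to-cue table: every state the system can be in
# has a defined, non-invasive signal (or deliberately none when idle).
STATE_CUES = {
    "idle": None,
    "listening": "light_ring_on",
    "thinking": "light_ring_pulse",
    "speaking": "light_ring_dim",
}

def cue_for(state):
    """Return the feedback cue for a system state; unknown states are a bug."""
    if state not in STATE_CUES:
        raise ValueError(f"unknown state: {state}")
    return STATE_CUES[state]
```

Keeping the table exhaustive is the point: if a new state is added without a cue, the lookup fails loudly instead of leaving the user guessing what the device is doing.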