We started experimenting with voice as a patient interface early this year, and showed a solution with a voice-enabled total-joint care plan to a select group of customers and partners at HIMSS 2017. Recently we were finalists in the Merck-sponsored Alexa Diabetes Challenge, where we built a voice-enabled IoT scale and diabetic foot scanner, as well as a voice-powered interactive care plan.
Over the course of the challenge we tested the voice experience with people with Type 2 diabetes. We also installed the scale and scanner in a clinic, and we found that clinicians wanted to engage with voice too. Voice is a natural fit in the clinical setting: there's no screen to get in the way of interactions, and people are used to answering questions. Voice also works well in the home.
However, voice isn't always the best interface, which is why we think multimodal care plans spanning voice, text, mobile, and web can deliver a more comprehensive solution. Because a spoken conversation is easier to overhear than a smartphone or computer screen is to glance at, mobile or web are often better interfaces depending on the person's location (for example, on public transit) or the task at hand (for example, reporting the status of a bowel movement). Still, voice has many great healthcare applications, and benefits for certain interactions and populations.
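To make the multimodal idea concrete, here is a hypothetical sketch of channel selection: route an interaction to voice, mobile, or web based on where the person is and how private the task is. The function name, locations, and rules are purely illustrative assumptions, not our product logic.

```python
def pick_channel(location: str, task_is_private: bool) -> str:
    """Illustrative routing of a care-plan interaction to a channel.

    A private task, or a public setting, favors a personal screen;
    voice shines when the person is at home (or in the clinic).
    """
    if task_is_private or location in {"public transit", "work"}:
        return "mobile"  # a screen keeps sensitive prompts from being overheard
    if location in {"home", "clinic"}:
        return "voice"   # hands-free and conversational
    return "web"         # fall back to the broadest-reach channel
```

A real system would weigh many more signals (time of day, device availability, user preference), but the shape of the decision is the same.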
In our testing, we found that both patients and providers really enjoyed the voice interactions and wanted to continue the conversation. The exchanges felt very natural, and people used the same language they would use with a human. For example, when asked to let the voice-powered scale know when he was ready to have his foot scan, one person responded with:
“Ready when you are.”
This natural user interface presents challenges for developers. It's hard to model all the possible responses and utterances that a person might use. Our application would answer to "ready," "sure," "yes," and "okay," but the "when you are" caused it some confusion.
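A minimal sketch of why that utterance trips up a naive handler: exact matching against a small synonym list misses "Ready when you are," while a more forgiving keyword check catches it. This is an illustrative toy, not our production code or the Alexa intent model.

```python
# Synonyms the skill accepts as a "ready" confirmation (illustrative list).
READY_SYNONYMS = {"ready", "sure", "yes", "okay", "ok", "yep"}

def exact_match_ready(utterance: str) -> bool:
    """Match only when the whole utterance is a known synonym."""
    return utterance.strip().lower().rstrip(".!?") in READY_SYNONYMS

def keyword_match_ready(utterance: str) -> bool:
    """Match when any single word in the utterance is a known synonym."""
    words = utterance.lower().replace(",", " ").split()
    return any(word.strip(".!?") in READY_SYNONYMS for word in words)
```

"Ready when you are." fails the exact match but passes the keyword match; real platforms address the same gap with larger sample-utterance sets and statistical intent matching.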
Possibly the most important facet of voice is that the connection people form with it is extremely strong, and unlike mobile, voice is not yet associated with the need to follow up, check email, or respond to other alerts. (Notifications on voice devices could change this.)
“Voice gives the feeling someone cares. Nudges you in the right direction”
Creating a persona for voice is important, and relying on the personas created by experts like the Alexa team is probably the best way for beginners to start.
“Instructions and voice were very calm, and clear, and easy to understand”
Calm is the operative word here. Visual user interfaces can be described as clean, but calm is definitely a personification of the experience.
Voice is often seen as a more accessible experience, possibly because its best practices favor using fewer words and constantly confirming meaning, for example: "You want me to buy two tickets for Aladdin at 7:00 pm. Is this correct?" We often hear pushback on mobile apps for seniors, but haven't heard the same for voice.

However, during our testing, a senior who was hard of hearing told us she couldn't understand Alexa and thought that she talked too quickly. While developers can insert pauses to set the speed of prompts and responses in a conversation, the same speed would then be used for every user of the skill, which might be too slow for some and too fast for others. Rather than building different skills for different hearing and comprehension speeds, it would be great if end users could define this setting themselves, so that we can build usable interfaces for everyone.
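If platforms exposed a per-user speaking-rate preference, a skill could apply it by wrapping each prompt in SSML. The sketch below assumes such a stored preference exists; the `<prosody rate>` element itself is standard SSML, though support for specific values varies by platform.

```python
# Hedged sketch: wrap a prompt in SSML, applying a per-user speaking rate.
# The user-preference lookup is assumed, not a real platform API.

SSML_RATES = {"x-slow", "slow", "medium", "fast", "x-fast"}

def render_prompt(text: str, rate: str = "medium") -> str:
    """Return an SSML document speaking `text` at the given rate."""
    if rate not in SSML_RATES:
        rate = "medium"  # fall back to the platform default
    return f'<speak><prosody rate="{rate}">{text}</prosody></speak>'
```

The point is not the three lines of code, but where the `rate` value comes from: a setting the end user controls once, rather than one the developer hard-codes for everybody.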
While this was our first foray into testing voice with care plans, we see a lot of potential to drive a more emotional connection with the care plan, and to better integrate into someone’s day.
People need to manage interactions throughout their day, and meeting them in the best experience for what they need to do and where they are, whether that's voice, SMS, email, web, or mobile, is a great opportunity to do that. While these consumer voice applications are not yet HIPAA-compliant, as our tester said, we'll be "ready when you are."