Blog

Healthcare + A.I. Northwest

The Xconomy Healthcare + A.I. Northwest Conference at Cambia Grove featured speakers and panels discussing the opportunities and prospects for applying machine learning and artificial intelligence to health care. The consensus was that we are no longer held back by a lack of technological understanding and ability: A.I. and M.L. models can now be trained at large scale by harnessing the power of the cloud and advances in data science. According to the panelists, today’s challenges to incorporating A.I. into healthcare are abundant but inadequate data and resistance from health systems and providers.

Many researchers have found insufficient data to be an unexpected challenge. As keynote speaker Peter Lee of Microsoft Research pointed out, the more data we have, the better our machine-learned models can be. He cited a speech recognizer trained on multiple languages, which predicted English better after learning French, to illustrate how much models can improve with large sets of unstructured data. Unfortunately, we are not capturing enough of the right kind of data for researchers: much patient data is lost in the “health data funnel” due to PHI and quality concerns. Lee called for more data sharing and data transparency at every level.

Physician researchers on multiple panels were concerned about a lack of suitable data. Soheil Meshinchi, a pediatric oncologist from Fred Hutchinson Cancer Research Center, is engaged in collecting data specific to children. He discussed his research on Acute Myeloid Leukemia on the panel titled, ‘Will A.I. Help Discover and Personalize the Next Breakthrough Therapy?’. While there is a large body of research on AML in adults, he has found that the disease behaves much differently at a genomic level in children. He also expressed distrust in some published research because studies are rarely reproduced, and a researcher who presents results contrary to existing research often faces headwinds at journals that are reluctant to publish “negative data”. His focus at this point is gathering as much data as he can.

Matthew Thompson, a physician researcher at the University of Washington School of Medicine, argued on the “Innovations for the Over-Worked Physician” panel that technology has made patient interaction demonstrably worse, but that these problems can and should be solved innovatively with artificial intelligence. His specific complaints include both inputting and extracting data from health system EHRs, an overall glut of raw patient data, often generated by patients themselves, and far too much published research for clinicians to digest.

Both keynote speakers, Microsoft’s Lee and Oren Etzioni of the Allen Institute for Artificial Intelligence, referenced the large numbers of research papers published every year. According to Etzioni, the number of scientific papers published has doubled every nine years since World War II. Lee referenced a statistic that 4000 studies on precision cancer treatments are published each year. Both are relying on innovative machine reading techniques to analyze and categorize research papers to make them more available to physicians (and other scientists). Dr. Etzioni’s team has developed SemanticScholar.org to combat the common challenges facing those who look for research papers. He aims to reduce the number of citations they must follow while also identifying the most relevant and up-to-date research available. One advantage of this approach, compared with working on patient data, is that scientific texts have no PHI concerns. Lee’s team is marrying patient data and machine reading to match potential research subjects with appropriate NIH studies.

Dr. Thompson was concerned that too much data is presented to the medical staff and very few of the “predictive rules” used by ER personnel are both ‘accurate and safe’. When reviewing patient outcomes and observations to predict the severity of an infection, he found that patients or their caregivers would provide ample information, but often clinicians would disregard certain details as noise because they were atypical symptoms. The amount of data that providers have to observe for a patient is massive, but machine learned models may be utilized to distill that data into the most relevant and actionable signals.

Before data can be interpreted, it must be collected. Like Dr. Thompson, Harjinder Sandhu of Saykara sees ponderous, physician-driven data entry via EHR as a significant barrier to efficient data collection. Sandhu notes that healthcare is the only industry where the highest-paid team member performs this onerous task, and his company is using artificial intelligence to ease that burden on the physician.

Once patient data has been aggregated and processed into models, the challenge is getting the information in front of providers. This requires buy-in from the health system, physician, and, occasionally, the patient and their caregivers. Mary Haggard of Providence Health and Services spoke on the “Tech Entrepreneurs Journey into Healthcare” panel and stated that the biggest problem for entrepreneurs is defining the correct problem to solve. During the “Investment Perspective” panel, Matt Holman of Echo Health Ventures recommended tech startups emphasize an understanding of the context of the problem within a health system.

One of the most important and difficult hurdles for health technology companies is integrating into clinical workflow. Mike McSherry from Xealth has found that physician champions who know how they want to use technology help with integrating into a health system or physician group. Lynn McGrath of Eigen Healthcare believes physicians want their data to be defined, quick to assess, condensed, and actionable, while Shelly Fitz points out that providers are not used to all the data they are receiving and don’t yet know how to use it all. These are all issues that can and will be solved as healthcare technology continues to become more intelligent.

As Wellpepper’s CTO, Mike Van Snellenberg pointed out, health systems and doctors are resistant to “shiny new things”, for good reason. When approaching a health system, in addition to engaging the administration, clinicians need to understand why the machine learned model is recommending a given course of treatment. After integration, patients will also want to understand why a given course of treatment is being recommended. Applying artificial intelligence solutions to medicine must take into account the human element, as well.

The exciting possibilities of artificial intelligence and machine learning are hindered more by human constraints in health systems and data collection than by available technology. “Patients are throwing off all kinds of data when they’re not in the clinic,” according to our CTO. Wellpepper’s tools for capturing patient-generated data provide a pathway for providers to access actionable analysis.

Posted in: Healthcare Disruption, Healthcare Technology, patient engagement


Are Women Better Surgeons? Patient-Generated Data Knows The Answer

As empowerers of patients and collectors of patient-generated data, we’re pretty bullish on the ability of this data to yield insights. We fully admit to being biased, and we view things through the lens of patient experience and outcomes, which is why we had some ideas about a recent study showing that female surgeons had better outcomes than male surgeons.

The study, conducted on data from Ontario, Canada, was a retrospective population analysis of patients of male and female surgeons looking at rates of complications, readmissions, and death. The results of the study showed that patients of female surgeons had a small but statistically significant decrease in 30-day mortality and similar surgical outcomes.

Does this mean that women are technically better surgeons? Probably not. However, one sentence stands out as pointing to a possible reason that patients of female surgeons had better outcomes.

A retrospective analysis showed no difference in outcomes by surgeon sex in patients who had emergency surgery, where patients do not usually choose their surgeon.

This would lead us to believe that there is something about the relationship between the patient and the provider that results in better outcomes. We have seen this at Wellpepper: while we haven’t broken our aggregate data down by gender, we have seen that within the same clinic, intervention, and patient population, there are significant differences in patient engagement and outcomes between patients seen by different providers.

Some healthcare professionals are better than others at motivating patients, and the relationship between provider and patient is key for adherence to care plans, which improves outcomes. By tracking patient outcomes and adherence by provider, using patient-generated data, we are able to see insights that go beyond what a retrospective study of EMR data can show.

Our treatment plans, and our continued analysis of patient outcomes against those plans, go much further than simply amplifying the patient-provider relationship, for example with adaptive reminders, manageable and actionable building blocks, and instant feedback. Even so, never underestimate the power of the human connection in healthcare.

Posted in: Adherence, Behavior Change, big data, Clinical Research, patient-generated data


Meet Wellpepper At Connected Health

We’re gearing up for a great week at Connected Health. See Wellpepper, and Sugarpod, our Alexa Diabetes Challenge grand prize-winning entry, in Boston next week. Contact sales@wellpepper.com to schedule a demo, or drop by Booth 84 in the Innovation Zone.

 

Wednesday October 25
Natural Language Pre-Conference: we’ll be talking about the Alexa Diabetes Challenge, Sugarpod, and voice

Thursday October 26

Voice Technologies In Healthcare Applications

  • Room: Harborview 2/3
  • Session Number: R0240D
  • 2:40 PM – 3:30 PM

U.S. Department of Health and Human Services Town Hall with Bruce Greenstein, Entrepreneur Panel and Q&A (Invite-only)

Friday October 27

The Power of Patient-Generated Data

Exhibition Showcase 11:00 AM – 11:10 AM

 

 

Posted in: patient engagement, Voice


Wellpepper Wins $125K Grand Prize in Alexa Diabetes Challenge

NEW YORK: Today, the Challenge judges awarded Wellpepper the $125,000 grand prize in the Alexa Diabetes Challenge. Wellpepper is the team behind Sugarpod, a concept for a multimodal diabetes care plan solution using voice interactions.

The multi-stage Challenge is sponsored by Merck & Co., Inc., Kenilworth, New Jersey, U.S.A., supported by Amazon Web Services (AWS), and powered by Luminary Labs. In April, the competition launched with an open call for concepts that demonstrate the future potential of voice technologies and supporting Amazon Web Services to improve the experience of those who have been newly diagnosed with type 2 diabetes.

“Technology advances are creating digital health opportunities to improve support for people managing life with a chronic disease,” said Tony Alvarez, president, Primary Care Business Line and Customer Strategy at Merck & Co., Inc. “One purpose of the Alexa Diabetes Challenge was to identify new ways to use the technology already present in a patient’s daily routine. The winner of the Challenge did just that.”

Sugarpod is a concept for an interactive diabetes care plan solution that provides tailored tasks based on patient preferences. It delivers patient experiences via SMS, email, web, and a native mobile application – and one day, through voice interfaces as well. Since much of diabetes management occurs in the home, the Wellpepper team recognized that integrating voice was the natural next step to make the platform more convenient where patients are using it most. During the Challenge, Wellpepper also prototyped an Alexa-enabled scale and foot scanner that alerts patients about potential foot problems, a common diabetes complication.

“Sugarpod helps newly diagnosed people with type 2 diabetes integrate new information and routines into the fabric of their daily lives to self-manage, connect to care, and avoid complications. The Challenge showed us the appeal of voice solutions for patients and clinical value of early detection with home-based solutions,” said Anne Weiler, co-founder and CEO of Wellpepper.

The Challenge received 96 submissions from a variety of innovators, including research institutions, software companies, startups, and healthcare providers. The panel of judges, independent from Merck, narrowed the field down to Wellpepper and four other finalists, who each received $25,000 and $10,000 in AWS promotional credits and advanced to the Virtual Accelerator. During this phase of the competition, the finalists received expert mentorship as they iterated their solutions in preparation for Demo Day. At Demo Day on September 25, 2017, the five finalists presented their solutions to the judges and a live audience of industry leaders at the AWS Pop-up Loft in New York to compete for the grand prize.

“The Alexa Diabetes Challenge has been a great experiment to re-think what a consumer, patient, and caregiver experience could be like and how voice can become a frictionless interface for these interactions. We can imagine a future where technological innovations, like those provided by Amazon and AWS, are supporting those who need them most,” said Oxana Pickeral, Global Segment Leader in Healthcare and Life Sciences at Amazon Web Services.

Learn more at alexadiabeteschallenge.com and follow the Challenge at @ADchallenge.                                                                   

###

Contact: Emily Hallquist

(425) 785-4531 or emily@luminary-labs.com

Posted in: Healthcare Technology, Healthcare transformation, patient engagement, Press Release


Building a Voice Experience for People with Type 2 Diabetes

Recently we were finalists in the Merck-sponsored Alexa Diabetes Challenge, where we built a voice-powered interactive care plan, complemented by a voice-enabled IoT scale and diabetic foot scanner. The scanner uses the existing routine of weighing in to scan the person’s feet for foot ulcers, a serious but usually preventable complication of diabetes.

We blogged about our experience testing the device in clinic here

We blogged about feedback from patients here

This was a fun and productive challenge for our team, so we wanted to share some of our lessons learned from implementing the voice interface. This may be of interest to developers working on similar problems.

Voice Experience Design

As a team, we sat down and brainstormed a long list of things we thought a person with diabetes might want to ask, whether they were interacting with the scale and scanner device, or a standalone Amazon Echo. We tried not to be constrained by things we knew our system could do. This was a long list that we then categorized into intents and prioritized.

  • ~60% of the utterances we knew we’d be able to handle well (“My blood sugar is 85.”, “Send a message to my care team”, “Is it ok to drink soda?”)
  • ~20% of the utterances we couldn’t handle fully, but could reasonably redirect (“How many calories are in 8oz of chicken and a half cup of rice?”)
  • ~20% of the utterances we didn’t think we’d be able to get to, but were interesting for our backlog (“I feel like smoking.”)

After some “wizard of oz” testing of our planned voice interactions, we decided that we needed to support both quick-hit interactions, where a user quickly records their blood sugar or weight for example, and guided interactions where we guide the patient through a few tasks on their care plan. The guided interactions were particularly important for our voice-powered scale and foot scanner so that we could harness an existing habit (weighing oneself) and capture additional information at the same time. This allows the interaction to fit seamlessly into someone’s day.
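
To make the quick-hit category concrete, here’s a rough sketch of what that part of an Alexa interaction model can look like. The sample utterances and the BloodSugarValue slot are illustrative stand-ins rather than our production model, though the BloodSugarTaskIntent name comes up again below.

{
  "intents": [
    {
      "name": "BloodSugarTaskIntent",
      "samples": [
        "my blood sugar is {BloodSugarValue}",
        "record a blood sugar of {BloodSugarValue}",
        "i measured my blood sugar"
      ],
      "slots": [
        { "name": "BloodSugarValue", "type": "AMAZON.NUMBER" }
      ]
    }
  ]
}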

 

Challenge 1: We wanted to integrate the speech hardware into our scale / foot scanner device using the Alexa Voice Service, rather than using an off-the-shelf Echo device.

The Alexa Voice Service is a client SDK and a set of interface standards for how to build Echo-like capabilities into other hardware products. We decided early on to prototype our device around a Raspberry Pi 3 board to have sufficient processing power to:

  • Handle voice interactions (including wake-word detection)
  • Drive the sensors (camera array, thermal imaging, load sensors)
  • Run an image classifier on the device
  • Drive on-device illumination to assist the imaging devices
  • Securely perform network operations both for device control and for sending images to our cloud service

Raspberry Pi in Sugarpod

The device needed built-in illumination in order to capture usable photos of people’s feet to look for ulcers and abnormalities. Since that lighting was already part of the design, one of our team members came up with the idea of dual-purposing the LEDs as a speech status indicator. In the same way that the Amazon Echo uses blinking cyan and blue on its LED ring to show status, our entire scale bed could do the same.

As we started prototyping with basic audio microphones and speakers, we quickly discovered how important the audio pre-processing system is in our application. In our testing there were many cases of poor transcriptions or unrecognizable utterances, especially when the user was standing any distance from the device. Our physical chassis designs put the microphone height around 2’ from the ground, which is far from the average user’s mouth, and in real-world deployments the device would sit in echo-filled bathrooms. Clearly, we needed to use a proper far-field mic array. We considered using a mic array dev kit, but decided it was too expensive and added too much complexity for the challenge. We also spent a couple of hours investigating whether we could hack an Echo Dot to use its audio hardware.

Eventually we decided that it would make the most sense to stick with an off-the-shelf Echo for our prototype. Thus, in addition to being a foot scanner and connected scale, the device is also the world’s most elegant long-armed Echo Dot holder! It was easy to physically include the Echo Dot in our design. We figured out where the speaker was and adjusted our 3D models to include sound holes, and since the mic array and cue lights are on top, we made sure that this part of the device remained exposed.

We will be revisiting the voice hardware design as we look at moving the prototype towards commercial viability.

 

Challenge 2: We wanted both quick-hit and guided interactions to use the same handlers for clean code organization but hit some speed bumps enabling intent-to-intent handoff

Our guided workflows are built from stacks and queues that hand off between various handlers in our skill, but we found this hard to do when we moved to the Alexa Skill Builder. The Alexa Skill Builder (currently in beta) enables skill developers to customize the speech model for each intent and provides better support for common multi-turn interactions like filling slots and confirming intents. This was a big improvement, but it also forced us to rework some things.

For example, we wanted the same blood sugar handler to run whether you initiated a conversation with “Alexa, tell Sugarpod my blood sugar is 85”, or if the handler was invoked as part of a guided workflow where Alexa asks “You haven’t told me your blood sugar for today. Have you measured it recently?”

We tried a number of ways to have our guided workflow handlers switch intents, but this didn’t seem to be possible to do with the Alexa API. As a workaround we ended up allowing all of our handlers to run in the context of both the quick-hit entry point intent (like BloodSugarTaskIntent) as well as in guided workflows (like RunTasksIntent), and then expanded the guided workflow intent slots to include the union set of all slots needed for any handler that might run in that workflow.
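
Here’s a minimal sketch of that workaround in the Node.js alexa-sdk style we used elsewhere in the skill; the BloodSugarValue slot name and the prompts are hypothetical stand-ins, not our production code.

const Alexa = require('alexa-sdk');

// One handler body serves both entry points. RunTasksIntent declares the union
// of all slots its handlers might need, so the same slot lookup works no matter
// which intent invoked us.
function handleBloodSugar() {
  const slots = this.event.request.intent.slots;
  const reading = slots.BloodSugarValue && slots.BloodSugarValue.value; // hypothetical slot
  if (reading) {
    // ...record the reading against the patient's care plan here...
    this.emit(':ask', `Thanks, I recorded a blood sugar of ${reading}.`, 'Anything else?');
  } else {
    this.emit(':ask', 'What was your blood sugar reading?', 'You can say, my blood sugar is 85.');
  }
}

const handlers = {
  'BloodSugarTaskIntent': handleBloodSugar, // quick hit: "tell Sugarpod my blood sugar is 85"
  'RunTasksIntent': handleBloodSugar        // guided workflow re-uses the same logic
};

exports.handler = function (event, context) {
  const alexa = Alexa.handler(event, context);
  alexa.registerHandlers(handlers);
  alexa.execute();
};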

Another challenge was that we wanted to use the standard AMAZON.YesIntent and AMAZON.NoIntent in our skill; however, the Alexa Skill Builder does not allow this, presumably because it needs to reserve these intents for slot and intent confirmation. Our workaround was to add a fictitious slot (we called ours “ConfirmationSlot”) to basically all of our intents, and to have Alexa “confirm” that slot every time we wanted to ask a Yes/No question. We factored this into a helper library that is used throughout our skill codebase.

// Translate the ConfirmationSlot's confirmation status into the standard
// Yes/No intents, then reset it so the next question starts clean.
if (confirmationSlot.confirmationStatus === 'CONFIRMED') {
  confirmationSlot.confirmationStatus = 'NONE';
  handler.emitWithState('AMAZON.YesIntent');
  return true;
} else if (confirmationSlot.confirmationStatus === 'DENIED') {
  confirmationSlot.confirmationStatus = 'NONE';
  handler.emitWithState('AMAZON.NoIntent');
  return true;
}

Challenge 3: The Alexa speech recognizer did not always reliably recognize complicated and often-mispronounced pharmaceutical names

One of our intents allows the user to report medication usage, by saying something like “Alexa, tell Sugarpod I took my Metformin.” People are not always able to pronounce drug names clearly, so we wanted to allow for mispronunciation. For the Alexa Diabetes Challenge, we curated a list of the ~200 most common medications associated with diabetes and its frequent co-morbidities, including over-the-counter and prescription medication. These were bound to a custom slot type.
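
In the interaction model, that binding looks roughly like the sketch below; the MedicationName type name and the handful of values shown are illustrative, and our real list had around 200 entries.

{
  "types": [
    {
      "name": "MedicationName",
      "values": [
        { "name": { "value": "metformin" } },
        { "name": { "value": "glipizide" } },
        { "name": { "value": "insulin glargine" } }
      ]
    }
  ]
}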

When we tested this, however, we found that Alexa’s speech recognizer sometimes struggled to identify medication from our list. This was especially true when the speaker mispronounced the name of the medication (understandable with an utterance like “Alexa, tell Sugarpod I took my Thiazolidinedion”). We observed this even with reasonably good pronunciation. We particularly liked “Mad foreman” as a transcription when one of us asked about “metformin.” Complex pronunciations are a well-known problem in medical and other specialized vocabularies, so this is a real-world problem that we wanted to invest some time in.

Ideally, we would have been able to take an empirical approach and collect a set of common pronunciations (and mispronunciations) of the medications and train a new model. It may additionally be interesting to use disfluencies such as hesitation and repetition in the recorded utterance as features in a model. However, this is not something that is currently possible using Alexa Skills Kit.

Our fallback was to use some basic algorithmic methods to try to find better matches. We had reasonable success with simple fuzzy-matching schemes like Soundex and NYSIIS, which gave us a good improvement over the raw Alexa ASR results. We also started to evaluate whether an edit-distance approach would work better (for example, comparing phonetic representations of the search term against the corpus of expected pharmaceutical names using Levenshtein edit distance), but we eventually decided that a Fuzzy Soundex match was sufficient for the purposes of the challenge (Fuzzy Soundex: David Holmes & M. Catherine McCabe, http://ieeexplore.ieee.org/document/1000354/).
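
For flavor, here’s a small sketch of that kind of phonetic matching, using classic Soundex as a simplified stand-in for the Fuzzy Soundex variant we settled on; the matchMedication helper and the tiny medication list are purely illustrative.

// Letter-to-digit mapping for classic Soundex.
const CODES = {
  b: '1', f: '1', p: '1', v: '1',
  c: '2', g: '2', j: '2', k: '2', q: '2', s: '2', x: '2', z: '2',
  d: '3', t: '3', l: '4', m: '5', n: '5', r: '6'
};

function soundex(word) {
  const letters = word.toLowerCase().replace(/[^a-z]/g, '');
  if (!letters) return '';
  let code = letters[0].toUpperCase();
  let prev = CODES[letters[0]] || '';
  for (const ch of letters.slice(1)) {
    const digit = CODES[ch] || '';
    if (digit && digit !== prev) code += digit; // skip vowels and collapse repeats
    if (ch !== 'h' && ch !== 'w') prev = digit; // h/w don't break a run
    if (code.length === 4) break;
  }
  return code.padEnd(4, '0'); // e.g. "metformin" -> "M316"
}

// Match an ASR transcription against the curated medication list.
function matchMedication(transcription, medications) {
  const key = soundex(transcription);
  return medications.find(med => soundex(med) === key) || null;
}

console.log(matchMedication('mad foreman', ['metformin', 'glipizide'])); // -> "metformin"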

Even though our current implementation provides good performance, this remains an area for further investigation, particularly as we continue to work on larger lists of pharmaceuticals associated with other disease or intervention types.

 

Conclusion

This challenge helped us stretch our thinking about the voice experience, and gave us the opportunity to solve some important problems along the way. The work we’ve done is beneficial not just for diabetes care plans, but for all of our other care plans as well.

While the Alexa voice pipeline is not yet a HIPAA-eligible service, we’re looking forward to being able to use our voice experience with patients as soon as it is!

Posted in: Healthcare Technology, Voice


Ready When You Are: Voice Interfaces for Patient Engagement

We started experimenting with voice as a patient interface early this year, and showed a solution with a voice-enabled total-joint care plan to a select group of customers and partners at HIMSS 2017. Recently we were finalists in the Merck-sponsored Alexa Diabetes Challenge, where we built a voice-enabled IOT scale and diabetic foot scanner, and also a voice-powered interactive care plan.

Over the course of the challenge we tested the voice experience with people with Type 2 diabetes. We also installed the scale and scanner in a clinic, and found that clinicians wanted to engage with voice as well. Voice is a natural fit in the clinical setting: there’s no screen to get in the way of interactions, and people are used to answering questions. Voice is also great in the home.

However, voice isn’t always the best interface, which is why we think multimodal care plans including voice, text, mobile, and web can deliver a more comprehensive solution. Since it’s easier for someone to overhear a conversation than to look at your smartphone or even computer screen, mobile or web are often better interfaces depending on the person’s location (for example, taking public transit) or the task they need to do (for example, reporting the status of a bowel movement). We do think that voice has many great healthcare applications, and benefits for certain interactions and populations.

In our testing, we found that both patients and providers really enjoyed the voice interactions and wanted to continue the conversation. They felt very natural, and people used language that they would use with a human. For example, when asked to let the voice-powered scale know when he was ready to have his foot scan, one person responded with:

“Ready when you are.”

This natural user interface presents challenges for developers. It’s hard to model all the possible responses and utterances that a person might use. Our application would answer to ready, sure, yes, and okay, but the “when you are” caused some confusion.

Possibly the most important facet of voice is that the connection people have with it is extremely strong, and unlike mobile, voice is not yet associated with the need to follow up, check email, or respond to other alerts. (Notifications on voice devices could change this.)

“Voice gives the feeling someone cares. Nudges you in the right direction”

Creating a persona for voice is important, and relying on personas created by experts like the Alexa team is probably the best way for beginners to start.

“Instructions and voice were very calm, and clear, and easy to understand”

Calm is the operative word here. Visual user interfaces can be described as clean, but calm is definitely a personification of the experience.

Voice is often seen as a more ubiquitous experience, possibly because using fewer words and constantly checking for the correct meaning are best practices, for example “You want me to buy two tickets for Aladdin at 7:00 pm. Is this correct?” We often hear pushback on mobile apps for seniors, but haven’t heard the same for voice. However, during our testing, a senior who was hard of hearing told us she couldn’t understand Alexa and thought that she talked too quickly. While developers can insert pauses and set the speed of prompts and responses in conversation, this would mean that the same speed would have to be used for all users of the skill, which might be too slow for some or too fast for others. Rather than needing to build different skills based on hearing and comprehension speed, it would be great if end users could define this setting themselves so that we can build usable interfaces for everyone.
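
For illustration, here is roughly how a skill can slow its prompts today with SSML; the rate value and the helper are hypothetical, and the key limitation is that they are baked into the skill rather than chosen by the listener.

// Sketch: slow down a prompt with SSML prosody and a pause. The rate applies to
// every user of the skill, which is exactly the limitation described above.
const PROMPT_RATE = '85%'; // slower-than-default speaking rate

function slowPrompt(text) {
  // Returns the SSML body; the Alexa response builder wraps it in <speak> tags.
  return `<prosody rate="${PROMPT_RATE}">${text}</prosody><break time="500ms"/>`;
}

// e.g. this.emit(':ask', slowPrompt('Please step on the scale when you are ready.'));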

While this was our first foray into testing voice with care plans, we see a lot of potential to drive a more emotional connection with the care plan, and to better integrate into someone’s day.

People need to manage interactions throughout their day, and integrating into the best experience based on what they need to do and where they are provides a great opportunity to do that, whether that’s voice, SMS, email, web, or mobile. While these consumer voice applications are not yet HIPAA-compliant, as our test patient said, we’ll be “ready when you are.”

Posted in: Behavior Change, Healthcare Technology, Healthcare transformation, patient engagement


What Motivates You, May Not Motivate Me

At Wellpepper our goal is to empower people to follow their care plans and possibly change their behavior, so we think a lot about how to motivate people. Early on, when we were working with Terry Ellis, Director of the Boston University Center for Neurorehabilitation, she wanted to make sure that our messages to patients who may struggle with adherence were positive. She works with people who have Parkinson’s disease, and stressed that while they may improve symptoms they would not “get better.”

Last week I had a similar conversation with an endocrinologist about diabetes care plans. People with chronic diseases are often overwhelmed and may take a defeatist attitude toward their health. Feedback and tools need to be non-judgmental and encouraging. Ideas like “compliance” and “adherence” may not be the right way to look at it. Sometimes the approach should be “something is better than nothing.” And humans, not just algorithms, need to decide what “good” is.

Am I good or great?

Here’s a non-healthcare example of algorithmic evaluation gone wrong. Rather than applauding me for being in the top tier of energy-efficient homes, the City of Seattle says I’m merely “good.” There’s no context on my “excellent” neighbors (for example, are they in newly built homes compared to my 112-year-old one?) and no suggestions on what I might do to become “excellent.” (Is it the 30-year-old fridge?) I’m left with a feeling of hopelessness, rather than a resolve to try to get rid of that extra 2KW. Also, what does that even mean? Is 2KW a big deal?

Now imagine you’re struggling with a chronic disease. You’ve done your best, but a poorly tuned algorithm says you’re merely good, not excellent. Well, maybe what you’ve done is your excellent. This is why we enable people to set their own goals and track progress against them, and why care plans need to be personalized for each patient. It’s also why we don’t publish stats on overall adherence. Adherence for me might be 3 out of 5 days. For someone else it might be 7 days a week. It might depend on the care plan or the person.

As part of every care plan in Wellpepper, patients can set their own goals. Sometimes clinicians worry about the patient’s ability to do this. These are not functional goals; they represent what’s important to patients, like family time or events, enjoying life, and so on. We did an analysis of thousands of these patient-entered goals and determined that it’s possible to track progress against them, so we rolled out a new feature that enables patients to do this.

Patient progress against patient-defined goal

Success should be defined by the patient, and outcome goals by clinicians. Motivation and measures need to be appropriate to what the patient is being treated for and their abilities. Personalization, customization, and a patient-centered approach can achieve this. To learn more, get in touch.

Posted in: Behavior Change, chronic disease, Healthcare motivation, Healthcare Technology, Healthcare transformation, Outcomes, patient engagement, patient-generated data


Disruptive Innovation, Sparks of Light, or the Evolution of Care: Recap of Mayo Transform Conference

In what has been a roller-coaster year for healthcare legislation, the annual touchstone of the Mayo Clinic Transform Conference provided a welcome opportunity to reflect on where we are. This conference, sponsored by the Mayo Clinic Center for Innovation, attracts powerhouse speakers like Andy Slavitt and Clayton Christensen, and yet manages to fly under the radar. This year’s theme was closing the gap between people and health, so the social determinants of health were a key topic, as was whether disruption alone would solve the problem.

Dr Robert Pearl

This was my third year attending, and second year speaking at, the conference, and I’ve noticed a trend: the conference starts by articulating the problem, then builds up solutions and creative ways to reshape the problems over the course of the two days. This year the conference was deftly moderated by Elisabeth Rosenthal, MD, Editor-in-Chief of Kaiser Health News and author of “An American Sickness.” Rosenthal, a former New York Times journalist, peppered her moderation with real-world examples of both waste and inefficiency and of effective programs, drawn from her investigative journalism.

I’ve been wanting to write a blog post for a while that riffs on the theme of “You Are Here,” trying to outline where we are in the digital evolution of healthcare, but it’s clear that we don’t know where we are, digital or otherwise: too much is currently in flux. There are points of light with effective programs, and things that seem very broken. The panel I was on was titled “Disruptive Innovation,” and I’m afraid we let the audience down: while we are doing some very interesting things with health systems, we are not turning every model on its head. We work with providers and patients to help patients outside the clinic. Truly disruptive innovation would work completely outside the system, which leads to the question: can health systems disrupt themselves, or will disruption come from entirely new entrants like Google, Apple, or Amazon?

Dr. David Feinberg of Geisinger reads from debate opponent Dr. Robert Pearl’s book

Clayton Christensen, the closing keynote speaker, likens hospitals to mainframe computers, and basically says they will be overtaken by smaller, more nimble organizations, much as in the PC and now smartphone revolutions. Organizations like Iora Health, which holistically and preventively manages a Medicare Advantage population, are the epitome of these new entrants, and we’ve seen some hospitals struggle this year, but will hospitals go away entirely? The answer may lie in the excellent debate session “Is the US Healthcare System Terminally Broken?” hosted by Intelligence Squared and moderated by author and ABC News correspondent John Donvan.

 

Shannon Brownlee, senior VP of the Lown Institute and visiting scientist at the Harvard T.H. Chan School of Public Health, and Robert Pearl, MD, former CEO of The Permanente Medical Group, argued that the system is broken, versus Ezekiel Emanuel, MD, Senior Fellow at the Center for American Progress, and David Feinberg, MD, CEO of Geisinger, who argued that it is not.

While prior to the debate the audience favored the idea that the system is irreparably broken, by the end they had come around to the idea that it’s not, which would point to healthcare’s ability to disrupt itself.

Is Healthcare Terminally Broken: the final audience vote

The debate itself was ridiculously fun, partly because of the enthusiasm of the debaters and partly because the topic was so dear to all attendees. You can listen to the podcast yourself. However, the posing of the question set up an almost impossible challenge for Pearl and Brownlee: they had to argue that the patient is terminal, without any possible solution. No one in the room wanted to hear that, so when Emanuel and Feinberg were able to point to innovative programs like the Geisinger Money Back Warranty or the Fresh Food Pharmacy that just need to find scale, the audience latched onto the hope that we can fix things. We all have to believe in these points of light to face each new day of challenges.

Posted in: Health Regulations, Healthcare Disruption, Healthcare Legislation, Healthcare Policy, Healthcare Technology, Healthcare transformation


Alexa Voice Challenge for Type 2 Diabetes: Evolving An Idea

For the past couple of months some of our Wellpepper team, with some additional help from a couple of post-docs from University of Washington, have been working hard on a novel integrated device, mobile, and voice care plan to help people newly diagnosed with type 2 diabetes as part of our entry in the Alexa Diabetes Challenge.

Team Sugarpod

This challenge offered a great opportunity to evolve our thinking in the power of integrating experiences directly into a person’s day using the right technology for the setting. It also provided the opportunity to go from idea to prototype in a rapid timeframe.

Our solution featured an integrated mobile and voice care plan, and a unique device: a voice-powered scale that scans for diabetic foot ulcers, which are a leading cause of amputation, hospitalization, and increased mortality, and are estimated to cost the health system up to $9B per year.

During the challenge, we had access to amazing resources, including a 2-day bootcamp held at Amazon headquarters during which we heard from experts in voice, behavior change, and caring for people with type 2 diabetes, as well as a focus group with people who have type 2 diabetes. We also had 1:1 sessions with various experts who had seen our entry and helped us think through the challenges of developing it. After the bootcamp, we were assigned a mentor, an experienced pharmacist and diabetes educator, who was available for any questions. Experts from the bootcamp also held office hours where we explored topics like how to coach people on what they can do with an Alexa skill, and how to build trust with a device that takes pictures in your bathroom.

Early Prototype Voice Powered Scale & Scanner

As we evolved our solution, we were fortunate to have support from Dr Wellesley Chapman, medical director of Kaiser Permanente Washington’s Innovation Group, and we were able to install the device in a Diabetes and Wound Clinic. We used this to train our image classifier to look for foot ulcers, compare results to human detection, and test the voice service. We used an anonymous voice service, since Alexa and Lex are not currently HIPAA-eligible.

We gathered feedback from diabetes educators, from clinicians at KP Washington and across the country, and from people with Type 2 diabetes. While not everyone wanted to use all aspects of the solution, they all felt that the various components, voice, mobile, and device, offered a lot of support and value. We also determined that there is an opportunity for a voice-powered scale and scanner in the clinic, which could aid in early detection and improve productivity. Voice interactions in the clinic are a natural fit.

Judges and Competitors: Alexa Diabetes Challenge

The great thing about a challenge is the constraints provided to do something really great in a short period of time. We’re so proud of the Sugarpod team, and also incredibly impressed with the other entries in this competition ranging from a focus on supporting the mental health challenges faced by people newly diagnosed with Type 2 diabetes to a specific protocol for diet and nutrition, to solutions that helped manage all aspects of care. We enjoyed meeting our fellow competitors at the bootcamp and the final, and wish we had met in a situation where we could collaborate with them. We also appreciated the thoughtful feedback and questions from the judges, and would definitely have a lot to gain from deeper discussions with them on the topic.

Stay tuned for more on our learnings through this challenge and our experiences with voice.

Posted in: Healthcare Disruption, Healthcare Technology, Healthcare transformation, M-health, Managing Chronic Disease, Outcomes, patient engagement, patient-generated data


Who Defines Value?

Pharma companies have recently jumped on the value bandwagon with proposals for value-based drug pricing based on outcomes and effectiveness. They have started to enter contracts with payers for specific drugs based on the impact the drug has on the condition the drug is treating.

This is a step in the right direction, and much better than pricing based on maximizing shareholder value, but value is in the eye of the beholder, and the patient is a key stakeholder. Shared decision making does a good job defining what’s important to a patient. The goal of shared decision making is to choose courses of care that offer the best outcome to the patient, and can consider some of the following:

  • Is this procedure my only option? What are alternative types of treatment?
  • What are the possible outcomes and side effects of each option, including the option of doing nothing?
  • What is the estimated cost of the procedure and any related follow-up care or medication?

Source: Center for American Progress

Simply, value can be defined as the outcomes that matter to the patient divided by the cost of achieving them.

The challenge of the equation is in the definitions of acceptable outcomes and costs. Here are a few things that people might consider when evaluating a drug or a course of care.

  • Inconvenience or effort: How much does this disrupt their life? Does it prevent the person from doing other things?
  • Cost: How much does it cost? This could be in monetary terms, time, side effects, or quality of life.
  • Outcome: What is the expected outcome and how closely does it align with the outcome that’s important to me?

You can see that, based on these factors, healthcare can be a market of one. My idea of value and acceptable outcomes could be very different from yours. And, unfortunately, the patient is not a consumer in a free and transparent market. That said, it is possible to make consumer-like decisions in healthcare.

Let’s look at the value decision I tried to make this past weekend. I fractured and dislocated a finger while playing Ultimate Frisbee. I was pretty sure the finger was dislocated, which shouldn’t be a big deal, so I tried to go to urgent care, where I expected value based on time, outcome, and cost. Well, guess what? Urgent care is not open on a Saturday night. I had a feeling that emergency care would not meet my value criterion of effort, since I expected a long wait, and I got one. On cost, I did know that the provider I went to was in-network, so that wasn’t a big issue, but I still don’t know what the total cost would have been if I’d had to pay out of pocket.

Waiting in ED

Waiting

Outcome was great, and the level of care was great. What was not great is that it took 4 hours to get x-rays, pop my finger back in, and splint it. If I had been choosing as a consumer, I’d never have chosen this. With higher deductibles and co-pays, people are making decisions as consumers which is why hospitals advertise wait times, and some are looking at how to completely overhaul the ER, both of which would get us closer to value.

Let’s look at an example of value-based drug pricing. Back when I was working at Microsoft and had the Cadillac of US healthcare plans, I was prescribed a topical psoriasis drug. The expected outcome was no psoriasis lesions. The cost was $800 for a 60g tube. Since I didn’t have to pay anything out of pocket, I got the prescription. Did it work? Yes. Was it worth it to me? No. I had other creams that cost much less and worked almost as well. I didn’t end up getting it again; I wouldn’t have paid $800 for it myself, so why should my employer? If cost is not part of the equation, people are making decisions with only partial information and can’t possibly judge value. Co-pays and transparency can help guide people to consumer-like behavior in healthcare, even in an imperfect market.

What’s the upside? The upside is that we’re having these discussions, and that we can see a shift to value and consumer focus, even without legislation, which is really how it needs to happen. The other thing to remember is that people want to deliver excellent, quality care. Everyone I met during my finger ordeal, from the admitting staff to the x-ray tech to the resident who was excited to see a dislocation he’d never seen before, was excellent, and that defines quality in my mind. Maybe we have less far to go than we thought.

Posted in: Healthcare Disruption, Healthcare Legislation, Healthcare motivation, Healthcare Policy, Outcomes


Meeting Consumer Expectations in Healthcare

We could talk about this all day, and we do! We’re glad to see healthcare executives start to take ownership of the digital experience, and understand that consumer and patient engagement is key to outcome success.

Consumer expectations are indeed hitting healthcare – hard. Patients are no longer shy about telling physicians and payers what they want and how much they’re willing to pay for it. While these expectations can seem overwhelming to those insiders who have long become accustomed to healthcare’s glacial pace, we shouldn’t be discouraged. These greater expectations can indeed be met, provided we take the time to develop and offer physicians and patients tools that meet their needs and fit their workflows.

Here’s the latest take on this topic from HISTalk

 

Posted in: Healthcare transformation, patient engagement, Patient Satisfaction


Patient Experience Versus Patient Engagement

As a volunteer session reviewer for the Patient and Consumer Engagement track for HIMSS 2018, I’ve been thinking a lot about the difference between engagement and experience, and also about what it means to deliver connected health. While Wellpepper is a platform for patient engagement, a session based on Boston University’s study using Wellpepper with people with Parkinson’s disease actually suited the definition of connected health better and was submitted in that track.

As I’ve been reviewing session submissions for the track, I noticed that quite a few focus on patient experience rather than engagement. The difference really is about commitment and action. Patient experience is what happens when someone engages with a health system or physician office. Patient engagement is what happens when someone actively participates in their own care as a patient. You could argue that patients can’t help but be engaged because whatever is happening is happening to them, but it’s a bit more than that. (Also, that argument gets a bit existential.)

Both engagement and experience are important. With a crappy experience, people may not engage with you, your system, or their own health. This can be as simple as not being able to find parking. A good experience is the prerequisite for engagement, but it is not engagement on its own. Engagement happens when you empower patients and treat them as active participants in their care.

There’s a continuum from experience to engagement, and often the same digital tools support both, although both also include the physical experience. Both will help you attract and retain patients, but more importantly, engagement will also help improve outcomes.

If you’re interested in this topic, this article in NEJM Catalyst from Adrienne Boissy, MD of Cleveland Clinic does a much better job than I do of explaining it.

Posted in: Healthcare Technology, Healthcare transformation, M-health, patient engagement, Patient Satisfaction


Making Patient-Generated Health Data Count

We’re strong advocates for the power and benefits of patient-generated data. In analyzing results from patients using Wellpepper, we’ve been able to derive insights that wouldn’t be possible if patients weren’t tracking and self-managing. In addition to improving care, using patient-generated data can impact quality, which is why we’re happy to see patient-generated data being included as a valid feedback source for Medicare quality measures. This measure in particular, which is a result of a collaboration between Healthloop, Livongo, and Wellpepper, is a first step in this.

We believe that collaboration and open data are going to be much more powerful if we work together rather than against each other. This is the new spirit of openness, interoperability, and collaboration that will help move healthcare IT forward. If your vendors aren’t interested in working together, you might want to ask yourself why not.

Here’s the text of the proposed measure on Patient-Generated Data.

 

 

You can find it on page 181 of this document in CY 2018 Updates to the Quality Payment Program, which is open for public comment until August 21st.

There are other measures that would also benefit from including patient-generated data as a validation source: in particular, measures for tracking patient-reported outcomes, care coordination, discharge and follow-up instructions, use and reconciliation of medications, and blood pressure and blood sugar tracking. As home sensors and their connectivity are clinically validated and improve, there is even greater opportunity to streamline the process of quality reporting and reduce the burden on physicians and health systems.

Posted in: Health Regulations, Outcomes


But Will It Fly? What Airlines and Healthcare Organizations Have In Common

What do airlines and healthcare systems have in common? Quite a lot, it turns out, judging from a recent power breakfast featuring Rod Hochman, CEO of the Providence St. Joseph Hoag Health system, and Brad Tilden, CEO of Alaska Airlines. In addition to the Pacific Northwest roots of both organizations, both have undertaken mergers to gain market share and increase physical territory. Both serve a large cross-section of the population, and both are in highly regulated industries not necessarily known for customer service that are now grappling with new, always-connected user experiences and expectations.

The wide-ranging discussion included the inspirations for Hochman’s and Tilden’s early careers, how to motivate and engage a wide range of employees, and how to deal with competition and lead change. Both leaders had early influences on their career direction: Hochman knew he wanted to be a doctor at 16 when assisting on surgeries (!), and Tilden grew up beside Seatac airport watching planes while his peers were watching girls. Tilden grew his career at Alaska, while Hochman is a practicing rheumatologist who has worked his way from a small clinic to a major system. Hochman joked that a rheumatology specialty is much better suited to success in administration than, say, surgery, equating running a hospital to the patience required in managing chronic diseases.

Airlines and health systems have similar challenges with employee experience. Both types of organizations have highly skilled staff, pilots and physicians, who demand a lot of autonomy. Mistakes in both professions can cause loss of life. The difference is that aviation has moved a lot faster in instituting standard procedures and checklists to improve safety and outcomes. Tilden frequently referenced an Alaska Air crash 17 years ago that impacted their approach to safety, and talked about the ways pilots and co-pilots double check settings. Hochman talked about his hope for quality improvements and better collaboration from the younger generation of physicians who have grown up in a world of checklists and standardization, and said that the ones who only care about being left alone to make decisions will retire.

They also have large teams of people who “get stuff done.” Hochman has banned the term “middle management” since he sees those people as the ones who are making things happen; instead he calls them the “core team,” a term that Tilden quipped he’d also start using.

Rod Hochman & Brad Tilden

Customer experience was also top of mind for both execs. Tilden talked about Alaska adopting Virgin’s mission of being the airline people love. While he seemed to find some of Virgin’s approach a bit edgy compared to Alaska, he said you couldn’t find a better mission. Both grappled with the ease of sharing bad experiences on social media, and indicated that social media monitoring has become a key tool in managing consumer expectations. Hochman also noted that it all comes back to the individual experience when he described how his staff hate it when he has his own annual physical, because his expectations as a patient are much higher than what he experiences, especially with respect to convenience and information flow.

Both are optimistic and passionate leaders who genuinely care about the consumer and employee experience, and had as good a time interviewing each other as the audience did listening to them. This event was sold out, so if an opportunity like this comes up again, sign up early.

Posted in: Healthcare Disruption, Healthcare Technology, Healthcare transformation, M-health


Introducing Sugarpod by Wellpepper, a comprehensive diabetes care plan

We’re both honored and excited to be one of five finalists in the Alexa Diabetes Challenge. We’re honored to be in such great company, and excited about the novel device our team is building. You may wonder how a team of software folks ends up with an entry with a hardware component. We did too, until we thought more about the convergence happening in technology.

We were early fans of the power of voice, and we previewed a prototype of Alexa integration with Wellpepper digital treatment plans for total joint replacement at HIMSS in February 2017. Voice is a great interface for people who are mobility or vision challenged, and the design of Amazon Echo makes it an unobtrusive home device. While a mobile treatment plan is always with you, the Amazon Echo is central in the home. At one point, we thought television would be the next logical screen to support patients with their home treatment plans, but it seems like the Echo Show is going to be more powerful and still quite accessible to a large number of people.

Since our platform supports all types of patient interventions, including diabetes, this challenge was a natural fit for our team, which is made up of Wellpepper staff and Dr Soma Mandal, who joined us this spring for a rotation from the University of Georgia. However, when we brainstormed 20 possible ideas for the challenge (admittedly over beer at Fremont Brewing), the two that rose to the top involved hardware solutions in addition to voice interactions with a treatment plan. And that’s how we found ourselves with Sugarpod by Wellpepper, which includes a comprehensive diabetes care plan for someone newly diagnosed and a novel Alexa-enabled device to check for foot problems, a common complication of diabetes mellitus.

Currently in healthcare, there are some big efforts to connect device data to the EMR. While we think device data is extremely interesting, connecting it directly to the EMR misses a key component: what’s actually happening with the patient. Having real-time device data without real-time patient experience solves only one piece of the puzzle. Patients don’t think about the devices they use to manage their health, whether glucometer, blood pressure monitor, or foot scanner, separately from their entire care plan. In fact, looking at both together, and understanding the interplay between their actions and the readings from these devices, is key for patient self-management.

And that’s how we found ourselves, a mostly SaaS company, entering a challenge with a device. It’s not the first time we’ve thought about how to better integrate devices with our care plans, but it is the first time we’ve gone as far as prototyping one ourselves, which got us wondering which way the market will go. It doesn’t make sense for every device to have its own corresponding app: that app is not integrated with the physician’s instructions or the rest of the patient’s care plan. It may not be feasible for every interactive treatment plan to integrate with every device, so are vertically integrated solutions the future? If you look at the bets that Google and Apple are making in this space, you might say yes. It will be fascinating to see where this Alexa challenge takes Amazon, and us too.

We’ve got our work cut out for us before the final pitch on September 25th in New York. If you’re interested in our progress, subscribe to our Wellpepper newsletter, and we’ll have a few updates. If you’re interested in this overall hardware and software solution for Type 2 diabetes care, either for deploying in your organization or for bringing a new device to market, please get in touch.

Read more about the process, the pitch, and how we developed the solution:

Ready When You Are: Voice Interfaces for Patient Engagement

Alexa Voice Challenge for Type 2 Diabetes: Evolving a Solution

 

Posted in: Behavior Change, chronic disease, Healthcare Disruption, Healthcare Technology, Healthcare transformation, M-health, Managing Chronic Disease, patient-generated data


In Defense of Patient-Generated Data

There’s a lot of activity going on with large technology companies and others trying to get access to EMR data to mine it for insights. They’re using machine learning and artificial intelligence to crawl notes and diagnoses to try to find patterns that may predict disease. At the same time, equal amounts of energy are being spent figuring out how to get data from the myriad of medical and consumer devices into the EMR, which is considered the system of record.

There are a few flaws in this plan:

  • A significant amount of data in the EMR is copied and pasted. While it may be true that physicians and especially specialists see the same problems repeatedly, it’s also true that lack of specificity and even mistakes are introduced by this practice.
  • Similarly, the same ICD-10 codes are reused. Doctors admit to reusing codes that they know will be reimbursed. While they are not mis-diagnosing patients, this is another area where specificity is lost. Search for “frequently used ICD-10 codes” and you’ll find a myriad of cheat sheets listing the most common codes for primary care and specialties.
  • Historically, the clinical research on which recommendations and standard ranges are based has lacked ethnic and sometimes gender diversity, which means that a patient whose tests are within the standard range may have a different experience because that patient differs from the archetype on which the standard was built.
  • Data without context is meaningless, which is why physicians initially balked at having device data in the EMR. Understanding how active a healthy person is may be interesting, but you don’t need Fitbit data for that; there are other indicators like BMI and resting heart rate. Understanding how active someone recovering from knee surgery is can be interesting, but only if you understand other things about that person’s situation and care.

There’s a pretty simple and often overlooked solution to this problem: get data and information directly from the patient. This data, drawn from a patient’s own experience, will often answer the question of why a patient is or isn’t getting better. It’s one thing to look at data points and see whether a patient is in or out of accepted ranges. It’s another to consider how the patient feels and what he or she is doing that may improve or exacerbate a condition. In ignoring the patient experience, decisions are being made with only some of the data. In Kleiner Perkins’ Internet Trends Report, Mary Meeker estimates that the EMR collects a mere 26 data points per year on each patient. That’s not enough to make decisions about a single patient, let alone to expect that AI will auto-magically find insights.

We’ve seen the value of patient engagement in our own research and the data we’ve collected, for example in identifying side effects that are predictors of post-surgical readmission. If you’re interested in these insights, we publish them through our newsletter. In interviewing patients and providers, we’ve heard many examples where physicians were puzzled by the gap between a patient’s experience in the clinic or hospital and their experience at home. One pulmonary specialist we met told us he had a COPD patient who was not responding to medication. The obvious solution was to change the medication. The not-so-obvious solution was to ask the patient to demonstrate how he was using his inhaler. He was spraying it in the air and walking through the mist, which was how a discharge nurse had shown him to use it.

By providing patients with usable and personalized instructions and then tracking the patient experience in following those instructions and managing their health, you can close the loop. Combining this information with device data and physician observations and diagnoses will provide the insight we need to scale and personalize care.
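To show what “closing the loop” might look like in code, here is a hypothetical rule that only escalates to the care team when a device reading and the patient’s own report point in the same direction. The thresholds, symptoms, and field names are assumptions for illustration, not a clinical guideline or Wellpepper’s logic.

# Hypothetical "close the loop" check: escalate only when the objective reading
# and the patient's self-report together suggest a problem. Thresholds are made up.

def should_escalate(fasting_glucose_mgdl: float,
                    reported_adherence: bool,
                    reported_symptoms: list[str]) -> tuple[bool, str]:
    high_reading = fasting_glucose_mgdl > 180          # illustrative threshold
    concerning_symptoms = bool(set(reported_symptoms) &
                               {"blurred vision", "numb feet", "excessive thirst"})
    if high_reading and not reported_adherence:
        return True, "High glucose and missed medication: follow up on adherence."
    if high_reading and concerning_symptoms:
        return True, "High glucose with reported symptoms: clinical review suggested."
    if high_reading:
        return False, "Isolated high reading: keep monitoring before escalating."
    return False, "Within range."

# Example use
print(should_escalate(210.0, reported_adherence=False, reported_symptoms=[]))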

Posted in: Adherence, big data, Clinical Research, Healthcare Disruption, Healthcare Research, Healthcare Technology, Healthcare transformation, Interoperability, M-health, patient engagement, patient-generated data


Consumerization Is Not A Bad Word

When you say consumerization, especially with respect to healthcare, people often jump to conclusions about valuing service over substance. There’s a lot of confusion over the meaning of consumerization, whether it’s possible in healthcare, and whether it’s happening. I recently had the privilege of speaking at the Washington State Health Exchange’s Annual Board Retreat on this topic. (Perhaps you saw it; the event was live-streamed to the public. 😉) The Health Exchange is pondering questions of how to attract new users, how to better serve their needs, and how to make the experience more useful and engaging. And this, my friends, is consumerism, or at least one facet of it: user focus, better service, understanding needs. Doesn’t sound bad at all, does it? In fact, it sounds like something any good service or organization should be doing for its customers.

Consumer-centered pain scale. Baymax from Disney’s Big Hero 6

And there’s that word: customers. That’s the debate. Are patients really customers? Not really; often they don’t have a choice, either because of their insurance coverage or because of the necessity of an emergency, where decisions are often made for patients. However, patients, and everyone else for that matter (except people in North Korea), are consumers, and they judge healthcare experiences, both service delivery and technology, as consumers. Think of it like this: your patients will judge your experiences through the lens of any other service they’ve interacted with. Fair or not, they will do that. Why? It’s human nature to remember positive experiences and try to seek them out. There’s another reason, too: high deductibles are driving people to examine where they spend their healthcare dollars, and they evaluate based on outcomes, convenience, and the overall experience.

Since healthcare technology is my area of expertise, let’s stick to that rather than critiquing hospital parking, food, or beds. (Although these are often things that impact HCAHPS scores.) Consumerization, when applied to health IT, means that patients expect any technology you ask them to engage with, and especially technology you ask them to install on their own devices, to be as usable as any other app they’ve installed.

Consumerization also affects internal health IT. Doctors were the first wave, when they pushed to use their own devices to text with other providers within the hospital setting. (In IT this is often referred to as “bring your own device.”) The pager became obsolete, replaced by our own always-on, always-connected mobile devices. (Sadly, the fax machine, like a cockroach, keeps hanging on.)

Patients are also bringing their own devices, and using them in waiting rooms and hospital beds. We’ve had patients reporting their own symptoms using Wellpepper interactive care plans from their hospital beds. This presents an opportunity to engage, and at a low cost: they are supplying the hardware. The final wave of consumerism will happen when clinicians and other hospital staff also demand convenient, usable, and well-designed tools for clinical care.

Consumerization is late to arrive in healthcare IT. Other industries have already reached the tail end of this wave and have realized that technology needs to be easy to use, accessible, interoperable, and designed with the end user foremost. However, consumerization is coming, driven both by internal staff demands and by patients. Technology, healthcare IT, and the people who build and support it are facing scrutiny, being held to higher standards, and becoming part of the strategic decision-making of healthcare organizations. This is a great thing, as it will result in better clinician and patient experiences overall, because at its core consumerism is about expecting value and ease, and getting it, and who doesn’t want that?

Posted in: Healthcare Disruption, Healthcare Technology, Healthcare transformation, Interoperability, M-health, Outcomes, Patient Satisfaction
