Archive for January, 2019

See you at HIMSS19

HIMSS19 is a couple of weeks away and we have a lot to be excited about!

Stop by and see us in the Personalized Health Experience, Booth 888-96. Alongside our great partners at Ensocare, we will be showcasing our latest product updates, discussing ROI for patient engagement platforms, promoting care plans based on Mayo Clinic best practices, and sharing our vision for the future of patient engagement.

We also have a long list of booths to visit, sessions to attend, and topics we're particularly interested in this year.

We can’t wait to connect with friends, partners, colleagues and industry leaders to continue the journey towards an amazing patient experience. Hope to see you there!

Posted in: Healthcare Disruption, Healthcare Technology, M-health, Outcomes, patient engagement


Calculating Return On Investment For Interactive Care Plans

Patient engagement and education are perhaps the most under-utilized resources in the care continuum. Patients who are engaged in their care cost healthcare systems 8% less than non-engaged patients in their base year, and 21% less in future years.[1]

Interactive treatment plans deliver return on investment in three key ways:

Improving patient satisfaction and outcomes

  • Patients look for quality and convenience when they choose a provider organization. They want to be able to interact with their healthcare organization on the same terms as everything else in their lives. Improving outcomes also increases patient satisfaction.

Increasing access to care

  • By monitoring what patients are doing outside the clinic and enabling patients to self-manage, you can increase access to care by seeing the right patients, seeing more patients, and improving recall. Making sure patients are prepared for surgery decreases no-shows and increases utilization and access to care.

Reducing costs

  • Cost reductions are the most dependent on the model of care, since a readmission or ED visit could be a source of revenue. However, you can look to reduce the hard costs of seminars and handouts, as well as the costs of readmissions, extra visits in capitated models, and complications. For patients, poor outcomes increase costs both out of pocket and in quality of life. The manual labor costs of administering surveys and follow-up questionnaires can also be avoided with automated systems.

Long-term impact

Patient engagement can have a much stronger long-term impact, including reducing:

  • Hidden costs of variability in care delivery
  • Hidden costs of lack of standardization and manual processes
  • Costs of poor patient outcomes that result in worsening patient problems

As well, as an industry we have only begun to scratch the surface of the types of clinical and behavioral insights that can be derived from patient-reported data, insights that will enable more efficient and effective treatment based on predictive models, and stronger patient participation in their own care.

For our full whitepaper and ROI economic models, contact us or call (844) 899-7377 and press 1 for Sales.

[1] Health Affairs 32, no. 2 (2013): 207–214, “What The Evidence Shows About Patient Activation: Better Health Outcomes And Care Experiences; Fewer Data On Costs.”

Posted in: Adherence, Healthcare Legislation, Healthcare transformation, patient-generated data, Return on Investment


Machine Learning In Healthcare: How To Avoid GIGO

There’s a commonly used phrase in technology, “garbage in, garbage out,” which means that if you start with flawed data or faulty code, you’re going to get lousy output. Avoiding it demands a high level of rigor, whether in user and market research or in designing algorithms. Garbage in/garbage out (or GIGO) is why, although we at Wellpepper are using machine learning to improve patient engagement and outcomes, we’re also slightly skeptical of efforts by big tech (Google, Microsoft, and Amazon) to partner with healthcare organizations to mine their EMR data using machine learning to drive medical breakthroughs. I’ve talked to a number of physicians who are equally skeptical.

The reason we, and especially many physicians, are skeptical is that the data in the EMR is frequently poor quality, much of it highly unstructured, and a major piece of data is missing: the actual patient outcomes. The EMR records what the doctor prescribed, but not what the patient did, and often not what the result was. As well, the data collected in the EMR is designed for billing, not diagnosis, so the insights are more likely to be about billing codes than about diagnoses. Why is the data poor quality?

  • A JAMA study found that only 18 percent of EMR notes are original: 46 percent were imported (without attribution) and 36 percent were copied and pasted. Even if we assume the 18 percent of original notes have no errors, you’re still dealing with 82 percent of notes that have a questionable source. This copying and pasting has also contributed to “note bloat”: if the data is bad, having more of it will actually hinder the process of finding insights, even for a machine.
  • The data is not standardized. Since so much of the data in the EMR lives in free-text notes, physicians end up using different words for the same issue.
  • The dataset from an EMR is biased in several important ways. First, it was entered by physicians and other practitioners, rather than by a broad set of users. The language of healthcare is very different from how patients talk about their health, so these algorithms are unlikely to generalize well outside the setting where their training data was acquired. Second, data in the EMR has a built-in selection bias towards sick people. Healthy people are probably missing, or at least substantially underrepresented, in the dataset. So don’t be surprised if a classifier trained in this setting decides that everyone is sick.
  • Even without copy-and-paste errors, the data is often just wrong. I once had an intern read back a note to me in which she’d recorded my profession as “construction worker.” Yes, I make things, but my work is nowhere near as physically taxing, and if a physician treating me thought I regularly did heavy labor with my small frame, you can see how over-treatment might result.
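
To make the selection-bias point concrete, here is a toy Python sketch. The population size, illness rate, and sampling probabilities are all invented for illustration; the only point is that a sample drawn from clinic visits can invert the true base rate:

```python
import random

# Toy illustration of selection bias: the true population is mostly
# healthy, but the "EMR" sample only contains people likely to have
# generated a clinical record. All probabilities are made up.
random.seed(0)

population = [{"sick": random.random() < 0.10} for _ in range(10_000)]

# Sick people are far more likely to end up in the EMR.
emr_sample = [p for p in population
              if random.random() < (0.9 if p["sick"] else 0.05)]

true_rate = sum(p["sick"] for p in population) / len(population)
emr_rate = sum(p["sick"] for p in emr_sample) / len(emr_sample)

print(f"true sick rate: {true_rate:.0%}")
print(f"EMR sick rate:  {emr_rate:.0%}")
# In the EMR sample the majority class is "sick", so a naive
# classifier trained on it learns that everyone is sick, even
# though roughly 90% of the real population is healthy.
```

The same inversion happens with real EMR extracts whenever the sampling mechanism (who shows up at a clinic) is correlated with the label being predicted.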

CNBC’s Christina Farr wrote more about this data problem, the potential for medical errors, and a strange unwillingness to correct the data. A patient quoted in the story understands all too well the problem of GIGO:

“I hope that companies in tech don’t start looking at the text in physician notes and making determinations without a human or someone who knows my medical history very well,” she said. “I’m worried about more errors.”

  • In addition to incorrect data, there are inconsistent semantics, such as physicians using different words for the same issue. On top of learning medical synonyms, which is no small feat, these EMR ML algorithms will have to learn grammar too to be truly effective.
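
As a sketch of the synonym problem, here is a toy normalization table of the kind an EMR NLP pipeline would need before it could compare notes from different physicians. The mappings below are illustrative; real systems map free text to standard terminologies such as SNOMED CT or UMLS:

```python
# Illustrative only: a tiny synonym map. Production pipelines use full
# clinical terminologies (SNOMED CT, UMLS) rather than hand-built dicts.
SYNONYMS = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "high blood pressure": "hypertension",
    "htn": "hypertension",
}

def normalize_term(term: str) -> str:
    """Map a free-text term to a canonical concept name, if known."""
    return SYNONYMS.get(term.lower().strip(), term)

print(normalize_term("Heart attack"))  # myocardial infarction
print(normalize_term("HTN"))           # hypertension
```

Without this normalization step, “MI,” “heart attack,” and “myocardial infarction” look like three unrelated conditions to a learning algorithm.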

Of course, there are solutions to all of these problems, and data quality can be improved with approaches like more standardized input, proof-reading, and possibly virtual scribes (ironically, using machine learning to speed up input and improve data quality). However, the current issues make me question whether this is a garbage-in/garbage-out effort, and whether everyone would be better off starting from cleaner data. The challenge today is that the experts in ML (big tech) don’t have the data, and the experts in the data (healthcare) don’t have the ML expertise, so they are partnering and trying to glean insights from what is arguably very messy data.

Another, and possibly more interesting, approach is to get a new data set. In 2014, HealthMap showed that you could mine social media data from Twitter, Facebook, and Yelp for health signals, and even predict food poisoning faster than the CDC; government health organizations have since adopted the approach. This is a great example of finding a new data set and seeing what comes of it. At Wellpepper, our growing body of patient-generated data is starting to show insights. In particular, we’ve been able to analyze the data to find the following, and to use it to automate and improve care:

  • Indicators of adverse events in patient-generated messages
  • Patients at three times greater risk of readmission, based on their self-reported side effects
  • The optimal number of care plan tasks for adherence
  • The most adherent cohort of patients
  • The correlation between provider messages and patient adherence to care plan
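
As a purely hypothetical illustration of the first item, here is a minimal sketch of screening patient messages for adverse-event indicators. Wellpepper’s actual models are not public; the keyword list, function names, and rules below are invented, and a production system would use a trained model rather than keyword matching:

```python
# Hypothetical sketch: flag patient messages that may indicate an
# adverse event using a hand-built keyword list. Every term and rule
# here is illustrative, not a real clinical ruleset.
ADVERSE_EVENT_TERMS = {
    "fever", "bleeding", "swelling", "dizzy", "vomiting",
    "shortness of breath", "chest pain",
}

def flag_message(message: str) -> list[str]:
    """Return the adverse-event terms found in a patient message."""
    text = message.lower()
    return sorted(term for term in ADVERSE_EVENT_TERMS if term in text)

messages = [
    "Walked 20 minutes today, feeling good",
    "The incision is red and there is some swelling and bleeding",
]
for msg in messages:
    hits = flag_message(msg)
    if hits:
        print("REVIEW:", msg, "->", hits)
```

Even a crude screen like this shows why structured, patient-generated messages are a cleaner starting point than copy-pasted clinical notes: the system knows exactly what question the patient was answering.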

We also use machine learning in our patented adaptive notification system, which learns from patient behavior and adapts notifications and messages accordingly. This is a key driver of our high levels of patient engagement, and it can be applied to other patient interactions. While it’s still hard work to find these insights and then train algorithms on the data sets, we have an advantage because we are also responsible for creating the structure (the patient engagement platform) in which we collect this data:

  • We know exactly what the patient has been asked to do as part of the care plan
  • We have structured and unstructured data
  • Through EMR integration we also have the diagnosis code and other demographic insights on the patient

If you’re interested in gaining new insights about the effectiveness of your own patient-facing care plans delivered to patients outside the clinic, get in touch. You can create a new and clean data stream based on patient-generated data that can start delivering new insights immediately.

Posted in: Clinical Research, Healthcare Technology, patient engagement
