
Machine Learning In Healthcare: How To Avoid GIGO

There’s a commonly used phrase in technology, “garbage in, garbage out,” which means that if you start with flawed data or faulty code, you’re going to get lousy output. Avoiding it demands a high level of rigor, whether in user and market research or in algorithm design. Garbage in/garbage out (or GIGO) is why, although we are using machine learning at Wellpepper to improve patient engagement and outcomes, we’re also slightly skeptical of efforts by big tech (Google, Microsoft, and Amazon) to partner with healthcare organizations and mine their EMR data with machine learning in pursuit of medical breakthroughs. I’ve talked to a number of physicians who are equally skeptical. The reason we, and especially many physicians, are skeptical is that the data in the EMR is frequently poor quality, much of it is highly unstructured, and a major piece of data is missing: the actual patient outcomes. The EMR records what the doctor prescribed, but not what the patient did and often not what the result was. As well, the data collected in the EMR is designed for billing, not diagnosis, so the insights are more likely to be about billing codes than about diagnoses. Why is the data poor quality?

  • A JAMA study found that only 18 percent of EMR notes are original: 46 percent were imported (without attribution) and 36 percent were copied and pasted. Even if we assume the 18 percent of original notes contain no errors, you’re still dealing with the remaining 82 percent of notes that come from a questionable source. This copying and pasting has also contributed to “note bloat”: if the data is bad, having more of it will actually hinder the process of finding insights, even for a machine.
  • The data is not standardized. Since so much of the data in the EMR lives in free-text notes, physicians end up using different words for the same issue.
  • The dataset from an EMR is biased in several important ways. First, it was entered by physicians and other practitioners rather than by a broad set of users. The language of healthcare is very different from how patients talk about their health, so these algorithms are unlikely to generalize well outside the setting where their training data was acquired. Second, data in the EMR has a built-in selection bias towards sick people. Healthy people are largely missing, or at least substantially underrepresented, in the dataset. So don’t be surprised if a classifier trained in this setting decides that everyone is sick (a quick sketch of this failure mode follows this list).
  • Even without copy-and-paste errors, the data is often just wrong. I once had an intern read back a note to me where she’d recorded my profession as “construction worker”. Yes, I make things, but it’s not nearly as physically taxing, and if a physician treating me thought I regularly did heavy labor with my small frame, you can see how over-treatment might be the result.
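
To make the selection-bias point concrete, here’s a minimal, purely synthetic sketch (invented numbers, scikit-learn for convenience): a classifier trained on an EMR-like, sick-heavy sample happily labels most of a more representative population as sick.

```python
# Hypothetical illustration of EMR selection bias: a model trained on a
# sick-heavy dataset over-predicts "sick" on a more representative population.
# Not real patient data -- just synthetic numbers for the sake of the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(n, sick_fraction):
    """Simulate one noisy 'lab value' feature that runs higher for sick people."""
    sick = rng.random(n) < sick_fraction
    lab_value = rng.normal(loc=np.where(sick, 1.0, 0.0), scale=1.5)
    return lab_value.reshape(-1, 1), sick.astype(int)

# Training data looks like an EMR: heavily skewed toward sick patients.
X_train, y_train = make_population(5_000, sick_fraction=0.95)
# The general population the model will actually be applied to.
X_test, y_test = make_population(5_000, sick_fraction=0.20)

model = LogisticRegression().fit(X_train, y_train)
predicted_sick_rate = model.predict(X_test).mean()

print(f"True sick rate in test population: {y_test.mean():.0%}")
print(f"Model's predicted sick rate:       {predicted_sick_rate:.0%}")
# With a noisy feature and a 95%-sick training set, the learned prior dominates,
# so the model calls the vast majority of the healthy population "sick".
```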

CNBC’s Christina Farr wrote more about this data problem, the potential for medical errors, and a strange unwillingness to correct the data. A patient quoted in the story understands all too well the problem of GIGO:

“I hope that companies in tech don’t start looking at the text in physician notes and making determinations without a human or someone who knows my medical history very well,” she said. “I’m worried about more errors.”

  • In addition to incorrect data, there are problems of semantics: physicians using different words for the same issue. On top of learning medical synonyms, which is no small feat, these EMR ML algorithms are going to have to learn clinical grammar too to be truly effective.
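
As a trivial illustration of the synonym problem, here’s a hypothetical normalization step. The dictionary and example note are invented; a real pipeline would map terms to a terminology standard such as SNOMED CT or UMLS rather than a hand-built word list.

```python
# Hypothetical sketch of clinical synonym normalization. The mapping below is
# invented; a real system would resolve terms against SNOMED CT or UMLS, and
# would tokenize properly rather than doing naive substring replacement.
CONCEPT_MAP = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "high blood pressure": "hypertension",
    "htn": "hypertension",
    "elevated bp": "hypertension",
    "sob": "shortness of breath",
    "dyspnea": "shortness of breath",
}

def normalize(note: str) -> str:
    """Replace known synonyms/abbreviations with a canonical term."""
    text = note.lower()
    # Longest keys first so "high blood pressure" wins over any shorter match.
    for phrase in sorted(CONCEPT_MAP, key=len, reverse=True):
        text = text.replace(phrase, CONCEPT_MAP[phrase])
    return text

print(normalize("Pt c/o SOB, hx of HTN and prior MI."))
# -> "pt c/o shortness of breath, hx of hypertension and prior myocardial infarction."
```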

Of course, there are solutions to all of these problems, and the data quality can be improved with approaches like more standardized input, proof-reading, and possibly virtual scribes (ironically, using machine learning to speed up input and improve the quality of the data). However, the current issues make me question whether this is a garbage-in/garbage-out effort and whether everyone would be better off starting from cleaner data. The challenge today is that the experts in ML (big tech) don’t have the data, and the experts in the data (healthcare) don’t have the machine learning expertise, so they are partnering and trying to gain some insights from what they have, which is arguably very messy data. Another, and possibly more interesting, approach is to get a new data set. In 2014, HealthMap showed that you could mine social media sources like Twitter, Facebook, and Yelp for health data, and even predict food poisoning outbreaks faster than the CDC; government health organizations have since adopted the approach. This is a great example of finding a new data set and seeing what comes of it. At Wellpepper, our growing body of patient-generated data is starting to show insights. In particular, we’ve been able to analyze the data to find the following, and to use it to automate and improve care (a simple sketch of the kind of message flagging involved follows this list):

  • Indicators of adverse events in patient-generated messages
  • Patients at 3-times greater risk of readmission from their own reported side-effects
  • The optimal number of care plan tasks for adherence
  • The most adherent cohort of patients
  • The correlation between provider messages and patient adherence to care plan
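
To give a feel for the first item on that list, here is a deliberately naive, hypothetical keyword flagger for patient messages. It is not Wellpepper’s actual analysis, which is trained on real patient-generated data; the term list and messages below are invented, and the point is only the shape of the task: free-text message in, triage flag out.

```python
# Hypothetical, keyword-based flagging of patient messages for possible
# adverse events. NOT a production pipeline; a real system would be trained
# on labelled patient-generated data rather than relying on a word list.
import re

ADVERSE_EVENT_TERMS = [
    "fever", "chills", "swollen", "red", "discharge",
    "severe pain", "shortness of breath", "dizzy", "fell", "bleeding",
]

def flag_message(message: str) -> list[str]:
    """Return the adverse-event terms found in a patient message, if any."""
    text = message.lower()
    return [
        term for term in ADVERSE_EVENT_TERMS
        if re.search(r"\b" + re.escape(term) + r"\b", text)
    ]

messages = [
    "Did my exercises today, knee feels a little stiff but OK.",
    "The incision is red and swollen and I've had a fever since last night.",
]

for msg in messages:
    hits = flag_message(msg)
    status = "FLAG for clinical review" if hits else "no flag"
    print(f"{status}: {hits} -- {msg!r}")
```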

We also use machine learning in our patented adaptive notification system, which learns from patient behavior and changes notifications and messages based on that behavior. This is a key driver of our high levels of patient engagement, and it can be applied to other patient interactions. While it’s still hard work to find these insights and then train algorithms on the data sets, we have an advantage because we are also responsible for creating the structure (the patient engagement platform) in which we collect this data (a simplified sketch of this kind of notification feedback loop follows the list below):

  • We know exactly what the patient has been asked to do as part of the care plan
  • We have structured and unstructured data
  • Through EMR integration we also have the diagnosis code and other demographic insights on the patient
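
To illustrate the general idea of adaptive notifications, and only the general idea, here is a toy sketch that is hypothetical and far simpler than the patented system described above: it drifts a patient’s daily reminder toward the hours they actually respond, and backs off when reminders are being ignored.

```python
# Toy sketch of an adaptive notification scheduler: nudge the reminder time
# toward the hours a patient actually responds, and back off if they ignore us.
# Hypothetical and greatly simplified; not Wellpepper's patented system.
from dataclasses import dataclass, field

@dataclass
class NotificationPolicy:
    send_hour: float = 9.0          # hour of day to send the daily reminder
    sends: int = 0
    responses: int = 0
    response_hours: list = field(default_factory=list)

    def record(self, responded: bool, response_hour: float | None = None) -> None:
        self.sends += 1
        if responded:
            self.responses += 1
            self.response_hours.append(response_hour)
            # Move the send time 25% of the way toward when the patient engages.
            self.send_hour += 0.25 * (response_hour - self.send_hour)

    @property
    def response_rate(self) -> float:
        return self.responses / self.sends if self.sends else 0.0

    def next_action(self) -> str:
        # Back off (or change the message) if the patient is ignoring reminders.
        if self.sends >= 5 and self.response_rate < 0.3:
            return "reduce frequency / try a different message"
        return f"send daily reminder at {self.send_hour:.1f}h"

policy = NotificationPolicy()
for responded, hour in [(False, None), (True, 19.0), (True, 20.0), (False, None), (True, 18.5)]:
    policy.record(responded, hour)
print(policy.next_action())   # the reminder drifts from 9.0h toward the evening
```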

If you’re interested in gaining new insights into the effectiveness of the care plans you deliver to patients outside the clinic, get in touch. You can create a new, clean data stream of patient-generated data that can start delivering insights immediately.

Posted in: Clinical Research, Healthcare Technology, patient engagement
