
Posts Tagged machine learning

Self-Driving Healthcare

It’s 2019 and your car can drive itself most of the way to your doctor’s office, but once there, you will be handed a clipboard with a paper form asking you for your name, date of birth, and insurance information. Then you will wait to be seen by a doctor, who will spend your visit facing a screen transcribing your spoken complaint into the EMR, and then ask you where you’d like your lab results faxed.

How can it be that technology is making such huge strides in some areas of our lives, while others are seemingly stuck in the last century? When will healthcare have its self-driving moment?

The Promise of Self-Driving Cars

Self-driving cars are a great example of computer science applied to a difficult real-world problem. Less than 15 years ago, the computer science community celebrated the first autonomous vehicles successfully completing a 130 mi course in under 10 hours as part of the DARPA Grand Challenge. Most of the heavily funded university research teams that entered used traditional programming techniques. The winning team, from Stanford University, distinguished itself by using machine learning to train its vehicle by example, rather than writing the if-then-else code by hand.

Since then, machine learning in general, and deep neural networks in particular, have proven unreasonably effective at solving problems with huge and highly complex inputs, such as image recognition, sensor integration, and traffic prediction. Companies like Waymo, Volvo, Uber, and Tesla have been pouring money into the autonomous vehicle space and making rapid progress. Many cars sold today come with some level of assisted driving, like lane keeping and collision prevention, and Tesla vehicles even come with a “Full Self Driving” option.

Machine Learning in Healthcare

So, what about healthcare? People’s health is a highly complex function of genetics, medicine, diet, exercise, and a number of other lifestyle factors. In the same way you make thousands of little steering corrections to stay in a lane, you make thousands of choices each day that impact your susceptibility to disease, quality of life, and longevity to name a few. Can the same toolset that can help cars drive themselves help us build good predictive models for health and healthcare?

There have certainly been efforts, including some high-profile failures. One big limitation is the data. On the one hand, healthcare is awash in data. Some claim it won’t even fit in the cloud (spoiler: it will). But much of the data in healthcare today is locked up in EMR systems. Once you’ve liberated it from the EMR, the next problem is that it’s not a great input for machine learning algorithms. A recent JAMA study on applications of ML in healthcare found that EMR data had big data-quality issues, and that models learned on one EMR’s dataset were not transferable to another EMR’s dataset, severely limiting the portability of models. Imagine trying to build a self-driving car with partial and incompatible maps from each city, and you’ll start to understand the problem with using EMR data to train ML models.

All The Wrong Data

But more important than this: even if the EMR data were clean and consistent, a big piece of the puzzle would still be missing: the data about the person when they’re not in the doctor’s office. We know that a person’s health is influenced largely by their lifestyle, diet, and genetics, but we largely don’t have good datasets for these yet.

You can’t build a self-driving car no matter how many fluid-level measurements and shop records you have: “I don’t know why that one crashed; its oil was changed just 4 weeks ago. And with a fresh air filter too!” You also can’t build meaningful healthcare ML models with today’s EMR-bound datasets. Sure, there will be some progress on constrained problems like billing optimization, workflow, and diagnostics (particularly around imaging), but big “change the world” progress will fundamentally require a better dataset.

There are efforts underway to collect parts of this dataset, with projects like Verily’s Project Baseline and the recently failed Arivale. Baseline and others like it will take years or decades to come to fruition, as they track how decisions made today affect a person many years down the line.

On a more modest scale, at Wellpepper we believe that driving high-quality, patient-centered interactions outside the clinic is a major key to unlocking improved health outcomes. That meant starting by building a communication channel between patients and their care providers to help patients follow their care plans. Using Wellpepper, providers can assign care plans, and patients can follow along at home, record health measures, track symptoms, and send messages. Collecting this data in a structured way opens the door to understanding and improving these interactions over time.

For example, using regression analysis we were able to identify patterns in post-surgical side effects that indicate a 3x risk of readmission. More recently, we trained a machine-learned classifier for unstructured patient messages that helps urgent messages get triaged faster. And this is just scratching the surface: since this kind of patient-centric data from outside the clinic is new, we expect a large greenfield of discovery as we collect more data in more patient care scenarios.
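To make the message-triage idea concrete, here is a minimal sketch of how such a classifier could work. This is an illustrative naive Bayes toy with invented example messages and labels, not Wellpepper’s actual model or data:

```python
import math
from collections import Counter, defaultdict

# Hypothetical training messages -- invented for illustration only.
EXAMPLES = [
    ("severe pain and swelling at incision", "urgent"),
    ("fever and chills since surgery", "urgent"),
    ("wound is bleeding heavily", "urgent"),
    ("when is my next appointment", "routine"),
    ("question about my exercise plan", "routine"),
    ("thanks for the reminder", "routine"),
]

def train_nb(examples):
    """Count word frequencies per label (multinomial naive Bayes)."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def classify(model, text):
    """Pick the label with the highest log-probability, using Laplace smoothing."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_logp = None, float("-inf")
    for label, count in label_counts.items():
        logp = math.log(count / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            logp += math.log((word_counts[label][word] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

model = train_nb(EXAMPLES)
print(classify(model, "severe bleeding from the wound"))  # urgent
```

Naive Bayes with smoothing is just a common baseline for text classification; a production triage model would need far more data, careful evaluation, and a human in the loop for anything flagged urgent.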

Better patient-centric data combined with state-of-the-art machine learning algorithms holds huge promise for healthcare. But we need to invest in collecting the right patient-centric datasets, rather than relying on whatever data happens to be lying around in the EMR.

 

Posted in: Healthcare Disruption, Healthcare Technology, Healthcare transformation, machine learning, patient-generated data


Machine Learning in Medicine

As a new intern, I remember frequently making my way to the Emergency Department for a new admission. “Chest pain,” the attending would tell me before sending me to my next patient. Like any good intern, I would head directly to the paper chart where I knew the EKG was supposed to be waiting for me, already signed off by the ER physician. Printed in standard block print, “Normal Sinus Rhythm, No significant ST segment changes,” I would read, and place the EKG back on the chart. It would be later in the year before I learned to ignore that pre-emptive diagnosis, or even give a thought to how it got there. This is one of many examples of how machine learning has started to be integrated into our everyday life in medicine. It can be helpful as a diagnostic tool, or it can be a red herring.

Example of machine-learning EKG interpretation.

Machine learning is the scientific discipline that focuses on how computers learn from data, and if there is one thing we have in abundance in medicine, it is data. Data has been used to teach computers how to play poker, learn the laws of physics, become video game experts, and provide substantial analysis in a variety of fields. Currently in medicine, the analytical power of machine learning has been applied to EKG interpretation, radiograph interpretation, and pathology specimen identification, just to name a few. But this scope seems limited. In what other settings could we successfully use this technology? What are some of the barriers that could prevent its utilization?

Diagnostic tools are utilized in the inpatient and outpatient setting on a regular basis. We routinely pull out our phones or Google to risk-stratify patients with ASCVD scoring, or maybe MELD scoring for the cirrhotic patient who just got admitted. Through machine learning, these scoring systems could be applied when the EMR identifies the right patient, with the calculations made for the physician and presented in our results before we even have to think about making them ourselves. Imagine a patient with cirrhosis who is a frequent visitor to the hospital. As a patient known to the system, a physician has at some point keyed in the diagnosis of “cirrhosis.” On the next admission, this prompts the EMR to automatically calculate and provide a MELD score, plus a Maddrey Discriminant Function if a diagnosis of “alcoholic hepatitis” is included in the medical history. The physician can then clinically determine the relevance of the provided scores; maybe they are helpful in management, or maybe they are of little consequence depending on the reason for admission. You can imagine similar settings for many of our other risk calculators that could be provided through the EMR. While machine learning has potential far beyond this, it is a practical example of where it could easily be helpful in everyday workflow. However, there are some drawbacks to machine learning.
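As an illustration of the kind of calculation an EMR could automate, here is a minimal sketch of the classic (pre-2016) MELD formula. Treat the clamping details as a simplification for illustration, not a clinical reference:

```python
import math

def meld_score(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    """Classic (pre-2016) MELD score.

    Lab values below 1.0 are floored at 1.0; creatinine is capped at 4.0,
    or set to 4.0 outright for a patient on dialysis. The final score is
    rounded and capped at 40.
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    cr = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(cr)
             + 6.43)
    return min(round(score), 40)

print(meld_score(3.0, 2.0, 2.0))  # 25
```

The point is not the arithmetic itself, which any clinician can do with a phone, but that every input already lives in the EMR as a structured lab result, so the score could be computed and surfaced without anyone asking for it.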

Some consequences of machine learning in medicine include the deskilling of physicians, the inability of machine learning to take data in context, and the intrinsic uncertainties of medicine. One study found that when internal medicine residents were presented with EKGs that had computer-annotated diagnoses, similar to the scenario I mentioned at the beginning of this post, diagnostic accuracy was actually reduced from 57% to 48% compared to a control group without that assistance (Cabitza, JAMA 2017). An example Cabitza brings up regarding data in context involves in-hospital mortality among pneumonia patients with and without asthma. The machine-learning algorithms used in this scenario identified that patients with pneumonia and asthma had lower mortality, and drew the conclusion that asthma was protective. The contextual data missing from the algorithm was that patients with asthma who were admitted with pneumonia were more frequently admitted to intensive care units as a precaution. Intrinsic uncertainty pervades modern medicine, as physicians can hold different opinions on the diagnosis and management of the same patient based on their evaluations. Here machine learning could be both an advantage and a disadvantage: it removes physician bias, but by the same token it removes physician intuition.

At Wellpepper, for the Amazon Alexa challenge, machine learning was used to train a scale and camera device (named “Sugarpod”) to recognize early changes in skin breakdown and help detect diabetic foot ulcers. Given the complications that come with diabetic foot ulcers, including infections and amputations, tools like this can be utilized by the provider to catch foot wounds earlier and provide appropriate treatment, ideally leading to less severe infections, fewer hospitalizations, fewer amputations, and a lower burden on the healthcare system as a whole. I believe these goals can be projected across medicine, and machine learning can help us achieve them. With healthcare costs rising ($3.3 trillion in 2016), most people can agree that any tool that can decrease that cost should be utilized to the best of its ability. Machine learning, even in some of its simplest forms, can certainly be made to do this.

Posted in: Healthcare costs, Healthcare Technology, Healthcare transformation


Healthcare + A.I. Northwest

The Xconomy Healthcare + A.I. Northwest conference at Cambia Grove featured speakers and panels discussing the prospects for applying machine learning and artificial intelligence to health care. The consensus was that we are no longer held back by a lack of technological understanding and ability: A.I. and M.L. models can be trained at large scale by harnessing the power of the cloud and advances in data science. According to the panelists, today’s challenges to incorporating A.I. into healthcare include abundant but inadequate data, and resistance from health systems and providers.

Many researchers have found insufficient data to be an unexpected challenge. As keynote speaker Peter Lee of Microsoft Research pointed out, the more data we have, the better our machine-learned models can be. To illustrate that improvements can be made with large sets of unstructured data, he gave the example of a speech recognizer trained on multiple languages, which predicted English better after learning French. Unfortunately, we are not capturing enough of the right kind of data for researchers; much patient data is getting lost in the “health data funnel” due to PHI and quality concerns. Lee called for more data sharing and data transparency at every level.

Physician researchers on multiple panels were concerned about a lack of suitable data. Soheil Meshinchi, a pediatric oncologist from Fred Hutchinson Cancer Research Center, is engaged in collecting data specific to children. On the panel titled “Will A.I. Help Discover and Personalize the Next Breakthrough Therapy?”, he discussed his research on Acute Myeloid Leukemia. While there is a large body of research on AML in adults, he has found that the disease behaves much differently at a genomic level in children. He also expressed distrust in some published research, because studies are rarely reproduced, and a researcher who presents results contrary to existing work often faces headwinds at journals reluctant to publish “negative data.” His focus at this point is gathering as much data as he can.

Matthew Thompson, a physician researcher at the University of Washington School of Medicine, argued on the “Innovations for the Over-Worked Physician” panel that technology has made patient interaction demonstrably worse, but that these problems can and should be solved with artificial intelligence. His specific complaints include the difficulty of both inputting data into and extracting data from health system EHRs, an overall glut of raw patient data (often generated by the patients themselves), and far too much published research for clinicians to digest.

Both keynote speakers, Microsoft’s Lee and Oren Etzioni of the Allen Institute for Artificial Intelligence, referenced the large number of research papers published every year. According to Etzioni, the number of scientific papers published has doubled every nine years since World War II. Lee cited a statistic that 4,000 studies on precision cancer treatments are published each year. Both are relying on innovative machine reading techniques to analyze and categorize research papers to make them more accessible to physicians (and other scientists). Dr. Etzioni’s team has developed SemanticScholar.org to address the common challenges facing those who search for research papers; he aims to reduce the number of citations they must follow while identifying the most relevant and up-to-date research available. One advantage of applying machine reading to the literature rather than to patient data is that scientific texts have no PHI concerns. Lee’s team is marrying patient data and machine reading to match potential research subjects with appropriate NIH studies.

Dr. Thompson was also concerned that too much data is presented to medical staff, and that very few of the “predictive rules” used by ER personnel are both accurate and safe. When reviewing patient outcomes and observations to predict the severity of an infection, he found that patients or their caregivers would provide ample information, but clinicians would often disregard certain details as noise because they were atypical symptoms. The amount of data that providers have to observe for a patient is massive, but machine-learned models may be used to distill that data into the most relevant and actionable signals.

Before data can be interpreted, it must be collected. Like Dr. Thompson, Harjinder Sandhu of Saykara sees ponderous, physician-driven data entry via the EHR as a significant barrier to efficient data collection. Sandhu notes that healthcare is the only industry where the highest-paid team member performs this onerous task, and his company is using artificial intelligence to ease that burden on the physician.

Once patient data has been aggregated and processed into models, the challenge is getting the information in front of providers. This requires buy-in from the health system, the physician, and, occasionally, the patient and their caregivers. Mary Haggard of Providence Health and Services spoke on the “Tech Entrepreneurs Journey into Healthcare” panel and stated that the biggest problem for entrepreneurs is defining the correct problem to solve. During the “Investment Perspective” panel, Matt Holman of Echo Health Ventures recommended that tech startups emphasize an understanding of the context of the problem within a health system.

One of the most important and difficult hurdles for health technology companies is working their way into clinical workflow. Mike McSherry of Xealth has found that physician champions who know how they want to use a technology help with integrating it into a health system or physician group. Lynn McGrath of Eigen Healthcare believes physicians want their data to be well defined, quick to assess, condensed, and actionable, while Shelly Fitz points out that providers are not used to all the data they are receiving and don’t yet know how to use it all. These are all issues that can and will be solved as healthcare technology continues to become more intelligent.

As Wellpepper’s CTO Mike Van Snellenberg pointed out, health systems and doctors are resistant to “shiny new things,” for good reason. When approaching a health system, in addition to engaging the administration, clinicians need to understand why a machine-learned model is recommending a given course of treatment. After integration, patients will also want to understand why a given course of treatment is being recommended. Applying artificial intelligence to medicine must take the human element into account as well.

The exciting possibilities of artificial intelligence and machine learning are hindered more by human constraints in health systems and data collection than by available technology. “Patients are throwing off all kinds of data when they’re not in the clinic,” according to our CTO. Wellpepper’s tools for capturing patient-generated data provide a pathway for providers to access actionable analysis.

Posted in: Healthcare Disruption, Healthcare Technology, patient engagement
