
Machine Learning in Medicine

As a new intern, I remember frequently making my way to the Emergency Department for a new admission. “Chest pain,” the attending would tell me before sending me to my next patient. Like any good intern, I would head directly to the paper chart where I knew the EKG would be waiting for me, already signed off by the ER physician. Printed in standard block print, I would read “Normal Sinus Rhythm, No significant ST segment changes” and place the EKG back on the chart. It would be later in the year before I learned to ignore that pre-emptive diagnosis, or even give a thought to how it got there. This is one of many examples of how machine learning has been integrated into our everyday life in medicine. It can be a helpful diagnostic tool, or it can be a red herring.

Example of machine-learning EKG interpretation.

Machine learning is the scientific discipline that focuses on how computers learn from data, and if there is one thing we have in abundance in medicine, it is data. Data has been used to teach computers to play poker, learn the laws of physics, become video game experts, and provide substantial analysis in a variety of fields. In medicine, the analytical power of machine learning has so far been applied to EKG interpretation, radiograph interpretation, and pathology specimen identification, to name a few. But this scope seems limited. In what other areas could we use this technology successfully? What barriers could prevent its adoption?

Diagnostic tools are used in both the inpatient and outpatient settings on a regular basis. We routinely pull out our phones or Google to risk stratify patients with ASCVD scoring, or MELD scoring in the cirrhotic patient who was just admitted. Through machine learning, these scoring systems could be applied automatically: the EMR identifies the right patient, makes the calculation for the physician, and presents the result before we even think about making the calculation ourselves. Imagine a patient with cirrhosis who is a frequent visitor to the hospital. As a patient known to the system, a physician has at some point keyed in the diagnosis of “cirrhosis.” On the next admission, this prompts the EMR to automatically calculate and display a MELD score, along with a Maddrey Discriminant Function if a diagnosis of “alcoholic hepatitis” is included in the medical history. The physician can then judge the clinical relevance of the provided scores; maybe they are helpful in management, or maybe they are of little consequence given the reason for admission. You can imagine similar EMR-driven applications for many of our other risk calculators. While machine learning has potential far beyond this, it is a practical example of how it could help in everyday workflow. That said, machine learning also has drawbacks.
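As a concrete illustration, the MELD calculation an EMR could run automatically is just a few lines of arithmetic. The sketch below uses the classic (pre-MELD-Na) UNOS MELD formula; the function name and code structure are my own illustration, not any particular EMR’s implementation.

```python
import math

def meld_score(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> int:
    """Classic (pre-MELD-Na) MELD score.

    Per the usual UNOS conventions: lab values below 1.0 are floored
    at 1.0, creatinine is capped at 4.0 mg/dL, and the result is
    rounded and capped at 40.
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr_floored = max(inr, 1.0)
    creat = min(max(creatinine_mg_dl, 1.0), 4.0)
    raw = (3.78 * math.log(bili)
           + 11.2 * math.log(inr_floored)
           + 9.57 * math.log(creat)
           + 6.43)
    return min(round(raw), 40)

# A patient with normal labs scores the floor value.
print(meld_score(1.0, 1.0, 1.0))  # 6
```

An EMR could run this whenever the problem list contains “cirrhosis” and the three labs are current, surfacing the score alongside the results.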

Some drawbacks of machine learning in medicine include the deskilling of physicians, the inability of algorithms to take data in context, and the intrinsic uncertainties of medicine. One study found that when internal medicine residents were presented with EKGs carrying computer-annotated diagnoses, similar to the scenario I mentioned at the beginning of this post, diagnostic accuracy actually fell from 57% to 48% compared with a control group without that assistance (Cabitza, JAMA 2017). As an example of data taken out of context, Cabitza describes machine-learning algorithms that, examining in-hospital mortality among pneumonia patients, found that patients with both pneumonia and asthma had lower mortality and concluded that asthma was protective against pneumonia. The context missing from the algorithm was that patients with asthma who were admitted with pneumonia were more frequently admitted to intensive care units as a precaution. As for intrinsic uncertainty, physicians evaluating the same patient routinely reach different opinions about diagnosis and management. Here machine learning could be both an advantage and a disadvantage: it removes physician bias, but by the same token it removes physician intuition.
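The asthma example is a classic confounding trap, easy to reproduce with a toy dataset. The numbers below are invented purely for illustration: because asthmatic pneumonia patients are preferentially escalated to the ICU (and treated aggressively there), a naive comparison makes asthma look protective.

```python
# Synthetic cohort (invented counts) illustrating the confounding
# Cabitza describes. Each row: (has_asthma, admitted_to_icu, died, count).
cohort = [
    # Asthma patients: mostly escalated to the ICU as a precaution.
    (True,  True,  False, 90),
    (True,  True,  True,   5),
    (True,  False, False,  5),
    # Non-asthma patients: mostly treated on the ward.
    (False, False, False, 80),
    (False, False, True,  12),
    (False, True,  False,  5),
    (False, True,  True,   3),
]

def mortality(rows):
    """Deaths divided by total patients for a subset of the cohort."""
    died = sum(n for _, _, d, n in rows if d)
    total = sum(n for *_, n in rows)
    return died / total

asthma = [r for r in cohort if r[0]]
no_asthma = [r for r in cohort if not r[0]]
print(f"asthma mortality:    {mortality(asthma):.1%}")     # 5.0%
print(f"no-asthma mortality: {mortality(no_asthma):.1%}")  # 15.0%
```

A model trained only on (asthma, outcome) pairs would learn that asthma predicts survival; the ICU escalation that actually drives the difference is invisible to it.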

At Wellpepper, for the Amazon Alexa challenge, machine learning was used to train a scale-and-camera device (named “Sugarpod”) to recognize early changes in skin breakdown and help detect diabetic foot ulcers. Given the complications that come with diabetic foot ulcers, including infections and amputations, tools like this can help providers catch foot wounds earlier and provide appropriate treatment, ideally leading to less severe infections, fewer hospitalizations, fewer amputations, and a lower burden on the healthcare system as a whole. I believe these goals can be projected across medicine, and machine learning can help us reach them. With healthcare costs rising ($3.3 trillion in 2016), most people can agree that any tool that can decrease those costs should be used to the best of its ability. Machine learning, even in some of its simplest forms, can certainly be made to do this.

Posted in: Healthcare costs, Healthcare Technology, Healthcare transformation


Healthcare + A.I. Northwest

The Xconomy Healthcare + A.I. Northwest Conference at Cambia Grove featured speakers and panels discussing the opportunities and prospects for applying machine learning and artificial intelligence to health care. The consensus was that we are no longer held back by a lack of technological understanding and ability: A.I. and M.L. models can be trained at large scale by harnessing the power of the cloud and advances in data science. According to the panelists, today’s challenges to incorporating A.I. into healthcare are data that is abundant but inadequate, and resistance from health systems and providers.

Many researchers have found insufficient data to be an unexpected challenge. As keynote speaker Peter Lee of Microsoft Research pointed out, the more data we have, the better our machine-learned models can be. He used the analogy of a speech recognizer trained on multiple languages, whose model predicted English better after learning French, to illustrate the improvements possible with large sets of unstructured data. Unfortunately, we are not capturing enough of the right kind of data for researchers: much patient data is lost in the “health data funnel” to PHI and quality concerns. Lee called for more data sharing and data transparency at every level.

Physician researchers on multiple panels were concerned about the lack of suitable data. Soheil Meshinchi, a pediatric oncologist at Fred Hutchinson Cancer Research Center, is engaged in collecting data specific to children. He discussed his research on acute myeloid leukemia on the panel titled “Will A.I. Help Discover and Personalize the Next Breakthrough Therapy?”. While there is a large body of research on AML in adults, he has found that the disease behaves very differently at the genomic level in children. He also expressed distrust of some published research, because studies are rarely reproduced and a researcher who presents results contrary to existing findings often faces headwinds at journals reluctant to publish “negative data.” His focus at this point is gathering as much data as he can.

Matthew Thompson, a physician researcher at the University of Washington School of Medicine, argued on the “Innovations for the Over-Worked Physician” panel that technology has made patient interaction demonstrably worse, and that these problems can and should be solved innovatively with artificial intelligence. His specific complaints include the difficulty of both entering data into and extracting it from health system EHRs, an overall glut of raw patient data (often generated by patients themselves), and far too much published research for clinicians to digest.

Both keynote speakers, Microsoft’s Lee and Oren Etzioni of the Allen Institute for Artificial Intelligence, referenced the large number of research papers published every year. According to Etzioni, the number of scientific papers published has doubled every nine years since World War II. Lee cited a statistic that 4,000 studies on precision cancer treatments are published each year. Both are relying on innovative machine reading techniques to analyze and categorize research papers to make them more accessible to physicians (and other scientists). Dr. Etzioni’s team has developed tools to combat the common challenges facing those who search for research papers; he aims to reduce the number of citations they must follow while identifying the most relevant and up-to-date research available. One advantage of applying this approach to the literature rather than to patient data is that scientific texts carry no PHI concerns. Lee’s team is marrying patient data and machine reading to match potential research subjects with appropriate NIH studies.

Dr. Thompson was also concerned that too much data is presented to medical staff and that very few of the “predictive rules” used by ER personnel are both accurate and safe. When reviewing patient outcomes and observations used to predict the severity of an infection, he found that patients or their caregivers would provide ample information, but clinicians would often disregard certain details as noise because they were atypical symptoms. The amount of data providers must review for a patient is massive, but machine-learned models could distill that data into the most relevant and actionable signals.

Before data can be interpreted, it must be collected. Like Dr. Thompson, Harjinder Sandhu of Saykara sees ponderous, physician-driven data entry into the EHR as a significant barrier to efficient data collection. Sandhu notes that healthcare is the only industry where the highest-paid team member performs this onerous task, and his company is using artificial intelligence to ease that burden on the physician.

Once patient data has been aggregated and processed into models, the challenge is getting the information in front of providers. This requires buy-in from the health system, physicians, and, occasionally, patients and their caregivers. Mary Haggard of Providence Health and Services spoke on the “Tech Entrepreneurs Journey into Healthcare” panel and stated that the biggest problem for entrepreneurs is defining the correct problem to solve. During the “Investment Perspective” panel, Matt Holman of Echo Health Ventures recommended that tech startups emphasize an understanding of the context of the problem within a health system.

One of the most important and difficult hurdles for health technology companies is fitting into clinical workflow. Mike McSherry of Xealth has found that physician champions who know how they want to use a technology help with integrating it into a health system or physician group. Lynn McGrath of Eigen Healthcare believes physicians want their data to be well defined, quick to assess, condensed, and actionable, while Shelly Fitz pointed out that providers are not used to all the data they are receiving and don’t yet know how to use it all. These are all issues that can and will be solved as healthcare technology continues to become more intelligent.

As Wellpepper’s CTO Mike Van Snellenberg pointed out, health systems and doctors are resistant to “shiny new things,” for good reason. When approaching a health system, in addition to engaging the administration, clinicians need to understand why a machine-learned model is recommending a given course of treatment. After integration, patients will also want to understand why a course of treatment is being recommended. Applying artificial intelligence to medicine must take the human element into account as well.

The exciting possibilities of artificial intelligence and machine learning are hindered more by human constraints in health systems and data collection than by available technology. “Patients are throwing off all kinds of data when they’re not in the clinic,” according to our CTO. Wellpepper’s tools for capturing patient-generated data provide a pathway for providers to access actionable analysis.

Posted in: Healthcare Disruption, Healthcare Technology, patient engagement
