This is the third in a 4-part series called Intro to Risk Adjustment Technology.
As we covered in the first part of our blog series, risk adjustment is traditionally done in a very manual and slow way. This process is time consuming and costly, not to mention that it doesn't make good use of coders' expertise. Because of these issues, cognitive computing can meet an important need for risk adjustment. A cognitive computing platform can extract data from patient charts, read and analyze the data for potential HCC information, and then push potential HCCs in front of coders to accept or reject, eliminating the bulk of the manual effort associated with today's workflow.
The extraction of data was covered in an earlier blog post. In this post, we will cover the second half of cognitive computing technology: machine learning and adaptive analytics. In simpler terms, that means drawing insights from the patient data you've extracted and putting those insights to use.
The first step in analyzing patient data is teaching a computer to read
As any coder can tell you, once all the patient charts that need reviewing are in one place, risk adjustment is still a challenge. Finding evidence of HCCs in this pile of data is like finding a needle in a haystack.
To do this, we have to teach a computer to read. We give the computer a large set of examples (known as training data) along with basic rules, and it learns how to read medical charts for supporting evidence of a condition. In fact, the platform gets smarter every time a coder uses it.
To explain this further, let's use an example. Facebook uses machine learning for its photo-tagging feature. Have you ever noticed that when you post a photo on Facebook, you sometimes get eerily accurate suggestions for who to tag? Facebook develops these suggestions by using all the instances when you tag people in photos as training data for its machine learning algorithm. It then makes suggestions for your new photos; if you reject them and pick your own tags, it learns from that too.
Similar to the Facebook example, a cognitive computing platform learns to identify HCCs from common terms and phrases, along with coder interactions.
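To make the learning loop concrete, here is a minimal sketch (all function names, data, and the update rule are hypothetical, not the platform's actual algorithm) of how accepting or rejecting suggestions can adjust the weights a system places on terms it sees in charts:

```python
from collections import defaultdict

# Hypothetical sketch: the system scores chart snippets by summing learned
# word weights, and nudges those weights each time a coder accepts or
# rejects a suggestion -- so it "gets smarter" with every review.
weights = defaultdict(float)  # learned importance of each term

def score(snippet):
    """Sum the learned weights of the words in a chart snippet."""
    return sum(weights[word] for word in snippet.lower().split())

def coder_feedback(snippet, accepted, lr=1.0):
    """Perceptron-style update: reinforce terms on accept, penalize on reject."""
    direction = lr if accepted else -lr
    for word in snippet.lower().split():
        weights[word] += direction

# A coder accepts two suggestions mentioning COPD and rejects an unrelated one.
coder_feedback("patient with copd on oxygen", accepted=True)
coder_feedback("chronic copd exacerbation", accepted=True)
coder_feedback("routine wellness visit", accepted=False)

print(score("copd follow-up") > score("wellness visit"))  # True
```

Real platforms use far richer models than word counts, but the feedback loop is the same: every coder decision becomes another training example.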
Machine learning is different from natural language processing because it learns from experiences as opposed to rules
How does machine learning relate to natural language processing, which is another term you may have heard at industry conferences and events? Natural language processing uses rules to build systems that understand language, while machine learning uses experiences to build systems that understand language.
I'll give you an example to help explain. Take the following sentence: "Let Harrison Ford Focus on Star Wars." An NLP system, which has only been fed rules, might not know whether this sentence is about Harrison Ford or the Ford Focus car. Unless the system has been instructed on what to do in the particular case where "Harrison Ford" and "Ford Focus" end up in a sentence together, it would not know what to do. However, a machine learning system, which has been fed past experiences, would fare better. It would understand that when "Harrison Ford" and "Star Wars" appear together in a sentence, the sentence is likely about Star Wars, not a car.
In the same way, when a machine learning program sees a sentence like "this is a COP patient" in a note that also includes pulmonary tests, it can deduce that the sentence is about COPD and that "COP" is a misspelling. A natural language processing program, which strictly follows the rules it has been given, might think the sentence is about a patient who is a police officer. In this way, machine learning is better at drawing meaning from the free text in patient records.
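The contrast above can be sketched in a few lines of code. Everything here is a toy illustration, not how any production NLP or ML system actually works: the rule-based version only knows the cases it was explicitly told about, while the experience-based version counts which context words appeared with each meaning in past examples and lets them vote.

```python
from collections import Counter

# Rule-based: a brittle lookup. The "Harrison Ford Focus" sentence was
# never anticipated, so the first matching rule fires, right or wrong.
RULES = {"ford focus": "car", "harrison ford": "actor"}

def rule_based(sentence):
    for phrase, meaning in RULES.items():
        if phrase in sentence:
            return meaning  # first match wins
    return "unknown"

# Experience-based: count co-occurrences in (hypothetical) past examples.
TRAINING = [
    ("harrison ford stars in star wars", "actor"),
    ("the ford focus gets great mileage", "car"),
    ("star wars cast includes harrison ford", "actor"),
]
cooccur = {"actor": Counter(), "car": Counter()}
for text, label in TRAINING:
    cooccur[label].update(text.split())

def learned(sentence):
    words = sentence.split()
    return max(cooccur, key=lambda label: sum(cooccur[label][w] for w in words))

sentence = "let harrison ford focus on star wars"
print(rule_based(sentence))  # "car" -- the "ford focus" rule misfires
print(learned(sentence))     # "actor" -- "star" and "wars" tip the vote
```

The same idea applies to the COPD example: context words like "pulmonary" and "spirometry" accumulate evidence for the medical reading that a literal rule would miss.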
After the platform reads all the charts, it finds evidence for chronic conditions and gathers it into individual bundles for each potential HCC. Coders then go through the bundles and accept or reject them. We’ll discuss this more in the next part of the series.
To learn more, see parts I, II, and IV of this series.