
How We Use Technology to Get Risk Adjustment Data Out of EHRs

This is the second in a 4-part series called Intro to Risk Adjustment Technology.

As we discussed in the first post in this series, traditional risk adjustment is incredibly manual. Coders comb through thousands of pages of patient charts looking for documented chronic conditions. This process is time-consuming and costly, and it doesn't make good use of a coder's expertise. Technology can remedy these challenges and make risk adjustment more productive, accurate, efficient, transparent, and predictive.

Cognitive computing is the technology solution that risk adjustment needs

Cognitive computing is a combination of technologies that enable a computer to learn from experience and improve its performance over time. It was popularized by IBM's Watson supercomputer, which famously used cognitive computing to win Jeopardy. Cognitive computing can serve an important need in risk adjustment: a cognitive computing platform can extract data from patient charts, read and analyze that data for potential hierarchical condition category (HCC) information, and then push potential HCCs in front of coders to accept or reject.
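To make the three stages concrete, here is a toy sketch of the extract-analyze-review flow. Everything in it is invented for illustration: real platforms use trained NLP models rather than keyword lookups, and the condition-to-HCC mappings shown are placeholders, not actual model categories.

```python
# Toy sketch of the three-stage flow: extract, analyze, surface for review.
# Keyword matching stands in for the real analysis models.

CONDITION_KEYWORDS = {
    "diabetes": "HCC 19",
    "copd": "HCC 111",
}

def extract_text(chart):
    """Stage 1: pull raw text out of a chart record."""
    return chart["text"].lower()

def find_candidate_hccs(text):
    """Stage 2: flag potential HCCs (keyword stand-in for NLP)."""
    return [hcc for term, hcc in CONDITION_KEYWORDS.items() if term in text]

def review_queue(charts):
    """Stage 3: build the list a coder accepts or rejects."""
    queue = []
    for chart in charts:
        for hcc in find_candidate_hccs(extract_text(chart)):
            queue.append({"patient": chart["patient"], "candidate": hcc})
    return queue

charts = [{"patient": "A", "text": "History of COPD, stable."}]
print(review_queue(charts))  # [{'patient': 'A', 'candidate': 'HCC 111'}]
```

The key design point is the last stage: the computer never finalizes an HCC on its own; it only queues candidates for a human coder's judgment.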

Data acquisition and processing is the initial component of cognitive computing

The first part of the technology platform, and the part we will discuss in this blog post, is data acquisition and processing. That sounds complicated, but it is just a fancy way of describing how the computer gets data out of an EHR and other file formats.


As you may know, documentation that supports risk adjustment comes in different formats. The two main types are EHR files, which give you electronic data, and scanned documents, which give you data in image form.


Why is getting data from these files so difficult? Well, for one, it has to be done in a secure, HIPAA-compliant way: every person who touches the files has to go through HIPAA training, and the files have to be encrypted and decrypted several times throughout the process. Second, there are often different EHR systems across an organization, or different versions of the same EHR system, and their data has to be reconciled. Lastly, scanned data isn't automatically readable by a computer. When the computer looks at a scanned document, instead of English text, all it sees is a series of images or symbols.

Acquiring patient data from EHRs is a challenge

Getting data out of EHR systems is a particular challenge because useful risk adjustment data lives in a different place in each system. We are looking for very specific data, namely face-to-face encounter data, which is a tiny fraction of everything an EHR holds. It may sit in one corner of the platform in Allscripts and another in Epic; it may be buried several layers down in NextGen and right on the surface in GE Centricity. The stakes are high: accurate risk adjustment depends on complete and correct data.

The way we get data out of the EHR is at once pretty complex and pretty simple. Basically, we write code that instructs the computer to give up the correct information. The instructions might say, "go to place x," "retrieve data that is structured like y," and "send it back to me." Computers will do what they're told, but you have to tell them in a language they understand. The coded queries find and return the face-to-face encounters out of all the encounter data in the EHR, bypassing telemedicine encounters, home health service encounters, and others.
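A minimal sketch of that filtering logic might look like the following. The field name ("encounter_type") and the type codes are invented for illustration; in practice each EHR records this differently, which is exactly why per-system query code has to be written.

```python
# Hypothetical filter for face-to-face encounters. Field names and
# type codes are assumptions; every EHR stores these differently.

FACE_TO_FACE_TYPES = {"office_visit", "inpatient", "outpatient"}

def face_to_face_encounters(encounters):
    """Keep face-to-face encounters, skipping telemedicine,
    home health, and other non-qualifying types."""
    return [e for e in encounters if e["encounter_type"] in FACE_TO_FACE_TYPES]

encounters = [
    {"id": 1, "encounter_type": "office_visit"},
    {"id": 2, "encounter_type": "telemedicine"},
    {"id": 3, "encounter_type": "home_health"},
]
print(face_to_face_encounters(encounters))
# [{'id': 1, 'encounter_type': 'office_visit'}]
```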


Of course, this process doesn't just happen once. Different code has to be written for each EHR, and for the RAPS (Risk Adjustment Processing System) data. And even after the data is retrieved from the EHR, it still doesn't play well together: it has been extracted, but it still comes in different shapes. The next steps in making the data usable are transforming it into a standard format and carving it up into smaller pieces, separated by encounter and document type, for example.
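Those two steps, standardizing and carving up, can be sketched as follows. The per-EHR field mappings here are invented; the point is only that each source needs its own translation into one common record shape before the data can be split into batches.

```python
# Hypothetical normalization step. The field mappings are assumptions;
# each EHR source needs its own translation into the standard shape.

FIELD_MAPS = {
    "ehr_a": {"pt": "patient_id", "dt": "date", "kind": "document_type"},
    "ehr_b": {"patient": "patient_id", "svc_date": "date", "doc": "document_type"},
}

def normalize(record, source):
    """Translate one EHR-specific record into the standard shape."""
    mapping = FIELD_MAPS[source]
    return {standard: record[raw] for raw, standard in mapping.items()}

def split_by_document_type(records):
    """Carve the normalized data into smaller per-type batches."""
    batches = {}
    for r in records:
        batches.setdefault(r["document_type"], []).append(r)
    return batches

rows = [
    normalize({"pt": "A", "dt": "2016-07-01", "kind": "progress_note"}, "ehr_a"),
    normalize({"patient": "B", "svc_date": "2016-07-02", "doc": "lab_report"}, "ehr_b"),
]
print(sorted(split_by_document_type(rows)))  # ['lab_report', 'progress_note']
```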

This process is completely non-intrusive. You can run your queries whenever you want, even after hours, to minimize disruption to patient care and normal operations.


A one-time process

The good thing is that after we do all this work once, we don't have to do it again. Unlike manual risk adjustment, where you might have to retrieve charts from a physician's office every 6, 12, or 18 months, with technological risk adjustment you can use the existing data pipeline to repeatedly pull data out of the provider's EHR without bothering them, as long as you have permission, of course.
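One way a repeat pull can avoid re-requesting whole charts is to fetch only what is new since the last run. The "last pulled" bookkeeping and date field below are assumptions for illustration, not a description of any particular pipeline.

```python
# Sketch of an incremental repeat pull: later runs fetch only
# encounters dated after the previous run. The date field and
# bookkeeping are assumptions.

from datetime import date

def incremental_pull(encounters, last_pulled):
    """Return encounters dated after the previous pull."""
    return [e for e in encounters if e["date"] > last_pulled]

encounters = [
    {"id": 1, "date": date(2016, 1, 15)},
    {"id": 2, "date": date(2016, 7, 1)},
]
# Only encounter 2 is newer than the June 1 cutoff.
print(incremental_pull(encounters, last_pulled=date(2016, 6, 1)))
```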

And just like that, data acquisition is transformed from a time-consuming nuisance into a quick, secure, and HIPAA-compliant process.

To learn more, see parts I, III, and IV of this series.

Part I: Why Does Risk Adjustment Need Technology?

Part III: How We Use Machine Learning to Analyze Patient Data in Medical Records

Part IV: What Risk Adjustment Technology Means for Coders
