Secure and complete data acquisition
Much healthcare data is difficult to access and use without the right tools. Our AI platform uses a secure, straightforward extraction process to pull and encrypt documents, images, billing claims, and other data types from customer source systems. These data are securely transmitted to our Data Loader for import into our cloud-based platform for processing and analysis.
Data processing and validation
Once data has been transmitted to the Apixio Platform, it is indexed using a proprietary data specification. We execute hundreds of separate validation checks against imported datasets to ensure that we have the minimum elements required for customer projects. Processing does not begin until the data is considered satisfactory.
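The minimum-element checks described above can be sketched as a simple gate on imported records. The field names and rules below are illustrative assumptions, not Apixio's actual specification:

```python
# Illustrative minimum-element validation gate. REQUIRED_FIELDS and the
# individual rules are assumptions for the sketch, not the real spec.
REQUIRED_FIELDS = {"patient_id", "date_of_service", "document_type"}

def validate_record(record: dict) -> list:
    """Return a list of human-readable validation failures (empty = pass)."""
    failures = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        failures.append(f"missing required fields: {sorted(missing)}")
    if "patient_id" in record and not str(record["patient_id"]).strip():
        failures.append("patient_id is blank")
    return failures

def dataset_is_satisfactory(records: list) -> bool:
    """Processing proceeds only when every record passes every check."""
    return all(not validate_record(r) for r in records)
```

In a real pipeline each of the hundreds of checks would be a rule like the ones above, with the dataset held back until all of them pass.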
The Apixio Data Coordinator shuttles files to different processing routines based on their specific properties. Our data processing pipeline operates in a parallel computing environment for speed and scale. We can process hundreds of millions of clinical notes in just hours.
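A coordinator that routes files by property and fans work out in parallel can be sketched as follows. The handler names and the file-extension routing key are assumptions made for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative coordinator: route each file to a processing routine by
# file type, then process the batch in parallel. Handler names and the
# extension-based routing key are assumptions, not Apixio internals.
def process_clinical_note(path): return ("nlp", path)
def process_image(path):         return ("ocr", path)
def process_claim(path):         return ("claims", path)

ROUTES = {".txt": process_clinical_note, ".tif": process_image, ".edi": process_claim}

def route(path: str):
    handler = ROUTES.get(path[path.rfind("."):])
    if handler is None:
        raise ValueError(f"no processing routine for {path}")
    return handler(path)

def process_all(paths):
    # Parallel execution is what lets a pipeline like this scale.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(route, paths))
```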
Images are processed by our optical character recognition (OCR) pipeline. Signal-processing enhancements ensure that even very poorly scanned text is made machine readable.
The Customer Data Inventory system then tracks files during their journey from data import to insight generation.
Data mining for knowledge
Imported data is stored in the Apixio Patient Object Model (APOM), which can be thought of as a computational phenotype. Each APOM contains reported information about an individual's healthcare, such as diseases, medications, procedures, and biometric values, as well as information derived from prior analyses. APOMs can be analyzed using classifiers, ensembles, and predictive models to generate events, which are then combined to support decisions made by application users.
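A patient object of this kind, holding both reported facts and derived events, can be sketched as a small data structure. The class and field names below are illustrative, not the real APOM schema:

```python
from dataclasses import dataclass, field

# Minimal sketch of a patient object holding reported and derived facts.
# Names here are assumptions for illustration, not the real APOM schema.
@dataclass
class PatientObject:
    patient_id: str
    conditions: list = field(default_factory=list)      # reported diseases
    medications: list = field(default_factory=list)     # reported medications
    derived_events: list = field(default_factory=list)  # outputs of prior analyses

def run_classifiers(patient: PatientObject, classifiers) -> list:
    """Apply each model to the patient and collect the events it emits."""
    events = []
    for clf in classifiers:
        events.extend(clf(patient))
    patient.derived_events.extend(events)  # derived facts feed later analyses
    return events
```

Storing derived events alongside reported data is what lets later analyses build on earlier ones, as the paragraph above describes.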
Putting the data to work
Our platform extracts a variety of signals from customer data using machine learning techniques. These signals can be used to answer specific questions about individuals or source data. We combine signals using ensembles to create insights, which are then bundled into configurable application workflows to support user decision making. Feedback from application users is stored in our APOMs and later used to improve and retrain our algorithms.
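One simple way to combine per-model signals into an insight is a weighted ensemble. The weights, signal names, and threshold below are assumptions chosen to illustrate the idea, not Apixio's actual models:

```python
# Sketch of combining per-model signal confidences into one insight
# score via a weighted average. Weights and threshold are assumptions.
def ensemble_score(signals: dict, weights: dict) -> float:
    """Weighted average of signal confidences, ignoring unweighted signals."""
    total = sum(weights[name] for name in signals if name in weights)
    if total == 0:
        return 0.0
    return sum(weights[n] * v for n, v in signals.items() if n in weights) / total

def to_insight(signals: dict, weights: dict, threshold: float = 0.5) -> dict:
    """Bundle the combined score into a yes/no decision for a workflow."""
    score = ensemble_score(signals, weights)
    return {"score": score, "surface_to_user": score >= threshold}
```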
User annotation and data labeling (via automated and manual methods) are used to continuously update our models. We employ supervised and unsupervised techniques to train models, and our proprietary science infrastructure includes mechanisms for handling noisy annotations and labeling errors. We specially configure our workflow applications to reduce errors and improve the accuracy of expert annotation, which is essential for crafting and maintaining high-performing algorithms.
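One common defense against noisy annotations, shown here as an illustrative stand-in for whatever mechanisms the infrastructure actually uses, is to require a majority of independent annotators to agree before a label is trusted for training:

```python
from collections import Counter

# Illustrative noisy-label defense: accept a label only when a majority
# of annotators agree. The agreement rule is an assumption, not the
# actual mechanism used in production.
def consensus_label(annotations: list, min_agreement: float = 0.5):
    """Return the majority label if its share exceeds min_agreement, else None."""
    if not annotations:
        return None
    label, count = Counter(annotations).most_common(1)[0]
    return label if count / len(annotations) > min_agreement else None
```

Labels that fail the agreement test would be dropped or routed back for expert re-review rather than fed into training.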
By using Apixio, we’re improving our auditing bandwidth and enhancing the ability of our coders to focus on other chart audits and other projects that we couldn’t do before.
We’ve previously relied on tedious manual review of our charts that required lots of manpower and physician time...the HCC Profiler has allowed us to mine our EHR and scanned chart data for valid, risk-adjusting conditions with incredible transparency and efficiency.
The coding team was wonderful to work with, and incorporated our specific coding guidelines when working on our project. When they noticed a trend in what we were rejecting, they shared a Coding Clinic we were unaware of for clarification on how to code the scenarios.
Apixio & Magna Health Plan
Fast Facts: 40,633 lives reviewed; 2,755 HCC deletes found; 95.3% agreement with…
How a Veteran Coder Used Apixio’s HCC Profiler to Eliminate Data Entry Error and Double Productivity
Fun Stats: Increased productivity from 3 charts per hour to 10 charts…