Last week, I attended the Society of Actuaries (SOA) 2019 Health Meeting in Phoenix. There were roughly 1,000 attendees, with every major health plan being represented. On the second day of the meeting, I had the pleasure of presenting with Andrew Loewer, FSA, and Katherine Zhao from Evolent Health. Despite being the last time slot of the day, we had about 40 people join our session.
Most actuaries, including 77 percent of our audience, primarily rely on medical and pharmacy claims data—a relatively well-structured, standardized, and clean data source. However, the audience reported that procedures and costs from all care settings, instant notifications, detailed medical charts and physician notes, and non-traditional sources such as wearables and consumer habits would all be potentially useful data sources.
It can be challenging and intimidating to move from well-structured claims data to unstructured sources such as admission, discharge, and transfer (ADT) messages and Continuity of Care Documents (CCDs), and to tools such as natural language processing (NLP). Our presentation focused on the benefits of utilizing these unstructured data sources, and the audience had a particular interest in the ways these data sources can improve risk adjustment.
I focused on the use cases of supplementing claims data with ADT data for emergency department (ED) visits and inpatient stays. Each hospital visit typically results in several ADT messages, each carrying different pieces of information, such as:
- Encounter type
- Date and time
- Chief complaint
- Patient information
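To make this concrete, ADT messages typically follow the HL7 v2 standard: pipe-delimited segments, one per line, with the fields above at fixed positions. The sketch below pulls a few of those fields out of a simplified, hypothetical message; the field positions follow the common HL7 v2 layout, but a production parser should use a dedicated HL7 library and the sending site's own specification.

```python
# Hedged sketch: extracting encounter type, timestamp, and patient info
# from a simplified, made-up HL7 v2 ADT message. Segments are separated
# by carriage returns; fields within a segment are separated by pipes.

SAMPLE_ADT = "\r".join([
    "MSH|^~\\&|HOSP|ED||RECV|20190615083000||ADT^A04|123|P|2.5",
    "EVN|A04|20190615083000",
    "PID|1||MRN12345||DOE^JANE||19800101|F",
    "PV1|1|E|ED^^^HOSP||||||||||||||||V100",
])

def parse_adt(message: str) -> dict:
    """Index each segment by its three-letter ID, then read fields by position."""
    segments = {line.split("|")[0]: line.split("|") for line in message.split("\r")}
    msh, pid, pv1 = segments["MSH"], segments["PID"], segments["PV1"]
    return {
        # Note: in MSH, the pipe itself is field 1, so positions shift by one.
        "encounter_type": msh[8],            # MSH-9, e.g. ADT^A04 (registration)
        "timestamp": msh[6],                 # MSH-7
        "patient_name": pid[5].replace("^", " "),  # PID-5
        "patient_class": pv1[2],             # PV1-2, 'E' = emergency
    }

info = parse_adt(SAMPLE_ADT)
```

Even this toy example shows why ADT feeds are valuable for near-real-time work: the visit type and timestamp arrive with the message itself, long before a claim is submitted.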
Andrew focused on CCDs, an electronic document exchange standard for sharing patient health information. CCD summaries are formatted so they can be shared across computer applications and electronic health record (EHR) systems, and they include commonly needed past and current health status information:
- Family & social history
- Vital signs
- Advance directives
- Care plans
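Because CCDs are XML documents, the sections above can be read programmatically. The sketch below extracts vital-sign observations from a heavily simplified, made-up fragment; real CCDs follow the HL7 CDA schema (namespace `urn:hl7-org:v3`) and are far more deeply nested, so the element and attribute names here are illustrative only.

```python
import xml.etree.ElementTree as ET

# Hedged sketch: pulling vital signs out of a simplified CCD-like XML
# fragment. The structure below is a stand-in for the real CDA schema.

CCD_FRAGMENT = """
<document>
  <vitalSigns>
    <observation code="8480-6" name="Systolic BP" value="128" unit="mmHg"/>
    <observation code="29463-7" name="Body weight" value="82" unit="kg"/>
  </vitalSigns>
</document>
"""

def extract_vitals(xml_text: str) -> list:
    """Return each observation's attributes as a plain dict."""
    root = ET.fromstring(xml_text)
    return [dict(obs.attrib) for obs in root.iter("observation")]

vitals = extract_vitals(CCD_FRAGMENT)
```

The `code` attributes are LOINC codes, the standard vocabulary CCDs use for observations; mapping them to readable names is usually the first step in making this data usable for analytics.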
Katherine covered NLP, a type of artificial intelligence (AI) that enables computers to better understand and process human language. Some major NLP tasks are:
- Stemming, or reducing words to their root form (e.g., 'diabet' would be the root for 'diabetes' or 'diabetic')
- Named entity recognition
- Terminology extraction
- Part of speech tagging
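Stemming, the first task above, can be illustrated with a toy suffix-stripping rule; real systems use published algorithms such as Porter stemming (available in libraries like NLTK), so the suffix list below is purely for illustration.

```python
# Toy suffix-stripping stemmer: strip the longest matching suffix, but only
# if a reasonably long root remains. Illustrative only; not a real algorithm.

SUFFIXES = ["ical", "ics", "ing", "ic", "es", "ed", "s"]

def stem(word: str) -> str:
    word = word.lower()
    for suffix in SUFFIXES:  # already ordered longest-first
        if word.endswith(suffix) and len(word) - len(suffix) >= 4:
            return word[: -len(suffix)]
    return word
```

This reproduces the article's example: both 'diabetes' and 'diabetic' reduce to the root 'diabet', which lets downstream tasks treat them as the same concept.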
Many open-source tools are insufficient for processing free-text healthcare data because of its complexity. An NLP solution has to recognize domain-specific vocabulary: medical terms, drug names, procedure names, misspellings, and countless abbreviations and acronyms.
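One small piece of that domain-specific handling is expanding clinical abbreviations before running general-purpose NLP. The sketch below is a simple dictionary lookup pass; the mappings are illustrative, and real systems need much larger, context-aware dictionaries (e.g., 'pt' can mean patient or physical therapy depending on context).

```python
import re

# Hedged sketch: expand a handful of common clinical abbreviations via a
# whole-word, case-insensitive lookup. Dictionary entries are illustrative.

ABBREVIATIONS = {
    "htn": "hypertension",
    "dm": "diabetes mellitus",
    "sob": "shortness of breath",
    "pt": "patient",
}

def expand_abbreviations(text: str) -> str:
    def replace(match):
        word = match.group(0)
        return ABBREVIATIONS.get(word.lower(), word)
    return re.sub(r"\b\w+\b", replace, text)

note = expand_abbreviations("Pt with HTN and new-onset SOB")
# -> "patient with hypertension and new-onset shortness of breath"
```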
Katherine received several questions about NLP. These questions suggested that many attendees have access to clinical data and are working out how to use it, or are still deciding whether it's worth pursuing.
Overall, we had a great turnout and a very engaged audience. The most common questions revolved around how actuaries at health plans can get access to clinical data for risk-adjustment purposes and how easy it is to use compared to claims data. We had a great time at the conference and look forward to seeing everyone next year!
Kelvin Wursten, FSA, CERA, MAAA
Director, Data Science