As I noted in Part 1 on Friday, AI is already in common use in everyday life, and it is also being used frequently and increasingly in health care.
AI is used to promote member engagement, both to identify those members most likely to benefit from a program or intervention, and then to recruit them. Anthem says it uses AI to match clinicians to members based on common interests and demographics; United Healthcare says it uses AI to better answer member questions about benefits. Multiple benefit plan selection tools use versions of AI to drive better recommendations.
AI is also used in clinical care. Woebot offers a mental health chatbot; patients know they are talking to a “robot,” and some even prefer to avoid a human for therapy. Cleveland Clinic uses AI to match oncology patients with precision medicine, and Twin Health uses AI to create a “twin” to engage patients in their metabolic health. AI can also predict which molecules are likely to be effective pharmaceutical agents, which could speed up development of new drugs. Dentists are using AI to detect early cavities, although some worry this could lead to overtreatment.
AI can also be used to increase efficiency in health care delivery. Multiple companies including Carbon Health use AI to create office notes for physician review - which eliminates the need for scribes and can decrease physician burnout. Sedgwick uses AI to make preliminary disability claims determinations, and many carriers use AI to make preliminary coverage determinations and to detect fraud.
Here are a few thoughts about what this could mean going forward for employers:
- Better targeting of members and tailoring of messages through AI could increase engagement for some members and decrease noise for others. Employers should be careful, though, that misfires of AI don’t undermine these efforts.
- Cognitive behavioral therapy is already often delivered via apps, and machine learning can improve the recommendations or content pathways. AI can also offer lookup functions and advice, whether for plan choice, benefit explanations, or clinical questions. We’ll need quality controls in place to be sure that the output is not biased and is not just plain hallucinated.
- I’m not confident that we can use AI to decrease the friction of prior authorization. I think we’ll see an arms race where health plans and providers are each using AI to either suppress costs or enhance revenues.
- I’m also not confident that this will lower medical costs. AI requires upfront capital investment, and requires highly skilled professionals, often clinicians, to oversee it. Even if the provider community can lower its costs, those cost savings will not automatically be returned to plan sponsors. And at least some providers will use this technology to increase their revenue.
- I believe that employers will also have to negotiate hard to obtain a portion of the savings vendors achieve in their service delivery, as incumbent vendors will seek to capture these savings as margin. Disruptive vendors that lower their costs through effective use of AI could offer much lower prices, which could reset the market in some instances. Again, we will always need human oversight of AI, which diminishes cost savings.
- AI in drug development could lead to breakthroughs that improve outcomes, but these are likely to raise costs in our current health care financing system.
- Employers will need to take inventory of AI used by their carriers and vendors and provide oversight to ensure that the AI is being appropriately used to improve outcomes or efficiency.
Tuesday: ChatGPT and Dr Google (AI Part 3)
Thanks for reading. You can find previous posts in the Employer Coverage archive.
Please “like” and suggest this newsletter to friends and colleagues. Thanks!
Illustration by Dall-E with prompt “Impressionist painting of artificial intelligence”