You might have read that an organization has announced an artificial intelligence chatbot that will send daily or even hourly health prompts to participants. The AI underlying this chatbot could ingest medical records, biometrics, and data from wearables to help people with diabetes better tailor their diets, and could remind people exactly when to get cancer screening. AI could allow a level of personalization that isn’t possible now. For instance, based on risk factors and medical history, the optimal colonoscopy interval might be every 8 years for some people but every 12 years for others. The AI would facilitate radical personalization.
So far, this is only an idea; the actual product is months or even years away. And some people might love this! I enjoy checking my smart watch to see how many calories I’ve burned in a day, and I obediently stand when, at ten minutes before the hour, my watch signals that I’ve been sitting too long. But my behavior sits at the far end of the “self-quantification” scale and doesn’t predict the behavior of others. Further, some research suggests that setting specific goals can actually diminish physical activity.
People also get fatigued by repeated reminders, and often “shut it all down” if they are pestered too often. That’s why Human Resources departments limit how many surveys and emails they send out, and why doctors offer only a limited number of recommendations at each office visit. Patients given too many instructions tend to follow none of them.
Although highly individualized goals sound good for each person, they could make it harder to measure population health. For instance, if everyone has a different recommended colonoscopy interval, we will only know whether we are reaching our screening goals if researchers can link each person’s medical records to their AI-generated recommendations. That type of reporting will take time to establish.
Implications for employers:
Artificial intelligence has the potential to identify risks more accurately and to tailor recommendations to individuals.
Hopefully, the products that are developed will focus on prioritization, since delivering too many recommendations could lead to worse health even if each recommendation is evidence-based and highly personalized.
Any chatbot intervention should be fully studied before deployment, both to confirm that it does not hallucinate and to evaluate whether it inadvertently introduces or amplifies racial or other bias.
Thanks for reading. You can find previous posts in the Employer Coverage archive.
Please subscribe, “like” and suggest this newsletter to friends and colleagues. Thanks!
Illustration by DALL-E
Tomorrow: Thursday Shorts, including virtual mental health, GLP-1 effectiveness, bird flu, and pediatric mortality.