I know some worry that autonomous robots will destroy the world, and many of us remember that HAL stopped obeying human orders in the classic movie “2001: A Space Odyssey.” Risks for employers in the health space are real, although less existential.
Risk 1: AI sometimes hallucinates. AI has been shown to fabricate references and even report events that never occurred.
Risk 2: AI incorporates existing bias. AI is trained on data from the “real world,” where there are terrible disparities. Researchers at Duke used machine learning to design an algorithm to identify young children with fevers who needed intensive investigation because they were highly likely to have life-threatening infections. Just before deployment, they realized that their algorithm was failing to flag some of the high-risk Hispanic children, likely mirroring the performance of their clinicians. They fixed the algorithm before deploying it.
Risk 3: Use of AI engines could compromise personal privacy. Data put into AI is used to train it to improve future output. This includes personally identifiable data which can leak.
Risk 4: AI could infringe on intellectual property. Writers and artists whose works have been ingested by AI have often not been compensated.
Risk 5: AI could lead to accidental disclosure of proprietary information. Samsung engineers used AI to check source code for its semiconductors and were dismayed when this data leaked to competitors.
Risk 6: AI uses a LOT of energy. Data centers already represent about 3% of US energy consumption, and generative AI could dramatically increase energy use.
Risk 7: Capital requirements of AI could limit competition. The major technology companies are already market leaders in AI, and high up-front costs could keep new entrants out. This would likely lead to high prices for AI computing services.
Risk 8: AI makes it difficult to distinguish what is true. As AI generates more content (articles, images, and more), it will become increasingly difficult to tell what is factual and what has been generated by AI. This may erode trust in institutions and push companies to certify the veracity of their output.
Implications for employers
- Employers can mitigate these risks, although most cannot be eliminated entirely.
- AI generally needs human oversight, so that a patently false answer or wrong approach is detected before it causes damage.
- Companies can evaluate the processes and databases used to train AI to decrease the likelihood that output will perpetuate or exacerbate racial, gender, or other biases.
- Policies should prevent transmission of personally identifiable data to AI models, and should hold AI companies responsible for respecting intellectual property.
- Companies should take care not to put their proprietary data into a public AI database. Increasingly, companies will have their own private AI installations where they can use an AI engine without allowing their data to leave their servers.
- Individual companies can seek to reduce their energy footprint elsewhere and use renewable energy to the extent possible to avoid adverse climate impact from use of AI.
- Companies can evaluate whether their “cyber” and other insurance is adequate for potential AI liability.
Tomorrow: Bots are available (and used) outside of business hours
Thanks for reading. You can find previous posts in the Employer Coverage archive.
Please “like” and suggest this newsletter to friends and colleagues. Thanks!
Illustration by Dall-E. Prompt: “Large caution sign on a road”