AI Chatbots Now Seek Your Medical Records: Risks and What You Need to Know


Tech companies are aggressively pushing to integrate artificial intelligence (AI) chatbots with your most private data: your health records. This trend, rapidly gaining momentum, raises serious questions about privacy, accuracy, and potential harm.

The Push for Data Access

For years, the tech industry has insisted that AI improves with more data. Now, companies like Microsoft, Amazon, OpenAI, and Anthropic are actively testing tools that allow users to upload their full medical histories, combined with wearable device data (Apple Watch, Fitbit, etc.). The goal? To provide AI-generated health overviews.

Microsoft’s Copilot is the latest example, enabling users to connect health records from multiple providers. Similar initiatives—Amazon’s Health AI, OpenAI’s ChatGPT Health, and Anthropic’s Claude for Healthcare—are already in testing. This represents a significant escalation in data collection, especially considering these same chatbots have been linked to negative psychological effects in some users.

The Upsides and the Serious Downsides

Some doctors see potential benefits, such as making health insights accessible to those who struggle with rising healthcare costs. However, experts warn that sharing sensitive health data with tech companies introduces major privacy risks. Beyond potential data breaches, these chatbots could exacerbate health anxiety or lead to unnecessary medical visits, mirroring issues seen with earlier self-diagnosis tools such as online symptom checkers.

How It Works: A Step-by-Step Breakdown

Microsoft’s Copilot lets users create a health profile by entering basic demographic information (age, sex) and then opting in to share records from their providers. The system also pulls data from connected devices, such as fitness trackers, and generates AI-driven summaries of the combined information. Other platforms are expected to follow a similar model.
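The flow described above (demographics, opted-in records, device data, then a summary) can be sketched as a simple data model. This is purely illustrative; none of these names reflect Copilot's actual API or internals:

```python
from dataclasses import dataclass, field

@dataclass
class HealthProfile:
    # Basic demographics the user enters up front
    age: int
    sex: str
    # Records the user has opted to share, keyed by provider name (hypothetical)
    shared_records: dict = field(default_factory=dict)
    # Numeric readings pulled from connected devices, e.g. a fitness tracker
    device_readings: list = field(default_factory=list)

def build_summary(profile: HealthProfile) -> str:
    """Assemble a plain-text overview from whatever the user chose to share."""
    parts = [f"Profile: age {profile.age}, sex {profile.sex}"]
    if profile.shared_records:
        parts.append(f"{len(profile.shared_records)} provider record(s) connected")
    if profile.device_readings:
        avg = sum(profile.device_readings) / len(profile.device_readings)
        parts.append(f"avg device reading: {avg:.0f}")
    return "; ".join(parts)

# A user shares one provider's records and some heart-rate samples
profile = HealthProfile(age=42, sex="F")
profile.shared_records["clinic_a"] = {"visits": 3}
profile.device_readings = [72, 68, 75]
print(build_summary(profile))
# → Profile: age 42, sex F; 1 provider record(s) connected; avg device reading: 72
```

Note that even in this toy version, everything hinges on what the user opts in to share; the privacy concerns in the rest of this article stem from where that shared data ends up once a real AI system, rather than a string formatter, consumes it.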

Why This Matters: The Bigger Picture

The move to integrate AI with medical data is not just about convenience. It’s part of a broader trend where tech companies seek deeper control over personal information. Given recent lawsuits against OpenAI and Microsoft for alleged copyright infringement (including news content used to train AI systems), the incentive to monetize user data is clear.

The long-term consequences of handing over medical records to for-profit tech companies are unknown, but the potential for misuse is high. Proceed with extreme caution.

This is a rapid development with significant implications for healthcare, privacy, and the relationship between individuals and technology.