
In short
- OpenAI says ChatGPT Health will be rolled out to select users starting this week, with broader access planned in the coming weeks.
- The feature stores health conversations separately from other chats and does not use them to train OpenAI’s models.
- Privacy advocates warn that health data shared with AI tools often falls outside U.S. medical privacy laws.
On Wednesday, OpenAI announced a new feature in ChatGPT that will allow users to link medical records and wellness data, raising concerns among some experts and advocacy groups about the use of personal data.
The San Francisco-based AI giant said the tool, called ChatGPT Health and developed in collaboration with doctors, is designed to support care rather than diagnose or treat ailments. The company is positioning it as a way to help users better understand their health.
For many users, ChatGPT has already become a go-to resource for questions about medical care and mental health.
OpenAI told Decrypt that ChatGPT Health only shares general, “factual health information” and does not provide “personalized or unsafe medical advice.”
For higher-risk questions, it will provide high-level information, flag potential risks and encourage people to talk to a pharmacist or healthcare provider who knows their specific situation.
The move comes shortly after the company reported in October that more than 1 million users discuss suicide with the chatbot every week. That amounted to approximately 0.15% of all ChatGPT users at the time.
While these numbers represent a relatively small share of the total user base, experts say serious security and data privacy concerns still need to be addressed.
“Even when companies claim to have privacy safeguards, consumers often lack meaningful consent, transparency, or control over how their data is used, retained, or reused,” JB Branch, Big Tech accountability advocate at Public Citizen, told Decrypt. “Health data is extremely sensitive, and without clear legal boundaries and enforceable oversight, self-policed safeguards are simply not enough to protect people from misuse, re-identification or downstream harm.”
OpenAI said in its statement that health data in ChatGPT Health is encrypted by default, stored separately from other chats and not used to train the base models.
According to Andrew Crawford, senior policy advisor at the Center for Democracy and Technology, many users wrongly assume that health data is protected because of its sensitivity, when in fact protection depends on who holds it.
“If your health information is in the hands of your doctor or insurance company, HIPAA privacy rules apply,” Crawford told Decrypt. “The same does not apply to entities not covered by HIPAA, such as developers of health apps, wearable health trackers or AI companies.”
Crawford said the launch of ChatGPT Health also underscores how the burden of responsibility falls on consumers in the absence of a comprehensive federal privacy law governing health data held by tech companies.
“It is unfortunate that our current federal laws and regulations place that burden on individual consumers to analyze whether they are comfortable with the way the technology they use every day processes and shares their data,” he said.
OpenAI said ChatGPT Health will be rolled out to a small group of users first.
The waitlist is open to ChatGPT users outside the European Union and the UK, with wider access planned in the coming weeks on web and iOS. OpenAI’s announcement made no mention of Android devices.