OpenAI announced ChatGPT Health on Wednesday, a dedicated space within the platform for users to discuss health topics with ChatGPT. The company says more than 230 million people ask ChatGPT about health and wellness each week.
The ChatGPT Health feature separates health-related conversations from standard interactions. This siloing ensures that details from health discussions do not influence or appear in everyday chats with the AI; OpenAI designed the separation to keep the two contexts distinct across user sessions.
When users raise health topics outside the designated Health section, ChatGPT prompts them to move the conversation there. This mechanism steers discussions to the appropriate area and preserves the isolation of sensitive health data from general use.
Inside the Health section, ChatGPT draws on information from a user’s prior standard interactions. For instance, if a user previously mentioned being a runner while seeking a marathon training plan, the AI recalls this detail when addressing fitness goals in Health. Such cross-referencing personalizes responses without merging conversation histories fully.
ChatGPT Health supports integration with personal data and medical records from third-party wellness applications. Compatible services include Apple Health, Function, and MyFitnessPal. Users can link these sources to provide the AI with relevant health metrics, enhancing the tool’s utility for wellness tracking.
OpenAI commits to excluding Health section conversations from model training processes. This policy safeguards user privacy by preventing health data from contributing to improvements in the underlying AI systems.
Fidji Simo, CEO of Applications at OpenAI, detailed in a blog post how ChatGPT Health addresses specific healthcare challenges. These include high costs, access barriers, overbooked doctors, and discontinuities in patient care. Simo positions the tool as a targeted response to these systemic issues.
OpenAI emphasizes limitations of large language models like ChatGPT. These systems generate responses by predicting the most probable output based on patterns, without verifying factual accuracy. They lack an inherent understanding of truth and remain susceptible to hallucinations, where incorrect information is confidently presented.
The company’s terms of service state explicitly that ChatGPT is “not intended for use in the diagnosis or treatment of any health condition.” This disclaimer underscores the experimental nature of the feature.
The ChatGPT Health rollout begins in the coming weeks, with access extended progressively to eligible users.