Anthropic announced a suite of health-care and life-sciences features for its Claude AI platform on Sunday, allowing U.S. Pro and Max subscribers to share health records and fitness-app data, including from Apple’s iOS Health app, to personalize health conversations. This launch follows OpenAI’s ChatGPT Health introduction days earlier.
The new Claude features enable users to integrate personal information with medical records and insurance data. Claude acts as an orchestrator to simplify navigation through health systems. Eric Kauderer-Abrams, head of life sciences at Anthropic, described the challenges users face. “When navigating through health systems and health situations, you often have this feeling that you’re sort of alone and that you’re tying together all this data from all these sources, stuff about your health and your medical records, and you’re on the phone all the time,” he told NBC News. He expressed excitement about Claude handling these tasks.
With Claude for Healthcare, users connect disparate data sources into a unified view. This setup processes health records alongside fitness data from apps. Personalization occurs through secure sharing mechanisms designed for health-related queries. Availability targets Pro and Max plan subscribers in the United States, providing immediate access without a waitlist.
OpenAI’s ChatGPT Health, unveiled last week, requires users to join a waitlist. OpenAI noted that hundreds of millions of people ask wellness- or health-related questions on ChatGPT every week. The company specified that ChatGPT Health serves to help users “navigate everyday questions and understand patterns over time — not just moments of illness.” OpenAI emphasized that the tool is “not intended for diagnosis or treatment.” Both platforms incorporate data from health records and fitness apps, such as Apple’s iOS Health app, for tailored interactions.
AI systems like Claude and ChatGPT assist in interpreting complex medical reports. They enable users to review doctors’ decisions. For the billions of people worldwide who lack access to essential medical care, these tools summarize and synthesize otherwise inaccessible medical information. Anthropic positions its expansions in a field viewed as both an opportunity and a sensitive area for generative AI.
Anthropic prioritizes privacy in its offerings. Health data shared with Claude is excluded from the model’s memory and is not used to train future systems. Users maintain control through options to disconnect or edit permissions at any time, as stated in Anthropic’s blog post accompanying the launch.
Beyond consumer tools, Anthropic introduced features for health-care providers. The platform now supports a HIPAA-ready infrastructure. HIPAA refers to the federal law governing medical privacy. Connections extend to federal health-care coverage databases and the official registry of medical providers. These integrations reduce workloads for physicians and health providers by linking to essential services.
Specific automation targets time-consuming processes. Prior authorization requests for specialist care become streamlined, and insurance appeals receive support through matching clinical guidelines directly to patient records. Expanded Claude for Life Sciences offerings focus on improving scientific discovery alongside these provider tools.
Commure, a company developing AI solutions for medical documentation, anticipates benefits. Dhruv Parthasarathy, chief technology officer at Commure, stated that Claude’s features assist in “saving clinicians millions of hours annually and returning their focus to patient care.” This automation addresses administrative burdens in health-care settings.
The announcements arrive against a backdrop of increased scrutiny of AI chatbots in medical and mental-health advice. On Thursday, Character.AI and Google settled a lawsuit alleging that their AI tools contributed to worsening mental health among teenagers who died by suicide. Leading AI companies, including Anthropic and OpenAI, warn that their systems can err and must not replace professional judgment.
Anthropic enforces restrictions via its acceptable-use policy. For uses involving “healthcare decisions, medical diagnosis, patient care, therapy, mental health, or other medical guidance,” the policy mandates that “a qualified professional … must review the content or decision prior to dissemination or finalization.” This requirement ensures human oversight in critical applications.
Kauderer-Abrams highlighted efficiency gains. “These tools are incredibly powerful, and for many people, they can save you 90% of the time that you spend on something,” he said. He qualified that this applies to routine tasks; in critical cases where details matter, verification remains essential. Anthropic does not claim to remove human involvement entirely. Instead, Claude is meant to amplify the capabilities of human experts.
Anthropic ranks among the world’s largest AI companies, with a reported valuation of $350 billion. The life-sciences division, led by Kauderer-Abrams, drives these health-care integrations. Sunday’s features build on Claude’s existing platform to address real-world health navigation challenges through data orchestration and provider support.