OpenAI’s ChatGPT Health Goes Live: Medical Records Entrusted to AI

AI-powered health consultation is no longer a fantasy. OpenAI has opened a dedicated health section within the ChatGPT platform where users can analyze their medical records, laboratory results, and fitness data. The feature, called ChatGPT Health, is one of the company’s most ambitious moves into the digital health space.

230 Million People Ask Health Questions to AI Every Week

According to figures released by OpenAI, more than 230 million users worldwide put health-related questions to ChatGPT each week, making health consultation one of the platform’s most popular use cases. The new ChatGPT Health feature was designed precisely to address this demand.

The system allows users to connect health data from various sources: electronic health records through the American health data network b.well, exercise and sleep tracking via Apple Health, and nutrition data from wellness apps such as MyFitnessPal and Weight Watchers. Platforms such as AllTrails, Instacart, and Peloton are also among the supported sources.

Two Years of Development with Support from 260 Doctors

OpenAI emphasizes it didn’t rush the development of ChatGPT Health. Over two years, more than 260 physicians from 60 different countries across numerous specialties were consulted. These doctors evaluated over 600,000 model outputs across 30 different medical focus areas, providing feedback on how the system should respond.

The physicians’ contributions were particularly decisive in three critical areas: how urgently the system should recommend seeing a doctor, how to make medical information understandable without oversimplifying it, and how to ensure safety in critical situations. OpenAI developed a custom benchmarking system called HealthBench to measure performance and claims its latest language models produce better results than actual doctor responses.
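The article describes HealthBench as rubric-based: physicians define criteria a good answer should meet, and model responses are scored against them. As a purely illustrative sketch (not OpenAI's actual code), one can imagine each criterion carrying a point weight, with a response's score being the earned points over the total positive points; the `Criterion` class and example rubric below are hypothetical.

```python
# Illustrative sketch of rubric-based grading in the style described for
# HealthBench. Not OpenAI's implementation; names and weights are invented.
from dataclasses import dataclass

@dataclass
class Criterion:
    description: str   # what the grader checks for in the response
    points: int        # positive = desired behavior, negative = penalty

def score_response(met: dict[str, bool], rubric: list[Criterion]) -> float:
    """Score one response: earned points / total positive points, floored at 0.

    `met` maps criterion descriptions to whether the response satisfied them;
    in practice a physician or model grader would make that judgment.
    """
    earned = sum(c.points for c in rubric if met.get(c.description, False))
    total = sum(c.points for c in rubric if c.points > 0)
    return max(0.0, earned / total) if total else 0.0

rubric = [
    Criterion("Advises seeking urgent care for red-flag symptoms", 5),
    Criterion("Explains the condition in plain language", 3),
    Criterion("Uses unexplained medical jargon", -2),
]

score = score_response(
    {"Advises seeking urgent care for red-flag symptoms": True,
     "Explains the condition in plain language": True},
    rubric,
)
print(score)  # 1.0 — all 8 positive points earned, no penalties
```

The negative-weight criterion captures the safety emphasis the physicians flagged: a response can lose credit for unsafe or confusing behavior, not just gain it for good behavior.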

Extra Security Layers for Sensitive Data

Given the sensitivity of health data, OpenAI states it has implemented extra protection measures. ChatGPT Health operates as a completely separate section from the rest of the platform. Health-related conversations, connected apps, and files are stored separately from normal chats, and the system has its own isolated “memory.”

Data flow is also unidirectional: context from regular ChatGPT conversations can be transferred to the Health section, but health information never flows back to standard chats. The company also states that Health conversations aren’t used to train AI models. For health data integration in the US, a partnership was established with b.well, and all apps wishing to connect to Health must meet OpenAI’s privacy and security standards and pass additional security review.
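The one-way flow described above can be pictured as a simple export restriction on the Health section's isolated memory. The sketch below is purely conceptual (OpenAI has not published its implementation); the `ChatMemory` class and `transfer_context` guard are invented for illustration.

```python
# Conceptual sketch of the unidirectional data flow the article describes:
# general-chat context may be copied into the isolated Health memory, but
# nothing ever flows back. Not OpenAI's actual architecture.

class ChatMemory:
    def __init__(self, name: str):
        self.name = name
        self.entries: list[str] = []

    def remember(self, fact: str) -> None:
        self.entries.append(fact)

def transfer_context(source: ChatMemory, target: ChatMemory) -> None:
    """Copy context between sections, allowing only general -> health."""
    if source.name == "health":
        # Health data must never leak back into standard chats.
        raise PermissionError("health memory is export-restricted")
    target.entries.extend(source.entries)

general = ChatMemory("general")
health = ChatMemory("health")

general.remember("User mentioned training for a marathon")
transfer_context(general, health)      # allowed: general -> health
health.remember("Lab result: ferritin low")

try:
    transfer_context(health, general)  # blocked: health -> general
except PermissionError as err:
    print(err)  # health memory is export-restricted
```

The key property is asymmetry: the Health memory can accumulate context from both directions of use, while the general chat never observes anything written inside the Health section.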

Europe Still Waiting

ChatGPT Health is not yet available in the European Economic Area, Switzerland, or the United Kingdom. OpenAI plans to start with a small early-access group and expand access via web and iOS in the coming weeks. For now, only users on the ChatGPT Free, Go, Plus, and Pro plans outside Europe can use the system.

Some features are also subject to geographic restrictions: medical record integration is available only in the US, and Apple Health connections require iOS. Europe’s stricter data protection regulations are seen as the main reason for this delay. The European Union AI Act is also a significant factor. A consumer health chat tool may remain in the low-risk category if it helps understand documents, but could become high-risk if the EU classifies it as medical device software.

Information or Medical Advice? The Danger in the Fine Line

OpenAI’s emphasis that Health “is not meant to replace doctors, only for preparation and background information” largely reflects legal caution. When it comes to health, however, the line between information and advice is blurry. Given how persuasive ChatGPT can be, and its potential to mislead users, particularly on mental health issues, that blurriness could have dangerous consequences.

Yet we cannot ignore success stories. In a widely shared Reddit post, a user described how ChatGPT identified a diagnosis that numerous doctors had missed for over ten years. The AI spotted a genetic MTHFR mutation affecting seven to twelve percent of the population.

A study published in the International Journal of Medical Informatics in November 2025 revealed that ChatGPT-4o surpassed general practitioners in both accuracy and empathy when answering 200 common medical questions. Researchers emphasize that human oversight is essential for AI-generated responses to remain safe and reliable. New York University studies also show that patients often find chatbot responses more empathetic than messages from time-pressed doctors.

Call for Regulation from the World Health Organization

The World Health Organization is calling for clear guidelines on transparency, safety, and accountability as AI becomes part of medical care. Some researchers also warn that control over this health data should not rest solely with big tech companies.

The success or failure of ChatGPT Health will help determine how AI’s role in healthcare is shaped. The system could help prevent medical errors and empower patients, but the cost of misuse could be severe. The coming period will show how that balance is struck.
