Users must exercise caution when sharing ChatGPT Health data, UAE cybersecurity experts warn

What happens to your health data when you share it with AI? UAE cybersecurity experts warn of data breaches, misuse, and long-term risks

Sharon Benjamin
ChatGPT Health. Image: Shutterstock

Article summary


Cybersecurity experts warn that health data shared with AI systems like ChatGPT Health may never truly disappear, potentially impacting insurance, employment, and privacy long after a chat ends. They stress the need for clear privacy controls, user consent for data storage and use, and complete deletion of conversations to build trust in AI health tools.

Key points

  • Health data shared with AI may never truly disappear, impacting future insurance and employment.
  • Clear privacy controls are vital; users must understand data use and have control over storage.
  • Complete deletion of health chats from all systems is essential for user trust and security.

Once health information is shared with an AI system, it may never fully disappear, cybersecurity specialists told Lana, warning that data linked to medical questions can affect insurance, employment, and privacy long after a chat ends.

The warnings come as ChatGPT Health expands into health-related queries and integrations, raising questions about how much information users are asked to share, how long it is stored, and whether deletion removes all traces of the data.

According to Haider Pasha, CSO EMEA at Palo Alto Networks, users should not begin using ChatGPT Health without understanding exactly what information is being requested and why.

Clear privacy controls essential for safe use of AI health tools, say UAE experts

“Privacy settings should be easy to understand, with health chats turned off from memory by default. Users should be able to choose whether their data is saved, kept only for that session, or not stored at all. As a cybersecurity organisation, we always believe it’s easier when platforms give users simple and upfront choices to reduce risk and build trust. Users should also be able to change their settings or stop sharing at any time,” he said.
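In practice, those choices amount to defaults that favour the user. The sketch below illustrates one way such settings could be modelled; the names, fields, and defaults are hypothetical assumptions for illustration and do not describe OpenAI's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum


class Retention(Enum):
    """How long a health conversation may be kept (illustrative options)."""
    NOT_STORED = "not_stored"      # processed only for the reply, never written
    SESSION_ONLY = "session_only"  # discarded when the session ends
    SAVED = "saved"                # persisted until the user deletes it


@dataclass
class HealthPrivacySettings:
    """Hypothetical per-user privacy settings for health chats.

    Defaults follow the experts' recommendation: memory off and nothing
    stored unless the user explicitly opts in.
    """
    memory_enabled: bool = False        # health chats excluded from memory by default
    retention: Retention = Retention.NOT_STORED
    allow_training_use: bool = False    # no use for model training without consent

    def update(self, **changes) -> "HealthPrivacySettings":
        """Users should be able to change settings or stop sharing at any time."""
        return HealthPrivacySettings(**{**self.__dict__, **changes})


# Example: a user opts in to session-only storage for one conversation.
settings = HealthPrivacySettings()
session_settings = settings.update(retention=Retention.SESSION_ONLY)
```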

Echoing the sentiment, Morey Haber, Chief Security Advisor at BeyondTrust, added that health data differs from other personal information because it remains linked to individuals over time.


“Health data is not just personal. It is permanently associated with you,” Haber said.

He warned that once disclosed, health information can affect insurance claims or reveal personal issues that could lead to employment or legal consequences.

“If ChatGPT Health (or equivalent AI engines) are to earn user trust, privacy controls must be explicit, enforceable, and owned by the user for alterations, deletion, and future AI training,” Haber said.

One of the most sensitive areas of AI health use is the ability to connect medical records or wellness applications.

Medical records can increase long-term data exposure

Connecting medical records with wellness applications is among the most sensitive uses of AI in healthcare. Image: Shutterstock

According to cybersecurity experts, this increases the amount of data accessed and extends the period during which it may be exposed.


Pasha said users must be informed exactly what data will be accessed before any connection is made and whether that access is limited or ongoing.

“Before linking medical records or wellness apps, users should know exactly what data will be accessed and how it will be used, including whether the connection is one-time or ongoing,” he said.

He said this data may include personal details such as date of birth and contact information, which could increase cyber risk if exposed.

Pasha added that users should understand where their data is stored and whether third-party applications meet basic security standards.

“Clear explanations will usually help users avoid sharing more than they intend and reduce the risk of unexpected privacy or security issues later,” he said.

In addition, Haber said anonymised health data can still be monetised. “Providing a free service is truly never free and the data you enter has value to the provider,” he said.


“For example, if a large portion of the queries are for headaches, those statistics can be sold to help market pain killers in a geolocation or to persuade pharmaceutical companies to develop new medications,” he said.

He said users who want to avoid inclusion in such datasets must be able to limit data to a single session, block storage entirely, and define categories of health topics that are never retained.

How can users limit data collection to reduce privacy risks?

From a cybersecurity perspective, Pasha said AI health systems should collect only the information needed to answer a specific question and avoid saving extra details by default.

“ChatGPT Health should only collect the information needed to answer a specific question and avoid saving extra details by default,” he said, adding that storing less data reduces the damage if a system is attacked.

“Processing health information in real time and deleting it soon after lowers exposure to breaches, misuse, or unauthorised access,” he said.


Clear limits on what data is stored and how long it is kept reduce the chance of errors or unintended access, Pasha added.

In addition, certain categories of health information should not be processed unless users give explicit consent, according to Haber.

“Mental health, reproductive health, prescriptions, genetic data, and chronic conditions, in my opinion, should always be excluded from additional processing unless the user consents to storage and future processing,” he said.

“This implies that AI Health engines should only process the information required to answer the user’s question and nothing more. No passive collection, no background enrichment, and no inferred health profiling that can be monetised or stored for future training. Health conversations should remain purpose-bound and session-scoped unless the user requests continuity,” Haber added.
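As an illustration of what purpose-bound, session-scoped processing could mean in code (and not a description of any vendor's system), the sketch below keeps a conversation only for the duration of a single session unless the user explicitly asks for continuity.

```python
import contextlib


@contextlib.contextmanager
def health_session(keep_history: bool = False):
    """Illustrative session-scoped store: messages live only inside the 'with' block.

    keep_history=False mirrors the purpose-bound, session-scoped default;
    True represents an explicit user request for continuity.
    """
    messages: list[str] = []
    try:
        yield messages
    finally:
        if not keep_history:
            messages.clear()  # nothing retained, enriched, or profiled afterwards


# Example: answer a single question, then discard the conversation.
with health_session() as chat:
    chat.append("What could be causing my recurring headaches?")
    # ... the model answers from this message alone ...
```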

Complete deletion should be a priority

Deletion is one of the most critical issues for trust in AI health tools. Pasha said deleting a ChatGPT Health chat must result in complete removal from all systems.


“When a user deletes a ChatGPT Health chat, it should be completely removed from all systems,” he said. “This includes the conversation, any stored copies, metadata, or supporting data created during the session.”

He warned that partial deletion leaves behind information that could still be accessed or exposed in a breach.

“From a cybersecurity perspective, partial deletion leaves behind data that could be exposed in a breach or accessed without authorisation,” he said.

According to Pasha, secure, end-to-end deletion reduces risk and limits the impact of cyberattacks.

Health data, according to Haber, should also have a short default retention window.

“Thirty days or less is reasonable and users should also be able to enable automatic deletion by category, timeframe, or session,” he said, adding that end users of third-party application integrations should have the authority to choose what data flows in, what flows out, and how long access persists.
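A minimal sketch of how such a retention rule could be expressed is shown below; the record structure, field names, and never-retain categories are illustrative assumptions rather than a real schema.

```python
from datetime import datetime, timedelta, timezone

DEFAULT_RETENTION = timedelta(days=30)   # "thirty days or less is reasonable"
NEVER_RETAIN = {"mental_health", "reproductive_health", "genetic_data"}  # user-chosen categories


def prune(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Drop records past the retention window or in never-retain categories.

    Each record is assumed to carry a 'category' label and a 'created_at'
    timestamp; both are hypothetical fields used only for this example.
    """
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if r["category"] not in NEVER_RETAIN
        and now - r["created_at"] <= DEFAULT_RETENTION
    ]
```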


Haber further argued that no permanent integrations or silent permissions should exist without explicit user consent.

Without clear authorisation, individuals would not want solicitations from a doctor or pharmaceutical company based on personal health information they may have shared with an AI system, he warned.

Shared devices, accounts increase risk of unauthorised access

Stronger privacy controls – such as chat locks, automatic logouts, and activity alerts – help protect sensitive health conversations from unauthorised access and cyber threats. Image: Shutterstock

Security risks increase when other people may have access to a device or account used for health conversations.

Pasha said users should be able to lock health chats using a PIN, password, or biometric controls, and inactive sessions should log out automatically.

“Alerts for unusual activity can help detect potential breaches early. These measures limit the cyber risk if someone gains access to a device or account. By adding multiple layers of protection, private health information stays secure and reduces the chance of unauthorised access or data theft,” he said.
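Purely as an illustration of the automatic-logout idea, a session could record its last activity and refuse further use after a fixed idle period; the timeout value below is an arbitrary assumption, not a figure recommended by either expert.

```python
from datetime import datetime, timedelta, timezone

IDLE_TIMEOUT = timedelta(minutes=10)  # arbitrary illustrative value


class ChatSession:
    """Toy session object that expires after a period of inactivity."""

    def __init__(self) -> None:
        self.last_activity = datetime.now(timezone.utc)

    def touch(self) -> None:
        """Call on every user interaction to keep the session alive."""
        self.last_activity = datetime.now(timezone.utc)

    def is_expired(self) -> bool:
        """True once the idle timeout has passed; the app should then log out."""
        return datetime.now(timezone.utc) - self.last_activity > IDLE_TIMEOUT
```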


However, even if health conversations are not used to train AI models, Pasha said data may still be used for other purposes, including security.

“The data from ChatGPT Health chats can be used to help keep the system safe from attacks, detect unusual activity, or prevent unauthorised access. When it comes to cybersecurity, it’s important that these uses are limited and monitored so sensitive health information isn’t exposed,” he said.

Transparency is ‘fundamental’ to building AI trust

Pasha also said tools that allow users to see what happens to their data are necessary to build trust.

“Simple tools like a privacy summary, activity log, or a clear settings page can help. Users should be able to check what health data was saved, what wasn’t, and when data will be deleted,” he explained.
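The kind of record such an activity log or privacy summary might surface can be sketched as follows; every field name and value below is hypothetical.

```python
# Illustrative only: an entry a privacy summary could show so users can see
# what was kept, what was not, and when anything stored will be deleted.
activity_log_entry = {
    "conversation_id": "example-123",       # hypothetical identifier
    "topic_category": "general_wellness",
    "stored": False,                        # nothing saved beyond this session
    "used_for_training": False,
    "scheduled_deletion": "end of session",
}
```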

BeyondTrust’s Haber also emphasised that transparency is fundamental to building trust, noting that trust in one’s physical doctor is essential to personal health outcomes.


“These characteristics need to be applied to AI as best as possible. In healthcare, trust is not optional, it is the foundation. All AI Health engines must treat privacy as a security control, not a feature, and the end user should be in complete control,” he concluded.