The Impact of AI on Patient Trust in Healthcare
- Dr. Mike
- Oct 22
Updated: Nov 8
Understanding AI's Role in Medicine
Everywhere we look, someone is suggesting that artificial intelligence (AI) is revolutionizing their industry. Medicine is no exception. There are claims that AI can improve medical office operations, make physicians more efficient at documentation, and even assist in diagnosing and managing medical conditions. However, a recent interaction with a patient reminded me that not everyone views AI as beneficial. Some patients may feel that their doctor has no knowledge beyond what they could find through their own AI searches at home.
This led me to pose the following questions to Perplexity.ai, ChatGPT.com, OpenEvidence.com, and CoPilot.com:
Are there data to indicate that patients distrust or lose confidence in their doctors if they are aware of the doctor using an AI tool? Do their opinions change if they know the AI tool has specifically been trained on medical literature?
Patient Perceptions of AI in Healthcare
Common to all four AI tools' responses was the finding that patients lose confidence in their doctors when they know AI is being used. This held true regardless of whether the tools were used for office practices (e.g., administration or documentation), diagnosis, or treatment.

Perplexity.ai indicated that doctors scored lower on patient-perceived competence, trustworthiness, and empathy when patients knew AI was in use. No distinction was made between general AI tools and those trained on medical literature. Concerns included loss of privacy, physician distraction from focusing on the computer, and decreased empathy.
OpenEvidence.com agreed that there is a loss of confidence, particularly if the AI tool is used in diagnosis or treatment planning. As a counterpoint to Perplexity, OpenEvidence suggested that when patients are aware the AI tool is trained on medical literature, they tend to be more supportive of its use. They emphasized framing the AI tool and the physician as collaborators, making clear that the physician provides oversight rather than blindly following the AI output. OpenEvidence suggested that the use of AI should be transparent, explainable, grounded in medical literature, and subject to direct physician scrutiny.
ChatGPT.com discussed data from surveys and experiments indicating that patients lose trust in physicians when they know the physician uses AI, including reduced confidence and a sense of diminished empathy. Interestingly, ChatGPT noted that the loss of faith in the doctor scaled with the severity of the condition being treated. In a large survey, patients expressed concerns about how their information would be used, particularly regarding privacy.
CoPilot.com similarly indicated that trust improves when the use of AI is transparent and well explained. They also emphasized the importance of training on medical literature.
The Future of AI in Healthcare
All told, patients today may lose trust and confidence in doctors who use AI. However, the sources disagree on whether telling patients that the AI is trained on medical literature mitigates that loss. Perhaps patients' concerns stem from the fact that widespread use of generative AI is relatively new, with ChatGPT debuting in November 2022. In 3-5 years, AI may be so widely used that patients expect their physicians to employ it to improve clinic efficiency and stay current with the latest guidelines and medical research.
Communicating AI's Role to Patients
If a physician plans to discuss their use of AI with patients, it is crucial to clarify how the AI is used. It is essential to emphasize that the tools are used collaboratively, with an experienced doctor reviewing the information the AI provides. Physicians should avoid simply regurgitating what the AI tool says.
To ensure clear communication with patients, physicians should use straightforward language to explain AI's role, the physician's oversight, and the rationale for using AI. It is also important to address data privacy and the regulatory requirements that govern AI in clinical settings. For example, California passed a law, effective in 2025, requiring AI-generated content to include a disclaimer stating it was generated by AI. Consider including a tagline similar to “This document was generated by an AI tool [name] and reviewed by Dr. [name].”
Conclusion
The integration of AI in healthcare presents both opportunities and challenges. While AI has the potential to enhance efficiency and improve patient outcomes, it is vital to address patients' concerns about trust and transparency. By fostering open communication and ensuring that patients understand the collaborative nature of AI in their care, we can work toward a future where AI is seen as a valuable tool rather than a source of distrust.
As we navigate this evolving landscape, it is essential to prioritize patient comfort and confidence in their healthcare providers. By doing so, we can help ensure that the benefits of AI are fully realized while maintaining the trust that is fundamental to the doctor-patient relationship.
