

epocrates

Survey: clinicians are using AI but do not trust it

April 19, 2026


The AI adoption curve in medicine has bent sharply upward in recent years. What began as tightly controlled, institutional uses of machine learning, largely in imaging, has been succeeded by frontline clinicians experimenting with generative AI to manage the daily grind of practice.

But even as AI use has spread beyond those tightly controlled settings, clinician trust has not kept pace. Roughly 7 in 10 clinicians now use AI at work, according to a recent epocrates survey of 519 clinicians, yet trust remains elusive.

Trust issues

Even as clinicians use AI in practice, most do not trust its outputs. Among clinicians surveyed, 82% cited lack of trust as a problem; among those actively using AI, that number rises to 89%.

Accuracy, liability, and lack of transparency were common concerns across respondents. Some worry that the lack of regulatory certainty and clear governance could leave them accountable for errors not fully in their control.

Clinicians who have adopted AI homed in more narrowly on issues arising from their own usage, such as hallucinations, inconsistent performance, and documentation errors. These users also wanted more reassurance around liability, source transparency, and clear boundaries around when AI should or should not be used.

“Those who used the early models, perhaps the ones that were commercially available to non-physicians, and started questioning them regarding medical diagnosis. . . They experienced a lot of hallucinations, confabulations, maybe fake links, or misrepresented medical summaries. They really got a bad first impression, and that first impression is very difficult to overcome,” summed up Sam Ashoo, MD, FACEP, a clinical associate professor of medicine at Florida State University, in a recent webinar hosted by epocrates.

That early friction appears to linger. First impressions of unreliable outputs are proving difficult to overcome, even as tools continue to improve.

Still, clinicians are finding ways to make AI useful in practice. Unlike earlier machine learning tools, which were narrow, validated, and institutionally governed, generative AI is open-ended and less predictable.

In response, clinicians are developing their own informal frameworks for deciding when outputs are trustworthy enough to use. With those guardrails in place, 52% of clinicians using AI report that it improves their ability to support patients.

Charting the way

Where AI is gaining traction is not surprising. Its most common uses map directly to areas of greatest friction in clinical work, particularly documentation and administrative burden. Clinicians, like other knowledge workers, are turning to AI to reduce time spent on routine tasks that compete with patient care.

Documentation and charting are by far the most commonly cited uses of AI. AI transcription has long been considered reliable but fallible, and everyday use of ambient scribes or AI-assisted notes introduces new challenges, from background noise and interruptions to contextual errors, that still require careful review.

“The most common, and probably most visible, use of AI in healthcare has been the ambient AI scribe,” notes Mindy Lee, a pediatric endocrinologist and physician scientist at Stanford University. She later added, “I review everything that’s generated by the ambient scribe.” This type of routine review is not without precedent in the clinician’s workflow; Dr. Lee likens it to what is required when signing off on documentation generated by fellows or residents.

In this sense, AI is not removing oversight from the workflow. It is redistributing it. Clinicians are still reviewing, validating, and taking responsibility for what is documented, even as AI changes how that work is produced.

Among AI users, 57% report using it for documentation and charting, while 54% use it for research or literature review.

Too big to verify

The expansion of AI into literature synthesis introduces a different kind of challenge. Large language models have made it easier to summarize vast bodies of medical information, but that very scale makes verification more difficult. The problem is no longer access to information, but confidence in its accuracy and traceability.

Narrower queries, on a specific guideline or a particular study or set of studies, can be checked fairly easily, but summaries of wide swaths of evidence from unspecified sources are much harder to substantiate. That gap can be manageable when the subject is highly familiar, but it becomes far more difficult when clinicians are exploring less familiar areas.

“I worked with a colleague who we used to joke was sometimes wrong, but never in doubt and that’s really how I describe AI,” said Dr. Ashoo. “You are never going to get out of it a measure of uncertainty. It is never going to tell you, ‘I’m about 50% certain this is the right answer.’”

AI outputs project confidence regardless of underlying uncertainty, shifting the cognitive burden to clinicians to determine not only whether the information is correct, but whether it can be verified.

“If you’re not taking the time to validate that information, to look at those citations, to see how it came up with that summary, you’re just taking it at face value and you’re risking your own license,” he added.

Thought partner

Clinical decision support is emerging as a more cautious area of AI use, with 44% of clinicians applying it this way. Here, AI is not being used to make decisions but to support thinking, whether by checking guideline alignment, brainstorming differential diagnoses, or reinforcing an existing line of reasoning as clinicians work through complex diagnostic and treatment scenarios.

Clinicians are using AI for clinical decision support in narrow, highly supervised ways, primarily as a thinking aid. It can serve as a structured second opinion, help to recall evidence and surface possibilities, or confirm that nothing obvious has been overlooked.

At the same time, clinicians are explicit about the limitations of AI‑based decision support. They emphasize that AI outputs must be reviewed, verified against primary sources, and interpreted within the full clinical context because accuracy, transparency, and accountability remain unresolved concerns.

In many ways, clinicians increasingly describe AI as functioning like an intern. It can draft, suggest, and help organize information, but it requires oversight. The attending clinician remains responsible for interpretation, final decisions, and patient outcomes.

“AI is not licensed to practice medicine. . . It’s the human whose name goes at the bottom of the note,” summed up Dr. Ashoo.

Building trust

The overwhelming concern for clinicians is accuracy: three out of four respondents flagged it as a persistent issue. As AI shifts from answering narrow questions to synthesizing broader bodies of evidence, clinicians report that verifiability becomes the limiting factor.

Other common concerns include liability and regulatory uncertainty, reported by 45% of respondents, as well as a lack of transparency.

“If AI use results in patient harm, who is responsible and who is at fault?” queried one respondent, with another saying, “Everything still falls on the clinician, but the AI has no accountability.”

There is currently no comprehensive regulatory framework governing the clinical use of generative AI. Oversight is fragmented across existing structures, including medical device regulation, HIPAA, malpractice law, and professional accountability, while clinicians remain fully responsible for decisions made with AI support.

Others felt adrift amidst limited guidance at the federal, state, professional, or institutional level. “AI use is outpacing education, guidance, and regulatory standards,” summed up one clinician. Said another, “Clear guidelines and regulations would increase my confidence.”

Privacy concerns add another layer of complexity. Some clinicians note that patient information entered into AI tools may not be protected without enterprise-level safeguards, raising questions about HIPAA compliance and data security.

Clinicians are looking to the developers of AI tools, and to the bodies that regulate them, for additional support here. This includes human-in-the-loop AI systems that are routinely monitored for quality assurance, with clinicians remaining accountable for final decisions.

Trust hinges not just on what AI produces but also on how it produces it. Systems that offer transparent sourcing, acknowledge uncertainty, and align with existing clinical accountability structures are viewed as more trustworthy than black-box outputs. Clinicians may accept some loss of direct verifiability, but only if transparency and safeguards increase alongside it.

Even with these challenges, the perceived value of AI remains clear for many clinicians. It offers a way to reclaim time and attention for patient care. Said one respondent, “Now I can treat the patient, and not a computer screen.”
