
Patients and AI in Healthcare: 2025 Trust Survey

Balancing Promise with Caution

Artificial intelligence is quickly moving into healthcare. Scheduling systems, digital assistants, and diagnostic tools are being piloted across the country. Health systems and technology companies often highlight efficiency and innovation. But one important voice has been missing from the conversation: the patient.

To better understand how patients feel, KLAS Research partnered with Luma Health to survey more than 1,000 U.S. adults in July 2025. The findings highlight both the opportunities and the barriers to trust. Patients see real value in AI, but they also want clear boundaries, strong oversight, and reassurance that their care will remain human-centered.

Key findings

  • Efficiency in operations is welcome. Patients are most comfortable with AI in scheduling, check-in, and billing.
  • Caution in clinical care. Trust drops when AI is used for diagnosis or treatment.
  • Oversight is critical. Most patients want AI to be supervised by a human at all times.
  • Privacy and accuracy matter. Data security and error risks top the list of concerns.
  • AI won’t drive provider choice. Trust in doctors remains more important than use of new technology.

Where do patients welcome AI most?

The survey shows patients are most comfortable when AI improves speed, access, and affordability. They see clear value in administrative tasks such as scheduling appointments, managing check-ins, or automating billing. These functions feel helpful and low risk, especially when they reduce wait times or free up staff to spend more time with patients.

One respondent summed it up simply:

“I like the idea of AI making things faster, like appointments or billing. Just don’t let it make medical decisions without a real doctor involved.”

This comfort level drops sharply when AI moves closer to clinical care. Tools that diagnose conditions, recommend treatments, or influence medical decisions generate far more hesitation. Patients draw a line between efficiency and judgment.

Why do patients value AI in healthcare operations?

The survey makes it clear that patients see the strongest role for AI in administrative and operational tasks. These are the moments that shape their daily experience with the health system, from finding an appointment to checking in at the front desk.

Patients said they value AI when it:

  • Reduces delays by making scheduling and billing faster
  • Simplifies communication through reminders and digital check-in
  • Cuts down errors in paperwork and transcription
  • Frees up staff to spend more time on patient interaction

One respondent captured this balance clearly:

“AI might be helpful, but only if someone is watching it carefully. Mistakes in healthcare are too serious.”

In other words, AI is welcomed when it clears the way for smoother access to care. Patients see less risk when AI automates background processes, and more benefit when those gains translate into shorter waits, fewer hassles, and lower costs.

For healthcare leaders, this feedback underscores the value of operations-first AI use cases: areas where technology can immediately improve the patient journey without crossing into sensitive clinical decisions.

What makes patients cautious about AI?

Trust is the deciding factor. Most respondents said AI should always or usually be supervised by a human, regardless of whether it is used in administrative workflows or clinical care.

Top concerns include:

  • Accuracy: risk of errors or “hallucinations” in results
  • Privacy: fears of data misuse or leaks
  • Accountability: uncertainty about who is responsible if AI makes a mistake

Older adults voiced the strongest skepticism. Many manage complex conditions and place high value on personal connection with providers. They worry AI could depersonalize care or reduce face-to-face time.

Should the government play a role in oversight?

Most patients said yes. The majority supported some form of government regulation of AI in healthcare, though the survey revealed a generational divide. Younger and middle-aged patients were more likely to endorse strong regulation. Older patients, despite being the most concerned about AI overall, were less likely to want government involvement.

This tension suggests health systems cannot rely on regulation alone. Building trust through clear communication, voluntary safeguards, and transparency may matter more than waiting for rules to be written.

Will AI influence how patients choose providers?

Not much. While most respondents believe AI could improve healthcare, they said it would not play a major role in deciding where to seek care.

“I’m not picking a doctor because they use AI. I just want someone I trust,” one patient explained.

Trust, transparency, and human connection remain the anchors of patient loyalty. Technology may help, but it does not replace the provider relationship at the heart of care.

What does responsible innovation look like?

The survey results show patients are open to AI, but only on their terms. They want innovation that makes healthcare faster, more affordable, and more efficient, but they are not ready to see AI take the lead in diagnosing or treating conditions.

To succeed, healthcare leaders should:

  • Pair AI with consistent human oversight
  • Protect patient privacy and data security
  • Communicate clearly and openly about how AI is used
  • Keep care human-centered, not technology-centered

AI is not just a technology shift. It is a trust shift. Patients are open to new tools, but they want assurance that their safety, privacy, and humanity will not be compromised.

As healthcare organizations adopt AI, success will depend not only on how advanced the technology becomes, but also on how well it aligns with patient expectations and values.

Reach out to the KLAS team or directly to Adam Cherrington to request access to the full report or to discuss the findings further.