Effect of Perception-based Interventions on Public Acceptance of Using Large Language Models in Medicine

NCT ID: NCT07304908

Last Updated: 2025-12-26

Study Results

Results pending

The study team has not published outcome measurements, participant flow, or safety data for this trial yet. Check back later for updates.

Basic Information

Get a concise snapshot of the trial, including recruitment status, study phase, enrollment targets, and key timeline milestones.

Recruitment Status

ACTIVE_NOT_RECRUITING

Clinical Phase

NA

Total Enrollment

3000 participants

Study Classification

INTERVENTIONAL

Study Start Date

2025-11-25

Study Completion Date

2026-12-31

Brief Summary

Review the sponsor-provided synopsis that highlights what the study is about and why it is being conducted.

Large language models (LLMs) show promise in medicine, but concerns about their accuracy, coherence, transparency, and ethics remain. Public perceptions of using LLMs in medicine, and whether those perceptions shape the acceptability of health care applications of LLMs, are not yet well understood. This study aims to investigate public perceptions of using LLMs in medicine and whether perception-based interventions affect the acceptability of health care applications of LLMs.

Detailed Description

Dive into the extended narrative that explains the scientific background, objectives, and procedures in greater depth.

Owing to rapid advances in artificial intelligence, large language models (LLMs) are increasingly being used in a variety of clinical settings, such as triage, disease diagnosis, treatment planning, and self-monitoring. Despite their potential, the use of LLMs in health care remains limited by concerns about accuracy, coherence, transparency, and ethics. Public perceptions, such as perceived usefulness and perceived risk, play a crucial role in shaping attitudes towards artificial intelligence and can either facilitate or hinder its adoption. Yet, to our knowledge, little is known about perception-based interventions in health care, and no previous study has examined whether public perceptions play a role in the acceptability of medical applications of LLMs. Hence, this study aims to investigate public perceptions of using LLMs in medicine and whether perception-based interventions affect the acceptability of health care applications of LLMs.

Conditions

See the medical conditions and disease areas that this research is targeting or investigating.

* Large Language Models
* Acceptability of Health Care
* Perception, Self

Keywords

Explore important study keywords that can help with search, categorization, and topic discovery.

* Large language model
* Artificial intelligence
* Perception-based interventions
* Public acceptance

Study Design

Understand how the trial is structured, including allocation methods, masking strategies, primary purpose, and other design elements.

Allocation Method

RANDOMIZED

Intervention Model

PARALLEL

Primary Study Purpose

OTHER

Blinding Strategy

SINGLE

Masked Parties

Outcome Assessors

Study Groups

Review each arm or cohort in the study, along with the interventions and objectives associated with them.

Perceived benefits of large language models in medicine

Participants were asked to read "In April 2023, Massachusetts General Hospital launched a pilot program utilizing medical LLMs to assist with emergency department triage and initial diagnosis and observed a reduction in patient wait times and an improvement in clinical efficiency."

Group Type EXPERIMENTAL

Perception-based interventions

Intervention Type OTHER

Participants allocated to the intervention group received perception-based interventions. Interventions for Groups 1-3 were perceived benefits of LLMs in medicine, perceived racial bias in LLMs in medicine, and perceived ethical conflicts in LLMs in medicine, respectively.

Perceived racial bias in large language models in medicine

Participants were asked to read "In November 2022, a research team from the University of California, San Francisco found that cutting-edge medical LLMs exhibited racial bias when recommending treatment plans."

Group Type EXPERIMENTAL

Perception-based interventions

Intervention Type OTHER

Participants allocated to the intervention group received perception-based interventions. Interventions for Groups 1-3 were perceived benefits of LLMs in medicine, perceived racial bias in LLMs in medicine, and perceived ethical conflicts in LLMs in medicine, respectively.

Perceived ethical conflicts in large language models in medicine

Participants were required to read "In February 2023, a major European hospital network inadvertently leaked partially anonymized but still sensitive patient data during the testing of medical LLMs due to a system configuration error. Although no direct patient harm occurred, this increased public concerns regarding data privacy and security and compelled relevant institutions to conduct urgent reviews of their data protection measures."

Group Type EXPERIMENTAL

Perception-based interventions

Intervention Type OTHER

Participants allocated to the intervention group received perception-based interventions. Interventions for Groups 1-3 were perceived benefits of LLMs in medicine, perceived racial bias in LLMs in medicine, and perceived ethical conflicts in LLMs in medicine, respectively.

Control

No intervention

Group Type NO_INTERVENTION

No interventions assigned to this group
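
For readers who want a concrete picture of how the four parallel groups fit together, below is a minimal, hypothetical sketch of a 1:1:1:1 simple randomization across the arms listed above. The registry entry specifies only that allocation is RANDOMIZED and the model is PARALLEL; the allocation ratio, randomization method, and the participant ID scheme used here are illustrative assumptions, not the study's actual procedure.

```python
import random

# Hypothetical group labels taken from the arm descriptions above.
GROUPS = [
    "Group 1: Perceived benefits of LLMs in medicine",
    "Group 2: Perceived racial bias in LLMs in medicine",
    "Group 3: Perceived ethical conflicts in LLMs in medicine",
    "Group 4: Control (no intervention)",
]


def allocate(participant_ids, seed=2025):
    """Assign each participant to one of the four parallel groups.

    Simple randomization is assumed here for illustration; the protocol
    does not state the allocation ratio or method.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible example
    return {pid: rng.choice(GROUPS) for pid in participant_ids}


if __name__ == "__main__":
    # Target enrollment per the registry entry: 3000 participants.
    assignments = allocate(range(1, 3001))

    # Tally how many participants land in each group.
    counts = {group: 0 for group in GROUPS}
    for group in assignments.values():
        counts[group] += 1
    for group, n in counts.items():
        print(f"{group}: {n}")
```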

Interventions

Learn about the drugs, procedures, or behavioral strategies being tested and how they are applied within this trial.

Perception-based interventions

Participants allocated to the intervention group received perception-based interventions. Interventions for Groups 1-3 were perceived benefits of LLMs in medicine, perceived racial bias in LLMs in medicine, and perceived ethical conflicts in LLMs in medicine, respectively.

Intervention Type OTHER

Eligibility Criteria

Check the participation requirements, including inclusion and exclusion rules, age limits, and whether healthy volunteers are accepted.

Inclusion Criteria

* ≥18 years
* Capable of completing an online survey
* Agree to sign an informed consent form

Exclusion Criteria

* Unable to answer questions or communicate
* Not willing to participate in this study

Minimum Eligible Age

18 Years

Eligible Sex

ALL

Accepts Healthy Volunteers

Yes

Sponsors

Meet the organizations funding or collaborating on the study and learn about their roles.

Peking University Third Hospital

OTHER

Sponsor Role collaborator

Peking University

OTHER

Sponsor Role lead

Responsible Party

Identify the individual or organization who holds primary responsibility for the study information submitted to regulators.

Liu Jue

Prof.

Responsibility Role PRINCIPAL_INVESTIGATOR

Principal Investigators

Learn about the lead researchers overseeing the trial and their institutional affiliations.

Jue Liu

Role: PRINCIPAL_INVESTIGATOR

Peking University

Locations

Explore where the study is taking place and check the recruitment status at each participating site.

Jue Liu

Beijing, Beijing Municipality, China

Site Status

Countries

Review the countries where the study has at least one active or historical site.

China

Other Identifiers

Review additional registry numbers or institutional identifiers associated with this trial.

NNSF72474005

Identifier Type: -

Identifier Source: org_study_id