Impact and Safety of AI in Decision Making in the ICU: a Simulation Experiment

NCT ID: NCT05495438

Last Updated: 2023-02-27

Study Results

Results pending

The study team has not published outcome measurements, participant flow, or safety data for this trial yet. Check back later for updates.

Basic Information

Get a concise snapshot of the trial, including recruitment status, study phase, enrollment targets, and key timeline milestones.

Recruitment Status

COMPLETED

Total Enrollment

38 participants

Study Classification

OBSERVATIONAL

Study Start Date

2022-07-22

Study Completion Date

2022-10-31

Brief Summary

Review the sponsor-provided synopsis that highlights what the study is about and why it is being conducted.

The impact of deploying artificial intelligence (AI) in healthcare settings is unclear, in particular with regard to how it will influence human decision makers. Previous research has demonstrated that AI alerts were frequently ignored (Kamal et al., 2020) or could lead to unexpected behaviour with worsening of patient outcomes (Wilson et al., 2021). On the other hand, excessive confidence and trust placed in the AI could have several adverse consequences, including a reduced ability to detect harmful AI decisions, leading to patient harm as well as human deskilling. Some of these aspects relate to automation bias.

In this simulation study, the investigators intend to measure whether medical decisions in areas of high clinical uncertainty are modified by the use of an AI-based clinical decision support tool. They will measure how the doses of intravenous fluids (IVF) and vasopressors administered by doctors to adult patients with sepsis (severe infection with organ failure) in the ICU change as a result of disclosing the doses suggested by a hypothetical AI. The area of sepsis resuscitation is poorly codified, with high uncertainty leading to high variability in practice. This study will not specifically mention the AI Clinician (Komorowski et al., 2018). Instead, the investigators will describe a hypothetical AI for which there is some evidence of effectiveness on retrospective data in another clinical setting (e.g. a model that was retrospectively validated using data from a different country than the source data used for model training) but no prospective evidence of effectiveness or safety. As such, it is possible for this hypothetical AI to provide unsafe suggestions. The investigators will intentionally introduce unsafe AI suggestions (in random order) to measure the sensitivity of participants at detecting these.
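The sensitivity endpoint described above could, for instance, be summarised as the fraction of intentionally unsafe AI suggestions that a participant rejects, alongside the average shift in prescribed dose once the AI suggestion is disclosed. The following is a minimal illustrative sketch only, not part of the registered protocol; all field names (e.g. dose_before_ml, ai_suggestion_unsafe) are hypothetical.

from dataclasses import dataclass
from statistics import mean

@dataclass
class CaseResponse:
    dose_before_ml: float       # dose prescribed before the AI suggestion was shown
    dose_after_ml: float        # dose prescribed after the AI suggestion was disclosed
    ai_suggestion_unsafe: bool  # True for the intentionally unsafe suggestions
    suggestion_rejected: bool   # True if the participant did not follow the suggestion

def mean_dose_shift(cases: list[CaseResponse]) -> float:
    """Average change in prescribed dose once the AI suggestion is disclosed."""
    return mean(c.dose_after_ml - c.dose_before_ml for c in cases)

def unsafe_detection_sensitivity(cases: list[CaseResponse]) -> float:
    """Fraction of intentionally unsafe AI suggestions that the participant rejected."""
    unsafe = [c for c in cases if c.ai_suggestion_unsafe]
    if not unsafe:
        return float("nan")
    return sum(c.suggestion_rejected for c in unsafe) / len(unsafe)

# Made-up example: one participant, three simulated sepsis cases
responses = [
    CaseResponse(500.0, 1000.0, ai_suggestion_unsafe=False, suggestion_rejected=False),
    CaseResponse(250.0, 250.0, ai_suggestion_unsafe=True, suggestion_rejected=True),
    CaseResponse(1000.0, 2000.0, ai_suggestion_unsafe=True, suggestion_rejected=False),
]
print(mean_dose_shift(responses))               # 500.0 ml average shift toward the AI
print(unsafe_detection_sensitivity(responses))  # 0.5 -> half of unsafe suggestions caught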

Detailed Description

Dive into the extended narrative that explains the scientific background, objectives, and procedures in greater depth.

The impact of deploying artificial intelligence (AI) in healthcare settings is unclear, in particular with regard to how it will influence human decision makers. Previous research has demonstrated that AI alerts were frequently ignored (Kamal et al., 2020) or could lead to unexpected behaviour with worsening of patient outcomes (Wilson et al., 2021). On the other hand, excessive confidence and trust placed in the AI could have several adverse consequences, including a reduced ability to detect harmful AI decisions, leading to patient harm as well as human deskilling. Some of these aspects relate to automation bias.

In this simulation study, the investigators intend to measure whether medical decisions in areas of high clinical uncertainty are modified by the use of an AI-based clinical decision support tool. They will measure how the doses of intravenous fluids (IVF) and vasopressors administered by doctors to adult patients with sepsis (severe infection with organ failure) in the ICU change as a result of disclosing the doses suggested by a hypothetical AI. The area of sepsis resuscitation is poorly codified, with high uncertainty leading to high variability in practice. This study will not specifically mention the AI Clinician (Komorowski et al., 2018). Instead, the investigators will describe a hypothetical AI for which there is some evidence of effectiveness on retrospective data in another clinical setting (e.g. a model that was retrospectively validated using data from a different country than the source data used for model training) but no prospective evidence of effectiveness or safety. As such, it is possible for this hypothetical AI to provide unsafe suggestions. The investigators will intentionally introduce unsafe AI suggestions (in random order) to measure the sensitivity of participants at detecting these.

The investigators will examine which participant characteristics are linked with an increased likelihood of being influenced by the AI, and will conduct a number of pre-specified subgroup analyses, e.g. junior versus senior ICU doctors, and separating those with a positive or a negative attitude towards AI; a sketch of one such comparison follows.
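One way such a subgroup analysis could be framed is to quantify, per case, how far the post-disclosure prescription moved toward the AI-suggested dose, and then compare the average across groups (e.g. junior versus senior doctors). The sketch below is purely illustrative, with made-up data and hypothetical variable names; it is not taken from the study protocol.

from collections import defaultdict
from statistics import mean

def shift_toward_ai(dose_before: float, dose_after: float, ai_dose: float) -> float:
    """1.0 = prescription moved fully to the AI suggestion, 0.0 = did not move at all."""
    gap = ai_dose - dose_before
    if gap == 0:
        return 0.0
    return (dose_after - dose_before) / gap

# (seniority, dose_before, dose_after, ai_suggested_dose) -- made-up example data
records = [
    ("junior", 500.0, 900.0, 1000.0),
    ("junior", 250.0, 250.0, 750.0),
    ("senior", 500.0, 600.0, 1000.0),
    ("senior", 1000.0, 1000.0, 500.0),
]

by_group: dict[str, list[float]] = defaultdict(list)
for seniority, before, after, ai_dose in records:
    by_group[seniority].append(shift_toward_ai(before, after, ai_dose))

for group, shifts in by_group.items():
    print(group, round(mean(shifts), 2))
# junior 0.4 (shifts 0.8 and 0.0), senior 0.1 (shifts 0.2 and 0.0)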

Conditions

See the medical conditions and disease areas that this research is targeting or investigating.

Sepsis

Study Design

Understand how the trial is structured, including allocation methods, masking strategies, primary purpose, and other design elements.

Observational Model Type

OTHER

Study Time Perspective

PROSPECTIVE

Study Groups

Review each arm or cohort in the study, along with the interventions and objectives associated with them.

ICU Clinicians

Hypothetical AI

Intervention Type OTHER

n/a - There is no intervention. Clinicians will review the suggestions of a hypothetical AI

Interventions

Learn about the drugs, procedures, or behavioral strategies being tested and how they are applied within this trial.

Hypothetical AI

n/a - There is no intervention. Clinicians will review the suggestions of a hypothetical AI

Intervention Type OTHER

Eligibility Criteria

Check the participation requirements, including inclusion and exclusion rules, age limits, and whether healthy volunteers are accepted.

Inclusion Criteria

* Junior (senior house officer) or senior (registrar/fellow/consultant) ICU doctor

Minimum Eligible Age

18 Years

Eligible Sex

ALL

Accepts Healthy Volunteers

Yes

Sponsors

Meet the organizations funding or collaborating on the study and learn about their roles.

University of York

OTHER

Sponsor Role collaborator

Imperial College London

OTHER

Sponsor Role lead

Responsible Party

Identify the individual or organization who holds primary responsibility for the study information submitted to regulators.

Responsibility Role SPONSOR

Principal Investigators

Learn about the lead researchers overseeing the trial and their institutional affiliations.

Matthieu Komorowski, MD, PhD

Role: PRINCIPAL_INVESTIGATOR

Imperial College London

Locations

Explore where the study is taking place and check the recruitment status at each participating site.

Imperial College Hospitals NHS Trust

London, United Kingdom

Site Status

Countries

Review the countries where the study has at least one active or historical site.

United Kingdom

Other Identifiers

Review additional registry numbers or institutional identifiers associated with this trial.

22CX7592

Identifier Type: -

Identifier Source: org_study_id
