Exploring the Use of AI-Assisted Video Monitoring to Predict Accidental Events in ICU Patients

NCT ID: NCT07307521

Last Updated: 2025-12-29

Study Results

Results pending

The study team has not published outcome measurements, participant flow, or safety data for this trial yet.

Basic Information

Recruitment Status

NOT_YET_RECRUITING

Total Enrollment

300 participants

Study Classification

OBSERVATIONAL

Study Start Date

2026-01-01

Study Completion Date

2028-12-31

Brief Summary

This study aims to improve the safety and care of patients in the Intensive Care Unit (ICU) by using artificial intelligence (AI) to analyze video monitoring. ICU patients often face serious risks such as delirium, accidental removal of breathing tubes or lines, and sleep problems. These events can lead to medical emergencies, longer ICU stays, higher costs, and worse outcomes.

To address these challenges, we will place a small video camera above each ICU bed. The camera will record patient movements, body activity, and sleep patterns. At the same time, routine medical monitors will record heart rate, blood oxygen levels, and other vital signs. Noise levels in the room will also be measured. All these data help us understand the patient's behavior and condition more accurately.

The video recording does not involve extra treatment or additional procedures. All data are collected passively and safely. Patient privacy is strictly protected: the system will blur faces or replace them with digital avatars, and any information that could identify the patient or the environment will be masked. All videos are stored securely inside the hospital and are processed only after privacy protection.

Using these recordings, an AI model will be trained to recognize early warning signs of dangerous situations. For example, the system may detect early movements that suggest the patient is becoming agitated, confused, or trying to remove medical tubes. It may also identify severe sleep disturbance that may lead to delirium. If the AI can recognize these early changes, medical staff can intervene sooner and prevent harm.

About 300 patients from Fudan University Zhongshan Hospital will participate. Participation is voluntary. Patients or families will sign an informed consent form before being enrolled. The study has three stages:

* Screening - understanding the study and signing consent.
* Data collection - video and medical monitor data are collected during the ICU stay.
* Follow-up - telephone or in-person follow-up at 1 month and 6 months after discharge to evaluate recovery, sleep, mental status, and overall safety.

There are no direct medical risks from participating in this study because it only collects behavioral and monitoring data. The cameras do not interfere with treatment. Privacy and data security are the main considerations, and all measures strictly follow national laws and hospital regulations.

Participants may benefit from earlier identification of dangerous situations, which may help prevent accidental tube removal, severe agitation, or other emergencies. Even if no direct benefit occurs, the information collected may help improve future ICU care by enabling safer and more accurate monitoring systems.

Taking part in the study will not affect the patient's medical care. Patients may withdraw at any time without any consequences or loss of benefits.

This study hopes to build a reliable AI tool that can assist nurses and doctors in recognizing early signs of trouble, improving safety, and enhancing the quality of care for ICU patients.

Detailed Description

This study investigates whether artificial intelligence (AI)-assisted video monitoring can identify early behavioral changes that precede accidental or harmful events in Intensive Care Unit (ICU) patients. ICU patients are vulnerable to a range of sudden and potentially dangerous events, such as agitation, delirium, accidental device removal, and significant sleep disruption, many of which develop gradually and are difficult to detect solely from routine physiological monitoring. This project aims to determine whether AI analysis of continuous bedside video recordings, combined with noise-level information and vital-sign data already collected during standard ICU care, can provide clinicians with timely warnings before these events occur.

Rationale

Traditional ICU monitoring systems focus on physiological parameters such as heart rate, blood pressure, and oxygen saturation. While essential, these measurements do not fully represent patient behavior. Many high-risk events are preceded by subtle motor patterns or behavioral cues: for example, repeated reaching toward tubes, rising restlessness, or disturbed sleep cycles. Such cues are often intermittent, brief, or masked by sedation or other treatments, making them difficult for staff to detect in busy clinical environments.

Computer vision and AI technologies offer an opportunity to objectively observe and interpret patient movements and behavioral trends continuously, without adding clinical workload. By integrating video information with physiologic data and environmental noise levels, the AI system may identify patterns that indicate emerging delirium, increased agitation, or imminent attempts to remove medical devices. Early identification may support timely preventive interventions and reduce the rates of adverse events.

Study Overview

The study will prospectively enroll ICU patients who consent to video monitoring and data use. A small camera will be installed above each bed to continuously capture patient movement and posture. The camera view is restricted to the patient zone, excluding unnecessary areas such as the nursing station. All recordings follow strict privacy-protection procedures, including automated face masking, background blurring, and removal of identifying information from objects in the frame.

Environmental noise is recorded through a decibel meter, and routine vital-sign data are synchronized with the video timeline. These combined multimodal data will serve as input for AI model development.

The study is divided into three components:

Data collection phase - real-world continuous recording of behavioral and physiological data.

Data processing and annotation - cleaning, de-identification, and labeling of key behavioral events by trained researchers.

Model development and evaluation - training AI models to identify behavioral patterns associated with clinically meaningful events, and evaluating their predictive performance.

Data Integration and Processing

All raw videos remain stored securely inside the hospital's protected data environment and are not transferred outside. A standardized de-identification pipeline is applied before any analytical use. This includes:

* Masking or replacing patient faces.
* Removing identifying elements such as bed numbers and equipment labels.
* Blurring all background areas outside the patient zone.
* Excluding frames containing staff faces or unrelated activities.

After de-identification, videos are aligned with vital-sign and noise-level timelines to create multimodal time-series datasets. Human annotators, trained with a unified labeling guideline, identify episodes of agitation, possible delirium-related behavior, attempts at device removal, and sleep-wake transitions. These labels serve as ground truth for AI training.
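As a rough illustration of the alignment step, the sketch below joins each video-frame timestamp to the nearest vital-sign sample, treating samples beyond a tolerance as missing. The function name, sampling values, and gap threshold are hypothetical; the study's actual pipeline is not published.

```python
from bisect import bisect_left

def align_vitals_to_frames(frame_times, vital_times, vital_values, max_gap=2.0):
    """For each video frame timestamp, pick the nearest vital-sign sample
    (vital_times must be sorted). Samples farther than max_gap seconds
    from the frame are treated as missing (None)."""
    aligned = []
    for t in frame_times:
        i = bisect_left(vital_times, t)
        # Candidates: the sample just before and just after the frame time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(vital_times)]
        best = min(candidates, key=lambda j: abs(vital_times[j] - t))
        if abs(vital_times[best] - t) <= max_gap:
            aligned.append((t, vital_values[best]))
        else:
            aligned.append((t, None))
    return aligned
```

The same nearest-neighbor join would apply to the decibel-meter stream, yielding one multimodal record per frame.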

AI Model Development

Multiple AI architectures will be explored, particularly those suited for temporal video analysis. Potential approaches include convolutional neural networks (CNNs), 3D CNNs, long short-term memory (LSTM) networks, and transformer-based models capable of learning long-range dependencies in behavior sequences. Additional feature-extraction methods will be evaluated to integrate physiologic and environmental signals.
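Whichever temporal architecture is chosen, the aligned multimodal stream must first be cut into fixed-length, overlapping windows, the usual input format for 3D CNNs, LSTMs, and transformers. A minimal sketch (window length and stride are arbitrary illustrative values, not the study's parameters):

```python
def make_windows(series, window, stride):
    """Slice a time-ordered feature sequence into overlapping
    fixed-length windows; each window becomes one model input."""
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, stride)]
```

Each window would then carry a label (e.g. "agitation episode within the next N minutes") derived from the human annotations.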

To avoid model overfitting and ensure generalizability, the dataset will be split into training, validation, and independent test sets. Cross-validation will be used during parameter tuning. Model output will include risk scores or prediction probabilities indicating the likelihood of an impending accidental event.
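One detail implied by the split described above: because each patient contributes many windows, splitting should be done at the patient level so that no patient appears in more than one set, which would otherwise leak information. A hedged sketch (function name, fractions, and seed are illustrative assumptions):

```python
import random

def patient_level_split(patient_ids, frac_train=0.7, frac_val=0.15, seed=42):
    """Split unique patient IDs into train/validation/test sets so that
    no patient contributes data to more than one set (prevents leakage)."""
    ids = sorted(set(patient_ids))
    rng = random.Random(seed)  # fixed seed for reproducibility
    rng.shuffle(ids)
    n_train = int(len(ids) * frac_train)
    n_val = int(len(ids) * frac_val)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])
```

Cross-validation during parameter tuning would likewise group folds by patient rather than by window.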

Performance will be evaluated using accuracy, sensitivity, specificity, F1 score, and lead time (the time interval between system alert and actual event). The lead-time metric is particularly important because practical utility in clinical care depends on whether alerts occur early enough for staff to intervene.
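The named metrics can be computed directly from binary alert/event labels and timestamps; the sketch below is a generic illustration (not the study's evaluation code), with 1 marking an event:

```python
def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, and F1 score from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return sensitivity, specificity, f1

def lead_time(alert_time, event_time):
    """Interval between system alert and actual event; positive values
    mean the alert preceded the event, which is what clinical use requires."""
    return event_time - alert_time
```

In practice the distribution of lead times (not just the mean) would matter, since only alerts arriving early enough to act on are clinically useful.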

Outcome Interpretation

This study does not impose any medical intervention on participants. All adverse events are part of routine clinical care; the study merely investigates whether AI can anticipate them. Through continuous monitoring and analytical modeling, the research aims to quantify how much predictive information is contained in patient behavior, movement patterns, and environmental context captured by video.

The findings will help determine the feasibility and clinical value of AI-assisted behavioral monitoring in real-world ICUs. If successful, such systems may provide early warnings of delirium, accidental device removal, or other behavior-linked risks. This may reduce emergency interventions, shorten ICU stays, and improve overall patient safety.

Follow-Up

To understand the longer-term relevance of the AI predictions, patients will undergo follow-up assessments at 1 month and 6 months after discharge. Follow-up evaluates general health recovery, sleep status, cognition, and whether any delayed complications occurred. Patient and family feedback regarding video monitoring, including comfort level, perceived benefit, and privacy concerns, will also be collected to guide system refinement.

Ethical and Privacy Considerations

The study emphasizes privacy protection and informed consent. Cameras are positioned to minimize exposure of unnecessary areas. De-identification is applied before analysis, and all data are managed within controlled hospital systems. Participants may withdraw at any point without affecting their care. The study involves no experimental treatment or additional medical procedures beyond standard ICU monitoring.

Scientific and Clinical Significance

This research addresses a critical gap in ICU safety: behavior-based early warning. By combining AI, video analysis, physiology, and environmental data, the study explores an approach that could complement routine monitoring. Beyond predicting specific events, the project may contribute to broader understanding of ICU patient behavioral trajectories and the role of environmental factors such as noise.

The long-term vision is to create a clinically deployable system that supports early intervention, reduces preventable harm, and enhances the efficiency of ICU care.

Conditions

Behavioral Monitoring in ICU
Prediction of Accidental Events Using AI-Assisted Video Analysis
ICU Patient Safety and Early Warning System

Keywords

Delirium
Accidental Extubation
Unplanned Device Removal
Sleep Disturbance
Agitation in ICU Patients
Critical Illness

Study Design

Observational Model Type

OTHER

Study Time Perspective

PROSPECTIVE

Study Groups

Single Cohort

This cohort includes ICU patients who undergo continuous bedside video monitoring combined with routine vital-sign collection. Video, physiologic, and noise-level data are used for AI-based analysis to identify patterns associated with delirium, agitation, and accidental device removal. No clinical treatment or care procedures are altered. The study is observational and involves data collection only.

AI-Assisted Video Monitoring

Intervention Type OTHER

This intervention consists of continuous bedside video monitoring combined with routine physiologic data and environmental noise levels. A ceiling-mounted camera captures patient movements and posture without altering clinical care. All video is de-identified through face masking or avatar replacement, and background areas are blurred to protect privacy. Data are synchronized with vital signs and used solely for AI-based behavioral analysis to identify early patterns associated with delirium, agitation, sleep disruption, and accidental device removal. No treatments, medications, or clinical decisions are changed as part of this study. This intervention involves data collection only and does not modify standard ICU care.

Interventions

AI-Assisted Video Monitoring

This intervention consists of continuous bedside video monitoring combined with routine physiologic data and environmental noise levels. A ceiling-mounted camera captures patient movements and posture without altering clinical care. All video is de-identified through face masking or avatar replacement, and background areas are blurred to protect privacy. Data are synchronized with vital signs and used solely for AI-based behavioral analysis to identify early patterns associated with delirium, agitation, sleep disruption, and accidental device removal. No treatments, medications, or clinical decisions are changed as part of this study. This intervention involves data collection only and does not modify standard ICU care.

Intervention Type OTHER

Eligibility Criteria

Inclusion Criteria

* Adult or pediatric patients admitted to the Intensive Care Unit (ICU).
* Patient or legally authorized representative is capable of understanding the study information and providing informed consent.
* Patient is expected to remain in the ICU long enough to allow video and physiologic data collection.
* Agreement to participate and allow video monitoring during the ICU stay.

Exclusion Criteria

* Refusal to participate from the patient or legally authorized representative.
* Patients for whom continuous video monitoring is medically inappropriate or not feasible (e.g., isolation conditions preventing camera installation).
* Patients whose condition or legal status requires special restrictions on video recording (e.g., certain forensic or custodial cases).
* Any situation judged by the clinical team to place the patient at increased privacy or safety risk by participation.
* Withdrawal of consent at any point during the study.
Eligible Sex

ALL

Accepts Healthy Volunteers

No

Sponsors

Shanghai Zhongshan Hospital

OTHER

Sponsor Role lead

Responsible Party

Responsibility Role SPONSOR

Central Contacts

Gu Zhunyong

Role: CONTACT

Phone: 8613918677995

Email: [email protected]

Other Identifiers

B2024-512

Identifier Type: -

Identifier Source: org_study_id