Laypersons Cannot Select Preferred Surgeon Based on Videos of Simulated Robot-assisted Radical Prostatectomies

NCT ID: NCT05607485

Last Updated: 2022-11-07

Study Results

Results pending

The study team has not published outcome measurements, participant flow, or safety data for this trial yet. Check back later for updates.

Basic Information

Get a concise snapshot of the trial, including recruitment status, study phase, enrollment targets, and key timeline milestones.

Recruitment Status

COMPLETED

Clinical Phase

NA

Total Enrollment

151 participants

Study Classification

INTERVENTIONAL

Study Start Date

2021-04-01

Study Completion Date

2022-05-01

Brief Summary

Review the sponsor-provided synopsis that highlights what the study is about and why it is being conducted.

The goal of this comparative blinded assessment study is to compare crowd worker ratings with expert ratings of simulated robot-assisted radical prostatectomies.

The main questions it aims to answer are:

* to examine the use of crowdsourced assessment for assessing the performance of robot-assisted radical prostatectomy (RARP) compared with using experienced surgeons
* to explore whether some crowd workers (CW) are better raters than others

Participants will assess edited videos of simulated robot-assisted radical prostatectomies using a standardized assessment tool. After each video, the laypersons will be asked to answer yes/no to the question: 'Would you trust this doctor to perform robot-assisted surgery on you?' All participants were blinded to the identity of the surgeon performing the robot-assisted radical prostatectomy in each video. Researchers will compare laypersons with expert raters to see whether there is any difference between their ratings.

Detailed Description

Dive into the extended narrative that explains the scientific background, objectives, and procedures in greater depth.

3. Trial design

3.1 Content This study will evaluate global robotic skills for the three modules performed on the RobotiX, Simbionix: bladder neck dissection, nerve-sparing dissection, and ureterovesical anastomosis, all recorded from the previous study: 'Validation of a novel simulation-based test in robot-assisted radical prostatectomy.'

3.2 Response process Experienced surgeons and crowd workers will first be presented with a short, written instruction describing the trial. Before enrolment, all participants will have signed an informed consent (Appendix 2) and a demographic questionnaire for baseline characteristics of the crowd and surgical experience of the experienced surgeons (Appendix 3). After completion of the informed consent and demographic questionnaire, the survey links will be sent to the participants. Afterwards, both crowd raters and experts will be trained in how to assess the videos using the assessment tool, mGEARS. mGEARS is composed of 5 domains: depth perception, bimanual dexterity, efficiency, force sensitivity, and robotic control. Performance in each domain is measured on a 5-point Likert scale. A rating of 1 corresponds to the lowest level of performance, whereas a rating of 5 corresponds to the highest level of performance. An overall performance score is derived by summing the scores of each of the domains (maximum 25 points). The raters will have time to read and understand the assessment tool before rating the videos. An elaborate explanation of the chosen domains will be given to the raters, including how to rate each video.
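As an illustration of the scoring described above (not part of the study materials), the sketch below sums the five mGEARS domain ratings into the overall score; the domain keys, data structure, and function name are assumptions made for this example.

```python
# Illustrative sketch only: computing an overall mGEARS score from the five
# domain ratings described in section 3.2. The domain keys and dict-based
# input format are assumptions, not taken from the study protocol.

MGEARS_DOMAINS = (
    "depth_perception",
    "bimanual_dexterity",
    "efficiency",
    "force_sensitivity",
    "robotic_control",
)

def mgears_total(ratings: dict) -> int:
    """Sum the five domain ratings (each 1-5) into an overall score (5-25)."""
    for domain in MGEARS_DOMAINS:
        if domain not in ratings:
            raise ValueError(f"Missing rating for domain: {domain}")
        if not 1 <= ratings[domain] <= 5:
            raise ValueError(f"{domain} must be rated on a 1-5 Likert scale")
    return sum(ratings[domain] for domain in MGEARS_DOMAINS)

# Example: a mid-level rating of 3 in every domain gives 3 * 5 = 15 points.
print(mgears_total({d: 3 for d in MGEARS_DOMAINS}))  # 15
```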

3.4 Video material The participants will assess the videos using the assessment tool in a survey sent by E-boks. The surveys will be sent using a URL link from REDCap. All videos are stored in the 23video system, and a link to the videos will be included in the survey. The survey has been successfully tested on different devices.

The investigators will randomly choose videos from the third repetition from 5 novice surgeons, 5 experienced robotic surgeons, and 5 experienced robotic surgeons in RARP. The videos will be edited to a maximum length of 5 minutes: each video starts at 0 minutes and is stopped after 5 minutes, so it shows how far the surgeon has come after 5 minutes of simulated operation. A total of 45 edited videos will be used for crowdsourced assessment.

To secure the response process of Messick's framework, all participants will be blinded to the identity and skill level of the surgeon in the recorded video. The experienced surgeons could potentially rate their own videos, which could be a threat to validity for the response process, but as the videos are blinded, they will not know which videos are their own. In addition, there will be a significant time delay between them having performed the task and rating the videos. Thus, it is unlikely that they will be able to identify their own videos. All videos will be given a randomly allocated ID.

3.5 Video-rating Each participant will rate ten randomly chosen videos using mGEARS. The participants will be given a randomized ID number, which is used to match the ten videos to the participant. They will be asked to evaluate each video on the five domains of mGEARS on a scale from one to five. After rating the video, the participant will be asked to answer 'yes' or 'no' to the question: 'Would you trust this doctor to operate on you, if you were to have your prostate removed using robot-assisted surgery?'. The participants will fill in the answers after the video-rating in REDCap.
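For illustration only (the protocol does not describe its allocation procedure in this detail), the sketch below shows one way a rater's randomized ID could be matched to ten of the 45 blinded video IDs; the hash-based seeding, ID format, and function name are assumptions, not the study's implementation.

```python
# Illustrative sketch: deterministically matching a rater's randomized ID to
# ten of the 45 blinded video IDs (sections 3.4-3.5). The seeding scheme and
# ID format are assumptions made for this example.

import hashlib
import random

N_VIDEOS = 45          # 15 surgeons x 3 simulator modules
VIDEOS_PER_RATER = 10

def assign_videos(randomized_id: str) -> list:
    """Return ten distinct video IDs for the rater with this randomized ID."""
    seed = int(hashlib.sha256(randomized_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, N_VIDEOS + 1), VIDEOS_PER_RATER))

print(assign_videos("CW-001"))  # ten IDs between 1 and 45, stable per rater
```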

3.6 Evaluation questions After the crowd raters finish the video ratings, they will receive a final questionnaire in REDCap, where they are asked their opinion about a possible future role as crowd raters regarding time use and possible payment level (Appendix 4).

3.7 Data collection All data will be collected and stored in REDCap, a platform designed to store research data. All data will be pseudonymized, as each participant will get a unique link known only to the participant and the principal investigator (RGO). The participants can only rate the videos once. The data will be blinded by RGO prior to statistical analysis.

4. Selection of participants The crowd workers will be recruited through Forskningspanelet, a Danish association for volunteer patients who would like to contribute to research, via e-mail, Facebook, the website of the Danish prostate cancer association (PROPA), or the monthly PROPA membership magazine.

The expert panel will be invited by e-mail.

Conditions

See the medical conditions and disease areas that this research is targeting or investigating.

Clinical Competence

Study Design

Understand how the trial is structured, including allocation methods, masking strategies, primary purpose, and other design elements.

Allocation Method

NA

Intervention Model

SINGLE_GROUP

comparative blinded assessment study

Primary Study Purpose

OTHER

Blinding Strategy

NONE

Participants were blinded to the identity and surgical skill level of the surgeon in all videos

Study Groups

Review each arm or cohort in the study, along with the interventions and objectives associated with them.

Crowd workers

Crowd workers watched 10 (out of 45 possible) randomly selected videos and assessed them with a standardized assessment tool.

All participants were blinded to the identity and skill level of the surgeon

Group Type OTHER

Randomized video numbers

Intervention Type PROCEDURE

See arm/group description

Interventions

Learn about the drugs, procedures, or behavioral strategies being tested and how they are applied within this trial.

Randomized video numbers

See arm/group description

Intervention Type PROCEDURE

Eligibility Criteria

Check the participation requirements, including inclusion and exclusion rules, age limits, and whether healthy volunteers are accepted.

Inclusion Criteria

* Member of Forskningspanelet


* none

Exclusion Criteria

* Under the age of 18

Expert raters


* Senior surgeons in urology
* Conducted \>50 robotic-assisted radical prostatectomy procedures
Minimum Eligible Age

18 Years

Eligible Sex

ALL

Accepts Healthy Volunteers

Yes

Sponsors

Meet the organizations funding or collaborating on the study and learn about their roles.

Copenhagen Academy for Medical Education and Simulation

OTHER

Sponsor Role LEAD

Responsible Party

Identify the individual or organization who holds primary responsibility for the study information submitted to regulators.

Rikke Groth Olsen

Principal Investigator

Responsibility Role PRINCIPAL_INVESTIGATOR

Principal Investigators

Learn about the lead researchers overseeing the trial and their institutional affiliations.

Flemming Bjerrum, MD

Role: STUDY_CHAIR

Copenhagen Academy for Medical Education and Simulation

Locations

Explore where the study is taking place and check the recruitment status at each participating site.

Copenhagen Academy for Medical Education and Simulation

Copenhagen, Østerbro, Denmark

Site Status

Countries

Review the countries where the study has at least one active or historical site.

Denmark

Other Identifiers

Review additional registry numbers or institutional identifiers associated with this trial.

P-2020-701

Identifier Type: -

Identifier Source: org_study_id
