AI-Powered Artificial Vision for Visual Prostheses

NCT ID: NCT06117332

Last Updated: 2025-07-31

Study Results

Results pending

The study team has not published outcome measurements, participant flow, or safety data for this trial yet. Check back later for updates.

Basic Information

Get a concise snapshot of the trial, including recruitment status, study phase, enrollment targets, and key timeline milestones.

Recruitment Status

ENROLLING_BY_INVITATION

Clinical Phase

NA

Total Enrollment

7 participants

Study Classification

INTERVENTIONAL

Study Start Date

2023-10-02

Study Completion Date

2027-08-31

Brief Summary

Review the sponsor-provided synopsis that highlights what the study is about and why it is being conducted.

Visual impairment is one of the ten most prevalent causes of disability and poses extraordinary challenges to individuals in a society that relies heavily on sight. Living with acquired blindness not only lowers the quality of life of these individuals, but also strains society's limited resources for assistance, care, and rehabilitation. However, to date, there is no effective treatment for many patients who are visually handicapped as a result of degeneration or damage to the inner layers of the retina, the optic nerve, or the visual pathways. Therefore, there are compelling reasons to pursue the development of a cortical visual prosthesis capable of restoring some useful sight to these profoundly blind patients.

However, the quality of current prosthetic vision is still rudimentary. A major outstanding challenge is translating electrode stimulation into a code that the brain can understand. Interactions between the device electronics and the retinal neurophysiology lead to distortions that can severely limit the quality of the generated visual experience. Rather than aiming to one day restore natural vision (which may remain elusive until the neural code of vision is fully understood), one might be better off thinking about how to create practical and useful artificial vision now.

The goal of this work is to address fundamental questions that will allow the development of a Smart Bionic Eye, a device that relies on AI-powered scene understanding to augment the visual scene (similar to the Microsoft HoloLens), tailored to specific real-world tasks that are known to diminish the quality of life of people who are blind (e.g., face recognition, outdoor navigation, reading, self-care).

Detailed Description

Dive into the extended narrative that explains the scientific background, objectives, and procedures in greater depth.

The investigators will perform basic experimental studies involving humans (BESH) designed to quantify the perceptual experiences of visual prosthesis patients. These experiments will follow standard procedures for collecting behavioral data, and involve simple perceptual tasks (e.g., signal detection, object recognition) and behavioral tasks (e.g., walking towards a goal location).

The investigators will produce visual percepts in visual prosthesis patients either by directly stimulating electrodes (using standard, safe pulse trains) or by asking them to view a computer or projector screen and using the standard stimulation protocols for their devices to convert the screen image into pulse trains on their electrodes. Informed by psychophysical data and computational models, the investigators will test the ability of different stimulus encoding methods to support simple perceptual and behavioral tasks (e.g., object recognition, navigation). These encoding methods may include computer vision methods (e.g., to highlight important objects in the scene) or machine learning methods to tailor stimuli to each individual patient. Performance of prosthesis patients will be compared both across stimulus encoding methods and to that of normally sighted control subjects viewing stimuli manipulated to match the expected perceptual experience of prosthesis patients.
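
The registry text does not specify the encoding algorithms, but a minimal intensity-based encoder illustrates the general idea: the scene image is reduced to the electrode grid and local brightness is mapped onto pulse amplitude. The grid size, amplitude range, and all names below are illustrative assumptions, not the study's actual methods.

    import numpy as np

    def encode_image(image, grid=(6, 10), amp_min_ua=0.0, amp_max_ua=200.0):
        """Map a grayscale image (2-D array, values 0-255) to one amplitude per electrode.

        grid is the electrode layout (6x10 here, an assumed Argus II-like
        arrangement); the amplitude range is illustrative and would be capped
        by the device's safety limits in practice.
        """
        h, w = image.shape
        gh, gw = grid
        # Average the pixels falling within each electrode's patch of the image.
        trimmed = image[:h - h % gh, :w - w % gw].astype(float)
        patches = trimmed.reshape(gh, h // gh, gw, w // gw)
        brightness = patches.mean(axis=(1, 3)) / 255.0
        # Brighter image regions are encoded as larger pulse amplitudes.
        return amp_min_ua + brightness * (amp_max_ua - amp_min_ua)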

The normal method of stimulation is a chain from a camera mounted on eyeglasses through a video processing unit (VPU), which converts the video image into electronic pulse trains. Sometimes the investigators will test subjects using this camera. More often, the investigators will carry out 'direct stimulation', using an external computer to directly specify pulse trains and send them to the VPU (e.g., a 300 Hz pulse train with a 400 us pulse duration, or a 1 s, 10 Hz cathodic pulse train with a current amplitude of 100 microAmps and a pulse width of 45 microseconds sent to Electrode 12 of an Argus II implant). The VPU contains software that makes sure these pulse trains are within FDA-approved safety limits. For example, pulses must be charge-balanced (equal anodic/cathodic charge) and must have a charge density below 35 microCoulombs/cm2.
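
As a concrete illustration of the safety limits just described, the sketch below checks a symmetric biphasic pulse for charge balance and charge density. The electrode area and function name are assumptions for illustration, not the actual VPU software.

    def pulse_within_limits(amp_ua, phase_us, electrode_area_cm2,
                            max_density_uc_per_cm2=35.0):
        """Check one symmetric biphasic pulse against the limits above.

        A symmetric pulse (equal anodic and cathodic phases) is
        charge-balanced by construction; what remains to verify is the
        charge-density limit of 35 microCoulombs/cm2 per phase.
        """
        # Charge per phase: Q = I * t; uA * us gives picocoulombs, so scale to uC.
        charge_uc = amp_ua * phase_us * 1e-6
        return charge_uc / electrode_area_cm2 < max_density_uc_per_cm2

    # The 100 uA, 45 us example above, on an assumed 200 um diameter disc
    # electrode (area = pi * (0.01 cm)^2 ~= 3.14e-4 cm^2):
    print(pulse_within_limits(100, 45, 3.14e-4))  # True (~14 uC/cm2)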

Important parameters for safety include (a) pulses must be charge-balanced (an anodic pulse must be followed quickly by a cathodic pulse, and vice versa, or the electrode will dissolve), and (b) charge density should be limited. The frequency and the current amplitude of the pulse train are not, in themselves, critical safety issues, since the electronic/neural interface is robust to extremely high rates of stimulation and high current levels. However, high-frequency or high-amplitude pulse trains can produce discomfort in patients (analogous to going from a dark movie theatre into sunlight) by inducing large-scale neuronal firing. The investigators will normally focus on pulse-train frequencies/amplitudes that are in the normal range used by the patient when using their device. If the investigators use parameters that might be expected to produce a more intense neural response (and therefore have the potential to cause discomfort), they will always introduce them in a step-wise fashion (e.g., gradually increasing amplitude) while checking that the sensation is not 'uncomfortably bright', and they will immediately decrease the intensity of stimulation if patients report that the sensation approaches discomfort. The PI has experience with this approach and will train all personnel on these protocols.
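
The step-wise introduction of more intense parameters might look like the following sketch, in which amplitude rises in small increments and immediately backs off at the first report of near-discomfort. The function names `stimulate` and `reports_comfortable` are hypothetical placeholders for device control and patient feedback, and the default values are illustrative.

    def ramp_amplitude(stimulate, reports_comfortable,
                       start_ua=20.0, step_ua=10.0, max_ua=200.0):
        """Gradually raise amplitude, stopping below the discomfort threshold."""
        amp = start_ua
        highest_comfortable = None
        while amp <= max_ua:
            stimulate(amp)
            if not reports_comfortable():
                # Patient reports the percept approaching discomfort:
                # immediately reduce intensity and stop the ramp.
                stimulate(max(amp - step_ua, start_ua))
                break
            highest_comfortable = amp
            amp += step_ua
        return highest_comfortable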

In response to the stimulation/image on the monitor, subjects will be asked to either make a perceptual judgment or perform a simple behavioral task. Examples include detecting a stimulus ('did you see a light on that trial'), reporting size by drawing on a touch screen, or walking to a target location. Both patient response and reaction time will be recorded.

None of these stimuli will elicit emotional responses or be aversive in any way.

In some cases, the investigators will also collect data measuring subjects' eye position. This is a noninvasive procedure that will be carried out using standard eye-tracking equipment via an infra-red camera that tracks the position of the subjects' pupil. Only measurements like eye position or eye blinks will be recorded, so these data do not contain identifiable information.

Subjects are encouraged to take breaks as often as needed (they may leave the testing room). The investigators use various experimental techniques, including: (1) same-different, e.g., subjects are shown two percepts and asked whether they are the same or different; (2) method of adjustment, e.g., subjects are asked to adjust a display/stimulation intensity until a percept is barely visible; (3) 2-alternative forced choice (2AFC), e.g., subjects are presented with two stimuli and asked which of the two is brighter; and (4) identification, e.g., subjects are asked to identify which letter was presented.
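
As an illustration of technique (3), a single 2AFC brightness trial could be sketched as below. The functions `present_stimulus` and `get_interval_choice` are hypothetical placeholders for stimulus delivery and the subject's key press; as described above, both the response and the reaction time are recorded.

    import random
    import time

    def run_2afc_trial(present_stimulus, get_interval_choice,
                       amp_test_ua, amp_ref_ua=100.0):
        """One 2AFC trial: which of two intervals contained the brighter stimulus?"""
        # Randomize which interval contains the test stimulus.
        test_first = random.random() < 0.5
        order = ([amp_test_ua, amp_ref_ua] if test_first
                 else [amp_ref_ua, amp_test_ua])
        for amp in order:
            present_stimulus(amp)          # deliver the two intervals in turn
        t0 = time.monotonic()
        choice = get_interval_choice()     # subject answers 1 or 2
        reaction_time = time.monotonic() - t0
        # Score the trial: the higher-amplitude stimulus is taken to be the
        # brighter one (a simplifying assumption for this sketch).
        test_brighter = amp_test_ua > amp_ref_ua
        brighter_interval = 1 if test_first == test_brighter else 2
        return choice == brighter_interval, reaction_time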

In some cases, as well as measuring accuracy, the investigators will also measure improvement with practice by repeating the same task across multiple sessions (up to 5 sessions, each carried out on different testing days).

Conditions

See the medical conditions and disease areas that this research is targeting or investigating.

Blindness, Acquired

Study Design

Understand how the trial is structured, including allocation methods, masking strategies, primary purpose, and other design elements.

Allocation Method

NA

Intervention Model

SINGLE_GROUP

Primary Study Purpose

BASIC_SCIENCE

Blinding Strategy

NONE

Study Groups

Review each arm or cohort in the study, along with the interventions and objectives associated with them.

Perception resulting from AI-powered artificial vision

The investigators will produce visual percepts in visual prosthesis patients either by directly stimulating electrodes (using FDA-approved pulse trains) or by asking them to view a computer or projector screen and using the standard stimulation protocols for their devices to convert the screen image into pulse trains on their electrodes. Informed by psychophysical data and computational models, the investigators will test the ability of different stimulus encoding methods to support simple perceptual and behavioral tasks (e.g., object recognition, navigation). These encoding methods may include computer vision and machine learning methods to highlight important objects in the scene or nearby obstacles, and may be tailored to each individual patient.

Group Type EXPERIMENTAL

Visual prosthesis

Intervention Type DEVICE

In response to the stimulation/image on the monitor, subjects will be asked to either make a perceptual judgment or perform a simple behavioral task. Examples include detecting a stimulus ('did you see a light on that trial'), reporting size by drawing on a touch screen, or walking to a target location. Both patient response and reaction time will be recorded.

In some cases, the investigators will also collect data measuring subjects' eye position. This is a noninvasive procedure that will be carried out using standard eye-tracking equipment via an infra-red camera that tracks the position of the subjects' pupil. Only measurements like eye position or eye blinks will be recorded, so these data do not contain identifiable information.

Interventions

Learn about the drugs, procedures, or behavioral strategies being tested and how they are applied within this trial.

Visual prosthesis

In response to the stimulation/image on the monitor, subjects will be asked to either make a perceptual judgment or perform a simple behavioral task. Examples include detecting a stimulus ('did you see a light on that trial'), reporting size by drawing on a touch screen, or walking to a target location. Both patient response and reaction time will be recorded.

In some cases, the investigators will also collect data measuring subjects' eye position. This is a noninvasive procedure that will be carried out using standard eye-tracking equipment via an infra-red camera that tracks the position of the subjects' pupil. Only measurements like eye position or eye blinks will be recorded, so these data do not contain identifiable information.

Intervention Type DEVICE

Other Intervention Names

Discover alternative or legacy names that may be used to describe the listed interventions across different sources.

Utah array (Cortivis)

Eligibility Criteria

Check the participation requirements, including inclusion and exclusion rules, age limits, and whether healthy volunteers are accepted.

Inclusion Criteria

* Subject must be at least 18 years of age;
* Subject has been implanted with a visual prosthesis (e.g., Argus II, Orion, Cortivis);
* Subject has healed from surgery and the surgeon has cleared the subject for programming;
* Subject has the cognitive and communication ability to participate in the study (i.e., follow spoken directions, perform tests, and give feedback);
* Subject is willing to undergo up to 4-6 hours of psychophysics testing per day on 3-5 consecutive days;
* Subject is capable of understanding patient information materials and giving written informed consent;
* Subject is able to walk unassisted.

Criteria for inclusion of sighted control subjects:

* Subject speaks English;
* Subject must be at least 18 years of age;
* Subject has visual acuity of 20/40 or better (corrected);
* Subject has the cognitive and communication ability to participate in the study (i.e., follow spoken directions, perform tests, and give feedback);
* Subject is capable of understanding participant information materials and giving written informed consent;
* Subject is able to walk unassisted.

Exclusion Criteria

* Visual prosthesis users: Subject is unwilling or unable to travel to testing facility for at least 3 days of testing within a one-week timeframe;
* Sighted controls: Subject has a history of motion sickness or flicker vertigo;
* All: Subject has language or hearing impairment.

Minimum Eligible Age

18 Years

Eligible Sex

ALL

Accepts Healthy Volunteers

Yes

Sponsors

Meet the organizations funding or collaborating on the study and learn about their roles.

Universidad Miguel Hernandez de Elche

OTHER

Sponsor Role collaborator

University of California, Santa Barbara

OTHER

Sponsor Role lead

Responsible Party

Identify the individual or organization who holds primary responsibility for the study information submitted to regulators.

Responsibility Role SPONSOR

Principal Investigators

Learn about the lead researchers overseeing the trial and their institutional affiliations.

Michael Beyeler, PhD

Role: PRINCIPAL_INVESTIGATOR

University of California, Santa Barbara

Locations

Explore where the study is taking place and check the recruitment status at each participating site.

University of California, Santa Barbara

Santa Barbara, California, United States

Site Status

University Miguel Hernandez

Elche, Alicante, Spain

Site Status

Countries

Review the countries where the study has at least one active or historical site.

United States

Spain

Other Identifiers

Review additional registry numbers or institutional identifiers associated with this trial.

DP2LM014268

Identifier Type: NIH

Identifier Source: org_study_id

More Related Trials

Additional clinical trials that may be relevant based on similarity analysis.

Click2Print Artificial Eyes
NCT05093348 COMPLETED NA