Environmental Localization Mapping and Guidance for Visual Prosthesis Users

NCT ID: NCT04359108

Last Updated: 2025-03-10

Study Results

Results available

Outcome measurements, participant flow, baseline characteristics, and adverse events have been published for this study.

Basic Information

Get a concise snapshot of the trial, including recruitment status, study phase, enrollment targets, and key timeline milestones.

Recruitment Status

COMPLETED

Clinical Phase

NA

Total Enrollment

26 participants

Study Classification

INTERVENTIONAL

Study Start Date

2021-03-16

Study Completion Date

2024-09-30

Brief Summary

Review the sponsor-provided synopsis that highlights what the study is about and why it is being conducted.

This study is driven by the hypothesis that independent navigation by blind users of visual prosthetic devices can be greatly aided by an autonomous navigational aid that provides information about the environment and guidance for navigation through multimodal sensory cues. For this study, the investigators developed a navigation system that uses on-board sensing to map the user's environment and compute navigable paths to desired destinations in real time. Information regarding obstacles and directional guidance is communicated to the user via a combination of sensory modalities, including limited vision (through the user's visual prosthesis), haptic cues, and audio cues. This study evaluates how effectively this navigational aid improves prosthetic vision users' ability to perform navigational tasks. The participants include both users of the Argus II Retinal Prosthesis System (Argus II) and normally sighted individuals who use a virtual reality headset to simulate the limited vision of the Argus II system.

Detailed Description

Dive into the extended narrative that explains the scientific background, objectives, and procedures in greater depth.

About 1.3 million Americans aged 40 and older are legally blind, the majority because of diseases with onset later in life, such as glaucoma and age-related macular degeneration. Second Sight Medical Products (SSMP) has developed the world's first FDA-approved retinal implant, the Argus II, intended to restore some functional vision for people suffering from retinitis pigmentosa (RP).

In this era of smart devices, generic navigation technology, such as GPS mapping apps for smartphones, can provide directions to help guide a blind user from point A to point B. However, these navigational aids do little to enable blind users to form an egocentric understanding of their surroundings, are not suited to navigation indoors, and do nothing to assist in avoiding obstacles to mobility. The Argus II, on the other hand, provides blind users with a limited visual representation of the user's surroundings that improves users' ability to orient themselves and traverse obstacles, yet it lacks features for high-level navigation and semantic interpretation of the surroundings. The proposed study aims to address these limitations of the Argus II through a synergy of state-of-the-art simultaneous localization and mapping (SLAM) and scene recognition technologies.

This study is driven by the hypothesis that independent navigation by blind users of visual prosthetic devices can be greatly aided by an autonomous navigational aid that provides information about the environment and guidance for navigation through multimodal sensory cues. The investigators developed a navigation system that uses on-board sensing and SLAM-based algorithms to continuously construct a map of the user's environment and locate the user within that map in real time. On-board path planning algorithms compute optimal navigation routes to desired destinations based on the constructed map. During navigation, the system communicates obstacle locations and navigational cues to the user via a combination of sensory modalities. The participants for this study include blind Argus II users, who use their retinal implant for vision, and normally sighted individuals, who use a virtual reality headset to simulate the limited vision of a retinal prosthesis.
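The registry record does not include the system's software; the sketch below only illustrates the map-plan-cue loop described above under simplifying assumptions (a fixed 2D occupancy grid standing in for the SLAM map, A* search for path planning). All function and variable names are hypothetical.

```python
# Minimal, illustrative sketch of the navigation loop (not the study's code):
# maintain an occupancy grid, plan a path to the goal, and convert the first
# path segment into a heading cue for the feedback modalities.
import heapq
import math

def plan_path(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free, 1 = obstacle); returns a cell path."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_cost = cost[cur] + 1
                if new_cost < cost.get(nxt, math.inf):
                    cost[nxt] = new_cost
                    came_from[nxt] = cur
                    # Manhattan-distance heuristic keeps the search admissible.
                    priority = new_cost + abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                    heapq.heappush(frontier, (priority, nxt))
    if goal not in came_from:
        return []  # no navigable path found
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

def heading_error_deg(pose, path):
    """Signed heading error (degrees) from the user's pose to the next waypoint."""
    if len(path) < 2:
        return 0.0
    (r, c), yaw_deg = pose
    nr, nc = path[1]
    bearing = math.degrees(math.atan2(nc - c, nr - r))
    return (bearing - yaw_deg + 180.0) % 360.0 - 180.0
```

In the actual system the map is built and updated continuously by SLAM and the path is re-planned as the user moves; the fixed grid here is only a stand-in for that process.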

The sensory modalities used by the navigational aid to communicate information back to the user include:

* Limited vision is provided via the user's visual prosthesis, with the Argus II retinal implant supporting an image size of 10 x 6 pixels spanning approximately 18 x 11 degrees field-of-view. Images sent to the visual implant are derived from video frames provided by forward-facing cameras integrated within headgear worn by the user. Three forms of vision feedback are evaluated in this study: 1) the standard vision output of the Argus II (which uses texture-based processing including difference-of-Gaussian and contrast enhancement filters), 2) an enhanced depth-based vision mode that uses the depth sensing capabilities of the navigational aid to highlight above-ground obstacles, and 3) a high field-of-view depth-based vision mode that doubles the image size and field of view of the visual feedback in each dimension. The depth-vision modes display only above-ground obstacles, rendered with brightness that increases as the distance to the user decreases; the ground and obstacles beyond a threshold distance are not displayed in order to declutter the visual scene. The high field-of-view depth-vision mode is only utilized by normally sighted participants, as this mode exceeds the vision capabilities of the Argus II implant.
* Haptic cues indicate the direction in which the user should advance in order to follow the path computed by the navigational aid to reach a target destination. The haptic cues are generated by vibrators built into the user-worn headgear at five positions arranged left-to-right across the user's forehead. The five vibration points direct the user to turn "far left", "slight left", "straight ahead", "slight right", or "far right".
* Audio cues provide an audible alert when the user approaches within 1.5 feet of an obstacle and provide verbal updates on the remaining distance to the destination along the path computed by the navigational aid. (A minimal sketch of these cue computations follows this list.)
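The record does not publish the feedback software; the sketch below only illustrates the three cue types described above. The 10 x 6 pixel prosthesis image and the 1.5-foot audio alert distance come from the description; the depth display cutoff, the haptic zone boundaries, the block-average downsampling, and all names are assumptions made for illustration.

```python
# Illustrative sketch of the three feedback modalities (hypothetical, not the study's code).
import numpy as np

PHOSPHENE_SHAPE = (6, 10)      # Argus II image size: 10 x 6 pixels (rows, cols)
DEPTH_CUTOFF_M = 3.0           # assumed display threshold; not stated in the record
AUDIO_ALERT_M = 1.5 * 0.3048   # 1.5 feet, converted to meters

def depth_vision_frame(depth_m, obstacle_mask):
    """Map above-ground obstacle depth to brightness: nearer -> brighter;
    the ground and anything beyond the cutoff stay dark (decluttered)."""
    brightness = np.clip(1.0 - depth_m / DEPTH_CUTOFF_M, 0.0, 1.0)
    brightness[~obstacle_mask] = 0.0
    # Downsample to the prosthesis resolution by block averaging.
    h, w = brightness.shape
    ph, pw = PHOSPHENE_SHAPE
    cropped = brightness[: h - h % ph, : w - w % pw]
    return cropped.reshape(ph, h // ph, pw, w // pw).mean(axis=(1, 3))

def haptic_zone(heading_error_deg):
    """Quantize the heading error into the five forehead vibrator positions."""
    zones = ["far left", "slight left", "straight ahead", "slight right", "far right"]
    edges = [-45, -15, 15, 45]  # assumed zone boundaries, in degrees
    return zones[sum(heading_error_deg > e for e in edges)]

def audio_cues(nearest_obstacle_m, remaining_path_m):
    """Alert when an obstacle is within ~1.5 feet; report the distance remaining."""
    alert = nearest_obstacle_m <= AUDIO_ALERT_M
    return alert, f"{remaining_path_m:.0f} meters to destination"
```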

This study compares participants' performance in completing navigation tasks using five different modes and combinations of the foregoing sensory modalities as follows: 1) Argus vision, 2) depth vision, 3) depth vision with haptic and audio, 4) haptic and audio (without vision), and 5) high field-of-view depth vision.
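For reference, the five mode combinations listed above can be written as a simple configuration table; the encoding below is hypothetical and merely mirrors the list.

```python
# Hypothetical encoding of the five evaluated sensory-modality combinations.
NAV_MODES = {
    "argus_vision":       {"vision": "argus",      "haptics": False, "audio": False},
    "depth_vision":       {"vision": "depth",      "haptics": False, "audio": False},
    "depth_haptic_audio": {"vision": "depth",      "haptics": True,  "audio": True},
    "haptic_audio":       {"vision": None,         "haptics": True,  "audio": True},
    "hfov_depth_vision":  {"vision": "depth_hfov", "haptics": False, "audio": False},
}
```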

The navigation tasks performed by the participants using these modalities include navigating through a dense obstacle field and navigating between rooms within an indoor facility, which requires successful traversal of non-trivial paths.

In addition, a third experiment evaluates the effect of the resolution and field-of-view of the retinal implant on participants' ability to visually discern relative distances to different obstacles, based on the optical flow patterns induced by the participant's motion when approaching obstacles situated at different distances ahead of the user. For this experiment, the following four vision settings are evaluated: 1) low resolution / low field-of-view, 2) low resolution / high field-of-view, 3) high resolution / low field-of-view, and 4) high resolution / high field-of-view. The "low" settings correspond to the values of the Argus II system, whereas the "high" settings correspond to a doubling of the "low" values along each dimension. For Argus user participants, only the low resolution / low field-of-view setting is evaluated, since the Argus II retinal implant is incapable of supporting the higher vision settings.
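As a small worked example, using the 10 x 6 pixel and approximately 18 x 11 degree Argus II values quoted earlier as the "low" baseline, doubling along each dimension gives 20 x 12 pixels for high resolution and roughly 36 x 22 degrees for high field-of-view. A hypothetical helper that enumerates the four combinations:

```python
# Worked example of the four vision settings; "low" uses the Argus II values
# from the description (10 x 6 pixels, ~18 x 11 degrees), "high" doubles each
# value along each dimension. Names and structure are illustrative only.
LOW_RES_PX = (10, 6)     # width x height, in pixels
LOW_FOV_DEG = (18, 11)   # horizontal x vertical, in degrees

def vision_setting(resolution, fov):
    """resolution and fov are each 'low' or 'high'."""
    res_scale = 2 if resolution == "high" else 1
    fov_scale = 2 if fov == "high" else 1
    pixels = tuple(res_scale * v for v in LOW_RES_PX)    # 'high' -> (20, 12)
    degrees = tuple(fov_scale * v for v in LOW_FOV_DEG)  # 'high' -> (36, 22)
    return pixels, degrees

# The four evaluated combinations, in the order listed above:
SETTINGS = [vision_setting(r, f) for r in ("low", "high") for f in ("low", "high")]
```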

Conditions

See the medical conditions and disease areas that this research is targeting or investigating.

Retinitis Pigmentosa
Visual Impairment
Visual Prosthesis

Study Design

Understand how the trial is structured, including allocation methods, masking strategies, primary purpose, and other design elements.

Allocation Method

NON_RANDOMIZED

Intervention Model

SINGLE_GROUP

Primary Study Purpose

BASIC_SCIENCE

Blinding Strategy

NONE

Study Groups

Review each arm or cohort in the study, along with the interventions and objectives associated with them.

Evaluation of normally sighted participants using a simulated visual prosthesis

All normally sighted participants will be assigned to this arm of the study. This study group will perform all experiments using a virtual reality headset (Oculus Go) to simulate the limited vision of a retinal prosthesis system. This study group will perform all interventions, including both those that match and those that exceed the visual performance of the Argus II system.

Group Type EXPERIMENTAL

Navigation system mode: Argus Vision

Intervention Type DEVICE

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: Argus mode
* Haptics: none
* Audio: none

Since this mode only provides the standard Argus vision, it is equivalent to using the base Argus II system without the navigational aid.

Navigation system mode: Depth Vision

Intervention Type DEVICE

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: Depth mode (at Argus II resolution and field-of-view)
* Haptics: none
* Audio: none

Navigation system mode: Depth Vision with Haptic / Audio

Intervention Type DEVICE

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: Depth mode (at Argus II resolution and field-of-view)
* Haptics: yes
* Audio: yes

Navigation system mode: Haptic / Audio

Intervention Type DEVICE

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: none
* Haptics: yes
* Audio: yes

Since this mode does not provide any visual feedback, it is equivalent to using the navigational aid while completely blind, without a visual prosthesis.

Navigation system mode: High Field-of-View Depth Vision

Intervention Type DEVICE

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: High Field-of-View Depth mode (at twice the Argus II resolution and field-of-view along each dimension)
* Haptics: none
* Audio: none

Distance test vision mode: Low Resolution / Low Field-of-View

Intervention Type DEVICE

This intervention performs the two-obstacle relative-distance test with the user's vision output configured as follows:

* Resolution: Low
* Field-of-View: Low

Distance test vision mode: Low Resolution / High Field-of-View

Intervention Type DEVICE

This intervention performs the two-obstacle relative-distance test with the user's vision output configured as follows:

* Resolution: Low
* Field-of-View: High

Distance test vision mode: High Resolution / Low Field-of-View

Intervention Type DEVICE

This intervention performs the two-obstacle relative-distance test with the user's vision output configured as follows:

* Resolution: High
* Field-of-View: Low

Distance test vision mode: High Resolution / High Field-of-View

Intervention Type DEVICE

This intervention performs the two-obstacle relative-distance test with the user's vision output configured as follows:

* Resolution: High
* Field-of-View: High

Evaluation of blind participants using the Argus II retinal prosthesis

All blind participants who have been implanted with the Argus II Retinal Prosthesis System will be assigned to this arm of the study. This study group will perform only the subset of interventions that does not exceed the visual performance of the Argus II system.

Group Type EXPERIMENTAL

Navigation system mode: Argus Vision

Intervention Type DEVICE

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: Argus mode
* Haptics: none
* Audio: none

Since this mode only provides the standard Argus vision, it is equivalent to using the base Argus II system without the navigational aid.

Navigation system mode: Depth Vision

Intervention Type DEVICE

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: Depth mode (at Argus II resolution and field-of-view)
* Haptics: none
* Audio: none

Navigation system mode: Depth Vision with Haptic / Audio

Intervention Type DEVICE

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: Depth mode (at Argus II resolution and field-of-view)
* Haptics: yes
* Audio: yes

Navigation system mode: Haptic / Audio

Intervention Type DEVICE

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: none
* Haptics: yes
* Audio: yes

Since this mode does not provide any visual feedback, it is equivalent to using the navigational aid while completely blind, without a visual prosthesis.

Distance test vision mode: Low Resolution / Low Field-of-View

Intervention Type DEVICE

This intervention performs the two-obstacle relative-distance test with the user's vision output configured as follows:

* Resolution: Low
* Field-of-View: Low

Interventions

Learn about the drugs, procedures, or behavioral strategies being tested and how they are applied within this trial.

Navigation system mode: Argus Vision

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: Argus mode
* Haptics: none
* Audio: none

Since this mode only provides the standard Argus vision, it is equivalent to using the base Argus II system without the navigational aid.

Intervention Type DEVICE

Navigation system mode: Depth Vision

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: Depth mode (at Argus II resolution and field-of-view)
* Haptics: none
* Audio: none

Intervention Type DEVICE

Navigation system mode: Depth Vision with Haptic / Audio

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: Depth mode (at Argus II resolution and field-of-view)
* Haptics: yes
* Audio: yes

Intervention Type DEVICE

Navigation system mode: Haptic / Audio

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: none
* Haptics: yes
* Audio: yes

Since this mode does not provide any visual feedback, it is equivalent to using the navigational aid while completely blind, without a visual prosthesis.

Intervention Type DEVICE

Navigation system mode: High Field-of-View Depth Vision

This intervention uses the navigational aid with the output sensory modalities configured as follows:

* Vision: High Field-of-View Depth mode (at twice the Argus II resolution and field-of-view along each dimension)
* Haptics: none
* Audio: none

Intervention Type DEVICE

Distance test vision mode: Low Resolution / Low Field-of-View

This intervention performs the two-obstacle relative-distance test with the user's vision output configured as follows:

* Resolution: Low
* Field-of-View: Low

Intervention Type DEVICE

Distance test vision mode: Low Resolution / High Field-of-View

This intervention performs the two-obstacle relative-distance test with the user's vision output configured as follows:

* Resolution: Low
* Field-of-View: High

Intervention Type DEVICE

Distance test vision mode: High Resolution / Low Field-of-View

This intervention performs the two-obstacle relative-distance test with the user's vision output configured as follows:

* Resolution: High
* Field-of-View: Low

Intervention Type DEVICE

Distance test vision mode: High Resolution / High Field-of-View

This intervention performs the two-obstacle relative-distance test with the user's vision output configured as follows:

* Resolution: High
* Field-of-View: High

Intervention Type DEVICE

Eligibility Criteria

Check the participation requirements, including inclusion and exclusion rules, age limits, and whether healthy volunteers are accepted.

Inclusion Criteria

* Subject speaks English;
* Subject must be an adult (at least 18 years of age);
* Subject has the cognitive and communication ability to participate in the study (i.e., follow spoken directions, perform tests, and give feedback);
* Subject is willing to conduct psychophysics testing for up to 4-6 hours per day on 3-5 consecutive days;
* Subject has visual acuity of 20/40 or better (corrected);
* Subject is capable of understanding participant information materials and giving written informed consent;
* Subject is able to walk unassisted.

Criteria for inclusion of Argus II users:

* Subject is at least 25 years of age;
* Subject has been implanted with the Argus II system;
* Subject's eye has healed from surgery and the surgeon has cleared the subject for programming;
* Subject has the cognitive and communication ability to participate in the study (i.e., follow spoken directions, perform tests, and give feedback);
* Subject is willing to conduct psychophysics testing for up to 4-6 hours per day on 3-5 consecutive days;
* Subject is capable of understanding patient information materials and giving written informed consent;
* Subject is able to walk unassisted.

Exclusion Criteria

* Subject is unwilling or unable to travel to testing facility for at least 3 days of testing within a one-week timeframe;
* Subject does not speak English;
* Subject has language or hearing impairment.

Minimum Eligible Age

18 Years

Eligible Sex

ALL

Accepts Healthy Volunteers

Yes

Sponsors

Meet the organizations funding or collaborating on the study and learn about their roles.

Carnegie Mellon University

OTHER

Sponsor Role collaborator

National Eye Institute (NEI)

NIH

Sponsor Role collaborator

Johns Hopkins University

OTHER

Sponsor Role lead

Responsible Party

Identify the individual or organization who holds primary responsibility for the study information submitted to regulators.

Responsibility Role SPONSOR

Principal Investigators

Learn about the lead researchers overseeing the trial and their institutional affiliations.

Seth Billings, Ph.D.

Role: PRINCIPAL_INVESTIGATOR

Johns Hopkins University

Locations

Explore where the study is taking place and check the recruitment status at each participating site.

Johns Hopkins Medicine - Wilmer Eye Institute

Baltimore, Maryland, United States

Site Status

Johns Hopkins Applied Physics Laboratory

Laurel, Maryland, United States

Site Status

Countries

Review the countries where the study has at least one active or historical site.

United States

Provided Documents

Download supplemental materials such as informed consent forms, study protocols, or participant manuals.

Document Type: Study Protocol and Statistical Analysis Plan

Other Identifiers

Review additional registry numbers or institutional identifiers associated with this trial.

1R01EY029741-01A1

Identifier Type: NIH

Identifier Source: secondary_id

IRB00228932

Identifier Type: -

Identifier Source: org_study_id

More Related Trials

Additional clinical trials that may be relevant based on similarity analysis.

SmartHMD for Improved Mobility
NCT03781583 COMPLETED NA