Sensory Integration of Auditory and Visual Cues in Diverse Contexts

NCT ID: NCT04479761

Last Updated: 2025-02-05

Study Results

Results available

Outcome measurements, participant flow, baseline characteristics, and adverse events have been published for this study.

View full results

Basic Information

Get a concise snapshot of the trial, including recruitment status, study phase, enrollment targets, and key timeline milestones.

Recruitment Status

COMPLETED

Clinical Phase

NA

Total Enrollment

107 participants

Study Classification

INTERVENTIONAL

Study Start Date

2021-09-15

Study Completion Date

2024-12-30

Brief Summary

Review the sponsor-provided synopsis that highlights what the study is about and why it is being conducted.

More than one-third of adults in the United States seek medical attention for vestibular disorders and hearing loss, disorders that can triple one's fall risk and have a profound effect on one's participation in activities of daily living. Hearing loss has been shown to reduce balance performance and could be one modifiable risk factor for falls. Patients with vestibular hypofunction tend to avoid busy, hectic, visually complex, and loud environments because these environments provoke dizziness and imbalance. While the visual impact on balance is well known, less is known about the importance of sounds. In the search for a possible mechanism to explain a relationship between hearing and balance control, some studies have suggested that sounds may serve as an auditory anchor, providing spatial cues for balance, similar to vision. However, the majority of these studies tested healthy adults' responses to sounds with vision blocked. It is also possible that a relationship between hearing loss and balance problems is mediated by an undetected vestibular deficit. Understanding the role of auditory input in balance control may help prevent falls in people with vestibular disorders and hearing loss. Therefore, there is a critical need for a systematic investigation of balance performance in response to simultaneous visual and auditory perturbations, similar to real-life situations.

To address this need, the investigators used recent advances in virtual reality technology and developed a head-mounted display (HMD) protocol of immersive environments, combining specific manipulations of visuals and sounds, including generated sounds (i.e., white noise) and real-world recorded sounds (e.g., a train approaching a station). This research will answer the following questions: (1) Are sounds used for balance, and if so, via what mechanism? (2) Do individuals with single-sided hearing loss have a balance problem even without any vestibular issues? (3) Are those with vestibular loss destabilized by sounds? To address these questions, the following specific aims will be investigated in individuals with unilateral peripheral vestibular hypofunction (n=45), individuals with single-sided deafness (n=45), and age-matched controls (n=45): Aim 1: Establish the role of generated and natural sounds in postural control in different visual environments; Aim 2: Determine the extent to which a static white noise can improve balance within a dynamic visual environment.

Detailed Description

Dive into the extended narrative that explains the scientific background, objectives, and procedures in greater depth.

Introduction: Aim 1 is to establish the role of generated and natural sounds in postural control given the visual environment and sensory loss. To that end, the investigators will measure postural sway in individuals with unilateral peripheral vestibular hypofunction (n=45), individuals with single-sided deafness (SSD) (n=45), and age-matched controls (n=45). Participants will be tested in an immersive virtual reality environment displaying an abstract 3-wall display of stars or a subway station. Within each environment, the investigators will compare changes in postural sway in response to visual perturbations (static, dynamic) and auditory perturbations (no sound vs. dynamic sound, i.e., rhythmic white noise in the stars environment or natural sounds, such as moving trains, in the subway environment). Aim 2 is to determine the extent to which a static white noise can improve balance (reduce postural sway) within a dynamic visual environment in individuals with and without sensory loss. To accomplish this aim, the 3 groups of participants will be tested within the same visual environment, but here the investigators will compare their sway within a sound-free dynamic visual environment to their sway with static white noise.

System: Visuals were designed in C# using the standard Unity Engine version 2018.1.8f1 (64-bit) (©Unity Tech., San Francisco, CA, USA). The scenes will be delivered via an HTC Vive headset (Taoyuan City, Taiwan) controlled by a Dell Alienware 15 R3 laptop (Round Rock, TX, USA). The Vive has built-in positional tracking operating at 60 Hz and a refresh rate of 90 Hz. Sounds will be delivered via Bose QuietComfort 35 II wireless headphones (Bose Corporation, Framingham, MA, USA) with active noise cancellation and 360º spatial audio. The process of creating auditory cues included over 20 hours of sound field recording in New York City based on the targeted scenes and their intensity levels. Auditory cues were captured with the Sennheiser Ambeo microphone in first-order Ambisonics format. The background sounds were merged through a sound design process that simulated the detailed environmental sounds present within the natural environment to develop a real-world sonic representation. The audio files were processed in Wwise and integrated into Unity. Postural sway will be recorded at 100 Hz by Qualisys software using a Kistler 5233A force platform (Winterthur, Switzerland).
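For illustration only, the minimal Unity C# sketch below shows how a single 60-second trial could toggle the visual condition (static vs. dynamic scene motion) and the auditory condition (sound on or off). All class, field, and parameter names, and the oscillation values, are hypothetical; the study's actual implementation used Wwise-integrated Ambisonic audio and the HTC Vive/SteamVR pipeline, which this sketch does not reproduce.

```csharp
// Hypothetical sketch (not from the study's codebase): run one 60-second trial
// by toggling motion of the scene root and starting/stopping a sound source.
using System.Collections;
using UnityEngine;

public class TrialController : MonoBehaviour
{
    [SerializeField] private Transform sceneRoot;     // root of the stars or subway scene
    [SerializeField] private AudioSource soundSource; // white noise or recorded subway audio
    [SerializeField] private bool dynamicVisuals;     // static vs. moving visual condition
    [SerializeField] private bool playSound;          // no-sound vs. sound condition
    [SerializeField] private float trialSeconds = 60f;           // scenes are 60 s long
    [SerializeField] private float oscillationAmplitude = 0.05f; // m, placeholder value
    [SerializeField] private float oscillationFrequency = 0.25f; // Hz, placeholder value

    private Vector3 initialPosition;
    private bool trialRunning;

    private void Start()
    {
        initialPosition = sceneRoot.position;
        StartCoroutine(RunTrial());
    }

    private IEnumerator RunTrial()
    {
        trialRunning = true;
        if (playSound) soundSource.Play();

        yield return new WaitForSeconds(trialSeconds);

        trialRunning = false;
        if (playSound) soundSource.Stop();
        sceneRoot.position = initialPosition;
    }

    private void Update()
    {
        // Dynamic visual condition: oscillate the whole scene in the
        // anterior-posterior direction to perturb visual self-motion cues.
        if (trialRunning && dynamicVisuals)
        {
            float offset = oscillationAmplitude *
                           Mathf.Sin(2f * Mathf.PI * oscillationFrequency * Time.time);
            sceneRoot.position = initialPosition + new Vector3(0f, 0f, offset);
        }
    }
}
```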

Data Collection: Potentially eligible participants will complete a demographics form and go through the following diagnostic screening at the Ear Institute: Caloric Test, Video Head Impulse Test (vHIT), Ocular / Cervical Vestibular Evoked Myogenic Potential, and Audiogram. Visual and somatosensory screening will be done at the Ear Institute as well. This first session is expected to take 2.5 hours to complete. Participants will receive questionnaires to complete at home or at the next session. The Dizziness Handicap Inventory (DHI) was designed to identify difficulties that a patient may be experiencing because of dizziness. The Activities-Specific Balance Confidence (ABC) scale is a measure of confidence in performing various ambulatory activities without falling or feeling 'unsteady'. The State-Trait Anxiety Inventory (STAI) assesses the severity of anxiety symptoms and a generalized tendency to be anxious. The Speech, Spatial and Quality of Hearing 12-item Scale (SSQ12) is a valid, short version of the original SSQ that provides insight into the day-to-day impact of hearing loss. The virtual reality protocol (testing by the PI at the NYU Human Performance Laboratory) includes 12 conditions: 2 environments (an abstract display of stars, a subway station) x 2 visuals (moving, static) x 3 sounds (dynamic, none, static white noise), each repeated 3 times for a total of 36 trials. The protocol will be randomized and completed over 1-2 sessions of up to 90 minutes each, as needed. Sounds will be played at the highest level that is comfortable to the participant. Scenes are 60 seconds long. Throughout all sessions, participants will complete the Simulator Sickness Questionnaire, which is used to monitor their symptoms.
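As a sketch of the trial bookkeeping implied by this description (not the study's actual software), the following C# snippet enumerates the 2 environments x 2 visuals x 3 sounds = 12 conditions, repeats each 3 times, and shuffles the resulting 36 trials into a randomized presentation order. All names are hypothetical.

```csharp
// Hypothetical sketch: build and randomize the 36-trial schedule
// (12 conditions x 3 repetitions) described in the protocol.
using System;
using System.Collections.Generic;

public static class TrialSchedule
{
    private static readonly string[] Environments = { "Stars", "Subway" };
    private static readonly string[] Visuals = { "Static", "Dynamic" };
    private static readonly string[] Sounds = { "None", "Dynamic", "StaticWhiteNoise" };
    private const int Repetitions = 3;

    public static List<(string Env, string Visual, string Sound)> Build(int seed)
    {
        var trials = new List<(string, string, string)>();
        foreach (var env in Environments)
            foreach (var vis in Visuals)
                foreach (var snd in Sounds)
                    for (int r = 0; r < Repetitions; r++)
                        trials.Add((env, vis, snd));

        // Fisher-Yates shuffle for a randomized presentation order.
        var rng = new Random(seed);
        for (int i = trials.Count - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);
            (trials[i], trials[j]) = (trials[j], trials[i]);
        }
        return trials; // 36 trials in randomized order
    }
}
```

A per-participant seed (e.g., a participant ID) would give each person their own randomized order while keeping the schedule reproducible.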

Data Analysis: For each of the 3 measures of interest and for each environment, we will fit a linear mixed effects model. Each model will include main effects of group, visual condition, and auditory condition, as well as all two- and three-way interactions. The models will also control for caloric and video head impulse test (vHIT) results as well as age-related hearing loss (ARHL) and age. For Aim 1, we will assess the significance of contrasts between no sounds and dynamic sounds for the different visual conditions and groups. For Aim 2, the same will be done for contrasts between no sounds and static sounds. These models estimate the difference in visual weighting and reweighting between the groups, maximizing the information we can obtain from the data by accounting for the inherent multi-level study design (person, conditions, repetitions). Since each person completes multiple trials for each condition, the linear mixed effects model accounts for these sources of variability. P-values for the fixed effects will be calculated using the Satterthwaite approximation for the degrees of freedom of the t-distribution. In addition, we will descriptively explore the relationship between DP, area, self-reported outcomes (DHI, ABC, STAI, SSQ12), and age.
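The record does not give the exact model formula or random-effects structure; as an illustrative sketch only, a specification consistent with this description (fitted separately for each environment and each sway measure) might be

$$
y_{ijk} = \beta_0 + f(\text{group}_i,\ \text{visual}_j,\ \text{sound}_j) + \gamma_1\,\text{caloric}_i + \gamma_2\,\text{vHIT}_i + \gamma_3\,\text{ARHL}_i + \gamma_4\,\text{age}_i + u_i + \varepsilon_{ijk},
$$

where $y_{ijk}$ is the sway outcome for participant $i$ in condition $j$, repetition $k$; $f(\cdot)$ collects the main effects of group, visual condition, and auditory condition plus all two- and three-way interactions; $u_i \sim \mathcal{N}(0,\sigma_u^2)$ is a per-participant random intercept accounting for the repeated conditions and repetitions; $\varepsilon_{ijk} \sim \mathcal{N}(0,\sigma^2)$ is the residual; and fixed-effect p-values are computed with Satterthwaite degrees of freedom.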

Conditions

See the medical conditions and disease areas that this research is targeting or investigating.

Vestibular Disorder

Hearing Loss, Sensorineural

Study Design

Understand how the trial is structured, including allocation methods, masking strategies, primary purpose, and other design elements.

Allocation Method

NA

Intervention Model

SINGLE_GROUP

We will have 3 groups: people with unilateral vestibular loss, people with single-sided hearing loss and controls. All groups will go through the same procedure.
Primary Study Purpose

DIAGNOSTIC

Blinding Strategy

NONE

Study Groups

Review each arm or cohort in the study, along with the interventions and objectives associated with them.

Virtual Reality

Participants will be wearing a virtual reality headset and observing 2 types of scenes: abstract (a display of stars) or contextual (a subway station).

Group Type EXPERIMENTAL

Visual and Auditory Cues

Intervention Type BEHAVIORAL

Within each scene there will be 2 levels of visual input (static or dynamic) combined with 3 levels of sounds (static, none or dynamic). Postural responses to each combination will be evaluated in order to assess the role of generated and natural sounds in postural control and whether static sounds can improve balance within dynamic virtual environments.

Interventions

Learn about the drugs, procedures, or behavioral strategies being tested and how they are applied within this trial.

Visual and Auditory Cues

Within each scene there will be 2 levels of visual input (static or dynamic) combined with 3 levels of sounds (static, none or dynamic). Postural responses to each combination will be evaluated in order to assess the role of generated and natural sounds in postural control and whether static sounds can improve balance within dynamic virtual environments.

Intervention Type BEHAVIORAL

Eligibility Criteria

Check the participation requirements, including inclusion and exclusion rules, age limits, and whether healthy volunteers are accepted.

Inclusion Criteria

Group 1: Unilateral peripheral vestibular hypofunction and normal hearing, e.g., vestibular neuritis.

A complaint of head motion provoked instability or dizziness affecting their functional mobility and quality of life;
at least 1 positive finding indicating unilateral vestibular hypofunction on the following clinical tests: head thrust, subjective visual vertical and horizontal, post head shaking nystagmus, spontaneous and gaze holding nystagmus;
a score of at least 16 (mild handicap) on the Dizziness Handicap Inventory (DHI).

Meeting at least 1 of the following diagnostic criteria: 25% or greater unilateral weakness on caloric testing; low gain (< 0.8) on the Video Head Impulse Test (vHIT); ocular vestibular evoked myogenic potential (oVEMP) amplitude asymmetry greater than 34%; cervical VEMP (cVEMP) amplitude asymmetry greater than 40%. Normal hearing, defined as an unaided pure-tone average (PTA) < 26 dB HL (0.5-4 kHz) bilaterally.

Group 2: Acquired severe/profound unilateral hearing loss (i.e., single-sided deafness [SSD]), no evidence of retrocochlear pathology on MRI, and no active complaint of dizziness (DHI score < 10) or imbalance. SSD will be defined as an unaided pure-tone average (PTA) of hearing thresholds at 0.5, 1, 2, and 4 kHz in the affected ear > 70 dB HL with normal hearing in the contralateral ear (the PTA formula is given after the inclusion criteria). Normal hearing will be defined as an unaided PTA < 26 dB HL (0.5-4 kHz). This is considered healthy hearing according to the World Health Organization.

Group 3: Healthy controls who are matched for age and sex with group 1.

For those above 65 years of age, symmetric age-related hearing loss (ARHL) in the mild hearing loss range (an unaided PTA < 40 dB HL, 0.5-4 kHz) is permitted.
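For reference, the pure-tone average used throughout these criteria is the mean of the unaided hearing thresholds (in dB HL) at the four listed frequencies:

$$
\mathrm{PTA} = \tfrac{1}{4}\left(T_{0.5\,\mathrm{kHz}} + T_{1\,\mathrm{kHz}} + T_{2\,\mathrm{kHz}} + T_{4\,\mathrm{kHz}}\right)
$$

So, for example, Group 2 requires PTA > 70 dB HL in the affected ear and PTA < 26 dB HL in the contralateral ear, while controls over 65 may have a symmetric PTA below 40 dB HL.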

Exclusion Criteria

A medical diagnosis of peripheral neuropathy; lack of protective sensation based on the Semmes-Weinstein 5.07 Monofilament Test; conductive hearing loss or air-bone gap; visual impairment worse than 20/63 (the NYS Department of Motor Vehicles cutoff for driving) on the Early Treatment Diabetic Retinopathy Study (ETDRS) Acuity Test that cannot be corrected with lenses; pregnancy; any neurological condition interfering with balance or walking (e.g., multiple sclerosis, Parkinson's disease, stroke); acute musculoskeletal pain at time of testing; currently seeking medical care for another orthopaedic condition; inability to read an informed consent form in English, Spanish, or Chinese. Control participants will be excluded for any positive finding on the vestibular diagnostic testing, a history of vestibular symptoms (dizziness, vertigo), or any hearing loss that does not fit the ARHL criteria specified above.

Patients with vestibular hypofunction will be excluded if they are diagnosed with an unstable peripheral lesion, e.g., Meniere's Disease, Perilymphatic Fistula, Superior Canal Dehiscence, or Acoustic Neuroma.
Minimum Eligible Age

18 Years

Eligible Sex

ALL

Accepts Healthy Volunteers

Yes

Sponsors

Meet the organizations funding or collaborating on the study and learn about their roles.

The New York Eye & Ear Infirmary

OTHER

Sponsor Role collaborator

New York University

OTHER

Sponsor Role lead

Responsible Party

Identify the individual or organization who holds primary responsibility for the study information submitted to regulators.

Responsibility Role SPONSOR

Principal Investigators

Learn about the lead researchers overseeing the trial and their institutional affiliations.

Anat V Lubetzky, PhD

Role: PRINCIPAL_INVESTIGATOR

New York University

Locations

Explore where the study is taking place and check the recruitment status at each participating site.

New York Eye and Ear Infirmary of Mount Sinai

New York, New York, United States

Site Status

New York University Physical Therapy Department

New York, New York, United States

Site Status

Countries

Review the countries where the study has at least one active or historical site.

United States

References

Explore related publications, articles, or registry entries linked to this study.

Lubetzky AV, Cosetti M, Harel D, Sherrod M, Wang Z, Roginska A, Kelly J. Real sounds influence postural stability in people with vestibular loss but not in healthy controls. PLoS One. 2025 Jan 24;20(1):e0317955. doi: 10.1371/journal.pone.0317955. eCollection 2025.

Reference Type DERIVED
PMID: 39854326 (View on PubMed)

Provided Documents

Download supplemental materials such as informed consent forms, study protocols, or participant manuals.

Document Type: Study Protocol

View Document

Document Type: Statistical Analysis Plan

View Document

Document Type: Informed Consent Form

View Document

Other Identifiers

Review additional registry numbers or institutional identifiers associated with this trial.

20-0312

Identifier Type: -

Identifier Source: org_study_id

More Related Trials

Additional clinical trials that may be relevant based on similarity analysis.

Gaining Insight Into Dual Sensory Loss
NCT06362213 NOT_YET_RECRUITING
Cognitive Screening of Patients with Hearing Loss
NCT04672174 ENROLLING_BY_INVITATION NA