Comparing Traditional and Biofeedback Telepractice Treatment for Residual Speech Errors

NCT ID: NCT04625062

Last Updated: 2023-02-13

Study Results

Results available

Outcome measurements, participant flow, baseline characteristics, and adverse events have been published for this study.

Basic Information

Get a concise snapshot of the trial, including recruitment status, study phase, enrollment targets, and key timeline milestones.

Recruitment Status

COMPLETED

Clinical Phase

PHASE1

Total Enrollment

7 participants

Study Classification

INTERVENTIONAL

Study Start Date

2020-09-01

Study Completion Date

2021-03-18

Brief Summary

Review the sponsor-provided synopsis that highlights what the study is about and why it is being conducted.

This study aims to evaluate the relative efficacy of biofeedback and traditional treatment for residual speech errors (RSE) when both are delivered via telepractice. In a single-case randomization design, up to eight children with RSE will receive both visual-acoustic biofeedback and traditional treatment via telepractice. Acoustic measures of within-session change will be compared across sessions randomly assigned to each condition. It is hypothesized that participants will exhibit a clinically significant overall treatment response and that short-term measures of change will indicate that biofeedback is associated with larger increments of progress than traditional treatment.

Detailed Description

Dive into the extended narrative that explains the scientific background, objectives, and procedures in greater depth.

The COVID-19 crisis has forced speech-language pathologists to migrate from in-person delivery of speech treatment services to remote delivery via telepractice. This study will compare the efficacy of visual-acoustic biofeedback treatment versus non-biofeedback treatment in this setting. Specifically, participants will receive both visual-acoustic biofeedback treatment and non-biofeedback treatment via telepractice (Zoom call with screen-sharing) in a single-case randomization design. The hypothesis of interest is that sessions featuring visual-acoustic biofeedback will be associated with larger short-term gains than sessions featuring non-biofeedback treatment. To test this hypothesis, the study team will recruit up to 8 participants who will receive an initial treatment orientation followed by an equal dose of both types of treatment (10 sessions of visual-acoustic biofeedback and 10 sessions of non-biofeedback treatment). Participants will complete approximately two sessions per week via telepractice; each week will feature one session of each type, randomly ordered. They will also complete 4 pre-treatment baseline sessions and 3 post-treatment maintenance sessions to evaluate the overall magnitude of change over the course of treatment.

Conditions

See the medical conditions and disease areas that this research is targeting or investigating.

Speech Sound Disorder

Study Design

Understand how the trial is structured, including allocation methods, masking strategies, primary purpose, and other design elements.

Allocation Method

RANDOMIZED

Intervention Model

CROSSOVER

In this within-subjects single-case randomization design, each participant will receive an equal number of sessions of visual-acoustic biofeedback and traditional treatment (n = 10 each), with randomized allocation of treatment types to individual sessions. Randomization will be blocked, with each week of treatment serving as a block; within each week/block, one session will be randomly assigned to feature visual-acoustic biofeedback and one to feature traditional treatment.

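The blocked randomization described above can be sketched as follows. This is a hypothetical illustration; the function and names are not from the study protocol, which does not specify its randomization software.

```python
import random

def randomize_schedule(n_weeks=10, seed=None):
    """Randomly order one biofeedback and one traditional session within
    each weekly block (illustrative sketch, not study code)."""
    rng = random.Random(seed)
    schedule = []
    for week in range(1, n_weeks + 1):
        pair = ["visual-acoustic biofeedback", "traditional"]
        rng.shuffle(pair)  # random within-block session order
        schedule.append((week, pair[0], pair[1]))
    return schedule
```

Because each weekly block contains exactly one session of each type, a 10-week schedule necessarily delivers the equal dose (10 + 10 sessions) described above, while the within-week order remains random.
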
Primary Study Purpose

TREATMENT

Blinding Strategy

SINGLE

Outcome Assessors
All perceptual ratings will be obtained from blinded, naive listeners recruited through online crowdsourcing. Following protocols refined in previously published research, binary rating responses will be aggregated over at least 9 unique listeners per token.
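A minimal sketch of such per-token pooling is shown below. The study specifies only that binary ratings are aggregated over at least 9 unique listeners; the proportion-correct rule and function name here are illustrative assumptions.

```python
def aggregate_token(ratings, min_listeners=9):
    """Pool binary listener judgments (1 = correct, 0 = incorrect) for one
    token into a proportion-correct score. Refuses to aggregate when fewer
    than `min_listeners` unique ratings are available (hypothetical rule)."""
    if len(ratings) < min_listeners:
        raise ValueError(f"need at least {min_listeners} ratings, got {len(ratings)}")
    return sum(ratings) / len(ratings)
```
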

Study Groups

Review each arm or cohort in the study, along with the interventions and objectives associated with them.

Visual-acoustic biofeedback

Visual-acoustic biofeedback treatment (behavioral) administered via telepractice

Group Type EXPERIMENTAL

Visual-acoustic biofeedback

Intervention Type BEHAVIORAL

In visual-acoustic biofeedback treatment, participants view a dynamic display of the speech signal in the form of a real-time LPC (linear predictive coding) spectrum. Because correct vs. incorrect productions of /r/ contrast acoustically in the frequency of the third formant (F3), participants are cued to make their real-time LPC spectrum match a visual target characterized by a low F3 frequency.

Motor-based treatment

Motor-based articulation treatment administered via telepractice

Group Type EXPERIMENTAL

Motor-based treatment

Intervention Type BEHAVIORAL

Motor-based articulation treatment involves providing auditory models and verbal descriptions of correct articulator placement, then cueing repetitive motor practice. Images and diagrams of the vocal tract are used as visual aids; however, no real-time visual display of articulatory or acoustic information is made available.

Interventions

Learn about the drugs, procedures, or behavioral strategies being tested and how they are applied within this trial.

Visual-acoustic biofeedback

In visual-acoustic biofeedback treatment, participants view a dynamic display of the speech signal in the form of a real-time LPC (linear predictive coding) spectrum. Because correct vs. incorrect productions of /r/ contrast acoustically in the frequency of the third formant (F3), participants are cued to make their real-time LPC spectrum match a visual target characterized by a low F3 frequency.
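As background on the analysis underlying such a display, LPC coefficients can be estimated from a windowed speech frame and the roots of the resulting polynomial converted to candidate formant frequencies (a full /r/ analysis would then track the third of these, F3). This is a generic autocorrelation-method sketch with illustrative parameters, not the real-time software or settings used in the study.

```python
import numpy as np

def lpc_coefficients(frame, order=12):
    """Estimate LPC coefficients for one frame via the autocorrelation
    (Yule-Walker) method. Order and windowing are illustrative choices."""
    frame = frame * np.hamming(len(frame))
    # Autocorrelation lags r[0..N-1] of the windowed frame.
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Solve the Toeplitz system R a = r[1..p] for the predictor coefficients.
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))  # A(z) = 1 - sum_k a_k z^{-k}

def formant_frequencies(a, fs):
    """Convert LPC polynomial roots to candidate formant frequencies in Hz."""
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]           # one of each conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)  # pole angle -> frequency
    return np.sort(freqs)
```

For example, analyzing a synthetic damped sinusoid at 1000 Hz (sampled at 8 kHz) with a second-order model recovers a single resonance near 1000 Hz; real speech frames require a higher order and heuristics to select true formants among the pole frequencies.
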

Intervention Type BEHAVIORAL

Motor-based treatment

Motor-based articulation treatment involves providing auditory models and verbal descriptions of correct articulator placement, then cueing repetitive motor practice. Images and diagrams of the vocal tract are used as visual aids; however, no real-time visual display of articulatory or acoustic information is made available.

Intervention Type BEHAVIORAL

Eligibility Criteria

Check the participation requirements, including inclusion and exclusion rules, age limits, and whether healthy volunteers are accepted.

Inclusion Criteria

* Must be between 9;0 and 15;11 (years;months) of age at the time of enrollment.
* Must speak English as the dominant language (i.e., must have begun learning English by age 2, per parent report).
* Must speak a rhotic dialect of English.
* Must pass a brief examination of oral structure and function.
* Must exhibit less than thirty percent accuracy, based on trained listener ratings, on a probe list eliciting /r/ in various phonetic contexts at the word level.

Exclusion Criteria

* Must not receive a T score more than 1.3 standard deviations (SD) below the mean on the Wechsler Abbreviated Scale of Intelligence-2 (WASI-2) Matrix Reasoning.
* Must not receive a scaled score below 6 on the CELF-5 Recalling Sentences or Formulated Sentences subtests.
* Must not have a history of sensorineural hearing loss or a failed infant hearing screening.
* Must not have an existing diagnosis of developmental disability; a major neurobehavioral syndrome such as cerebral palsy, Down syndrome, or autism spectrum disorder; or a major neurological disorder (e.g., epilepsy, agenesis of the corpus callosum) or insult (e.g., traumatic brain injury, stroke, or tumor resection).
* Must not show clinically significant signs of apraxia of speech or dysarthria.
* Must not have major orthodontia that could interfere with tongue-palate contact.
Minimum Eligible Age

9 Years

Maximum Eligible Age

15 Years

Eligible Sex

ALL

Accepts Healthy Volunteers

No

Sponsors

Meet the organizations funding or collaborating on the study and learn about their roles.

Syracuse University

OTHER

Sponsor Role collaborator

Montclair State University

OTHER

Sponsor Role collaborator

National Institute on Deafness and Other Communication Disorders (NIDCD)

NIH

Sponsor Role collaborator

New York University

OTHER

Sponsor Role lead

Responsible Party

Identify the individual or organization who holds primary responsibility for the study information submitted to regulators.

Responsibility Role SPONSOR

Locations

Explore where the study is taking place and check the recruitment status at each participating site.

Montclair State University

Bloomfield, New Jersey, United States

Syracuse University

Syracuse, New York, United States

Countries

Review the countries where the study has at least one active or historical site.

United States

References

Explore related publications, articles, or registry entries linked to this study.

Dugan SH, Silbert N, McAllister T, Preston JL, Sotto C, Boyce SE. Modelling category goodness judgments in children with residual sound errors. Clin Linguist Phon. 2019;33(4):295-315. doi: 10.1080/02699206.2018.1477834. Epub 2018 May 24.

Reference Type BACKGROUND
PMID: 29792525 (View on PubMed)

Campbell H, Harel D, Hitchcock E, McAllister Byun T. Selecting an acoustic correlate for automated measurement of American English rhotic production in children. Int J Speech Lang Pathol. 2018 Nov;20(6):635-643. doi: 10.1080/17549507.2017.1359334. Epub 2017 Aug 10.

Reference Type BACKGROUND
PMID: 28795872 (View on PubMed)

Campbell H, McAllister Byun T. Deriving individualised /r/ targets from the acoustics of children's non-rhotic vowels. Clin Linguist Phon. 2018;32(1):70-87. doi: 10.1080/02699206.2017.1330898. Epub 2017 Jul 13.

Reference Type BACKGROUND
PMID: 28703653 (View on PubMed)

McAllister Byun T. Efficacy of Visual-Acoustic Biofeedback Intervention for Residual Rhotic Errors: A Single-Subject Randomization Study. J Speech Lang Hear Res. 2017 May 24;60(5):1175-1193. doi: 10.1044/2016_JSLHR-S-16-0038.

Reference Type BACKGROUND
PMID: 28389677 (View on PubMed)

McAllister Byun T, Tiede M. Perception-production relations in later development of American English rhotics. PLoS One. 2017 Feb 16;12(2):e0172022. doi: 10.1371/journal.pone.0172022. eCollection 2017.

Reference Type BACKGROUND
PMID: 28207800 (View on PubMed)

McAllister Byun T, Campbell H. Differential Effects of Visual-Acoustic Biofeedback Intervention for Residual Speech Errors. Front Hum Neurosci. 2016 Nov 11;10:567. doi: 10.3389/fnhum.2016.00567. eCollection 2016.

Reference Type BACKGROUND
PMID: 27891084 (View on PubMed)

McAllister Byun T, Halpin PF, Szeredi D. Online crowdsourcing for efficient rating of speech: a validation study. J Commun Disord. 2015 Jan-Feb;53:70-83. doi: 10.1016/j.jcomdis.2014.11.003. Epub 2014 Dec 15.

Reference Type BACKGROUND
PMID: 25578293 (View on PubMed)

Harel D, Hitchcock ER, Szeredi D, Ortiz J, McAllister Byun T. Finding the experts in the crowd: Validity and reliability of crowdsourced measures of children's gradient speech contrasts. Clin Linguist Phon. 2017;31(1):104-117. doi: 10.3109/02699206.2016.1174306. Epub 2016 Jun 7.

Reference Type BACKGROUND
PMID: 27267258 (View on PubMed)

Hitchcock ER, Harel D, Byun TM. Social, Emotional, and Academic Impact of Residual Speech Errors in School-Aged Children: A Survey Study. Semin Speech Lang. 2015 Nov;36(4):283-94. doi: 10.1055/s-0035-1562911. Epub 2015 Oct 12.

Reference Type BACKGROUND
PMID: 26458203 (View on PubMed)

Byun TM, Hitchcock ER. Investigating the use of traditional and spectral biofeedback approaches to intervention for /r/ misarticulation. Am J Speech Lang Pathol. 2012 Aug;21(3):207-21. doi: 10.1044/1058-0360(2012/11-0083). Epub 2012 Mar 21.

Reference Type BACKGROUND
PMID: 22442281 (View on PubMed)

Provided Documents

Download supplemental materials such as informed consent forms, study protocols, or participant manuals.

Document Type: Study Protocol

Document Type: Statistical Analysis Plan

Other Identifiers

Review additional registry numbers or institutional identifiers associated with this trial.

R01DC017476

Identifier Type: NIH

Identifier Source: secondary_id

C-RESULTS-TPT

Identifier Type: -

Identifier Source: org_study_id
