Trial Outcomes & Findings for Comparing Traditional and Biofeedback Telepractice Treatment for Residual Speech Errors (NCT04625062)

NCT ID: NCT04625062

Last Updated: 2023-02-13

Results Overview

To assess /r/ production accuracy, participants read probe lists eliciting 25 utterances of /r/ in various phonetic contexts at the start and end of each treatment session. Recorded probe words were presented in randomized order for binary rating (correct/incorrect) by naive listeners blinded to treatment condition and time point; the accuracy of each token was quantified as the percentage of "correct" ratings across 9 blinded listeners. We then computed the mean percent-correct rating for each probe administration; the change in this value from pre- to post-session ("within-session change") is the outcome measure of interest. Summary statistics report the mean and standard deviation of within-session change for each treatment condition, pooled across participants and sessions. The outcome measure was assessed using a two-tailed paired-samples t-test comparing mean within-session change between the two treatment conditions across subjects, evaluated against a superiority criterion.
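To make the computation above concrete, a minimal sketch is given below under assumed names: a long-format table of listener ratings with columns participant, session, condition, timepoint ("pre"/"post"), token, and a binary correct flag. The column names and the pandas layout are illustrative assumptions, not the study's actual data pipeline.

```python
import pandas as pd

def within_session_change(ratings: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the outcome computation described above (assumed data layout).

    'ratings' holds one row per listener judgment: columns participant, session,
    condition, timepoint ('pre' or 'post'), token, and correct (1/0).
    """
    # Accuracy of each token = percentage of "correct" ratings across blinded listeners
    token_accuracy = (
        ratings.groupby(
            ["participant", "session", "condition", "timepoint", "token"]
        )["correct"].mean()
        * 100.0
    )
    # Mean percent-correct across the probe's tokens for each probe administration
    probe_mean = token_accuracy.groupby(
        ["participant", "session", "condition", "timepoint"]
    ).mean()
    # Within-session change = post-session probe mean minus pre-session probe mean
    wide = probe_mean.unstack("timepoint")
    wide["within_session_change"] = wide["post"] - wide["pre"]
    return wide.reset_index()
```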

Recruitment status

COMPLETED

Study phase

PHASE1

Target enrollment

7 participants

Primary outcome timeframe

Change in word probe accuracy was measured in each treatment session; sessions were administered over ten weeks.

Results posted on

2023-02-13

Participant Flow

Institutional review board approval was obtained from the Biomedical Research Alliance of New York (BRANY, protocol number 18-10-393). Participants, who could take part from anywhere in the US, were recruited through flyers, listserv announcements, and social media posts.

This study used a within-subjects design. Each participant received both treatment conditions, with sessions randomly assigned to feature one condition or the other (10 sessions per condition for each participant).

Unit of analysis: Sessions

Participant milestones

Visual-acoustic Biofeedback: In visual-acoustic biofeedback treatment, the elements of motor-based treatment (i.e., auditory models and verbal descriptions of articulator placement) are enhanced with a dynamic display of the speech signal in the form of the real-time LPC (Linear Predictive Coding) spectrum. Because correct and incorrect productions of /r/ contrast acoustically in the frequency of the third formant (F3), participants are cued to make their real-time LPC spectrum match a visual target characterized by a low F3 frequency. They are encouraged to attend to the visual display while adjusting the placement of their articulators and observing how those adjustments affect F3 (a minimal sketch of this kind of LPC/F3 computation follows the milestone table below). All treatment was provided over video calls. All participants completed 10 sessions in the visual-acoustic biofeedback condition.

Motor-based Treatment: Motor-based articulation treatment involves providing auditory models and verbal descriptions of correct articulator placement, then cueing repetitive motor practice. Images and diagrams of the vocal tract are used as visual aids; however, no real-time visual display of articulatory or acoustic information is made available. All treatment was provided over video calls. All participants completed 10 sessions in the motor-based treatment condition.

Overall Study milestones (values reported as participants / sessions per participant in each condition):
  STARTED: Visual-acoustic Biofeedback 7 / 10; Motor-based Treatment 7 / 10
  COMPLETED: Visual-acoustic Biofeedback 7 / 10; Motor-based Treatment 7 / 10
  NOT COMPLETED: Visual-acoustic Biofeedback 0 / 0; Motor-based Treatment 0 / 0
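As referenced in the condition description above, the biofeedback display is built on a real-time LPC spectrum whose third formant (F3) the participant tries to lower. The following is a minimal, hypothetical sketch of that kind of computation for a single frame of audio; it is not the study's software, and the sampling rate, frame length, and LPC order mentioned in the comments are conventional values assumed for illustration.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(frame: np.ndarray, order: int) -> np.ndarray:
    """Autocorrelation-method LPC for one frame of speech samples."""
    frame = frame * np.hamming(len(frame))                        # taper the analysis frame
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # autocorrelation, lags 0..N-1
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])    # solve the normal equations
    return np.concatenate(([1.0], -a))                            # A(z) = 1 - sum_k a_k z^-k

def formant_estimates(lpc_poly: np.ndarray, sr: int) -> np.ndarray:
    """Crude formant frequencies from the angles of the LPC polynomial's complex roots.
    A production formant tracker would also screen candidates by bandwidth."""
    roots = np.roots(lpc_poly)
    roots = roots[np.imag(roots) > 0]             # keep one root from each conjugate pair
    freqs = np.angle(roots) * sr / (2.0 * np.pi)  # root angle -> frequency in Hz
    return np.sort(freqs[freqs > 90.0])           # drop near-DC roots; F1, F2, F3, ... in order

# Hypothetical usage: with sr = 16000 and a 25 ms frame (400 samples), an LPC order of
# about 2 + sr // 1000 is typical; the third value returned by formant_estimates() is a
# rough F3 estimate, the quantity the biofeedback display asks participants to lower.
```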

Reasons for withdrawal

Withdrawal data not reported

Baseline Characteristics

Comparing Traditional and Biofeedback Telepractice Treatment for Residual Speech Errors

Baseline characteristics by cohort

Cohort: Visual-acoustic Biofeedback and Motor-based Treatment (n=7 participants)

This study used a within-subjects randomization design: each participant received both treatment conditions, with sessions randomly assigned to feature one condition or the other. Both conditions were administered via telepractice (video calls). In visual-acoustic biofeedback treatment, the elements of motor-based treatment are enhanced with a real-time LPC spectrum display, and participants are cued to match a visual target characterized by a low third-formant (F3) frequency. In motor-based articulation treatment, clinicians provide auditory models and verbal descriptions of correct articulator placement and cue repetitive motor practice, using images and diagrams of the vocal tract as visual aids but no real-time display of articulatory or acoustic information. Both conditions are described in full under Participant Flow.

Age, continuous: mean 132 months (SD 21.3)
Sex: Female, 2 participants; Male, 5 participants
Ethnicity (NIH/OMB): Hispanic or Latino, 1 participant; Not Hispanic or Latino, 6 participants; Unknown or Not Reported, 0 participants
Race (NIH/OMB): American Indian or Alaska Native, 0; Asian, 0; Native Hawaiian or Other Pacific Islander, 0; Black or African American, 1; White, 5; More than one race, 1; Unknown or Not Reported, 0
Region of Enrollment: United States, 7 participants
Percent /r/ sounds correct: mean 19.4 (percentage of words rated correct; SD 18.6)

PRIMARY outcome

Timeframe: Change in word probe accuracy was measured in each treatment session; sessions were administered over ten weeks.

Population: Note that this study used a within-subjects design. Each participant received both treatment conditions, with individual sessions randomly assigned to feature one condition or the other.

To assess /r/ production accuracy, participants read probe lists eliciting 25 utterances of /r/ in various phonetic contexts at the start and end of each treatment session. Recorded probe words were presented in randomized order for binary rating (correct/incorrect) by naive listeners blinded to treatment condition and time point; the accuracy of each token was quantified as the percentage of "correct" ratings across 9 blinded listeners. We then computed the mean percent-correct rating for each probe administration; the change in this value from pre- to post-session ("within-session change") is the outcome measure of interest. Summary statistics report the mean and standard deviation of within-session change for each treatment condition, pooled across participants and sessions. The outcome measure was assessed using a two-tailed paired-samples t-test comparing mean within-session change between the two treatment conditions across subjects, evaluated against a superiority criterion.
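As a hedged illustration of the between-condition test named above, a paired-samples t-test across subjects might be run as follows. The function and array names are placeholders, and the per-subject values would be each participant's mean within-session change under each condition; no real data are shown.

```python
import numpy as np
from scipy.stats import ttest_rel

def compare_conditions(biofeedback_change: np.ndarray, motor_change: np.ndarray):
    """Two-tailed paired-samples t-test across subjects (illustrative sketch).

    Each array holds one value per participant: that participant's mean
    within-session change (in percent correct) under the named condition.
    The arrays must be aligned so that index i refers to the same participant.
    """
    t_stat, p_value = ttest_rel(biofeedback_change, motor_change)
    return t_stat, p_value
```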

Outcome measures

Condition 1: Visual-acoustic Biofeedback (n=10 sessions): visual-acoustic biofeedback treatment (behavioral) administered via telepractice; the condition is described in full under Participant Flow.

Condition 2: Motor-based Treatment (n=7 participants): motor-based articulation treatment administered via telepractice; the condition is described in full under Participant Flow.

Within-session Change in Percentage of "Correct" Ratings by Blinded Naive Listeners for /r/ Sounds Produced in Word Probes:
  Condition 1 (Visual-acoustic Biofeedback): mean change 1.4 percent correct (SD 8.3)
  Condition 2 (Motor-based Treatment): mean change 1.9 percent correct (SD 9.6)

Adverse Events

Condition 1: Visual-acoustic Biofeedback

Serious events: 0 serious events
Other events: 0 other events
Deaths: 0 deaths

Condition 2: Motor-based Treatment

Serious events: 0 serious events
Other events: 0 other events
Deaths: 0 deaths

Serious adverse events

Adverse event data not reported

Other adverse events

Adverse event data not reported

Additional Information

Dr. Tara McAllister

New York University

Phone: 212-992-9445

Results disclosure agreements

  • Principal investigator is a sponsor employee
  • Publication restrictions are in place