Riccardo Fusaroli
Social interactions and cognition, stats, computational modeling and machine learning, complex systems, language, and mental disorders. He/Him

Jun 24, 2019, 13 tweets

#NewPrePrintOut Acoustic Measures of Prosody in Right-Hemisphere Damage: A Systematic Review and Meta-Analysis biorxiv.org/content/10.110… with @ethanweed. Thread below for a meta-reflection on the research (disillusionment and open science): 1/n

I have been interested for a while in the descriptions practitioners give of their interactions with neuropsychiatric patients, especially of the voice. People with ASD are described as monotone, sing-songy, robotic. People with schizophrenia as sluggish, monotonous. 2/n

People with RHD have "impaired prosody". About 40 years of research have produced dozens of studies & significant p-values, showing large effects on perceptual judgments (human ratings) and very heterogeneous effects on acoustic properties (physical properties of the voice) 3/n

I made some early attempts at machine-learning the sh*t out of the issue, "objectively" identifying markers and so on (oh, the naive days, e.g. pure.au.dk/portal/files/5…, with likely overfitting and leaking), before realizing I needed a more principled and informed approach.

Trying to achieve some clarity about the field, we produced 3 (so far) systematic reviews & meta-analyses of voice/prosody in ASD (biorxiv.org/content/10.110…, updated here: pure.au.dk/portal/files/1…), schizophrenia (biorxiv.org/content/10.110…) and RHD (biorxiv.org/content/10.110…) 4/n

The findings are pretty consistent: strong differences in perceptual ratings of voice between patients and non-patients (Cohen's d > 1); smaller differences in acoustic features (d around 0.2-0.4), but with huge heterogeneity between studies 5/n

likely due to heterogeneity in samples, data collection, and data processing. We also identify potential effects of task: social voice production shows perhaps bigger effects than monologic speech. 6/n
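For readers curious what sits behind numbers like these, here is a minimal sketch of DerSimonian-Laird random-effects pooling of Cohen's d, with I² as a heterogeneity summary. The effect sizes and variances below are made-up placeholders, not values from our meta-analyses.

```python
# Minimal random-effects (DerSimonian-Laird) pooling of Cohen's d.
# d and v are illustrative placeholders, not real study estimates.
import numpy as np

d = np.array([0.9, 1.3, 0.4, 0.2, 0.6])       # per-study Cohen's d
v = np.array([0.10, 0.15, 0.08, 0.05, 0.12])  # per-study sampling variances

w = 1.0 / v                                   # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)            # Cochran's Q
df = len(d) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)                 # between-study variance

w_star = 1.0 / (v + tau2)                     # random-effects weights
d_pooled = np.sum(w_star * d) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0  # % variance due to heterogeneity

print(f"pooled d = {d_pooled:.2f} +/- {1.96 * se_pooled:.2f}, I^2 = {I2:.0f}%")
```

The between-study variance (tau²) and I² are exactly the quantities that blow up when samples, tasks, and pipelines differ across studies.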

Machine learning approaches make big sweeping claims (>80% accuracy), but the details are sparse, nobody is even trying to replicate, and from experience the confounds/overfitting/etc. are huge. 7/n
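One concrete source of the leakage is splitting at the clip level, so the same speaker ends up in both training and test sets. A sketch of guarding against that with speaker-level folds, assuming scikit-learn; X, y and speaker_ids are hypothetical placeholders for acoustic features, diagnosis labels, and per-clip participant IDs:

```python
# Group cross-validation folds by participant so no speaker
# contributes clips to both train and test (avoids the leakage
# mentioned above). All data here is synthetic/illustrative.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # 200 clips x 10 acoustic features (fake)
speaker_ids = np.repeat(np.arange(40), 5)  # 40 speakers, 5 clips each
y = (speaker_ids % 2 == 0).astype(int)     # fake diagnosis label, constant within speaker

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = GroupKFold(n_splits=5)
scores = cross_val_score(clf, X, y, cv=cv, groups=speaker_ids)
print("speaker-level CV accuracy:", scores.mean().round(2))
```

Clip-level splits on the same data would typically report much rosier accuracies, which is part of why the published >80% figures are hard to take at face value.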

We thus identify some good practices for future studies (repeated measures, within-subject task variation, etc.), and strongly advocate for open science practices: open data processing/analysis scripts benchmarked against each other, 8/n

and, where possible, open data to cumulatively build larger and more representative datasets (clinical populations are heterogeneous!) 9/n
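To give a flavor of the kind of small, shareable processing script we have in mind: a sketch of pitch (F0) summaries via Praat through the parselmouth package (assumed installed; "recording.wav" is a placeholder). Publishing exactly this kind of script, with its extraction settings, is what makes pipelines benchmarkable against each other.

```python
# Sketch of an open, shareable acoustic-feature extraction step:
# F0 summaries from one recording via Praat (parselmouth bindings).
import numpy as np
import parselmouth

snd = parselmouth.Sound("recording.wav")    # placeholder file name
pitch = snd.to_pitch()                      # default Praat pitch settings
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                             # drop unvoiced frames (F0 == 0)

features = {
    "f0_mean_hz": float(np.mean(f0)),
    "f0_sd_hz": float(np.std(f0)),          # a common "monotony" proxy
    "f0_range_hz": float(np.max(f0) - np.min(f0)),
}
print(features)
```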

We are also trying to put our money where our mouth is. @ethanweed and I are running an informed follow-up study on ASD, showcasing what the systematic review is leading us to do and the consequences (sneak peek); 10/n

@AlbertoParola2 will be starting a Marie Curie postdoc with me on creating cross-national consortia to collect cross-site, cross-linguistic, theoretically informed data on voice in schizophrenia and implement appropriate machine learning and benchmarking procedures 11/n

It's freakishly slow science, but I got tired of publishing yet another study in the existing constellation, maybe adding something, maybe not. I also got tired of the nth meta-analysis after which everything proceeds as before. So how do we do better? We'll see if my approach fails :-) 12/12
