Do you want to do a psychology experiment while following best practices in open science? My collaborators and I have created Experimentology, a new open web textbook (to be published by MIT Press but free online forever).
The book is intended for advanced undergrads or grad students, and is designed around the flow of an experimental project, from planning through design, execution, and reporting, with open science concepts like reproducibility, data sharing, and preregistration woven throughout.
We start by thinking through what an experiment is, highlighting the role of randomization in making causal claims and introducing DAGs (causal graphs) as a tool for thinking about them. experimentology.io/1-experiments
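If you want to play with a DAG right now, here's a minimal sketch in R using the dagitty package (my choice for illustration here, not necessarily the book's; variable names are made up):

```r
library(dagitty)  # install.packages("dagitty") if needed

# An observational design: a confounder Z drives both the
# treatment X and the outcome Y
obs <- dagitty("dag { Z -> X; Z -> Y; X -> Y }")
adjustmentSets(obs, exposure = "X", outcome = "Y")
#> { Z }  -- we'd have to measure and adjust for Z

# A randomized experiment: assignment R is the only cause of X,
# so the back-door path through Z is cut
rct <- dagitty("dag { R -> X; Z -> Y; X -> Y }")
adjustmentSets(rct, exposure = "X", outcome = "Y")
#> {}  -- randomization licenses the causal claim with no adjustment
```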
We introduce issues of reproducibility, replicability, and robustness, and review the meta-science literature on each. We also give a treatment of ethical frameworks for human subjects research and the ethical imperative for open science. experimentology.io/3-replication
In our chapters on statistics, we introduce estimation and inference from both Bayesian and frequentist perspectives. Our emphasis is on model-building and data description, rather than on dichotomous p<.05 inference. experimentology.io/7-models
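For a flavor of the estimation-first approach, here's a toy sketch with simulated data (illustrative only, not an excerpt from the book):

```r
set.seed(42)
# Simulate a two-condition experiment with a true effect of 0.4 SD
d <- data.frame(condition = rep(c("control", "treatment"), each = 50))
d$y <- rnorm(100, mean = ifelse(d$condition == "treatment", 0.4, 0), sd = 1)

# Fit a model and report the estimate with its uncertainty,
# rather than just asking whether p < .05
fit <- lm(y ~ condition, data = d)
coef(fit)["conditiontreatment"]       # point estimate of the effect
confint(fit)["conditiontreatment", ]  # 95% confidence interval
```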
Next, we move to the meat of the book, with chapters on measurement, design, and sampling. I'm very proud of these chapters because I don't know of any similar treatment of these topics, and they are critical for experimentalists! experimentology.io/8-measurement
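One concrete sampling question those chapters speak to: how many participants do you need? A back-of-the-envelope sketch in base R (the numbers here are illustrative, not the book's recommendations):

```r
# Sample size for a between-subjects comparison, assuming a
# medium-ish standardized effect (d = 0.5), alpha = .05, 80% power
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80)
#> n ~ 64 per group

# The same calculation for a small effect (d = 0.2): sample size
# requirements grow fast as expected effects shrink
power.t.test(delta = 0.2, sd = 1, sig.level = 0.05, power = 0.80)
#> n ~ 394 per group
```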
How do you organize your files for sharing? Should you include manipulation checks? What are best practices for piloting? The next section of the book has chapters on preregistration, data collection, and project management. experimentology.io/11-prereg
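On the file-organization question, here's one common project skeleton, sketched in R (folder names are hypothetical; the chapter lays out its own recommendations):

```r
# A minimal project skeleton for a shareable experiment repo
dirs <- c("data/raw",        # read-only raw data, never edited by hand
          "data/processed",  # derived data, regenerated by scripts
          "analysis",        # analysis scripts, numbered in run order
          "materials",       # stimuli and experiment code
          "paper")           # manuscript source
invisible(lapply(dirs, dir.create, recursive = TRUE, showWarnings = FALSE))
```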
The final section contains chapters on presenting and interpreting research, including writing, visualization, and meta-analysis. experimentology.io/14-writing
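For the meta-analysis material, here's a minimal random-effects sketch using the metafor package and its bundled BCG vaccine trial data (metafor is one standard tool; I'm not claiming it's the book's choice):

```r
library(metafor)  # install.packages("metafor") if needed

# Compute log risk ratios and their variances for the classic
# BCG trials dataset that ships with the package
dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
              ci = cpos, di = cneg, data = dat.bcg)

# Random-effects model pooling across studies
res <- rma(yi, vi, data = dat)
summary(res)

# Forest plot: per-study estimates plus the pooled estimate
forest(res)
```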
Throughout, the book features case studies, "accident reports" (issues in the published literature), code boxes for learning how to reproduce our examples, and boxes highlighting ethical issues that come up during research.
We also have four "tools" appendices: introductions to RMarkdown, GitHub, the tidyverse, and ggplot.
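If you haven't met these tools, the flavor is something like this (a throwaway example on R's built-in mtcars data, not from the appendices):

```r
library(tidyverse)

# A typical tidyverse pipeline: group, summarize, then plot
mtcars %>%
  group_by(cyl) %>%
  summarise(mean_mpg = mean(mpg)) %>%
  ggplot(aes(x = factor(cyl), y = mean_mpg)) +
  geom_col() +
  labs(x = "Cylinders", y = "Mean MPG")
```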
Use Experimentology in your methods course! We include a guide for instructors with sample schedules and projects, and we'd love to get your feedback on how the material works in both undergrad and grad courses. experimentology.io/E-instructors
Experimentology is still a work in progress, and we're releasing it in part to gather feedback on errors, omissions, and ways we can improve the presentation of complex topics. Please don't hesitate to reach out or to log issues on our issue tracker.
For two years, @mbraginsky, @danyurovsky, Virginia Marchman, and I have been working on a book called "Variability and Consistency in Early Language Learning: The Wordbank Project" (@mitpress).
We look at child language using a big dataset of parent reports of children's vocabulary from wordbank.stanford.edu, w/ 75k kids and 25 languages. (Data are from MacArthur-Bates CDI and variants). Surprisingly, parent report is both reliable and valid! langcog.github.io/wordbank-book/…
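If you want to pull the same data yourself, there's an R package, wordbankr, for the Wordbank database. A sketch (function and argument names are from my memory of the package's API and may have changed; treat them as assumptions):

```r
library(wordbankr)  # install.packages("wordbankr") if needed

# By-child summary data (one row per CDI administration),
# e.g., American English Words & Sentences forms
admins <- get_administration_data(language = "English (American)",
                                  form = "WS")

# Each row includes the child's age and total productive vocabulary
head(admins[, c("age", "production")])
```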
First finding: It's long been known that children are variable with respect to language. The striking thing is that the level of variability is very consistent across languages. The world around, toddlers are all over the place with respect to language! langcog.github.io/wordbank-book/…
What is "the open science movement"? It's a set of beliefs, research practices, results, and policies that are organized around the central roles of transparency and verifiability in scientific practice. An introductory thread. /1
The core of this movement is the idea of "nullius in verba" - take no one's word for it. The distinguishing feature of science on this account is the ability to verify claims. Science is independent of the scientist and subject to skeptical inquiry. /2
These ideas about the importance of verification are supported by a rich and growing research literature suggesting that not all published science is verifiable. Some papers have typos, some numbers can't be reproduced, some experiments can't be replicated independently. /3
A thought on grad advising. When I was a second year, an announcement went out to our dept. with the abstract for a talk I was giving in the area talk series. A senior faculty member wrote back with a scathing critique (cc'd to my advisor, @LanguageMIT). /1
The part that made the biggest impression on me: they said that the first line of my abstract was *so embarrassing that they thought my graduate training had failed*! Actual quote: "You look naive at best, many other things at worst." And on from there. /2
My advisor wrote back immediately: "Hi [critic], I wrote that line." /3
Prosocial development throwdown at #icis18: presentations by Audun Dahl, Felix Warneken, and @JKileyHamlin. Three opinions on a fascinating topic! [livetweet thread]
Dahl up first. Puzzles of prosociality: infants show an amazing ability to help others prosocially from an early age, but some don't! Why? Behaviors emerge via 1) social interest and 2) socialization.
Framework of co-action. Starting with early turn-taking and contingency, caregivers scaffold social interaction. They even encourage and facilitate helping behaviors.
Everyone makes mistakes during data analysis. Literally everyone. The question is not whether you make errors, it's what systems you put in place to catch them before they do damage. Here are mine. [a thread because I'm sad to miss #SIPS2018]
[Quoted tweet from @siminevazire]
Since then, we've audited dozens of papers (I like this term much more than "data thugged" @jamesheathers). E.g. in @Tom_Hardwicke's new manuscript: osf.io/preprints/bits…. Summary: the error rate is very high. Most errors don't undermine papers, but most papers have errors.
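One concrete example of such a system: cheap assertions baked into every analysis script, so broken assumptions fail loudly instead of silently propagating (a sketch in base R; paths, column names, and the checks themselves are hypothetical and will be dataset-specific):

```r
# Hypothetical analysis script header: validate the data before
# any statistics get computed
d <- read.csv("data/processed/trials.csv")  # hypothetical path

stopifnot(
  nrow(d) == 100 * 24,                             # expected subjects x trials
  !any(duplicated(d[c("subject", "trial")])),      # no duplicate rows
  all(d$rt > 0, na.rm = TRUE),                     # reaction times are positive
  all(d$condition %in% c("control", "treatment"))  # no stray condition labels
)
```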