I'm taking a break from my own grant application by assessing other grant applications, because I'm a nerd like that. Doing this is a good reminder of the benefit of leaving some white space and including plenty of figures in my own application.
Personally, I aim to have at least ONE object per page. This object could be a figure, a text box, or a table.
I’ve had a few people tell me you should leave about 1/5 of the final page blank to demonstrate that your project is so clear that you don’t even need the whole page limit to describe it. That’s some 3D chess right there...
Another thought: Anyone can write in their application that "they have a commitment to open science principles", but it's much better to include examples of how you've *already* been doing this.
It can take time to implement open science practices in your lab. The best time to have gotten started was a few years ago. The second best time is now. It's ok if this is incremental; don't let anyone tell you otherwise. Start by posting your analysis scripts, for example.
And here's your annual reminder that a "study" cannot be underpowered, but rather, a *design and test combination* can be underpowered for detecting hypothetical effect sizes of interest towardsdatascience.com/why-you-should…
If you want to determine the range of hypothetical effect sizes that a given field can reliably detect (and reject), here's a demo with #Rstats code sciencedirect.com/science/articl…
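To make this concrete, here's a toy #Rstats sketch using the 'pwr' package (the sample size, alpha, and effect sizes are my own made-up numbers, not the code from the linked demo):

```r
# Toy sketch: power of one fixed design (two-sample t-test, n = 30 per
# group, alpha = .05) across a range of hypothetical effect sizes.
# All numbers are made up for illustration.
library(pwr)

effect_sizes <- seq(0.2, 1.0, by = 0.2)  # hypothetical Cohen's d values

power_by_d <- sapply(effect_sizes, function(d) {
  pwr.t.test(n = 30, d = d, sig.level = 0.05, type = "two.sample")$power
})

round(data.frame(d = effect_sizes, power = power_by_d), 2)
# Power runs from roughly .12 at d = 0.2 up to roughly .97 at d = 1.0:
# the identical "study" is hopelessly underpowered for small effects
# and well powered for large ones. Power is a property of the
# design-test-effect size combination, not of the study itself.
```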
My guide to calculating study-level statistical power for meta-analyses using the 'metameta' #Rstats package and web app is out now in AMPPS 🔓 doi.org/10.1177/251524…
Here's how this tool can be used for your next meta-analysis OR for re-analysing published meta-analyses 🧵
There's been a lot of talk recently about the quality of studies that are included in meta-analyses—how useful is a meta-analysis if it's just made up of studies with low evidential value?
But determining the evidential value of studies can be hard. Common approaches for assessing study quality or risk of bias tend to be quite subjective: different authors often reach different conclusions about the same study. These tasks can also be quite time-consuming.
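Study-level statistical power offers a more objective complement: from each study's reported standard error you can compute its power to detect a hypothetical effect size of interest. Here's a minimal sketch of the kind of calculation metameta automates (this is not the package's own API, and the standard errors here are made up):

```r
# Minimal sketch of the kind of calculation 'metameta' automates;
# not the package's API. Standard errors are made up for illustration.
alpha  <- 0.05
z_crit <- qnorm(1 - alpha / 2)

se <- c(0.10, 0.15, 0.25, 0.40)  # hypothetical standard errors from four studies
es <- 0.30                       # hypothetical true effect size of interest

# Power of each study's two-sided Wald (z) test to detect es
power <- pnorm(es / se - z_crit) + pnorm(-es / se - z_crit)
round(power, 2)
# Studies with large standard errors have little chance of detecting an
# effect of this size, however clean they look on a risk-of-bias checklist.
```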
“Nearly 100% of published studies… confirm the initial hypothesis. This is an amazing accomplishment given the complexity of the human mind and human behaviour. Somehow, as psychological scientists, we always find the expected result; we always win!” royalsocietypublishing.org/doi/10.1098/rs…
One thing I want to add: the psychological sciences need to broadly adopt practices that help us determine when we're wrong. The standard NHST p-value approach cannot provide support for the absence of an effect. This paper provides two solutions academic.oup.com/psychsocgeront…
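The usual solutions here are along the lines of equivalence tests and Bayes factors. As a flavour of the first, here's a minimal base-R sketch of a TOST equivalence test; the data and the equivalence bound are made up for illustration:

```r
# Minimal sketch of an equivalence test (TOST): two one-sided t-tests
# against a smallest effect size of interest. Made-up data and bound.
set.seed(1)
x <- rnorm(100)  # hypothetical difference scores centred near zero
bound <- 0.3     # smallest effect of interest (raw units), chosen for illustration

# Is the mean reliably greater than -bound AND reliably less than +bound?
p_lower <- t.test(x, mu = -bound, alternative = "greater")$p.value
p_upper <- t.test(x, mu =  bound, alternative = "less")$p.value

# Equivalence is supported when BOTH one-sided tests are significant,
# i.e. when max(p_lower, p_upper) falls below alpha
max(p_lower, p_upper)
```

The Bayes factor route can be run on the same data, e.g. with BayesFactor::ttestBF(x), to quantify evidence for the null relative to the alternative.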
Strong auxiliary assumptions are also required for falsifying hypotheses
Participants who ate soup from bowls that were refillable, unbeknownst to them, ate 73% more soup than those eating from normal bowls pubmed.ncbi.nlm.nih.gov/15761167/
Oxytocin receptor (OXTR) expression patterns in the brain across development osf.io/j3b5d/
Here we identify OXTR gene expression patterns across the lifespan along with gene-gene co-expression patterns and potential health implications
[THREAD]
So, let's begin with some background.
As well as being an oxytocin researcher, I'm also a meta-scientist, which means that a lot of my work on improving research methods is focused on improving oxytocin research (that's what got me into meta-science in the first place)
Earlier this year, we published a paper, led by @fuyu00, in which we proposed that three things are required to improve the precision of intranasal oxytocin research: improved methods, reproducibility, and theory.