Sherry Pagoto @DrSherryPagoto, 13 tweets, 3 min read
I’ve been reviewing a lot of grants lately and have some advice on things that negatively impact scores on proposals for behavioral intervention studies. A thread! 1/12
The intervention is not based on any conceptual/theoretical model. No discussion or testing of any processes of change. In other words, the investigator doesn’t seem to know how or why the intervention will impact the proposed outcomes. 2/12
The intervention is not informed by past intervention research on the topic or related topics. In other words, it seems made up. Think of your study as the next chapter in a book: it has to logically follow the previous chapters. 3/12
No pilot data demonstrating feasibility as measured by recruitment yields, attendance/engagement, retention, and acceptability. See this paper on designing pilot studies: ncbi.nlm.nih.gov/pubmed/21035130 4/12
Efficacy is being proposed as the primary endpoint in a feasibility study. 5/12
The control group is inappropriate for the stage of research. For example, comparing a new intervention to another new intervention in a pilot study. See: scholars.northwestern.edu/en/publication… 6/12
Use of an attention control on an outcome that would not likely be affected by attention (e.g., cholesterol). Or using a brand new attention control whose effect is unknown and so can only be guesstimated. See: ncbi.nlm.nih.gov/pmc/articles/P… 7/12
No content expert or behavioral intervention expert on the team. 8/12
The intervention is technology-based, but the investigator team lacks any computer science, engineering, or other technology expert (a consultant is not enough; a student is not enough). 9/12
The sample size estimation section lacks enough detail for reviewers to evaluate (and has no references). For example: “A sample of 60 in our three-arm trial is adequate to detect a medium effect size with 20% attrition.” Say whaa? 10/12
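As a rough illustration of the level of detail reviewers look for, here is a minimal power-calculation sketch using statsmodels. The effect size (Cohen’s f = 0.25 for “medium”), alpha, power, and attrition values are illustrative assumptions, not numbers from the thread or any particular proposal.

```python
# Minimal sketch: documenting a sample size calculation for a three-arm trial.
# All parameters below are illustrative assumptions.
import math
from statsmodels.stats.power import FTestAnovaPower

effect_size = 0.25   # Cohen's f for a "medium" effect in a one-way ANOVA (assumed)
alpha = 0.05         # type I error rate (assumed)
power = 0.80         # desired statistical power (assumed)
k_groups = 3         # three-arm trial
attrition = 0.20     # expected 20% dropout (assumed)

# Total analyzable N needed to detect the assumed effect across the three arms
n_total = FTestAnovaPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, k_groups=k_groups
)

# Inflate enrollment so the retained sample is still adequately powered after attrition
n_enrolled = math.ceil(n_total / (1 - attrition))
print(f"Analyzable N: {math.ceil(n_total)}, enroll: {n_enrolled}")
```

Under these assumptions the required analyzable N comes out well above 60, which is exactly the kind of gap a reviewer will flag when the justification and references are missing.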
“Me too!” study: applying an already-used model to a new population segment or new topic and equating that to innovation. Since we don’t have the $ to test every model on every population segment, the study has to extend the literature more than this. 11/12
A resubmission of a proposal that originally got a middle-of-the-pack score but is virtually identical to the original submission because the investigator argues with the majority of the critiques instead of amending the application. 12/12
Would love to hear others’ advice as well! 13/12 :)