Discover and read the best of Twitter Threads about #JSM2022

Most recent (8)

I am fortunate to have an #NIH #K25 award.

In this 🧵, I will share advice about K25 awards. These opinions are my own, based on my experience, and do not represent the NIH.

Hope this helps anyone preparing a K25!

#careerdev #nihgrants #funding #statstwitter #epitwitter 1/
Who should apply for a K25 award?

K25s are for quantitative experts seeking training in a new clinical area. They are a great fit for (bio)statisticians starting a faculty role who are pivoting to a new area that aligns with their institution.

K25s cover 75% FTE for up to 5 years. 2/
⭐️Most important advice in this thread⭐️

Include a figure connecting your career goals, proposed training, mentor expertise, and research aims.

The next career step should be submitting an R01 and becoming an independent scientist in the new clinical field. 3/
My #JSM2022 talk focused on Design Principles for Data Analysis, based on a (very recent!) paper with @rdpeng and @stephaniehicks

🎞 slides: lucymcgowan.com/talk/asa_joint…
🆕 paper: tandfonline.com/doi/abs/10.108…
We lay out six design principles for data analysis, with some empirical evidence to go with them:

👯‍♀️ data matching
🕵️‍♀️ exhaustive
🤔 skeptical
📝 second-order
💎 clarity
👥 reproducibility

👯‍♀️ data matching

How well does the available data match the data needed to investigate a question?

Final presenter, @TiffanyTimbers talking about reproducibility! #JSM2022

bit.ly/timbers-jsm-20…
Tiffany begins by distinguishing whether an analysis is

👉 reproducible
👉 replicable
👉 robust
👉 generalizable

#JSM2022
Echoing the failure-diagnosis theme of @rdpeng’s talk, @TiffanyTimbers had students describe situations where they’ve observed reproducibility failures #JSM2022
Next up we have @rlbarter talking about “Veridical Data Science” #JSM2022

📖 Veridical: “truthful”
Rebecca begins by defining what makes a data analysis “successful”

🤝 It needs to be “trustworthy”
🌎 It needs to provide a useful answer to a relevant and ethical real world question

#JSM2022
How can you “show” your analysis is successful? Remember PCS: your results should be:

💭 predictable
💻 computationally accessible
📝 stable and reproducible

#JSM2022
Starting now! I’ll be live tweeting this @DataSciJedi-sponsored session on Delivering Data Differently at #JSM2022. #JEDIatJSM
Our first speaker is @ajrgodfrey. He speaks from his experiences as a blind person. He emphasizes the importance of independence and dignity for the visually impaired. #JEDIatJSM
“A blind person must be able to collect, analyze, interpret, and manipulate scientific data in order to answer questions and communicate the knowledge gained from their results in a way that can be readily understood by their sighted peers.“ @ajrgodfrey #JEDIatJSM
Kicking us off, we have @rdpeng talking about “Diagnosing Data Analytic Problems in the Classroom” #JSM2022

Roger starts by directing us to a recent paper on the topic coauthored with @stephaniehicks @jtleek, Eric Bridgeford, and Athena Chen
🔗 doi.org/10.1080/269391…
This talk is focused on diagnostics. Here is a common flowchart for modeling; @rdpeng focuses on one circled part of it.

🔍 What makes us go back to refit?
👩‍🏫 How can we teach that process?
Here’s a diagram of the process: @rdpeng and coauthors are developing ways to help students construct these expected-outcome sets, so students can compare what they actually see against them #JSM2022
Just in time for #JSM2022, our new paper has been published in the Journal of Machine Learning Research! jmlr.org/papers/v23/21-… Elena A. Erosheva and I propose the first joint statistical model for rankings and scores (1/n)
Rankings and scores are two common types of preference data, which occur in contexts like voting, polling data, recommender systems, and peer review. Because it’s difficult to combine ordinal rankings and cardinal scores, they’re almost always modeled separately (2/n)
But rankings and scores provide different, and complementary, information! Rankings make direct comparisons but are coarse. Scores are more granular but make only implicit (and often unreliable) comparisons (3/n)
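The coarseness of rankings can be seen in a tiny sketch (this is an illustration of the general point, not the paper’s joint model; the score values are made up):

```python
def to_ranking(scores):
    """Convert cardinal scores to an ordinal ranking (1 = best)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {item: rank for rank, item in enumerate(ordered, start=1)}

# Two reviewers with very different score gaps...
scores_a = {"x": 9.1, "y": 8.9, "z": 2.0}  # x and y nearly tied
scores_b = {"x": 9.1, "y": 5.0, "z": 2.0}  # x clearly ahead of y

# ...produce identical rankings: the gap information is lost.
print(to_ranking(scores_a))  # {'x': 1, 'y': 2, 'z': 3}
print(to_ranking(scores_b))  # {'x': 1, 'y': 2, 'z': 3}
```

Both score sets collapse to the same ranking, which is exactly why modeling the two data types jointly, rather than discarding one, can recover more information.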
I enjoyed participating in yesterday's #EEID2022 panel on scientific communication - what has worked, what hasn't, and what I've learned. For these types of panels, I have made a conscious decision to be very honest, including the good and the bad experiences. 1/
Yesterday, that included me telling the audience how much I angsted over questions like "Is it safe to do X? Our viewers want to know!" Or pressure to stay up to date on everything, or say yes to all requests. Worry that I'm saying the wrong thing or don't belong. 2/
Admitting vulnerability is a trait I admire in others because it takes bravery and normalizes common challenges. IMO, it's a similar bravery to scientific communication in the first place. Public engagement involves putting yourself out there in a way that can be intimidating. 3/
