1/13) This semester's teaching on Bayesian stats and cognitive modeling is over! Thanks to COVID (ironically!), I recorded all my teaching sessions w/ @zoom_us, and they are available on #YouTube.

Wondering what we covered for the cog-neuro audience? A thread.
2/13) After the overall intro, we had two sessions on #R #RStudio to get everyone on the same page. I am rather old-school, so we only covered base R (not #tidyverse).

L01 Intro,
L02 R (P1)
L03 R (P2)
3/13) Then we covered the foundations of probability theory and Bayes' rule. We used a simple, classic binomial example to show how the posterior is updated in light of the prior and incoming data.

L04 Prob & Bayes'
L05 Binomial
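For the binomial example with a Beta prior, the posterior update is available in closed form. The course materials are in R/Stan; below is a minimal, illustrative Python sketch (function names are my own) of the conjugate Beta-binomial update:

```python
def beta_binomial_posterior(a_prior, b_prior, successes, failures):
    """Conjugate update: Beta(a, b) prior + binomial data -> Beta(a + k, b + n - k) posterior."""
    return a_prior + successes, b_prior + failures

# Flat Beta(1, 1) prior; observe 7 successes in 10 trials.
a_post, b_post = beta_binomial_posterior(1, 1, 7, 3)
posterior_mean = a_post / (a_post + b_post)  # 8 / 12 = 0.667
```

The posterior mean (0.667) sits between the prior mean (0.5) and the data proportion (0.7), which is exactly the "updated in light of prior and data" point of the example.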
4/13) Moving on, computing p(D) can be very complicated, and it is often not of interest anyway, which motivates the use of sampling methods like MCMC. Everyone wrote their first-ever @mcmc_stan model for the binomial example.

L06 MCMC
L07 Stan
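The exercise itself was a Stan model, but the core idea of MCMC can be sketched in a few lines: draw samples from the posterior without ever computing p(D), because only posterior *ratios* are needed. A minimal random-walk Metropolis sampler in Python for the same binomial example (all names illustrative, not from the course code):

```python
import math
import random

def log_posterior(theta, k, n):
    """Log of Binomial(k | n, theta) with a flat Beta(1, 1) prior, up to a constant.
    Note: the normalizer p(D) is never needed."""
    if theta <= 0.0 or theta >= 1.0:
        return float("-inf")
    return k * math.log(theta) + (n - k) * math.log(1.0 - theta)

def metropolis(k, n, n_iter=20000, step=0.1, seed=1):
    random.seed(seed)
    theta = 0.5
    samples = []
    for _ in range(n_iter):
        proposal = theta + random.gauss(0.0, step)
        # Accept with probability min(1, posterior ratio); p(D) cancels in the ratio.
        if math.log(random.random()) < log_posterior(proposal, k, n) - log_posterior(theta, k, n):
            theta = proposal
        samples.append(theta)
    return samples

draws = metropolis(k=7, n=10)
post_mean = sum(draws[5000:]) / len(draws[5000:])  # discard burn-in, then average
```

The sampled mean should be close to the exact conjugate answer (8/12 ≈ 0.667). Stan's NUTS sampler is far more efficient than this random walk, but the acceptance-ratio logic is the same.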
5/13) Next, we revisited the simple linear regression model and discussed weakly informative priors, in particular the half-Cauchy.

L08 Regression
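The half-Cauchy is a common weakly informative prior for scale parameters: most mass near zero, but a heavy tail that does not rule out large values. A small Python sketch (hypothetical helper, not course code) that draws from it via the inverse CDF, Q(p) = scale * tan(pi * p / 2):

```python
import math
import random

def half_cauchy_sample(scale, rng):
    """Draw from a half-Cauchy(0, scale) via inverse-CDF sampling."""
    return scale * math.tan(math.pi * rng.random() / 2.0)

rng = random.Random(0)
draws = [half_cauchy_sample(5.0, rng) for _ in range(10_000)]
# Heavy right tail, but the median equals the scale parameter (here, 5).
median = sorted(draws)[len(draws) // 2]
```

A useful property when picking the scale: half of the prior mass falls below the scale value, so half-Cauchy(0, 5) says "probably on the order of a few units, but possibly much larger."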
6/13) The second half of the seminar focused on cognitive modeling, esp. RL models. We started with the simple Rescorla-Wagner model in a 2-armed bandit task. Crucially, we used simulation to show what the parameters imply for behavior.

L09 Rescorla-Wagner
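The Rescorla-Wagner update is a one-liner: nudge the chosen action's value toward the obtained reward by a learning rate alpha, and pass the values through a softmax with inverse temperature beta to get choices. A self-contained Python simulation sketch (parameter names follow the standard RL convention, not necessarily the course code):

```python
import math
import random

def simulate_rw(probs, alpha, beta, n_trials, seed=0):
    """Simulate a Rescorla-Wagner learner with softmax choice in a 2-armed bandit.

    probs : reward probability of each arm
    alpha : learning rate in (0, 1)
    beta  : inverse temperature of the softmax
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]                     # action values
    choices = []
    for _ in range(n_trials):
        # Softmax probability of choosing arm 1.
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        choice = 1 if rng.random() < p1 else 0
        reward = 1.0 if rng.random() < probs[choice] else 0.0
        q[choice] += alpha * (reward - q[choice])   # prediction-error update
        choices.append(choice)
    return q, choices

q, choices = simulate_rw(probs=[0.2, 0.8], alpha=0.3, beta=5.0, n_trials=500)
```

Re-running this while varying alpha and beta is exactly the kind of simulation exercise the tweet describes: higher alpha tracks recent outcomes faster, higher beta makes choices more deterministic.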
7/13) We then implemented Rescorla-Wagner in @mcmc_stan! And we discussed ways to fit multiple subjects, including the hierarchical Bayesian approach (HBA). We also covered the important reparameterization trick in Stan.

L10 RW in Stan
L11 HBA
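The reparameterization in question is usually the non-centered one: instead of sampling each subject's parameter directly from Normal(mu, sigma), sample a standard-normal z and shift/scale it. In Stan this decouples subject-level parameters from the group-level scale and avoids the "funnel" geometry that stalls the sampler. A Python sketch of the transformation itself (illustrative names; the course's actual Stan code may differ):

```python
import random

def noncentered_subject_params(mu, sigma, n_subj, seed=0):
    """Non-centered parameterization: theta_s = mu + sigma * z_s, with z_s ~ Normal(0, 1).
    This is distributionally identical to theta_s ~ Normal(mu, sigma),
    but gives the sampler a much friendlier geometry in hierarchical models."""
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n_subj)]
    return [mu + sigma * z_s for z_s in z]

thetas = noncentered_subject_params(mu=0.5, sigma=0.1, n_subj=1000)
mean_theta = sum(thetas) / len(thetas)  # should be close to mu = 0.5
```

In Stan, the same idea is expressed by declaring `z` as a parameter and `theta` as a transformed parameter; the model is unchanged, only the geometry the sampler sees.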
8/13) In reality, we often have more than one candidate model, so how do we decide on the winning model and avoid overfitting? We discussed the {loo} package for model comparison.

L12 Model comparison
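The {loo} package approximates leave-one-out cross-validation from posterior draws (via PSIS) rather than refitting the model n times. To make the underlying idea concrete, here is a toy Python sketch of *exact* leave-one-out scoring on a simple Gaussian example (entirely illustrative, not how {loo} is implemented):

```python
import math

def loo_log_score(data, log_pred):
    """Exact leave-one-out: for each point, fit on the rest and score the
    held-out point; sum the log predictive densities. Higher is better."""
    total = 0.0
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        total += log_pred(train, data[i])
    return total

def gaussian_logpdf_from_mean(train, x, sd=1.0):
    """'Fit' = plug-in sample mean with fixed sd; score x under that fit."""
    mu = sum(train) / len(train)
    return -0.5 * math.log(2 * math.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2)

data = [1.1, 0.9, 1.2, 1.05, 0.9, 1.4, 0.8, 1.15]
score_mean_model = loo_log_score(data, gaussian_logpdf_from_mean)
score_zero_model = loo_log_score(data, lambda tr, x: gaussian_logpdf_from_mean([0.0], x))
# The model estimating a mean predicts held-out points better than one fixed at zero.
```

Because every point is scored while held out, a model can only win by predicting *new* data well, which is precisely the guard against overfitting.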
9/13) Lastly, we discussed common errors and warnings in Stan, and we had a fun debugging exercise.

L13 Stan debugging
10/13) IMO, which materials may benefit the community the most? --> L11 & L13!
There are indeed lots of @mcmc_stan tutorials already, but from a cognitive-modeling perspective, you will find nowhere else that covers @mcmc_stan optimization and debugging.
11/13) Additionally, I summarized useful summer schools and workshops on this topic.
Not to forget the upcoming @neuromatch academy! I will also participate.
12/13) All slides and other materials are on #github. I started using GitHub badges, and they are a lot of fun.

repo: github.com/lei-zhang/Baye…
13/13) Finally, I have been teaching BayesCog since 2016, and the online experience this year really pushed it to the next level.
Thanks to @ClausLamm @ScanUnit @univienna for the support during this time!
And clearly, open teaching demonstrates a commitment to #openscience.

Fin.