Lots of people have asked me if studying biostats has actually been relevant in my career as a software engineer, and I’ve found the answer to be a resounding yes! It's super relevant in lots of engineering problems and in understanding the world generally. 🧵 follows!
When I worked on payment fraud prevention, I was always talking about diagnostic testing for rare diseases!
Diagnostic testing was something we studied at length in our early biostat & epi classes in grad school and it turns out “fraud” behaves similarly to a “rare disease” in a lot of ways.
Evaluating a predictive fraud model is a lot like evaluating a diagnostic test! In particular, ✨ PPV is a function of population prevalence ✨ or, as I said in many a meeting, ✨ model precision is a function of our overall fraud rate ✨
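The PPV-depends-on-prevalence point is just Bayes' rule. A minimal Python sketch (the sensitivity/specificity/fraud-rate numbers are made up for illustration, not from any real system):

```python
# PPV = P(actually fraud | model flags it). Even with fixed sensitivity and
# specificity, PPV collapses as the base rate ("prevalence") drops.

def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same hypothetical model (95% sensitive, 99% specific) at two fraud rates:
print(round(ppv(0.95, 0.99, 0.10), 3))   # 10% fraud rate  -> 0.913
print(round(ppv(0.95, 0.99, 0.001), 3))  # 0.1% fraud rate -> 0.087
```

Same model, same thresholds; precision drops by an order of magnitude purely because fraud got rarer.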
Model evaluation and comparison was much, much harder than I’d expected it to be when I first started in industry. There’s no one right answer! But having a good foundation for understanding the tradeoffs ended up being super useful.
Topics like positive/negative predictive value, uncertainty/risk assessment, and treatment effectiveness have also come very obviously front and center with covid, and I’ve seen them sneaking around in the background in politics too.
for example: to combat voter fraud (the very definition of a rare event), politicians have proposed requiring ID to vote (a test). But the treatment adversely affects double-digit percentages of non-fraudulent voters, and the test has a positive predictive value of ~0.
This framing makes it pretty easy to see that voter ID requirements aren't really being made in good faith.
The thread has admittedly moved outside of tech now 😅 so circling back — a biostat background has served me super well! I use biostat skills basically daily at work in applied fields totally unrelated to biology, and beyond.
🧵 time! I’d love to talk about the responsibilities we have as data practitioners. In this ~~information age~~ I think it’s critical we use data, ML, stats, and algorithms fairly, and with an eye toward making the world better for people.
Gerrymandering gets its name from one Elbridge Gerry, who in 1812 drew a voting district in Boston that looked like a salamander because it was politically expedient.
the practice persists through today, from city council districts all the way up to (arguably) the Electoral College!
math, statistics, and measurement have played a key role in several court cases related to the ongoing discussion and fight for fair and representative districts.
One more quick tweet, unrelated to the Gelman-Rubin diagnostic.
Someone asked, "I hear C++ is fast but a little hard to grasp. That true?"
Mostly yes. Like Python, R is generally easier to learn but often slower than C/C++.
I recommend you think about how your code will be used when you decide what language to code in. If you're coding for yourself and you probably just need to run it once, then R may be a good choice. Optimizing for speed may be overkill. (2/)
If you are writing a function/package for public consumption, then speed is much more of a concern. You can profile your code to see which parts are time-consuming. You can also just google what things R is slow at (ex loops). (3/)
Let's extend the linear model (LM) in the direction of the GLM first. If you loosen the normality assumption to instead allow Poisson, binomial, etc. (members of the "exponential family" of distributions), then you can model count, binary, etc. responses. (4/)
You've probably heard of Poisson regression or logistic regression. These fall under the umbrella of GLM. (5/)
The LM regression equation is E(Y) = X Beta, where X is the model matrix, Beta is the vector of coefficients, Y is the response vector, and E(Y) is the expected value.
For Poisson regression, we have log(E(Y)) = X Beta.
For logistic regression, we have log(p/(1-p))= X Beta (6/)
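The thread is about R, but the link-function algebra is the same in any language. Here's a small pure-Python sketch (the toy design matrix and coefficients are made up) showing how each link maps the linear predictor X Beta back to the mean:

```python
import math

# Toy data: intercept + one covariate; coefficients made up for illustration.
beta = [0.5, -0.3]
rows = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]

# Linear predictor: eta_i = (X Beta)_i for each row of X.
eta = [sum(b * x for b, x in zip(beta, row)) for row in rows]

# LM (identity link):     E(Y) = eta
mu_lm = eta
# Poisson (log link):     log(E(Y)) = eta  =>  E(Y) = exp(eta)
mu_poisson = [math.exp(e) for e in eta]
# Logistic (logit link):  log(p/(1-p)) = eta  =>  p = 1/(1+exp(-eta))
p_logistic = [1 / (1 + math.exp(-e)) for e in eta]
```

Note how the log link keeps Poisson means positive and the logit link keeps probabilities in (0, 1), whatever values eta takes.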
Ready or not, here comes a thread on making R packages. I want to tweet about this on the Women in Stat account because women are underrepresented as R package maintainers. (1/)
If you find yourself using the same (or similar) code a few times, take a little time now to save time later. This isn't an all-or-nothing thing: if you have the same code written a few times, the first step is to make a function. Then you can call that function. (2/)
Sometimes (like in the situation I just described) you realize after-the-fact that you should turn code into a function. Other times, you will have the foresight to recognize that a function would be wise. As you program more, you'll recognize this more easily. (3/)
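The "repeated code becomes a function" step looks the same in any language; here's a tiny sketch in Python (the normalization example is made up, not from the thread):

```python
# Before: imagine this normalize-to-sum-1 snippet pasted in several places.
# After: factor it into one function and call that everywhere instead.

def normalize(xs):
    """Scale a list of numbers so they sum to 1."""
    total = sum(xs)
    return [x / total for x in xs]

scores_a = [3, 5, 8]
scores_b = [10, 20]

norm_a = normalize(scores_a)  # [0.1875, 0.3125, 0.5]
norm_b = normalize(scores_b)
```

Once the logic lives in one function, a later fix (say, handling an all-zero list) happens in one place instead of everywhere you pasted it — and that function is the natural seed of a package.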
Should we start with mentors and support systems? A single mentor won't cut it. You'll have questions your mentor can't answer. Your mentor will have times when they're busy/stressed. I recommend a whole bunch of people you can turn to when you have questions or need support.
I like to have mentors at all levels, though I don't really call them mentors usually. It's also nice to have some in your institution and some outside of it. (2/?)
For ex, as a first year grad student, I developed a support system with other first years, dissertating students, my advisors, and some professors from undergrad. Just grab coffee or lunch or beer with other students to connect and hear what others are dealing with. (3/?)