🧵 time! I’d love to talk about the responsibilities we have as data practitioners. In this ~~information age~~ I think it’s critical we use data, ML, stats, and algorithms fairly, and with an eye toward making the world better for people.
“Knowledge is never objective.” As the creator of a graph, you hold the narrative power.
Visualizations don’t merely reflect unchangeable truths: “the visual is really important to our sense of what’s ‘normal.’…we should be analyzing things from the minority perspective, the unexpected position.”
There’s constant chatter in our community about algorithms reflecting harmful stereotypes and systems of oppression. Someone inevitably says there’s nothing we can do about this because models just spit back what they know about the world, so it must be true! It’s JuSt fAcTs!
I think these kinds of comments are either a failure of imagination or made in active bad faith.
Stats was basically invented as a field to correct for crappy real-world data! Aren’t we tasked with solving super hard problems? If our models are perpetuating injustice…we should either correct it immediately, or use another tool.
It’s a hard problem, but there’s lots of research on simple ways to at least attempt to create fairer algorithms, e.g.: ai.google/responsibiliti…
And when simple corrections don’t work, or when the harm caused is too great, I believe we should rethink the whole strategy. What problem are we solving, and at what cost?
Even with “simple” statistics (like crime rates), I think we have a responsibility to dig deeply and grapple with the systems (often racist) and big picture that gave us that data:
Lots of people have asked me if studying biostats has actually been relevant in my career as a software engineer, and I’ve found the answer to be a resounding yes! It's super relevant in lots of engineering problems and in understanding the world generally. 🧵 follows!
When I worked on payment fraud prevention, I was always talking about diagnostic testing for rare diseases!
Diagnostic testing was something we studied at length in our early biostat & epi classes in grad school and it turns out “fraud” behaves similarly to a “rare disease” in a lot of ways.
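Here's the core of that "fraud behaves like a rare disease" intuition as a quick sketch: even a very accurate test has a low positive predictive value when the condition is rare. This is just Bayes' rule; all the numbers below are invented for illustration, not real fraud rates.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(has condition | positive test), via Bayes' rule."""
    true_pos = prevalence * sensitivity              # truly fraudulent and flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # legit but flagged anyway
    return true_pos / (true_pos + false_pos)

# A model that catches 99% of fraud (sensitivity) and clears 99% of
# legit payments (specificity), when only 0.1% of payments are fraud:
ppv = positive_predictive_value(prevalence=0.001, sensitivity=0.99, specificity=0.99)
print(round(ppv, 3))  # ~0.09: most flagged payments are actually legitimate
```

That counterintuitive ~9% is exactly the base-rate lesson from diagnostic testing for rare diseases.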
Gerrymandering gets its name from one Elbridge Gerry, who in 1812 drew a voting district in Boston that looked like a salamander because it was politically expedient.
The practice persists to this day, from city council districts all the way up to (arguably) the Electoral College!
Math, statistics, and measurement have played a key role in several court cases in the ongoing fight for fair and representative districts.
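One concrete example of such a measurement (not named above) is the "efficiency gap" discussed in the Gill v. Whitford litigation: the difference in the two parties' wasted votes as a share of all votes. The district totals below are invented, and "half the votes to win" is a simplification.

```python
def efficiency_gap(districts):
    """districts: list of (votes_A, votes_B) per district.
    Wasted votes = all of the loser's votes + the winner's surplus
    beyond the votes needed to win (simplified to half the total)."""
    wasted_a = wasted_b = total = 0.0
    for a, b in districts:
        total += a + b
        needed = (a + b) / 2
        if a > b:
            wasted_a += a - needed  # winner's surplus
            wasted_b += b           # loser's votes, all wasted
        else:
            wasted_b += b - needed
            wasted_a += a
    return (wasted_a - wasted_b) / total

# Party A wins two close districts; party B's voters are packed into one.
gap = efficiency_gap([(55, 45), (55, 45), (20, 80)])
print(round(gap, 3))  # -0.3: B wastes far more votes, so the map favors A
```

A gap near zero suggests neither party is systematically wasting more votes; large magnitudes were offered in court as evidence of partisan gerrymandering.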
One more quick tweet, unrelated to the Gelman-Rubin diagnostic.
Someone asked, "I hear C++ is fast but a little hard to grasp. That true?"
Mostly yes. Like Python, R is generally easier to learn than C/C++ but often slower.
I recommend you think about how your code will be used when you decide what language to code in. If you're coding for yourself and you probably just need to run it once, then R may be a good choice. Optimizing for speed may be overkill. (2/)
If you are writing a function/package for public consumption, then speed is much more of a concern. You can profile your code to see which parts are time-consuming. You can also just google what things R is slow at (e.g., loops). (3/)
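Here's a tiny sketch of what profiling looks like, using Python's built-in cProfile for illustration (in R you'd reach for Rprof() or the profvis package). The slow function is a made-up toy.

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """Deliberately loop-heavy: sums i^2 one element at a time."""
    total = 0.0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(50_000)
profiler.disable()

# Print the top entries by cumulative time; slow_sum shows up as the hotspot.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The point is the workflow, not the tool: measure first, then optimize only the parts the profiler says are expensive.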
Let's extend the linear model (LM) in the direction of the GLM first. If you loosen the normality assumption to instead allow Poisson, binomial, etc. (members of the "exponential family" of distributions), then you can model count, binary, etc. responses. (4/)
You've probably heard of Poisson regression or logistic regression. These fall under the umbrella of GLM. (5/)
The LM regression equation is E(Y) = X Beta, where X is the model matrix, Beta is the vector of coefficients, Y is the response vector, and E(Y) is the expected value.
For Poisson regression, we have log(E(Y)) = X Beta.
For logistic regression, we have log(p/(1-p)) = X Beta, where p = E(Y) = P(Y = 1). (6/)
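The three regressions above differ only in the "link" connecting the linear predictor eta = X Beta to the mean E(Y). A minimal sketch of the three inverse links, with a toy eta value (illustration only, not fitted-model code):

```python
import math

def identity_inv(eta):
    """Linear model: E(Y) = eta."""
    return eta

def log_inv(eta):
    """Poisson regression: E(Y) = exp(eta), always positive."""
    return math.exp(eta)

def logit_inv(eta):
    """Logistic regression: p = 1 / (1 + exp(-eta)), always in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-eta))

eta = 0.0
print(identity_inv(eta), log_inv(eta), logit_inv(eta))  # 0.0 1.0 0.5
```

Notice how each inverse link maps the unbounded linear predictor onto the right range for the response: the whole real line for normal, positive values for counts, and (0, 1) for probabilities.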
Ready or not, here comes a thread on making R packages. I want to tweet about this on the Women in Stat account because women are underrepresented as R package maintainers. (1/)
If you find yourself using the same (or similar) code a few times, take a little time now to save time later. This isn't an all-or-nothing thing: if you have the same code written a few times, the first step is to make a function. Then you can call that function. (2/)
Sometimes (like in the situation I just described) you realize after-the-fact that you should turn code into a function. Other times, you will have the foresight to recognize that a function would be wise. As you program more, you'll recognize this more easily. (3/)
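The "same code a few times, so make it a function" move, sketched in Python for illustration (the identical refactor applies in R). The function name and data are made up.

```python
def standardize(values):
    """Center and scale a list of numbers (population SD).
    Before this function existed, this logic was copy-pasted per column;
    now it's written once and called wherever it's needed."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# One definition, reused for every column:
heights = standardize([150, 160, 170])
weights = standardize([50, 60, 70])
print(heights)  # [-1.2247..., 0.0, 1.2247...]
```

Once the same function is useful across projects, that's the natural nudge toward bundling it into a package.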
Should we start with mentors and support systems? A single mentor won't cut it. You'll have questions your mentor can't answer. Your mentor will have times when they're busy/stressed. I recommend a whole bunch of people you can turn to when you have questions or need support.
I like to have mentors at all levels, though I don't really call them mentors usually. It's also nice to have some in your institution and some outside of it. (2/?)
For ex, as a first year grad student, I developed a support system with other first years, dissertating students, my advisors, and some professors from undergrad. Just grab coffee or lunch or beer with other students to connect and hear what others are dealing with. (3/?)