Albert Rapp · May 19
Ever heard of logistic regression? Or Poisson regression? Both are generalized linear models (GLMs).

They're versatile statistical models. And by now, they've probably been reframed as super hot #MachineLearning.

Brush up on their math with this thread. #rstats
Let's start with logistic regression. Assume you want to classify a penguin as male or female based on its

* weight,
* species and
* bill length

Better yet, let's make this specific. Here's a dataviz for this exact scenario. It is based on the {palmerpenguins} data set.
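If you want to build a plot like this yourself, here's a minimal sketch. The variable mapping and styling are my guesses, not the code behind the original image:

```r
library(ggplot2)
library(palmerpenguins)

penguins |>
  na.omit() |>
  ggplot(aes(x = body_mass_g, y = bill_length_mm, color = sex)) +
  geom_point(alpha = 0.7) +
  facet_wrap(vars(species)) +
  theme_minimal()
```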
As you can see, the male and female penguins form clusters that do not overlap too much.

However, regular linear regression (LR) won't help us to distinguish them. Think about it. Its output is something numerical. Here, we want to find classes.
How about trying to predict a related numerical quantity then? Like the probability that a penguin is male. Could we convert the classes to 0 and 1 and then run an LR?

Well, we could. But this won't give us probabilities either. Why? Because the predictions are not restricted to [0, 1].
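A quick sketch of that experiment (the exact model formula is my assumption; the data prep follows the {palmerpenguins} columns):

```r
library(palmerpenguins)

dat <- na.omit(penguins)
dat$is_male <- as.numeric(dat$sex == "male")

# Ordinary linear regression on the 0/1 outcome
ols_fit <- lm(is_male ~ body_mass_g + species + bill_length_mm, data = dat)

# The fitted values are NOT restricted to [0, 1]
range(predict(ols_fit))
```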
I suspect you're REALLY determined to use LR. After all, what did you learn ordinary least squares (OLS) for, if not to use it everywhere?

So, what saves you from huge predictions? That's the glorious logistic function σ(x) = 1 / (1 + e^(−x)), applied to LR's predictions. It squashes any real number into (0, 1).
I've applied this strategy to our data to "predict" probabilities. Then, I used a 50% threshold for classification. (Note that 50% is in general not a good threshold, but that's beside the point here.)

So, does this yield good results? Have a look for yourself.
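In code, the "squash through the logistic function, then threshold" hack could look like this, continuing the sketch from above:

```r
# Squash OLS predictions into (0, 1) with the logistic function,
# then classify with a 50% threshold
probs_ols  <- plogis(predict(ols_fit))
pred_class <- ifelse(probs_ols > 0.5, "male", "female")

# Confusion table: rows = predicted, columns = observed
table(predicted = pred_class, observed = dat$sex)
```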
The predictions for male and female penguins overlap quite a bit. This leads to many incorrect classifications.

Not bueno. At this point, you may as well have trained a model that answers "Is this a male penguin?" with "Nope, just Chuck Testa".
Our classification is bad and I have hopefully convinced you that OLS isn't the way to go here. What now?

Well, it wasn't all bad. The idea of linking a desired quantity (like a probability) to a linear predictor is actually what GLMs do.

To make it work, take a step back.
Usually, we model our response variable Y by decomposing it into

1️⃣ a deterministic function f(X_1, ..., X_n) of the predictors
2️⃣ a random error term

Thus, regression is nothing but finding a function that describes the AVERAGE outcome. A change in notation makes this clearer: since the error averages out to zero, E[Y | X_1, ..., X_n] = f(X_1, ..., X_n).
In linear regression, this deterministic function is given by a linear predictor that depends on a parameter vector beta: f(X_1, ..., X_n) = β_0 + β_1 X_1 + ... + β_n X_n.
Alright, we've emphasized that we're really trying to model an expectation. Now, think about what we're trying to predict. We're looking for probabilities, are we not?

And do we know a distribution whose expectation is a probability? Bingo! We're thinking about Bernoulli.
Therefore, let us assume that our response variable Y is Bernoulli-distributed (given our predictors).

And now we're back with our idea to link the average outcome to a linear predictor via a suitable transformation (the same one as before). This sets up our model.
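Written out, with σ the logistic function from before and x_iᵀβ shorthand for the linear predictor of observation i, the model reads:

$$
Y_i \mid x_i \sim \mathrm{Bernoulli}(p_i), \qquad p_i = \mathbb{E}[Y_i \mid x_i] = \sigma(x_i^\top \beta) \;\iff\; \log\frac{p_i}{1 - p_i} = x_i^\top \beta
$$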
You're thinking we've tried this already, aren't you? How will we get different results?

Isn't this new setup just semantics? Theoretical background is useless in practice, right?

(I've actually heard someone say that to a speaker at a scientific workshop. A shitshow ensued 😆)
Previously, we used the OLS estimator to find the linear predictor's parameter beta. But with our new model setup comes a new way of estimating beta.

Take a look. Compare the results of using the OLS estimator with what we get when we "maximize the likelihood".
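In R, fitting this model by maximum likelihood is a single glm() call. A sketch, reusing the hypothetical data prep and formula from the earlier snippets:

```r
# Logistic regression: Bernoulli response + logit link, fitted by maximum likelihood
glm_fit <- glm(is_male ~ body_mass_g + species + bill_length_mm,
               family = binomial(link = "logit"), data = dat)

probs_glm <- predict(glm_fit, type = "response")  # predicted probabilities
pred_glm  <- ifelse(probs_glm > 0.5, "male", "female")
table(predicted = pred_glm, observed = dat$sex)
```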
Much better results, right? And that's despite having used the same 50% threshold once I predicted probabilities.

This means that maximizing the likelihood delivers a way better estimator. Let's see how that works.
The likelihood function is the product of the densities of the assumed distribution of Y given the predictors (here: Bernoulli). This makes it the joint probability of the observed data.

We find beta by maximizing this function or, equivalently (but easier), its logarithm.
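For the Bernoulli model with p_i = σ(x_iᵀβ), this is the standard textbook form:

$$
L(\beta) = \prod_{i=1}^{n} p_i^{\,y_i} (1 - p_i)^{1 - y_i}, \qquad \ell(\beta) = \log L(\beta) = \sum_{i=1}^{n} \Big( y_i \log p_i + (1 - y_i) \log(1 - p_i) \Big)
$$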
How do we find this maximum? By using the same strategy as for any other function that we want to maximize:

Compute the first derivative and find its root.

In this context, this derivative is also known as the score function. For our logistic model, it works out to s(β) = Σᵢ (yᵢ − pᵢ) xᵢ.
Here, finding a root is not easy: no analytical solution exists. Thus, we'll rely on numerical methods.

A well-known procedure is Newton's method. In each iteration, it follows the tangent line at the current point to where that line hits zero, which usually lands a step closer to the root. (Wikipedia has a nice GIF illustrating this.)
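Here's a minimal, hand-rolled sketch of Newton's method for our logistic model. Applied to the score function, each step uses the Hessian of the log-likelihood. (glm() actually uses the closely related iteratively reweighted least squares; this version has no safeguards and is purely for illustration, reusing dat and glm_fit from above.)

```r
# Newton's method for the logistic log-likelihood (illustrative only)
X    <- model.matrix(~ body_mass_g + species + bill_length_mm, data = dat)
y    <- dat$is_male
beta <- rep(0, ncol(X))

for (iter in 1:25) {
  p     <- as.vector(plogis(X %*% beta))  # current probabilities
  score <- t(X) %*% (y - p)               # gradient of the log-likelihood
  hess  <- -t(X) %*% (X * (p * (1 - p))) # Hessian (rows of X weighted by p(1-p))
  beta  <- beta - solve(hess, score)      # Newton update
}

cbind(newton = as.vector(beta), glm = coef(glm_fit))  # virtually identical
```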
Congrats! You've brushed up on ONE example of GLMs, namely logistic regression. But GLMs wouldn't be general if that were all.

Depending on the assumed distribution and the function that links linear predictor and expectation, GLMs have many names. Here's one more.
Poisson regression is a GLM which assumes that Y follows a Poisson distribution (who would have seen that coming?). The canonical link here is the log, i.e. the mean is the exponential of the linear predictor.

This model is used for count data, and the formulas look very similar to logistic regression.
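A sketch of a Poisson fit in R. Since {palmerpenguins} has no natural count outcome, the data here is simulated just to show the API:

```r
# Hypothetical example: counts driven by two predictors x1 and x2
set.seed(1)
x1 <- rnorm(200)
x2 <- rnorm(200)
counts <- rpois(200, lambda = exp(0.3 + 0.5 * x1 - 0.2 * x2))

# Poisson regression with log link, again fitted by maximum likelihood
pois_fit <- glm(counts ~ x1 + x2, family = poisson(link = "log"))
coef(pois_fit)  # estimates should land near the true values (0.3, 0.5, -0.2)
```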
This raises two questions 🤔

1️⃣ Does this work with any distribution?
2️⃣ How in the world do we choose the link function?

The secret ingredient that has been missing is a concept known as exponential families. It can answer BOTH questions. Isn't that just peachy?
Exponential families are distributions whose density can be rewritten in a *very* special form.

Honestly, this curious form is anything but intuitive. Yet, it is surprisingly versatile and the math just works. If you ask me, that's quite mathemagical. ✨
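One common way to write that special form (the exponential dispersion family from the GLM literature; θ is the natural parameter, φ the dispersion, and b and c are known functions):

$$
f(y \mid \theta, \varphi) = \exp\!\left( \frac{y\,\theta - b(\theta)}{\varphi} + c(y, \varphi) \right)
$$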
It probably doesn't come as a surprise that Bernoulli and Poisson distributions are exponential families (see the table below).
But what may surprise you is this:

The function b plays an extraordinary role: its derivative b' maps the natural parameter to the mean, and inverting b' yields a link function. In fact, that's the canonical choice.
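To make that concrete, here are the textbook ingredients for our two examples (standard results, presumably what the table in the image showed):

Distribution  | natural parameter θ | b(θ)          | mean b'(θ)
Bernoulli(p)  | log(p / (1 − p))    | log(1 + e^θ)  | e^θ / (1 + e^θ) = p
Poisson(λ)    | log(λ)              | e^θ           | e^θ = λ

Inverting b' gives exactly the canonical links we used: the logit for Bernoulli and the log for Poisson.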
Now you know GLMs' ingredients: Exponential families and link functions.

You can find more example distributions on Wikipedia en.wikipedia.org/wiki/Exponenti…

Also, an interesting perspective on what makes exponential families so magical can be found on SE: stats.stackexchange.com/questions/4118…
Bam! We made it. And we've covered

* two popular GLMs and
* mathematical foundations of GLMs

Let me know if you liked this thread by liking the start of the thread below ☺️

And if you want to see more of my content, feel free to follow @rappa753.

See you next time 👋
If you liked this post, you may enjoy my 3-minute newsletter too.

Every week, my newsletter shares insights on
- R & dataviz,
- Shiny and web dev

Reading time: 3 minutes or less

You can join at
alberts-newsletter.beehiiv.com/subscribe

