Tivadar Danka
Dec 27, 2021
Entropy is not the easiest thing to understand.

It is rumored to describe something about information and disorder, but it is unclear why.

What do logarithms and sums have to do with the concept of information?

Let me explain!

↓ A thread. ↓
I have randomly selected an integer between 0 and 31.

Can you guess which one? You can ask as many questions as you want.

What is the minimum number of questions you have to ask to be 100% sure?

You can start guessing the numbers one by one, sure. But there is a better way!
If you ask, "is the number greater than or equal to 16?", you immediately eliminate half of the search space!

Continuing with this tactic, you can find the number for sure in 5 questions.
In other words, we need to take the base two logarithm of 32 to get the number of questions required.

This logic applies to all numbers! If I pick a number between 0 and 𝑛-1, you need log₂ 𝑛 questions (rounded up if 𝑛 is not a power of two) to find it for sure, cutting the possibilities in half with each one.
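Here is the halving strategy as a minimal Python sketch (the function names are my own, just for illustration):

```python
import math

def questions_needed(n):
    # Halving the search space each time needs ceil(log2(n)) yes/no questions.
    return math.ceil(math.log2(n))

def guess(secret, lo=0, hi=31):
    """Find `secret` in [lo, hi] by repeatedly asking 'is it >= mid?'."""
    questions = 0
    while lo < hi:
        mid = (lo + hi + 1) // 2
        questions += 1
        if secret >= mid:   # answer to "is the number >= mid?"
            lo = mid
        else:
            hi = mid - 1
    return lo, questions

print(guess(19))             # -> (19, 5): five questions, as promised
print(questions_needed(32))  # -> 5
```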
Because the answers are yes-or-no questions, we can encode each with a 0 or 1.

If we write down the answers in a row, we effectively encode the numbers in log₂ 𝑛 bits (five bits in our example)!

𝟎: 00000
𝟏: 00001
𝟐: 00010
...
𝟑𝟏: 11111

Each "code" is simply the number in base 2!
No matter which number I pick, five questions are needed to find it.

So, the average number of bits needed is also five.

However, we use a critical assumption here: I pick each number with an equal probability.

What if that is not the case?
Let's say I am picking between 0, 1, and 2, but I pick 0 50% of the time, while 1 and 2 only 25% of the time each.

We should put this into mathematical form!

Let's denote the number I pick with 𝑋. This is a random variable.

How many bits do we need now?

(That is, P(𝑋 = 0) = 1/2, while P(𝑋 = 1) = P(𝑋 = 2) = 1/4.)
We can be more bit-efficient than before! Consider this.

1st question: did you pick 0?
If the answer is yes, the 2nd question is not needed. If not, we proceed!

2nd question: did you pick 1?
No matter what the answer is, we know the solution! Yes implies 1, no implies 2.
Following this idea, we can calculate the average number of bits:

(1/2)·1 + (1/4)·2 + (1/4)·2 = 3/2 = 1.5 bits on average.
(This is just the expected value of the number of bits.

If you didn't understand this step, check out my explanation about the expected value!)
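If you want to verify the arithmetic, here is the same computation in a few lines of Python (a sketch of my own, not from the thread):

```python
# P(X=0) = 1/2 takes 1 question; P(X=1) = P(X=2) = 1/4 take 2 questions each.
outcomes = [(0.5, 1), (0.25, 2), (0.25, 2)]  # (probability, questions needed)

avg_bits = sum(p * bits for p, bits in outcomes)
print(avg_bits)  # 1.5 -- fewer than the 2 bits a fixed-length code would use
```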
Now we are almost there! Let's see the general case.

Suppose I pick between 𝑥₁, 𝑥₂, ..., 𝑥ₙ, and I pick 𝑥ₖ with probability 𝑝ₖ.

As before, the number of questions needed to find 𝑥ₖ is the base two logarithm of 1/𝑝ₖ!

Averaging over all outcomes, weighted by their probabilities, gives the entropy:

H(𝑋) = 𝑝₁ log₂(1/𝑝₁) + ... + 𝑝ₙ log₂(1/𝑝ₙ) = −Σₖ 𝑝ₖ log₂ 𝑝ₖ
So, the entropy of a random variable is simply the average number of bits of information needed to guess its value successfully! Even though the formula is complicated, its meaning is simple.

Entropy is simpler than you thought! (And probably also simpler than what you were taught.)
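If you prefer code to formulas, here is the entropy as a tiny Python function (my own sketch, not part of the original thread):

```python
import math

def entropy(probs):
    # Average number of bits: sum of p * log2(1/p) over outcomes with p > 0.
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.25]))  # 1.5, the skewed three-number game
print(entropy([1 / 32] * 32))      # 5.0, the uniform 0..31 game
```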
Having a deep understanding of math will make you a better engineer. I want to help you with this, so I am writing a comprehensive book about the subject.

If you are interested in the details and beauties of mathematics, check out the early access!

tivadardanka.com/book
A few extra comments!

1. What happens if the logarithm of the probability is not an integer?

Not all questions provide 100% new information. Sometimes, the answer is partially contained in other bits.

Hence, the "amount of new information" is not always an integer.
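To see fractional bits in action, take a biased coin (a quick illustrative computation of mine):

```python
import math

# A coin landing heads 90% of the time:
p_heads, p_tails = 0.9, 0.1
print(math.log2(1 / p_heads))  # ~0.152 bits carried by the likely outcome
print(math.log2(1 / p_tails))  # ~3.322 bits carried by the rare one
print(p_heads * math.log2(1 / p_heads) + p_tails * math.log2(1 / p_tails))
# entropy ~0.469 bits on average
```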
2. Does the base of the logarithm matter?

In general, we can easily swap the base of the logarithms, as shown below.

log_𝑏(𝑥) = log_𝑎(𝑥) / log_𝑎(𝑏)

Thus, swapping bases in the entropy formula is just multiplication by a constant.
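Here is the base swap checked numerically (an illustration of mine, using the natural log as the other base):

```python
import math

x = 10.0
# Change of base: log2(x) = ln(x) / ln(2)
print(math.log2(x))               # ~3.3219
print(math.log(x) / math.log(2))  # same value
# So entropy measured in nats is entropy in bits times ln(2):
# the choice of base only rescales the result.
```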
