PhD school is difficult, but not primarily because of the research.

There are several key choices whose importance students tend to underestimate. Most of them are unrelated to your hard skills.

Here are the most impactful ones. ↓
1. Picking your advisor.

Young researchers usually value fame and prestige over personal relationships. However, your advisor and your labmates will determine your everyday work environment.

Don't sacrifice this for some scientific pedigree.
A healthy relationship with your advisor is essential for your professional performance. Pick someone who is not only a good scientist but a good person as well. Avoid abusive personalities.

Interview students and lab alumni about your prospective advisor if you can.
2. Picking your research topic.

This will set the direction for the rest of your career. There are several aspects to consider.

▪ Are you interested in the field?
▪ Are others interested in the field?
▪ How difficult is it?
▪ How big is the competition?
Find the balance between popularity and what is interesting for you.

For instance, I chose a niche field that I was genuinely interested in, but it was hard to get new results there. This held back my research career for years.
Competition is also good (because it signals interest and keeps you sharp), but too much competition can make it difficult to get results.

Going into extremely competitive fields can set you up for failure. Try to strike a good balance.
3. Picking the problems to work on.

Regarding research problem difficulty, there are two ends of the spectrum. On one end, there are problems so simple that they aren't even worth a publication, and you won't get any awesome scientist points for solving them.
On the other end, some problems are so hard that they have remained unsolved for decades.

Do not - I repeat - do not attempt these as a young scientist. A few succeed, but the vast majority fail and throw away their careers. Don't fall for survivorship bias.
The best strategy is to always keep a difficult problem in the back of your mind, working towards it with smaller problems that you can solve within the foreseeable future.

Once you become familiar with your field, you'll be able to map out this problem hierarchy quickly.
4. Focus on communicating your results.

You can have fantastic results, but no one will care if you can't communicate them properly.

Science is a team sport. If you can raise interest in your work, your opportunities to build relationships will increase exponentially.
Your visibility heavily influences your success as a researcher.

Grants and awards are often given out based on visibility.

Don't automatically assume that people are going to be interested in whatever you do. You have to earn it.
5. Don't take failure personally.

I know, making mistakes hurts. You have to stop letting them get to you.

Fear of failure is one of the biggest demotivators. But every failure is a learning opportunity. Use it.

(But learn from the mistakes of others if possible.)
6. Don't equate your self-worth with your CV.

A shiny CV doesn't mean that you have done something notable. A light CV doesn't mean that you haven't done anything either.

It is easy to "overfit" on building a pretty CV while doing nothing significant.
7. Stop working on weekends and holidays.

I know you think you can do it. Trust me, you can't. If you don't learn how to rest, you are going to burn out very soon. Once you do, it'll be tough to get back on track.
To summarize, as a PhD student, you have quite a challenge ahead of you.

With some smart choices, you can make things much easier for yourself. On top of your research, you don't want to deal with stress and burnout.
Besides technical topics such as mathematics and machine learning, I am passionate about helping students make the best decisions in their life and career.

If you enjoyed this thread, make sure to give me a follow! I frequently post about topics like this.

More from @TivadarDanka

5 Oct
There is one big reason we love the logarithm function in machine learning.

Logarithms help us reduce complexity by turning multiplication into addition. You might not know it, but they are behind a lot of things in machine learning.

Here is the entire story.

🧵 👇🏽
First, let's start with the definition of the logarithm.

The base 𝑎 logarithm of 𝑏 is simply the solution of the equation 𝑎ˣ = 𝑏.

Despite its simplicity, it has many useful properties that we take advantage of all the time.
You can think of the logarithm as the inverse of exponentiation.

Because of this, it turns multiplication into addition. Exponentiation does the opposite: it turns addition into multiplication.

(The base is often assumed to be a fixed constant, so it can be omitted from the notation.)
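These properties are easy to check numerically. A minimal sketch using only Python's standard `math` module:

```python
import math

a, b = 3.7, 42.0

# The logarithm turns multiplication into addition:
# log(a * b) = log(a) + log(b)
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))

# Exponentiation does the opposite, turning addition into multiplication:
# e^(x + y) = e^x * e^y
x, y = 1.5, 2.5
assert math.isclose(math.exp(x + y), math.exp(x) * math.exp(y))

# The definition itself: the base-a logarithm of b solves a^x = b.
base, value = 2.0, 10.0
exponent = math.log(value, base)
assert math.isclose(base ** exponent, value)
```

This product-to-sum property is also why log-likelihoods are summed in practice instead of multiplying raw probabilities, which would quickly underflow.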
4 Oct
As you know, I am working on teaching mathematics in a way that maximizes value for machine learning practitioners.

Do you have any work stories where mathematical knowledge was a genuine advantage?

I would appreciate it if you could share!

I'll start. ↓
As a bioimage analyst, one of my projects involved the pixel-perfect identification of very thin objects: plant seedlings. (Like below.)

This was a classical semantic segmentation problem.

At first, I trained a U-Net model using cross-entropy loss, but it didn't quite work.
The problem was that in the segmentation output, the objects barely appeared at all. My model predicted almost every pixel as background.

With some basic mathematical thinking, I suspected that the problem was caused by the cross-entropy loss.
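The thread is truncated here, but the failure mode it describes (extreme class imbalance drowning out thin foreground objects) is easy to reproduce. A hedged sketch with NumPy; the numbers are made up for illustration, not from the original project:

```python
import numpy as np

# Toy "image" flattened to pixels: 1% foreground (a thin object), 99% background.
# (These counts are invented for illustration.)
n_pixels = 10_000
labels = np.zeros(n_pixels)
labels[:100] = 1.0  # 100 foreground pixels

# A degenerate model that predicts "background" with 99% confidence everywhere.
p_foreground = np.full(n_pixels, 0.01)

def cross_entropy(y, p, w_fg=1.0):
    """Mean pixel-wise binary cross-entropy, optionally weighting foreground."""
    eps = 1e-12  # avoid log(0)
    losses = -(w_fg * y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return losses.mean()

# Unweighted, the all-background prediction looks deceptively good...
print(cross_entropy(labels, p_foreground))           # ≈ 0.056

# ...but weighting foreground pixels by the inverse class frequency
# makes the model pay for ignoring the thin objects.
print(cross_entropy(labels, p_foreground, w_fg=99))  # ≈ 4.57
```

Class-weighted (or Dice-style) losses are a standard remedy for exactly this situation.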
30 Sep
🤔 Should you learn mathematics for machine learning?

Let's do a thought experiment! Imagine moving to a new country without speaking the language or knowing the local way of life. However, you have a smartphone and a reliable internet connection.

How do you start exploring?

1/8
With Google Maps and a credit card, you can do many awesome things there: explore the city, eat in nice restaurants, have a good time.

You can do the groceries every day without speaking a word: just put the stuff in your basket and swipe your card at the cashier.

2/8
After a few months, you'll start to pick up some language as well—simple things, like saying greetings or introducing yourself. You are off to a good start!

There are built-in solutions for common tasks that just work. Food ordering services, public transportation, etc.

3/8
29 Sep
I just released a new chapter for the early access of my book, The Mathematics of Machine Learning!

This week, we are diving deep into the geometry of matrices.

What does this have to do with machine learning? Read on to find out. ↓

tivadar.gumroad.com/l/mathematics-…
Matrices are the basic building blocks of learning algorithms.

Multiplying data vectors by a matrix is equivalent to transforming the feature space. We think about this as a "black box", but there is a lot to discover.

For one, how they change the volume of objects.
This is described by the determinant of the matrix, which captures

• how the transformation scales volume,
• and whether it flips the orientation of the basis vectors.

The determinant is given by the formula below. I am a mathematician, and even I find this intimidating.
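The volume-scaling interpretation is easy to verify numerically. A quick check with NumPy (the specific matrices are just illustrative examples):

```python
import numpy as np

# A 2x2 matrix acting on the plane.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# The unit square has area 1; its image under A is the parallelogram
# spanned by the columns of A, with area |det(A)|.
print(np.linalg.det(A))  # ≈ 6.0, so areas are scaled by a factor of 6

# A negative determinant means the orientation is flipped as well.
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # swaps the two basis vectors
print(np.linalg.det(B))  # ≈ -1.0: area preserved, orientation reversed
```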
21 Sep
You don't need to go to a university to learn machine learning - you can do it from your living room, for completely free.

Here is an extensive list of curated free courses and tutorials, from beginner to advanced. ↓

(Trust me, you want to bookmark this tweet.)
This is how I'll group the courses.

Machine learning
├── Getting started
├── Computer vision
├── NLP
├── Reinforcement learning
└── Applications

Coding
├── Python
├── R
├── Javascript
└── Machine learning frameworks

Let's start!
Machine learning
└── Getting started

1. Neural networks (by @3blue1brown)

youtube.com/playlist?list=…
20 Sep
What do you get when you let a monkey randomly smash the buttons on a typewriter?

Shakespeare's Hamlet, of course. And Romeo and Juliet. And every other possible finite string.

Don't believe me? Keep reading. ↓
Let's start at the very beginning!

Suppose that I have a coin that, when tossed, has a 1/2 probability of coming up heads and a 1/2 probability of coming up tails.

If I start tossing the coin and tracking the result, what is the probability of 𝑛𝑒𝑣𝑒𝑟 having heads?
To answer this, first, we calculate the probability of no heads in 𝑛 tosses. (That is, the probability of 𝑛 tails.)

Since tosses are independent of each other, we can just multiply the probabilities for each toss together.
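In code, the probability of no heads in 𝑛 tosses is just (1/2)ⁿ, and it vanishes as 𝑛 grows. A minimal sketch:

```python
# Probability of getting n tails in a row with a fair coin.
def prob_all_tails(n: int) -> float:
    # Tosses are independent, so the probabilities multiply:
    # (1/2) * (1/2) * ... * (1/2) = (1/2)^n
    return 0.5 ** n

print(prob_all_tails(10))   # 0.0009765625, about 1 in 1000
print(prob_all_tails(100))  # ~7.9e-31, effectively zero

# As n grows, the probability of never seeing heads tends to 0,
# so a head eventually appears with probability 1.
```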
