2/🧵 Allow your approach to be sloppy at first and burn some of your initial time, energy, and data on working out a better direction for later. That's right, you're supposed to start sloppily ON PURPOSE.
3/🧵 Have a phase where the only result you’re after is *an idea of how to design your ultimate approach better.*
4/🧵 In other words, start with a pilot phase where the objective isn't finding answers, it's finding a good approach to finding answers.
5/🧵 That means you're encouraged (ENCOURAGED!) to start with everything your stats classes told you not to do:
6/🧵 Low-quality data: use small sample sizes, synthetic data, and non-randomly sampled data to gain insights about the data collection process itself.
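For instance, a sloppy-on-purpose pilot dataset might look something like this (a minimal Python sketch; every column name and number here is made up purely for illustration):

```python
# A hedged sketch: fabricate a tiny, deliberately messy synthetic dataset
# to stress-test your collection/cleaning pipeline before spending real
# budget on data. Nothing here is meant to support conclusions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)
n = 50  # deliberately tiny pilot sample

pilot = pd.DataFrame({
    "user_id": np.arange(n),                                       # hypothetical schema
    "signup_channel": rng.choice(["ad", "organic", "referral"], size=n),
    "spend": rng.exponential(scale=20.0, size=n).round(2),
})

# Inject the kinds of mess real collection produces, so the pipeline
# meets them early: missing values and duplicated records.
pilot.loc[rng.choice(n, size=5, replace=False), "spend"] = np.nan
pilot = pd.concat([pilot, pilot.sample(3, random_state=0)], ignore_index=True)

# The pilot's only job: does ingestion choke? Are dtypes and ranges sane?
pilot.info()
print(pilot["user_id"].duplicated().sum(), "duplicate ids found")
```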
7/🧵 Rough-and-dirty models: seek an understanding of what the payoff from minimum effort looks like. Start with bad algorithms which you know are only going to give you a benchmark, not your best solution.
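A sketch of what a minimum-effort benchmark can look like (using scikit-learn's dummy baseline on toy data; your real problem will differ):

```python
# A minimal sketch, assuming a generic tabular classification task: fit
# the dumbest reasonable model first so every later model has a
# benchmark to beat.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
quick = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# If a "real" model barely clears the dummy baseline, that's itself a
# pilot-phase finding about how hard the problem is.
print("baseline accuracy:", baseline.score(X_te, y_te))
print("quick model accuracy:", quick.score(X_te, y_te))
```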
8/🧵 Multiple comparisons: instead of picking a single hypothesis test, feel free to throw the kitchen sink at your data to discover signals worth basing your final approach on. Add deadlines and MVP milestones to avoid the trap of infinite polishing, poking, and prodding.
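Here's one way that kitchen-sink scan might look in code (a hedged sketch with planted toy data; the point is generating leads, not results):

```python
# Kitchen-sink exploration: scan many features for group differences to
# generate leads. Because this is many comparisons, treat every hit as a
# hypothesis to re-test on fresh data, never as a finding.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_features = 200, 30
X = rng.normal(size=(n, n_features))
group = rng.integers(0, 2, size=n)   # two arbitrary groups
X[group == 1, 3] += 0.8              # plant one real signal (feature 3)

leads = []
for j in range(n_features):
    t, p = stats.ttest_ind(X[group == 0, j], X[group == 1, j])
    if p < 0.05:
        leads.append((j, round(float(p), 4)))

# With 30 tests at alpha = 0.05, expect ~1-2 false leads even if no
# signal exists at all. That's why Principle 1 below matters.
print("candidate features to study properly later:", leads)
```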
9/🧵 If the statistician in you isn’t screaming yet, I admire your sangfroid. This advice breaks pretty much every rule you learned in class. So why am I endorsing these “bad behaviors”?
10/🧵 Because this is the pilot phase. I’m all about following the standard advice later, but this early phase has different rules.
11/🧵 The important thing is to avoid rookie mistakes by remembering these 2 crucial principles:
12/🧵 Principle 1: Don’t take any findings from the early phase too seriously.
13/🧵 Principle 2: Always collect a clean new dataset when you’re ready for the final version.
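In code, one cheap approximation of Principle 2 (a sketch only; the gold standard is collecting genuinely new data rather than splitting what you already have):

```python
# Wall off a confirmation set BEFORE exploring, so the final version
# runs on data your exploration never touched.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# Explore on one half; lock the other half away until the design is frozen.
X_explore, X_confirm, y_explore, y_confirm = train_test_split(
    X, y, test_size=0.5, random_state=42
)
# ...do all the sloppy pilot-phase work on (X_explore, y_explore)...
# Only once the approach is final do you evaluate on (X_confirm, y_confirm).
```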
14/🧵 You’re using your initial iterative exploratory efforts to inform your eventual approach (which you’ll take just as seriously as the most studious statistician would). The trick is to use exploratory nimbleness to learn what’s worth considering along the way.
15/🧵 If you’re used to the rigidity of traditional statistical inference, it’s time to rediscover the benefits of pilot studies in science and find ways to embed the equivalent into your data science projects.
16/🧵 The key thing to understand about this advice is that
- finding good questions
- finding good answers
- finding good approaches for getting from one to the other
are all different objectives that call for different methods. Sometimes there's homework to do before answers...