Are you a scientist applying ML?
I wrote a tutorial with ready-to-use notebooks to make your life easier!
Let's focus on 3 aspects:
• More Citations
• Easier Review
• Better Collaboration
Let's see how:
First things first:
This was a @EuroSciPy tutorial in 2022.
In the future, a talk recording will be available. Until then, here's the gist:
1. Model Evaluation
2. Benchmarking
3. Model Sharing
4. Testing
5. Interpretability
6. Ablation
#euroscipy
github.com/JesperDramsch/…
📐 Model Evaluation
In science, we want to describe the world.
Overfitting gets in the way of this.
With real-world data, there are many ways to overfit, even if we use a random split and have a validation and test set!
Save yourself the pain!
github.com/JesperDramsch/…
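A minimal sketch of a leak-free three-way split with scikit-learn (the toy arrays stand in for your real data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data standing in for your real dataset: 50 samples, 2 classes.
X, y = np.arange(100).reshape(50, 2), np.arange(50) % 2

# First carve off a held-out test set, then split the rest into
# train and validation. Touch the test set only once, for the
# final reported score.
X_tmp, X_test, y_tmp, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.25, random_state=42, stratify=y_tmp
)
```

Tune on the validation set, report on the test set — never the other way around.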
A machine learning model that isn't evaluated correctly is not a scientific result.
This leads to desk rejections, tons of extra work, or in the worst case retractions and becoming the "bad example".
Especially on:
• Time Data
• Spatial Data
More?
dramsch.net/books
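On time data, a random split leaks the future into training. scikit-learn ships splitters that respect ordering — a sketch with toy data:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(10, 2)  # 10 time-ordered samples

# Each fold trains only on the past and validates on the future.
tscv = TimeSeriesSplit(n_splits=3)
folds = list(tscv.split(X))
for train_idx, val_idx in folds:
    # No future sample ever appears in the training indices.
    assert train_idx.max() < val_idx.min()
```

For spatial data, the analogue is `GroupKFold` with region labels as groups, so nearby points don't straddle the split.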
🔬 Benchmarking
Compare your models using the right metrics and benchmarks.
Here are great examples:
• DummyClassifier
• Benchmark Datasets
• Domain Methods
• Linear Models
• Random Forests
Always ground your model in the reality of science!
github.com/JesperDramsch/…
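A minimal sketch of the dummy baseline idea, on a made-up toy problem:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)  # the signal lives in the first feature

# The dummy baseline predicts the majority class; any real model
# should clearly beat it on this separable toy problem.
dummy = DummyClassifier(strategy="most_frequent").fit(X, y)
model = LogisticRegression().fit(X, y)

dummy_score = dummy.score(X, y)
model_score = model.score(X, y)
```

If your fancy model barely beats the dummy (or a linear model), that's a result worth knowing before Reviewer 2 does.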
Proper benchmarks make stronger papers!
Metrics on their own don't always paint a full picture.
Use benchmarks to tell a story of "how well your model should be doing" and disarm many comments by Reviewer 2 before they're even written down.
🤝 Model Sharing
Sharing models is great for reproducibility and collaboration.
Export your models and fix the random seed for paper submissions.
Share your dependencies in a requirements.txt or env.yml so other researchers can use & cite your work!
github.com/JesperDramsch/…
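A sketch of the export-with-fixed-seed pattern using joblib (the path and toy data are placeholders):

```python
import os
import tempfile

import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Fix the random seed so the submitted results are reproducible.
model = RandomForestClassifier(n_estimators=20, random_state=42).fit(X, y)

# Export the fitted model next to the paper's code...
path = os.path.join(tempfile.gettempdir(), "model.joblib")
joblib.dump(model, path)

# ...so collaborators (and reviewers) can reload exactly what you ran.
reloaded = joblib.load(path)
```

Pair the exported model with a pinned `requirements.txt`, since pickled estimators are tied to the library version that wrote them.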
Good code is easy to use and cite!
Use these libraries:
• flake8 for linting
• black for formatting
Write docstrings for docs!
(@code has a fantastic extension called autoDocstring)
Provide a @Docker container for ultimate reproducibility.
Your peers will thank you.
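A sketch of what a NumPy-style docstring can look like (the function and its names are made up for illustration):

```python
def rmse(y_true, y_pred):
    """Root-mean-square error between observations and predictions.

    Parameters
    ----------
    y_true : sequence of float
        Observed values.
    y_pred : sequence of float
        Predicted values.

    Returns
    -------
    float
        The root-mean-square error.
    """
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5
```

Docstrings in this shape render directly into documentation with tools like Sphinx.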
⚗️ Testing
I know code testing in science is hard.
Here are ways that make it incredibly easy:
• Doctests for small examples
• Data Tests for important samples
• Deterministic tests for methods
github.com/JesperDramsch/…
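Doctests are the lowest-effort entry point: the example in the docstring *is* the test. A minimal sketch (function name and values are made up):

```python
import doctest

def normalize(values):
    """Scale values linearly to the [0, 1] range.

    >>> normalize([0.0, 5.0, 10.0])
    [0.0, 0.5, 1.0]
    """
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Running the doctests checks that the docstring example still holds.
failures = doctest.testmod().failed
```

The same docstring doubles as documentation, so the example can never silently go stale.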
You can make your own life and that of collaborators 1000 times easier!
Use input validation.
Pandera is a nice little tool that lets you define what your input data should look like. Think:
• Data Ranges
• Data Types
• Category Names
It's honestly a game changer and easy!
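Pandera expresses these checks declaratively as a `DataFrameSchema`. To show the idea without extra dependencies, here is a dependency-free sketch of the same concept (all names and ranges are illustrative):

```python
def validate_row(row, schema):
    """Check one record against a simple column schema.

    schema maps column name -> (type, numeric range tuple or category set).
    """
    for col, (typ, allowed) in schema.items():
        value = row[col]
        if not isinstance(value, typ):
            raise TypeError(f"{col}: expected {typ.__name__}, got {type(value).__name__}")
        if isinstance(allowed, tuple):  # numeric range check
            lo, hi = allowed
            if not lo <= value <= hi:
                raise ValueError(f"{col}: {value} outside [{lo}, {hi}]")
        elif value not in allowed:      # category name check
            raise ValueError(f"{col}: unknown category {value!r}")

schema = {
    "temperature": (float, (-90.0, 60.0)),  # plausible range in °C
    "station": (str, {"north", "south"}),   # known category names
}

validate_row({"temperature": 12.5, "station": "north"}, schema)  # passes
```

Failing loudly at load time beats debugging a silently wrong result three figures later.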
🧠 Interpretability
This is a great communication tool for papers and meetings with domain scientists!
No one cares about your mean squared error!
How does the prediction depend on changing your input values?!
What features are important?!
github.com/JesperDramsch/…
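One answer to "what features are important?" is permutation importance: shuffle one input at a time and see how much the score drops. A sketch on made-up data where only the first feature carries signal:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=300)  # only feature 0 matters

model = RandomForestRegressor(n_estimators=30, random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much the score drops;
# a large drop means the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
importances = result.importances_mean
```

"The model leans almost entirely on feature 0" is a sentence a domain scientist can react to — unlike a mean squared error.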
✂️ Ablation Studies
You know it. I know it.
Data science is trying a lot and finding what works.
It's iterative!
Use ablation studies to switch off components in your solution to evaluate the effect on the final score!
This kind of care shines in a paper!
github.com/JesperDramsch/…
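The simplest ablation loop: drop one component (here, a feature) at a time, refit, and record the score change. A sketch on toy data with made-up coefficients:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Score with every component switched on...
full_score = LinearRegression().fit(X, y).score(X, y)

# ...then ablate one feature at a time and record the drop.
ablation = {}
for i in range(X.shape[1]):
    X_ablated = np.delete(X, i, axis=1)
    score = LinearRegression().fit(X_ablated, y).score(X_ablated, y)
    ablation[f"without_feature_{i}"] = full_score - score
```

The resulting table ("removing X costs us Y points of score") is exactly the evidence reviewers ask for when you claim a component matters.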
The creation of this tutorial was supported by @SoftwareSaved.
If you made it all the way down here, you might make a great SSI Fellow and get £3,000 for this kind of work too!
In doubt? Read "would I even fit in?!":
software.ac.uk/blog/2022-08-0…
