Just 3 days ago, I had the pleasure of watching the #rstudioconf2022 kick off.
I've been attending since 2018 and watching even longer than that.
And I was just a normal spectator in the audience, until this happened.
@topepos and @juliasilge's keynote showcased the open source work their team has done to build #tidymodels, the best machine learning ecosystem in R.
And then they brought this slide up.
Max and Julia then talked about how community members have been expanding the ecosystem:
- textrecipes for text processing
- censored for survival modeling
- stacks for model ensembles
And then they announced me and my work on Modeltime for Time Series!!!
I had no clue this was going to happen.
Just a spectator in the back.
My friends to both sides went nuts. Hugs, high-fives, and all.
My students in my Slack channel went even more nuts.
Throughout the rest of the week, I was on cloud nine.
My students who were at the conference introduced themselves.
Much of our discussion centered on Max & Julia's keynote and the exposure modeltime got.
And none of this would be possible without the support of one company: RStudio / Posit.
So, I'm honored to be part of something bigger than just a programming language.
And if you'd like to learn more about what I do, I'll share a few links.
The first is my modeltime package for #timeseries.
This has been a 2+ year passion project to build the premier time series forecasting system in R.
It now has multiple extensions including ensembles, resampling, deep learning, and more.
Forecasting time series is what made me stand out as a data scientist.
But it took me 1 year to master ARIMA.
In 1 minute, I'll teach you what took me 1 year.
Let's go. 🧵
1. ARIMA and SARIMA are statistical models for forecasting time series data, where the goal is to predict future points in the series. SARIMA extends ARIMA with seasonal terms.
2. Business Uses: I got my start with ARIMA using it for sales demand forecasting. But ARIMA and forecasting are also used heavily in econometrics, finance, retail, energy demand, and any situation where you need to know the future based on historical time series data.
Polars is a fast and efficient DataFrame library designed for data analysis and manipulation in Rust and Python.
It is built to provide high-performance data processing capabilities, often outperforming traditional libraries like pandas, especially with large datasets.
1. Performance: Polars is designed with performance in mind, leveraging Rust's speed and safety to handle large datasets efficiently.
XGBoost is now the #1 must-have algorithm in my data science toolkit.
But for years, I had no clue what I was doing. In 3 minutes, I'll share 3 months of research (business case included).
Let's go: 🧵
1. XGBoost, which stands for Extreme Gradient Boosting, is an advanced implementation of the gradient boosting machine (GBM) algorithm. It was developed to optimize both computational speed and model performance.
2. Gradient Boosting Machine (GBM): GBMs are an ensemble approach that combines multiple weak learners (typically decision trees) to create a strong predictive model.
Bayes' Theorem is a fundamental concept in data science.
But it took me 2 years to understand its importance.
In 2 minutes, I'll share my best findings over the last 2 years exploring Bayesian Statistics.
Let's go.
1. Background: Bayes' paper, "An Essay towards solving a Problem in the Doctrine of Chances," was published in 1763, two years after Bayes' death. In it, Bayes addressed the problem of inverse probability, which is the basis of what is now known as Bayesian probability.
2. Bayes' Theorem: Bayes' Theorem provides a mathematical formula to update the probability for a hypothesis as more evidence or information becomes available. It describes how to revise existing predictions in light of new evidence, a process known as Bayesian inference.