Just 3 days ago, I had the pleasure of watching the #rstudioconf2022 kick off.
I've been attending since 2018 and watching even longer than that.
And, I was just a normal spectator in the audience until this happened.
@topepos and @juliasilge's keynote showcased all of the open-source work their team has been doing to build #tidymodels, the best machine learning ecosystem in R.
And then they brought this slide up.
Max and Julia then proceeded to talk about how the community members have been working on expanding the ecosystem.
- textrecipes for text preprocessing
- censored for survival modeling
- stacks for ensembles
And then they announced me and my work on Modeltime for Time Series!!!
I had no clue this was going to happen.
Just a spectator in the back.
My friends to both sides went nuts. Hugs, high-fives, and all.
My students in my slack channel went even more nuts.
Throughout the rest of the week, I was on cloud nine.
My students who were at the conf introduced themselves.
Much of our discussion centered on Max & Julia's keynote and the exposure that modeltime got.
And all of this wouldn't have been possible without the support of this company: RStudio / Posit.
So, I'm honored to be part of something bigger than just a programming language.
And if you'd like to learn more about what I do, I'll share a few links.
The first is my modeltime package for #timeseries.
This has been a 2-year+ passion project for building the premier time series forecasting system.
It now has multiple extensions including ensembles, resampling, deep learning, and more.
A Python Library for Time Series using Hidden Markov Models.
Let me introduce you to hmmlearn.
1. Hidden Markov Models
A Hidden Markov Model (HMM) is a statistical model for a sequence of observable events generated by an underlying process that is not directly visible. "Hidden states" influence the observed data, but you only ever see the results of those states, never the states themselves.
2. HMM for Time Series with hmmlearn
hmmlearn implements Hidden Markov Models (HMMs) in Python.
We can use HMMs for time series. Example: using an HMM to understand earthquake activity over time.
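Here's a minimal sketch of that idea with hmmlearn. The synthetic two-regime count series, the choice of 2 hidden states, and the GaussianHMM settings are all assumptions for illustration, not the original earthquake analysis:

```python
import numpy as np
from hmmlearn import hmm

# Synthetic "counts per period" series with two regimes
# (an illustrative stand-in for something like annual earthquake counts).
rng = np.random.default_rng(42)
quiet = rng.poisson(lam=15, size=50)
active = rng.poisson(lam=30, size=50)
counts = np.concatenate([quiet, active, quiet]).reshape(-1, 1).astype(float)

# Fit a 2-state Gaussian HMM; hmmlearn expects shape (n_samples, n_features).
model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=200, random_state=42)
model.fit(counts)

# Decode the most likely hidden-state sequence (Viterbi) for each time step.
hidden_states = model.predict(counts)
print(model.means_.ravel())   # estimated mean level of each regime
print(hidden_states[:10])     # which regime each observation belongs to
```

The hidden states here act as "regimes" (quiet vs. active), which is exactly the kind of structure you can't see directly in the raw series.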
❌ Move over, Power BI. There's a new AI analyst in town.
💡Introducing ThoughtSpot.
1. AI Analyst
ThoughtSpot’s Spotter is an AI analyst that uses generative AI to answer complex business questions in natural language, delivering visualizations and insights instantly.
It supports iterative querying (e.g., “What’s next?”) without predefined dashboards.
2. Self-Service Analytics
Unlike Tableau and Power BI, which rely on structured dashboards, ThoughtSpot emphasizes self-service analytics with a search-based interface, making it accessible to non-technical users.
Its AI-driven approach feels like “ChatGPT for data.”
Top 7 most important statistical analysis concepts that have helped me as a Data Scientist.
This is a complete 7-step beginner ROADMAP for learning stats for data science. Let's go:
Step 1: Learn These Descriptive Statistics
Mean, median, mode, variance, and standard deviation. These summarize your data and reveal its variability. They are key for any data scientist trying to understand what's actually in a data set.
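A quick sketch of computing these with numpy and the standard library (the numbers are made up for illustration):

```python
import numpy as np
from statistics import mode

# Toy data set: illustrative numbers only
x = np.array([12, 15, 15, 18, 21, 24, 30, 45])

print("mean:", np.mean(x))
print("median:", np.median(x))
print("mode:", mode(x.tolist()))        # most frequent value
print("variance:", np.var(x, ddof=1))   # sample variance
print("std dev:", np.std(x, ddof=1))    # sample standard deviation
```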
Step 2: Learn Probability
Know your distributions (Normal, Binomial) & Bayes' Theorem. They are the backbone of modeling and reasoning under uncertainty. The Central Limit Theorem is a must, too.
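A minimal simulation sketch of the Central Limit Theorem with numpy (the exponential population and the sample sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A population that is clearly NOT normal (exponential, heavily skewed).
population = rng.exponential(scale=2.0, size=100_000)

# Draw many samples and record each sample's mean.
sample_means = [rng.choice(population, size=50).mean() for _ in range(2_000)]

# CLT: the distribution of sample means is approximately normal,
# centered near the population mean with std ~ sigma / sqrt(n).
print("population mean:", population.mean())
print("mean of sample means:", np.mean(sample_means))
print("std of sample means:", np.std(sample_means))
print("sigma / sqrt(n):", population.std() / np.sqrt(50))
```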
🚨 BREAKING: IBM launches a free Python library that converts ANY document to data
Introducing Docling. Here's what you need to know: 🧵
1. What is Docling?
Docling is a Python library that simplifies document processing: it parses diverse formats, including advanced PDF understanding, and provides seamless integrations with the gen AI ecosystem.
2. Document Conversion Architecture
For each document format, the document converter knows which format-specific backend to employ for parsing the document and which pipeline to use for orchestrating the execution, along with any relevant options.
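Here's a minimal sketch of that conversion flow. The calls follow Docling's documented quick-start, but treat the details as an assumption and check the current docs; "report.pdf" is just a placeholder path:

```python
from docling.document_converter import DocumentConverter

# Placeholder path: point this at any supported file (PDF, DOCX, PPTX, HTML, ...).
source = "report.pdf"

# The converter picks the format-specific backend and pipeline automatically.
converter = DocumentConverter()
result = converter.convert(source)

# Export the unified document representation, e.g. as Markdown for a gen AI pipeline.
print(result.document.export_to_markdown())
```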
The concept that helped me go from bad models to good models: Bias and Variance.
In 4 minutes, I'll share 4 years of experience in managing bias and variance in my machine learning models. Let's go. 🧵
1. Generalization:
Bias and variance control your model's ability to generalize to new, unseen data, not just the data it was trained on. The goal in machine learning is to build models that generalize well. To do that, I manage bias and variance.
2. Low vs High Bias:
Models with low bias are usually complex and can capture the underlying patterns in the data very well; they are flexible enough to fit the training data closely.
Models with high bias are overly simple and cannot capture the complexity in the data; they often underfit, meaning they perform poorly even on the data they were trained on.
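A small sketch of high-bias vs. low-bias models using scikit-learn polynomial regression on synthetic data (the sine-plus-noise data and degrees 1 and 15 are arbitrary choices to exaggerate the effect):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic nonlinear data: y = sin(x) + noise
rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 6, size=80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=80)

for degree in (1, 15):
    # degree=1 -> high bias (underfits); degree=15 -> low bias, high variance (overfits)
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"degree={degree:>2}  CV MSE={-scores.mean():.3f}")
```

Comparing cross-validated error across the two degrees is a quick way to see underfitting and overfitting side by side.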