BIG NEWS: #ChatGPT breaks the #Python vs #R barrier in Data Science!

Data science teams everywhere rejoice.

A mind-blowing thread (with a FULL ChatGPT prompt walkthrough). 🧵

#datascience #rstats
It's NOT R vs. Python ANYMORE!

This is one example of how ChatGPT can speed up data science & get R & Python people working together.

(it blew my mind)
This example combines #R, #Python, and #Docker.

I created this example in under 10 minutes from start to finish.
I’m an R guy.

And I prefer doing my business research & analysis in R.

It's awesome. It has:

1. Tidyverse - data wrangling + visualization
2. Tidymodels - machine learning
3. Shiny - apps
But the rest of my team prefers Python.

And they don't like R... it's just weird to them.

So I wanted to see if I could show them how we could work together...
Let’s start with a prompt.

I asked ChatGPT to find the dataset I used for this example.
...ChatGPT found it...
...and gave me this code to read the data.
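(That code lives in a screenshot, so here's a rough base-R sketch of that kind of read; the file name is my placeholder, not the thread's actual dataset:)

# Base R: read a CSV into a data.frame (placeholder file name)
data <- read.csv("customer_churn.csv", stringsAsFactors = FALSE)
head(data)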
I prefer the tidyverse, so I asked ChatGPT to update the code.
That looks better.
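(Also a screenshot, but the tidyverse version of the same read would look roughly like this:)

# Tidyverse: readr returns a tibble with cleaner type parsing
library(tidyverse)

data <- read_csv("customer_churn.csv")
glimpse(data)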
With the data in hand, it’s time for some Data Science.

I asked this simple question.
ChatGPT's response was impressive.
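(Both the question and the answer are screenshots. Purely as a sketch of the kind of tidymodels code ChatGPT hands back for a "build me a model" prompt; the target column and model choice are my assumptions:)

# Tidymodels sketch: split the data, fit a model, score it (all names assumed)
library(tidymodels)

set.seed(123)
split <- initial_split(data, prop = 0.8, strata = churn)  # assumes a factor column "churn"
train <- training(split)
test  <- testing(split)

model <- logistic_reg() %>%
  set_engine("glm") %>%
  fit(churn ~ ., data = train)

predict(model, test, type = "prob") %>%
  bind_cols(test) %>%
  roc_auc(truth = churn, .pred_yes, event_level = "second")  # assumes levels "no"/"yes"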
But, even though I'm an R guy, my team uses Python for deployment…

In the past, that was a huge problem.

(resulting in days of translating from R to Python with Google and Stack Overflow)
But now, that's 1 minute of effort with ChatGPT.

Can I show you?
I asked ChatGPT to convert the R script to Python...
And in 10 seconds, ChatGPT produced this Python code with pandas and scikit-learn.
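(The converted script is a screenshot too, but the thread names the libraries: pandas + scikit-learn. A sketch in the same spirit, with the same assumed columns:)

# Python sketch: pandas for the data, scikit-learn for the model
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

data = pd.read_csv("customer_churn.csv")          # placeholder file name

X = pd.get_dummies(data.drop(columns=["churn"]))  # one-hot encode predictors
y = (data["churn"] == "yes").astype(int)          # assumes "yes"/"no" labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=123, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))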
ChatGPT did in 10 seconds something that would have taken me 2 hours.

But let’s continue.

The reason we had to convert to Python was "deployment."

Deployment is just a fancy word for letting others access my model so they can use it on demand.
So I asked ChatGPT to deploy it.
And ChatGPT made me a Python API using FastAPI.
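(The API code is a screenshot as well. A minimal FastAPI sketch in that direction; the endpoint, field names, and model file are my assumptions:)

# app.py - FastAPI sketch that serves the saved scikit-learn model
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")  # assumes the fitted model was saved with joblib.dump

class Features(BaseModel):
    tenure: float            # placeholder fields; the real ones depend on the dataset
    monthly_charges: float

@app.post("/predict")
def predict(features: Features):
    row = [[features.tenure, features.monthly_charges]]
    prob = model.predict_proba(row)[0][1]
    return {"churn_probability": float(prob)}

Run it locally with: uvicorn app:app --reload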
But this code is useless…

…without a Docker environment.

So I asked ChatGPT to make one.
And ChatGPT delivered the Dockerfile for my Docker environment.
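(The actual Dockerfile is a screenshot; a typical one for a FastAPI app looks roughly like this, with my assumed file names:)

# Dockerfile sketch: container for the FastAPI prediction service
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py model.joblib ./
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]

Build and run with "docker build -t churn-api ." and "docker run -p 8000:8000 churn-api".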
So in under 10 minutes, I had ChatGPT:

1. Make my research script in R.

2. Create my production script in Python for my team.

3. And create the API + Dockerfile to deploy it.
But when I showed my Python team, instead of being excited...

...they were worried.

And I said, "Listen. There's nothing to be afraid of."

"ChatGPT is a productivity enhancer."

They didn't believe me.
My Conclusion:

You have a choice. You can rule AI.

Or, you can let AI rule you.

What do you think the better choice is?
If you want help, join me for a free #ChatGPT for #DataScientists workshop on April 26th, and I will help you rule AI.

What's the next step?

👉Register Here: us02web.zoom.us/webinar/regist…
