Deploying GPT-like language models behind a chatbot is tricky.

You might wonder
• How to access the model?
• Where to host the bot?

In this 🧵 I walk you through how easily I deployed a GPT-J-6B model by #EleutherAI on a #Telegram bot with @huggingface and @Gradio.

For FREE 🚀
By the end of this 🧵, you’ll have your very own Telegram bot that can query the GPT-J model with any text you send it 👇
🤖Token From the BotFather

To create a bot, you must have a Telegram account.

Next, get a TOKEN from the BotFather. This TOKEN allows you to access the bot.

Keep this TOKEN private🤫. Anyone with this TOKEN can access your bot.

🐍 Python Telegram Bot

Next, install a Telegram bot wrapper library👉python-telegram-bot (PTB).

Now, we can code our bot using Python!

github.com/python-telegra…
🤯With PTB, you can run your own bot with only 8 lines of code!
👋 Running that short script gives you a simple bot that replies "hello" to the user
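The original tweet showed the code as an image, which didn't survive the unroll. Here's a minimal sketch in the style of python-telegram-bot's v13 sync API (the `TELEGRAM_TOKEN` variable name and `make_reply` helper are my own choices, not from the thread):

```python
import os

def make_reply(user_text):
    # Pure reply logic, kept separate so it can be tested without Telegram.
    return "hello"

def run_bot(token):
    # Requires python-telegram-bot (v13-style sync API) and a real BotFather token.
    from telegram.ext import Updater, MessageHandler, Filters

    updater = Updater(token)
    updater.dispatcher.add_handler(
        MessageHandler(
            Filters.text & ~Filters.command,
            lambda update, context: update.message.reply_text(
                make_reply(update.message.text)),
        ))
    updater.start_polling()  # keep polling Telegram for new messages
    updater.idle()

if os.environ.get("TELEGRAM_TOKEN"):
    run_bot(os.environ["TELEGRAM_TOKEN"])
```

Set the `TELEGRAM_TOKEN` environment variable to the token the BotFather gave you, run the script, and send the bot a message.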
Now, how do we access the GPT-J model?

Enter👉 Hugging Face Hub.

Hugging Face Hub is a central place where anyone can share their models, datasets, and app demos.

The GPT-J-6B model is on the Hub! Anyone can use it.

huggingface.co/EleutherAI/gpt…
Let's create a @gradio demo on the Hub to interact with the GPT model.

You can create your own or use my demo. Play around with some text as the input to the GPT-J model.

Here's the demo app 👇
huggingface.co/spaces/dnth/gp…
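A Gradio demo like this can be tiny. A sketch, assuming the 2022-era Gradio API (`gr.Interface.load`, which pulls a Hub-hosted model's hosted inference API into an interface; newer Gradio releases call this `gr.load`):

```python
import os

MODEL_ID = "EleutherAI/gpt-j-6B"  # the model from the thread

def hub_path(model_id):
    # Gradio loads Hub-hosted models via a "huggingface/" prefix.
    return f"huggingface/{model_id}"

def build_demo():
    import gradio as gr  # imported lazily; only needed when serving the demo
    # gr.Interface.load was the 2022-era API; newer versions use gr.load.
    return gr.Interface.load(hub_path(MODEL_ID))

if os.environ.get("LAUNCH_DEMO"):
    build_demo().launch()
```

Calling `build_demo().launch()` serves a text-in, text-out page backed by GPT-J-6B, with no model weights on your machine.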
With the @gradio app, we also gain access to an HTTP endpoint that lets us query the GPT model from elsewhere!

I've used this feature to deploy large models on an Android app.

Now, all we have to do is call the endpoint from the Telegram bot!

For example 👇
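The example image didn't survive the unroll, so here's a sketch of that call. It assumes the era's Gradio REST convention: POST a `{"data": [...]}` payload to the Space's `/api/predict` route, and read the reply out of a `"data"` list. The URL below is a placeholder, not the real Space URL:

```python
import json
import urllib.request

# Placeholder -- substitute your own Space's /api/predict endpoint.
API_URL = "https://hf.space/embed/your-user/your-space/+/api/predict"

def build_payload(prompt):
    # Gradio's predict route expects inputs wrapped in a "data" list.
    return {"data": [prompt]}

def parse_response(body):
    # ...and returns outputs the same way.
    return body["data"][0]

def query_gpt(prompt, url=API_URL):
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_response(json.load(resp))
```

Wired into the bot, the message handler simply replies with `query_gpt(update.message.text)` instead of a canned "hello".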
🕹Putting everything together, we now have a Telegram bot that can query the GPT model!
Just one problem 👉 we must keep this script running 24/7 on a computer to keep the bot alive.

Is there a better way?🤔
Yes! I recently discovered you can host your bot on a Hugging Face Space! 🤫

All you have to do is create a @Gradio app, make a requirements.txt file and upload the above script!
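For reference, a plausible `requirements.txt` for such a Space (the exact version pin is my guess; Gradio itself comes from the Space's SDK setting rather than this file):

```
python-telegram-bot==13.12
```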

Here's my repo for reference 👇
huggingface.co/spaces/dnth/pt…
You don't want to expose your TOKEN in the source code in your repo.

Instead, store the TOKEN as a secret in an environment variable.
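In code that looks like this (`TELEGRAM_TOKEN` is whatever name you give the secret in your Space's settings):

```python
import os

def get_token(env_name="TELEGRAM_TOKEN"):
    # Read the bot token from an environment variable (a Space "secret")
    # so it never appears in the repo. Fail loudly if it's missing.
    token = os.environ.get(env_name)
    if not token:
        raise RuntimeError(f"Set the {env_name} environment variable")
    return token
```

On a Hugging Face Space, secrets you add in the settings page show up to your script as ordinary environment variables.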
The end result 👉 a 24/7 working Telegram bot that has access to the GPT-J-6B model 🥳

For FREE 🚀
😍 That’s a wrap!

In this 🧵 I showed you how you can utilize any model on the @huggingface Hub and deploy it on a Telegram bot.

There are tons of models on the Hub. What models are you thinking of? #dalle? CLIP? Try it!

Link to my bot 👉 t.me/ptbgptbot
Or, just for fun, replace the GPT-J-6B model with a GPT-Neo model that paraphrases texts.

huggingface.co/spaces/dnth/gp…
🙏 I hope you’ve learned something from this 🧵. I'm curious what other bots you'd create with this. Tag me in your Tweets!

💡 If you find value in this post, consider following me for more bite-size deployment tips like this!

🖥 Details in the blog post
dicksonneoh.com/portfolio/depl…
🤒 Lastly, I need your help.

During the pandemic, my day job was impacted.

I’m now seeking remote employment opportunities as a Data Scientist or Machine Learning Engineer.

If you know of any openings, please tag or connect me. I'll be forever grateful 🙏
Or help me re-tweet this post so that it will reach the right hiring manager.

Sincerely, thank you.
