One very interesting task in the NLP field is text generation.
There are very advanced techniques and a lot of research on it, and even businesses based solely on it!
But how does it work?
[I guarantee it's a much better read than doom scrolling!!!]
1/11🧵
Let's think: what would a model have to do to generate text?
The rationale is: as humans, we form sentences by trying to create a sequence of words that makes sense.
The less random this sequence looks, the better the output text is and the closer it is to human-like.
2/11🧵
Here is where ML can help.
A model should learn how to combine the words the best way possible.
The simplest way to teach this is: given a sentence, hide the last word and let the model try to guess it.
The loss function measures how good the model's guess is.
3/11🧵
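To make that concrete, here is a minimal sketch of the "hide the last word, guess it" idea, using a count-based bigram table instead of a neural network (the tiny corpus and the function names are made up for illustration; the loss is the usual negative log-probability of the true word):

```python
from collections import Counter, defaultdict
import math

# Tiny stand-in corpus (hypothetical training text).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat ate the fish",
]

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def guess_last_word(sentence_without_last):
    """Given a sentence with the last word hidden, guess it."""
    prev = sentence_without_last.split()[-1]
    guess, _ = follows[prev].most_common(1)[0]
    return guess

def loss(sentence_without_last, true_word):
    """Negative log-probability of the true word: lower = better guess."""
    prev = sentence_without_last.split()[-1]
    counts = follows[prev]
    prob = counts[true_word] / sum(counts.values())
    return -math.log(prob)

print(guess_last_word("the dog sat on the"))          # most frequent follower of "the"
print(loss("the dog sat on the", "mat"))              # penalty if the truth was "mat"
```

A real model replaces the count table with a neural network, but the training signal is exactly this: guess the hidden word, get penalized in proportion to how unlikely you thought the truth was.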
This is related to what's called n-grams.
You can do it by word, by letter, by phoneme, etc.
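Extracting n-grams is just sliding a window of size n over a sequence; the same helper works by word or by character (toy sentence chosen for illustration):

```python
def ngrams(tokens, n):
    """All contiguous runs of n items from a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

sentence = "text generation is fun"

word_bigrams = ngrams(sentence.split(), 2)
# [('text', 'generation'), ('generation', 'is'), ('is', 'fun')]

char_trigrams = ngrams(list("text"), 3)
# [('t', 'e', 'x'), ('e', 'x', 't')]
```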
Generating text is more than a technical challenge.
The results of a model can be used for good and there are already startups that are based on GPT-3!
One interesting consequence of huge text-generation models: they are trained using the internet as a source. The more text these models generate that gets published on sites, the more they will end up using their own output as training material later!
👀🤔
10/11🧵
Sorry for the long thread but this subject is fascinating for me!
And no, this thread wasn't automatically generated by ML…. or was it?
🤖🤔👀😱
11/11🧵
Sometimes you need to create your own model for your specific data corpus (e.g. legal, scientific, or medical texts)
To create your own model, AutoML Natural Language can help you:
2/4🧵
If you want to build everything from scratch, then you'll need:
• a language embedding (like BERT, ELMo, USE), and #TFHub has all you need
• a dataset and this github.com/juand-r/entity… can help you find one
Encoding text as numbers is a very important part of NLP: the better this is done, the better the possible results!
Word embeddings work, but they don't carry the full context of the sentence.
This is where BERT comes in
But what is BERT?
1/9🧵
When we do word embedding, both sentences
• They are running a test
• They are running a company
will have very similar embeddings, even though the meanings of the two sentences are very different. Without this context, a model using this encoding will be blind to it.
2/9🧵
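You can see the problem with a toy version of static (context-free) embeddings: each word gets ONE fixed vector, so "running" contributes the same vector to both sentences, and averaging over words makes the two sentence embeddings nearly identical. (The vectors and helper names below are made up purely for illustration.)

```python
import math

# Toy static word vectors -- hypothetical values, one fixed vector per word.
vectors = {
    "they":    [0.1, 0.9, 0.0],
    "are":     [0.2, 0.8, 0.1],
    "running": [0.9, 0.1, 0.3],   # same vector in BOTH sentences
    "a":       [0.0, 0.1, 0.0],
    "test":    [0.3, 0.2, 0.7],
    "company": [0.4, 0.2, 0.6],
}

def sentence_embedding(sentence):
    """Average the static word vectors -- no context is used."""
    words = sentence.lower().split()
    dims = len(next(iter(vectors.values())))
    return [sum(vectors[w][d] for w in words) / len(words) for d in range(dims)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: math.sqrt(sum(a * a for a in x))
    return dot / (norm(u) * norm(v))

sim = cosine(sentence_embedding("They are running a test"),
             sentence_embedding("They are running a company"))
print(sim)  # very close to 1.0 despite the different meanings
```

A contextual model like BERT instead produces a different vector for "running" depending on its neighbors, which is exactly what the next tweets get at.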
This is where Bidirectional Encoder Representations from Transformers (BERT) comes into play!
It is a Transformer-based network created in 2018 that
takes into account the context in which a word occurs. For the previous example, it gives very different embeddings.
To make Apps with Magical User Experiences, you need to get all the performance possible from the hardware.
From the on-device ML perspective, you can achieve that using the TFLite Delegates.
They enable you to access the power of HW acceleration.
1/6🧵
Your phone's CPU is usually very fast, but as a general-purpose processor it's not optimized for the heavy math that ML needs.
Like their big brothers (servers 🤓), phones also have more specialized chips better suited for ML, the most popular being GPUs.
2/6🧵
Another popular accelerator is the Qualcomm Hexagon DSP, which has shown a 75% reduction in power consumption.
On the Apple side, you can use the Core ML delegate to access the Neural Engine processor on newer iPhones and iPads, which can give huge boosts in performance!