A paper on "AlphaFold-multimer", a version of AlphaFold that works on protein complexes, was released by @DeepMind.

Accurately predicted structures can lead to a better understanding of the function of such protein complexes, which underpin many biological processes!

#DeepLearning 1/4
Before "AlphaFold-multimer", people discovered that AlphaFold can predict complexes if you connect them with a long linker (this tweet was cited in the above paper!) 2/4
The new model, which includes various adjustments to handle larger protein complex structures, shows improved performance over this linker approach as well as other existing approaches 3/4
Code and weights will be released soon... I am very excited to see what this will enable in the field of structural biology! 4/4

#AI #MachineLearning #ArtificialIntelligence


More from @iScienceLuvr

22 Sep
In my blog post about GitHub Copilot/Codex (tmabraham.github.io/blog/github_co…), I pointed out its lack of knowledge of newer libraries like @fastdotai v2. When I tested @OpenAI Codex yesterday, it provided an almost-working example of fastai v2 code (the regex was off by one character😛)
A few observations:
1. You have to specifically ask for fastai v2 code, but then the import needs to be changed from "fastai2.vision.all" to "fastai.vision.all"
2. It has an understanding of the differences between the fastai v1 and v2 APIs (correct use of ImageDataLoaders, the fine_tune function new to v2, use of item_tfms to resize before batching); see the sketch after this list
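For reference, fastai v2 code in the style described above looks roughly like this. This is a minimal sketch using the standard Pets example from the fastai docs, not the exact snippet Codex produced.

from fastai.vision.all import *  # v2 import (not fastai2.vision.all)

path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path/"images"),
    pat=r'(.+)_\d+.jpg$',        # regex extracting the label from the filename
    item_tfms=Resize(224),       # resize each item before batching
)
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)               # fine_tune is new to the v2 API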
Read 5 tweets
30 Aug
After you train a machine learning model, the BEST way to showcase it to the world is to make a demo for others to try your model!

Here is a quick thread🧵on two of the easiest ways to make a demo for your machine learning model:
Currently, Gradio is probably the fastest way to set up a machine learning demo ⚡

Just a couple of lines of code let you turn your inference code into a beautiful demo that you can share with the world.

Learn more here → gradio.app
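A minimal Gradio demo looks roughly like this; the classify function here is a stand-in for your own model's inference code.

import gradio as gr

def classify(image):
    # Stand-in inference function: replace with your own model's predict call.
    return {"cat": 0.7, "dog": 0.3}

demo = gr.Interface(fn=classify, inputs="image", outputs="label")
demo.launch(share=True)  # share=True generates a public link you can send around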
Using Gradio, I was able to quickly make this demo of my CycleGAN package (screenshot was taken using Gradio's built-in functionality!):

upit-cyclegan.herokuapp.com
Read 10 tweets
20 Aug
The Tesla team discussed how they are using AI to crack Full Self Driving (FSD) at their Tesla AI Day event.

They introduced many cool things:
- HydraNets
- Dojo Processing Units
- Tesla bots
- So much more...

Here's a quick summary 🧵:
They introduced their single deep learning model architecture ("HydraNet") for extracting features and transforming the camera inputs into a "vector space"
This includes multi-scale features from each of the 8 cameras, integrated with a transformer to attend to important features, incorporation of kinematic features, and spatiotemporal processing using a feature queue and spatial RNNs, all trained with multi-task learning.
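Here is a toy PyTorch sketch of the shared-backbone, multiple-heads pattern behind the "HydraNet" name. It only illustrates the general multi-task idea; it is not Tesla's actual architecture, and the head names are hypothetical.

import torch
import torch.nn as nn

class ToyHydraNet(nn.Module):
    # One shared trunk feeding several task-specific heads.
    def __init__(self, num_lane_classes=4, num_object_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lane_head = nn.Linear(64, num_lane_classes)      # e.g. lane prediction
        self.object_head = nn.Linear(64, num_object_classes)  # e.g. object logits

    def forward(self, x):
        feats = self.backbone(x)                 # features shared by all heads
        return self.lane_head(feats), self.object_head(feats)

model = ToyHydraNet()
lanes, objects = model(torch.randn(1, 3, 224, 224))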
Read 11 tweets
8 Jul
OpenAI has released a 35-page paper on Codex (the model that powers GitHub Copilot)!
arxiv.org/abs/2107.03374
"We fine-tune GPT models containing up to 12B parameters on code to produce Codex."

They note that GitHub Copilot and the upcoming OpenAI API for the model are powered by descendants of the model described in this paper.
They introduce a new dataset of Python programming problems in order to evaluate their models:
github.com/openai/human-e…
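Each problem in that style pairs a function signature plus docstring (the prompt given to the model) with unit tests run against the generated completion. The example below is illustrative only, not an actual entry from the dataset.

# Prompt: a signature and docstring (with example calls) given to the model.
def sum_first_k(numbers, k):
    """Return the sum of the first k elements of numbers.

    >>> sum_first_k([1, 2, 3, 4], 2)
    3
    """
    # --- completion the model is asked to generate ---
    return sum(numbers[:k])

# Evaluation: hidden unit tests are run against the completion.
def check(candidate):
    assert candidate([1, 2, 3, 4], 2) == 3
    assert candidate([5, 5, 5], 0) == 0

check(sum_first_k)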
Read 5 tweets
7 Jul
Yes, this is definitely about television! 🤣🤣🤣
I find it very interesting that Twitter recommends relevant tweets to me, but the topic suggestion is completely off. It looks to me like the recommendation and topic selection algorithms are completely different.
While the tweet recommendation algo is more sophisticated and likely takes the semantic content of the tweet into consideration, the topic selection algo seems to be a simple one that heavily weighs the presence of keywords.
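A toy illustration of the kind of keyword-based heuristic being speculated about here; this is purely hypothetical and not Twitter's actual algorithm.

# Hypothetical keyword-matching topic picker, for illustration only.
TOPIC_KEYWORDS = {
    "Television": {"show", "series", "episode", "tv"},
    "Machine learning": {"model", "training", "neural", "dataset"},
}

def suggest_topic(tweet_text):
    words = set(tweet_text.lower().split())
    scores = {topic: len(words & kws) for topic, kws in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(suggest_topic("This new show has a great first episode"))  # Television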
Read 5 tweets
26 Jan 20
Saw a few tweets on pigeon-based classification of breast cancer (@tunguz, @hardmaru, @Dominic1King, & ML Reddit), which was published in 2015. I work with the legend himself, @rml52! I thought for my 1st Twitter thread I'd go over the paper's main points & our current work! (1/11)
My PI often likes to say AI stands for avian intelligence. And indeed his paper shows pigeons can learn the difficult task of classifying the presence of breast cancer in histopathological images. (2/11)
The pigeons were placed in an apparatus and the 🔬 image was shown to the pigeons on a touchscreen. The pigeons were given food if they pressed the correct button on the screen. (This is opposed to regular pathologists who are not given free food when analyzing images!) (3/11)
Read 13 tweets
