Discover and read the best of Twitter Threads about #AlphaFold2

Most recent (4)

Two large #antitrust probes in the same screenshot that relate to #Genomics and #Bioinformatics
(1) A possible buyout of #ARM by #NVIDIA does affect the #Bioinformatics field: many applications are now deployable on CPUs/GPUs built around #ARM and/or #NVIDIA chips. Some recent examples:
(a) the Oxford @nanopore MinION Mk1c device, originally specced with a Jetson TX2 ARM+Pascal GPU accelerator (6-core ARM processor, 256-core GPU) and 8 GB RAM (specs may have changed since then).
Read 42 tweets
Since the publication of #AlphaFold2 and #RoseTTAFold and now that the tools and models have been made accessible, there has been an avalanche of attempts to solve old crystal structures. This thread covers tips from the #PhaserTeam for doing #MR with these models. (1/...)
The first thing to be aware of is that the B-factor fields contain measures of confidence in the correctness of the prediction, not actual B-factors. This means: 1) we can use that confidence to trim the model and 2) we need to convert that to an appropriate B-factor. (2/...)
Phaser will take those B-factors and use them to weight the different parts of the model. This can improve your chances of success with the model. (3/...)
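A minimal sketch of that two-step preparation in Python. The B = 8π²/3 · rmsd² relation is the standard crystallographic definition of an isotropic B-factor; the pLDDT→RMSD mapping below is a hypothetical placeholder only, not the Phaser team's documented conversion — use whatever your MR pipeline actually prescribes:

```python
import math

def rmsd_to_b(rmsd_angstrom: float) -> float:
    # Standard crystallographic relation: B = (8*pi^2 / 3) * <u^2>,
    # treating rmsd as the isotropic coordinate displacement.
    return (8.0 * math.pi ** 2 / 3.0) * rmsd_angstrom ** 2

def plddt_to_rmsd(plddt: float) -> float:
    # HYPOTHETICAL mapping from per-residue pLDDT (0-100) to an estimated
    # coordinate error in Angstroms -- placeholder for illustration only.
    return 1.5 * math.exp(4.0 * (0.7 - plddt / 100.0))

def prepare_model(residues, plddt_cutoff=70.0):
    """Trim low-confidence residues and replace pLDDT with an estimated B-factor.

    `residues` is a list of (residue_id, plddt) pairs; returns
    (residue_id, estimated_B) for residues at or above the cutoff.
    """
    kept = []
    for res_id, plddt in residues:
        if plddt >= plddt_cutoff:                    # 1) trim unreliable regions
            b = rmsd_to_b(plddt_to_rmsd(plddt))      # 2) confidence -> B-factor
            kept.append((res_id, b))
    return kept

model = [(1, 95.0), (2, 88.0), (3, 45.0), (4, 72.0)]
print(prepare_model(model))  # residue 3 is trimmed; higher pLDDT -> lower B
```

Note the direction of the conversion: high confidence means low estimated error, hence a low B-factor, so Phaser down-weights exactly the parts of the model the predictor itself was unsure about.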
Read 11 tweets
The more I read about the #alphafold2 details, the cooler it gets. This is not basic machine learning; it incorporates a ton of domain expertise. Seeing how far deep learning has come, I realize it embodies what I had in mind during my PhD with molecular chemometrics :)
The first-generation DL applications were just throwing a lot of data at the problem, but #alphafold2 shows DL has no problem embedding prior knowledge... the attention/transformer aspect (think of it like variable selection) is pretty awesome.
Besides intelligent variable selection, you will also find traces of ideas like kernels (as in SVMs) and domain-driven measures of similarity.
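The "attention as variable selection" intuition can be sketched in a few lines of NumPy — this is illustrative scaled dot-product attention only, not AlphaFold2's actual Evoformer attention:

```python
import numpy as np

def attention_weights(query, keys):
    """Scaled dot-product attention weights (softmax over key similarity).

    The weights sum to 1 and concentrate on the inputs most similar to the
    query, which is why attention reads as a soft, learned form of
    variable selection.
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # similarity of each input to the query
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()

# Toy example: three input "variables"; the second is most similar to the query,
# so it receives the largest weight.
query = np.array([1.0, 0.0])
keys = np.array([[0.1, 0.9],
                 [1.0, 0.1],
                 [-0.5, 0.5]])
w = attention_weights(query, keys)
print(w)  # weights sum to 1; largest weight on the second input
```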
Read 5 tweets
Interesting write-up of how @DeepMind's #Alphafold2 works.

The takeaway from this is that a *lot* of human insight was required and there's a lot of highly specialized feature engineering going on here.

Why is that important?…
It's important because it shows us that @DeepMind is obviously nowhere near making any kind of general AI; they're doing good old-fashioned down-and-dirty hacks with great computational resources and a clever team.
If the paper had instead said something like "Oh, they just used a really big supertransducer on the raw data", then I would be scared.

But no, they didn't. They did a bunch of artisanal architecture engineering that's good for this specific task only.
Read 10 tweets

