Tamay Besiroglu
May 29 · 3 tweets · 1 min read
You might have expected that, with large ML models not being publicly accessible and being very costly to train, it would become unclear whether key impressive results replicate. However, the reproducibility situation for these models has arguably been surprisingly good so far.
Different labs routinely produce very similar models (e.g. DALL-E 2 vs. Imagen, AlphaCode vs. Codex) that yield highly similar results, providing independent validation of key findings.

See, for example, @benjamin_hilton's thread comparing DALL-E to Google's Imagen
(thought inspired by a comment from @phillip_isola).


More from @tamaybes

Jun 20
I recently organized a contest for @Metaculus on investigations into predictions of the future of AI. This resulted in two dozen insightful analyses by forecasters of the prospects of transformatively advanced AI systems. Here are my short summaries of some that stood out:
This piece by @EgeErdil2 uses a hyperbolic growth model to argue that an economy could be transformed fairly quickly following the widespread deployment of advanced AI.
He finds that a basic model implies it'd take ~3 months to go from widespread deployment of AI to a radical transformation (with some, but not much, uncertainty). At best, we may see transformative AI coming a year or two in advance.
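To give a feel for why hyperbolic growth compresses timelines so sharply, here is a minimal sketch of a generic hyperbolic growth model. This is an illustration of the general mechanism, not Erdil's actual model, and the parameter values are made up purely to show how a finite-time singularity arrives within months:

```python
# Generic hyperbolic growth: dY/dt = a * Y**(1 + b) with b > 0.
# Solving the ODE gives a finite-time singularity at
#   t* = Y0**(-b) / (a * b),
# i.e. output diverges in finite time rather than growing exponentially.

def singularity_time(y0: float, a: float, b: float) -> float:
    """Time (in years) until output diverges under dY/dt = a * Y**(1 + b)."""
    return y0 ** (-b) / (a * b)

# Illustrative, made-up parameters: output normalized to 1 at the moment
# of widespread AI deployment; a and b are assumptions, not estimates.
y0 = 1.0
a = 1.0  # post-deployment growth coefficient (assumption)
b = 4.0  # degree of super-exponentiality (assumption)

t_star = singularity_time(y0, a, b)
print(f"Time to singularity: {t_star:.2f} years (~{t_star * 12:.0f} months)")
```

The key qualitative point survives any particular parameter choice: because growth feeds back on itself super-exponentially, most of the transformation happens in a short burst near the end, which is why advance warning would be so limited.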
Feb 22, 2021
A recent paper about innovation over the long run reveals a very neat snapshot of the composition of inventions over time. Using data on US patents, it identifies the following key waves:
1840s–70s: key manufacturing innovations occur (the pneumatic process for cheap steel and the sewing machine are invented); transport (improvements in steam engines; the Bollman bridge, air brake system, and cable car are patented); consumer goods (board games, the toothbrush, the picture machine).
1870s–1900s: electricity and electronics (Edison patents the electric light, Bell the telephone; others invent the microphone, the motion picture, and the radio). In the 1890s transport innovation peaks: the automobile, airplane, and submarine are all patented.
Nov 22, 2020
A few months ago, I wrote an economics dissertation on whether machine learning models are getting harder to find. Here’s a summary of what I found:
Some background. @ChadJonesEcon, @johnvanreenen and others wrote an awesome article that found that ideas are getting harder to find: in semiconductors, agricultural production and medicine, research productivity has been declining steadily.
In my dissertation, I explored how this story holds up for machine learning. I used a dataset of the top-performing ML models on 93 machine learning benchmarks—mostly in computer vision and NLP—and data on research input derived from publication data.
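The productivity measure in this literature is, at its simplest, the growth rate of an output metric divided by the effective research input devoted to improving it. A minimal sketch with made-up numbers (not the dissertation's actual data) shows how productivity can fall even while absolute progress speeds up:

```python
# Research productivity in the "ideas are getting harder to find" sense:
# output growth per unit of effective research input.

def research_productivity(growth_rate: float, researchers: float) -> float:
    """Growth in the output metric delivered per unit of research input."""
    return growth_rate / researchers

# Hypothetical numbers for two periods (illustration only): benchmark
# performance improves faster in the later period, but the effective
# researcher count grows far faster still.
early = research_productivity(growth_rate=0.10, researchers=100)
late = research_productivity(growth_rate=0.15, researchers=1000)

print(early, late)  # per-researcher productivity in each period
decline = early / late  # how many times more input each unit of progress now takes
```

Here absolute progress accelerated (0.15 > 0.10), yet productivity per researcher fell several-fold, which is exactly the pattern Bloom, Jones, Van Reenen, and co-authors document in semiconductors, agriculture, and medicine.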
