Tamay Besiroglu
Thinking about economics, computing and machine learning @EpochAIResearch. prev: @MIT_CSAIL, @Cambridge_Uni
May 16
A few weeks ago, we attempted to replicate the Chinchilla paper. We found that their estimated model fails to adequately fit the reconstructed data, that it implies inconsistent scaling policies, and that their confidence intervals are implausibly narrow.
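For reference, the parametric loss form that Hoffmann et al. fit (their "Approach 3") is the estimated model referred to above:

```latex
% Chinchilla parametric scaling law: N = model parameters, D = training tokens.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```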
The authors responded, clarifying that this was the result of their optimizer stopping early due to a bad loss scale choice. They plan to update their results and release the data. We appreciate @borgeaud_s and others' openness in addressing this issue.
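A minimal sketch of how such a fit can be run, assuming a Huber loss on log-space residuals minimized with L-BFGS, roughly in the spirit of the paper's Approach 3; the Huber delta, initialization, and input data here are placeholders, not the authors' settings:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of fitting L(N, D) = E + A / N**alpha + B / D**beta.
# Parameterize A = exp(a), B = exp(b), E = exp(e) and work in log space.
def huber(r, delta=1e-3):
    # Quadratic near zero, linear in the tails; delta is an assumed value.
    quad = 0.5 * r**2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.where(np.abs(r) <= delta, quad, lin)

def objective(params, N, D, L_obs):
    a, b, e, alpha, beta = params
    # log(A / N**alpha + B / D**beta + E) computed stably via logaddexp.
    pred = np.logaddexp(np.logaddexp(a - alpha * np.log(N), b - beta * np.log(D)), e)
    return huber(pred - np.log(L_obs)).sum()

def fit(N, D, L_obs, x0=(10.0, 10.0, 0.0, 0.5, 0.5)):
    res = minimize(objective, x0, args=(N, D, L_obs), method="L-BFGS-B")
    return res.x
```

Because the objective is non-convex, the fitted exponents can be sensitive to initialization and optimizer settings, which is why an early-stopping or loss-scale issue can change the estimates.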
Apr 17
The Chinchilla scaling paper by Hoffmann et al. has been highly influential in the language modeling community. We tried to replicate a key part of their work and discovered discrepancies. Here's what we found. (1/9)

We reconstructed the data by extracting the SVG from the paper, parsing out the point locations & colors, mapping the coordinates to model size & FLOP, and mapping the colors to loss values. This let us closely approximate their original dataset from just the figure. (2/9)
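A rough sketch of that kind of figure-to-data extraction, assuming the scatter points survive in the SVG as <circle> elements with fill colors; the element names, attributes, and calibration points below are assumptions that depend on how the figure was exported:

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def extract_points(svg_path):
    # Pull pixel positions and fill colors of scatter points out of an SVG figure.
    tree = ET.parse(svg_path)
    points = []
    for circle in tree.iter(f"{SVG_NS}circle"):
        x = float(circle.get("cx"))
        y = float(circle.get("cy"))
        fill = circle.get("fill") or ""
        points.append((x, y, fill))
    return points

def pixel_to_data(x, y, x_axis, y_axis):
    # Map pixel coordinates to data coordinates using two calibration points per
    # axis, e.g. x_axis = ((x_px0, val0), (x_px1, val1)) read off the tick labels.
    (px0, v0), (px1, v1) = x_axis
    (py0, w0), (py1, w1) = y_axis
    xv = v0 + (x - px0) * (v1 - v0) / (px1 - px0)
    yv = w0 + (y - py0) * (w1 - w0) / (py1 - py0)
    return xv, yv
```

For log-scaled axes, the same interpolation can be done on the logarithms of the tick values, and the fill colors can then be matched against the figure's color bar to recover loss values.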
Mar 12
Language models have come a long way since 2012, when recurrent networks struggled to form coherent sentences. Our new paper finds that the compute needed to achieve a set performance level has been halving every 5 to 14 months on average. (1/10)

This rate of algorithmic progress is much faster than the two-year doubling time of Moore's Law for hardware improvements, and faster than in other domains of software, like SAT solvers, linear programming, etc. (2/10)
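To put those numbers on a common footing (simple arithmetic, not figures from the paper beyond the quoted halving and doubling times):

```python
# Convert a compute-requirement halving time (in months) into an equivalent
# annual improvement factor: 2 ** (12 / halving_time_months).
for months in (5, 14):
    factor = 2 ** (12 / months)
    print(f"halving every {months} months -> ~{factor:.1f}x less compute needed per year")

# For comparison, Moore's Law's two-year doubling is 2 ** (12 / 24), i.e. ~1.4x per year.
```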
Dec 13, 2022
How much progress in machine learning has been due to advances in algorithms (architectures, optimisers, activation functions, etc.), and how much has been due to the scaling of compute or datasets?
@EgeErdil2 and I provide new answers: arxiv.org/abs/2212.05153

We use a dataset of over a hundred computer vision models from the last decade to investigate how better algorithms and architectures have enabled researchers to use compute and data more efficiently.
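To illustrate the flavor of such an estimate (the data below are made up and this is not the paper's actual estimation procedure), one can regress the log of the compute needed to reach a fixed benchmark score on the release year and read a halving time off the slope:

```python
import numpy as np

# Hypothetical (year, training FLOP needed for a fixed accuracy) pairs --
# placeholder numbers, not data from the paper.
years = np.array([2012, 2014, 2016, 2018, 2020, 2022])
flop_at_fixed_acc = np.array([3e18, 8e17, 2e17, 6e16, 1.5e16, 4e15])

# Fit log2(compute) = a + b * year; the halving time in months is -12 / b.
b, a = np.polyfit(years, np.log2(flop_at_fixed_acc), 1)
print(f"compute requirement halves every {-12 / b:.1f} months")
```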
Jun 20, 2022
I recently organized a contest for @Metaculus on investigations into predictions of the future of AI. This resulted in two dozen insightful analyses by forecasters into the prospects of transformatively advanced AI systems. Here are my short summaries of some that stood out:

This piece by @EgeErdil2 uses a hyperbolic growth model to argue that an economy could be transformed fairly quickly following the widespread deployment of advanced AI:
metaculus.com/notebooks/1061…
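For intuition on why hyperbolic growth implies a fast transformation, here is a generic specification (an illustrative form, not necessarily the exact model in the piece): the growth rate rises with the level of output, which produces a finite-time singularity rather than a steady exponential.

```latex
% Hyperbolic growth: output growth accelerates with the level of output Y (a, b > 0).
\frac{dY}{dt} = a\,Y^{1+b}
% With Y(0) = Y_0, the solution
Y(t) = Y_0\,\bigl(1 - a\,b\,Y_0^{\,b}\,t\bigr)^{-1/b}
% diverges at the finite time t^* = 1/(a\,b\,Y_0^{\,b}), so output can go from
% moderate to arbitrarily large within a bounded window.
```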
Feb 22, 2021
A recent paper about innovation over the long run reveals a very neat snapshot of the composition of inventions over time. Using data on US patents, it identifies the following key waves:
nber.org/system/files/w…

1840s–70s: Key manufacturing innovations occur (the pneumatic process for cheap steel and the sewing machine are invented); Transport (improvements in steam engines; the Bollman bridge, air brake system, and cable car are patented); Consumer Goods (the board game, toothbrush, and picture machine).
Nov 22, 2020
A few months ago, I wrote an economics dissertation on whether machine learning models are getting harder to find. Here's a summary of what I found:

Some background. @ChadJonesEcon, @johnvanreenen and others wrote an awesome article that found that ideas are getting harder to find: in semiconductors, agricultural production and medicine, research productivity has been declining steadily.
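The measure behind that finding, as I understand the Bloom, Jones, Van Reenen and Webb framework (a paraphrase, not their exact notation), is idea output per unit of research effort:

```latex
% Research productivity: growth in ideas (e.g. TFP or performance growth) per unit
% of effective research input S_t (researchers, or deflated R&D spending).
\text{Research productivity}_t = \frac{\dot{A}_t / A_t}{S_t}
% "Ideas are getting harder to find" means this ratio declines over time: holding
% the growth rate \dot{A}/A constant requires ever more research effort S_t.
```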