We release our initial paper below. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. The corpus includes scientific text as well as scientific modalities such as proteins and chemical compounds.
Galactica performs well on reasoning, outperforming Chinchilla on mathematical MMLU (41.3% vs. 35.7%) and PaLM 540B on MATH (20.4% vs. 8.8%).
We train for over four epochs and find that performance keeps improving with repeated tokens. For the largest 120B model, we trained for four epochs without overfitting.
Despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. Galactica is also significantly less toxic than other language models in our evaluations.
This is just the first step on our mission to organize science. And there is a lot more work to be done. We look forward to seeing what the open ML community builds with the model.
Here is a thread to catch up on the top 10 trending papers of August on @paperswithcode.
1) An Image is Worth One Word - a new approach that allows for more creative freedom with image generation; proposes "textual inversion" to find pseudo-words that can be composed into new sentences guiding personalized creations.
2) Cold Diffusion - proposes diffusion models built around arbitrary image transformations without Gaussian noise; discusses the potential for generalized diffusion models that invert arbitrary processes.
Check out these trending papers to catch up on the latest developments in language models. ↓
1) N-Grammer (Roy et al.) - takes inspiration from statistical language modeling and augments Transformers with latent n-grams; it matches strong baseline models like Transformer and Primer while being faster at inference.
2) Language Models (Mostly) Know What They Know (Kadavath et al.) - investigates whether an LM can be trained to perform well at predicting which questions it will be able to answer correctly; this enables self-evaluation on open-ended sampling tasks.
Here is a thread to catch up on the top 10 trending papers of June on @paperswithcode. ↓
1️⃣ Mask DINO (Li et al.) - extends DINO (DETR with Improved Denoising Anchor Boxes) with a mask prediction branch to support image segmentation tasks (instance, panoptic, and semantic).
2️⃣ Hopular (Schäfl et al.) - proposes a deep learning architecture based on continuous Hopfield networks, achieving competitive results on small-sized tabular datasets.
Here is a thread to catch up on the top 10 trending papers of May on @paperswithcode. 1/11
1⃣ OPT (Zhang et al.) - releases open pre-trained transformer language models ranging from 125M to 175B parameters. The release includes a logbook detailing infrastructure challenges and code to experiment with the released models. 2/11
2⃣ CoCa (Yu et al.) - a new foundation model that achieves a new state-of-the-art on ImageNet (90.6%); proposes a minimal strategy to jointly pre-train an image-text encoder-decoder with contrastive and captioning losses. 3/11
In this thread, we summarize ten recent trends and insights in language models. ↓
1) Scaling Laws
Kaplan et al. report that language model (LM) performance improves smoothly as model size, dataset size, and compute increase. Recent works provide empirical evidence that LMs are underexplored and can be improved in other ways.
Hoffmann et al. find that large LMs are undertrained and that, for compute-optimal training, model size and the number of training tokens should be scaled equally. Their compute-optimal model, Chinchilla (70B), outperforms Gopher (280B) on several tasks.
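To make the "scale model size and tokens equally" rule concrete, here is a rough back-of-the-envelope sketch in Python. It assumes the common approximation that training compute C ≈ 6·N·D (N parameters, D tokens) and a ratio of roughly 20 tokens per parameter, which is close to the Chinchilla configuration (70B parameters, ~1.4T tokens); the constant and the helper name are assumptions for illustration, not from the thread itself.

```python
# Rough sketch of Chinchilla-style compute-optimal scaling.
# Assumptions (not from the thread): training FLOPs C ~= 6 * N * D,
# and ~20 training tokens per parameter, close to the reported
# Chinchilla setup (70B parameters, ~1.4T tokens).

def compute_optimal_split(flops_budget: float, tokens_per_param: float = 20.0):
    """Return (n_params, n_tokens) that roughly exhaust a FLOPs budget
    when D = tokens_per_param * N and C = 6 * N * D."""
    # C = 6 * N * (k * N)  =>  N = sqrt(C / (6 * k))
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    # Roughly the Chinchilla training budget (~5.8e23 FLOPs).
    params, tokens = compute_optimal_split(5.8e23)
    print(f"params ~ {params:.2e}, tokens ~ {tokens:.2e}")
    # -> params ~ 7e10 (70B), tokens ~ 1.4e12 (1.4T)
```

Under this equal-scaling assumption, both parameters and tokens grow as the square root of compute, so a 10x larger budget buys roughly 3.2x more parameters and 3.2x more training tokens.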
Announcing Best Paper Awards for ML Reproducibility Challenge 2021!
We had over 100 submissions and are happy to accept 43 reports into our main program. Congratulations to our best and outstanding paper award winners. See more here: paperswithcode.com/rc2021
Our program would not be possible without the support of our awesome reviewers! To honor their hard work, we are excited to announce the Outstanding Reviewer Awards!
Stay tuned for more updates regarding the release of the ReScience journal and our plans for a one-day workshop on reproducibility where we will showcase these reports.