Papers with Code
Our mission is to organize science by converting information into useful knowledge.
Nov 15, 2022 7 tweets 4 min read
🪐 Introducing Galactica. A large language model for science.

Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.

Explore and get weights: galactica.org

We believe models should be open.

To accelerate science, we open-source all models, including the 120 billion parameter model, with no friction. You can access them here.

github.com/paperswithcode…
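For reference, here is a minimal sketch of loading one of the released checkpoints, assuming the weights are mirrored on the Hugging Face Hub under facebook/galactica-* (the model name and prompt are illustrative, not official usage docs):

```python
# Minimal sketch: load an open Galactica checkpoint and generate a continuation.
# Assumes the weights are available on the Hub as facebook/galactica-1.3b.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/galactica-1.3b")

inputs = tokenizer("The Transformer architecture", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))
```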
Aug 31, 2022 12 tweets 6 min read
🔥Top Trending ML Papers of the Month

Here is a thread to catch up on the top 10 trending papers of August on @paperswithcode. ↓

1) An Image is Worth One Word - a new approach that allows for more creative freedom in image generation; proposes "textual inversion" to find pseudo-words that can be composed into new sentences to guide personalized generations.

paperswithcode.com/paper/an-image…
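A minimal sketch of the idea: the generator stays frozen, and only a new token embedding v* is optimized so that prompts containing the pseudo-word reproduce the concept images (the frozen model and image features below are stand-ins, not a real diffusion pipeline):

```python
import torch
import torch.nn as nn

embed_dim = 768
v_star = nn.Parameter(torch.randn(embed_dim) * 0.01)  # the learned pseudo-word

frozen_model = nn.Linear(embed_dim, embed_dim)  # stand-in for the frozen generator
for p in frozen_model.parameters():
    p.requires_grad_(False)

targets = torch.randn(8, embed_dim)  # stand-in features of the concept images
opt = torch.optim.Adam([v_star], lr=5e-3)

for step in range(100):
    pred = frozen_model(v_star.expand(8, -1))  # "a photo of <v*>" -> features
    loss = (pred - targets).pow(2).mean()      # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()
```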
Jul 19, 2022 8 tweets 4 min read
Keeping up with Language Models

Check out these trending papers to catch up on the latest developments in language models. ↓

1) N-Grammer (Roy et al.) - takes inspiration from statistical language modeling and augments Transformers with latent n-grams; it matches strong baselines like Transformer and Primer while being faster at inference.

paperswithcode.com/paper/n-gramme…
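A simplified sketch of the augmentation below; note the paper derives bigram ids from latent cluster ids learned with product quantization, so hashing raw token ids here is a deliberate simplification:

```python
import torch
import torch.nn as nn

vocab, dim, ngram_buckets = 32000, 512, 262144

tok_emb = nn.Embedding(vocab, dim)
bigram_emb = nn.Embedding(ngram_buckets, dim)

def embed(tokens: torch.Tensor) -> torch.Tensor:
    x = tok_emb(tokens)                                    # [batch, seq, dim]
    prev = torch.roll(tokens, 1, dims=1)                   # previous token id
    bigram_id = (tokens * 1000003 + prev) % ngram_buckets  # hashed bigram bucket
    return x + bigram_emb(bigram_id)                       # unigram + n-gram info

out = embed(torch.randint(0, vocab, (2, 16)))
```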
Jul 5, 2022 12 tweets 7 min read
🔥Top Trending ML Papers of the Month

Here is a thread to catch up on the top 10 trending papers of June on @paperswithcode. ↓

1️⃣ Mask DINO (Li et al.) - extends DINO (DETR with Improved Denoising Anchor Boxes) with a mask prediction branch to support image segmentation tasks (instance, panoptic, and semantic).

paperswithcode.com/paper/mask-din…
May 31, 2022 12 tweets 7 min read
🔥Top Trending ML Papers of the Month

Here is a thread to catch up on the top 10 trending papers of May on @paperswithcode. 1/11

1⃣ OPT (Zhang et al.) - releases open pre-trained transformer language models ranging from 125M to 175B parameters. The release includes a logbook detailing infrastructure challenges and code to experiment with the released models. 2/11

paperswithcode.com/paper/opt-open…
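A minimal sketch of experimenting with a released checkpoint via the Hugging Face Hub (the 125M variant keeps the example light; swap in larger variants as hardware allows):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the smallest released OPT model and sample a short continuation.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Open pre-trained transformers are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```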
Apr 25, 2022 12 tweets 5 min read
10 Recent Trends in Language Models

In this thread, we summarize ten recent trends and insights in language models. ↓

1) Scaling Laws

Kaplan et al. report that language model (LM) performance improves smoothly as model size, dataset size, and compute increase. Recent works provide empirical evidence that LMs are still underexplored and can be improved in other ways.

paperswithcode.com/paper/scaling-…
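A worked example of the reported power-law form, L(N) = (N_c / N)^α, using the paper's approximate constants for non-embedding parameters (α ≈ 0.076, N_c ≈ 8.8e13):

```python
# Kaplan et al.'s power law in parameter count: loss falls smoothly with scale.
N_c, alpha = 8.8e13, 0.076

def predicted_loss(n_params: float) -> float:
    return (N_c / n_params) ** alpha

for n in [1e6, 1e8, 1e10, 1e12]:
    print(f"N = {n:.0e} params -> predicted test loss {predicted_loss(n):.2f}")
```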
Apr 12, 2022 4 tweets 3 min read
Announcing Best Paper Awards for ML Reproducibility Challenge 2021!

We had over 100 submissions and we are happy to accept 43 reports in our main program. Congratulations to our best and outstanding paper award winners. See more here: paperswithcode.com/rc2021

Our program would not be possible without the support of our awesome reviewers! To honor their hard work, we are excited to announce the Outstanding Reviewer Awards!
Mar 24, 2022 8 tweets 4 min read
Vision-language pre-trained models are driving lots of progress on machine learning tasks that require vision & language modalities.

Let’s take a look at some recent models ↓

Vision-language pre-trained models aim to tackle more complex tasks that require understanding of multiple modalities, such as visual question answering and image captioning. They are typically pre-trained on large datasets using different objectives. Below are some examples ↓
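One widely used pre-training objective is a CLIP-style contrastive loss over paired images and captions; a minimal sketch with stand-in linear encoders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

img_enc = nn.Linear(2048, 512)  # stand-in image encoder
txt_enc = nn.Linear(768, 512)   # stand-in text encoder

def contrastive_loss(img_feats, txt_feats, temperature=0.07):
    img = F.normalize(img_enc(img_feats), dim=-1)
    txt = F.normalize(txt_enc(txt_feats), dim=-1)
    logits = img @ txt.t() / temperature  # pairwise similarities
    labels = torch.arange(len(logits))    # matched pairs sit on the diagonal
    # Symmetric cross-entropy: align images to texts and texts to images.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

loss = contrastive_loss(torch.randn(8, 2048), torch.randn(8, 768))
```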
Dec 30, 2021 4 tweets 8 min read
⏪ Papers with Code: Year in Review

We’re ending the year by taking a look back at the top trending machine learning papers, libraries and new datasets for 2021. Read on below!

medium.com/paperswithcode…

📄 Trending Papers for 2021, featuring:

• ADOP @mcstammi
• The Bayesian Learning Rule @EmtiyazKhan
• Program Synthesis with Large LMs @jacobaustin132
• Masked Autoencoders Are Scalable Vision Learners @inkynumbers
• as well as work by @IrwanBello, @Chitwan_Saharia & others
Dec 14, 2021 4 tweets 2 min read
📊 Robustness Reports on ImageNet!

We've indexed AugLy's robustness reports, which show model vulnerabilities to different manipulations. Check it out below:

paperswithcode.com/sota/image-cla…

1/4

The reports show that EfficientNet not only has higher accuracy on ImageNet, but is also significantly more robust than older models.

Accuracy is one measure of performance, but it doesn’t capture how robust the model is to simple manipulations such as noise and rotation...

2/4
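A minimal sketch of the kind of check these reports automate, comparing clean accuracy against simple manipulations (torchvision stand-ins below, not AugLy's exact transforms):

```python
import torch
import torchvision.transforms.functional as TF

def accuracy(model, images, labels):
    with torch.no_grad():
        return (model(images).argmax(1) == labels).float().mean().item()

def robustness_report(model, images, labels):
    rotated = TF.rotate(images, angle=15)                          # small rotation
    noisy = (images + 0.1 * torch.randn_like(images)).clamp(0, 1)  # Gaussian noise
    return {
        "clean": accuracy(model, images, labels),
        "rotate_15deg": accuracy(model, rotated, labels),
        "gaussian_noise": accuracy(model, noisy, labels),
    }
```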
Dec 7, 2021 6 tweets 3 min read
Text-Driven Neural Stylization for 3D Meshes

This new paper proposes a text-driven, neural network based approach (Text2Mesh) for stylizing 3D meshes.

Paper & Code: paperswithcode.com/paper/text2mes…

(A short thread) 1/6

Text2Mesh is a framework for stylizing meshes by predicting color and geometric details that conform to a target text prompt. Optimization runs over rendered 2D images and their augmentations, using CLIP similarity to the prompt to train the network to produce consistent stylized meshes.

2/6
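A minimal sketch of the CLIP-guided loop: render the stylized mesh, embed the rendering and the prompt, and maximize their cosine similarity (the renderer and CLIP encoders below are stand-in linear layers; the real pipeline uses a differentiable renderer and a frozen pretrained CLIP):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

style_params = nn.Parameter(torch.zeros(1000, 3))  # per-vertex color offsets
render = nn.Linear(3000, 512)                      # stand-in differentiable renderer
clip_image = nn.Linear(512, 512)                   # stand-in CLIP image encoder
clip_text = nn.Linear(512, 512)                    # stand-in CLIP text encoder

with torch.no_grad():                              # the prompt embedding stays fixed
    prompt_emb = F.normalize(clip_text(torch.randn(512)), dim=-1)

opt = torch.optim.Adam([style_params], lr=1e-2)
for step in range(200):
    image_emb = F.normalize(clip_image(render(style_params.flatten())), dim=-1)
    loss = -torch.dot(image_emb, prompt_emb)       # maximize CLIP similarity
    opt.zero_grad(); loss.backward(); opt.step()
```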
Nov 10, 2021 7 tweets 4 min read
Deep learning models for tabular data continue to improve. What are the latest methods and recent progress?

Let’s have a look ↓

1) Wide&Deep jointly trains wide linear models and deep neural networks to combine the benefits of memorization and generalization for real-world recommender systems. The model was productionized and evaluated on Google Play.

paperswithcode.com/method/wide-de…
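A minimal sketch of the architecture, with illustrative feature sizes: a linear "wide" path over sparse cross features and a "deep" MLP path over dense features, summed into one logit:

```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    def __init__(self, n_wide=10000, n_deep=64):
        super().__init__()
        self.wide = nn.Linear(n_wide, 1)  # memorization over cross features
        self.deep = nn.Sequential(        # generalization over dense features
            nn.Linear(n_deep, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, wide_x, deep_x):
        return torch.sigmoid(self.wide(wide_x) + self.deep(deep_x))

model = WideAndDeep()
p_click = model(torch.rand(32, 10000), torch.rand(32, 64))  # joint prediction
```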
Oct 27, 2021 7 tweets 4 min read
Graph neural networks are driving lots of progress in machine learning by extending deep learning approaches to complex graph data and applications.

Let’s take a look at a few methods ↓

1) A Graph Convolutional Network, or GCN, is an approach for semi-supervised learning on graph-structured data. It’s based on an efficient variant of CNNs which operates directly on graphs and is useful for semi-supervised node classification.

paperswithcode.com/method/gcn
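A minimal sketch of a single GCN layer, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W): each node averages its neighbors' (and its own) features before a learned linear map:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gcn_layer(A: torch.Tensor, H: torch.Tensor, W: nn.Linear) -> torch.Tensor:
    A_hat = A + torch.eye(A.size(0))  # add self-loops
    deg_inv_sqrt = A_hat.sum(1).pow(-0.5)
    A_norm = deg_inv_sqrt[:, None] * A_hat * deg_inv_sqrt[None, :]
    return F.relu(W(A_norm @ H))      # propagate, then transform

A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy 3-node graph
H = torch.randn(3, 8)                                          # node features
out = gcn_layer(A, H, nn.Linear(8, 16))
```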
Oct 12, 2021 8 tweets 4 min read
StyleGAN3 is out and results are 🤯!

It proposes architectural changes that suppress aliasing and force the model to implement a more natural hierarchical refinement, which improves its ability to generate video and animation.

paperswithcode.com/paper/alias-fr…

1/8

In the cinemagraph below, we can see that in StyleGAN2 the texture (e.g., wrinkles and hairs) appears to stick to the screen coordinates. In comparison, StyleGAN3 (right) transforms details coherently:

2/8
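A minimal sketch of the alias-suppression idea: apply the pointwise nonlinearity at a higher sampling rate, then low-pass back down, so the new high frequencies it creates don't alias (StyleGAN3 uses carefully designed filters; plain bilinear resampling is a crude stand-in):

```python
import torch
import torch.nn.functional as F

def filtered_lrelu(x: torch.Tensor) -> torch.Tensor:
    up = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
    up = F.leaky_relu(up, 0.2)                 # nonlinearity at 2x sampling rate
    return F.interpolate(up, scale_factor=0.5, mode="bilinear",
                         align_corners=False)  # low-pass back to original rate

y = filtered_lrelu(torch.randn(1, 64, 32, 32))
```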
Oct 4, 2021 6 tweets 3 min read
In a new paper from @wightmanr et al., a traditional ResNet-50 is re-trained with a modern training protocol. It achieves a very competitive 80.4% top-1 accuracy on ImageNet without extra data or distillation.

[mini-thread]

paperswithcode.com/paper/resnet-s…

The paper catalogues the exact training settings to provide a robust baseline for future experiments:
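A minimal sketch of reusing the baseline via timm, which hosts the retrained weights (which checkpoint the "resnet50" pretrained tag resolves to depends on your installed timm version, so treat it as an assumption):

```python
import timm
import torch

# Load a pretrained ResNet-50 from timm and run a dummy forward pass.
model = timm.create_model("resnet50", pretrained=True).eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.argmax(1))
```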
Jan 20, 2021 4 tweets 6 min read
🚨 Newsletter Issue #3. Featuring a new state-of-the-art on ImageNet, a trillion-parameter language model, 10 applications of transformers you didn’t know about, and much more! Read on below:

paperswithcode.com/newsletter/3

👩‍🔬 Research: featuring work by @hieupham789 et al., @LiamFedus et al., @Pengcheng2020 et al., Stergiou et al., Ding et al., @quocleix, among others.
Dec 30, 2020 4 tweets 8 min read
⏪ Papers with Code: Year in Review

We’re ending the year by taking a look back at the top trending papers, libraries and benchmarks for 2020. Read on below!

medium.com/paperswithcode…

📄 Trending Papers for 2020, featuring:

- EfficientDet @tanmingxing @quocleix
- ResNeSt @zhanghang0704
- Big Transfer @__kolesnikov__ @giffmana
- FixRes @HugoTouvron
- as well as work by @QizheXie @colinraffel @ctnzr, and others