Stephen Bach
Asst. prof. @BrownCSDept. Working on improving how humans teach computers. Weak supervision, zero-shot learning, few-shot learning, and high-level knowledge.
Sep 26, 2022
Recently, there has been a lot of interest in compositionality in large pre-trained models. We’re excited to share work led by Nihal Nayak and Peilin Yu on making learned prompts more compositional:
arxiv.org/abs/2204.03574

A 🧵👇

[Image: a graphical representation of compositional soft prompting.]

We focus on compositional zero-shot learning. The task is to label classes composed of primitive concepts representing objects and attributes, e.g., "old cat" vs. "young cat" vs. "young dog". Perhaps unsurprisingly, CLIP doesn't do a great job on this task out of the box.

(2/8)
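
For context, here is a minimal sketch of the out-of-the-box CLIP baseline described above: compose every attribute-object pair into a text prompt and score an image against each. The checkpoint name, prompt template, and image path are illustrative assumptions, not details from the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; any pretrained CLIP variant would work here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# In compositional zero-shot learning, every attribute-object pair is a class.
attributes = ["old", "young"]
objects = ["cat", "dog"]
prompts = [f"a photo of a {a} {o}" for a in attributes for o in objects]

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_prompts)

for prompt, p in zip(prompts, logits.softmax(dim=-1)[0].tolist()):
    print(f"{prompt}: {p:.3f}")
```

Compositional soft prompting replaces the hard-coded attribute and object words above with learned token embeddings, which can be recombined into prompts for unseen attribute-object pairs at test time.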
Apr 25, 2022
If you’re at #ICLR2022, hope you’ll check out our spotlighted poster: “Multitask Prompted Training Enables Zero-Shot Task Generalization.” arxiv.org/abs/2110.08207

Poster session 5, Tue 1:30-3:30 ET

This work was a big undertaking from many at the @BigscienceW Workshop, particularly @SanhEstPasMoi, @albertwebson, @colinraffel, and @srush_nlp. It’s been awesome to see all the people already using and building on the T0 family of models for zero-shot learning.
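
If you want to try the T0 models, here is a minimal sketch of zero-shot inference with Hugging Face Transformers, using the smallest public checkpoint (bigscience/T0_3B). The example prompt is illustrative, and generation settings are left at their defaults.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# T0_3B is the smallest public checkpoint; bigscience/T0pp is the largest.
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

# Any natural-language task prompt works; T0 was trained on many such templates.
prompt = ("Is this review positive or negative? "
          "Review: this is the best cast iron skillet you will ever buy")
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```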