Research Scientist at Google. PhD from CMU | Ex-AdobeResearch | IIT Roorkee
Oct 31, 2022 • 9 tweets • 4 min read
Can instruction tuning improve zero- and few-shot performance on dialogue tasks? We introduce InstructDial, a framework consisting of 48 dialogue tasks created from 59 openly available dialogue datasets #EMNLP2022
Paper: arxiv.org/abs/2205.12673
Work done at @LTIatCMU
Instruction tuning involves fine-tuning a model on a collection of tasks specified through natural language instructions (as in the T0 and Flan models). We systematically study instruction tuning for dialogue tasks and show it works far better than you might expect!
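The idea above can be sketched in a few lines: each dialogue example is serialized as a natural-language instruction plus the dialogue context, and the model is trained seq2seq-style on these text inputs. This is a minimal illustrative sketch, not the paper's actual schema; the function and field names are hypothetical.

```python
# Hedged sketch of instruction-style input formatting for a dialogue task.
# The format below (Instruction / Dialogue / Response prompt) is illustrative,
# not the exact template used by InstructDial.

def format_instruction_example(instruction, dialogue_turns, query):
    """Serialize one dialogue example as an instruction-tuning text input."""
    context = "\n".join(f"{speaker}: {utterance}"
                        for speaker, utterance in dialogue_turns)
    return f"Instruction: {instruction}\n\nDialogue:\n{context}\n\n{query}"

example = format_instruction_example(
    instruction="Generate the next response in the dialogue.",
    dialogue_turns=[
        ("User", "Hi, can you recommend a movie?"),
        ("Agent", "Sure! Do you prefer comedy or drama?"),
    ],
    query="Response:",
)
print(example)
```

A seq2seq model fine-tuned on many such serialized tasks can then be prompted with an unseen task's instruction at test time, which is what enables zero-shot generalization.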