Prakhar Gupta
Research Scientist at Google. PhD from CMU | Ex-AdobeResearch | IIT Roorkee
Oct 31, 2022 • 9 tweets • 4 min read
Can instruction tuning improve zero- and few-shot performance on dialogue tasks? We introduce InstructDial, a framework consisting of 48 dialogue tasks created from 59 openly available dialogue datasets.
#EMNLP2022 🚀
Paper 👉 arxiv.org/abs/2205.12673
Work done at @LTIatCMU
🧵👇 We investigate instruction ... Instruction tuning involves fine-tuning a model on a collection of tasks specified through natural language instructions (e.g., the T0 and Flan models). We systematically study instruction tuning for dialogue tasks and show it works far better than you might expect!
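To make the idea concrete, here is a minimal sketch of how a dialogue example might be cast into an instruction-tuning format. The function name, the `[SEP]` separator, and the prompt template are illustrative assumptions, not the exact templates used in InstructDial:

```python
# Hypothetical sketch of instruction-formatted input for a dialogue task.
# The template and separator token below are assumptions for illustration.

def to_instruction_example(instruction, dialogue_turns, target):
    """Flatten a dialogue and prepend a natural-language task instruction."""
    context = " [SEP] ".join(dialogue_turns)
    source = f"Instruction: {instruction}\nInput: {context}\nOutput:"
    return {"source": source, "target": target}

example = to_instruction_example(
    instruction="Generate the next response in the conversation.",
    dialogue_turns=["Hi, how are you?", "Great, thanks! And you?"],
    target="Doing well, thanks for asking.",
)
print(example["source"])
```

A seq2seq model is then fine-tuned to map each `source` string to its `target`, so that at test time an unseen task can be specified purely through a new instruction.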