Gabriel Poesia
CS PhD student @Stanford. Learning models that do math to help human learning. Mineirinho from Brasil.
Apr 25, 2022
Language models like GPT-3 and Codex can generate code, but they can miss your intent, and their code can have bugs. Can we improve on that? Perhaps even guarantee the absence of certain errors? Come check out Synchromesh at #ICLR2022 tomorrow! We start by identifying two broad classes of mistakes that these models can make (a toy illustration follows the list):
1- Conceptual errors, when they miss or ignore parts of the specification
2- Implementation errors, where their output can fail to parse, type-check, or execute, or can violate other desirable constraints
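To make the two classes concrete, here is a minimal sketch (not from the paper; the spec, the two fake "model completions", and the parse check are all made up for illustration). A conceptual error still parses and runs, so only a check against the spec would catch it; an implementation error can be rejected by a purely syntactic constraint:

```python
import ast

# Hypothetical specification given to the model.
spec = "Return the numbers sorted in DESCENDING order, with duplicates removed."

# 1) Conceptual error: valid Python, but it ignores part of the spec
#    (sorts ascending and keeps duplicates).
conceptual = "def solve(xs): return sorted(xs)"

# 2) Implementation error: does not even parse (unclosed parenthesis).
implementation = "def solve(xs): return sorted(set(xs), reverse=True"

for name, code in [("conceptual error", conceptual),
                   ("implementation error", implementation)]:
    try:
        ast.parse(code)  # a simple "does it parse?" constraint check
        print(f"{name}: parses fine; only a semantic check against the spec would catch it")
    except SyntaxError as err:
        print(f"{name}: rejected by a syntactic constraint ({err.msg})")
```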