PhD @Berkeley_AI | interactive language agents
Jun 1, 2023 • 10 tweets • 5 min read
How can agents like LLMs become decision-making partners for humans?
💬 Excited to share a new paper + suite of envs for decision-oriented dialogues, where agents + humans collab to solve hard everyday problems. [1/n]
Site: collaborative-dialogue.github.io
A lot of everyday problems involve making decisions with messy constraints, from researching a laptop to buy to prioritizing a company roadmap.
Agents could help us make these decisions! But they need to integrate the fuzzy real-world knowledge and preferences that we have.
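To make that concrete, here is a minimal, hypothetical Python sketch of the kind of decision an assistant could help with: ranking options against a stated budget and uncertain preference weights, then asking about the attribute it is least sure of. The option names, numbers, and weights are all made up for illustration and are not from the paper's environments.

```python
# Illustrative sketch only (not the paper's benchmark): a toy assistant that
# combines a hard constraint with uncertain preference weights when helping a
# user pick a laptop, and asks about the attribute it is least sure about.
# All option names and numbers are hypothetical.

laptops = {
    "A": {"price": 900, "weight_kg": 1.2, "battery_hrs": 10},
    "B": {"price": 1400, "weight_kg": 1.0, "battery_hrs": 14},
    "C": {"price": 700, "weight_kg": 2.1, "battery_hrs": 6},
}

# Hard constraint the user has already stated.
budget = 1000

# Preference weights elicited so far: (mean weight, rough uncertainty) per attribute.
pref_weights = {"weight_kg": (-1.0, 0.2), "battery_hrs": (0.5, 0.8)}

def score(laptop):
    """Utility of a laptop under the current mean preference weights."""
    return sum(mean * laptop[attr] for attr, (mean, _) in pref_weights.items())

# Filter by the hard constraint, rank by estimated utility,
# then pick the attribute whose weight is most uncertain to ask about next.
feasible = {name: spec for name, spec in laptops.items() if spec["price"] <= budget}
best = max(feasible, key=lambda name: score(feasible[name]))
most_uncertain = max(pref_weights, key=lambda attr: pref_weights[attr][1])

print(f"Current best option within budget: {best}")
print(f"Clarifying question: how much do you care about {most_uncertain}?")
```

The back-and-forth this sketch hints at (recommend an option, ask a clarifying question, update the preference estimate) is the kind of collaboration the dialogues are about.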
Apr 18, 2022 • 8 tweets • 6 min read
How can agents infer what people want from what they say?
In our new paper at #acl2022nlp w/ @dan_fried, Dan Klein, and @ancadianadragan, we learn preferences from language by reasoning about how people communicate in context.
Paper: arxiv.org/abs/2204.02515
@dan_fried @ancadianadragan We'd like AI agents that not only follow our instructions ("book this flight"), but also generalize to what we want in new contexts (know what flights I prefer from our past interactions and book on my behalf), i.e., learn *rewards* from language. [2/n]
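As a rough illustration of that idea (not the paper's model or code), here is a toy Bayesian sketch in Python: infer a distribution over reward functions from an utterance by modeling the speaker as someone who tends to ask for options that score well under their true reward. All flights, features, utterances, and weights below are hypothetical.

```python
import numpy as np

# Toy sketch of inferring rewards from language (illustrative assumptions only).

# Candidate flights, each described by features: [price_is_low, is_direct, arrives_morning]
flights = np.array([
    [1, 0, 1],   # cheap, one stop, morning arrival
    [0, 1, 1],   # pricey, direct, morning arrival
    [0, 1, 0],   # pricey, direct, evening arrival
])

# Hypothesis space of reward functions: weights over the same features.
reward_hypotheses = np.array([
    [1.0, 0.0, 0.0],  # cares only about price
    [0.0, 1.0, 0.0],  # cares only about directness
    [0.5, 0.5, 0.0],  # cares about both
])

# A small set of utterances and the flights each one literally refers to.
utterances = {"book the cheap one": [0], "book a direct flight": [1, 2]}

def speaker_prob(utterance, reward_w, beta=3.0):
    """P(utterance | reward): a speaker is more likely to ask for flights that
    score highly under their true reward (softmax over the candidate flights)."""
    utilities = flights @ reward_w
    choice_probs = np.exp(beta * utilities) / np.exp(beta * utilities).sum()
    # Probability mass the utterance puts on the flights it refers to.
    return choice_probs[utterances[utterance]].sum()

def posterior_over_rewards(utterance):
    """P(reward | utterance) ∝ P(utterance | reward) * P(reward), uniform prior."""
    likelihoods = np.array([speaker_prob(utterance, w) for w in reward_hypotheses])
    return likelihoods / likelihoods.sum()

print(posterior_over_rewards("book the cheap one"))    # mass shifts toward the price-only reward
print(posterior_over_rewards("book a direct flight"))  # mass shifts toward the directness reward
```

The point of the sketch: the same utterance carries different evidence about the reward depending on which options are available, so reasoning about how people communicate in context lets the agent generalize beyond the literal instruction.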