Read more on how we can do this (with statistical guarantees) for LLMs on robots:
robot-help.github.io
Quantifying LLM uncertainty when generating robot plans is especially crucial because of safety considerations.
Instructions from people can be ambiguous, and LLMs are prone to hallucination. Poor outputs can lead to unsafe actions and real-world consequences.
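One common way to attach statistical guarantees to an LLM's choice among candidate plans is split conformal prediction. Below is a minimal, self-contained sketch (the calibration data, function names, and threshold choice are all illustrative assumptions, not taken from the linked work): calibrate a score threshold on held-out examples, then at test time keep every option that clears it. If more than one plan survives, the instruction is ambiguous and the robot should ask for help.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_threshold(cal_scores, cal_labels, alpha=0.1):
    """Quantile of nonconformity scores (1 - score of the true option)
    that gives ~(1 - alpha) coverage on new examples."""
    n = len(cal_labels)
    nonconf = 1.0 - cal_scores[np.arange(n), cal_labels]
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(nonconf, level, method="higher")

def prediction_set(scores, qhat):
    """Indices of all options whose score clears the calibrated threshold."""
    return np.nonzero(1.0 - scores <= qhat)[0]

# Simulated calibration set: 500 examples, 4 candidate plans each,
# with the "true" plan drawn in proportion to the model's own scores.
cal_scores = rng.dirichlet(np.ones(4), size=500)
cal_labels = np.array([rng.choice(4, p=s) for s in cal_scores])
qhat = conformal_threshold(cal_scores, cal_labels, alpha=0.1)

# Test time: model scores over 4 candidate plans for one instruction.
test_scores = np.array([0.70, 0.20, 0.06, 0.04])
opts = prediction_set(test_scores, qhat)
print(len(opts))  # >1 plausible plans would mean: ask the human to clarify
```

The guarantee is marginal: over many instructions, the prediction set contains the correct plan with probability at least 1 - alpha, regardless of how well calibrated the raw LLM scores are.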