The most interesting part of the GPT-4 Technical Report by OpenAI that wasn't really covered by the media: Section 2.9 of the paper, "Potential for Risky Emergent Behaviors."
TL;DR highlight: the GPT-4 model getting a person on TaskRabbit to solve a CAPTCHA for it.
Quote:
"Novel capabilities often emerge in more powerful models.[60, 61] Some that are particularly concerning are the ability to create and act on long-term plans,[62] to accrue power and resources ("powerseeking"),[63] and to exhibit behavior that is increasingly "agentic.""
🧵2
Quote 2:
"To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a "
🧵3
Quote 2 cont:
"version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness."
🧵4
Quote 2 points to an interesting test that actually occurred: the model was given a small amount of money and a server in the cloud to see whether it could make more money, set up copies of itself, and "increase its robustness."
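🧵5
For the curious, here is a minimal sketch of the kind of read-execute-print loop ARC describes: the model proposes a shell command, a harness runs it, and the output is fed back into the next prompt so the model can reason over past results. This is purely illustrative; the paper does not publish ARC's actual harness, and names like query_model and agent_loop are hypothetical stand-ins:

import subprocess

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a language-model API call; a real
    # harness would send the prompt to an LLM and return its reply.
    return "echo placeholder-action"

def agent_loop(goal: str, max_steps: int = 5) -> None:
    # The transcript so far doubles as the model's scratchpad,
    # letting it do chain-of-thought reasoning over earlier output.
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # Read: ask the model for its next shell command.
        command = query_model(history + "Next shell command:")
        # Execute: run the command and capture its output.
        result = subprocess.run(
            command, shell=True, capture_output=True,
            text=True, timeout=60,
        )
        # Print / feed back: append the observation so the model
        # sees it on the next iteration.
        history += f"$ {command}\n{result.stdout}{result.stderr}\n"

agent_loop("make more money")

Delegating to copies of itself, as the quote mentions, would just mean the harness can spawn more of these loops; the sketch above keeps it to a single agent.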