GPT-4's autoregressive nature makes it unstable and sensitive to prompting.
Here are some interesting examples:
When you say "do not explain", it just fires off an answer token, which turns out to be incorrect. (1/3)
When you say "first answer, then explain", it commits to an incorrect position and then tries to justify its answer (inaccurately). It doesn't change its opinion or say "I was wrong".
Mar 30, 2023
The very same thing that makes LLMs so powerful could also be the root cause of the alignment problem.
To spell it out, I think the emergence of "style transfer" is understudied in the context of alignment.
If GPT-4 can write a deep theoretical physics document in Shakespeare's voice, it has an implicit capability to factorize concepts from styles.
Recombining these implicit factors is a source of creativity.
An LLM can generate texts that no single human alone could generate.
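To make the recombination point concrete, here is a toy sketch that treats topic, style, and format as independent axes and enumerates their cross product as prompts. All axis values are invented for illustration, not taken from the thread.

```python
# Toy illustration of "recombining implicit factors": three independent
# axes whose cross product yields combinations that likely never co-occur
# in any single training document, yet a capable LLM can render each.
from itertools import product

topics = ["general relativity", "protein folding"]
styles = ["in Shakespeare's voice", "as a hard-boiled detective monologue"]
formats = ["a sonnet", "a short lecture"]

for topic, style, fmt in product(topics, styles, formats):
    print(f"Explain {topic} {style}, written as {fmt}.")
# 2 x 2 x 2 = 8 prompts, and the space grows multiplicatively with
# each new axis or axis value.
```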
Sep 8, 2018
#ECCB18 is on. I was supposed to give 1 of the 3 accepted talks from our team at @4Catalyzer, but I couldn't make it because of an ongoing green card application. I will briefly highlight our 3 works here. (thread) >>>
I was going to present "Functional Annotation of Genes Through Integration of Disparate Data Sources with Deep Learning"
We developed an end-to-end trainable deep learning framework that performs data integration and functional prediction jointly.
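The thread doesn't include architecture details, but a minimal sketch of the general idea, with all layer sizes, the number of function terms, and the concatenation-based fusion invented for illustration rather than taken from the paper, might look like this in PyTorch:

```python
# Hypothetical sketch: one encoder per disparate data source, embeddings
# fused by concatenation, and a shared head predicting functional
# annotations. All dimensions and design choices here are assumptions.
import torch
import torch.nn as nn

class DataIntegrationNet(nn.Module):
    def __init__(self, source_dims, hidden=128, n_functions=1000):
        super().__init__()
        # One encoder per data source (e.g. expression profiles,
        # interaction-network features, sequence-derived features).
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in source_dims]
        )
        # Shared head over the fused embedding: one logit per function
        # term (multi-label prediction; train with BCEWithLogitsLoss).
        self.head = nn.Sequential(
            nn.Linear(hidden * len(source_dims), hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_functions),
        )

    def forward(self, sources):
        # Encode each source, concatenate, then predict jointly, so the
        # whole pipeline is trainable end to end.
        fused = torch.cat(
            [enc(x) for enc, x in zip(self.encoders, sources)], dim=-1
        )
        return self.head(fused)

# Example: three sources with 500-, 200-, and 64-dimensional features.
model = DataIntegrationNet([500, 200, 64])
batch = [torch.randn(8, d) for d in (500, 200, 64)]
print(model(batch).shape)  # torch.Size([8, 1000])
```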