Apps to answer math questions have been around for a while.
But ChatGPT is different - it can do things that previously required human judgment and analysis, like writing full essays or solving complex problem sets.
The result? An "existential crisis" for educators.
What's next? I see three paths forward:
1️⃣ Schools adjust assignments to prevent the use of AI.
Take-home work largely disappears. Class time is used for proctored essays, problem sets, and exams.
Homework time is spent learning asynchronously via video - a "flipped classroom" model.
2️⃣ Schools embrace AI.
Students will use AI in real life. Why make them do things the "old-fashioned way" at school?
Instead, lessons will incorporate AI - teaching students how to write prompts, analyze outputs, and edit as needed (CC: @emollick).
3️⃣ Schools treat AI use as cheating.
In this case, AI assistance is viewed like plagiarism. Educators learn how to detect it, and have policies in place to downgrade or disqualify assignments.
A "GPT watermark" may already be in the works at OpenAI π
Open source models are at the bleeding edge of AI - but most consumers have no idea how to use them.
Enter a wave of products that bring these models to the browser (or app!), with consumer-friendly UI/tooling.
More from me + @omooretweets on startups building here 👇
To start: our team @a16z couldn't be more excited about open source (OS) models and the community behind them.
You don't always need to train a foundation model to build a great app - in many cases, you can use OS models and differentiate on workflow/tooling.
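Concretely, "differentiating on workflow/tooling" means the app layer adds value through prompt templates, post-processing, and UX around a swappable model. A minimal sketch - `call_model` is a stand-in stub for any OS model endpoint (e.g. a locally served Llama or Mistral), so the example is self-contained:

```python
def call_model(prompt: str) -> str:
    """Stub for an OS model call; in a real app this would hit a local or hosted endpoint."""
    return f"[model output for: {prompt}]"

def summarize_ticket(ticket_text: str, tone: str = "neutral") -> str:
    """App-level workflow: a task-specific prompt template plus light post-processing."""
    prompt = (
        f"Summarize this support ticket in a {tone} tone, in one sentence:\n\n"
        f"{ticket_text}"
    )
    return call_model(prompt).strip()
```

The model behind `call_model` can be swapped freely; the product's value lives in the template, defaults, and cleanup logic wrapped around it.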
Examples across categories:
1) Text ✍️
GPT initially dominated, but we now see apps using OS alternatives like Mistral and LLaMA.
This includes both general chat platforms and NSFW apps - the latter were early adopters of OS LLMs, since their content often violates the usage policies of closed model providers.