Entrepreneurship has been declining, even though 1/3 of Americans have had a startup idea in the past 5 years.
Part of the reason is that getting started can be daunting, and founders often need a little help to get moving. I think ChatGPT can do that. oneusefulthing.substack.com/p/chatgtp-is-m…
Folks are commenting that the reason is healthcare. Sure, but only half of Americans with ideas even do basic follow-up steps, like searching the internet for research. Overall, the road to entrepreneurship is long & bumpy, and lots of people drop out at the first sign of difficulty.
Asking ChatGPT riddles underlines how it can help us approach problems in a different (if not always right) way.
Example: “Turn me on my side and I am everything. Cut me in half and I am nothing. What am I?” The answer is usually 8 (turn it to become ∞, cut it to be zero).
It has no patience for clever riddles. This is a practical problem, damn it.
It also solves the hardest logic puzzle in the world (more here: nautil.us/how-to-solve-t…) in a way that seems to be right & well-explained but which is actually nonsense.
I wrote in Harvard Business Review about why I think AI has suddenly reached a tipping point for useful work & how that might shift what jobs look like in ways that are hard to anticipate.
This Twitter thread shows you how to use ChatGPT to boost your writing. Very useful if you are an expert on a topic: it multiplies your abilities and your time.
Popular business advice is full of theories that are "useful, but not necessarily right," reducing complex issues into simpler factors, even if the framework is not particularly robust.
I regret to inform you that AI is very good at this. Witness the "Technology Adoption Matrix"
I asked it for a theory of technology adoption. It tried to give me Rogers' Diffusion Theory, and then the Technology Adoption Cycle, but after enough prodding it produced this new approach. Everything was done by the AI.
Here's how a consultant should pitch it, according to the AI.
Business experiments help companies and startups succeed, but many of them run the wrong experiments: they test small changes, which can require huge samples to detect tiny differences between two options.
This paper on A/B tests at Bing shows why.... 1/
Most A/B results for mature products are small & precise (impacts of .06%) but some are much larger. To find those "fat tails," it is better to conduct lots of smaller, but less accurate, experiments. papers.ssrn.com/sol3/papers.cf…
So it depends on the market:
Where tails are thin: “perform thorough prior screening of potential innovations & run a few high-powered precise experiments”
Where tails are thick: “run many small experiments, and test a large number of ideas in hopes of finding a big winner” 3/
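To see why tiny effects need enormous samples, here is a rough back-of-envelope power calculation for a two-proportion A/B test (80% power, 5% two-sided significance). The baseline conversion rate and the lifts are my own illustrative assumptions, not numbers from the Bing paper:

```python
# Sketch: approximate per-arm sample size needed to detect a relative lift
# in a baseline conversion rate, using the standard normal-approximation
# power formula. All rates/lifts below are illustrative assumptions.
import math

Z_ALPHA = 1.96    # critical value for two-sided 5% significance
Z_BETA = 0.8416   # critical value for 80% power

def n_per_arm(p_base: float, relative_lift: float) -> int:
    """Approximate users per arm to detect a given relative lift in p_base."""
    p_new = p_base * (1 + relative_lift)
    delta = p_new - p_base
    p_bar = (p_base + p_new) / 2
    var = 2 * p_bar * (1 - p_bar)  # pooled variance across the two arms
    n = ((Z_ALPHA + Z_BETA) ** 2) * var / delta ** 2
    return math.ceil(n)

for lift in (0.0006, 0.01, 0.10):  # 0.06%, 1%, 10% relative lifts
    print(f"{lift:.2%} lift -> {n_per_arm(0.05, lift):,} users per arm")
```

With a 5% baseline rate, a 10% relative lift takes on the order of tens of thousands of users per arm, while a 0.06% lift takes hundreds of millions: precisely why thin-tailed markets force a few big, high-powered experiments, and thick-tailed markets reward many small ones.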
Teachers, the semester is ending, it is too late to change assignments, & almost all of your students are using AI. And while some are cheating, others use it as an editor/tutor
One option: ask that they cite AI use & provide their prompts. And assign a reflection on how AI helped (or not).
I did this, and it worked. It let students off the hook (no one knows what the plagiarism rules are for AI: just copying is wrong, but using AI to edit your essay? Suggest changes? Provide a draft?) & it helped me understand what was happening. Also, the reflections were thoughtful.