1) Today's #GPT4 thoughts after another day of use:

People are worried about AI "automating" the writing of code and causing developers to be fired. The reality is that, as of last week, the world shifted to where efficiency and computing cost are now the only reasons to write code.
2) You can ask #GPT4 to execute almost any "code" you can think of, because natural language is essentially code execution. Last night, my brother used @turbotax for part of his taxes and asked #GPT4 to complete the theft-loss forms for the @GenesisTrading scam.
3) #GPT4 provided accurate information and was able to explain exactly what to do. It knew that @CelsiusNetwork could be treated as a Ponzi scheme, and that the IRS has a specific document for Ponzi schemes as a result of the Madoff scandal.
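To make the "natural language is code" point concrete, here's a minimal sketch of that kind of query sent to GPT-4 through the OpenAI chat API. The prompt wording is hypothetical, and this assumes the pre-1.0 openai Python package:

```python
# Minimal sketch: the "program" is just a natural-language request.
# Assumes the pre-1.0 openai package; the prompt text is hypothetical.
import openai

openai.api_key = "sk-..."  # your API key here

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "I lost funds when a crypto lender collapsed. Which IRS forms "
            "cover theft losses from a Ponzi-type scheme, and how should "
            "I fill them out?"
        ),
    }],
    temperature=0,  # keep factual answers as deterministic as possible
)

print(response.choices[0].message.content)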
4) However, #GPT4 used far more computing power to generate the output than @turbotax did. It is much more expensive and slower to have a huge neural network compute the figures for the forms than to use a rules-based program like TurboTax.
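For a rough sense of that gap, here's an illustrative back-of-envelope comparison. The parameter count, token count, and operation count are assumptions chosen for illustration, not known figures for GPT-4 or TurboTax:

```python
# Back-of-envelope compute comparison; every number is an assumption.
# A transformer forward pass costs roughly 2 FLOPs per active parameter
# per generated token.
active_params = 1e11    # assumed order of magnitude, not a known GPT-4 figure
output_tokens = 500     # a few paragraphs of form guidance

llm_flops = 2 * active_params * output_tokens  # ~1e14 FLOPs

# A rules-based form fill is a handful of lookups and arithmetic steps.
rules_ops = 1e4         # generous estimate for one tax-form calculation

print(f"LLM: ~{llm_flops:.0e} FLOPs; rules engine: ~{rules_ops:.0e} ops")
print(f"ratio: ~{llm_flops / rules_ops:.0e}x")  # about ten billion to one
```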
5) But in this case, speed doesn't matter, because @turbotax only performs simple math. So as soon as it's possible to run more than the current cap of 25 queries every 3 hours and get responses back faster, all software where performance is unimportant - like TurboTax - will be obsolete.
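"Simple math" can be made concrete. Here's a hypothetical rules-based version of the Ponzi-loss calculation, with rates per the Rev. Proc. 2009-20 safe harbor (the IRS document from tweet 3, as I understand it); this is an illustration, not tax advice:

```python
# Illustrative sketch of the "simple math" a rules engine performs for a
# Ponzi-scheme theft loss. The 95%/75% rates follow the Rev. Proc. 2009-20
# safe harbor as I understand it; illustration only, not tax advice.
def safe_harbor_theft_loss(qualified_investment: float,
                           actual_recoveries: float,
                           pursuing_third_party_recovery: bool) -> float:
    rate = 0.75 if pursuing_third_party_recovery else 0.95
    return max(qualified_investment * rate - actual_recoveries, 0.0)

# Example: $20,000 invested and lost, $1,000 recovered, no third-party suit.
print(safe_harbor_theft_loss(20_000, 1_000, False))  # 18000.0
```

A few multiplications like this are the entire workload, which is why a rules engine costs almost nothing to run compared to a neural network.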
6) This realization leads to two conclusions. First, a significant amount of the software currently running on computers is already obsolete. We don't need #GPT4 to help developers write faster code; we just don't need much rules-based software at all anymore.
7) Second, as I pointed out yesterday, even at the current level of development, the #1 issue limiting the use of general AI like #GPT4 (and it is general AI) is simply that there aren't enough AI chips in the world, and they can't be produced cheaply enough.
8) Therefore, my choice is to short firms that sell rules-based software for entering and storing simple data, like accounting products and POS systems, and to buy chip manufacturers with spare fab capacity, because demand for AI chips is effectively unlimited.
9) So, to summarize: when looking at software, ask whether it is computationally simple or computationally complex. If it's simple, the AI will just do the job itself. These AIs won't be emulating 3D videogames for some time, but they will write the code that runs them efficiently.

More from @SteveSokolowsk2

Mar 24
1) Yesterday's #GPT4 experiment: I input the evidence concerning Zac Prince's, Erin's, and @BlockFi's conduct into the system, and then asked it questions about the case.
2) It suggested that the causes of action were negligent misrepresentation and fraudulent misrepresentation. It stated the odds of success on the former were 30-50% and the odds of winning on the latter were 40-60%.
3) It stated that the odds of winning on either of the issues were 60-80%, and that legal fees should cost $50,000 to $150,000.
Mar 22
1) I spent a large part of yesterday using GPT-4. One thing I was able to do with it was to design and then simulate an entire D&D battle sequence. The software knew the rules and taught me new things.
2) One thing that didn't happen, though, was any weird output whatsoever by the AI that suggested it was unaligned. It understood what a game simulation was and it didn't go off the rails and suggest ways to actually play out the battle in real life.
3) This was different from older AIs, which obviously didn't understand the requests. We should at least consider the possibility, even if a low one, that as models become smarter and are trained on all of humanity's data, they might understand that most humans don't want the world destroyed.