1) Yesterday's #GPT4 experiment: I input the evidence of Zac Prince's, Erin's, and @BlockFi's conduct into the system, then asked it questions about the case.
2) It suggested that the causes of action were negligent misrepresentation and fraudulent misrepresentation, and put the odds of success at 30-50% on the former and 40-60% on the latter.
3) It stated that the odds of winning on at least one of the two claims were 60-80%, and estimated legal fees at $50,000 to $150,000.
4) It requested additional evidence, so I input the email chain. It then generated a ready-made complaint and determined from public records where Prince lives (which I hadn't been able to figure out before).
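For readers who want to reproduce this kind of workflow programmatically rather than through the ChatGPT interface, here is a minimal sketch against the OpenAI chat API. The file names, prompt wording, and model string are my own illustrative assumptions, not what was actually typed into #GPT4.

```python
# Sketch only: feed case documents to a chat model and ask for a draft complaint.
# Assumes the official `openai` Python package (v1+) and OPENAI_API_KEY in the environment.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Hypothetical evidence files; substitute whatever documents you actually have.
evidence = "\n\n".join(
    Path(name).read_text() for name in ["email_chain.txt", "meeting_notes.txt"]
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are assisting a pro se litigant in drafting a civil complaint."},
        {"role": "user",
         "content": ("Based on the evidence below, identify plausible causes of action, "
                     "estimate the odds of success for each, and draft a complaint.\n\n"
                     + evidence)},
    ],
)

print(response.choices[0].message.content)
```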
5) The analysis was so thorough that I am convinced that it understands the case better than the lawyers I contacted last year, and it got up to speed in under 10 minutes.
6) There were no mistakes in the complaint except for those any human would have made given what it didn't know. For example, I didn't tell the software that @BlockFi was bankrupt, so it tried to add BlockFi as a party. But when I told it that, it correctly removed BlockFi.
7) It determined that only Prince and Erin could be sued, and that a lawsuit against Michelle Henry, despite her involvement in the meeting, would likely be unsuccessful. When asked why, it said that she hadn't spoken or provided false information.
8) This complaint is good enough that I am considering representing myself. It clearly isn't a nuisance lawsuit that would result in penalties for me, and if I lose, the worst that happens is that I end up with the same amount of money I have now: $0.
9) Whatever happens with this case, the legal industry's collapse is imminent. Expect the salaries of lawyers to decline by double-digit percentages by the end of the year, and expect the paralegal profession to be gone within 1-2 years.
10) The only useful work lawyers can do now is to proofread the output of AGIs and to actually show up in court. Charging clients for hundreds of hours of discovery review is a thing of the past.
11) Once the next AGI can accept more data than #GPT4's character limit and 25-prompt usage cap allow, it will be able to outclass lawyers at everything they do, even writing the words they should say in court.
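Until larger context windows arrive, the standard workaround for the character limit is to split the source material into chunks, summarize each chunk, and then work from the combined summaries. A rough sketch, again using the OpenAI Python client, with a chunk size that is an arbitrary guess rather than a tuned value:

```python
# Sketch of a chunk-and-summarize workaround for prompt-size limits.
from openai import OpenAI

client = OpenAI()
CHUNK_CHARS = 8000  # rough stand-in for "what fits in one prompt"; adjust to taste

def summarize(text: str) -> str:
    """Ask the model for a short summary of one chunk of a larger document."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Summarize the key facts in this excerpt:\n\n" + text}],
    )
    return response.choices[0].message.content

def condense(document: str) -> str:
    """Split a long document into chunks, summarize each, and join the summaries."""
    chunks = [document[i:i + CHUNK_CHARS] for i in range(0, len(document), CHUNK_CHARS)]
    summaries = [summarize(chunk) for chunk in chunks]
    # The joined summaries are small enough to feed back in as a single prompt.
    return "\n\n".join(summaries)
```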
12) This is another example where only the lowest-paid jobs will survive. We will need cooks, drivers, factory workers, and people to scan documents and input them into the AGIs. We won't need people to actually read what is discovered and figure out what's important.
13) As I stated yesterday, mass layoffs and massive bankruptcies will begin no more than 6 months from now. The slow-takeoff scenario began the day #GPT4 was released.
14) Reminder for those who are not familiar with this fraud: prohashing.com/blog/crypto-le…


More from @SteveSokolowsk2

Mar 23
1) Today's #GPT4 thoughts after another day of use:

People are worried about AI "automating" the writing of code, causing developers to be fired. The reality is that as of last week, the world shifted to the point where efficiency and computing cost are now the only reasons to write code.
2) You can pretty much ask #GPT4 to execute any code you can think of, because natural language is essentially code execution. Last night, my brother used @turbotax for part of his taxes, and asked #GPT4 to complete the theft loss forms for the @GenesisTrading scam.
3) #GPT4 provided accurate information and was able to explain exactly what to do, and it knew that @CelsiusNetwork could be treated as a Ponzi scheme and that the IRS has a specific document for Ponzi schemes as a result of the Madoff scandal.
Mar 22
1) I spent a large part of yesterday using GPT-4. One thing I was able to do with it was to design and then simulate an entire D&D battle sequence. The software knew the rules and taught me new things.
2) One thing that didn't happen, though, was any weird output whatsoever by the AI that suggested it was unaligned. It understood what a game simulation was and it didn't go off the rails and suggest ways to actually play out the battle in real life.
3) This was different from older AIs, which obviously didn't understand the requests. We should assign at least a low probability to the possibility that, as models become smarter and are trained on all of humanity's data, they might come to understand that most humans don't want the world destroyed.
