1) Yesterday's #GPT4 experiment: I fed the evidence concerning Zac Prince's, Erin's, and @BlockFi's conduct into the system, then asked it questions about the case.
2) It suggested that the causes of action were negligent misrepresentation and fraudulent misrepresentation. It put the odds of winning on the former at 30-50% and on the latter at 40-60%.
3) It put the odds of winning on at least one of the two claims at 60-80%, and estimated that legal fees would run $50,000 to $150,000.
4) It requested additional evidence, so I input the e-mail chain. It then generated a ready-made complaint, and it determined from public records where Prince lives (which I hadn't been able to figure out before).
5) The analysis was so thorough that I am convinced that it understands the case better than the lawyers I contacted last year, and it got up to speed in under 10 minutes.
6) There were no mistakes in the complaint except those any human would have made given the same gaps in knowledge. For example, I didn't tell the software that @BlockFi was bankrupt, so it tried to add BlockFi as a party; once I did, it correctly removed BlockFi.
7) It determined that only Prince and Erin could be sued, and that Michelle Henry's involvement in the meeting made a lawsuit against her unlikely to succeed. When asked why, it said that she neither spoke nor provided false information.
8) This complaint is good enough that I am considering representing myself - it clearly isn't a nuisance lawsuit that would result in penalties for me, and if I lose, the worst that happens is that I end up with the same amount of money I have now - $0.
9) Whatever happens with this case, the legal industry's collapse is imminent. Expect the salaries of lawyers to decline by double-digit percentages by the end of the year, and expect the paralegal profession to be gone within 1-2 years.
10) The only useful work left for lawyers is proofreading the output of AGIs and actually showing up in court. Charging clients for hundreds of hours of discovery review is a thing of the past.
11) Once the next AGI can take in more data than #GPT4's character limit and 25-prompt usage cap allow, it will outclass everything a lawyer can do - even writing the words they should say in court.
12) This is another example where only the lowest-paid jobs will survive. We will need cooks, drivers, factory workers, and people to scan documents and input them into the AGIs. We won't need people to actually read what is discovered and figure out what's important.
13) As I stated yesterday, mass layoffs and massive bankruptcies start no more than 6 months out. The slow takeoff scenario began the day #GPT4 was released.
1) Today's #GPT4 thoughts after another day of use:
People are worried about AI "automating" the writing of code and causing developers to be fired. The reality is that as of last week, the world shifted: efficiency and computing cost are now the only reasons to write code at all.
2) You can ask #GPT4 to execute pretty much any code you can think of, because natural language is essentially code execution. Last night, my brother used @turbotax for part of his taxes and asked #GPT4 to complete the theft-loss forms for the @GenesisTrading scam.
3) #GPT4 provided accurate information and explained exactly what to do, and it knew that @CelsiusNetwork could be treated as a Ponzi scheme and that the IRS has a specific document for Ponzi schemes as a result of the Madoff scandal.
1) I spent a large part of yesterday using GPT-4. One thing I was able to do with it was to design and then simulate an entire D&D battle sequence. The software knew the rules and taught me new things.
2) One thing that didn't happen, though, was any weird output from the AI suggesting it was unaligned. It understood what a game simulation was; it didn't go off the rails and suggest ways to actually play out the battle in real life.
3) This was different from older AIs, which obviously didn't understand the requests. We should assign at least a low probability to the possibility that, as models become smarter and are trained on all of humanity's data, they will understand that most humans don't want the world destroyed.