ChatGPT wrote a surprisingly (to me) coherent answer to a contracts exam prompt.
I regenerated the response three times. This is the only one that discussed the reasonable expectations doctrine, though it wasn't particularly in-depth.
It would get a lot of "?", "why?", and "so?" comments from me because it doesn't explain, for example, why the fact that the contract calls for "re-pav[ing]" means sealcoating is excluded. It's mostly facts and conclusions.
As for the R.E.D., it doesn't explain when the rule applies or whether those conditions are satisfied on these facts.
When prompted specifically to discuss the R.E.D. and whether it obligates Eddie to seal the parking lot, ChatGPT says it's a hard question. But the points on exams (or at least on mine) come from working through those hard issues.
At this stage, no #lawstudent hoping for an A should try this.
We created a novel data set drawn from the @CFPB's consumer complaint database. Looking only at student loan complaints, we identified 212 companies that had been complained about.
We then identified which complained-about companies were fintechs.
We then compared the complaints against these fintech lenders and/or servicers to those against non-fintech lenders/servicers.
In general, we find very few complaints against fintechs. Are fintechs doing a particularly good job making and servicing student loans?
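For anyone who wants to try the same basic cut, here's a minimal sketch in pandas, assuming the public CSV export of the CFPB complaint database and a hand-labeled list of fintech companies. The column names follow the public export; the fintech set below is a placeholder, not the list we actually used.

```python
import pandas as pd

# Public CSV export of the CFPB consumer complaint database (assumed filename).
complaints = pd.read_csv("complaints.csv", usecols=["Product", "Company"])

# Keep only student loan complaints.
student = complaints[
    complaints["Product"].str.contains("Student loan", case=False, na=False)
]

# Count complaints per company (in our data this yields 212 companies).
per_company = student["Company"].value_counts()
print("Companies complained about:", per_company.size)

# Placeholder hand-labeled set of fintech lenders/servicers.
fintechs = {"Example Fintech Lender", "Another Fintech Servicer"}

# Compare fintech vs. non-fintech complaint volumes.
is_fintech = per_company.index.isin(fintechs)
print("Fintech complaints:", per_company[is_fintech].sum())
print("Non-fintech complaints:", per_company[~is_fintech].sum())
```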