In this thread, an LLM is told to write like me to resolve a routine but high-salience, unresolved customer support (CS) inquiry. I used to ghostwrite letters like this on behalf of people who had difficulty writing professionally.
The output is not fantastic but *will work if sent.*
The first and most important observation: it is obviously just and righteous to make the marginal cost of producing a demand letter zero, versus gating it on social class and hoping that enough underemployed Japanese salarymen exist to ghostwrite them for free.
Do you think the target firm would deprioritize the demand letter if they knew it was LLM generated? I do not, partly because people overrate the degree to which mistakes are intentional. After you’ve successfully pierced through to executive attention, the mission is 90% accomplished.
But even if one were very cynical about firms, someone capable of getting an LLM to write to an executive and say “Take me seriously” *can also have an LLM write to a regulator.*
Is this being a Dangerous Professional in a bottle? Nope, but it is close enough for a subset!
And now for a completely different observation: one of the reasons I use fun little coinages like Dangerous Professional is to give people a pointer into thought space for a concept that I refer to over many years in essays, comments, and similar.
LLMs also follow pointers. Huh.
To what degree should I now write for an audience which includes the assumption that my writing will be in a training set within 12-24 months, and may be used more internally by the LLM than explicitly by humans, potentially by several orders of magnitude?
Something to ponder.
(To make explicit something implicit here: I don’t think the only interesting optimization is “give the LLM more magic spells if the user invokes me by name.” I think it plausibly includes “Choose to write on topics that will inform the LLM’s intermediates without being in the prompt.”)
As long as I’m on the topic, note that ghostwriting letters like this is many people’s actual job.
For example, a huge portion of the NGO industrial complex employs people, whom The System will fob off less frequently, to make folks with complicated lives legible to The System.
Frequently, The System is happy to pay for this, since it is far cheaper than refactoring The System to deal with people with complicated lives directly.
“This sounds unlikely.”
Look up e.g. Obamacare navigators and eligibility forms.
“What is ‘complicated lives’ a euphemism for?”
Most people reading this have a simple answer to the question “What is your address?” Some people respond to that question with a story.
Most bureaucracies are not prepared to deal with the story and will deadlock as a result.
The story might rhyme with:
“Well it was at X until I got evicted then I stayed with my cousin but his new girlfriend doesn’t like me so I crashed with Tim until Tim needed to go back to school then…”
These get arbitrarily complicated.
• • •
Stating the obvious: if you do this in traditional finance you will be fired with very high probability and there is a nontrivial risk of civil or criminal litigation to top it off.
This is not because we're fuddy-duddies who think that financial executives make too much money.
It is because if you allow explicit corruption of people who have a concrete or notional duty to safeguard the interests of many other people, they will tend to optimize for that incentive and not for doing a good job, managing risks, angling for 5% extra bonus, etc.
The threshold for me needing to raise something to the Conflicts Committee was, hmm, $50 or $100? Low enough that it sometimes made planning routine business dinners in Tokyo challenging.
Investment in a contra? Yeah that's just *obviously not done.*
“I’m in your server, committing crimes!” said the Internet native *money laundering reporting officer*, *in writing*, *to their management chain.*
(Not an actual quote but goodness look at the actual quotes.)
Do you understand why lawyers and Compliance are utterly without a sense of humor? This, this is why. It would not be good behavior if it were described more discreetly! But this raises the legal risks *much more than you would think* and makes the government much less likely to settle.
I’ve now been 3D printing and painting for just over a year, which I find remarkable since I rarely stick with new habits. It still feels magical, and it has been interesting to feel like I’m getting perceptibly better at something every few weeks.
There is something deeply satisfying about starting Friday night with a bottle of goo and ending Sunday with a (partially) painted dragon.
The kids still paint with me too, though part of it is clearly humoring dad.
Ruriko: “You should sell these!”
Me: “… That would be extremely economically irrational.”
Ruriko: “But you’re almost a pro!”
Me: “Supposing that is true, what do you think this one costs?”
Ruriko: “Probably $15!”
Me: “Correct and also it took me 4 hours.”
In addition to altruistic motivations, part of the stochasticity of the process is that you have capacity for some number, X, of human transaction reviews per day, to cover Y transactions.
You tell computers to select the Z% of transactions that the heuristics score as fraudiest for review.
Then the pandemic happens. Unexpectedly, you are hit by two things simultaneously:
a) Your capacity for reviews, X, comes out of a relatively large number of people occupying a relatively small number of pockets of air. Many of those pockets close, involuntarily, for the first time ever.
b) You breathe a sigh of relief, thinking that at least a decline in transaction volume will save you from this particular business challenge.
The number of transactions in the economy does not materially decline.
The number you process *goes up* as transaction patterns shift.
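To make the capacity crunch concrete, here is a minimal toy model of the arithmetic; every number in it is invented for illustration and stands in for the X, Y, and Z above:

```python
# Toy model of manual fraud-review capacity (all numbers invented for illustration).

reviews_per_analyst_per_day = 40       # per-analyst throughput feeding into X
review_fraction = 0.002                # Z%, the heuristically "fraudiest" slice of Y

def coverage(analysts: int, volume: int, fraction: float) -> float:
    """Fraction of the flagged queue that humans can actually get through."""
    flagged = volume * fraction
    capacity = analysts * reviews_per_analyst_per_day
    return min(1.0, capacity / flagged)

# Before the pandemic: 60 analysts comfortably clear the flagged queue.
print(coverage(analysts=60, volume=1_000_000, fraction=0.002))   # 1.0

# After: offices close (capacity halves) while volume *rises* as patterns shift.
print(coverage(analysts=30, volume=1_300_000, fraction=0.002))   # ~0.46
```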
One of the reasons every hiring process is terrible, only one but unfortunately it is a true one, is that the median candidate for any publicly posted job will be a terrible fit.
Partly this is the dating site problem (some candidates believe their dominant strategy is applying to a hundred jobs in parallel, with relatively little individualized effort) and partly it is a subtly different one.
That problem is difficult to talk about, and rounds to “Are current candidates less competent than current employees as a class? Almost definitionally yes, because the competent candidates get employed and then there is a new iteration of the game. Solve for equilibrium.”
The whole enterprise of taking objects from your own program, serializing them as JSON so you can map them to the particular verbs a 3rd party allows on their API, and then parsing their output to… should obviously be eaten by LLMs, yesterday.
They are almost preternaturally good at simple text transformations, and some very not simple text transformations, and so one expects that API docs are pretty close to everything they need to interface with many well-designed APIs.
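For concreteness, a hypothetical sketch of the glue code in question; the endpoint, payload shape, and Invoice class are all invented, but this serialize-map-parse dance is the work being described:

```python
# Hypothetical glue code of the sort described above: serialize a local object,
# map it onto a third party's API verbs, then parse what comes back.
# The endpoint, field names, and Invoice class are invented for illustration.

import json
from dataclasses import dataclass
from urllib.request import Request, urlopen

@dataclass
class Invoice:
    customer_id: str
    amount_cents: int
    memo: str

def create_remote_invoice(invoice: Invoice) -> dict:
    # Reshape our object into the vocabulary the third party accepts.
    payload = {
        "customer": invoice.customer_id,
        "amount": invoice.amount_cents,
        "description": invoice.memo,
    }
    req = Request(
        "https://api.example.com/v1/invoices",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())  # ...and parse their representation back out.
```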