Here's what she does: "I consider decision-making constrained by considerations of morality, rationality, or other virtues. The decision maker has a true preference over outcomes, but feels compelled to choose among outcomes that are top-ranked" by a "virtue/duty" preference.
3/
Being a decision theorist, she does decision theory on this.
In particular, she asks how we can identify the agent's notion of duty (or whatever other virtue he feels constrained by) if we know his true preference.
4/
She also shows that choice behavior substantially restricts both the true preference and justifications when neither is known, and gives a mathematical characterization of how.
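The constrained-choice idea above can be sketched in a few lines. This is my own toy formalization, not the paper's model: rank outcomes by a "duty" ordering, keep only the duty-maximal ones, then maximize the true preference within that set.

```python
# Toy sketch of duty-constrained choice (my own illustration, not the
# paper's model). Higher rank = better.

def constrained_choice(menu, duty_rank, pref_rank):
    """Among the duty-maximal outcomes in `menu`, pick the one that is
    best according to the true preference."""
    best_duty = max(duty_rank(x) for x in menu)
    admissible = [x for x in menu if duty_rank(x) == best_duty]
    return max(admissible, key=pref_rank)

# Hypothetical example: duty ties a and b above c, but the true
# preference likes c most. Duty screens c out, so the agent picks b.
duty = {"a": 2, "b": 2, "c": 1}
pref = {"a": 1, "b": 3, "c": 5}

choice = constrained_choice(["a", "b", "c"], duty.get, pref.get)
# choice == "b": c is preference-best but duty-dominated
```

The identification question in the previous tweet then becomes: observing `choice` across many menus, and knowing `pref`, what can we infer about `duty`?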
5/
What I like about this is that it takes seriously the conflict that can arise between duty and preference. It doesn't insist on some dogmatic conflation of the two (as in my first tweet) but creates a formalism giving them both space to be real things.
6/
A wonderful example of decision theory being helpful by giving us good, clear ways to talk about (and "be economists about") things that we should talk about, but didn't yet have good language for.
7/7
PS/ I think this is a piece of decision theory that Bernard Williams (who unfortunately is not on Twitter) would have liked.
I don't care at all about homework being done with AI since most of the grade is exams, so this takes out the "cheating" concern.
Students seem motivated to learn and understand, which makes the class very similar to before despite the availability of an answer oracle.
2/
It's possible that (A) all the skills I'm trying to teach will be automated, not just the problem sets AND (B) nobody will need to know them and (C) nobody will want to know them.
Notice: A doesn't imply B and B doesn't imply C.
3/
A survey of what standard models of production and trade are missing, and how network theory can illuminate fragilities like the ones unfolding right now, where market expectations seem to fall off a cliff.
When AGI arrives and replaces all human work, there won't be human sports.
Instead of watching humans play basketball, we'll watch humanoid robots play basketball; robots will, after all, play better.
Similarly, robot jockeys will ride robot horses at the racetrack.
1/
There won't be humans getting paid to compete in chess tournaments.
MagnusGPT will not only play better than any human plays today, but also make that characteristic smirk and swivel his head around in that weird way.
2/
There certainly won't be humans getting paid to work as nurses for the sick and dying, because robots with soft hands will provide not only sponge baths but better (superhuman!) company and comfort.
3/
Played around with OpenAI Deep Research today. Thoughts:
1. Worst: asked it to find the fourth woman ever elected to Harvard's Society of Fellows - the task required simple reasoning to assess ambiguous names. It gave the wrong person. A high school intern would do better.
1/
2. Asked it to list all economists at top 15 econ departments in a specific subfield w/ their citation counts. It barely figured out the US News ranking, its list of people was incomplete, and it ran into problems accessing Google Scholar so cites were wrong/approximate.
2/
3. Asked it to find excerpts of bad academic writing of at least 300 words each.
Thought for 10 minutes, came up with stuff like this (obviously non-compliant with request).