A short thread on an obvious selection effect with some big consequences.
The social networks that are huge and very powerful now are the ones that grew the fastest. All else equal, those tend to be the ones with compelling products, but they also share another crucial trait:
1/
A willingness to make most trade-offs in favor of growth during a crucial period, one that was often quite long.
That process isn't pretty: it involves being willing to manipulate users and to operate as many viral loops as possible, as long as they don't have a *growth* downside.
2/
There's also a large, and maybe more important, effect on corporate culture: the people who grow most powerful and influential at the company during this period are the ones who were willing to give up a lot of other things for growth.
3/
So now you have these companies that influence world events at the highest level. And, just by the nature of how they got to be that way, their senior leaders are selected on having been obsessed, to the exclusion of other values, with user growth.
4/
Not all the leaders, of course, but the directional effect is there.
And now, these are the people (naturally - who else?) who run earnest corporate responsibility initiatives to make the platforms serve society better.
5/
Facebook is, of course, the perfect example of this, but not the only one.
So part of what we are learning about in the early 21st century is what a politics and a culture presided over by growth hackers look like.
6/
I think a recognition of this phenomenon is why some of the people who understand these companies (and the leaders involved) best feel darkest about things.
Thread inspired (most proximately) by this useful history.
I don't care at all about homework being done with AI, since most of the grade is exams; that takes out the "cheating" concern.
1/
Students seem motivated to learn and understand, which makes the class feel very similar to before, despite the availability of an answer oracle.
2/
It's possible that (A) all the skills I'm trying to teach will be automated, not just the problem sets, AND (B) nobody will need to know them, AND (C) nobody will want to know them.
Notice: A doesn't imply B, and B doesn't imply C.
3/
A survey of what standard models of production and trade are missing, and how network theory can illuminate fragilities like the ones unfolding right now, where market expectations seem to fall off a cliff.
When AGI arrives and replaces all human work, there won't be human sports.
Instead of watching humans play basketball, we'll watch humanoid robots play basketball; robots will, after all, play better.
Similarly, robot jockeys will ride robot horses at the racetrack.
1/
There won't be humans getting paid to compete in chess tournaments.
MagnusGPT will not only play better than any human plays today, but also make that characteristic smirk and swivel his head around in that weird way.
2/
There certainly won't be humans getting paid to work as nurses for the sick and dying, because robots with soft hands will provide not only sponge baths but better (superhuman!) company and comfort.
3/
Played around with OpenAI Deep Research today. Thoughts:
1. Worst: asked it to find the fourth woman ever elected to Harvard's Society of Fellows, a task where simple reasoning was required to assess ambiguous names. It gave the wrong person. A high school intern would do better.
1/
2. Asked it to list all economists in a specific subfield at the top 15 econ departments, with their citation counts. It barely figured out the US News ranking, its list of people was incomplete, and it ran into problems accessing Google Scholar, so the citation counts were wrong or approximate.
2/
3. Asked it to find excerpts of bad academic writing of at least 300 words each.
It thought for 10 minutes and came up with stuff like this (obviously non-compliant with the request).