James Hardiman @hardimanjames
13 tweets, 3 min read
Just finished Prediction Machines by Ajay Agrawal, @joshgans & @avicgoldfarb. Wow. I have been investing in AI companies for years at @DCVC and still found this a very insightful read. Here are some of my crib notes 1/n
AI can be thought of as a fall in the cost of prediction (i.e., here’s a picture, predict the label). Turns out economists have tools for thinking about falling costs! 2/n
When costs fall, the value of complements increases. For prediction, the complements are data (and its generators), judgement (assigning value and defining the objective function), and action. 3/n
Data collection is costly though, so how do you think about the value of data? Prediction has diminishing returns for each marginal data point, but overall there may be increasing returns to scale. This happens when there is lots of value in good prediction of rare events. 4/n
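A minimal sketch of the diminishing-returns point (the numbers and the 1% failure rate are illustrative, not from the book): the standard error of an estimated event probability falls roughly as 1/sqrt(n), so each additional data point improves the prediction less than the one before it, and rare events need a lot of data before the estimate is useful at all.

```python
import math

def standard_error(p, n):
    """Standard error of an estimated event probability p from n samples."""
    return math.sqrt(p * (1 - p) / n)

p = 0.01  # a rare event, e.g. a 1% failure rate (illustrative number)
for n in (100, 1_000, 10_000, 100_000):
    # Error shrinks as n grows, but each 10x more data buys less improvement.
    print(n, round(standard_error(p, n), 5))
```

Going from 100 to 1,000 samples cuts the error far more than going from 10,000 to 10,900, which is the diminishing-marginal-value claim; the increasing-returns-to-scale claim is about the payoff side, where only very large datasets make rare-event prediction good enough to act on.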
So people vs. machines? Machines > when data is high dimensional or when most important factors are interactions. Humans > when data is small or missing or when reasoning about the world broadly is required. The future is probably human + machine. 5/n
Impact of AI on labor: uncertain. Job specs will shift to emphasize judgement and relationships. It is difficult to objectively assess the quality of an employee's judgement, so individual cos will want these employees in-house. 6/n
Impact of AI on capital: uncertain. Better prediction of the future could allow cos to do better contracting => more outsourcing, fewer assets. It could also make more complex decisions possible, leading cos to own assets in order to properly execute. 7/n
What does it mean to have an “AI first strategy”? Only meaningful if you are making a trade-off to emphasize prediction / data collection at the expense of something else. 7/n
This is where incumbents get exposed to an innovator's dilemma. Shifting to an AI-first strategy may lead to a worse product for existing customers (in the short term). Start-ups don't have an existing customer base to satisfy, so they can start AI-first and eventually eclipse incumbents. 8/n
To conclude, what are some things we should be concerned about? 9/n
Experience is a scarce resource; if we give it to machines, then humans won't get it. Failing over to a human in an extreme condition (e.g., a difficult driving situation) is not useful when they haven't logged the hours in normal situations! 10/n
AI does not rely on causal experimentation, just on correlation, and so is susceptible to all the associated traps 11/n
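One classic trap, sketched with made-up data: two variables driven by the same hidden cause correlate almost perfectly even though neither causes the other, and a purely correlational model would happily use one to "predict" the other.

```python
import random

random.seed(0)

# Hypothetical data: a hidden common cause drives both observed series.
hidden = [random.gauss(0, 1) for _ in range(1_000)]
a = [h + random.gauss(0, 0.1) for h in hidden]  # e.g. ice-cream sales
b = [h + random.gauss(0, 0.1) for h in hidden]  # e.g. drowning incidents

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    return cov / (vx * vy) ** 0.5

# Near 1.0, yet intervening on a would do nothing to b.
print(round(corr(a, b), 3))
```

Only a controlled experiment (intervening on `a` and watching `b`) distinguishes the two cases; the correlation alone cannot.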
Data risks: garbage in, garbage out. Competitors can learn your algo when they can observe its input/output data. A malicious attacker can manipulate an algo that learns from feedback … ex: @TayandYou 12/12