E.g. predicting that self-driving cars will happen is easier than predicting whether it will be Uber or Waymo or … that delivers them.
And necessarily so: if the problem is hard and luck plays a role, it becomes impossible to predict who will succeed. Only a technology can play the large-number iterated game until it succeeds; individual humans can play only a limited number of rounds.
(I'm currently jetlagged, so don't take the above too seriously)
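A minimal sketch of the iterated-games point above, under assumptions of my own (the per-attempt success probability `p_single` and the number of independent attempts `n_attempts` are made up for illustration): the technology as a whole is near-certain to succeed, while any single named attempt stays unlikely.

```python
import random

# Illustration only: a "technology" succeeds if ANY of its many attempts
# succeeds, while each individual attempt (one company, one team) stays unlikely.

random.seed(0)

p_single = 0.03      # assumed success probability of one attempt
n_attempts = 100     # assumed number of independent attempts across the field
n_trials = 100_000   # Monte Carlo trials

tech_wins = 0
first_attempt_wins = 0
for _ in range(n_trials):
    outcomes = [random.random() < p_single for _ in range(n_attempts)]
    if any(outcomes):
        tech_wins += 1          # the technology arrived, whoever built it
    if outcomes[0]:
        first_attempt_wins += 1 # one particular player won

print(f"P(the technology succeeds)  ~ {tech_wins / n_trials:.2f}")        # ~0.95
print(f"P(a given attempt succeeds) ~ {first_attempt_wins / n_trials:.2f}")  # ~0.03
```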
Predicting what will happen with a breed of information (what: a DNA sequence / a tech / a self-reproducing piece of info)
is easier than predicting over space (where),
which is easier than predicting over people (who),
which is easier than predicting over time (when).
(useful = produces a lesson and doesn't threaten survival)
(I explain this in gum.co/powerofa)