* The legal system can be seen as an objective function for an AI system.
* Legal code is called "code" for a reason: objective functions are not new; we have been writing them for thousands of years.
* Asimov's rules are not practical.
* It's surprising that traditional college textbooks dismissed SGD optimization of non-convex systems with far more parameters than observations (see the sketch after this list).
* ML is the science of sloppiness.
* Graph- and logic-based approaches to ML (PGMs) are too rigid and don't scale, due to the cost of knowledge acquisition. The same holds for expert systems in general.
* We don't have any good ideas for building an intelligent assistant with decent common sense. Anyone who claims otherwise is lying.
* Humans don't have AGI. Human intelligence is narrow, constrained by the structure and machinery of the human brain.
* Active learning makes labelling more efficient, but it's definitely not transformative on the path to progress in AI (a pool-based sketch follows the list).
* Humans have pre-built predictive models of the world that keep us from doing stupid things; they fast-track learning.
* Self-supervised pre-training is magic (see the sketch after this list).
* Autonomous driving will initially be expert-driven, with lidars, and will progressively incorporate learned components, most likely via self-supervised learning.
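
A minimal sketch of the overparameterized-SGD point above: a tiny two-layer non-convex network with roughly 190 parameters, fit to just 8 observations by plain SGD. All sizes, seeds, and hyperparameters are illustrative assumptions, not from the source.

```python
import numpy as np

# Two-layer tanh network with far more parameters than observations,
# trained with plain SGD on a non-convex loss.
rng = np.random.default_rng(0)

n, hidden = 8, 64                      # 8 observations, ~192 parameters
X = rng.normal(size=(n, 1))
y = np.sin(3 * X)                      # a non-linear target

W1 = rng.normal(scale=0.5, size=(1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.5, size=(hidden, 1))

lr = 0.05
for step in range(2000):
    i = rng.integers(n)                # stochastic: one sample per step
    x, t = X[i:i+1], y[i:i+1]
    h = np.tanh(x @ W1 + b1)           # forward pass
    err = h @ W2 - t

    gW2 = h.T @ err                    # backprop by hand for this tiny model
    gh = err @ W2.T * (1 - h**2)
    W2 -= lr * gW2
    W1 -= lr * (x.T @ gh)
    b1 -= lr * gh[0]

loss = float(np.mean((np.tanh(X @ W1 + b1) @ W2 - y) ** 2))
print(f"final training MSE: {loss:.6f}")   # SGD fits despite non-convexity
```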
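A minimal sketch of the active-learning point: pool-based uncertainty sampling, where the model only requests labels for the points it is least sure about. The synthetic dataset, classifier choice, and query budget are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pool-based active learning with uncertainty sampling.
rng = np.random.default_rng(1)

X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # hidden "oracle" labels

# Seed the labeled set with a few examples of each class.
labeled = list(rng.choice(np.where(y == 0)[0], 5, replace=False)) + \
          list(rng.choice(np.where(y == 1)[0], 5, replace=False))
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):                          # labelling budget: 20 queries
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]  # most uncertain point
    labeled.append(query)                    # ask the oracle to label it
    pool.remove(query)

clf = LogisticRegression().fit(X[labeled], y[labeled])
print(f"accuracy with {len(labeled)} labels: {clf.score(X, y):.3f}")
```

The efficiency gain is real but incremental: each query saves some labelling, yet the model family and learning algorithm stay the same, which is the bullet's point.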
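And a minimal sketch of self-supervised pre-training, here as masked-input reconstruction in PyTorch: the model learns from unlabeled data by predicting the parts of the input that were hidden, and the encoder is then reusable for a downstream supervised task. The architecture, masking ratio, and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Self-supervised pre-training: reconstruct masked inputs, no labels needed.
torch.manual_seed(0)

dim, hidden = 32, 16
encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
decoder = nn.Linear(hidden, dim)
opt = torch.optim.SGD(list(encoder.parameters()) +
                      list(decoder.parameters()), lr=0.1)

X = torch.randn(1024, dim)                    # unlabeled data

for step in range(500):
    mask = (torch.rand_like(X) > 0.25).float()  # hide 25% of each input
    recon = decoder(encoder(X * mask))          # predict the original
    loss = ((recon - X) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The pre-trained encoder now provides features for a supervised head.
features = encoder(X).detach()
print("pre-training loss:", float(loss), "| feature shape:", features.shape)
```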