When modern civilization voted away monarchies, we collectively sought to rid ourselves of psychopathic leaders. Yet here we are today.
When optimization is the primary driver of civilization, we structure our lives as if we were cogs in a great machine. As a consequence, our leadership treats people as if they were machines too.
We fear AI because it replaces us as cogs in the machinery, and thus we lose our relevance. We cannot stomach the possibility that AI might replace our psychopathic leaders, and thus we lose our agency.
The danger of AI has more to do with how we organize our civilization than with AI's growing competence. If we remain collectively confused about what it means to be human, we will inevitably create a civilization that ignores our humanity.
What advanced technology affords humanity is the luxury of time. When we are given time, we can pay attention to what it means to be human. It is only when we are collectively given enough time that we recognize why we are relevant.
History tells us that people who inherit the luxury of time make the most revolutionary breakthroughs in understanding and improving the human condition. The affairs of humanity are sufficiently complex that it is easy to get lost among the trees and overlook the grand forest.
The most valuable of all technologies are the kinds that open up new perspectives of understanding. AI is this kind of technology, just as the internet and the written word were before it.
We can either wield this new technology to spread disinformation at greater scale (see: selling sugar water) or employ it to better understand what makes us human.
If wealth implies having the luxury of time to engage in your passionate interests, then why don't wealthy intellectuals hire tutors so they can understand complex subjects faster?
The strange thing is that consultants are usually hired so that the wealthy do not have to think about certain things (for example, financial planning), but they are rarely hired to help their clients get better at something themselves.
Yet it is not unusual for the wealthy to hire personal trainers because it's common sense that you can't outsource your physical exercise. So why aren't there personal 'cognitive' trainers?
Finally, a credible mathematical framework for understanding how to build deep learning architectures for different problem domains. @mmbronstein
The 5 Gs of Geometric Deep Learning: grids, groups, graphs, geodesics, and gauges.
Now every Deep Learning practitioner needs to include groups, geodesics, and gauge invariances in their working vocabulary. Our brains are about to explode!
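To make that vocabulary concrete, here is a minimal sketch of the blueprint's central move: building a function that is invariant to a symmetry group, in this case the permutation group acting on a set. The DeepSets-style architecture, weights, and shapes below are illustrative assumptions, not code from the geometric deep learning paper.

```python
# A toy instance of the geometric deep learning blueprint: a set encoder
# whose output is invariant to the symmetry group of the input domain
# (here, permutations of the set elements). All weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(4, 8))   # per-element feature map (shared weights)
W_rho = rng.normal(size=(8, 1))   # readout applied after pooling

def set_encoder(X):
    """DeepSets-style encoder: f(X) = rho(sum_i phi(x_i)).

    Sum pooling is what makes f invariant to any permutation of the
    rows of X -- the 'groups' part of the 5G vocabulary.
    """
    H = np.tanh(X @ W_phi)          # phi: act on each element independently
    pooled = H.sum(axis=0)          # group-invariant aggregation over the set
    return np.tanh(pooled @ W_rho)  # rho: map the invariant summary to output

X = rng.normal(size=(5, 4))   # a set of 5 elements with 4 features each
perm = rng.permutation(5)     # a random group element (a permutation)
assert np.allclose(set_encoder(X), set_encoder(X[perm]))  # invariance holds
```

The same recipe generalizes across the 5 Gs: choose the domain, identify its symmetry group, and constrain the architecture to respect it.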
The number of papers is an indication of interest, not impact. Indeed, more people working on the same problem can generate more ideas. But more ideas do not necessarily yield more impactful ideas when those ideas are constrained by groupthink.
What is driving the interest in deep learning is of course its phenomenal success. This leads to more funding and more advanced tools. There are diminishing returns in every field as the low-hanging fruit is picked.
As in any field, the early adopters are rewarded disproportionately more than the latecomers. Unfortunately, it is a human bias to give the pioneers the greater share of the recognition.
I think many people have failed to express what the brain does at Marr's computational level. You've got to begin with a hypothesis and then frame your evidence in support of the hypothesis. Anything less is stamp collecting.
What we know of the brain is like what we would know about butterflies if all we did was collect a lot of butterflies. That kind of knowing is not the same as knowing how butterflies grow their wings.
In the context of general intelligence, there is no opposite to the notion of similarity. What we describe as dissimilar is also a kind of similarity.
That's because similarity hinges on a change of reference frame. It is what remains the same when a reference frame is changed.
Dissimilarity is what becomes different while some things remain the same. It is contextual, subordinate to the change of reference frame that determines similarity in the first place.
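A small numerical illustration of the point, using assumed data: rotate a point cloud into a new reference frame and every coordinate changes, yet the pairwise distances, the things that "remain the same," are preserved exactly.

```python
# Similarity as invariance under a change of reference frame:
# coordinates are frame-dependent, pairwise distances are not.
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(4, 2))   # 4 points in the plane (assumed data)

theta = 0.7                        # an arbitrary rotation (the new frame)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = points @ R.T

def pairwise_distances(P):
    diff = P[:, None, :] - P[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# The frame-dependent descriptions (coordinates) all change ...
assert not np.allclose(points, rotated)
# ... but the invariants (distances) are untouched by the frame change:
assert np.allclose(pairwise_distances(points), pairwise_distances(rotated))
```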
What is so frustrating about the FDA and CDC response to the pandemic is the lack of transparency in their decision-making. If we are ever to evolve into a world of greater complexity, we need governing institutions to be more transparent about how they make complex decisions.
What is preventing these institutions from publishing a decision diagram that shows what information was available for each decision and the sequence of steps that led to the final one?
As we become more dependent on automated decision-making algorithms, we cannot accept a state where decisions sit in a black box, unexplainable. If a self-driving car crashes without explanation, we should be able to review the decision process to discover the flaw.
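As a minimal sketch of what such a reviewable decision process might look like: each automated decision records the information available at that moment, the rule that fired, and the outcome, so the sequence can be replayed after the fact. The record fields, the braking rule, and all the numbers below are hypothetical illustrations, not any agency's or vendor's actual scheme.

```python
# A toy append-only decision trail: what was known, what rule applied,
# and what was decided, recorded at the moment of each decision.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    step: str        # which stage of the pipeline decided
    inputs: dict     # the information available at that moment
    rule: str        # the rule or model that was applied
    outcome: str     # what was decided
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

trail: list[DecisionRecord] = []   # append-only audit trail

def decide_brake(speed_mps: float, obstacle_m: float) -> bool:
    """Toy automated decision: brake if stopping distance exceeds the gap."""
    stopping_distance = speed_mps ** 2 / (2 * 6.0)  # assumes 6 m/s^2 braking
    brake = stopping_distance >= obstacle_m
    trail.append(DecisionRecord(
        step="collision_avoidance",
        inputs={"speed_mps": speed_mps, "obstacle_m": obstacle_m},
        rule="brake if v^2 / (2a) >= gap, with a = 6 m/s^2",
        outcome="brake" if brake else "continue"))
    return brake

decide_brake(speed_mps=20.0, obstacle_m=30.0)
for record in trail:   # reviewing the decision process after the fact
    print(record)
```

The design choice that matters is that the trail is written as the decision happens, not reconstructed afterwards; post hoc rationalization is exactly the opacity the paragraphs above object to.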