If wealth implies having the luxury of time to engage in your passionate interests, then why don't wealthy intellectuals hire tutors so they can understand complex subjects faster?
The strange thing is that consultants are usually hired so the wealthy do not have to think about certain things (financial planning, for example), but they are rarely hired to help their clients do something better themselves.
Yet it is not unusual for the wealthy to hire personal trainers because it's common sense that you can't outsource your physical exercise. So why aren't there personal 'cognitive' trainers?
Why is it fine to send our children to schools with the finest teachers, but not okay for us to hire the finest teachers for our own adult education? Is there a cultural taboo implying that adults do not need good teachers?
Is this similar to the taboo that leads many people to avoid psychiatrists? There is, however, a recent trend of people hiring performance coaches. Perhaps the word 'coach' works better for adults than 'tutor' or 'teacher'.
After all, professional athletes still hire all kinds of coaches. Why, then, aren't there coaches for professional knowledge workers?
• • •
When modern civilization voted away monarchies, we collectively sought to rid ourselves of leaders who were psychopaths. Yet here we are today.
When optimization is the primary driver of civilization, we structure our lives as if we were cogs in a great machine. As a consequence, our leadership treats people as if they were machines too.
We fear AI because it replaces us as cogs in the machinery; thus we lose our relevance. Yet we cannot stomach the possibility of AI replacing our psychopathic leaders; thus we lose our agency.
Finally, a credible mathematical framework for understanding how to build deep learning architectures for different problem domains. @mmbronstein
5G's of Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges.
Now every Deep Learning practitioner needs to include groups, geodesics, and gauge invariances in their working vocabulary. Our brains are about to explode!
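The 'group' entry in that vocabulary can be made concrete with a toy sketch (my own illustration, not from the thread; the helper name `sum_pool` is invented): a graph network's sum aggregation is invariant under the symmetric group, i.e., under any reordering of the nodes.

```python
import itertools

def sum_pool(features):
    """Permutation-invariant aggregation: the result ignores
    the order in which node features are listed."""
    return sum(features)

feats = [1.0, 4.0, 2.5]

# Apply every element of the symmetric group (every permutation of the
# node list); a permutation-invariant function yields a single output.
outputs = {sum_pool(list(p)) for p in itertools.permutations(feats)}
assert len(outputs) == 1
```

Swapping `sum` for, say, a weighted sum keyed to position would break the invariance, which is exactly the distinction this vocabulary is built to capture.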
The number of papers is an indication of interest, not impact. Indeed, more people working on the same problem can generate more ideas. But more ideas do not necessarily yield more impactful ideas when they are constrained by groupthink.
What is driving the interest in deep learning is, of course, its phenomenal success, which leads to more funding and more advanced tools. But there are diminishing returns in every field as the low-hanging fruit is picked.
As in any field, the early adopters are rewarded disproportionately more than the latecomers. Unfortunately, it is a human bias to give the pioneers the most recognition.
I think many people have failed to express what the brain does at Marr's computational level. You've got to begin with a hypothesis and then frame your evidence in support of the hypothesis. Anything less is stamp collecting.
What we know of the brain is like what we would know about butterflies if all we did was collect a lot of them. That kind of knowing is not the same as knowing how butterflies grow their wings.
In the context of general intelligence, there is no opposite to the notion of similarity. What we describe as dissimilar is also a kind of similarity.
That's because similarity hinges on a change of reference frame: similarity is what remains the same when the reference frame is changed.
Dissimilarity is what becomes different while other things remain the same. It is contextual and subordinate to the frame changes that determine similarity.
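A toy illustration of that framing (my sketch, not the author's): Euclidean distance behaves like a similarity quantity because it is invariant under a rotation of the reference frame, while the raw coordinates are the frame-dependent, 'dissimilar' part.

```python
import math

def rotate(point, theta):
    """Change of reference frame: rotate a 2-D point by angle theta."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def distance(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (1.0, 2.0), (4.0, 6.0)  # distance is 5.0
for theta in (0.3, 1.0, 2.5):
    # Distance is what 'remains the same' when the frame changes...
    assert abs(distance(rotate(p, theta), rotate(q, theta)) - distance(p, q)) < 1e-9
    # ...while the coordinates themselves (the 'dissimilar' part) do change.
```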
What is so frustrating about the FDA and CDC response to the pandemic is the lack of transparency in their decision-making process. If we are ever to evolve into a world of greater complexity, we need our governing institutions to be more transparent about their complex decision-making processes.
What is preventing governance from creating a decision-making diagram that shows what information was available for every decision and the sequence of steps leading to the final decision?
As we become more dependent on automated decision-making algorithms, we cannot accept a state in which decisions sit in a black box and cannot be explained. If a car crashes, we should not be left without an explanation: we should be able to review the decision process to discover the flaw.
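One minimal way to make such decisions reviewable (a sketch under my own assumptions; `DecisionRecord` is an invented name, not an existing standard) is to log every input and every reasoning step alongside the outcome, so an auditor can replay the trail later.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DecisionRecord:
    """Hypothetical audit entry: what was known at decision time,
    and the sequence of steps that led to the outcome."""
    decision: str
    inputs: dict
    steps: list = field(default_factory=list)

    def log_step(self, description):
        # Append one step in the order it was actually taken.
        self.steps.append(description)

    def to_json(self):
        # A serializable trail that a reviewer could inspect later.
        return json.dumps(asdict(self), indent=2)

record = DecisionRecord(decision="approve",
                        inputs={"sensor_ok": True, "speed_kph": 42})
record.log_step("checked sensor health")
record.log_step("compared speed against threshold")
```

The point is not this particular schema but the discipline: every automated decision carries its available information and its sequence of steps, which is exactly the diagram the thread asks governance to produce.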