AI Economic Meltdown: The Coming Expert Squeeze /π§΅
There is a frenzy today that is seemingly unstoppable -- the process of replacing human workers with AI. On another front, the idea that AI has now reached a level beyond the smartest human beings is promoted by CEOs like Sam Altman, Dario Amodei and Elon Musk. In cases where humans aren't replaced, they're expected to augment themselves with AI models to increase their productivity.
On the opposing side, there are people who speak of the technology as impractical, overhyped or downright dangerous. This thread takes a different angle: I will demonstrate not only that this replacement will become a self-fulfilling prophecy, but also why we are locked into the process (which has become an inescapable Ponzi scheme) and why it will culminate in the destruction of western economies.
In this short thread I'm going to show you why, starting with the economic feedback loops, then the limitations of the technology, and finally human psychological dependency and the incentive process. To bring it all together, I will explain why the entire western economy is now dependent on this hype, and why the alternative is also a collapse of a different kind.
I will start with the main driver of this trend: the economy.
The economic feedback loop
Tech companies and other firms all over the world are in a frenzy to fire as many employees as possible in order to minimise their payroll, keeping investors happy and increasing their stock price even as the real economy around them collapses.
Excluding algorithmic trading, the economy ultimately involves exchange between humans and groupings of humans (i.e. human-run entities). The fewer people you have working, the fewer people you have buying things, and the less money ultimately flows into large corporations, unless the public is forced to subsidise them via government grants.
It also goes the other way around: the lower the demand for people with certain skills, the less money groupings of people will offer, and the fewer people will develop those skills. The incentive is thus a feedback loop: the more successful the companies, the more successful the workers, and the more both grow together.
The promise of AI is to cut this loop open, allowing companies to theoretically lower their payroll to near zero, moving that line item to either data centre costs or the cost of AI models hosted by other companies. This has flow-on effects too: the fewer individual contributors you have, the fewer managers, HR representatives and middle managers you need. Companies are also incentivised to disintermediate and flatten their hierarchies.
With all the hype this seems like a risk-free gamble until you break apart the assumptions and consequences. There are two main assumptions:
1. The cost of using AI models will remain cheap.
2. AI will be able to continuously fulfil the duties of humans in all domains where it replaces or augments them.
I will disprove assumption (1) later in this section and disprove assumption (2) in the next section.
The up-front cost-cutting is already sending many people into unemployment and driving down the consumer economy. Of course, it is never that simple, and the process is rarely linear or even reversible. In taking this gamble, these companies will lose centuries of inherited experience at both the individual-contributor and management levels.
This has already been done before. Entering the late 1970s, the United States had a tight grip on world exports and industries with very few exceptions. This was all off-shored over the next few decades until the US was deindustrialised. Today, the US struggles to produce tanks and artillery shells, as the last few workers that still know how retire and the economic incentive for their replacement disappears.
Software engineers, spreadsheet jockeys and other service-economy workers will soon face the same calculus as industrial workers did then. They will quickly move on, or move out of the United States and other western nations. Ironically, these are the very skills needed to keep data centres running smoothly, to keep AI models fed with data (after all, it is information technology that feeds these data pipelines and creates the data in the first place), and even to develop the AI models themselves. That said, the full effect of this will not be felt for some time.
The worst part is that even if decision makers are fully aware of the gamble, they cannot change the trajectory, because it has become a multi-level Ponzi scheme. The software and hardware companies currently leading the economy, like Microsoft and NVIDIA, are dependent on the hype surrounding AI. If that hype is undermined even a little, as we saw in January 2025 when the open-weight model DeepSeek R1 was released, western economies fall into turmoil.
For the moment, AI is subsidised to a degree most people are unaware of. OpenAI is supported by Microsoft, and operates its models at a loss. Anthropic is likewise supported by Amazon, and operates its expensive (and somewhat slower) models at a loss. It only gets worse for other players like Perplexity, which spends 164% of its revenue on costs.
The gamble is that as companies become dependent on this technology, and humans are replaced, those companies will be able to afford the real cost. It's like a "trial edition" right now. You can confirm this yourself: sign up for Anthropic's Claude Max plan at US$200/month, and watch how easily you can spend US$40 of their money in 5 hours. That should tell you something is seriously wrong.
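A back-of-the-envelope sketch of the subsidy claim, using the thread's own figures (the US$200/month plan and roughly US$40 of compute burned in 5 hours); the 160-hour working month is an assumption for illustration:

```python
# Back-of-the-envelope check of the subsidy claim, using the thread's figures.
# Assumptions: roughly US$40 of underlying compute per 5 hours of heavy use,
# and a hypothetical full-time user at ~160 working hours per month.
plan_price = 200.0             # US$/month for the Max plan
compute_per_hour = 40.0 / 5.0  # ~US$8/hour of estimated compute cost
hours_per_month = 160          # assumed full-time usage

implied_cost = compute_per_hour * hours_per_month
print(f"Implied monthly compute cost: ${implied_cost:.0f}")              # $1280
print(f"Subsidy ratio vs plan price: {implied_cost / plan_price:.1f}x")  # 6.4x
```

Under these assumptions, a heavy full-time user consumes several times what they pay for, which is exactly the "trial edition" dynamic described above.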
What's happening is that both governments and large corporations are speculating on the end game, gambling their bottom lines on a future without expertise.
Worse yet, to make meaningful gains they've had to escalate the kind of hardware used to host these AIs: terabytes of RAM, exotic networking equipment, brand-new architectures, hundreds of billions of dollars in one-time engineering costs to impedance-match currently popular mathematical models that may change dramatically in the near future.
AI is not getting cheaper. On the upper end, where businesses are concerned, it's actually getting more expensive. If you're a gamer you already know this, with the prices of RTX 3090s, 4090s and 5090s going through the roof over time. Moore's law is very much dead, and inflation has caught up with the otherwise deflationary electronics economy.
But... maybe, despite all these trends, it will eventually work? What if they make it cheaper, or cheap enough somehow, and the AI exceeds human abilities even without data? Is that even possible with today's technology? That takes us to the next section and assumption (2).
The limitations of "AI" technology
When people speak of AI today, they normally mean generative AI, or multimodal LLMs. These are a form of classification model, trained largely by supervised learning (with some hybrid techniques in the training stage). To simplify matters greatly, they are like your phone's autocomplete or predictive text on steroids. This is true even of "reasoning models", which use a neat hack called chain-of-thought that was first popularised on image boards, not by AI researchers.
When autocorrect/autocomplete first came out, it probably seemed like magic to most of us!* Somehow the phone could predict several words ahead or exactly what we're thinking. This is because the phones contained models known as Markov chain models, which can give you a probability distribution for the next state given the current state (ironically this can be formulated as a memoryless model, but this goes beyond our current discussion).
The language models essentially build out these Markov chains in two stages: first taking all the data they can find in the world and turning it into probabilities for the next word given the preceding context (the pre-training stage), then retraining these chains to solve specific problems such as answering exam questions, sitting the bar exam and more (the fine-tuning stage).
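A minimal sketch of the Markov-chain idea above: a toy bigram model where the distribution over the next word depends only on the current word. The corpus is made up for illustration; real LLMs condition on far longer contexts, but the counting-into-probabilities principle is the same:

```python
from collections import Counter, defaultdict

# Toy bigram Markov chain: estimate P(next word | current word) by counting.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def next_word_distribution(word):
    """Probability distribution over the word following `word`."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.5}
```

"Generation" is then just repeatedly sampling from these distributions, which is why the autocomplete comparison holds.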
The model performs close to perfectly on the fine-tuned data, but it also "zero-shots" (i.e. gets a question right the very first time it encounters it) many queries, because they can be interpolated from the probabilities baked in during pre-training. In ML parlance this is called generalisation (the fine-tuning isn't overfit), and the more data you have the better the performance -- i.e. the performance is more regular and not just dependent on the training set.
Most companies have now consumed all the data they can find, and things get worse... the internet is being flooded with AI-generated data, and feeding this into the pre-training stage actually spoils the generalisation, making performance worse. So human beings have to sort the data out, or superior models have to be trained to classify data as natural or not.
Speaking of that -- CAN you reliably tell AI-generated data apart from human data? Yes, you can. Because of the way LLMs pick the next word (e.g. top-k sampling), low-probability words are suppressed, resulting in a thinner tail. Across a document, this and other analyses, such as word-pair statistics, allow us to sniff out AI-generated text. Humans, too, can intuitively tell when documents and even programming code are AI-generated.
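A toy sketch of the tail-thinning effect just described, using a made-up softmax distribution over a hypothetical 50,000-word vocabulary (k=40 is an assumed setting, chosen only because it is a commonly cited default):

```python
import numpy as np

# Top-k sampling zeroes out every word outside the k most probable, so an
# AI-generated corpus under-represents rare words ("thinner tail") relative
# to human text, which is drawn from the full distribution.
rng = np.random.default_rng(0)
logits = rng.normal(size=50_000)               # toy 50k-word vocabulary
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary

k = 40
top_idx = np.argsort(probs)[-k:]               # indices of the k largest
top_k = np.zeros_like(probs)
top_k[top_idx] = probs[top_idx]
top_k /= top_k.sum()                           # renormalise the survivors

print(np.count_nonzero(probs), np.count_nonzero(top_k))  # 50000 40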
* [Those old enough may even remember Nokia's T9 technology, which turned numbers into words -- similar in concept but not in execution.]
This tells you the underlying processes are not the same, and indeed, the best way to understand it is to compare the two realities:
Us humans are maximally embedded within the universe, amongst each other, in every moment possible. This is the source of all the data fed to AI, whether by word, drawing, audio or video of the world we built around us. We are machines that accept everything around us, filter it through our teleology and literally create the future in a manner that transcends our individual existence. We accept and create outside interference when we are operating at our maximal effectiveness.
On the other hand, digital circuits and mathematical models operate at maximal effectiveness when outside interference is rejected, when the behaviour is almost perfectly predictable. The only time the outside interference comes into play is after it has already been processed by human beings (or by analog to digital converters) e.g. during the training phase.
To summarise what this all means: humans are better at extrapolation than computer models, while computer models are superior at interpolation in terms of speed and efficiency. An AI model can read through, in seconds, a document that would take a human an hour, and answer queries within the scope of that document.
You can think of this in a different manner: humans are like a music composer, able to produce amazing original tracks, sometimes brand-new styles, but requiring 10,000 hours of experience/training and many hours of dedicated work to produce a gem. Often a human being produces just a few of these gems in a lifetime.
In contrast, an AI is like a remix artist, able to mix styles, mix tracks, create brand new music that just sounds a bit... off... derivative? Somehow, wrong. But often good enough for certain purposes. Never the same as a human created track however!
Nevertheless, even while "remixing" (interpolation), these models will still make mistakes, as the underlying technology is essentially a probabilistic model. Today it's popular to call this effect "hallucination", but it's really just the model going down a chain, making one mistake and continuing to build on it. Humans do this too, all the time in fact!
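A quick illustration of how one wrong turn compounds down the chain, under the simplifying (and assumed) model that each token is independently correct with probability p:

```python
# If each generated token is "safe" with independent probability p, the
# chance an n-token answer contains no wrong turn decays exponentially.
# p = 0.999 is an assumed figure for illustration, not a measured rate.
p_token = 0.999
for n in (100, 1000, 5000):
    print(f"{n:>5} tokens: P(no mistake) = {p_token ** n:.3f}")
```

Even at 99.9% per-token reliability, a 1,000-token answer comes out mistake-free only about 37% of the time, and one bad pick early on is then confidently built upon.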
When faced with missing data (extrapolation), the LLM still has to fill a word in. If it doesn't recognise that it does not have an answer... generative AI will often make it up, and stick to it confidently. So, no matter what, a human being has to ultimately monitor the data coming out of these machines.
But what the hype machine misses is this: human beings also have to generate the data that feeds these ML models. And here, an unexpected problem happens: People have begun to prefer to chat with LLMs rather than interact with each other online. AI companies harvest this chat data, but it's often garbage because the human using it is rarely checking the output of the ML, so all they get are queries.
This creates a bottleneck on enhancement of the model: no matter how much model capacity and recall are improved, it will never catch up with unfolding reality. Or an equally bad outcome can happen: reality will fold in on itself and disconnect humans from their embedding in it. A future where humans generate and consume their data without sharing it is one where the fat tail disappears, and the models, along with reality, collapse in on themselves.
On that note, while data scientists worry about data contamination perhaps what they should really worry about is data extinction.
This doesn't just affect popular use of AI but also business use, such as using it to substitute junior engineers. You might think of software engineers as "coders", but 90% of their time is spent in meetings and reading each other's code, often arguing for weeks over minute details that an AI wouldn't spend milliseconds on.
As with the folding of reality onto itself, the hidden cost to using AI is only revealed after you take it for granted!
One software engineer I know likened AI to the high interest rate credit card of technical debt. You can slam it on the table and get your way, bring forward amazing performance boosts in the short term... but without the knowledge base of human beings, the moment the complexity exceeds the model's ability to comprehend or extrapolate, it's game over.
You'll have to either start from scratch, or spend even more engineering time on comprehending code written by a machine that cannot attend meetings or answer questions in a reliable manner.
Engineers themselves are quickly catching on to this today, but the situation is worse for organisations making strategic decisions about AI. Firing entire complexes of engineers throws away irreplaceable know-how and expertise. The engineers are unlikely to return to that side of the industry, and if they do, rarely with the same degree of loyalty and camaraderie needed to fully embody human beings within an organisation.
Indeed, it's worse than it appears on the surface, as human beings in organisations employ transactive memory, where knowledge or expertise is distributed between people (or, as you'll see in the next section, things). This is where managers shine: teaming people up to create a synergy that isn't possible with either person working alone. As AI is pushed to isolate people and make them "work more efficiently" with less interdependency, this synergy will quickly dissipate when the high interest rate of the AI credit card rears its ugly head.
In summary of this section, AI represents a trap that creates a structural collapse of organisations when misused. In the next section, we will go one level down: the cognitive collapse that those who misuse AI will face.
The Ultimate Trap: AI-Borne Cognitive Collapse
Let me take a step back and discuss transactive memory a bit more closely. This is an important concept in psychology concerning how our brains connect to and depend on other brains. Initially the focus was on husbands and wives and the manner in which memory and skills were shared between them, but the concept was later generalised to organisations.
You already have a notion of this if you grew up with search engines, or have witnessed people with a great dependency on them. Those with poor social decorum will often interrupt themselves or another person to deploy their cell phone, enter a search query, bring up a page and then resume the conversation with the results entering the conversation. More annoying to me is when they bring up a video instead.
This is the most embodied demonstration of transactive memory, where the person didn't bother to remember anything on the page, due to its availability. What's the point right? Instead of absorbing everything on the page, they absorb the feeling or notion they got out of reading it, turning it into a single memory:
The vague search query they used to bring up that experience!
Then it becomes a forward chain in their memory:
Cell phone -> Query -> Result (and experience)
So the moment your conversation interacts with anything in that experience, their brain (which works heavily on association) turns the result into the reverse sequence: pull out your phone, enter the query and resume the conversation.
They turn what should be an absorbed understanding into a key, with the smartphone and the internet locking away the result. So rather than enriching themselves with experiences, they become walking key masters to a personality and experience they don't own and haven't earnt.
Even before AI, software engineers relied on search engines, code repositories and places like Stack Overflow in the same manner, but these were still curated by human beings.
Got a problem? See if someone has solved it on sh*thub, never "reinvent the wheel". Don't worry about bringing in 2 million dependencies, the work has to be done efficiently.
Still stuck? Look it up on Stack Overflow.
Even more stuck? Pull out your debugger and start trying to understand the code and problem you're dealing with.
Can't get anywhere? Darn, now you have to do the horrid act of speaking to another human being, a more senior engineer perhaps. π³
Yet even when you do speak to another human being, there's a good chance they'll be interacting with each of your keys, their transactive memory slightly less degraded than yours owing to their longer experience. You won't come to them with the problem, but with other people's solutions that you've interpolated/remixed into your own.
The inexperienced, aggressive or junior engineer goes through this cycle rather than speaking to a human being first: sketching out the problem on a whiteboard and only looking up solutions afterwards; understanding the problem entirely, looking at the fat tail, and seeing ahead to the issues they'll face later.
AI takes this degraded cognitive process and gives it a steroidal boost: the human writing the code, the senior engineer helping out, even the one debugging it, all disappear. AI in this manner is used to augment the missing knowledge and skill of an inexperienced engineer. Indeed, with things like Codex or Claude Code, the entire repository maintainer can disappear too. If the hype were true, we could say software engineering is now a solved problem: just put any inexperienced prompt engineer in front of a code-repository-trained LLM, and it'll do all the work end-to-end.
Until... it inevitably gets stuck itself, and believe me, it does. Remember what I said about extrapolation? This happens even more in engineering problems. Where does one go then?
The transactive memory of the junior engineer no longer links with the senior, and indeed with all the firings going on and bias towards augmented use of AI, perhaps those engineers no longer exist in an organisation.
Suddenly, the high interest rate starts knocking on the door, and what should have been a cheap problem will soon become a very expensive problem requiring organisational attention. But by that time, it's just key masters staring at each other, waiting for someone to pull out the answer from a system that does not contain it, for the right answer is contained between people.
I'm using software engineers as my example of experts here, but you can substitute any profession: law, civil engineering, you name it. I'm also speaking mainly of juniors... but can this rot the cognition of senior experts as well? Yes it can, perhaps more so in specific augmented use cases.
In the naive model of human organisations, senior experts dispense knowledge to juniors, who absorb it and learn. Reality is different. Seniors learn from juniors, sometimes even more than juniors learn from seniors, because seniors can see more from less. The juniors act as lower stratum nodes in a super-neural network, perhaps the sensory parts. Without this interaction, the complex synergy that powers innovation and excellence disappears.
Any experienced person will tell you: it's impossible to learn without teaching. AI will sever this teaching component, and the learning one too, for as senior experts augment away the need for junior interaction, they will come to rely on AI as their juniors and stop learning about the innards of technologies and knowledge they see as not worth their time.
For example, UI toolkits for software engineers, or case law details for lawyers. Without knowing the details, they don't understand the problems to look out for, the interactions between parts, or the potential improvements.
Zooming out, something huge disappears. Between 6 people in an organisation, there are 15 possible connections -- 15 transactive memories. Between one person and an AI, this collapses to just 1.
Across an organisation? You can do the math (N(N-1)/2), albeit the networks are never fully connected. What will happen as this cognitive collapse takes hold of organisations and people are trapped within AI augmentation or automation?
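The arithmetic above in code form, for the counts used in the text (the 50-person figure is an arbitrary illustration):

```python
# Possible pairwise links between n people: n(n-1)/2.
def connections(n: int) -> int:
    return n * (n - 1) // 2

print(connections(6))   # 15 transactive links in a six-person team
print(connections(50))  # 1225 potential links across a 50-person organisation
```

Each person-to-AI relationship, by contrast, is a single link, so the possible network shrinks quadratically as AI substitutes for colleagues.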
We don't need to look far to see an example. Remember the remix analogy? The full embodiment of human beings in reality? Look at movies made in the 80s, for example the 1987 hit, RoboCop. In this movie, a dead man is brought back to life as a cyborg and augmented with machines, trapped within the control of a larger organisation. The plot of the story isn't the point here; it's the quality of the movie, how human it was in the end, how well it was received -- and how it embedded itself in reality.
It wasn't merely a movie. It quickly became a part of our reality, that we interacted with as though it was real. This is why art imitates life, and life imitates art. It's part of a loop, between symbolism and their physical realisation, like DNA, RNA and protein in our bodies.
Recently, reality itself has degraded as our society collapsed and our complex structures became vulgar, disconnected and shallow. As a result, the reality that inspires the creation of movies folded in on itself, resulting in garbage productions. Studios were desperate for fat-tailed content, so rather than producing garbage originals based on our garbage reality, they went back and started remaking movies.
In 2014, RoboCop was remade and it was hot garbage, because it was embedded in our reality which by then had already degraded beyond the point where human interaction and the novelty of the plot could be appreciated.
This is much like engineers looking things up on sh*thub, changing a few lines and trying to ram through a solution made by someone else, for another problem, in order to solve their own. In fact this is exactly how interpolation works, and the underlying mechanism that generative AI uses.
It cannot embed itself in reality.
It cannot form networks.
It can only create remakes.
Today, most of the original writers and directors are gone, or fired. Worse yet, the social reality that used to inspire them to write stories has long disappeared, with newer generations escaping reality into video games, p*rnography, hookup culture and ever more social isolation.
AI will merely bring this to organisations, and at a very rapid rate, creating an expert squeeze just like the loss of movie writers. The know-how, personnel and connections will all disappear at a rate most people cannot yet appreciate.
Locked In: The End
In Baudrillard's terminology, AI represents a fatal strategy: bringing efficiency at the cost of understanding, at increasingly rapid rates, subsidised to the point where the success of the economy depends on its success. The perfect Ponzi scheme, representing the very ecstasy of communication.
There's no escape either. As we saw with the trillion-dollar loss over a few days due to competition from China, western economies are completely dependent on the success of this AI-augmented economy. All the largest stocks deal in AI. You literally cannot start a VC-funded company today without mentioning AI in your business plan.
Therein, though, lies the hope. Some people will see this coming as the collapse becomes impossible to ignore. Have you noticed how many people have returned to vinyl records? They don't quite know it, but they do this in order to place the transactive memory of music they love into objects they can hold. They know this object in their hand (a vinyl record or its cover) will produce music they love. Then the deliberate (albeit inefficient) action of setting the record in place and playing the track brings them the music they enjoy.
You can see this also in the people railing against Nintendo for trying to phase out cartridges with a digital-key counterfeit. People don't want a key master; they want the actual digital information contained on a cartridge, the ability to disconnect from the internet forever and play the games they paid for, independent of Nintendo's future plans.
What does that have to do with AI? Everything. All fatal strategies eventually burn themselves out, producing a new reality we cannot anticipate yet.
This will happen across the board, not just in organisations or the economy. Chat bots ARE superior to most people you'll meet online. LLMs ARE superior to most junior engineers. The economic decisions made today cannot be undone because everyone is locked into this path.
And in the meantime, the problem itself will create appreciation for what is missing: boundaries, cells and human interaction, the structural basis of our true existence and embedding within reality, where the inefficient is truly beautiful.
/End
β’ β’ β’
Missing some Tweet in this thread? You can try to
force a refresh
Well, well, @AnthropicAI pulled the rug on all of its users.
It introduced Sonnet 4.5 under the pretence that it was better than Opus 4.1. The benchmarks were all cooked; Opus 4.1 is still superior to Sonnet 4.5.
Yet they used this as an excuse to lower usage limits on Opus!
@AnthropicAI If you subscribe to their non-API plan they're not even transparent about how much usage you're getting.
They got people hooked on this, and now they're raising the effective price by 10x as layoffs continue. This is the expert squeeze happening live.
@AnthropicAI Zero accountability from the so-called government who is meant to regulate this sort of scam.
We will be contacting the @acccgovau over this rug pull. What a load of sh*t, @AnthropicAI. You sell people onto Max x20, you announce an inferior LLM, then reduce their usage by 10x?
There was never a "chosen people" if the context is God.
You're likely thinking of Satan (Yahweh) and the "divine council" where Elohim (plural) got to divide up humanity and Yahweh got assigned the most evil bloodline in the world.
(it's in the Torah lol, several places too)
The funniest thing about arguing with Torah believers is using their own material against them.
The real purity is in the gospel and nothing else but the true words of Jesus Christ our only saviour.
Just wait until you find out what Deutoronomy says Moses's last words were (people were complaining about Yahweh's treatment towards them so he was like, look this was the Elohim assigned to us... don't blame me, then Yahweh killed him. He had just killed his brother)
White Christian values explained #1
Why you should never speak ill of the dead. /π§΅
Have you ever punched your fist in the air when you're really energetic or angry? Notice how it hurt your muscles almost the same way as making contact with something? Sometimes more, even though it's an empty punch? Every action has a reaction, and of course, your muscles, bones and joints will ultimately have to absorb the energy you used to throw your fist outwards.
When something comes out of you, especially when you attack someone, unless you are fundamentally broken, it's the same deal. If you attack someone and they don't fight back, a normal empathic person would back away or even try to make it up to the person attacked. This is part of why turning your other cheek, to a brother, is the most powerful answer to someone who has wronged you.
[Note: When I speak of "people" here I mean specifically white people. I don't believe neurology, physiology and spiritual essence is universal. In this work I hope to make us more relatable to those who do not understand why we do and say certain things.]
Those who keep attacking after the other side backs away or doesn't respond, are fundamentally broken. Their empathic unit is gone. Without empathy you will not be able to relate to people around you, or even understand yourself. It's an isolated hell that I don't want to even imagine. You can be surrounded by the entire world, but you will always feel alone, even on your interior.
This is why people flee a guilty conscience, often why even murderers turn themselves in or leave clues hoping to get caught. They want that part back after realizing what they have lost. Often, they will even yearn for punishment, feeling that in making a penance perhaps their soul will be redeemed.
Some even take their own lives over this, it is that strong of a force, much like an open punch, when you commit a crime that cannot be reversed, the force of your bloodied hand will ultimately come towards yourself -- inwards.
This is why you'll often see us get enraged when someone harms a small or helpless animal. This is a transference of empathy and a detection of a dangerous person. You see, these animals cannot respond back to whatever we do to them. It's much like an empty punch. None of us could forgive ourselves if we harmed them.
So when we see someone actively harming such animals, that have largely entrusted their safety unto us, we do get enraged for them. We become that inwards force, socially.
It's not "weird", it's actually perfectly predictable given how we think at a fundamental level: a person without empathy is a danger to the rest of us. They must be removed from society for our safety. They'd have no qualms about running us over or murdering us later.
In our experience, quite often, those who harm animals end up turning into sociopaths, serial murderer and worse. I've been to enough third world locations to see the lack of empathy and respect shown towards animals and for me, it is the main litmus test for the true value of a people.
The Apocalypse: Discarding Enlightenment's Veil /π§΅
The most dangerous form of deception is inception. The type you are not even aware of and take for granted so deeply that it forms the very lens in which you see the entire world. An idea planted so deeply in everything you read and think about that it becomes like a mind parasite that consumes the energy of every thought and formation of intuition. Worse yet, is the denial of any possible mediation as a priori -- that is, the complete disembodiment of being, at multiple levels.
I suppose it's important that I start with a concrete example, something that is taken for granted so deeply by the majority of thinkers that it may appear insane to even question such an unassailable statement:
"I think, therefore I am." (Cogito, ergo sum) - Descartes
This statement is easy to digest for most people, but as I hope to make you see by the end of this thread, flips reality on its head. Reading this statement as a living person it contains true statements on both sides.
You're surely "thinking" while reading this statement.
You surely "are" while your vision (or hearing/feeling if blind) traces over each word.
True, therefore true?...
So it almost appears, as one reads it, a tautology. That isn't quite what Rene Descartes meant, he was going for something more abstract -- that you think at all means you are, independent of anything else. In fact, in rejection of everything possible, the ultimate atomised individuality.
At a higher level, what this statement says is the following: epistemology conditionally proves ontology. That is, the metaphysical is now conditional on its rational formulation. This is a sharp departure from Platonic forms, which exist in a transcendental realm independent of so-called rational thought. In a sense, Descartes in his "Meditations" captured everything in reality (including God, lol) and trapped it within his oddly shaped head.
You see, the "cogito" statement itself is far bolder and less personal than it appears to the reader. The "I" is very much Descartes, or anyone who chooses to deploy his theory. This formed what is termed Cartesian dualism: a "non-physical", indivisible mind ("res cogitans") and a completely mechanical, physical body ("res extensa"). The former's existence, in the observer's mind, proves the existence of the latter -- and this is later extended to everything.
It is this theory we will initially dissect, destroy and later invert entirely, arriving at a result that will surely surprise most people. One thing I want to point out is that if you are like me, you've probably never really even questioned this duality of mind and body -- it has likely been drilled into you since birth. Perhaps the more religious of you refer to this mind as a soul? Did you know that there was an alternative that Descartes successfully killed for most people? Not only that, this alternative philosophy was so old and established that it is undateable. Older than Platonic forms. Yet, today, most have never heard of it despite it once reigning supreme. We'll return to this later in the thread, and with all we have learnt about reality with our recent findings and instruments, I believe it will form a true revelation. An apocalypse, in the proper revealing sense of the word rather than the popular sense of physical destruction.
Now returning to the question of the so-called enlightenment, which I believe is the reverse of said apocalypse: Descartes formed the philosophical pillar of the enlightenment. Almost every single modern philosopher you have heard of since him has taken his dualism for granted, even as they defeated every other part of his formulation. This includes figures like:
Spinoza (who even adopted the framework from a monist perspective), Leibniz, Locke, Hume, Kant, Hegel, Schopenhauer, Marx. Any philosophy, ethics or science built on their formulations will be irreparably tainted by the flaws we will be attacking in this thread. All of today's dead society is built upon this idea, even your online social interactions!
Interestingly, Nietzsche rejected the framework altogether and even predicted that someone would write this very thread you are reading, yet still came to the same awful, wrong conclusion in his own way. Actually, the only* person to get things right was David Bohm, because he removed the veil himself in his book "Wholeness and the Implicate Order" (1980) -- but he didn't synthesise the conclusion, because he was a very gentle, cautious and serious man who didn't live to see the vindication of his theories. Unlike Descartes, Bohm was not only a philosopher but also a physicist: a man who wielded both ontology and epistemology.
For reasons related to this, we are going to be forced to examine not only Descartes in this thread, but also the scientific pillar of the enlightenment. This other pillar was formed by the Englishman Francis Bacon. Bacon worked in almost the opposite direction to Descartes, but towards the same goal.
For Bacon, reality is as follows: "It has been tested empirically, so I can build on it." Seems reasonable, right? In fact, you might say it's even harder to argue against this than Descartes! We will quickly see how flawed empirical evaluation is, but there is one sleight of hand you could miss in adopting both of these. Bacon restricts the scientist from thinking about philosophy, and Descartes restricts the philosopher from using their senses, and thus from conducting science. A double severance at either end. Only through their institutions can knowledge be gained. This was the beginning of compartmentalisation.
Meanwhile, both of them rejected everything that came before them -- Bacon going as far as using Abrahamist/Yahwehist symbolism of smashing four different kinds of "idols"; Descartes invoking a demiurge-like demon which steals his senses and ability to measure.
When taken together, their "enlightenment" leaves us in a collective amnesia and creates a dual: a scientist and a philosopher, one restricted to measurements, and another trapped in his thoughts forever. Neither has any connection to the "idols" of the past, smashed to bits by these two curious men.
Descartes promised a utopia, as did Bacon in "New Atlantis", should their formula be followed. Bacon proposed "torturing" nature through empiricism until it spilled the beans about the truth. Descartes, on the other hand, declared the primacy of human consciousness, denying it to animals and deeming them mere machines or automata. This essentially severed our connection to nature, beyond just the past. Today, as humans are deemed "animals" too, we find ourselves under the same kind of harmful assertion.
I will not stop at these two though, I will take you all the way to today in 2025. All the way to quantum mechanics, LLMs, discoveries about space, time, nature and beyond. The "enlightenment" was damaging, in fact, it gave birth to disastrous revolutions, dehumanisation, savage wars, absolute nihilism, destruction of faith for many and desolation in the form of the loneliness epidemic. You will quickly come to understand that these two men, or more accurately, those behind these two men, intended this exact outcome.
But that's not what this thread is about. This thread is all about us achieving what they never could: the unveiling of reality. This cannot be done by myself alone! In fact, that's the entire point of this thread: as you will see, the unveiling is something that has to be done by the entire world, and it is precisely at this moment that it becomes not only possible but inevitable. The unveiling is communication about this, about destroying each boundary and veil they have put up for us. We finally have the tools and knowledge to do it.
If I do not write this thread, someone else shortly will! The hard work has already been done, arguably thousands of years ago, by ancient philosophers, by Jesus Christ, by early Christians, by Bohm, and by many experimentalists and technologists today. For the latter though, this is not the singularity that the transhumanists wanted. Instead, this will be a restorative one that will most definitely awaken everyone from their collective amnesia once a critical threshold is crossed! The antithesis is our current dying society; there will be no synthesis in the Hegelian fashion. We will absolutely smash what they built and transcend it.
Are you ready oomfies? If so sit back, relax, and enjoy this thread about philosophy, science and teleology -- purpose.
* I will note that Heidegger, Whitehead and Merleau-Ponty rejected the Cartesian framework and worked beyond it. They came very, very close to connecting epistemology (the how), ontology (the what) and teleology (the why) together, but missed some key results due to their lifetime window. If they were alive today, they would have been writing this thread instead of your Baka! May they rest in peace. ❤️
Let us first immerse ourselves in Renaissance Europe to fully appreciate the scientific, philosophical, religious and ultimately political context which resulted in the veil over our eyes today.
We begin in 1440, when the German inventor Johannes Gutenberg invented the printing press. This device allowed new and old ideas to be propagated at great speed, enabling advancements in technology, literacy and even artistic pursuits. It was the social media of its time, but like every tool that can do good, it could also cause great harm!
In the early 1500s, Martin Luther used the printing press to succeed where his predecessors had failed, launching the Protestant Reformation and sinking Europe into chaos. The Catholic Church's normal methods of dealing with such "heretics" did not work, and in the aftermath the Church became far more defensive and inflexible towards any challenge to its authority.
In the meantime, literacy rates across Europe skyrocketed, creating the perfect conditions for advancements in technology -- and the need for reading glasses. This meant lens-making techniques had to advance rapidly, creating a demand for optics books, which the printing press readily supplied. The two technologies had a synergistic economy, each feeding demand for the other. One particular town in the Netherlands, Middelburg in the province of Zeeland, became the centre of excellence for lens making. In 1608, this resulted in the invention of the telescope, which would finally put some cosmic assertions to the test.
Two years later, Galileo would use this telescope to make a discovery that would change the course of history, despite it being a very minor one in retrospect. To understand why, we have to take a little step back from technology and science, and step into the world of philosophy and theology.
The Catholic Church's authority was coming under challenge, and its teleology, mediated through the papacy, was being undermined. The Church at the time favoured Aquinas's scholasticism, a more complex and purposeful method than the Hegelian dialectic most people today would be familiar with.
At the time, Europe had largely adopted an Aristotelian metaphysical view, after much of his work was transmitted to the continent via the Moors of Spain. Aquinas developed the scholastic method, by which opposing viewpoints could be reconciled, usually to reinforce the Catholic Church's scriptural viewpoints without contradicting the trends of the time. This became exceedingly difficult as more observations of the cosmos and nature became known. Yet the Church still preferred this gradual approach, which protected teleology while ontology and epistemology flourished -- fulfilling the Church's guardianship role.
One particularly troublesome conflict was the 2nd century AD Ptolemaic model of our Solar system, inspired by Aristotle. In this Ptolemaic model, all the planets, the stars and our Sun (Sol) orbited the Earth -- with a twist! The planets went through epicycles along their orbits, compensating for their apparent retrograde motion (in reality an artefact of the Earth's own motion around the Sun).
It only takes a moment of consideration to find an issue with this model: due to Mercury's and Venus's closer proximity to the Sun, their epicycles would have to overlap each other's and the Moon's. This means we should see "phase cycles" that the Ptolemaic model could not account for -- and these were only observable with a telescope! The Ptolemaic model, which established the Earth as the centre of the universe with everything else orbiting it, couldn't be right. Well before the telescope's invention, Copernicus had already worked this out, doing away with the epicycles altogether.
This was not accepted, however, and without any contradictory observations, people held on to existing views. Why would the Church even care about this? As the Catholic Church (and almost every other mainstream sect) maintains that all its canon scripture was divinely inspired, these particular verses would create an obvious contradiction:
Psalm 104:5 - "He set the earth on its foundations; it can never be moved."
Psalm 93:1 - "The world is firmly established; it cannot be moved."
Psalm 96:10 - "Say among the nations, 'The Lord reigns.' The world is firmly established, it cannot be moved."
1 Chronicles 16:30: "Tremble before him, all the earth! The world is firmly established; it cannot be moved."
In 1610, Galileo made the inevitable observation of multiple phases of Venus, and quietly shared his empirical work in 1611. Jesuit astronomers took notice and readily replicated the result. Cardinal Robert Bellarmine requested a formal opinion from the Collegio Romano mathematicians. They came up with the perfect solution that accommodated the Church's requirements: adopting the Tychonic model, nearly identical to an even older 4th century BC model by Heraclides*, in which the Sun, Moon and stars orbit the Earth, and the planets orbit the Sun.
Galileo published his Venus observations in 1611 but did not advocate for any particular view. He was someone who had even taught the geocentric model, and it took him until 1613 to accept his own observations and begin to advocate for the heliocentric model. He wrote a letter to the Italian mathematician Benedetto Castelli in which he made a very bold statement: that his empirical results took primacy over scripture -- and that the latter should only be a matter of faith.
Just three years later, the Church declared the heliocentric model, and Copernicus, heretical. Galileo was formally warned by the Church, but continued to advocate for the model in private and to conduct research in this direction. In 1632, he published a work, "Dialogue Concerning the Two Chief World Systems", which he set up as a scholastic argument weighing the geocentric model against the heliocentric one.
In the book, dialogue took place between three fictional characters:
Salviati (representing Galileo)
Simplicio (the name resembling sempliciotto, which means 'simpleton')
And an observer, Sagredo.
In the book, Salviati made amazing arguments while Simplicio struggled. More corrosively, the Pope (Urban VIII) had made almost identical arguments to those placed in Simplicio's mouth. To maintain plausible deniability, Sagredo would declare no winner at the end of the book.
The Pope, who had so far been quite lenient towards Galileo, felt rightfully betrayed by this mockery and straw-manning.
Just six months later, Pope Urban VIII placed the book on a ban list, ordering a halt to its distribution. The printing presses complied, and even Protestants did not like Galileo's actions. He was placed under house arrest shortly afterwards, where he conducted some research on inertia which we will revisit later in the thread.
Galileo was what you would call a scientist today, concerned with empirical observations more so than scriptural interpretation. He, like any nerd today, failed to read the room and paid the price for it. He didn't appreciate the complicated political, theological and societal considerations that the church had to balance. So he got cancelled for his trouble.
A far more politically adept, albeit destructively secretive, observer -- Rene Descartes -- correctly read the room. He had been writing a book (Le Monde) advocating heliocentrism, but immediately ceased work on it in light of Galileo's arrest. As this represented many years of his work, it surely left a very bitter taste in his mouth. After Galileo's arrest, Descartes came to see the scholastic method, along with theology in general, as a barrier to what he thought of as progress -- a barrier that needed to be torn down, along with the history and perhaps even the deity behind it.
Descartes was Jesuit-educated and somewhat politically adept, but extremely hard to collaborate with and not a particularly good mathematician. Despite the fanfare over his various contributions, other than one polynomial curve, none of them were particularly new -- he just understood the correct way to publicise himself. As you will soon see, Descartes was indeed more of an influencer than either a mathematician or a philosopher. In his exchanges with Fermat, he often misrepresented Fermat's work and stubbornly held onto his own inferior, and sometimes incorrect, assertions.
Despite his lack of scholarly skills, his work would soon leave its mark on the world -- creating nightmares beyond our imagination while inducing amnesia upon the Earth.
Next up, let's dive into his "philosophical" works, being careful not to make the mistake of engaging with his theatre -- and, as you will soon discover, the dark ritual he imposed on his readers.
* Heraclides had Mercury and Venus orbit the Sun, the Sun orbit the Earth, and everything else orbit the Earth with epicycles.
Though philosophy was popular during the Renaissance, it is not among the most popular degrees today. In the US, out of 2 million graduates per year, only 8,000 study philosophy, and only half of those take it as their sole major. That's only 0.4% of all graduating students! Despite this, philosophy degrees and focused coursework are over-represented in the following roles:
- Senior Intelligence Analysts/Leaders (>4%)
- CEOs (2%)
- Lawyers, as a double major (9%)
- Clergy (8%)
Among names you may be familiar with who studied philosophy: Bill Clinton, Emmanuel Macron, Pierre Trudeau, Peter Thiel, Carl Icahn the "activist investor", John O. Brennan (CIA director), William J. Casey (CIA Director), Richard Moore (MI6 director). The Oxford PPE course, which has a core philosophy component, also produced numerous leaders such as David Cameron, Liz Truss, Rishi Sunak, Bob Hawke***, Malcolm Fraser and Tony Abbott to name a few.
In addition to these individuals, the institutionalisation of philosophy for internal government roles is very significant. Georgetown's School of Foreign Service and other Jesuit institutions are practically a direct pipeline to the CIA and other intelligence or governmental organisations. All base their coursework on philosophies borne out of one man: Rene Descartes.*
Yet this is merely the direct influence of the man. His book Meditations on First Philosophy, published in 1641, influenced the course of the Renaissance itself, almost all philosophical work after him and, later in the 20th century, the very root of physics. The flow-on effects of his philosophy touch almost every single aspect of modern life. Through science, policy and even entertainment, the psychology of those who adopted it will naturally affect your life as well.
Yet, except for these philosophy students, hardly anyone is familiar with the work that inspired these developments. Even among those students -- who typically receive Descartes as a four-week component of their first-year coursework -- little more is gained than the absorption of today's interpretation of his work. They rarely look at the history of the man, the context in which he wrote, his true character or the ancient influences on his work. They mostly know him for Cartesian geometry, without understanding what it is or what his true contribution to it was.
You probably know where I'm going with this: we are going to fix this severe deficiency in this thread, and become familiar with the man, his work, his true capabilities, what influenced him and the true state of his psychology. What will we gain? The gain is twofold:
- By understanding the man who created the philosophy that major leaders adopt without question, we understand them at a deeper level than they possibly understand themselves.
- By understanding the man AND his philosophy, we can begin to unravel it -- not merely to build upon it, but to completely strip it of its mystery. Then we are going to deconstruct it using ancient epistemology and essentially invert it using modern ontological observations. Basically, we are going to tear it to bits.
Taken together, this makes the leaders of this world far more predictable to us: for example, their collective desire for transhumanism (especially Peter Thiel's) will soon appear not only childish but also rather boring. It will also help us understand the two-pronged attack on knowledge, observational methodologies and ultimately teleology launched by the masters of the "enlightenment". I dare say we will not be able to do this without knowing the man himself, so we shall begin there.
Rene Descartes was born in 1596 in La Haye en Touraine, France. His father, Joachim Descartes, was an influential member of parliament. Unfortunately for Rene, he was born into two tragedies. His mother passed away soon after his birth, leaving him spiritually alone and without the irreplaceable love of a mother that defines the very being of a man. To make matters worse, he inherited her persistent cough and frail constitution, making him very prone to illness. Indeed, his doctors predicted he would never make it to adulthood. This left Descartes bedridden; he would often sleep 10-12 hours a day, meaning his dreams or sleeping state constituted half of his life.
Descartes's father loved his son and called him "his philosopher" because of his inquisitive nature, so he arranged for him to receive the best education in Europe at the Jesuit college of La Flèche. There he was given exceptional treatment, with private tutoring, and was allowed to attend courses at noon rather than at 5 AM as was expected of the students around him. He would rarely associate with people or build friendships because of this. Over time this developed into a desire to remain hidden (almost anonymous), adopting the motto Bene qui latuit, bene vixit -- "he who hid well, lived well".
Descartes didn't stay in France for long, travelling throughout Europe. In 1618, while studying to join the army of the Prince of Orange in the Netherlands as a mathematician, he met a man who would change his life forever: a Calvinist named Isaac Beeckman. Beeckman had a deep disregard for Aristotelian philosophy and metaphysics, and advocated for a mechanical physics and atomism. In some regards he was far ahead of his time, and we would only discover this in the 20th century. Unfortunately, he is such an obscure figure in history that not even a portrait of him exists today.
The two men met near a large placard in the Breda marketplace, both attracted by a detailed mathematical problem. They hit it off immediately, and Beeckman's influence on Descartes was dramatic. Descartes was a Catholic, well versed in the philosophy of Aquinas that merged Christian theology with Aristotelian philosophy. Under Beeckman's influence, he came to see Aristotle's metaphysics, which was based on ancient ontology, as an impediment to thought itself.
In the meantime, Beeckman encouraged Descartes to solve and publish problems, resulting in his first work, Compendium Musicae (1619). This kicked off his reputation as both a mathematician and a philosopher. Descartes would later add that a series of three dreams were, to him, a revelation from God to go down this path...
Unfortunately for Descartes, in 1624 the French parliament made contradicting Aristotle punishable by death. This was a complete desecration of Aristotle's spirit, as he believed in philosophical debate and criticism, rational argument over authority, and the combined power of epistemology and ontology through dialectical discourse. Without a counterparty that is free to speak, there is no discourse, and no need for Aristotle's ingenious formalism of language. Feeling intellectually suffocated, Descartes left France for good in 1628 and headed to the Netherlands.
There he began working on the mathematical physics that Beeckman had pushed him towards. He would particularly focus on problems in optics -- as it happens, his mentor Beeckman was from Middelburg, the same town that specialised in this field, and this would surely have influenced Descartes.
During this period of history, geometry was solved using geometric methods -- Euclidean in nature and often cumbersome without lifelong dedication to the field. Algebra, on the other hand, was used to solve complicated relationships symbolically. Combining the two would be a boon: optimistically, there would be no need to imagine curves -- solutions would drop out of equations. Descartes developed this for years, resulting in his second most significant book: the combined philosophy-math-physics Discourse on the Method, published in 1637, where his analytical geometry and optics solutions appeared as appendices.
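To see what "solutions drop out of equations" means in practice, here is a minimal modern sketch (the circle and line are my own hypothetical example, not one of Descartes'): a geometric question -- where does a line cross a circle? -- reduces by simple substitution to solving a quadratic, with no drawing required.

```python
# Analytic geometry in miniature: intersect the circle x^2 + y^2 = r^2
# with the line y = m*x + c by substitution, reducing the geometric
# question to a quadratic in x. Example values are illustrative only.
import math

def circle_line_intersections(r, m, c):
    """Return the intersection points of x^2 + y^2 = r^2 and y = m*x + c."""
    # Substituting the line into the circle:
    # (1 + m^2) x^2 + 2*m*c x + (c^2 - r^2) = 0
    A, B, C = 1 + m * m, 2 * m * c, c * c - r * r
    disc = B * B - 4 * A * C
    if disc < 0:
        return []  # the line misses the circle entirely
    xs = [(-B + s * math.sqrt(disc)) / (2 * A) for s in (+1, -1)]
    return [(x, m * x + c) for x in xs]

# Circle of radius 5 and the line y = x + 1:
print(circle_line_intersections(5, 1, 1))  # [(3.0, 4.0), (-4.0, -3.0)]
```

The curve never has to be drawn: the algebra alone tells us whether the line meets the circle (sign of the discriminant) and exactly where.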
For Descartes, these were merely demonstrations of the power of his method: a complete dismantlement of Aristotle's philosophy and the scholastic method. There was trouble for Descartes, though, as Fermat had priority, having released Methodus ad Disquirendam Maximam et Minimam et de Tangentibus Linearum Curvarum in 1636 with a superior methodology for combining algebra with geometry. In fact, what we now know as Cartesian geometry more closely resembles Fermat's method than Descartes'.
To rub salt in the wound, Fermat had already developed this by 1629, as he could prove by correspondence; he just had not published it. Curiously, this well-known piece of history is left out of most biographies of Descartes read by students. This is likely because Descartes has to be seen to have demonstrated some novel application of his methods.
In any case, Descartes was furious! Fermat, to him, was a mere lawyer for whom maths was just a hobby, and yet he was an absolute giant compared to Descartes -- and here he was, taking away the most significant development of Descartes's life. Descartes sent Fermat a challenge so that the two could put their geometrical methods to the test: the folium of Descartes. He suggested that Fermat try to find the tangent line of x³ + y³ = 3axy at any point of the curve.
Embarrassingly, not only did Fermat do this with ease using his method, Descartes suffered two compounded failures: he failed to solve his own problem, and he could not properly imagine the full, leaf-like shape of the curve, whereas Fermat could. This was an absolute scandal: the 'father of analytical geometry' could not properly draw the curve of his own equation. Descartes would send letters defaming Fermat behind his back, and the arguments that arose were quite heated. This wasn't abnormal for Descartes!
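As a modern aside (this is implicit differentiation, a later technique -- not Fermat's own tangent construction, and the sample point is my own choice): differentiating x³ + y³ = 3axy gives 3x² + 3y²y′ = 3a(y + xy′), so the tangent slope at any point is y′ = (ay − x²)/(y² − ax). The challenge that stumped Descartes now fits in a few lines:

```python
# Hypothetical modern sketch: tangent slope of the folium x^3 + y^3 = 3axy
# via implicit differentiation (not the historical method of either man).

def folium_tangent_slope(x, y, a=1.0):
    """y' = (a*y - x^2) / (y^2 - a*x), from differentiating x^3 + y^3 = 3axy."""
    return (a * y - x * x) / (y * y - a * x)

# The folium admits the rational parametrisation x = 3at/(1+t^3),
# y = 3at^2/(1+t^3), which hands us a convenient test point (a=1, t=2).
a, t = 1.0, 2.0
x, y = 3 * a * t / (1 + t ** 3), 3 * a * t ** 2 / (1 + t ** 3)

assert abs(x ** 3 + y ** 3 - 3 * a * x * y) < 1e-12  # (2/3, 4/3) lies on the curve
print(folium_tangent_slope(x, y, a))  # ≈ 0.8, i.e. slope 4/5 at (2/3, 4/3)
```

The parametrisation also makes the leaf shape easy to trace: sweeping t from 0 upwards walks around the loop that Descartes could not visualise.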
Many years earlier, he had burnt bridges with Beeckman over a similar need to own an idea. Beeckman was proud of Descartes, calling him his old student, and Descartes interpreted this as an attempt to steal his work. In fact, he hurt Beeckman so badly that their friendship never recovered. Increasingly isolated, and without a truly significant result to his name, Descartes communicated that he had begun to worry about his place in history. His Method book, against the backdrop of these events, would not get much attention -- not yet, anyway.
A few years earlier, he had had an illegitimate child with his personal maid: a daughter, Francine Descartes. Heartbreakingly, she inherited his frail constitution and died of scarlet fever at the age of 5. Descartes deeply loved his daughter, and upon her death was filled with so much sorrow that he couldn't stop crying. It was said that he would hold a coffin-like box while he slept every night, with other rumours (likely untrue) that he built an android-like mechanical replacement of her. One thing is certain: this was the worst moment of Descartes's life, and it would scar him as a father in ways that would never heal.
The same man who lived in virtual anonymity and held back Le Monde after seeing what had happened to Galileo suddenly took a different direction. In the darkest hour of his soul, he released a different kind of book -- one that would change the course of history, and perhaps one that could very well be the cause of our extinction should we not properly study it: Meditations, 1641. This book was a perversion of the philosophies of the time, wrapped around a dark ritual and many old ideas.
We will discuss this book, and his methods, in great detail in the next section. For now, it's important to note that Meditations caused Descartes to be declared an atheist by many Calvinists in the Protestant-dominated Netherlands. He was condemned by the university of Utrecht, the town where he was staying, leading him to flee to The Hague. He remained in the Netherlands until 1649 under the protection of its prince.
He then headed to Sweden to teach Queen Christina. She did not like him very much, and forced him to get up at 5 AM to teach her. His frail body could not handle this; his illnesses finally caught up with him, and he died of pneumonia in 1650**.
Before continuing, let us pray for his soul and for God to forgive him for his mistakes. Descartes was a truly tragic, isolated figure, not dissimilar to your oomfie in many of his qualities, methods and isolation.
May you rest in peace, Rene Descartes. 🌹🙏🏻