“There is no wave function...” This claim by Jacob Barandes sounds outlandish, but allow me to justify it with a blend of intuition regarding physics and rigor regarding math. We'll dispel some quantum woo myths along the way. (1/13)
Most people think of quantum mechanics as being about wave functions. What if Ψ isn't fundamental? What if it's just a mathematical convenience? What happens to the associated devices like Hilbert spaces and state vectors? To Jacob, sure, they're useful, but they're not "real."
(I understand that you should technically be dealing with the endomorphisms of Hilbert spaces rather than direct members of them, but this is something unnecessary to get into currently.) (2/13)
There are five axioms of quantum mechanics. I've spelled them out here with both their math and their meaning in case you're interested: curtjaimungal.substack.com/p/the-interpre… (3/13)
Instead, Jacob suggests you start with a more general notion: indivisible stochastic processes. "Indivisible" means atomic (in the sense that you can't break it down further), and "stochastic" is the mathematician's word for random. "Processes" means something that starts off in some state and gets transformed into something else. Extremely general. (4/13)
The difference with these ISPs is that, unlike a coin flip or a Markov chain, their probabilities can't be divided into smaller time steps. Also, they don't monitor your shady internet usage, you disgusting pig. The laws are indivisible. They don't tell you what happens from moment to moment but over finite chunks of time. (5/13)
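This non-divisibility isn't mystical; you can see it in a two-line computation. A minimal numerical sketch (my own, not from Jacob's paper verbatim), assuming the dictionary Γ_ij(t) = |Θ_ij(t)|² that comes up later in the thread, for a qubit whose unitary is Θ(t) = cos(t) I − i sin(t) X:

```python
import numpy as np

# For Theta(t) = cos(t) I - i sin(t) X (a qubit rotating under Pauli-X),
# the transition probabilities Gamma_ij(t) = |Theta_ij(t)|^2 are:
def gamma(t):
    c2, s2 = np.cos(t) ** 2, np.sin(t) ** 2
    return np.array([[c2, s2],
                     [s2, c2]])

t = 0.3
# A divisible (Markovian) process would satisfy Gamma(2t) = Gamma(t) @ Gamma(t).
print(gamma(2 * t))          # what the laws actually say about [0, 2t]
print(gamma(t) @ gamma(t))   # what chaining two half-steps would predict
# The two matrices disagree: the law over [0, 2t] cannot be built by
# composing laws over the two halves. That's indivisibility.
```

The mismatch is the whole point: there simply is no valid transition matrix for the half-interval that composes to give the full-interval one.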
You're likely thinking "Curt, this in some way sounds like QM, but also it sounds so vague. Where the heck is superposition? The interference? The complex numbers?" What Jacob found in his 2023 paper (arxiv.org/pdf/2302.10778) is that it turns out there's a mathematical correspondence. (6/13)
You can take any of these indivisible stochastic systems and represent them in a Hilbert space. The Hilbert space is a representation in this model, not something fundamental! ("representations" in math mean something specific so I do have to be careful saying this)
The probabilities become mod-squares of complex amplitudes, just like in vanilla QM. Superposition in the Hilbert space picture is actually just a classical probability distribution over configurations in the indivisible stochastic picture. No cats that are both alive and dead. There's just a system that will be in one state or another (with certain probabilities). (7/13)
Now for measurement, it's not some undefined collapse caused by large systems or conscious observers. All it is is an interaction that creates a new "division event."
I specifically asked Jacob if a division event was just pushing the measurement problem back a step (youtu.be/7oWip00iXbo), and he said that when a "measuring device" interacts with a system under observation, their combined evolution is still governed by indivisible stochastic laws. What we perceive as "collapse" is the probabilistic evolution of the composite system into a configuration where the measuring device displays a definite outcome.
Importantly (and interestingly), this is NOT instantaneous. However, in practice, it does happen considerably quickly for macroscopic devices. (8/13)
The probabilities for each outcome are determined by the underlying stochastic dynamics, and they precisely match the predictions of the Born rule, given by p_i(t) = tr(P_i ρ(t)), where p_i(t) is the probability of outcome i at time t, P_i is the projection operator for that outcome, and ρ(t) is the density matrix.
By the way, those of you who don't deal with math often may mistake the ρ for a p. This is a dangerous but forgivable mistake. It's okay. I feel you. (9/13)
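The trace formula is easy to sanity-check numerically. A minimal sketch (my own toy example) for a single qubit in a pure state, showing that tr(P_i ρ) reproduces the familiar mod-squared amplitude:

```python
import numpy as np

# Pure state |psi> = (|0> + i|1>)/sqrt(2), density matrix rho = |psi><psi|
psi = np.array([1, 1j]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Projector onto outcome 0: P_0 = |0><0|
P0 = np.array([[1, 0], [0, 0]], dtype=complex)

# Born rule in trace form: p_i = tr(P_i rho)
p0 = np.trace(P0 @ rho).real
print(p0)  # 0.5, matching the mod-squared amplitude |<0|psi>|^2
```

For pure states the two formulas are literally the same number; the trace form just extends cleanly to mixed states.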
Decoherence is often invoked to explain the emergence of classicality, but Jacob has a different interpretation of it. When a system interacts with a large environment, the indivisible stochastic process describing their joint evolution leads to a rapid suppression of interference terms in the Hilbert space representation.
This is because the environment effectively carries away information about the system's configuration, making it practically impossible to observe interference effects.
Mathematically, this is captured by the decay of off-diagonal elements in the density matrix when expressed in the configuration basis. The density matrix evolves as ρ(t) = Θ(t) ρ(0) Θ†(t), where Θ(t) is the time-evolution operator and ρ(0) is the initial density matrix. This suppression arises directly from the indivisible stochastic dynamics and doesn't require any new postulates or interpretations. (10/13)
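You can watch the off-diagonals die in a toy model. A minimal sketch (mine, not Jacob's), assuming the crudest possible environment: each environmental record imprints a random relative phase on the system, and we average over those records:

```python
import numpy as np

rng = np.random.default_rng(0)

# Equal superposition: every entry of the density matrix starts at 1/2.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())

# Each environment "record" imprints a random relative phase on the system;
# tracing out the environment amounts to averaging over those phases.
def dephase(rho, n_env=10_000):
    acc = np.zeros_like(rho)
    for _ in range(n_env):
        phi = rng.uniform(0, 2 * np.pi)
        U = np.diag([1, np.exp(1j * phi)])
        acc += U @ rho @ U.conj().T
    return acc / n_env

rho = dephase(rho0)
print(np.round(rho, 3))
# The diagonal (the classical probabilities) survives untouched; the
# off-diagonal interference terms average to ~0.
```

The diagonal entries are exactly preserved by each phase kick, while the off-diagonals pick up a factor e^{iφ} whose average vanishes; that's the "environment carries away information" story in its simplest form.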
Recall (and you can take a look at my write-up mentioned in my third tweet above) that in the standard formulation, the Born rule states that the probability of a measurement outcome is given by the squared magnitude of the corresponding amplitude. It's simply postulated. Here, though, it's induced as a consequence of the correspondence between indivisible stochastic processes and their Hilbert space representations. The probabilities in the stochastic picture, which are fundamental, are mapped to the mod-squares of amplitudes in the Hilbert space picture, so p(i, t | j, 0) = Γ_ij(t) = |Θ_ij(t)|². (11/13)
Furthermore, in the stochastic picture, probabilities are real and non-negative, as they should be. However, the mapping to the Hilbert space actually introduces complex amplitudes. This can be understood as a consequence of representing indivisible dynamics in a divisible formalism.
The complex phases, which are responsible for interference effects, encode the MEMORY of the indivisible process in the divisible language of the Hilbert space.
This is the key that no one else saw.
These are not "Markovian dynamics" but "non-Markovian." Mathematicians love to confuse you with non-descriptive terminology, but the translation is that Markovian can be read as "memory-less," which means non-Markovian is non-memoryless, thus memory-full. We get transition probabilities given by the equation Γ_ij(t) = tr(Θ†(t) P_i Θ(t) P_j), where Θ(t) is the time-evolution operator, and P_i, P_j are projection operators. (12/13)
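The two formulas in this thread, Γ_ij(t) = tr(Θ†(t) P_i Θ(t) P_j) and Γ_ij(t) = |Θ_ij(t)|², are the same thing when the P's project onto the configuration basis. A quick numerical check (my own, using a random unitary as a stand-in for Θ(t)):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random unitary Theta via QR decomposition of a complex Gaussian matrix.
n = 3
Theta, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Projectors onto the configuration basis: P_i = |i><i|
def P(i):
    proj = np.zeros((n, n), dtype=complex)
    proj[i, i] = 1
    return proj

# Gamma_ij = tr(Theta^dagger P_i Theta P_j)
Gamma = np.array([[np.trace(Theta.conj().T @ P(i) @ Theta @ P(j)).real
                   for j in range(n)] for i in range(n)])

# The trace formula collapses to the mod-squared amplitudes ...
assert np.allclose(Gamma, np.abs(Theta) ** 2)
# ... and every column sums to 1: Gamma is a genuine stochastic matrix.
assert np.allclose(Gamma.sum(axis=0), 1)
print("trace formula matches |Theta_ij|^2")
```

Working it out by hand: tr(Θ† |i⟩⟨i| Θ |j⟩⟨j|) = ⟨j|Θ†|i⟩⟨i|Θ|j⟩ = |Θ_ij|², which is exactly the mod-square of the Born rule.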
Jacob's approach is interesting to me because of its implications for quantum field theory and (specifically) the standard model, and thus for unifying the SM with GR (a so-called "Theory of Everything").
I'll be speaking with him in just a day or two, so if you have any questions, watch his interview here: youtu.be/7oWip00iXbo and let me know in follow-up tweets. (13/13)
What if the universe isn’t actually made of points, waves, fields, particles, or even Lagrangian submanifolds, but instead… natural transformations? The Yoneda Lemma, a theorem that identifies elements u ∈ F(A) with transformations Φᵤ: hₐ → F, makes this view mathematically concrete. The proof of the Yoneda Lemma takes perhaps three lines (hence why it’s a “lemma” despite its weight). 👇🧵 (1/22)
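For reference, here's the bijection the lemma asserts, with the transformation Φᵤ spelled out (this is the standard statement, just written in my notation):

```latex
% Yoneda lemma: for a functor F : C^op -> Set and an object A of C,
% natural transformations out of the representable h_A = Hom(-, A)
% are in bijection with elements of F(A):
\[
  \mathrm{Nat}(h_A, F) \;\cong\; F(A),
  \qquad
  \Phi \;\mapsto\; \Phi_A(\mathrm{id}_A),
  \qquad
  u \;\mapsto\; \Phi_u \ \text{with}\ (\Phi_u)_B(f) = F(f)(u).
\]
% The "three-line proof": naturality applied to any f : B -> A forces
% \Phi_B(f) = F(f)(\Phi_A(\mathrm{id}_A)), so the whole transformation
% is determined by chasing id_A.
```

That one naturality square is the entire proof, which is why "chasing idₐ" is the slogan.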
Its implications, however, reach algebraic geometry, representation theory, and even parts of theoretical physics. If you actually want to understand Tannaka duality, Isbell conjugation, or Grothendieck’s schemes, then you don’t get terribly far without Yoneda. But how can chasing idₐ be this impactful? The “Yoneda perspective,” that objects are their relations, is forced on you by the math. (2/22)
The full faithfulness of the Yoneda embedding Y: 𝒞 → [𝒞ᵒᵖ, Set] is that perspective, formalized. Faithfulness sounds godly, but it’s a technical term meaning injectivity-on-morphisms, which just means different maps stay visibly different, which just means no confusion allowed. People argue about nonduality vs. duality, but in category theory, duality isn’t optional. Why? Because it’s not exactly about “things” (per se), nor is it about “relationships” (per se).
It’s actually about the parallels between things and their relationships. (3/22)
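The "objects are their relations" slogan has a concrete baby case you can run. A toy sketch (my own illustration of the slogan, not the general lemma): in the divisibility preorder on {1, …, N}, there's an arrow m → n exactly when m divides n, so the analogue of h_n = Hom(−, n) is the set of divisors of n, and that set pins n down completely:

```python
# "Yoneda perspective" in the divisibility preorder on {1, ..., N}:
# an arrow m -> n exists exactly when m divides n, so the stand-in for
# h_n = Hom(-, n) is the divisor set of n.
N = 100

def h(n):
    """The 'incoming arrows' of n: everything in {1..N} that divides it."""
    return frozenset(m for m in range(1, N + 1) if n % m == 0)

objects = list(range(1, N + 1))
# No two distinct objects share the same divisor set:
assert len({h(n) for n in objects}) == len(objects)
# In fact n is recoverable as the largest thing mapping into it:
assert all(max(h(n)) == n for n in objects)
print("objects are determined by their relations")
```

Knowing everything that maps into n is knowing n itself; the Yoneda embedding says this in full generality, with morphisms tracked and not just existence of arrows.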
You've probably been seeing viral videos saying particles take "all possible paths." It's based on an extreme misunderstanding of path integrals. Let's disambiguate what the path integral is about and why Feynman's tool isn't a literal map of reality. Firstly, we need to stop saying electrons "go through both slits." That's not what quantum mechanics (even textbook QM) says. 👇🧵 (1/20)
It's a hangover from misinterpreting wave functions in 3D space when they really live in something else called "configuration space." It confuses a calculational trick with physical ontology. Time for some rigor: $\int \mathcal{D}\phi \, e^{iS[\phi]/\hbar}$. People love to say that particles explore every possible path simultaneously—even going back in time or to the moon. Firstly, what is this word “possible”? Possible isn't a physics word. Do you mean every continuous path in R^4? Every once-differentiable path in R^3? What is it? (2/20)
And saying “possible” just makes you stop and think… “OK, if quantum mechanically you can tunnel, and plenty else that I thought wasn't possible is actually possible, then how informative is saying that the particle takes all possible paths? What is the rigorous domain of 'possible'? Furthermore, we're already including classically impossible paths, like ones that go backward in time or aren't differentiable. So why can't you go through the blocked parts of the slits? Why is that not possible?” (3/20)
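There IS a precise answer to "which paths," and it's worth writing down. The standard time-sliced definition of the propagator (here for a single particle in a potential V; the field version swaps x for φ):

```latex
% Time-sliced definition of the propagator: fix the endpoints, chop
% [t_a, t_b] into N steps of size \varepsilon, integrate over the
% intermediate positions, and take N -> infinity.
\[
  K(x_b, t_b; x_a, t_a)
  = \lim_{N \to \infty}
    \left( \frac{m}{2\pi i \hbar \varepsilon} \right)^{N/2}
    \int \prod_{k=1}^{N-1} dx_k \,
    \exp\!\left( \frac{i}{\hbar}
      \sum_{k=0}^{N-1} \varepsilon
      \left[ \frac{m}{2}
        \left( \frac{x_{k+1} - x_k}{\varepsilon} \right)^{\!2}
        - V(x_k) \right] \right).
\]
% The "paths" are piecewise-linear interpolations of sampled points;
% in the limit the measure concentrates on continuous,
% nowhere-differentiable paths, not on an ontology of trajectories.
```

So the rigorous domain isn't "everything possible"; it's whatever survives this limiting procedure, and those typical paths are continuous but nowhere differentiable, i.e., nothing a classical particle could follow.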
“All the towering materialism which dominates the modern mind rests ultimately upon one assumption; a false assumption. It is supposed that if a thing goes on repeating itself it is probably dead; a piece of clockwork. 👇🧵 (1/15)
People feel that if the universe was personal it would vary; if the sun were alive it would dance. This is a fallacy even in relation to known fact. (2/15)
For the variation in human affairs is generally brought into them, not by life, but by death; by the dying down or breaking off of their strength or desire. A man varies his movements because of some slight element of failure or fatigue. (3/15)
Think you know what energy is? You probably don’t. That’s okay. Einstein probably didn’t either, at least not in the context of his own masterpiece, General Relativity. Forget the pop-sci soundbites you hear from people like NDT. Energy is NOT simply “mass in motion” or “mass because E=mc²” or even the neat “conserved currency of our universe.” These definitions (to the degree they’re definitions) don’t hold up in dynamically curved spacetime. 👇🧵 (1/20)
Most likely, your GR instructor glossed over energy, perhaps mumbled something about “pseudo-tensors” under their breath, then quickly changed the subject. Why the rush? Why the evasion on such a supposedly fundamental concept? The full, honest treatment is extremely messy, deeply controversial, and fundamentally unresolved even after a century. Einstein himself wrestled with it, and the compromises he made are still debated today. Let’s talk about that mess. (2/20)
The heart of the problem is that general relativity has two foundational pillars: general covariance (physics laws don’t depend on coordinates) and the equivalence principle (gravity = local acceleration). In flat spacetime, energy-momentum conservation is neat: $\partial_\mu T^{\mu\nu} = 0$, where $T^{\mu\nu}$ is the stress-energy tensor of matter. GR *looks* similar: $\nabla_\mu T^{\mu\nu} = 0$. (3/20)
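The "looks similar" is exactly where the trouble hides. Expanding the covariant divergence (a standard identity for a symmetric tensor) shows why the GR equation is a local balance statement rather than a genuine conservation law:

```latex
% Expanding the covariant divergence of the (symmetric) stress-energy
% tensor in terms of ordinary derivatives:
\[
  \nabla_\mu T^{\mu\nu}
  = \frac{1}{\sqrt{-g}}\,
    \partial_\mu\!\left( \sqrt{-g}\, T^{\mu\nu} \right)
    + \Gamma^{\nu}_{\mu\lambda}\, T^{\mu\lambda}
  = 0 .
\]
% The Gamma term acts as a source/sink: matter exchanges
% energy-momentum with the gravitational field itself, so integrating
% T^{0\nu} over a volume no longer gives a time-independent quantity.
```

In flat spacetime the Γ term vanishes and Gauss's theorem hands you a conserved charge; in curved spacetime it doesn't, and that leftover term is the doorway to the whole pseudo-tensor mess.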
Philosophers who use Gödel's incompleteness theorem to make claims about "fundamental limits of human knowledge" have made a category error. It's about axiomatization, not epistemology. 1/
Firstly, epistemology is just fancy technical jargon for "what and how we can know." So, knowledge. Questions in the field of epistemology are questions that deal with the nature, sources, and limits of knowledge. The "nature" of this knowledge even includes defining what knowledge is (see Gettier's problem), but this is beside the point. 2/
On the other hand, Gödel's (first) incompleteness theorem concerns what can be proven *within* a "formal" system. These words are important. "Formal" has a specific meaning, and "within" also has a specific meaning. The claims of Gödel's theorem are of a completely different domain than claims in epistemology. The confusion between them has led to some quite wild misinterpretations. 3/
“Entropy is geometry. And geometry is entropy.” This is a new finding by Gabriele Carcassi, and I'll explain the reasoning below, along with the math. Don't worry, I'll hold your hand (metaphorically, of course, unless you're into that). (1/19)
Firstly, note that entropy is NOT a measure of disorder… It's a way you count states. From this, we'll see that geometry *is* entropy, and entropy *is* geometry. Let's start with classical mechanics... (2/19)
Why do we use phase space (position q and momentum p)? To Gabriele, it's because phase space counts configurations. The key is something called the *symplectic form*, ω. This is what I described in a previous Tweet (). (3/19)
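The "phase space counts configurations" idea has a classic concrete face: Hamiltonian flow preserves the area measured by ω, so the count of states in a region never changes (Liouville's theorem). A minimal sketch (my own toy, a harmonic oscillator with m = ω = 1, whose flow is just rotation in the (q, p) plane):

```python
import numpy as np

# Harmonic oscillator flow (m = omega = 1): dq/dt = p, dp/dt = -q,
# i.e. rigid rotation in phase space.
def flow(q, p, t):
    return (q * np.cos(t) + p * np.sin(t),
            p * np.cos(t) - q * np.sin(t))

# Shoelace formula: signed area of a polygon in the (q, p) plane,
# i.e. the integral of the symplectic form omega = dq ^ dp over it.
def area(q, p):
    return 0.5 * np.sum(q * np.roll(p, -1) - np.roll(q, -1) * p)

# A blob of states: the unit circle, discretized into a polygon.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
q, p = np.cos(theta), np.sin(theta)

a0 = area(q, p)
q1, p1 = flow(q, p, t=1.7)
a1 = area(q1, p1)
print(a0, a1)  # the omega-area of the blob is unchanged by the flow
```

The blob can stretch and shear under more complicated Hamiltonians, but its ω-area, and hence the number of states it "contains," stays fixed; that invariant counting is the bridge to entropy.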