Computer scientists often study how difficult certain problems are to solve. But a subset of them are fascinated by a more complex question: How hard is it to determine how hard those problems are?
This field is known as meta-complexity. 🧵
Meta-complexity focuses on the differences in difficulty between certain types of computational problems. For example, finding a Hamiltonian path (a path that visits every vertex of a graph exactly once) becomes exponentially harder as graphs grow larger.
Eulerian paths, which traverse every edge of a graph exactly once, also get harder to find as graphs grow larger, but at a rate far slower than their Hamiltonian cousins.
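The gap is concrete: whether an Eulerian path exists can be read off from vertex degrees in linear time, while for Hamiltonian paths nothing fundamentally better than exponential search is known. A minimal sketch (the adjacency-set representation and the omitted connectivity check are simplifications, not from the thread):

```python
from itertools import permutations

def has_eulerian_path(adj):
    """An Eulerian path exists iff the graph is connected (ignoring
    isolated vertices) and exactly 0 or 2 vertices have odd degree.
    The connectivity check is omitted for brevity; it is also linear time."""
    odd = sum(1 for v in adj if len(adj[v]) % 2 == 1)
    return odd in (0, 2)

def has_hamiltonian_path(adj):
    """Brute force: try every ordering of the vertices.
    This takes factorial time; no polynomial-time algorithm is known."""
    return any(
        all(b in adj[a] for a, b in zip(order, order[1:]))
        for order in permutations(adj)
    )

# A square with one diagonal: two odd-degree vertices, so an Eulerian
# path exists, and 0-1-2-3 is a Hamiltonian path.
g = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
```

On four vertices the brute force finishes instantly, but it examines n! orderings, which is why Hamiltonian instances blow up as graphs grow while Eulerian ones stay easy.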
The difference between Eulerian and Hamiltonian paths illustrates a foundational question in computer science: Why are some problems so much harder to solve than others? And is there a hidden algorithm that could solve all of these hard problems?
Complexity researchers discovered they could reformulate computational problems in the language of circuit complexity, with circuits representing Boolean functions. They hoped to use this approach to prove that some easy-to-check problems are genuinely hard to solve.
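To make the reformulation concrete: a circuit computes a Boolean function using AND, OR and NOT gates, and a problem's hardness becomes the minimum number of gates any such circuit needs. A toy evaluator (the gate encoding here is illustrative, not from the thread):

```python
# Gates as tiny functions on bits (0 or 1).
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def xor_circuit(x, y):
    """XOR expressed as a 5-gate circuit: OR(AND(x, NOT y), AND(NOT x, y)).
    Circuit complexity asks: what is the fewest gates that compute
    a given Boolean function?"""
    return OR(AND(x, NOT(y)), AND(NOT(x), y))

# Verify the circuit against XOR's truth table on every input pair.
table = {(x, y): xor_circuit(x, y) for x in (0, 1) for y in (0, 1)}
```

Proving that some function needs a very large circuit would separate easy-to-check from hard-to-solve problems, which is the hope the thread describes.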
Attempts to prove hardness ran up against the natural proofs barrier, a result by Alexander Razborov and Steven Rudich that essentially meant that mathematicians wouldn’t succeed if they used techniques they already had.
To some meta-complexity researchers, however, the natural proofs barrier looked like an opportunity. They found that a certain kind of unclassified algorithm could be used to construct a powerful “learning” algorithm that can spot patterns in seemingly unrelated numbers.
The potential of powerful learning algorithms made researchers take notice, and there have recently been several new breakthroughs in the field that come closer to classifying certain problems that have mystified scientists for years, extending a timeline that dates back to 1921.
To learn more about the field of meta-complexity and its breakthroughs, read @benbenbrubaker’s feature for Quanta: quantamagazine.org/complexity-the…
Albert Einstein upended our view of the universe by merging space and time into a single dynamic fabric. Now, many physicists are coming to their own radical realization: The fabric of space-time seems to emerge from something else. 🧵
What does it mean for nature’s fundamental building blocks to exist outside of space and time?
How could such entities ever be described?
And why do physicists think it’s true?
Today, Quanta is wrangling with these questions in “The Unraveling of Space-Time,” a collection of articles, graphics, explainers, interviews and a video exploring physicists’ quest for the fundamental nature of reality.
The idea that space-time emerges from something more fundamental stems in part from the difficulty of reconciling the two pillars of modern physics: general relativity and quantum mechanics.
Our new documentary explains why these ideas don’t mesh well and how emergent space-time, well, emerges from the mess.
New brain imaging research, published today in @Nature, suggests that psilocybin, the active compound in “magic mushrooms,” generates psychedelic experience by disrupting the brain's default mode network. But what is the default mode network? 🧵 nature.com/articles/s4158…
It was first characterized in 2001 when the neurologist Marcus Raichle studied people’s brain activity while they weren’t doing anything in particular — they let their minds wander. He dubbed the active areas the “default mode” of the brain.
The default mode network consists of regions across the brain that interact to bring about effects that they can only produce together. These regions are associated with several functions including memory, experience replay, prediction, action consideration, reward/punishment and information integration.
Proteins do it all. Hemoglobin ferries oxygen around the body. Keratin structures hair, nails and skin. Insulin helps cells convert glucose into energy.
The fold of a protein is critical to its function. Yet no one really knows specifically how protein folding happens.
1/15
It’s the early 1960s. Biologists are growing proteins into crystals, bombarding them with X-rays and measuring how the rays bend — a technique known as X-ray crystallography.
From there, they create ball-and-stick models. Every finished model represents years of work.
2/15
As protein science expands, problems arise. Experimentalists work precisely but slowly; computationalists work quickly but are too removed from biophysical realities.
John Moult sets out to find a way to bring the best of both approaches together.
Today, we published our annual list of the year’s biggest discoveries in computer science. Here are a few of the stories that we included 🧵
Why is it hard to understand what makes hard problems hard? The mission to quantify practical solvability defines a field of computer science called meta-complexity. This year, Quanta dove into this puzzle in an extended article and short documentary. quantamagazine.org/complexity-the…
In the physical world, large enough groups of individuals often display surprising “emergent behaviors.” This year, computer scientists saw it happen with large language models. quantamagazine.org/the-unpredicta…
Yesterday we published an article with the headline “Physicists Create a Wormhole Using a Quantum Computer.” It described the efforts by a team of physicists led by Maria Spiropulu of Caltech to implement a “wormhole teleportation protocol” on a quantum computer. (1/10)
As the article details, the work is an experimental exploration of “holographic duality,” which for two decades has been the dominant research thrust in efforts to reconcile quantum mechanics and general relativity. (2/10)
The wormhole, in this case, is the holographic dual to a quantum protocol run on Google’s chip. At publication, our social media channels published language that read “BREAKING: Physicists have built a wormhole and successfully sent information from one end to the other.” (3/10)
2022 is almost here. Did this year feel like it went by quickly for you, or slowly?
Will you use a clock, a slowly descending mirrorball or three entangled atoms to know when it’s the new year? This is a good time for a thread about time: 🧵⏰
Over the millennia, our clever methods for timekeeping have included sundials, pendulums and strontium lattice clocks. Here’s how inventions to measure time have paralleled efforts to understand what time actually is:
Time flies when you’re having fun, and it grinds to a halt when you’re not. Recent research bolsters a theory about why that happens. quantamagazine.org/how-the-brain-…