Computer scientists often study how difficult certain problems are to solve. But a subset of them are fascinated by a more complex question: How hard is it to determine how hard those problems are?
This field is known as meta-complexity. 🧵
Meta-complexity focuses on the differences in difficulty between certain types of computational problems. For example, finding Hamiltonian paths (paths that visit every vertex of a graph exactly once) becomes exponentially harder as graphs grow larger.
Eulerian paths, which traverse every edge of a graph exactly once, also become harder to find as graphs get larger, but at a much slower rate than their Hamiltonian cousins.
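To make the gap concrete, here's a minimal Python sketch (our own illustration, not from the thread; the example graph is made up). The Eulerian check needs little more than counting vertex degrees, while the Hamiltonian check tries every possible ordering of the vertices:

```python
from itertools import permutations

def has_eulerian_path(graph):
    """A connected graph has an Eulerian path exactly when it has
    zero or two odd-degree vertices -- a near-instant check."""
    odd = sum(1 for nbrs in graph.values() if len(nbrs) % 2 == 1)
    return odd in (0, 2)

def has_hamiltonian_path(graph):
    """Brute force: try every ordering of the vertices. The number
    of orderings explodes factorially as the graph grows."""
    return any(
        all(b in graph[a] for a, b in zip(path, path[1:]))
        for path in permutations(graph)
    )

# A 4-cycle with one diagonal (undirected, adjacency sets):
g = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
print(has_eulerian_path(g), has_hamiltonian_path(g))  # True True
```

The degree count scales gracefully as graphs grow; the permutation search does not, which is exactly the gap the thread describes.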
The difference between Eulerian and Hamiltonian paths illustrates a foundational question in computer science: Why should some problems be so much harder to solve than others? And is there a hidden algorithm that would solve all of these difficult conundrums?
Complexity researchers discovered they could recast computational problems in the language of circuit complexity, where Boolean circuits built from simple logic gates represent the functions to be computed. They hoped to use this strategy to prove that some easy-to-check problems were hard to solve.
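As a rough illustration of what it means for a circuit to represent a Boolean function, here's a hypothetical toy example (the gate set and the XOR construction are our own choices, not anything from the article):

```python
# A tiny Boolean circuit: XOR built from NOT, AND and OR gates.
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def xor_circuit(x, y):
    # XOR(x, y) = (x AND NOT y) OR (NOT x AND y) -- five gates in all.
    return OR(AND(x, NOT(y)), AND(NOT(x), y))

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", xor_circuit(x, y))
# Circuit complexity asks: how few gates can any circuit use
# to compute a given Boolean function?
```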
Attempts to prove hardness ran up against the “natural proofs” barrier, a result by Alexander Razborov and Steven Rudich that essentially showed mathematicians wouldn’t succeed using the techniques they already had.
To some meta-complexity researchers, however, the natural proofs barrier looked like an opportunity. They found that a certain kind of not-yet-classified algorithm could be repurposed to construct a powerful “learning” algorithm, one that can spot patterns in seemingly unrelated numbers.
The potential of powerful learning algorithms made researchers take notice. Several recent breakthroughs in the field have come closer to classifying problems that have mystified scientists for years, extending a line of research that dates back to 1921.
To learn more about the field of meta-complexity and its breakthroughs, read @benbenbrubaker’s feature for Quanta: quantamagazine.org/complexity-the…
The mathematician John Conway didn’t fit into a box.
🧵
Conway’s first love was geometry. In 1966, he discovered the symmetries of the Leech lattice, a 24-dimensional structure with lots of applications. Mathematicians later proved that it gave the densest possible way to pack spheres in 24-dimensional space — each sphere touches 196,560 others.
2/13
Despite expecting the problem to take a year, Conway finished in only 24 hours. The collection of symmetries he discovered is now known as the Conway group.
In chaotic systems, the smallest fluctuations get amplified. As scientist Edward Lorenz put it in the 1960s and 70s, even a seagull flapping its wings might eventually make a big difference to the weather. Here's how scientists came to understand what chaos is, and how to wrangle it:
🧵
350 BC: Even the ancients knew that small changes can have large and seemingly unpredictable effects. In On The Heavens, Aristotle wrote, “The least initial deviation from the truth is multiplied later a thousandfold.”
2/16
1890: While investigating how a system of three orbiting bodies will evolve over time (now known as the three-body problem), Henri Poincaré discovered that the smallest variation in their initial state will lead to enormous discrepancies later on. He suggested that this sensitive dependence on initial conditions might be widespread in nature.
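Here's a minimal sketch of that sensitive dependence, using the standard logistic map as a stand-in for Poincaré's three-body system (the parameter and starting values are our own choices):

```python
# Minimal sketch of sensitive dependence on initial conditions,
# using the logistic map x -> r*x*(1-x) in its chaotic regime.
r = 4.0
x, y = 0.4, 0.4 + 1e-10   # two starting points differing by one ten-billionth

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step}: separation = {abs(x - y):.3e}")
# The microscopic initial gap roughly doubles each step, until the
# two trajectories bear no resemblance to each other.
```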
Cosmic forces interact with our bodies every day, sometimes in surprising ways.
🧵
The universe is ruled by four forces:
1. The electromagnetic force, which guides the behavior of light and charged particles
2. The weak force, which rules radioactive decay
3. The strong force, which holds nuclei together
4. Gravity, which causes objects to attract
2/12
When you get a sunburn, it’s because of the electromagnetic force. Invisible ultraviolet light from the sun has enough energy to break molecular bonds and damage skin cells. The body responds with inflammation, and you feel the burn.
Andrew Barto and Richard Sutton have won the A.M. Turing Award for developing the theoretical foundations of reinforcement learning, a key method behind many major breakthroughs in artificial intelligence. 🧵
In reinforcement learning, AI systems are trained to accomplish tasks using “reward” signals that steer them toward useful actions. In the 1980s, Barto and Sutton devised mathematical techniques that made this basic idea applicable to many different problems. 2/6 buff.ly/mLfJu3M
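For a sense of how reward signals steer an agent, here's a minimal sketch of tabular Q-learning, one standard algorithm from the field Barto and Sutton founded (the toy corridor environment and all parameters are our own invention, not the laureates' code):

```python
import random

# Minimal sketch of tabular Q-learning on a toy 5-state corridor.
# The agent starts at state 0; reaching state 4 ends the episode
# and pays a reward of 1.
N_STATES = 5
ACTIONS = (-1, +1)                 # step left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.3
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # the reward signal nudges value estimates toward useful actions
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should step right in every state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```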
Decades later, advances in computing power revealed the true power of reinforcement learning. In 2016, Google researchers used it to train an AI system called AlphaGo to master the board game Go. The original version of AlphaGo learned to play from a combination of reinforcement learning and studying transcripts of expert human games. Later versions used pure reinforcement learning. 3/6 buff.ly/CPMXWkS
At the turn of the 20th century, Spanish pathologist and artist Santiago Ramón y Cajal became the first person to realize that every neuron in the brain is a separate cell. For this work, he was awarded the 1906 Nobel Prize.
2/9
Cajal’s observations defined modern neuroscience. Using his microscope and keen artist’s eye, he deduced that nervous signals enter the neuron through its dendrites and exit through its axon, and that one neuron relays messages to the next by passing information across the synapse.
3/9
Tiny rocks, known as space or cosmic dust, speckle the otherwise empty space between planets and stars. Some float to Earth from hundreds of millions of miles away, falling to the ground as micrometeorites. Round and multicolored like tiny marbles, each bears a distinctive message.
2/11
Every year, roughly 10 particles of space dust land on each square meter of Earth’s surface. “That means that they are everywhere. They are on the streets. They are in your home. You may even have some cosmic dust on your clothes.” — Matthew Genge, planetary scientist
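A quick back-of-envelope check of why that means "they are everywhere" (our own arithmetic, taking Earth's surface area as roughly 5.1 × 10^14 square meters):

```python
# Rough total flux, assuming the thread's rate of ~10 particles
# per square meter per year and Earth's surface area of ~5.1e14 m^2.
rate_per_m2 = 10
earth_surface_m2 = 5.1e14

print(f"{rate_per_m2 * earth_surface_m2:.1e} micrometeorites per year")
# ~5.1e+15 -- quadrillions of particles every year, hence "everywhere."
```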