We’re approaching the limits of computer power – we need new programmers now

Ever-faster processors led to bloated software, but physical limits may force a return to the concise code of the past
Way back in the 1960s, Gordon Moore, the co-founder of Intel, observed that the number of transistors that could be fitted on a silicon chip was doubling every two years. Since the transistor count is related to processing power, that meant that computing
power was effectively doubling every two years. Thus was born Moore’s law, which for most people working in the computer industry – or at any rate those younger than 40 – has provided the kind of bedrock certainty that Newton’s laws of motion did for mechanical engineers.
There is, however, one difference. Moore’s law is just a statement of an empirical correlation observed over a particular period in history and we are reaching the limits of its application.
In 2010, Moore himself predicted that the laws of physics would call a halt to the exponential increases. “In terms of size of transistor,” he said, “you can see that we’re approaching the size of atoms, which is a fundamental barrier, but it’ll be two or three generations before we get that far – but that’s as far out as we’ve ever been able to see. We have another 10 to 20 years before we reach a fundamental limit.” We’ve now reached 2020 and so the certainty that we will always have sufficiently powerful computing hardware for our expanding needs is beginning to look complacent.

Since this has been obvious for decades to those in the business, there’s been lots of research into ingenious ways of packing more computing power into machines – for example, using multi-core architectures in which a CPU has two or more separate processing units called “cores” – in the hope of postponing the awful day when the silicon chip finally runs out of road. (The new Apple Mac Pro, for example, is powered by a 28-core Intel Xeon processor.) And of course there is also a good deal of frenzied research into quantum computing, which could, in principle, be an epochal development.

But computing involves a combination of hardware and software, and one of the predictable consequences of Moore’s law is that it made programmers lazier.
Writing software is a craft and some people are better at it than others. They write code that is more elegant and, more importantly, leaner, so that it executes faster. In the early days, when the hardware was relatively primitive, craftsmanship really mattered.
When Bill Gates was a lad, for example, he wrote a Basic interpreter for one of the earliest microcomputers, the TRS-80. Because the machine had only a tiny read-only memory, Gates had to fit it into just 16 kilobytes.
He wrote it in assembly language to increase efficiency and save space; there’s a legend that for years afterwards he could recite the entire program by heart. There are thousands of stories like this from the early days of computing.
But as Moore’s law took hold, the need to write lean, parsimonious code gradually disappeared and incentives changed. Programming became industrialized as “software engineering”.
The construction of sprawling software ecosystems such as operating systems and commercial applications required large teams of developers; these then spawned associated bureaucracies of project managers and executives.
Large software projects morphed into the kind of death march memorably chronicled in Fred Brooks’s celebrated book, The Mythical Man-Month, which was published in 1975 and has never been out of print, for the very good reason that it’s still relevant.
And in the process, software became bloated and often inefficient. But this didn’t matter because the hardware was always delivering the computing power that concealed the “bloatware” problem. Conscientious programmers were often infuriated by this.
“The only consequence of the powerful hardware I see,” wrote one, “is that programmers write more and more bloated software on it. They become lazier, because the hardware is fast they do not try to learn algorithms nor to optimize their code… this is crazy!”
It is. In a lecture in 1997, Nathan Myhrvold, who was once Bill Gates’s chief technology officer, set out his Four Laws of Software. 1: software is like a gas – it expands to fill its container. 2: software grows until it is limited by Moore’s law.
3: software growth makes Moore’s law possible – people buy new hardware because the software requires it. And, finally, 4: software is only limited by human ambition and expectation.
As Moore’s law reaches the end of its dominion, Myhrvold’s laws suggest that we basically have only two options. Either we moderate our ambitions or we go back to writing leaner, more efficient code. In other words, back to the future.
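The contrast the quoted programmer is complaining about is easy to show in miniature. A purely illustrative sketch (the task and function names are mine): both functions answer the same question, but only the second shows the algorithmic care that cheap hardware made optional.

```python
# Illustrative sketch only: two ways to ask "does this sequence
# contain a duplicate?". Fast hardware hides the difference on small
# inputs, which is exactly how bloat survives.

def has_duplicates_bloated(items):
    # Quadratic: compares every pair. Works, and Moore's law long made
    # it "good enough" - until the input grows.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_lean(items):
    # Linear: a single pass with a set - the kind of optimisation the
    # quoted programmer is asking for.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

# Same answers, very different costs as the input scales.
data = [3, 1, 4, 1, 5]
assert has_duplicates_bloated(data) == has_duplicates_lean(data) == True
```

On a few thousand items the two are indistinguishable; on millions, the quadratic version is the "bloatware" the thread describes.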
Will advances in quantum computing affect internet security?

Google has built a super-fast computer, but whether it can break the encryption we take for granted is moot
Something intriguing happened last week. A paper about quantum computing by a Google researcher making a startling claim appeared on a Nasa website – and then disappeared shortly afterwards.
Conspiracy theorists immediately suspected that something sinister involving the NSA was afoot. Spiritualists thought that it confirmed what they’ve always suspected about quantum phenomena. (It was, as one wag put it to me, a clear case of “Schrödinger’s Paper”.)
Adherents of the cock-up theory of history (this columnist included) concluded that someone had just pushed the “publish” button prematurely – a suspicion apparently confirmed later by stories that the paper was intended for a major scientific journal before being published on the web.
Why was the elusive paper’s claim startling? It was because – according to the Financial Times – it asserted that a quantum computer
built by Google could perform a calculation “in 3 minutes and 20 seconds that would take today’s most advanced classical computer … approximately 10,000 years”.
As someone once said of the book of Genesis, this would be “important if true”. A more mischievous thought was: how would the researchers check that the quantum machine’s calculation was correct?
A quantum computer is one that harnesses phenomena from quantum physics, the study of the behaviour of subatomic particles, which is one of the most arcane specialisms known to humankind.
We all inhabit – and intuitively understand – a world governed by Newtonian physics, which explains the behaviour of tangible things such as billiard balls, planets and falling apples.
But it turns out that Newton’s laws don’t apply to subatomic particles; quantum theory evolved to explain what goes on in that strange space. The polite term for what goes on there is “counter-intuitive”.
The less polite term is “weird”. In certain situations, for example, quantum theory says that one subatomic particle’s behaviour is bound up with that of another, even if the second one is on the other side of the galaxy. This is known as “entanglement”.
Another principle is that a particle can be in two different states at the same time – as with Schrödinger’s imaginary cat, who was both alive and dead at the same time. This is known in the jargon as “superposition”.
Superposition is at the heart of quantum computing. Ordinary computers work with bits that can be either on or off – coded as zero or one. But quantum computers work with qubits, which can be 0, 1 or a superposition of both!
Thus two qubits can represent four states simultaneously (00, 01, 10, 11) – which apparently means that 100 qubits can represent 1.3 quadrillion quadrillion states.
This means that a quantum computer would be much faster and more efficient at some kinds of computation than a classical computer,
which has to chunter along with bits that are only on or off – and it explains why the mysterious Google machine might represent a working model of “quantum supremacy” in action.
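What those counts of states mean in practice can be seen by simulating qubits classically: an n-qubit state needs 2^n complex amplitudes, which is exactly why classical simulation runs out of road. A minimal sketch (pure Python, my own toy, not how any real quantum machine works):

```python
import math

def equal_superposition(n_qubits):
    """State vector after putting n qubits into an equal superposition
    (e.g. via a Hadamard gate on each): 2**n amplitudes, one per basis
    state such as 00, 01, 10, 11."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)   # real amplitudes suffice for this state
    return [amp] * dim

state = equal_superposition(2)
assert len(state) == 4                 # two qubits -> four basis states
# Measurement probabilities are the squared amplitudes and sum to 1.
assert abs(sum(a * a for a in state) - 1.0) < 1e-12

# 100 qubits would need 2**100 amplitudes - about 1.27e30, the
# "1.3 quadrillion quadrillion" figure quoted above.
assert 2 ** 100 > 1.2e30
```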
Why might this be important? Because the security of our networked world depends on public-key cryptography – the encryption that protects communications, bank accounts and other sensitive data.
At the core of this approach is the fact that factoring very large numbers takes a long time. In 2009, for example, it took several hundred computers two years to crack a message encrypted with a key that was 768 bits long.
The same process for material encrypted with a 1,024-bit key would take 1,000 times longer, and cracking anything encrypted with the current highest standard 4,096-bit key would possibly outlast the presence of life on earth. So our security depends on the speed of computers.
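The asymmetry this rests on can be sketched in a few lines (a toy, not real RSA; the primes and numbers are mine): multiplying two primes is instant, while recovering them by brute force takes time that grows exponentially with the key length.

```python
def factor_semiprime(n):
    """Brute-force trial division: recover p, q with p * q == n.
    For an odd semiprime this loop runs ~sqrt(n) times, i.e. time
    exponential in the bit-length of n."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return None

# Making the key is the easy direction: 2003 and 2011 are small primes.
assert 2003 * 2011 == 4028033
# Breaking it is the hard direction - trivial at this size, but a
# 768-bit modulus took hundreds of machines two years even with far
# cleverer algorithms than trial division.
assert factor_semiprime(4028033) == (2003, 2011)
```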
In principle, industrial-scale quantum computers could make a mockery of all this – but that’s in theory. In practice, quantum supremacy is still a long way off, as Scott Aaronson, a leading academic in the field, points out in a post on his blog.
There are, he says, two big obstacles. The first is that a quantum machine capable of tackling current encryption methods would need several thousand logical qubits: “With known error-correction methods, that could easily translate into millions of physical qubits,
and those probably of a higher quality than any that exist today. I don’t think anyone is close to that, and we have no idea how long it will take”.
The second caveat is that quantum machines would be able to crack some codes but not all possible codes. The public-key codes that would be vulnerable happen to be the ones we use to secure online transactions and to protect data.
But private-key encryption will probably still be invulnerable. And researchers have been working on new types of public-key crypto that no one knows how to break – even in principle – after two decades of trying.
When the Google paper does emerge, it will be interesting for all kinds of reasons – not least as evidence that the researchers have actually built a working 53-qubit machine.
But as a harbinger of crypto-apocalypse it’s likely to be a disappointment. At best, it’ll be a proof of concept; at worst, it’ll be the canary in the crypto-mine.
What is quantum computational supremacy?

Often abbreviated to just “quantum supremacy,” the term refers to the use of a quantum computer to solve some well-defined set of problems that would take orders of magnitude longer to solve with any currently known
algorithms running on existing classical computers—and not for incidental reasons, but for reasons of asymptotic quantum complexity. The emphasis here is on being as sure as possible that the problem really was solved quantumly and really is classically intractable, and ideally
achieving the speedup soon (with the noisy, non-universal QCs of the present or very near future). If the problem is also useful for something, then so much the better, but that’s not at all necessary.
If Google has indeed achieved quantum supremacy, does that mean that now “no code is uncrackable”, as Democratic presidential candidate Andrew Yang recently tweeted?
No, it doesn’t. (But I still like Yang’s candidacy.)

There are two issues here. First, the devices currently being built by Google, IBM, and others have 50-100 qubits and no error-correction. Running Shor’s algorithm to break the RSA
cryptosystem would require several thousand logical qubits. With known error-correction methods, that could easily translate into millions of physical qubits, and those probably of a higher quality than any that exist today.
I don’t think anyone is close to that, and we have no idea how long it will take. But the second issue is that, even in a hypothetical future with scalable, error-corrected QCs, on our current understanding they’ll only be able to crack some codes, not all of them.
By an unfortunate coincidence, the public-key codes that they can crack include most of what we currently use to secure the Internet: RSA, Diffie-Hellman, elliptic curve crypto, etc. But symmetric-key crypto should only be minimally affected.
And there are even candidates for public-key cryptosystems (for example, based on lattices) that no one knows how to break quantumly after 20+ years of trying, and some efforts underway now to start migrating to those systems.
What calculation is Google planning to do, or has it already done, that’s believed to be classically hard?

So, I can tell you, but I’ll feel slightly sheepish doing so. The calculation is: a “challenger” generates a random quantum circuit C (i.e., a random sequence of
1-qubit and nearest-neighbor 2-qubit gates, of depth perhaps 20, acting on a 2D grid of n = 50 to 60 qubits). The challenger then sends C to the quantum computer,
and asks it to apply C to the all-0 initial state, measure the result in the {0,1} basis, send back whatever n-bit string was observed, and repeat some thousands or millions of times.
Finally, using its knowledge of C, the classical challenger applies a statistical test to check whether the outputs are consistent with the QC having done this. So, this is not a problem like factoring with a single right answer.
The circuit C gives rise to some probability distribution, call it D_C, over n-bit strings, and the problem is to output samples from that distribution. In fact, there will typically be 2^n strings in the support of D_C – so many that, if the QC is working as expected, the same output will never be observed twice. A crucial point, though, is that the distribution D_C is not uniform. Some strings enjoy constructive interference of amplitudes and therefore have larger probabilities, while others suffer destructive interference and have smaller probabilities. And even though we’ll only see a number of samples that’s tiny compared to 2^n, we can check whether the samples preferentially cluster among the strings that are predicted to be likelier, and thereby build up our confidence that something classically intractable is being done.
So, tl;dr, the quantum computer is simply asked to apply a random (but known) sequence of quantum operations—not because we intrinsically care about the result, but because we’re trying to prove that it can beat a classical computer at some well-defined task.
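The protocol above can be mimicked classically at toy scale. In the sketch below (my own stand-in: a random normalised state vector plays the role of the output of circuit C, since the individual gates are beside the point), the “QC” simply measures repeatedly and sends back n-bit strings:

```python
import math
import random

def random_output_state(n_qubits, seed=0):
    """Stand-in for the state produced by a random circuit C: random
    complex amplitudes over the 2**n basis states, normalised."""
    rng = random.Random(seed)
    amps = [complex(rng.gauss(0, 1), rng.gauss(0, 1))
            for _ in range(2 ** n_qubits)]
    norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
    return [a / norm for a in amps]

def sample_bitstrings(state, k, seed=1):
    """The 'QC' step: measure in the {0,1} basis k times and send back
    the observed n-bit strings."""
    n = (len(state) - 1).bit_length()
    probs = [abs(a) ** 2 for a in state]      # Born rule
    rng = random.Random(seed)
    outcomes = rng.choices(range(len(state)), weights=probs, k=k)
    return [format(x, "0{}b".format(n)) for x in outcomes]

state = random_output_state(5)            # n = 5 qubits, 32 amplitudes
samples = sample_bitstrings(state, 1000)  # 1,000 measurement results
assert len(samples) == 1000 and all(len(s) == 5 for s in samples)
# D_C is far from uniform: some strings turn up much more often than
# others, which is what the challenger's statistical check exploits.
```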
But if the quantum computer is just executing some random garbage circuit, whose only purpose is to be hard to simulate classically, then who cares? Isn’t this a big overhyped nothingburger?
No. As I put it the other day, it’s not an everythingburger, but it’s certainly at least a somethingburger!

It’s like, have a little respect for the immensity of what we’re talking about here, and for the terrifying engineering that’s needed to make it reality.
Before quantum supremacy, by definition, the QC skeptics can all laugh to each other that, for all the billions of dollars spent over 20+ years, still no quantum computer has even once been used to solve any problem faster than
your laptop could solve it, or at least not in any way that depended on its being a quantum computer. In a post-quantum-supremacy world, that’s no longer the case.
A superposition involving 2^50 or 2^60 complex numbers has been computationally harnessed, using time and space resources that are minuscule compared to 2^50 or 2^60.
Years ago, you scolded the masses for being super-excited about D-Wave, and its claims to get huge quantum speedups for optimization problems via quantum annealing. Today you scold the masses for not being super-excited about quantum supremacy. Why can’t you stay consistent?
Because my goal is not to move the “excitement level” in some uniformly preferred direction, it’s to be right! With hindsight, would you say that I was mostly right about D-Wave, even when raining on that particular parade made me unpopular in some circles?
If quantum supremacy calculations just involve sampling from probability distributions, how do you check that they were done correctly?
This is the subject of a fair amount of theory that I and others developed over the last decade. I already gave you the short version above: you check by doing statistics on the samples that the QC returned, to verify that they’re preferentially clustered in the “peaks” of the chaotic probability distribution D_C. One convenient way of doing this, which Google calls the “linear cross-entropy test,” is simply to sum up Pr[C outputs s_i] over all the samples s_1, …, s_k that the QC returned, and then to declare the test a “success” if and only if the sum exceeds some threshold – say, b·k/2^n, for some constant b strictly between 1 and 2.

Admittedly, in order to apply this test, you need to calculate the probabilities Pr[C outputs s_i] on your classical computer – and the only known ways to calculate them require brute force and take ~2^n time. Is that a showstopper? No, not if n is 50, and you’re Google and are able to handle numbers like 2^50 (although not 2^1000, which exceeds a googol, har har). By running a huge cluster of classical cores for (say) a month, you can eventually verify the outputs that your QC produced in a few seconds – while also seeing that the QC was many orders of magnitude faster. However, this does mean that sampling-based quantum supremacy experiments are almost specifically designed for ~50-qubit devices like the ones being built right now. Even with 100 qubits, we wouldn’t know how to verify the results using all the classical computing power available on earth.
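A toy version of the linear cross-entropy test just described (my own sketch; a random normalised state stands in for the circuit, and at n = 8 qubits the brute-force probabilities are cheap to compute):

```python
import random

def xeb_passes(probs, samples, b=1.5):
    """Linear cross-entropy test: sum Pr[C outputs s_i] over the k
    returned samples and succeed iff the sum exceeds b*k/2**n, for a
    constant b strictly between 1 and 2."""
    k = len(samples)
    return sum(probs[s] for s in samples) > b * k / len(probs)

# Brute-force probabilities for a toy "random circuit" on n = 8 qubits:
# random complex amplitudes, probabilities proportional to |amp|**2.
rng = random.Random(0)
amps = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(2 ** 8)]
norm = sum(abs(a) ** 2 for a in amps)
probs = [abs(a) ** 2 / norm for a in amps]

# A faithful sampler clusters in the peaks of D_C and passes the test;
# a uniform sampler (a cheating classical strategy) does not.
faithful = rng.choices(range(len(probs)), weights=probs, k=5000)
uniform = [rng.randrange(len(probs)) for _ in range(5000)]
assert xeb_passes(probs, faithful)
assert not xeb_passes(probs, uniform)
```

The verification cost is hidden in `probs`: computing it brute-force is ~2^n work, trivial at n = 8, a month of cluster time at n ≈ 50, and hopeless at n = 100 – exactly the point made above.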