I read Google's paper about their quantum computer so you don't have to.
They claim to have run a quantum computation in 5 minutes that would take a normal computer 10^25 years.
But what was that computation? Does it live up to the hype?
I will break it down.🧵
First we need a crash course in quantum computing.
A classical computer keeps information in 64-bit groups called "words."
Each of these groups of bits is run through logic gates such as AND, OR, and NOT.
These gates take one or two bits as an input and return a single bit as an output.
AND returns 1 if both inputs are 1,
OR returns 1 if at least one input is 1, and
NOT returns 1 if the input is 0.
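To make that concrete, here's a quick sketch of those three gates in Python (my own illustration, not anything from the paper):

```python
# Classical logic gates acting on single bits (0 or 1).
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

print(AND(1, 1), OR(0, 1), NOT(0))  # -> 1 1 1
```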
A quantum computer, instead of using bits, uses qbits.
A qbit is a two-dimensional vector of real numbers. This means one qbit can hold "infinitely" more information than a single regular bit.
A bit can only hold 0 or 1, but a qbit can hold a 2d vector of real numbers, so it is much more expressive.
You can think of an individual qbit as a "probability distribution" -- it averages around certain values but doesn't have a concrete value.
However, when the qbit is "observed" it is forced into a vector representation of either [1, 0] representing a binary 0, or [0, 1] representing a binary 1.
We call the transformation from a "probability distribution" to a concrete binary value "collapsing" the qbit.
For example, a non-collapsed qbit might look like [0.8366, 0.5477].
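Here's a tiny numpy sketch of that example qbit. One detail the thread glosses over (a standard textbook fact, not from Google's paper): the probability of each outcome is the square of the corresponding entry.

```python
import numpy as np

qbit = np.array([0.8366, 0.5477])   # a non-collapsed qbit
probs = qbit ** 2                   # squared entries = outcome probabilities
print(probs)                        # -> roughly [0.70, 0.30]

# "Collapsing" the qbit = sampling a concrete 0 or 1 from that distribution.
outcome = np.random.choice([0, 1], p=probs / probs.sum())
print(outcome)                      # 0 about 70% of the time, 1 about 30%
```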
A quantum gate is effectively matrix multiplication. If you multiply a qbit (2-dim vector) by a 2x2 matrix, you get another qbit (2-dim vector) back. For example, a quantum NOT gate is the 2x2 matrix [[0, 1], [1, 0]].
If that is multiplied by qbit [1, 0] then the qbit is flipped to [0, 1]. This still works as expected for non-collapsed qbits.
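In numpy, that looks like this (again my sketch, not code from the paper):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])                     # the quantum NOT gate

print(X @ np.array([1, 0]))                # -> [0 1]: binary 0 flipped to binary 1
print(X @ np.array([0.8366, 0.5477]))      # -> [0.5477 0.8366]: also flips a non-collapsed qbit
```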
Like a regular computer, a quantum computer keeps its qbits in groups. So a 64-qbit quantum computer would have a vector of 64 2d vectors serving as its "word."
Here is where the speedup happens: in a regular computer, each of the 64 bits knows nothing about the value of any of the other 63.
If we want one bit to affect another bit, we have to explicitly combine them with a logic gate.
However, in a quantum computer, each of the 64 qbits can "talk to each other" via "quantum entanglement."
Running a quantum circuit means you plug in a quantum vector, run it through a bunch of matrix multiplications, then collapse the output.
The final vector will be the correct answer. Technically, quantum computers can give wrong answers, but if you run the computation multiple times, then you will get the correct answer on average.
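Here is a minimal end-to-end example you can run: a 2-qbit circuit, simulated classically with numpy, that entangles the two qbits. The Hadamard and CNOT gates and the 50/50 outcome are textbook material, not something specific to Google's experiment. (Note that two qbits together are described by one 4-entry vector, which is exactly how they "talk to each other.")

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate: puts a qbit "halfway" between 0 and 1
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],         # entangling gate: flips the 2nd qbit if the 1st is 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0, 0, 0])       # both qbits start as binary 0 ("00")
state = CNOT @ np.kron(H, I) @ state   # "run the circuit": just matrix multiplications

probs = state ** 2                     # probabilities of collapsing to 00, 01, 10, 11
samples = np.random.choice(["00", "01", "10", "11"], size=1000, p=probs / probs.sum())
print(probs.round(2))                                 # -> [0.5 0.  0.  0.5]
print(np.unique(samples, return_counts=True))         # ~half "00", ~half "11": entangled
```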
So basically, quantum computing is "matrix multiplication of real numbers on crack."
The current problem with quantum computers is that as the circuit gets bigger, they become less correct on average. All of the "talking to each other" creates so much noise the system stops working.
Once your probability of being correct drops below a certain threshold your quantum computer becomes useless. This is a major blocker for current quantum compute.
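A back-of-the-envelope way to see the problem (the numbers below are made up for illustration): if each gate works with probability 0.999, the whole circuit succeeds with probability roughly 0.999 raised to the number of gates, which dies off fast.

```python
# Rough model: circuit success probability ~ (per-gate fidelity) ** (number of gates)
fidelity = 0.999                       # hypothetical per-gate fidelity
for gates in [10, 100, 1_000, 10_000]:
    print(gates, round(fidelity ** gates, 4))
# 10 -> 0.99, 100 -> 0.9048, 1000 -> 0.3677, 10000 -> ~0.0
```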
Let's look at a specific (oversimplified but helpful) example. Suppose you shine a laser beam into an ice cube.
Actually simulating what the laser will do when it exits the ice cube is very hard because quantum phenomena are involved.
To actually compute what the laser will do means you have to explicitly compute quantum entanglement, which is slow for classical computers but "built in" to a quantum computer.
However, you can *estimate* the distribution of how the laser will scatter without a quantum computer, so you can have at least a rough idea if your answer might be correct.
Theoretically, a quantum computer can predict what the laser will do when it enters and exits the ice cube, assuming it knows the distribution of particle structure of the ice cube.
You could model the laser traveling through the ice as a lot of matrix multiplications. It would effectively be multiplication by a lot of seemingly random matrices, with some entanglement involved.
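Continuing the analogy in numpy (purely illustrative, using made-up random matrices rather than anything physical), "a lot of seemingly random matrix multiplications" could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_gate(dim):
    # QR decomposition of a random matrix gives a random length-preserving (orthogonal) matrix
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    return q

state = np.zeros(8)
state[0] = 1.0                       # joint state of 3 qbits, starting at "000"
for _ in range(50):                  # 50 layers of seemingly random gates
    state = random_gate(8) @ state

print((state ** 2).round(3))         # a scrambled-looking output distribution that still sums to 1
```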
By analogy, this is what Google was doing. The computation was a "pseudo-random quantum circuit" (think pseudorandom ice cube), but we know a quantum circuit is just matrix multiplications (on crack). Therefore, it is a bunch of random matrix multiplications with an output that looks right.
Google's actual breakthrough was that the output of the circuit "looks correct" -- which sounds underwhelming -- and compared to the headlines, it definitely is. The academic breakthrough is that Google was able to use a larger circuit and notice an apparent *increase* in accuracy when modeling how a laser shines through an ice cube. That is noteworthy.
You can definitely tell if a computation has failed, and it seemed to be failing less as the circuit got bigger.
Normally, a quantum computer simulating a laser going through an ice cube would produce a garbled mess because quantum computers lose too much accuracy as the circuit gets bigger, and the output no longer collapses to the right answer on average. But that was not the case in the Google experiment.
However, note that the problem is "rigged" in favor of quantum computers. The benchmark is explicitly modeling a quantum phenomenon, so *of course* we get a speedup.
In other words, Google created a random distribution on the output that "seems correct." Why does it "seem correct"? Well, because by design, the computation cannot be run on a classical computer. But if we can't run it on a classical computer, how do we know the quantum computer is actually giving the right answer? The answer is we don't, and this is a serious gap.
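For the curious, the statistical check used in this line of work is called linear cross-entropy benchmarking (my summary; the thread doesn't name it). You take the bitstrings the quantum computer spits out, look up how likely each one was under the ideal circuit, and check whether the average is better than random guessing. The catch, which is exactly the gap above, is that those ideal probabilities come from a classical simulation, so the check can only be done directly at sizes a classical computer can still handle, and extrapolated beyond that.

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qbits):
    """Linear cross-entropy score: higher means the samples track the ideal
    distribution, ~0 means they look like uniform random noise."""
    d = 2 ** n_qbits
    return d * np.mean([ideal_probs[s] for s in samples]) - 1

# Toy example: 3 qbits and a made-up "ideal" output distribution.
rng = np.random.default_rng(0)
ideal = rng.random(8)
ideal /= ideal.sum()

good = rng.choice(8, size=5000, p=ideal)    # samples that match the ideal distribution
noise = rng.integers(0, 8, size=5000)       # samples that are pure noise

print(round(linear_xeb(ideal, good, 3), 2))   # noticeably above 0
print(round(linear_xeb(ideal, noise, 3), 2))  # close to 0
```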
But how big a step is this towards quantum computers breaking cryptography?
What Google did was create circuits with 9 gates, 25 gates, and 49 gates (very roughly speaking) and notice that the accuracy of the output seemed to improve — or rather, the computation failed less.
Therefore, by extrapolation, something with a lot more gates should have more accuracy. Right? Skeptics would be right to point out that extrapolation from 49 to 49,000 is a very big leap.
Quantum computing is kind of at the stage right now where some smart teenager wired a few logic gates together in a random fashion and said "hey look, my circuit made a random output and didn't explode!" Compared to previous attempts, it is an improvement. But he is still a long way from training an LLM.
Going from random circuits to ones that do something useful, especially ones that require orders of magnitude more gates, is a whole different problem.
In my opinion, no practical "quantum supremacy" has been demonstrated. The problem at hand is to explicitly model quantum phenomena, so it should be no surprise that a quantum computer could do better.
In my opinion, this whole thing is a standard exercise in technological marketing:
1. Write about some seemingly powerful tech that creates fear (AI, nuclear, quantum, etc). Fear creates engagement.
2. Do a PR campaign where journalists write clickbaity headlines on a subject they are incapable of explaining, let alone understanding.
3. Try to enhance your brand by showing you are achieving some success on a hard problem.
Google desperately needed to do this because they are losing the AI narrative, so they have a strong incentive to do a coordinated PR stunt.
Ok, this thread took off.
For those who are new followers, I’m the founder of the company @RareSkills_io
We focus on making difficult subjects in computer science, particularly blockchain, approachable and useful to you.
Our most famous resource is the “RareSkills ZK Book” (do an internet search for that, X doesn’t like links).
It is 100% free and requires no email or login.
It teaches you how the field of cryptography “Zero Knowledge Proofs” work and how to code them up yourself.
• • •
I’m bearish too — for the same reasons I’m bearish on my own company — not because of the mid takes your timeline is parroting.
This thread discusses the following topics:
1. On Vitalik’s resistance to being the face of the brand
2. On the Ethereum Foundation not supporting companies in the ecosystem
3. On parasitic L2s
🧵
1. On Vitalik’s resistance to being the face of the brand
Few things make me happier than when I meet someone and they say, “oh you are from RareSkills, thank you so much, love the company!”
On Twitter, it’s kind of common knowledge that I run RareSkills, but for people finding RS randomly online, they often don’t know I exist.
And that is a good thing.
I want the company to stand on the quality of its work, not leverage whatever “celebrity/influencer status” I have.
I’m not a fan of using my personal brand to advance the company, but several people I trust have said that’s not the right mindset. I disagree with them — but I’m trying to find a middle ground. I’m sure Vitalik is doing the same.
Personal brands create very serious single points of failure, both from an enterprise value perspective and from a risk perspective. They are like levering perps with 100x. Yes, you can get high returns fast because people love authentic connections with real people, but it also puts resistance on your upside and greatly increases your downside.
I want anyone who works at RareSkills to have their personal brand enhanced by it — not the other way around.
Take for example how Rolex uses Roger Federer as a brand ambassador. If he stopped shilling Rolex today, their sales would not be seriously hurt. That’s the place I want RS to be at, and I’m sure that’s the place where Vitalik wants Ethereum to be at.
To a certain extent, RareSkills could probably survive the exit of its founder better than a lot of other web3 companies. The reason I know this is that many people outside of Twitter don’t know I exist, but they still love the brand. Similarly, aside from Bitcoin, there’s no question that Ethereum is best positioned, relative to other ecosystems, to survive if something were to happen to its founder.
This is a very long term view that frustrates people, because everyone knows that a celebrity founder bullposting works — but what they don’t know is that it only works in the short term.
2. On the ETH foundation not supporting companies in the ecosystem
First, let me say that my interactions with individuals at EF have been great. But every conversation feels like they are hamstrung by a bureaucracy older than the Catholic Church.
There is no question that I’ve delivered a ton of value for EF, but because they don’t have a framework for funding content (as opposed to software), I’ve consistently gotten rejected from every single grant application.
RareSkills has literally gotten zero money from EF.
I understand where they are coming from, but as a devout EVM content creator who creates far more value than captured, this feels very wrong.
One concern they had was that they can’t be “siding” with one content creator, lest it create a “chilling effect” that discourages other content creators from producing similar material.
As much as I believe that turn of events is extremely unlikely, I get it.
Neutrality and open source are values they take very seriously.
Numerous companies have offered to pay us money to include content about them in our bootcamp. Some of the content is actually worthwhile and good.
We haven’t taken money for that (yet).
I want students who are paying for something at RareSkills to believe (correctly) that our bootcamps were optimized based on what we believed (in our expert opinion) was the most effective use of their time, not something that *could* be suboptimal because someone paid us.
However, I think both EF and RS take this tradeoff to the extreme.
The fact of the matter is both EF and RS have so much excess brand equity that if either lost 10% overnight, it would not matter. That 10% could be traded for an investment that, in the long run, would help them accomplish their long-term visions better.
Brand is not the long term goal.
Optics are not the end goal.
The end goal for RS is to help very talented engineers and developers learn as fast as possible so they can focus on building and finding bugs.
“Appearing neutral” — while important — is subordinate to that goal.
The end goal for Ethereum is to be a fully trustworthy base layer for settling the world’s compute. They can accomplish this without appearing totally neutral, as a foundation.
Extreme example: people in the Solana ecosystem recommend RareSkills content even though they know we are very pro-EVM. “Wanna learn DeFi? Read the RareSkills books — you should be smart enough to translate the idea to SVM anyway.” So if we can be successful despite being (wrongly) perceived as “biased,” so can the Ethereum Foundation.
Some thoughts on the akutars bug (the good, the bad, and the ugly):
- some of my friends have lost money due to the floor dropping as a result. My heart goes out to you.
- the buggy code was well intentioned. They blocked themselves from withdrawing the money until everyone got their refund… 👇
… so it had an “anti-rug” mechanism built in. They tried to do something novel that was very pro-customer, and I’m a big fan. The intention was 100% on point. A bug in the anti-rug logic left the funds unwithdrawable.
- the developers clearly did not follow or were not aware of… 👇
… best practices. No code formatter was used, and it was clearly deployed from Remix based on the autogenerated comments I saw.
- Solidity’s relative ease of use is a double-edged sword. I now have serious respect for Solana using Rust as a filter for “legitness”… 👇