Dr Vega @queenofquanta, 24 tweets, 8 min read
Day 2 of quantum advantage workshop at @InHenriPoincare, featuring Ashley Montanaro (@quantumashley), Juan Bermejo-Vega (myself) and Dominik Hangleiter. #LTQI
We kick off with Ashley Montanaro @quantumashley, giving a tutorial on IQP circuits (IQP = Instantaneous Quantum Polynomial time) and their potential for demonstrating superior quantum computational performance. #LQTI
- Shepherd, Bremner 0809.0847;
- Bremner, Jozsa, Shepherd 1005.1407;
- Bremner, Montanaro, Shepherd 1504.07999;
- Bremner, Montanaro, Shepherd 1610.01808;
Will the day end without me having misspelled IQP at least once? Let's find out! ;)
IQP circuits are diagonal circuits in the Hadamard basis. They can encode hard-to-approximate output probabilities such as (complex) Ising model partition functions (if we choose \sqrt{CZ}, T gates), and gaps of degree-3 Boolean polynomials (choice: Z, CZ, CCZ gates).
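Concretely, an n-qubit IQP circuit is H^⊗n · D · H^⊗n for a diagonal D. A minimal brute-force sketch in Python/NumPy (the polynomial f and the gate choice are my own toy example, not from the talk):

```python
import numpy as np
from functools import reduce

# Toy 3-qubit IQP circuit: H on every qubit, a diagonal of (-1)^f(x) phases
# (the kind generated by Z, CZ, CCZ gates), then H on every qubit again.
n = 3
dim = 2 ** n
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H] * n)                    # H on all n qubits

# f is an arbitrary degree-3 Boolean polynomial chosen for illustration.
def f(x):
    return (x[0] * x[1] * x[2] + x[0] * x[1] + x[2]) % 2

bits = [tuple(int(b) for b in format(i, f"0{n}b")) for i in range(dim)]
D = np.diag([(-1.0) ** f(x) for x in bits])

state = Hn @ D @ Hn[:, 0]        # circuit applied to |000>
probs = np.abs(state) ** 2       # output distribution p(s)
print(round(probs[0], 4))        # p(000) = (gap(f)/2^n)^2 = (2/8)^2 = 0.0625
```

Note how p(000) is exactly the (normalized) gap of f squared: this is the encoding being referred to above.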
These probabilities are "very hard" to compute, namely, #P-hard, i.e., at least as hard as exactly counting the number of solutions of a 3-SAT formula. (First proofs of hardness were known before quantum computing was even a thing!) :O
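For intuition, the "gap" of a Boolean function f is #{x : f(x)=0} − #{x : f(x)=1}; computing it exactly is #P-hard in general, but brute force works on a toy instance (polynomial is my own example):

```python
import itertools

def gap(f, n):
    """gap(f) = #{x : f(x)=0} - #{x : f(x)=1}. #P-hard in general;
    brute force is fine for a toy n."""
    return sum(1 if f(x) == 0 else -1
               for x in itertools.product((0, 1), repeat=n))

# Arbitrary degree-3 Boolean polynomial for illustration.
f = lambda x: (x[0] * x[1] * x[2] + x[1] + x[2]) % 2
print(gap(f, 3))   # -> -2
```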
Bremner-Jozsa-Shepherd: If a classical computer can exactly sample from the output distribution of IQP circuits, then the Polynomial Hierarchy collapses.
Today (in Ash's talk and later) we want to rule out classical simulations that imperfectly sample quantum-generated probability distributions, i.e., up to a constant additive error ε.
For this we will use Stockmeyer's counting theorem: if a classical computer can efficiently sample from a probability distribution p(s), then there is a BPP^NP algorithm that approximates p(s) within relative error ε, i.e., outputs a value in (1±ε)p(s).
Setting ε to be nearly zero (zero, or multiplicatively small), Stockmeyer's theorem can be used to prove Bremner-Jozsa-Shepherd's result (for the record, their original proof uses post-selection, but we will be working with Stockmeyer's today).
Bremner-Montanaro-Shepherd's 1st result -> an "approximate version" of BJS's: assuming an additional average-case conjecture, an efficient probabilistic classical simulation of IQP circuits with a constant additive error ε implies a collapse of the Polynomial Hierarchy to its 3rd level.
Average-case conjecture: random Ising models' partition functions and random degree-3 Boolean polynomials' gaps are #P-hard to approximate on average. Motivation: they are provably hard in the worst case, and the in-built randomness doesn't intuitively simplify the average case.
Ingredients behind BMS's proof:
1. Stockmeyer's Theorem => if a "good" additive-error classical sampler exists, then a BPP^NP algorithm can approximate output probabilities with 1/poly(n) additive error. Good, but not quite there yet.
2. A proof that IQP circuits anti-concentrate => the same BPP^NP algorithm yields constant relative-error approximations of output probabilities on average. But this was assumed to be #P-hard => the Polynomial Hierarchy collapses to its 3rd level because
BPP^NP ⊆ PH_3 ⊆ PH ⊆ P^{#P}
After a coffee reboot ☕️, @quantumashley will tell us more about hardness of classically simulating IQP circuits, complexity theory and implementations on physically-realistic architectures. #LTQI
More on the proof: Stockmeyer's algorithm gives you O(ε/2^n + p(s)/poly(n)) errors. The anti-concentration property ensures that a large fraction of the IQP output probabilities fulfil p(s) ≥ ε'/2^n. Hence, the additive term is smaller than a constant relative error on average.
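Putting the two bounds together (a sketch of the bookkeeping, using the quantities above):

\[
  \underbrace{\frac{\varepsilon}{2^n} + \frac{p(s)}{\mathrm{poly}(n)}}_{\text{Stockmeyer error}}
  \;\le\; \frac{\varepsilon}{\varepsilon'}\,p(s) + \frac{p(s)}{\mathrm{poly}(n)}
  \;=\; O(1)\cdot p(s)
  \qquad \text{whenever } p(s) \ge \frac{\varepsilon'}{2^n},
\]

so on the anti-concentrated outputs the additive guarantee really is a constant relative-error one.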
The proof of anti-concentration combines:
- The Paley–Zygmund inequality: shows that the probability of p(s) being larger than a constant fraction of 1/2^n is proportional to 2^{-2n}/E(p(s)^2).
- A combinatorial proof that E(p(s)^2) ≤ c/2^{2n} for a constant c.
=> Final constant anti-concentration ratio.
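A quick numerical sanity check of anti-concentration (my own sketch; random diagonals of e^{ikπ/4} phases stand in for a random choice of \sqrt{CZ}/T gates, which is an assumption, not the talk's exact ensemble):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
n = 4
dim = 2 ** n
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H] * n)

fractions = []
for _ in range(200):
    # Random diagonal of eighth roots of unity, mimicking random T-type phases.
    phases = np.exp(1j * np.pi / 4 * rng.integers(0, 8, size=dim))
    state = Hn @ (phases * Hn[:, 0])           # output amplitudes of H D H |0..0>
    p = np.abs(state) ** 2
    fractions.append(np.mean(p >= 0.5 / dim))  # fraction with p(s) >= (1/2)/2^n
print(round(float(np.mean(fractions)), 2))     # a constant fraction, not ~0
```

The point: a constant fraction of the 2^n outcomes sits above a constant multiple of the uniform weight 1/2^n, which is exactly the property the proof needs.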
2nd result of Bremner-Montanaro-Shepherd: extends the hardness results to "sparse" n-qubit IQP circuits. The latter can be implemented in depth O(√n log n) on a √n×√n square lattice (on average) using sorting networks and parallelization. #LTQI
"Sparse" means that 2-qubit gates are only applied with probability O(log n/n) to arbitrary pairs of qubits (i.e., typically, each qubit interacts with other log n qubits instead of n).
Can the result be extended to depth much smaller than √n? Unlikely, because then one would expect tensor-network classical simulation algorithms to yield a sub-exponential algorithm for a #P-hard problem... :(
Specifically, it would seemingly imply a sub-exponential algorithm for counting 3-SAT solutions, in violation of the counting exponential time hypothesis.
However, constant-depth can be achieved via different approaches inspired by measurement-based quantum computation (-> next talk):
- Gao, Wang, Duan;
- Bermejo-Vega, Hangleiter, Schwarz, Raussendorf, Eisert;
- Miller, Sanders, Miyake
The morning session ends. I cannot live tweet the next talk, but stay tuned to @fgrosshans ! #LTQI