Frédéric Grosshans @fgrosshans
This morning at the @InHenriPoincare focus week on quantum advantage, Aleksandr Arkhipov from @MIT_CSAIL presents a Tutorial on BosonSampling
#LTQI
Aleksandr Arkhipov:
Looks at a classical pachinko machine, with classical balls. The input-output relations can be described by a stochastic matrix M.
For example, Prob(one ball/input slot → one ball/output slot) = perm(M)
#LTQI
Aleksandr Arkhipov: If one now plays the same game with identical bosons (photons in a linear optical network), the matrix M is unitary and the same probability is then |perm(M)|²
#LTQI
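A minimal numeric check of both transition rules (toy matrices of my own choosing; the brute-force `perm` helper is mine, fine for small n):

```python
import itertools
import numpy as np

def perm(M):
    """Permanent by brute force: sum over all permutations (small n only)."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

# Classical balls: a doubly stochastic transition matrix, 2 slots.
M_classical = np.array([[0.5, 0.5],
                        [0.5, 0.5]])
p_classical = perm(M_classical)        # 0.5*0.5 + 0.5*0.5 = 0.5

# Bosons: 50:50 beamsplitter (unitary), same one-ball-per-slot event.
M_bosonic = np.array([[1, 1],
                      [1, -1]]) / np.sqrt(2)
p_bosonic = abs(perm(M_bosonic))**2    # |1/2 - 1/2|² = 0

print(p_classical, p_bosonic)
```

The bosonic case already shows the Hong–Ou–Mandel dip discussed below: the two amplitudes cancel inside the permanent.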
Aleksandr Arkhipov:
More generally, for a transition S→T between photon patterns, S=(s₁,s₂…s_n), sₓ being the number of photons in mode x, we define the submatrix M_ST with row (column) x repeated sₓ (tₓ) times,
P(S→T)=|perm(M_ST)|²/(S!T!)
with S!=s₁!s₂!⋯s_n! and likewise for T
#LTQI
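This rule can be sketched directly in code (`transition_prob` and the brute-force `perm` are my own illustrative helpers):

```python
import itertools
import math
import numpy as np

def perm(M):
    """Permanent by brute force over permutations (small n only)."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

def transition_prob(U, S, T):
    """P(S→T) = |perm(U_ST)|² / (S!T!), with row x repeated s_x times and
    column x repeated t_x times (assumes sum(S) == sum(T))."""
    M_ST = np.repeat(np.repeat(U, S, axis=0), T, axis=1)
    norm = (math.prod(math.factorial(s) for s in S)
            * math.prod(math.factorial(t) for t in T))
    return abs(perm(M_ST))**2 / norm

BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50:50 beamsplitter
print(transition_prob(BS, (1, 1), (2, 0)))       # 0.5: photons bunch
print(transition_prob(BS, (1, 1), (1, 1)))       # 0.0: the HOM dip
```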
Aleksandr Arkhipov: Describes the photon pattern S in second quantization, by a polynomial:
∏ₓ (âₓ†)^sₓ/√(sₓ!)

(he actually uses x and/or y as variables, but as a physicist, I know they stand for creation operators)
An n-photon state is a degree-n polynomial
#LTQI
Aleksandr Arkhipov: studies in detail the Hong–Ou–Mandel dip.
The dip basically comes from the symmetry of identical particles.
In polynomial terms it reads xy=yx, while x⊗y≠y⊗x. Actually, with photons, one should use the symmetric product x⊙y=y⊙x=(x⊗y+y⊗x)/√2
#LTQI
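The ordered-vs-symmetric distinction, and the HOM dip it produces, can be checked with plain vectors (my own toy setup, x and y as basis states of a two-mode space):

```python
import numpy as np

x = np.array([1.0, 0.0])   # one photon in mode 1
y = np.array([0.0, 1.0])   # one photon in mode 2

xy = np.kron(x, y)
yx = np.kron(y, x)
sym = (xy + yx) / np.sqrt(2)   # x⊙y = y⊙x: the physical two-photon state

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50:50 beamsplitter
out = np.kron(U, U) @ sym       # two-photon state after the beamsplitter
overlap = out @ sym             # amplitude of the one-photon-per-mode outcome

print(np.array_equal(xy, yx))   # False: the tensor product is ordered
print(overlap)                  # 0: the HOM dip, now in the ⊙ picture
```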
Aleksandr Arkhipov introduces a homomorphism ϕ₂(U)(xy)=(Ux)(Uy)
U acts on V=ℂⁿ
ϕ_n(U) acts on ℂ^N
N = # degree-n monomials in m vars
= ((m ¬ n)) = (m+n−1 ¬ n)

¬ is my twitter notation for “over”: n choose k = (n ¬ k)

#LTQI
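The multiset-coefficient count of N can be cross-checked by enumeration (`dim_phi_n` is my own name for it):

```python
from math import comb
from itertools import combinations_with_replacement

def dim_phi_n(m, n):
    """N = # degree-n monomials in m variables = multichoose(m, n)."""
    return comb(m + n - 1, n)

m, n = 3, 2
print(dim_phi_n(m, n))   # 6: x², y², z², xy, xz, yz
# Cross-check: a degree-n monomial is a multiset of n variable indices.
print(len(list(combinations_with_replacement(range(m), n))))   # 6
```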
Now, Aleksandr Arkhipov defines the BosonSampling computational problem:
Input: an n×m matrix (n rows of an m×m unitary)
Output: the probabilities of output patterns

Since the probabilities are permanents, and computing the permanent is ♯P-hard, this problem is hard

#LTQI
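For tiny instances the whole output distribution can be tabulated by brute force, which makes the definition concrete (the names and the QR-based random unitary are my own; this costs exponential time, so it only works for toy sizes):

```python
import itertools
import math
import numpy as np

def perm(M):
    """Permanent by brute force over permutations (small n only)."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

def output_distribution(U, n):
    """Full BosonSampling distribution for n photons entering the first n
    modes of the m-mode interferometer U, via P(T) = |perm(U_ST)|²/T!."""
    m = U.shape[0]
    probs = {}
    for occ in itertools.combinations_with_replacement(range(m), n):
        T = [occ.count(x) for x in range(m)]
        M_ST = np.repeat(U[:n, :], T, axis=1)   # S = (1,…,1,0,…,0), so S! = 1
        norm = math.prod(math.factorial(t) for t in T)
        probs[tuple(T)] = abs(perm(M_ST))**2 / norm
    return probs

# A random 3-mode unitary via QR decomposition (illustrative)
rng = np.random.default_rng(0)
Z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(Z)
dist = output_distribution(U, 2)
print(sum(dist.values()))   # ≈ 1.0: unitarity makes the probabilities sum to one
```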
Aleksandr Arkhipov: uses the reasoning we’ve heard several times this week to show that solving BosonSampling exactly on a BPP machine would collapse the polynomial hierarchy to the 3rd level.

#LTQI
Aleksandr Arkhipov: In this proof, one embeds the problem of computing perm(M) by embedding εM as a submatrix of a larger unitary matrix, then using a classical boson sampler and an NP oracle (Stockmeyer counting)
#LTQI
Aleksandr Arkhipov:
However, requiring exact boson sampling is unfair, given that any physical realization would be noisy.
We should look at approximate BS, with adversarial errors. But the adversary could easily ruin the probability of a special outcome.
#LTQI
Aleksandr Arkhipov:
To prevent this,
εR is a random submatrix,
m≥n^{2+ε} to avoid collisions,
εM needs to look like a random submatrix of a unitary (no arbitrary M anymore).

If m≥n^(6+ε), the last condition (for Haar-random U) is fulfilled by iid Gaussians.

#LTQI
Aleksandr Arkhipov conjectures that this remains true down to m≥n^(2+ε), but it would need a much finer study
#LTQI
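The "submatrix of a Haar unitary looks Gaussian" claim is easy to probe numerically when m ≫ n (the Haar construction via QR with phase correction and all parameter choices are mine):

```python
import numpy as np

def haar_unitary(m, rng):
    """Haar-random unitary: QR of a complex Gaussian matrix, with the
    phases of R's diagonal divided out (Mezzadri's recipe)."""
    Z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(1)
n, m = 3, 100   # m >> n: top-left n×n entries ≈ iid complex N(0, 1/m)
subs = [haar_unitary(m, rng)[:n, :n] for _ in range(200)]
scaled = np.sqrt(m) * np.array(subs).ravel()   # rescale to unit variance
mean_sq = np.mean(np.abs(scaled)**2)
print(mean_sq)   # close to 1, as for a unit-variance complex Gaussian
```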
Aleksandr Arkhipov needs an additional conjecture to show BosonSampling cannot be done approximately in SampBPP: the Permanent-of-Gaussians Conjecture (PGC): additively approximating the permanent of most random Gaussian matrices is ♯P-hard

#LTQI
Aleksandr Arkhipov: PGC is implied by two conjectures: PGC× (the difficulty of multiplicatively approximating these permanents) and an anti-concentration conjecture (most of these permanents are not close to 0).
Variants of PGC× are known to be ♯P-complete (worst-case approximate, average-case exact)
Aleksandr Arkhipov: looks at the self-reducibility of perm (Lipton ’89).
Pick a random n×n matrix A
f(t)=perm(M+tA) is a polynomial of degree n; f(0)=perm(M) can be found by interpolation from n+1 points
If we can compute perm of a random matrix with probability ≥ 1−1/(3(n+1)), we get perm(M) whp.
Can be improved to 1/poly(n) #LTQI
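The interpolation step of Lipton's reduction can be demonstrated directly (noise-free, so the "random oracle" is just exact evaluation; matrices and sizes are mine):

```python
import itertools
import numpy as np

def perm(M):
    """Permanent by brute force over permutations (small n only)."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(2)
n = 4
M = rng.normal(size=(n, n))   # the matrix whose permanent we want
A = rng.normal(size=(n, n))   # random shift matrix

# f(t) = perm(M + t*A) is a degree-n polynomial in t: evaluate it at
# n+1 nonzero points, interpolate, and read off f(0) = perm(M).
ts = np.arange(1.0, n + 2)
fs = [perm(M + t * A) for t in ts]
coeffs = np.polyfit(ts, fs, n)    # n+1 points, degree n: exact interpolation
f0 = np.polyval(coeffs, 0.0)
print(f0, perm(M))                # the two values agree
```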
Aleksandr Arkhipov: The previous argument shows that being able to compute the exact permanent of a random matrix often enough is ♯P-hard
#LTQI
Aleksandr Arkhipov: applying the same reasoning in the always-right-but-approximate case does not work. If M=I, perm(M)=1, A=diag(1, −1, 1, −1, …), perm(M+tA)=(1−t²)^(n/2) is a sharp peak ⇒ a random M with noise would miss the peak.
#LTQI
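The sharp peak in this counterexample is easy to reproduce: with M=I and the alternating-sign diagonal A, M+tA is diagonal, so its permanent is just the product of the diagonal entries (sizes and grid are mine):

```python
import numpy as np

n = 20
signs = np.array([1, -1] * (n // 2))   # A = diag(1, -1, 1, -1, ...)
t = np.linspace(-1.5, 1.5, 7)
# perm of the diagonal matrix I + t*A = product of (1 ± t):
f = np.array([np.prod(1 + ti * signs) for ti in t])
print(np.allclose(f, (1 - t**2) ** (n // 2)))   # True: matches the formula
print(f)   # 1 at t=0, tiny already at t=±0.5: a sharp peak at t=0
```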
Aleksandr Arkhipov is surprised that the permanent anti-concentration conjecture is so hard to prove. It is known to be true for the determinant, and the determinant seems to be statistically identical to the permanent, as far as we can numerically test
#LTQI
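The kind of numerical test mentioned can be sketched crudely: sample complex Gaussian matrices and compare how often |det| and |perm| land close to zero (sizes, thresholds, and trial counts are arbitrary choices of mine, not the speaker's protocol):

```python
import itertools
import numpy as np

def perm(M):
    """Permanent by brute force over permutations (small n only)."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(3)
n, trials = 4, 1000
dets, perms = [], []
for _ in range(trials):
    G = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    dets.append(abs(np.linalg.det(G)))
    perms.append(abs(perm(G)))

# Anti-concentration probe: fraction of samples far below the typical scale.
frac_det = np.mean(np.array(dets) < 0.1)
frac_perm = np.mean(np.array(perms) < 0.1)
print(frac_det, frac_perm)   # both small: neither concentrates near 0
```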
Aleksandr Arkhipov moves on to verification. The problems:
• perm ∉ NP (unlike factoring)
• perm hard to compute
• sampling occurs over an exponentially large space
#LTQI
Aleksandr Arkhipov: Several strategies:
1) Look at small size, but not scalable
2) Rig the network : make one outcome very likely, but hide which one. This is impossible: if P>1-1/poly ⇒ block structure
#LTQI
Aleksandr Arkhipov:
3) Statistical verification: check the output obeys expected statistics we can compute. This is not adversary-resistant, but useful to test experiments.
E.g. row-norm verification: submatrix rows with large norm are likelier to output a photon
#LTQI
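The idea behind row-norm verification can be illustrated by checking that the product of squared row norms correlates with |perm|² over random Gaussian submatrices (a rough sketch with my own parameters, not the actual verification protocol):

```python
import itertools
import numpy as np

def perm(M):
    """Permanent by brute force over permutations (small n only)."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(4)
n, trials = 3, 500
R, P = [], []
for _ in range(trials):
    G = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    R.append(np.prod(np.linalg.norm(G, axis=1) ** 2))  # product of row norms²
    P.append(abs(perm(G)) ** 2)                        # ∝ outcome probability

r = np.corrcoef(R, P)[0, 1]
print(r)   # positive: large row norms do make the outcome likelier
```

Since the row norms are cheap to compute classically, this gives a sanity check on experimental samples without evaluating any permanent.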
Aleksandr Arkhipov: One can also look at the collision statistics, using the boson birthday paradox: √(m ln2) instead of the classical √(2m ln2).
Also look at k-fold collisions
This checks a regime where we don’t know whether BosonSampling is hard, but it can test the device
Aleksandr Arkhipov:
Other statistics: Fourier sampling methods (slit-interference-like, to force some values to be 0), generalized to linear statistics, like 5n₁–3n₂+7n₃
#LTQI