Seth Lloyd from @MIT is visiting us at LIP6 and presents his work with András Gilyén (from @QuSoftAmsterdam ) and @ewintang (from @UW) on Quantum-inspired low-rank stochastic regression with logarithmic dependence on the dimension arXiv:1811.04909 arxiv.org/abs/1811.04909
#LTQI
@MIT @QuSoftAmsterdam @ewintang @UW Seth Lloyd presents the recommendation problem. For m users and n movies, an n×m matrix A encodes the movie preferences. The goal is to recommend movies a user will like, given their previous preferences.
A is assumed to be low rank (the rank is the effective number of genres).
#LTQI
@MIT @QuSoftAmsterdam @ewintang @UW Seth Lloyd: The @netflix way to solve this problem is through singular value decomposition of matrix A.

A = ∑_l σ_l U^l (V^l)^T
σ_l: singular values
U^l and V^l: left/right singular vectors
A V^l = σ_l U^l
In short, (V^l)^T b = p_l encodes the weighted preference for genre l
#LTQI
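A quick numpy sketch of the SVD picture above (my own illustration, not from the talk): decompose a small synthetic preference matrix, project one user's preference vector b onto the right singular vectors to get per-genre weights p_l = (V^l)^T b, and reproject to get recommendation scores. I store A as users × movies here so that the genre projection uses V as in the tweet; all sizes and names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
m_users, n_movies, k = 30, 50, 3           # tiny synthetic example

# Approximately rank-k preference matrix, stored here as users x movies
A = rng.normal(size=(m_users, k)) @ rng.normal(size=(k, n_movies))

# SVD: A = sum_l sigma[l] * outer(U[:, l], Vt[l, :])
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

# b: one user's (noisy) preference vector over movies
b = A[0, :] + 0.1 * rng.normal(size=n_movies)

# p_l = (V^l)^T b : weighted preference for "genre" l (keep the top-k genres)
p = Vt[:k, :] @ b

# Recommendation scores: reproject the genre weights into movie space
scores = Vt[:k, :].T @ p
print(np.argsort(scores)[::-1][:5])        # indices of the 5 top-scoring movies
```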
@MIT @QuSoftAmsterdam @ewintang @UW @netflix Seth Lloyd: For a long time, the classical algorithms ran in O(mn) to O((mn)²).
The quantum algorithm runs in O(log(mn)) for low-rank matrices, sampling |V^l⟩ with weight |⟨V^l|b⟩|².
Last spring, @ewintang found a classical algorithm in O(polylog(mn))
#LTQI
@MIT @QuSoftAmsterdam @ewintang @UW @netflix Seth Lloyd was working on a portfolio-optimization algorithm based on the pseudo-inverse, and wondered whether @ewintang's technique applied. It did, hence this paper.
#LTQI
@MIT @QuSoftAmsterdam @ewintang @UW @netflix Seth Lloyd: This method works only with low-rank matrices/operations. It works for sampling, but e.g. you cannot classically “simulate” the QFT of a vector (⇒ you cannot dequantize Shor), and you probably cannot use it for quantum simulation (which needs the exponential of a high-rank matrix). #LTQI
@MIT @QuSoftAmsterdam @ewintang @UW @netflix I think I misunderstood Seth Lloyd. His quantum portfolio algorithm arXiv:1811.04909 arxiv.org/abs/1811.04909 needs the evaluation of a high-rank observable O, so @ewintang's method does not apply.
#LTQI
@MIT @QuSoftAmsterdam @ewintang @UW @netflix Seth Lloyd: «In short, @ewintang does not harm much of the quantum computing community, except myself, and maybe Kerenidis and Prakash»
#LTQI
@MIT @QuSoftAmsterdam @ewintang @UW @netflix @EwingTang Seth Lloyd now explains how the algorithm works. The pseudo inverse is
A⁺ = ∑_l σ_l⁻¹ V^l (U^l)^T
min_x |Ax − b| is achieved by x = A⁺b

Goal: given b and the entries a_ij of A, sample entries x_i of x with probability ∝ x_i² (ℓ² sampling)
#LTQI
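To make the goal concrete, here is a dense linear-algebra sketch (this is exactly what the quantum-inspired algorithm avoids doing explicitly): compute x = A⁺b through the SVD and then draw ℓ² samples of its entries, i.e. output index i with probability x_i²/‖x‖². Sizes and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 200, 150, 4

# Low-rank A and an arbitrary right-hand side b
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))
b = rng.normal(size=m)

# Pseudo-inverse through the SVD: A^+ b = sum_l sigma_l^-1 V^l (U^l)^T b
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(sigma > sigma[0] * 1e-10))            # numerical rank
x = Vt[:r].T @ ((U[:, :r].T @ b) / sigma[:r])        # x = A^+ b, minimises |Ax - b|

# l^2 sampling: output index i with probability x_i^2 / ||x||^2
probs = x**2 / np.sum(x**2)
print(rng.choice(n, size=10, p=probs))
```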
@MIT @QuSoftAmsterdam @ewintang @UW @netflix @EwingTang Seth Lloyd: Assume ℓ² sampling access to the rows and columns of A and to the entries of b; can we sample from
x = A⁺b = ∑_l σ_l⁻¹ V^l ((U^l)^T b)

Fact (FKV 2004 + @ewintang): one can do this with a number of samples from A and b that is independent of m and n, and depends only on the rank k and on κ = max(σ_l⁻¹). #LTQI
@MIT @QuSoftAmsterdam @ewintang @UW @netflix @EwingTang Seth Lloyd: ℓ² sampling is key; it can be seen as access to a database. Preparing that database costs at least nm (for both the quantum and the classical algorithm).
#LTQI
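A sketch of what "ℓ² sampling access as a database" could look like classically (my illustration; the paper's data structure may differ): a single O(nm) pass stores the squared row norms and ‖A‖_F², after which one can sample a row index i with probability ‖Aⁱ‖²/‖A‖_F² and an entry j within row i with probability A_ij²/‖Aⁱ‖².

```python
import numpy as np

class L2SamplingAccess:
    """Toy l^2-sampling 'database' for a dense matrix A (O(nm) preparation)."""

    def __init__(self, A, rng=None):
        self.A = np.asarray(A, dtype=float)
        self.rng = rng or np.random.default_rng()
        self.row_sq = np.sum(self.A**2, axis=1)   # ||A^i||^2 for each row i
        self.fro_sq = self.row_sq.sum()           # ||A||_F^2

    def sample_row(self):
        # Row i with probability ||A^i||^2 / ||A||_F^2
        return self.rng.choice(self.A.shape[0], p=self.row_sq / self.fro_sq)

    def sample_entry_in_row(self, i):
        # Column j with probability A_ij^2 / ||A^i||^2
        return self.rng.choice(self.A.shape[1], p=self.A[i]**2 / self.row_sq[i])

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 4))
db = L2SamplingAccess(A, rng)
i = db.sample_row()
print(i, db.sample_entry_in_row(i))
```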
@MIT @QuSoftAmsterdam @ewintang @UW @netflix @EwingTang Seth Lloyd: ℓ² sampling of r > k rows of A → an r×n matrix R
Rⁱ = ‖A‖_F Aⁱ / (√r ‖Aⁱ‖)

⟨R†R⟩ = A†A (in expectation over the sampled rows)
R's SVD is an approximate version of A's.
But this decomposition still depends on n.
#LTQI
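A numerical check of the row-sampling step, under my reading of the formula above: draw r rows i.i.d. with probability ‖Aⁱ‖²/‖A‖_F², rescale each as Rⁱ = ‖A‖_F Aⁱ/(√r ‖Aⁱ‖), and verify that R†R ≈ A†A and that R's top singular values approximate A's. The test matrix and r are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k, r = 500, 80, 3, 60

A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))   # low-rank test matrix

row_sq = np.sum(A**2, axis=1)
fro = np.sqrt(row_sq.sum())

# Sample r row indices with prob ||A^i||^2/||A||_F^2 and rescale:
# R^i = ||A||_F * A^i / (sqrt(r) * ||A^i||)
idx = rng.choice(m, size=r, p=row_sq / row_sq.sum())
R = (fro / (np.sqrt(r) * np.linalg.norm(A[idx], axis=1)))[:, None] * A[idx]

# E[R^T R] = A^T A; check how close this particular draw gets
print(np.linalg.norm(R.T @ R - A.T @ A) / np.linalg.norm(A.T @ A))

# R's top singular values approximate A's
print(np.linalg.svd(R, compute_uv=False)[:k])
print(np.linalg.svd(A, compute_uv=False)[:k])
```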
@MIT @QuSoftAmsterdam @ewintang @UW @netflix @EwingTang Seth Lloyd: So we now sample s columns of R, giving an r×s matrix S with approximately the same singular values and singular vectors. This allows us to approximately sample from U^l and V^l.

#LTQI
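Continuing the sketch with the column step: applying the same ℓ² row-sampling rule to Rᵀ gives an r×s matrix whose SVD is cheap to compute and whose top singular values approximate those of A. This is my compressed illustration of the FKV idea, not the paper's exact procedure.

```python
import numpy as np

def l2_sketch_rows(M, t, rng):
    """Keep t l^2-sampled, rescaled rows of M (same rule as the row step above)."""
    sq = np.sum(M**2, axis=1)
    idx = rng.choice(M.shape[0], size=t, p=sq / sq.sum())
    scale = np.sqrt(sq.sum()) / (np.sqrt(t) * np.linalg.norm(M[idx], axis=1))
    return scale[:, None] * M[idx]

rng = np.random.default_rng(4)
m, n, k, r, s = 500, 300, 3, 60, 60
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))

R = l2_sketch_rows(A, r, rng)        # r x n   (row step from the previous tweet)
S = l2_sketch_rows(R.T, s, rng).T    # r x s   (column step: sketch R's columns)

# The small r x s matrix has approximately the same top singular values as A
print(np.linalg.svd(S, compute_uv=False)[:k])
print(np.linalg.svd(A, compute_uv=False)[:k])
```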
@MIT @QuSoftAmsterdam @ewintang @UW @netflix @EwingTang Seth Lloyd: Reminder: the goal is sampling from
x = ∑_l σ_l⁻¹ V^l ((U^l)^T b).
@ewintang showed how to estimate (U^l)^T b from estimates of U^l and b.
Sampling from V^l and estimating σ_l through the SVD procedure above then allows sampling from x.
#LTQI
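Putting the pieces together in a dense "cheat" version, just to show the structure of this last step (in the actual algorithm every dense product below is replaced by a few ℓ² samples and estimates): get approximate σ_l and V^l from the sketch's SVD, estimate (U^l)^T b via (V^l)^T A^T b / σ_l, form x ≈ ∑_l σ_l⁻¹ V^l (U^l)^T b, and ℓ²-sample its entries. This combination is my own reading of how the pieces fit, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k, r = 500, 80, 3, 80

A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))
b = A @ rng.normal(size=n) + 0.01 * rng.normal(size=m)   # b close to range(A)

# Row sketch R (as in the previous tweets)
row_sq = np.sum(A**2, axis=1)
idx = rng.choice(m, size=r, p=row_sq / row_sq.sum())
R = (np.sqrt(row_sq.sum()) / (np.sqrt(r) * np.linalg.norm(A[idx], axis=1)))[:, None] * A[idx]

# Approximate singular values / right singular vectors of A from R's SVD
_, sig_hat, Vt_hat = np.linalg.svd(R, full_matrices=False)
sig_hat, Vt_hat = sig_hat[:k], Vt_hat[:k]

# (U^l)^T b ~ (V^l)^T A^T b / sigma_l   (dense here; estimated from a few
# l^2 samples in the real algorithm)
u_dot_b = (Vt_hat @ (A.T @ b)) / sig_hat

# x ~ sum_l sigma_l^-1 V^l (U^l)^T b, then l^2-sample its entries
x_hat = Vt_hat.T @ (u_dot_b / sig_hat)
print(np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))        # residual check
print(rng.choice(n, size=5, p=x_hat**2 / np.sum(x_hat**2)))     # l^2 samples of x
```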
@MIT @QuSoftAmsterdam @ewintang @UW @netflix @EwingTang Seth Lloyd: the dependency is then polylog in n, m, but with a prefactor with huge exponents in k and κ:
Õ(κ¹⁶ k⁶/ϵ⁶ ⋅ log(mn))
The κ¹⁶ exponent seems pretty loose, but the k⁶ and ϵ⁶ seem pretty tight.
The quantum algorithm is in
O(κ k log(nm) log(1/ϵ)), which is much better #LTQI
@MIT @QuSoftAmsterdam @ewintang @UW @netflix @EwingTang Seth Lloyd: actually, the nasty prefactors mean that, currently, even standard classical algorithms are often better than this. #LTQI
@MIT @QuSoftAmsterdam @ewintang @UW @netflix @EwingTang Seth Lloyd: These quantum-inspired classical algorithms make quantum information much richer, suggesting new ways to solve problems even if we do not have access to a quantum computer.
#LTQI
@MIT @QuSoftAmsterdam @ewintang @UW @netflix @EwingTang Seth Lloyd: A remark from Jonas (from @IRIF_Paris), with which everyone agreed: given the improvement in the k and κ exponents, if Kerenidis and Prakash's work had come after @ewintang's, it would have been seen as a good quantum speedup, much better than Grover's.
#LTQI