"Can quantum androids dream of quantum electrodynmic sheep?"๐ค๐คโ๏ธ๐ญ๐โก๏ธ
We explore this in a new paper on quantum-probabilistic generative models and information geometry from our former QML group @Theteamatx
Time for an epic thread!
scirate.com/arxiv/2206.046…
For some context: quantum-probabilistic generative models (QPGMs) are a class of hybrid ML models that combine classical probabilistic ML models (e.g. EBMs, energy-based models) with quantum neural networks (QNNs). As we show in this new paper, these models turn out to be optimal in many ways.
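To make that concrete, here's a minimal numpy sketch of the QPGM idea: a classical distribution over basis states, rotated by a unitary. All names here are made up for illustration; this is not the paper's or the library's actual API.

```python
import numpy as np

def classical_probs(theta):
    """Softmax of negative energies: a toy stand-in for an EBM's Boltzmann weights."""
    weights = np.exp(-theta)
    return weights / weights.sum()

def random_unitary(dim, seed=0):
    """Toy stand-in for a parameterized quantum circuit U(phi)."""
    rng = np.random.default_rng(seed)
    mat = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, _ = np.linalg.qr(mat)
    return q

theta = np.array([0.0, 1.0, 2.0, 3.0])      # one energy per basis state (2 qubits)
p = classical_probs(theta)                  # classical probabilistic part
U = random_unitary(4)                       # quantum part
rho = U @ np.diag(p) @ U.conj().T           # the model's mixed state
print(np.isclose(np.trace(rho).real, 1.0))  # True: a valid density matrix
```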
It's really important to be able to model *mixed states* (probabilistic mixtures of quantum states), as most states in nature are not pure states at zero temperature! Nature is a mix of probabilistic and quantum, so your models of it should be too!
So given quantum data or target physics you want this model to mimic, how do you train these?
Q: What loss do you use to compare the model and target?
A: Quantum relative entropy is king.
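For reference, here's that loss in a dense-matrix numpy sketch (assuming full-rank states so the matrix logarithms exist); real QHBM training estimates it from samples rather than dense matrices.

```python
import numpy as np
from scipy.linalg import logm

def quantum_relative_entropy(rho, sigma):
    """D(rho || sigma) = Tr[rho (log rho - log sigma)], assuming full rank."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

rho = np.eye(2) / 2                          # maximally mixed qubit
sigma = np.diag([0.7, 0.3])
print(quantum_relative_entropy(rho, rho))    # 0.0: identical states
print(quantum_relative_entropy(rho, sigma))  # > 0: states differ
```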
Q: What kind of QPGM works best?
A: EBMs → QHBMs
Quantum Hamiltonian-Based Models (QHBMs) are great because they factor the model into a classical EBM, which runs MCMC to sample from a *classical* Boltzmann distribution, plus a purely unitary QNN, thus using quantum computers to circumvent the sign problem when sampling quantum states.
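Schematically, sampling a QHBM looks like this (toy numpy stand-ins everywhere; a real EBM would use MCMC rather than exact probabilities):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.15, 0.05])          # EBM's Boltzmann distribution
U = np.linalg.qr(rng.normal(size=(4, 4)))[0]  # stand-in for the unitary QNN

x = rng.choice(4, p=p)            # classical sample: no quantum sign problem here
basis_state = np.zeros(4)
basis_state[x] = 1.0              # prepare the computational basis state |x>
psi = U @ basis_state             # apply the QNN: a pure-state sample of the mixture
print(x, np.round(np.abs(psi) ** 2, 3))
```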
What's cool is that, due to the *diagonal* exponential (Gibbs) parameterization, these models have very clean unbiased estimators for the gradients of both the forwards quantum relative entropy and the backwards one, allowing for both generation and learning.
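Here's a dense numpy check of why the diagonal Gibbs form is so friendly: assuming a model of the form σ = U e^{-K} U†/Z with K diagonal, the forward relative entropy splits into a data-entropy term (no model parameters), an expectation value, and a classical log-partition function, each of which can be estimated from samples.

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(1)
dim = 4

# Model: sigma = U e^{-K} U^dag / Z, with K diagonal (the EBM's energies).
k = rng.normal(size=dim)
K = np.diag(k)
U, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
Z = np.exp(-k).sum()
sigma = U @ np.diag(np.exp(-k) / Z) @ U.conj().T

# Data: a random full-rank density matrix.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = A @ A.conj().T
rho /= np.trace(rho).real

lhs = np.trace(rho @ (logm(rho) - logm(sigma))).real
rhs = (np.trace(rho @ logm(rho)).real             # -S(rho): no model parameters
       + np.trace(rho @ U @ K @ U.conj().T).real  # expectation value, sampleable
       + np.log(Z))                               # classical log-partition function
print(np.isclose(lhs, rhs))                       # True: the loss decomposes cleanly
```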
How do you train these many-headed thermal beasts?
Regular gradient descent can struggle; small moves in parameter space can lead to big, rapid moves in *quantum state space*.
Your optimizer can then get lost in the sauce and struggle to head downhill.
So what's the trick?
First, use small changes in quantum relative entropy as your notion of length: take a circle in state space, pull it back to parameter space, and it becomes squished into an ellipsoid; unsquishing your coordinates (Euclideanizing) then means small parameter moves map to small state-space moves.
That's great! What's the catch? Well, in n dimensions an ellipsoid is an n×n matrix (think of n semi-major axis vectors of n dimensions each), and that's a lot of parameters to estimate at every step.
That's quantum-probabilistic natural gradient descent (QPNGD): theoretically perfect, but practically slow.
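For intuition, here's a toy NGD update in numpy: the metric G is the "ellipsoid", and estimating and solving it at every step is the expensive part.

```python
import numpy as np

def ngd_step(theta, grad, metric, lr=0.1):
    """Natural gradient step: rescale the gradient by the inverse metric."""
    direction = np.linalg.solve(metric, grad)  # solve G d = grad, no explicit inverse
    return theta - lr * direction

n = 3
theta = np.zeros(n)
grad = np.ones(n)
G = np.diag([1.0, 4.0, 9.0])     # a squished pullback metric
print(ngd_step(theta, grad, G))  # bigger steps along less-squished axes
```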
What if we could have our cake and eat it too?
The trick is to use the bowl's pull like a rubber band attached to an anchor point, and run gradient descent as an inner loop to find an NGD-equivalent update. This is Quantum-Probabilistic *Mirror* Descent (QPMD).
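A generic sketch of that anchored inner loop, with a quadratic divergence standing in for the relative-entropy one (hypothetical names; not the paper's implementation): its fixed point recovers the NGD update without ever forming or inverting the metric.

```python
import numpy as np

def mirror_step(theta_anchor, grad, divergence_grad, lr=0.5, inner_steps=50, inner_lr=0.1):
    """Approximately solve: min_theta  lr * <grad, theta> + D(theta, anchor)."""
    theta = theta_anchor.copy()
    for _ in range(inner_steps):
        inner_grad = lr * grad + divergence_grad(theta, theta_anchor)
        theta = theta - inner_lr * inner_grad  # first-order inner loop: gradients only
    return theta

# Stand-in divergence: D = 0.5 (theta - anchor)^T G (theta - anchor),
# whose gradient is G (theta - anchor); the fixed point is the NGD update.
G = np.diag([1.0, 4.0])
div_grad = lambda t, a: G @ (t - a)
print(mirror_step(np.zeros(2), np.array([1.0, 1.0]), div_grad))
# -> approximately -lr * G^{-1} grad = [-0.5, -0.125]
```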
In our numerics, QPMD was *goated*, even compared to full QPNGD (see left plot).
It's a first-order method (it only needs gradients) that's asymptotically equivalent to *full* QPNGD.
Correcting GD steps with QPMD yields way better optimization trajectories (right plot).
This is where things get nutty. Turns out our approach uses a particular metric called the Kubo-Mori metric, and this is the *only* QML metric that achieves theoretically perfect (Fisher) efficiency! See @FarisSbahi's thread for the details.
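You can see the Kubo-Mori connection numerically: the Hessian of the relative entropy at coincidence is the metric, and on a commuting (thermal qubit) family it reduces to the classical Fisher information. A finite-difference sketch:

```python
import numpy as np

def thermal_probs(theta):
    """Diagonal (commuting) family: rho(theta) = e^{-theta Z} / Tr e^{-theta Z}."""
    w = np.exp([-theta, theta])
    return w / w.sum()

def rel_entropy(p, q):
    return np.sum(p * (np.log(p) - np.log(q)))

theta0, h = 0.3, 1e-3
f = lambda t: rel_entropy(thermal_probs(theta0), thermal_probs(t))
hessian = (f(theta0 + h) - 2 * f(theta0) + f(theta0 - h)) / h**2
print(hessian, 1 - np.tanh(theta0)**2)  # both ~0.915: matches the Fisher information
```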
What's the catch? Well, to achieve this perfect efficiency and provable convergence guarantees practically, you need to be in the convex bowl near the optimum of the loss landscape.
Are there scenarios where that happens? Yes! A huge number of scenarios of interest!
What if you wanted to continuously *evolve* a quantum state? Could you just continuously surf the landscape as it shifts under you?
Yes! You just gotta stay in the bowl and ride the wave as the landscape shifts between *tasks*: a form of geometric transfer learning.
You can imagine most tasks as following a *path* in some task space. If you have a sequence of close-together tasks along this path, you can use our anchored rubber band (QPMD) to smoothly transfer your parameters from one task to the next, chaining the wins.
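Schematically, the chaining looks like this (a toy quadratic "task path", with plain gradient descent standing in for QPMD):

```python
import numpy as np

def train(theta_init, loss_grad, steps=200, lr=0.1):
    """Generic optimizer stand-in (plain GD here; QPMD would slot in instead)."""
    theta = theta_init.copy()
    for _ in range(steps):
        theta = theta - lr * loss_grad(theta)
    return theta

def task_grad(t):
    """A 'path' of tasks: quadratic bowls whose minimum drifts with t."""
    target = np.array([t, np.sin(t)])
    return lambda theta: theta - target

theta = np.zeros(2)
for t in np.linspace(0.0, 2.0, 11):     # close-together tasks along the path
    theta = train(theta, task_grad(t))  # warm start from the previous task's optimum
print(np.round(theta, 3))               # ends at the final task's optimum [2, sin(2)]
```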
Great! But what can you do with this? Well, you can dream up just about any quantum simulation that you can parameterize, and make a quantum-simulated "movie".
@Devs_FX fans can appreciate.
What happens when you try to generatively simulate, for example, time evolution? Here's a generatively modelled density matrix over time for a magnetic quantum system (the transverse field Ising model).
Turns out we can recursively, variationally learn to integrate time dynamics using our QVARTZ method (recursive variational quantum time evolution) by evolving models a little and re-learning them. This is a big deal because the quantum circuits can remain small.
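Here's the recursion pattern in a toy dense-matrix form; step (2) is exact here, whereas in QVARTZ it's a variational re-fit of a fresh, small model:

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])    # toy Hamiltonian
U_dt = expm(-1j * H * 0.1)                 # short-time evolution step

rho = np.diag([0.9, 0.1]).astype(complex)  # current model state
for _ in range(10):
    rho = U_dt @ rho @ U_dt.conj().T       # (1) evolve a little
    # (2) "re-learn": in QVARTZ, variationally re-fit a fresh small QHBM
    # to the evolved state; here we simply keep the exact state.
print(np.trace(rho @ H).real)              # energy is conserved: 0.8
```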
What about *imaginary* time evolution, aka simulating a slow cooling process? We can do that too! Sequentially rather than recursively. We find it's way better to slowly ramp up the *coldness* (inverse temperature) than to try to go straight for a target temperature.
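A classical toy version of that annealing schedule (exact Gibbs probabilities standing in for a trained model): successive targets stay close together, so each warm-started fit is easy.

```python
import numpy as np

energies = np.array([0.0, 0.2, 1.0, 3.0])  # toy spectrum

def gibbs(beta):
    w = np.exp(-beta * energies)
    return w / w.sum()

p_model = gibbs(0.0)                    # start hot (uniform)
for beta in np.linspace(0.5, 5.0, 10):  # slowly ramp up the coldness
    p_target = gibbs(beta)
    kl = np.sum(p_target * np.log(p_target / p_model))
    print(f"beta={beta:.1f}  KL(target || current model) = {kl:.4f}")  # stays small
    p_model = p_target                  # stand-in for a cheap warm-started re-fit
```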
We call this approach to imaginary time evolution META-VQT. We think it could be a serious disruption for anyone interested even in basic VQE. It's a special case of our geometric transfer learning methods.
(FYI: imaginary time, Euclidean time, and coldness are all the same thing.)
Phew!
That was a lot! Tons of new possibilities with these methods... this is just the beginning of the story... are you ready to jump into *the simulation* with us?
If so...
"Can I just use this stuff?" ๐งโ๐ป๐จโ๐ป๐ฉโ๐ปโ๏ธ๐ฒ๐ค Yes! We released an accompanying open-source library (QHBM lib), now accessible on GitHub ๐ check it out and start quantum hacking of all sorts of quantum spaces you can dream of! ๐คฏ
github.com/google/qhbm-li…
I could keep going...
Ending the thread:
This broad research program was a long road over ~2 years. Huge congrats to the whole team: @FarisSbahi leading the paper, with @zaqqwerty_ai as lead infra dev; huge shout-out to @dberri18, @geoffrey_roeder, Jae, and @sahilpx for all their efforts & help.
Aside/Easter egg (spoilers for @Devs_FX fans)
I actually came up with QVARTZ from watching DEVS.
@Theteamatx Here is the arxiv link directly for those who want to save a click
arxiv.org/abs/2206.04663
As an aside: huge shout-out to my former QML Padawan turned quantum Jedi @FarisSbahi for going HAM taming the quantum metric zoo and proving many important theoretical results not covered in this thread!
If you don't like reading threads, there's also a recorded talk of mine from a few months back (at #QHack 2022) on the topics and intuitions of this paper.
Congrats! You've successfully reached the end of the thread. Before you try to read the paper: it's dangerous to go alone, take this!
And finally, when you feel ready for an even greater adventure, you can head to arXiv and dive deep into the maths.
glhf! DMs open for questions!