Gill
Founder & CEO @Extropic_AI • prev: Physics & AI R&D @ (Alphabet X / Google) • Founder @ TensorFlow Quantum • (PhD(ABD) + MMath) @ (IQC / UWaterloo / PI) • e/acc

Jun 10, 2022, 32 tweets

"Can quantum androids dream of quantum electrodynmic sheep?"๐Ÿค”๐Ÿค–โš›๏ธ๐Ÿ’ญ๐Ÿ‘โšก๏ธ

We explore this in a new paper on quantum-probabilistic generative models and information geometry, from our former QML group @Theteamatx!

Time for an epic thread:
scirate.com/arxiv/2206.046…

For some context: quantum-probabilistic generative models (QPGMs) are a class of hybrid ML models which combine classical probabilistic ML models (e.g. EBMs) and quantum neural networks (QNNs). As we show in this new paper, these models turn out to be optimal in several ways.
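To ground the "classical EBM" half of that hybrid, here's a toy energy-based model over spins, sampled with Metropolis MCMC. A minimal NumPy sketch for intuition only; the energy function and names are illustrative, not from the paper or library.

```python
import numpy as np

def metropolis_sample(energy_fn, n_spins, n_steps=1000, seed=0):
    """Draw one approximate sample from p(x) ~ exp(-energy_fn(x))."""
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=n_spins)
    for _ in range(n_steps):
        i = rng.integers(n_spins)
        proposal = x.copy()
        proposal[i] *= -1  # flip one spin
        # Metropolis rule: always accept downhill moves, sometimes uphill.
        if rng.random() < np.exp(energy_fn(x) - energy_fn(proposal)):
            x = proposal
    return x

# Example energy: ferromagnetic nearest-neighbour Ising chain.
sample = metropolis_sample(lambda s: -np.sum(s[:-1] * s[1:]), n_spins=8)
```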

It's really important to be able to model *mixed states* (probabilistic mixtures of quantum states), as most states in nature are not pure, zero-temperature states! Nature is a mix of probabilistic and quantum, so your models of it should be too!

So given quantum data or target physics you want this model to mimic, how do you train it?

Q: What loss do you use to compare the model and target?
A: Quantum relative entropy is king.
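For concreteness, here's quantum relative entropy, D(rho||sigma) = Tr[rho (log rho - log sigma)], as a minimal NumPy sketch (assuming full-rank density matrices; this is for intuition, not the library's API):

```python
import numpy as np
from scipy.linalg import logm

def quantum_relative_entropy(rho, sigma):
    """D(rho || sigma) = Tr[rho (log rho - log sigma)] for full-rank states."""
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

# Sanity check: D(rho || rho) = 0.
rho = np.array([[0.7, 0.0], [0.0, 0.3]])
assert abs(quantum_relative_entropy(rho, rho)) < 1e-12
```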

Q: What kind of QPGM works best?
A: E?... EBMs --> QHBMs

Quantum Hamiltonian-Based Models (QHBMs) are great because they separate the model into a classical EBM, which runs MCMC to sample from a *classical* Boltzmann distribution, plus a purely unitary QNN, thus using quantum computers to circumvent the sign problem when sampling quantum states.
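A toy version of that factorization, assuming the structure described above (classical Boltzmann weights on the diagonal, rotated by a unitary QNN); the names are illustrative stand-ins, not the qhbm-lib API:

```python
import numpy as np

def qhbm_density_matrix(energies, qnn_unitary):
    """rho(theta, phi) = U(phi) @ diag(exp(-E_theta)/Z) @ U(phi)^dagger."""
    probs = np.exp(-energies)
    probs /= probs.sum()  # classical Gibbs distribution from the EBM
    return qnn_unitary @ np.diag(probs).astype(complex) @ qnn_unitary.conj().T

# Example: one qubit, with a Hadamard as a stand-in "QNN".
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
rho = qhbm_density_matrix(np.array([0.0, 1.5]), H)  # a valid mixed state
```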

What's cool is that, due to the *diagonal* exponential (Gibbs) parameterization, these models have very clean unbiased estimators for the gradients of both the forwards quantum relative entropy and the backwards one, allowing for both generating and learning.
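Concretely, here's a hedged sketch of that structure in my own notation (the paper's precise estimators may differ in details). For a QHBM rho_{theta,phi} = U(phi) e^{-K_theta} U(phi)^dagger / Z_theta with diagonal K_theta, the forward relative entropy against a target sigma has gradient

$$
\nabla_\theta\, D\big(\sigma \,\|\, \rho_{\theta,\phi}\big)
= \mathbb{E}_{U^\dagger \sigma U}\big[\nabla_\theta K_\theta\big]
- \mathbb{E}_{p_\theta}\big[\nabla_\theta K_\theta\big],
$$

i.e., pull the target back through the QNN, then compare energy expectations under the data and under the model, exactly the classical EBM pattern of positive phase minus negative phase; both terms can be estimated without bias from samples.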

How do you train these many-headed thermal beasts?

Regular gradient descent can struggle; small moves in parameter space can lead to big/rapid moves in *quantum state space*.

Your optimizer can then get lost in the sauce and struggle to go downhill.

So what's the trick?
First, use small changes in quantum relative entropy as your notion of length: take a small circle in state space and pull it back to parameter space, where it becomes squished into an ellipsoid. Un-squishing your coordinates (Euclideanizing) then means that small steps in parameter space map to small steps in state space.

That's great! What's the catch? Well, in n dimensions an ellipsoid is an n x n matrix, i.e. ~n^2 numbers (think of n semimajor-radius vectors of n dimensions each); that's a lot of parameters to estimate at each step.

That's quantum-probabilistic NGD (QPNGD): theoretically perfect, but practically slow.
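A minimal sketch of a natural-gradient step, assuming you can estimate the metric matrix G (the "ellipsoid") and the loss gradient g at the current parameters:

```python
import numpy as np

def natural_gradient_step(theta, grad, metric, lr=0.1, damping=1e-6):
    """theta <- theta - lr * G^{-1} g; damping keeps G safely invertible."""
    G = metric + damping * np.eye(len(theta))
    return theta - lr * np.linalg.solve(G, grad)
```

The catch from the previous tweet is visible right in the signature: `metric` is n x n, so just estimating it dominates the cost of every step.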

What if we could have our cake and eat it too?

The trick is to use the bowl force like a rubber band with an anchor point, and use gradient descent as an inner loop to find an NGD-equivalent update. This is Quantum-Probabilistic *Mirror* Descent (QPMD).
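A hedged sketch of that anchored update: an inner gradient-descent loop approximately solves the proximal problem argmin_theta <g, theta> + (1/lr) * B(theta, anchor), where the Bregman divergence B plays the role of the rubber band. This mirrors the structure described above, not the paper's exact implementation:

```python
import numpy as np

def mirror_descent_step(anchor, grad, breg_grad, lr=0.1,
                        inner_lr=0.01, inner_steps=200):
    """First-order proximal update anchored at the previous parameters."""
    theta = anchor.copy()
    for _ in range(inner_steps):
        # gradient of <grad, theta> + (1/lr) * B(theta, anchor)
        theta = theta - inner_lr * (grad + breg_grad(theta, anchor) / lr)
    return theta

# With a Euclidean Bregman divergence this recovers a plain GD step;
# swapping in a relative-entropy divergence gives the NGD-like behaviour.
step = mirror_descent_step(np.zeros(3), np.ones(3),
                           breg_grad=lambda t, a: t - a)
```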

In our numerics, QPMD was *goated*, even compared to full QPNGD (see left plot).

It's a first-order method (it only needs gradients) that's asymptotically equivalent to *full* QPNGD.

Correcting GD steps with QPMD yields way better optimization trajectories (right plot).

This is where things get nutty. It turns out that our approach is using a type of metric called the Kubo-Mori metric, and this is the *only* QML metric that achieves theoretically perfect (Fisher) efficiency! See @FarisSbahi's thread for the details.
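For reference, here's the metric in play, stated in my own notation: the Kubo-Mori metric is the Hessian of the quantum relative entropy,

$$
g^{\mathrm{KM}}_{ij}(\theta)
= \partial_i \partial_j\, D\big(\rho_{\theta_0} \,\|\, \rho_{\theta}\big)\Big|_{\theta=\theta_0}
= \mathrm{Tr}\big[\partial_i \rho_\theta\, \partial_j \log \rho_\theta\big],
$$

which is exactly why measuring lengths with small changes in relative entropy, as above, lands you on this metric.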

What's the catch? Well, to achieve this perfect efficiency and provable convergence guarantees in practice, you need to be in the convex bowl near the optimum of the loss landscape.
Are there scenarios where that happens? Yes! A huge number of scenarios of interest!

What if you wanted to continuously *evolve* a quantum state? Could you just continuously surf the landscape as it shifts under you?

Yes! You just gotta stay in the bowl and ride the wave as the landscape shifts between *tasks*: a form of geometric transfer learning.

You can imagine most tasks as following a *path* in some task space. If you have a sequence of close (in task space) tasks along this path, you can use our anchored rubber band (QPMD) to smoothly transfer your parameters from one task to the next, chaining the wins, as sketched below.
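A skeleton of that chaining, assuming an anchored update like the `mirror_descent_step` sketched earlier; `task_grad_fns` is a hypothetical list of per-task gradient oracles, ordered along the path:

```python
import numpy as np

def transfer_along_path(theta0, task_grad_fns, anchored_step):
    """Warm-start each task from the previous optimum and step toward it."""
    theta = np.asarray(theta0, dtype=float)
    for grad_fn in task_grad_fns:  # tasks ordered along the path
        theta = anchored_step(theta, grad_fn(theta))
    return theta
```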

Great! But what can you do with this? Well, you can dream up just about any quantum simulation that you can parameterize, and make a quantum-simulated "movie".

@Devs_FX fans can appreciate.

What happens when you try to generatively simulate, for example, time evolution? Here's a generatively modelled density matrix over time for a magnetic quantum system (the transverse-field Ising model).

Turns out we can recursively, variationally learn to integrate time dynamics using our QVARTZ method (recursive variational quantum time evolution) by evolving models a bit and re-learning them. This is a big deal because the quantum circuits can remain small.
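A toy sketch of that evolve-then-relearn recursion, with exact-diagonalization stand-ins (the hypothetical `refit` callback represents re-learning the model; this is not the paper's QVARTZ code):

```python
import numpy as np
from scipy.linalg import expm

def evolve_and_relearn(rho0, hamiltonian, dt, n_steps, refit):
    """Alternate a short burst of Schrodinger evolution with model re-fitting."""
    U = expm(-1j * hamiltonian * dt)
    rho, history = rho0, [rho0]
    for _ in range(n_steps):
        evolved = U @ rho @ U.conj().T  # push the current model forward a bit
        rho = refit(evolved)            # re-learn a fresh model of the result
        history.append(rho)
    return history
```

Because each re-fit only has to capture a short burst of evolution, the circuits can stay shallow.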

What about *imaginary* time evolution, aka simulating a slow cooling process? We can do that too, sequentially rather than recursively. We see it's way better to slowly go up in *coldness* (inverse temperature) than to try to jump straight to a target temperature.
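And the sequential coldness schedule, as a skeleton; the hypothetical `fit_thermal_state` stands in for any variational thermal-state learner (e.g. a VQT-style routine):

```python
import numpy as np

def anneal_in_coldness(theta0, betas, fit_thermal_state):
    """Step up in inverse temperature beta, warm-starting each fit."""
    theta = theta0
    for beta in np.sort(betas):                 # hot -> cold, small increments
        theta = fit_thermal_state(theta, beta)  # warm start from previous task
    return theta
```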

We call this approach to imaginary time evolution META-VQT. We think it could be a serious disruption for anyone interested even in basic VQE. It's a special case of our geometric transfer learning methods.

(FYI: imaginary/Euclidean time and coldness are all the same thing)

Phew! That was a lot! Tons of new possibilities with these methods... this is just the beginning of the story... are you ready to jump into *the simulation* with us?

If so...

"Can I just use this stuff?" ๐Ÿง‘โ€๐Ÿ’ป๐Ÿ‘จโ€๐Ÿ’ป๐Ÿ‘ฉโ€๐Ÿ’ปโš›๏ธ๐ŸŽฒ๐Ÿค– Yes! We released an accompanying open-source library (QHBM lib), now accessible on GitHub ๐Ÿš€ check it out and start quantum hacking of all sorts of quantum spaces you can dream of! ๐Ÿคฏ
github.com/google/qhbm-liโ€ฆ

I could keep going... but I'm ending the thread here.
This broad research program was a long road over ~2 years. Huge congrats to the whole team: @FarisSbahi leading the paper, @zaqqwerty_ai as lead infra dev, and a huge shout-out to @dberri18, @geoffrey_roeder, Jae, and @sahilpx for all their efforts & help!

Aside/Easter egg (spoilers for @Devs_FX fans):

I actually came up with QVARTZ from watching DEVS!

@Theteamatx Here is the arXiv link directly, for those who want to save a click:
arxiv.org/abs/2206.04663

As an aside: huge shout-out to my former QML Padawan turned quantum Jedi @FarisSbahi for going HAM taming the quantum metric zoo and proving many important theoretical results not covered in this thread!

If you don't like reading threads, there's also a recorded talk of mine from a few months back (at #qhack 2022) on the topics and intuitions of this paper.

Congrats! You've successfully reached the end of the thread. Before you try to read the paper: it's dangerous to go alone, take this!

And finally, when you feel ready for an even greater adventure, you can head to arXiv and dive deep into the maths.

glhf! DMs open for questions!
