"Can quantum androids dream of quantum electrodynmic sheep?"๐Ÿค”๐Ÿค–โš›๏ธ๐Ÿ’ญ๐Ÿ‘โšก๏ธ

We explore this in a new paper on quantum-probabilistic generative models 🎲⚛️🤖💭 and information geometry 🌐📏 from our former QML group @Theteamatx 🥳

Time for an epic thread 👇
scirate.com/arxiv/2206.046…
For some context: quantum-probabilistic generative models (QPGMs) are a class of hybrid ML models which combine classical probabilistic ML models (e.g. EBMs) and quantum neural networks (QNNs). As we show in this new paper, it turns out these types of models are optimal in many ways.
It's really important to be able to model *mixed states* (probabilistic mixtures of quantum states), as most states in nature are not pure states/zero temperature! 🥵🔥🏞️ Nature is a mix of probabilistic 🎲 and quantum ⚛️, hence so should be your models of it! 🤖⚛️🎲
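(Not from the paper's code — just a toy numpy sketch of what a mixed state at finite temperature looks like: a thermal Gibbs density matrix for a single qubit.)

```python
import numpy as np
from scipy.linalg import expm

# Toy example: a *mixed* thermal state rho = exp(-beta * H) / Z for a single
# qubit with Hamiltonian H = Pauli-Z. At finite temperature (finite beta),
# this is a probabilistic mixture, not a pure state.
PAULI_Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def thermal_state(H, beta):
    """Gibbs density matrix at inverse temperature beta."""
    rho = expm(-beta * H)
    return rho / np.trace(rho)

rho = thermal_state(PAULI_Z, beta=1.0)
purity = np.trace(rho @ rho).real  # Tr[rho^2] equals 1 only for pure states
```

Since `purity < 1`, this state is genuinely mixed: no single wavefunction describes it, hence the need for probabilistic model components.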
So given quantum data ⚛️🔍 or target physics 🎯⏳🔥 you want this model to mimic, how do you train it?

Q: What loss do you use to compare the model and target? ⚖️
A: Quantum relative entropy is king 👑

Q: What kind of QPGM works best?
A: E?... EBMs --> QHBMs
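(For the curious, here's a minimal numpy sketch of the loss just named — my own toy example, not the paper's implementation. Quantum relative entropy D(ρ‖σ) = Tr[ρ(log ρ − log σ)] generalizes the classical KL divergence to density matrices.)

```python
import numpy as np
from scipy.linalg import logm

def quantum_relative_entropy(rho, sigma):
    """D(rho || sigma) = Tr[rho (log rho - log sigma)], natural log (nats).
    Both states should be full rank so the matrix logarithm is well-defined."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

rho = np.diag([0.7, 0.3])    # "model" state (diagonal for simplicity)
sigma = np.diag([0.5, 0.5])  # "target": the maximally mixed qubit
d = quantum_relative_entropy(rho, sigma)
```

Like KL, it's nonnegative and zero exactly when the two states coincide, which is what makes it usable as a training loss.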
Quantum Hamiltonian-Based Models (QHBMs) 🎲⚛️🤖🔥 are great because they separate the model into a classical EBM ⚡️ that runs MCMC for sampling from a *classical* Boltzmann distribution 🔥 + a purely unitary QNN 😇, thus using QCs to circumvent the sign problem for sampling quantum states ➖🔚👋
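(Here's the factorization in plain numpy — a hedged stand-in, NOT the actual qhbm-library API: the model state is a classical Boltzmann distribution over bitstrings, rotated by a unitary.)

```python
import numpy as np

# Toy QHBM-style state: rho(theta, phi) = U(phi) @ diag(p_theta) @ U(phi)^dagger,
# where p_theta is the classical Boltzmann distribution of the EBM energies.
# The probabilities are manifestly nonnegative: no sign problem in the sampling.
def qhbm_state(energies, U):
    p = np.exp(-energies)
    p /= p.sum()                        # classical Boltzmann weights
    return U @ np.diag(p) @ U.conj().T  # rotate by the unitary "QNN"

energies = np.array([0.0, 1.0])                 # toy 1-qubit EBM energies E_theta(x)
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # stand-in unitary (a Hadamard)
rho = qhbm_state(energies, U)
```

The split does the advertised division of labor: the eigenvalues of `rho` are exactly the classical Boltzmann weights (the EBM's job), while the eigenvectors come entirely from the unitary (the QNN's job).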
What's cool is that due to the *diagonal* ↘️ exponential (Gibbs 🔥) parameterization, these models have very clean unbiased estimators 🤏📏 for the gradients ⤵️ of both the forwards ➡️ quantum relative entropy and the backwards one ⬅️, allowing for both generation and learning.
How do you train these many-headed thermal beasts? 🐉🔥⚛️

Regular gradient descent can struggle; small moves 🎛️🤏 in parameter space can lead to big/rapid moves 🏎️💨 in *quantum state space* 😱😩

Your optimizer can then get lost in the sauce 😵‍💫🥴🥣 and struggle to go downhill 📉
So what's the trick?
First, use small changes in quantum relative entropy as your notion of length: take a circle ⭕️🌐, pull it back to parameter space, and it becomes squished into an ellipsoid 🍉. Un-squishing your coordinates (📏 Euclideanizing) then means small 🤏➡️📏 maps to smol 👍
That's great! What's the catch? Well, in n dimensions an ellipsoid 🍉 is described by an n×n matrix (think of n semi-major radius vectors of n dimensions each); that's a lot of parameters to estimate at each step 🫤⏳💀⚰️

That's quantum-probabilistic NGD (QPNGD): perfect theoretically 🏆, but slow practically ⏳😖
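(To see why the metric is worth the cost, here's a generic natural-gradient sketch on a toy quadratic — illustrative only; in the paper's setting the metric would be the quantum/Kubo-Mori one, and estimating its n×n entries is exactly the expensive part.)

```python
import numpy as np

# Natural gradient descent: precondition the gradient by the inverse metric G,
# so each step has a fixed "information length" instead of a fixed parameter length.
def ngd_step(theta, grad, G, lr=0.1):
    return theta - lr * np.linalg.solve(G, grad)

# Badly conditioned quadratic loss L = 0.5 * theta^T A theta: plain GD with this
# learning rate would blow up along the stiff direction; NGD equalizes both.
A = np.diag([100.0, 1.0])
theta = np.array([1.0, 1.0])
for _ in range(50):
    theta = ngd_step(theta, A @ theta, G=A)  # using the exact metric here
```

With the exact metric, every direction contracts at the same rate, which is the "un-squishing" of the ellipsoid in optimizer form.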
What if we could have our cake and eat it too? 🤔🎂🏆

The trick is to use the bowl force 🥣🖖 like a rubber band 🪀 with an anchor point, and use gradient descent 📉 as an inner loop to find an NGD-equivalent update. This is Quantum-Probabilistic *Mirror* Descent (QPMD). 👑🦾
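(Here's the mirror-descent skeleton in toy form — my own sketch, not the paper's QPMD. Each outer step minimizes ⟨g, x⟩ + (1/η)·D(x, anchor) with an inner gradient-descent loop; the divergence D is the rubber band pulling back to the anchor. I use squared Euclidean distance for D so it runs in a few lines; QPMD uses quantum relative entropy.)

```python
import numpy as np

def mirror_step(anchor, g, eta=0.5, inner_iters=100, inner_lr=0.1):
    """One outer mirror-descent step: solve the proximal subproblem
    argmin_x <g, x> + (1/(2*eta)) * ||x - anchor||^2 by inner-loop GD."""
    x = anchor.copy()
    for _ in range(inner_iters):
        inner_grad = g + (x - anchor) / eta  # gradient of the subproblem
        x -= inner_lr * inner_grad
    return x

# Outer loop on the toy loss L(x) = ||x||^2
x = np.array([2.0, -3.0])
for _ in range(40):
    g = 2.0 * x          # gradient of L at the current anchor
    x = mirror_step(x, g)
```

The point of the construction: only gradients are ever needed (first-order), yet the anchored subproblem reproduces a metric-aware step.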
In our numerics, QPMD was *goated* 🐐👑🦾, even compared to full QPNGD (see left plot) 😮

It's a first-order method (only needs gradients) that's asymptotically equivalent to *full* QPNGD 🙌

Correcting GD steps with QPMD yields way better optimization trajectories 🎯 (right)
This is where things get nutty 🥜🤓. Turns out that our approach is using a type of metric (e.g. 🍉 vs 🍈) called the Kubo-Mori metric, and this is the *only* QML metric that achieves theoretically perfect (Fisher) efficiency! 🤯 See @FarisSbahi's 🧵 4 deets 👇
What's the catch? 🤨 Well, to achieve this perfect efficiency and provable convergence guarantees practically, you need to be in the convex bowl 🥣 near the optimum of the loss landscape 🏔️🏂
Are there scenarios where that happens? 🤔 Yes! A huge number of scenarios of interest! 😮
What if you wanted to *evolve* a quantum state continuously? 🏄 Could you just continuously surf the landscape as it shifts under you? 🤙🌊

Yes! You just gotta stay in the bowl 🥣 and ride the wave as the landscape shifts between *tasks* 🤯 a form of geometric transfer learning
You can imagine most tasks as following a *path* in some task space. 🚡🌐 If you have a sequence of close (🤏📏 in 🥔 space) tasks along this path, you can use our anchored rubber band (QPMD) to smoothly transfer your parameters from one task to another, chaining the wins 🏆➡️🏆
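(The chained warm-start idea in toy form — again my own sketch, with each "task" a shifted quadratic bowl instead of a quantum state-matching problem: solve each task starting from the previous solution, so every solve begins inside the new bowl.)

```python
import numpy as np

def solve_task(center, x0, lr=0.2, iters=200):
    """Gradient descent on the toy task loss ||x - center||^2, warm-started at x0."""
    x = x0.copy()
    for _ in range(iters):
        x -= lr * 2.0 * (x - center)
    return x

# A path of nearby tasks in "task space": centers drift along a curve.
centers = [np.array([t, np.sin(t)]) for t in np.linspace(0.0, 3.0, 10)]
x = np.zeros(2)
for c in centers:        # chain the wins: each task starts from the last solution
    x = solve_task(c, x)
```

Because consecutive tasks are close, each warm start lands inside the convex bowl of the next task, which is exactly the regime where the convergence guarantees above apply.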
Great! ☺️ But what can you do with this? 🧐 Well, you can dream up 🤖⚛️💭 just about any quantum simulation that you can parameterize, and make a quantum-simulated "movie" 🎥🎞️⚛️

@Devs_FX fans can appreciate 😍
What happens when you try to generatively simulate, for example, time evolution? 🎥⏲️⚛️ Here's a generatively modelled density matrix over time for a magnetic quantum system (transverse field Ising model) ⚛️🧲
Turns out we can recursively variationally learn to integrate time dynamics using our QVARTZ method (recursive ↪️ variational 🎛️ quantum ⚛️ time ⏲️ evolution 🏄) by evolving models a bit and re-learning them. This is a big deal because the quantum circuits 🎼 can remain small 🤏
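(A very rough sketch of that recursive loop structure — heavy assumptions: exact matrix evolution stands in for the quantum circuit, and "re-learning" is just replacing the model state; real QVARTZ re-fits a QHBM variationally at each step.)

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])  # toy Hamiltonian (stand-in for TFIM)
rho = np.diag([0.8, 0.2])                # initial mixed model state
dt, steps = 0.1, 5

for _ in range(steps):
    U = expm(-1j * H * dt)               # evolve the current model a little bit...
    rho = U @ rho @ U.conj().T           # ...then "re-learn" a fresh model of the result
```

Each iteration only ever needs a short-time evolution of the *current* model, which is why the circuits can stay shallow; unitarity also preserves the spectrum (the mixedness) of the state along the way.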
What about *imaginary* time evolution, aka simulating a slow cooling process? 🔥➡️❄️ We can do that too! Sequentially rather than recursively. We see it's way better to slowly go up in *coldness* (inverse temperature) 🏔️🚠 than to try to go straight for a target temperature 🪂🔥🥵
We call this approach to imaginary time evolution META-VQT. 😶‍🌫️🔥 We think it could be a serious disruption for anyone interested even in basic VQE. It's a special case of our geometric transfer learning methods.

(FYI imaginary/Euclidean time + coldness 🥶 are all the same thing)
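(The "slow ramp beats a direct jump" intuition can be checked numerically — a toy sketch, not the paper's experiment: step up β in small increments and each target Gibbs state stays close, in relative entropy, to the previous one, while jumping straight to the final β is a much bigger leap.)

```python
import numpy as np
from scipy.linalg import expm, logm

H = np.diag([0.0, 1.0, 2.0, 3.0])  # toy diagonal Hamiltonian

def gibbs(beta):
    """Thermal state at inverse temperature beta."""
    rho = expm(-beta * H)
    return rho / np.trace(rho)

def rel_ent(rho, sigma):
    """Quantum relative entropy D(rho || sigma) in nats."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

betas = np.linspace(0.0, 4.0, 9)   # slow ramp up in "coldness"
step_gaps = [rel_ent(gibbs(b2), gibbs(b1)) for b1, b2 in zip(betas, betas[1:])]
direct_gap = rel_ent(gibbs(betas[-1]), gibbs(betas[0]))
```

Every intermediate hop is far smaller than the direct jump, so each warm start stays in the bowl of the next task, the same geometric transfer-learning picture as above.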
Phew! 😅😤 That was a lot! 🥵 Tons of new possibilities with these methods... this is just the beginning of the story... are you ready to jump into *the simulation* with us? 🚀🤖🎲⚛️🖖

If so...
"Can I just use this stuff?" ๐Ÿง‘โ€๐Ÿ’ป๐Ÿ‘จโ€๐Ÿ’ป๐Ÿ‘ฉโ€๐Ÿ’ปโš›๏ธ๐ŸŽฒ๐Ÿค– Yes! We released an accompanying open-source library (QHBM lib), now accessible on GitHub ๐Ÿš€ check it out and start quantum hacking of all sorts of quantum spaces you can dream of! ๐Ÿคฏ
github.com/google/qhbm-liโ€ฆ
I could keep going.. 🤓😅 ending 🧵
This broad research program was a long road over ~2 years. Huge congrats to the whole team: @FarisSbahi leading the paper, @zaqqwerty_ai as lead infra dev, and a huge s/o to @dberri18 @geoffrey_roeder Jae @sahilpx for all their efforts & help 🏆🫂
Aside/Easter egg (spoilers for @Devs_FX fans)

I actually came up with QVARTZ from watching DEVS 😆
@Theteamatx Here is the arxiv link directly for those who want to save a click
arxiv.org/abs/2206.04663
As an aside: huge shout-out to my former QML Padawan turned quantum Jedi @FarisSbahi for going HAM 😤 taming the quantum metric zoo ⚛️🌐🗺️📏🦙🦛🦡🦒 and proving many important theoretical results not covered in this thread!
If you don't like reading threads, there's also a recorded talk of mine from a few months back (at #qhack 2022) on the topics and intuitions of this paper 👇
Congrats 🎉👏 You've successfully reached the end of the thread 🥳🏆 Before you try to read the paper: 👾🧙‍♂️ it's dangerous to go alone, take this 🗡️
And finally, when you feel ready 💪 for an even greater adventure 🚀, you can head to arxiv and dive deep into the maths 🤿📃📚✍️

glhf! 🫡 DMs open for questions! 📲
