Read on as we explore this interesting question in a bit more detail.
[THREAD]
For instance, though Adam comes from deep learning, it shows up in many QML papers, since it often works nicely out of the box.
No optimizer is guaranteed to work well across all types of problems.
When you’re tackling tougher domain-specific problems, it can make sense to use an optimizer that “knows” a little more about the structure of your problem.
These optimizers have taken advanced courses in quantum computing and are eager to put their new-found skills into practice. ⚛️💻
This can lead to optimizations that are quicker, easier, or cheaper.
Rotosolve/Rotoselect
Quantum Natural Gradient
iCANS/Rosalin
Since the cost of most QML models is sinusoidal in each individual parameter, you can actually solve for the optimum analytically (at least with respect to one parameter at a time)
pennylane.ai/qml/demos/tuto…
arxiv.org/abs/1905.09692
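To make this concrete, here's a minimal NumPy sketch of a single Rotosolve-style update. The toy cost function stands in for a circuit expectation value (the constants 2.0, 0.7, 1.3 are made up for illustration); since each parameter enters as a·cos(θ−b)+c, three evaluations pin down the sinusoid, and we can jump straight to its minimum:

```python
import numpy as np

def cost(theta):
    # Stand-in for a circuit expectation value; a single gate
    # parameter enters as a * cos(theta - b) + c.
    return 2.0 * np.cos(theta - 0.7) + 1.3

def rotosolve_step(cost, theta):
    # Three evaluations fully determine the sinusoid in theta,
    # so the minimum can be computed in closed form.
    m0 = cost(theta)
    m_plus = cost(theta + np.pi / 2)
    m_minus = cost(theta - np.pi / 2)
    return theta - np.pi / 2 - np.arctan2(
        2.0 * m0 - m_plus - m_minus, m_plus - m_minus
    )

theta = 0.1  # arbitrary starting point
theta = rotosolve_step(cost, theta)
print(cost(theta))  # hits the minimum c - a = -0.7 in one step
```

In a real variational circuit you'd sweep this update over the parameters one at a time, re-measuring the three expectation values for each.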
It exploits the intrinsic geometry of quantum state space (via the Fubini-Study metric) to determine more “natural” parameter updates than vanilla gradient descent.
pennylane.ai/qml/demos/tuto…
arxiv.org/abs/1909.02108
arxiv.org/abs/1912.08660
arxiv.org/abs/2004.14666
arxiv.org/abs/2005.05172
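Here's a toy NumPy sketch of one quantum-natural-gradient step. The model is the single-qubit circuit RZ(b)·RY(a)|0⟩ with cost ⟨X⟩ = sin(a)cos(b), chosen purely for illustration because its Fubini-Study metric has the closed form diag(1/4, sin²(a)/4); on real hardware the metric must be estimated from extra circuit evaluations:

```python
import numpy as np

# Toy model: |psi(a, b)> = RZ(b) RY(a) |0>, cost = <X> = sin(a) cos(b).

def cost(params):
    a, b = params
    return np.sin(a) * np.cos(b)

def grad(params):
    a, b = params
    return np.array([np.cos(a) * np.cos(b), -np.sin(a) * np.sin(b)])

def metric(params, reg=0.01):
    a, _ = params
    # Closed-form Fubini-Study metric for this circuit; a small
    # diagonal regularizer keeps it safely invertible.
    return np.diag([0.25, np.sin(a) ** 2 / 4]) + reg * np.eye(2)

params = np.array([0.5, 0.5])
lr = 0.1
before = cost(params)
# QNG step: precondition the gradient with the inverse metric
# instead of using the raw gradient.
params = params - lr * np.linalg.solve(metric(params), grad(params))
print(before, "->", cost(params))  # the cost decreases
```

The only change from vanilla gradient descent is the `np.linalg.solve` preconditioning, but it rescales each update to the local geometry of the state space.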
pennylane.ai/qml/demos/tuto…
arxiv.org/abs/1909.09083
arxiv.org/abs/2004.06252
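iCANS and Rosalin target a different resource: measurement shots. They treat shots as a budget and adaptively spend more of them on the gradient components that matter. The snippet below is a deliberately simplified illustration of that idea (weighting the budget by estimated per-component gradient variance), not the actual iCANS rule, which maximizes expected gain per shot — see the papers for the real criterion:

```python
import numpy as np

def allocate_shots(variances, total_shots, min_shots=2):
    # Simplified illustration: give each partial derivative a share
    # of the shot budget proportional to its estimated variance, so
    # noisier components get measured more. (Not the actual iCANS
    # criterion, which is based on expected gain per shot.)
    variances = np.asarray(variances, dtype=float)
    weights = variances / variances.sum()
    return np.maximum(
        min_shots, np.floor(weights * total_shots)
    ).astype(int)

# Example: three parameters with very different gradient noise;
# most shots go to the noisiest component.
print(allocate_shots([0.1, 1.0, 0.4], total_shots=300))
```

In a shot-frugal optimizer this allocation would be recomputed every iteration from running variance estimates, so the shot budget tracks the optimization as it progresses.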
Could the next frontier be optimizers that are specifically designed to work well for hybrid quantum-classical models? 🤔
What other optimization strategies could we come up with that cleverly leverage the structures and constraints of QM?
