Earl, I'll make a few comments. The first is that pushing average gate fidelity has not been the problem; in fact, we regularly see two-qubit gate fidelities above 99.9%. The hard part has been *stability* of the two-qubit gate error, and cross-talk, and both of these needed new technology. https://t.co/hKRZdcTnNf
In fact, for each of our birds it has never been about the number of qubits. Falcon - bump bonds, Hummingbird - multiplexed readout, Eagle - multi-level wiring and through-substrate vias, Osprey - flex I/O, Condor - I/O, and Heron - a new gate. Not the number of qubits; those come for free.
This figure shows both isolated and simultaneous gates, and here you can see that for Heron the cross-talk is almost zero. What is interesting, however, is that the distribution is no longer linear on a quantile plot, so the statistics are not normal.
This can be seen in this plot as well. Egret was a 33-qubit test device for the Heron chip. The tails have been our problem, and tails need lots of device data to understand.
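The "nonlinear quantile plot" point can be made concrete with a toy model (my own sketch, not IBM data): a Gaussian bulk of gate errors stays roughly on a straight line against standard-normal quantiles, while mixing in a rare heavy tail (as TLS jumps would produce) bends the upper end of the plot sharply. The distributions, 5% tail fraction, and `tail_slope_ratio` diagnostic below are all illustrative assumptions.

```python
# Sketch: why a heavy-tailed gate-error distribution looks nonlinear
# on a normal quantile (Q-Q) plot. Toy distributions, not IBM data.
import random
from statistics import NormalDist

random.seed(0)
n = 2000

# Toy "gate error" samples: a Gaussian bulk, plus a variant where a
# rare (5%) TLS-like event inflates the error tenfold.
bulk = [random.gauss(1e-3, 1e-4) for _ in range(n)]
tailed = [e if random.random() > 0.05 else e * 10 for e in bulk]

def qq_points(samples):
    """Pair each empirical quantile with a standard-normal quantile."""
    xs = sorted(samples)
    nd = NormalDist()
    return [(nd.inv_cdf((i + 0.5) / len(xs)), x) for i, x in enumerate(xs)]

def tail_slope_ratio(samples):
    """Slope of the upper decile of the Q-Q plot over the middle slope.
    Close to 1 for normal data; much larger when the upper tail is heavy."""
    pts = qq_points(samples)
    m = len(pts)
    def slope(a, b):
        (x0, y0), (x1, y1) = pts[a], pts[b]
        return (y1 - y0) / (x1 - x0)
    mid = slope(int(0.3 * m), int(0.7 * m))
    top = slope(int(0.9 * m), m - 1)
    return top / mid

print(f"Gaussian bulk:  tail/mid slope ratio = {tail_slope_ratio(bulk):.2f}")
print(f"with TLS tail:  tail/mid slope ratio = {tail_slope_ratio(tailed):.2f}")
```

On a plot, the same effect shows up as the top few percent of points curving away from the fit line, which is why mean error alone misses the tail behavior.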
These tails are due to two-level systems (TLS) jumping on and off the transmon, making the gate unstable. In fact, if we re-cool a device this all changes again. So by having many devices we have learned a lot about the physics.
So by putting many devices online we have been able to advance error mitigation and error suppression, and collect lots of device data. From this data we have learned that we can tune these TLS in some test devices,
and as a result make all the gates in a small test device operate essentially at the coherence limit.
And we have also shown in *many* isolated devices that the coherence time can be increased; we regularly measure coherence times over 1 ms.
So to me there has been a lot of progress enabled by all the data, and while we have not yet integrated the TLS mitigation and the new coherence into the larger devices, the future looks very promising for large quantum devices with both many qubits and good error rates.
My sincerest thanks to the many IBM Quantum staffers, clients, and partners who gathered in New York City today for IBM Quantum Summit 2022. Here’s a look back at some of the biggest announcements from the event (thread).
We’ve once again delivered the world’s largest quantum processor with the IBM Quantum Osprey—a remarkable 433 qubits arranged in our heavy-hex lattice topology. More details on the IBM Research Blog: research.ibm.com/blog/next-wave…
Today we released our updated IBM quantum development roadmap. To learn more, check out the video and the blog (research.ibm.com/blog/ibm-quant…). 1/n
As we go forward into the quantum future, we are focusing on scaling qubits, increasing quality and maximizing speed of quantum circuits. 2/n
This year and next we push the limits of single-chip processors with our 433-qubit Osprey processor and our 1,121-qubit Condor processor. 3/n
David, I read the paper on Hartree-Fock state preparation, and once again I'm impressed by the circuits and hardware, and I see value in any benchmark. However, I'm struggling with the answers to the questions below:
1. The Hartree-Fock state in a non-molecular basis can be found efficiently on a classical computer, and the corresponding fermionic Gaussian state can then also be compiled efficiently into a quantum circuit.
2. So what is the need for a variational circuit to find the HF state? I think we should focus on problems that are hard to simulate classically.
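The classical-efficiency point in item 1 can be sketched in a few lines (a toy hopping model of my own, not the system from the paper): for a quadratic one-body fermionic Hamiltonian, the Hartree-Fock ground state is a Slater determinant obtained from a single matrix diagonalization, which is polynomial time classically.

```python
# Minimal sketch: Hartree-Fock for a quadratic (one-body) fermionic
# Hamiltonian reduces to one classical diagonalization.
# Toy 1D hopping chain; parameters are illustrative assumptions.
import numpy as np

n_modes, n_electrons = 6, 3

# One-body hopping matrix h for an open 1D chain with unit hopping.
h = np.zeros((n_modes, n_modes))
for i in range(n_modes - 1):
    h[i, i + 1] = h[i + 1, i] = -1.0

# Diagonalize: columns of U are the single-particle orbitals,
# sorted by ascending orbital energy.
energies, U = np.linalg.eigh(h)

# The Slater determinant fills the n_electrons lowest orbitals;
# its energy is just the sum of those orbital energies.
hf_energy = energies[:n_electrons].sum()
print(f"orbital energies: {np.round(energies, 3)}")
print(f"HF ground-state energy: {hf_energy:.4f}")

# The occupied-orbital block U[:, :n_electrons] is also exactly the
# data a Givens-rotation circuit needs to prepare this determinant on
# hardware with O(n_modes * n_electrons) two-qubit rotations.
occupied = U[:, :n_electrons]
```

This is why the question in item 2 arises: once the orbitals are known classically, preparing the determinant on the quantum computer is a fixed, non-variational compilation step.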