The current issue of @Nature has three articles that show how to make those error-correcting mechanisms achieve over 99% accuracy, which would make silicon-based qubits a viable option for large-scale quantum computational devices.
Quantum Computing is the merger of computational algorithms with the principles of Quantum Mechanics. Due to some of its non-trivial and counterintuitive traits, Quantum Mechanics enables certain computational operations to be performed many orders of magnitude faster 4/
than with conventional logical operations. However, quantum systems are incredibly delicate, and often require exceptional physical conditions to remain viable in their pure, useful form. 5/
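To make that scaling concrete, here is a minimal numpy sketch (my own illustration, not from the Nature articles): the joint state of n qubits is a vector of 2**n amplitudes, and a single gate acts on all of them at once.

```python
import numpy as np

# One qubit starts in |0>, represented as the vector (1, 0).
state = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts it into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
print(H @ state)  # [0.707+0j, 0.707+0j]

# Three qubits: the joint state is the Kronecker product, a vector of
# 2**3 = 8 amplitudes. Applying H to each qubit yields an equal
# superposition over all 8 basis states at once; this exponentially
# large state space is what certain quantum algorithms exploit.
H3 = np.kron(np.kron(H, H), H)
state3 = np.zeros(8, dtype=complex)
state3[0] = 1.0  # |000>
print(H3 @ state3)  # eight amplitudes, each 1/sqrt(8) ~ 0.354
```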
Until very recently it has only been possible to create systems comprising a handful of quantum bits (qubits). Regrettably, many such systems (superconductors, trapped ions, nitrogen-vacancy centers in diamond) are hard to manufacture and scale. 6/
Another approach to qubits, electrons trapped in silicon, is particularly promising for large-scale fabrication. We have over three quarters of a century's worth of experience manufacturing large-scale silicon-based systems, which underlie all of modern computing. 7/
Unfortunately, silicon-based systems have had issues with error-correcting mechanisms, which made them unsuitable for large-scale quantum computing – until now. 8/
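As a toy illustration of why those per-operation error rates matter so much (a classical repetition-code analogue, not the actual quantum error-correction schemes from the papers): with three noisy copies and a majority vote, the encoded error rate falls much faster than the raw one as the physical error rate drops.

```python
import random

def noisy_copy(bit: int, p_error: float) -> int:
    """Flip the bit with probability p_error."""
    return bit ^ (random.random() < p_error)

def majority_vote(bit: int, p_error: float) -> int:
    """Send three noisy copies and decode by majority."""
    copies = [noisy_copy(bit, p_error) for _ in range(3)]
    return int(sum(copies) >= 2)

trials = 100_000
for p in (0.10, 0.01):
    failures = sum(majority_vote(0, p) for _ in range(trials))
    # For small p the encoded error rate is ~3p^2, so going from 90%
    # to 99% per-copy accuracy improves the encoded result ~100x.
    print(f"raw error {p:.0%} -> encoded error {failures / trials:.3%}")
```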
I've worked for 4 different tech companies in various Data Science roles. For my day job I have never ever had to deal with text, audio, video, or image data. 1/4
Based on the informal conversations I've had with other data scientists, this seems to be the case for the vast majority of them. 2/4
Almost a year later this remains largely true: for *core job*-related DS/ML work, I have still not used any of those data types. However, for work-related/affiliated *research* I have worked with lots of text data. 3/4
2/ A year ago I was approached with a unique and exciting opportunity: I was asked to help set up the Kaggle OpenVaccine competition, where the goal would be to come up with a Machine Learning model for the stability of RNA molecules.
3/ This is of pressing importance for the development of mRNA vaccines. The task seemed a bit daunting, since I had no prior experience with RNA or Biophysics, but I wanted to help out any way I could.
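For flavor, here is a hedged sketch of the kind of preprocessing such a task involves (illustrative only; this is not the actual OpenVaccine pipeline): RNA sequences over the alphabet A, C, G, U get encoded numerically before any model sees them.

```python
import numpy as np

BASES = "ACGU"

def one_hot_rna(seq: str) -> np.ndarray:
    """Encode an RNA sequence as a (len(seq), 4) one-hot matrix."""
    encoding = np.zeros((len(seq), len(BASES)))
    for i, base in enumerate(seq):
        encoding[i, BASES.index(base)] = 1.0
    return encoding

print(one_hot_rna("GGAAUC").shape)  # (6, 4)
```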
One of the unfortunate consequences of Kaggle no longer being able to host tabular data competitions will be that the fine art of feature engineering will slowly fade away. Feature engineering is rarely, if ever, covered in ML courses and textbooks. 1/
There is very little formal research on it, especially on how to come up with domain-specific nontrivial features. These features are often far more important for all aspects of the modeling pipeline than improved algorithms. 2/
I certainly would never have realized any of this were it not for tabular Kaggle competitions. There, over many years, a community treasure trove of incredible tricks and insights accumulated, most of them unique. 3/
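To give a concrete taste of what that treasure trove contains, here is a small pandas sketch (hypothetical columns, not from any particular competition): comparing each row to its group's typical behavior is a classic hand-crafted feature that no off-the-shelf algorithm derives for you.

```python
import pandas as pd

# Hypothetical transaction data: raw amounts alone tell a model little.
df = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount":      [10.0, 40.0, 5.0, 5.0, 20.0],
})

# Group aggregate feature: how does each transaction compare to that
# customer's typical behavior? Ratios like this often carry more
# signal than the raw values they are built from.
df["customer_mean"] = df.groupby("customer_id")["amount"].transform("mean")
df["amount_vs_mean"] = df["amount"] / df["customer_mean"]
print(df)
```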