Exciting news broke last month as Rigetti Computing unveiled a new paper on arXiv, “Unsupervised Machine Learning on a Hybrid Quantum Computer”. So I set about understanding what this startup does and why they think they can take on giants like IBM, Microsoft and Google in the quantum computing space.
A new challenger appears
Rigetti is not a household name like its competitors unless you work in quantum computing research, but it might become one soon. The founder is Chad Rigetti, who got his PhD from Yale and worked at IBM for a while. Chad’s previous work, and the focus of this company, is building quantum computers using superconducting circuits.
Superconducting qubits are not the only way to build quantum computers (alternatives include trapped ions, photonics, etc.), but they appear to be the architecture currently leading the race. My earlier post about Google’s quantum computing result concerned the D-Wave quantum annealer, which is also built with superconducting qubits.
The paper details the performance of their flagship 19-qubit quantum computer. They then implement an algorithm called QAOA, the Quantum Approximate Optimization Algorithm. I went through the QAOA paper and, to be honest, was a bit disappointed. The algorithm breaks a really hard problem (the NP-complete MAXCUT problem) into two parts: one part is efficiently solvable on a quantum computer, while the second part remains open for future work. I wanted to write more about the algorithm but decided not to, since I felt there is a lot of room for doubt about its actual speed advantage on practical applications.
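To make the problem concrete, here is a minimal Python sketch (my own illustration, not code from the paper) of the MAXCUT objective that QAOA tries to approximate. The function names and the toy graph are my inventions; the point is simply that the exact answer requires checking exponentially many cuts, which is why heuristics like QAOA are interesting at all.

```python
# Toy illustration of MAXCUT (not from the Rigetti paper).
# A "cut" assigns each vertex of a graph to one of two sets;
# its value is the number of edges whose endpoints land in different sets.
# MAXCUT asks for the assignment maximizing that value, and is NP-complete.
from itertools import product

def cut_value(edges, assignment):
    """Count edges crossing the cut; assignment[i] is 0 or 1 for vertex i."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

def brute_force_maxcut(n, edges):
    """Exact MAXCUT by exhaustive search over all 2^n assignments --
    exponential in n, hence hopeless for large graphs."""
    return max(cut_value(edges, bits) for bits in product([0, 1], repeat=n))

# A 4-cycle: alternating the two sets around the cycle cuts all four edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(brute_force_maxcut(4, edges))  # 4
```

QAOA replaces this exhaustive search with a parameterized quantum circuit whose angles are tuned by a classical optimizer, which is what makes the Rigetti demonstration a "hybrid" quantum/classical computation.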
My best guess as to why the Rigetti folks chose this algorithm is that they wanted to showcase their computer and picked a novel algorithm to demonstrate rather than chase the leaders in the field. I think it worked. Their publication generated significant press buzz and brought the company into the public consciousness. For people in the know, the paper highlights a more important fact: Rigetti’s 19-qubit quantum computer can perform both entangling and measurement-based quantum operations and run custom algorithms.
Let a thousand startups bloom
It is an exciting time for quantum computing; major results are coming through faster than I can read and write about them. I still haven’t gone through IBM’s big result from two months ago.
The ongoing research war among D-Wave, Google, IBM and Microsoft is riveting and churning out some interesting results. These companies are attracting top talent from universities, which are themselves getting a big recruitment and funding boost. Now that VCs are interested, startups like Rigetti, Quantum Circuits and hbar are beginning to appear and challenge the big dogs. I am really rooting for QC to break through the cost/effectiveness barrier and become mainstream. My career would come to an ideal full circle if I end up as a QAI/QML engineer someday.
Moreover, there is plenty of money and glory to be had. The race is still on to reach quantum supremacy: the point where a quantum computer beats the best classical computer and algorithm on a given problem. Once quantum computers are better than classical computers on a certain task, it is unlikely ever to be reversed, barring some truly incredible finding on the same order of awesomeness as a proof that P=NP. How profitable that task will be remains to be seen.
Beware of quantum winter
However, to strike a more somber note, I am beginning to feel that the hype is slowly creeping beyond what can actually be delivered in the near term. John Preskill, a top Caltech researcher, wrote a detailed essay suggesting that quantum supremacy may not be the correct goal.
Based on my research experience I have to agree, and I think there is much more at stake than some minor disappointment. There is a real chance that quantum computing could go through something similar to what artificial intelligence went through during the “AI winter” decades, roughly the 1970s through the 1990s.
Back in the 1950s and ’60s, academic research in AI and neural networks became extremely popular. Billions of dollars poured into the AI market from academia, defense and industry. However, the hype grew too fast, and chip technology and computing power were simply not there to support it. As practical applications became hard to come by and companies failed, mass disillusionment followed. As a result, not only did the press and the general public tire of AI, but defense, venture capital and academic funding dried up as well. Even though computing technology kept advancing, it is generally believed that the lack of funding and the mass exodus of researchers in the ’70s–’90s greatly delayed the eventual resurgence of AI/ML.
The technologies underpinning practical quantum computers, from noise and decoherence mitigation to magnetic and optical trap stability to crystal and detector efficiency, are steadily improving year after year. Most academics I know believe the technology will surely get there, but it is not there yet. Now, thanks to the hype created by D-Wave, Google, IBM, Rigetti and others, it seems the world is ready for quantum computers… today.
The first signs of “QC mania” have already started and money is pouring in. There are only two ways this can end well:
- Either people, especially investors and managers, temper their excitement and allow the technology to mature before demanding returns on their dollars. Needless to say, I don’t have much faith in this option.
- Or the researchers at all the companies and universities manage to outrun the market hype, and we do indeed get beyond proof-of-concept in the next 5–10 years. I am hoping this happens.
If neither happens, we might end up creating a QC winter and push the timeline for practical quantum computers from 15 years to 50, and that would be a real tragedy, because I would have retired by then 🙂