A month or so ago, Google held a press conference to announce their latest result in quantum computing using superconducting qubits. The full technical paper is up on Physical Review X and arXiv, and an easier-to-understand summary is on their research blog. Shortly after that, my friend and former classmate Shantanu Debnath was part of a new breakthrough result from the University of Maryland: a programmable trapped-ion quantum computer. I found it interesting that the latest advancements in QC/QI are coming from both industry and academia.
I went to academia with a very staunch belief that research needed to be unbiased and unfettered by monetary constraints. Of course everything still costs money, but academia does create an approximation of the ideal. The absence of direct monetary incentives, along with tenure for professors, encourages curiosity-based research and risk-taking. There is definitely some element of ‘selling’ your research, which many researchers might find a bit distasteful. However, the ‘buyers’ are often other researchers: peer reviewers, funding-agency program managers and journal editors. Even the worst of this bunch belongs in the top 0.01% of the world in their respective fields.
Having jumped ship to industry now, I am trying to observe how research in industry works. At first glance, it seems that all of the above conditions are violated. Research in industry is always motivated by monetary gains, either for a company or an individual. Researchers also usually lack the security of tenure. Their positions are under much greater threat from market forces, management decisions and economic factors beyond their control. Finally, even though they might publish their work in peer-reviewed journals, the measure of success for these researchers is ultimately tied to the success of the company. In that sense, the true ‘buyer’ of their work is not a specialized group; rather, it is the general public. ‘Selling’ research to the general public is far more difficult and requires vastly more packaging.
The interplay of all these forces makes for a fascinating arc through history. Almost always, academic research spends 10-30 years pursuing a given direction before anyone in industry gets wind of it. The first mover advantage is definitely huge in academia. The most agile research groups, those able to invest in a new technology early, can reap the rewards for decades to come. The most relevant example is NIST Maryland, which pioneered laser cooling technology back in the 1990s. Today the collaboration between NIST and the University of Maryland has some of the best technology in laser cooling and optical traps, which still makes them a leader in related fields decades later.
Once the academics have chipped away at a difficult and risky problem and brought it within striking distance, a few bold souls from the private sector will attempt to seize the first mover advantage. Strong monetary and competitive incentives then push industrial research forward rapidly over the next few years. Yet, as I learn more about the history of innovation in the Bay Area, it seems that when it comes to world-changing technologies, being the first mover in the private sector is not necessarily an advantage.
GO Corporation, General Magic and, to some extent, Fairchild Semiconductor are examples of companies that could not last long enough to see the fruits of the technologies they themselves pioneered. In every case, the companies that came along 10-20 years later perfected the product once the technology was more mature and the general public more receptive. Intel, born out of a dying Fairchild, went on to become one of the most dominant corporations ever. Apple used GO and General Magic’s vision, and some of their alumni, to create the iPhone, probably the single most successful product in history.
I think being too early is not as disastrous in academia. For one, academic research does not have to turn a profit year after year to sustain itself. Second, since the adjudicators of success are often other scientists, it is possible to convince them of the value of the work even if experimental confirmation and technological validation are many years in the future *cough* neural networks, string theory *cough*.
But being too early is death in the marketplace. Tony Fadell (the man behind the iPod and iPhone) has explained in many an interview how Apple sculpted the market by first selling the iPod and then adding one feature at a time (video, podcasts, iTunes) until the market was ripe for the iPhone. The general public is not much into beta-testing new technologies, so the first few iterations of consumer technologies are often destined to fail.
It is tempting to extrapolate this “first mover disadvantage” to today’s hubbub around quantum computing. The seeds of quantum computing were sown in the 1970s, and it has been a hotbed of academic research for the last couple of decades. In the last few years, QC has rocketed into mainstream news, with every major tech company angling to have a stake in its future.
Realistically, I think a consumer product utilizing quantum computing is still some distance away. Of course, as with most research, unforeseen challenges and unexpected discoveries may drastically alter the timeline. But based just on my personal assessment of past developments, the estimated time to consumer for QC should be a minimum of 10 years, maybe 20. That also seems like exactly the time-frame it took the pioneers of previous revolutions to file for bankruptcy so that the next generation of companies could stand on their shoulders and change the world. General Magic’s bold claim in 1993 that it would build an anytime, anywhere handheld communications device seems like a close parallel to D-Wave announcing the world’s first commercial quantum computer in 2011.
So am I saying that all the companies that are around today investing heavily in quantum computing will be gone before you can solve the travelling salesman problem on your phone? 🙂