Published: November 13, 2019

Introduction

Google says it has reached quantum supremacy, claiming that its quantum computer can solve a problem in minutes that would take a supercomputer millennia. IBM has taken umbrage at Google's celebrations, and thinks Google is overhyping its progress.


The 451 Take

Quantum research is an exciting and relevant area, and there are massive opportunities for quantum computing. However, colossal steps still need to be taken to make quantum impactful to society. More qubits are great, but they must be matched by better error correction, stability, algorithms and hardware. For every tiny advancement, hundreds of other tiny advancements are needed, too. The race is a marathon, not a sprint, and the winner (if a win can be achieved at all) will need to show that quantum computing doesn't just make the unsolvable solvable, but that its expense is worthwhile for the value it unlocks, particularly when compared with traditional computing. Google's step is an important one, but there are many steps that don't get such attention. Our advice is to focus on the value generated by announcements: Google has made important progress, regardless of whether it has achieved quantum supremacy. That should be cause for celebration.



Quantum Supremacy

A few weeks ago, NASA accidentally published a draft of a landmark paper in quantum computing (you can find a primer on quantum computing here). In the paper, the authors claimed Google's Sycamore quantum computer had solved, in about 200 seconds (a little over three minutes), a problem that would have taken conventional computers 10,000 years. The 54-qubit machine sampled the output of a random quantum circuit and verified that the resulting bitstrings followed the expected distribution – a task the authors claimed would take 10,000 years on IBM's Summit supercomputer. There is no doubt that this is a big step for quantum computing, as well as for computer science, but there is controversy over a term used by the Google team – quantum supremacy.
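To make the task concrete, the short sketch below (our own illustrative construction in Python, not Google's code or its actual 53-qubit benchmark) shows the idea behind random circuit sampling and the linear cross-entropy score used to check whether a device's samples follow the expected distribution. A toy eight-qubit state stands in for the output of a random circuit, and all names are ours.

    import numpy as np

    # A minimal stand-in for 'random circuit sampling': a random state on a toy
    # number of qubits plays the role of the output of a random quantum circuit.
    n_qubits = 8                          # Sycamore used 53 working qubits
    dim = 2 ** n_qubits
    rng = np.random.default_rng(0)

    state = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    state /= np.linalg.norm(state)
    probs = np.abs(state) ** 2            # ideal output distribution p(x)

    # 'Run the device': draw bitstring samples. Here we sample from the ideal
    # distribution itself, so the fidelity below comes out near 1; a noisy device
    # (or uniform random guessing) would score close to 0.
    samples = rng.choice(dim, size=100_000, p=probs)

    # Linear cross-entropy benchmark fidelity: F = 2^n * mean(p(sampled x)) - 1
    f_xeb = dim * probs[samples].mean() - 1
    print(f"linear XEB fidelity ~ {f_xeb:.3f}")

The classically expensive part is computing the ideal probabilities p(x) at all once the circuit is wide and deep enough – that simulation burden is the source of the 10,000-year estimate.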

The term, coined in 2012 by Caltech professor John Preskill, loosely means the point at which quantum computing dramatically accelerates the solution of a 'hard' problem that traditional computing could not have solved in a practical and valuable timeframe – finding an answer in hours rather than lifetimes. This is different to a quantum advantage, where the problem is solved faster using quantum algorithms but an unsolvable problem isn't made solvable. There is no universal definition of how large the speed-up must be, nor of how hard or useful the problem must be – Google defined the parameters of the term itself. As an analogy, Concorde was a really fast commercial jet, but if we wanted to benchmark it, is it more appropriate to say it was twice the speed of a 747 or 50 times that of the Wright brothers' first successful flight? But Google's initial claim of 10,000 years versus three minutes is not to be taken lightly.

IBM's response to Google's claims was that clever programming tricks mean the solution could be found in 2.5 days on its supercomputer, not the claimed 10,000 years. In fact, IBM stated this was a worst-case timeframe, and that with better use of storage it might be able to squeeze the job down to a timeframe similar to the quantum computer's, although IBM hasn't actually published data from such an experiment. Google's model of traditional computing assumed the full state of the simulation would have to be held in memory, and that such a huge memory capacity could not be achieved; IBM argues this data could instead be cached on storage drives. As a result, IBM claimed Google wasn't making the unsolvable solvable – it was just speeding things up.
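A rough back-of-envelope calculation (our own illustration, not a figure from either company's paper) shows why memory versus storage is the crux of the disagreement: the full state vector of a roughly 53-qubit circuit is far too large for a supercomputer's RAM, but is in the same ballpark as the petabytes of disk attached to a machine like Summit.

    n_qubits = 53
    amplitudes = 2 ** n_qubits          # complex amplitudes a full-state simulator must track
    bytes_per_amplitude = 16            # one double-precision float each for real and imaginary parts
    total_bytes = amplitudes * bytes_per_amplitude
    print(f"{total_bytes / 2**50:.0f} PiB")   # roughly 128 PiB, i.e. on the order of 150PB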

Nevertheless, Google's achievement is impressive – it is essentially saying that a single chip can do in minutes a calculation that a cutting-edge supercomputer with PBs of storage and thousands of cores would take at least 2.5 days (or as much as 10,000 years, depending on whom you believe) to complete.

Economics is Critical

Economics has a massive role to play in this argument over quantum supremacy. What premium would an enterprise pay to solve a problem that would give a meaningful return in hours, not millennia? The Haber-Bosch process produces the fertilizer behind much of global crop production, but is responsible for an enormous 1% of global CO2 emissions – quantum computing has been mooted as a way to design a lower-energy process, if enough qubits can be implemented. Surely such a process would be worth a hefty investment in quantum computing, compared with waiting centuries for traditional simulations to complete. But is solving a 50-qubit problem in three minutes rather than 2.5 days worth the investment in the quantum computer and the skills needed to design the algorithm?

Similarly, if quantum computing could be used to work out the configuration of atoms in a medicine with a specific characteristic, or to simulate the flow of gases over a supersonic jet, the acceleration can be economically justified if it makes innovations happen in months rather than decades, bringing them to market more quickly and providing a business advantage. Quantum might be an academic pursuit that huge corporations are interested in, but money talks. If quantum computing doesn't give a clear ROI for a specific use case, then so what? Every step in quantum's development is important, and researchers are standing on the shoulders of giants, but the product must do something better than what we already have, at a price that is appropriate for the value added compared with options that exist today. Concorde could fly passengers from London to New York at twice the speed of the jumbo jets of the time, but there are no Concordes in flight today – the premium simply wasn't worth the few hours saved at the destination.

By this argument, we don't think Google has achieved useful quantum supremacy. For us, useful supremacy means the solution to the algorithm must provide a financial return that justifies the investment in quantum computing versus the cheaper investment in traditional computing. If Google's claim were demonstrably correct, the label might be justifiable, but with IBM's counter-response it is now a grayer area. IBM can further its case with empirical evidence that using storage as it suggests can deliver an equivalent acceleration on standard computing. Google can continue what it's doing – the more examples it can show, the more robust its case. But frankly, why does the term 'quantum supremacy' matter? It's a term with few specific parameters that define success. What really matters is the detail of the achievements and the progress toward a return on investment.

With this in mind, Google's announcement is an important starting point that some have compared to the Wright brothers' first flight – yes, a big leap forward, but not a big enough leap to enable safe flight for the majority. Now that Google has made this first flight, we expect it to make many more, and others to be more bullish about their own achievements. Frank words between Google and IBM in blog posts and press conferences show there isn't just academic pride at stake; there is a first-to-market advantage. IBM has been talking increasingly about quantum over the past year, whereas Google has been (and continues to be) relatively tight-lipped. IBM has made much noise about its IBM Q Experience, its Qiskit development software, its Q Network of partners and its story around use cases. Microsoft has also defined its own programming language, Q#, and is working with Honeywell to bring quantum to its Azure cloud platform. D-Wave has a quantum cloud service, Leap, and its Ocean software development kit. Publicly, at least, Google has made less progress than others in bringing quantum computing to developers and the market.

Perhaps this announcement is the tipping point where Microsoft, Honeywell, Intel, Huawei and others will shout louder about their research. Maybe this announcement is aimed at bringing Google into the mindsets of those that may desire commercial quantum relationships in the future.

Challenges

More qubits, better stability of those qubits and fewer errors will ultimately define a winner. Shor's algorithm, which could crack RSA encryption by factoring keys into their prime components, is a big benchmark of quantum supremacy, and there is no doubt about the economic value (even a nefarious one) of an investment that can crack most of the internet's security in minutes, not millennia. But for such a use case, millions of qubits would be needed. With only around 50 on offer from Google and IBM, we're a long way off yet, especially considering these qubits don't stay in their desired state for long and are highly sensitive to external noise. Quantum information theory may be sound, but the engineering may prove impossible – there are no guarantees that quantum research will lead us to useful supremacy. Quantum decoherence and error correction at scale remain unresolved. IBM has defined a term – quantum volume – that aims to measure progress across all areas of quantum computing. The pull of complete quantum supremacy will likely remain very strong, but may be elusive. And even if we make progress, how can we ever verify that the results we are getting are correct? Today, some academics say we are in the NISQ (rhymes with risk) phase – 'noisy intermediate-scale quantum.'
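For a sense of why Shor's algorithm needs so much quantum hardware, the sketch below (our own illustration, not a quantum implementation) shows the classical shell of the algorithm: once the period r of a^x mod N is known, two factors of N drop out of a couple of gcd calculations. The period-finding step, done here by brute force, is precisely the part that is exponential on classical hardware and that a large, error-corrected quantum machine would accelerate.

    from math import gcd

    def shor_classical_demo(N, a):
        g = gcd(a, N)
        if g != 1:
            return g, N // g              # lucky guess: a already shares a factor with N
        # Period finding: the smallest r > 0 with a**r = 1 (mod N). A quantum computer
        # would do this step; brute force is exponential in the size of N.
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
            return None                   # unlucky choice of a; pick another and retry
        p = gcd(pow(a, r // 2) - 1, N)
        return p, N // p

    print(shor_classical_demo(15, 7))     # (3, 5); real RSA moduli are ~2,048 bits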

NIST is already investigating encryption algorithms that are 'quantum-proof.' Ironically, we wonder if quantum computing could be used to solve how to increase the number of qubits in a quantum computer.

All publicity is good publicity, some say, and the discovery of Google's unpublished paper, courtesy of NASA, probably won the findings more attention than they might otherwise have received. Gaining attention is critically important for quantum computing – not because it will lead to business, but because it will lead to funding. Most accept that there is massive potential, but when that potential can be turned into financial gain for the likes of Microsoft, Google, IBM, D-Wave and others is not clear. Quantum today is a bit of a gamble, and those involved must demonstrate continuing progress and a competitive edge.

Because of the vast engineering problems involved in controlling very wide quantum computers, a more pragmatic approach might be to target the development of, say, 64-qubit quantum compute engines tightly integrated into classical system designs, much like FPGAs or ASICs, to offload general-purpose processors. The idea would be to dramatically speed up certain tasks in an otherwise classical application stack. The results would remain within the realm of classical verification (the quantum engine's operation could still be validated against a classical supercomputer), but at perhaps 1,000x the speed – and we are far closer to achieving that today than we are to millions of qubits. Cloud-based quantum services such as D-Wave Leap and Microsoft's interest in an Azure quantum service show this is a viable model, and one that should be more accessible to the masses than a hefty dedicated quantum computer.
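As a purely hypothetical sketch of that hybrid model, the snippet below shows a classical optimization loop that offloads only its expensive inner evaluation to a modest quantum co-processor, much as FPGAs and GPUs are used today. QuantumAccelerator and its sample_energy method are invented names for illustration, not a real SDK from Google, IBM, D-Wave or anyone else; the 'quantum' call is faked with a simple classical function.

    class QuantumAccelerator:
        """Stand-in for a ~64-qubit co-processor exposed to a classical application stack."""
        def sample_energy(self, parameters):
            # In reality this would run a parameterized circuit and return a measured
            # expectation value; here it is faked with a cheap classical function.
            return sum((p - 0.5) ** 2 for p in parameters)

    def hybrid_optimize(accelerator, parameters, steps=100, lr=0.1, eps=1e-3):
        """Classical outer loop; only the costly evaluations are offloaded."""
        for _ in range(steps):
            energy = accelerator.sample_energy(parameters)
            # Finite-difference gradient estimate, computed entirely classically.
            grads = []
            for i in range(len(parameters)):
                shifted = list(parameters)
                shifted[i] += eps
                grads.append((accelerator.sample_energy(shifted) - energy) / eps)
            parameters = [p - lr * g for p, g in zip(parameters, grads)]
        return parameters, accelerator.sample_energy(parameters)

    params, energy = hybrid_optimize(QuantumAccelerator(), [0.9, 0.1, 0.4])
    print(params, energy)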
Owen Rogers
Research Vice President, Digital Economics Unit

As Research Vice President, Owen Rogers leads the firm's Digital Economics Unit, is the architect of the Cloud Price Index and is head of 451 Research's Center of Excellence for Quantum Technologies. In 2013, he completed his PhD thesis on the economics of cloud computing at the University of Bristol. Owen was named 'Innovative Analyst of the Year' in the Institute of Industry Analyst Relations' global awards in 2018.

Daniel Bizo
Principal Analyst, Datacenter Services & Infrastructure

Daniel Bizo is a Principal Analyst for the Datacenter Services & Infrastructure Channel at 451 Research. His research focuses on advanced datacenter design, build and operations, such as prefabricated modular datacenters, highly efficient cooling, and integrated facilities and IT management to achieve superior economics. Daniel is also a regular contributor to 451 Research's silicon and systems technology research in the Systems and Software Infrastructure Channel, and is a member of 451 Research's Center of Excellence for Quantum Technologies.
Jean Atelsek
Analyst, Cloud Price Index

Jean Atelsek is an analyst for 451 Research’s Digital Economics Unit, focusing on cloud pricing in the US and Europe. Prior to joining 451 Research, she was an editor at Ovum, spiffing up reports, forecasts and data tools covering telecoms and service providers, fixed and wireless networks, and consumer technology among other topics. 
