February 2, 2022 - 48 min

EP02. Fault Tolerance and Error Correction

The second episode of Quantum Well (the series where we explore the barriers to useful quantum computing) focuses on error mitigation, error correction and fault tolerance.


In our first discussion, we talked with two scientists, John Morton and Chris Monroe, who are working on building quantum hardware. One of the topics that came up during that conversation was the need to suppress errors that occur during computation. That's going to be the focus of our discussion this time.

Quantum computers are particularly susceptible to errors for exactly the same reason that quantum sensors are so sensitive. Encoding and processing quantum information in such a way that any interactions with its environment can be corrected and undone is extremely challenging. But it's the only path we know that leads to a proven advantage over classical computing. Whether achieved through active error correction, or through other techniques such as self-correction, dynamical decoupling, or decoherence-free subspaces, a high degree of error mitigation will be necessary if we are to achieve large-scale quantum computation.

Our guests this episode are:

  • John Preskill, Director of the Institute for Quantum Information and Matter at California Institute of Technology
  • Dave Bacon, Senior Staff Software Engineer at Google

The discussion will be hosted by Joe Fitzsimons and Si-Hui Tan, CEO and CSO of Horizon Quantum Computing.

Guests of the “Fault Tolerance and Error Correction” episode: John Preskill and Dave Bacon

Si-Hui Tan

So John, you started your career in physics working on quantum field theory and cosmology, what led you to quantum computing?

John Preskill

Hi Si-Hui. Good to see everybody. I'm an admirer of Joe Fitzsimons and Dave Bacon, so glad to be on the same panel with them. Why did I get interested in quantum computing is the question. Well, I could give a very long answer, but it was a combination of factors. As you said, my background was in fundamental particle physics. My generation of physicists eagerly looked forward to the next-generation accelerator back in the 90s, the Superconducting Super Collider, which was going to do the exploration of physics beyond the standard model that we all hungered for. Then that was cancelled, for complicated reasons, in 1993. And so, I asked myself, "What am I going to do now? It's going to be a long time before we know what's really going on at those high-energy scales." But there was something else I was interested in, having to do with quantum information, particularly how it gets processed by black holes. There are still lingering mysteries about that. In order to explore that question, I had learned things about quantum information, which most particle physicists didn't know that much about at the time: about entanglement, and teleportation, and quantum cryptography, and so on. And then something magical happened in 1994, when I heard that Peter Shor had discovered that quantum computers, at least theoretically, would be able to efficiently factor large numbers. Although I may have embellished the memory in the time since, I recall being awestruck by that finding: the idea that we can solve problems with future technologies that would otherwise be indefinitely out of reach, because we can make use of the principles of quantum theory. I thought that was one of the most amazing ideas I'd ever heard in my scientific life, and it eventually led me to change the direction of my own research, from particle physics to quantum information science.

Joe Fitzsimons

Dave, if you don’t mind, I’m going to turn to you. I was a reader of your blog for many years. So, I've kind of followed your career pretty closely. It seems you can't escape quantum computing no matter how hard you try. Have you given up trying?

Dave Bacon

I think I did. The question is, when do I try again? So yeah, indeed, my path was definitely to be in quantum computing, and then to leave the field and become a real software engineer, like all the real software engineers. I was actually an undergraduate at Caltech when Peter Shor was making this discovery. And I actually did an intern project on quantum computing. John was actually the judge of the talk that I gave. And afterwards, when I hadn't made much progress on the problem, he came up to me and said, "That was a hard problem you worked on."

John Preskill

Let's recall, Dave, you were trying to solve NP-hard problems with quantum computing.

Dave Bacon

I was. I was trying to solve what Scott Aaronson's blog now says is not possible, right?

Joe Fitzsimons

He doesn't say it’s not possible…

Dave Bacon 

Yeah, that's right. Possibly. So, it was an optimistic time, and why not work on that, right? Why not work on trying to see what quantum computers could do? I left once, in some ways, when I went to grad school; I was going to do astrophysics, so I took all astronomy classes. And then I got sucked back in as I began to understand error correction, which people like John were in the middle of discovering at the time. I wasn't paying close enough attention while I was an undergraduate, frankly, and got drawn back into that. And then I became a professor. So, sort of the straight academic life. And then, for reasons just like, life, I decided it was time to try something new, and I became a software engineer at Google. And then the kind of amazing thing is that in the preceding years, nobody needed a software engineer in quantum computing, but it's shifted now that we've started to try to build larger and larger quantum computers. All of a sudden, they need people who are both quantum computing folks and also software engineers. And so, I sort of got drawn back into the field. But I will always threaten that I might try something crazy new again, just because that's life.

John Preskill

I'd like to add that, Dave, of course, is one of our fondly remembered undergraduate alumni, not least, because he was one of the few physics students who minored in English.

Dave Bacon

And I have a BS in literature, which is the correct initialism, I think, for that degree. Or English, which I think is what it's officially called now at Caltech.

Joe Fitzsimons

I'm kind of left out on this, in that I'm the only one that's never been associated with Caltech.

Dave Bacon

Yeah, Caltech's a great place. I'm completely biased, my grandfather went there, so you’ll never get me to say negative things about Caltech.

John Preskill

Where you also did a postdoc, by the way.

Dave Bacon

And I did a postdoc for John Preskill, which was a fantastic time. I realized, in retrospect, one of the great things about being a postdoc in quantum computing: you often spend your life on the road, because you're trying to get jobs and you're doing research and you're moving around. But the great thing at Caltech was John just invited everybody, so they came through, and it was a really spectacular place to just get to meet people in the field, and that's been useful now in my new life. People ask if I know this person, and it's like, "Oh, yeah, they came by Caltech." That's generally the story!

What makes quantum computers so vulnerable to error?

Joe Fitzsimons

I wanted to start off by talking about the need for error correction. As I said, last episode we talked about the barriers to building scalable hardware, both in terms of what kind of physical systems you need, the need for control, and so on. But let me ask both of you: what makes quantum computers so vulnerable to error? Why are error mitigation techniques and fault tolerance so important?

John Preskill

One of the fundamental differences between classical information and quantum information is that you can't look at a quantum state without disturbing it in some uncontrollable way. And even if we don't look at it ourselves, we can't, even though we try, perfectly isolate quantum information that we want to process from the outside world. So the environment, so to speak, is observing it all the time. And that causes quantum computation to fail. It removes the magic of superposition that makes quantum computing powerful. So ideally, we would like to perfectly isolate the qubits that we're processing from the outside. And since that's not possible, what we have learned is theoretically possible is to use the idea of quantum error correction to, in effect, protect the information by making it invisible to the environment. But maybe I'm getting ahead of myself, because you want to know why we want to do it, not how we do it.

John Preskill

Look, quantum computing is really, really hard, Joe. The idea that we can solve hard problems with quantum computers is 40 years old. Dave remembers, around the time he was starting graduate school, I guess, when we were first able to do quantum gates that could entangle two qubits. That was about 25 years ago. And the hardware keeps advancing, but it's not nearly good enough. It's much better than it was, and I'm sure it will continue to advance, but to run the applications that we're particularly excited about, we need far, far more reliable qubits. The key thing is being able to do highly accurate entangling gates between pairs of qubits. And although we've gotten a lot better, the error rate in the best multi-qubit devices is something like the 1% per operation level, maybe 1 in 1,000 under ideal conditions, and it's just not nearly good enough. So, either we're going to make much, much better hardware (Dave told me when he was a postdoc that you're going to do that with topological quantum computing; I don't know if you still think that), or, in the absence of much, much better hardware, we are expecting it will be necessary to correct errors at the software level. That's the idea of quantum error correction.

Dave Bacon

Yeah, I find it useful to just talk about the sizes of things we can do today. I was reading a paper last week where they were able to do a quantum computation with about 272 qubit operations, which is a lot compared to what we could do a few years ago. But that's sort of about the size of what we can do before we start to get drowned out by noise. And that's not very useful; I think about it as just operations done by pencil and paper. I probably handed in homework at Caltech where I had to do 270 operations, right? That's not very much computation. And right now, that's where a lot of our hardware is at. But this miracle of error correction shows that if we have sufficiently noise-free initial qubits, we can build up a larger structure to perform longer computations.
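The scale Dave describes follows from a back-of-the-envelope calculation (a sketch with illustrative numbers, not figures from the episode): without error correction, a circuit's success probability decays exponentially in the number of operations, so a roughly 1% error rate per operation caps useful circuits at a few hundred operations.

```python
# Illustrative sketch: with error probability p per operation and no error
# correction, the chance that all n_ops operations succeed (assuming
# independent errors) shrinks exponentially, limiting circuits to ~1/p ops.

def success_probability(p: float, n_ops: int) -> float:
    """Probability that n_ops operations all succeed, for independent errors."""
    return (1.0 - p) ** n_ops

# At a 1% error rate, ~270 operations already fail most of the time:
print(success_probability(0.01, 270))   # ~0.07
# At a 0.1% error rate, the same circuit usually succeeds:
print(success_probability(0.001, 270))  # ~0.76
```

The same calculation shows why better gates move the wall rather than removing it: at any fixed error rate, a long enough computation is still drowned out, which is where error correction comes in.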

Si-Hui Tan

Both of you mentioned that hardware has been improving and people are still actively working on error correction; error correction was known to be important even before the first quantum computer was built. Do you think there's a family, or one error-correcting code, that will serve all needs? Or do you think we have to tailor codes to different systems?

John Preskill

I don't know. 

Dave Bacon 

You just wrote a paper on it. I mean, you've written papers on the different ways of doing that. So, I think I could get your answer.

John Preskill 

Well, look, we can talk about the near term and the longer term. There is an idea about how to do quantum error correction, which also goes back over 20 years and sprang from the fertile imagination of Alexei Kitaev. The principle of quantum error correction more broadly is that if we want to protect quantum information, then we should store it in a highly entangled form. Entanglement has this wonderful feature that if you have a system of many qubits and they're entangled with one another, then you can store information in that many-qubit system in such a way that when you look at just parts of the system one at a time, a few qubits at a time, that information is completely invisible. And that's how we intend to fool the environment and prevent it from learning about the encoded state.

Kitaev's great idea was that we can imagine building materials that have that feature, storing information in that very highly entangled, well-concealed form. But he also pointed out that we don't necessarily have to make the material out of, say, a solid-state system. We can, in effect, simulate that material with any qubits of our choice. And 20 years or more later, that's still the best idea we have for doing quantum error correction: what we call the toric code, the surface code, that Kitaev invented. And it has some big advantages.

One is that the processing that we need to do to detect the errors is quite simple. It's geometrically local in two dimensions. So, you can have the qubits laid out on a table and just act on four neighbouring qubits at once to learn about the errors. It also tolerates a relatively high error rate, and that's going to be really important in the near term, where we're trying to just barely get below the error rate that makes error correction possible, that makes it effective. And so, in the near term, we don't have a better idea than the toric code.
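The local checks John describes can be illustrated with a toy classical analogue: a repetition code, where each parity check looks only at neighbouring bits and the pattern of violated checks locates the error. (This is a sketch, not the surface code itself; the real thing measures four-qubit stabilizers on a 2D grid without ever reading out the encoded state.)

```python
# Toy classical stand-in for local stabilizer checks: each check involves
# only neighbouring bits, and a violated check flags the boundary of an error.

def syndrome(bits):
    """Parity of each neighbouring pair; a '1' marks the edge of an error."""
    return [bits[i] ^ bits[i + 1] for i in range(len(bits) - 1)]

def decode_majority(bits):
    """Recover the encoded bit by majority vote (repetition-code decoding)."""
    return int(sum(bits) > len(bits) // 2)

codeword = [0, 0, 0, 0, 0]          # distance-5 repetition encoding of logical 0
corrupted = [0, 0, 1, 0, 0]         # a single bit-flip error
print(syndrome(corrupted))          # [0, 1, 1, 0]: the two checks next to the flip fire
print(decode_majority(corrupted))   # 0: the logical bit survives the error
```

Note that the syndrome reveals where the error sits without revealing the encoded value, which is the classical shadow of "fooling the environment."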

On the other hand, part of Kitaev's insight was that we can do some of our error correction at the hardware level, and then sort of clean up the rest of it at the software level. So, I think increasingly in the future, we will take advantage of the principles of quantum error correction in the design of the hardware. But over, say, the next five years, where we hope to see persuasive evidence that quantum error correction is working and steadily improving, it's probably going to be using the surface code.

What's the path towards fault-tolerant quantum computers?

Joe Fitzsimons  

Actually, that's an interesting point about, I guess, self-correction and using ideas from error correction in hardware. I wanted to ask you: given the wide variety of tools that have been built up over the years for mitigating errors in quantum systems, starting with things like robust pulse sequences, moving on to things like dynamical decoupling and decoherence-free subspaces, self-correction, and so on, up to error correction and fault tolerance, what do you think the path forward looks like? Will we go straight to building a fault-tolerant quantum computer, running error-correcting codes on top of essentially raw physical operations? Or are we likely to see some combination of these techniques, or different focuses over time? How do you view these things as progressing?

Dave Bacon 

I always have a weird view on this, which is that I think we will always be doing the thing we can do, that's the hardest we can possibly do at the given time to get the most bang out of it. So in my mind, there is this sort of what I always called the “brute force approach” which is “our hardware is right on the cusp of being able to do these computations, let's see how much we can get out of that.” And for most systems that are laid out on a 2D spatial grid, they lead you to the surface code. It's clearly a code that has a lot of really awesome properties. 

Then you think, "Well, what could happen to change that?" Well, one thing that could happen to change that is thinking about the connectivity, the architecture. In trapped ions – I worked for the startup IonQ for a little bit – within the trap, they have all-to-all connectivity, and you try to ask the question: can you leverage that to perform error correction using different codes? And in fact, we've seen that. We've seen demonstrations of some error-correcting protocols in trapped ions that are made possible, or easier, because of this all-to-all connectivity: any qubit can talk to any other. But you kind of think about it as they're doing the best they can possibly do within that physical system.

But I also think the other thing is what was mentioned, which is finding physical systems that embody error correction. The most famous example of this is Microsoft's effort in topological quantum computing, trying to use Majorana fermions to do quantum computation. And then you have a physical substrate that's protected because of the physical device. But I, like John, believe we're maybe even just at the beginning of exploring what we can do with hardware that looks like maybe the Majorana fermions, but maybe it's not exactly that. So, applying error correction principles to the engineering of small and medium-size systems. This is a thing I love, because it's this middle way. The topological one is an engineered or condensed-matter system that has these exotic properties. And it's very hard, because with condensed matter, things are messy and dirty. And then we have these other efforts, where we've gotten really good at engineering and building interesting quantum systems over which we have pretty explicit control. And the question is, can we marry these things to come up with something in that middle path? And actually, if I have to predict, that's what I'll bet on these days. But as John likes to remind me, I once did say that the only way to build a quantum computer is topological, and he's going to hold on to that, I think, for the rest of my life.

Joe Fitzsimons 

And you went to Google and not Microsoft. 

Dave Bacon

That's right.

Si-Hui Tan  

There is a joke about Microsoft, I think in Copenhagen, that basically asks: will they find the topological qubits first, or will they find Majorana himself?

Joe Fitzsimons 

So, given that we have all of these different fault tolerance schemes that have been worked out, different error correction codes, different approaches, different techniques for mitigating error, and we've started to see experimental demonstrations of error correction and fault-tolerant gate sets and different things like this, what are the barriers that still exist to building fault-tolerant quantum computers? Dave, you are at Google, which has obviously been one of the leading experimental efforts on superconducting qubits. I'm not asking you to speak for experimentalists, but what do you see as the main barriers ahead of us?

Dave Bacon

John's said it in some fundamental way: it's still extremely hard to get these experiments to work. They're still experiments. We are just learning how to do this. I think there are some things that we're starting to see that are interesting. There's a word in quantum computing that I don't like to say, because I don't really know what it means, which is "scalable." People talk about scalable technologies. Scalable feels like one of those things where you know it when you see it. It's kind of hard to describe, because it has a lot of components. But we definitely see that a lot of quantum computing for many years was focused inwardly on improving parts of a system: getting better base decoherence rates, working on controlling single qubits, working on particular parts, getting measurement to work without reading the wrong bit out, right? So, it was always focused on components. And then what we've seen in the last few years is the bringing together of these things at the same time and coordinating them. Of course, when you do that, nothing survives; they're being put together and mashed together to work correctly at the same time. So, I think that's one key challenge we're seeing, especially for these "brute force" type approaches: getting everything to work at the same time.

The other thing is just the technology. To scale this up is extremely challenging. Like wiring: you see this picture of Google inside of the cryostat and you realise these are all wires going down to this chip. That obviously doesn't scale. Again, it's one of the things that you can't tell me exactly, but you can say that doesn't really scale. So, I think it's trying to figure out how to scale while keeping all the principles of getting these qubit decoherence rates low. 

The final thing to say is that if we can find ways to do them all at the same time and be significantly better, we should be doing that, and that will lessen our overhead and the scaling becomes less of a challenge. So, it's not straightforward which of these you should actually focus on. Should you focus on scaling things up? Maybe. Maybe you should also focus on 10 to the -5 error rate for your two-qubit gates. That'd be incredible, right? That would change the dynamics of how much scaling you need to do. So, it's not clear how to play those tradeoffs right now.

John Preskill  

We would like to see that, as you increase the size of a quantum error-correcting code block, say in the surface code, the error rates decline exponentially with the size of that code block. That's kind of the hallmark of quantum error correction. And we would also like to see that gates can be protected with that exponential improvement in fidelity as we scale things up. We haven't seen that yet. Why haven't we? Well, the short answer is, the gates aren't good enough; the error rates are too high. Now, actually, there was a very interesting experiment that the Google group did, where they did see exponential improvement in the error rate as they increased the size of the code block, but the catch was, it wasn't a full-blown quantum error-correcting code: it could only correct one type of error, the dephasing errors, not the bit flips, in that configuration. It was still a very interesting experiment, because they were able to do up to 50 repeated rounds of quantum error correction. That's another thing we'd like to see; we'd like to see many successive rounds. And they were able to say that as they increased the size of the code block, from 3 to 7 to 11, each time the error rate went down by an order of magnitude, which was what the theory predicted should happen. Like I said, the catch was, they couldn't correct all the errors.
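The exponential suppression John describes is often summarized with a simple scaling heuristic: below threshold, the logical error rate of a distance-d code behaves roughly like (p/p_th) raised to the power (d+1)/2, so each step up in distance suppresses errors by a constant factor. The sketch below uses illustrative numbers, not the experiment's measured values.

```python
# Hedged sketch of below-threshold scaling: logical error rate falls roughly
# as (p / p_th) ** ((d + 1) / 2) for a distance-d code. Constants here are
# illustrative placeholders, not measured quantities.

def logical_error_rate(p: float, p_th: float, d: int) -> float:
    """Heuristic logical error rate for physical rate p, threshold p_th, distance d."""
    return (p / p_th) ** ((d + 1) // 2)

p, p_th = 0.001, 0.01   # physical error rate a factor of 10 below threshold
for d in (3, 7, 11):
    print(d, logical_error_rate(p, p_th, d))
```

In this toy model, each increase in distance multiplies the suppression by the same factor of p/p_th per correctable error; how large that factor is in practice depends on how far below threshold the hardware sits, which is why the physical error rate matters so much.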

We've seen some other experiments recently, like one done by Honeywell, where they were able to do repeated rounds of error correction for a real, full-blown quantum error-correcting code that can correct any one arbitrary error in a block of seven. But they did not succeed in getting an error rate that was improved by using quantum error correction, compared to the unprotected error rate. So, of course, Dave was right: it's a war with many fronts. There are a lot of things we have to improve. But the most important one, in my view, is that the physical error rates have to go down.

Dave Bacon

That's right. And there are interesting trade-offs there, too. In the same paper that John described for the Google result, they did try to do an analysis of what the main contributing errors are. And it's true that, for example, two-qubit gates are important, but one of the other biggest problems is that in those systems you have long measurement times. If you have long measurement times, your other qubits are idling. And if they're idling, they are decohering. So, you need to do things like dynamical decoupling, and other techniques, to preserve the state while this is occurring. So, there's a bunch of different things that are playing off each other here. The art of understanding that landscape is a challenging one. And we are seeing more and more people trying to work in that space and understand it. Of course, John and his team have been doing this for a long time, but there's a new burgeoning field where people are starting to really think about all these tradeoffs and understand that landscape, or at least get some idea about where you need to be focusing your improvements.

John Preskill  

It's not just the time to do the measurement that's an issue; you also have to reset the qubits after the measurement. That's actually where a lot of the time budget went.

Dave Bacon 

Yeah, that's right. Measurement and reset: you have the general thing of getting it back to where it was.

John Preskill  

This is a bit of an aside, but I think it's worth mentioning. In that Google experiment, each round of error correction took about a microsecond. And in the Honeywell work, each round of error correction took 200 milliseconds. So, it's many orders of magnitude difference in the cycle time. And maybe that's not a big deal in the short term, but eventually, it's going to be a big deal. Because if you can get the time to solution to be shorter by a factor of 10 to the 5, that's a big win. So, I think this is going to be a big challenge for ion traps going forward. They want much faster gates. And potentially, there are ways of doing that with more laser power and so on. But that's a big technical challenge in itself.

Joe Fitzsimons  

Yeah, absolutely. So, one of the interesting things about this Google experiment, as you mentioned, is that it only corrects one type of error. One of the works that stood out to me: I visited QEC in 2007, a long time ago, and I believe you and Panos Aliferis had a paper about correcting against biased noise and getting a much better threshold than you might otherwise get. It seems some of the recent work in the last year or two that has captured people's imagination has been, for example, the XZZX code and things like this, which is basically the surface code, but with the stabilizers written in a slightly more suitable way to deal with biased noise. The question I have is: is it not natural that we would start by correcting one type of error, the dominant kind, first? In the same way that, for example, if you're trying to build a better trapped ion, you try to knock off the sources of error one at a time, starting with whatever your dominant source of error is.

John Preskill  

It's a natural idea, but it's tricky to implement. So, thank you for remembering that 2007 paper with Aliferis. What we were very troubled by at the time was that we wanted to consider a noise model which was highly biased. So, in the jargon, Z errors occurred at a higher rate and X errors at a lower rate. So essentially, there are two main types of errors, and one was much worse than the other. So, we wanted to focus our error-correcting power on the more frequent Z errors. But we also wanted to do processing; we wanted to do quantum computation. And when you start doing the gates, you have to worry about how the logic affects the noise. So, we figured out how to build little gadgets that could take advantage of the bias and still enable us to do universal quantum computation. The thing that came as a surprise to me, which was first pointed out a couple of years ago, is that in the setting of a certain kind of qubit, which you can realise in particular using superconducting technology, it's possible to do operations that flip the bits while preserving the bias. We had to find some other, more complicated workaround.

So, that has generated some optimism about taking advantage of the bias in the noise. And well, I can say a little bit more about that but maybe we can save that for later. But it requires a particular type of way of encoding the qubits to get that to work. 

Si-Hui Tan  

That really brings up a lot of memories from that time, quite a while ago. I guess we've sort of moved on to talking about some concepts like thresholds. So, taking into account these experimental improvements, let's talk about the threshold theorem. For those in the audience who don't know, it states that scalable quantum computation can be achieved only if the required quantum operations can be performed with error rates below a certain threshold. So, when theorists calculate this threshold, what contributes to it, and how is it calculated? Is it difficult? Does it take into account experimental parameters, like measurement timescales and all these tradeoffs? Does it take a lot to calculate it?

Dave Bacon 

John, you've calculated it.

John Preskill  

Well, I calculated, calculated, and calculated it. When David Poulin was a postdoc at Caltech (sadly, he's no longer with us), he took me aside and said, "Why are you always proving threshold theorems? Do we have to beat that to death?" But there were a number of questions of principle that, in those days, I was trying to address by proving threshold theorems under different conditions.

First of all, what do you need to make it work at all, if you're going to have a threshold? Well, you need to do parallel processing: you need to be able to perform logic in different parts of your device simultaneously, so that you can take care of the errors that are occurring at some constant rate throughout the device. And you need the noise to have nice properties; in particular, it's got to be weak enough. That's the whole point of the threshold: there's some level of noise such that, when the noise is below that level, error correction can, in principle, make the logical error rate as small as you please, so you can compute for as long as you want. But the noise also has to be sufficiently uncorrelated: the error correction methods that have been most extensively studied, for good reasons of principle, don't work well if there are errors that act collectively on many qubits at once.
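The below-threshold claim can be illustrated with a classical toy model, a drastic simplification of the quantum setting John describes: a repetition code under independent bit-flip noise, where majority-vote decoding fails only if more than half the bits flip. Below the code's break-even point, larger blocks help; above it, encoding actively hurts.

```python
# Toy threshold illustration: exact failure probability of majority-vote
# decoding for a d-bit repetition code under independent bit flips with
# probability p. (The quantum threshold involves far more: gates,
# measurement, correlations, coherence.)
from math import comb

def logical_error(p: float, d: int) -> float:
    """Probability that more than half of d bits flip, defeating majority vote."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range(d // 2 + 1, d + 1))

p = 0.01                            # well below this toy code's break-even point
for d in (3, 5, 7):
    print(d, logical_error(p, d))   # shrinks rapidly with d: encoding helps
print(logical_error(0.6, 5))        # above break-even: worse than the bare 0.6
```

The qualitative lesson carries over: the whole game is getting the physical error rate below the crossover, after which adding redundancy drives the logical error rate down as fast as you please.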

John Preskill

And then the details, the answer to your question about how hard it is, depend on what you want to assume about the noise. So, what we tried to do in our work was to assume as little as we could get away with, but what we always had as an essential ingredient in the noise model is that it was sufficiently local and the correlations in the noise were sufficiently suppressed. So, we considered models where the noise acts on qubits one at a time, or two at a time, or many at a time. And the condition that you need on the noise that acts on the qubits two at a time or three at a time is a lot more stringent than the condition on the noise that acts on the qubits one at a time, which I guess isn't a surprise. But it was interesting to work that out.

Another issue is coherence. Often people ignore the coherence in the noise. The way we usually think about noise at a fundamental level is that there's some interaction between the system and the environment that's itself described by some Hamiltonian. So, the errors are not, as we often like to model them for simplicity, exactly described stochastically. I spoke of an error rate of 1% or 10 to the -3. So implicitly, it's like the gate is either good 99 times out of 100, or bad with probability 1%; that's a bit of an oversimplification as well. So, if you want to allow the noise to be coherent, then that also enters into the threshold calculation and makes it more complicated, and also more pessimistic.

Dave Bacon

Yeah, let me chime in. One of the questions here is, I have a problem that I'm working on where I'm trying to do a threshold calculation. One of the interesting things is they're just the different assumptions that you are making. And sometimes you're trying to prove thresholds, like John was talking about. Sometimes you're also just trying to numerically get a handle on where things are and what the tradeoffs are. They are a sort of a hierarchy of hardness and understanding. They are sort of very simple noise models where it's easier to do this. And then as we get more complicated, we have less understanding of what happens and that there are likely things hiding there, right? In fact, I mean, I saw one in the last year, which is that there are codes that when I first started, I worked on these things called decoherence-free subspaces, and there was a code that I worked on as well, that was an error correcting code. And it's natural to look at these things on a 3x3 grid or a 5x5 grid for a (code of) distance three, distance five. Distance four is weird. It's like in between those two. Distance three can correct one error, distance five can correct two errors. Why would you do four? Well, it turns out, if you look at the 4x4 case, these codes actually had in this particular choice of what people called gauge, that can be robust to coherent error. So, one of these areas that John described as not being described by this stochastic thing, and in fact, it's robust to sort of a collective magnetic field across this 4x4 patch. If it's in one particular direction, it's sort of protected. That's really fascinating, because that's an example of a coherent error. It won't be 100% uniform, so you won't be 100% effective, but you might take an error rate from that process that's large, and get it down by a factor of 100. That would be a major improvement, and that can then impact the design of your qubits. 
Different architectures may have that problem, and now it may be less of a problem for them. So, understanding how the noise model interacts with the error-correcting code is hard. And one of the issues is just that it's very hard to simulate those systems. I love this paper, though I don't know if many people know it: the late David Poulin had a paper whose argument was essentially that you'd need to run the quantum computation itself to figure out the right way to deal with the errors. Maybe that's true, I'm not sure. But that's a fascinating sort of world. It gets to the point where it's really hard to even simulate this, and maybe we have to just go and try it and see what's going on. It's hard for me to understand that.

John Preskill  

I'm going to make one other point of principle, which I neglected to make a minute ago. An additional thing that's essential for having a threshold is that you need to be able to refresh the qubits, or in some way or other flush entropy out of the device. You might do it just by measuring the qubits and storing a classical record of how you measured them. But at any rate, if you're thinking about how to make quantum computing fault tolerant, you always have to have in mind how you're going to remove the entropy. And that enters into a threshold calculation.

Also, I guess another thing we haven't mentioned is leakage. There's a type of error where the qubit just goes off to Neverland for a while, out of the encoded space. And that happens in real devices. So, if you're going to deal with it, you have to have something like a reset after leakage and that can also affect the threshold. So, the story can get complicated.

Dave Bacon 

The interesting thing I always love about leakage is that, counter-intuitively, what you really want is a place to put your quantum information that is really bad at storing it. You really want to flush it as quickly as possible. I think sometimes we get caught up in only trying to improve the basic decoherence numbers, without thinking about the fact that when we engineer the system, we actually need a place to dump entropy. We've seen examples of this: if you don't have a fast reset and you just let your system relax back to zero, you can improve the lifetime of your qubit, but that slows down your computation, because you have to wait longer for it to relax precisely because you've got a better qubit. So sometimes it's good to have really gnarly, bad things in your system, but you need to be able to switch them on and off, of course, and that's part of the challenge. But keeping it cold is important.

How to achieve fault tolerance despite cosmic rays or earthquakes?

Joe Fitzsimons  

When we take all of these factors into account, it seems that some subtlety creeps into talk of thresholds, because we don't really have a good axiomatization of what noise should look like for a quantum system, unless I'm mistaken. One of the criticisms from people who are skeptical of quantum computing, and think that quantum fault tolerance may never be possible, is the susceptibility of some of these schemes to collective, highly correlated noise. We know, for example, that the Google lab is in California, and California is prone to earthquakes. An earthquake could potentially be a very, very large correlated error in a device. So, it seems hard to prove that such things can never happen, because clearly, they can.

John Preskill 

The thing I'm more worried about than earthquakes, because they happen more often, is cosmic rays.

Joe Fitzsimons 

Sure. Cosmic rays are a good point. So, they inject a large correlated error. Is it clear that this is a particularly bad kind of error? Or is that just because of the way we have come to quantum error correction, quantum fault tolerance, coming from thinking about kind of i.i.d noise and that kind of direction?

John Preskill  

It's not good. 

Dave Bacon

Yeah, it's not good. So, just to describe the effects: in superconducting circuits, cosmic rays can cause large correlated errors across your system. That seems very bad for most of the ways we know of dealing with error correction. Now, there are technological ideas that people have; we're working on this for superconductors, and other platforms may not have the same type of problem. But especially when you think about the sort of “brute force” approach, it's definitely a worry that we have to think about.

I think building one of these large error-corrected, fault-tolerant quantum computers is going to be, if we brute-force it, one of the largest and most interesting scientific experiments ever done. This is not a small thing. The idea that you're doing this while getting hit by cosmic rays that cause correlated errors just makes it even more scary. Any time we add more things like this, it just increases the size and complexity of what we're doing.

Dave Bacon 

That said, I think the skepticism tends to be along the lines of weird correlated errors that can occur when your noise models are very odd. John and others have done a lot of work cornering that and trying to understand where it applies. Again, the real question is what the physics of the device gives you. I'm skeptical that there are really weird errors coming from things we don't understand; obviously, cosmic rays hitting your device are something we do understand. It's known physics that's causing this.

John Preskill  

And there are known ways to mitigate it, though you might not like it, you can go deep underground.

Dave Bacon 

Yeah, there's a famous science fiction book by Robert Sawyer. I used to laugh because it was  called Hominids. It starts in the Neanderthal universe, and they have a quantum computer, and they put it in the base of a cave or a giant mineshaft to keep it isolated. And I remember reading that, at the time, I'm thinking, "Oh, that's ridiculous, you could never do that. That's not really the isolation we care about," and now I'm going to have to eat my words. We may just go underground.

Joe Fitzsimons  

In some sense, an upside to correlated error is that you can sense that it occurs without measuring your qubit.

John Preskill  

It might be a reason to revisit code concatenation. That was the first natural idea for understanding the threshold: a kind of hierarchy of codes within codes. As you go higher and higher in this hierarchy, the effective error rate gets smaller and smaller. And if you're worried about errors that can affect many qubits at once, maybe you want an architecture in which such an error would only affect a code at some lower level of the hierarchy, which could then be corrected at the next level. That's just one of the possible ideas for how you handle something like that.
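The suppression Preskill describes can be made concrete with a quick sketch (the numbers are assumptions for illustration, not from the discussion). Below threshold, each level of concatenation roughly squares the rescaled error rate, giving doubly exponential suppression in the number of levels:

```python
# Illustrative sketch with hypothetical numbers: under concatenation, a
# common simplified model for the level-k logical error rate is
#     p_k = p_th * (p_{k-1} / p_th) ** 2
# so below threshold (p < p_th), each level squares the rescaled rate.

p_th = 1e-2   # assumed threshold (the real value depends on code and noise model)
p = 1e-3      # assumed physical error rate, a factor of 10 below threshold

rate = p
for level in range(1, 5):
    # Each level of concatenation squares the rescaled error rate.
    rate = p_th * (rate / p_th) ** 2
    print(f"level {level}: logical error rate ~ {rate:.1e}")
```

With these numbers, four levels take the rate from 10^-3 down to around 10^-18, which is the doubly exponential payoff that makes concatenation attractive, at the cost of a rapidly growing qubit overhead per level.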

Where do we stand in achieving fault tolerance?

Joe Fitzsimons  

Actually, aside from this, let me ask you: what is your current feeling of where we stand in relation to achieving fault tolerance? How far are we from, for example, demonstrating a logical qubit, or a full set of fault-tolerant operations between two logical qubits? By that I mean sustained error correction over a significant period of time, with the kind of exponential suppression of errors that you talked about earlier, John.

John Preskill  

I don't know. I don't think even Dave knows the answer. But I think the gold standard should be an encoded entangling gate that is substantially improved by doing quantum error correction. For it to work, we'll need low error rates, we'll need the hardware to function the way it's supposed to, and we'll need the correlations not to be too strong, as we've been discussing. It's actually interesting that different hardware approaches have tradeoffs there, even within the superconducting circuit space. The fundamental qubits at IBM and Google are sort of similar; they're based on the idea of a transmon, which goes back 15 years and was such a good idea that we haven't changed much. It has the advantage of being really simple.

But as far as doing the gates, Google has couplers that they can turn on and off between qubits, while IBM has fixed-frequency qubits that they drive to get them to couple. And so the IBM approach, on the face of it, has more of a problem with what they call crosstalk, a type of correlated error. Their hardware layout is designed to mitigate that, but at the cost of less connectivity, which makes error correction a little bit less effective. I'm hoping that we'll see what you're asking for on a timescale of a few years, say five years. But it is going to require, most importantly, improved physical error rates and a multi-qubit device that works the way it's supposed to.

Dave Bacon 

Of course, I'm always an optimist. I actually think we're entering a super exciting period, in which people are running the protocols and trying to understand why they're failing, which is kind of what you need to do. If you're not doing it, you're just talking about it, just optimising things that you haven't actually done. We're going to continue to see claims of basic things being slightly better, which is one thing: for each of your components, you'd like to do an encoded version that is slightly better. Then there's the other thing that John mentioned: as you scale up the size of the code, we want to see the suppression. They're connected, but they're different types of experiments that people are going to be doing. I'm particularly excited because I do think we're right at the start of this era of trying these first experiments. Now, of course, amusingly, what will happen in the near term is that if you're above the threshold, error correction does the opposite: it makes things worse. So, you'll see a lot of amazing experiments where error correction hurts rather than helps, which is fine. That's actually great progress.

And we'll see ones that are very close to these thresholds, which is a nebulous thing; there are lots of numbers going in here, not just one. We'll see things that are improving in some way, but it'll be very minimal at first, I think. The real thing we want to see is substantially getting this amplification to work: as you scale up the code, really seeing an exponential suppression with a constant that's big enough. That, I think, will be the magic time. Whether we get there by brute force or the right way, that will be the thing that really heralds that we're in the right era for error correction. But it'll be messy until then, which is great and fine.
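The "exponential suppression with a constant that's big enough" can be sketched with hypothetical numbers (not from the discussion). For a distance-d code below threshold, the logical error rate is often modeled as p_d ≈ A (p/p_th)^((d+1)/2), so each step of two in the distance suppresses errors by a roughly constant factor, often called Lambda:

```python
# Illustrative sketch with assumed numbers: a common simplified model for
# the logical error rate of a distance-d code below threshold is
#     p_d ~ A * (p / p_th) ** ((d + 1) // 2)
# so increasing the distance by 2 suppresses errors by a constant
# factor Lambda ~ p_th / p.

A = 0.1       # assumed prefactor
p_th = 1e-2   # assumed threshold
p = 2e-3      # assumed physical error rate, a factor of 5 below threshold

Lambda = p_th / p   # suppression factor per distance-2 step

for d in (3, 5, 7, 9):
    p_d = A * (p / p_th) ** ((d + 1) // 2)
    print(f"d = {d}: logical error rate ~ {p_d:.1e}")

print(f"Lambda ~ {Lambda:.1f}x suppression per distance-2 increase")
```

This is why the experiments Dave describes look for a clear, repeated improvement as the code grows: if Lambda is barely above 1, scaling up buys almost nothing, while a comfortably large Lambda is the signature that error correction is really working.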

Can other qubit platforms catch up to the threshold?

Si-Hui Tan 

You both mentioned superconducting qubits with regard to exceeding the threshold. As a follow-up, we also have solid-state qubits, like silicon qubits and NV centres. Do you think there are any interesting developments there now, where maybe they can also catch up to the threshold, or exceed it?

John Preskill  

People are trying. I think it's important that a lot of different hardware approaches are being pursued and are steadily advancing. There are lots of ways to encode a qubit physically, and one natural way, of course, is to use an atom. Even within that space, there are a lot of different things you can do. It can be a trapped ion, a neutral atom held in a tweezer array with lasers, a neutral atom in an optical lattice, and so on. And there are lots of tradeoffs; these different approaches have different advantages and disadvantages.

I think it is noteworthy that if we were having this discussion a few years ago, I probably wouldn't have brought up Rydberg atoms, because it's a hardware approach that has become competitive rather suddenly. I think that's a good indicator that we're not ready to put all our eggs in one basket. If you were given a few zillion dollars and told to build a quantum computer right away, you'd have to choose a hardware approach and invest heavily in it to try to perfect it. But taking the long view, we're not at the stage where that would be a wise thing to do, in my opinion, even if you had the resources, because we really don't know at this point which hardware approach is going to be the most promising in the long run. It might be some kind of hybrid technology involving different types of hardware playing different roles. The different types of hardware might find different niches, maybe not quantum computing but quantum sensing or something like that. So, I'm all for continuing to pursue them all. Demonstrating quantum error correction that gives a convincing improvement is a tempting goal to shoot for with any hardware platform.

Dave Bacon 

I just want to riff on what John said there, which I think is super important. One thing the field has always done is rally around big bang moments: Shor's algorithm, a very challenging algorithm to run whose application is mostly cryptography, and then ideas about simulation. These are very challenging things. One of the things we don't know is what a medium-scale quantum computer is most useful for; some of the most interesting ideas there are probably still in their infancy right now. When people ask me to compare two platforms, it's not obvious to me that the answer isn't both. It may be that certain platforms are just better for certain types of applications. That's not surprising when you step back and think about it. But in quantum, everybody puts their eggs in the basket that's in front of them, the one from grad school, and goes in that direction. That focus is, of course, important. But it wouldn't surprise me if we end up with a lot of different technologies that are useful in different ways, maybe even using different fault-tolerant protocols. It sounds bizarre now, but the idea that one protocol rules them all is somehow still how we envision the future. I actually suspect it'll be more diverse in terms of the different ways we approach this problem.

John Preskill 

On this issue of intermediate-scale applications, there's another point I'd like to emphasise, because we've been talking here about the wonderful day when we'll be able to do a single logical two-qubit gate with a much improved error rate compared to physical gates. That will be a real milestone, I think, in physics as well as technology. But come on, it's just two qubits doing a really good gate; you can't run an application on that. So it's important that in the meantime we'll be pursuing another route, which I actually think is pretty exciting. With platforms that currently exist, and are likely to exist in the near future, we have the capability of doing physics experiments that were never possible before, because we can control and observe very highly entangled states of matter under very programmable conditions. We couldn't do that before. And I think there's an opportunity to do a lot of exciting physics, say in the next five years, with a variety of these platforms. That, it seems to me, is the most likely application-- if you want to call it an application-- to lead to really interesting results on the timescale of the next several years.

In the future, can quantum computers weigh no more than 1.5 tons?

Joe Fitzsimons 

We're coming up to the end of the hour, and I just wanted to finish with one question. I told you beforehand that we wanted to get your ideas about the most interesting ways forward. In 1949, Popular Mechanics published an article on ENIAC which claimed that, in the future, computers may weigh no more than 1.5 tons. If you look at current approaches to photonic quantum computing, many of them really explode the number of qubits that are necessary: you encode one logical qubit into 100 or 1,000 physical qubits, and you do enormous amounts of magic state distillation, which may require vast numbers of ancilla qubits. Is there a prospect that, in the future, quantum computers may weigh no more than one and a half tons? Is there a way of getting away from these very large multiplicative overheads and getting to something more efficient? What are the prospects in that direction?

John Preskill 

First of all, when we do build the first fault-tolerant quantum computer that can run useful applications, it's probably going to be a big machine, a big complicated device like what you were describing, weighing a lot more than one and a half tons. I think the most important step along the road to making it a much more compact device is not more clever error correction algorithms, it's much better hardware. I guess I keep saying that. Maybe that will come from incorporating insights from quantum error correction into the design and building of the hardware. In the long run, I assume that's where quantum computing will go. I can't believe that 100 years from now people will be trying to squeeze as much juice as they can out of an error rate of 10^-3. I imagine they'll have much, much better gates than we currently foresee, one way or another, but I don't really know how they're going to do it.

Dave Bacon 

Is our future steampunk, where steam engines power all of our computation and giant dilution refrigerators grow to the size of small football fields? Or is it like our day-to-day computers, which are nuts in terms of performance but would require significant changes in our hardware? I think the fascinating thing is that error correction points out that it's possible to build these robust quantum computers. And I still believe strongly that finding the right substrate will lead to a future of reasonably sized quantum computers. Is that 50 years, or 100, or 1,000? I don't know. But it's just amazing that nature even allows us to do this. It was an amazing discovery, after Shor's algorithm, that we can encode information to protect it. And if we can find a way to do that without all the crazy software that I have to write to program the damn thing, I'll be super happy not to have to write the error-correcting software.

John Preskill 

To be fair, the competitor of the quantum computer is the most powerful supercomputers that we have today, which are digital and classical. And they're big mothers, too.

Dave Bacon

That's true. But when you think about it, current experiments are actually small compared to Summit, which is one of the largest supercomputers. So we've set ourselves up for a tall task, right? Competing against a supercomputer with a physics experiment on 50 or 100 qubits. It's kind of crazy. In some sense it's a David and Goliath story, which is part of the excitement in the field.

John Preskill  

But I guess once we can build what Joe is envisioning like a desktop-sized quantum computer, then it'll be irresistible to put a huge number of them together. 

Joe Fitzsimons  

I wasn't so much suggesting that as wondering whether we might transition to a period where, instead of encoding one logical qubit into many physical qubits, we might encode a few logical qubits into many physical qubits, so that the overhead stops looking multiplicative, for example as you enter a regime with lower physical error rates. I wasn't necessarily suggesting quantum iPhones, although that would be cool if you guys can build one. It's definitely worth the $1,000 to me.

John Preskill 

There is some theory backing up that vision, but it's going to require much better error rates, and probably also the type of nonlocality Dave referred to earlier, to realize these more powerful coding schemes. It seems like being restricted to geometric locality in three dimensions will be a limitation.

Joe Fitzsimons  

Let me be respectful of your time and thank you both very, very much for joining us. Do either of you have any last words for the episode?

Dave Bacon 

Nobody wants to go first on last words, because then your words aren't last, but I'll go first anyway. I do believe we're in an extremely exciting time for error correction, but it's still a long push to get even the basic building blocks working, so we should be realistic about the timelines we're seeing for fault tolerance. On the other hand, reading the arXiv for error correction is exciting now. There are people thinking crazy thoughts every day, in academia and in industry. I think we're entering another one of those heydays of really interesting progress in the field. And it's thanks to people like John, who kept a lot of the lights on while the hardware was catching up, that this exists. So, it's been fun to talk to everyone about this.

John Preskill  

I think quantum error correction is actually one of the great insights in the history of physics. Of course, when I began my career as a theoretical physicist, I never expected to do anything that would be of interest to technologists, so it seems like kind of a miracle that that happened. But as a physicist, I also find it exciting, not just how we're going to use quantum error correction to make quantum computers work, but that it gives us new ways of thinking about deep problems in physics, about understanding nature. Does nature make use of error correction in surprising ways? We think that's probably the case: in trying to understand how gravitation works at the quantum level, error correction ideas seem to enter very naturally. So, I think an idea as good as quantum error correction is probably going to yield many surprises down the road. But my last word is that this was really fun. So, thanks for the opportunity to join this conversation.

Joe Fitzsimons  

Thank you both once again sincerely for joining us and for sharing your expertise with us. And thanks to the audience for joining as well. 

In the next instalment of Quantum Well, we'll discuss what happens when you allow quantum computers to talk to one another. Quantum communication between nodes on a quantum network opens up a range of new applications, from cryptography and secure computing to sensor networks and distributed quantum computing. Quantum networks are particularly interesting because they allow for a quantum advantage on communications problems with significantly less complex devices than are required to show a computational advantage, something we already see with quantum key distribution.

What are the barriers we face to achieving such networks? And what are the new applications they unlock? We'll explore these questions in the next episode of Quantum Well.

What is Quantum Well?

What barriers will we have to overcome to make quantum computing relevant for solving real-world problems? We explore this question in Horizon’s Quantum Well series, where we invite experts to discuss how different barriers are being addressed. In each episode, we talk with two scientists who are putting their energy into tunnelling through these barriers to useful quantum computing.

Related articles

View all