Why do error correction protocols only work when the error rates are already significantly low?



Quantum error correction is a fundamental aspect of quantum computing, without which large-scale quantum computations are practically infeasible.

One aspect of fault-tolerant quantum computing that is often mentioned is that every error correction protocol has an associated error rate threshold. Basically, for a given computation to be protectable against errors by a given protocol, the error rate of the gates must be below a certain threshold.

In other words, if the error rates of the individual gates are not low enough, it is not possible to apply error correction protocols to make the computation more reliable.

Why is this? Why is it not possible to reduce error rates that are not already very low to begin with?


Well, at some point there is simply noise. Is it that strange that there is a point beyond which error correction is more likely to corrupt the correct parts than to fix the noise?
Discrete lizard

@Discretelizard not so much that there is one, maybe, but the thresholds are usually very low (or high, in terms of fidelity). Why is that?
glS

Answers:



We want to compare an output state with some ideal state, so normally the fidelity, $F(|\psi\rangle, \rho)$, is used, as this is a good way of telling how well the possible measurement outcomes of $\rho$ compare with the possible measurement outcomes of $|\psi\rangle$, where $|\psi\rangle$ is the ideal output state and $\rho$ is the (potentially mixed) state actually reached after some noise process. As we're comparing states, this is
$$F(|\psi\rangle, \rho) = \sqrt{\langle\psi|\rho|\psi\rangle}.$$

Describing both the noise and the error correction processes using Kraus operators, where $\mathcal N$ is the noise channel with Kraus operators $N_i$ and $\mathcal E$ is the error correction channel with Kraus operators $E_j$, the state after the noise is
$$\rho' = \mathcal N\left(|\psi\rangle\langle\psi|\right) = \sum_i N_i|\psi\rangle\langle\psi|N_i^\dagger$$
and the state after both noise and error correction is
$$\rho = \mathcal E\mathcal N\left(|\psi\rangle\langle\psi|\right) = \sum_{i,j} E_jN_i|\psi\rangle\langle\psi|N_i^\dagger E_j^\dagger.$$

The fidelity of this is given by
$$F(|\psi\rangle, \rho) = \sqrt{\langle\psi|\rho|\psi\rangle} = \sqrt{\sum_{i,j}\langle\psi|E_jN_i|\psi\rangle\langle\psi|N_i^\dagger E_j^\dagger|\psi\rangle} = \sqrt{\sum_{i,j}\left|\langle\psi|E_jN_i|\psi\rangle\right|^2}.$$
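
As a quick numerical sanity check of this identity (my own addition, not part of the original answer), the following Python sketch compares $\sqrt{\langle\psi|\rho|\psi\rangle}$ with $\sqrt{\sum_{i,j}|\langle\psi|E_jN_i|\psi\rangle|^2}$ for an arbitrary single-qubit example; the bit-flip "noise" and phase-flip "correction" channels are placeholders chosen only to test the algebra, not an actual error correcting scheme.

```python
import numpy as np

# Random single-qubit pure state |psi>
rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

p, q = 0.2, 0.1
# "Noise" channel N: bit flip with probability p
N_kraus = [np.sqrt(1 - p) * I, np.sqrt(p) * X]
# Stand-in "correction" channel E: phase flip with probability q
# (any CPTP map works here -- we only check the algebraic identity)
E_kraus = [np.sqrt(1 - q) * I, np.sqrt(q) * Z]

# rho = E(N(|psi><psi|))
rho = sum(E @ N @ np.outer(psi, psi.conj()) @ N.conj().T @ E.conj().T
          for N in N_kraus for E in E_kraus)

F_direct = np.sqrt(np.real(psi.conj() @ rho @ psi))
F_sum = np.sqrt(sum(abs(psi.conj() @ E @ N @ psi) ** 2
                    for N in N_kraus for E in E_kraus))

print(F_direct, F_sum)  # the two expressions agree
```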

For the error correction protocol to be of any use, we want the fidelity after error correction to be larger than the fidelity after the noise but before error correction, so that the error-corrected state is less distinguishable from the ideal state than the uncorrected one is. That is, we want
$$F(|\psi\rangle, \rho) > F(|\psi\rangle, \rho').$$
As fidelity is positive, this can be rewritten as
$$\sum_{i,j}\left|\langle\psi|E_jN_i|\psi\rangle\right|^2 > \sum_i\left|\langle\psi|N_i|\psi\rangle\right|^2.$$

Split $\mathcal N$ into the correctable part, $\mathcal N_c$, for which $\mathcal E\mathcal N_c\left(|\psi\rangle\langle\psi|\right) = |\psi\rangle\langle\psi|$, and the non-correctable part, $\mathcal N_{nc}$, for which $\mathcal E\mathcal N_{nc}\left(|\psi\rangle\langle\psi|\right) = \sigma$. Denoting the probability that the error is correctable by $\mathbb P_c$, and that it is non-correctable (i.e. too many errors occurred to reconstruct the ideal state) by $\mathbb P_{nc}$, gives
$$\sum_{i,j}\left|\langle\psi|E_jN_i|\psi\rangle\right|^2 = \mathbb P_c + \mathbb P_{nc}\,\langle\psi|\sigma|\psi\rangle \geq \mathbb P_c,$$
where equality will be assumed by taking $\langle\psi|\sigma|\psi\rangle = 0$. That is, a false 'correction' is assumed to project onto a result orthogonal to the correct one.

For $n$ qubits with an (equal) probability $p$ of an error on each qubit (note: this is not the same as the noise parameter, which would have to be used to calculate the probability of an error), the probability of having a correctable error (assuming that the $n$ qubits are used to encode $k$ qubits, allowing for errors on up to $t$ qubits, as determined by the Singleton bound $n - k \geq 4t$) is
$$\mathbb P_c = \sum_{j=0}^{t}\binom{n}{j}p^j(1-p)^{n-j} = (1-p)^n + np(1-p)^{n-1} + \tfrac{1}{2}n(n-1)p^2(1-p)^{n-2} + \mathcal O\!\left(p^3\right) = 1 - \binom{n}{t+1}p^{t+1} + \mathcal O\!\left(p^{t+2}\right).$$
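
A minimal sketch (my own, not from the answer) of this formula: it computes $\mathbb P_c$ exactly and compares it with the leading-order expansion $1-\binom{n}{t+1}p^{t+1}$ for the $n=5$, $t=1$ case used below.

```python
from math import comb

def p_correctable(n, t, p):
    """Probability that at most t of the n qubits are hit by an error."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1))

n, t = 5, 1
for p in (0.01, 0.05, 0.1):
    exact = p_correctable(n, t, p)
    approx = 1 - comb(n, t + 1) * p**(t + 1)   # leading-order expansion
    print(f"p={p}: P_c={exact:.6f}, 1 - C(n,t+1)*p^(t+1) = {approx:.6f}")
```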

Noise channels can also be written in terms of Pauli operators as $N_i = \sum_j \alpha_{i,j}P_j$, which, upon defining $\chi_{j,k} = \sum_i \alpha_{i,j}\alpha^*_{i,k}$, gives
$$\sum_i\left|\langle\psi|N_i|\psi\rangle\right|^2 = \sum_{j,k}\chi_{j,k}\langle\psi|P_j|\psi\rangle\langle\psi|P_k|\psi\rangle^* \geq \chi_{0,0},$$
where $\chi_{0,0} = (1-p)^n$ is the probability that no error happens on any qubit (taking $P_0$ to be the identity).

Within this rough estimate, error correction therefore helps when
$$1 - \binom{n}{t+1}p^{t+1} \gtrsim (1-p)^n.$$
Roughly speaking, the left-hand side falls short of $1$ by an amount of order $p^{t+1}$, while the right-hand side falls short of $1$ by an amount of order $np$, so the inequality requires $p^{t+1} \ll p$, which only holds when $p$ is small. For example, taking $n = 5$ and hence $t = 1$, the two sides cross at $p \approx 0.29$, although this is very much just an estimate.
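
To check the quoted estimate, here is a short sketch (my own) that finds where $1-\binom{n}{t+1}p^{t+1}$ and $(1-p)^n$ cross for $n=5$, $t=1$ by bisection.

```python
from math import comb

n, t = 5, 1

def gain(p):
    # positive when (this estimate of) error correction helps:
    #   1 - C(n, t+1) * p**(t+1)  >  (1 - p)**n
    return (1 - comb(n, t + 1) * p ** (t + 1)) - (1 - p) ** n

# simple bisection for the crossover between p = 0.1 (helps) and p = 0.5 (doesn't)
lo, hi = 0.1, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if gain(mid) > 0 else (lo, mid)

print((lo + hi) / 2)   # about 0.285, i.e. the p ~ 0.29 quoted above
```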

Edit from comments:

As $\mathbb P_c + \mathbb P_{nc} = 1$, this gives
$$\sum_{i,j}\left|\langle\psi|E_jN_i|\psi\rangle\right|^2 = \langle\psi|\sigma|\psi\rangle + \mathbb P_c\left(1 - \langle\psi|\sigma|\psi\rangle\right).$$

Plugging this in as above further gives
$$1 - \left(1 - \langle\psi|\sigma|\psi\rangle\right)\binom{n}{t+1}p^{t+1} \gtrsim (1-p)^n,$$
which is the same behaviour as before, only with a different constant.

This also shows that, although error correction can increase the fidelity, it cannot increase it all the way to 1, especially as there will be errors (e.g. gate errors, since no gate can be implemented perfectly in reality) arising from implementing the error correction itself. As any reasonably deep circuit requires, by definition, a reasonable number of gates, the fidelity after each gate will (on average) be lower than the fidelity after the previous gate, and the error correction protocol will become less effective. There will then be a cut-off number of gates at which point the error correction protocol decreases the fidelity and the errors continually compound.

This shows, to a rough approximation, that error correction, or merely reducing the error rates, is not enough for fault tolerant computation, unless errors are extremely low, depending on the circuit depth.


I think you're trying to explain up to which physical error rate the probability of uncorrectable errors is low? Note that fault-tolerance thresholds are smaller (by orders of magnitude for many codes).
M. Stern

@M.Stern So this is a (very rough) estimate for when an error correction 'decreases the error' (i.e. increases the fidelity by some amount after noise is applied), so it's definitely not a fault tolerant threshold, no. Performing error correction may have increased the fidelity after the noise by some amount, but it hasn't reset it or anything, so fidelity will just decrease (and the error(s)) propagate even if error correction is constantly applied, showing error correction by itself isn't enough for fault tolerance
Mithrandir24601

Hm, glS will have to judge if that answers the question. In any case it's interesting and well written. So you assume that the state is orthogonal if the errors were uncorrectable, right? (That's certainly reasonable in many scenarios.) The other extreme would be when there is a 50/50 chance of a logical error in case of uncorrectable errors.
M. Stern

@M.Stern Thanks! Yes, either that states are orthogonal, or taking the lower bound. As comparing one lower bound with another isn't a great idea, I went with the assumption that they're orthogonal. If there's any edits you feel would be useful to add to the end of this, work away! Hmm... I think taking a 50/50 chance of logical error would lead to the same result, only with different prefactors at the end
Mithrandir24601


There is a good mathematical answer already, so I'll try and provide an easy-to-understand one.

Quantum error correction (QEC) is a (group of) rather complex algorithm(s) that requires a lot of actions (gates) on and between qubits. In QEC, you pretty much connect two qubits to a third helper qubit (ancilla) and transfer into that third qubit the information about whether the other two are equal (in some specific regard). Then you read that information out of the ancilla. If it tells you that they are not equal, you act on that information (apply a correction). So how can that go wrong if our qubits and gates are not perfect?
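
To make the ancilla-based comparison a little more concrete, here is a minimal numpy sketch (my own illustration, not taken from the answer) of the basic building block: two CNOTs copy the parity of two data qubits into an ancilla, which can then be read out.

```python
import numpy as np

def cnot(n_qubits, control, target):
    """CNOT as a 2^n x 2^n permutation matrix (qubit 0 = most significant bit)."""
    dim = 2 ** n_qubits
    U = np.zeros((dim, dim))
    for basis in range(dim):
        bits = [(basis >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        if bits[control]:
            bits[target] ^= 1
        out = sum(b << (n_qubits - 1 - q) for q, b in enumerate(bits))
        U[out, basis] = 1
    return U

# Two data qubits in the (classical-looking) states |0>, |1>, plus an ancilla in |0>
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
state = np.kron(np.kron(ket0, ket1), ket0)   # |0 1 0>

# Copy the parity of qubits 0 and 1 into the ancilla (qubit 2)
U = cnot(3, 1, 2) @ cnot(3, 0, 2)
state = U @ state

# The ancilla now holds parity 1, i.e. "the two data qubits disagree"
probs = np.abs(state) ** 2
print(np.argmax(probs))  # index 0b011 = 3: data unchanged, ancilla flipped to 1
```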

QEC itself can make the information stored in your qubits decay: each of these gates can degrade the information stored in the qubits if it is not executed perfectly. So if just executing the QEC destroys more information than it recovers on average, it's useless.

You think you found an error, but you didn't. If the comparison (the execution of the gates) or the readout of the information (from the ancilla) is imperfect, you might obtain wrong information and thus apply "wrong corrections" (read: introduce errors). And if the information in the ancillas decays (or is changed by noise) before you can read it out, you will likewise get a wrong readout.

The goal of every QEC scheme is obviously to introduce fewer errors than it corrects for, so you need to minimize the aforementioned effects. If you do all the math, you find pretty strict requirements on your qubits, gates and readouts (depending on the exact QEC algorithm you chose).



Classical Version

Think about a simple strategy of classical error correction. You've got a single bit that you want to encode,

$$0 \mapsto 00000, \qquad 1 \mapsto 11111.$$
I've chosen to encode it into 5 bits, but any odd number would do (the more the better). Now, let's assume some bit-flip errors have occurred, so what we have is
01010.
Was this originally the encoded 0 or 1? If we assume that the probability of error per bit, p, is less than a half, then we expect that fewer than half the bits have errors. So, we look at the number of 0s and the number of 1s. Whichever there's more of is the one that we assume is the one we started with. This is called a majority vote. There's some probability that we're wrong, but the more bits we encoded into, the smaller this probability.
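
Here is a short Python sketch (my own addition) of this majority-vote strategy for the 5-bit repetition code; it estimates the logical error rate by Monte Carlo and compares it with the exact binomial expression.

```python
from math import comb
import random

def encode(bit, n=5):
    return [bit] * n

def noisy(codeword, p, rng):
    # flip each bit independently with probability p
    return [b ^ (rng.random() < p) for b in codeword]

def majority_decode(codeword):
    return int(sum(codeword) > len(codeword) / 2)

def logical_error_rate(n, p):
    """Exact probability that more than half of the n bits flip."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n // 2 + 1, n + 1))

rng = random.Random(1)
p = 0.1
trials = 100_000
fails = sum(majority_decode(noisy(encode(0), p, rng)) != 0 for _ in range(trials))
print(fails / trials, logical_error_rate(5, p))  # ~0.0086 for n=5, p=0.1, well below p
```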

On the other hand, if we know that $p>\frac12$, we can still do the correction. You'd just be implementing a minority vote! The point, however, is that you have to do completely the opposite operation. There's a sharp threshold here that shows, at the very least, that you need to know which regime you're working in.

For fault-tolerance, things get messier: the 01010 string that you got might not be what the state actually is. It might be something different, still with some errors that you have to correct, but the measurements you've made in reading the bits are also slightly faulty. Crudely, you might imagine this turns the sharp transition into an ambiguous region where you don't really know what to do. Still, if error probabilities are low enough, or high enough, you can correct, you just need to know which is the case.

Quantum Version

In general, things get worse in the quantum regime because you have to deal with two types of errors: bit flip errors (X) and phase flip errors (Z), and that tends to make the ambiguous region bigger. I won't go further into details here. However, there's a cute argument in the quantum regime that may be illuminating.

Imagine you have the state of a single logical qubit stored in a quantum error correcting code $|\psi\rangle$ across $N$ physical qubits. It doesn't matter what that code is; this is a completely general argument. Now imagine there's so much noise that it destroys the quantum state on $N/2$ qubits ("so much noise" actually means that errors happen with 50:50 probability, not close to 100%, which, as we've already said, can be corrected). It is impossible to correct for that error. How do I know that? Imagine I had a completely noiseless version, and I keep $N/2$ qubits and give the remaining qubits to you. We each introduce enough blank qubits so that we've got $N$ qubits in total, and we run error correction on them.

[Figure: cloning demonstration]

If it were possible to perform that error correction, the outcome would be that both of us would have the original state $|\psi\rangle$. We would have cloned the logical qubit! But cloning is impossible, so the error correction must have been impossible.



To me there seem to be two parts of this question (one more related to the title, one more related to the question itself):

1) Up to what amount of noise are error correction codes effective?
2) With what amount of imperfection in the gates can we implement fault-tolerant quantum computations?

Let me first stress the difference: quantum error correction codes can be used in many different scenarios, for example to correct for losses in transmission. There, the amount of noise mostly depends on the length of the optical fibre and not on the imperfection of the gates. However, if we want to implement fault-tolerant quantum computation, the gates are the main source of noise.

On 1)

Error correction works even for fairly large error rates (anything smaller than 1/2). Take, for example, the simple 3-qubit repetition code. Its logical error rate is just the probability of the majority vote being wrong (the orange line in the plot below is $f(p)=p$ for comparison):

[Figure: plot of the physical error rate $p$ vs. the logical error rate of the 3-qubit repetition code, with $f(p)=p$ for comparison]

So whenever the physical error rate $p$ is below 1/2, the logical error rate is smaller than $p$. Note, however, that the code is particularly effective for small $p$, because it changes the rate from $\mathcal O(p)$ to $\mathcal O(p^2)$ behaviour.
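
A minimal sketch (my addition) reproducing the behaviour in the plot: for the 3-qubit repetition code under independent bit flips, the majority vote fails when two or three qubits flip, so $p_L = 3p^2(1-p)+p^3$, which is below $p$ exactly when $p<1/2$.

```python
# Logical error rate of the 3-qubit repetition code under independent bit flips:
# the majority vote fails when 2 or 3 of the qubits flip.
def logical_rate(p):
    return 3 * p**2 * (1 - p) + p**3   # = 3p^2 - 2p^3

for p in (0.01, 0.1, 0.3, 0.5, 0.6):
    pl = logical_rate(p)
    print(f"p = {p:.2f}  p_L = {pl:.4f}  " + ("helps" if pl < p else "no gain"))
```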

On 2)

We want to perform arbitrarily long quantum computations with a quantum computer. However, the quantum gates are not perfect. In order to cope with the errors introduced by the gates, we use quantum error correction codes. This means that one logical qubit is encoded into many physical qubits. This redundancy allows us to correct for a certain amount of errors on the physical qubits, such that the information stored in the logical qubit remains intact. Bigger codes allow longer calculations to still be accurate. However, larger codes also involve more gates (for example more syndrome measurements), and these gates introduce noise. You see there is some trade-off here, and which code is optimal is not obvious.
If the noise introduced by each gate is below some threshold (the fault-tolerance or accuracy threshold), then it is possible to increase the code size to allow for arbitrarily long calculations. This threshold depends on the code we started with (usually it is iteratively concatenated with itself). There are several ways to estimate this value. Often it is done by numerical simulation: Introduce random errors and check whether the calculation still worked. This method typically gives threshold values which are too high. There are also some analytical proofs in the literature, for example this one by Aliferis and Cross.
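
One can get a feel for the threshold mechanism from a toy model (my own sketch; the constant `c` is arbitrary and not the value for any real code): assume each level of concatenation replaces an effective error rate $p$ by roughly $c\,p^2$, so the toy threshold is $p_{\rm th}=1/c$.

```python
# Toy model of code concatenation: one level of encoding replaces an effective
# error rate p by roughly c * p**2, so the (toy) threshold is p_th = 1/c.
c = 100.0          # arbitrary illustrative constant, not a real code's value
p_th = 1 / c

def concatenate(p, levels):
    for _ in range(levels):
        p = c * p * p
    return p

for p0 in (0.5 * p_th, 0.9 * p_th, 1.1 * p_th):
    rates = [concatenate(p0, L) for L in range(5)]
    print(f"p0/p_th = {p0 / p_th:.1f}: " + ", ".join(f"{r:.2e}" for r in rates))
# Below threshold the rate falls doubly exponentially with the number of levels;
# above threshold it grows instead.
```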


The second paragraph touches the right points, but it is still very qualitative. You are saying that you need the gates introduced by the error correction protocol to reduce the error rate more than they increase it. However, how does one go from this intuitive idea to an actual quantitative estimate of the threshold? Also, does this imply a universal lower threshold that no error correcting protocol can beat?
glS

@glS I suspect that there is such a "universal lower threshold", i.e. an error value above which there exist no fault tolerant correction protocols. However, the value should depend on both your gate set and your error model. People tend to be more interested in positive results here (showing the existence of a good fault tolerant protocol). It may be interesting to find upper bounds in order to see "how much room we have left" in making our fault tolerant schemes better. I'd guess there isn't much room left.
Jalex Stark

@glS You're right, some actual quantitative calculation would improve this answer. I think these calculations are typically done numerically? But I also want to know about this
M. Stern

@JalexStark What makes you think there is not much room left? For example the surface code doesn't seem to be optimized w.r.t. this threshold. It uses only nearest neighbor interactions on a lattice and you could do a lot more in general.
M. Stern

@M.Stern I don't have any theorem-based evidence, and I'm not an expert in the area. I was just guessing based on the amount of work done and on how large the best thresholds are.
Jalex Stark


You need a surprisingly large number of quantum gates to implement a quantum error correcting code in a fault-tolerant manner. One part of the reason is that there are many errors to detect since a code that can correct all single qubit errors already requires 5 qubits and each error can be of three kinds (corresponding to unintentional X, Y, Z gates). Hence to even just correct any single qubit error, you already need logic to distinguish between these 15 errors plus the no-error situation: XIIII, YIIII, ZIIII, IXIII, IYIII, IZIII, IIXII, IIYII, IIZII, IIIXI, IIIYI, IIIZI, IIIIX, IIIIY, IIIIZ, IIIII where X, Y, Z are the possible single qubit errors and I (identity) denotes the no-error-for-this-qubit situation.
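
The counting in this paragraph can be checked directly: with $n$ physical qubits encoding $k=1$ logical qubit there are $2^{n-k}$ distinct syndromes, and (for a non-degenerate code) they must at least cover the identity plus the $3n$ single-qubit Pauli errors. The short sketch below (my own) shows that $n=5$ is the smallest size for which this works, matching the 16 cases listed above.

```python
from math import comb

def num_errors_up_to_weight(n, t):
    """Identity plus all Pauli errors acting on at most t of the n qubits."""
    return sum(comb(n, j) * 3**j for j in range(t + 1))

k, t = 1, 1
for n in range(2, 8):
    syndromes = 2 ** (n - k)                  # distinct syndrome outcomes
    errors = num_errors_up_to_weight(n, t)    # 1 + 3n for t = 1
    ok = syndromes >= errors
    print(f"n={n}: {errors} error patterns vs {syndromes} syndromes -> "
          + ("enough" if ok else "not enough"))
# n = 5 is the first n with 2^(n-1) >= 1 + 3n (16 >= 16): the 5-qubit 'perfect' code.
```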

The main part of the reason is, however, that you cannot use straightforward error detection circuitry: every CNOT (or any other nontrivial gate on two or more qubits) forwards errors from one qubit to another, which would be disastrous for the most trivial case of a single-qubit error correcting code and still very bad for more sophisticated codes. Hence a fault-tolerant (useful) implementation of an error correcting code needs even more effort than one might naively think.

With many gates per error correcting step, you can only permit a very low error rate per step. Here yet another problem arises: since you may have coherent errors, you must be ready for the worst case in which an error $\epsilon$ propagates not as $N\epsilon$ after $N$ single-qubit gates but as $N^2\epsilon$. This value must remain sufficiently low such that you gain overall after correcting some (but not all) errors, for example single-qubit errors only.

An example of a coherent error is an implementation of a gate $G$ that, to first order, applies not simply $G$ but $G + \sqrt{\epsilon}\,X$, which you might call an error of $\epsilon$ because that is the probability corresponding to the probability amplitude $\sqrt{\epsilon}$, and hence the probability that a measurement directly after the gate reveals that it acted as the error $X$. After $N$ applications of this gate, again to first order, you have actually applied $G^N + N\sqrt{\epsilon}\,G^{N-1}X$ (if $G$ and $X$ commute; otherwise a more complicated construct that has $N$ distinct terms proportional to $\sqrt{\epsilon}$). Hence you would, if measuring then, find an error probability of $N^2\epsilon$.
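
A small numpy sketch (my own, with a systematic over-rotation standing in for the coherent error and a classical bit flip for the incoherent one) illustrates the difference: the coherent error builds up an error probability of roughly $N^2\epsilon$ after $N$ gates, while the incoherent one only accumulates roughly $N\epsilon$.

```python
import numpy as np

eps = 1e-6   # per-gate error probability
N = 100      # number of gate applications

# Coherent error: each "gate" over-rotates by an angle theta with sin(theta)^2 = eps,
# i.e. an amplitude sqrt(eps) leaks towards the orthogonal (error) state every time.
theta = np.arcsin(np.sqrt(eps))
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
state = np.array([1.0, 0.0])
for _ in range(N):
    state = R @ state
p_coherent = state[1] ** 2          # = sin(N*theta)^2, roughly N^2 * eps

# Incoherent error: an independent bit flip applied with probability eps at each step.
p_flip = 0.0
for _ in range(N):
    p_flip = p_flip * (1 - eps) + (1 - p_flip) * eps
p_incoherent = p_flip               # roughly N * eps for small eps

print(f"coherent:   {p_coherent:.3e}  (N^2 * eps = {N**2 * eps:.3e})")
print(f"incoherent: {p_incoherent:.3e}  (N * eps   = {N * eps:.3e})")
```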

Incoherent errors are more benign. Yet if one must give a single value as an error threshold, then one cannot choose to only assume benign errors!


Thanks for the answer, but I would appreciate it if you could expand the answer to say more about some of the points here. In particular, 1) what do you mean exactly by saying that you need many gates in the error correcting code because there are "many errors to detect"? 2) What do you mean by "straight-forward logical construct"? 3) Why do "coherent errors" imply an error propagation scaling like $N^2\epsilon$ instead of $N\epsilon$?
glS

@glS I have substantially expanded the answer to address all your questions. Did I manage to do that?
pyramids