Now, adding to M. Stern's answer:
The main reason error correction is necessary for quantum computers is that qubits have a continuum of states (for simplicity, I'm only considering qubit-based quantum computers for the moment).
For instance, a state α|0⟩ + β|1⟩ can drift into α|0⟩ + βe^{iϕ}|1⟩ and then into α|0⟩ + βe^{i(ϕ+δ)}|1⟩ as small phase errors accumulate. The actual state is close to the correct state, but still wrong. If we don't do something about this, the small errors will build up over the course of time and eventually become a big error.
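As a toy numerical sketch of my own (the step count and error size are assumptions, not from the answer), one can simulate how a small systematic phase error per gate steadily drags the state away from the intended one:

```python
import numpy as np

# Intended state: alpha|0> + beta|1> (equal superposition, for illustration)
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
ideal = np.array([alpha, beta], dtype=complex)

# Each "gate" is meant to leave the state unchanged, but instead adds a
# small phase error delta to the |1> amplitude: beta -> beta * e^{i*delta}.
delta = 0.001  # assumed phase error per operation (radians)
state = ideal.copy()
for _ in range(1000):
    state[1] *= np.exp(1j * delta)

# After 1000 operations the accumulated phase error is 1 radian, and the
# fidelity |<ideal|state>|^2 = cos^2(0.5) ~ 0.77 instead of 1.
fidelity = abs(np.vdot(ideal, state)) ** 2
```

Note that each individual step barely changes the state, yet the drift compounds: exactly the "small errors build up" problem described above.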
Moreover, quantum states are very delicate, and any interaction with the environment can cause decoherence and collapse of a state like α|0⟩ + β|1⟩ to |0⟩ with probability |α|² or to |1⟩ with probability |β|².
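A quick sketch of my own (the amplitudes and shot count are arbitrary choices) showing that sampling many such collapses reproduces the Born-rule probabilities |α|² and |β|²:

```python
import numpy as np

rng = np.random.default_rng(42)

# State alpha|0> + beta|1> with |alpha|^2 = 0.64 and |beta|^2 = 0.36
alpha, beta = 0.8, 0.6
p0 = abs(alpha) ** 2  # Born-rule probability of collapsing to |0>

# Measure 100,000 identically prepared copies; each measurement
# collapses the state to |0> with probability p0, otherwise to |1>.
shots = 100_000
collapsed_to_0 = rng.random(shots) < p0
freq0 = collapsed_to_0.mean()  # empirical frequency, close to 0.64
```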
In a classical computer, suppose a bit's value is replicated n times as follows:
0→00000...n times
and
1→11111...n times
If, after this step, something like 0001000100 is produced, the classical computer can correct it to 0000000000, because the majority of the bits are 0s and the intended result of the operation was most probably the 0 bit replicated 10 times.
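The classical majority-vote correction described above fits in a few lines; here is a minimal sketch (function names are my own):

```python
def encode(bit: int, n: int = 10) -> str:
    """Replicate a classical bit n times: 0 -> '0000000000'."""
    return str(bit) * n

def correct(word: str) -> str:
    """Majority vote: restore whichever bit value dominates."""
    majority = "0" if word.count("0") > len(word) // 2 else "1"
    return majority * len(word)

corrected = correct("0001000100")  # two flipped bits out of ten
```

As long as fewer than half the copies are flipped, the original bit is recovered exactly; this is what has no direct quantum analogue, for the reasons given next.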
But such an error correction method won't work for qubits. First of all, directly duplicating qubits is impossible due to the no-cloning theorem. Secondly, even if you could replicate |ψ⟩ = α|0⟩ + β|1⟩ ten times, it's highly probable that you'd end up with something like (α|0⟩ + β|1⟩) ⊗ (αe^{iϵ₁}|0⟩ + βe^{iϵ′₁}|1⟩) ⊗ (αe^{iϵ₂}|0⟩ + βe^{iϵ′₂}|1⟩) ⊗ ..., i.e. with errors in the phases, where all the qubits are in different states. That is, the situation is no longer binary. A quantum computer, unlike a classical one, can no longer say "since the majority of the bits are in the 0 state, let me convert the rest to 0!" to correct an error which occurred during the operation, because all 10 states of the 10 different qubits might differ from each other after the so-called "replication". The number of such possible errors keeps growing rapidly as more and more operations are performed on a system of qubits. M. Stern indeed used the right terminology in their answer to your question: "that doesn't scale well".
So, you need a different breed of error-correcting techniques to deal with errors occurring during the operation of a quantum computer: ones that can handle not only bit-flip errors but also phase-shift errors, and that are resistant to unintentional decoherence. One thing to keep in mind is that most quantum gates will not be "perfect", even though with the right set of "universal quantum gates" you can get arbitrarily close to building any quantum gate which (in theory) performs a unitary transformation.
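For concreteness, the two error types named above correspond to Pauli matrices: X swaps the |0⟩ and |1⟩ amplitudes (bit flip), while Z negates the |1⟩ amplitude (phase flip). A small sketch (the example amplitudes are my own):

```python
import numpy as np

# Pauli matrices for the two basic single-qubit error types.
X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase flip

alpha, beta = 0.6, 0.8
psi = np.array([alpha, beta], dtype=complex)  # alpha|0> + beta|1>

bit_flipped = X @ psi    # -> beta|0> + alpha|1>
phase_flipped = Z @ psi  # -> alpha|0> - beta|1>
```

A classical bit has no analogue of the phase-flip error, which is one reason classical repetition alone cannot protect a qubit.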
Niel de Beaudrap mentions that there are clever ways to apply classical error correction techniques so that they can correct many of the errors occurring during quantum operations, which is indeed correct and is exactly what present-day quantum error-correcting codes do. I'd like to add the following from Wikipedia, as it might give some clarity about how quantum error-correcting codes deal with the problem described above:
Classical error correcting codes use a syndrome measurement to
diagnose which error corrupts an encoded state. We then reverse an
error by applying a corrective operation based on the syndrome.
Quantum error correction also employs syndrome measurements. We
perform a multi-qubit measurement that does not disturb the quantum
information in the encoded state but retrieves information about the
error. A syndrome measurement can determine whether a qubit has been
corrupted, and if so, which one. What is more, the outcome of this
operation (the syndrome) tells us not only which physical qubit was
affected, but also, in which of several possible ways it was affected.
The latter is counter-intuitive at first sight: Since noise is
arbitrary, how can the effect of noise be one of only few distinct
possibilities? In most codes, the effect is either a bit flip, or a
sign (of the phase) flip, or both (corresponding to the Pauli matrices
X, Z, and Y). The reason is that the measurement of the syndrome has
the projective effect of a quantum measurement. So even if the error
due to the noise was arbitrary, it can be expressed as a superposition
of basis operations—the error basis (which is here given by the Pauli
matrices and the identity). The syndrome measurement "forces" the
qubit to "decide" for a certain specific "Pauli error" to "have
happened", and the syndrome tells us which, so that we can let the
same Pauli operator act again on the corrupted qubit to revert the
effect of the error.
The syndrome measurement tells us as much as possible about the error
that has happened, but nothing at all about the value that is stored
in the logical qubit—as otherwise the measurement would destroy any
quantum superposition of this logical qubit with other qubits in the
quantum computer.
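The quoted claim that an arbitrary error can be expressed in the error basis {I, X, Y, Z} can be checked numerically. Since the Paulis are orthogonal under the trace inner product, any 2×2 operator E expands as E = Σ_k c_k P_k with c_k = tr(P_k E)/2. A sketch of my own (the random "error" is just for illustration):

```python
import numpy as np

# Error basis: identity plus the three Pauli matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

rng = np.random.default_rng(7)
# An arbitrary (not even unitary) 2x2 "error" acting on a single qubit.
E = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# tr(P_j P_k) = 2*delta_jk for Hermitian Paulis, so c_k = tr(P_k E) / 2.
coeffs = [np.trace(P @ E) / 2 for P in paulis]
reconstructed = sum(c * P for c, P in zip(coeffs, paulis))
```

This linearity is what lets the projective syndrome measurement "force" an arbitrary error into one of a few discrete Pauli possibilities.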
Note: I haven't given any example of actual quantum error correcting techniques. There are plenty of good textbooks out there which discuss this topic. However, I hope this answer will give the readers a basic idea of why we need error correcting codes in quantum computation.
Recommended Video Lecture:
Mini Crash Course: Quantum Error Correction by Ben Reichardt, University of Southern California