
I have started learning about quantum computing, and I have been told that you can forget about the physics and think of a qubit as a natural generalization of the notion of a bit. According to this view, a qubit is a bit whose states $0$ and $1$ are in superposition; there is "a little bit of $0$ and a little bit of $1$".

This is all fine, I think, but if I were to model this formally, I would model a qubit as a real probability coefficient $\alpha \in [0,1] \subseteq \mathbb{R}$, representing the amount of $0$ in my qubit (thus $1-\alpha$ being the amount of $1$). Instead, in quantum computing you define a qubit as

$$\alpha_0|0\rangle + \alpha_1|1\rangle$$ where $\alpha_0, \alpha_1 \in \mathbb{C}$, and $|\alpha_0|^2 + |\alpha_1|^2 = 1$.
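
To make sure I understand this definition, here is a small sketch of how I would represent such a state numerically (Python/NumPy; the particular coefficients are just an arbitrary example):

```python
import numpy as np

# A qubit |psi> = alpha_0|0> + alpha_1|1> as a length-2 complex vector.
# The coefficients below are an arbitrary illustrative choice.
alpha = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])

# Normalization constraint: |alpha_0|^2 + |alpha_1|^2 = 1.
assert np.isclose(np.sum(np.abs(alpha) ** 2), 1.0)

# Measuring in the computational basis yields 0 or 1 with probabilities
# |alpha_0|^2 and |alpha_1|^2; the complex phase of alpha_1 does not change them.
print(np.abs(alpha) ** 2)  # [0.5 0.5]
```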

Given that complex numbers rarely appear in classical computer science, this seems an odd design choice if we are to approach quantum computing from a purely theoretical computer science point of view. I assume that complex numbers show up here because they are strictly necessary in quantum mechanics, but that is quite an unsatisfactory answer for a theoretical computer scientist. My two questions are:

  1. Is there an intuitive, non-physics way of justifying these complex numbers?
  2. Could it be that restricting the coefficients to real numbers yields a computational model equivalent (in terms of efficiency) to classical probabilistic Turing machines?

I would appreciate references where I could find more about the definition of qubits from a purely TCS point of view.

  • Scott Aaronson has a blog post on this very natural question. Commented Feb 8, 2021 at 11:41
  • Some answers from a physics perspective can be found in notes of Karam. Commented Feb 8, 2021 at 11:42
  • Lipton and Regan wrote a textbook on quantum algorithms which, to the best of my knowledge, strives to avoid complex numbers. Lipton has a blog post on the book, with links to two previous blog posts exemplifying the approach. Commented Feb 8, 2021 at 11:47
  • Anyone who has read those comprehensive and insightful notes of Karam should upvote Yuval's comment above. – John L. Commented Feb 9, 2021 at 6:42

1 Answer


You can run any standard quantum algorithm on a real-amplitude quantum computer with one additional qubit and only constant-factor slowdown (or perhaps linear-factor slowdown considering loss of parallelism) by replacing each $a{+}bi$ in your unitary matrices by $\big(\begin{smallmatrix}a&-b\\b&a\end{smallmatrix}\big)$. Likewise, you can simulate a quaternionic quantum computer on a standard quantum computer with one additional qubit by replacing $a{+}bi{+}cj{+}dk$ with $\big(\begin{smallmatrix}a+bi&-c+di\\c+di&a-bi\end{smallmatrix}\big)$. So these models are all polynomially equivalent.
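
For concreteness, here is a minimal NumPy sketch of this entry-wise replacement (the helper name `realify` is mine; the $T$ gate is just an arbitrary example of a genuinely complex unitary):

```python
import numpy as np

def realify(U):
    """Replace each complex entry a+bi of U with the real 2x2 block [[a, -b], [b, a]].

    This is the extra-qubit construction: the added qubit stores the
    real/imaginary part of each amplitude.
    """
    J = np.array([[0.0, -1.0], [1.0, 0.0]])  # real 2x2 matrix that acts like multiplication by i
    return np.kron(U.real, np.eye(2)) + np.kron(U.imag, J)

# Example: the T gate, a genuinely complex single-qubit unitary.
T = np.diag([1, np.exp(1j * np.pi / 4)])
T_real = realify(T)

# The realified matrix is orthogonal (a real-amplitude unitary) ...
assert np.allclose(T_real @ T_real.T, np.eye(4))

# ... and it reproduces the complex evolution on realified states,
# where each amplitude a+bi becomes the consecutive pair (a, b).
psi = np.array([0.6, 0.8j])                        # 0.6|0> + 0.8i|1>
psi_real = np.column_stack([psi.real, psi.imag]).ravel()
out = T_real @ psi_real
assert np.allclose(out[0::2] + 1j * out[1::2], T @ psi)
print("real-amplitude simulation matches the complex evolution")
```

Since $a{+}bi \mapsto \big(\begin{smallmatrix}a&-b\\b&a\end{smallmatrix}\big)$ is a ring homomorphism, the replacement commutes with composing gates, which is why the overhead stays a constant factor per gate.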

I doubt that there's a good non-physical argument for the appearance of complex numbers. In physics they're related to the duality between zeroth and first derivatives of position in harmonic oscillation (the same reason that complex numbers are used in modeling AC circuits), and that duality doesn't exist in the discrete quantum circuit model. Another argument from physics is that in Lagrangian QM the amplitude of a transition is $\int e^{iS(γ)} \, \mathrm dγ$ (where $γ$ ranges over "histories"), and there doesn't seem to be anything you can sensibly replace $i$ with in the real case. But this model, where each history contributes an amplitude with the same absolute value but a different phase, only works because of cancellation of nearby histories, and there aren't any "nearby" histories in the quantum circuit model.
