I have started learning about quantum computing, and I have been told that one can forget about the physics and think of a qubit as a natural generalization of a bit: a state in which $0$ and $1$ are in superposition, so that there is "a little bit of $0$ and a little bit of $1$".
This is all fine, I think, but if I were to formalize it, I would model a qubit as a single real probability $\alpha \in [0,1] \subseteq \mathbb{R}$ representing the amount of $0$ in my qubit (so that $1-\alpha$ is the amount of $1$). Instead, in quantum computing a qubit is defined as
$$\alpha_0|0\rangle + \alpha_1|1\rangle$$ where $\alpha_0, \alpha_1 \in \mathbb{C}$, and $|\alpha_0|^2 + |\alpha_1|^2 = 1$.
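To be concrete, here is how I would write the two models side by side. This is just a quick NumPy sketch I made for myself; the numbers and variable names are my own and only meant to illustrate the definitions.

```python
import numpy as np

# --- My naive proposal: a probabilistic bit ---
# A single real number alpha in [0, 1], the probability of observing 0
# (so 1 - alpha is the probability of observing 1).
alpha = 0.3
prob_bit = np.array([alpha, 1 - alpha])        # non-negative reals summing to 1
assert np.isclose(prob_bit.sum(), 1.0)

# --- The standard qubit ---
# A pair of complex amplitudes (alpha_0, alpha_1) with |alpha_0|^2 + |alpha_1|^2 = 1.
qubit = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)], dtype=complex)
assert np.isclose(np.sum(np.abs(qubit) ** 2), 1.0)

# Measuring in the {|0>, |1>} basis turns amplitudes into probabilities:
print(np.abs(qubit) ** 2)                      # [0.5 0.5], although the amplitudes differ
```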
Given that complex numbers rarely appear in classical computer science, this seems an odd design choice if we are to approach quantum computing from a purely theoretical computer science point of view. I assume that complex numbers show up here because they are strictly necessary in quantum mechanics, but that is quite an unsatisfactory answer for a theoretical computer scientist. I have two questions:
- Is there an intuitive, non-physics way of justifying these complex numbers?
- Could it be that restricting the coefficients to real numbers yields a computational model that is equivalent (in terms of efficiency) to classical probabilistic Turing machines? (The sketch after these questions illustrates the kind of difference I have in mind.)
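To make the second question concrete, here is a small toy computation (again my own example in plain NumPy, not taken from any reference): applying a "fair randomizing" step twice gives different observable statistics in the probabilistic model and in a real-coefficient model, and this is the kind of gap I am asking about.

```python
import numpy as np

# Classical probabilistic model: a fair coin flip is the doubly stochastic matrix F.
F = np.array([[0.5, 0.5],
              [0.5, 0.5]])
p0 = np.array([1.0, 0.0])        # start deterministically in state 0
print(F @ F @ p0)                # [0.5 0.5]: flipping twice is still uniformly random

# Real-coefficient model: the Hadamard matrix H is unitary with real entries.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)
q0 = np.array([1.0, 0.0])        # amplitudes of |0>
amps = H @ H @ q0
print(np.abs(amps) ** 2)         # ~[1 0]: the minus sign lets the two branches cancel
```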
I would appreciate references where I could find more about the definition of qubits from a purely TCS point of view.