Here's a more detailed explanation of the method I sketched in a comment. This method lowers the upper bound in some cases.
We'll perform an operation which (1) reduces the number of computers under consideration (by about half), and (2) preserves the fact that more than half of the computers under consideration are good. Repeating this operation eventually reduces the number of computers to 1, at which point the more-than-half condition tells us that the computer is good, and we're done.
The operation is this:
- Pair up all the computers arbitrarily. (Maybe one computer is left out; see step 3.)
- In each pair, ask one computer about the other. If the answer is "bad", discard both computers; if the answer is "good", discard just the testing computer and keep the tested one.
- If there was one computer left out of the pairing in step 1, then either keep or discard it so that you keep in total an odd number of computers.
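The operation above can be sketched in code. This is a hypothetical model (the names `one_round`, `find_good`, and `answer` are mine, not part of the puzzle), with `answer(tester, tested)` standing in for asking one computer about another:

```python
def one_round(computers, answer):
    """One application of steps 1-3.

    `computers` is a list of computer ids; `answer(tester, tested)` returns
    True when `tester` says `tested` is good.  Returns the kept computers.
    """
    kept = []
    # Step 1: pair the computers up arbitrarily (here: consecutive pairs).
    for i in range(0, len(computers) - 1, 2):
        tester, tested = computers[i], computers[i + 1]
        # Step 2: on "good", discard the tester and keep the tested
        # computer; on "bad", discard both (i.e. keep neither).
        if answer(tester, tested):
            kept.append(tested)
    # Step 3: keep or discard the unpaired computer (if any) so that the
    # total number of kept computers is odd.
    if len(computers) % 2 == 1 and len(kept) % 2 == 0:
        kept.append(computers[-1])
    return kept

def find_good(computers, answer):
    """Repeat the operation until one computer remains; it must be good."""
    while len(computers) > 1:
        computers = one_round(computers, answer)
    return computers[0]
```

For example, with computers $\{0,\dotsc,4\}$ of which $\{1,3,4\}$ are good, and bad computers that always answer "good" (one adversarial strategy among many), `find_good` returns a good computer.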
Proof that after this operation, more than half of the computers are good: Let $G$ be the number of good computers to start with, and $B$ the number of bad computers. Write $(GG)$ for the number of pairs produced in step 1 with both computers good; write $(GB)$ for the number of pairs with the testing computer good and the tested computer bad; write $(BG)$ and $(BB)$ similarly. Let $G_2$ and $B_2$ denote respectively the number of good and bad computers that are kept after step 2, and let $G_3$ and $B_3$ denote the corresponding number for after step 3. We know that $G > B$ and want to show that $G_3 > B_3$.
In step 2, every $(GG)$ pair answers "good", so you keep the second good computer; this shows $G_2 \ge (GG)$. On the other hand, the only way for a bad computer to survive step 2 is if a bad computer vouches for it; this shows $B_2 \le (BB)$. Now consider three cases.
Case 1: There were an even number of computers to start with. Then $G = 2(GG) + (GB) + (BG)$ and $B = 2(BB) + (GB) + (BG)$. By hypothesis, $G > B$, so these identities imply $(GG) > (BB)$, which gives $G_2 > B_2$. Since there is no unpaired computer, $G_3 = G_2$ and $B_3 = B_2$, so we're done.
Case 2: There were an odd number of computers to start with, and the unpaired computer was good. Then $G = 2(GG) + (GB) + (BG) + 1$ and $B = 2(BB) + (GB) + (BG)$, which since $G > B$ implies $(GG) \ge (BB)$, so $G_2 \ge B_2$. If $G_2 > B_2$ then $G_3 > B_3$ whether or not we keep the unpaired good computer. If $G_2 = B_2$ then the total number of computers kept after step 2 is even, so in step 3 we keep the unpaired good computer, obtaining $G_3 = G_2+1 > B_2 = B_3$.
Case 3: There were an odd number of computers to start with, and the unpaired computer was bad. Then $G = 2(GG) + (GB) + (BG)$ and $B = 2(BB) + (GB) + (BG) + 1$, which since $G > B$ implies $(GG) > (BB)$. As in case 1 this yields $G_2 > B_2$. If $G_2 = B_2 + 1$ then the total number of computers kept after step 2 is odd, so in step 3 we discard the unpaired bad computer, obtaining $G_3 = G_2 > B_2 = B_3$. If $G_2 > B_2 + 1$ then $G_3 > B_3$ whether or not we keep the unpaired bad computer.
So in all cases, at the end of step 3 we have kept more good computers than bad computers, as desired.
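The case analysis can also be cross-checked by brute force. The sketch below (names of my own invention) fixes the pairing as consecutive pairs with the first computer of each pair as tester, which loses no generality since it ranges over every assignment of good/bad labels, and lets every bad tester answer either way:

```python
from itertools import product

def kept_after_round(good, answers):
    """Apply the operation to computers 0..n-1.

    `good[i]` says whether computer i is good; `answers[p]` is the
    (arbitrary) claim made by pair p's tester when that tester is bad.
    Good testers answer truthfully.  Returns the goodness values of the
    kept computers.
    """
    n = len(good)
    kept = []
    for i in range(0, n - 1, 2):
        says_good = good[i + 1] if good[i] else answers[i // 2]
        if says_good:
            kept.append(good[i + 1])
    if n % 2 == 1 and len(kept) % 2 == 0:
        kept.append(good[-1])  # keep the unpaired computer
    return kept

# Check the invariant: if more than half the computers are good before the
# operation, then more than half of the kept computers are good afterwards,
# no matter how the bad testers answer.
for n in range(1, 10):
    for good in product([True, False], repeat=n):
        if 2 * sum(good) <= n:
            continue
        for answers in product([True, False], repeat=n // 2):
            kept = kept_after_round(good, answers)
            assert 2 * sum(kept) > len(kept)
```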
As Alon said in comments, in the worst case (when all tests say "good") this method uses $Q(n) = n - h(n)$ questions, where $h(n)$ is the number of 1's in the binary representation of $n$. (This can be proved easily by strong induction.) Some specific values:
- $Q(100) = 97$, just as in the question.
- $Q(2^m) = 2^m-1$, which is slightly worse than the estimate in the question, which in this case is $2^m-3$ (for $m\ge 2$, I presume).
- In particular, $Q(2) = 1$, which is suboptimal, since the hypothesis that more than half the computers are good here tells us that all of them are good, so there is no need to ask questions. But I guess adding this special case to the algorithm saves us at most one question.
- $Q(2^m-1) = 2^m - m$, which is asymptotically better than the estimate in the question (it's $n-\log n$ instead of $n-c$).
- In particular, $Q(7) = 4$, slightly improving the estimate in the question.
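The worst-case count is easy to check numerically. Here is a sketch (the function name `Q` just mirrors the notation above): when every answer is "good", each pair costs one question and keeps one computer, giving the recurrence below, which one can test against $n$ minus the number of 1's in the binary representation of $n$:

```python
def Q(n):
    """Worst-case number of questions when every test answers "good"."""
    if n == 1:
        return 0
    questions = n // 2  # one question per pair
    kept = n // 2       # each pair keeps its tested computer
    if n % 2 == 1 and kept % 2 == 0:
        kept += 1       # step 3: keep the unpaired computer
    return questions + Q(kept)

# Q(n) = n minus the number of 1's in the binary expansion of n:
assert all(Q(n) == n - bin(n).count("1") for n in range(1, 2048))
assert Q(100) == 97 and Q(7) == 4 and Q(2) == 1
```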
Here's another way to look at this puzzle. Consider the following game, played on a directed graph with $2n$ vertices, labelled $x_1,\dotsc,x_n$ and $\neg x_1,\dotsc,\neg x_n$. At the beginning of the game, the graph has no edges.
- Every round, player A chooses two numbers $i$ and $j$; player B then draws either the directed edge $x_i\to x_j$ or the directed edge $x_i\to\neg x_j$.
- At the end of any round, player A may claim victory by choosing some number $k$ and drawing $x_k\to\neg x_k$. Player A then wins if, interpreting the graph as specifying a boolean formula (where each directed edge represents an implication, which is a 2-clause), it is not possible to satisfy that boolean formula with more than half of the variables set to "true".
- Player A's goal is to win as quickly as possible; player B's goal is to delay player A's victory as long as possible.
In short, the players jointly construct an instance of MAX 2-SAT, with player A trying to make it unsatisfiable and player B trying to keep it satisfiable.
(See, when player A chooses $k$, adds $x_k\to\neg x_k$, and shows that the resulting MAX 2-SAT instance is unsatisfiable, that amounts to a proof (by contraposition) that, if more than half the computers are good, the answers so far prove that computer $k$ is good.)
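For a concrete (if tiny) illustration of the encoding, here is a brute-force sketch of player A's victory check (the function `a_wins` and the edge representation are mine; this is an exhaustive check, not an efficient 2-SAT algorithm):

```python
from itertools import product

def a_wins(n, edges, k):
    """Edges are triples (i, j, pol): the implication x_i -> x_j if pol is
    True, else x_i -> not x_j.  Player A claims computer k by adding
    x_k -> not x_k; A wins iff no assignment with more than half the
    variables true satisfies every implication."""
    clauses = edges + [(k, k, False)]
    for x in product([False, True], repeat=n):
        if 2 * sum(x) <= n:
            continue                      # only majority-true assignments
        if all((not x[i]) or (x[j] == pol) for i, j, pol in clauses):
            return False                  # player B survives this round
    return True

# With 3 computers and a chain of "good" answers 0 -> 1 -> 2, every
# majority-good world has computer 2 good, so A may claim k = 2 (though
# the pairing method above gets there in fewer questions):
assert a_wins(3, [(0, 1, True), (1, 2, True)], 2)
assert not a_wins(3, [(0, 1, True), (1, 2, True)], 0)
```

Note that with $n = 2$ the formula $x_k\to\neg x_k$ alone is already unsatisfiable under the majority constraint, matching the observation above that no questions are needed in that case.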
I have no insights from this way of thinking about the puzzle, except that it makes me suspect the problem is hard.