17
$\begingroup$

This question is inspired by the Georgia Tech Algorithms and Randomness Center's t-shirt, which asks "Randomize or not?!"

There are many examples where randomizing helps, especially when operating in adversarial environments. There are also some settings where randomizing doesn't help or hurt. My question is:

What are some settings when randomizing (in some seemingly reasonable way) actually hurts?

Feel free to define "settings" and "hurts" broadly, whether in terms of problem complexity, provable guarantees, approximation ratios, or running time (I expect running time is where the more obvious answers will lie). The more interesting the example, the better!

$\endgroup$
5
  • 1
    $\begingroup$ Downvoted. This question seems to me like a question about rhetoric because the focus of the question seems to be how to argue that a particular fact can be referred to as “randomizing hurts.” $\endgroup$ Commented Oct 15, 2010 at 22:21
  • 1
    $\begingroup$ Fair enough. But let me give you an example of what I had in mind. Say we have a learning algorithm that has actions it can take and, in the learning phase, takes them round robin. Suppose it has some guarantee. Now say we instead consider taking actions uniformly at random and find the guarantee is lost. Hard to argue that this isn't an example of randomizing "hurting." And feel free to define "hurts" for yourself! Though that might be part of your criticism... $\endgroup$
    – Lev Reyzin
    Commented Oct 15, 2010 at 22:27
  • 6
    $\begingroup$ Let it play out: maybe we'll get an interesting discussion. I know of at least one case where the simple randomized strategy is actually worse than a careful deterministic algorithm. $\endgroup$ Commented Oct 15, 2010 at 22:40
  • 1
    $\begingroup$ The reason I do not like this question as stated is probably that I expect most voted answers will be "interesting" only in their interpretations of the question. The question seems to be encouraging creative and rhetorical interpretations. If this is not what you want and you can think of a better way to phrase the question, please revise it (but I cannot think of any). $\endgroup$ Commented Oct 15, 2010 at 22:50
  • 2
    $\begingroup$ Eek, I didn't expect this question to be so controversial :) Anyhow, I do not mind interesting interpretations! I guess we'll have to disagree on this one. Btw, if the vagueness of the question is bothersome, I completely don't mind @Suresh making it a CW... $\endgroup$
    – Lev Reyzin
    Commented Oct 15, 2010 at 22:57

3 Answers

25
$\begingroup$

Here is a simple example from game theory. In games in which both pure and mixed Nash equilibria exist, the mixed ones are often much less natural, and much "worse".

For example, consider a simple balls-and-bins game: there are n bins and n balls (players). Each player picks a bin and incurs a cost equal to the number of players in his bin. In the pure Nash equilibrium, each player picks a distinct bin, and nobody incurs cost more than 1. However, there is also a mixed Nash equilibrium in which everyone chooses a bin uniformly at random, and then with high probability some bin ends up with roughly $\log(n)/\log\log(n)$ players. Since OPT is 1, this means that (if what we care about is the maximum player cost) the price of anarchy is 1 when randomization is not allowed, but grows unboundedly with the number of players when it is.
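
To see the gap concretely, here is a minimal simulation sketch (mine, not part of the original answer; the function names `max_load_pure` and `max_load_mixed` are made up for illustration). It compares the maximum player cost in the pure equilibrium with the typical maximum bin load when every player randomizes uniformly; the observed value should sit on the $\log(n)/\log\log(n)$ scale, up to lower-order terms:

```python
import math
import random

def max_load_pure(n):
    """Pure Nash equilibrium: each player picks a distinct bin, so the max load is 1."""
    return 1

def max_load_mixed(n, rng=random):
    """One draw from the mixed Nash equilibrium: each player picks a bin uniformly at random."""
    loads = [0] * n
    for _ in range(n):
        loads[rng.randrange(n)] += 1
    return max(loads)

if __name__ == "__main__":
    n, trials = 100_000, 20
    mixed = [max_load_mixed(n) for _ in range(trials)]
    print("pure equilibrium max load :", max_load_pure(n))
    print("mixed equilibrium max load:", sum(mixed) / trials, "on average over", trials, "trials")
    print("log n / log log n scale   :", round(math.log(n) / math.log(math.log(n)), 1))
```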

The takeaway message: randomization can harm coordination.

$\endgroup$
2
  • 1
    $\begingroup$ Cool - I like this interpretation of balls and bins as an n-player game. This is the kind of answer I had in mind! $\endgroup$
    – Lev Reyzin
    Commented Oct 15, 2010 at 22:59
  • 1
    $\begingroup$ It is sometimes discussed in its disguised form as "the load balancing game on identical machines" :-) $\endgroup$
    – Aaron Roth
    Commented Oct 15, 2010 at 23:06
13
$\begingroup$

Here is a simple example from the field of distributed algorithms.

Typically randomness helps tremendously. Randomised distributed algorithms are often easier to design and they are faster.

However, if you have a fast deterministic distributed algorithm, you can mechanically convert it [1, 2] into a fast self-stabilising algorithm. In essence, you get a very strong form of fault tolerance for free (at least if the bottleneck resource is the number of communication rounds). You can simplify your algorithm design by focusing on fault-free synchronous static networks, and the conversion will give you a fault-tolerant algorithm that can handle asynchronous dynamic networks.

No such conversion is known for randomised distributed algorithms in general.

$\endgroup$
6
$\begingroup$

Let me first raise an issue regarding randomness:

Does there exist any randomness in the universe, or is everything deterministic?

This is a philosophical question, both controversial and unrelated to the context here. Yet I raise it as a word of warning, since the answer below becomes controversial if one digs too deep into the above question.


The Shannon–Hartley theorem describes the capacity of a communication channel in the presence of noise. In the binary picture, the noise flips 0s to 1s and vice versa with some pre-specified probability.

If the channel behaved in a deterministic way, that is, if we could model the noise well enough to predict exactly which bits would change, the receiver could simply undo the corruption. Predictable noise can be subtracted off, so the effective noise power goes to zero and the Shannon–Hartley capacity becomes unboundedly large. Very desirable!
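
For concreteness (the formulas below are standard and not part of the original answer): the Shannon–Hartley capacity of a channel with bandwidth $B$, signal power $S$, and Gaussian noise power $N$ is

$$C = B \log_2\!\left(1 + \frac{S}{N}\right),$$

which grows without bound as $N \to 0$. For the discrete bit-flip (binary symmetric) channel with crossover probability $p$, the corresponding capacity is $1 - H(p)$, where $H(p) = -p\log_2 p - (1-p)\log_2(1-p)$; it reaches its maximum of one bit per use exactly when the flips are deterministic ($p = 0$ or $p = 1$).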

I like to compare randomness to friction: it resists movement, yet movement is impossible without it.

$\endgroup$
