This question is inspired by the Georgia Tech Algorithms and Randomness Center's t-shirt, which asks "Randomize or not?!"
There are many examples where randomizing helps, especially when operating in adversarial environments. There are also some settings where randomizing neither helps nor hurts. My question is:
What are some settings when randomizing (in some seemingly reasonable way) actually hurts?
Feel free to define "settings" and "hurts" broadly, whether in terms of problem complexity, provable guarantees, approximation ratios, or running time (I expect running time is where the most obvious answers will lie). The more interesting the example, the better!