Sorting 32
Quick-Sort
• To understand quick-sort, let’s look at a high-level description of the algorithm
• 1) Divide: If the sequence S has 2 or more elements, select an element x from S to be your pivot. Any arbitrary element, like the last, will do. Remove all the elements of S and divide them into 3 sequences:
- L, holds S’s elements less than x
- E, holds S’s elements equal to x
- G, holds S’s elements greater than x
• 2) Recurse: Recursively sort L and G
• 3) Conquer: Finally, to put the elements back into S in order, first insert the elements of L, then those of E, and finally those of G (a sketch of the whole algorithm follows below).
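• As a minimal sketch, the three steps can be written in Python roughly as follows (the list-based representation of S and the function name are illustrative, and for brevity this version returns a new sorted list instead of emptying and refilling S):

    def quick_sort(S):
        # sequences of 0 or 1 elements are already sorted
        if len(S) < 2:
            return S
        # Divide: pick a pivot x (here the last element) and split S into L, E, G
        x = S[-1]
        L = [e for e in S if e < x]    # elements less than x
        E = [e for e in S if e == x]   # elements equal to x
        G = [e for e in S if e > x]    # elements greater than x
        # Recurse on L and G, then Conquer: concatenate L, E, and G in order
        return quick_sort(L) + quick_sort(E) + quick_sort(G)

    quick_sort([85, 24, 63, 45, 17, 31, 96, 50])
    # -> [17, 24, 31, 45, 50, 63, 85, 96]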
• Here are some pretty diagrams....
sorting 33
Quick-Sort Tree
[Quick-sort tree diagram: the input sequence 85 24 63 45 17 31 96 50 is divided around the pivot 50 into (24 45 17 31), (50), and (85 63 96), and the recursion continues on each part until the sorted sequence 17 24 31 45 50 63 85 96 is produced.]
sorting 34
Randomized Quick-Sort
• The main drawback to quick-sort is that it achieves its worst-case time complexity on data sets that are common in practice: sequences that are already sorted (or mostly sorted)
• To avoid this, we modify quick-sort so that it selects the pivot as a random element of the sequence (a sketch follows below)
• The expected time of a randomized quick-sort on a sequence of size n is O(n log n).
• Justification: we say that an invocation of quick-sort on an input sequence of size m is “good” if neither L nor G has fewer than m/4 elements (equivalently, neither has more than about 3m/4)
- there are m/2 “good” pivots and m/2 “bad” ones
- The probability that an invocation is “good” is 1/2
- Suppose we choose a good pivot at node v: the algorithm recurs on sequences of size at most (3/4)m each
- On average, the height of the tree representing a randomized quick-sort is at most 2 log_{4/3} n: about log_{4/3} n good pivots shrink the input to size 1, and since an invocation is good with probability 1/2, roughly twice that many levels are needed
• Total time complexity: O(n log n), since the work done across each level of the tree is O(n)
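• As a small sketch under the same assumptions as the earlier quick-sort sketch, only the pivot-selection line changes:

    import random

    def randomized_quick_sort(S):
        if len(S) < 2:
            return S
        x = random.choice(S)           # pivot chosen uniformly at random from S
        L = [e for e in S if e < x]
        E = [e for e in S if e == x]
        G = [e for e in S if e > x]
        return (randomized_quick_sort(L) + randomized_quick_sort(E)
                + randomized_quick_sort(G))

• Because the pivot is random, no fixed input (sorted or otherwise) can force the worst case; the O(n log n) bound holds in expectation for every input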
sorting 45
In-Place Quick-Sort
• Divide step: l scans the sequence from the left, and r from the right.
• A swap is performed when l is at an element larger than the pivot and r is at one smaller than the pivot (a sketch follows the example below).
[In-place partition example with pivot 50: l and r scan 85 24 63 45 17 31 96 50 from opposite ends; l stops at 85 (larger than the pivot) and r stops at 31 (smaller than the pivot), and the two are swapped, giving 31 24 63 45 17 85 96 50.]
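• A sketch of the in-place version in Python, assuming the subarray S[a..b] is sorted in place with its last element as the pivot (the index names follow the l and r of the divide step above):

    def in_place_quick_sort(S, a, b):
        # sort S[a..b] (inclusive) in place
        if a >= b:
            return
        pivot = S[b]                        # last element serves as the pivot
        l, r = a, b - 1                     # l scans from the left, r from the right
        while l <= r:
            while l <= r and S[l] < pivot:  # advance l to an element >= pivot
                l += 1
            while l <= r and S[r] > pivot:  # retreat r to an element <= pivot
                r -= 1
            if l <= r:                      # out-of-place pair found: swap it
                S[l], S[r] = S[r], S[l]
                l += 1
                r -= 1
        S[l], S[b] = S[b], S[l]             # move the pivot between the two parts
        in_place_quick_sort(S, a, l - 1)    # recurse on the elements <= pivot
        in_place_quick_sort(S, l + 1, b)    # recurse on the elements >= pivot

    data = [85, 24, 63, 45, 17, 31, 96, 50]
    in_place_quick_sort(data, 0, len(data) - 1)
    # data is now [17, 24, 31, 45, 50, 63, 85, 96]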
sorting 46