Problemset 3 Sol
operation. This assumption becomes unrealistic if the numbers grow very large. For instance, to multiply two n-bit integers u and v, the traditional algorithm (do it by hand to see) requires Θ(n^2) bit multiplications and additions. In this problem we will investigate a divide-and-conquer algorithm for multiplying two numbers. This will be similar to Strassen's algorithm, but applied to n-bit numbers instead of matrices of size n x n. We will divide each number into two parts, the high-order bits and the low-order bits. Thus u can be written as a*2^(n/2) + b, where a represents the high-order bits of u and b represents the low-order bits of u. Similarly, v can be written as c*2^(n/2) + d. Therefore,

uv = (a*2^(n/2) + b)(c*2^(n/2) + d) = ac*2^n + (ad + bc)*2^(n/2) + bd

Multiplying by a power of 2 is a simple shifting operation that can be done in linear time Θ(n). So the dominating operations are the 4 multiplications ac, ad, bc, and bd.

(a) (10 points) Write pseudocode for a divide-and-conquer algorithm that recursively computes the products ac, ad, bc, and bd, and combines them to compute the product uv. Write a recurrence describing the algorithm's running time and solve it.
ANSWER: NOTE: << x shifts to the left x times by filling with zeros, i.e. multiplies by 2^x.

MULTIPLY(u, v)
  p <- 2n-bit number initialized to zero
  // DIVIDE
  n <- length[u]
  m <- n/2
  if n = 1
    then p <- u * v
         return p
  a <- u[m+1..n]
  b <- u[1..m]
  c <- v[m+1..n]
  d <- v[1..m]
  // CONQUER
  p1 <- MULTIPLY(a, c)
  p2 <- MULTIPLY(a, d)
  p3 <- MULTIPLY(b, c)
  p4 <- MULTIPLY(b, d)
  // COMBINE
  p <- (p1 << 2m) + ((p2 + p3) << m) + p4
  return p
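The pseudocode above can be sketched in Python, where arbitrary-precision integers make the shift-and-add combine step direct. The function name multiply4 and the bit-mask splitting are our choices, not part of the original pseudocode, and n is assumed to be a power of two:

```python
def multiply4(u, v, n):
    """Four-subproduct divide-and-conquer multiplication of two n-bit ints.

    Mirrors the MULTIPLY pseudocode: split u = a*2^m + b and v = c*2^m + d
    with m = n/2, recurse on ac, ad, bc, bd, then combine with shifts.
    Assumes n is a power of two.
    """
    if n == 1:
        return u * v
    m = n // 2
    mask = (1 << m) - 1
    a, b = u >> m, u & mask   # high and low halves of u
    c, d = v >> m, v & mask   # high and low halves of v
    p1 = multiply4(a, c, m)
    p2 = multiply4(a, d, m)
    p3 = multiply4(b, c, m)
    p4 = multiply4(b, d, m)
    # uv = ac*2^n + (ad + bc)*2^(n/2) + bd
    return (p1 << (2 * m)) + ((p2 + p3) << m) + p4
```

The four recursive calls on half-size inputs are exactly what produces the T(n) = 4T(n/2) + Θ(n) recurrence analyzed next.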
Each problem recursively solves four subproblems of half the size. The divide step can be done in linear time by scanning the bits of u and v. The combine step can also be done in linear time because it involves additions of m = n/2-bit numbers. Note that the multiplications by 2^n and 2^(n/2) are shifting operations, which can be done in linear time as well. Therefore, the divide and combine steps take Θ(n) time. The recurrence is T(n) = 4T(n/2) + Θ(n), which has the solution T(n) = Θ(n^2) by the master method. Therefore, there is no benefit over the usual Θ(n^2) multiplication process.

(b) (10 points) Design a different divide-and-conquer algorithm: compute ac and bd, then compute ad + bc using only one multiplication along with addition and subtraction, thus reducing the number of subproblems at each level of the recursion to 3 instead of 4. Write the recurrence describing the algorithm's running time and solve it.

ANSWER: The key to this part is that (a + b)(c + d) = ac + bd + ad + bc. Therefore, we compute ac and bd with two multiplications. We can then add a + b and c + d and perform one more multiplication to obtain (a + b)(c + d). Finally, subtracting ac + bd from this last product gives ad + bc.

MULTIPLY(u, v)
  p <- 2n-bit number initialized to zero
  // DIVIDE
  n <- length[u]
  m <- n/2
  if n = 1
    then p <- u * v
         return p
  a <- u[m+1..n]
  b <- u[1..m]
  c <- v[m+1..n]
  d <- v[1..m]
  e <- a + b
  f <- c + d
  // CONQUER
  p1 <- MULTIPLY(a, c)
  p2 <- MULTIPLY(b, d)
  p3 <- MULTIPLY(e, f)
  // COMBINE
  p <- (p1 << 2m) + ((p3 - p1 - p2) << m) + p2
  return p
Note that we have two extra additions and two extra subtractions, but the time for the divide and combine steps is still Θ(n). The recurrence for this algorithm is now T(n) = 3T(n/2) + Θ(n), which has the solution T(n) = Θ(n^(log_2 3)) = Θ(n^1.58) by the master method. This is better than the Θ(n^2)-time algorithm.
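A Python sketch of this three-multiplication scheme (essentially Karatsuba's algorithm) follows. Recursing on bit lengths rather than an explicit n, and cutting off to direct multiplication on small inputs, are implementation choices of ours; note that e = a + b and f = c + d may be one bit longer than m, which Python's arbitrary-precision integers absorb silently:

```python
def karatsuba(u, v):
    """Three-subproduct (Karatsuba-style) multiplication.

    Splitting at m bits, ad + bc is recovered as (a+b)(c+d) - ac - bd,
    so only three recursive multiplications are needed per level.
    """
    if u < 16 or v < 16:                 # small inputs: multiply directly
        return u * v
    n = max(u.bit_length(), v.bit_length())
    m = n // 2
    mask = (1 << m) - 1
    a, b = u >> m, u & mask              # high and low halves of u
    c, d = v >> m, v & mask              # high and low halves of v
    p1 = karatsuba(a, c)                 # ac
    p2 = karatsuba(b, d)                 # bd
    p3 = karatsuba(a + b, c + d)         # (a+b)(c+d)
    # ad + bc = p3 - p1 - p2
    return (p1 << (2 * m)) + ((p3 - p1 - p2) << m) + p2
```

The three recursive calls per level give the T(n) = 3T(n/2) + Θ(n) recurrence above.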
Problem 2: Solving Recurrences

Use the Master Theorem to solve the following recurrences. State which case of the Master Theorem you are using (and justify) for each of the recurrences. NOTE: In all of the following problems, the regularity condition for case 3 of the master theorem is satisfied because f(n) = n^k.

a. T(n) = 2T(n/2) + n^3

ANSWER:
f(n) = n^3 and n^(log_2 2) = n. Since n^3/n = n^2 and 2 > 0, f(n) is polynomially larger than n^(log_2 2), so this is case 3 and T(n) = Θ(n^3).
b. T(n) = T(9n/10) + n
ANSWER:
f(n) = n and n^(log_(10/9) 1) = n^0 = 1. Since f(n) = n is polynomially larger than 1, this is case 3 and T(n) = Θ(n).
c. Here f(n) = n^2 = n^(log_b a), so f(n) = Θ(n^(log_b a)); this is case 2 and T(n) = Θ(n^2 lg n).
d. T(n) = 7T(n/3) + n^2

ANSWER: f(n) = n^2 and n^(log_3 7). The ratio is n^2 / n^(log_3 7) = n^(2 - log_3 7). Since 2 - log_3 7 > 0, f(n) is polynomially larger, so this is case 3 again, and T(n) = Θ(n^2).
e. T(n) = 7T(n/2) + n^2

ANSWER: f(n) = n^2 and n^(log_2 7). Since log_2 7 ≈ 2.81 > 2, f(n) is polynomially smaller than n^(log_2 7), so this is case 1 and T(n) = Θ(n^(log_2 7)).
f. T(n) = 2T(n/4) + n^(1/2)

ANSWER: f(n) = n^(1/2) and n^(log_4 2) = n^(1/2). Since f(n) = Θ(n^(log_4 2)), this is case 2 and T(n) = Θ(n^(1/2) lg n).
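For reference, the three cases applied above come from the master theorem for recurrences of the form T(n) = aT(n/b) + f(n) with a >= 1 and b > 1:

```latex
\begin{align*}
\text{Case 1: } & f(n) = O\!\left(n^{\log_b a - \epsilon}\right) \text{ for some } \epsilon > 0
  \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\right) \\
\text{Case 2: } & f(n) = \Theta\!\left(n^{\log_b a}\right)
  \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a} \lg n\right) \\
\text{Case 3: } & f(n) = \Omega\!\left(n^{\log_b a + \epsilon}\right) \text{ for some } \epsilon > 0,
  \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1
  \;\Rightarrow\; T(n) = \Theta(f(n))
\end{align*}
```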
Problem 3: Partitioning in Quicksort

Consider the following code for partitioning an array in Quicksort.

PARTITION(A, p, r)
  x <- A[p]        // this is the pivot
  i <- p - 1       // i starts at the far left
  j <- r + 1       // j starts at the far right
  while TRUE
    do repeat j <- j - 1      // move j down until it finds an element <= x
         until A[j] <= x
       repeat i <- i + 1      // move i up until it finds an element >= x
         until A[i] >= x
       if i < j               // they did not cross
         then swap A[i] <-> A[j]
         else return j

The Quicksort algorithm used with that partitioning code is the following:

QUICKSORT(A, p, r)
  if p < r
    then q <- PARTITION(A, p, r)
         QUICKSORT(A, p, q)
         QUICKSORT(A, q+1, r)

When PARTITION terminates, every element in A[p..j] is less than or equal to every element in A[j+1..r], for p <= j < r. Moreover, it always places the pivot value (originally in A[p]) into one of these two partitions. Since p <= j < r, this split is always nontrivial, i.e. both partitions are non-empty.

(a) (3 points) Show the result of the above PARTITION on the array A = [13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21] step by step.

ANSWER:
A[p..r] = [13 19 9 5 12 8 7 4 11 2 6 21], pivot = A[p] = 13.

j moves down to the 6 (the first element <= 13 from the right); i stops at the 13 itself (the first element >= 13 from the left). Since i < j, swap:
  [6 19 9 5 12 8 7 4 11 2 13 21]
j moves down to the 2; i moves up to the 19. Since i < j, swap:
  [6 2 9 5 12 8 7 4 11 19 13 21]
j moves down to the 11 (position 9); i moves up to the 19 (position 10). Now i > j, so PARTITION returns j = 9, splitting the array into [6 2 9 5 12 8 7 4 11] and [19 13 21].
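The PARTITION and QUICKSORT pseudocode can be transcribed directly into Python. We use 0-indexed arrays here, so the returned split index is one less than in the 1-indexed trace above; the repeat-until loops become decrement/increment followed by a while:

```python
def partition(A, p, r):
    """Hoare-style PARTITION from the problem, pivot A[p] (0-indexed)."""
    x = A[p]                 # pivot
    i = p - 1                # i starts at the far left
    j = r + 1                # j starts at the far right
    while True:
        j -= 1
        while A[j] > x:      # move j down to an element <= x
            j -= 1
        i += 1
        while A[i] < x:      # move i up to an element >= x
            i += 1
        if i < j:            # they did not cross
            A[i], A[j] = A[j], A[i]
        else:
            return j

def quicksort(A, p, r):
    """Quicksort recursing on A[p..q] and A[q+1..r], as in the pseudocode."""
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q)
        quicksort(A, q + 1, r)
```

On the example array the 0-indexed split point is 8, matching position 9 in the 1-indexed trace.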
There are technicalities that make the pseudocode of PARTITION a little tricky. For example, the choice of A[p] as pivot is not arbitrary.

(a) (5 points) Show that if A[r] is used as the pivot element in the PARTITION code, the QUICKSORT algorithm above might never terminate.

ANSWER: If A[r] is taken as the pivot and it happens to be the unique largest element in the array, then i will stop at r because A[r] is the first element greater than or equal to the pivot that i encounters. Similarly, j will stop at r because A[r] is the first element less than or equal to the pivot that j encounters. Therefore, i and j both stop at r, and by the condition at the end of the PARTITION code (i and j have crossed), the returned value will be j = r, which goes into q in the QUICKSORT code. As a result, QUICKSORT(A, p, r) will recursively call QUICKSORT(A, p, q), i.e. QUICKSORT(A, p, r). Since nothing changed in the array A[p..r] during the partitioning process, this results in an infinite loop where the same thing happens over and over.

(b) (5 points) The PARTITION code here is faster in practice than the one we saw in class. Construct an array of size n, for any n, such that the PARTITION code here performs zero swaps while the one we saw in class performs n swaps.

ANSWER: Consider the sorted array A[p..r] = [p, p + 1, ..., r]. For the PARTITION code above, the pivot is x = A[p] = p. Therefore, i will stop at A[p] (the first element >= x it encounters) and j will also stop at A[p] (the first element <= x it encounters). Since i = j, i.e. it is not the case that i < j, the code returns j = p without any swaps. For the partition code we saw in class, however, the pivot is x = A[r] = r. j scans the array from p to r - 1. At every position of j from p to r - 1, the element A[j] is less than the pivot A[r], so it is swapped with A[i] after incrementing i by 1. The first time this happens, we swap A[p] with A[p].
The second time, we swap A[p + 1] with A[p + 1], and so on. We keep performing redundant swaps until we swap A[r - 1] with A[r - 1], where i and j are both equal to r - 1. Finally, we swap A[r], the pivot, with A[i + 1] = A[r]. Therefore, the total number of swaps is equal to the number of elements, which is r - p + 1 = n.

(c) (5 points) The partitioning code we saw in class does not yield a balanced partition when the elements are not distinct. Construct an array of size n, for any n, such that the PARTITION code here splits the array into two halves while the one we saw in class produces a worst-case partition.

ANSWER: Consider the array A[p..r] = [x, x, x, ..., x], i.e. all elements equal to x. The pivot will definitely be x in all cases. In the partition code above, i and j stop at every position on their way towards the middle of the array, swapping equal elements. The partitioning is optimal because i and j split the array A at the middle (this is where they cross). The partition code we saw in class moves j all the way from p up to r - 1, swapping every element with itself on the way (because every element is <= the pivot). Finally, it swaps the pivot A[r] with itself and returns r, yielding a partition of size n - 1, which is the worst case.

NOTE: The example of making A[r] the smallest or largest element works but is not a fair example, because the same kind of example can cause the PARTITION code above to produce a bad partition: simply make A[p] the smallest or largest.
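The swap counts and split sizes claimed in parts (b) and (c) can be checked empirically with instrumented versions of the two partition schemes. The helper names and swap counters below are our own additions; lomuto_swaps is a sketch of the "class" (Lomuto-style) partition with pivot A[r]:

```python
def hoare_swaps(A, p, r):
    """PARTITION from this problem, pivot A[p]; returns (split index, swap count)."""
    x, i, j, swaps = A[p], p - 1, r + 1, 0
    while True:
        j -= 1
        while A[j] > x:
            j -= 1
        i += 1
        while A[i] < x:
            i += 1
        if i < j:
            A[i], A[j] = A[j], A[i]
            swaps += 1
        else:
            return j, swaps

def lomuto_swaps(A, p, r):
    """Lomuto-style partition, pivot A[r]; returns (pivot index q, swap count)."""
    x, i, swaps = A[r], p - 1, 0
    for j in range(p, r):
        if A[j] <= x:           # every element <= pivot gets swapped
            i += 1
            A[i], A[j] = A[j], A[i]
            swaps += 1
    A[i + 1], A[r] = A[r], A[i + 1]   # place the pivot
    return i + 1, swaps + 1
```

On a sorted array of n elements, hoare_swaps returns immediately with zero swaps while lomuto_swaps performs n; on an all-equal array, hoare_swaps splits at the middle while lomuto_swaps returns the last index, the worst case.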