Positive Constants - Data Structures - Solved Problems, Exams of Data Structures and Algorithms

Main points of this exam paper are: Positive Constants, Mathematical Definition, Convenience, Insertion Sort, Recursive Procedure, Smaller Arguments, Same Machine

Typology: Exams, 2012/2013. Uploaded on 04/07/2013 by seshu_lin3.


CSE3358 Problem Set 2 Solution

Please review the Homework Honor Policy on the course webpage http://engr.smu.edu/˜saad/courses/cse3358/homeworkpolicy.html

Problem 1: Practice with asymptotic notations

(a) (3 points) Explain clearly why the statement “The running time of algorithm A is at least O(n^3)” does not make sense.

ANSWER: O-notation is an upper-bound notation, so O(n^3) by itself means “at most c·n^3” for some positive constant c. The statement therefore reads “The running time of algorithm A is at least at most c·n^3”, which does not make sense: an upper bound says nothing about a lower bound.

(b) (6 points) Let f(n) and g(n) be asymptotically nonnegative functions. Let h(n) = max(f(n), g(n)). Using the precise mathematical definition of Θ-notation, show that h(n) = Θ(f(n) + g(n)). Hint: Remember this means you have to show that h(n) = O(f(n) + g(n)) and h(n) = Ω(f(n) + g(n)).

Can we say the same thing about h′(n) = min(f(n), g(n)), i.e. is h′(n) = Θ(f(n) + g(n))? Explain.

ANSWER: To show h(n) = Θ(f(n) + g(n)), we need to show two things:

  • h(n) = O(f(n) + g(n))
  • h(n) = Ω(f(n) + g(n))

First note that if f(n) ≥ g(n), then h(n) = f(n). Similarly, if g(n) ≥ f(n), then h(n) = g(n).

For the first part, we need to show that h(n) ≤ c(f(n) + g(n)) for some positive constant c and large n. With c = 1:

  • f(n) ≤ g(n) ⇒ h(n) = g(n) ≤ f(n) + g(n) = 1·(f(n) + g(n))
  • f(n) ≥ g(n) ⇒ h(n) = f(n) ≤ f(n) + g(n) = 1·(f(n) + g(n))

For the second part, we need to show that h(n) ≥ c(f(n) + g(n)) for some positive constant c and large n. With c = 1/2:

  • f(n) ≤ g(n) ⇒ h(n) = g(n), and since (1/2)g(n) ≥ (1/2)f(n) and (1/2)g(n) ≥ (1/2)g(n), adding the two inequalities gives g(n) = h(n) ≥ (1/2)(f(n) + g(n)).
  • f(n) ≥ g(n) ⇒ h(n) = f(n), and since (1/2)f(n) ≥ (1/2)f(n) and (1/2)f(n) ≥ (1/2)g(n), adding the two inequalities gives f(n) = h(n) ≥ (1/2)(f(n) + g(n)).

Intuitively, h(n), the maximum of f(n) and g(n), is at most their sum but at least their average, which is their sum divided by 2.

We cannot say the same about h′(n). Although h′(n) = O(f(n) + g(n)), it is not true that h′(n) = Ω(f(n) + g(n)). To show this, it is enough to provide a counterexample. Let f(n) = n and g(n) = n^2. Then h′(n) = n, but it is not true that n = Ω(n + n^2). Indeed, for this to hold we would need n ≥ c(n + n^2) for all n ≥ n0, for some positive constants c and n0. This means n/(n + n^2) ≥ c for large n, which cannot be, because lim_{n→∞} n/(n + n^2) = 0.
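As a quick numeric illustration of both claims (the sample functions f(n) = n and g(n) = n^2 are the counterexample above, reused here purely for illustration):

```python
# Illustration only: f(n) = n and g(n) = n^2 (the counterexample above).
f = lambda n: n
g = lambda n: n * n

for n in [10, 100, 1000]:
    s = f(n) + g(n)
    # max(f, g) is sandwiched between the average and the sum -> Theta(f + g)
    assert s / 2 <= max(f(n), g(n)) <= s
    # min(f, g) / (f + g) keeps shrinking, so no positive c lower-bounds it
    print(min(f(n), g(n)) / s)
```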

(c) (3 points) Show that for any real constants a and b, where b > 0, (n + a)^b = Θ(n^b).

ANSWER: First, n + a = Θ(n) because low-order terms and leading constants don’t matter (we could also prove this directly from the definition). Therefore, n + a = Ω(n) and n + a = O(n). So,

c1·n ≤ n + a, n ≥ n1

n + a ≤ c2·n, n ≥ n2

Therefore, c1·n ≤ n + a ≤ c2·n, n ≥ n0

for some constants c1, c2, n0 (with n0 = max(n1, n2)).

Since b is positive, we can raise everything to the power b without reversing the inequalities:

(c1·n)^b ≤ (n + a)^b ≤ (c2·n)^b, n ≥ n0

(c1)^b·n^b ≤ (n + a)^b ≤ (c2)^b·n^b, n ≥ n0

c′1·n^b ≤ (n + a)^b ≤ c′2·n^b, n ≥ n0

Therefore,

c′1·n^b ≤ (n + a)^b, n ≥ n0 ⇒ (n + a)^b = Ω(n^b)

(n + a)^b ≤ c′2·n^b, n ≥ n0 ⇒ (n + a)^b = O(n^b)

Therefore, (n + a)^b = Θ(n^b).

(d) (4 points) Is 2^(n+1) = O(2^n)? Is 2^(2n) = O(2^n)?

Is 2^(n+1) = O(2^n)? ANSWER: Yes. 2^(n+1) = 2·2^n, so the definition holds with c = 2. Is 2^(2n) = O(2^n)? ANSWER: No. 2^(2n) = (2^2)^n = 4^n, and 4^n ≠ O(2^n). If 4^n = O(2^n), then 4^n ≤ c·2^n for some constant c and large n. Therefore 4^n/2^n ≤ c for large n, which means 2^n ≤ c for large n — impossible.
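A quick numeric check of both answers (illustration only): the ratio 2^(n+1)/2^n is a constant, while 2^(2n)/2^n grows without bound:

```python
for n in [1, 10, 20]:
    assert 2 ** (n + 1) / 2 ** n == 2    # constant ratio 2: 2^(n+1) = O(2^n)
    print(2 ** (2 * n) / 2 ** n)         # equals 2^n, unbounded as n grows
```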

(e) (4 points) Find two non-negative functions f(n) and g(n) such that neither f(n) = O(g(n)) nor g(n) = O(f(n)).

ANSWER: Consider f(n) = 1 + cos(n) and g(n) = 1 + sin(n). Both f(n) and g(n) are periodic and take values in [0..2]. But when f(n) = 0, g(n) = 2, and when g(n) = 0, f(n) = 2. Therefore, we cannot find a positive constant c such that f(n) ≤ c·g(n) for large n, because g(n) always comes back near 0 while f(n) stays bounded away from 0 there. Therefore, f(n) cannot be O(g(n)). The same argument works for the other case.
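The oscillation can be checked numerically. This sketch (the search range 10000 is an arbitrary illustrative choice) finds integers where each ratio exceeds a large constant, so no fixed c can bound either function by the other:

```python
import math

f = lambda n: 1 + math.cos(n)
g = lambda n: 1 + math.sin(n)

# Whenever one function's phase brings it near 0, the other is far from 0,
# so each ratio gets arbitrarily large along some subsequence of n.
max_fg = max(f(n) / g(n) for n in range(1, 10000))
max_gf = max(g(n) / f(n) for n in range(1, 10000))
print(max_fg > 100, max_gf > 100)  # -> True True
```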

Let T(n) be the running time of this algorithm. The first line checks a condition, which takes Θ(1) time. The second line is a recursive call to INSERTION-SORT with arguments (A, p, r − 1), i.e. with an input of size n − 1, which takes T(n − 1) time. The last line performs the shifting step of the regular INSERTION-SORT, which takes Θ(n) time in the worst case. Therefore, T(n) = Θ(1) + T(n − 1) + Θ(n) = T(n − 1) + Θ(n).

T(n) = Θ(1)            if n = 1
T(n) = T(n − 1) + Θ(n) if n > 1
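The structure behind this recurrence can be traced in code. This is a hypothetical sketch of the recursive INSERTION-SORT being analyzed (the original pseudocode is not included in this excerpt): sort A[p..r−1] recursively, then shift A[r] left into place.

```python
def insertion_sort_rec(A, p, r):
    """Sort A[p..r] in place, recursively (0-based inclusive indices)."""
    if p < r:                              # Theta(1) condition check
        insertion_sort_rec(A, p, r - 1)    # T(n - 1): sort the prefix
        key = A[r]                         # shift A[r] into its place:
        i = r - 1
        while i >= p and A[i] > key:       # Theta(n) shifting in the worst case
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key

data = [5, 2, 4, 6, 1, 3]
insertion_sort_rec(data, 0, len(data) - 1)
print(data)  # -> [1, 2, 3, 4, 5, 6]
```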

(c) (10 points) Solve for T(n) by expanding it in a tree-like structure as we did in class.

T(n) = Θ(n) + T(n − 1)
     = Θ(n) + Θ(n − 1) + T(n − 2)
     = Θ(n) + Θ(n − 1) + Θ(n − 2) + T(n − 3)
     = ...
     = Σ_{i=2}^{n} Θ(i) + T(1)
     = Σ_{i=2}^{n} Θ(i) + Θ(1)
     = Σ_{i=1}^{n} Θ(i)
     = Θ(Σ_{i=1}^{n} i)
     = Θ(n^2)

[Figure: recursion tree. Each level expands T(n) → T(n − 1) → T(n − 2) → T(n − 3) → ..., contributing Θ(n), Θ(n − 1), Θ(n − 2), Θ(n − 3), ... per level.]

Problem 3: Insertion sort and merge sort

Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size n, insertion sort runs in 8n^2 steps, while merge sort runs in 64n log n steps.

(a) (5 points) For which values of n does insertion sort beat merge sort?

ANSWER: For n ≤ 43, 8n^2 ≤ 64n·lg n, so insertion sort beats merge sort; at n = 44 the inequality reverses.
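The crossover can be checked numerically (a base-2 logarithm is assumed in the 64n log n step count, as is conventional):

```python
import math

def insertion_steps(n):
    return 8 * n * n              # insertion sort: 8n^2 steps

def merge_steps(n):
    return 64 * n * math.log2(n)  # merge sort: 64n lg n steps (base 2 assumed)

# Find the largest n for which insertion sort uses no more steps than merge sort.
n = 2
while insertion_steps(n + 1) <= merge_steps(n + 1):
    n += 1
print(n)  # -> 43
```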

Although merge sort runs in Θ(n log n) worst-case time and insertion sort runs in Θ(n^2 ) worst-case time, the constant factors in insertion sort make it faster for small n. Therefore, it makes sense to use insertion sort within merge sort when subproblems become sufficiently small. Consider a modification of merge sort in which subarrays of size k or less (for some k) are not divided further, but sorted explicitly with Insertion sort.

MERGE-SORT(A, p, r)
  if p < r − k + 1
    then q ← ⌊(p + r)/2⌋
         MERGE-SORT(A, p, q)
         MERGE-SORT(A, q + 1, r)
         MERGE(A, p, q, r)
    else INSERTION-SORT(A[p..r])

(b) (5 points) Show that the total time spent on all calls to Insertion sort is in the worst-case Θ(nk).

ANSWER: As a technical remark, if n ≤ k, then this modified MERGE-SORT simply calls INSERTION-SORT on the whole array, so the running time is Θ(n^2) = Θ(n·n) = O(nk). So let us assume that n > k.

First note that the length of subarray A[p..r] is l = r − p + 1, so the condition p < r − k + 1 is the same as l > k. Therefore, any subarray of length l on which we call INSERTION-SORT must satisfy l ≤ k (this is given by the algorithm). Moreover, any subarray A[p..r] of length l on which we call INSERTION-SORT must satisfy l ≥ k/2. The reason is the following: if INSERTION-SORT is called on A[p..r], then A[p..r] must be the first or second half of another subarray A[p′..r′] that was divided (we assumed n > k, so A[p..r] cannot be A[1..n]). Thus A[p′..r′] has length l′ > k. Since l ≥ ⌊l′/2⌋ and l′ > k, it follows that l ≥ k/2.

The running time of INSERTION-SORT on A[p..r] is Θ(l^2) in the worst case, where l = r − p + 1 is the length of A[p..r]. But since k/2 ≤ l ≤ k, we have l = Θ(k), and the worst-case running time of INSERTION-SORT on A[p..r] is Θ(k^2).

Now let us count how many such subarrays (on which we call INSERTION-SORT) there are. Since all these subarrays are disjoint (their index ranges do not overlap) and every one of them satisfies k/2 ≤ l ≤ k, their number, call it m, satisfies n/k ≤ m ≤ 2n/k. Therefore m = Θ(n/k).

The total time spent on INSERTION-SORT is therefore Θ(k^2)·Θ(n/k) = Θ(nk).

(10 points) [This is intended to (1) practice the implementation of recursive functions and (2) really appreciate the difference in efficiency between Insertion sort and Merge sort on large inputs.]

Implement this modified version of Merge sort in a programming language of your choice. You will therefore need an implementation of Insertion sort as well (hopefully you can reuse your implementation from Problem Set 1). Compare the running times of Merge sort and Insertion sort on large values of n and report your findings.
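As a starting point, here is one possible sketch in Python; the cutoff k = 16, the input size, and the timing harness are arbitrary illustrative choices, not prescribed by the assignment:

```python
import random
import time

def insertion_sort(A, p, r):
    """Sort A[p..r] in place (0-based inclusive bounds)."""
    for j in range(p + 1, r + 1):
        key, i = A[j], j - 1
        while i >= p and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key

def merge(A, p, q, r):
    """Merge the sorted runs A[p..q] and A[q+1..r]."""
    left, right = A[p:q + 1], A[q + 1:r + 1]
    i = j = 0
    for m in range(p, r + 1):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[m] = left[i]; i += 1
        else:
            A[m] = right[j]; j += 1

def merge_sort(A, p, r, k=16):
    """Modified merge sort: subarrays of length <= k go to insertion sort."""
    if p < r - k + 1:             # length r - p + 1 > k: keep dividing
        q = (p + r) // 2
        merge_sort(A, p, q, k)
        merge_sort(A, q + 1, r, k)
        merge(A, p, q, r)
    else:                         # length <= k: sort directly
        insertion_sort(A, p, r)

data = [random.randrange(10 ** 6) for _ in range(10 ** 5)]
start = time.perf_counter()
merge_sort(data, 0, len(data) - 1)
elapsed = time.perf_counter() - start
print(data == sorted(data))  # -> True
```

Varying k and the input size n in this harness (and timing a plain insertion_sort on the same data) reproduces the comparison the exercise asks for.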