FAQ about the Master theorem
Q1: Why, in case 1, must f(n) be polynomially smaller than n^log(b,a)?
A: Recall the lemma proved in the proof of the master theorem,
i.e. for T(n) = a*T(n/b) + f(n),
T(n) = Θ(n^log(b,a)) + sigma(j=0~log(b,n)-1, a^j * f(n/b^j))
If f(n) is only o(n^log(b,a)) instead of O(n^(log(b,a)-ε)), then roughly each term is a^j * f(n/b^j) = a^j * o((n/b^j)^log(b,a)) = o(n^log(b,a)), and all the lemma gives is
T(n) = Θ(n^log(b,a)) + log(b,n) * o(n^log(b,a))
So how can you determine whether T(n) = Θ(n^log(b,a)), or Θ(n^log(b,a) * logn), or some function between these bounds? You can't in general: without the polynomial gap the per-level costs need not shrink geometrically, which is exactly why case 1 requires f(n) = O(n^(log(b,a)-ε)).
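To see the gap concretely, here is a small numerical check (my own sketch, not from CLRS): take a = b = 2 and f(n) = n/lgn, which is o(n^log(2,2)) = o(n) but not O(n^(1-ε)) for any ε > 0. The recursion-tree sum comes out as Θ(n*lglgn), strictly between Θ(n) and Θ(n*lgn) (this is exactly recurrence (e) of Problem 4-4 below):

import math

def tree_sum(n, a=2, b=2):
    # sigma(j, a^j * f(n/b^j)) over the recursion-tree levels, with f(n) = n/lg n
    total, j = 0.0, 0
    while n / b**j >= 2:
        m = n / b**j
        total += a**j * (m / math.log2(m))
        j += 1
    return total

for k in (10, 20, 40, 80):
    n = 2.0**k
    s = tree_sum(n)
    # sum/n keeps growing, while sum/(n*lglgn) stays bounded
    print(f"lg n = {k:2d}   sum/n = {s/n:5.2f}   sum/(n*lglgn) = {s/(n*math.log2(k)):.3f}")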

Q2: In the regularity condition in case 3, a*f(n/b) <= c*f(n), what if c>=1?
A: If c >= 1, you can no longer conclude that sigma(j=0~log(b,n)-1, a^j * f(n/b^j)) = O(f(n)): iterating the condition gives a^j * f(n/b^j) <= c^j * f(n), and the geometric series sigma(j, c^j) * f(n) is bounded by f(n)/(1-c) only when c < 1.
See the proof in CLRS.
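For instance (my own numbers, not from CLRS), with a = 4, b = 2, f(n) = n^2 we get a*f(n/b) = 4*(n/2)^2 = f(n), i.e. the condition only holds with c = 1, and the recursion-tree sum is f(n)*lgn rather than O(f(n)):

import math

a, b, f = 4, 2, (lambda m: m * m)

def tree_sum(n):
    # sigma(j=0~lg(n)-1, a^j * f(n/b^j)); every level costs exactly f(n) here
    return sum(a**j * f(n / b**j) for j in range(int(math.log2(n))))

for k in (8, 16, 32):
    print(f"lg n = {k:2d}   sum / f(n) = {tree_sum(2**k) / f(2**k):.1f}")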

Q3: (Ex 4.4-3) Why is case 3 overstated? That is, prove that the regularity condition a*f(n/b) <= c*f(n) (with c < 1) implies f(n) = Ω(n^(log(b,a)+ε)) for some ε > 0.
A: From a*f(n/b) <= c*f(n), i.e. f(n/b) <= (c/a)*f(n), it is easy to conclude that f(n) <= (c/a)^k * f(n*b^k) for any positive integer k.
Consider g(n) = n^(log(b,a)+ε), where ε > 0 is to be determined.
Assume a*f(n/b) <= c*f(n) holds for all n >= m, where m is a constant. Then
f(m*b^k) >= (a/c)^k * f(m)
g(m*b^k) = (m*b^k)^(log(b,a)+ε) = b^(k*ε) * a^k * g(m) = (a*b^ε)^k * g(m)
Thus, if ε is chosen so that a*b^ε < a/c (a positive such ε exists because c < 1 implies a/c > a; any ε < log(b,1/c) works),
then writing x = m*b^k,
g(x) / f(x) <= (c*b^ε)^k * (g(m) / f(m)).
Since g(m)/f(m) is a constant and c*b^ε < 1,
g(x) / f(x) -> 0 as k -> +∞.
Thus, for sufficiently large k, f(m*b^k) > g(m*b^k).
Using a technique similar to the one CLRS uses to remove the exact-powers restriction, it follows that f(n) > g(n) for all sufficiently large n, i.e. f(n) = Ω(n^(log(b,a)+ε)).
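A quick numerical illustration of the conclusion (my own example, not from the exercise): take a = 3, b = 2 and f(n) = n^2, which satisfies regularity with c = 3/4 because a*f(n/b) = 3*(n/2)^2 = (3/4)*n^2. The argument above then gives f(n) = Ω(n^(log(2,3)+ε)) for any ε < log(2,1/c) ≈ 0.415, and the ratio below indeed stays bounded away from 0:

import math

a, b, c = 3, 2, 0.75
f = lambda n: n * n

eps = 0.99 * math.log(1.0 / c, b)    # any eps < log(b, 1/c) works in the proof above
expo = math.log(a, b) + eps          # log(b, a) + eps

for k in range(2, 13, 2):
    n = 10 * b**k                    # n = m*b^k with m = 10
    print(f"k = {k:2d}   f(n) / n^(log(b,a)+eps) = {f(n) / n**expo:.3f}")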

Q4: (Ex 4.3-5) Give an example of a>=1, b>1, f(n) that satisfies all the conditions in case 3 of the master theorem except the regularity condition
A: A natural first attempt is T(n) = T(n/2) + n/lgn, but it does not work:
n/(2*(lgn-1)) <= c*n/lgn <==> lgn <= 2*c*(lgn-1) <==> 2*c <= (2*c-1)*lgn,
and for any fixed c with 1/2 < c < 1 this holds for all sufficiently large n, so f(n) = n/lgn actually satisfies the regularity condition (which only has to hold for sufficiently large n).
A standard example that does work is a = 1, b = 2, f(n) = n*(2 - cos n). Then f(n) = Θ(n) = Ω(n^(log(2,1)+ε)) with ε = 1, but for integers n close to 2πk with k odd we have cos n ≈ 1 and cos(n/2) ≈ -1, so
a*f(n/b) / f(n) = (2 - cos(n/2)) / (2*(2 - cos n)) ≈ 3/2 > 1.
Since this happens for arbitrarily large n, no constant c < 1 can satisfy a*f(n/b) <= c*f(n) for all sufficiently large n.
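A quick check of this example (my own sketch): the ratio a*f(n/b)/f(n) = f(n/2)/f(n) keeps coming back up to about 1.5 however far out you look, so no single constant c < 1 can work for all sufficiently large n:

import math

f = lambda x: x * (2.0 - math.cos(x))    # a = 1, b = 2

lo = 10
while lo < 1_000_000:
    hi = lo * 10
    worst = max(f(n / 2) / f(n) for n in range(lo, hi))
    print(f"max of f(n/2)/f(n) over {lo} <= n < {hi}: {worst:.3f}")
    lo = hi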
Q5: In Rujia Liu's book, the statement of the master theorem is not exactly the same as in CLRS. Which is correct? Or are they equivalent?
A: They are almost equivalent. However, the statement in Liu's book is somewhat ambiguous, since he uses a condition of the form "if a*f(n/b) = f(n) then ..." (otherwise, how could it be called a "simplified master theorem"?).

Exercises
4.3-5 See FAQ4 above

4.4-3 See FAQ3 above

Problems
4-1 Recurrence examples
Give asymptotic upper and lower bounds for T(n) in each of the following recurrences.
a. T(n) = 2*T(n/2) + n^3
Θ(n^3)
b. T(n) = T(9n/10) + n
Θ(n)
c. T(n) = 16*T(n/4) + n^2
Θ(n^2 * logn)
d. T(n) = 7*T(n/3) + n^2
Θ(n^2)
e. T(n) = 7*T(n/2) + n^2
Θ(n^lg7) (lg7 ≈ 2.807)
f. T(n) = 2*T(n/4) + sqrt(n)
Θ(sqrt(n)*logn)
g. T(n) = T(n-1) + n
Θ(n^2)
h. T(n) = T(sqrt(n)) + 1
Θ(loglogn)
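Most of these follow directly from the master theorem or a simple sum; (h) is the least standard one, so here is a tiny check (my own sketch, with an arbitrary base case T(n) = 1 for n <= 2) showing that T(n) tracks lglgn:

import math

def T(n):                        # T(n) = T(floor(sqrt(n))) + 1
    return 1 if n <= 2 else T(math.isqrt(n)) + 1

for k in (8, 16, 32, 64, 128):
    print(f"n = 2^{k:<3d}  T(n) = {T(2**k):2d}   lg lg n = {math.log2(k):.1f}")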

4-4 More recurrence examples
Give asymptotic upper and lower bounds for T(n) in each of the following recurrences.
a. T(n) = 3*T(n/2) + nlgn
Θ(n^lg3) (lg3 ≈ 1.585)
b. T(n) = 5*T(n/5) + n/lgn
Θ(nloglogn)
Use the recursion-tree method: level j contains 5^j subproblems of size n/5^j, each costing Θ((n/5^j)/log(5,n/5^j)), so the total is
Θ(n*(1/log(5,n) + 1/log(5,n/5) + ...))
= Θ(n*(1/log(5,n) + 1/(log(5,n)-1) + ...))
= Θ(n*H(log(5,n)))   (H(n) = 1 + 1/2 + ... + 1/n)
=Θ(nloglogn)

c. T(n) = 4*T(n/2) + n^2*sqrt(n)
Θ(n^2*sqrt(n)) or Θ(n^2.5)
d. T(n) = 3*T(n/3 + 5) + n/2
Θ(nlogn)
Let a[k] denote the subproblem size at the k-th level of the recursion tree, so a[k] = a[k-1]/3 + 5.
Let b[k] = a[k] - 15/2; then b[k] = b[k-1]/3, so the depth of the tree is Θ(logn).
Each level does Θ(n) work (about n/2 coming from the original n, plus an O(3^k) = O(n) overhead from the "+5" terms).
So T(n) = Θ(nlogn)
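A small check of the level-size argument (my own sketch; the cutoff size <= 8, standing in for the O(1) base case, is arbitrary): the depth tracks log(3,n) up to an additive constant:

import math

def depth(n):                    # a[k] = a[k-1]/3 + 5, starting from a[0] = n
    a, k = float(n), 0
    while a > 8:
        a = a / 3 + 5
        k += 1
    return k

for n in (10**3, 10**6, 10**9, 10**12):
    print(f"n = {n:>13}   depth = {depth(n):2d}   log(3,n) = {math.log(n, 3):.1f}")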

e. T(n) = 2*T(n/2) + n/lgn
Θ(nloglogn) (same analysis as in (b))
f. T(n) = T(n/2) + T(n/4) + T(n/8) + n
Θ(n)
Use the recursion-tree method again, but group the subproblems by size, writing every size as n/2^k (a node of size n/2^k spawns children of sizes n/2^(k+1), n/2^(k+2) and n/2^(k+3), so the same size shows up on several tree levels and the counts have to be added up).
Let a[k] be the number of subproblems of size n/2^k; it satisfies a[k] = a[k-1] + a[k-2] + a[k-3], so a[k] = Θ(x^k) where x ≈ 1.84 is the real root of x^3 - x^2 - x - 1 = 0.
The total cost is sigma(k, a[k] * n/2^k) = Θ(n * sigma(k, (x/2)^k)) = Θ(n), a convergent geometric series because x/2 < 1; the bottom level alone contributes only Θ(n^lg(x)) ≈ Θ(n^0.88) = o(n). Thus the overall complexity is Θ(n)
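A sketch for (f) (my own code; the base case T(n) = 1 for n < 8 is arbitrary): computing the recurrence exactly with floors shows T(n)/n settling toward a constant (at most 1/(1 - 7/8) = 8), which is the Θ(n) behaviour:

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):                        # T(n) = T(n/2) + T(n/4) + T(n/8) + n
    if n < 8:
        return 1
    return T(n // 2) + T(n // 4) + T(n // 8) + n

for k in range(10, 31, 5):
    n = 2**k
    print(f"n = 2^{k:2d}   T(n)/n = {T(n)/n:.3f}")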

g. T(n) = T(n-1) + 1/n
Θ(logn)
h. T(n) = T(n-1) + lgn
Θ(nlogn)
i. T(n) = T(n-2) + 2lgn
Θ(nlogn)
j. T(n) = sqrt(n)*T(sqrt(n)) + n
Θ(nloglogn)
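One way to see (j) (my own note): dividing the recurrence by n, S(n) = T(n)/n satisfies S(n) = S(sqrt(n)) + 1, which is recurrence (h) of Problem 4-1 above, so S(n) = Θ(lglgn) and T(n) = Θ(n*lglgn). A quick check with an arbitrary base case:

import math

def T(n):                        # T(n) = sqrt(n)*T(sqrt(n)) + n, with T(n) = 1 for n <= 2
    if n <= 2:
        return 1.0
    r = math.isqrt(n)
    return r * T(r) + n

for k in (8, 16, 32, 64):
    print(f"n = 2^{k:<2d}   T(n)/(n*lglgn) = {T(n) / (2**k * math.log2(k)):.3f}")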

4-6 VLSI chip testing
Professor Diogenes has n supposedly identical VLSI chips that in principle are capable of testing each other. A good chip always reports accurately whether the other chip is good or bad, but the answer of a bad chip cannot be trusted. Thus the four possible outcomes of a test are as follows:
Chip A says              Chip B says      Conclusion
-----------------------  ---------------  ------------------------------
B is good                A is good        both are good, or both are bad
(any other combination)                   at least one is bad

a. Show that if more than n/2 chips are bad, the professor cannot necessarily determine which chips are good using any strategy based on this kind of pairwise test. Assume that the bad chips can conspire to fool the professor.
If more than n/2 chips are bad, then there are fewer than n/2 good chips. A good chip always reports the true state, returning "good" for a good chip and "bad" for a bad chip, while a bad chip's answers can be anything.
Let m be the number of good chips and divide the chips into three groups: the m good chips (Group A), m bad chips (Group B), and the remaining n-2m bad chips (Group C); note n-2m > 0 because the bad chips outnumber the good ones. Suppose the bad chips conspire to produce the following results (rows are the reporting group, columns the group being tested; G = reports "good", B = reports "bad"):

     A  B  C
A:   G  B  B
B:   B  G  B
C:   B  B  B

Then Groups A and B behave identically: the same transcript would be produced if B were the good group and A, C were bad. So the professor can tell that one of the two groups consists of good chips, but not which one.

b. Consider the problem of finding a single good chip from among n chips, assuming that more than n/2 of the chips are good. Show that floor(n/2) pairwise tests are sufficient to reduce the problem to one of nearly half the size.
Split floor(n/2)*2 of the chips into floor(n/2) pairs (if n is odd, one chip is set aside) and run one pairwise test on each pair.
If both chips in a pair report "good", the two chips are of the same state, both good or both bad; keep one of them as a representative, since once the representative's state is known, the other's is too.
If either chip in a pair reports "bad", the pair contains at least one bad chip; discard both chips temporarily. Once a good chip has finally been found, it can be used to test every chip that was discarded or set aside.
Every discarded pair contains at least one bad chip and every representative stands for a good-good or bad-bad pair, so good chips still form a strict majority of the kept chips (when n is odd, the set-aside chip is added back only if the number of representatives is even, which keeps the majority strict). Thus floor(n/2) tests reduce the problem to one of at most about half the size.

c. Show that the good chips can be identified with Θ(n) pairwise tests, assuming that more than n/2 of the chips are good. Give and solve the recurrence that describes the number of tests.
T(n) = T(n/2) + Θ(n) (each round of part (b) uses floor(n/2) tests and roughly halves the instance)
Thus T(n) = Θ(n). Once a single good chip is found, testing it against each of the other n-1 chips identifies all the good chips, so Θ(n) + (n-1) = Θ(n) pairwise tests suffice in total.
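Below is a sketch of the whole Θ(n) procedure from (b) and (c) -- my own code, not from CLRS. A single CLRS pairwise test returns both chips' reports; here it is modeled as two one-directional calls test(i, j) ("what does chip i say about chip j"), with bad chips answering at random.

import random

def make_instance(n, n_good):
    good = [True] * n_good + [False] * (n - n_good)
    random.shuffle(good)
    return good

def test(good, i, j):
    # What chip i says about chip j: a good chip tells the truth,
    # a bad chip answers arbitrarily (here: at random).
    return good[j] if good[i] else random.choice([True, False])

def find_good_chip(good, chips):
    # Returns the index of one good chip, assuming more than half of `chips` are good.
    if len(chips) <= 2:
        return chips[0]            # a strict majority of 1 or 2 chips means all good
    it = iter(chips)
    pairs = list(zip(it, it))      # floor(len/2) pairs
    leftover = chips[-1] if len(chips) % 2 else None
    kept = [i for i, j in pairs
            if test(good, i, j) and test(good, j, i)]   # same type; keep one of the two
    if leftover is not None and len(kept) % 2 == 0:
        kept.append(leftover)      # keep the odd chip out only to avoid a possible tie
    return find_good_chip(good, kept)

def classify_all(good, n):
    pivot = find_good_chip(good, list(range(n)))
    return [test(good, pivot, j) for j in range(n)]     # n - 1 additional tests

good = make_instance(101, 60)                           # 60 good chips out of 101
assert classify_all(good, len(good)) == good
print("all chips classified correctly")

Each recursive round makes floor(n/2) pairwise tests (2*floor(n/2) one-directional calls here), matching the recurrence above.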
