
Exercises

note

The solutions to the following exercises should be taken together with my modest mathematical knowledge. As I am not a professional mathematician, I cannot give you a 100% guarantee that all of this is correct; these are only my attempts. If you see a mistake, please feel free to open an issue or create a pull request.

Exercise 3.3-1

Show that if $f(n)$ and $g(n)$ are monotonically increasing functions, then so are the functions $f(n) + g(n)$ and $f(g(n))$, and if $f(n)$ and $g(n)$ are in addition nonnegative, then $f(n) \cdot g(n)$ is monotonically increasing.

First of all, let's recall what a monotonically increasing function is:

A function $f(n)$ is monotonically increasing if $m \le n$ implies $f(m) \le f(n)$.

According to the conditions of the exercise, both of our functions $f(n)$ and $g(n)$ are monotonically increasing. Let's consider the function $h(n) = f(n) + g(n)$. We have to prove that it is also monotonically increasing. If we take two numbers $m$ and $n$ where $m \le n$, we have $f(m) \le f(n)$ and $g(m) \le g(n)$ by the definition of a monotonically increasing function. This implies $f(m) + g(m) \le f(n) + g(n)$.

In the same way, since $g(m) \le g(n)$ and the function $f(n)$ is monotonically increasing, we get $f(g(m)) \le f(g(n))$.

If at the same time both functions are nonnegative, we have $0 \le f(m) \le f(n)$ and $0 \le g(m) \le g(n)$, so we can safely multiply the two inequalities, which gives $f(m) \cdot g(m) \le f(n) \cdot g(n)$; hence $f(n) \cdot g(n)$ is also a monotonically increasing function.

We can even get a simple machine-checked proof using the Coq proof assistant:

```coq
Require Import Arith.

Parameter f : nat -> nat.
Parameter g : nat -> nat.

Axiom f_monotonic : forall x y : nat, x <= y -> f x <= f y.
Axiom g_monotonic : forall x y : nat, x <= y -> g x <= g y.

Theorem sum_of_monotonic_funs : forall x y : nat, x <= y -> f x + g x <= f y + g y.
Proof.
  intros x y H.
  apply Nat.add_le_mono.
  - apply f_monotonic. assumption.
  - apply g_monotonic. assumption.
Qed.

Theorem application_of_monotonic_funs : forall x y : nat, x <= y -> f (g x) <= f (g y).
Proof.
  intros x y H.
  apply f_monotonic.
  apply g_monotonic.
  assumption.
Qed.
```
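The Coq proof covers the sum and the composition. For the product claim we can at least run a quick numeric spot check in Python (this only samples a few points with two arbitrarily chosen nonnegative monotone functions; it is not a proof):

```python
# Spot check (not a proof): for two arbitrary nonnegative monotonically
# increasing functions f and g, verify on sample points that f + g,
# f(g(n)) and f * g are monotonically increasing as well.

def f(n):
    return n * n + 1        # nonnegative, monotonically increasing

def g(n):
    return 2 * n + 3        # nonnegative, monotonically increasing

def is_monotonic(h, points):
    """Check h(m) <= h(n) for every consecutive pair m <= n."""
    values = [h(p) for p in points]
    return all(a <= b for a, b in zip(values, values[1:]))

points = list(range(0, 100))

sum_ok = is_monotonic(lambda n: f(n) + g(n), points)
comp_ok = is_monotonic(lambda n: f(g(n)), points)
prod_ok = is_monotonic(lambda n: f(n) * g(n), points)

print(sum_ok, comp_ok, prod_ok)  # True True True
```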

Exercise 3.3-2

Prove that $\lfloor \alpha n \rfloor + \lceil (1 - \alpha) n \rceil = n$ for any integer $n$ and real number $\alpha$ in the range $0 \le \alpha \le 1$.

The cases when $\alpha = 0$ or $\alpha = 1$ are trivial.

If $\alpha = 0$ we have:

$$\lfloor \alpha \cdot n \rfloor + \lceil (1 - \alpha) \cdot n \rceil = \lfloor 0 \rfloor + \lceil (1 - 0) \cdot n \rceil = 0 + n = n$$

If $\alpha = 1$ we have:

$$\lfloor \alpha \cdot n \rfloor + \lceil (1 - \alpha) \cdot n \rceil = \lfloor 1 \cdot n \rfloor + \lceil (1 - 1) \cdot n \rceil = n + 0 = n$$

If $0 < \alpha < 1$ we may use the identities given in the book: $-\lceil x \rceil = \lfloor -x \rfloor$ and $-\lfloor x \rfloor = \lceil -x \rceil$, and in addition the fact that for an integer $n$ and a real number $x$ we have $\lceil n + x \rceil = n + \lceil x \rceil$. So let's take a look at the given expression:

$$\lfloor \alpha n \rfloor + \lceil (1 - \alpha) n \rceil = \lfloor \alpha n \rfloor + \lceil n - \alpha n \rceil = \lfloor \alpha n \rfloor + n + \lceil -\alpha n \rceil = \lfloor \alpha n \rfloor + n - \lfloor \alpha n \rfloor = n$$
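We can also verify the identity directly for many integers $n$ and sample values of $\alpha$ (a numeric spot check, not a proof; exact rational arithmetic avoids floating-point rounding at integer boundaries):

```python
import math
from fractions import Fraction

# Check floor(alpha * n) + ceil((1 - alpha) * n) == n for many n and alpha.
# Fraction gives exact rational arithmetic, so no float rounding issues.
ok = all(
    math.floor(Fraction(a, 10) * n) + math.ceil((1 - Fraction(a, 10)) * n) == n
    for n in range(0, 200)
    for a in range(0, 11)  # alpha = 0.0, 0.1, ..., 1.0
)
print(ok)  # True
```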

In addition, we can see that both functions grow identically:

```python
import math
import matplotlib.pyplot as plt

alpha = 0.3
x = list(range(0, 1001))
y1 = x  # f(n) = n
y2 = [math.floor(alpha * i) + math.ceil((1 - alpha) * i) for i in x]

plt.plot(x, y1, linewidth=6, color='red', label='f(n) = n')
plt.plot(x, y2, '-.', linewidth=6, color='green',
         label='f(n) = ⌊αn⌋ + ⌈(1 - α)n⌉')

plt.ylim(bottom=0)
plt.xlim(left=0)
plt.xlabel("X")
plt.ylabel("Y")
plt.legend()
plt.title('f(n) = ⌊αn⌋ + ⌈(1 - α)n⌉ and f(n) = n')
plt.show()
```

The result we may see in the resulting plot (alpha-function): the two lines coincide.

Exercise 3.3-3

Use equation (3.14) or other means to show that $(n + o(n))^k \in \theta(n^k)$ for any real constant $k$. Conclude that $\lfloor n \rfloor^k \in \theta(n^k)$.

At the beginning, the phrase Use equation (3.14) confused me quite a bit, as the equation mentioned in the exercise text is $1 + x \le e^x$. I spent enough time thinking about it but did not find a way to apply this equation to the exercise. Luckily I remembered the Errata for Introduction to Algorithms, according to which this is a typo: it must be Use equation (3.13). That equation says $n^b \in o(a^n)$, i.e. any exponential function grows faster than any polynomial function. Maybe this note will help you, but it did not help me. So I decided to look at this in the most straightforward way. Writing the $o(n)$ term as a function $f(n) \in o(n)$ and treating $k$ as a positive integer, the binomial expansion of $(n + f(n))^k$ is:

$$(n + f(n))^k = \sum_{i = 0}^{k} \binom{k}{i} n^{k-i} \cdot f(n)^{i}$$

The term with the highest power of $n$ is $n^k$, which is $\theta(n^k)$. Every other term has the form $\binom{k}{i} n^{k-i} \cdot f(n)^i$ with $i \ge 1$. Since $f(n) \in o(n)$, by definition $f(n)/n \to 0$ as $n$ grows, so $n^{k-i} \cdot f(n)^i = n^k \cdot (f(n)/n)^i \in o(n^k)$: these terms contribute insignificant growth compared to $n^k$. Hence $(n + o(n))^k \in \theta(n^k)$. In particular, $\lfloor n \rfloor = n + (\lfloor n \rfloor - n)$ where $|\lfloor n \rfloor - n| \le 1 \in o(n)$, so $\lfloor n \rfloor^k \in \theta(n^k)$.
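To illustrate this numerically, we can take for example $f(n) = \sqrt{n}$, which is in $o(n)$, and $k = 3$ (an illustration only; the function and constant are arbitrary choices):

```python
# Illustration: with f(n) = sqrt(n) (which is o(n)) and k = 3, the ratio
# (n + f(n))**k / n**k tends to 1, matching (n + o(n))^k in Theta(n^k).
k = 3

def ratio(n):
    f = n ** 0.5  # f(n) = sqrt(n), an arbitrary representative of o(n)
    return (n + f) ** k / n ** k

r_small = ratio(10 ** 2)   # noticeably above 1
r_large = ratio(10 ** 10)  # much closer to 1
print(r_small, r_large)
```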

Exercise 3.3-4

Prove the following:

a. Equation (3.21)

b. Equations (3.26)-(3.28)

c. $\log(\theta(n)) \in \theta(\log(n))$

Equation (3.21)

The equation 3.21 is:

$$a^{\log_{b}(c)} = c^{\log_{b}(a)}$$

Let's try to prove it:

$$c^{\log_{b}(a)} = c^{\frac{\log_{c}(a)}{\log_{c}(b)}} = \left(c^{\log_{c}(a)}\right)^{\frac{1}{\log_{c}(b)}} = a^{\frac{1}{\log_{c}(b)}} = a^{\log_{b}(c)}$$
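A quick numeric spot check of the identity on a few arbitrary triples $(a, b, c)$:

```python
import math

# Numeric spot check of a**log_b(c) == c**log_b(a) on a few arbitrary triples.
def close(x, y, rel=1e-9):
    return abs(x - y) <= rel * max(abs(x), abs(y))

triples = [(2, 3, 5), (4, 2, 7), (10, 5, 3), (2.5, 7.1, 9.3)]
ok = all(close(a ** math.log(c, b), c ** math.log(a, b)) for a, b, c in triples)
print(ok)  # True
```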

Equation (3.26)

The equation 3.26 is:

$$n! \in o(n^n)$$

We can try to prove it using Stirling's approximation:

$$n! = \sqrt{2 \pi n} \cdot \left(\frac{n}{e}\right)^n \cdot \left(1 + \theta\left(\frac{1}{n}\right)\right)$$

The definition of $o$ is:

$$o(g(n)) = \{f(n) : \text{for any positive constant } c > 0 \text{ there exists a constant } n_{0} > 0 \text{ such that } 0 \le f(n) < c \cdot g(n) \text{ for all } n \ge n_{0}\}$$

which tells us that $f(n)$ grows significantly slower than $g(n)$; equivalently, $\lim_{n\to\infty} g(n)/f(n) = \infty$. So let's see what we can get out of it:

$$\lim_{n\to\infty} \frac{n^n}{\sqrt{2 \pi n} \cdot \left(\frac{n}{e}\right)^n \cdot \left(1 + \theta\left(\frac{1}{n}\right)\right)} = \lim_{n\to\infty} \frac{e^n}{\sqrt{2 \pi n} \cdot \left(1 + \theta\left(\frac{1}{n}\right)\right)} = \frac{1}{\sqrt{2 \pi}} \lim_{n\to\infty} \frac{e^n}{\sqrt{n}}$$

Now, using L'Hôpital's rule:

$$\lim_{n\to\infty} \frac{e^n}{\sqrt{n}} = \lim_{n\to\infty} \frac{(e^n)'}{(\sqrt{n})'} = \lim_{n\to\infty} \frac{e^n}{\frac{1}{2\sqrt{n}}} = \lim_{n\to\infty} 2 e^n \sqrt{n} = \infty$$

Since the ratio $n^n / n!$ tends to infinity, $n!$ grows strictly slower than $n^n$, i.e. $n! \in o(n^n)$.
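We can also watch the ratio $n!/n^n$ collapse numerically; comparing logarithms (via `math.lgamma`, where $\operatorname{lgamma}(n+1) = \ln(n!)$) avoids overflow for larger $n$:

```python
import math

# The ratio n! / n^n should tend to 0, confirming n! in o(n^n).
# Compare logarithms to avoid overflow: lgamma(n + 1) == log(n!).
def log_ratio(n):
    return math.lgamma(n + 1) - n * math.log(n)

values = [log_ratio(n) for n in (10, 100, 1000)]
print(values)  # increasingly negative, so n!/n^n shrinks towards 0
```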

Equation (3.27)

The equation 3.27 is:

$$n! \in \omega(2^n)$$

We can do the same as in the previous exercise: apply Stirling's approximation and see what we get. But before that, let's recall the definition of $\omega$:

$$\omega(g(n)) = \{f(n) : \text{for any positive constant } c > 0 \text{ there exists a constant } n_{0} > 0 \text{ such that } 0 \le c \cdot g(n) < f(n) \text{ for all } n \ge n_{0}\}$$

Or in other words, $g(n)$ grows significantly slower than $f(n)$; equivalently, $\lim_{n\to\infty} g(n)/f(n) = 0$. So let's see:

$$\lim_{n\to\infty} \frac{2^n}{\sqrt{2 \pi n} \cdot \left(\frac{n}{e}\right)^n \cdot \left(1 + \theta\left(\frac{1}{n}\right)\right)} \le \lim_{n\to\infty} \frac{2^n}{\left(\frac{n}{e}\right)^n} = \lim_{n\to\infty} \frac{(2e)^n}{n^n}$$

Since $n^n$ grows much faster than $(2e)^n$, the limit is $0$, and therefore $n! \in \omega(2^n)$.

Equation (3.28)

The equation 3.28 is:

$$\log(n!) \in \theta(n \cdot \log(n))$$

Going the same way, using Stirling's approximation we get:

$$\log(n!) = \log\left(\sqrt{2 \pi n} \cdot \left(\frac{n}{e}\right)^{n} \cdot \left(1 + \theta\left(\frac{1}{n}\right)\right)\right) = \log(\sqrt{2 \pi n}) + \log\left(\left(\frac{n}{e}\right)^n\right) + \log\left(1 + \theta\left(\frac{1}{n}\right)\right) = \theta(\log(n)) + n \cdot \log\left(\frac{n}{e}\right) + o(1)$$

In the last expression the leading term is $n \cdot \log(\frac{n}{e}) = n \cdot \log(n) - n \cdot \log(e)$. Dropping the less significant terms, we get what we needed to prove: $\log(n!) \in \theta(n \cdot \log(n))$.
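Numerically, the ratio $\log(n!) / (n \cdot \log(n))$ indeed creeps up towards $1$:

```python
import math

# The ratio log(n!) / (n * log(n)) approaches 1, matching
# log(n!) in Theta(n * log(n)); lgamma(n + 1) == log(n!).
def ratio(n):
    return math.lgamma(n + 1) / (n * math.log(n))

ratios = [ratio(n) for n in (10, 1000, 10 ** 6)]
print(ratios)  # climbing towards 1
```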

Equation (c)

According to the definition of $\theta$, for any $f(n) \in \theta(n)$ there exist positive constants $c_{1}$, $c_{2}$ and $n_{0}$ such that for all $n \ge n_{0}$:

$$c_{1} \cdot n \le f(n) \le c_{2} \cdot n$$

Taking logarithms of all three parts (which preserves the inequalities, since $\log$ is monotonically increasing) gives:

$$\log(c_{1}) + \log(n) \le \log(f(n)) \le \log(c_{2}) + \log(n)$$

The terms $\log(c_{1})$ and $\log(c_{2})$ are constants, so for sufficiently large $n$ we have $\log(c_{1}) \ge -\frac{1}{2} \log(n)$ and $\log(c_{2}) \le \log(n)$, which yields:

$$\frac{1}{2} \log(n) \le \log(f(n)) \le 2 \log(n)$$

This is exactly the definition of $\log(f(n)) \in \theta(\log(n))$ with the constants $\frac{1}{2}$ and $2$.

Exercise 3.3-5

Is the function $\lceil \log_{2}(n) \rceil!$ polynomially bounded? Is the function $\lceil \log_{2}(\log_{2}(n)) \rceil!$ polynomially bounded?

From the chapter Polynomials we know that polynomially bounded means $f(n) \in O(n^{k})$ for some constant $k$. The definition of $O$ is:

$$O(g(n)) = \{f(n) : \text{there exist positive constants } c \text{ and } n_{0} \text{ such that } 0 \le f(n) \le c \cdot g(n) \text{ for all } n \ge n_{0}\}$$

So we basically should prove (or refute) that $\lceil \log_{2}(n) \rceil! \in O(n^{k})$ and $\lceil \log_{2}(\log_{2}(n)) \rceil! \in O(n^{k})$.

As soon as we see the factorial function, the first thing that comes to mind from this book is Stirling's approximation. Let's recall how it was defined in the book:

$$n! = \sqrt{2 \pi n} \cdot \left(\frac{n}{e}\right)^{n} \cdot \left(1 + \theta\left(\frac{1}{n}\right)\right)$$

First of all, let's consider the $\lceil \log_{2}(n) \rceil!$ function and try to apply Stirling's approximation to it (dropping the ceiling, which changes the argument by less than $1$ and does not affect the asymptotics):

$$\log_{2}(n)! = \sqrt{2 \pi \log_{2}(n)} \cdot \left(\frac{\log_{2}(n)}{e}\right)^{\log_{2}(n)} \cdot \left(1 + \theta\left(\frac{1}{\log_2(n)}\right)\right)$$

The leading factor here is $(\frac{\log_{2}(n)}{e})^{\log_{2}(n)}$, so we can skip the others and consider only it. Using the well-known logarithm identity $a^{\log_{b}(c)} = c^{\log_{b}(a)}$ we can rewrite our expression as:

$$\left(\frac{\log_{2} n}{e}\right)^{\log_{2} n} = n^{\log_{2}\left(\frac{\log_2 n}{e}\right)} = n^{\log_{2}(\log_{2} n) - \log_{2} e}$$

To answer whether it is polynomially bounded, we need to compare it with $n^{k}$. As $k$ and $\log_{2} e$ are constants while the exponent $\log_{2}(\log_{2} n) - \log_{2} e$ grows without bound, we eventually have $n^{\log_{2}(\log_{2} n) - \log_{2} e} > n^k$ for any fixed $k$, which means $\lceil \log_{2}(n) \rceil!$ is not polynomially bounded.

Another and easier way to prove that is to consider $n = 2^k$. In this case $\lceil \log_{2}(n) \rceil!$ becomes just $k!$, which is $1 \cdot 2 \cdot 3 \cdot \ldots \cdot k$, while our polynomial $n^m$ becomes $2^{mk}$. As we know, the factorial function grows faster than any such exponential in $k$, thus the function $\lceil \log_{2}(n) \rceil!$ is not polynomially bounded.

For the second function, $\lceil \log_{2}(\log_{2}(n)) \rceil!$, we can consider $n$ represented as $2^{2^{k}}$; then the function becomes just $k!$ and it is enough to show that $k! \le n = 2^{2^{k}}$. Compare $k! = 1 \cdot 2 \cdot 3 \cdot \ldots \cdot k$ with the product $4^1 \cdot 4^2 \cdot 4^3 \cdot \ldots \cdot 4^k = 2^{k(k+1)}$: since $i \le 4^i$ for every $i$, we get $k! \le 2^{k(k+1)}$, and for sufficiently large $k$ we have $k(k + 1) \le 2^k$, so $k! \le 2^{2^{k}} = n$. This means the $\lceil \log_{2}(\log_{2}(n)) \rceil!$ function is polynomially bounded.
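Both conclusions are easy to spot-check numerically at $n = 2^{64}$ (a single sample point, chosen arbitrarily, not a proof):

```python
import math

# At n = 2**64: ceil(log2(n))! = 64! already exceeds the polynomial n**3,
# while ceil(log2(log2(n)))! = 6! = 720 is far below n itself.
n = 2 ** 64
fact_log = math.factorial(math.ceil(math.log2(n)))                # 64!
fact_loglog = math.factorial(math.ceil(math.log2(math.log2(n))))  # 6!

overtakes = fact_log > n ** 3
bounded = fact_loglog <= n
print(overtakes, bounded)  # True True
```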

Exercise 3.3-6

Which is asymptotically larger: $\log^*(\log(n))$ or $\log(\log^*(n))$?

Personally, for me it was much easier to use the definition of the iterated logarithm from Wikipedia:

$$\log^*(n) = \begin{cases} 0 & \text{if } n \le 1 \\ 1 + \log^*(\log(n)) & \text{if } n > 1 \end{cases}$$

So basically the iterated logarithm is the number of times we need to apply the logarithm function to $n$ until the result drops to $1$ or below. To understand it better, we can consider the following simple examples (all logarithms are base $2$):

$$\log(\log^*(65536)) = \log(1 + \log^*(16)) = \log(1 + 1 + \log^*(4)) = \log(1 + 1 + 1 + \log^*(2)) = \log(1 + 1 + 1 + 1) = \log(4) = 2$$

and:

$$\log^*(\log(65536)) = \log^*(16) = 1 + \log^*(4) = 1 + 1 + \log^*(2) = 1 + 1 + 1 = 3$$

To understand which function grows faster, we may divide one by the other and check the limit: if the result is $0$, the function in the numerator grows slower, and vice versa. Note that for $n > 1$ the definition above gives $\log^*(n) = 1 + \log^*(\log(n))$, i.e. $\log^*(\log(n)) = \log^*(n) - 1$. Denoting $t = \log^*(n)$, we consider:

$$\frac{\log^*(\log(n))}{\log(\log^*(n))} = \frac{\log^*(n) - 1}{\log(\log^*(n))} = \frac{t - 1}{\log(t)} \to \infty \text{ as } t \to \infty$$

Since $t = \log^*(n)$ tends to infinity (however slowly) as $n$ grows, the numerator grows asymptotically faster than the denominator. This means the $\log^*(\log(n))$ function is asymptotically larger than the $\log(\log^*(n))$ function, which matches the examples above, where $\log^*(\log(65536)) = 3 > \log(\log^*(65536)) = 2$.
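We can confirm this with a tiny Python implementation of the iterated logarithm following the recursive definition above. For $n = 2^{65536}$ we work with $\log_2(n) = 65536$ directly, so the huge number itself never has to be built:

```python
import math

# Iterated logarithm (base 2), following the recursive definition above.
def log_star(x):
    if x <= 1:
        return 0
    return 1 + log_star(math.log2(x))

# Take n = 2**65536, so log2(n) = 65536 and log*(n) = 1 + log*(65536).
log_n = 65536
lhs = log_star(log_n)                 # log*(log n) = 4
rhs = math.log2(1 + log_star(log_n))  # log(log* n) = log2(5), about 2.32
print(lhs, rhs)  # the left side is already larger and keeps pulling ahead
```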

Exercise 3.3-7

Show that the golden ratio $\phi$ and its conjugate $\overline{\phi}$ both satisfy the equation $x^2 = x + 1$.

Let's look closer at the equation:

$$\begin{align*} x^2 &= x + 1 \\ x^2 - x - 1 &= 0 \\ x_{1} &= \frac{1 + \sqrt{1^2 - 4 \cdot 1 \cdot (-1)}}{2 \cdot 1} = \frac{1 + \sqrt{5}}{2} \\ x_{2} &= \frac{1 - \sqrt{1^2 - 4 \cdot 1 \cdot (-1)}}{2 \cdot 1} = \frac{1 - \sqrt{5}}{2} \end{align*}$$

Where $x_{1}$ and $x_{2}$ are exactly $\phi$ and $\overline{\phi}$.
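A one-line numeric confirmation that both roots satisfy the equation:

```python
import math

phi = (1 + math.sqrt(5)) / 2      # the golden ratio
phi_bar = (1 - math.sqrt(5)) / 2  # its conjugate

# Both roots satisfy x^2 = x + 1 up to floating-point error.
ok = all(abs(x * x - (x + 1)) < 1e-12 for x in (phi, phi_bar))
print(ok)  # True
```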

Exercise 3.3-8

Prove by induction that the $i$th Fibonacci number satisfies the equation $F_{i} = \frac{\phi^i - \overline{\phi}^i}{\sqrt{5}}$.

Using mathematical induction, we should first of all consider the base case:

$$F_{1} = \frac{\frac{1 + \sqrt{5}}{2} - \frac{1 - \sqrt{5}}{2}}{\sqrt{5}} = \frac{\frac{1 + \sqrt{5} - 1 + \sqrt{5}}{2}}{\sqrt{5}} = \frac{\sqrt{5}}{\sqrt{5}} = 1$$

As for $i = 1$ the Fibonacci number is $1$ (and similarly $F_{0} = \frac{\phi^0 - \overline{\phi}^0}{\sqrt{5}} = 0$), the base cases hold. For the inductive step, using the hypothesis for $i - 1$ and $i - 2$ we have:

$$\begin{align*} F_{i} &= F_{i - 1} + F_{i - 2} \\ &= \frac{\phi^{i - 1} - \overline{\phi}^{i - 1} + \phi^{i - 2} - \overline{\phi}^{i - 2}}{\sqrt{5}} \\ &= \frac{\phi^{i - 2}(\phi + 1) - \overline{\phi}^{i - 2}(\overline{\phi} + 1)}{\sqrt{5}} \\ &= \frac{\phi^{i - 2} \cdot \phi^2 - \overline{\phi}^{i - 2} \cdot \overline{\phi}^2}{\sqrt{5}} && \text{(by exercise 3.3-7)} \\ &= \frac{\phi^i - \overline{\phi}^i}{\sqrt{5}} \end{align*}$$
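We can cross-check the closed formula against an iteratively computed Fibonacci sequence (rounding compensates for floating-point error; this works reliably only for moderate $i$):

```python
import math

phi = (1 + math.sqrt(5)) / 2
phi_bar = (1 - math.sqrt(5)) / 2

def fib(i):
    """Fibonacci numbers computed iteratively: fib(0) = 0, fib(1) = 1."""
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

# The closed formula, rounded, matches the iterative values for moderate i.
ok = all(round((phi ** i - phi_bar ** i) / math.sqrt(5)) == fib(i) for i in range(40))
print(ok)  # True
```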

Exercise 3.3-9

Show that $k \cdot \log(k) \in \theta(n)$ implies $k \in \theta(n / \log(n))$.

By the symmetry rule we have $f(n) \in \theta(g(n))$ if and only if $g(n) \in \theta(f(n))$. According to this rule, we may rewrite our expression as:

$$n \in \theta(k \cdot \log(k))$$

Let's take the logarithm of both sides (this preserves the $\theta$ relation by part (c) of exercise 3.3-4):

$$\log(n) \in \theta(\log(k \cdot \log(k))) = \theta(\log(k) + \log(\log(k)))$$

The $\log(\log(k))$ term grows significantly slower than $\log(k)$, so we may omit it: $\log(n) \in \theta(\log(k))$. Now let's divide $n$ by $\log(n)$:

$$\frac{n}{\log(n)} \in \theta\left(\frac{k \cdot \log(k)}{\log(k)}\right) = \theta(k)$$

So we have shown that $k \in \theta(n / \log(n))$.
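As a numeric illustration (not a proof), for each $n$ we can solve $k \cdot \log_2(k) = n$ by binary search and observe that $k / (n / \log_2(n))$ stays within constant bounds:

```python
import math

# For each n, find k with k * log2(k) ~= n by binary search, then check
# that k / (n / log2(n)) stays within modest constant bounds.
def solve_k(n):
    lo, hi = 2.0, float(n)
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.log2(mid) < n:
            lo = mid
        else:
            hi = mid
    return lo

ratios = [solve_k(n) / (n / math.log2(n)) for n in (10 ** 3, 10 ** 6, 10 ** 9)]
print(ratios)  # all within a modest constant factor of 1
```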