The solutions to the following exercises should be read with my modest mathematical knowledge in mind. As I am not a professional mathematician, I cannot give a 100% guarantee that all of this is correct; it is only my attempt. If you spot a mistake, please feel free to open an issue or create a pull request.
Show that if $f(n)$ and $g(n)$ are monotonically increasing functions, then so are the functions $f(n) + g(n)$ and $f(g(n))$, and if $f(n)$ and $g(n)$ are in addition nonnegative, then $f(n) \cdot g(n)$ is monotonically increasing.
First of all, let's recall what a monotonically increasing function is:

A function $f(n)$ is monotonically increasing if $m \le n$ implies $f(m) \le f(n)$.
According to the conditions of the exercise, both of our functions $f(n)$ and $g(n)$ are monotonically increasing. Let's consider the function $h(n) = f(n) + g(n)$ and prove that it is also monotonically increasing. If we take two numbers $m$ and $n$ with $m \le n$, we have $f(m) \le f(n)$ and $g(m) \le g(n)$ by the definition of a monotonically increasing function. Adding these inequalities gives $f(m) + g(m) \le f(n) + g(n)$.
In the same way, since $g(m) \le g(n)$ and $f(n)$ is monotonically increasing, we get $f(g(m)) \le f(g(n))$.
If, in addition, both functions are nonnegative, we can safely multiply the inequalities: from $0 \le f(m) \le f(n)$ and $0 \le g(m) \le g(n)$ it follows that $f(m) \cdot g(m) \le f(n) \cdot g(m) \le f(n) \cdot g(n)$, so $f(n) \cdot g(n)$ is also monotonically increasing.
We can even get a machine-checked proof using the Coq proof assistant:
```coq
Require Import Arith.

(* Two abstract functions over natural numbers... *)
Parameter f : nat -> nat.
Parameter g : nat -> nat.

(* ...assumed to be monotonically increasing. *)
Axiom f_monotonic : forall x y : nat, x <= y -> f x <= f y.
Axiom g_monotonic : forall x y : nat, x <= y -> g x <= g y.

(* The sum of monotonically increasing functions is monotonically increasing. *)
Theorem sum_of_monotonic_funs : forall x y : nat,
  x <= y -> f x + g x <= f y + g y.
Proof.
  intros x y H.
  apply Nat.add_le_mono.
  - apply f_monotonic. assumption.
  - apply g_monotonic. assumption.
Qed.

(* The composition of monotonically increasing functions is
   monotonically increasing. *)
Theorem application_of_monotonic_funs : forall x y : nat,
  x <= y -> f (g x) <= f (g y).
Proof.
  intros x y H.
  apply f_monotonic.
  apply g_monotonic.
  assumption.
Qed.
```
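The multiplication part can be checked the same way. Here is a minimal sketch: since our functions map into `nat`, nonnegativity holds automatically, so the standard lemma `Nat.mul_le_mono` from the `Arith` library is all we need:

```coq
(* The product of monotonically increasing functions over nat is
   monotonically increasing; nonnegativity is automatic for nat. *)
Theorem product_of_monotonic_funs : forall x y : nat,
  x <= y -> f x * g x <= f y * g y.
Proof.
  intros x y H.
  apply Nat.mul_le_mono.
  - apply f_monotonic. assumption.
  - apply g_monotonic. assumption.
Qed.
```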
Prove that $\lfloor \alpha \cdot n \rfloor + \lceil (1 - \alpha) \cdot n \rceil = n$ for any integer $n$ and real number $\alpha$ in the range $0 \le \alpha \le 1$.
The cases $\alpha = 0$ and $\alpha = 1$ are trivial.

If $\alpha = 0$ we have:

$$\lfloor \alpha \cdot n \rfloor + \lceil (1 - \alpha) \cdot n \rceil = \lfloor 0 \rfloor + \lceil (1 - 0) \cdot n \rceil = 0 + n = n$$

If $\alpha = 1$ we have:

$$\lfloor \alpha \cdot n \rfloor + \lceil (1 - \alpha) \cdot n \rceil = \lfloor 1 \cdot n \rfloor + \lceil (1 - 1) \cdot n \rceil = n + 0 = n$$
If $0 < \alpha < 1$ we may use the identities given in the book, $-\lceil x \rceil = \lfloor -x \rfloor$ and $-\lfloor x \rfloor = \lceil -x \rceil$, and in addition the fact that for an integer $n$ and a real number $x$ we have $\lfloor n + x \rfloor = n + \lfloor x \rfloor$ (and likewise for the ceiling). So let's take a look at the given expression:
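Applying $\lceil n + x \rceil = n + \lceil x \rceil$ and then $\lceil -x \rceil = -\lfloor x \rfloor$:

$$\lfloor \alpha n \rfloor + \lceil (1 - \alpha) n \rceil = \lfloor \alpha n \rfloor + \lceil n - \alpha n \rceil = \lfloor \alpha n \rfloor + n + \lceil -\alpha n \rceil = \lfloor \alpha n \rfloor + n - \lfloor \alpha n \rfloor = n$$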
Use equation (3.14) or other means to show that $(n + o(n))^k \in \Theta(n^k)$ for any real constant $k$. Conclude that $\lfloor n \rfloor^k \in \Theta(n^k)$.
At the beginning, the Use equation (3.14) part confused me quite a bit, as the equation with that number is $1 + x \le e^x$. I spent a fair amount of time thinking about it but did not find a way to apply it to the exercise. Luckily, I remembered the errata for Introduction to Algorithms, and according to it this is a typo: it should be Use equation (3.13). That equation says that $n^b \in o(a^n)$ for $a > 1$, i.e. any exponential function grows faster than any polynomial function. Maybe this note will help you, but it did not help me, so I decided to approach this in the most straightforward way. Writing $f(n)$ for the $o(n)$ term, the binomial expansion of $(n + o(n))^k$ is:
$$(n + o(n))^k = \sum_{i=0}^{k} \binom{k}{i} n^{k - i} \cdot f(n)^i$$
The term with the highest power of $n$ is $n^k$ (the $i = 0$ term), which is $\Theta(n^k)$. All other terms contain $n^{k - i}$ and $f(n)^i$. Since $f(n) \in o(n)$ by definition, the ratio $\frac{f(n)}{n}$ tends to $0$ as $n$ grows ($f(n)$ itself need not tend to $0$; it just grows more slowly than $n$), so these terms add only insignificant growth compared to $n^k$, as shown below.
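To make this explicit, every term with $i \ge 1$ can be rewritten so that the vanishing ratio becomes visible:

$$\binom{k}{i} n^{k - i} \cdot f(n)^i = \binom{k}{i} \cdot n^k \cdot \left(\frac{f(n)}{n}\right)^i \in o(n^k)$$

So $(n + o(n))^k = n^k + o(n^k) \in \Theta(n^k)$. For the conclusion of the exercise: since $n - 1 < \lfloor n \rfloor \le n$, we have $\lfloor n \rfloor = n + O(1) = n + o(n)$, and therefore $\lfloor n \rfloor^k \in \Theta(n^k)$.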
We can do the same as in the previous exercise: apply Stirling's approximation and see what we get. But before that, let's recall the definition of $\omega$:
$$\omega(g(n)) = \{ f(n) : \text{for any positive constant } c > 0 \text{, there exists a constant } n_0 \text{ such that } 0 \le c \cdot g(n) < f(n) \text{ for all } n \ge n_0 \}$$
In other words, $g(n)$ grows significantly more slowly than $f(n)$. So let's see:
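Applying Stirling's approximation and taking the logarithm, we get:

$$\log(n!) = \log\left(\sqrt{2 \pi n} \left(\frac{n}{e}\right)^n \left(1 + \Theta\left(\tfrac{1}{n}\right)\right)\right) = \frac{1}{2}\log(2 \pi n) + n \log\frac{n}{e} + \log\left(1 + \Theta\left(\tfrac{1}{n}\right)\right)$$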
In the last expression the leading term is $n \log\frac{n}{e}$, that is, $n \log(n) - n \log(e)$. Dropping the less significant terms, we get what we needed to prove: $\log(n!) \in \Theta(n \cdot \log(n))$.
According to the definition of $\Theta$ we need to show that:

$$\log(c_1 \cdot n) \le \log(f(n)) \le \log(c_2 \cdot n)$$
We can consider $f(n)$ as a linear function $f(n) = a \cdot n + b$, so we can rewrite our inequality as:

$$\log(c_1 \cdot n) \le \log(a \cdot n + b) \le \log(c_2 \cdot n)$$
In other words, we need to find constants $c_1$ and $c_2$ and a threshold $n_0$ after which this inequality holds. Assuming $a > 0$, we can take, for example, $c_1 = \frac{a}{2}$ and $c_2 = a + |b|$: then $c_1 \cdot n \le a \cdot n + b \le c_2 \cdot n$ for all sufficiently large $n$, and since $\log$ is monotonically increasing, applying it to all three parts preserves the inequality. As $\log(c \cdot n) = \log(c) + \log(n) \in \Theta(\log n)$ for any constant $c > 0$, we get $\log(f(n)) \in \Theta(\log n)$.
Is the function $\lceil \log_2(n) \rceil!$ polynomially bounded? Is the function $\lceil \log_2(\log_2(n)) \rceil!$ polynomially bounded?
From the chapter Polynomials we know that polynomially bounded means $f(n) \in O(n^k)$ for some constant $k$. The definition of $O$ is:
$$O(g(n)) = \{ f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \le f(n) \le c \cdot g(n) \text{ for all } n \ge n_0 \}$$
So we basically need to prove (or refute) that $\lceil \log_2(n) \rceil! \in O(n^k)$ and $\lceil \log_2(\log_2(n)) \rceil! \in O(n^k)$.
As soon as we see a factorial, the first thing that comes to mind from this book is Stirling's approximation. Let's recall how it is defined in the book:
$$n! = \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n \left(1 + \Theta\left(\frac{1}{n}\right)\right)$$
First of all, let's consider the $\lceil \log_2(n) \rceil!$ function and try to apply Stirling's approximation to it:
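Substituting $\log_2(n)$ into Stirling's formula (I drop the ceiling for readability; it changes the argument by less than $1$ and does not affect the asymptotics):

$$\lceil \log_2 n \rceil! \approx \sqrt{2 \pi \log_2 n} \left(\frac{\log_2 n}{e}\right)^{\log_2 n} \left(1 + \Theta\left(\frac{1}{\log_2 n}\right)\right)$$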
The leading term here is $\left(\frac{\log_2 n}{e}\right)^{\log_2 n}$, so we can skip the others and consider only it. Using the well-known logarithm identity $a^{\log_b(c)} = c^{\log_b(a)}$ we can rewrite our expression as:
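$$\left(\frac{\log_2 n}{e}\right)^{\log_2 n} = n^{\log_2\left(\frac{\log_2 n}{e}\right)} = n^{\log_2(\log_2 n) - \log_2 e}$$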
To answer whether it is polynomially bounded, we need to compare it with $n^k$. As $k$ is a constant (as is $\log_2 e$) while $\log_2(\log_2 n)$ grows without bound, we will have $n^{\log_2(\log_2 n) - \log_2 e} > n^k$ for large enough $n$, which means $\lceil \log_2(n) \rceil!$ is not polynomially bounded.
Another, simpler way to prove this is to consider $n = 2^k$. In this case $\lceil \log_2(n) \rceil!$ becomes just $k!$, that is, $1 \cdot 2 \cdot 3 \cdot \ldots \cdot k$, while a polynomial $n^m$ becomes $2^{mk} = (2^m)^k$. As we know, the factorial grows faster than any exponential function of $k$, thus the function $\lceil \log_2(n) \rceil!$ is not polynomially bounded.
For the second function, $\lceil \log_2(\log_2(n)) \rceil!$, we can consider $n$ represented as $2^{2^k}$. By the logarithm rules the function becomes $k!$, and it is enough to show that $k! \le 2^{2^k} = n$. Indeed, since $i \le 4^i$ for every $i$, we have $k! = 1 \cdot 2 \cdot 3 \cdot \ldots \cdot k \le 4^1 \cdot 4^2 \cdot 4^3 \cdot \ldots \cdot 4^k = 2^{k(k+1)}$, and $k(k+1) \le 2^k$ for $k \ge 5$, so $k! \le 2^{2^k} = n$ for large enough $k$. Thus the $\lceil \log_2(\log_2(n)) \rceil!$ function is polynomially bounded.
Which is asymptotically larger: $\log^*(\log(n))$ or $\log(\log^*(n))$?
Personally, I found it much easier to use the definition of the iterated logarithm from Wikipedia:
$$\log^*(n) = \begin{cases} 0 & \text{if } n \le 1; \\ 1 + \log^*(\log n) & \text{if } n > 1 \end{cases}$$
So basically the iterated logarithm is the number of times we need to apply the logarithm to $n$ before the result drops to $1$ or below. To understand it better, we can consider the following simple examples:
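With base-$2$ logarithms:

$$\log^*(2) = 1, \qquad \log^*(16) = 3, \qquad \log^*(65536) = 4, \qquad \log^*\left(2^{65536}\right) = 5$$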
To understand which function grows faster, we can divide one by the other and look at the limit of the ratio: if the limit is $0$, the function in the dividend grows more slowly, and vice versa. So let's consider the following ratio under the assumption that $n = 2^m$:
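$$\frac{\log(\log^*(n))}{\log^*(\log(n))} = \frac{\log\left(1 + \log^*(m)\right)}{\log^*(m)}$$

Here I use the fact that $\log^*(2^m) = 1 + \log^*(m)$, which follows directly from the recursive definition above. Writing $t = \log^*(m)$, the ratio is $\frac{\log(1 + t)}{t}$, which tends to $0$ as $t \to \infty$.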
So the function in the dividend grows more slowly than the one in the divisor. This means that the $\log^*(\log(n))$ function grows asymptotically faster than the $\log(\log^*(n))$ function.