This is a follow-up to the post regarding Big O Notation for Calculus. You will need MathML enabled in order to see this post properly...
Contents
1 Weird Numbers
1.1 Slope
1.2 $i$ and $\varepsilon$
1.3 Division by Zero?
1.4 Big O for the Bonus Parts
2 Derivative
2.1 Divide by Zero, and You Go To Hell!
2.2 Product Rule
2.3 Chain Rule
1 Weird Numbers
1.1 Slope
Consider some linear function

$$f(x) = mx + b \tag{1.1}$$

for some nonzero real number $m$, and an arbitrary real number $b$. We can calculate the slope by considering

$$x \to x + h, \tag{1.2}$$

some constant "shift" in $x$, and using this to figure out the change in $f(x)$:

$$\Delta f(x) = f(x+h) - f(x). \tag{1.3}$$

What is this? Well, we plug in the definition of $f$ to find

$$\Delta f(x) = \bigl(m(x+h) + b\bigr) - (mx + b), \tag{1.4}$$

which reduces to

$$\Delta f(x) = mh. \tag{1.5}$$

Thus we may write the slope of $f$ as

$$\frac{\Delta f(x)}{h} = m, \tag{1.6}$$

which is independent of both $x$ and the choice of $h$.
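As a quick sanity check (a concrete example of my own, not from the original post): for $f(x) = 3x + 2$,

$$\frac{\Delta f(x)}{h} = \frac{\bigl(3(x+h) + 2\bigr) - (3x + 2)}{h} = \frac{3h}{h} = 3$$

for every $x$ and every nonzero $h$, matching the coefficient $m = 3$.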
Can we do this in general for a polynomial

$$g(x) = x^{n} \tag{1.7}$$

for some $n \in \mathbb{N}$? Let us try! We consider some nonzero shift $h$, and we write (for $x \to x + h$)

$$g(x+h) = (x+h)^{n} = x^{n} + nx^{n-1}h + (\text{bonus parts}), \tag{1.8}$$

where the "bonus parts" are other stuff. Actually, by the binomial theorem, the bonus parts would have to be $h^{2}$ times a polynomial in $h$, one with a nonzero constant term (namely $\binom{n}{2}x^{n-2}$, when $n \geq 2$). This information is really encoded in

$$g(x+h) = x^{n} + nx^{n-1}h + \mathcal{O}(h^{2}), \tag{1.9}$$

where $\mathcal{O}(h^{2})$ is a more rigorous way of saying "bonus parts at least quadratic in $h$." This gives us a more precise way to specify the error when writing out terms at most linear in $h$.
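For concreteness (an illustrative case of my own), take $n = 3$:

$$(x+h)^{3} = x^{3} + 3x^{2}h + 3xh^{2} + h^{3} = x^{3} + 3x^{2}h + h^{2}(3x + h),$$

so the "bonus parts" here are $h^{2}(3x + h)$: $h^{2}$ times a polynomial, exactly as claimed.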
We see that we are abusing notation and writing

$$\mathcal{O}(h^{2}) = h^{2}\,p(x,h) \tag{1.10}$$

for some polynomial $p(x,h)$. So by dividing through by $h$ we obtain

$$\frac{\mathcal{O}(h^{2})}{h} = h\,p(x,h). \tag{1.11}$$

This implies

$$\frac{\mathcal{O}(h^{2})}{h} = \mathcal{O}(h), \tag{1.12}$$

and similar reasoning suggests

$$\frac{\mathcal{O}(h^{n})}{h} = \mathcal{O}(h^{n-1}). \tag{1.13}$$
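To see rule (1.12) in action on a concrete polynomial (my own example): the bonus parts of the cubic expansion above were $h^{2}(3x + h)$, and

$$\frac{h^{2}(3x + h)}{h} = h(3x + h) = 3xh + h^{2} = \mathcal{O}(h).$$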
So let us go on with our considerations.
We then have

$$g(x+h) - g(x) = x^{n} + nx^{n-1}h + \mathcal{O}(h^{2}) - x^{n}, \tag{1.14}$$

where we were slick and noted the definition of $g(x+h)$ in order to plug it in. So, we can rewrite this as

$$g(x+h) - g(x) = nx^{n-1}h + \mathcal{O}(h^{2}), \tag{1.15}$$

and we want to divide both sides by $h$. But we know how to do this now! First we will write

$$\Delta g(x) = g(x+h) - g(x) \tag{1.16}$$

as shorthand, and rewrite our equation as

$$\Delta g(x) = nx^{n-1}h + \mathcal{O}(h^{2}). \tag{1.17}$$

We divide both sides by $h$:

$$\frac{\Delta g(x)}{h} = nx^{n-1} + \mathcal{O}(h). \tag{1.18}$$

But we have a problem that we didn't have before: the slope depends on $x$ and $h$.
Historically, people noted that we were working with a term $\mathcal{O}(h)$. If we could make that term equal to 0, then everything would work out nicely. How do we do this? Well, we formally invent a number $\varepsilon$ and use it instead of a finite nonzero number $h$.
1.2 $i$ and $\varepsilon$
We know that we have a "number" $i$ satisfying

$$i^{2} = -1. \tag{1.19}$$

There is no real number which satisfies this, but we can "adjoin" $i$ to $\mathbb{R}$. That is, we pretend that $i$ is a variable satisfying equation (1.19), then we have polynomials of the form

$$a + bi \tag{1.20}$$

for real numbers $a$ and $b$. Of course, we can formally multiply these polynomials together, and we end up with the number system ("ring") of complex numbers $\mathbb{C}$ (we would have to prove that $z^{-1}$ exists for each nonzero $z$ to make it a field).
Let's consider it. Suppose we did have

$$(a + bi)(c + di) = ac + (ad + bc)i + bd\,i^{2}. \tag{1.21}$$

Then we plug in (1.19) to find

$$(a + bi)(c + di) = ac + (ad + bc)i - bd, \tag{1.22}$$

which simplifies to merely

$$(a + bi)(c + di) = (ac - bd) + (ad + bc)i. \tag{1.23}$$

But this is precisely of the form we described: there is some term which is a multiple of $i$ (the imaginary term) and another independent of $i$ (the real term).
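A concrete instance (my own numbers, just plugging $a = 1$, $b = 2$, $c = 3$, $d = 4$ into (1.23)):

$$(1 + 2i)(3 + 4i) = (1 \cdot 3 - 2 \cdot 4) + (1 \cdot 4 + 2 \cdot 3)i = -5 + 10i.$$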
Let's consider a similar problem. We want a nonzero "number" $\varepsilon$ which is the "smallest" number possible. What would this mean? Suppose we have a "small" finite number

$$0 < h < 1. \tag{1.24}$$

Then we see the property specifying that $h$ is small would be

$$0 < h^{2} < h. \tag{1.25}$$

But if we had the smallest number, then the general argument is we expect

$$\varepsilon^{2} = 0. \tag{1.26}$$

We call such an $\varepsilon$ an "infinitesimal" number. If we formally consider such an $\varepsilon$ (i.e., pretend it exists and obeys this relationship), then we can run into some problems. For example: what is $1/\varepsilon$?
1.3 Division by Zero?
The problem is: what is $1/\varepsilon$? The answer is: we don't know.
However, why would $1/\varepsilon$ ever be useful? We can consider

$$f(x) = x^{n} \tag{1.27}$$

for some $n \in \mathbb{N}$. Then

$$f(x + \varepsilon) = (x + \varepsilon)^{n} \tag{1.28}$$

can be simplified to what? Let's consider the $n = 2$ case:

$$(x + \varepsilon)^{2} = x^{2} + 2x\varepsilon + \varepsilon^{2}. \tag{1.29}$$

But the $\varepsilon^{2}$ term vanishes, so

$$(x + \varepsilon)^{2} = x^{2} + 2x\varepsilon. \tag{1.30}$$

We see that

$$(x + \varepsilon)^{3} = (x + \varepsilon)^{2}(x + \varepsilon) \tag{1.31}$$

can be carried out as if it were polynomial multiplication. We then obtain

$$(x + \varepsilon)^{3} = (x^{2} + 2x\varepsilon)(x + \varepsilon) = x^{3} + x^{2}\varepsilon + 2x^{2}\varepsilon + 2x\varepsilon^{2}, \tag{1.32}$$

and again, the $\varepsilon^{2}$ term vanishes. We thus obtain

$$(x + \varepsilon)^{3} = x^{3} + 3x^{2}\varepsilon. \tag{1.33}$$

Indeed, the general pattern appears to be

$$(x + \varepsilon)^{n} = x^{n} + nx^{n-1}\varepsilon. \tag{1.34}$$

We would like to write

$$f(x + \varepsilon) - f(x) = nx^{n-1}\varepsilon. \tag{1.35}$$

Notice the difference this time: we don't have any $\mathcal{O}(\varepsilon^{2})$ terms. The only price we paid is we cannot get rid of the factor $\varepsilon$.
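This arithmetic is mechanical enough to hand to a computer. Below is a minimal sketch in Python (my own illustration, not part of the original post) of "dual numbers" $a + b\varepsilon$, implementing exactly the multiplication rule $(a + b\varepsilon)(c + d\varepsilon) = ac + (ad + bc)\varepsilon$:

```python
class Dual:
    """A number a + b*eps, where eps**2 = 0."""
    def __init__(self, a, b=0.0):
        self.a = a  # "real" part
        self.b = b  # coefficient of eps

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __sub__(self, other):
        return Dual(self.a - other.a, self.b - other.b)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}*eps"

x_plus_eps = Dual(5.0, 1.0)                    # x + eps, with x = 5
cube = x_plus_eps * x_plus_eps * x_plus_eps
print(cube)                                    # 125.0 + 75.0*eps, i.e. x^3 + 3x^2*eps
```

The $\varepsilon$-coefficient of $(x + \varepsilon)^{3}$ comes out to $3x^{2}$, matching the pattern in (1.34); this trick is the germ of what is nowadays called forward-mode automatic differentiation.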
1.4 Big O for the Bonus Parts
The take-home moral is that $\mathcal{O}(h^{2})$ enables us to rigorously consider infinitesimals. How? Well, the most significant terms are written out explicitly, and the rest are swept under the rug with $\mathcal{O}(h^{2})$. For our example of

$$g(x) = x^{n}, \tag{1.36}$$

we saw we could write

$$g(x+h) = x^{n} + nx^{n-1}h + \mathcal{O}(h^{2}), \tag{1.37}$$

which tells us the error of "truncating," or cutting off, the polynomial to be explicitly first order plus some change. This change we consider to be, in effect, "infinitesimal" in comparison to the $nx^{n-1}h$ term.
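We can also watch the size of the bonus parts numerically. A small sketch (my own check; the choice $g(x) = x^{5}$ at $x = 2$ is arbitrary): if the error of truncating (1.37) is genuinely $\mathcal{O}(h^{2})$, then error$/h^{2}$ should settle to a constant as $h$ shrinks.

```python
n, x = 5, 2.0

def g(t):
    return t**n

for h in [0.1, 0.01, 0.001, 0.0001]:
    error = g(x + h) - (x**n + n * x**(n - 1) * h)  # the O(h^2) "bonus parts"
    print(f"h = {h:>7}: error/h^2 = {error / h**2:.4f}")
```

The ratio settles near $\binom{5}{2}x^{3} = 80$, the constant term of the bonus parts.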
2 Derivative
We still have these bonus parts when considering the slope. That is, for some nonzero $h$ and arbitrary $x$, we have

$$\Delta g(x) = nx^{n-1}h + \mathcal{O}(h^{2}), \tag{2.1}$$

which gives us

$$\frac{\Delta g(x)}{h} = nx^{n-1} + \mathcal{O}(h). \tag{2.2}$$

We want to get rid of that $\mathcal{O}(h)$ on the right hand side. How to do this?
Let's be absolutely clear before moving on. We want to consider the slope of our function $g$. To do this we considered a nonzero $h$, and then constructed

$$\Delta g(x) = g(x+h) - g(x). \tag{2.3}$$

This function described the difference between the values of $g$ at $x+h$ and at $x$. So, to describe the rate of change we take

$$\frac{\Delta g(x)}{h}. \tag{2.4}$$

But we want to describe the instantaneous rate of change. Although this sounds scary, it really means we don't want to work with some extra parameter $h$. We want to consider the rate of change and describe it in such a way that it doesn't depend on $h$.
So what do we do? Well, the first answer is to set $h$ to be 0. This is tempting, but wrong, because we end up with

$$\frac{\Delta g(x)}{h}\bigg|_{h=0} = \frac{0}{0}, \tag{2.5}$$

which is not well-defined. The second answer is to consider the limit $h \to 0$, so we can avoid division-by-zero errors. This is better, and we write

$$\frac{\mathrm{d}g(x)}{\mathrm{d}x} = \lim_{h \to 0} \frac{g(x+h) - g(x)}{h}, \tag{2.6}$$

following Leibniz's notation. This is the definition of the derivative of $g$.
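Here is a quick numerical illustration of the limit (my own sketch; $g(x) = x^{3}$ at $x = 2$ is an arbitrary choice). As $h \to 0$, the difference quotient approaches $3x^{2} = 12$:

```python
def g(t):
    return t**3

x = 2.0
for h in [1.0, 0.1, 0.01, 0.001]:
    quotient = (g(x + h) - g(x)) / h  # slope over a finite shift h
    print(f"h = {h:>5}: slope = {quotient:.6f}")
# slope -> 12, which is 3*x**2 at x = 2
```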
2.1 Divide by Zero, and You Go To Hell!
Well, formally, we need to take the limit $h \to 0$. What does that mean for the left hand side? Could we accidentally be dividing by $h = 0$ and get infinities? This is a problem we have to seriously consider.
The first claim is that

$$f(x+h) - f(x) = f'(x)h + \mathcal{O}(h^{2}). \tag{2.7}$$

This would imply that

$$\frac{f(x+h) - f(x)}{h} = f'(x) + \mathcal{O}(h) \tag{2.8}$$

for some function $f'(x)$. There would be no division by zero errors, but still we have to prove that equation (2.7) is true in general, i.e., for every function $f$. We have seen it is true only for polynomials.
So, let us consider a function

$$f(x) = \frac{1}{x^{n}} \tag{2.9}$$

for some $n \in \mathbb{N}$. What to do? Well, let's consider what happens when, for nonzero $h$, we change $x$ to be $x + h$. We have

$$f(x+h) = \frac{1}{(x+h)^{n}} \tag{2.10}$$

by definition of $f$. We would expect then

$$\Delta f(x) = \frac{1}{(x+h)^{n}} - \frac{1}{x^{n}}. \tag{2.11}$$

What to do? Well, let's gather the terms together

$$\Delta f(x) = \frac{x^{n}}{x^{n}(x+h)^{n}} - \frac{(x+h)^{n}}{x^{n}(x+h)^{n}}, \tag{2.12}$$

which we can do, since we multiply both terms by 1 (the first term by $x^{n}/x^{n}$, the second term by $(x+h)^{n}/(x+h)^{n}$). We can then add the fractions together

$$\Delta f(x) = \frac{x^{n} - (x+h)^{n}}{x^{n}(x+h)^{n}} \tag{2.13}$$

and consider expanding the numerator and denominator out. We see that to first order, we have

$$x^{n} - (x+h)^{n} = -nx^{n-1}h + \mathcal{O}(h^{2}), \tag{2.14}$$

which shouldn't be surprising (we've proven this many times so far!). The denominator expands out to be

$$x^{n}(x+h)^{n} = x^{2n} + \mathcal{O}(h), \tag{2.15}$$

which, for nonzero $x$, cannot be made 0.
We combine these results to write

$$\Delta f(x) = \frac{-nx^{n-1}h + \mathcal{O}(h^{2})}{x^{2n} + \mathcal{O}(h)}. \tag{2.16}$$

We observe that we can factor out an $h$ in the numerator (the upstairs part of the fraction) and then we can divide both sides by it:

$$\frac{\Delta f(x)}{h} = \frac{-nx^{n-1} + \mathcal{O}(h)}{x^{2n} + \mathcal{O}(h)}. \tag{2.17}$$

So what happens if we set $h = 0$ on the right hand side? Do we run into problems? Well, we run into problems on the left hand side, but not on the right hand side.
So what to do? Well, the formal mathematical procedure is to take the limit $h \to 0$, which then lets us write

$$\lim_{h \to 0} \frac{\Delta f(x)}{h} = \frac{\mathrm{d}f(x)}{\mathrm{d}x} \tag{2.18}$$

for the left hand side. For the right hand side, we can symbolically just set $h = 0$. This is sloppy, because it's not quite true. But this is what's done in practice. We get

$$\lim_{h \to 0} \frac{-nx^{n-1} + \mathcal{O}(h)}{x^{2n} + \mathcal{O}(h)} = \frac{-nx^{n-1}}{x^{2n}} = -nx^{-n-1}. \tag{2.19}$$

Observe that we can combine these results to write

$$\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{1}{x^{n}}\right) = -nx^{-n-1}. \tag{2.20}$$

There was no risk of dividing by zero anywhere.
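A numerical spot-check of (2.20) (my own, with the arbitrary choice $n = 3$): for several values of $x$, the difference quotient of $1/x^{3}$ sits right on top of $-3x^{-4}$.

```python
n, h = 3, 1e-6

def f(t):
    return 1.0 / t**n

for x in [0.5, 1.0, 2.0, 4.0]:
    numeric = (f(x + h) - f(x)) / h   # difference quotient at x
    predicted = -n * x**(-n - 1)      # the formula (2.20)
    print(f"x = {x}: numeric = {numeric:.6f}, predicted = {predicted:.6f}")
```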
2.2 Product Rule
Suppose we have two arbitrary functions $f$ and $g$. Let's define a new function

$$A(x) = f(x)g(x); \tag{2.21}$$

then what's

$$\frac{\mathrm{d}A(x)}{\mathrm{d}x} = \text{?} \tag{2.22}$$

I don't know, let us look. We see that we first pick some nonzero $h$ and then consider

$$\Delta A(x) = A(x+h) - A(x). \tag{2.23}$$

Now we plug in this expression to equation (2.21), the equation where we defined $A$, and we find

$$\Delta A(x) = f(x+h)g(x+h) - f(x)g(x). \tag{2.24}$$

We do the following trick: add

$$0 = f(x+h)g(x) - f(x+h)g(x) \tag{2.25}$$

to both sides, and we obtain

$$\Delta A(x) = f(x+h)g(x+h) - f(x+h)g(x) + f(x+h)g(x) - f(x)g(x). \tag{2.26}$$

We can gather terms together

$$\Delta A(x) = f(x+h)\bigl(g(x+h) - g(x)\bigr) + \bigl(f(x+h) - f(x)\bigr)g(x), \tag{2.27}$$

which simplifies to

$$\Delta A(x) = f(x+h)\,\Delta g(x) + \Delta f(x)\,g(x). \tag{2.28}$$

As usual, we divide both sides by $h$:

$$\frac{\Delta A(x)}{h} = f(x+h)\,\frac{\Delta g(x)}{h} + \frac{\Delta f(x)}{h}\,g(x). \tag{2.29}$$

By taking the limit $h \to 0$ we end up with

$$\frac{\mathrm{d}A(x)}{\mathrm{d}x} = f(x)\,\frac{\mathrm{d}g(x)}{\mathrm{d}x} + \frac{\mathrm{d}f(x)}{\mathrm{d}x}\,g(x). \tag{2.30}$$

Notice that we implicitly noted

$$\lim_{h \to 0} f(x+h) = f(x). \tag{2.31}$$

Of course, we assume that $f$ is continuous at $x$, which turns out to be correct since differentiability implies continuity (we will prove this at some other time).
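As a quick check of (2.30) (my own example, with the arbitrary choices $f(x) = x^{2}$ and $g(x) = x^{3}$, so that $A(x) = x^{5}$ and both sides should be near $5x^{4}$):

```python
def f(t):
    return t**2

def g(t):
    return t**3

def df(t):
    return 2 * t       # known derivative of f

def dg(t):
    return 3 * t**2    # known derivative of g

x, h = 1.5, 1e-6
numeric = (f(x + h) * g(x + h) - f(x) * g(x)) / h   # difference quotient of A = f*g
product_rule = f(x) * dg(x) + df(x) * g(x)          # right hand side of (2.30)
print(numeric, product_rule)                        # both close to 5 * x**4 = 25.3125
```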
Theorem 2.1 (Product Rule). Let $f$, $g$ be differentiable at $x$, and let

$$A(x) = f(x)g(x). \tag{2.32}$$

Then

$$\frac{\mathrm{d}A(x)}{\mathrm{d}x} = \frac{\mathrm{d}f(x)}{\mathrm{d}x}\,g(x) + f(x)\,\frac{\mathrm{d}g(x)}{\mathrm{d}x} \tag{2.33}$$

describes the derivative of $A$ at $x$.
We've already proven this. So let's consider an example:
$$A(x) = x \cdot x, \tag{2.34}$$

where $f(x) = x$, and

$$g(x) = x. \tag{2.35}$$

Thus

$$\frac{\mathrm{d}A(x)}{\mathrm{d}x} = 1 \cdot x + x \cdot 1 = 2x. \tag{2.36}$$

The claim is that

$$\frac{\mathrm{d}}{\mathrm{d}x}x^{n} = nx^{n-1} \tag{2.37}$$

for any $n \in \mathbb{N}$. Is this surprising? No, but the surprising part is that it is a consequence of the product rule. How to prove this? Well, we need to do it by induction on $n$.
Base Case ($n = 1$): we see that

$$f(x) = x^{1} = x, \tag{2.38}$$

and we can see immediately that

$$\frac{\mathrm{d}f(x)}{\mathrm{d}x} = 1 = 1 \cdot x^{0}. \tag{2.39}$$

So this proves the base case.
Inductive Hypothesis: suppose this will work for arbitrary $n$.
Inductive Case: for $n + 1$, we have

$$\frac{\mathrm{d}}{\mathrm{d}x}x^{n+1} = \frac{\mathrm{d}}{\mathrm{d}x}(x \cdot x^{n}) = \left(\frac{\mathrm{d}x}{\mathrm{d}x}\right)x^{n} + x\,\frac{\mathrm{d}}{\mathrm{d}x}x^{n}. \tag{2.40}$$

Observe we can consider the first term and apply the base case

$$\left(\frac{\mathrm{d}x}{\mathrm{d}x}\right)x^{n} = 1 \cdot x^{n}, \tag{2.41}$$

which is then

$$\left(\frac{\mathrm{d}x}{\mathrm{d}x}\right)x^{n} = x^{n}. \tag{2.42}$$

The second term is (recall the inductive hypothesis) simpler:

$$x\,\frac{\mathrm{d}}{\mathrm{d}x}x^{n} = x \cdot nx^{n-1} = nx^{n}. \tag{2.43}$$

We add both of these together to find

$$\frac{\mathrm{d}}{\mathrm{d}x}x^{n+1} = x^{n} + nx^{n} = (n+1)x^{n}. \tag{2.44}$$

But this is precisely what we wanted! And that concludes the inductive proof.
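To make the inductive step concrete (an illustrative case of my own, going from $n = 2$ to $n + 1 = 3$):

$$\frac{\mathrm{d}}{\mathrm{d}x}x^{3} = \frac{\mathrm{d}}{\mathrm{d}x}(x \cdot x^{2}) = 1 \cdot x^{2} + x \cdot 2x = 3x^{2}.$$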
2.3 Chain Rule
We can combine functions together through composition. This looks like

$$A(x) = g(f(x)). \tag{2.45}$$

The question is: what's the derivative (rate of change) of $A$ in terms of the derivatives of $f$ and $g$?
Here we really take advantage of big-O notation. Observe for some nonzero $h$ we have

$$\Delta A(x) = g(f(x+h)) - g(f(x)), \tag{2.46}$$

but we argued that

$$f(x+h) = f(x) + f'(x)h + \mathcal{O}(h^{2}). \tag{2.47}$$

Let's plug this in:

$$\Delta A(x) = g\bigl(f(x) + f'(x)h + \mathcal{O}(h^{2})\bigr) - g(f(x)). \tag{2.48}$$

So we conclude that

$$\Delta A(x) = g\bigl(f(x) + \Delta f(x)\bigr) - g(f(x)). \tag{2.49}$$

We can divide both sides by $h$ simply:

$$\frac{\Delta A(x)}{h} = \frac{g\bigl(f(x) + \Delta f(x)\bigr) - g(f(x))}{h}. \tag{2.50}$$

Now what to do?
Well, we can do the following trick: multiply both sides by

$$1 = \frac{\Delta f(x)}{\Delta f(x)}. \tag{2.51}$$

This would give us

$$\frac{\Delta A(x)}{h} = \frac{g\bigl(f(x) + \Delta f(x)\bigr) - g(f(x))}{\Delta f(x)} \cdot \frac{\Delta f(x)}{h}. \tag{2.52}$$

But what is $\Delta f(x)/h$? We recall equation (2.47) and write

$$\frac{\Delta f(x)}{h} = f'(x) + \mathcal{O}(h). \tag{2.53}$$

Using this, we can simplify our equation:

$$\frac{\Delta A(x)}{h} = \frac{g\bigl(f(x) + \Delta f(x)\bigr) - g(f(x))}{\Delta f(x)} \cdot \bigl(f'(x) + \mathcal{O}(h)\bigr). \tag{2.54}$$

Observe that we may take the limit as $h \to 0$, which gives us

$$\frac{\mathrm{d}A(x)}{\mathrm{d}x} = \frac{\mathrm{d}g(f)}{\mathrm{d}f}\,\frac{\mathrm{d}f(x)}{\mathrm{d}x}, \tag{2.55}$$

which intuitively looks like fractions cancelling out to give the right answer. Although this is the intuitive idea, DO NOT cancel terms!
Moreover, we should really clarify what is meant by

$$\frac{\mathrm{d}g(f)}{\mathrm{d}f}. \tag{2.56}$$

Let us first consider

$$u = f(x). \tag{2.57}$$

Then really

$$\frac{\mathrm{d}g(f)}{\mathrm{d}f} = \frac{\mathrm{d}g(u)}{\mathrm{d}u}\bigg|_{u = f(x)} \tag{2.58}$$

describes what we should do. Namely, first take the derivative of $g$ and then evaluate it at $f(x)$.
Theorem 2.2 (Chain Rule). Let $f$ be differentiable at $x$, let $g$ be differentiable at $f(x)$, and let

$$A(x) = g(f(x)). \tag{2.59}$$

Then

$$\frac{\mathrm{d}A(x)}{\mathrm{d}x} = \frac{\mathrm{d}g(u)}{\mathrm{d}u}\bigg|_{u = f(x)} \cdot \frac{\mathrm{d}f(x)}{\mathrm{d}x} \tag{2.60}$$

describes the derivative of $A$ at $x$.
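One last numerical sketch (my own, with the arbitrary choices $f(x) = x^{2} + 1$ and $g(u) = u^{3}$), comparing the difference quotient of $A = g \circ f$ against the right hand side of (2.60):

```python
def f(t):
    return t**2 + 1

def g(u):
    return u**3

def df(t):
    return 2 * t        # known derivative of f

def dg(u):
    return 3 * u**2     # known derivative of g

x, h = 1.0, 1e-6
numeric = (g(f(x + h)) - g(f(x))) / h   # difference quotient of A = g(f(x))
chain_rule = dg(f(x)) * df(x)           # right hand side of (2.60)
print(numeric, chain_rule)              # both close to 24.0
```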
Again, we have already proven this, which concludes this post.