Channel: Fundamental theorem of calculus, integration, connection between "methods" - Mathematics Stack Exchange

Fundamental theorem of calculus, integration, connection between "methods"


Jump down if you don't feel like reading all the babble and just want to get to the point:

Hello, everyone! Lately, I've been exploring calculus on my own, building on theory I'd heard or read previously. The thing is, I can "derive" on my own one of the two "implementations" of integration: summing blocks, each the product of a function value at an incrementing input and the delta between inputs (which, by the definition of multiplication, gives the rectangular area delimited by the two). Some definitions I've read so far just imply a non-deterministic "pick an x in an interval", denoting the pick with an asterisk, which only managed to irritate me.

So, if anyone could help me connect the following with the antiderivative-difference concept, I'd be most grateful. (I know how to intuitively derive the rule; it's just the connection between the two that eludes me.)

Here's my go at it; I apologize if it's not rigorous enough. Thanks for the insight on rigorous analysis, @Antonio and @André.

Let's say we have a smooth (differentiable, eh?) function and we want to know the area below it on the closed interval between $a$ and $b$. We can simply get the linear x-distance between $a$ and $b$ by subtracting the former from the latter. The distance by itself isn't much to go on, but we can subdivide it into $n$ equal parts, which gives us the width of our rectangles. After that, we can use the concept of limits to push $n$ to $\infty$ to increase the precision of our solution.

$$\Delta = \frac{b-a}{n}$$

(The limit $n \rightarrow \infty$ is taken over the whole sum below, not over $\Delta$ alone, which by itself would just give $0$.)

All that remains is to link that with the function at each increment. The input to $f(x)$ has to advance in steps of the precomputed $\Delta$ as the $\Sigma$ iterator $i$ traverses the "loop" and adds together all the blocks so far.

A single block looks like: $$S_i = f(a + i\Delta)\frac{b-a}{n}$$

Generalizing:

$$S = \lim_{n \rightarrow \infty} \sum_{i=0}^{n-1} f(a + i\Delta)\frac{b-a}{n}$$

I have written a simple 32-bit program that operates on a degree-$n$ polynomial; precision tops out at around a million samples. Due to 32-bit constraints, sampling beyond a billion makes $\Delta$ too small relative to the floating-point spacing, so adding it to the running input simply produces no change, as if $\Delta$ were $0$. At the end of the loop, the final input to the function differs from $b$ by only about $10^{-7}$, one part in ten million, and that's just rounding imprecision (it doesn't noticeably affect the result because it falls below the scale of the answer).
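That cutoff can be reproduced: in single precision, a step of $(b-a)/n$ for $n$ above a billion is smaller than the spacing between representable numbers near $a = 2$, so adding it changes nothing. A small illustration (in Python rather than the original program, using the stdlib `struct` module to emulate 32-bit floats; the helper name is mine):

```python
import struct

def to_f32(x):
    """Round a Python float (a 64-bit double) to the nearest 32-bit float value."""
    return struct.unpack('f', struct.pack('f', x))[0]

a = 2.0
delta = 3.0 / 1_000_000_000  # (b - a)/n for n = 10^9: about 3e-9

# In 32-bit precision the spacing between adjacent floats near 2.0 is about
# 2.4e-7, so 2.0 + 3e-9 rounds straight back to 2.0: the sample point never moves.
print(to_f32(to_f32(a) + to_f32(delta)) == a)  # True

# In 64-bit (double) precision the same step is still large enough to register.
print(a + delta == a)  # False
```

So the stall the program hits past a billion samples is a precision artifact, not a property of the sum itself.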

Example: a simple $f(x) = x^2$ processed at $a = 2$ and $b = 5$ resolves to an area of $39$ square units — the same as the difference of the function's antiderivative at inputs $5$ and $2$.

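The original program isn't shown here, but a minimal sketch of such a left-endpoint Riemann-sum loop (in Python, with a function name of my choosing) might look like:

```python
def riemann_sum(f, a, b, n):
    """Approximate the area under f on [a, b] with n left-endpoint rectangles."""
    delta = (b - a) / n
    # Sum f(a + i*delta) * delta for i = 0 .. n-1, exactly the S above.
    return sum(f(a + i * delta) * delta for i in range(n))

# f(x) = x^2 on [2, 5]; the antiderivative difference gives 5^3/3 - 2^3/3 = 39.
approx = riemann_sum(lambda x: x * x, 2.0, 5.0, 1_000_000)
print(approx)  # close to 39
```

At a million samples the result agrees with $39$ to several decimal places, matching the behavior described above.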

While this is all nice, fine and dandy, it brings me to the point of this question. This is evidently true:

$$F(b) - F(a) = \lim_{n \rightarrow \infty} \sum_{i=0}^{n-1} f(a + i\Delta)\frac{b-a}{n}$$

I ask: why? Or better yet, how? While I can derive each side on its own just fine, following somewhat different trains of thought, how did people connect the two? That connection is exactly what the FTC asserts, or am I wrong? They seem like two different ways of looking at the same problem, yet concluding the left side seems to require insight into the right-hand-side approach.
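One standard bridge between the two (a sketch, not a full proof) starts by telescoping the left side over the same partition $x_i = a + i\Delta$:

$$F(b) - F(a) = \sum_{i=0}^{n-1} \left[ F(x_{i+1}) - F(x_i) \right]$$

By the mean value theorem, each bracket equals $F'(\xi_i)\Delta = f(\xi_i)\Delta$ for some $\xi_i$ in $(x_i, x_{i+1})$ — which is exactly where that irritating "pick an x in the interval" comes from. So for every $n$,

$$F(b) - F(a) = \sum_{i=0}^{n-1} f(\xi_i)\frac{b-a}{n}$$

which is a Riemann sum with particular sample points. For a continuous $f$, every choice of sample points converges to the same limit as $n \rightarrow \infty$, which is the claimed equality.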

I am considering exploring a book by F. Burk called "A Garden of Integrals" to get a more in-depth view of this. But can somebody show me the link between the two?

All I can say is that they are equal by value. Thoughts?

