This exercise will clarify the meaning of Leibniz notation by exploring the original insights of Leibniz using modern computing tools.

As a beginning, let's summarize Leibniz's understanding of his operators $d$ and $\int$. This information is taken from *A History of Mathematics: An Introduction* by Katz, 3rd ed, from Section 16.2, titled "Gottfried Wilhelm Leibniz".

The basic idea out of which the calculus of Leibniz developed was the observation that if $A,B,C,D,E$ is an increasing sequence of numbers and $L,M,N,P$ is its sequence of differences, then $E-A = L+M+N+P$. This is a crude form of the fundamental theorem of calculus.
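Here is a quick check of that observation with concrete numbers (the values below are mine, chosen only for illustration):

```python
# The sum of the differences of a sequence equals last minus first:
# a crude form of the fundamental theorem of calculus.
A, B, C, D, E = 1, 3, 6, 10, 15          # an increasing sequence
L, M, N, P = B - A, C - B, D - C, E - D  # its differences
print(L + M + N + P == E - A)  # True
```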

To clarify, let's do an example. Suppose we have an increasing sequence `x = [1,3,5,7,9]`. Then the corresponding sequence of differences is `dx = [2,2,2,2]`, because $3-1 = 2$, $5-3 = 2$, $7-5=2$ and $9-7=2$. The sequence of differences is formed by taking the difference between each pair of consecutive entries in the list `x`. Note that there is one less difference than there are numbers in the original list.
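In plain Python (before we bring in numpy below), the difference sequence can be sketched with a list comprehension:

```python
x = [1, 3, 5, 7, 9]
# pair each entry with its successor and subtract
dx = [x[i + 1] - x[i] for i in range(len(x) - 1)]
print(dx)                     # [2, 2, 2, 2]
print(len(dx) == len(x) - 1)  # True: one less difference than entries
```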

*Exercise*: Given `x=[1,4,7,10,19]`, compute the associated list of differences `dx`.

*Solution*: `dx = [3,3,3,9]`.

The difference operation (which Leibniz eventually denoted as $d\,$) occurs as a primitive in many scientific programming libraries. In this assignment, the library we will be concerned with is called *numpy*, which is short for Numerical Python.

This is how you compute differences using numpy:

In [ ]:

```python
import numpy as np

x = [1,3,5,7,9]
dx = np.diff(x)
print(dx)
```

In [ ]:

```python
x = [1,4,7,10,19]
dx = np.diff(x)
print(dx)
```

As we proceed through the exercises, please change the values in the cells and run them on your own. Be aware that you can immediately get help on any command by executing it in a cell with a `?` character appended.

For example, the following command will show you the documentation for `np.diff`:

In [ ]:

```python
np.diff?  # execute me!
```

In Leibniz's conception of the calculus, the distinction between a variable and a sequence of numbers is somewhat blurry. He eventually considers `dx` to be a small increment when he presents his ideas on the difference quotient. But his original ideas on differences are motivated by the Harmonic Triangle, from which it is clear that sequences of differences and partial sums were much on his mind. It is often fruitful to think of both `dx` and `∫ x` as sequences rather than numbers. Without this understanding, some of Leibniz's formulations (e.g. `d ∫ x = x`) are difficult to comprehend. Leibniz's ideas evolved over time, and our modern notation differs slightly even from his refined ideas.

For us, every variable will denote a sequence. For example, the letter `x` will represent some sequence of numbers, such as `x=[1,2,3,4]`. Similarly, `dx = [1,1,1]` will also be a sequence. If we write $y = x^2$, then we mean `y` to be the sequence of squares of elements of `x`. Here is an illustration:

In [ ]:

```python
x = np.array([1,2,3,4])
y = x**2
y
```

In the above cell we wrote `y=x**2`, which is what we might write to express the dependence of a single number `y` on a single number `x`. But in reality this line of code stands for a computation on *lists*: each element of the list `y` comes from squaring the corresponding element of the list `x`.

We usually call this style of programming *vectorized code*, or sometimes *array programming*.

We now give a sequence of examples to show how vectorization works in numpy.

In [ ]:

```python
# Vectorization examples
x = np.array([0,1,2,3])
y = np.array([3,4,5,6])
print("x = {}".format(x))
print("y = {}".format(y))
print("x + y = {}".format(x+y))
print("x*y = {}".format(x*y))
print("x/y = {}".format(x/y))
c = 7
print("if c = {} then c*y = {}".format(c,c*y))
```

Vectorized programming not only makes for cleaner expressions, it is also efficient. This is because backend libraries are highly optimized to use hardware parallelism and other numerical tricks to make vectorized operations execute quickly.
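To get a feel for the speedup, here is a rough timing sketch (the array size is my own choice; the measured times vary by machine):

```python
import timeit
import numpy as np

x = np.arange(1_000_000, dtype=float)

# square every element with an explicit Python loop...
loop_time = timeit.timeit(lambda: [v * v for v in x], number=3)
# ...versus one vectorized numpy expression
vec_time = timeit.timeit(lambda: x * x, number=3)

print("loop:       {:.4f} s".format(loop_time))
print("vectorized: {:.4f} s".format(vec_time))
```

On typical hardware the vectorized version wins by a large factor.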

Vectorized notation is also highly compatible with Leibniz notation. In a way you can think of Leibniz's original notation as being itself a kind of vectorized code.

In the notes above we took the "differences" of a list `y` which depended on a list `x` in a functional way.

Can we go one step further and compute the "differences of the differences"?

Yes!

We can also talk about the differences in `y` just as we did with respect to `x`. In the above example, we had `y = [1,4,9,16]`. It is easy to see that `dy = [3,5,7]`. We use the worksheet to automatically compute `dy` using numpy.

In [ ]:

```python
y = [1,4,9,16]
dy = np.diff(y)
dy
```

Now we "go to town" and compute differences of differences of differences in the following cell.

In [ ]:

```python
# Iterated differences...
x = np.array([1,2,3,4,5,6,7,8,9,10])
y = x**3 - 2*x**2 + 1
print("y = {}".format(y))
dy = np.diff(y)
print("dy = {}".format(dy))
ddy = np.diff(dy)
print("ddy = {}".format(ddy))
dddy = np.diff(ddy)
print("dddy = {}".format(dddy))
```

We will later see that these "differences" are very similar to derivatives. The fact that `dddy` (or $d^3y$) is constantly 6 essentially says that the third derivative of the cubic polynomial $y=x^3-2x^2+1$ is the constant 6.
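As a sanity check, the same relationship holds for finer steps: the third differences of this cubic sampled with step $\Delta x$ are constant and equal to $6\,\Delta x^3$ (the step size $0.1$ below is my own choice):

```python
import numpy as np

dx = 0.1
x = np.arange(0, 5, dx)
y = x**3 - 2*x**2 + 1
dddy = np.diff(y, n=3)   # n=3 iterates the difference three times
print(np.allclose(dddy, 6 * dx**3))  # True
```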

Leibniz referred to the sequence of inputs `x` as the "abscissae" and the sequence `y` which functionally depends on `x` as the "ordinates". Nowadays these terms are rarely used. We might instead say that `x` is the independent variable and `y` is the dependent variable. But for fun I will continue to say *abscissae* and *ordinates* here, especially since it means less writing.

Sometimes we want an abscissa containing a lot of numbers. Numpy has two built-in functions that make this easy: *arange()* and *linspace()*.

The *arange()* command takes three inputs: a starting point $a$, an ending point $b$, and the step size $\Delta x$. It returns a list of $\frac{b-a}{\Delta x}$ numbers, which begin at $a$ and increase in increments of $\Delta x$. Here is an example:

In [ ]:

```python
x = np.arange(1,10,0.5)
x
```

You can see that the abscissa `x` consists of $\frac{10-1}{0.5} = 18$ numbers. The list begins at $a=1$ and proceeds by steps of size $\Delta x = 0.5$.
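You can confirm the count programmatically:

```python
import numpy as np

x = np.arange(1, 10, 0.5)
print(len(x))  # 18, matching (10 - 1) / 0.5
```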

The *linspace* operator is similar to *arange*, but instead of taking a step size $\Delta x$ as an argument, it instead takes the desired number $n$ of points.

Let's do an example:

In [ ]:

```python
x = np.linspace(1,10,18, endpoint=False)
x
```

The output of the above example is the same as the one before: 18 evenly spaced numbers beginning at 1 and increasing by increments of $\Delta x = 0.5$. The `endpoint=False` argument tells the function that I do not want the endpoint `10` to be included in the abscissa list. (Try deleting that part and rerunning the command.)

Whether we use `linspace` or `arange` is mostly a matter of preference. In this document I will tend to use `arange`.

You should be familiar with plotting pairs of abscissae and ordinates to make a visual description of an equation (aka a *graph*). Below you can see how to use the matplotlib library to turn the lists `x` and `y` into a familiar graph.

In [ ]:

```python
%matplotlib inline
import matplotlib.pyplot as plt

x = np.arange(0,10,0.1)
y = x**3
plt.plot(x,y)
plt.title(r"The graph $y=x^3$")
plt.show()
```

Now that we have a little familiarity with numpy and vectorization, let's revisit Leibniz's central insight, which we mentioned in the first cell.

Here it is as expressed in Katz:

> Leibniz considered a curve defined over an interval divided into subintervals and erected ordinates $y_i$ over each point $x_i$ in the division. If one forms the sequence $\{dy_i\}$ of differences of these ordinates, its sum $\sum_i dy_i$ is equal to the difference $y_n-y_0$ of the final and initial ordinates.

That's a bit of a mouthful. The first sentence just says that we have some list `x` of abscissae and some list `y` of ordinates with a functional relationship between them, such as `y=x**2`. The second sentence says that if we sum the elements of the list `dy`, then we just get the difference between the last and first elements of the list `y`.

Let's see if that's actually true...

In [ ]:

```python
x = np.arange(0,10,0.1)
y = x**2
dy = np.diff(y)
sum(dy) == y[-1]-y[0], sum(dy)
```

The above cell describes a case in which we partition the interval $[0,10]$ into $100 = \frac{10-0}{1/10}$ subintervals and let the list `x` represent the left endpoint of each respective subinterval. We then "erect the ordinates `y`", where $y=x^2$ (understood as a vectorized expression). We then compute the differences for the list `y` as `dy` and sum those up.

Using the notation `y[-1]` to describe the last element of `y` and `y[0]` to describe the first, we see from the output (`True`) that Leibniz's rule holds in this case. We also see that the sum of the differences happens to be 98.01.

If you experiment (which you should) by changing the increment $\Delta x = 0.1$ to other values, you will see that the equation continues to hold. In particular it holds as $\Delta x \rightarrow 0^+$, or in other words as the number of subintervals goes to infinity.

To see that $\Delta x$ doesn't matter to the truth of the equation, you have to observe that the sum "telescopes", meaning most of the summands cancel out.

We'll walk through that argument now.

First notice that we can express the list `x` as

`x =` $[0,\Delta x, 2\Delta x, 3\Delta x,\ldots, (n-1)\Delta x]$.

Hopefully this way of thinking is familiar from your calculus class. Because $y=x^2$, we must have

`y =` $[0,\Delta x^2, (2\Delta x)^2, (3\Delta x)^2,\ldots, ((n-1)\Delta x)^2]$.

Finally it must be true that

`sum(dy)` $= \Delta x^2 - 0 + (2\Delta x)^2 - \Delta x^2 + (3\Delta x)^2 - (2\Delta x)^2 + \cdots + ((n-1)\Delta x)^2 - ((n-2)\Delta x)^2$.

Now we will argue that this sum "telescopes", meaning most of the things that are written cancel out.

Note that in the sum each term appears exactly once positively and once negatively, except for the initial $0$ (that is, $a^2$ with $a=0$) and the final $((n-1)\Delta x)^2$, which each appear only once. Therefore everything except these terms cancels out (and the initial term is $0$, so it can go as well).

`sum(dy)` $=((n-1)\Delta x)^2.$

Because $n = \frac{b-a}{\Delta x}$, we have that $(n-1)\Delta x$ is simply $b-\Delta x$.

In the above example, $b-\Delta x= 10-0.1 = 9.9$ and `sum(dy)` $= 9.9^2 = 98.01$, as indicated.

In general, `sum(dy)` $= (b-\Delta x)^2$, and as $\Delta x \rightarrow 0^+$ this becomes $b^2$.

If $a$ had not happened to be zero, then we would have had

`sum(dy)` $= (b-\Delta x)^2-a^2.$

Because $f(x)=x^2$ is continuous, the limit as $\Delta x \rightarrow 0^+$ is $b^2-a^2$.
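We can confirm the nonzero-$a$ formula numerically (the endpoints $a=2$, $b=10$ and the step below are my own choices):

```python
import numpy as np

a, b, dx = 2.0, 10.0, 0.1
x = np.arange(a, b, dx)
y = x**2
# The telescoping sum should equal (b - dx)**2 - a**2.
print(np.isclose(sum(np.diff(y)), (b - dx)**2 - a**2))  # True
```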

By making $\Delta x \rightarrow 0^+$, we have, in a way, turned `dy` from a finite list of differences into an infinite list of very tiny differences. What we just showed, a little bit rigorously, is that Leibniz's sum rule can be true *even if* the list of differences `dy` is infinite. The basic idea is very similar to the Fundamental Theorem of Calculus.

The second of Leibniz's insights about calculus can be described like this (Katz again):

> Similarly, if one forms the sequence $\{d \sum y_i\}$, where $\sum y_i = y_0+y_1 +\cdots+y_i$, the difference sequence $\{d\sum y_i\}$ is equal to the original sequence of the ordinates.

As usual, thinking of `y` as a list of numbers, we will use $\int y$ to denote the new sequence of partial sums of the entries of `y`. To illustrate, we use the corresponding operator in numpy, which is `cumsum` (cumulative sum).

In [ ]:

```python
x = np.array([1,2,3,4,5])
y = x**3
print("Here is x = {}".format(x))
print("Here is y = {}".format(y))
print("Here is ∫ y = {}".format(np.cumsum(y)))
```

You can see that `∫ y` is a sequence of the same length as `y`. Each element $\sum y_i$ (Katz's notation) in `∫ y` is the sum of the elements in `y` of index at most $i$.
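A quick check of this description on a small example:

```python
import numpy as np

y = np.array([1, 8, 27, 64, 125])
s = np.cumsum(y)
# s[i] is the sum of y[0], ..., y[i]; e.g. s[2] = 1 + 8 + 27
print(len(s) == len(y))            # True: same length as y
print(s[2] == y[0] + y[1] + y[2])  # True
```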

Leibniz's second insight is the observation that $\int$ and $d$ are inverse operators. That is, `d ∫ y = ∫ dy = y`.

Let's try it out in numpy, using `np.diff` for `d` and `np.cumsum` for `∫`.

In [ ]:

```python
x = np.array([1,2,3,4,5])
y = x**3
print("Here is x = {}".format(x))
print("Here is y = {}".format(y))
print("Here is ∫ y = {}".format(np.cumsum(y)))
print("Here is d ∫ y = {}".format(np.diff(np.cumsum(y))))
print("Here is ∫ dy = {}".format(np.cumsum(np.diff(y))))
```

There has been an annoying technical snafu here: each element in the sequence `∫ dy` is off by one. That is because Leibniz assumes that all ordinate sequences begin with zero. Let's fix that and try again.

In [ ]:

```python
x = np.array([0,1,2,3,4,5])
y = x**3
print("Here is x = {}".format(x))
print("Here is y = {}".format(y))
print("Here is ∫ y = {}".format(np.cumsum(y)))
print("Here is d ∫ y = {}".format(np.diff(np.cumsum(y))))
print("Here is ∫ dy = {}".format(np.cumsum(np.diff(y))))
```

This time we were successful.

Note that something is still a little strange. The sequence for `y` begins with a 0, but this 0 has been left out of the last two expressions. It was inevitable that something be omitted, for the simple reason that the difference operator results in a list which is one shorter than the initial list.
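The length bookkeeping is easy to verify directly:

```python
import numpy as np

y = np.array([0, 1, 8, 27, 64, 125])
# d shortens a list by one, so d ∫ y has one fewer entry than y.
print(len(np.diff(np.cumsum(y))) == len(y) - 1)  # True
```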

With this one caveat, it is not hard to see why Leibniz's second insight is true. Analyzing `d ∫ y`, we see that (in Katz's notation) the $i$th element of the sequence `d ∫ y` is the $i$th element of the sequence `y`:

$\sum y_i - \sum y_{i-1} = y_i.$

Note, however, that $y_0$ cannot be computed in this way, and so the initial element of the sequence `y` is forgotten.

Going the other way, we consider the $i$th element of the sequence `∫ dy` $= \{\sum dy_i\}$ in Katz's notation. Using Leibniz's first insight, we see that $\sum dy_i = y_i - y_0$. If $y_0=0$, as Leibniz always assumes, then we again arrive at $y_i$.
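This elementwise identity is easy to confirm in numpy, even for a sequence that does not begin with zero:

```python
import numpy as np

y = np.array([3, 7, 12, 20, 33])
# Each partial sum of dy equals y_i - y_0.
print(np.array_equal(np.cumsum(np.diff(y)), y[1:] - y[0]))  # True
```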

This proves the statement (in Leibniz's terms) `d ∫ y = ∫ dy = y`. Technically we should make a note to omit the first element of `y` on the right-hand side of these equations.

To make things work out numerically, we will often have to remove the first element of our ordinate sequences in numpy. The syntax for doing this is the following.

In [ ]:

```python
y = np.array([1,2,3,4,5,6,7])
print("y[1:] = {}".format(y[1:]))
```

This kind of array manipulation is called *array slicing*. More sophisticated slices are possible, even when `y` is multidimensional, but we will not use them here.

In [ ]:

```python
# Difference quotients: dy/dx approximates the derivative 3x**2 of y = x**3
x = np.arange(0,10,0.1)
y = x**3
dy = np.diff(y)
dx = np.diff(x)
fig,axes = plt.subplots(1,2)
axes[0].plot(x[1:],dy/dx)
axes[0].plot(x[1:],dy)
axes[1].plot(x,3*x**2)
plt.show()
```


In [ ]:

```python
# We see that ∫ (dy/dx) dx = y (up to the omitted initial value y[0])
f = np.cumsum(dy/dx*dx)
fig,axes = plt.subplots(1,2)
axes[0].plot(x[1:],f)
axes[1].plot(x,y)
plt.show()
```

In [ ]:

```python
# Inverse relationship of ∫ and d
dx = 0.5
x = np.arange(0,5,dx)
y = x**2
# We see that ∫ dy = d ∫ y = y (after dropping the initial y[0] = 0)
print(np.cumsum(np.diff(y))==np.diff(np.cumsum(y)))
print(np.cumsum(np.diff(y)) == y[1:])
```


In [ ]:

```python
# Calculus of differentials
```

In [ ]:

```python
# Sum rule: d(y + z) = dy + dz
def d(Y):
    # pad with a tiny leading entry so d(Y) keeps the same length as Y
    return np.hstack((np.array([0.000001]),np.diff(Y)))

dx = 0.1
x = np.arange(0,10,dx)
y = np.sin(x)
z = x**2
fig,axes = plt.subplots(1,2)
axes[0].plot(x,d(y+z))
axes[1].plot(x,d(y)+d(z))
plt.show()
```

In [ ]:

```python
# Leibniz Rule: d(yz) = dy*z + y*dz
def d(Y):
    # pad with a tiny leading entry so d(Y) keeps the same length as Y
    return np.hstack((np.array([0.000001]),np.diff(Y)))

fig,axes = plt.subplots(1,2,figsize=(13,5))
axes[0].plot(x,d(y*z),label="d(yz)")
axes[0].plot(x,d(y)*z+y*d(z),label="dy*z+y*dz")
axes[0].legend()
axes[1].plot(x,(d(y*z)/dx),label="d(yz)/dx")
axes[1].plot(x,(d(y)/dx)*z+y*(d(z)/dx),label=r"$\frac{dy}{dx}z+y\frac{dz}{dx}$")
axes[1].legend()
plt.show()
```
