Simple chaotic behavior in non-linear systems

In the last lecture, we discussed how floating point arithmetic can be subtle: because of its finite precision, even simple properties of elementary arithmetic, such as the associativity of addition, don’t hold:

a, b, c = 1.0, 1e-16, 1e-16
print(f"(a + b) + c = {(a + b) + c}")
print(f"a + (b + c) = {a + (b + c)}")
(a + b) + c = 1.0
a + (b + c) = 1.0000000000000002

The problem here is caused by the default rounding behavior of floating point numbers. These rounding errors can accumulate and grow as a computation proceeds.

We should add that there’s a subtle interplay here between the default printing representation of floating point numbers and issues of rounding error. Diving into all the gory details is beyond the scope of this class; our intent is simply to make you aware of issues you may run into in real-world work with lots of numerical data.
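To get a feel for that interplay, here is a quick sketch: Python prints the shortest decimal string that round-trips to the stored binary value, so asking for more digits reveals what is actually stored for a number like 0.1.

```python
x = 0.1
# Default repr: the shortest decimal string that maps back to this double.
print(x)            # 0.1
# More digits expose the binary value that is really stored.
print(f"{x:.20f}")  # 0.10000000000000000555
```

The two printouts describe the same stored value; only the number of digits shown differs.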

While the above difference may seem harmless, in a large sum it can accumulate. Let’s see what happens if we perform this tiny addition many times:

N = 1_000_000
total = a
tiny = b  # play with this being b or 2*b
for i in range(N):
    total += tiny
print(f"Loop : {total}")
print(f"Exact: {a + N*tiny}")
print(f"Diff : {total - (a + N*tiny)}")
Loop : 1.0
Exact: 1.0000000001
Diff : -1.000000082740371e-10

This behavior can have serious implications in a variety of numerical work scenarios.
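When you genuinely need an accurate sum of many floats, the standard library offers one mitigation: `math.fsum` tracks the low-order bits that naive accumulation discards and returns a correctly rounded total. A minimal sketch, using the same numbers as above:

```python
import math

a, tiny, N = 1.0, 1e-16, 1_000_000

# Naive left-to-right accumulation: each tiny addend is rounded away,
# because 1.0 + 1e-16 rounds back to 1.0 at every step.
naive = a
for _ in range(N):
    naive += tiny

# math.fsum keeps track of the lost low-order bits and returns
# the correctly rounded sum of all the inputs.
accurate = math.fsum([a] + [tiny] * N)

print("naive:", naive)     # 1.0
print("fsum :", accurate)  # 1.0000000001
```

This comes at a cost in speed and memory, so it is worth reaching for only when the accuracy of a large sum actually matters.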

Consider the seemingly trivial problem of evaluating with a computer the expression

$$f(x) = r x (1 - x)$$

where $r$ and $x$ are real numbers with $r \in [0,4]$ and $x \in (0,1)$. This expression can also be written in an algebraically equivalent form:

$$f_2(x) = r x - r x^2.$$

We will see, however, that when using a computer these two forms don’t necessarily produce the same answer. Computers cannot represent the real numbers (a mathematical abstraction with infinite precision) but instead must use finite-precision numbers that fit in finite memory. The two expressions above can, therefore, lead to slightly different answers as the various (algebraically equivalent) operations are carried out by the computer.

First a look at a few simple tests:

def f1(x): return r*x*(1-x)
def f2(x): return r*x - r*x**2

r = 0.5
x = 0.8
print('f1:', f1(x))
print('f2:', f2(x))
f1: 0.07999999999999999
f2: 0.07999999999999996
r = 3.9
x = 0.8
print('f1:', f1(x))
print('f2:', f2(x))
f1: 0.6239999999999999
f2: 0.6239999999999997

The difference is small but not zero:

print('difference:', (f1(x)-f2(x)))
difference: 2.220446049250313e-16
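To confirm that this gap is purely a floating point effect, we can redo the calculation in exact rational arithmetic with the standard library’s `fractions` module; over the rationals the two algebraically equivalent forms agree perfectly.

```python
from fractions import Fraction

# Represent r = 3.9 and x = 0.8 exactly as rationals.
r = Fraction(39, 10)
x = Fraction(8, 10)

f1 = r * x * (1 - x)
f2 = r * x - r * x**2
print(f1 == f2)  # True: the algebra holds exactly over the rationals
print(f1)        # 78/125, i.e. exactly 0.624
```

The nonzero difference we measured above therefore comes entirely from rounding in the floating point evaluation, not from the algebra.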

More importantly, this difference begins to accumulate as we perform the same operations over and over. Let’s illustrate this behavior by using the formulas above iteratively, that is, by feeding the result of the evaluation back into the same formula:

$$x_{n+1} = f(x_n), \quad n = 0, 1, \ldots$$

We can experiment with different values of $r$ and different starting points $x_0$ to observe the different results. We will simply build a Python list that contains the results of three different (algebraically equivalent) forms of evaluating the above expression.

Exercise

Build a little script that computes the iteration of $f(x)$ using three different ways of writing the expression. Store your results and plot them using the `plt.plot()` function (the solution follows).

For completeness, we define three algebraically equivalent formulations:

def f1(x): return r*x*(1-x)
def f2(x): return r*x - r*x**2
def f3(x): return r*(x-x**2)

In order to see the difference between the initial behavior and the later evolution, let’s declare two variables to control our plotting:

num_points = 100  # total number of points to compute
drop_points = 50  # don't display the first drop_points

Using these, we can get the following figure:

%matplotlib widget
import matplotlib.pyplot as plt
import numpy as np

x0 = 0.55 # Any starting value
r  = 3.9  # Change this to see changes in behavior
fp = (r-1.0)/r  # non-trivial fixed point of the map: f(fp) = fp
x1 = x2 = x3 = x0
data = []
data.append([x1,x2,x3])
for i in range(num_points):
    x1 = f1(x1)
    x2 = f2(x2)
    x3 = f3(x3)
    data.append([x1,x2,x3])

# Display the results
plt.figure()
plt.title('r=%1.1f' % r)
plt.axhline(fp, color='black')
plt.plot(data[drop_points:], '-o', markersize=4)
plt.show()
# Bifurcation diagram: iterate the map for many values of r and
# plot the long-term values of x_n against r.
def f(x, r):
    #return r*x*(1-x)
    return r*x - r*x**2

num_points = 500
drop_points = 50
rmin, rmax = .4, 4
xmin, xmax = 0, 1
x0 = 0.65
fig, ax = plt.subplots()
ax.set_xlim(rmin, rmax)
ax.set_ylim(xmin, xmax)
for r in np.linspace(rmin, rmax, 2000):
    x = np.empty(num_points)
    x[0] = x0
    for n in range(1, num_points):
        x[n] = f(x[n-1], r)
    x = x[drop_points:]
    rplot = r*np.ones_like(x)
    ax.plot(rplot, x, 'b,')

plt.show()
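The chaotic regime visible in the bifurcation diagram has a hallmark worth demonstrating directly: extreme sensitivity to initial conditions. A minimal sketch, assuming $r = 3.9$ and two starting points that differ by only $10^{-12}$; after a few dozen iterations the trajectories become completely uncorrelated.

```python
# Sensitivity to initial conditions at r = 3.9: two trajectories
# starting 1e-12 apart quickly separate to order-one distances.
r = 3.9
x, y = 0.65, 0.65 + 1e-12
for n in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(f"n={n:3d}  |x - y| = {abs(x - y):.3e}")
```

The separation grows roughly exponentially until it saturates at the size of the attractor itself, which is why, in the chaotic regime, the three algebraically equivalent formulas above eventually produce entirely different trajectories.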