An Interactive Guide to Chaos for Dummies

The Math

The Simulations

Simulation 1 – The Double Pendulum

Simulation 2 – The Fixed Point Attractor

Simulation 3 – The Lorenz Attractor

 

The Math

Strange attractors
There does not seem to be a widely accepted definition of strange attractors, which is a consequence of the immense difficulty of the subject. For example, strange attractors and chaotic attractors are often considered to be the same phenomenon, but this is not the case: there is a clear distinction between non-strange non-chaotic, strange non-chaotic and strange chaotic attractors. Different authors thus have different opinions about some of the conditions we will mention in this section, which makes it controversial and therefore interesting.

As discussed in the journal, a strange attractor is made up of infinitely many unstable periodic orbits, or UPOs. As a chaotic system evolves, it gets arbitrarily close to these UPOs. The 'more' stable a UPO, the longer the system will approximately follow its orbit; if the orbit were actually stable, the system would follow it forever and would, by definition, no longer be chaotic. These unstable periodic orbits are very important for attractors because they cause 'stretching' and 'folding', or diverging and converging along all the different orbits. When two trajectories stretch, they move further away from each other, much like two points on an elastic band are separated further when the band is stretched. At a certain point the system then switches over to another UPO. Folding is what keeps an attractor bounded in a finite region of space; if there were no folding, there would be no attractor, and indeed not all chaotic systems have an attractor. A 'normal' (non-strange) attractor, by contrast, never stretches, meaning two trajectories on it never diverge from each other. The next step is knowing how to identify the properties of chaos and strangeness.
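Stretching and folding can be seen directly in a simple chaotic map. The sketch below uses the logistic map x → 4x(1−x) (our own choice of illustration, not an example from the text): two trajectories started a tiny distance apart are stretched rapidly away from each other, while folding keeps their separation bounded by the size of the attractor.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a standard example of a chaotic system."""
    return r * x * (1 - x)

# Two trajectories starting a tiny distance apart.
x, y = 0.3, 0.3 + 1e-10
separations = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    separations.append(abs(x - y))

# Stretching: the separation grows roughly exponentially at first...
print(separations[0], separations[20])
# ...while folding keeps it bounded by the attractor's size (here the interval [0, 1]).
print(max(separations))
```

After a few dozen steps the two trajectories are effectively unrelated, even though they started closer together than any realistic measurement could distinguish.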

As already mentioned, a chaotic system is sensitive to initial conditions. This sensitivity is a direct result of the stretching, or diverging, of the system's trajectories. Luckily, there is a relatively easy way to determine whether a system is chaotic: the Lyapunov exponent. The Lyapunov exponent measures the rate at which trajectories on an attractor diverge, or more simply, the sensitivity to initial conditions. To obtain a Lyapunov exponent for the entire attractor, you measure this divergence at a large number of points on the attractor. There is one Lyapunov exponent for each dimension of the state space. If the sum of all Lyapunov exponents is negative, the system is pulled towards some sort of behaviour. For a system to be chaotic there must also be some repelling force, otherwise it will just fall into a fixed point or a periodic orbit. To determine whether a dynamical system is chaotic, only the largest exponent is needed: if it is greater than zero the system is chaotic; if it is exactly zero or negative, it is not.
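For a one-dimensional map the largest (and only) Lyapunov exponent can be estimated directly as the average of log|f′(x)| along a trajectory. A minimal sketch for the logistic map (our own illustration; the function name and parameters are ours):

```python
import math

def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_iter=10000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    The exponent is the average of log|f'(x)| along a trajectory,
    where f'(x) = r*(1 - 2x) is the map's derivative.
    """
    x = x0
    for _ in range(n_transient):   # discard the transient so we measure on the attractor
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

print(lyapunov_logistic(2.9))   # negative: the orbit settles onto a fixed point
print(lyapunov_logistic(4.0))   # positive (about ln 2 ~ 0.69): chaos
```

The sign of the result is exactly the criterion from the text: negative means the system is pulled into regular behaviour, positive means nearby trajectories diverge and the system is chaotic.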

Another widely used, and slightly more recent, method to determine whether a system is chaotic is the 0-1 test for chaos put forth by Gottwald and Melbourne (2004). This method removes the need to reconstruct the state space, but still requires time series analysis. It turns out that there is no analytical way to determine whether a nonlinear dynamical system is chaotic, at least for now; the chaotic properties must be shown numerically, usually by computer. Sprott (1993) has shown, however, that while we cannot know a priori whether a system of equations will behave chaotically, chaos is abundant: one can fairly easily generate nonlinear systems systematically and check them for chaotic properties.
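As a rough illustration of this idea (a sketch in the spirit of Sprott's search, not his exact procedure), one can randomly sample one-dimensional quadratic maps, discard those whose orbits escape to infinity, and keep those whose Lyapunov exponent comes out positive:

```python
import math
import random

def lyapunov_or_none(f, df, x0=0.1, n_transient=500, n_iter=2000):
    """Lyapunov exponent of a 1-D map, or None if the orbit is unbounded."""
    x = x0
    for _ in range(n_transient):
        x = f(x)
        if not math.isfinite(x) or abs(x) > 1e6:
            return None                  # orbit escapes: no attractor
    total = 0.0
    for _ in range(n_iter):
        x = f(x)
        if not math.isfinite(x) or abs(x) > 1e6:
            return None
        d = abs(df(x))
        if d == 0.0:
            return None                  # derivative vanishes; skip this map
        total += math.log(d)
    return total / n_iter

def find_chaotic_maps(trials=500, seed=1):
    """Sample maps x -> a + b*x + c*x**2 and return the chaotic ones."""
    rng = random.Random(seed)
    chaotic = []
    for _ in range(trials):
        a, b, c = (rng.uniform(-2, 2) for _ in range(3))
        f = lambda x, a=a, b=b, c=c: a + b * x + c * x * x
        df = lambda x, b=b, c=c: b + 2 * c * x
        lam = lyapunov_or_none(f, df)
        if lam is not None and lam > 0.01:
            chaotic.append((a, b, c, lam))
    return chaotic

hits = find_chaotic_maps()
print(len(hits), "chaotic maps out of 500 random trials")
```

Most randomly drawn maps either blow up or settle into regular behaviour, but a non-negligible fraction turn out chaotic, which is Sprott's point about the abundance of chaos.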

So this is how many chaos theorists determine whether a dynamical system is chaotic: they reconstruct the state space of the system and calculate the largest Lyapunov exponent. If it is positive and the system has an attractor, the system is chaotic. Unfortunately, this is typically only done after the difference or differential equations have been constructed (and sometimes even after multiple time series have been plotted).
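State-space reconstruction from a measured time series is typically done with time-delay embedding: each reconstructed state vector is built from the current value and a few lagged copies of it. A minimal sketch (the embedding dimension and lag are choices the analyst must make, not given by the data):

```python
def delay_embed(series, dim, tau):
    """Reconstruct state vectors (s[i], s[i+tau], ..., s[i+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + k * tau] for k in range(dim)) for i in range(n)]

print(delay_embed([0, 1, 2, 3, 4, 5], dim=3, tau=1))
# → [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]
```

The Lyapunov exponent is then estimated from how quickly nearby reconstructed vectors separate over time.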

Fractals
We have seen when an attractor can be called chaotic, but such an attractor also has to be strange. To understand when an attractor is strange, a new term has to be introduced: the fractal. A fractal is a geometric structure with one very important feature, self-similarity. This means that a fractal is made up of parts, and those parts have the same general shape as the whole. Although many other characteristics are needed for a full description of a fractal, for strangeness only the fractal dimension is important. Basically, an attractor is strange if its fractal dimension is a non-integer, like 1.87 (meaning not a whole number, like 2). Although this already tells us when an attractor is strange, we will try to explain why the dimension of a fractal can be non-integer, as we find this quite an interesting phenomenon.

[Figure: evolution of a Sierpinski carpet, clearly showing the self-similarity of a fractal.]

The dimension of an object depends on how it fills up 'space'. We are familiar with the integer dimensions: a line has dimension 1, measured by its length (it has no width); a square has dimension 2, because it has length and width and its area is the product of the two. However, if you were to draw a very, very long, very wiggly line on a piece of paper, it would start to take up some amount of area, rather than just a length. This area is not completely filled, so the dimension lies somewhere between 1 and 2; the longer and more convoluted the line, the higher the dimension. Because a fractal, and likewise a strange attractor, does not take up the entire dimensionality of the phase space, its dimension will be a non-integer and consequently smaller than the dimension of the phase space.
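The box-counting dimension makes this precise: cover the object with boxes of side ε, count how many boxes N(ε) it touches, and take the growth rate of log N(ε) against log(1/ε). For the Sierpinski carpet, each refinement splits a cell into 9 and keeps 8, so the dimension is log 8 / log 3 ≈ 1.89. A small sketch (our own illustration) that counts boxes on a 3^k × 3^k grid:

```python
import math

def carpet_boxes(level):
    """Count the filled cells of the Sierpinski carpet on a 3**level grid."""
    def filled(x, y):
        # A cell is removed if, at any scale, it lands in the centre ninth.
        while x > 0 or y > 0:
            if x % 3 == 1 and y % 3 == 1:
                return False
            x, y = x // 3, y // 3
        return True
    n = 3 ** level
    return sum(filled(x, y) for x in range(n) for y in range(n))

for k in (1, 2, 3):
    boxes = carpet_boxes(k)                    # 8**k boxes of side 3**-k
    dim = math.log(boxes) / math.log(3 ** k)   # log N(eps) / log(1/eps)
    print(k, boxes, round(dim, 4))             # dimension ≈ 1.8928 at every level
```

Because the carpet is exactly self-similar, the estimate is the same at every refinement level; for a strange attractor one instead fits the slope of log N against log(1/ε) over a range of box sizes.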