Playful Python Projects Modeling and Animation
This is an excellent book for readers who want to deepen their programming
skills. It uses a low-threshold, no-ceiling approach to programming, offering
the reader a plethora of practical projects that spark curiosity and demonstrate
the versatile contexts of application where refined programming abilities can be
applied.
— Calkin Suero Montero, Associate Professor, Uppsala University
The main feature of the book is the choice of topics that are designed to be both
entertaining and serious. These excursions are a great way to hone coding skills
while exploring diverse areas of human knowledge.
The variety of discussed subjects and creative project ideas make the book a perfect choice for aspiring coders wondering where to apply their growing skills.
Maxim Mozgovoy
Designed cover image: Rabbits. Beauty Art Design of Cute Little Easter Bunny in the Meadow; stockphoto
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use.
The authors and publishers have attempted to trace the copyright holders of all material reproduced
in this publication and apologize to copyright holders if permission to publish in this form has not
been obtained. If any copyright material has not been acknowledged please write and let us know so
we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact [email protected].
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are
used only for identification and explanation without intent to infringe.
DOI: 10.1201/9781003455295
Typeset in Latin Modern font
by KnowledgeWorks Global Ltd.
Publisher’s note: This book has been prepared from camera-ready copy provided by the authors.
Contents
Foreword ...................................................................................................... xi
Index.......................................................................................................... 229
Foreword
The profession of a software developer is becoming more specialized than ever. There are countless technologies to master, and the day-to-day activities of IT professionals on different career paths are incredibly diverse.
Over the decades, the challenge of making software has started to look more and more like a regular job, full of inevitable routine tasks, governed by "business processes", "market demands", and other facets of the real world many of us are not overly excited about.
This process is natural: our field matures, and maturity comes with certain
downsides. However, no manager, employer, or market analyst should make us forget that programming is fun. It was fun for the hackers who created the world's most famous programming languages, games, and operating systems, and it
can still be fun today. It probably does not look this way from many current
books and blogs that tend to be highly specialized or follow a straightforward
“getting things done” approach, but this is just a matter of setting the right
goals.
The primary goal of this book is to present a collection of entertaining programming projects aimed at beginner and intermediate coders and hobbyists. In a sense, it fills the gap between introductory programming tutorials and specialized literature. Each project in the book is also supposed to demonstrate a certain interesting practical application of programming skills, to show what can be achieved with simple coding efforts. Most projects aim to simulate or analyze something found in the real world, so we will rarely, if ever, deal with abstract exercises.
Naturally, many of the ideas presented here are hardly new or original. My
challenge here is to hand-pick the topics that can bring fun and deliver some
useful knowledge: a good hobby project might entertain as well as educate.
(As a professional educator, I cannot overlook this aspect!) The primary criterion for including a certain project is my personal taste: what sounds fun
to me might not sound fun to someone else. However, my rules of thumb are
simple: (1) be fun; (2) provide educational value; (3) be short; (4) do not
require pro skills or specialized knowledge; and (5) do not rely on the “magic”
functionality of a third-party library.
In practical terms, the last rule means that we are going to rely only on the
Python standard library and write the rest ourselves. At the same time, we
won’t reinvent the wheel: if something is available out of the box, we will use
it. This decision sets the scope of the book quite precisely, leaving out any projects involving larger or smaller building blocks. For example, I like the ideas of making a game with the Pygame library or writing my own line-drawing routine, but not for this book.
1 A bit of Python
1.1 THE CASE FOR PYTHON
This book is not tied to any particular programming language or coding style.
Most projects presented here can be easily reproduced in different programming environments. However, "easy" does not mean "equally easy". Some languages require more ceremony than others. A textbook example of a "Hello, world" program in Java looks like this:
class Hello
{
    public static void main(String[] args)
    {
        System.out.println("Hello, World!");
    }
}
This code is not only several lines long; it also involves some understanding of what class or public static mean, and all this for the sake of printing one line of text on your screen. The value of these concepts grows as your project grows, but since we are going to discuss only small and relatively simple programs, their benefit is likely to be minimal.
Python makes an easy choice for our purposes. It is concise, involves little
ceremony, and has a reputation of being friendly for beginners, who start their
journey with the following piece of code:
print("Hello, world!")
I suspect that Python is also the most popular first language nowadays, so I expect knowledge of its basics from the reader. Since there is no ironclad list of "Python basics", I will have to propose my own version.
For the purposes of this book, the following elements are considered “basic”:
Again, many of them are not simple, but they are part and parcel of Python,
and deliberately avoiding them would produce code that feels nothing like
Python at all. At least I managed to get around lambda functions without
much harm.
Apart from these elements, we will also rely on data classes and turtle
graphics. These concepts are not difficult, but they might be outside of a
typical beginner’s toolbox. Finally, it should be noted that all the projects
are designed and tested in Python 3.9. Python’s backward compatibility is
far from perfect, but I do not expect any issues with newer versions. As a
reminder, no third-party libraries are used.
A better option is to keep the components together within the same entity, which can be easily achieved with a data class:
from dataclasses import dataclass
from random import uniform

@dataclass
class Point:
    x: float
    y: float

    @classmethod
    def create_random(cls):
        return cls(uniform(0, 10), uniform(0, 10))

p1 = Point.create_random()
p2 = Point.create_random()
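As a side note (my own illustration, not from the text): the @dataclass decorator generates __init__, __repr__, and __eq__ for us, which is what makes such classes so compact:

```python
from dataclasses import dataclass

@dataclass
class Vec2:
    x: float
    y: float

v = Vec2(1.0, 2.0)           # generated __init__
print(v)                     # generated __repr__: Vec2(x=1.0, y=2.0)
print(v == Vec2(1.0, 2.0))   # generated __eq__: True
```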
All graphical projects in this book are going to use turtle graphics. Making the turtle module the only out-of-the-box option for drawing is a peculiar design decision, but we have what we have.
Turtle graphics was specifically designed as a beginner's gateway to computer graphics, so it is likely that you know about it. However, we'll push turtle a bit farther than required for drawing basic shapes; so let's discuss some relevant aspects of its functionality.
import turtle
Next, suppose we need to draw a line between two given points (20, 0)
and (300, 200). Just issuing a single-line command to do it would have been
too simple. Instead, we have to create a “robotic turtle” and give it some
instructions:
m = turtle.Turtle()
m.color("blue") # choose a drawing color
m.penup() # no drawing mode
m.goto(20, 0) # source point
m.pendown() # drawing mode
m.goto(300, 200) # destination point
If you run this code, you will first see a drawing canvas appear, then a
quick stroke over the canvas, performed by a visible arrow-like “robot”, and
then everything will be closed.
To keep the drawn picture on the screen, you can add a pause using sleep()
function of the time module:
import time
...
This solution would, however, freeze the canvas for 5 seconds: you will not
be able to move it around the screen or close it. A better option is to end your
code with a turtle.done() call. In a nutshell, it turns the drawing canvas into
a regular window, which can be moved, resized, and closed.
Note that the turtle in the example above moves away from the screen center toward its top-right corner. The default turtle coordinate system presumes that the screen center is (0, 0), the x-axis points right, and the y-axis points up.
Each Turtle object keeps track of its movements, which makes it possible,
for example, to undo some actions or even completely remove the drawn shape
from the screen:
m = turtle.Turtle()
... # draw something with m
m.clear() # remove everything drawn with m
We will not use this capability much, but it comes in handy in one specific
case. Suppose we need an updatable text label on the screen. A turtle can be
used to output text as follows:
writer = turtle.Turtle()
writer.hideturtle()
writer.penup()
writer.goto(100, 100)
writer.write("Hello")
If we try updating the label by printing Good-bye at the same position, we'll simply get a mess of both labels printed one on top of the other. A call to writer.clear() before updating the label fixes this issue.
Line-drawing animation is helpful to understand what is going on, but it also obviously slows down the program. There are ways to make animation faster, but we can also shut it down completely by calling turtle.tracer(0, 0).
Turning off animation also turns off screen updates, meaning that we will
not see any graphics at all. Calling turtle.update() forces a screen update to
make the changes visible.
A real robot cannot instantly jump to a point (x, y) from its current place. What it can do is turn around and move forward. Seymour Papert, one of the original creators of Logo, thought that
the ability to “identify with the Turtle” was very important [39]. The point is
that we can easily imagine being a robot that executes commands like “turn
90 degrees to the right, then go 10 steps forward”, and it helps to internalize
what programming is all about.
Naturally, Python turtles also support such "native" commands, and we will use them occasionally.
Turtle graphics is also a great example of how the right mental model of
a certain process can sometimes simplify our tasks. Imagine we need to draw
a simple 100×100 square (see Figure 1.1, left). It is equally easy to do with
conventional drawing functions and with turtle robot commands:
Now, how difficult is it to draw a slightly rotated square (Figure 1.1, right)?
The first option looks much more complicated now, because the coordinates
of vertices of the square are not obvious anymore, while the turtle graphics
version is still a piece of cake:
m = turtle.Turtle()
m.left(20)
for _ in range(4):
    m.forward(100)
    m.left(90)
In our case, we will only need simple moving objects, which can be achieved by leveraging the existing functionality of the turtle module.
The trick is to use robotic turtles of different shapes and sizes. By default,
the turtle looks like a little arrowhead, but this shape can be changed or made
invisible:
...
m = turtle.Turtle()
m.shape("circle")
...
m.hideturtle()
The Python turtle library comes with six shapes ("classic", "arrow", "turtle", "triangle", "circle", and "square"), but this list can be extended with custom user-created shapes.
Likewise, it is possible to adjust turtle size. This functionality is slightly less
straightforward to use, however. All built-in turtle shapes fit into a 20×20-
pixel square. This fact can be checked from a Python shell:
>>> import turtle
>>> turtle.Screen()._shapes['square']._data
((10, -10), (10, 10), (-10, 10), (-10, -10))
Thus, if we pass size to shapesize(), the actual shape will fit into a 20×size pixel-wide square. If a certain robotic turtle is visible, we can simply change its color and appearance as needed, and move it around to obtain the effect of motion.
Event handling is going to be a minor yet vital part of our projects. Suppose
we want to move a turtle around using the keyboard. How can it be done? The
turtle module lets us register handlers for various kinds of events. A handler
is just a function that is going to be called when a certain event occurs. An
event is, well, “something that happens”, but this “something” must belong
to a list of supported events. Python’s turtle module supports only five event
types: a key is pressed, a key is released, the drawing canvas is clicked, the
mouse is dragged, and a timer alarm goes off. To detect a certain event, we
need to register an event handler and to turn on the listening mode:
m = turtle.Turtle()

def move_forward():
    global m
    m.forward(10)

turtle.onkeypress(move_forward, "space")
turtle.listen()
turtle.done()
clearscreen(): deletes the drawings of all the turtles.
setworldcoordinates(x1, y1, x2, y2): sets up a user coordinate system; the four passed values become the new coordinates of the screen corners.
tracer(n, delay): sets up the turtle update mode; only each n-th screen update will be performed, and the delay between the updates will be set to delay milliseconds.
update(): forces a screen update.
listen(): sets the currently active turtle screen as the receiver of keyboard events.
onkeypress(fun, key): sets fun() as the handler of a key press event for the given key (key names are listed in the Tk documentation, see the link below).
ontimer(fun, t): sets a timer event to call fun() after t milliseconds.
done(): starts the main event loop, responsible for proper event handling.
colormode(cmode): sets the range of (R, G, B) values for color() to [0, 1.0] or [0, 255]; the value of cmode must be either 1.0 or 255.
setup(width, height): sets the size of the main window to width × height.
title(titlestr): sets the title of the main window to titlestr.
https://www.tcl.tk/man/tcl8.4/TkCmd/keysyms.html
2 Motion and reflections
Computational modeling of real-life phenomena is one of the earliest and most important applications of computer technologies. Scientific theories can be viewed as models of reality, as more or less accurate descriptions of the processes we see around us. The significance of these descriptions lies in their predictive power: if we understand what is going on, we can predict the outcomes.
The best predictions can be made when the underlying process is relatively
simple, and its accurate mathematical description is available. For example, if
I want to know how much time it would take to fill a bathtub with tap water,
I can simply measure the time needed to fill a one-liter bottle and multiply
it by the bathtub volume. The alternative approach would be to perform an
actual experiment by timing the whole process.
The bathtub example relies on a simple theory: it takes N times longer
to fill a vessel of an N -times larger volume. The theory will be much more
complex if the task is to drain a bathtub instead, since the rate of outflow is
going to decrease over time. In any case, mathematicians over centuries have
developed a vast collection of methods for solving many intricate problems
when they can be described in mathematical terms.
Sometimes, a process we are interested in happens to be too complex for a straightforward mathematical solution, and experimental data is difficult to obtain. For example, experimenting with rocket engines is dangerous and expensive, and experiments with social policies might take decades to complete.
Computers give us a third option: virtual experiments. If we understand something enough to build its computer model, we can perform a simulation to obtain experimental data. A computer model can be simpler than a mathematical description, and if it is sufficiently accurate, the results will also be accurate. Computational modeling is used literally everywhere, and computer simulations have become an integral part of today's science and technology.
Obviously, a model has to be accurate enough to produce reliable results,
and the primary point of a simulation is usually to provide numerical data.
In contrast, our models are going to be simple and “visual”. After all, our
primary purpose is to have fun. Still, you will see that even simple models
and simulations are able to produce some interesting results.
2.1.1 IMPLEMENTATION
Fortunately, Python syntax is close enough to pseudocode, so usually we can
proceed straight to the real program. However, in this case let’s look at the
general picture first, and then sort out the missing details. (A note on terms: speed is a scalar, referring to the distance covered by a moving object in one time unit. Velocity is a vector, referring to the distance covered in one time unit in a certain direction. Our molecule is moving with a constant speed, but its vertical and horizontal velocities may change.)
FIGURE 2.2: Calculating vertical and horizontal velocity constituents of a molecule.
Let’s first note that our simulation is discrete: we cannot obtain the position
of the molecule in any arbitrary moment of time. Instead, time is treated as
a series of steps, and our molecule “jumps” from (x, y) to (x + vx , y + vy ) on
every step. Consequently, its speed will be expressed in “pixels per step”.
Our first task would be to obtain the values of vx and vy if we know the
molecule’s speed v and initial direction a (see Figure 2.2). This can be done
with the formulas
vx = v · cos(a)
vy = v · sin(a)
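In code this is a direct translation (a small sketch of my own; the function name is hypothetical, and the angle is expected in radians, as math.cos and math.sin require):

```python
import math

def velocity_components(v, a):
    # split speed v at angle a (radians) into horizontal and vertical parts
    return v * math.cos(a), v * math.sin(a)

vx, vy = velocity_components(10, math.radians(30))
# whatever the angle, vx * vx + vy * vy stays equal to v * v
```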
Since the model is discrete, it is easy to understand how our molecule might
end up being outside the vessel borders: if, for example, vx = 15, the molecule
makes a 15-pixel jump on each step of our simulation. If it is already close to
the wall, it might very well cross it.
We will push the molecule back into the vessel on the next step of the simulation, but such "jumps outside" do not look nice and make the simulation less accurate, because some points of molecule-wall collision will not be determined correctly. While there are ways to fix this issue, in practice it can usually be ignored. We can always opt for smaller speed values to minimize the inevitable inaccuracies.
Now we can consider the complete program (see Listing 2.1).
while True:
    m.goto(m.xcor() + vx, m.ycor() + vy)  # move the molecule
    # reflect if necessary
    if m.xcor() < left_wall or m.xcor() > right_wall:
        vx *= -1
As you can see, the actual simulation code takes less than one third of the complete program, and most space is devoted to setup and preparation. Let's make sure this fragment has no unclear parts.
We start by setting up vessel borders. The center of our molecule should never come closer than R pixels to any border, so we have to make the molecule's fly zone a bit smaller than the actual drawing canvas.
...
done = False  # a global "simulation is done" flag

def set_done():
    global done
    done = True
...
turtle.listen()
turtle.onkeypress(set_done, "space")
Unfortunately, while this method seems to work fine in our case, it mixes
together two poorly compatible approaches to control flow organization. Our
current program is based on a traditional “active” control flow: instructions
immediately follow one another, and the application shuts down after executing the last statement. In contrast, Python's turtle is based on a different,
event-driven architecture, which treats a program mainly as a collection of
functions, being called in response to certain events. A typical event-driven
program is not doing anything most of the time, and its functionality is only
triggered by external signals, such as user input or timer alarms (think how
an application like a calculator or a text editor works).
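To make the contrast concrete, here is a toy event-driven skeleton in plain Python (my own sketch; it has no relation to turtle's internals): the program registers handlers and then does nothing except dispatching events.

```python
import collections

class ToyEventLoop:
    """A minimal event-driven skeleton: register handlers, then dispatch."""

    def __init__(self):
        self.queue = collections.deque()
        self.handlers = {}

    def on(self, event, fun):
        self.handlers[event] = fun

    def post(self, event):
        self.queue.append(event)

    def run(self):
        # the "main loop": sit idle until there is an event to handle
        while self.queue:
            event = self.queue.popleft()
            if event in self.handlers:
                self.handlers[event]()

log = []
loop = ToyEventLoop()
loop.on("tick", lambda: log.append("moved"))
loop.post("tick")
loop.post("tick")
loop.run()
# log == ["moved", "moved"]
```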
The turtle is designed to operate in an event-driven environment. The call
to turtle.done() (missing in our current code) actually starts a so-called event
loop, which enables proper event handling. For this reason, a well-designed
turtle-based application should quickly start an event loop, and only react
to incoming events. This type of program structure requires quite a different
thinking style. Our code is still simple, so the changes are going to be minimal:
all we need to do is to transfer the main molecule-moving functionality inside
a timer event handler.
In this new version of the code, the while loop will be replaced with the
following fragment:
def tick():
    if not done:
        ...  # move the molecule here
        turtle.ontimer(tick, 20)  # call tick() after 20 milliseconds

tick()
turtle.done()
Instead of starting the main while loop, we are going to call tick(), which
performs only one step of simulation. At the end, however, it sets up a timer
alarm: we generate an artificial event, which will occur after 20 milliseconds
and is going to be handled by tick().
Other improvements are going to be less substantial. First, I think it would be good to draw the vessel borders to make them visible (the drawing canvas has to be made a bit larger to accommodate a vessel with its borders). Second, since our vessel is symmetrical, we can simplify the condition for a bounce.
These additions turn our program into its final form (see Listing 2.2). A
screenshot of the working program is shown in Figure 2.3.
import turtle
import math
from random import uniform

WIDTH = 600
HEIGHT = 400
V = 10
R = 10
MARGIN = 50  # additional space for the drawing canvas

right_wall = WIDTH / 2 - R
top_wall = HEIGHT / 2 - R

done = False

def set_done():
    global done
    done = True

turtle.setup(WIDTH + MARGIN, HEIGHT + MARGIN)
turtle.tracer(0, 0)
turtle.listen()
turtle.onkeypress(set_done, "space")

m = turtle.Turtle()
m.shape("circle")
m.penup()

angle = uniform(0, 2 * math.pi)
vx = V * math.cos(angle)
vy = V * math.sin(angle)

def tick():
    if not done:
        global vx, vy
        m.goto(m.xcor() + vx, m.ycor() + vy)
        if abs(m.xcor()) > right_wall:
            vx *= -1
        if abs(m.ycor()) > top_wall:
            vy *= -1
        turtle.update()
        turtle.ontimer(tick, 20)

tick()
turtle.done()
2.1.2 AFTERTHOUGHTS
Let me pause for a moment to appreciate how much the turtle helps us to
implement moving onscreen objects.
Suppose our task is to move a black circle over a white background. Each
step of this process breaks down into three actions: delete the circle by turning
each of its pixels white, update its coordinates, and draw the circle at its new
position.
An obvious flaw of this simple method is the inevitable flickering effect it produces. By deleting the circle before drawing it elsewhere, we create situations when nothing is drawn on the screen at all. As short as these periods may be, they still make the circle visibly flicker when moving. In addition, simple removal of onscreen objects works only if the background is empty; if the circle is moving over a graphical pattern, it has to be preserved somehow.
18 Playful Python Projects
macroscopic objects behave when they move and collide. The kinetic theory
developed a notion of ideal gas, based on the following assumptions:
2.2.1 IMPLEMENTATION
I think in this case we can proceed directly to Python code. Generalization
from one molecule to a list of N molecules is quite straightforward. However,
as our program grows, it makes sense to improve its organization by creating
a dedicated Molecule data class, a keypress handler class SimState, and separate
functions for preparing the screen and drawing a vessel (see Listing 2.3 and
Figure 2.4).
import turtle
import math
from random import uniform
from dataclasses import dataclass

WIDTH = 600
HEIGHT = 400
V = 10
R = 10
MARGIN = 50
SLEEP_MS = 20
N = 20  # number of molecules in our vessel

right_wall = WIDTH / 2 - R
top_wall = HEIGHT / 2 - R

@dataclass
class SimState:
    done: bool

    def set_done(self):
        self.done = True

    @classmethod
    def setup(cls):
        r = cls(False)
        turtle.listen()
        turtle.onkeypress(r.set_done, "space")
        return r

@dataclass
class Molecule:
    m: turtle.Turtle
    vx: float
    vy: float

    def move(self):
        self.m.goto(self.m.xcor() + self.vx, self.m.ycor() + self.vy)
        if abs(self.m.xcor()) > right_wall:
            self.vx *= -1
        if abs(self.m.ycor()) > top_wall:
            self.vy *= -1

    @classmethod
    def create(cls):
        m = turtle.Turtle()
        m.shape("circle")
        m.penup()
        m.goto(uniform(-right_wall, right_wall), uniform(-top_wall, top_wall))
        angle = uniform(0, 2 * math.pi)
        return cls(m, V * math.cos(angle), V * math.sin(angle))
def setup_screen(title):
    turtle.setup(WIDTH + MARGIN, HEIGHT + MARGIN)
    turtle.tracer(0, 0)
    turtle.title(title)

def draw_vessel():
    m = turtle.Turtle()
    m.hideturtle()
    m.penup()
    m.goto(-WIDTH / 2, -HEIGHT / 2)
    m.pendown()
    m.sety(HEIGHT / 2)
    m.setx(WIDTH / 2)
    m.sety(-HEIGHT / 2)
    m.setx(-WIDTH / 2)

sim_state = SimState.setup()
setup_screen("Ideal gas")
draw_vessel()
molecules = [Molecule.create() for _ in range(N)]

def tick():
    if not sim_state.done:
        for m in molecules:
            m.move()
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)

tick()
turtle.done()
2.2.2 AFTERTHOUGHTS
Turtle graphics capabilities definitely help us with visualization. Turtle’s
event-driven architecture might look like a less clear-cut bargain.
Nothing beats the simplicity of a command-line utility. Read input data from a file or a console, do something, write output data. It is tempting to preserve this simplicity in an interactive animation-heavy application. It is easy to do on a game console, where all your users share the same hardware, and your application can use computational resources exclusively. Things get more difficult if available resources vary from device to device, and your software has to play well with other processes in the memory.
Event-driven architecture not only helps to solve these issues, but also
provides a way to deal with them separately. We will rarely have to think
about performance or responsiveness of user interface, but our programs also
benefit from this approach.
First of all, note that our simulations do not consume processor power
between timer events. They won’t cause your operating system to freeze. Next,
consider a subtle difference between physics and animation. Our ideal gas
simulation works fine for a small number of molecules. On my machine, when
N is set to 5, and SLEEP_MS is zero, the program is able to produce 500-600
frames per second. If N is increased to 500, the framerate drops to 10, and
the movements look slow. Computational resources are spent on two separate
activities: moving molecules and drawing them. Which one is more expensive?
It is easy to test: if I remove the call to turtle.update(), the framerate rises to
150 frames per second. Removing move() while keeping turtle.update() intact
barely makes any difference.
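The same "which part is expensive?" measurement can be reproduced without any graphics. A tiny sketch of my own, using only the standard time module, times two activities and compares them:

```python
import time

def time_it(fun, repeats=10_000):
    # run fun() many times and return the total elapsed time in seconds
    start = time.perf_counter()
    for _ in range(repeats):
        fun()
    return time.perf_counter() - start

cheap = time_it(lambda: 1 + 1)
costly = time_it(lambda: sum(range(1000)))
# 'costly' dominates, just as drawing dominates moving in the simulation
```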
Thus, in this particular case the expensive part is animation, while the
actual simulation costs next to nothing. Now, suppose we need to keep 500
molecules in the vessel, but make them move faster. The easy way would be
to increase their speed, but it is bad for accuracy. Even now the molecules
might get outside vessel borders, and higher speeds will make this effect even
more pronounced. Such “jumps” are also very bad for any kinds of interaction
between moving objects. A smarter approach would be to keep the physics
accurate and sacrifice only the smoothness of animation. For example, we can
still update molecule coordinates at a fast rate, increasing the time between
screen updates only. This is easy to achieve with two independent timers:
SLEEP_MS = 10
VIS_SLEEP_MS = 40
...

def tick_draw():
    if not sim_state.done:
        turtle.update()
        turtle.ontimer(tick_draw, VIS_SLEEP_MS)

def tick():
    if not sim_state.done:
        for m in molecules:
            m.move()
        turtle.ontimer(tick, SLEEP_MS)

tick()
tick_draw()
turtle.done()
We can also adjust frame delays to take into account that some frames take longer to process than others:
import time

SLEEP = 50

def tick():
    start = time.perf_counter()  # hi-resolution timer (in seconds)
    # do something
    ...
    elapsed = int(1000 * (time.perf_counter() - start))  # milliseconds
    turtle.ontimer(tick, max(0, SLEEP - elapsed))
https://docs.unity3d.com/2023.3/Documentation/Manual/TimeFrameManagement.html
Our ideal gas simulation can also demonstrate certain properties of gases. Perhaps the easiest one to test is the Boyle-Mariotte law:
The pressure of a given quantity of gas varies inversely with its volume at constant temperature.
∆p = 2mvx
According to Newton’s second law of motion, the force is the rate of change
of momentum:
∆p
F =
∆t
In our case, ∆t is the time between two subsequent collisions of a molecule
with a wall, so in a large vessel the pressure is lower, because it takes more
time for a molecule to travel from one side to another and hit a wall again.
Naturally, we are not really interested in the contribution of every single
molecule: it is enough to calculate the average force, which in practice means
adding up all the calculated ∆p values and dividing the result by the total
running time. As a further simplification, we can presume that the mass of
each molecule is 0.5, so the formula for ∆p becomes simply
∆p = |vx |
Let’s also remember that every collision adds up to the pressure, so we don’t
need to take into account velocity signs: all contributions are positive. Since
we know the force, it is finally possible to calculate the pressure:
P = F / (wall area)
If you are not convinced, think about our vessel as having a depth of 1, and
no molecules (by luck!) collide with its front and back walls.
2.3.1 IMPLEMENTATION
This simulation can be easily obtained from Listing 2.3 (“Ideal gas”). The
changes are going to be very straightforward:
1. Add delta_p and delta_t fields to SimState, along with a pressure() method:

@dataclass
class SimState:
    done: bool
    delta_p: float
    delta_t: int

    def set_done(self):
        self.done = True

    def pressure(self):
        area = 2 * (2 * top_wall + 2 * right_wall)
        force = self.delta_p / self.delta_t
        return force / area

    @classmethod
    def setup(cls):
        r = cls(False, 0, 0)
        turtle.listen()
        turtle.onkeypress(r.set_done, "space")
        return r
2. Modify the value of delta_p every time a molecule collides with a wall:

@dataclass
class Molecule:
    ...

    def move(self):
        ...
        if abs(self.m.xcor()) > right_wall:
            self.vx *= -1
            sim_state.delta_p += abs(self.vx)

3. Count the simulation steps by increasing delta_t in tick():

def tick():
    ...
    sim_state.delta_t += 1
    turtle.update()
    turtle.ontimer(tick, SLEEP_MS)
Finally, run the program with different values of HEIGHT and WIDTH to see
whether the value of P · V remains the same. In my test runs, I consistently
get something within a range of 460-500 for two vessels, one four times larger
than the other one in terms of volume.
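The same experiment can be stripped of all graphics and run as a quick numeric check. The sketch below is a hypothetical 1D reduction of my own, not a listing from the book: one molecule bounces in a "box" of a given width, delta_p grows by |vx| on every wall hit (mass 0.5, as above), and the "wall area" is taken as two unit-sized walls.

```python
def measure_pressure(width, vx=2.0, steps=100_000):
    # bounce one molecule between walls at -width/2 and width/2,
    # accumulating delta_p = |vx| on every collision
    x, v = 0.0, vx
    delta_p = 0.0
    for _ in range(steps):
        x += v
        if abs(x) > width / 2:
            v = -v
            delta_p += abs(v)
    force = delta_p / steps  # average force over the whole run
    return force / 2         # two walls of unit area

p1 = measure_pressure(100)
p2 = measure_pressure(400)
# the product P * V stays (almost) constant: p1 * 100 is close to p2 * 400
```

Just as in the turtle version, the products agree only approximately: the molecule overshoots the walls a little on each bounce, which is exactly the discretization error discussed earlier.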
2.4 THERMODYNAMICS
Thermodynamics is a branch of physics that “deals with the transfer of energy
from one place to another and from one form to another”.5 As one might
expect, it is an established and well-developed discipline with all kinds of
theories and laws, so it might seem strange to name any single simulation
simply “Thermodynamics”. I owe this name to Vaillencourt’s book [55] thanks
to the following statement:
“As far as using energy is concerned, thermodynamics can be boiled down
to one simple rule. <…> ENERGY ALWAYS MOVES FROM HOT
TO COLD”.
This rule (with all the necessary refinements) is also known as the second law of thermodynamics. Despite its simplicity, this law may have grave consequences for our beloved universe: according to the "heat death" hypothesis, if the energy always moves to the colder objects, eventually all the objects will reach the same temperature, and no further processes, essential to our being, are going to be possible. However, there are other views on this subject, and in any case humankind has more pressing issues than "heat death" at the moment.
What we can do right now is show that the second law indeed works.
As previously noted, according to the kinetic theory of gases, a temperature
is presumed to be proportional to the average kinetic energy of molecules.
Thus, we can create two communicating vessels—one with cold gas, another
one with hot gas—and see what happens (Figure 2.5).
https://www.britannica.com/science/thermodynamics
Motion and reflections 27
2.4.1 IMPLEMENTATION
Let’s again rely on our trusty “Ideal gas” model (Listing 2.3) and review the
necessary modifications. Instead of the same speed value for all the molecules,
we’ll need “cold” slow molecules and “hot” fast molecules now:
V_COLD = 5
V_HOT = 15
We will also need to separate our vessel into two compartments, leaving a
hole in the separating wall. The radius of this hole affects the ability of gases
to mix together, so it’s better to make it a dedicated constant:
R_HOLE = 30
def draw_vessel():
    ...
    m.penup()
    m.goto(0, -HEIGHT / 2)  # draw a straight vertical line
    m.pendown()             # from the screen bottom toward the center
    m.sety(-R_HOLE)
    m.penup()
    m.sety(R_HOLE)          # draw a similar line from the screen top
    m.pendown()             # toward the center, leaving a gap in the middle
    m.sety(HEIGHT / 2)
The changes in the Molecule class are going to be more complicated. Most
importantly, its move() method will have to reflect a molecule off the new
vertical wall in the middle of the vessel:
@dataclass
class Molecule:
    m: turtle.Turtle
    v: float  # let's store the original speed value
    vx: float
    vy: float

    def move(self):
        # same as before
        self.m.goto(self.m.xcor() + self.vx, self.m.ycor() + self.vy)
The logic is simple: if a molecule is approaching the wall from either side
and its vertical coordinate does not match the hole, we have to reflect it back.
At this point, it is possible to run the program and see blue and red balls
flying around the screen just as expected. However, there is no way to check
the real-time temperature in either vessel, so we can only guess which side
is hotter. As mentioned above, the temperature of gas is presumed to be
proportional to the average kinetic energy of its molecules. The kinetic energy
of an individual molecule is mv²/2, but since in our case all masses are equal,
we can simply use v² and calculate the temperature of the list of molecules
as follows:
def temperature(gas):
    return sum(mol.v**2 for mol in gas) / len(gas)
Finally, we’ll have to call temperature() for both vessels in our tick() function
and show both values on the screen:
writer = turtle.Turtle()
writer.hideturtle()
writer.penup()

def tick():
    if not sim_state.done:
        for m in molecules:
            m.move()
        writer.clear()
        writer.goto(-100, HEIGHT / 2)  # left vessel's temperature
        writer.write(round(temperature(m_left)))
        writer.goto(100, HEIGHT / 2)   # right vessel's temperature
        writer.write(round(temperature(m_right)))
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)
tick()
turtle.done()
Finally, we can run the program and wait for some time to let the gases
mix together (see Figure 2.6). You can adjust the size of the hole to speed up
or slow down this process.
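The mixing effect can also be reproduced in a one-dimensional headless sketch (entirely my own illustration, not the book's code: the vessel is reduced to a line segment, and the speeds get a small random jitter so the motion does not stay in lock-step):

```python
import random

def side_temperature(mols, left):
    """Average v^2 of the molecules currently on one side of the vessel."""
    vals = [v * v for x, v in mols if (x < 0) == left]
    return sum(vals) / len(vals)

rng = random.Random(7)
mols = []
for i in range(200):
    cold = i < 100
    # cold molecules start on the left, hot ones on the right
    x = rng.uniform(-190, -10) if cold else rng.uniform(10, 190)
    # slight speed jitter to avoid perfectly periodic motion
    v = (5 if cold else 15) * rng.choice([-1, 1]) * rng.uniform(0.9, 1.1)
    mols.append([x, v])

diff_before = abs(side_temperature(mols, True) - side_temperature(mols, False))
for _ in range(2000):
    for m in mols:
        m[0] += m[1]
        if abs(m[0]) > 200:  # bounce off the vessel walls
            m[1] *= -1
diff_after = abs(side_temperature(mols, True) - side_temperature(mols, False))
print(diff_before, diff_after)  # the temperature gap shrinks as gases mix
```

After a couple of thousand steps, hot and cold molecules are thoroughly interleaved, and the temperature difference between the two halves shrinks to statistical noise.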
30 Playful Python Projects
v1 + v1′ = v2′ + v2
Therefore,
v2′ = v1 + v1′ − v2 (4)
By substituting v2′ in (1) with the right-hand side of (4), we obtain
2.5.1 IMPLEMENTATION
This model can be implemented by modifying one of our previous programs
(“Ideal gas” seems to be an easy choice in most cases). This time we will
have to deal with balls of different masses and velocities. The tricky part
is mass, as it would be good to adjust ball sizes to show the differences in
their masses. In the real world, the mass of a ball is proportional to its volume,
meaning that it grows as the cube of its radius. Since we are operating in a two-
dimensional world, I suggest presuming that the mass is proportional to the
ball's area, i.e., to the square of its radius. Let's recall that the size of the
robotic turtle is specified via “stretch factors”, so if we pass a value of size to
the shapesize() function, the actual values for the ball radius and ball mass
will be
r = size * R # R = 10
mass = math.pi * (r ** 2) # ball area
Note that the code for wall collision detection also has to take into account
different ball sizes. Apart from this change, the new program relies on roughly
the same Molecule class (renamed as Ball), with now-unnecessary vy velocity
component removed (see Listing 2.4). A screenshot of the working program is
shown in Figure 2.8.
# central_collision.py
import turtle
import math
from random import uniform
from dataclasses import dataclass
WIDTH = 600
HEIGHT = 400
MIN_V = 5
MAX_V = 15
MIN_SIZE_FACTOR = 0.7
MAX_SIZE_FACTOR = 4
START_DISTANCE = 400
R = 10
MARGIN = 50
SLEEP_MS = 20
@dataclass
class SimState:
    ...  # same as in "Ideal gas"

@dataclass
class Ball:
    m: turtle.Turtle
    vx: float
    r: float

    def move(self):
        self.m.setx(self.m.xcor() + self.vx)

    def mass(self):
        return math.pi * (self.r ** 2)

    @classmethod
    def create(cls, x, v_factor):
        size = uniform(MIN_SIZE_FACTOR, MAX_SIZE_FACTOR)
        r = size * R
        m = turtle.Turtle()
        m.shape("circle")
        m.shapesize(size)
        m.penup()
        m.goto(x, 0)
        # v_factor (1 or -1) sets the movement direction
        return cls(m, v_factor * uniform(MIN_V, MAX_V), r)
def setup_screen(title):
    ...  # same as in "Ideal gas"

def draw_vessel():
    ...  # same as in "Ideal gas"

# balls b1 and b2 collide if their centers are closer than b1.r + b2.r
def balls_collide(b1, b2):
    return abs(b1.m.xcor() - b2.m.xcor()) <= b1.r + b2.r

def process_collision(b1, b2):
    m1, m2 = b1.mass(), b2.mass()
    v1, v2 = b1.vx, b2.vx
    # the elastic collision formulas derived in the text
    v1n = (v1 * (m1 - m2) + 2 * m2 * v2) / (m1 + m2)
    v2n = v1 + v1n - v2
    b1.vx = v1n
    b2.vx = v2n
sim_state = SimState.setup()
setup_screen("Central collision")
draw_vessel()
ball1 = Ball.create(-START_DISTANCE / 2, 1)
ball2 = Ball.create(START_DISTANCE / 2, -1)

def tick():
    if not sim_state.done:
        ball1.move()
        ball2.move()
        if balls_collide(ball1, ball2):
            process_collision(ball1, ball2)
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)

tick()
turtle.done()
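The collision formulas are also easy to check numerically in isolation. Here is a small standalone sketch (my own; the sample masses and velocities are arbitrary):

```python
def collide_1d(m1, v1, m2, v2):
    # elastic collision: v1' = (v1*(m1 - m2) + 2*m2*v2)/(m1 + m2),
    # and v2' = v1 + v1' - v2 follows from the conservation laws
    v1n = (v1 * (m1 - m2) + 2 * m2 * v2) / (m1 + m2)
    v2n = v1 + v1n - v2
    return v1n, v2n

# equal masses: the balls simply exchange velocities
print(collide_1d(1.0, 5.0, 1.0, -3.0))  # (-3.0, 5.0)

# arbitrary masses: momentum is conserved
m1, v1, m2, v2 = 2.0, 4.0, 5.0, -1.0
v1n, v2n = collide_1d(m1, v1, m2, v2)
print(m1 * v1 + m2 * v2, m1 * v1n + m2 * v2n)  # equal momenta before and after
```

Kinetic energy is conserved as well, which makes a convenient automated test.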
Before moving on to the next project, let’s test our formulas in two inter-
esting edge cases:
1. Suppose that both balls have the same mass (m1 = m2 = M). Then

v1′ = (v1(M − M) + 2Mv2) / (M + M) = 2Mv2 / 2M = v2

v2′ = v1 + v1′ − v2 = v1 + v2 − v2 = v1
Therefore, when the balls of equal masses collide, they exchange ve-
locities. In particular, when a moving ball hits a still ball, it stops,
and the previously still ball starts moving with the same velocity as
the ball that hit it.
2. Suppose that the mass of the first ball is very large (m1 = M ), and
its speed is zero. This case is equivalent to hitting a wall, and our
a = atan2(y2 − y1 , x2 − x1 )
Now let’s rotate the coordinate system to make the x-axis parallel to m, and
the y-axis parallel to its normal n. To do it, we simply need to subtract a
from A1 and A2 :
A′1 = atan2(vy1 , vx1 ) − a
A′2 = atan2(vy2 , vx2 ) − a
Each ball’s movement consists of radial (along m) and tangential (along n)
velocity constituents. As a result of collision, no tangential constituent is af-
fected, while radial constituents have to be modified according to the laws of
conservation, just like in the previous model.
At this point we need to know ball speeds. They can be calculated from
velocity constituents using the Euclidean distance formula:

v1 = √(vx1² + vy1²) (velocity of the first ball)

v2 = √(vx2² + vy2²) (velocity of the second ball)

FIGURE 2.9: Sketch of two colliding balls.

The angle of the line connecting (0, 0) with (x, y) is equal to arctan(y/x).
However, arctan produces results between -90° and 90°, and it won't work if
x = 0, so various corrections are necessary to convert it to the value in the
range from -180° to 180° we are looking for. Python's atan2() is able to make
these corrections and produce the result in the target range because it knows
the signs of both input values.
Since coordinate axes are parallel to m and n, radial and tangential velocity
constituents are simply projections onto axes:
Since one of the velocity constituents of each ball has changed, we have to
recalculate v1 and v2:

v1′ = √(vr1′² + vt1²)

v2′ = √(vr2′² + vt2²)
New values for the trajectory angles can be obtained using arctangents, just
like before. We also have to turn back the coordinate axes by adding a:
2.6.1 IMPLEMENTATION
The easiest way to create “Free kick” is to modify our previous program,
“Central collision” (Listing 2.4). Let me save some space and only show the
changes (see Listing 2.5). In a nutshell, we have to:
1. Add the second velocity component to the Ball class, so it can move
in the vertical direction.
2. Handle both velocity components in move().
3. Modify balls_collide() so it takes into account both x and y ball
coordinates.
4. Introduce our formulas into process_collision().
5. Create both balls in random vessel locations.
@dataclass
class Ball:
    m: turtle.Turtle
    vx: float
    vy: float
    r: float

    def move(self):
        self.m.goto(self.m.xcor() + self.vx, self.m.ycor() + self.vy)
    def mass(self):
        return math.pi * (self.r ** 2)

    @classmethod
    def create(cls):
        size = uniform(MIN_SIZE_FACTOR, MAX_SIZE_FACTOR)
        r = size * R
        x = uniform(-WIDTH / 2 + r, WIDTH / 2 - r)    # create in a
        y = uniform(-HEIGHT / 2 + r, HEIGHT / 2 - r)  # random location
        m = turtle.Turtle()
        m.shape("circle")
        m.shapesize(size)
        m.penup()
        m.goto(x, y)
        v = uniform(MIN_V, MAX_V)
        angle = uniform(0, 2 * math.pi)  # set a random direction
        return Ball(m, v * math.cos(angle), v * math.sin(angle), r)
def setup_screen(title):
    ...  # same as before

def draw_vessel():
    ...  # same as before
v1 = math.sqrt(b1.vx ** 2 + b1.vy ** 2)
v2 = math.sqrt(b2.vx ** 2 + b2.vy ** 2)
vr1 = v1 * math.cos(A1n)
vt1 = v1 * math.sin(A1n)
vr2 = v2 * math.cos(A2n)
vt2 = v2 * math.sin(A2n)
sim_state = SimState.setup()
setup_screen("Free kick")
draw_vessel()
ball1 = Ball.create()
ball2 = Ball.create()
def tick():
    ...  # same as before
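For a sanity check, the whole rotation-based collision step can be assembled into one standalone function (my own packaging of this section's formulas; instead of converting the new angles back with arctangents, it rotates the radial and tangential constituents back directly, which is equivalent):

```python
import math

def process_collision_2d(x1, y1, vx1, vy1, m1, x2, y2, vx2, vy2, m2):
    a = math.atan2(y2 - y1, x2 - x1)  # direction of the line of centers m
    A1n = math.atan2(vy1, vx1) - a    # trajectory angles in the rotated axes
    A2n = math.atan2(vy2, vx2) - a
    v1 = math.hypot(vx1, vy1)
    v2 = math.hypot(vx2, vy2)
    vr1, vt1 = v1 * math.cos(A1n), v1 * math.sin(A1n)  # radial/tangential parts
    vr2, vt2 = v2 * math.cos(A2n), v2 * math.sin(A2n)
    # 1D elastic collision along the line of centers; tangential parts unchanged
    vr1n = (vr1 * (m1 - m2) + 2 * m2 * vr2) / (m1 + m2)
    vr2n = vr1 + vr1n - vr2
    # rotate the velocity constituents back to the original axes
    return (vr1n * math.cos(a) - vt1 * math.sin(a),
            vr1n * math.sin(a) + vt1 * math.cos(a),
            vr2n * math.cos(a) - vt2 * math.sin(a),
            vr2n * math.sin(a) + vt2 * math.cos(a))

# head-on collision of equal masses: the balls swap their velocities
print(process_collision_2d(0, 0, 4.0, 0.0, 1.0, 2, 0, -4.0, 0.0, 1.0))
```

Momentum and kinetic energy are conserved for any masses, which makes this routine easy to test automatically.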
2.7.1 IMPLEMENTATION
The required modifications are going to be very minor.8 First, we'll need to
declare the number of molecules in the vessel, rename the Ball class back to
Molecule, and let the user supply a custom molecule size:

N = 10
...

@dataclass
class Molecule:
    ...
    @classmethod
    def create(cls, size):
        # remove the line
        # size = uniform(MIN_SIZE_FACTOR, MAX_SIZE_FACTOR)
        ...

8 The code can be simplified by removing the support of arbitrary molecule sizes, but I
Next, we’ll have to place all our molecules into a list (just like in “Ideal
gas”), and modify the tick() function to handle their collisions:
...
molecules = [Molecule.create(1) for _ in range(N)]

def tick():
    if not sim_state.done:
        for m in molecules:
            m.move()
        for i in range(len(molecules)):
            for j in range(0, i):
                if balls_collide(molecules[i], molecules[j]):
                    process_collision(molecules[i], molecules[j])
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)
Theoretically, it should be the end of the story. However, if you run the
modified program, eventually you will notice its flaws: sometimes a stray
molecule gets stuck somewhere outside the edge of the vessel, and sometimes
a pair of molecules gets “glued” together and continues moving as a single
blob.
This is how our discrete model meets the continuous reality. Consider, for
example, a molecule approaching the right vessel wall with vx of 5 pixels per
step. Suppose that its current x-coordinate is WIDTH / 2 - 11, so it is very close
to the wall. This molecule is being chased by another molecule with vx of
10 pixels per step, and its x-coordinate is WIDTH / 2 - 32. Since R is 10, the
molecules do not collide with each other yet. Now let's see what happens
next: once the molecules do collide and exchange velocities, the slower one
ends up beyond the reflection boundary. From there, the wall check
will keep changing the sign of vx on every step, so the molecule will keep
oscillating outside the wall until some other molecule helps it get unstuck.
The trouble with moving blobs of molecules has a somewhat similar nature:
it is possible that two molecules end up in a collision resolved in such a way
that they still collide on the next step of simulation. These multiple collisions
may lead to all kinds of strange behavior.
Moving blobs, however, rarely happen in “Very ideal gas”, so for now let’s
just add a small fix for the molecules stuck outside the vessel. The easiest way
to do it is to make sure that all the coordinates in our program are inside the
vessel:
# make sure -max_v <= v <= max_v
def clamp(v, max_v):
    return math.copysign(min(abs(v), max_v), v)

@dataclass
class Molecule:
    ...
    def move(self):
        ...
        self.m.setx(clamp(self.m.xcor(), WIDTH / 2 - self.r))
        self.m.sety(clamp(self.m.ycor(), HEIGHT / 2 - self.r))
The final lines of move() force the molecule back into the vessel by “clamping”
the molecule's coordinates. Python's math.copysign(x, y) applies the
sign of y to abs(x) and returns the obtained value, so our clamp() first takes
the smaller of abs(v) and max_v, and then uses copysign() to preserve the
original sign of v.
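Here is a quick demonstration of clamp() at work (my own example values; 290 plays the role of WIDTH / 2 - self.r):

```python
import math

def clamp(v, max_v):
    return math.copysign(min(abs(v), max_v), v)

print(clamp(250.0, 290.0))   # 250.0 -- inside the vessel, unchanged
print(clamp(305.0, 290.0))   # 290.0 -- pushed back to the wall
print(clamp(-305.0, 290.0))  # -290.0 -- the sign is preserved
```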
released certain particles that could be seen under a microscope. Brown no-
ticed and reported the irregular, jittery motion of these particles but could
not provide a good explanation for what he had seen.9
Much later it was discovered that the jittery motion of particles, now known
as Brownian motion, is the result of their collisions with the molecules of
water. A particle behaves somewhat like a large inflated ball in the middle of
a crowd of people moving in random directions.
Believe it or not, my proposal would be to simulate Brownian motion on a
computer.
2.8.1 IMPLEMENTATION
Essentially, “Brownian motion” is “Very ideal gas” with one large particle
surrounded by much smaller molecules. Its initial speed is equal to zero, and
all the collisions are handled according to the rules of “Free kick”.
The only important modification I am about to introduce deals with the
“moving blobs” problem that we encountered in “Very ideal gas”. The particle
is going to be large and is likely to form blobs with surrounding molecules.
Furthermore, some molecules will inevitably appear inside the particle if we
scatter them randomly within the vessel bounds. For these reasons we should
finally get rid of “moving blobs”.
Fortunately, there is an easy solution. Since blobs form when we handle
a collision between the same pair of objects on two successive iterations, we
can simply stop doing it. It is enough to form the list of colliding pairs while
working on collision resolution and ignore any collisions from this list on the
next step of simulation. This way, only the pairs that collide now but did not
collide at the previous step are going to be processed.
Let’s review the changes to be made in “Very ideal gas”. Since our particle is
much larger than a molecule, we’ll need to increase its size while reducing the
size of regular molecules. It also makes sense to increase the overall molecule
count:
PARTICLE_SIZE_FACTOR = 7
MOLECULE_SIZE_FACTOR = 0.5
N = 50
The SimState class will now have to store the pairs of particles colliding at
the previous simulation step:
@dataclass
class SimState:
    done: bool
    prev_collisions: set
    ...
    @classmethod
    def setup(cls):
        r = cls(False, set())
        ...

9 As further reading, I wholeheartedly recommend the site “What Brown Saw and You
Our Brownian particle behaves just like an ordinary molecule; the only
difference is its larger size. Thus, the easiest way to incorporate it into the
simulation is to keep it in the same list with other molecules and handle it
with the same code:
particle = Molecule.create(PARTICLE_SIZE_FACTOR)
particle.vx = particle.vy = 0
particle.m.goto(0, 0)
particle.m.color("blue")
molecules.append(particle)
Here we make the molecules smaller (in reality, they are much smaller than
a Brownian particle) and place the still particle into the screen center. Finally,
the tick() function will have to prevent repeated collisions of the same pairs
of objects:
def tick():
    if not sim_state.done:
        for m in molecules:
            m.move()
        collisions = set()
        for i in range(len(molecules)):
            for j in range(0, i):
                if balls_collide(molecules[i], molecules[j]):
                    collisions.add((i, j))
                    if not (i, j) in sim_state.prev_collisions:
                        process_collision(molecules[i], molecules[j])
        sim_state.prev_collisions = collisions
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)
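The pair-tracking bookkeeping can be illustrated with a toy helper, separate from the simulation (my own sketch, not the book's code):

```python
def resolve(colliding_now, prev):
    """Return the pairs to process and the set to remember for the next step."""
    processed = [p for p in colliding_now if p not in prev]
    return processed, set(colliding_now)

prev = set()
processed, prev = resolve([(2, 1)], prev)
print(processed)  # [(2, 1)] -- first contact: process the collision
processed, prev = resolve([(2, 1)], prev)
print(processed)  # [] -- still overlapping: ignore, no "glued" blob
processed, prev = resolve([], prev)       # the molecules separate
processed, prev = resolve([(2, 1)], prev)
print(processed)  # [(2, 1)] -- a genuinely new collision is processed again
```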
That’s all. Now we can run the program and enjoy the result (see Fig-
ure 2.10).
10 From Encyclopaedia Britannica: https://www.britannica.com/science/Charless-law
3.1.1 IMPLEMENTATION
Taking into account acceleration is not hard. In the “Ideal gas” and related
models we had to change the coordinates of the moving entity on each step
of the simulation. Here we will also have to change its vertical velocity:
vy += ACCELERATION
DOI: 10.1201/9781003455295-3 47
WIDTH = 600
HEIGHT = 400
V = 3
MINV = 0.01
LOSS_COEFF = 0.7
ACCELERATION = -0.5 # negative is downward
R = 10
MARGIN = 50
SLEEP_MS = 20
done = False
right_wall = WIDTH / 2 - R
bottom_wall = -HEIGHT / 2 + R

def set_done():
    global done
    done = True

turtle.listen()
turtle.onkeypress(set_done, "space")

m = turtle.Turtle()
m.shape("circle")
m.penup()
m.goto(WIDTH / 2, -HEIGHT / 2)
m.pendown()
m.setx(-WIDTH / 2)  # draw the floor line
m.penup()
vx = V
vy = 0
m.sety(HEIGHT / 3)  # jump to the starting height
m.pendown()
def tick():
    if not done:
        global vx, vy
        m.goto(m.xcor() + vx, m.ycor() + vy)
        vy += ACCELERATION
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)

tick()
turtle.done()
3.2.1 IMPLEMENTATION
Generally speaking, we had to update our “Jumping ball” on every simulation
step as follows:
x += vx
y += vy
vy += acceleration
Our new project will use a modified scheme with variable acceleration:
x += vx
y += vy
vx += ax
vy += ay
ax, ay = calc_acceleration()
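To see what this scheme computes, here is a tiny headless run with constant acceleration (my own check; the step count and values are arbitrary). For this discrete scheme, the vertical position after n steps is exactly n·vy0 + a·n(n − 1)/2:

```python
def simulate(steps, ay, vx=3.0, vy=0.0):
    # the update scheme from the text, with constant vertical acceleration
    x = y = 0.0
    for _ in range(steps):
        x += vx
        y += vy
        vy += ay
    return x, y

n, a = 10, -0.5
x, y = simulate(n, a)
print(x, y)  # 30.0 -22.5
```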
[Figure: the ball at (x, y), the center of Earth at (Ex, Ey), and the direction
vector dir with components x − Ex and y − Ey.]

The magnitude of the acceleration is ACCELERATION/r², and dividing by r
once more normalizes the direction vector, giving the per-axis factor:

ACCELERATION/r² · 1/r = ACCELERATION/r³
Now we can proceed to the code (Listing 3.2).
import turtle

WIDTH = 600
HEIGHT = 400
V = 2.7
R = 5
R_EARTH = 100
PEDESTAL_H = 14
ACCELERATION = -1000
MARGIN = 50
TURTLE_SIZE = 20
SLEEP_MS = 20

done = False
right_wall = WIDTH / 2 - R
bottom_wall = -HEIGHT / 2 + R

def set_done():
    global done
    done = True

turtle.listen()
turtle.onkeypress(set_done, "space")

m = turtle.Turtle()
m.shape("circle")
m.shapesize(2 * R / TURTLE_SIZE)
m.goto(0, R + R_EARTH + PEDESTAL_H)
vx = V
vy = 0
def tick():
    if not done:
        global vx, vy
        m.goto(m.xcor() + vx, m.ycor() + vy)
        r = m.distance((0, 0))
        if r < R_EARTH + R:
            set_done()
        k = ACCELERATION / r ** 3  # the per-axis factor derived above
        ax = k * m.xcor()
        ay = k * m.ycor()
        vx += ax
        vy += ay
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)
tick()
turtle.done()
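The inverse-square rule can be sanity-checked without any turtles: a ball launched at distance r0 with the circular-orbit speed √(−ACCELERATION/r0) should keep a roughly constant distance from the center (my own headless check; plain Euler integration slowly inflates the orbit, hence the loose tolerance):

```python
import math

ACCELERATION = -1000.0
r0 = 119.0  # R + R_EARTH + PEDESTAL_H from the listing
v0 = math.sqrt(-ACCELERATION / r0)  # speed of a (nearly) circular orbit

x, y = 0.0, r0
vx, vy = v0, 0.0
radii = []
for _ in range(250):
    x += vx
    y += vy
    r = math.hypot(x, y)
    radii.append(r)
    k = ACCELERATION / r ** 3  # per-axis acceleration factor
    vx += k * x
    vy += k * y
print(min(radii), max(radii))  # both stay near r0
```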
revolving around Earth. Higher orbital speeds produce various elliptic orbits
(see Figure 3.4).
An even faster cannonball, fired with escape velocity, will leave the planet
without ever returning to its original location. For the real planet Earth, the
escape velocity is around 11.2 km/s.1
https://www.britannica.com/science/escape-velocity
FIGURE 3.5: Ball falling into a tunnel through the center of Earth.
Gravity and rotation 55
speed. Then the ball would fall again into the tunnel to continue its endless
oscillation.2
This kind of movement is known as simple harmonic motion: the ball moves
back and forth under the influence of a force directed toward the center of Earth
and proportional to the distance from it.3
The same pattern can be seen in a movement of a simple pendulum: tie a
string to the ceiling and attach a weight to its loose end, pull the weight (now
known as a bob) closer to yourself and release. The pendulum would swing
approaching both ends of its trajectory with zero speed and accelerating when
moving back to the center.4
The movement of a pendulum is interesting in terms of speed and acceler-
ation, but its trajectory is a simple straight line, when looked at from above.
A more complex pattern of motion can be obtained by combining movements
in two perpendicular directions.
This is not a trivial task in the real world. If you pull the pendulum bob
toward yourself, it will move back and forth. If you pull the bob to the left, it
will move sideways. However, pulling the bob toward yourself and to the left
will simply make it swing diagonally rather than in both directions.
The easiest way to make the bob move back-and-forth and left-and-right
is to employ a Y-shaped device known as the Blackburn pendulum [59]. It
consists of two secured threads tied with their loose ends, and a third thread
connected to the resulting knot. The bob is attached to the loose end of the
third thread (see Figure 3.6).
Obviously, the V-shaped part of this device can only move back and forth,
so if you pull the bob toward yourself and to the left, the resulting motion
would be the combination of movements in both directions. By plotting pos-
sible trajectories of the bob, we obtain Lissajous figures, named after Jules
Antoine Lissajous, who produced them by attaching a leaky funnel with sand
to the bob [16].
Fortunately, our previous projects provide a solid foundation for drawing
Lissajous figures without any need to mess with strings and sand. Let’s see
how it can be done with our trusty turtles.
3.3.1 IMPLEMENTATION
Before proceeding to implementation, let’s sort out the differences between
these recently considered cases of accelerated motion:
2 If you are interested in details of this process, check out the “Journey through the center
3 https://www.britannica.com/science/simple-harmonic-motion
4 Strictly speaking, a pendulum does not exhibit simple harmonic motion, but it works
• “Jumping ball”. For a ball moving near the surface of Earth, the
acceleration is constant.
• “Newton’s cannonball”. For large distances (comparable to planet
radius), the acceleration is inversely proportional to the square of
the distance between the ball and the planet center.
• “Lissajous figures” (and a fall-through-tunnel scenario). For simple
harmonic motions, the acceleration is proportional to the distance
between the moving object and the central point (equilibrium), ac-
cording to Hooke’s law.
The HyperPhysics article explains why the tunnel case can serve as an
example of simple harmonic motion. In brief, the so-called “shell theorem”
states that: (1) the ball inside the planet can be treated as tucked between a
smaller planet and an “outer shell”; (2) the gravity of the shell can be ignored;
(3) the gravity of a planet depends on its radius, so the gravitational force
applied to the ball is actually the force exerted by the smaller planet, which
gradually diminishes as the ball approaches the center of Earth.
import turtle

WIDTH = 600
HEIGHT = 400
MARGIN = 50
SLEEP_MS = 20
AX_COEFF = -0.003
AY_COEFF = -0.001
START_COORDS = (200, 200)

done = False

def set_done():
    global done
    done = True

turtle.listen()
turtle.onkeypress(set_done, "space")

m = turtle.Turtle()
m.shape("circle")
m.penup()
m.goto(START_COORDS[0], START_COORDS[1])
m.pendown()
vx, vy = 0, 0
def tick():
    if not done:
        global vx, vy
        m.goto(m.xcor() + vx, m.ycor() + vy)
        vx += m.xcor() * AX_COEFF
        vy += m.ycor() * AY_COEFF
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)

tick()
turtle.done()
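A headless look at one pendulum (my own sketch): with AX_COEFF = -0.003 the update rule produces a bounded oscillation with angular frequency close to √0.003 ≈ 0.055 radians per step, i.e., a period of roughly 115 steps:

```python
AX_COEFF = -0.003
x, vx = 200.0, 0.0
xs = []
for _ in range(5000):
    x += vx
    vx += x * AX_COEFF  # acceleration proportional to the offset
    xs.append(x)
print(min(xs), max(xs))  # the swing stays close to the starting amplitude
```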
3.4.1 IMPLEMENTATION
We already know how to implement motion along a straight line. Let’s think
how to obtain a circular trajectory. Suppose our moving object revolves around
a central point (x0 , y0 ), just like Earth revolves around the Sun, and the
distance from the object to the central point is constant and equal to L. Then
the current position (x, y) of the object can be described by the angle a
between the line connecting (x0 , y0 ) and (x, y), and the line that goes parallel
to the x-axis through the point (x0 , y0 ) (see Figure 3.9).
Such polar coordinates (L, a) are just as good as Cartesian (x, y), and are
especially handy for our purposes: to make the object move around a central
https://www.britannica.com/science/orrery-astronomical-model
y
L (x, y)
a
(x0, y0)
(0, 0) x
FIGURE 3.9: Using polar coordinates (L, a) to define the position of an object.
a += AV
x = x0 + L · cos(a)
y = y0 + L · sin(a)
Now we have everything to show how Earth orbits the Sun. The Moon might
seem a bit more tricky because it orbits around Earth, which orbits around the
Sun. However, the central point (x0 , y0 ) is not required to remain constant:
it can be set to the center of Earth, and the conversion formulas will do the
rest. The complete program is shown in Listing 3.4, and the screenshot of the
working application in Figure 3.10.
import turtle
import math
WIDTH = 600
HEIGHT = 400
MARGIN = 50
SLEEP_MS = 20
TURTLE_SIZE = 20
R_SUN = 60
R_EARTH = 20
L_EARTH = 130
AV_EARTH = 0.02
R_MOON = 3
L_MOON = 30
AV_MOON = AV_EARTH * 13 # the real Moon makes approx 13 turns per year
done = False

def set_done():
    global done
    done = True

turtle.listen()
turtle.onkeypress(set_done, "space")

def tick():
    if not done:
        global a_earth, a_moon
        a_earth += AV_EARTH
        a_moon += AV_MOON
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)

tick()
turtle.done()
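The polar-to-Cartesian conversion is easy to verify headlessly with the listing's constants (my own sketch; polar_pos is a hypothetical helper, not a function from the listing):

```python
import math

L_EARTH, AV_EARTH = 130, 0.02
L_MOON, AV_MOON = 30, 0.02 * 13

def polar_pos(cx, cy, length, angle):
    # convert polar coordinates around (cx, cy) to Cartesian
    return cx + length * math.cos(angle), cy + length * math.sin(angle)

a_earth = a_moon = 0.0
for _ in range(100):
    a_earth += AV_EARTH
    a_moon += AV_MOON
    ex, ey = polar_pos(0, 0, L_EARTH, a_earth)  # Earth around the Sun
    mx, my = polar_pos(ex, ey, L_MOON, a_moon)  # the Moon around Earth
print(round(math.hypot(ex, ey)), round(math.hypot(mx - ex, my - ey)))  # 130 30
```

The distances stay exactly L_EARTH and L_MOON even though the Moon's center point keeps moving.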
3.5.1 IMPLEMENTATION
The previous project, “Digital orrery”, includes all the necessary pieces to
simulate a Spirograph. Earth revolving around a fixed center works like a
Spirograph cog, and the trajectory of the Moon is similar to the curve made
by a pen inserted into a cog hole.
What sets these two simulations apart is their treatment of relationships
between moving objects. In “Digital orrery”, each angular velocity can be set
independently: the Moon can spin around Earth faster or slower. In contrast,
a cog traveling around the circular opening always makes the same number of
revolutions on its path. Furthermore, if the cog travels clockwise, it revolves
anticlockwise.
To determine the angular velocity of a moving cog, we need a formula for
the distance covered by an object moving along the arc of the radius R and
the central angle a:
D =R·a
Suppose the radius of the frame opening is Rf, and the radius of the cog is
Rc, so the cog is moving along the arc of radius L = Rf − Rc. When a freshly
painted cog moves by the angle A along the arc, it stains a strip of length
L · A on the frame (see Figure 3.12).
It also means that the outer side of the cog has to cover the same distance
and turn by the angle −a (“minus” because the cog rotates in the opposite
direction):
L · A = −Rc · a
[Figure 3.12: a cog of radius Rc rolling inside a frame of radius Rf; the cog's
center travels along an arc of radius L, turning through the angle A.]

The Cartesian coordinates of the cog center are then

x = L · cos(A)
y = L · sin(A)
If the drawing pen is located on the edge of the cog, its coordinates are
xpen = x + Rc · cos(a)
ypen = y + Rc · sin(a)
However, since Spirograph cogs have multiple holes, a pen may be located at
a different distance Rp from the cog center, so these formulas become
xpen = x + Rp · cos(a)
ypen = y + Rp · sin(a)
The complete listing of the project is shown in Listing 3.5. I decided to modify
the value of a inside the main loop and to calculate A, but it could have
been done in the opposite way. An example curve is shown on the screenshot
(Figure 3.13).
import turtle
import math

WIDTH = 600
HEIGHT = 400
MARGIN = 50
SLEEP_MS = 20

# astroid
# R_FRAME = 25 * 4
# R_COG = 25
# R_PEN = 25

R_FRAME = 154
R_COG = 55
R_PEN = 30
L = R_FRAME - R_COG
AV = 0.1  # angular velocity

done = False

def set_done():
    global done
    done = True

turtle.listen()
turtle.onkeypress(set_done, "space")

pen = turtle.Turtle()
pen.shape("circle")
pen.penup()
pen.goto(L + R_PEN, 0)
pen.pendown()
a = 0
def tick():
    if not done:
        global a
        A = -a * R_COG / L
        x = L * math.cos(A)
        y = L * math.sin(A)
        pen.goto(x + R_PEN * math.cos(a), y + R_PEN * math.sin(a))
        a += AV
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)

tick()
turtle.done()
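The commented-out “astroid” configuration provides a neat automated test: with R_FRAME = 4 · R_COG and the pen on the cog's edge, the curve must satisfy x = R_FRAME · cos³A, y = R_FRAME · sin³A (my own check, relying on the identities cos 3A = 4cos³A − 3cos A and sin 3A = 3 sin A − 4 sin³A):

```python
import math

R_FRAME, R_COG, R_PEN = 100, 25, 25  # the "astroid" configuration
L = R_FRAME - R_COG

for step in range(200):
    a = 0.05 * step
    A = -a * R_COG / L
    x = L * math.cos(A) + R_PEN * math.cos(a)
    y = L * math.sin(A) + R_PEN * math.sin(a)
    # every point must lie on the astroid R(cos^3 A, sin^3 A)
    assert abs(x - R_FRAME * math.cos(A) ** 3) < 1e-9
    assert abs(y - R_FRAME * math.sin(A) ** 3) < 1e-9
print("astroid check passed")
```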
certain body, its distance from its “parent” planet, and its angular
velocity. Planets with no parent should orbit around the screen center.
4. Introduce the idea of the tilt angle into “Jumping ball”. The ball in
our model starts moving in horizontal direction, but you can imagine
it being fired from a cannon pointing in an arbitrary direction.
5. The code of “Lissajous figures” can actually produce only figures with
zero phase difference. Both pendulums are started in their farthest
positions from the central point with their velocities set to zero. How-
ever, it does not have to be so: one of the pendulums can be started
when another one is already in motion. Extend our program with a
capability to simulate this scenario.
6. The Spirograph can be also used to draw epitrochoids—the curves
obtained by rolling one toothed wheel around another toothed wheel
instead of a frame. Implement this capability.
4 Stochastic processes
In the next projects, we will discuss simple experiments whose results are
obtained via numerous repetitions of a certain stochastic (randomized) process.

It might sound counterintuitive to rely on something random to obtain
something more or less precise, but this idea lies behind the family of Monte
Carlo methods, widely used in science and engineering. While the idea of
analyzing randomized processes is centuries old, Monte Carlo methods in their
present form appeared in the 1940s, and they are directly associated both with
nuclear physics, where they were first applied, and with the rise of computing
machinery, which is great at doing repetitive work fast.

The advantage of randomized repetition is its simplicity. Large systems
are complex and hard to simulate. However, large systems often consist of
simple small parts, moving and interacting with each other. By simulating
these microscopic interactions, Monte Carlo methods manage to grasp the
complexity of the whole [3]. Some processes, such as dice games, are stochastic
by their nature, so throwing “electronic dice” is the only right way to simulate
them accurately.
Now let’s see how order and chaos play together, reflecting our complex
reality. We’ll start with simple artificial examples and consider some real-
world scenarios closer to the end of the chapter.
4.1 CALCULATING PI
Imagine a mug with one black and three white tokens. Let's take out a random
token without looking into the mug. Is the token more likely to be white? It
is intuitively clear that it is indeed so, and the chance to fish out the black
token is just one out of four. But how can we prove it? A straightforward
experimental proof can be performed by actually repeating this draw-a-token
process a number of times and examining the outcomes.
While the experimentally obtained answer is going to be approximate, the
law of large numbers states that by increasing the number of trials it is possible
to produce more and more accurate results.1 Thus, despite inevitable hiccups
on the way, hundreds or thousands of token draws should yield a proportion
of black tokens close to 1/4.
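This token experiment takes only a few lines to run (my own sketch; outcome 0 stands for the black token):

```python
import random

rng = random.Random(2024)
N = 100_000
# drawing from a mug with 1 black and 3 white tokens:
# picking value 0 out of the range 0..3
black = sum(1 for _ in range(N) if rng.randrange(4) == 0)
print(black / N)  # lands close to 1/4
```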
For our next task, we will consider a more interesting example: experimental
derivation of the value of π. This can be done in a variety of ways, so let’s
choose the simplest one.
https://mathworld.wolfram.com/LawofLargeNumbers.html
DOI: 10.1201/9781003455295-4 69
[Figure: a quarter-circle inscribed in the unit square with corners at (0, 0),
(1, 0), and (0, 1).]

Nc / N ≈ (area of the quarter-circle) / (area of the square)
This observation lies behind the method of complex shape area calculation,
known as Monte Carlo integration.2 Naturally, it works best when the number
of thrown darts is large.
The area of a quarter-circle of radius r is πr²/4, so for our case it is simply
π/4, and the area of the bounding square is simply 1. Thus,

Nc / N ≈ π / 4
https://mathworld.wolfram.com/MonteCarloIntegration.html
Stochastic processes 71
π ≈ 4 · Nc / N
4.1.1 IMPLEMENTATION
The implementation of this project will follow a simple “text in—text out”
approach without any turtle-based visualizations. Graphics would certainly
look nice here, but it would take a disproportionately large piece of the code
without adding much value.
The only missing bit in the formulas above is the method for checking
whether a certain point lies within the quarter-circle. An obvious way to do it
is to employ the Euclidean distance, which should be smaller than one for the
inside points:
√(x² + y²) < 1

By squaring both sides of this inequality, we can also get rid of the square
root:

x² + y² < 1
The complete listing of this project gets the prize for the shortest program
in the book (see Listing 4.1). Still, it illustrates an interesting concept and is
well worth our attention.
Listing 4.1: Calculating Pi.
from random import uniform

N = 100000
Nc = 0

for _ in range(N):
    x = uniform(0, 1.0)
    y = uniform(0, 1.0)
    if x * x + y * y < 1:
        Nc += 1

print(f"Pi is {4*Nc/N}")
Pi is 3.14124
Pi is 3.14312
Pi is 3.13384
...
4.2.1 IMPLEMENTATION
The simulation will repeatedly play out the following scenario. A prize is ran-
domly placed behind one of the three doors. The player chooses a random
door, then the host opens another door without a prize. Then we’ll compare
how two distinct strategies perform. Player A would always stick to the origi-
nal choice, while Player B would always switch the door. Full implementation
is shown in Listing 4.2.
from random import choice

N = 100000

# Monty opens one of the remaining doors that hides no prize
def choose_empty(prize, options):
    return choice([d for d in options if not prize[d]])

player_a = 0
player_b = 0

for _ in range(N):
    prize = [False, False, False]
    prize[choice([0, 1, 2])] = True

    options = [0, 1, 2]
    player_choice = choice(options)
    options.remove(player_choice)

    monty_choice = choose_empty(prize, options)
    options.remove(monty_choice)

    if prize[player_choice]:
        player_a += 1
    if prize[options[0]]:
        player_b += 1

print(f"Player A: {player_a/N}")
print(f"Player B: {player_b/N}")
Player A: 0.33511
Player B: 0.66489
Player B, who always switches doors, wins twice as often! Thus, the
experiment shows vos Savant was right.
If your intuition still rebels, here is an easy way to think about this game.
Suppose you have initially chosen a door with the prize. The chances of this
lucky event are 1 out of 3. In this case, switching doors means failure: you
move away from your prize. However, in any other case (2 out of 3) you choose
between your empty door and another closed door with the prize behind it.
Therefore, the “always switch” strategy wins in 2/3 of all the games.
3 Unfortunately, I don’t know the original source of this problem. It was discussed at one
[Two panels: case 1 shows the order B, s, d, A; case 2 shows B, d, s, A.]
FIGURE 4.2: Two scenarios favorable for the passenger.
4.3.1 IMPLEMENTATION
Let’s formalize our experimental setup. A bus makes a full circle by driving
from A to B and back. If the distance between these points is D, the length
of the loop is 2D. If the time since bus departure is t, and the bus is moving
with unit speed (one unit of distance per unit of time), its location on the
route is
busloc = t mod 2D
The bus makes a full loop in 2D time units. If busloc is smaller than D, the bus
moves in the forward direction; otherwise it moves backward. From a passenger’s
point of view, there are two randomly chosen points, source s and destination
d, on the road, and either of these points can be closer to B. In a scenario
favorable for the passenger, the bus is either to the left side of s if s is closer
to B, or to the right side of s if s is closer to A (see Figure 4.2).
from random import uniform

N = 100000
DIST = 10
DAYLEN = 5 * 2 * DIST  # 5 loops

wins = 0

for _ in range(N):
    t = uniform(0, DAYLEN)
    src = uniform(0, DIST)
    dst = uniform(0, DIST)

    busloc = t % (2 * DIST)
    # current bus position on the road (assuming A at 0 and B at DIST)
    pos = busloc if busloc < DIST else 2 * DIST - busloc
    # reconstructed conditions: the bus is on the B side / A side of s
    bus_sbs = pos > src
    bus_sas = pos < src

    if src >= dst and bus_sbs or src < dst and bus_sas:
        wins += 1

print(wins / N)
Five full loops are simulated here. We choose a random source point, a random
destination point, and a random time, and check the conditions for a favorable
scenario. My test run yields the following output:
0.33357
In other words, there is only a 33% chance of being lucky; so, as passengers
we should keep our expectations low.
decides whether to continue rolling the die or to hold. If the player decides to
hold, the total turn score is added to the player’s final score, the turn ends,
and the die is passed to another player.
This type of game mechanic is called “push your luck”. By rolling the die,
the player continues to score (while the opponent doesn’t), but the stakes are
rising: one unlucky roll, and all the earnings evaporate. Anyway, what is the
optimal strategy in Pig? Discovering it will be our next challenge.
Let’s base our efforts on two quite strong presumptions. First, since the
player’s objective is to earn 100 points as quickly as possible, it makes sense to
strive to maximize the score of each turn. Second, the player chooses between
rolling and holding after each roll on the basis of the current round score.
The player should hold when the score is too high to justify the risk of rolling
“one” and losing everything.
These presumptions make finding the optimal strategy a very straightfor-
ward task. We will try out different values of “holding scores” and run exper-
iments to understand how they influence the average per-round earnings.
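Before running the simulation, a quick back-of-the-envelope check hints at the answer: rolling again risks the current turn total r with probability 1/6, while gaining 4 points on average (the mean of the faces 2 through 6) with probability 5/6; the two balance when r/6 = 4 · 5/6, i.e., at r = 20. This is only a sketch of the reasoning, not part of the book’s program:

```python
# expected loss of one more roll: r / 6 (roll a "one", lose the turn total)
# expected gain of one more roll: mean of faces 2..6, with probability 5/6
avg_gain = sum(range(2, 7)) / 5   # 4.0
break_even = 5 * avg_gain         # r where r/6 == avg_gain * 5/6
print(break_even)  # 20.0
```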
4.4.1 IMPLEMENTATION
The complete program is shown in Listing 4.4. For each possible holding score
from 2 to 100, we play 10,000 games and report the holding score yielding the
best overall performance.
from random import randint

N = 10000

def turn_score(target):
    r = 0
    while r < target:
        die = randint(1, 6)
        if die == 1:
            return 0
        r += die
    return r

best_target = 0
best_total = 0

# reconstructed driver loop, following the description above
for target in range(2, 101):
    total = sum(turn_score(target) for _ in range(N))
    if total > best_total:
        best_total = total
        best_target = target

print(f"Best holding score: {best_target}")
It seems that 10,000 games are not enough to obtain a reliable definitive
answer. Consecutive runs of the program may produce different answers:
Playing more games takes more time, but at this point we can safely pre-
sume that the answer lies somewhere between 18 and 24, and play 100,000
games for each value in this range, which yields 20 in most cases:
4.4.2 AFTERTHOUGHTS
Our experiments show that the optimal strategy in Pig is “hold at 20”. While
this is indeed a good rule of thumb, the real optimal strategy is not that
simple, because our initial presumptions about it are not entirely correct.
For starters, the game is more favorable for the first player, because if both
players score equally per turn, the first player has one extra turn available to
win before the second player has a chance to catch up. For this reason, as the
second player you might decide to accept higher risk. There is a variation
of Pig with the “leveling” rule: if the first player holds with a score of 100+,
the second player has a chance to play one more turn to score higher.
Another important factor affecting the optimal strategy is the current op-
ponent score. Suppose it’s your turn. You have 75 points, while the opponent
has 95. Clearly, you should strive for victory rather than hold at 20, because
the opponent is likely to win on the next turn. In general, the optimal strategy
depends on the current scores of both opponents, and it significantly deviates
from “hold at 20” closer to the end of the game [33], [34].
[Figure 4.3: a bell-shaped chart of head-count probabilities (horizontal axis 0–10, vertical axis up to 0.25).]
It is easy to propose a similar setup for a coin. Let’s toss a coin N times
and count the number of obtained heads. The least likely outcomes would be
0 and N, while the most likely sums huddle around N/2. The sums of heads
are binomially distributed: their probabilities4 form a nice-looking “bell curve”
when plotted (see Figure 4.3).5
This result is just an example of a more fundamental law called central
limit theorem6 , which, in particular, states that the bell curve appears when
independent and identically distributed random variables are summed up.
Our coin tosses are independent in the sense that one toss “knows” nothing
about other tosses, and their outcomes are not connected. (A popular erro-
neous belief that five heads in a row means growing chances for a tail is known
as “gambler’s fallacy” [53].) The requirement of mere “identical distribution”
means that the law works even for non-uniformly distributed variables. If we
replace a coin with a six-sided die with faces marked 1, 2, 2, 3, 3, 3 and start
counting sums of points obtained over N throws instead of coin heads, we’ll
get the same bell curve picture.
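This claim is easy to try out. The sketch below (a hypothetical experiment of mine, not a listing from the book) sums throws of such a die many times and prints a crude text histogram, which comes out bell-shaped around the mean of about 23.3:

```python
from random import choice
from collections import Counter

THROWS = 10     # die throws per trial
TRIALS = 20000
faces = [1, 2, 2, 3, 3, 3]  # a non-uniform six-sided die

# count how often each total occurs
counts = Counter(sum(choice(faces) for _ in range(THROWS)) for _ in range(TRIALS))
for total in sorted(counts):
    print(f"{total:3} {'#' * (500 * counts[total] // TRIALS)}")
```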
A great way to visualize this process was proposed by Sir Francis Galton in
1873 [51]. He designed a device consisting of a board with a mounted funnel
and several rows of pegs, arranged in a chessboard-like pattern. The bottom
of the board hosted isolated compartments. The idea is to fill the funnel with
beads and let them fall through the board, bouncing off the pegs and finally
resting in the bottom compartments (see Figure 4.4).
4 For our purposes, the probability of an event can be defined as the number of times this
event occurred in trials (its frequency), divided by the total number of trials made.
5 Note that the sum of two dice is not binomially distributed, see
https://math.stackexchange.com/questions/1204396
6 From Wolfram MathWorld:
https://mathworld.wolfram.com/CentralLimitTheorem.html
4.5.1 IMPLEMENTATION
The proposed implementation will merely provide a textual output of the
resulting placement of beads. While visualization of a probabilistic process is
the point of the real Galton board, let’s concentrate on its inner workings for
now. Fortunately, the underlying mechanics can be implemented in just a few
lines of code (see Listing 4.5).
from random import randint

N = 100000
PEGS = 7

# PEGS coin flips can land a bead in one of PEGS + 1 compartments
bins = [0] * (PEGS + 1)

for _ in range(N):
    bin_idx = 0
    for _ in range(PEGS):
        if randint(0, 1) == 1:
            bin_idx += 1
    bins[bin_idx] += 1

print(f"bins: {bins}")
print(f"prob: {[b / N for b in bins]}")
Intuitively it feels natural to assign the central peg to the next incoming
bead, and shift the bead to the left or to the right on each subsequent step.
However, thinking in terms of the target compartments leads to simpler code.
A generated random value of 1 corresponds to coin tails, or shifting the bead
to the right. Initially, a bead is projected to fall into the leftmost compart-
ment, and each toss of tails promotes the bead to the next compartment.
As expected, the final list generated by our code is quite close to the binomial
distribution:
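For comparison, the exact binomial probabilities for seven pegs can be computed directly (this check is mine, not from the book):

```python
from math import comb

PEGS = 7
# probability of ending in compartment k: C(PEGS, k) / 2**PEGS
exact = [comb(PEGS, k) / 2**PEGS for k in range(PEGS + 1)]
print([round(p, 4) for p in exact])
# [0.0078, 0.0547, 0.1641, 0.2734, 0.2734, 0.1641, 0.0547, 0.0078]
```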
4.5.2 AFTERTHOUGHTS
I have to note that this code is not a very accurate representation of the
real Galton board. Imagine the process simulated here. A bead falls into the
funnel, meets the first peg and bounces off to land precisely onto the next
peg. These bounces continue with clockwork accuracy until the bead reaches
its compartment, and the next bead starts its journey.
The physical board works quite differently. The funnel is usually wide
enough to pour a small stream of beads, and both the pegs and the distances
between them are large enough to allow a great deal of random jumping
and bumping into walls, pegs, and other beads.7 Still, all the beads live in
the same environment, and their jumps are identically distributed, so thanks
to the central limit theorem the bottom of the board still forms a perfect
bell curve. This mechanism of transforming the chaos of beads into a neat
4.6.1 IMPLEMENTATION
There are two points to clarify in this task before we can proceed to the code.
First: what texts to analyze? Second: how to show that the formula of Zipf’s
law is approximately correct?
While any reasonably large text would work fine, we can stay strictly within
the Python environment by utilizing a built-in help system. Just type, for
example,
help('LISTS')
in the interactive Python shell to read the section on lists. The complete
list of topics is available by typing:
help('topics')
Some parts of the documentation can be accessed from code. The above-
mentioned section on lists, for instance, can be retrieved from the dictionary
topics, located in the module pydoc_data.topics:
print(topics['typesseq-mutable'])
Now let’s decide how to evaluate the correctness of the formula for the
given list of words and their frequencies. Fortunately, there is a very easy way
to do it: if Zipf’s law is correct, word ranks and frequencies should form a
nearly straight line on a log-log plot, which is a two-dimensional graph with
logarithmic scales on both axes. If the frequency of the k-th most frequent word is
Fk , we draw a dot at the screen location (log k, log Fk ), and the resulting line
would be expected to connect the top-left and the bottom-right corners of the
chart.
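To see why the line should be straight, note that for an ideal Zipf distribution Fk = C/k, so log Fk = log C − log k: frequency falls linearly with rank on log axes, with slope −1. A tiny check of this fact (mine, not the book’s):

```python
import math

C = 1000.0
freqs = [C / k for k in range(1, 101)]  # ideal Zipf frequencies

# slope between consecutive points of the log-log plot
slopes = [
    (math.log(freqs[k]) - math.log(freqs[k - 1]))
    / (math.log(k + 1) - math.log(k))
    for k in range(1, 100)
]
print(round(min(slopes), 6), round(max(slopes), 6))  # -1.0 -1.0
```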
The complete text of the program is shown in Listing 4.6. A turtle is used
to create a log-log plot, shown in Figure 4.5.
import math
import turtle
from collections import defaultdict
from pydoc_data.topics import topics

WIDTH = 600
HEIGHT = 400

freq = defaultdict(int)  # corpus: all help topics joined (one reasonable choice)
for word in " ".join(topics.values()).lower().split():
    if word.isalpha():
        freq[word] += 1
values = sorted(freq.values(), reverse=True)

turtle.setup(WIDTH, HEIGHT)
turtle.tracer(0, 0)
turtle.setworldcoordinates(0, 0, math.log(len(values)), math.log(values[0]))
drawer = turtle.Turtle()
drawer.hideturtle()
drawer.penup()

for x, y in enumerate(values):
    drawer.goto(math.log(x + 1), math.log(y))
    drawer.pendown()

turtle.update()
turtle.done()
The input text is transformed into a list of words using split(). Words are
converted to lowercase to make sure we don’t process entries like “List” and
“list” separately, and all non-alphabetic entries are removed. The frequencies
are stored in the dictionary freq, which has to be initialized with zeroes. The
easiest way to do it is to use the type defaultdict(int), which returns zero as the
value for any key not found in the dictionary. When freq is ready, we simply
retrieve all the frequency values it stores (we don’t need the words anymore)
and sort them in descending order. The first element of the resulting list
corresponds to the most frequent word, so the x-coordinate of its dot on the
graph should be log 1.
While not a perfectly straight line, our chart is quite close to it. The right
end of the picture shows a very typical step-like pattern, which usually appears
at low frequencies. A frequency is a whole number: there is no way for a word
to occur 1.5 or 2.7 times in the text. Thus, the differences between the adjacent
low frequencies can be large in relation to their values, and these “jumps” are
quite noticeable.
In our case, there are 2355 distinct words in the dictionary, and the most
frequent of them appears 4269 times (yes, it is “the”). Thus, the values on the
vertical axis range from zero to log 4269 ≈ 8.36. The two least frequent elements,
which occur once and twice, respectively, occupy vertical positions log 1 = 0
and log 2 ≈ 0.69. This means the vertical “jump” between these elements
takes around 1/12 of the total chart height.
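The arithmetic is easy to verify (a quick check of the numbers above, not a listing from the book):

```python
import math

top = math.log(4269)               # chart height: log of the highest frequency
jump = math.log(2) - math.log(1)   # gap between frequencies 1 and 2
print(round(top, 2), round(top / jump))  # 8.36 12
```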
4.7.1 IMPLEMENTATION
This simulation will work in two steps. First, we will simulate the growth of
cities until their population stabilizes. Next, we’ll draw a log-log plot to check
how the cities agree with Zipf’s law.
Let’s start with the initial quick sketch of the program shown in Listing 4.7.
import math
import turtle
from random import randint
from dataclasses import dataclass

H = 25
W = 25
SLEEP_MS = 50
CELLSIZE = 20  # pixels

@dataclass
class SimState:
    done: bool

    def set_done(self):
        self.done = True

    @classmethod
    def setup(cls):
        r = cls(False)
        turtle.listen()
        turtle.onkeypress(r.set_done, "space")
        return r

@dataclass
class WorldState:
    ...
    is_stable: bool = False

    @classmethod
    def setup(cls):
        ...

    def update(self):
        ...

def setup_screen(title):
    turtle.setup(W * CELLSIZE, H * CELLSIZE)
    turtle.tracer(0, 0)
    turtle.title(title)
    turtle.setworldcoordinates(0, 0, W, H)

def draw_chart(rankings):
    ...

setup_screen("Cities")  # the title text is a guess
sim_state = SimState.setup()
world_state = WorldState.setup()

def tick():
    if not sim_state.done:
        if not world_state.is_stable:
            world_state.update()
            turtle.update()
            turtle.ontimer(tick, SLEEP_MS)
        else:
            draw_chart(world_state.rankings())

tick()
turtle.done()
As you can see, this is our usual template for an interactive simulation. The
internal state of the model is updated on each step, and the user can stop it
at any time by pressing the space key. In this case, however, there is an extra
stage where the resulting chart is drawn.
Now let’s take one step forward by implementing the initial distribution
of people among the cities (Listing 4.8).
...
SHAPE_SIZE_FACTOR = 0.01
SHAPE_SIZE = SHAPE_SIZE_FACTOR * CELLSIZE / 20  # turtle size
POPULATION = 25000
DAC = 0.0005
MS = 3

@dataclass
class City:
    shape: turtle.Turtle
    population: int = 0

    def increase(self):
        self.update(self.population + 1)

    def decrease(self):
        self.update(self.population - 1)

    def update(self, population):
        self.population = population
        if self.population == 0:
            self.shape.hideturtle()
        else:
            self.shape.showturtle()
            self.shape.shapesize(SHAPE_SIZE * math.sqrt(self.population))

    def score(self):
        return self.population - DAC * self.population**2

    @classmethod
    def create(cls, x, y):
        p = turtle.Turtle()
        p.penup()
        p.shape("circle")
        p.goto(x, y)
        return cls(p)

@dataclass
class Person:
    city: object
    max_distance: int

    @classmethod
    def create(cls, cities):
        c = cities[randint(0, W - 1)][randint(0, H - 1)]
        c.increase()
        return cls(c, randint(1, max(H, W) // 2))

@dataclass
class WorldState:
    cities: list
    population: list

    @classmethod
    def setup(cls):
        cells = [[City.create(x, y) for y in range(H)] for x in range(W)]
        population = [Person.create(cells) for _ in range(POPULATION)]
        return cls(cells, population)
The WorldState object keeps a list of cities and a list of people. Each cell of
the W × H board contains a city, and there are POPULATION people in total. Each
city “knows” how many people live there by keeping a counter of its current
population. City objects can be updated by calling increase() and decrease()
methods, and each update triggers redrawing. A city is drawn as a circle with a
radius proportional to the square root of its population (because by linearly
increasing a city’s radius, we quadratically increase its livable area). Empty
cities are hidden. The most important attribute of a city is its attractiveness
score, calculated in score() method:
def score(self):
    return self.population - DAC * self.population**2
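The quadratic penalty term means attractiveness does not grow forever: with DAC = 0.0005 the score peaks at a population of 1/(2 · DAC) = 1000 and declines afterward, so overgrown cities start pushing people away. A quick check of this claim (mine, not the book’s):

```python
DAC = 0.0005

def score(population):
    return population - DAC * population**2

# the score is a downward parabola peaking at 1 / (2 * DAC)
best = max(range(5001), key=score)
print(best, score(best))  # 1000 500.0
```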
@dataclass
class WorldState:
    ...
    prev_ranks: list = None

    def rankings(self):
        p = [self.cities[x][y].population for x in range(W) for y in range(H)]
        return sorted([v for v in p if v > 0], reverse=True)

    def update(self):
        for p in self.population:
            dest = self.better_neighbor(p.city, p.max_distance)
            p.move_to(dest)
        if self.rankings() == self.prev_ranks:
            self.is_stable = True
            print(f"The cities ({len(self.prev_ranks)}) have stabilized")
        self.prev_ranks = self.rankings()
def draw_chart(rankings):
    turtle.clearscreen()
    drawer = turtle.Turtle()
    drawer.hideturtle()
    drawer.penup()

    width = math.log(len(rankings))
    height = math.log(rankings[0])
    turtle.setworldcoordinates(0, 0, math.ceil(width), math.ceil(height))
    print(rankings)

    for x, y in enumerate(rankings):
        drawer.goto(math.log(x + 1), math.log(y))
        drawer.pendown()
The corresponding log-log plot (see Figure 4.7) shows a certain lack of
small-sized cities in our simulation: the graph falls down somewhat abruptly
at its right end. However, considering the simplicity of the underlying model,
its ability to produce a mostly realistic distribution of cities is impressive.
slow down for a few seconds due to a certain distraction, obstacle, or technical trouble.
If the cars are not moving too fast, and the distance between them is
reasonable, nothing noteworthy would happen: after a while, the “rhythm” of
the road would go back to normal. In contrast, if the cars approach airplane
speeds, every random delay might trigger a major obstruction, known as a
traffic shockwave.
The slowed-down car creates a “tail” of cars behind it. Over time, the
density of this “tail” (cars per unit of distance) decreases, while its length
grows. Eventually the “tail” density approaches the usual density for the road,
and the jam dissolves.
This effect can be observed even in very basic computer models of traffic
flow, which will be demonstrated in our next project (adapted with simplifications
from Wetherell [58]). Interestingly, similar experiments with real cars
were performed only recently, showing how shockwaves actually happen in
the physical world [52].
4.8.1 IMPLEMENTATION
The description above may look a bit overwhelming, but the required func-
tionality is not very hard to separate into small independent functions. Let’s
start with the general structure of our program, shown in Listing 4.10.
import turtle
import math
from random import uniform, randint
from dataclasses import dataclass

WIDTH = 600
HEIGHT = 400
MARGIN = 50
SLEEP_MS = 20
N = 25

@dataclass
class SimState:
    done: bool
    cars: list

    def set_done(self):
        self.done = True

    def gen_disruption(self):
        self.cars[randint(0, N - 1)].disrupt()

    @classmethod
    def setup(cls, cars):
        r = cls(False, cars)
        turtle.listen()
        turtle.onkeypress(r.set_done, "space")
        turtle.onkeypress(r.gen_disruption, "Return")
        return r

@dataclass
class Car:
    ...

    def disrupt(self):
        ...

    def move(self):
        ...

def setup_screen(title):
    turtle.setup(WIDTH + MARGIN, HEIGHT + MARGIN)
    turtle.tracer(0, 0)
    turtle.title(title)

def setup_cars():
    ...

setup_screen("Traffic shockwaves")
cars = setup_cars()
sim_state = SimState.setup(cars)

def tick():
    if not sim_state.done:
        for c in cars:
            c.move()
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)

tick()
turtle.done()
This code is based on our usual template, which served as a base for “Ideal
gas” and related simulations. We’ll make a list of N cars, which will be moved
inside tick(). The sim_state object stops program execution when the user
presses space, as usual. It can also generate a disruption for a random car
when the enter key is pressed (“Return” keysym).
Now let’s fill some gaps in Car class (see Listing 4.11).
MIN_SPEED = 4
MAX_SPEED = 7
R = 200
...

@dataclass
class Car:
    m: turtle.Turtle
    angle: float
    pref_speed: float
    speed: float
    front_car: object = None
    in_disrupt: bool = False
    ...

    def disrupt(self):
        self.in_disrupt = True

    def adjust_speed(self):
        ...

    def move(self):
        self.adjust_speed()
        self.angle += self.speed / R
        self.m.goto(R * math.cos(self.angle), R * math.sin(self.angle))

    @classmethod
    def create(cls, angle):
        m = turtle.Turtle()
        m.shape("circle")
        m.shapesize(0.2)
        m.penup()
        speed = uniform(MIN_SPEED, MAX_SPEED)
        return cls(m, angle, speed, speed)

def setup_cars():
    cars = [Car.create(i * 2 * math.pi / N) for i in range(N)]
    for i in range(N):
        cars[i].front_car = cars[(i + 1) % N]
    return cars
...
Initially, each car is moving with its preferred speed, randomly chosen be-
tween MIN_SPEED and MAX_SPEED. To distribute cars uniformly along the circular
road, we assign each car its unique angle, calculated as i · 2π/N.
Since a car watches its front neighbor, it needs a reference to the next car in
the list. The last car gets a reference to the first car. A car’s screen position
is derived from its angle:

x = R · cos(angle)
y = R · sin(angle)
The main logic of the program is hidden in adjust_speed(). It is a bit involved,
so let’s proceed carefully (see Listing 4.12).
EPSILON = 0.0001
SAFE_DIST_STEPS = 15
CRASH_DISTANCE = 2
...

@dataclass
class Car:
    ...

    def accelerate(self):
        ...

    def decelerate(self):
        ...

    def front_distance(self):
        angle = self.front_car.angle - self.angle
        return R * (angle if angle >= 0 else angle + 2 * math.pi)

    def front_distance_steps(self):
        return self.front_distance() / (self.speed + EPSILON)

    def adjust_speed(self):
        if self.front_distance() < CRASH_DISTANCE:
            print("Crash!")
            sim_state.set_done()
        elif self.front_distance_steps() < SAFE_DIST_STEPS or self.in_disrupt:
            self.decelerate()
        elif self.speed < self.pref_speed:
            self.accelerate()
DECEL = 0.4
ACCEL = 0.1
REACT_TIME = 15
DISRUPT_TIME = 70
...

@dataclass
class Car:
    ...
    pause_steps: int = 0

    def accelerate(self):
        if self.pause_steps > 0:
            self.pause_steps -= 1
        else:
            self.speed = min(self.pref_speed, self.speed + ACCEL)

    def decelerate(self):
        self.speed = max(0, self.speed - DECEL)
        if self.speed == 0:
            self.pause_steps = DISRUPT_TIME if self.in_disrupt else REACT_TIME
            self.in_disrupt = False
...
to dissipate, but they are not fatal. By playing with the values, it is easy
to simulate both ends of the spectrum: low speeds and high safe distances
make shockwaves barely noticeable, while high speeds and low distances cause
immediate crashes.
Living things
DOI: 10.1201/9781003455295-5
[Figure 5.1: a moth at (x, y) with its current velocity, steering toward the lightbulb at (x′, y′).]
So far, this sounds very much like the motion of a molecule of gas. The
key difference between these models is inertia. We will presume that
the moth cannot turn instantaneously, so any change of movement direction
takes time. Naturally, this presumption creates much more life-like models of
real macroscopic objects, such as cars, animals, or spaceships. The resulting
steering behavior, thoroughly discussed in Reynolds’ influential paper [45],
is indeed often employed in animation, virtual simulations, and computer
games.
With or without inertia, our simulation will end when the hapless moth
smacks into the lightbulb. To make things more interesting, let’s also presume
that the lightbulb is really tiny, and the moth can’t hit it. Instead, it will fly
through it, turn around and resume its endless chase.
In general, this simulation works very much like “A molecule of gas”: on
every step, we simply update the moth’s coordinates according to velocity
constituents vx and vy . The trick here is to modify these constituents on every
step as well. This procedure is also familiar to us from “Newton’s cannonball”,
where Earth served as a really large and attractive lightbulb.
Suppose our moth wants to reach the lightbulb at (x′ , y ′ ) but is currently
moving somewhere else (see Figure 5.1).
In terms of velocity constituents, the moth is moving with (vx , vy ), but its
desired constituents are (vx′ , vy′ ) that go along the vector pointing toward the
lightbulb.
vx* = x′ − x
vy* = y′ − y

However, we need to scale them to make up the original moth speed (let’s
denote it as V ). The length of the vector (vx*, vy*) is

|(vx*, vy*)| = √(vx*² + vy*²),

so the desired velocity constituents are

vx′ = V · vx* / |(vx*, vy*)|
vy′ = V · vy* / |(vx*, vy*)|
If we set vx′ and vy′ as moth’s velocity constituents, the moth will instantly
turn in the direction of the lightbulb. What we want, however, is to gradually
transform vx into vx′ , and vy into vy′ . To do it, we need to introduce another
constant: a force F that is applied to the moth on each step to alter its
direction (a larger force increases the moth’s maneuverability).
A force modifies velocity via acceleration, calculated as a = F/m according
to Newton’s second law. Here m is the mass of a body, which can be set to 1
in this case, so for us a = F.
To apply the force, we’ll need a steering vector (sx , sy ), calculated as
sx = vx′ − vx
sy = vy′ − vy
Now we’ll simply scale the steering vector to the length of F to obtain the
force (and, hence, the acceleration) to be applied to each velocity constituent:

ax = F · sx / |(sx, sy)|
ay = F · sy / |(sx, sy)|
Finally, let’s calculate the moth’s new velocity constituents (vx**, vy**), making
sure the resulting speed is V :

vx″ = vx + ax
vy″ = vy + ay

vx** = V · vx″ / |(vx″, vy″)|
vy** = V · vy″ / |(vx″, vy″)|
5.1.1 IMPLEMENTATION
It might take some time to fully comprehend the formulas used to implement
steering behavior, but the actual code is straightforward and follows the same
approach as in our previous simulations. The only notable difference is the use
of vector arithmetic. The explanations above look quite repetitive, because
most formulas appear twice (once per constituent). However, we can clean up
the code a bit by using variables that represent two-dimensional vectors, and
implementing some basic arithmetical functions for them.
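Such helpers might look as follows (a minimal sketch of mine: vecdiff appears in a later snippet of the book, while the other names are assumptions):

```python
import math

def length(vec):
    return math.sqrt(vec[0] ** 2 + vec[1] ** 2)

def vecdiff(a, b):
    return (a[0] - b[0], a[1] - b[1])

def vecsum(a, b):
    return (a[0] + b[0], a[1] + b[1])

def scale(vec, target_len):
    # rescale vec to the given length (vec must be nonzero)
    k = target_len / length(vec)
    return (vec[0] * k, vec[1] * k)

print(length((3, 4)))     # 5.0
print(scale((3, 4), 10))  # (6.0, 8.0)
```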
Since we are about to draw a moth rather than a molecule, I also suggest
keeping the original arrow-like shape of the turtle. The complete code is shown
in Listing 5.1, and a screenshot of the working program in Figure 5.2.
import turtle
import math
from random import uniform
from dataclasses import dataclass

WIDTH = 600
HEIGHT = 400
MARGIN = 50
R = 10
SLEEP_MS = 20
V = 15
FORCE = 9
BULB = (0, 0)

@dataclass
class SimState:
    done: bool

    def set_done(self):
        self.done = True

    @classmethod
    def setup(cls):
        r = cls(False)
        turtle.listen()
        turtle.onkeypress(r.set_done, "space")
        return r

def setup_screen(title):
    turtle.setup(WIDTH + MARGIN, HEIGHT + MARGIN)
    turtle.tracer(0, 0)
    turtle.title(title)

sim_state = SimState.setup()
setup_screen("A moth and a lightbulb")

bulb = turtle.Turtle()
bulb.shape("circle")

moth = turtle.Turtle()
moth.penup()
moth.setx(uniform(-WIDTH / 2 + R, WIDTH / 2 - R))
moth.sety(uniform(-HEIGHT / 2 + R, HEIGHT / 2 - R))

# initial velocity: speed V in a random direction
angle = uniform(0, 2 * math.pi)
v = (V * math.cos(angle), V * math.sin(angle))

def length(vec):
    return math.sqrt(vec[0] ** 2 + vec[1] ** 2)

def tick():
    if not sim_state.done:
        global v
        # steering update, reconstructed from the formulas above:
        # desired velocity points toward the bulb, scaled to speed V
        to_bulb = (BULB[0] - moth.xcor(), BULB[1] - moth.ycor())
        d = length(to_bulb)
        desired = (V * to_bulb[0] / d, V * to_bulb[1] / d)
        # steering vector, scaled to the length of FORCE
        steer = (desired[0] - v[0], desired[1] - v[1])
        s = length(steer)
        a = (FORCE * steer[0] / s, FORCE * steer[1] / s)
        # apply the acceleration and rescale the result back to speed V
        nv = (v[0] + a[0], v[1] + a[1])
        n = length(nv)
        v = (V * nv[0] / n, V * nv[1] / n)
        moth.goto(moth.xcor() + v[0], moth.ycor() + v[1])
        moth.setheading(math.degrees(math.atan2(v[1], v[0])))
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)

tick()
turtle.done()
5.2.1 IMPLEMENTATION
This change is really a matter of three new lines of code:
BULB_RANGE = 70
...

def tick():
    if not sim_state.done:
        global v
Please try out this program, if you haven’t done it yet. It’s hard to believe
that just a few lines of Python can create such a fascinating effect.
speed = V · distance_to_target / nest_range
5.3.1 IMPLEMENTATION
Tuning global parameters on the go is not a good programming style, but in
this case we can do it to quickly turn the “Dancing moth” simulation into
“Homing pigeon”. We will need a new global constant VMAX for the maximum
allowed speed, while V will start receiving updates.
All we need to do is add a couple of declarations and a single line modifying
V at the end of the loop:
V = 10
VMAX = V
NEST_RANGE = 70
...

def tick():
    if not sim_state.done:
        global v, V
        ...
If the bird is farther than NEST_RANGE from the nest (still called “bulb” here),
the resulting value of V will be higher than VMAX, so we’ll have to clamp it down.
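The resulting speed rule can be sketched as a small function (a sketch of the idea, not the book’s exact code):

```python
VMAX = 10
NEST_RANGE = 70

def bird_speed(distance_to_target):
    # speed shrinks linearly near the nest and is clamped to VMAX farther away
    return min(VMAX, VMAX * distance_to_target / NEST_RANGE)

print(bird_speed(35))   # 5.0: half speed at half the nest range
print(bird_speed(700))  # 10: clamped to VMAX
```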
Before wrapping up this example, I suggest one simple improvement. When
the bird is about to arrive, its movement direction goes through a rapid series
of little adjustments. They are almost invisible if we only look at trajectory,
but the erratic turns of the turtle shape (bird’s “beak”) are annoying, so it
makes sense to stop changing its visible orientation when the speed becomes
small. This can be done by making the call to setheading() conditional:
I think this capability made the influence of “Boids” so enduring: once the
original model becomes insufficient for whatever reason, it is not hard to
improve it.
The three original rules can be summarized as follows:
The simulation we are about to develop shortly is based on the rules above,
their interpretation by Conrad Parker1 , and implementation by Ben Eater2 .
This flavor of “Boids” relies on the following additional presumptions:
5.4.1 IMPLEMENTATION
Since our new program will need a few functions from the “moth” simulation,
let’s take “moth” as a starting point, keeping only the necessary parts (see
Listing 5.2).
import turtle
import math
from random import uniform
from dataclasses import dataclass

N = 100  # number of boids
WIDTH = 600
HEIGHT = 400
MARGIN = 50
R = 10
SLEEP_MS = 20

@dataclass
class SimState:
    ...  # same as in moth

@dataclass
class Boid:
    ...  # to be created

sim_state = SimState.setup()
setup_screen("Boids")

boids = [Boid.create() for _ in range(N)]

def tick():
    if not sim_state.done:
        for b in boids:
            b.move()
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)

tick()
turtle.done()

1 Available at http://www.kfish.org/boids/pseudocode.html
2 Available at https://eater.net/boids
Our next job is to develop the class Boid, responsible for the actual
behavior of flock members. As already noted, behavior rules in “Boids” are
stackable, so we will deal with them one by one. Let’s start with creating and
moving a boid:
...
MIN_V = 5
MAX_V = 20

@dataclass
class Boid:
    m: turtle.Turtle
    v: tuple

    def move(self):
        self.rule_separation()
        self.rule_alignment()
        self.rule_cohesion()
        self.rule_limits()
        # update position and heading using the velocity constituents
        self.m.goto(self.m.xcor() + self.v[0], self.m.ycor() + self.v[1])
        self.m.setheading(math.degrees(math.atan2(self.v[1], self.v[0])))

    @classmethod
    def create(cls):
        x = uniform(-WIDTH / 2 + R, WIDTH / 2 - R)
        y = uniform(-HEIGHT / 2 + R, HEIGHT / 2 - R)
        m = turtle.Turtle()
        m.penup()
        m.goto(x, y)
        v = uniform(MIN_V, MAX_V)
        angle = uniform(0, 2 * math.pi)
        return cls(m, (v * math.cos(angle), v * math.sin(angle)))
This piece is very similar to the code we use for the moth simulation.
A boid is created in a random screen location, and it has a random speed
(within the given limits) and a random direction. To move a boid, we calculate
its new coordinates using velocity constituents, point its head in the right
direction, and update the current location. Velocity constituents are being
updated on every simulation step according to the four rules. Three of them
are the original rules proposed by Reynolds, while the fourth rule (“limits”)
keeps the flock within screen bounds, and limits movement speed to MAX_V.
Since the three original rules rely on the concept of “nearby boids”, we will
need to be able to identify all the flockmates within a given distance:
@dataclass
class Boid:
    ...

    # returns all neighbors of the current boid within the given range
    def neighbors(self, dist):
        return [b for b in boids if b != self and self.m.distance(b.m) < dist]
@dataclass
class Boid:
    ...
    def rule_separation(self):
        neighbors = self.neighbors(MIN_DISTANCE)
        if neighbors:
            vx = sum(self.m.xcor() - b.m.xcor() for b in neighbors)
            vy = sum(self.m.ycor() - b.m.ycor() for b in neighbors)
            target_v = (vx, vy)
VISUAL_RANGE = 200
VMATCH_F = 0.04
...

@dataclass
class Boid:
    ...
    def rule_alignment(self):
        neighbors = self.neighbors(VISUAL_RANGE)
        if neighbors:
            vx = sum(b.v[0] for b in neighbors) / len(neighbors)
            vy = sum(b.v[1] for b in neighbors) / len(neighbors)
            flock_v = (vx, vy)
            v_diff = vecdiff(flock_v, self.v)
This one is simpler: we calculate the average velocity constituents (vx, vy)
of the nearby boids and try to match them by applying the force VMATCH_F to
the current boid.
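The helper vecdiff() and the final force-application step are not shown above; a sketch consistent with this description (the function bodies are our assumption, not the book’s exact code):

```python
VMATCH_F = 0.04  # value from the listing above

def vecdiff(a, b):
    """Component-wise difference of two velocity vectors."""
    return (a[0] - b[0], a[1] - b[1])

def apply_force(v, v_diff, factor):
    """Nudge velocity v by a fraction of the difference toward the target."""
    return (v[0] + v_diff[0] * factor, v[1] + v_diff[1] * factor)

# a boid moving with v = (10, 0) among neighbors whose average velocity is (0, 10)
v = apply_force((10, 0), vecdiff((0, 10), (10, 0)), VMATCH_F)
```

Applied on every tick, this small correction gradually aligns the boid’s velocity with that of its flockmates.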
The final original rule is cohesion:
CENTERING_F = 0.03
...

@dataclass
class Boid:
    ...
    def rule_cohesion(self):
        neighbors = self.neighbors(VISUAL_RANGE)
        if neighbors:
            cx = sum(b.m.xcor() for b in neighbors) / len(neighbors)
            cy = sum(b.m.ycor() for b in neighbors) / len(neighbors)
            center = (cx, cy)
Here we calculate the center of the “local flock” (cx, cy) and steer the boid
toward it by applying the force CENTERING_F.
Finally, we have to limit boid speed to MAX_V, and steer any boids away from
the screen borders:
VRETURN_F = 3.0
RET_MARGIN = 70
...

@dataclass
class Boid:
    ...
    def rule_limits(self):
        ax, ay = 0, 0
A boid won’t reflect off the screen border like a molecule of gas. Instead,
we apply the force VRETURN_F to steer the boid back. This force has to
be negative for boids flying toward the right and top borders of the
screen, and positive for the left and bottom borders.
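Following this description, the body of rule_limits() can be sketched as follows (a standalone function here; the exact thresholds and form are our assumption):

```python
import math

WIDTH, HEIGHT = 600, 400  # screen size used in this chapter
MAX_V = 20
VRETURN_F = 3.0
RET_MARGIN = 70

def limit(x, y, vx, vy):
    """Steer back when close to a border, then clamp the speed to MAX_V."""
    ax, ay = 0, 0
    if x > WIDTH / 2 - RET_MARGIN:
        ax -= VRETURN_F
    if x < -WIDTH / 2 + RET_MARGIN:
        ax += VRETURN_F
    if y > HEIGHT / 2 - RET_MARGIN:
        ay -= VRETURN_F
    if y < -HEIGHT / 2 + RET_MARGIN:
        ay += VRETURN_F
    vx, vy = vx + ax, vy + ay
    speed = math.hypot(vx, vy)
    if speed > MAX_V:
        vx, vy = vx * MAX_V / speed, vy * MAX_V / speed
    return vx, vy
```

A boid near the right border is pushed left, and an overly fast boid is slowed down without changing its direction.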
3 Fun fact: remember that these boids are actually flying turtles in our case.
While the original motivation for creating L-systems was to study and
model the actual laws behind the process of plant growth, our goal is going to
be more modest. In the case of boids, we were interested in making the onscreen
objects move somewhat like birds, and now we will try to generate patterns
that look somewhat like plants.
In a nutshell, L-systems are based on three foundational ideas. The first
idea is to represent plant structure with a string of characters. For example,
suppose that a certain type of plant is made of three kinds of elements: a
trunk element I, a left leaf L, and a right leaf R. A little specimen of this plant,
consisting of a single trunk element and two leaves, can be described with the
string LRI (see Figure 5.5). Note that any particular image is just an illustration
of one of many possible looks of the LRI plant. The textual description does not
really dictate any particular appearance; it is only responsible for the internal
structure of the entity we describe.
Given enough patience, one can create fairly complicated plants using
this approach, but in practice the description strings quickly become long and
difficult to work with. Thus, the second foundational idea is to specify the
developmental rules of the plant in the form
predecessor → successor
We can make our LRI plant growable by introducing a new element A
(“root”) and specifying its developmental rule:
A → ALRI
Now let’s “seed” the initial plant, consisting only of the root element A.
After three applications of the developmental rule, we will obtain a “triple-
LRI” plant (see Figure 5.6):
A → ALRI → ALRILRI → ALRILRILRI
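This derivation is easy to reproduce in code; a minimal sketch of the rewriting step (the function name is ours):

```python
def rewrite(s, rules):
    """Replace every symbol with its successor; symbols without a rule stay as-is."""
    return "".join(rules.get(c, c) for c in s)

s = "A"
for _ in range(3):
    s = rewrite(s, {"A": "ALRI"})
# s is now "ALRILRILRI"
```

Three applications of the single rule produce exactly the “triple-LRI” plant shown above.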
step 0: g
step 1: g[+g][-g]
step 2: g[+g][-g][+g[+g][-g]][-g[+g][-g]]
Command                Description
any lowercase letter   Go DISTANCE points forward while drawing.
any uppercase letter   Ignore (do nothing).
+                      Turn left by ANGLE degrees.
-                      Turn right by ANGLE degrees.
[                      Start a new branch (pass the control to a new turtle created here).
]                      End a branch (pass the control to the previously active turtle).
4 This is how deterministic context-free (D0L) systems work. There are also other, more advanced kinds of L-systems.
Now let’s look at the picture drawn by the visualization algorithm for the
previously discussed L-system, consisting of the axiom g and a single production
rule g → g[+g][-g] (see Table 5.2). This picture is obtained with DISTANCE
= 25 and ANGLE = 20; the turtle is initially placed at the bottom of the screen,
pointing upward.
Step  String
0     g
1     g[+g][-g]
2     g[+g][-g][+g[+g][-g]][-g[+g][-g]]
…
6     …
The initial plant string g is easy: by executing the command g, the turtle
draws a single vertical line. The next string g[+g][-g] is more interesting. The
first g corresponds to a vertical line, just like before. The next symbol [ starts
a branch: a new turtle appears in the current position, turns ANGLE degrees to
the left (the + command) and draws a line (the g command). The branch ends
here, and the control is passed to the previous turtle. Then the right branch
is drawn in the same manner.
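The same bracket semantics can be expressed without cloning turtles, using an explicit stack of saved positions. A turtle-free sketch that merely collects the drawn line segments (our own formulation, not the book’s code):

```python
import math

def interpret(script, distance, angle):
    """Collect the line segments an L-system script would draw (stack-based)."""
    x, y, heading = 0.0, 0.0, 90.0  # start at the origin, pointing upward
    stack, segments = [], []
    for c in script:
        if c.islower():
            # draw one step forward
            nx = x + distance * math.cos(math.radians(heading))
            ny = y + distance * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif c == "+":
            heading += angle
        elif c == "-":
            heading -= angle
        elif c == "[":
            stack.append((x, y, heading))   # start a branch
        elif c == "]":
            x, y, heading = stack.pop()     # return to the branching point
    return segments
```

Uppercase letters fall through all branches and are ignored, exactly as in the command table.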
Clearly, these descriptions contain many unnecessary commands. For example, the next plant string starts with the already familiar fragment g[+g][-g]. After finishing it, the turtle proceeds to a more complex structure
step 0: A
step 1: g[+A][-A]
step 2: g[+g[+A][-A]][-g[+A][-A]]
step 3: g[+g[+g[+A][-A]][-g[+A][-A]]][-g[+g[+A][-A]][-g[+A][-A]]]
...
Being represented with uppercase letters, buds are ignored by the turtle
(and thus are invisible).
5.5.1 IMPLEMENTATION
Unlike our previous simulations, this program does not need animation, and
will finish its work as soon as the specified plant is drawn. Let’s start with
the basic template, shown in Listing 5.3.
import turtle
from dataclasses import dataclass

WIDTH = 600
HEIGHT = 400

# example plant
AXIOM = "A"
RULES = {"A": "g[+A][-A]"}
ANGLE = 20
DISTANCE = 25
STEPS = 6

def setup_screen(title):
    turtle.setup(WIDTH, HEIGHT)
    turtle.tracer(0, 0)
    turtle.title(title)

@dataclass
class LSystem:
    script: str

    @classmethod
    def create(cls):
        ...

setup_screen("L-systems")

# root drawing turtle: bottom of the screen, pointing upward
drawer = turtle.Turtle()
drawer.hideturtle()
drawer.penup()
drawer.goto(0, -HEIGHT / 2)
drawer.left(90)
drawer.pendown()

LSystem.create().draw(drawer)
turtle.done()
There are only two missing fragments in this code. The create() method
will simulate STEPS iterations of plant development and save the resulting
description string in the script variable. The draw() method will visualize the
plant stored in script.
Let’s create the plant growing functionality first:
...
@dataclass
class LSystem:
    script: str

    @classmethod
    def create(cls):
        r = cls(AXIOM)
        for _ in range(STEPS):
            r.transform()
        return r

    def transform(self):
        self.script = "".join([self.apply_rule(c) for c in self.script])
    ...
...
The main work here is done inside transform(). It creates the next version of
self.script by concatenating all the symbols obtained by applying a matching
rule for each symbol c in the current self.script. The apply_rule() function
returns the right-hand side of the matching rule for the given character, or
the character itself if no matching rule is found.
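As described, apply_rule() is essentially a one-liner over the RULES dictionary (a sketch; the book’s version may differ in form):

```python
RULES = {"A": "g[+A][-A]"}  # the example plant from Listing 5.3

def apply_rule(c):
    """Return the successor of c, or c itself if no rule matches."""
    return RULES.get(c, c)
```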
The drawing functionality is longer but simple:
...
@dataclass
class LSystem:
    script: str
    ...
    def draw(self, drawer):
        while self.script:
            c, self.script = self.script[0], self.script[1:]
            if c.islower():
                drawer.forward(DISTANCE)
                turtle.update()
            if c == "+":
                drawer.left(ANGLE)
            if c == "-":
                drawer.right(ANGLE)
            if c == "[":
                self.draw(drawer.clone())
            if c == "]":
                return
Here we set up a turtle first, and then proceed to execute the commands
stored in self.script. Each step “eats up” the first character of self.script,
so the drawing process ends when this string becomes empty. To make the
visualization more fun to watch, turtle.update() is called every time something
new appears on the screen.
Figure 5.7 shows the visualization of the following L-system:
# Prusinkiewicz & Lindenmayer, Fig. 1.24-f
AXIOM = "X"
RULES = {"X": "g-[[X]+X]+g[+gX]-X", "g": "gg"}
ANGLE = 22.5
DISTANCE = 5
STEPS = 5
Hardin’s essay caused much debate, mainly because he was primarily concerned
with the problem of overpopulation, and his idea was to agree on the
rules of conscious human reproduction (the problem of overgrazing had been
discussed in the literature long before). It is no surprise that such a proposal
ignited quite a stir, with numerous researchers pointing out the differences
between the actual cases where overgrazing occurs and seemingly similar,
but ultimately distinct, scenarios. In any case, Hardin’s main message, to
recognize the limits of common resources and to negotiate for the mutual good,
remains relevant, and in our next simulation we’ll see how reasonable quotas
help to prevent overfishing.
For starters, we need to be able to simulate the process of population
growth. The simplest and perhaps most widely used approach is to employ
the logistic function [41]:
∆P = rP(1 − P/K)
Here ∆P is the change of population per unit of time, r is the natural growth
rate, P is the current population size, and K is the “carrying capacity”,
representing the maximum population size that can survive in the given
environment.
For example, suppose that 100 tons of fish live in a basin that can support
up to 3000 tons of fish, and the natural growth rate is 20% per year. After
one year, the basin will receive an additional

∆P = 0.2 · 100 · (1 − 100/3000) ≈ 19

tons of fish. After 40 years or so, the fish population will reach its natural
limits (see Figure 5.8).
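This computation is easy to check in code by iterating the logistic update (the function name is ours):

```python
def logistic_history(p, r, k, years):
    """Iterate ∆P = rP(1 − P/K) and record the population after each year."""
    history = [p]
    for _ in range(years):
        p += r * p * (1 - p / k)
        history.append(p)
    return history

h = logistic_history(100, 0.2, 3000, 40)
# h[1] is about 119.3: the first year adds roughly 19 tons
```

By the end of the run the population sits close to the carrying capacity of 3000 tons.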
To examine the effects of fishing, let’s remove a fixed amount of fish from
the basin every year, and see what happens.
5.6.1 IMPLEMENTATION
Like the previous project, this one does not need animation. It is also quite
simple, so we can proceed straight to the code (see Listing 5.4).
import turtle

WIDTH = 600
HEIGHT = 400
YEARS = 40
P = 100
r = 0.2
K = 3000
HARVEST = 0

def setup_screen(title):
    turtle.setup(WIDTH, HEIGHT)
    turtle.tracer(0, 0)
    turtle.title(title)
    turtle.setworldcoordinates(0, 0, YEARS, K)

setup_screen("Tragedy of the commons")
drawer = turtle.Turtle()
drawer.hideturtle()

# plot the population year by year (a sketch of the omitted fragment:
# apply the logistic update, minus the harvest, and draw P)
drawer.penup()
for year in range(YEARS + 1):
    drawer.goto(year, P)
    drawer.pendown()
    P += r * P * (1 - P / K) - HARVEST

turtle.update()
turtle.done()
In its current form, the program draws the same chart as shown in Figure
5.8. The simulation becomes more interesting when a yearly catch of
HARVEST tons of fish is introduced. Let’s see what happens, for example, in a
lake with an existing population of 1000 tons of fish and a yearly harvest of
120 tons (Figure 5.9).
This is a “conscious consumption” scenario, good both for the people and
for the fish. However, the difference between this idyllic picture and predatory
abuse is a mere 20 extra tons a year (see Figure 5.10).
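The difference is easy to verify with the same update rule (a sketch; the constants are the ones from the listing, the function name is ours):

```python
def population_after(p, harvest, years, r=0.2, k=3000):
    """Apply the logistic update with a fixed yearly catch."""
    for _ in range(years):
        p = max(p + r * p * (1 - p / k) - harvest, 0)
    return p

population_after(1000, 120, 200)  # settles near 2170 tons
population_after(1000, 140, 200)  # the population collapses to zero
```

With a 120-ton catch, the population climbs to a new stable equilibrium; with 140 tons, the starting population of 1000 tons lies below the unstable lower equilibrium, and the stock is wiped out.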
FIGURE 5.9: Dynamics of fish population (harvesting 120 tons of fish yearly).
http://www.shodor.org/interactivate/activities/RabbitsAndWolves/
Let’s implement this project first, and then discuss the life of rabbits and
wolves in more detail.
6.1.1 IMPLEMENTATION
The rules above are no match for “boids” in terms of simplicity and elegance,
so the final program is going to be relatively long and complicated.
We’ll start with the outline provided in Listing 6.1, which follows the scheme
used in our previous animation-based projects.
import turtle
from random import uniform, randint
from dataclasses import dataclass

@dataclass
class WolfConfig:
    fat_use: float = 0.04
    max_age: int = 17
    delivery_age: int = 4
    delivery_p: float = 0.4
    fat_factor: float = 0.6
    shape: str = "classic"
    color: str = "black"

# same as before
@dataclass
class SimState:
    done: bool

    def set_done(self):
        self.done = True

    @classmethod
    def setup(cls):
        r = cls(False)
        turtle.listen()
        turtle.onkeypress(r.set_done, "space")
        return r

@dataclass
class WorldState:
    ...  # TODO

def setup_screen(title):
    turtle.setup(W * 20, H * 20)
    turtle.tracer(0, 0)
    turtle.title(title)
    turtle.setworldcoordinates(0, 0, W, H)

setup_screen("Rabbits and wolves")
sim_state = SimState.setup()
world_state = WorldState.setup()

def tick():
    if not sim_state.done:
        world_state.update()
        turtle.update()
        turtle.ontimer(tick, SLEEP_MS)

tick()
turtle.done()
    def update(self):
        self.cycle += 1
        rabbits = self.animals(self.rabbits)
        wolves = self.animals(self.wolves)
        print(f"{self.cycle}\t{len(rabbits)}\t{len(wolves)}")  # for logging
        for v in self.grass.values():
            v.grow()
        self.keep_alive(self.rabbits)
        self.keep_alive(self.wolves)
        self.move_and_deliver(self.rabbits)
        self.move_and_deliver(self.wolves)

    @classmethod
    def coords(cls, count):
        r = set()
        while len(r) < count:
            r.add((randint(0, W - 1), randint(0, H - 1)))
        return r

    @classmethod
    def setup(cls):
        coords = [(x, y) for x in range(W) for y in range(H)]
The simulation starts with the call to setup(). Biota lives in three separate
“planes”: grass, rabbits, and wolves, represented with dictionaries. A key in
such a dictionary is a pair of coordinates (x, y), and a value is a plant or an
animal.
Since grass covers the whole plane, a Grass object is created for every possible
pair of coordinates. To create rabbits and wolves, we start by preparing
their planes filled with None values. Next, the necessary number of random
coordinate pairs is generated in coords(). Sets cannot contain duplicate values,
so the function returns only after enough unique pairs are produced.
The call dic1.update(dic2) copies key/value pairs from dic2 into dic1,
overwriting existing keys. Thus, None values in rabbits and wolves will be replaced
with Animal objects. Since the rules for both types of animals are nearly the
same, we’ll rely on the same class Animal with different settings.
World updates are handled in the update() function. It processes biota in a
straightforward manner: first the grass grows, then rabbits and wolves update
their fat reserves. A grass object does not need anything to grow, while the
animals might eat the food located in their cell. After this process,
we update the animal planes to keep only the animals that survived this round.
Finally, each animal has a chance to move to a neighboring cell and deliver
an offspring.
The coordinates for the target cell are determined randomly as follows:
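A sketch of this selection, consistent with the torus behavior described next (W, H and the function name here are illustrative):

```python
from random import randint

W, H = 30, 20  # board dimensions (illustrative values)

def random_target(x, y):
    """Pick a random cell in the 3x3 neighborhood, wrapping around the borders."""
    return (x + randint(-1, 1)) % W, (y + randint(-1, 1)) % H
```

The modulo operation is what makes an animal that “jumps off” one side reappear on the opposite side.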
If an animal jumps off the board side, it appears at the opposite side, which
means our patch of land actually has the shape of a torus (i.e., a donut). If the
target cell is empty, we move the animal there and possibly create a baby
animal at the previous location.
Our next task is to design the Grass class (see Listing 6.3).
@dataclass
class Shape:
    drawer: turtle.Turtle

    def update(self, size, visible):
        # resize the shape and toggle its visibility
        self.drawer.shapesize(size)
        if visible:
            self.drawer.showturtle()
        else:
            self.drawer.hideturtle()

    @classmethod
    def create(cls, shape, color, size, coords):
        r = turtle.Turtle()
        r.shape(shape)
        r.color(color)
        r.penup()
        r.shapesize(size)
        r.goto(coords[0], coords[1])
        r.right(90)
        return cls(r)

@dataclass
class Grass:
    shape: Shape
    amount: float

    def grow(self):
        self.amount = min(self.amount + GRASS_GROWTH, 1)
        self.shape.update(self.amount, True)

    def try_eat(self, to_eat):
        # return the actual amount eaten and shrink accordingly
        eaten = min(self.amount, to_eat)
        self.amount -= eaten
        self.shape.update(self.amount, True)
        return eaten

    @classmethod
    def create(cls, coords):
        amount = uniform(0, 1)
        color = "lawn green"
        return cls(Shape.create("circle", color, amount, coords), amount)
The utility Shape class is a simple wrapper for a turtle-based drawer. Its
only capability is to create a turtle of the specified shape, size, and color in
the given location, and to update it upon request.
Grass plants are represented with green circles. Note the try_eat() function:
it will be called by an animal that wants to eat the to_eat amount of grass. The
Grass object will return the actual amount eaten (which cannot exceed the
amount currently available) and reduce its size accordingly.
Finally, let’s consider the Animal class (see Listing 6.4).
    def is_alive(self):
        return self.fat > 0 and self.age <= self.cfg.max_age

    @classmethod
    def create_full(cls, cfg, age, fat, coords):
        shape = Shape.create(cfg.shape, cfg.color, fat, coords)
        return cls(shape, fat, age, cfg)

    @classmethod
    def create_newborn(cls, cfg, coords):
        return cls.create_full(cfg, 0, NEWBORN_FAT, coords)

    @classmethod
    def create(cls, cfg, coords):
        age = randint(0, cfg.max_age)
        return cls.create_full(cfg, age, uniform(0, 1), coords)
Perhaps the trickiest function here is try_eat(), which is only called
for rabbit objects. A rabbit escapes (zero is returned) if its fat reserve
exceeds the hunting wolf’s needs. Otherwise, all the fat is consumed, and the
rabbit will be removed from the board on the next turn.
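A standalone sketch of try_eat() as described (the class name Prey is ours; in the program this logic lives in the shared Animal class):

```python
from dataclasses import dataclass

@dataclass
class Prey:
    fat: float

    def try_eat(self, to_eat):
        # the rabbit escapes if its fat exceeds what the wolf needs
        if self.fat > to_eat:
            return 0
        # otherwise all the fat is consumed; the rabbit dies next turn
        eaten, self.fat = self.fat, 0
        return eaten
```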
According to the RabbitCfg and WolfConfig classes, the rabbits are going to
be represented with turtle shapes (because they vaguely resemble turtles),
and the wolves will have “classic” shapes with pointy ears.
A running program should produce a picture similar to the one shown in
Figure 6.1.
6.1.2 AFTERTHOUGHTS
It is fun to watch “Rabbits and wolves” running, and you can obtain very
different outcomes by playing with the configuration. The big question is
whether this little world has any relation to the real world, where actual
rabbits and wolves live.
Clearly, a handful of arbitrary rules that reflect the most general ideas of a
population lifecycle cannot capture the complexities of reality. However, even
thousands of rules cannot capture these complexities either, meaning that the
practical goal of a typical computer simulation should be to model certain
aspects of reality with a certain acceptable accuracy.
In “The tragedy of the commons”, we could see how it works. The complex
concept of a fish population was boiled down to a simple formula that makes
no distinction between individual fish, and it does not reflect any damage
done by natural disasters or predators. Yet it is reasonably accurate in certain
contexts, which makes it useful in practice. Broadly speaking, any scientific
theory (and the fish population growth formula is a scientific theory) comes
with clearly stated conditions where it is applicable.
Our “Rabbits and wolves” simulation belongs to a family of widely used
predator-prey models. In the most commonly discussed scenario, there are
only two populations: prey that always have enough food, and predators that
depend on the prey. There is also a fairly standard approach to simulating
their dynamics using the Lotka-Volterra equations [19]. These equations produce
a picture of a growing prey population, which triggers the growth of predators,
in turn making the prey population shrink and, therefore, reducing the predator
count. Both populations continue to oscillate in this manner indefinitely (see
Figure 6.2). By adjusting model parameters (such as predator or prey growth
rate and their sensitivity to the size of the other population), it is possible
to simulate different kinds of animals.
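The oscillations are easy to reproduce numerically; a minimal sketch using Euler steps, with illustrative parameter values (all constants here are our assumption):

```python
def lotka_volterra(prey, pred, steps=10000, dt=0.01,
                   a=1.0, b=0.1, c=1.5, d=0.075):
    """Integrate dx/dt = ax - bxy, dy/dt = dxy - cy with Euler steps."""
    history = [(prey, pred)]
    for _ in range(steps):
        dprey = (a * prey - b * prey * pred) * dt
        dpred = (d * prey * pred - c * pred) * dt
        prey, pred = prey + dprey, pred + dpred
        history.append((prey, pred))
    return history

h = lotka_volterra(10, 5)
```

Starting below the equilibrium point (c/d, a/b) = (20, 10), the prey population first grows past it, the predators follow with a lag, and both keep cycling.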
Just like the fish population model based on the logistic function, the Lotka-Volterra
model operates under strong constraints that are rarely seen in nature.
Yet similar cycles can be observed in reality. A very popular example
is the life of the Canadian lynx and the snowshoe hare in Canada [9] (see
Figure 6.3). It is clear, however, that in the real world every cycle is different,
and none of the peaks is destined to have a perfect twin in the future.

FIGURE 6.3: Population dynamics of the snowshoe hare and the Canadian lynx.
FIGURE 6.4: Population dynamics of rabbits and wolves (steps 180 to 280).
Now let’s go back to “Rabbits and wolves”. By experimenting with the settings,
it is easy to get a scenario where one of the populations quickly dies out due to
lack of food. Rabbits can even go extinct without wolves if they propagate
rapidly and eat too much. It seems, however, that any reasonable combination
of parameters that allows both populations to survive a few hundred
simulation steps produces a chart similar to the one shown in Figure 6.3. For
example, Figure 6.4 shows the life of rabbits and wolves obtained with the
default configuration provided in Listing 6.1.
In the configuration used here, quite large fat reserves are allocated to the
wolves, so they react to the diminishing rabbit population with a delay.
Smaller reserves would bring the population peaks closer to each other, but they
would make the wolves more vulnerable.
One interesting feature of our project, the model at shodor.org, and their
earlier inspiration Wa-Tor [7] is the instability of the obtained ecosystem.
A small change in a parameter, or simply a streak of unlucky events, may
bring it to the verge of extinction. This effect can also be observed in classic
predator-prey models, so our rabbits and wolves are in good company. One
interesting way to destabilize the ecosystem is to provide too much food to
the prey, which triggers dangerous growth of the predator population (this effect
is known as the paradox of enrichment [46]).
Apparently, real-life populations are much more robust, being able to deal
with shortages of food and natural disasters. There are also mechanisms
protecting them against the “paradox of enrichment” scenario [47]. Now we will
proceed to the next project, keeping in mind that “Rabbits and wolves” still
has much room for further improvement.
6.2 EVOLUTION
Rabbits and wolves in our simulation never change. One population cycle
follows another, but no rabbit and no wolf can learn anything from numerous
generations of their ancestors. In contrast, creatures of our world evolve, and
as a result, become better adapted to their environment.
The evolution we see in nature is a complicated process, but its effects can be
observed even in very simple models. To create such a model, it is important to
understand that the term “evolution” (or “adaptive evolution”) might mean
either the processes actually happening in the world of living beings, or the
general idea of the mechanism that produces adaptive changes. It turns out
that this mechanism kicks in when the following conditions are met [17]:
• Next, each bug consumes one unit of energy and eats the plankton
beneath it (if available), obtaining FOOD_ENERGY units. A bug’s energy
cannot exceed MAX_ENERGY, and the bug dies if its energy drops to
zero. After eating, the bug makes a random turn and moves forward
to one of the adjacent cells. There are eight possible directions of
movement², so the bug might turn 0, 45, …, 315 degrees to the left.
The probability of each turn is determined by its weight; each bug
receives a list of eight random weights upon creation.
• Bugs that survived MATURITY_AGE iterations and accumulated at least
FISSION_ENERGY units of energy propagate by division (“fission”). The
parent bug is replaced by two newborn bugs, each having one-half of
their parent’s energy. Direction weights are inherited with deviations:
one descendant gets one randomly chosen weight halved, while the
other descendant gets one randomly chosen weight doubled.
The conditions for evolution are satisfied in the world of bugs as follows:
1. Bugs differ by their turning behavior. Some are more likely to turn
left, for example, while others might prefer 180° U-turns.
2. Turning behavior is inheritable, but with random weight modifications.
3. Plankton can easily be made scarce by reducing PLANKTON_GROWTH, which
makes life harder for the bugs.
4. Not every movement strategy is equally good at finding food. Thus, the
bugs with more successful behavior are more likely to propagate.
Before proceeding to the implementation, let’s discuss how to evaluate the
final result. How would we know that the bugs evolve over time? It turns
out that devising a good criterion is not so easy. We can’t expect the average
bug lifespan to increase, because at a certain age bugs fission. We
can’t expect the population to grow, because scarce resources limit the size of
a sustainable colony. We can’t even say that especially successful bugs produce
more offspring, because every passable individual is going to bear exactly two
successors.
My proposal is to count the number of unique cells visited by a bug over
its lifespan and report the average value for each generation. The rationale is
simple: any successful foraging strategy should be based on harvesting existing
plankton. Since new plankton units appear in random bowl areas, there is little
sense in revisiting the same cells in the hope of getting lucky.
Exploration does not guarantee finding food, but we can check whether
the bugs at least tried.
6.2.1 IMPLEMENTATION
Just like “Rabbits and wolves”, “Evolution” is relatively long, so let’s start
with an outline (see Listing 6.5).
INITIAL_ENERGY = 120
MAX_ENERGY = 400
FISSION_ENERGY = 250
MATURITY_AGE = 200
FOOD_ENERGY = 20
MAX_WEIGHT = 32
PLANKTON_GROWTH = 4
PLANKTON_COUNT = 300
BUGS_COUNT = 50

@dataclass
class SimState:
    done: bool

    def set_done(self):
        self.done = True

    @classmethod
    def setup(cls):
        r = cls(False)
        turtle.listen()
        turtle.onkeypress(r.set_done, "space")
        return r

@dataclass
class Plankton:
    ...

@dataclass
class Bug:
    ...

@dataclass
class WorldState:
    ...

def setup_screen(title):
    turtle.setup(W * CELLSIZE, H * CELLSIZE)
    turtle.tracer(0, 0)
    turtle.title(title)
    turtle.setworldcoordinates(0, 0, W, H)

setup_screen("Evolution")
sim_state = SimState.setup()
world_state = WorldState.setup()

def tick():
    if not sim_state.done:
        world_state.update()
        turtle.ontimer(tick, SLEEP_MS)

def tick_draw():
    if not sim_state.done:
        turtle.update()
        world_state.report_visits()
        turtle.ontimer(tick_draw, VIS_SLEEP_MS)

tick()
tick_draw()
turtle.done()
So far, this is just a generic prototype, very similar to the one used in “Rabbits
and wolves”. In this case, however, we have to strive for performance, since the
goal is to examine dozens of generations of bugs. For this reason, the screen
update is moved into a separate function, tick_draw(), which is called less
frequently than the primary world update function tick().
Let’s implement the simplest missing class Plankton (see Listing 6.6).
@dataclass
class Plankton:
    shape: turtle.Turtle

    def energy(self):
        return FOOD_ENERGY if self.shape.isvisible() else 0

    def show(self, visible):
        # plankton is "eaten" by hiding its shape
        if visible:
            self.shape.showturtle()
        else:
            self.shape.hideturtle()

    @classmethod
    def create(cls, x, y):
        p = turtle.Turtle()
        p.penup()
        p.color("lawn green")
        p.shape("triangle")
        p.shapesize(SHAPE_SIZE)
        p.goto(x, y)
        p.hideturtle()
        return cls(p)
@dataclass
class Bug:
    shape: turtle.Turtle
    dirweights: list
    energy: int
    generation: int
    visited: set
    age: int = 0

    def x(self):
        return clamp(round(self.shape.xcor()), 0, W - 1)

    def y(self):
        return clamp(round(self.shape.ycor()), 0, H - 1)

    def remove(self):
        gen_visited[self.generation] += len(self.visited)
        self.shape.hideturtle()

    def eat_and_move(self, food_energy):
        ...  # update age and energy, record the visited cell, make a weighted turn
        self.shape.forward(1)
        self.shape.goto(self.x(), self.y())

    def fission(self):
        if self.age > MATURITY_AGE and self.energy > FISSION_ENERGY:
            self.remove()
            e = int(self.energy / 2)
            dirs1 = self.new_dirweights(self.dirweights, 2)
            dirs2 = self.new_dirweights(self.dirweights, 0.5)
            return [
                Bug.create(self.x(), self.y(), dirs1, e, self.generation + 1),
                Bug.create(self.x(), self.y(), dirs2, e, self.generation + 1),
            ]
        return [self]

    @classmethod
    def create(cls, x, y, weights, energy, gen):
        if gen == len(gen_count):
            print(f"Generation: {gen}")
            gen_count.append(0)
            gen_visited.append(0)
        gen_count[gen] += 1
        p = turtle.Turtle()
        p.penup()
        p.shape("turtle")
        p.shapesize(SHAPE_SIZE)
        p.goto(x, y)
        return cls(p, weights, energy, gen, set())

    @classmethod
    def create_random(cls):
        x = randint(0, W - 1)
        y = randint(0, H - 1)
        weights = [randint(1, MAX_WEIGHT) for _ in range(8)]
        return cls.create(x, y, weights, INITIAL_ENERGY, 0)
        r = choices(list(range(8)), self.dirweights)[0]
        self.shape.left(45 * r)
This fragment makes use of the function choices() from Python’s random
module, which generates a list of k random elements chosen from its first
argument, using the weights from its second argument. By default k=1, so we
obtain a single-element list containing a value in the range 0–7. If you are
interested in how weighted random choice works, check out, for example, the
detailed blog posts by Zhanliang Liu [25] and Keith Schwarz [48].
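A quick illustration of the weighted choice (the weights here are just an example):

```python
from random import choices

weights = [32, 16, 2, 32, 2, 8, 4, 8]  # example direction weights
r = choices(range(8), weights)[0]      # a single index in 0..7, drawn by weight
```

Directions with larger weights, such as indices 0 and 3 above, are drawn proportionally more often; a zero weight makes an option impossible.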
After moving forward via self.shape.forward(1), a bug might end up on
the border between cells. Since the cell coordinates of a bug are only needed to
check for the plankton beneath, we can simply use the bug’s rounded coordinates
when necessary.
When a bug fissions, we create two versions of the updated list of direction
weights:

    dirs1 = self.new_dirweights(self.dirweights, 2)
    dirs2 = self.new_dirweights(self.dirweights, 0.5)

The first version has a random weight doubled; the second has a random
weight halved. Bugs that don’t fission are returned from
the fission() method unchanged.
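new_dirweights() itself is not shown here; a sketch consistent with this description (our own formulation):

```python
from random import randint

def new_dirweights(weights, factor):
    """Copy the weights, scaling one randomly chosen entry by factor."""
    r = list(weights)
    i = randint(0, len(r) - 1)
    r[i] *= factor
    return r
```

Each descendant thus inherits the parent’s turning preferences with exactly one random mutation.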
Bugs that fission or die are removed by the call to remove(). This func-
tion hides the bug and adds its unique visited cell count to the stats of its
generation.
Finally, let’s bind everything together using WorldState (see Listing 6.8).
@dataclass
class WorldState:
    plankton: list
    bugs: list
    cycle: int = 0
    mingen: int = 0

    def update(self):
        self.cycle += 1
        self.add_plankton(PLANKTON_GROWTH)
        for b in self.bugs:
            food = self.plankton[b.x()][b.y()]
            b.eat_and_move(food.energy())
            food.show(False)

    def report_visits(self):
        # report past generations only
        new_mingen = min(b.generation for b in self.bugs)
        if new_mingen != self.mingen:
            self.mingen = new_mingen
            visits = (gen_visited[i] / gen_count[i] for i in range(new_mingen))
            print([f"{v:.2f}" for v in visits])

    @classmethod
    def setup(cls):
        bugs = [Bug.create_random() for _ in range(BUGS_COUNT)]
        plk = [[Plankton.create(x, y) for y in range(H)] for x in range(W)]
The logic of this class follows the original description of the project. On
each simulation step, PLANKTON_GROWTH new instances of plankton are added to
the “soup”. If a cell chosen for the plankton turns out to be already occupied,
nothing happens. Each bug eats and moves (hidden plankton objects give zero
energy supply), then the bugs that did not survive are removed. To form the
final list of bugs after fission, a little trick based on sum() is used. Suppose
there are three bugs: a, b, and c. The bug b is about to fission into b1 and b2.
In this case,
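The lists involved can be shown directly (a sketch using the names from the text):

```python
a, b1, b2, c = "a", "b1", "b2", "c"

# what fission() returns for each of the three bugs:
parts = [[a], [b1, b2], [c]]
flat = sum(parts, [])  # == [a, b1, b2, c]
```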
The call to sum() sums up these lists, yielding [a, b1, b2, c]. The second
argument of sum() is also added to the result. By default it equals zero, but
adding numbers to lists is not allowed, so we have to pass an empty list instead.
The function report_visits() outputs the average number of unique cells
visited by a bug of each generation. Only past generations (not currently
present on the board) are reported, and nothing is reported if there were no
changes since the last call to the function.
Now the program is complete and can be launched (see Figure 6.5).
The result of each test run is going to be different, and some runs might
yield better results than others. We have to remember that evolution has no
goals or purposes: the organisms that simply happen to survive and propagate
pass their features to their descendants, ensuring the presence of these features
in the future. As a result of evolution, organisms may become simpler or more
complex, their size may increase or decrease, and their eating habits may
change unpredictably.
In our case, “visiting as many cells as possible” is just a reasonable heuristic
for successful foraging behavior. In reality, a skillful forager bug might
get really unlucky if its surroundings are scarce, while a simple “jitterbug”
moving back and forth inside a fertile area might live long and prosper. Still,
most runs will likely produce many generations of impressive foragers.
Consider, for example, two bugs having direction weights [32, 16, 2, 32, 2,
8, 4, 8] and [32, 16, 2, 16, 2, 8, 16, 4]. These lists can be visualized with radar
charts (see Figure 6.6). Clearly, such bugs prefer moving forward, making
occasional turns, which lets them cover large areas of unexplored space.
The average number of cells visited by the bugs of subsequent generations
also goes up (see Figure 6.7).
Aggressive foraging behavior, however, is only necessary in scarce environments.
A bug created as a result of fission gets at least 125 units of energy
and has to survive 200 turns to reach maturity. The next goal is to accumulate
250 units of energy. Since the bug spends one energy unit on each turn, and
plankton gives 20 units, it is enough to eat just 17 plankton during a lifetime
in order to fission: 125 − 200 + 20 × 17 = 265 > 250.
https://www.cdc.gov/fungal/diseases/ringworm/
6.3.1 IMPLEMENTATION
This simulation relies on various ideas and techniques we used previously, so
let’s discuss its complete implementation right away (Listing 6.9).
import turtle
from enum import Enum
from random import randint
from dataclasses import dataclass
H = 41
W = 41
SLEEP_MS = 20
CELLSIZE = 15 # pixels
SHAPE_SIZE = CELLSIZE / 20 # turtle size
@dataclass
class SimState:
done: bool
def set_done(self):
self.done = True
@classmethod
def setup(cls):
r = cls(False)
turtle.listen()
turtle.onkeypress(r.set_done, "space")
return r
class Status(Enum):
HEALTHY = 1
INFECTED = 2
IMMUNE = 3
@dataclass
class Cell:
shape: turtle.Turtle
status: Status
count: int
@classmethod
def create(cls, x, y):
p = turtle.Turtle()
p.penup()
p.shape("circle")
p.shapesize(SHAPE_SIZE)
p.goto(x, y)
p.color("white")
return cls(p, Status.HEALTHY, 0)
    def update_status(self, status, count):
        self.status = status
        self.count = count
        colors = {
            Status.HEALTHY: "white",
            Status.INFECTED: "red",
            Status.IMMUNE: "blue",
        }
        self.shape.color(colors[status])
def update(self):
self.count = max(0, self.count - 1)
if self.count == 0:
if self.status == Status.IMMUNE:
self.update_status(Status.HEALTHY, 0)
elif self.status == Status.INFECTED:
self.update_status(Status.IMMUNE, 4)
@dataclass
class WorldState:
cells: list
    def spread_from(self, x, y):
        # an infected cell tries to pass the infection to each of its
        # neighbors; a healthy neighbor catches it with a 50% chance
        r = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < W and 0 <= ny < H:
                    c = self.cells[nx][ny]
                    if c.status == Status.HEALTHY and randint(0, 1) == 1:
                        r.append(c)
        return r
def update(self):
for x in range(W):
for y in range(H):
self.cells[x][y].update()
to_infect = []
for x in range(W):
for y in range(H):
if self.cells[x][y].status == Status.INFECTED:
to_infect += self.spread_from(x, y)
for c in to_infect:
c.update_status(Status.INFECTED, 6)
@classmethod
def setup(cls):
cells = [[Cell.create(x, y) for y in range(H)] for x in range(W)]
cells[W // 2][H // 2].update_status(Status.INFECTED, 6)
return cls(cells)
def setup_screen(title):
turtle.setup(W * CELLSIZE, H * CELLSIZE)
turtle.tracer(0, 0)
turtle.title(title)
turtle.setworldcoordinates(0, 0, W, H)
setup_screen("Ringworm infection")
sim_state = SimState.setup()
world_state = WorldState.setup()
def tick():
if not sim_state.done:
world_state.update()
turtle.update()
turtle.ontimer(tick, SLEEP_MS)
tick()
turtle.done()
http://www.shodor.org/interactivate/activities/ABetterFire/
6.4.1 IMPLEMENTATION
“Forest fire” is even simpler than “Ringworm infection”, because it does not
need to keep track of counters that turn infected cells into immune cells, and
then into healthy cells. Here every change takes just a single turn, which
simplifies the whole logic (see Listing 6.10).
import turtle
import math
from enum import Enum
from random import uniform
from dataclasses import dataclass
H = 41
W = 41
SLEEP_MS = 20
CELLSIZE = 15 # pixels
SHAPE_SIZE = CELLSIZE / 20 # turtle size
WIND_DIRECTION = math.pi / 4 # northeast
WIND_STRENGTH = 0.5
P = 0.6
PTREE = 0.6
@dataclass
class SimState:
done: bool
def set_done(self):
self.done = True
@classmethod
def setup(cls):
r = cls(False)
turtle.listen()
turtle.onkeypress(r.set_done, "space")
return r
class Status(Enum):
TREE = 1
FIRE = 2
EMPTY = 3
@dataclass
class Land:
shape: turtle.Turtle
status: Status = Status.EMPTY
@classmethod
def create(cls, x, y):
p = turtle.Turtle()
p.penup()
p.shape("circle")
p.shapesize(SHAPE_SIZE)
p.goto(x, y)
status = Status.TREE if uniform(0, 1) <= PTREE else Status.EMPTY
        return cls(p).update_status(status)

    def update_status(self, status):
        self.status = status
        colors = {
            Status.TREE: "green",
            Status.FIRE: "red",
            Status.EMPTY: "white",
        }
        self.shape.color(colors[status])
        return self
@dataclass
class WorldState:
cells: list
    def spread_from(self, x, y):
        # a burning tree ignites each neighboring tree with probability P,
        # raised or lowered by how well the spread direction aligns
        # with the wind
        r = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if not (0 <= nx < W and 0 <= ny < H):
                    continue
                c = self.cells[nx][ny]
                if c.status == Status.TREE:
                    fire_dir = math.atan2(dy, dx)
                    wind = WIND_STRENGTH * math.cos(fire_dir - WIND_DIRECTION)
                    if uniform(0, 1) <= P + wind:
                        r.append(c)
        return r
def update(self):
to_burn = []
for x in range(W):
for y in range(H):
if self.cells[x][y].status == Status.FIRE:
to_burn += self.spread_from(x, y)
self.cells[x][y].update_status(Status.EMPTY)
for c in to_burn:
c.update_status(Status.FIRE)
@classmethod
def setup(cls):
cells = [[Land.create(x, y) for y in range(H)] for x in range(W)]
cells[W // 2][H // 2].update_status(Status.FIRE)
return cls(cells)
def setup_screen(title):
turtle.setup(W * CELLSIZE, H * CELLSIZE)
turtle.tracer(0, 0)
turtle.title(title)
turtle.setworldcoordinates(0, 0, W, H)
setup_screen("Forest fire")
sim_state = SimState.setup()
world_state = WorldState.setup()
def tick():
if not sim_state.done:
world_state.update()
turtle.update()
turtle.ontimer(tick, SLEEP_MS)
tick()
turtle.done()
The easiest (but not very efficient) way to calculate the current “fire
direction” Af is to use the already familiar math.atan2(). Each simulation step
consists of two operations: we identify the neighbors to be ignited by each of
the currently burning trees (and turn burning trees into empty spaces as we
go), then mark all identified cells as burning. The result is shown in Figure 6.9.
6.4.2 AFTERTHOUGHTS
Just like in the case of rabbits and wolves, here we are dealing with an example,
approximating a certain real-world process. How realistic is our model?
6.5.1 IMPLEMENTATION
Let’s consider the complete listing of this simulation, obtained by modifying
“Ringworm infection” a bit (see Listing 6.11). The required modifications are
relatively minor. We’ll need to take into account empty cells, because not all
city locations are occupied by inhabitants. Then, we’ll need to move each
inhabitant to an adjacent random empty cell on every simulation step. Just
like in the case of “Rabbits and wolves”, we can presume a toroidal shape of the
city.
http://www.shodor.org/interactivate/activities/SpreadofDisease/
import turtle
from enum import Enum
from random import randint, uniform
from dataclasses import dataclass
H = 41
W = 41
SLEEP_MS = 20
CELLSIZE = 10 # pixels
SHAPE_SIZE = CELLSIZE / 20 # turtle size
IMMUNE_DURATION = 15
INFECTED_DURATION = 7
PINHABIT = 0.4 # probability for a cell to be inhabited
PINFECT = 0.5 # probability for a cell to be infected by a neighbor
@dataclass
class SimState:
done: bool
def set_done(self):
self.done = True
@classmethod
def setup(cls):
r = cls(False)
turtle.listen()
turtle.onkeypress(r.set_done, "space")
return r
class Status(Enum):
HEALTHY = 1
INFECTED = 2
IMMUNE = 3
@dataclass
class Cell:
shape: turtle.Turtle
status: Status
count: int
def x(self):
return int(self.shape.xcor())
def y(self):
return int(self.shape.ycor())
    @classmethod
    def populate(cls, x, y):
        # an inhabitant appears in a cell with probability PINHABIT;
        # empty cells are represented by None
        if uniform(0, 1) > PINHABIT:
            return None
        p = turtle.Turtle()
        p.penup()
        p.shape("circle")
        p.shapesize(SHAPE_SIZE)
        p.goto(x, y)
        r = cls(p, Status.HEALTHY, 0)
        r.update_status(Status.HEALTHY, 0)
        return r

    def update_status(self, status, count):
        self.status = status
        self.count = count
        colors = {
            Status.HEALTHY: "green",
            Status.INFECTED: "red",
            Status.IMMUNE: "blue",
        }
        self.shape.color(colors[status])
def update(self):
self.count = max(0, self.count - 1)
if self.count == 0:
if self.status == Status.IMMUNE:
self.update_status(Status.HEALTHY, 0)
elif self.status == Status.INFECTED:
self.update_status(Status.IMMUNE, IMMUNE_DURATION)
@dataclass
class WorldState:
cells: list
    def spread_from(self, x, y):
        # an infected person infects each healthy neighbor
        # with probability PINFECT
        r = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                c = self.cells[(x + dx) % W][(y + dy) % H]
                if c and c.status == Status.HEALTHY and uniform(0, 1) <= PINFECT:
                    r.append(c)
        return r

    def move(self, p):
        # try a random adjacent cell; wrap around the city borders
        newx = (p.x() + randint(-1, 1)) % W
        newy = (p.y() + randint(-1, 1)) % H
        if not self.cells[newx][newy]:
            self.cells[p.x()][p.y()] = None
            self.cells[newx][newy] = p
            p.shape.goto(newx, newy)

    def print_stats(self, people):
        healthy = sum(1 for p in people if p.status == Status.HEALTHY)
        infected = sum(1 for p in people if p.status == Status.INFECTED)
        print(f"healthy: {healthy}, infected: {infected}, "
              f"immune: {len(people) - healthy - infected}")
def update(self):
to_infect = []
people = sum(([v for v in self.cells[x] if v] for x in range(W)), [])
self.print_stats(people)
for p in people:
p.update()
self.move(p)
if p.status == Status.INFECTED:
to_infect += self.spread_from(p.x(), p.y())
for c in to_infect:
c.update_status(Status.INFECTED, INFECTED_DURATION)
@classmethod
def setup(cls):
cells = [[Cell.populate(x, y) for y in range(H)] for x in range(W)]
cells[W // 2][H // 2].update_status(Status.INFECTED, INFECTED_DURATION)
return cls(cells)
def setup_screen(title):
turtle.setup(W * CELLSIZE, H * CELLSIZE)
turtle.tracer(0, 0)
turtle.title(title)
turtle.setworldcoordinates(0, 0, W, H)
setup_screen("Spread of disease")
sim_state = SimState.setup()
world_state = WorldState.setup()
def tick():
if not sim_state.done:
world_state.update()
turtle.update()
turtle.ontimer(tick, SLEEP_MS)
tick()
turtle.done()
Let’s review the most important changes that had to be introduced. The
simulation starts with “populating” the city and infecting inhabitants of the
central cell:
Here we use the same sum() trick as in “Evolution”: cells is a list of lists,
so each element of cells is a list. It can be easily transformed into a list of
non-empty cells:
[v for v in self.cells[x] if v]
Next, all these lists are concatenated into a resulting “flat” list using sum().
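For instance, with a toy grid of three rows:

```python
cells = [[1, None, 2], [None, 3], []]

# keep only non-empty entries in each row, then concatenate the rows
flat = sum(([v for v in row if v] for row in cells), [])
print(flat)  # [1, 2, 3]
```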
Since the city we simulate has a toroidal shape, every time there is a chance
to leave the city boundaries, a modulo operator is used to arrive from the
opposite direction:
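A minimal sketch of this wrapping (the helper name here is illustrative, not
taken from the listing):

```python
W, H = 41, 41  # board dimensions, as in the listing

def wrap(x, y):
    # stepping off one edge re-enters the board from the opposite edge
    return x % W, y % H

print(wrap(-1, 40))  # (40, 40)
print(wrap(41, 3))   # (0, 3)
```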
6.5.2 AFTERTHOUGHTS
Once again, it would be good to discuss whether this simulation has any
resemblance to the actual process of disease spread. It turns out that it does:
the basic spread scenario (without subsequent disease waves) produces the
same pattern that can be obtained with conventional mathematical models
actually used for practical purposes.
The most basic such model is known as SIR (susceptible-infected-removed)
[57]. It presumes that each individual belongs to one of three classes, just like
in our case, but immune (“removed”) people stay immune forever. The SIR
model consists of equations representing the change of each class size per
unit of time, and it has only two input parameters: the “disease transmission rate”
and the “recovery rate”. By varying these values, it is easy to obtain a picture
similar to Figure 6.11 (see Figure 6.13).
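A minimal Euler-step sketch of the SIR equations (the parameter values below
are illustrative, not taken from the chapter):

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    # dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

s, i, r = 0.99, 0.01, 0.0  # fractions of the population
for _ in range(100):
    s, i, r = sir_step(s, i, r, beta=0.5, gamma=0.1)
print(round(s + i + r, 6))  # 1.0: the three classes always sum to the whole
```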
Comparing our model with SIR makes sense because SIR is quite accurate
under certain constraints if its parameters are set correctly (which can be a
separate complicated task). There are more advanced variations of SIR that
take into account other classes, such as “vaccinated” in the SIRV model, which
helps model the effects of vaccination.
It is interesting to see how different approaches to the task of modeling a
certain process produce so strikingly similar results. The SIR model, Lotka-
Volterra equations, and the logistic function used in “The tragedy of the
At this point, let’s note that our “Ringworm infection” comes really close
to being called a cellular automaton: it is enough to get rid of randomness and
presume that infected cells always infect their neighbors.8 Initial configuration
of this machine is a single cell in the “infected-1” state. At the next step, this
cell switches to the state “infected-2”, while its “healthy” neighbors become
“infected-1”. Cells reaching the “infected-6” state will switch into “immune-1”,
and so on. Thus, each cell can be in one of 11 states (one healthy, six infected,
four immune), and switching between these states depends entirely on cell
neighborhood (since a cell forms a part of its own neighborhood, unconditional
switching, such as “infected-2” to “infected-3”, still follows this principle).
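The deterministic state cycle described above can be sketched as a pure
function (the state encoding here is my own, chosen for readability):

```python
def next_phase(state):
    # state is "healthy", ("infected", k) with k in 1..6,
    # or ("immune", k) with k in 1..4
    if state == "healthy":
        return "healthy"  # healthy cells change only via infected neighbors
    kind, k = state
    if kind == "infected":
        return ("immune", 1) if k == 6 else ("infected", k + 1)
    return "healthy" if k == 4 else ("immune", k + 1)

print(next_phase(("infected", 6)))  # ('immune', 1)
```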
Over the years, various kinds of cellular automata were proposed. One may
tinker with the rules, the states, or even with the board, which does not have to
be strictly two-dimensional. The variation known as “Life”, proposed by John
Conway in the late 1960s, turned out to be especially popular [14]. The simplicity
of its rules and the richness of cell colonies’ developmental trajectories proved
to be a great combination for its longevity and fame.
Cells in “Life” have two states: “populated” and “empty”. They live in
a two-dimensional grid, and the neighborhood of each cell consists of eight
adjacent cells. “Life” has only three rules:
1. If a populated cell has fewer than two or more than three populated neighbors, it
becomes empty (“dies”).
2. If an empty cell has exactly three populated neighbors, it becomes
populated.
3. Otherwise, the cell preserves its current state.
It is important to understand that births and deaths occur simultaneously.
An empty cell with three neighbors will become populated on the next step.
Likewise, a populated cell with one neighbor will die, but it is still alive and
influences its neighborhood (see Figure 6.14).
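The three rules fit in a few lines; a sketch of the per-cell decision, given
the cell’s state and its count of populated neighbors:

```python
def next_state(populated, neighbors):
    if populated:
        return 2 <= neighbors <= 3  # rules 1 (death) and 3 (survival)
    return neighbors == 3           # rule 2 (birth)

print(next_state(True, 1), next_state(False, 3))  # False True
```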
Now let’s implement “Life”, and then discuss its fascinating world a bit
longer.
6.6.1 IMPLEMENTATION
The complete program is shown in Listing 6.12. It follows the general structure
of “Ringworm infection”, but it adds some notable modifications.
import turtle
from dataclasses import dataclass
H = 41
W = 41
SLEEP_MS = 20
CELLSIZE = 10
SHAPE_SIZE = CELLSIZE / 20 # turtle size
@dataclass
class SimState:
done: bool
def set_done(self):
self.done = True
@classmethod
def setup(cls):
r = cls(False)
turtle.listen()
turtle.onkeypress(r.set_done, "space")
return r
@dataclass
class CellShape:
shape: turtle.Turtle
@classmethod
def create(cls, x, y):
p = turtle.Turtle()
p.penup()
p.shape("circle")
p.shapesize(SHAPE_SIZE)
p.goto(x, y)
p.color("black")
p.hideturtle()
        return cls(p)

    def update(self, alive):
        # a populated cell shows its turtle; an empty one hides it
        if alive:
            self.shape.showturtle()
        else:
            self.shape.hideturtle()
@dataclass
class WorldState:
shapes: list
current: list
next: list
    def val(self, x, y):
        # wrap around the board edges
        return self.current[x % W][y % H]

    def next_status(self, x, y):
        # count populated neighbors, excluding the cell itself
        neighbors = -int(self.current[x][y])
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                neighbors += int(self.val(x + dx, y + dy))
        if self.current[x][y]:
            return neighbors in (2, 3)
        return neighbors == 3
    def update(self):
        for x in range(W):
            for y in range(H):
                self.shapes[x][y].update(self.current[x][y])
                self.next[x][y] = self.next_status(x, y)
        self.current, self.next = self.next, self.current
@classmethod
def setup(cls, population):
shapes = [[CellShape.create(x, y) for y in range(H)] for x in range(W)]
current = [[False for _ in range(H)] for _ in range(W)]
next = [[False for _ in range(H)] for _ in range(W)]
        for x, y in population:
            current[x][y] = True
        return cls(shapes, current, next)
def setup_screen(title):
turtle.setup(W * CELLSIZE, H * CELLSIZE)
turtle.tracer(0, 0)
turtle.title(title)
turtle.setworldcoordinates(0, 0, W, H)
setup_screen("Life")
sim_state = SimState.setup()
world_config = [(20, 20), (21, 20), (22, 20), (22, 21), (21, 22)]
world_state = WorldState.setup(world_config)
def tick():
if not sim_state.done:
world_state.update()
turtle.update()
turtle.ontimer(tick, SLEEP_MS)
tick()
turtle.done()
First let’s note that the “cell” class CellShape merely works as a container for
a turtle that displays the current cell status (empty or populated). Since cells
may have only two states, we can get away with a matrix of Boolean values
instead of full-fledged cell objects.
Next, note that the setup() function of the WorldState class creates two matrices:
current and next. Since all the changes in “Life” have to occur simultaneously,
we will store all the pending changes in the next matrix, and then swap it
with current. This way, we always have the current game board and a buffer
for the next configuration.
When counting neighbors, we have to exclude the current cell (otherwise,
it will be counted as its own neighbor):
neighbors = -int(self.current[x][y])
6.6.2 AFTERTHOUGHTS
The current implementation of “Life” comes with five live cells:
world_config = [(20, 20), (21, 20), (22, 20), (22, 21), (21, 22)]
world_state = WorldState.setup(world_config)
1. If the cell under the ant is white, the ant turns 90° to the right, makes
the cell black, and moves one cell forward.
2. If the cell under the ant is black, the ant turns 90° to the left, makes
the cell white, and moves one cell forward.
At a glance, this description does not look quite right for a proper cellular
automaton: the rules are supposed to specify how a cell switches from one
state to another on the basis of cell neighborhood, rather than direct a moving
creature over a board. However, the rules of Langton’s ant can be rewritten
to follow the usual conventions. Each cell of the board has 10 possible states:
(white, no ant), (white, ant looks left), (white, ant looks up), and so on. In
other words, a cell state is a combination of its color and ant status. Then
the next state of a cell indeed entirely depends on its neighborhood. Say, a
cell with the ant will have the opposite color and no ant on the next turn.
Similarly, a cell having a left white neighbor with the up-looking ant will have
the same color and the right-looking ant on the next turn.
Let’s now implement this system and see what kind of behavior is exhibited
by the virtual ant.
6.7.1 IMPLEMENTATION
A very straightforward implementation of the system is shown in Listing 6.13.
The CellShape class, similar to the one used in “Life”, is responsible for turning
on and off black spots on the board. The ant does not even have its own
dedicated class: it behaves like a turtle (turn and go forward), so we can easily employ
a Turtle object.
import turtle
from dataclasses import dataclass
H = 81
W = 81
SLEEP_MS = 20
CELLSIZE = 5
SHAPE_SIZE = CELLSIZE / 20 # turtle size
@dataclass
class SimState:
done: bool
def set_done(self):
self.done = True
@classmethod
def setup(cls):
r = cls(False)
turtle.listen()
turtle.onkeypress(r.set_done, "space")
return r
@dataclass
class CellShape:
shape: turtle.Turtle
@classmethod
def create(cls, x, y):
p = turtle.Turtle()
p.penup()
p.shape("circle")
p.shapesize(SHAPE_SIZE)
p.goto(x, y)
p.color("black")
p.hideturtle()
return cls(p)
def is_black(self):
return self.shape.isvisible()
def flip(self):
if self.is_black():
self.shape.hideturtle()
else:
self.shape.showturtle()
def make_ant(x, y):
    # the ant is an ordinary turtle we can turn and move forward
    # (the shape and color chosen here are arbitrary; any visible marker works)
    p = turtle.Turtle()
    p.penup()
    p.shape("classic")
    p.shapesize(SHAPE_SIZE)
    p.color("red")
    p.goto(x, y)
    return p

@dataclass
class WorldState:
board: list
ant: turtle.Turtle
step: int = 0
def update(self):
self.step += 1
if self.step % 1000 == 0:
print(f"Performing step: {self.step}")
x = round(self.ant.xcor())
y = round(self.ant.ycor())
turn = 90 if self.board[x][y].is_black() else -90
self.board[x][y].flip()
self.ant.left(turn)
self.ant.forward(1)
@classmethod
def setup(cls):
board = [[CellShape.create(x, y) for y in range(H)] for x in range(W)]
ant = make_ant(W // 2, H // 2)
return cls(board, ant)
def setup_screen(title):
turtle.setup(W * CELLSIZE, H * CELLSIZE)
turtle.tracer(0, 0)
turtle.title(title)
turtle.setworldcoordinates(0, 0, W, H)
setup_screen("Langton's ant")
sim_state = SimState.setup()
world_state = WorldState.setup()
def tick():
if not sim_state.done:
world_state.update()
turtle.update()
turtle.ontimer(tick, SLEEP_MS)
tick()
turtle.done()
The screenshot in Figure 6.18 shows the situation on the board after the
first 1000 steps. The picture reveals no discernible pattern behind the ant’s
trajectory. This impression is even stronger if the ant is watched in real time:
its nearly random movements do not seem to form regular shapes.
This effect shows the emergence of complex behavior from a set of simple
rules, which is actually the whole point of the system. The situation becomes
even more complex if we consider the interaction between several ants forming
a colony. Interestingly, the trajectory of an ant on an empty board eventu-
ally stabilizes after around 10,000 iterations, when the ant begins building a
straight “path”, consisting of smaller segments, each taking 104 steps to make
(see Figure 6.19).
Later experiments showed that Langton’s ant, like many other cellular au-
tomata (including “Life”), exhibits a property of computational universality.
It means that an ant can theoretically perform any computational algorithm,
encoded along with its input data in the initial configuration of the board
[30].
6.8.1 IMPLEMENTATION
The first issue to sort out is the hexagonal grid, which we never used before. The
simplest approach for making it is to treat individual hexagonal cells as points
on the regular square grid. When displayed, every second row is shifted to the
right by one-half of a cell size (see Figure 6.20). This method allows us to employ
a regular Cartesian coordinate system with each cell addressed by its x and
y coordinates.
Since growing snowflakes requires processing the immediate neighborhood
of a cell, we need to be able to retrieve a list of neighbors for any specified
location. As a concrete example, let’s consider two such locations: (1, 1) and
(4, 2). Four out of six neighbors can be obtained by looking to the left, to the
right, up, and down (see Figure 6.21).
Now it should be clear that the remaining two neighbors are harder to
identify. For (4, 2), we need two cells lying diagonally to the left, while for (1,
1) the remaining cells lie diagonally to the right. The choice depends on the
location’s row: for even rows we have to look left, and for odd rows we have
to look right (see Figure 6.22).
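A sketch of this parity rule (board wrapping is omitted here for clarity):

```python
def diagonal_neighbors(x, y):
    # even rows look left for their two diagonal neighbors, odd rows right
    dx = -1 if y % 2 == 0 else 1
    return [(x + dx, y - 1), (x + dx, y + 1)]

print(diagonal_neighbors(4, 2))  # [(3, 1), (3, 3)]
print(diagonal_neighbors(1, 1))  # [(2, 0), (2, 2)]
```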
FIGURE 6.22: Retrieving the remaining neighbors for each highlighted loca-
tion.
Now we can proceed to the preliminary version of the code, still having
missing pieces (Listing 6.14).
import turtle
from dataclasses import dataclass
H = 201
W = 201
SLEEP_MS = 20
VIS_SLEEP_MS = 5000
CELLSIZE = 4
SHAPE_SIZE = CELLSIZE / 20 # turtle size
ALPHA = 2.03
BETA = 0.4
GAMMA = 0.0001
@dataclass
class SimState:
done: bool
def set_done(self):
self.done = True
@classmethod
def setup(cls):
r = cls(False)
turtle.listen()
turtle.onkeypress(r.set_done, "space")
return r
@dataclass
class Drawer:
shape: turtle.Turtle
@classmethod
def create(cls, x, y):
p = turtle.Turtle()
p.penup()
p.shape("circle")
p.shapesize(SHAPE_SIZE)
p.goto(x + 0.5 * (y % 2), y)
p.color("black")
return cls(p)
def update_shapes(shapes, cells):
    # map each cell value to the circle size and a gray fill
    # (larger and lighter circles correspond to higher values)
    for x in range(W):
        for y in range(H):
            v = min(1.0, max(0.0, cells[x][y]))
            shapes[x][y].shape.shapesize(SHAPE_SIZE * (0.2 + 0.8 * v))
            shapes[x][y].shape.color("black", (v, v, v))

@dataclass
class WorldState:
cells: list = None
step: int = 0
...
def setup_screen(title):
turtle.colormode(1.0)
turtle.setup(W * CELLSIZE, H * CELLSIZE)
turtle.tracer(0, 0)
turtle.title(title)
turtle.setworldcoordinates(0, 0, W, H)
setup_screen("Cellular snowflakes")
sim_state = SimState.setup()
world_state = WorldState.setup()
shapes = [[Drawer.create(x, y) for y in range(H)] for x in range(W)]
def tick():
if not sim_state.done:
world_state.update()
turtle.ontimer(tick, SLEEP_MS)
def tick_draw():
if not sim_state.done:
print(f"step {world_state.step}")
update_shapes(shapes, world_state.cells)
turtle.update()
turtle.ontimer(tick_draw, VIS_SLEEP_MS)
tick()
tick_draw()
turtle.done()
This code follows the same scheme as used in “Life”. There is a list of cell
contents (world_state.cells) and a separate collection of turtle-based Drawer
objects used to display the grid. This simulation is quite computationally
demanding, so the graphics is updated in a separate function tick_draw(),
just like in “Evolution”. Note that the Drawer objects are created to form a
hexagonal grid by shifting every second row to the right:
@classmethod
def create(cls, x, y):
p = turtle.Turtle()
...
p.goto(x + 0.5 * (y % 2), y) # shift by half a cell for odd rows
...
To visualize cell content, we will use both turtle shape size (larger circles
correspond to higher values) and color. Each circle will have a black border
and will be filled with a shade approaching white as the cell value increases:
turtle.colormode(1.0)
Next, let’s implement the missing class WorldState (see Listing 6.15).
@dataclass
class WorldState:
cells: list = None
step: int = 0
    def make_cells(self, value):
        return [[value for _ in range(H)] for _ in range(W)]

    def average(self, cells):
        # diffusion: every cell value drifts toward the average
        # of its six neighbors
        result = self.make_cells(0.0)
        for x in range(W):
            for y in range(H):
                e = cells[x][y]
                nsum = sum(cells[nx][ny] for nx, ny in self.neighbors(x, y))
                result[x][y] = e + (ALPHA / 2) * (nsum / 6 - e)
        return result

    def neighbors(self, x, y):
        # even rows look left for their diagonal neighbors, odd rows right
        dx = -1 if y % 2 == 0 else 1
        add_neighbors = [
            ((x + dx) % W, (y - 1) % H),
            ((x + dx) % W, (y + 1) % H),
        ]
        return [
            (x, ((y + 1) % H)),
            (x, (y - 1) % H),
            ((x - 1) % W, y),
            ((x + 1) % W, y),
        ] + add_neighbors
def receptive_cells_map(self):
is_receptive = self.make_cells(False)
for x in range(W):
for y in range(H):
if self.cells[x][y] >= 1:
for nx, ny in self.neighbors(x, y) + [(x, y)]:
is_receptive[nx][ny] = True
return is_receptive
    def rec_nonrec_grids(self, is_receptive):
        # split the grid into receptive and non-receptive parts
        receptive = self.make_cells(0.0)
        non_receptive = self.make_cells(0.0)
        for x in range(W):
for y in range(H):
e = self.cells[x][y]
if is_receptive[x][y]:
receptive[x][y] = e + GAMMA
else:
non_receptive[x][y] = e
return receptive, non_receptive
def update(self):
self.step += 1
is_receptive = self.receptive_cells_map()
receptive, non_receptive = self.rec_nonrec_grids(is_receptive)
non_receptive = self.average(non_receptive)
for x in range(W):
for y in range(H):
self.cells[x][y] = receptive[x][y] + non_receptive[x][y]
def fill_cells(self):
self.cells = self.make_cells(BETA)
self.cells[W // 2][H // 2] = 1.0
return self
@classmethod
def setup(cls):
return cls().fill_cells()
This code follows the description of the simulation quite closely. First, each
cell of the grid cells is initialized with the value of BETA in setup(). The central
cell gets the value of 1. The main simulation loop creates a map of receptive
cells and two grids: receptive and non_receptive. The non-receptive grid is then
averaged, and both grids are added together to form a new version of the main
grid cells.
The complexities of dealing with a hexagonal grid are hidden in the function
neighbors(). It returns the list of neighboring locations of the given cell,
taking into account the differences between odd and even rows. It also “wraps
around” the board, so the edge locations are treated as adjacent to the loca-
tions on the opposite side (the same technique was used in “Life”).
The result of a test run of the program after 900 steps is shown in Figure 6.23.
Naturally, the appearance of the resulting snowflake depends heavily on the
values of the input parameters, so try playing with them. The original paper [43]
shows a number of interesting examples.
This simulation is more complex than “Life”, but it still satisfies the criteria
of a regular cellular automaton. It operates on a two-dimensional grid, and its
cells switch between their states according to the rules that take into account
the immediate cell neighborhood. The cells here hold real numbers, so from
a mathematical point of view the number of cell states is infinite. However,
in Python, real numbers are stored in 64-bit variables, so there are at most
2⁶⁴ states per cell. I believe the snowflake model does not need such high
precision, so in practice the number of required states can be greatly reduced
further.
12 This setup is more commonly described as a triangular grid, where vertices of equilateral
(1, A) → 5    (2, C) → 4
(1, C) → 1    (3, C) → 1
(2, B) → 1    (5, A) → 2
FIGURE 7.1: Finite state machine processing the input string CAAB.
Each FSM accepts certain strings and rejects other strings. Our device will
accept a variety of strings, including, for instance, CCAA, CCCAA, and CCAABAA.
Thus, an FSM divides all possible strings in the world into two classes: the
strings it accepts and the strings it rejects.1 Unsurprisingly, FSMs are often
used in practice specifically for this purpose: to figure out whether an input
string belongs to a class of our interest or not. For example, it is possible to
construct a device that would only accept strings forming a time description
in a 24-hour format (such as 12:35 or 00:15) or a valid Python variable name.
Some classes of strings, however, turn out to be an unreachable target for
FSMs. Say, it is impossible to construct an FSM that would accept any string
that reads the same both ways (a palindrome), such as “kayak” or “racecar”.
Intuitively, it is not hard to see why: an FSM has states, but no writable
memory, so it cannot keep track of incoming symbols to check whether the
same symbols reappear in the reverse order later.
This observation is an important result in theoretical computer science,
where distinction is made between the sets of strings that can be recognized
by an FSM, and the sets of strings requiring more powerful computational
devices. Thus, FSMs mark a certain threshold of computational capacity. Note
that “strings of symbols” do not necessarily mean “strings of characters”:
1 Some useful terminology: the given FSM recognizes the set S, consisting of all the
FIGURE 7.2: Finite state machine shown as a transition diagram.
FSMs process sequences of arbitrary elements, which makes them useful far
beyond the tasks of text processing.
Before we move on, let me add that FSMs are typically visualized with
diagrams, depicting their states and transitions between them. Such a diagram
for the FSM shown in Figure 7.1 is provided in Figure 7.2.
7.1.1 IMPLEMENTATION
We’ll discuss some practical applications of FSMs shortly, but now let’s see
how to use Python to simulate a basic FSM, defined with a provided set of
rules. For the sake of simplicity, we’ll presume that FSM states are coded
with numbers, and the input string consists of ordinary ASCII characters (see
Listing 7.1).
rules = {
(3, "C"): 1,
(1, "C"): 1,
(1, "A"): 5,
(5, "A"): 2,
(2, "B"): 1,
(2, "C"): 4,
}
favs = {2}
start = 3
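A minimal driver for these rules (the accepts() helper and its
reject-on-missing-rule convention are mine) could look like this:

```python
rules = {
    (3, "C"): 1, (1, "C"): 1, (1, "A"): 5,
    (5, "A"): 2, (2, "B"): 1, (2, "C"): 4,
}
favs = {2}
start = 3

def accepts(s):
    state = start
    for ch in s:
        if (state, ch) not in rules:
            return False  # no matching rule: reject the string
        state = rules[(state, ch)]
    return state in favs

print(accepts("CCAA"), accepts("CCAABAA"))  # True True
```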
This approach for simulating an FSM might be perfectly fine for some tasks
and inadequate for others. One notable property of these devices is their linear
processing time: on each step, an FSM processes exactly one symbol of the
input string. Thus, if the length of the string is N , we are guaranteed to obtain
a solution after N steps. This makes FSMs useful for fast string processing.
However, our implementation is not particularly fast: each step requires
retrieving a value from a dictionary, which comes at a cost.
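One way to shave off the dictionary overhead is to precompute a plain
transition table indexed by state number and character code, so each step
becomes a list lookup (a sketch; the extra “dead” state 0 is my own
convention):

```python
REJECT = 0  # dead state: absorbs all further input
rules = {(3, "C"): 1, (1, "C"): 1, (1, "A"): 5,
         (5, "A"): 2, (2, "B"): 1, (2, "C"): 4}

# states 0..5, one column per ASCII code
table = [[REJECT] * 128 for _ in range(6)]
for (state, ch), nxt in rules.items():
    table[state][ord(ch)] = nxt

def run(s, start=3):
    state = start
    for ch in s:
        state = table[state][ord(ch)]
    return state

print(run("CCAA"))  # 2
```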
There are two digits for hours, two digits for minutes, an optional two-
digit part for seconds, and in case it is present, an additional optional part
for milliseconds (ranging from 000 to 999). Thus, these are valid time strings:
00:30, 13:45:59, 23:03:15.003.
This particular data format is quite simple, so the corresponding FSM can
be drawn without much preparation (see Figure 7.3).
A valid string starts with a digit 0, 1, or 2. Since the highest value for hours
is 23, the digit after 2 must be no larger than 3. Then we expect a colon and
a minutes chunk, which is a digit in the range 0–5 followed by a digit in the range
0–9. Since a valid string may end here, state 7 is favorable. Seconds work in
the same way as minutes, and there is yet another favorable state after their
chunk. The trailing part of the string, milliseconds, consists of a dot and three
digits.
States and transitions 187
To save space, most transitions in Figure 7.3 are labeled with ranges like
0...3. Every such transition denotes a list of rules:
(2, "0") → 4
(2, "1") → 4
(2, "2") → 4
(2, "3") → 4
7.2.1 IMPLEMENTATION
Let’s incorporate the rules shown in the diagram into our FSM simulator to
obtain the “time machine” (Listing 7.2).
# hours
rules = {(1, "2"): 2}
rules.update(rules_range(1, 3, "0", "1"))
rules.update(rules_range(2, 4, "0", "3"))
rules.update(rules_range(3, 4, "0", "9"))
rules.update({(4, ":"): 5})
# minutes
rules.update(rules_range(5, 6, "0", "5"))
rules.update(rules_range(6, 7, "0", "9"))
rules.update({(7, ":"): 8})
# seconds
rules.update(rules_range(8, 9, "0", "5"))
rules.update(rules_range(9, 10, "0", "9"))
rules.update({(10, "."): 11})
# milliseconds
rules.update(rules_range(11, 12, "0", "9"))
rules.update(rules_range(12, 13, "0", "9"))
rules.update(rules_range(13, 14, "0", "9"))
start = 1
favs = {7, 10, 14}
As you can see, apart from the utility function rules_range() that generates
a separate FSM rule for each character in the given range, this code follows
the same approach as used in the previous example.
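rules_range() itself is not reproduced above; assuming it takes a source
state, a destination state, and the first and last characters of the range, it
can be as short as:

```python
def rules_range(src, dst, first, last):
    # one rule per character in the inclusive range first..last
    return {(src, chr(c)): dst for c in range(ord(first), ord(last) + 1)}

print(rules_range(2, 4, "0", "3"))
# {(2, '0'): 4, (2, '1'): 4, (2, '2'): 4, (2, '3'): 4}
```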
7.2.2 AFTERTHOUGHTS
Being just a long and monotonous list of rules, this code possesses a particular
kind of beauty: it accurately represents the original transition diagram from
Figure 7.3. It is easy to imagine such a code being automatically generated
from a diagram, and indeed, there are tools able to do it.2
What about the previous step? We had to create a state machine diagram
from the format string HH:MM[:SS[.fff]]. Can this work be automated as well?
It turns out that it can. There is a widely used approach to specify such
“matching patterns”, known as regular expressions [11].3 Any given regular
expression can be automatically transformed into a runnable FSM, and this
is exactly the mechanism under the hood of Python module re.4
print("mundane".find("dan")) # prints 3
Yet, there are countless variations of the string matching problem, and plenty
of opportunities to improve on existing methods in certain specific cases. We
will consider just one kind of optimization, driven by an FSM.
Consider the simplest possible brute-force implementation of find():
text = "mundane"
pattern = "dan"

for start in range(len(text) - len(pattern) + 1):
    for i in range(len(pattern)):
        if text[start + i] != pattern[i]:
            break
    else:  # the inner loop finished without a single mismatch
        print(start)  # prints 3
        break
This code tries to find pattern in every location of text, proceeding to the
next location in case of failure. (The little-known else clause of for kicks in if
the loop ends normally, not due to break). Unfortunately, this approach suffers
from a potential performance issue.
Consider the case of pattern AAAAX and text AAAAAAAAX. Our code will match
four A characters in the beginning of the text, then fail to match X and restart
from the second character of text. It will fail there again after matching four
As, and proceed to the third character of text. Thus, working in this fashion,
this routine will perform len(pattern) operations every time inside the outer
for-loop. If both strings are very long, the resulting performance might become
unacceptable.
Let’s note that such a scenario is quite unlikely in practice, and for the
absolute majority of real-world situations the brute-force approach is quite
good. It is simple and requires no additional memory, which also might be an
important factor. However, there are practical cases where the chances of facing
poor performance are high. For example, DNA sequences consist of repetitive
patterns of characters A, T, G, and C:
190 Playful Python Projects
CTGTGGAATGTGTGTCAGTTAGGGTGTGGAAAGTCCCCAGGCTCCCCAGCAGGCAGAAGTATGCAAAGCATGCATCTCAA
TTAGTCAGCAACCAGGTGTGGAAAGTCCCCAGGCTCCCCAGCAGGCAGAAGTATGCAAAGCATGCATCTCAATTAGTCAG
CAACCATAGTCCCGCCCCTAACTCCGCCCATCCCGCCCCTAACTCCGCCCAGTTCCGCCCATTCTCCGCCCCATGGCTGA
CTAATTTTTTTTATTTATGCAGAGGCCGAGGCCGCCTCTGCCTCTGAGCTATTCCAGAAGTAGTGAGGAGGCTTTTTTGG
AGGCCTAGGCTTTTGCAAAAAGCTC
7.3.1 IMPLEMENTATION
Let’s discuss how the brute-force approach can be improved. The key idea is
simple: if we fail to match pattern at the current location of text, there is no
need to restart from the very beginning of pattern. When our routine fails to
match AAAAX at the beginning of AAAAAAAAX, it already knows that the first five
characters of text are AAAAA, so it can simply check whether the sixth character
is X to figure out whether AAAAX ends there (and therefore starts at the second
character of text). On the other hand, when the routine fails to match the
same pattern AAAAX at the beginning of AAAXAAAAX, it can conclude that there is
no chance the pattern starts before X, so the search has to be restarted from
the fifth character of text (right after X).
In general, a mismatch should be treated as “let’s switch to our backup
plan” rather than “let’s start from scratch again”. A mismatch at the earlier
location can be a part of a later match. The tricky part is to understand that
each combination of a successfully matched substring and the next character
of text that caused a mismatch requires its own backup plan. Such plans can
be shown on a diagram, where circles denote matched substrings, and arrows
show how the next character of text changes the situation (see Figure 7.4).
Naturally, this is a sketch of an FSM.
Any situation not covered by the rules switches the machine into its starting
state. Once the favorable state is reached, the pattern is found, and we can
stop the routine.
All transition rules of this FSM follow the same principle. For any pair of
the source state S and the input character c, the destination state R should
be labeled with the longest string that is both a suffix of S followed by c
and a prefix of the pattern.
5 See https://github.com/python/cpython/pull/22904
States and transitions 191
FIGURE 7.4: Finite state machine for matching the pattern AAAAX.
For example, consider the state AAAA. If the next input character is X, we
need to switch to a state labeled with the longest string R that is both a
suffix of AAAAX and a prefix of the pattern: in this case, the string AAAAX
itself, which means the match is complete.
FIGURE 7.5: Finite state machine for matching the pattern banana.
else:
return i
return -1
def rules_for_pattern(pattern):
rules = {}
for c in set(pattern):
rules.update(rules_for_char(pattern, c))
return rules
TEXTLEN = 1000000
PATTERNLEN = 6
CHARS = "abcde"
text = "".join((choice(CHARS) for _ in range(TEXTLEN)))
pattern = "".join((choice(CHARS) for _ in range(PATTERNLEN)))
rules = rules_for_pattern(pattern)
matches found: 63
fsm speedup: 1.92
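The excerpt omits rules_for_char() and the FSM-driven search routine itself. The following reconstruction follows the transition principle stated above (the function bodies are my sketch; rules_for_pattern() is reproduced from the listing, and the fsm_search() signature is guessed):

```python
def rules_for_char(pattern, c):
    # for every prefix state of the pattern, the destination of (state, c)
    # is the longest string that is both a suffix of state + c
    # and a prefix of the pattern
    rules = {}
    for n in range(len(pattern)):
        state = pattern[:n]
        candidate = state + c
        while candidate and not pattern.startswith(candidate):
            candidate = candidate[1:]
        rules[(state, c)] = candidate
    return rules

def rules_for_pattern(pattern):  # reproduced from the listing above
    rules = {}
    for c in set(pattern):
        rules.update(rules_for_char(pattern, c))
    return rules

def fsm_search(text, pattern, rules):
    state = ""
    for i, c in enumerate(text):
        # any (state, c) pair not covered by the rules resets the machine
        state = rules.get((state, c), "")
        if state == pattern:
            return i - len(pattern) + 1  # the match ends at position i
    return -1

print(fsm_search("mundane", "dan", rules_for_pattern("dan")))  # prints 3
```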
7.3.2 AFTERTHOUGHTS
I have to say that the results of our performance measurements should be
taken with a large grain of salt. It is very likely that significant slowdowns of
the brute-force routine are caused by the mere existence of the nested loop.
Initialization and teardown of loops are quite costly in Python, so by removing
the loop, it is possible to achieve a significant benefit.
If pattern is short (say, five characters), the loop can be simply unrolled as
follows:
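The unrolled version itself is not shown in this excerpt; for a five-character pattern it might look like the sketch below (the name and structure are my guess at the idea, not the original listing):

```python
def unrolled_search(text, p):
    # hard-coded for patterns of exactly five characters:
    # the inner loop is replaced by a single chained comparison
    p0, p1, p2, p3, p4 = p
    for i in range(len(text) - 4):
        if (text[i] == p0 and text[i + 1] == p1 and text[i + 2] == p2
                and text[i + 3] == p3 and text[i + 4] == p4):
            return i
    return -1
```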
You can benchmark this code and make sure it is even faster than our
FSM implementation. However, the game doesn’t stop here: it is possible to
optimize fsm_search() as well by replacing dictionary lookup with faster 2D-
array operations.
The real disadvantages of the FSM-based approach lie elsewhere: constructing
the FSM takes time, and storing it takes space (which can be quite large
for long patterns). The real KMP algorithm takes effort to minimize both;
even so, different approaches turn out to be more efficient in different
scenarios of string matching. For example, searching for the same pattern in
multiple texts and searching for multiple patterns in the same text are
completely distinct tasks.
Spot the bug? There’s nothing to prevent ‘air jumping’: keep hammering B
while she’s in the air and she will float forever. The simple fix is to add an
isJumping_ Boolean field to Heroine that tracks when she’s jumping, and then
do:
There should also be code that sets isJumping_ back to False when the heroine
touches the ground. I’ve omitted that here for brevity’s sake.
Next, we want the heroine to duck if the player presses down while she’s
on the ground, and stand back up when the button is released:
Spot the bug this time? With this code, the player could:
1) Press down to duck.
2) Press B to jump from a ducking position.
3) Release down while still in the air.
The heroine will switch to her standing graphic in the middle of the jump.
Time for another flag…”
Nystrom’s sad story goes on and on in the same manner until it is crystal
clear that every new capability or bugfix makes the code even more brittle
and harder to understand.
What makes this case especially confusing is that there is nothing inherently
wrong in the chosen approach: all we see here is an honest attempt to “grow”
the required functionality by gradual implementation of features and handling
of edge cases. However, in the given circumstances it is like trying to build a
tower of bricks simply by placing every next brick on top of the previous one:
after a while, the whole structure collapses.
This problem can also be considered from another perspective. The code
itself may be perfectly fine, but its logic, based on numerous nested
conditional statements and Boolean flags, is too complicated for our limited
cognitive abilities. Finite state machines provide the right kind of tool to
cut this complexity into bite-size slices.
In Nystrom’s example, Heroine behaves differently in different modes,
such as “standing”, “jumping”, “ducking”, and “diving”. The transitions be-
tween these modes are well defined: for example, ducking mode can only be
activated from standing mode, and it is only possible to come back to
standing from ducking. This FSM-like diagram can be directly implemented in the
code, and the end result will probably look repetitive but not convoluted.
To see the whole road from the idea to the implementation, let’s consider
a simpler scenario. Suppose our task is to design a control module for an
industrial robot that sorts incoming boxes into two types: small and large (see
Figure 7.6). The robot recognizes the type of the next box on the conveyor
belt and places it to the left or to the right with its robotic arm.7
The robot can lift its arm up and down, as well as open and close its
grabber. The robot body can be in one of three positions: neutral, turned to
the left, and turned to the right. Let’s also presume that body rotation can
only be performed with the arm lifted up (say, for safety reasons). When the
robot is straight with its arm lifted up, its camera can be used to recognize
7 There are robots doing exactly this kind of work, just search for “robotic pallet sorting”.
the type of the next box on the conveyor. To formalize the task further, let’s
presume the robot is able to execute the following commands: U (lift the arm
up), D (lower the arm down), O (open the grabber), C (close the grabber),
L (turn the body left), and R (turn the body right).
The control module consists of two subsystems. The first subsystem exe-
cutes individual commands by actually turning electric motors on and off. It is
also its job to prevent illegal movements: what if the robot tried to open its
already open grabber even wider? It might get damaged! The second subsystem
issues chains of commands like “OLDCU”, representing high-level function-
ality. Using this subsystem, the robot can be reprogrammed to sort incoming
boxes differently, say, to place all the boxes into the same-side pile.
7.4.1 IMPLEMENTATION
Since we can’t program real-life robots here, let’s employ the turtle as a virtual
robot to illustrate how the control module is doing its job. We will start by
drawing all possible states of the robot and the transitions between them
(see Figure 7.7). The robot starts operation in its “nuo” state: its body is in
(n)eutral position, the arm is (u)p, the grabber is (o)pen. By turning right,
the robot switches to the “ruo” position, and so on.
This diagram might look overly complicated for the task, but remember
that all the complexity of robot operation is localized here, and in case of any
changes in requirements we won’t have to look anywhere else. Such changes are
also easy to incorporate. Suppose that while designing a modified robot with
a large grabber, we realize it is unsafe to turn the body with the grabber open.
This new requirement can be easily introduced into the system by removing
the corresponding transitions from the diagram.
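To make the idea concrete, here is a sketch in rule-table form (the state names follow Figure 7.7, but this particular rules fragment and the filtering step are only an illustration, not part of the book's listing):

```python
# a fragment of the robot's FSM: (state, command) -> new state
rules = {
    ("nuo", "R"): "ruo",  # turn right with the arm up and the grabber open
    ("nuc", "R"): "ruc",  # turn right with the grabber closed
    ("nuo", "D"): "ndo",  # lower the arm
    ("nuo", "C"): "nuc",  # close the grabber
}

# new requirement: no body rotation while the grabber is open --
# simply drop every turn transition that starts in an "...o" state
rules = {
    (state, cmd): dest
    for (state, cmd), dest in rules.items()
    if not (cmd in ("L", "R") and state[2] == "o")
}
```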
Our code will execute the following loop:
FIGURE 7.7: States of the sorting robot and transitions between them (states
such as ruo, ruc, rdo, and rdc; transitions labeled with the commands U, D,
O, C, L, and R).
WIDTH = 600
HEIGHT = 400
MARGIN = 50
SLEEP_MS = 1000
@dataclass
class SimState:
    done: bool = False

    def set_done(self):
        self.done = True

    @classmethod
    def setup(cls):
        r = cls()
        turtle.listen()
        turtle.onkeypress(r.set_done, "space")
        return r
@dataclass
class WorldState:
    # display the current robot position
    def draw_state(self):
        ...

    # recognize the next box and activate the corresponding command sequence
    def next_box(self):
        ...

    @classmethod
    def setup(cls):
        ...
def setup_screen(title):
    turtle.setup(WIDTH + MARGIN, HEIGHT + MARGIN)
    turtle.tracer(0, 0)
    turtle.title(title)
setup_screen("Robot control")
world_state = WorldState.setup()
sim_state = SimState.setup()
def tick():
    if not sim_state.done:
        try:
            if not world_state.next():
                world_state.next_box()
            world_state.draw_state()
            turtle.update()
            turtle.ontimer(tick, SLEEP_MS)
        except KeyError:
            print("Illegal command")

tick()
turtle.done()
@dataclass
class WorldState:
    input: str = ""
    state = "nuo"
    index: int = 0
    body = turtle.Turtle()
    arm = turtle.Turtle()
    box = turtle.Turtle()
    rules = {}

    def draw_state(self):
        self.arm.reset()
        self.arm.left({"l": 45, "r": -45, "n": 0}[self.state[0]])
        self.arm.color("red" if self.state[1] == "u" else "black")
        self.arm.shape("classic" if self.state[2] == "o" else "arrow")
        self.arm.forward(30)
        self.arm.left(180)

    def next(self):
        if r := self.index < len(self.input):
            print(f"Running {self.input[self.index]} in {self.state}")
            self.state = self.rules[(self.state, self.input[self.index])]
            self.index += 1
        return r

    def next_box(self):
        self.index = 0
        if randint(0, 1) == 0:  # small box
            self.box.shapesize(0.8)
            self.input = "DCURDOUL"
        else:
            self.box.shapesize(1.5)
            self.input = "DCULDOUR"

    @classmethod
    def setup(cls):
        r = cls()
        r.body.shape("circle")
        r.body.shapesize(2)
        r.arm.shapesize(3)
        r.box.penup()
        r.box.shape("square")
        r.box.forward(55)
        r.add_all_rules(
            [
                ...
            ]
        )
        return r
The visual part of this simulation is supported by three turtles. The
immovable circle body draws the robot’s body, the square box draws a box of
either size, and the arm shows both the robot arm and the grabber. It can be
turned left or right, and its shape can resemble an arrow (open grabber) or a
“classic” triangle (closed grabber). Red color represents the arm in its lifted
state (see Figure 7.8).
Helper functions add_rules() and add_all_rules() simplify the task of listing
all FSM transitions, but the FSM itself works exactly like our very first FSM
project from the “Finite state machine” section.
7.4.2 AFTERTHOUGHTS
[Figure: a Markov chain with four numbered states and transition probabilities
between them.]
FIGURE 7.11: Markov chain-based weather model with three weather types (sunny,
cloudy, and rainy).
hardly comes after a sunny day with no cloud in sight. A Markov process suits
this task much better (see Figure 7.11).
Let’s implement this model and see what kind of climate it produces.
7.5.1 IMPLEMENTATION
This project can easily be done in the text mode, but to make the output
easier to grasp, let’s add some simple turtle-powered visuals (Listing 7.6).
import turtle
from random import choices

CELLSIZE = 20
SHAPE_SIZE = CELLSIZE / 20
DAYS = 20

def setup_screen(title):
    turtle.setup(800, 600)
    turtle.tracer(0, 0)
    turtle.title(title)
    turtle.setworldcoordinates(-1, -10, DAYS, 10)

setup_screen("Markovian weather")

def next_day(state):
    rules = {
        "sunny": (("sunny", "rainy", "cloudy"), (0.6, 0.1, 0.3)),
        "cloudy": (("cloudy", "sunny", "rainy"), (0.5, 0.3, 0.2)),
state = "sunny"
colors = {"sunny": "gold", "cloudy": "gray", "rainy": "black"}
turtle.update()
turtle.done()
8 Weights are relative: if an element has triple the weight of another element,
it is three times more likely to be chosen. Probabilities are absolute and must
add up to 1. In Markov chain-based projects we employ probabilities as weights,
so these words are used interchangeably.
FIGURE 7.13: Markov chain-based weather model with four weather types.
values between them. For example, if the likelihood of moving from “sunny”
to “snowy” is 0.4 somewhere in mid-winter, and only 0.1 somewhere in mid-
spring, it should be around 0.25 in late winter or early spring.
Still, there are 16 transitions in our model, meaning we’ll have to specify
16 numbers for each reference point. Thus, I propose pushing automation
even further by specifying a probability for each state S (let’s denote it p(S)),
and use the following method to calculate the probabilities of all outgoing
transitions of S:
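The formula itself falls outside this excerpt; one simple scheme consistent with the idea (an assumption on my part, not necessarily the book's exact method) is to make the weight of every transition into a state T equal to p(T), regardless of the source state. The function name weight_to() appears later in the text; its signature here is guessed:

```python
# reference probabilities for one season (values from the listing below)
winter = {"sunny": 0.25, "cloudy": 0.05, "rainy": 0.05, "snowy": 0.65}

def weight_to(pweights, target):
    # the weight of any transition into `target` is the target's own
    # probability; since the probabilities sum to 1, every state's row
    # of the transition table is then automatically normalized
    return pweights[target]

row = {s: weight_to(winter, s) for s in winter}
```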
7.6.1 IMPLEMENTATION
The program in Listing 7.7 builds on our previous project. Instead of 20 days
of the same season, we now have four 40-day “seasons” and four reference
points, setting the values to be achieved at the middle of each season. The
simulation starts on a sunny day in mid-winter and ends in mid-winter of the
next year.
Remember that there are two places where our manually specified weights
are used to generate the actual Markov chain. The function weight_to() can
calculate the weight of any requested transition, but this only gives us the
chain for the days where the reference weights are available, and they are
available for mid-season days only.
For any other day, reference weights have to be interpolated first. Suppose
a certain weight at the beginning of the current season is beg, and at the
end of the season its value is end. Then for any day within the season, the
corresponding weight value can be calculated with the function lerp():
# linear interpolation
def lerp(beg, end, day):
    return beg + (day / (DAYS_IN_SEASON - 1)) * (end - beg)
If unsure, note that for the first (zeroth) day of the season this function re-
turns beg, for the last day (DAYS_IN_SEASON - 1) its value is end, and for anything
in between its value is also between beg and end.
import turtle
from random import choices
CELLSIZE = 20
SHAPE_SIZE = CELLSIZE / 20
DAYS_IN_ROW = 20
DAYS_IN_SEASON = 40
DAYS = 4 * DAYS_IN_SEASON
state = "sunny"
seasons = ("sunny", "rainy", "cloudy", "snowy")
colors = {"sunny": "gold", "cloudy": "gray", "rainy": "black", "snowy": "snow"}
winter = {"sunny": 0.25, "cloudy": 0.05, "rainy": 0.05, "snowy": 0.65}
spring = {"sunny": 0.4, "cloudy": 0.28, "rainy": 0.3, "snowy": 0.02}
summer = {"sunny": 0.64, "cloudy": 0.18, "rainy": 0.18, "snowy": 0}
autumn = {"sunny": 0.3, "cloudy": 0.28, "rainy": 0.4, "snowy": 0.02}
def setup_screen(title):
    turtle.setup(800, 600)
    turtle.tracer(0, 0)
    turtle.title(title)
    turtle.setworldcoordinates(-1, -16, DAYS_IN_ROW, 4)

def pweights_for(day):
    year = [winter, spring, summer, autumn, winter]
    cur_season = day // DAYS_IN_SEASON
    season_day = day % DAYS_IN_SEASON
    pweights = year[cur_season]
    nxt_pweights = year[cur_season + 1]
turtle.update()
turtle.done()
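The listing above breaks off before pweights_for() finishes; its final step, applying lerp() to every pair of reference weights, can be reconstructed as follows (a sketch assuming the year list holds mid-season reference points, as described above):

```python
DAYS_IN_SEASON = 40

winter = {"sunny": 0.25, "cloudy": 0.05, "rainy": 0.05, "snowy": 0.65}
spring = {"sunny": 0.4, "cloudy": 0.28, "rainy": 0.3, "snowy": 0.02}
summer = {"sunny": 0.64, "cloudy": 0.18, "rainy": 0.18, "snowy": 0}
autumn = {"sunny": 0.3, "cloudy": 0.28, "rainy": 0.4, "snowy": 0.02}

def lerp(beg, end, day):
    return beg + (day / (DAYS_IN_SEASON - 1)) * (end - beg)

def pweights_for(day):
    year = [winter, spring, summer, autumn, winter]
    cur_season = day // DAYS_IN_SEASON
    season_day = day % DAYS_IN_SEASON
    pweights = year[cur_season]
    nxt_pweights = year[cur_season + 1]
    # interpolate every weight between the two nearest reference points
    return {s: lerp(pweights[s], nxt_pweights[s], season_day) for s in pweights}
```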
The same logic can be applied to other kinds of test data. Say, it is possible
to rely on user behavior data to generate a realistic sequence of clicks on
various elements of a website to measure its performance or check whether it
crashes in certain unlucky scenarios.
Derivation of a Markov chain from observations can be useful in cases like
the ones discussed in our previous project, “The four seasons”. It is possible to
simulate the climate of a particular region by collecting transition probabilities
from actual weather data.
In any case, the idea of deriving the configuration of the computing machine
from data is not new for us: the FSM for searching substrings in “A needle in
a haystack” was autogenerated as well. Now let’s create a Markov chain for
English texts.
7.7.1 IMPLEMENTATION
A season is a sequence of days. Similarly, an English text is a sequence of
words. In late spring, a chance of a sunny day after a rainy day is much
higher than a chance of a snowy day. In the same manner, it is far more
likely to encounter a singular noun after “a” (“a box”) than a plural noun
(“a boxes”) or a verb in the past tense (“a jumped”). Thus, we can start
by reusing our “weather” machine: let states be labeled with English words,
while the transitions reflect the probabilities of various continuations of the
text. This way, we’ll have a system knowing that “a” can be followed by “box”
or “fox”, but not “clocks”.
However, human language is more complex than a simple weather model.
By using only the probabilities between the successive word pairs, we’ll lose
broader context of the sentence. For example, the phrase “five wooden” will
be continued with anything valid as a continuation of “wooden” (including
“wooden box”), because “five” is already lost at this point. An easy way to fix
this issue to some degree is to extend the context by relying on two successive
words. A pair of words will form a state, and transitions from this state will
identify the next possible word in the sequence.
Next, to derive a Markov chain from English texts, we need to have some. I
suggest relying on Python documentation, just like we did in the “Zipf’s law”
project.
The complete implementation of the text generator is shown in Listing 7.8.
TOKENS = 50
freq[key][next_word] += 1
for _ in range(TOKENS):
    w1, w2 = next_pair
    print(w1, end=" ")
    keys = list(freq[next_pair].keys())
    values = list(freq[next_pair].values())
    w3 = choices(keys, values)[0]
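Only fragments of Listing 7.8 survive here; the following self-contained version implements the same idea end to end (variable names follow the fragments above; the tiny training string stands in for the Python documentation used in the book):

```python
from collections import Counter, defaultdict
from random import choices

TOKENS = 50
text = "the cat sat on the mat and the cat ran to the mat"  # stand-in corpus
words = text.split()

# count every continuation of every pair of successive words
freq = defaultdict(Counter)
for w1, w2, next_word in zip(words, words[1:], words[2:]):
    freq[(w1, w2)][next_word] += 1

next_pair = (words[0], words[1])
generated = []
for _ in range(TOKENS):
    w1, w2 = next_pair
    generated.append(w1)
    if not freq[next_pair]:  # dead end: no known continuation
        break
    keys = list(freq[next_pair].keys())
    values = list(freq[next_pair].values())
    w3 = choices(keys, values)[0]
    next_pair = (w2, w3)

print(" ".join(generated))
```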
Each of the pairs “Virtual subclassing” and “subclassing is” has only one pos-
sible continuation. (Our print() call displays the counters between the words
comprising a pair.) The next pair, “is not”, is more fruitful, having 53 possi-
ble continuations. Naturally, common phrases, especially ones with prepositions,
are likely to be “larger crossroads” than rare word combinations like “virtual
subclassing”.
• A finite state machine expresses how the state of the system changes
as a result of consuming the next element of the input sequence.
This process is deterministic: if the current state and the next input
element are known, the result is completely predictable. A good ex-
ample is robot operation: by consuming the next command “open the
grabber”, the robot moves from a state where the grabber is closed
to a state where the grabber is open.
• A Markov chain expresses how the state of the system changes “on
its own”, commonly as a result of passing time. This process is prob-
abilistic: the next state of the system is not entirely predictable, but
the possible outcomes and their probabilities are known. Our exam-
ple of such a situation was weather modeling: we don’t know the
weather of the next day, but the options are not equally likely.
An FSM can be used to describe the changes of a certain system over time
if these changes are deterministic. For example, consider a simple Python
program:
a = 1
b = 5
a = 3
The execution of this code can be treated as a series of state changes. Every
change takes place as a result of executing the next line of code. The program
starts in an “empty” state: it is about to execute its first line, and the list
of variables in the memory is empty. By executing the line, it moves into the
state where one variable a is available, its value is 1, and the second line is
about to be executed. The process continues in the same manner until the
favorable state is reached (see Figure 7.16).
Admittedly, this is not a very interesting state machine: knowing its previ-
ous state, we can always predict the next one. Even branches and loops in a
Python program won’t produce branches or loops in the corresponding FSM
FIGURE 7.16: Execution of the program as a sequence of states, each recording
the variables in memory and the next line to be executed.
[Figure: a Markov decision process of a boxing player, with the states “idle”,
“hit opponent”, “got hit”, and “blocked attack”, and the actions “attack” and
“defend” labeled with outcome probabilities.]
may choose a safer defensive move, which, however, does no harm to the
opponent.
The probabilities of attack and defense show the personality of the player:
in this case, the boxer is quite aggressive, preferring to attack in 70% of cases.
Other probabilities describe the reality of the game world: it is equally likely
to hit and to get hit while attacking, while defense is mostly safe. The current
skill of the player is included in this reality. It is possible that a more skillful
player would be more successful while attacking.
Being a chart of actions and their outcomes, an MDP is commonly used to
optimize agent behavior to obtain certain desired properties. In the case of
boxing, it is desirable to arrive at the “hit opponent” state as often as possible,
while minimizing the chances of getting into the “got hit” state. The only way
to modify agent behavior is to readjust action weights, just like we did in the
“Four seasons” project.
To improve the choice of action weights, reinforcement learning algorithms
are used. While this topic is far beyond the scope of the book, the general idea
is to introduce numerical “rewards” for the different states in the MDP. In our
case, the “hit opponent” state should be assigned a large positive, and “got
hit” a large negative reward. The “blocked attack” can be slightly positive,
and “idle” is neutral.
7.8.1 IMPLEMENTATION
Our program will work as follows. The MDP-powered player will engage in a
series of matches against a “random player”, placing their symbol into random
empty cells. If a match ends with a victory of either player, the MDP player
will be positively or negatively rewarded. The last move will receive the max-
imal reward, the previous move will receive a certain percentage of the full
reward, and so on all the way back to the first move. The current rewards of
the moves serve as weights for the weighted random choice procedure, so
eventually good actions will have much higher chances of being selected.
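The backward propagation described in this paragraph can be sketched as follows (the constants match the listing below, but the exact update rule, including the lower bound on weights, is my paraphrase of the description, not the book's code):

```python
from dataclasses import dataclass

LEARN_RATE = 0.8
POS_REWARD = 5

@dataclass
class Action:
    value: int
    weight: float

def propagate(history, reward):
    # the last move gets the full reward; each earlier move gets
    # LEARN_RATE times the reward of the move that followed it
    r = reward
    for action in reversed(history):
        action.weight = max(action.weight + r, 1)  # keep weights positive
        r *= LEARN_RATE

history = [Action(0, 50), Action(4, 50), Action(8, 50)]
propagate(history, POS_REWARD)
# the last move's weight grows the most: +5, then +4, then about +3.2
```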
For the sake of simplicity, we’ll take the “brawn over brains” approach by
ignoring the fact that many positions in the game are identical, being
mirror reflections or rotations of each other. Our learning algorithm is also
quite basic, only slightly more advanced than the one used in the tic-tac-toe
playing device MENACE, made entirely from matchboxes [28]. The original
MENACE needs around 300 boxes; so if you are interested in trying out this
technology, consider a simpler game of Hexapawn by Martin Gardner, which
can be played with a 24-box device [12].9 However, these early experiments are
direct predecessors of more advanced algorithms like Q-Learning that power
(with further improvements and variations) intelligent systems of the present
day [20].
It is quite hard to break down the program into meaningful independent
units, so let’s consider the whole code at once (Listing 7.9).
9 You can also play with the online version of MENACE, available at
https://www.mscroggs.co.uk/menace/
INIT_WEIGHT = 50
POS_REWARD = 5
NEG_REWARD = -5
NTR_REWARD = 0
LEARN_RATE = 0.8
PLAYER_SYMBOL = "X"
OPP_SYMBOL = "O" if PLAYER_SYMBOL == "X" else "X"
@dataclass
class Action:
    value: int
    weight: float

@dataclass
class State:
    value: list
    actions: list

    @classmethod
    def create(cls, val):
        acts = [Action(i, INIT_WEIGHT) for i, v in enumerate(val) if v == " "]
        return cls(val, acts)

    def random_action(self):
        w = [a.weight for a in self.actions]
        return choices(self.actions, w)[0]

    def key(self):
        return tuple(self.value)

    def full(self):
        return " " not in self.value

        if h or v or d1 or d2:
            return True
        return False

    def game_over(self):
        return self.full() or self.victory("X") or self.victory("O")
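The victory() method is abridged in this excerpt: h, v, d1, and d2 stand for the row, column, and diagonal checks. An equivalent standalone check, collapsing the four flags into a single expression (my reconstruction, not the book's code), might be:

```python
def victory(cells, sym):
    # cells: a flat list of 9 values, numbered 0-8 left to right, top to bottom
    lines = [
        (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows (h)
        (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns (v)
        (0, 4, 8), (2, 4, 6),             # diagonals (d1, d2)
    ]
    return any(all(cells[i] == sym for i in line) for line in lines)
```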
knowledge = {}

def play_results():
    history = []
    sym = "X"
    state = State.create([" " for _ in range(9)])
    ...
        action = knowledge[state.key()].random_action()
        history.append(action)
        state = state.next(action, sym)
        sym = "X" if sym == "O" else "O"
    ...
    pl_wins = state.victory(PLAYER_SYMBOL)
    opp_wins = state.victory(OPP_SYMBOL)
    is_draw = not (pl_wins or opp_wins)

for _ in range(10):
    stats = [play_results() for _ in range(10000)]
    for k in range(3):
        print(sum(s[k] for s in stats), end=" ")
    print()
To understand this program, let’s start with the basic elements of MDP:
states and actions. A Markov decision process is a network of states. Each
state is associated with a list of possible actions, and each action leads to one
or more states representing its outcomes.
In our case, an action is simply an index of a cell where the player places
their symbol:
@dataclass
class Action:
value: int
weight: float
Cells are numbered from 0 to 8 (left to right, top to bottom). Initially, the
weight of each action is INIT_WEIGHT, but weights will change over time.
State objects keep the actual gameboard configurations (list of cells and
their values), so each state in our system represents a certain configuration:
@dataclass
class State:
value: list
actions: list
...
• create(): creates the board from the given list of cells and generates
an Action object for each empty cell.
The value stored inside a state is the actual game board configuration.
The program plays 10 sessions of 10,000 games each between the MDP
player and the random player, printing out stats after every session (victories
of both players and draws). An individual game is played in play_results().
This function prepares an empty game field and alternates turns between
the players. The random player simply creates a new State object and calls
random_action() to perform the turn. The MDP player preserves the states
between the calls, so the weights of the actions stored in states are never
reinitialized. Every applied action is stored in the history list, so at the end of
the game the reward can be propagated all the way backward. The reward is
multiplied by LEARN_RATE on each step, so the first action in the list (the oldest
one) gets the smallest reward by its absolute value.
Even though a game between two good players normally ends with a draw,
in a game between two players making random moves the first player has
an advantage. The “baseline performance” can be obtained with a test run
between two random players:
Thus, roughly 58% of “random” games end with the first player victory,
the second player wins in 29% of cases, and the remaining games end in a
draw.
Now let’s examine how well the MDP player behaves in both roles:
Even playing as O, the MDP player is able to raise its victory percentage
to around 85%; so let’s consider our goals accomplished.
10 See https://www.gutenberg.org
other letters. Having these profiles, build a profile for your document
and find the closest match. Profiles, converted to lists of probabilities,
can be compared with each other using dot product:
4. Use real weather data from your favorite city to simulate its all-year
climate with “The four seasons” project code.
5. It might be surprising that our artificial Tic-Tac-Toe player learns
so slowly. The reason is simple: one can’t master
a game by playing against weak opponents. When playing against
a “random player”, our MDP optimization strategy rewards good
actions; but it often rewards bad ones as well and leaves many bad
moves without punishment. Try learning by playing against another
self-learning MDP player and watch how they evolve in parallel. In
this case, it makes sense to provide a small positive reward for the
draw, since victory is going to be rare.
6. Create a self-learning player for an interesting Tic-Tac-Toe variation
named Row call [38]. Here the players still attempt to complete a line
of three symbols, but the game takes place on a 4 × 4 grid, and a
player only picks the column or the row of the next symbol, while
the opponent finalizes the move by pointing the target cell. In other
words, every move except the very first one consists of two actions: (1)
choose the final location for the opponent’s symbol within the given
row or column; (2) choose the row or column for the next symbol
of your own. This game is far more complex than Tic-Tac-Toe, so
make sure to reuse rotations and reflections of already known game
situations.
References
1. Sidra Arshad, Shougeng Hu, and Badar Nadeem Ashraf. Zipf’s law
and city size distribution: A survey of the literature and future research
agenda. Physica A: Statistical Mechanics and its Applications, 492:75–92,
February 2018.
2. Marco Baroni. 39 Distributions in text. In Anke Lüdeling and Merja
Kytö, editors, Corpus Linguistics, pages 803–822. Mouton de Gruyter,
March 2009.
3. Alex F. Bielajew. History of Monte Carlo. In Monte Carlo Techniques in
Radiation Therapy, pages 3–15. CRC Press, 2021.
4. David M. Bourg and Glenn Seemann. AI for Game Developers. O’Reilly,
Sebastopol, CA, 1st edition, 2004.
5. George E. P. Box. Robustness in the strategy of scientific model building.
In Robustness in Statistics, pages 201–236. Elsevier, 1979.
6. Maxime Crochemore and Dominique Perrin. Two-way string-matching.
Journal of the ACM, 38(3):650–674, July 1991.
7. Alexander Keewatin Dewdney. Sharks and fish wage an ecological war on
the toroidal planet Wa-Tor. Scientific American, 251(6):14–22, 1984.
8. Alexander Keewatin Dewdney. The Magic Machine: A Handbook of Com-
puter Sorcery. W.H. Freeman, New York, 1990.
9. J. Patrick Finerty. Cycles in Canadian Lynx. The American Naturalist,
114(3):453–455, September 1979.
10. Michael Fowler. Kinetic Theory of Gases: A Brief Review.
https://galileo.phys.virginia.edu/classes/252/kinetic_theory.html, 2008.
11. Jeffrey Friedl. Mastering Regular Expressions. O’Reilly Media, Sebastopol,
CA, 3rd edition, September 2006.
12. Martin Gardner. Mathematical games: How to build a game-learning
machine and then teach it to play and to win. Scientific American, 232
(126):592, 1958.
13. Martin Gardner. Mathematical games: Problems involving questions of
probability and ambiguity. Scientific American, 201(4):147–182, 1959.
14. Martin Gardner. Mathematical games: The fantastic combinations of
John Conway’s new solitaire game “life”. Scientific American, 223(4):
120–123, October 1970.
15. Martin Gardner. Mathematical games: Fantastic patterns traced by pro-
grammed “worms”. Scientific American, 229(5):116–123, November 1973.
16. Thomas B. Greenslade. Adventures with Lissajous Figures. Morgan &
Claypool Publishers, June 2018.
17. T. Ryan Gregory. Understanding Natural Selection: Essential Concepts
and Common Misconceptions. Evolution: Education and Outreach, 2(2):
156–175, June 2009.
18. Garrett Hardin. The Tragedy of the Commons: The population problem
has no technical solution; it requires a fundamental extension in morality.
Science, 162(3859):1243–1248, December 1968.
19. F. Hoppensteadt. Predator-prey model. Scholarpedia, 1(10):1563, 2006.
20. Beakcheol Jang, Myeonghwi Kim, Gaspard Harerimana, and Jong Wook
Kim. Q-Learning Algorithms: A Comprehensive Classification and Ap-
plications. IEEE Access, 7:133653–133667, 2019.
21. Nathaniel Johnston. Spaceship Speed Limits in “B3” Life-Like Cellular
Automata. http://www.njohnston.ca/2009/10/spaceship-speed-limits-in-
life-like-cellular-automata/, October 2009.
22. Nathaniel Johnston and Dave Greene. Conway’s Game of Life: Mathe-
matics and Construction. Self-published, 2022.
23. Henry C. King and John R. Millburn. Geared to the Stars: The Evolution
of Planetariums, Orreries, and Astronomical Clocks. University of Toronto
Press, Toronto, 1978.
24. Christopher G. Langton. Studying artificial life with cellular automata.
Physica D: Nonlinear Phenomena, 22(1-3):120–149, 1986.
25. Zhanliang Liu. Weighted Random: Algorithms for sampling from discrete
probability distributions. https://zliu.org/post/weighted-random/, 2018.
26. Yuri Mansury and László Gulyás. The emergence of Zipf’s Law in a sys-
tem of cities: An agent-based simulation approach. Journal of Economic
Dynamics and Control, 31(7):2438–2460, July 2007.
27. Koji Maruyama, Franco Nori, and Vlatko Vedral. The physics of
Maxwell’s demon and information. Reviews of Modern Physics, 81(1):
1–23, January 2009.
28. Donald Michie. Experiments on the mechanization of game-learning. Part
I. Characterization of the model and its parameters. The Computer Jour-
nal, 6(3):232–236, 1963.
29. Bastian Molkenthin. Vector reflection at a surface.
http://sunshine2k.de/articles.html, 2021.
30. Andrés Moreira, Anahí Gajardo, and Eric Goles. Dynamical behavior and
complexity of Langton’s ant. Complexity, 6(4):46–52, 2001.
31. Dominique Morvan. Wildfires Modelling: Short Overview, Challenges and
Perspectives. Journal of the Combustion Society of Japan, 61(196):120–
125, 2019.
32. Lloyd Motz and Jefferson Hane Weaver. The perfect gas law. In The
Concepts of Science: From Newton to Einstein, pages 284–308. Springer
US, Boston, MA, 1988.
33. Todd W. Neller and Clifton G.M. Presser. Optimal play of the dice game
Pig. The UMAP Journal, 25(1), 2004.
34. Todd W. Neller and Clifton G.M. Presser. Practical play of the dice game
Pig. The UMAP Journal, 31(1), 2010.
35. John von Neumann. Theory of Self-Reproducing Automata. University of
Illinois Press, first edition, January 1966.
36. M. E. J. Newman. Power laws, Pareto distributions and Zipf’s law. Contem-
porary Physics, 46(5):323–351, September 2005.
37. Robert Nystrom. Game Programming Patterns. Genever Benning, 2014.
38. Ben Orlin. Math Games with Bad Drawings: 74 1/2 Simple, Challenging,
Go-Anywhere Games-and Why They Matter. Black Dog & Leventhal,
New York, NY, first edition, 2022.
39. Seymour Papert. Mindstorms: Children, Computers, and Powerful Ideas.
Basic Books, New York, 2nd edition, 1993.
40. Ed Pegg. Paterson’s Worms Revisited. https://www.mathpuzzle.com/
MAA/Worms.html, 2003.
41. Eric R. Pianka. Evolutionary Ecology. Benjamin/Cummings Life Science
Series. Benjamin/Cummings, San Francisco, sixth edition, 2000.
42. Przemysław Prusinkiewicz and Aristid Lindenmayer. The Algorithmic
Beauty of Plants. The Virtual Laboratory. Springer-Verlag, 1996.
43. Clifford A. Reiter. A local cellular model for snow crystal growth. Chaos,
Solitons & Fractals, 23(4):1111–1119, February 2005.
44. Craig W. Reynolds. Flocks, herds and schools: A distributed behavioral
model. In Proceedings of the 14th Annual Conference on Computer Graph-
ics and Interactive Techniques, pages 25–34, 1987.
45. Craig W. Reynolds. Steering behaviors for autonomous characters. In
Game Developers Conference, pages 763–782, 1999.
46. Michael L. Rosenzweig. Paradox of Enrichment: Destabilization of Ex-
ploitation Ecosystems in Ecological Time. Science, 171(3969):385–387,
January 1971.
47. Shovonlal Roy and J. Chattopadhyay. The stability of ecosystems: A
brief overview of the paradox of enrichment. Journal of Biosciences, 32
(2):421–428, March 2007.
48. Keith Schwarz. Darts, Dice, and Coins: Sampling from a Discrete Distri-
bution. https://www.keithschwarz.com/darts-dice-coins/, 2011.
49. Clinton Sheppard. Genetic Algorithms with Python. Self-published, 2018.
50. João Silva, João Marques, Inês Gonçalves, Rui Brito, Senhorinha Teixeira,
José Teixeira, and Filipe Alvelos. A Systematic Review and Bibliometric
Analysis of Wildland Fire Behavior Modeling. Fluids, 7(12):374, Decem-
ber 2022.
51. Stephen M. Stigler. Regression towards the mean, historically considered.
Statistical Methods in Medical Research, 6(2):103–114, April 1997.
52. Yuki Sugiyama, Minoru Fukui, Macoto Kikuchi, Katsuya Hasebe, Akihiro
Nakayama, Katsuhiro Nishinari, Shin-ichi Tadaki, and Satoshi Yukawa.
Traffic jams without bottlenecks—experimental evidence for the physical
mechanism of the formation of a jam. New Journal of Physics, 10(3):
033001, March 2008.
53. Dek Terrell. A test of the gambler’s fallacy: Evidence from pari-mutuel
games. Journal of Risk and Uncertainty, 8(3):309–317, May 1994.
54. Steven Tijms. Monty Hall and “the Leibniz Illusion”. CHANCE, 35(4):
4–14, October 2022.
55. Richard Vaillencourt. Simple Solutions to Energy Calculations. River
Publishers, sixth edition, 2021.
56. Dennie Van Tassel. Program Style, Design, Efficiency, Debugging, and
Testing. Prentice-Hall, Englewood Cliffs, NJ, second edition, 1978.
57. Howard (Howie) Weiss. The SIR model and the foundations of public
health. Materials Matematics, pages 1–17, 2013.
58. Charles Wetherell. Etudes for Programmers. Prentice-Hall, Englewood
Cliffs, NJ, 1978.
59. Robert J. Whitaker. A note on the Blackburn pendulum. American
Journal of Physics, 59(4):330–333, April 1991.
60. Stephen Wolfram. A New Kind of Science. Wolfram Media, Champaign,
IL, 2002.
61. Ronald Wyllys. Empirical and theoretical bases of Zipf’s law. Library
trends, 30(1):53–64, 1981.
62. R. D. Zinck and V. Grimm. More realistic than anticipated: A classical
forest-fire model from statistical physics captures real fire shapes. The
Open Ecology Journal, 1(1), 2008.
Index
80/20 rule, see Pareto principle
acceleration, 47, 101
adaptive evolution, 135
angular velocity, 60
backpropagation, 217
binomial distribution, 78
biomimicry, 99
Blackburn pendulum, 55
boid, 106
Boyle-Mariotte’s law, 24
Brownian motion, 42
cellular automaton, 160
central limit theorem, 78
Charles’s law, 44
class, 2
collision
  central, 30
  elastic, 30
computational modeling, 11
computational universality, 170
computer simulations, 11
Conway, John, 161
data class, 2
determinism, 204
Dewdney, Alexander, 135
double buffering, 18
Drossel-Schwabl forest-fire model, 153
emergence, 106, 112, 166, 169
epitrochoid, 67
equilibrium, 56
escape velocity, 53
event loop, 15
event-driven architecture, 15, 21
events and event handlers, 8
evolutionary computation, 144
finite state machine (FSM), 183
flickering effect, 17
flocking behavior, 106
Galton board, 79
Gardner, Martin, 217
genetic algorithms, 144
Hardin, Garrett, 119
Hexapawn, game, 217
Hooke’s law, 56, 58
hypotrochoid, 62
ideal gas, 19
inertia, 100
kinetic theory of gases, 12, 18
Knuth-Morris-Pratt algorithm, 190
L-system, 112
  axiom, 114
  deterministic context free, 114
  production rules, 114
Langton’s ant, 166
law of large numbers, 69
laws of conservation, 30
Life, game, 160
Lindenmayer systems, see L-system
Lissajous figures, 55
log-log plot, 82
logistic function, 120
Logo programming language, 6
Lotka-Volterra equations, 133
Markov
  chain, 204
  decision process (MDP), 215
  process, 204
  property, 207
Maxwell’s demon, 44
MENACE matchbox device, 217
Monte Carlo
self-similarity, 112
shell theorem, 56
simple harmonic motion, 55
Simulated Evolution model, 135
SIR model, 159
SIRV model, 159
speed, 12
spirograph, 62
steering, 100
stochastic process, 69
string matching, 189
symmetry, 112