Overtones – What They Are And How To Compute Them

In theory, hitting a C on a piano (say the C just above concert pitch A, which sounds at 523.25 Hz) should produce a sound wave at that frequency and nothing else. However, if you run the resulting audio through a spectrum analyzer, it becomes obvious that there’s much more going on. This is true for all other instruments, from tubas to trumpets, bassoons to flutes, contrabasses to violins. Play any note and you’ll get a package of sound waves at different frequencies rather than just one.

First of all: why is that? Let’s focus on stringed instruments. When you pluck the string, it goes into its most basic vibration mode: it moves up and down as a whole at a certain frequency f. This is the so-called first harmonic (or fundamental). But shortly after that, the nature of the vibration changes and the string enters a second mode: while one half of the string moves up, the other half moves down. This happens naturally and is just part of the string’s dynamics. In this mode, called the second harmonic, the vibration speeds up to a frequency of 2 * f. The story continues in this fashion as other modes of vibration appear: the third harmonic at a frequency of 3 * f, the fourth harmonic at 4 * f, and so on.

(Image: the vibration modes of a string and the resulting overtones)

A note is determined by its frequency. As already stated, this C on the piano should produce a sound wave with a frequency of 523.25 Hz. And indeed it does produce said sound wave, but that is only the first harmonic. As the string continues to vibrate, all the other harmonics follow, producing overtones. In the picture below you can see which notes you get when playing a C (the overtone series):

(The marked notes are only approximations. Taken from http://legacy.earlham.edu)

Quite the package! And note that the major chord is fully included within the first four overtones. So it’s buy a note, get a chord free. And unless you produce a note digitally, there’s no avoiding it. You might wonder why it is that we don’t seem to perceive the additional notes. Well, we do and we don’t. We don’t perceive the overtones consciously because the amplitude, and thus the volume, of each harmonic is smaller than the amplitude of the previous one (this is only a rule of thumb, though; every instrument emphasizes certain overtones in particular). But I can assure you that when listening to a digitally produced note, you’ll feel that something’s missing. It will sound bland and cold. So unconsciously, we do perceive and desire the overtones.

If you’re not interested in mathematics, feel free to stop reading now (I hope you enjoyed the post so far). For all others: let’s get down to some mathematical business. The frequency of a note, or rather of its first harmonic, can be computed via:

(1) f(n) = 440 * 2^(n/12)

With n = 0 being concert pitch A (440 Hz) and each step of n corresponding to one half-tone. For example, from concert pitch A to the C above it there are n = 3 half-tone steps (A#, B, C). So the frequency of this C is:

f(3) = 440 * 2^(3/12) ≈ 523.25 Hz

As expected. Given a fundamental frequency f = F, corresponding to a half-step value of n = N, the frequency of the k-th harmonic is just:

(2) f(k) = k * F = k * 440 * 2^(N/12)

Equating (1) and (2), we get a relationship that enables us to identify the musical pitch of any overtone:

440 * 2^(n/12) = k * 440 * 2^(N/12)

2^(n/12) = k * 2^(N/12)

n/12 * ln(2) = ln(k) + N/12 * ln(2)

n/12 = ln(k)/ln(2) + N/12

(3) n – N = 12 * ln(k) / ln(2) ≈ 17.31 * ln(k)

The equation results in this table:

k    n – N (rounded)
1    0
2    12
3    19
4    24
5    28

And so on. How does this tell us where the overtones are? Read it like this:

  • The first harmonic (k = 1) is zero half-steps from the fundamental (n-N = 0). So far, so duh.
  • The second harmonic (k = 2) is twelve half-steps, or one octave, from the fundamental (n-N = 12).
  • The third harmonic (k = 3) is nineteen half-steps, or one octave and a fifth, from the fundamental (n-N = 19).
  • The fourth harmonic (k = 4) is twenty-four half-steps, or two octaves, from the fundamental (n-N = 24).
  • The fifth harmonic (k = 5) is twenty-eight half-steps, or two octaves and a major third, from the fundamental (n-N = 28).

So the formula indeed produces the correct overtone series, and the same holds for any note: the second harmonic is exactly one octave above the fundamental, the third harmonic one octave and a fifth above it, and so on. The corresponding major chord is always contained within the first five harmonics.
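If you’d like to verify the numbers yourself, here’s a minimal Python sketch (the function names are my own, nothing standard) that reproduces formula (1) and the table above:

import math

# Equation (1): frequency of the note n half-steps above concert pitch A (440 Hz)
def note_frequency(n):
    return 440 * 2 ** (n / 12)

# Equation (3): half-step distance between the k-th harmonic and the fundamental
def harmonic_offset(k):
    return 12 * math.log(k) / math.log(2)

print(round(note_frequency(3), 2))        # 523.25 Hz, the C three half-steps above A

for k in range(1, 6):
    print(k, round(harmonic_offset(k)))   # 0, 12, 19, 24, 28 – the table from above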

The Doppler Effect in Pictures

The siren of an approaching police car will sound at a higher pitch, the light of an approaching star will be shifted towards blue, and a passing supersonic jet will create a violent thunderclap. What do these phenomena have in common? All of them are a result of the Doppler effect. To understand how it arises, just take a look at the animations below.

Stationary Source: The waves coming from the source propagate symmetrically.

Subsonic Source (moving below sound speed): Compression of waves in direction of motion.

Sonic Source (moving at sound speed): Maximum compression.

Supersonic Source (moving beyond sound speed): Source overtakes its waves, formation of Mach cone and sonic boom.


(Pictures taken from http://www.acs.psu.edu)

A Brief Look At Car-Following Models

Recently I posted a short introduction to recurrence relations – what they are and how they can be used for mathematical modeling. This post expands on the topic, as car-following models are a nice example of recurrence relations applied to the real world.

Suppose a car is traveling on the road at the speed u(t) at time t. Another car approaches this car from behind and starts following it. Obviously the driver of the car that is following cannot choose his speed freely. Rather, his speed v(t) at time t will be a result of whatever the driver in the leading car is doing.

The most basic car-following model assumes that the acceleration a(t) at time t of the follower is determined by the difference in speeds. If the leader is faster than the follower, the follower accelerates. If the leader is slower than the follower, the follower decelerates. The follower assumes a constant speed if there’s no speed difference. In mathematical form, this statement looks like this:

a(t) = λ * (u(t) – v(t))

The factor λ (sensitivity) determines how strongly the follower accelerates in response to a speed difference. To be more specific: it is the acceleration that results from a speed difference of one unit.

——————————————

Before we go on: how is this a recurrence relation? In a recurrence relation we determine a quantity from its values at an earlier time. This seems to be missing here. But remember that the acceleration is given by:

a(t) = (v(t+h) – v(t)) / h

with h being a small time span. Inserting this into the above car-following equation and solving for v(t+h) gives v(t+h) = v(t) + h * λ * (u(t) – v(t)), which is indeed a recurrence relation.

——————————————

Our model is still very crude. Here’s the biggest problem: The response of the driver is instantaneous. He picks up the speed difference at time t and turns this information into an acceleration also at time t. But more realistically, there will be a time lag. His response at time t will be a result of the speed difference at an earlier time t – Λ, with Λ being the reaction time.

a(t) = λ * (u(t – Λ) – v(t – Λ))

The reaction time is usually on the order of one second and consists of the time needed to process the information as well as the time it takes to move the muscles and press the pedal. There are several things we can do to make the model even more realistic. First of all, studies show that the speed difference is not the only factor. The distance d(t) between the leader and the follower also plays an important role. The smaller it is, the stronger the follower will react. We can take this into account by putting the distance in the denominator:

a(t) = (λ / d(t)) * (u(t – Λ) – v(t – Λ))

You can also interpret this as making the sensitivity distance-dependent. There’s still one adjustment we need to make. The above model allows any value of acceleration, but we know that we can only reach certain maximum values in a car. Let’s symbolize the maximum acceleration by a(acc) and the maximum deceleration by a(dec). The latter will be a number smaller than zero since deceleration is by definition negative acceleration. We can write:

a(t) = a(acc) if (λ / d(t)) * (u(t – Λ) – v(t – Λ)) > a(acc)
a(t) = a(dec) if (λ / d(t)) * (u(t – Λ) – v(t – Λ)) < a(dec)
a(t) = (λ / d(t)) * (u(t – Λ) – v(t – Λ)) else

It probably looks simpler using an if-statement:

a(t) = (λ / d(t)) * (u(t – Λ) – v(t – Λ))

IF a(t) > a(acc) THEN
a(t) = a(acc)
ELSEIF a(t) < a(dec) THEN
a(t) = a(dec)
END IF
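To see all of these pieces working together, here’s a minimal Python sketch of such a simulation. All parameter values and names are my own illustrative choices (not from any standard reference), and the leader simply brakes at a constant rate:

h = 0.1                     # time step in seconds
lam = 25.0                  # sensitivity λ (scaled by the distance below)
reaction = 1.0              # reaction time Λ in seconds
a_acc, a_dec = 3.0, -8.0    # maximum acceleration and deceleration in m/s^2

lag = int(reaction / h)     # reaction time expressed in time steps
u = [25.0] * (lag + 1)      # leader speed history in m/s
v = [25.0] * (lag + 1)      # follower speed history in m/s
d = 50.0                    # initial gap between the cars in meters

for step in range(400):                          # simulate 40 seconds
    u.append(max(u[-1] - 1.5 * h, 20.0))         # leader brakes gently from 25 to 20 m/s
    # acceleration from the delayed speed difference, scaled by the current distance
    a = (lam / d) * (u[-1 - lag] - v[-1 - lag])
    a = min(max(a, a_dec), a_acc)                # clamp to the physical limits
    v.append(v[-1] + a * h)
    d = d + (u[-1] - v[-1]) * h                  # the gap changes with the speed difference

print(round(v[-1], 2), round(d, 2))              # follower speed and remaining gap

With these example numbers the follower settles at the leader’s new speed after a while, having lost part of the initial gap during the reaction lag.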

This model already catches a lot of nuances of car traffic. I hope I was able to give you some insight into what car-following models are and how you can fine-tune them to satisfy certain conditions.

Recurrence Relations – A Simple Explanation And Much More

Recurrence relations are a powerful tool for mathematical modeling and numerically solving differential equations (no matter how complicated). And as luck would have it, they are relatively easy to understand and apply. So let’s dive right into it using a purely mathematical example (for clarity) before looking at a real-world application.

This equation is a typical example of a recurrence relation:

x(t+1) = 5 * x(t) + 2 * x(t-1)

At the heart of the equation is a certain quantity x. It appears three times: x(t+1) stands for the value of this quantity at time t+1 (next month), x(t) for the value at time t (current month) and x(t-1) for the value at time t-1 (previous month). So what the relation allows us to do is to determine the value of said quantity for the next month, given that we know it for the current and previous month. Of course the choice of time span here is arbitrary; it might as well be a decade or a nanosecond. What’s important is that we can use the last two values in the sequence to determine the next value.

Suppose we start with x(0) = 0 and x(1) = 1. With the recurrence relation we can continue the sequence step by step:

x(2) = 5 * x(1) + 2 * x(0) = 5 * 1 + 2 * 0 = 5

x(3) = 5 * x(2) + 2 * x(1) = 5 * 5 + 2 * 1 = 27

x(4) = 5 * x(3) + 2 * x(2) = 5 * 27 + 2 * 5 = 145

And so on. Once we’re given the “seed”, determining the sequence is not that hard. It’s just a matter of plugging in the last two data points and doing the calculation. The downside to defining a sequence recursively is that if you want to know x(500), you have to go through hundreds of steps to get there. Luckily, this is not a problem for computers.
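Here’s a quick Python sketch (just a plain loop, nothing fancy) that continues the sequence for us:

x = [0, 1]                       # the seed values x(0) and x(1)
for t in range(1, 500):
    x.append(5 * x[t] + 2 * x[t - 1])

print(x[2], x[3], x[4])          # 5 27 145 – matches the calculation by hand
print(len(str(x[500])))          # x(500) has a few hundred digits, no problem for Python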

In the most general terms, a recurrence relation relates the value of quantity x at a time t + 1 to the values of this quantity x at earlier times. The time itself could also appear as a factor. So this here would also be a legitimate recurrence relation:

x(t+1) = 5 * t * x(t) – 2 * x(t-10)

Here we calculate the value of x at time t+1 (next month) from its value at time t (current month) and at time t – 10 (ten months ago). Note that in this case you need eleven seed values to be able to continue the sequence indefinitely. If we are only given x(0) = 0 and x(10) = 50, we can do the next step:

x(11) = 5 * 10 * x(10) – 2 * x(0) = 5 * 10 * 50 – 2 * 0 = 2500

But we run into problems after that:

x(12) = 5 * 11 * x(11) – 2 * x(1) = 5 * 11 * 2500 – 2 * x(1) = ?

We already calculated x(11), but there’s nothing we can do to deduce x(1).

Now let’s look at one interesting application of such recurrence relations, modeling the growth of animal populations. We’ll start with a simple model that relates the number of animals x in the next month t+1 to the number of animals x in the current month t as such:

x(t+1) = x(t) + f * x(t)

The factor f is a constant that determines the rate of growth (to be more specific: its value is the decimal percentage change from one month to the next). So if our population grows by 25 % each month, we get:

x(t+1) = x(t) + 0.25 * x(t)

If we start with x(0) = 100 rabbits at month t = 0 we get:

x(1) = x(0) + 0.25 * x(0) = 100 + 0.25 * 100 = 125 rabbits

x(2) = x(1) + 0.25 * x(1) = 125 + 0.25 * 125 = 156 rabbits

x(3) = x(2) + 0.25 * x(2) = 156 + 0.25 * 156 = 195 rabbits

x(4) = x(3) + 0.25 * x(3) = 195 + 0.25 * 195 = 244 rabbits

x(5) = x(4) + 0.25 * x(4) = 244 + 0.25 * 244 = 305 rabbits

And so on. Maybe you already see the main problem with this exponential model: it just keeps on growing. This is fine as long as the population is small and the environment rich in resources, but every environment has its limits. Let’s fix this problem by including an additional term in the recurrence relation that will lead to this behavior:

- Exponential growth as long as the population is small compared to the capacity
- Slowing growth near the capacity
- No growth at capacity
- Population decline when over the capacity

How can we translate this into mathematics? It takes a lot of practice to be able to tweak a recurrence relation to get the behavior you want. You just learned your first chord and I’m asking you to play Mozart, that’s not fair. But take a look at this bad boy:

x(t+1) = x(t) + a * x(t) * (1 – x(t) / C)

This is called the logistic model and the constant C represents said capacity. If x is much smaller than the capacity C, the ratio x / C will be close to zero and we are left with exponential growth:

x(t+1) ≈ x(t) + a * x(t) * (1 – 0)

x(t+1) ≈ x(t) + a * x(t)

So this admittedly complicated looking recurrence relation fulfills our first demand: exponential growth for small populations. What happens if the population x reaches the capacity C? Then all growth should stop. Let’s see if this is the case. With x = C, the ratio x / C is obviously equal to one, and in this case we get:

x(t+1) = x(t) + a * x(t) * (1 – 1)

x(t+1) = x(t)

The number of animals remains constant, just as we wanted. Last but not least, what happens if (for some reason) the population gets past the capacity, meaning that x is greater than C? In this case the ratio x / C is greater than one (let’s just say x / C = 1.2 for the sake of argument):

x(t+1) = x(t) + a * x(t) * (1 – 1.2)

x(t+1) = x(t) + a * x(t) * (- 0.2)

The second term is now negative and thus x(t+1) will be smaller than x(t) – a decline back to capacity. What an enormous amount of beautiful behavior in such a compact line of mathematics! This is where the power of recurrence relations comes to light. Anyways, let’s go back to our rabbit population. We’ll let them grow by 25 % (a = 0.25), but this time on an island that can only sustain 300 rabbits at most (C = 300). Thus the model looks like this:

x(t+1) = x(t) + 0.25 * x(t) * (1 – x(t) / 300)

If we start with x(0) = 100 rabbits at month t = 0 we get:

x(1) = 100 + 0.25 * 100 * (1 – 100 / 300) = 117 rabbits

x(2) = 117 + 0.25 * 117 * (1 – 117 / 300) = 135 rabbits

x(3) = 135 + 0.25 * 135 * (1 – 135 / 300) = 153 rabbits

x(4) = 153 + 0.25 * 153 * (1 – 153 / 300) = 172 rabbits

x(5) = 172 + 0.25 * 172 * (1 – 172 / 300) = 190 rabbits

Note that now the growth is almost linear rather than exponential and will slow down further the closer we get to the capacity (continue the sequence if you like, it will gently approach 300, but never go past it).
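If you’d rather let the computer crunch through the months, here’s a minimal Python version of this logistic model (the variable names are my own):

x = 100.0                 # initial population
a, C = 0.25, 300.0        # growth rate and capacity

for t in range(1, 6):
    x = x + a * x * (1 - x / C)
    print(t, round(x))    # 117, 134, 153, 172, 190 – any difference of one rabbit
                          # compared to the hand calculation is just rounding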

We can even go further and include random events in a recurrence relation. Let’s stick to the rabbits and their logistic growth and say that there’s a p = 5 % chance that in a certain month a flood occurs. If this happens, the population will halve. If no flood occurs, it will grow logistically as usual. This is what our new model looks like in mathematical terms:

x(t+1) = x(t) + 0.25 * x(t) * (1 – x(t) / 300)    if no flood occurs

x(t+1) = 0.5 * x(t)    if a flood occurs

To determine if there’s a flood, we let a random number generator spit out a number between 1 and 100 at each step. If the number is 5 or smaller, we use the “flood” equation (in accordance with the 5 % chance for a flood). Again we turn to our initial population of 100 rabbits with the growth rate and capacity unchanged:

x(1) = 100 + 0.25 * 100 * (1 – 100 / 300) = 117 rabbits

x(2) = 117 + 0.25 * 117 * (1 – 117 / 300) = 135 rabbits

x(3) = 135 + 0.25 * 135 * (1 – 135 / 300) = 153 rabbits

x(4) = 0.5 * 153 = 77 rabbits

x(5) = 77 + 0.25 * 77 * (1 – 77 / 300) = 91 rabbits

As you can see, in this run the random number generator gave a number 5 or smaller during the fourth step. Accordingly, the number of rabbits halved. You can do a lot of shenanigans (and some useful stuff as well) with recurrence relations and random numbers, the sky’s the limit. I hope this quick overview was helpful.
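As a sketch, the random model could be coded like this in Python (every run produces a different sequence, of course; in the run above the flood happened to hit in month four):

import random

x = 100.0
a, C, p = 0.25, 300.0, 0.05      # growth rate, capacity, monthly flood probability

for t in range(1, 6):
    if random.random() < p:      # flood: the population halves
        x = 0.5 * x
    else:                        # no flood: ordinary logistic growth
        x = x + a * x * (1 - x / C)
    print(t, round(x))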

A note for the advanced: here’s how you turn a differential equation into a recurrence relation. Let’s take this differential equation:

dx/dt = a * x * exp(- b*x)

First multiply by dt:

dx = a * x * exp(- b * x) * dt

We set dx (the change in x) equal to x(t+h) – x(t) and dt (change in time) equal to a small constant h. Of course for x we now use x(t):

x(t+h) – x(t) = a * x(t) * exp(- b * x(t)) * h

Solve for x(t+h):

x(t+h) = x(t) + a * x(t) * exp(- b * x(t)) * h

And done! The smaller your h, the more accurate your numerical results. How low you can go depends on your computer’s computing power.
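In Python, this numerical scheme is only a handful of lines (a and b are example values I picked; they are not from any particular application):

import math

a, b = 0.5, 0.01      # example parameters, chosen arbitrarily
h = 0.001             # step size: the smaller, the more accurate
x = 1.0               # initial value x(0)
t = 0.0

while t < 10.0:
    x = x + a * x * math.exp(-b * x) * h    # x(t+h) = x(t) + a*x(t)*exp(-b*x(t))*h
    t = t + h

print(round(x, 3))    # approximate value of x at time t = 10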

Want to learn German? Get ready for brain-overload …

(Image: table of the German articles)

(Note how the feminine article turns into the masculine article for absolutely no reason)


Need more? Here’s how to conjugate the verb “to work” = “arbeiten” in English and German:

I work – ich arbeite
you work – du arbeitest
he/she/it works – er/sie/es arbeitet
we work – wir arbeiten
you work – ihr arbeitet
they work – sie/Sie arbeiten

(Just FYI: You say “du” when talking to one person and “ihr” when talking to a group. In English you use “you” in both cases)

But on the bright side, the tenses are a lot easier … no continuous forms and the perfect form is identical in meaning to the simple form … so there’s that

Einstein’s Special Relativity – The Core Idea

It might surprise you that a huge part of Einstein’s Special Theory of Relativity can be summed up in just one simple sentence. Here it is:

“The speed of light is the same in all frames of reference”

In other words: no matter what your location or speed is, you will always measure the speed of light to be c = 300,000,000 m/s (approximate value). Not really that fascinating, you say? Think of the implications. This sentence not only spells the doom of classical physics, it also forces us to give up our notions of time. How so?

Suppose you watch a train driving off into the distance at v = 30 m/s relative to you. Now someone on the train throws a tennis ball forward at u = 10 m/s relative to the train. How fast do you perceive the ball to be moving? Intuitively, we simply add the velocities. If the train drives off at 30 m/s and the ball adds another 10 m/s to that, it should have the speed w = 40 m/s relative to you. Any measurement would confirm this and all is well.

Now imagine (and I mean really imagine) said train is driving off into the distance at half the speed of light, or v = 0.5 * c. Someone on the train shines a flashlight forwards. Obviously, this light is going at light speed relative to the train, or u = c. How fast do you perceive the light to be moving? We have the train at 0.5 * c and the light photons at the speed c on top of that, so according to our intuition we should measure the light at a velocity of w = 1.5 * c. But now think back to the above sentence:

“The speed of light is the same in all frames of reference”

No matter how fast the train goes, we will always measure light coming from it at the same speed, period. Here, our intuition differs from physical reality. This becomes even clearer when we take it a step further. Let’s have the train drive off at almost light speed and have someone on the train shine a flashlight forwards. We know the light photons to travel at light speed, so from our perspective the train is almost able to keep up with the light. An observer on the train would strongly disagree. For him the light beam is moving away as it always does and the train is not keeping up with the light in any way.

How is this possible? Both you and the observer on the train describe the same physical reality, but the perception of it is fundamentally different. There is only one way to make the disagreement go away and that is by giving up the idea that one second for you is the same as one second on the train. If you make the intervals of time dependent on speed in just the right fashion, all is well.

Suppose that one second for you is only one microsecond on the train. In your one second the distance between the train and the light beam grows by 300 meters. So you say: the light is going 300 m / 1 s = 300 m/s faster than the train.

However, for the people on the train, this same 300 meter distance arises in just one microsecond, so they say: the light is going 300 m / 1 µs = 300 m / 0.000001 s = 300,000,000 m/s faster than the train – as fast as it always does.

Note that this is a case of either/or. If the speed of light is the same in all frames of reference, then we must give up our notions of time. If the speed of light depends on your location and speed, then we get to keep our intuitive image of time. So what do the experiments say? All experiments on this matter agree that the speed of light is indeed the same in all frames of reference, and thus our everyday perception of time is just a first approximation to reality.

How Statistics Turned a Harmless Nurse Into a Vicious Killer

Let’s do a thought experiment. Suppose you have 2 million coins at hand and a machine that will flip them all at the same time. After twenty flips, you evaluate and you come across one particular coin that showed heads twenty times in a row. Suspicious? Alarming? Is there something wrong with this coin? Let’s dig deeper. How likely is it that a coin shows heads twenty times in a row? Luckily, that’s not so hard to compute. For each flip there’s a 0.5 probability that the coin shows heads, and the chance of seeing this twenty times in a row is just 0.5^20 = 0.000001 (rounded). So the odds of this happening are incredibly low. Indeed we stumbled across a very suspicious coin. Deep down I always knew there was something up with this coin. It just had this “crazy flip”, you know what I mean? Guilty as charged and end of story.

Not quite, you say? You are right. After all, we flipped 2 million coins. If the odds of twenty heads in a row are 0.000001, we should expect 0.000001 * 2,000,000 = 2 coins to show this unlikely string. It would be much more surprising not to find this string among the large number of trials. Suddenly, the coin with the supposedly “crazy flip” doesn’t seem so guilty anymore.
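The two numbers in this argument are easy to verify, for example with two lines of Python:

p = 0.5 ** 20            # probability of twenty heads in a row: about 0.000001
print(p * 2_000_000)     # ≈ 1.9 – so we expect roughly two such coins among two million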

What’s the point of all this? Recently, I came across the case of Lucia de Berk, a Dutch nurse who was accused of murdering patients in 2003. Over the course of one year, seven of her patients had died, and a “sharp” medical expert concluded that there was only a 1 in 342 million chance of this happening. This number and some other pieces of “evidence” (among them, her “odd” diary entries and her “obsession” with Tarot cards) led the court in The Hague to conclude that she must be guilty as charged, end of story.

Not quite, you say? You are right. In 2010 came the not-guilty verdict. Turns out (funny story) she never committed any murder; she was just a harmless nurse who was transformed into a vicious killer by faulty statistics. Let’s go back to the thought experiment for a moment, imperfect for this case though it may be. Imagine that each coin represents a nurse and each flip a month of duty. It is estimated that there are around 300,000 hospitals worldwide, so we are talking about a lot of nurses/coins doing a lot of work/flips. Should we become suspicious when seeing a string of several deaths for a particular nurse? No, of course not. By pure chance alone, this will occur somewhere. It would be much more surprising not to find a nurse with a “suspicious” string of deaths among this large number of nurses. Focusing in on one nurse only blurs the big picture.

And, leaving statistics behind, the case also goes to show that you can always find something “odd” about a person if you want to. Faced with new information, even if not reliable, you interpret the present and past behavior in a “new light”. The “odd” diary entries, the “obsession” with Tarot cards … weren’t the signs always there?

Be careful when judging. Benjamin Franklin once said he should consider himself lucky if he’s right 50 % of the time. And that’s a genius talking, so I don’t even want to know my stats …