New Release for Kindle: Math Shorts – Derivatives

The rich and exciting field of calculus begins with the study of derivatives. This book is a practical introduction to derivatives, filled with down-to-earth explanations, detailed examples and lots of exercises (solutions included). It takes you from the basic functions all the way to advanced differentiation rules and proofs. Check out the sample for the table of contents and a taste of the action. From the author of “Mathematical Shenanigans”, “Great Formulas Explained” and the “Math Shorts” series. A supplement to this book is available under the title “Exercises to Math Shorts – Derivatives”. It contains an additional 28 exercises including detailed solutions.

Note: Except for the very basics of algebra, no prior knowledge is required to enjoy this book.

Table of Contents:

- Section 1: The Big Picture

- Section 2: Basic Functions and Rules

Power Functions
Sum Rule and Polynomial Functions
Exponential Functions
Logarithmic Functions
Trigonometric Functions

- Section 3: Advanced Differentiation Rules

I Know That I Know Nothing
Product Rule
Quotient Rule
Chain Rule

- Section 4: Limit Definition and Proofs

The Formula
Power Functions
Constant Factor Rule and Sum Rule
Product Rule

- Section 5: Appendix

Solutions to the Problems
Copyright and Disclaimer
Request to the Reader

Differential Equations – The Big Picture

Population Growth

So you want to learn about differential equations? Excellent choice. Differential equations are not only of central importance to science, they can also be quite stimulating and fun (that’s right). In the broadest sense, a differential equation is any equation that connects a function with one or more of its derivatives. What makes these kinds of equations particularly important?

Remember that a derivative expresses the rate of change of a quantity. So the differential equation basically establishes a link between the rate of change of said quantity and its current value. Such a link is very common in nature. Consider population growth. It is obvious that the rate of change will depend on the current size of the population. The more animals there are, the more births (and deaths) we can expect and hence the faster the size of the population will change.

A commonly used model for population growth is the exponential model. It is based on the assumption that the rate of change is proportional to the current size of the population. Let’s put this into mathematical form. We will denote the size of the population by N (measured in number of animals) and the first derivative with respect to time by dN/dt (measured in number of animals per unit time). Note that other symbols often used for the first derivative are N’ and Ṅ. We will however stick to the so-called Leibniz notation dN/dt as it will prove to be quite instructive when dealing with separable differential equations. That said, let’s go back to the exponential model.

With N being the size of the population and dN/dt the corresponding rate of change, our assumption of proportionality between the two looks like this:

dN/dt = r · N

with r being a constant. We can interpret r as the growth rate. If r > 0, the population will grow; if r < 0, it will shrink. This model has proven to be successful for relatively small animal populations. However, there’s one big flaw: there is no limiting value. According to the model, the population would just keep on growing and growing until it consumes the entire universe. Obviously and luckily, bacteria in a Petri dish don’t behave this way. For a more accurate model, we need to take into account the limits of the environment.

The differential equation that forms the basis of the logistic model, called the Verhulst equation in honor of the Belgian mathematician Pierre François Verhulst, does just that. Just like the differential equation for exponential growth, it relates the current size N of the population to its rate of change dN/dt, but it also takes into account the finite capacity K of the environment:

dN/dt = r · N · (1 – N/K)

Take a careful look at the equation. Even without any calculations a differential equation can tell a vivid story. Suppose for example that the population is very small. In this case N is much smaller than K, so the ratio N/K is close to zero. This means that we are back to the exponential model. Hence, the logistic model contains the exponential model as a special case. Great! The other extreme is N = K, that is, when the size of the population reaches the capacity. In this case the ratio N/K is one and the rate of change dN/dt becomes zero, which is exactly what we were expecting. No more growth at the capacity.
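
If you want to see this behavior with your own eyes, here is a minimal Python sketch (my own, not from the book) that integrates the logistic equation with a simple Euler scheme. The values for r, K and the starting population are made up purely for illustration.

def logistic_rate(N, r, K):
    # rate of change dN/dt = r * N * (1 - N/K) for the logistic (Verhulst) model
    return r * N * (1 - N / K)

def simulate(N0=10.0, r=0.5, K=1000.0, dt=0.1, steps=200):
    # naive Euler integration: N(t + dt) = N(t) + dN/dt * dt
    N, history = N0, [N0]
    for _ in range(steps):
        N += logistic_rate(N, r, K) * dt
        history.append(N)
    return history

pop = simulate()
# small populations grow almost exponentially, then growth levels off near K
print(round(pop[0]), round(pop[50]), round(pop[-1]))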

Definition and Equation of Motion

Now that you have seen two examples of differential equations, let’s generalize the whole thing. For starters, note that we can rewrite the two equations as such:

dN/dt – r · N = 0

dN/dt – r · N · (1 – N/K) = 0

Denoting the dependent variable with x (instead of N) and higher order derivatives with dⁿx/dtⁿ (with n = 2 resulting in the second derivative, n = 3 in the third derivative, and so on), the general form of a differential equation looks like this:

F(x, dx/dt, d²x/dt², … , dⁿx/dtⁿ, t) = 0

Wow, that looks horrible! But don’t worry. We just stated in the broadest way possible that a differential equation is any equation that connects a function x(t) with one or more of its derivatives dx/dt, d²x/dt², and so on. The above differential equation is said to be of order n. Up to now, we’ve only been dealing with first order differential equations.

The following equation is an example of a second order differential equation that you’ll frequently come across in physics. Its solution x(t) describes the position or angle over time of an oscillating object (spring, pendulum).

d²x/dt² = – c · x

with c being a constant. Second order differential equations often arise naturally from Newton’s equation of motion. This law, which even the most ruthless crook will never be able to break, states that the object’s mass m times the acceleration a experienced by it is equal to the applied net force F:

m · a = F

The force can be a function of the object’s location x (spring), the velocity v = dx/dt (air resistance), the acceleration a = d²x/dt² (Bremsstrahlung) and time t (motor):

F = F(x, dx/dt, d²x/dt², t)

Hence the equation of motion becomes:

m · d²x/dt² = F(x, dx/dt, d²x/dt², t)

This is a second order differential equation that leads to the object’s position over time x(t), given the forces involved in shaping its motion. It might not look pretty to some (it does to me), but there’s no doubt that it is extremely powerful and useful.
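
To make this a little more concrete, here is a minimal Python sketch (again my own) that solves the oscillator equation d²x/dt² = – c · x numerically by splitting it into the two first-order equations dx/dt = v and dv/dt = – c · x. The constant c and the initial conditions are placeholder values.

import math

def simulate_oscillator(c=4.0, x0=1.0, v0=0.0, dt=0.001, steps=5000):
    # semi-implicit Euler: update the velocity first, then the position
    x, v = x0, v0
    for _ in range(steps):
        v += -c * x * dt
        x += v * dt
    return x

t_end = 5000 * 0.001
print("numerical x(t):", round(simulate_oscillator(), 4))
# for these initial conditions the exact solution is x(t) = cos(sqrt(c) * t)
print("analytic  x(t):", round(math.cos(math.sqrt(4.0) * t_end), 4))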

Equilibrium Points

To demonstrate what equilibrium points are and how to compute them, let’s take the logistic model a step further. In the absence of predators, we can assume the fish in a certain lake to grow according to Verhulst’s equation. The presence of fishermen obviously changes the dynamics of the population. Every time a fisherman goes out, he will remove some of the fish from the population. It is safe to assume that the success of the fisherman depends on the current size of the population: the more fish there are, the more he will be able to catch. We can set up a modified version of Verhulst’s equation to describe the situation mathematically:

dN/dt = r · N · (1 – N/K) – c · N

with a constant c > 0 that depends on the total number of fishermen, the frequency and duration of their fishing trips, the size of their nets, and so on. Solving this differential equation is quite difficult. However, what we can do with relative ease is finding equilibrium points.

Remember that dN/dt describes the rate of change of the population. Hence, by setting dN/dt = 0, we can find out if and when the population reaches a constant size. Let’s do this for the above equation.

0 = r · N · (1 – N/K) – c · N

This leads to two solutions:

N = 0

N = K · (1 – c/r)

The first equilibrium point is quite boring. Once the population reaches zero, it will remain there. You don’t need to do math to see that. However, the second equilibrium point is much more interesting. It tells us how to calculate the size of the population in the long run from the constants. We can also see that a stable population is only possible if c < r.

Note that not all equilibrium points that we find during such an analysis are actually stable (in the sense that the system will return to the equilibrium point after a small disturbance). The easiest way to find out whether an equilibrium point is stable or not is to plot the rate of change, in this case dN/dt, over the dependent variable, in this case N. If the curve goes from positive to negative values at the equilibrium point, the equilibrium is stable, otherwise it is unstable.

(Plot: the rate of change dN/dt over the population size N; the curve crosses from positive to negative values at the stable equilibrium point.)
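
Here is a small Python sketch (not from the book) that finds the two equilibrium points of the fishing model and applies exactly this sign test. The values chosen for r, K and c are made up; the only requirement is c < r.

def rate(N, r=0.8, K=1000.0, c=0.3):
    # dN/dt = r * N * (1 - N/K) - c * N  (logistic growth minus fishing)
    return r * N * (1 - N / K) - c * N

def classify(N_eq, eps=1.0):
    # stable if dN/dt goes from positive to negative across the equilibrium
    return "stable" if rate(N_eq - eps) > 0 > rate(N_eq + eps) else "unstable"

r, K, c = 0.8, 1000.0, 0.3
equilibria = [0.0, K * (1 - c / r)]   # N = 0 and N = K * (1 - c/r)
for N_eq in equilibria:
    print(round(N_eq), classify(N_eq))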

(This was an excerpt from my e-book “Math Shorts – Introduction to Differential Equations”)

Modeling Theme Park Queues

Who doesn’t love a day at the theme park? You can go on thrilling roller‒coaster rides, enjoy elaborate shows, have a tasty lunch in between or just relax and take in the scenery. Of course there’s one thing that does spoil the fun a bit: the waiting. For the most popular attractions waiting times of around one hour are not uncommon during peak times, while the ride itself may be over in no more than two or three minutes.

Let’s work towards a basic model for queues in theme parks and other situations in which queues commonly arise. We will assume that the passing rate R(t), that is, the number of people passing the entrance of the attraction per unit time, is given. How many of these will enter the line? This will depend on the popularity of the attraction as well as the current size of the line. The more people are already in the line, the less likely others are to join. We’ll denote the number of people in the line at time t with n(t) and use this ansatz for the rate r(t) at which people join the queue:

r(t) = a · R(t) / (1 + b · n(t))

The constant a expresses the popularity of the attraction (more specifically, it is the percentage of passers‒by that will use the attraction if no queue is present) and the constant b is a “line repulsion” factor. The stronger visitors are put off by the presence of a line, the higher its value will be. How does the size of the line develop over time given the above function? We assume that the maximum serving rate is c people per unit time. So the rate of change for the number of people in line is (for n(t) ≥ 0):

dn/dt = a · R(t) / (1 + b · n(t)) – c

In case the numerical evaluation returns a value n(t) < 0 (which is obviously nonsense, but a mathematical possibility given our ansatz), we will force n(t) = 0. An interesting variation, into which we will not dive much further though, is to include a time lag. Usually the expected waiting time is displayed to visitors on a screen. The visitors make their decision on joining the line based on this information. However, the displayed waiting time is not updated in real‒time. We have to expect that there’s a certain delay d between actual and displayed length of the line. With this effect included, our equation becomes:

dn/dt = a · R(t) / (1 + b · n(t – d)) – c

Simulation

For the numerical solution we will go back to the delay‒free version. We choose one minute as our unit of time. For the passing rate, that is, the people passing by per minute, we set:

R(t) = 0.00046 · t · (720 ‒ t)

We can interpret this function as such: at t = 0 the park opens and the passing rate is zero. It then grows to a maximum of 60 people per minute at t = 360 minutes (or 6 hours) and declines again. At t = 720 minutes (or 12 hours) the park closes and the passing rate is back to zero. We will assume the popularity of the attraction to be:

a = 0.2

So if there’s no line, 20 % of all passers‒by will make use of the attraction. We set the maximum service rate to:

c = 5 people per minute

What about the “line repulsion” factor? Let’s assume that if the line grows to 200 people (given the above service rate this would translate into a waiting time of 40 minutes), the willingness to join the line drops from the initial 20 % to 10 %.

0.2 / (1 + b · 200) = 0.1

→ b = 0.005

Given all these inputs and the model equation, here’s how the queue develops over time:

(Plot: the queue size n over time t resulting from the simulation.)

It shows that no line will form until around t = 100 minutes into opening the park (at which point the passing rate reaches 29 people per minute). Then the queue size increases roughly linearly for the next several hours until it reaches its maximum value of n = 256 people (waiting time: 51 minutes) at t = 440 minutes. Note that the maximum value of the queue size occurs much later than the maximum value of the passing rate. After reaching the maximum, there’s a sharp decrease in line length until the line disappears at around t = 685 minutes. Further simulations show that if you include a delay, there’s no noticeable change as long as the delay is on the order of a few minutes.
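
In case you want to reproduce the simulation, here is a minimal Python sketch of the delay-free model with the inputs listed above. It uses a simple Euler scheme with a step size of one minute, so the exact numbers may deviate slightly from the ones quoted.

def passing_rate(t):
    # people passing the attraction per minute
    return 0.00046 * t * (720 - t)

def simulate(a=0.2, b=0.005, c=5.0, dt=1.0, t_end=720.0):
    n, t = 0.0, 0.0
    peak_n, peak_t = 0.0, 0.0
    while t < t_end:
        join = a * passing_rate(t) / (1 + b * n)   # rate of people joining the line
        n = max(0.0, n + (join - c) * dt)          # serve c per minute, no negative queue
        if n > peak_n:
            peak_n, peak_t = n, t
        t += dt
    return peak_n, peak_t

peak_n, peak_t = simulate()
print(f"peak queue of about {peak_n:.0f} people at t = {peak_t:.0f} minutes")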

(This was an excerpt from my ebook “Mathematical Shenanigans”)

The Problem With Antimatter Rockets

The distance to our neighboring star Alpha Centauri is roughly 4.3 lightyears or 25.6 trillion km. This is an enormous distance. It would take the Space Shuttle 165,000 years to cover this distance. That’s 6,600 generations of humans who’d know nothing but the darkness of space. Obviously, this is not an option. Do we have the technologies to get there within the lifespan of a person? Surprisingly, yes. The concept of antimatter propulsion might sound futuristic, but all the technologies necessary to build such a rocket exist. Today.

What exactly do you need to build an antimatter rocket? You need to produce antimatter, store antimatter (remember, if it comes in contact with regular matter it explodes, so putting it in a box is certainly not a possibility) and find a way to direct the annihilation products. Large particle accelerators such as those at CERN routinely produce antimatter (mostly anti-electrons and anti-protons). Penning traps, sophisticated arrangements of electric and magnetic fields, can store charged antimatter. And magnetic nozzles, suitable for directing the products of proton / anti-proton annihilations, have already been used in several experiments. It’s all there.

So why are we not on the way to Alpha Centauri? We should be making sweet love with green female aliens, but instead we’re still banging our regular, non-green, non-alien women. What’s the hold-up? It would be expensive. Let me rephrase that. The costs would be blasphemous, downright insane, Charlie Manson style. Making one gram of antimatter costs around $62.5 trillion, making it by far the most expensive material on Earth. And you’d need tons of the stuff to get to Alpha Centauri. Bummer! And even if we’d all get a second job to pay for it, we still couldn’t manufacture sufficient amounts in the near future. Currently 1.5 nanograms of antimatter are produced every year. Even if scientists managed to increase this rate by a factor of one million, it would still take roughly 700 years to produce one measly gram. And we need tons of it! Argh. Reality be a harsh mistress …

New Release for Kindle: Math Shorts – Integrals

Yesterday I released the second part of my “Math Shorts” series. This time it’s all about integrals. Integrals are among the most useful and fascinating mathematical concepts ever conceived. The ebook is a practical introduction for all those who don’t want to miss out. In it you’ll find down-to-earth explanations, detailed examples and interesting applications. Check out the sample (see link to product page) for a taste of the action.

Important note: to enjoy the book, you need solid prior knowledge in algebra and calculus. This means in particular being able to solve all kinds of equations, finding and interpreting derivatives as well as understanding the notation associated with these topics.

Here’s the TOC:

Section 1: The Big Picture
-Anti-Derivatives
-Integrals
-Applications

Section 2: Basic Anti-Derivatives and Integrals
-Power Functions
-Sums of Functions
-Examples of Definite Integrals
-Exponential Functions
-Trigonometric Functions
-Putting it all Together

Section 3: Applications
-Area – Basics
-Area – Numerical Example
-Area – Parabolic Gate
-Area – To Infinity and Beyond
-Volume – Basics
-Volume – Numerical Example
-Volume – Gabriel’s Horn
-Average Value
-Kinematics

Section 4: Advanced Integration Techniques
-Substitution – Basics
-Substitution – Indefinite Integrals
-Substitution – Definite Integrals
-Integration by Parts – Basics
-Integration by Parts – Indefinite Integrals
-Integration by Parts – Definite Integrals

Section 5: Appendix
-Formulas To Know By Heart
-Greek Alphabet
-Copyright and Disclaimer
-Request to the Reader

Enjoy!

New Release for Kindle: Introduction to Differential Equations

Differential equations are an important and fascinating part of mathematics with numerous applications in almost all fields of science. This book is a gentle introduction to the rich world of differential equations filled with no-nonsense explanations, step-by-step calculations and application-focused examples.

Important note: to enjoy the book, you need solid prior knowledge in algebra and calculus. This means in particular being able to solve all kinds of equations, finding and interpreting derivatives, evaluating integrals as well as understanding the notation that goes along with those.

Here’s the TOC:

Section 1: The Big Picture

-Population Growth
-Definition and Equation of Motion
-Equilibrium Points
-Some More Terminology

Section 2: Separable Differential Equations

-Approach
-Exponential Growth Revisited
-Fluid Friction
-Logistic Growth Revisited

Section 3: First Order Linear Differential Equations

-Approach
-More Fluid Friction
-Heating and Cooling
-Pure, Uncut Mathematics
-Bernoulli Differential Equations

Section 4: Second Order Homogeneous Linear Differential Equations (with Constant Coefficients)

-Wait, what?
-Oscillating Spring
-Numerical Example
-The Next Step – Non-Homogeneous Equations

Section 5: Appendix

-Formulas To Know By Heart
-Greek Alphabet
-Copyright and Disclaimer
-Request to the Reader

Note: With this book release I’m starting my “Math Shorts” Series. The next installment “Math Shorts – Integrals” will be available in just a few days! (Yes, I’m working like a mad man on it)

Motion With Constant Acceleration (Examples, Exercises, Solutions)

An abstraction often used in physics is motion with constant acceleration. This is a good approximation for many different situations: free fall over small distances or in low-density atmospheres, full braking in car traffic, an object sliding down an inclined plane, etc … The mathematics behind this special case is relatively simple. Assume the object that is subject to the constant acceleration a (in m/s²) initially has a velocity v(0) (in m/s). Since the velocity is the integral of the acceleration function, the object’s velocity after time t (in s) is simply:

1) v(t) = v(0) + a · t

For example, if a car initially goes v(0) = 20 m/s and brakes with a constant a = -10 m/s², which is a realistic value for asphalt, its velocity after a time t is:

v(t) = 20 – 10 · t

After t = 1 second, the car’s speed has decreased to v(1) = 20 – 10 · 1 = 10 m/s and after t = 2 seconds the car has come to a halt: v(2) = 20 – 10 · 2 = 0 m/s. As you can see, it’s all pretty straight-forward. Note that the negative acceleration (also called deceleration) has led the velocity to decrease over time. In a similar manner, a positive acceleration will cause the speed to go up. You can read more on acceleration in this blog post.

What about the distance x (in m) the object covers? We have to integrate the velocity function to find the appropriate formula. The covered distance after time t is:

2) x(t) = v(0) · t + 0.5 · a · t²

While that looks a lot more complicated, it is really just as straight-forward. Let’s go back to the car that initially has a speed of v(0) = 20 m/s and brakes with a constant a = -10 m/s². In this case the above formula becomes:

x(t) = 20 · t – 0.5 · 10 · t²

After t = 1 second, the car has traveled x(1) = 20 · 1 – 0.5 · 10 · 1² = 15 meters. By the time it comes to a halt at t = 2 seconds, it moved x(2) = 20 · 2 – 0.5 · 10 · 2² = 20 meters. Note that we don’t have to use the time as a variable. There’s a way to eliminate it. We could solve equation 1) for t and insert the resulting expression into equation 2). This leads to a formula connecting the velocity v and distance x.

3) v² = v(0)² + 2 · a · x

Solved for x it looks like this:

3)’ x = (v² – v(0)²) / (2 · a)

It’s a very useful formula that you should keep in mind. Suppose a tram accelerates at a constant a = 1.3 m/s², which is also a realistic value, from rest (v(0) = 0 m/s). What distance does it need to go to full speed v = 10 m/s? Using equation 3)’ we can easily calculate this:

x = (10² – 0²) / (2 · 1.3) ≈ 38.5 meters
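
If you prefer code over formulas, here is a small Python sketch (my own) of equations 1), 2) and 3)’, applied to the braking car and the tram from above.

def velocity(v0, a, t):
    return v0 + a * t                  # equation 1)

def distance(v0, a, t):
    return v0 * t + 0.5 * a * t**2     # equation 2)

def distance_from_speeds(v0, v, a):
    return (v**2 - v0**2) / (2 * a)    # equation 3)'

# braking car: v(0) = 20 m/s, a = -10 m/s²
print(velocity(20, -10, 1), "m/s after 1 s")       # 10 m/s
print(distance(20, -10, 2), "m until standstill")  # 20 m
# tram: distance needed to reach 10 m/s from rest at a = 1.3 m/s²
print(round(distance_from_speeds(0, 10, 1.3), 1), "m")  # about 38.5 m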

————————————————————————————-

Here are a few exercises and solutions using the equations 1), 2) and 3).

1. During free fall (air resistance neglected) an object accelerates with about a = 10 m/s². Suppose the object is dropped, that is, it is initially at rest (v(0) = 0 m/s).

a) What is its speed after t = 3 seconds?
b) What distance has it traveled after t = 3 seconds?
c) Suppose we drop the object from a tower that is x = 20 meters tall. At what speed will it impact the ground?
d) How long does the drop take?

Hint: in exercise d) solve equation 1) for t and insert the result from c)

2. During the reentry of spacecraft, accelerations can be as high as a = -70 m/s². Suppose the spacecraft initially moves with v(0) = 6000 m/s.

a) What’s the speed and covered distance after t = 10 seconds?
b) How long will it take the spacecraft to halve its initial velocity?
c) What distance will it travel during this time?

3. An investigator arrives at the scene of a car crash. From the skid marks he deduces that it took the car a distance x = 55 meters to come to a halt. Assume full braking (a = -10 m/s²). Was the car initially above the speed limit of 30 m/s?

————————————————————————————-

Solutions to the exercises:

Exercise 1

a) 30 m/s
b) 45 m
c) 20 m/s
d) 2 s

Exercise 2

a) 5,300 m/s and 56,500 m
b) 42.9 s (rounded)
c) 192,860 m (rounded)

Exercise 3

Yes (he was initially going 33.2 m/s)

————————————————————————————-

To learn the basic math you need to succeed in physics, check out the e-book “Algebra – The Very Basics”. For an informal introduction to physics, check out the e-book “Physics! In Quantities and Examples”. Both are available at low prices and exclusively for Kindle.

CTR (Click Through Rate) – Explanation, Results and Tips

A very important metric for banner advertisements is the CTR (click through rate). It is simply the number of clicks the ad generated divided by the number of total impressions. You can also think of it as the product of the probability of a user noticing the ad and the probability of the user being interested in the ad.

CTR = clicks / impressions = p(notice) · p(interested)

The current average CTR is around 0.09 % or 9 clicks per 10,000 impressions and has been declining for the past several years. What are the reasons for this? For one, the common banner locations are familiar to web users and are thus easy to ignore. There’s also the increased popularity of ad-blocking software.

The attitude of internet users is generally negative towards banner ads. This is caused by advertisers using more and more intrusive formats. These include annoying pop-ups and their even more irritating sisters, the floating ads. Adopting them is not favorable for advertisers. They harm a brand and produce very low CTRs. So hopefully, we will see an end to such nonsense soon.

As for animated ads, their success depends on the type of website and target group. For high-involvement websites that users visit to find specific information (news, weather, education), animated banners perform worse than static banners. In case of low-involvement websites that are put in place for random surfing (entertainment, lists, mini games) the situation is reversed. The target group also plays an important role. For B2C (business-to-consumer) ads animation generally works well, while for B2B (business-to-business) animation was shown to lower the CTR.

The language used in ads has also been extensively studied. One interesting result is that it is often preferable to use English even if the ad is displayed in a country in which English is not the first language. A more obvious result is that catchy words and calls to action (“read more”) increase the CTR.

As for the banner size, there is inconclusive data. Some analyses report that the CTR grows with banner size, while others conclude that banner sizes around 250×250 or 300×250 generate the highest CTRs. There is a clearer picture regarding shape: in terms of CTR, square shapes work better than thin rectangles of the same size. No significant difference was found between vertical and horizontal rectangles.

Here’s another hint: my own theoretical calculations show that higher CTRs can be achieved by advertising on pages that have a low visitor loyalty. The explanation for this counter-intuitive outcome as well as a more sophisticated formula for the CTR can be found here. It is, in a nutshell, a result of the multiplication rule of statistics. The calculation also shows that on sites with a low visitor loyalty the CTR will stay constant, while on websites with a high visitor loyalty it will decrease over time.

Sources and further reading:

  • Study on banner advertisement type and shape effect on click-through-rate and conversion

http://www.aabri.com/manuscripts/131481.pdf

  • The impact of banner ad styles on interaction and click-through-rates

http://iacis.org/iis/2008/S2008_989.pdf

  • Impact of animation and language on banner click-through-rates

http://www.academia.edu/1608289/Impact_of_Animation_and_Language_on_Banner_Click-Through_Rates

Mathematics of Banner Ads: Visitor Loyalty and CTR

First of all: why should a website’s visitor loyalty have any effect at all on the CTR we can expect to achieve with a banner ad? What does the one have to do with the other? To understand the connection, let’s take a look at an overly simplistic example. Suppose we place a banner ad on a website and get in total 3 impressions (granted, not a realistic number, but I’m only trying to make a point here). From previous campaigns we know that a visitor clicks on our ad with a probability of 0.1 = 10 % (which is also quite unrealistic).

The expected number of clicks from these 3 impressions is …

… 0.1 + 0.1 + 0.1 = 0.3 when all impressions come from different visitors.

… 1 – 0.9^3 = 0.27 when all impressions come from only one visitor.

(the symbol ^ stands for “to the power of”)

This demonstrates that we can expect more clicks if the website’s visitor loyalty is low, which might seem counter-intuitive at first. But the great thing about mathematics is that it cuts through bullshit better than the sharpest knife ever could. Math doesn’t lie. Let’s develop a model to show that a higher visitor loyalty translates into a lower CTR.

Suppose we got a total of I impressions on the banner ad. We’ll denote the share of these impressions that comes from visitors who contributed …

… only one impression by f(1)
… two impressions by f(2)
… three impressions by f(3)

And so on. Note that this distribution f(n) must satisfy the condition ∑[n] f(n) = 1 for it all to check out. The symbol ∑[n] stands for the sum over all n.

We’ll assume that the probability of a visitor clicking on the ad is q. The probability that this visitor clicks on the ad at least once during n visits is just: p(n) = 1 – (1 – q)^n (to understand why, you have to know about the multiplication rule of statistics – if you’re not familiar with it, my ebook “Statistical Snacks” is a good place to start).

Let’s count the expected number of clicks for the I impressions. Visitors …

… contributing only one impression give rise to c(1) = p(1) + p(1) + … [f(1)·I addends in total] = p(1)·f(1)·I clicks

… contributing two impressions give rise to c(2) = p(2) + p(2) + … [f(2)·I/2 addends in total] = p(2)·f(2)·I/2 clicks

… contributing three impressions give rise to c(3) = p(3) + p(3) + … [f(3)·I/3 addends in total] = p(3)·f(3)·I/3 clicks

And so on. So the total number of clicks we can expect is: c = ∑[n] p(n)·f(n)·I/n. Since the CTR is just clicks divided by impressions, we finally get this beautiful formula:

CTR = ∑[n] p(n)·f(n)/n

The expression p(n)/n decreases as n increases. So a higher visitor loyalty (which mathematically means that f(n) has a relatively high value for n greater than one) translates into a lower CTR. One final conclusion: the formula can also tell us a bit about how the CTR develops during a campaign. If a website has no loyal visitors, the CTR will remain at a constant level, while for websites with a lot of loyal visitors, the CTR will decrease over time.
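
Here is a short Python sketch (not part of the original calculation) that evaluates the formula CTR = ∑[n] p(n)·f(n)/n for two made-up impression distributions with the same click probability q: one for a site with no loyal visitors and one for a site with many.

def ctr(f, q):
    # f maps "impressions per visitor" n to the share f(n) of all impressions
    return sum((1 - (1 - q)**n) * share / n for n, share in f.items())

q = 0.1
low_loyalty = {1: 1.0}                   # every impression comes from a new visitor
high_loyalty = {1: 0.2, 3: 0.3, 5: 0.5}  # most impressions come from repeat visitors
print("low loyalty :", round(ctr(low_loyalty, q), 3))   # equals q = 0.1
print("high loyalty:", round(ctr(high_loyalty, q), 3))  # noticeably lower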

Decibel – A Short And Simple Explanation

A way of expressing a quantity in relative terms is to do the ratio with respect to a reference value. This helps to put a quantity into perspective. For example, in mechanics the acceleration is often expressed in relation to the gravitational acceleration. Instead of saying the acceleration is 22 m/s² (which is hard to relate to unless you know mechanics), we can also say the acceleration is 22 / 9.81 ≈ 2.2 times the gravitational acceleration or simply 2.2 g’s (which is much easier to comprehend).

The decibel (dB) is also a general way of expressing a quantity in relative terms, sort of a “logarithmic ratio”. And just like the ratio, it is not a physical unit or limited to any field such as mechanics, audio, etc … You can express any quantity in decibels. For example, if we take the reference value to be the gravitational acceleration, the acceleration 22 m/s² corresponds to 3.5 dB.

To calculate the decibel value L of a quantity x relative to the reference value x(0), we can use this formula:

L = 10 · log10(x / x(0))

In acoustics the decibel is used to express the sound pressure level (SPL), measured in pascals (Pa), using the threshold of hearing (0.00002 Pa) as the reference. However, in this case a factor of twenty instead of ten is used. The change in factor is a result of inputting the squares of the pressure values rather than the linear values.

L = 20 · log10(x / 0.00002 Pa)

The sound coming from a stun grenade peaks at a sound pressure level of around 15,000 Pa. In decibel terms this is:

L = 20 · log10(15,000 Pa / 0.00002 Pa) ≈ 177.5 dB

which is way past the threshold of pain that is around 63.2 Pa (130 dB). Here are some typical values to keep in mind:

0 dB → Threshold of Hearing
20 dB → Whispering
60 dB → Normal Conversation
80 dB → Vacuum Cleaner
110 dB → Front Row at Rock Concert
130 dB → Threshold of Pain
160 dB → Bursting Eardrums

Why use the decibel at all? Isn’t the ratio good enough for putting a quantity into perspective? The ratio works fine as long as the quantity doesn’t span many orders of magnitude. This is the case for the speeds or accelerations that we encounter in our daily lives. But when a quantity varies significantly and spans many orders of magnitude (which is what the SPL does), the decibel is much more handy and relatable.

Another reason for using the decibel for audio signals is provided by the Weber-Fechner law. It states that a stimulus is perceived in a logarithmic rather than linear fashion. So expressing the SPL in decibels can be regarded as a first approximation to how loud a sound is perceived by a person as opposed to how loud it is from a purely physical point of view.

Note that when combining two or more sound sources, the decibel values are not simply added. Rather, if we combine two sources that are equally loud and in phase, the volume increases by 6 dB (if they are out of phase, it will be less than that). For example, when adding two sources that are at 50 dB, the resulting sound will have a volume of 56 dB (or less).
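
For the curious, here is a tiny Python sketch (mine, not from the book) of the two formulas: the general factor-of-ten version for arbitrary quantities and the factor-of-twenty version used for sound pressure levels.

import math

def decibel(x, x0, factor=10):
    # decibel value of x relative to the reference value x0
    return factor * math.log10(x / x0)

# acceleration of 22 m/s² relative to g = 9.81 m/s²
print(round(decibel(22, 9.81), 1), "dB")                       # about 3.5 dB
# stun grenade: 15,000 Pa relative to the 0.00002 Pa threshold of hearing
print(round(decibel(15000, 0.00002, factor=20), 1), "dB SPL")  # about 177.5 dB
# two equally loud, in-phase sources double the sound pressure
print("+", round(decibel(2, 1, factor=20), 1), "dB")           # about +6 dB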

(This was an excerpt from Audio Effects, Mixing and Mastering. Available for Kindle)

Compressors: Formula for Maximum Volume

Suppose we have an audio signal which peaks at L decibels. We apply a compressor with a threshold T (with T being smaller than L, otherwise the compressor will not spring into action) and ratio r. How does this affect the maximum volume of the audio signal? Let’s derive a formula for that. Remember that the compressor leaves the parts of the signal that are below the threshold unchanged and dampens the excess volume (threshold to signal level) by the ratio we set. So the dynamic range from the threshold to the peak, which is L – T, is compressed to (L – T) / r. Hence, the peak volume after compression is:

L’ = T + (L – T) / r

For example, suppose our mix peaks at L = – 2 dB. We compress it using a threshold of T = – 10 dB and a ratio r = 2:1. The maximum volume after compression is:

L’ = – 10 dB + ( – 2 dB – (- 10 dB) ) / 2 = – 10 dB + 8 dB / 2 = – 6 dB
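
As a quick sanity check, here is the formula as a small Python function (a sketch of my own), fed with the example values from above.

def compressed_peak(peak_db, threshold_db, ratio):
    # levels below the threshold pass unchanged; the excess is divided by the ratio
    if peak_db <= threshold_db:
        return peak_db
    return threshold_db + (peak_db - threshold_db) / ratio

print(compressed_peak(-2.0, -10.0, 2.0), "dB")   # -6.0 dB, as in the example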

Braingate – You Thought It’s Science-Fiction, But It’s Not

On April 12, 2011, something extraordinary happened. A 58-year-old woman who was paralyzed from the neck down reached for a bottle of coffee, drank from a straw and put the bottle back on the table. But she didn’t reach with her own hand – she controlled a robotic arm with her mind. Unbelievable? It is. But decades of research made the unbelievable possible. Watch this exceptional and moving moment in history here (click on picture for Youtube video).

Beautiful

The 58-year-old woman (patient S3) was part of the BrainGate2 project, a collaboration of researchers at the Department of Veterans Affairs, Brown University, the German Aerospace Center (DLR) and others. The scientists implanted a small chip containing 96 electrodes into her motor cortex. This part of the brain is responsible for voluntary movement. The chip measures the electrical activity of the brain and an external computer translates this pattern into the movement of a robotic arm. A brain-computer interface. And it’s not science-fiction, it’s science.

During the study the woman was able to grasp items during the allotted time with a 70 % success rate. Another participant (patient T2) even managed to achieve a 96 % success rate. Besides moving robotic arms, the participants were also given the task of spelling out words and sentences by indicating letters via eye movement. Participant T2 spelt out this sentence: “I just imagined moving my own arm and the [robotic] arm moved where I wanted it to go”.

The future is exciting.

Audio Effects: All About Compressors

Almost all music and recorded speech that you hear has been sent through at least one compressor at some point during the production process. If you are serious about music production, you need to get familiar with this powerful tool. This means understanding the big picture as well as getting to know each of the parameters (Threshold, Ratio, Attack, Release, Make-Up Gain) intimately.

  • How They Work

Throughout any song the volume level varies over time. It might hover around – 6 dB in the verse, rise to – 2 dB in the first chorus, drop to – 8 dB in the interlude, and so on. A term that is worth knowing in this context is the dynamic range. It refers to the difference in volume level from the softest to the loudest part. Some genres of music, such as orchestral music, generally have a large dynamic range, while for mainstream pop and rock a much smaller dynamic range is desired. A symphony might range from – 20 dB in the soft oboe solo to – 2 dB for the exciting final chord (dynamic range: 18 dB), whereas your common pop song will rather go from – 8 dB in the first verse to 0 dB in the last chorus (dynamic range: 8 dB).

During a recording we have some control over what dynamic range we will end up with. We can tell the musicians to take it easy in the verse and really go for it in the chorus. But of course this is not very accurate and we’d like to have full control of the dynamic range rather than just some. We’d also like to be able to change the dynamic range later on. Compressors make this (and much more) possible.

The compressor constantly monitors the volume level. As long as the level is below a certain threshold, the compressor will not do anything. Only when the level exceeds the threshold does it become active and dampen the excess volume by a certain ratio. In short: everything below the threshold stays as it is, everything above the threshold gets compressed. Keep this in mind.

Suppose for example we set the threshold to – 10 dB and the ratio to 4:1. Before applying the compressor, our song varies from a minimum value of – 12 dB in the verse to a maximum value – 2 dB in the chorus. Let’s look at the verse first. Here the volume does not exceed the threshold and thus the compressor does not spring into action. The signal will pass through unchanged. The story is different for the chorus. Its volume level is 8 dB above the threshold. The compressor takes this excess volume and dampens it according to the ratio we set. To be more specific: the compressor turns the 8 dB excess volume into a mere 8 dB / 4 = 2 dB. So the compressed song ranges from – 12 dB in the verse to -10 dB + 2 dB = – 8 dB in the chorus.

Here’s a summary of the process:

Settings:

Threshold: – 10 dB
Ratio: 4:1

Before:

Minimum: – 12 dB
Maximum: – 2 dB
Dynamic range: 10 dB

Excess volume (threshold to maximum): 8 dB
With ratio applied: 8 dB / 4 = 2 dB

After:

Minimum: – 12 dB
Maximum: – 8 dB
Dynamic range: 4 dB

As you can see, the compressor had a significant effect on the dynamic range. Choosing appropriate values for the threshold and ratio, we are free to compress the song to any dynamic range we desire. When using a DAW (Digital Audio Workstation such as Cubase, FL Studio or Ableton Live), it is possible to see the workings of a compressor with your own eyes. The image below shows the uncompressed file (top) and the compressed file (bottom) with the threshold set to – 12 dB and the ratio to 2:1.

(Waveform comparison: the uncompressed file on top, the compressed file below.)

The soft parts are identical, while the louder parts (including the short and possibly problematic peaks) have been reduced in volume. The dynamic range clearly shrunk in the process. Note that after applying the compressor, the song’s effective volume (RMS) is much lower. Since this is usually not desired, most compressors have a parameter called make-up gain. Here you can specify by how much you’d like the compressor to raise the volume of the song after the compression process is finished. This increase in volume is applied to all parts of the song, soft or loud, so there will not be another change in the dynamic range. It only makes up for the loss in loudness (hence the name).

  • Usage of Compressors

We already got to know one application of the compressor: controlling the dynamic range of a song. But usually this is just a first step in reaching another goal: increasing the effective volume of the song. Suppose you have a song with a dynamic range of 10 dB and you want to make it as loud as possible. So you move the volume fader until the maximum level is at 0 dB. According to the dynamic range, the minimum level will now be at – 10 dB. The effective volume will obviously be somewhere in-between the two values. For the sake of simplicity, we’ll assume it to be right in the middle, at – 5 dB. But this is too soft for your taste. What to do?

You insert a compressor with a threshold of – 6 dB and a ratio of 3:1. The 4 dB range from the minimum level – 10 dB to the threshold – 6 dB is unchanged, while the 6 dB range from the threshold – 6 dB to the maximum level 0 dB is compressed to 6 dB / 3 = 2 dB. So overall the dynamic range is reduced to 4 dB + 2 dB = 6 dB. Again you move the volume fader until the maximum volume level coincides with 0 dB. However, this time the minimum volume will be higher, at – 6 dB, and the effective volume at – 3 dB (up from the – 5 dB we started with). Mission accomplished, the combination of compression and gain indeed left us with a higher average volume.

In theory, this means we can get the effective volume up to almost any value we desire by compressing a song and then making it louder. We could have the whole song close to 0 dB. This possibility has led to a “loudness war” in music production. Why not go along with that? For one, you always want to put as much emphasis as possible on the hook. This is hard to do if the intro and verse are already blaring at maximum volume. Another reason is that severely reducing the dynamic range kills the expressive elements in your song. It is not a coincidence that music which strongly relies on expressive elements (orchestral and acoustic music) usually has the highest dynamic range. It needs the wide range to go from expressing peaceful serenity to expressing destructive desperation. Read the following out loud and memorize it: the more expression it has, the less you should compress. While a techno song might work at maximum volume, a ballad sure won’t.

————————————-

Background Info – SPL and Loudness

Talking about how loud something is can be surprisingly complicated. The problem is that our brain does not process sound inputs in a linear fashion. A sound wave with twice the sound pressure does not necessarily seem twice as loud to us. So when expressing how loud something is, we can either do this by using well-defined physical quantities such as the sound pressure level (which unfortunately does not reflect how loud a person perceives something to be) or by using subjective psycho-acoustic quantities such as loudness (which is hard to define and measure properly).

Sound waves are pressure and density fluctuations that propagate at a material- and temperature-dependent speed in a medium. For air at 20 °C this speed is roughly 340 m/s. The quantity sound pressure expresses the deviation of the sound wave pressure from the pressure of the surrounding air. The sound pressure level, in short: SPL, is proportional to the logarithm of the effective sound pressure. Long story short: the stronger the sound pressure, the higher the SPL. The SPL is used to objectively measure how loud something is. Another important objective quantity for this purpose is the volume. It is a measure of how much energy is contained in an audio signal and thus closely related to the SPL.

A subjective quantity that reflects how loud we perceive something to be is loudness. Due to our highly non-linear brains, the loudness of an audio signal is not simply proportional to its SPL or volume level. Rather, loudness depends in a complex way on the SPL, frequency, duration of the sound, its bandwidth, etc … In the image below you can see an approximation of the relationship between loudness, SPL and frequency.

(Chart: equal-loudness contours, showing loudness as a function of SPL and frequency.)

Any red curve is a curve of equal loudness. Here’s how we can read the chart. Take a look at the red curve at the very bottom. It starts at 75 dB SPL and a frequency of 20 Hz and reaches 25 dB SPL at 100 Hz. Since the red curve is a curve of equal loudness, we can conclude that we perceive a 75 dB SPL sound at 20 Hz to be just as loud as a 25 dB SPL sound at 100 Hz, even though from a purely physical point of view the first sound has a much higher sound pressure level than the second (75 dB SPL versus 25 dB SPL).

————————————-

(Compressor in Cubase)

  • Threshold and Ratio

What’s the ideal threshold to use? This depends on what you are trying to accomplish. Suppose you set the threshold at a relatively high value (for example – 10 dB in a good mix). In this case the compressor will be inactive for most of the song and only kick in during the hook and short peaks. With the threshold set to a high value, you are thus “taking the top off”. This would be a suitable choice if you are happy with the dynamics in general, but would like to make the mix less aggressive.

What about low thresholds (such as -25 dB in a good mix)? In this case the compressor will be active for the most part of the song and will make the entire song quite dense. This is something to consider if you aim to really push the loudness of the song. Once the mix is dense, you can go for a high effective volume. But a low threshold compression can also add warmth to a ballad, so it’s not necessarily a tool restricted to usage in the loudness war.

Onto the ratio. If you set the ratio to a high value (such as 5:1 and higher), you are basically telling the mix: to the threshold and no further. Anything past the threshold will be heavily compressed, which is great if you have pushy peaks that make a mix overly aggressive. This could be the result of a snare that’s way too loud or an inexperienced singer. Whatever the cause, a carefully chosen threshold and a high ratio should take care of it in a satisfying manner. Note though that in this case the compressor should be applied to the track that is causing the problem and not the entire mix.

A low value for the ratio (such as 2:1 or smaller) will have a rather subtle effect. Such values are perfect if you want to apply the compressor to a mix that already sounds good and just needs a finishing touch. The mix will become a little more dense, but its character will be kept intact.

  • Attack and Release

There are two important parameters we have ignored so far: the attack and release. The attack parameter allows you to specify how quickly the compressor sets in once the volume level goes past the threshold. A compressor with a long attack (20 milliseconds or more) will let short peaks pass. As long as these peaks are not over-the-top, this is not necessarily a bad thing. The presence of short peaks, also called transients, is important for a song’s liveliness and natural sound. A long attack makes sure that these qualities are preserved and that the workings of the compressor are less noticeable.

A short attack (5 milliseconds or less) can produce a beautifully crisp sound that is suitable for energetic music. But it is important to note that if the attack is too short, the compressor will kill the transients and the whole mix will sound flat and bland. Even worse, a short attack can lead to clicks and a nervous “pumping effect”. Be sure to watch out for those as you shorten the attack.

The release is the time for the compressor to become inactive once the volume level goes below the threshold. It is usually much longer than the attack, but the overall principles are similar. A long release (600 milliseconds or more) will make sure that the compression happens in a more subtle fashion, while a short release (150 milliseconds or less) can produce a pumping sound.

It is always a good idea to choose the release so that it fits the rhythm of your song (the same of course is true for temporal parameters in reverb and delay). One way to do this is to calculate the time per beat TPB in milliseconds from your song’s tempo as measured in beats per minute BPM and use this value as the point of reference.

TPB [ms] = 60000 / BPM

For example, in a song with the tempo BPM = 120 the duration of one beat is TPB = 60000 / 120 = 500 ms. If you need a longer release, use a multiple of it (1000 ms, 1500 ms, and so on), for a shorter release divide it by any natural number (500 ms / 2 = 250 ms, 500 ms / 3 = 167 ms, and so on). This way the compressor will “breathe” in unison with your music.
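
Here is the rule of thumb as a few lines of Python (just a sketch), using the 120 BPM example from above.

def time_per_beat_ms(bpm):
    # time per beat in milliseconds from the tempo in beats per minute
    return 60000 / bpm

tpb = time_per_beat_ms(120)
print(tpb, "ms per beat")       # 500 ms at 120 BPM
print(tpb * 2, "ms")            # longer release: 1000 ms
print(round(tpb / 3), "ms")     # shorter release: about 167 ms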

If you are not sure where to start regarding attack and release, just make use of the 20/200-rule: Set the attack to 20 ms, the release to 200 ms and work towards the ideal values from there. Alternatively, you can always go through the presets of the compressor to find suitable settings.

 

You can learn about advanced compression techniques as well as other effects from Audio Effects, Mixing and Mastering, available for Kindle for $ 3.95.

New Release: Audio Effects, Mixing and Mastering (Kindle)

This book is a quick guide to effects, mixing and mastering for beginners using Cubase as its platform. The first chapter highlights the most commonly used effects in audio production such as compressors, limiters, equalizers, reverb, delay, gates and others. You will learn about how they work, when to apply them, the story behind the parameters and what traps you might encounter. The chapter also contains a quick peek into automation and what it can do.

In the second chapter we focus on what constitutes a good mix and how to achieve it using a clear and comprehensible strategy. This is followed by a look at the mastering chain that will help to polish and push a mix. The guide is sprinkled with helpful tips and background information to make the learning experience more vivid. You get all of this for a fair price of $ 3.95.

Table Of Contents:

1. Audio Effects And Automation
1.1. Compressors
1.2. Limiters
1.3. Equalizers
1.4. Reverb and Delay
1.5. Gates
1.6. Chorus
1.7. Other Effects
1.8. Automation

2. Mixing
2.1. The Big Picture
2.2. Mixing Strategy
2.3. Separating Tracks

3. Mastering
3.1. Basic Idea
3.2. Mastering Strategy
3.3. Mid/Side Processing
3.4. Don’t Give Up

4. Appendix
4.1. Calculating Frequencies
4.2. Decibel
4.3. Copyright and Disclaimer
4.4. Request to the Reader

Temperature – From The Smallest To The Largest

For temperature there is a definite and incontrovertible lower limit: 0 K. Among the closest things to absolute zero in the universe is the temperature of supermassive black holes (10⁻¹⁸ K). At this temperature it will take them 10¹⁰⁰ years and more to evaporate their mass. Yes, that’s a one with one hundred zeros. If the universe really does keep on expanding as believed by most scientists today, supermassive black holes will be the last remaining objects in the fading universe. Compared to their temperature, the lowest temperature ever achieved in a laboratory (10⁻¹² K) is a true hellfire, despite it being many orders of magnitude lower than the background temperature of the universe (2.73 K and slowly decreasing).

In terms of temperature, helium is an exceptional element. The fact that we almost always find it in the gaseous state is a result of its low boiling point (4.22 K). Even on Uranus (53 K), which since the downgrading of Pluto is the coldest planet in the solar system and by far the planet with the most inappropriate name, it would appear as a gas. Another temperature you definitely should remember is 92 K. Why? Because at this temperature the material Y-Ba-Cu-oxide becomes superconductive and there is no material known to man that is superconductive at higher temperatures. Note that you want a superconductor to do what it does best at temperatures as close to room temperature as possible because otherwise making use of this effect will require enormous amounts of energy for cooling.

The lowest officially recorded air temperature on Earth is 184 K ≈ -89 °C, measured in 1983 in Stántsiya Vostók, Antarctica. Just recently scientists reported seeing an even lower temperature, but at the time of writing this is still unconfirmed. The next two values are very familiar to you: the melting point (273 K ≈ 0 °C) and the boiling point (373 K ≈ 100 °C) of water. But I would not advise you to become too familiar with burning wood (1170 K ≈ 900 °C) or the surface of our Sun (5780 K ≈ 5500 °C).

Temperatures in a lightning channel can go far beyond that, up to about 28,000 K. This was topped on August 6, 1945, when the atomic bomb “Little Boy” was dropped on Hiroshima. It is estimated that at a distance of 17 meters from the center of the blast the temperature rose to 300,000 K. Later and more powerful models of the atomic bomb even went past the temperature of the solar wind (800,000 K).

If you are disappointed about the relatively low surface temperature of the Sun, keep in mind that this is the coldest part of the Sun. In the corona surrounding it, temperatures can reach 10 million K, the center of the Sun is estimated to be at 16 million K and solar flares can be as hot as 100 million K. Surprisingly, mankind managed to top that. The plasma in the experimental Tokamak Fusion Test Reactor was recorded at a mind-blowing 530 million K. Except for supernova explosions (10 billion K) and infant neutron stars (1 trillion K), there’s not much beyond that.

Simpson’s Paradox And Gender Discrimination

One sunny day we arrive at work in the university administration to find a lot of aggressive emails in our in-box. Just the day before, a news story about gender discrimination in academia was published in a popular local newspaper which included data from our university. The emails are a result of that. Female readers are outraged that men were accepted at the university at a higher rate, while male readers are angry that women were favored in each course the university offers. Somewhat puzzled, we take a look at the data to see what’s going on and who’s wrong.

The university only offers two courses: physics and sociology. In total, 1000 men and 1000 women applied. Here’s the breakdown:

Physics:

800 men applied ‒ 480 accepted (60 %)
100 women applied ‒ 80 accepted (80 %)

Sociology:

200 men applied ‒ 40 accepted (20 %)
900 women applied ‒ 360 accepted (40 %)

Seems like the male readers are right. In each course women were favored. But why the outrage by female readers? Maybe they focused more on the following piece of data. Let’s count how many men and women were accepted overall.

Overall:

1000 men applied ‒ 520 accepted (52 %)
1000 women applied ‒ 440 accepted (44 %)

Wait, what? How did that happen? Suddenly the situation seems reversed. What looked like a clear case of discrimination of male students turned into a case of discrimination of female students by simple addition. How can that be explained?

The paradoxical situation is caused by the different capacities of the two departments as well as the students’ overall preferences. While the physics department, the top choice of male students, could accept 560 students, the smaller sociology department, the top choice of female students, could only take on 400 students. So a higher acceptance rate of male students is to be expected even if women are slightly favored in each course.
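
If you want to check the numbers yourself, here is a small Python sketch that recomputes the acceptance rates from the toy data above, per course and in aggregate.

applications = {
    "physics": {"men": (800, 480), "women": (100, 80)},
    "sociology": {"men": (200, 40), "women": (900, 360)},
}

def rate(applied, accepted):
    return 100 * accepted / applied

for course, groups in applications.items():
    for sex, (applied, accepted) in groups.items():
        print(f"{course:9s} {sex:5s}: {rate(applied, accepted):.0f} %")

# aggregate over both courses: the ranking flips (Simpson's paradox)
for sex in ("men", "women"):
    applied = sum(applications[c][sex][0] for c in applications)
    accepted = sum(applications[c][sex][1] for c in applications)
    print(f"overall   {sex:5s}: {rate(applied, accepted):.0f} %")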

While this might seem to you like an overly artificial example to demonstrate an obscure statistical phenomenon, I’m sure the University of California (Berkeley) would beg to differ. It was sued in 1973 for bias against women on the basis of these admission rates:

8442 men applied ‒ 3715 accepted (44 %)
4321 women applied ‒ 1512 accepted (35 %)

A further analysis of the data however showed that women were favored in almost all departments ‒ Simpson’s paradox at work. The paradox also appeared (and keeps on appearing) in clinical trials. A certain treatment might be favored in individual groups, but still prove to be inferior in the aggregate data.

Overtones – What They Are And How To Compute Them

In theory, hitting the C above middle C (C5) on a piano should produce a sound wave with a frequency of 523.25 Hz and nothing else. However, running the resulting audio through a spectrum analyzer, it becomes obvious that there’s much more going on. This is true for all other instruments, from tubas to trumpets, bassoons to flutes, contrabasses to violins. Play any note and you’ll get a package of sound waves at different frequencies rather than just one.

First of all: why is that? Let’s focus on stringed instruments. When you pluck the string, it goes into its most basic vibration mode: it moves up and down as a whole at a certain frequency f. This is the so-called first harmonic (or fundamental). But shortly after that, the nature of the vibration changes and the string enters a second mode: while one half of the string moves up, the other half moves down. This happens naturally and is just part of the string’s dynamics. In this mode, called the second harmonic, the string vibrates at a frequency of 2 * f. The story continues in this fashion as other modes of vibration appear: the third harmonic at a frequency 3 * f, the fourth harmonic at 4 * f, and so on.

(Illustration: the vibration modes of a string and the corresponding harmonics.)

A note is determined by its frequency. As already stated, this C on the piano should produce a sound wave with a frequency of 523.25 Hz. And indeed it does produce said sound wave, but that is only the first harmonic. As the string continues to vibrate, all the other harmonics follow, producing overtones. In the picture below you can see which notes you’ll get when playing a C (overtone series):

(The marked notes are only approximations. Taken from http://legacy.earlham.edu)

Quite the package! And note that the major chord is fully included within the first four overtones. So it's buy a note, get a chord free. And unless you produce a note digitally, there's no avoiding it. You might wonder why it is that we don't seem to perceive the additional notes. Well, we do and we don't. We don't perceive the overtones consciously because the amplitude, and thus volume, of each harmonic is smaller than the amplitude of the previous one (this is a rule of thumb, though; exceptions are possible and any instrument will emphasize some overtones in particular). But I can assure you that when listening to a digitally produced note, you'll feel that something's missing. It will sound bland and cold. So unconsciously, we do perceive and desire the overtones.

If you’re not interested in mathematics, feel free to stop reading now (I hope you enjoyed the post so far). For all others: let’s get down to some mathematical business. The frequency of a note, or rather of its first harmonic, can be computed via:

(1) f(n) = 440 * 2^(n/12)

Here n = 0 corresponds to the concert pitch A (440 Hz) and each step of n is one half-tone. For example, from the concert pitch A to the C above it (C5) there are n = 3 half-tone steps (A#, B, C). So the frequency of this C is:

f(3) = 440 * 2^(3/12) = 523.25 Hz

As expected. Given a fundamental frequency f = F, corresponding to a half-step value of n = N, the frequency of the k-th harmonic is just:

(2) f(k) = k * F = k * 440 * 2^(N/12)

Equating (1) and (2), we get a relationship that enables us to identify the musical pitch of any overtone:

440 * 2^(n/12) = k * 440 * 2^(N/12)

2^(n/12) = k * 2^(N/12)

n/12 * ln(2) = ln(k) + N/12 * ln(2)

n/12 = ln(k)/ln(2) + N/12

(3) n – N = 12 * ln(k) / ln(2) ≈ 17.31 * ln(k)

The equation results in this table:

k    n – N (rounded)
1     0
2    12
3    19
4    24
5    28

And so on. How does this tell us where the overtones are? Read it like this:

  • The first harmonic (k = 1) is zero half-steps from the fundamental (n-N = 0). So far, so duh.
  • The second harmonic (k = 2) is twelve half-steps, or one octave, from the fundamental (n-N = 12).
  • The third harmonic (k = 3) is nineteen half-steps, or one octave and a perfect fifth, from the fundamental (n-N = 19).
  • The fourth harmonic (k = 4) is twenty-four half-steps, or two octaves, from the fundamental (n-N = 24).
  • The fifth harmonic (k = 5) is twenty-eight half-steps, or two octaves and a major third, from the fundamental (n-N = 28).

So indeed the formula produces the correct overtone series for any note. And the same pattern holds for every note: the second harmonic is exactly one octave above the fundamental, the third harmonic one octave and a perfect fifth above, and so on. The corresponding major chord is always contained within the first five harmonics.
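
If you'd like to reproduce the table above in code, here is a minimal Python sketch of equation (3); it assumes nothing beyond the formula itself:

from math import log

# n - N = 12 * ln(k) / ln(2), rounded to the nearest half-step (equation (3))
for k in range(1, 6):
    half_steps = round(12 * log(k) / log(2))
    print("harmonic k =", k, "->", half_steps, "half-steps above the fundamental")

# Prints 0, 12, 19, 24 and 28 half-steps: unison, one octave, one octave plus a fifth,
# two octaves, and two octaves plus a major third.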

The Doppler Effect in Pictures

The siren of an approaching police car will sound at a higher pitch, the light of an approaching star will be shifted towards blue and a passing supersonic jet will create a violent, thunder-like boom. What do these phenomena have in common? All of them are a result of the Doppler effect. To understand how it arises, just take a look at the animations below.

Stationary Source: The waves coming from the source propagate symmetrically.

Subsonic Source (moving below sound speed): Compression of waves in direction of motion.

Sonic Source (moving at sound speed): Maximum compression.

Supersonic Source (moving beyond sound speed): Source overtakes its waves, formation of Mach cone and sonic boom.


(Pictures taken from http://www.acs.psu.edu)

A Brief Look At Car-Following Models

Recently I posted a short introduction to recurrence relations – what they are and how they can be used for mathematical modeling. This post expands on the topic, as car-following models are a nice example of recurrence relations applied to the real world.

Suppose a car is traveling on the road at the speed u(t) at time t. Another car approaches this car from behind and starts following it. Obviously the driver of the car that is following cannot choose his speed freely. Rather, his speed v(t) at time t will be a result of whatever the driver in the leading car is doing.

The most basic car-following model assumes that the acceleration a(t) at time t of the follower is determined by the difference in speeds. If the leader is faster than the follower, the follower accelerates. If the leader is slower than the follower, the follower decelerates. The follower assumes a constant speed if there’s no speed difference. In mathematical form, this statement looks like this:

a(t) = λ * (u(t) – v(t))

The factor λ (sensitivity) determines how strongly the follower accelerates in response to a speed difference. To be more specific: it is the acceleration that results from a speed difference of one unit.

——————————————

Before we go on: how is this a recurrence relation? In a recurrence relation we determine a quantity from its values at an earlier time. This seems to be missing here. But remember that the acceleration is given by:

a(t) = (v(t+h) – v(t)) / h

with h being a small time span. Inserting this into the car-following equation and solving for v(t+h) gives v(t+h) = v(t) + h * λ * (u(t) – v(t)), so the follower's speed at the later time t + h is determined by the speeds at the earlier time t: indeed a recurrence relation.

——————————————

Our model is still very crude. Here’s the biggest problem: The response of the driver is instantaneous. He picks up the speed difference at time t and turns this information into an acceleration also at time t. But more realistically, there will be a time lag. His response at time t will be a result of the speed difference at an earlier time t – Λ, with Λ being the reaction time.

a(t) = λ * (u(t – Λ) – v(t – Λ))

The reaction time is usually on the order of one second and consists of the time needed to process the information as well as the time it takes to move the muscles and press the pedal. There are several things we can do to make the model even more realistic. First of all, studies show that the speed difference is not the only factor. The distance d(t) between the leader and the follower also plays an important role. The smaller it is, the stronger the follower will react. We can take this into account by putting the distance in the denominator:

a(t) = (λ / d(t)) * (u(t – Λ) – v(t – Λ))

You can also interpret this as making the sensitivity distance-dependent. There’s still one adjustment we need to make. The above model allows any value of acceleration, but we know that we can only reach certain maximum values in a car. Let’s symbolize the maximum acceleration by a(acc) and the maximum deceleration by a(dec). The latter will be a number smaller than zero since deceleration is by definition negative acceleration. We can write:

a(t) = a(acc) if (λ / d(t)) * (u(t – Λ) – v(t – Λ)) > a(acc)
a(t) = a(dec) if (λ / d(t)) * (u(t – Λ) – v(t – Λ)) < a(dec)
a(t) = (λ / d(t)) * (u(t – Λ) – v(t – Λ)) else

It probably looks simpler using an if-statement:

a(t) = (λ / d(t)) * (u(t – Λ) – v(t – Λ))

IF a(t) > a(acc) THEN
a(t) = a(acc)
ELSEIF a(t) < a(dec) THEN
a(t) = a(dec)
END IF

This model already captures a lot of the nuances of car traffic. I hope I was able to give you some insight into what car-following models are and how you can fine-tune them to satisfy certain conditions.
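
To see the model in action, here is a small Python simulation sketch. The parameter values (sensitivity, reaction time, time step, the acceleration limits, the speeds and the initial gap) are made up purely for illustration; the update rule is the one derived above:

h = 0.1                     # time step in seconds
lam = 0.5                   # sensitivity (made-up value)
reaction = 1.0              # reaction time in seconds (made-up value)
a_acc, a_dec = 2.0, -6.0    # maximum acceleration / deceleration in m/s^2 (made-up values)

delay = int(reaction / h)   # reaction time expressed in time steps
steps = 300
u = [20.0] * steps          # leader drives at a constant 20 m/s
v = [15.0] * (delay + 1)    # follower starts slower, at 15 m/s
d = 30.0                    # initial gap in meters

for t in range(delay, steps - 1):
    a = (lam / d) * (u[t - delay] - v[t - delay])   # distance-dependent sensitivity
    a = min(max(a, a_dec), a_acc)                   # clamp to the physical limits
    v.append(v[t] + a * h)                          # recurrence: v(t+h) = v(t) + a(t) * h
    d = d + (u[t] - v[t]) * h                       # the gap changes with the speed difference

print(round(v[-1], 2))      # the follower's speed creeps towards the leader's 20 m/s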

Recurrence Relations – A Simple Explanation And Much More

Recurrence relations are a powerful tool for mathematical modeling and numerically solving differential equations (no matter how complicated). And as luck would have it, they are relatively easy to understand and apply. So let’s dive right into it using a purely mathematical example (for clarity) before looking at a real-world application.

This equation is a typical example of a recurrence relation:

x(t+1) = 5 * x(t) + 2 * x(t-1)

At the heart of the equation is a certain quantity x. It appears three times: x(t+1) stands for the value of this quantity at a time t+1 (next month), x(t) for the value at time t (current month) and x(t-1) the value at time t-1 (previous month). So what the relation allows us to do is to determine the value of said quantity for the next month, given that we know it for the current and previous month. Of course the choice of time span here is just arbitrary, it might as well be a decade or nanosecond. What’s important is that we can use the last two values in the sequence to determine the next value.

Suppose we start with x(0) = 0 and x(1) = 1. With the recurrence relation we can continue the sequence step by step:

x(2) = 5 * x(1) + 2 * x(0) = 5 * 1 + 2 * 0 = 5

x(3) = 5 * x(2) + 2 * x(1) = 5 * 5 + 2 * 1 = 27

x(4) = 5 * x(3) + 2 * x(2) = 5 * 27 + 2 * 5 = 145

And so on. Once we’re given the “seed”, determining the sequence is not that hard. It’s just a matter of plugging in the last two data points and doing the calculation. The downside to defining a sequence recursively is that if you want to know x(500), you have to go through hundreds of steps to get there. Luckily, this is not a problem for computers.
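
A few lines of Python are enough; here is a minimal sketch for the example relation above:

def x_sequence(n, x0=0, x1=1):
    # x(t+1) = 5 * x(t) + 2 * x(t-1), started from the seed values x(0) and x(1)
    prev, curr = x0, x1
    for _ in range(n - 1):
        prev, curr = curr, 5 * curr + 2 * prev
    return curr

print(x_sequence(2), x_sequence(3), x_sequence(4))   # 5, 27, 145 as computed above
print(x_sequence(500))                                # hundreds of steps, no problem for a computer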

In the most general terms, a recurrence relation relates the value of quantity x at a time t + 1 to the values of this quantity x at earlier times. The time itself could also appear as a factor. So this here would also be a legitimate recurrence relation:

x(t+1) = 5 * t * x(t) – 2 * x(t-10)

Here we calculate the value of x at time t+1 (next month) by its value at a time t (current month) and t – 10 (ten months ago). Note that in this case you need eleven seed values to be able to continue the sequence. If we are only given x(0) = 0 and x(10) = 50, we can do the next step:

x(11) = 5 * 10 * x(10) – 2 * x(0) = 5 * 10 * 50 – 2 * 0 = 2500

But we run into problems after that:

x(12) = 5 * 11 * x(11) – 2 * x(1) = 5 * 11 * 2500 – 2 * x(1) = ?

We already calculated x(11), but there’s nothing we can do to deduce x(1).

Now let’s look at one interesting application of such recurrence relations, modeling the growth of animal populations. We’ll start with a simple model that relates the number of animals x in the next month t+1 to the number of animals x in the current month t as such:

x(t+1) = x(t) + f * x(t)

The factor f is a constant that determines the rate of growth (to be more specific: its value is the decimal percentage change from one month to the next). So if our population grows by 25 % each month, we get:

x(t+1) = x(t) + 0.25 * x(t)

If we start with x(0) = 100 rabbits at month t = 0 we get:

x(1) = x(0) + 0.25 * x(0) = 100 + 0.25 * 100 = 125 rabbits

x(2) = x(1) + 0.25 * x(1) = 125 + 0.25 * 125 = 156 rabbits

x(3) = x(2) + 0.25 * x(2) = 156 + 0.25 * 156 = 195 rabbits

x(4) = x(3) + 0.25 * x(3) = 195 + 0.25 * 195 = 244 rabbits

x(5) = x(4) + 0.25 * x(4) = 244 + 0.25 * 244 = 305 rabbits

And so on. Maybe you already see the main problem with this exponential model: it just keeps on growing. This is fine as long as the population is small and the environment rich in resources, but every environment has its limits. Let's fix this problem by including an additional term in the recurrence relation that will lead to this behavior:

- Exponential growth as long as the population is small compared to the capacity
- Slowing growth near the capacity
- No growth at capacity
- Population decline when over the capacity

How can we translate this into mathematics? It takes a lot of practice to be able to tweak a recurrence relation to get the behavior you want. You just learned your first chord and I’m asking you to play Mozart, that’s not fair. But take a look at this bad boy:

x(t+1) = x(t) + a * x(t) * (1 – x(t) / C)

This is called the logistic model and the constant C represents said capacity. If x is much smaller than the capacity C, the ratio x / C will be close to zero and we are left with exponential growth:

x(t+1) ≈ x(t) + a * x(t) * (1 – 0)

x(t+1) ≈ x(t) + a * x(t)

So this admittedly complicated looking recurrence relation fulfils our first demand: exponential growth for small populations. What happens if the population x reaches the capacity C? Then all growth should stop. Let's see if this is the case. With x = C, the ratio x / C is obviously equal to one, and in this case we get:

x(t+1) = x(t) + a * x(t) * (1 – 1)

x(t+1) = x(t)

The number of animals remains constant, just as we wanted. Last but not least, what happens if (for some reason) the population gets past the capacity, meaning that x is greater than C? In this case the ratio x / C is greater than one (let’s just say x / C = 1.2 for the sake of argument):

x(t+1) = x(t) + a * x(t) * (1 – 1.2)

x(t+1) = x(t) + a * x(t) * (- 0.2)

The second term is now negative and thus x(t+1) will be smaller than x(t) – a decline back to capacity. What an enormous amount of beautiful behavior in such a compact line of mathematics! This is where the power of recurrence relations comes to light. Anyway, let's go back to our rabbit population. We'll let it grow by 25 % per month (a = 0.25), but this time on an island that can only sustain 300 rabbits at most (C = 300). Thus the model looks like this:

x(t+1) = x(t) + 0.25 * x(t) * (1 – x(t) / 300)

If we start with x(0) = 100 rabbits at month t = 0 we get:

x(1) = 100 + 0.25 * 100 * (1 – 100 / 300) = 117 rabbits

x(2) = 117 + 0.25 * 117 * (1 – 117 / 300) = 135 rabbits

x(3) = 135 + 0.25 * 135 * (1 – 135 / 300) = 153 rabbits

x(4) = 153 + 0.25 * 153 * (1 – 153 / 300) = 172 rabbits

x(5) = 172 + 0.25 * 172 * (1 – 172 / 300) = 190 rabbits

Note that now the growth is almost linear rather than exponential and will slow down further the closer we get to the capacity (continue the sequence if you like, it will gently approach 300, but never go past it).
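
You can let a computer do the continuation. A quick Python check with the same growth rate and capacity confirms this behavior:

x = 100.0
for month in range(60):
    x = x + 0.25 * x * (1 - x / 300)
print(round(x))   # after five years the population has settled at (essentially) 300, never exceeding it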

We can even go further and include random events in a recurrence relation. Let’s stick to the rabbits and their logistic growth and say that there’s a p = 5 % chance that in a certain month a flood occurs. If this happens, the population will halve. If no flood occurs, it will grow logistically as usual. This is what our new model looks like in mathematical terms:

x(t+1) = x(t) + 0.25 * x(t) * (1 – x(t) / 300)    if no flood occurs

x(t+1) = 0.5 * x(t)    if a flood occurs

To determine whether there's a flood, we let a random number generator spit out a number between 1 and 100 at each step. If the number is 5 or smaller, we use the “flood” equation (in accordance with the 5 % chance for a flood). Again we turn to our initial population of 100 rabbits with the growth rate and capacity unchanged:

x(1) = 100 + 0.25 * 100 * (1 – 100 / 300) = 117 rabbits

x(2) = 117 + 0.25 * 117 * (1 – 117 / 300) = 135 rabbits

x(3) = 135 + 0.25 * 135 * (1 – 135 / 300) = 153 rabbits

x(4) = 0.5 * 153 = 77 rabbits

x(5) = 77 + 0.25 * 77 * (1 – 77 / 300) = 91 rabbits

As you can see, in this run the random number generator gave a number 5 or smaller during the fourth step. Accordingly, the number of rabbits halved. You can do a lot of shenanigans (and some useful stuff as well) with recurrence relations and random numbers, the sky’s the limit. I hope this quick overview was helpful.
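
If you want to experiment with the flood model, here is a short Python sketch; growth rate, capacity and flood probability are the values from above, and since the floods are random, every run produces a different sequence:

import random

x = 100.0                     # initial rabbit population
a, C, p = 0.25, 300, 0.05     # growth rate, capacity, monthly flood probability

for month in range(1, 13):
    if random.random() < p:              # flood: the population halves
        x = 0.5 * x
    else:                                # no flood: logistic growth as usual
        x = x + a * x * (1 - x / C)
    print(month, round(x), "rabbits")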

A note for the advanced: here’s how you turn a differential equation into a recurrence relation. Let’s take this differential equation:

dx/dt = a * x * exp(- b*x)

First multiply by dt:

dx = a * x * exp(- b * x) * dt

We set dx (the change in x) equal to x(t+h) – x(t) and dt (change in time) equal to a small constant h. Of course for x we now use x(t):

x(t+h) – x(t) = a * x(t) * exp(- b * x(t)) * h

Solve for x(t+h):

x(t+h) = x(t) + a * x(t) * exp(- b * x(t)) * h

And done! The smaller your h, the more accurate your numerical results. How low you can go depends on your computer’s computing power.
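
In code the recurrence is just a loop. Here is a minimal Python sketch; the constants a and b, the initial value and the step size h are made-up values for illustration:

from math import exp

a, b = 1.0, 0.01     # made-up constants of the differential equation dx/dt = a * x * exp(-b * x)
h = 0.001            # step size: the smaller, the more accurate (and the slower)
x, t = 1.0, 0.0      # made-up initial condition x(0) = 1

while t < 5.0:
    x = x + a * x * exp(-b * x) * h   # x(t+h) = x(t) + a * x(t) * exp(-b * x(t)) * h
    t = t + h

print(round(x, 3))   # approximate value of x at t = 5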