The Weirdness of Empty Space – Casimir Force

(This is an excerpt from The Book of Forces – enjoy!)

The forces we have discussed so far are well understood by the scientific community and are commonly featured in introductory as well as advanced physics books. In this section we will turn to a more exotic and mysterious interaction: the Casimir force. After a series of complex quantum mechanical calculations, the Dutch physicist Hendrik Casimir predicted its existence in 1948. However, detecting the interaction proved to be an enormous challenge, as this required sensors capable of picking up forces on the order of 10^(-15) N and smaller. It wasn’t until 1996 that this technology became available and the existence of the Casimir force was experimentally confirmed.

So what does the Casimir force do? When you place an uncharged, conducting plate at a small distance from an identical plate, the Casimir force will pull them towards each other. The term “conducting” refers to the ability of a material to conduct electricity. For the force it plays no role whether the plates are actually carrying a current at a given moment or not; what counts is their ability to do so.

The existence of the force can only be explained via quantum theory. Classical physics considers the vacuum to be empty – no particles, no waves, no forces, just absolute nothingness. However, with the rise of quantum mechanics, scientists realized that this is just a crude approximation of reality. The vacuum is actually filled with an ocean of so-called virtual particles (don’t let the name fool you, they are real). These particles are constantly produced in pairs and annihilate after a very short period of time. Each particle carries a certain amount of energy that depends on its wavelength: the shorter the wavelength, the higher the energy of the particle. In theory, there’s no upper limit for the energy such a particle can have when spontaneously coming into existence.

So how does this relate to the Casimir force? The two conducting plates define a boundary in space. They separate the space of finite extension between the plates from the (for all practical purposes) infinite space outside them. Only particles whose half-wavelength fits a whole number of times into the gap can exist in the finite space between the plates, meaning that the particle density (and thus energy density) there is smaller than the energy density in the pure vacuum surrounding the plates. This imbalance in energy density gives rise to the Casimir force. In informal terms, the Casimir force is the push of the energy-rich vacuum on the energy-deficient space between the plates.


(Illustration of Casimir force)

It gets even more puzzling though. The nature of the Casimir force depends on the geometry of the plates. If you replace the flat plates by hemispherical shells, the Casimir force suddenly becomes repulsive, meaning that this specific geometry somehow manages to increase the energy density of the enclosed vacuum. Now the even more energy-rich finite space pushes on the surrounding infinite vacuum. Trippy, huh? So which shapes lead to attraction and which lead to repulsion? Unfortunately, there is no intuitive way to decide. Only abstract mathematical calculations and sophisticated experiments can provide an answer.

We can use the following formula to calculate the magnitude of the attractive Casimir force FCS between two flat plates. Its value depends solely on the distance d (in m) between the plates and the area A (in m²) of one plate. The letters h = 6.63·10^(-34) m² kg/s and c = 3.00·10^8 m/s represent Planck’s constant and the speed of light.

FCS = π·h·c·A / (480·d^4) ≈ 1.3·10^(-27)·A / d^4

(The sign ^ stands for “to the power”) Note that because of the fourth power in the denominator, the strength of the force goes down very rapidly with increasing distance. If you double the size of the gap between the plates, the magnitude of the force drops to 1/2^4 = 1/16 of its original value. And if you triple the distance, it goes down to 1/3^4 = 1/81 of its original value. This strong dependence on distance and the presence of Planck’s constant as a factor cause the Casimir force to be extremely weak in most real-world situations.

————————————-

Example 24:

a) Calculate the magnitude of the Casimir force experienced by two conducting plates having an area A = 1 m² each and a distance d = 0.001 m (one millimeter). Compare this to their mutual gravitational attraction given the mass m = 5 kg of one plate.

b) How close do the plates need to be for the Casimir force to be on the order of unity? Set FCS = 1 N.

Solution:

a)

Inserting the given values into the formula for the Casimir force leads to (units not included):

FCS = 1.3·10^(-27)·A/d^4
FCS = 1.3·10^(-27) · 1 / 0.001^4
FCS ≈ 1.3·10^(-15) N

Their gravitational attraction is:

FG = G·m·M / r²
FG = 6.67·10^(-11)·5·5 / 0.001²
FG ≈ 0.0017 N

This is more than a trillion times the magnitude of the Casimir force – no wonder this exotic force went undetected for so long. I should mention though that the gravitational force calculated above should only be regarded as a rough approximation, as Newton’s law of gravitation is tailored to two attracting spheres, not two attracting plates.

b)

Setting up an equation we get:

FCS = 1.3·10^(-27)·A/d^4
1 = 1.3·10^(-27) · 1 / d^4

Multiply by d^4:

d^4 = 1.3·10^(-27)

And apply the fourth root:

d ≈ 2·10^(-7) m = 200 nanometers

This is roughly the size of a common virus and about half the wavelength of violet light.
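If you want to verify these numbers yourself, here’s a quick Python sketch of the whole example (the function name is my own):

```python
import math

h = 6.63e-34   # Planck's constant in m^2 kg / s
c = 3.00e8     # speed of light in m / s
G = 6.67e-11   # gravitational constant in N m^2 kg^-2

def casimir_force(A, d):
    """Magnitude of the Casimir force between two flat plates
    of area A (in m^2) at distance d (in m)."""
    return math.pi * h * c * A / (480 * d**4)

# Part a): plates with A = 1 m^2 at d = 0.001 m ...
print(casimir_force(1, 0.001))    # ~1.3e-15 N

# ... compared to the gravitational pull of the two 5 kg plates
print(G * 5 * 5 / 0.001**2)       # ~0.0017 N

# Part b): solving 1 = 1.3e-27 / d^4 for the distance d
print((1.3e-27)**0.25)            # ~2e-7 m, i.e. 200 nanometers
```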

————————————-

The existence of the Casimir force provides an impressive proof that the abstract mathematics of quantum mechanics is able to accurately describe the workings of the small-scale universe. However, many open questions remain. Quantum theory predicts the energy density of the vacuum to be infinitely large. According to Einstein’s theory of gravitation, such a concentration of energy would produce an infinite space-time curvature and if this were the case, we wouldn’t exist. So either we don’t exist (which I’m pretty sure is not the case) or the most powerful theories in physics are at odds when it comes to the vacuum.

All about the Gravitational Force (For Beginners)

(This is an excerpt from The Book of Forces)

All objects exert a gravitational pull on all other objects. The Earth pulls you towards its center and you pull the Earth towards your center. Your car pulls you towards its center and you pull your car towards your center (of course in this case the forces involved are much smaller, but they are there). It is this force that invisibly tethers the Moon to Earth, the Earth to the Sun, the Sun to the Milky Way Galaxy and the Milky Way Galaxy to its local galaxy cluster.

Experiments have shown that the magnitude of the gravitational attraction between two bodies depends on their masses. If you double the mass of one of the bodies, the gravitational force doubles as well. The force also depends on the distance between the bodies. More distance means less gravitational pull. To be specific, the gravitational force obeys an inverse-square law. If you double the distance, the pull reduces to 1/2² = 1/4 of its original value. If you triple the distance, it goes down to 1/3² = 1/9 of its original value. And so on. These dependencies can be summarized in this neat formula:

F = G·m·M / r²

With F being the gravitational force in Newtons, m and M the masses of the two bodies in kilograms, r the center-to-center distance between the bodies in meters and G = 6.67·10^(-11) N m² kg^(-2) the (somewhat cumbersome) gravitational constant. With this great formula, which was first derived at the end of the seventeenth century and sparked an ugly plagiarism dispute between Newton and Hooke, you can calculate the gravitational pull between two objects for any situation.


(Gravitational attraction between two spherical masses)

If you have trouble applying the formula on your own or just want to play around with it a bit, check out the free web applet Newton’s Law of Gravity Calculator that can be found on the website of the UNL astronomy education group. It allows you to set the required inputs (the masses and the center-to-center distance) using sliders that are marked with special values such as Earth’s mass or the Earth-Moon distance, and it calculates the gravitational force for you.

————————————-

Example 3:

Calculate the gravitational force a person of mass m = 72 kg experiences at the surface of Earth. The mass of Earth is M = 5.97·10^24 kg (the sign ^ stands for “to the power”) and the distance from the center to the surface is r = 6,370,000 m. Using this, show that the acceleration the person experiences in free fall is roughly 10 m/s².

Solution:

To arrive at the answer, we simply insert all the given inputs into the formula for calculating gravitational force.

F = G·m·M / r²
F = 6.67·10^(-11)·72·5.97·10^24 / 6,370,000² N ≈ 707 N

So the magnitude of the gravitational force experienced by the m = 72 kg person is 707 N. In free fall, he or she is driven by this net force (assuming that we can neglect air resistance). Using Newton’s second law we get the following value for the free fall acceleration:

F = m·a
707 N = 72 kg · a

Divide both sides by 72 kg:

a = 707 / 72 m/s² ≈ 9.82 m/s²

This is roughly the 10 m/s² we’ve been using in the introduction, and a more precise value for it. Except for the overly small and large numbers involved, calculating gravitational pull is actually quite straightforward.

As mentioned before, gravitation is not a one-way street. As the Earth pulls on the person, the person pulls on the Earth with the same force (707 N). However, Earth’s mass is considerably larger and hence the acceleration it experiences is much smaller. Using Newton’s second law again and the value M = 5.97·10^24 kg for the mass of Earth we get:

F = m·a
707 N = 5.97·10^24 kg · a

Divide both sides by 5.97·10^24 kg:

a = 707 / (5.97·10^24) m/s² ≈ 1.18·10^(-22) m/s²

So indeed the acceleration the Earth experiences as a result of the gravitational attraction to the person is tiny.
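Here are both calculations bundled into a short Python sketch:

```python
G = 6.67e-11       # gravitational constant in N m^2 kg^-2
m = 72             # mass of the person in kg
M = 5.97e24        # mass of Earth in kg
r = 6_370_000      # radius of Earth in m

F = G * m * M / r**2
print(F)           # ~707 N, the person's weight

print(F / m)       # free-fall acceleration of the person, ~9.8 m/s^2
print(F / M)       # acceleration of Earth, ~1.18e-22 m/s^2
```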

————————————-

Example 4:

By how much does the gravitational pull change when the person of mass m = 72 kg is in a plane (altitude 10 km = 10,000 m) instead of on the surface of Earth? For the mass and radius of Earth, use the values from the previous example.

Solution:

In this case the center-to-center distance r between the bodies is a bit larger. To be specific, it is the sum of the radius of Earth 6,370,000 m and the height above the surface 10,000 m:

r = 6,370,000 m + 10,000 m = 6,380,000 m

Again we insert everything:

F = G·m·M / r²
F = 6.67·10^(-11)·72·5.97·10^24 / 6,380,000² N ≈ 705 N

So the gravitational force does not change by much (only by 0.3 %) when in a plane. An altitude of 10 km is not much by gravity’s standards; the height above the surface needs to be much larger for a noticeable difference to occur.

————————————-

With the gravitational law we can easily show that the gravitational acceleration experienced by an object in free fall does not depend on its mass. All objects are subject to the same 10 m/s² acceleration near the surface of Earth. Suppose we denote the mass of an object by m and the mass of Earth by M. The center-to-center distance between the two is r, the radius of Earth. We can then insert all these values into our formula to find the value of the gravitational force:

F = G·m·M / r²

Once calculated, we can turn to Newton’s second law to find the acceleration a the object experiences in free fall. Using F = m·a and dividing both sides by m we find that:

a = F / m = G·M / r²

So the gravitational acceleration indeed depends only on the mass and radius of Earth, but not on the object’s mass. In free fall, a feather is subject to the same 10 m/s² acceleration as a stone. But wait, doesn’t that contradict our experience? Doesn’t a stone fall much faster than a feather? It sure does, but this is only due to the presence of air resistance. Initially, both are accelerated at the same rate. But while the stone hardly feels the effects of air resistance, the feather is almost immediately slowed down by the collisions with air molecules. If you dropped both in a vacuum tube, where no air resistance can build up, the stone and the feather would reach the ground at the same time! Check out an online video that shows this interesting vacuum tube experiment; it is quite enlightening to see a feather literally drop like a stone.


(All bodies are subject to the same gravitational acceleration)

Since all objects experience the same acceleration near the surface of Earth and since this is where the everyday action takes place, it pays to have a simplified equation at hand for this special case. Denoting the gravitational acceleration by g (with g ≈ 10 m/s²) as is commonly done, we can calculate the gravitational force, also called weight, an object of mass m is subject to at the surface of Earth by:

F = m·g

So it’s as simple as multiplying the mass by ten. Depending on the application, you can also use the more accurate factor g ≈ 9.82 m/s² (which I will not do in this book). Up to now we’ve only been dealing with gravitation near the surface of Earth, but of course the formula allows us to compute the gravitational force and acceleration near any other celestial body. I will spare you the trouble of looking up the relevant data and do the tedious calculations for you. In the table below you can see what gravitational force and acceleration a person of mass m = 72 kg would experience at the surface of various celestial objects. The acceleration is listed in g’s, with 1 g being equal to the free-fall acceleration experienced near the surface of Earth.

(Table: gravitational force and acceleration experienced by a person of mass m = 72 kg at the surface of various celestial bodies)

So while jumping on the Moon would feel like slow motion (the free-fall acceleration experienced is comparable to what you feel when stepping on the gas pedal in a common car), you could hardly stand upright on Jupiter, as your muscles would have to support more than twice your weight. Imagine that! On the Sun it would be even worse. Assuming you found a way not to get instantly killed by the hellish thermonuclear inferno, the enormous gravitational force would feel like having a car on top of you. And unlike temperature or pressure, gravity is something you cannot shield yourself against.

What about the final entry? What is a neutron star and why does it have such a mind-blowing gravitational pull? A neutron star is the remnant of a massive star that has burned its fuel and exploded in a supernova, no doubt the most spectacular light-show in the universe. Such remnants are extremely dense – the mass of up to two suns compressed into an almost perfect sphere of just 20 km diameter. With the mass being so large and the distance from the surface to the center so small, the gravitational force on the surface is gigantic and not survivable under any circumstances.

If you approached a neutron star, the gravitational pull would actually kill you long before you reached the surface, in a process called spaghettification. This unusual term, made popular by the brilliant physicist Stephen Hawking, refers to the fact that in intense gravitational fields objects are vertically stretched and horizontally compressed. The explanation is rather straightforward: since the strength of the gravitational force depends on the distance to the source of said force, one side of the approaching object, the side closer to the source, will experience a stronger pull than the opposite side. This leads to a net force stretching the object. If the gravitational force is large enough, this will make any object look like thin spaghetti. For humans spaghettification would be lethal, as the stretching would cause the body to break apart at the weakest spot (which presumably is just above the hips). So my pro-tip is to keep a polite distance from neutron stars.

Antimatter Production – Present and Future

When it comes to using antimatter for propulsion, getting sufficient amounts of the exotic fuel is the biggest challenge. For flights within the solar system, hybrid concepts would require several micrograms of antimatter, while pure antimatter rockets would consume dozens of kilograms per trip. And going beyond the solar system would demand the production of several metric tons and more.

We are very, very far from this. Currently around 10 nanograms of anti-protons are produced in the large particle accelerators each year. At this rate it would take 100 years to produce one measly microgram and 100 billion years to accumulate one kilogram. However, the antimatter production rate has seen exponential growth, going up by sixteen orders of magnitude over the past decades, and this general trend will probably continue for some time.

Even with a noticeable slowdown in this exponential growth, gram amounts of anti-protons could be manufactured each year towards the end of the 21st century, making hybrid antimatter propulsion feasible. With no slowdown, the rate could even reach kilograms per year by then. While most physicists view this as an overly optimistic estimate, it is not impossible considering the current trend in antimatter production and the historic growth of liquid hydrogen and uranium production rates (both considered difficult to manufacture in the past).
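To get a feeling for what these growth scenarios mean in numbers, here’s a rough Python sketch. The annual growth factors are illustrative assumptions of mine, not established projections:

```python
import math

rate = 10e-9    # current anti-proton production in grams per year
target = 1.0    # production rate needed for gram amounts per year

# Years until the target rate is reached for an assumed annual growth factor
for growth in (1.2, 1.5, 2.0):          # 20 %, 50 % and 100 % per year
    years = math.log(target / rate) / math.log(growth)
    print(growth, round(years))         # ~101, ~45 and ~27 years
```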

There is still much to be optimized in the production of antimatter. The energy efficiency at present is only 10^(-9), meaning that you have to put in one gigajoule of pricey electric energy to produce a single joule of antimatter energy. The resulting costs are a staggering 62.5 trillion USD per gram of anti-protons, making antimatter the most expensive material known to man. So if you want to tell your wife how precious she is to you (and want to get rid of her at the same time), how about buying her a nice antimatter scarf?

Establishing facilities solely dedicated to antimatter production, as opposed to the by-product manufacturing in modern particle accelerators, would significantly improve the situation. NASA experts estimate that an investment of around 5 billion USD is sufficient to build such a first generation antimatter factory. This step could bring the costs of anti-protons down to 25 billion USD per gram and increase the production rate to micrograms per year.

While we might not see kilogram amounts of antimatter or antimatter propulsion systems in our lifetime, the production trend over the next few decades will reveal much about the feasibility of antimatter rockets and interstellar travel. If the optimists are correct, and that’s a big if, the grandchildren of our grandchildren’s grandchildren might watch the launch of the first spacecraft capable of reaching neighboring stars. Sci-fi? I’m sure that’s what people said about the Moon landing and close-up pictures from Mars and Jupiter just a lifetime ago.

For more information, check out my ebook Antimatter Propulsion.

New Release for Kindle: Antimatter Propulsion

I’m very excited to announce the release of my latest ebook called “Antimatter Propulsion”. I’ve been working on it like a madman for the past few months, going through scientific papers and wrestling with the jargon and equations. But I’m quite satisfied with the result. Here’s the blurb, the table of contents and the link to the product page. No prior knowledge is required to enjoy the book.

Many popular science fiction movies and novels feature antimatter propulsion systems, from the classic Star Trek series all the way to Cameron’s hit movie Avatar. But what exactly is antimatter? And how can it be used to accelerate rockets? This book is a gentle introduction to the science behind antimatter propulsion. The first section deals with antimatter in general, detailing its discovery, behavior, production and storage. This is followed by an introduction to propulsion, including a look at the most important quantities involved and the propulsion systems in use or in development today. Finally, the most promising antimatter propulsion and rocket concepts are presented and their feasibility discussed, from the solid core concept to antimatter initiated microfusion engines, from the Valkyrie project to Penn State’s AIMStar spacecraft.

Section 1: Antimatter

The Atom
Dirac’s Idea
Anti-Everything
An Explosive Mix
Proton and Anti-Proton Annihilation
Sources of Antimatter
Storing Antimatter
Getting the Fuel

Section 2: Propulsion Basics

Conservation of Momentum
♪ Throw, Throw, Throw Your Boat ♫
So What’s The Deal?
May The Thrust Be With You
Acceleration
Specific Impulse and Fuel Requirements
Chemical Propulsion
Electric Propulsion
Fusion Propulsion

Section 3: Antimatter Propulsion Concepts

Solid Core Concept
Plasma Core Concept
Beamed Core Concept
Antimatter Catalyzed Micro-Fission / Fusion
Antimatter Initiated Micro-Fusion

Section 4: Antimatter Rocket Concepts

Project Valkyrie
ICAN-II
AIMStar
Dust Shields

You can purchase “Antimatter Propulsion” here for $ 2.99.

Exponential Functions and their Derivatives (including Examples)

Exponential functions have the general form:

f(x) = a·b^x

with two constants a and b (the latter called the base). It’s quite common to use Euler’s number e = 2.7182… as the base, in which case the exponential function is expressed as:

f(x) = a·e^(c·x)

with two constants a and c. Converting from one form to the other is not that difficult, just use e^c = b or c = ln(b). Here’s how it works:

f(x) = a·b^x = a·(e^c)^x

f(x) = a·e^(c·x)

As for the plot, you should keep two special cases in mind. For b > 1 (which corresponds to c > 0 in case of base e) and a positive constant a, the function goes through the point P(0,a) and grows to infinity as x goes to infinity.

(Exponential function with b > 1 or c > 0. For example: f(x) = 8·3^x)

This is exponential growth. When 0 < b < 1 (or c < 0) this turns into exponential decline. The function again goes through the point P(0,a), but approaches zero as x goes to infinity.

(Exponential function with 0 < b < 1 or c < 0. For example: f(x) = 0.5^x)

Here’s how the differentiation of exponential functions works. Given the function:

f(x) = a·b^x

The first derivative is:

f'(x) = ln(b)·a·b^x

For the case of base e:

f(x) = a·e^(c·x)

We get:

f'(x) = c·a·e^(c·x)

You should remember both formulas. Note that the exponential functions have the unique property that their first derivative (slope) is proportional to the function value (height above x-axis). So the higher the curve, the sharper it rises. This is why exponential growth is so explosive.
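If you want to double-check both rules, a computer algebra system will happily do it for you. A minimal sketch using Python’s sympy library:

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c', positive=True)

# General form f(x) = a*b^x: the derivative should be ln(b)*a*b^x
f = a * b**x
print(sp.simplify(sp.diff(f, x) - sp.ln(b) * a * b**x))    # 0

# Base-e form f(x) = a*e^(c*x): the derivative should be c*a*e^(c*x)
g = a * sp.exp(c * x)
print(sp.simplify(sp.diff(g, x) - c * a * sp.exp(c * x)))  # 0
```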


If exponential functions are combined with power or polynomial functions, just use the sum rule.
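For instance, take f(x) = x² + 4·e^(2·x): the power rule gives 2·x for the first term, the exponential rule gives 2·4·e^(2·x) = 8·e^(2·x) for the second, and by the sum rule the derivative of the whole thing is simply f'(x) = 2·x + 8·e^(2·x).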


(This was an excerpt from the FREE ebook “Math Shorts – Derivatives”)

New Release for Kindle: Math Shorts – Derivatives

The rich and exciting field of calculus begins with the study of derivatives. This book is a practical introduction to derivatives, filled with down-to-earth explanations, detailed examples and lots of exercises (solutions included). It takes you from the basic functions all the way to advanced differentiation rules and proofs. Check out the sample for the table of contents and a taste of the action. From the author of “Mathematical Shenanigans”, “Great Formulas Explained” and the “Math Shorts” series. A supplement to this book is available under the title “Exercises to Math Shorts – Derivatives”. It contains an additional 28 exercises including detailed solutions.

Note: Except for the very basics of algebra, no prior knowledge is required to enjoy this book.

Table of Contents:

- Section 1: The Big Picture

- Section 2: Basic Functions and Rules

Power Functions
Sum Rule and Polynomial Functions
Exponential Functions
Logarithmic Functions
Trigonometric Functions

- Section 3: Advanced Differentiation Rules

I Know That I Know Nothing
Product Rule
Quotient Rule
Chain Rule

- Section 4: Limit Definition and Proofs

The Formula
Power Functions
Constant Factor Rule and Sum Rule
Product Rule

- Section 5: Appendix

Solutions to the Problems
Copyright and Disclaimer
Request to the Reader

Differential Equations – The Big Picture

Population Growth

So you want to learn about differential equations? Excellent choice. Differential equations are not only of central importance to science, they can also be quite stimulating and fun (that’s right). In the broadest sense, a differential equation is any equation that connects a function with one or more of its derivatives. What makes these kinds of equations particularly important?

Remember that a derivative expresses the rate of change of a quantity. So the differential equation basically establishes a link between the rate of change of said quantity and its current value. Such a link is very common in nature. Consider population growth. It is obvious that the rate of change will depend on the current size of the population. The more animals there are, the more births (and deaths) we can expect and hence the faster the size of the population will change.

A commonly used model for population growth is the exponential model. It is based on the assumption that the rate of change is proportional to the current size of the population. Let’s put this into mathematical form. We will denote the size of the population by N (measured in number of animals) and the first derivative with respect to time by dN/dt (measured in number of animals per unit time). Note that other symbols often used for the first derivative are N’ and Ṅ. We will however stick to the so-called Leibniz notation dN/dt as it will prove to be quite instructive when dealing with separable differential equations. That said, let’s go back to the exponential model.

With N being the size of the population and dN/dt the corresponding rate of change, our assumption of proportionality between the two looks like this:

dN/dt = r·N

with r being a constant. We can interpret r as the growth rate. If r > 0, then the population will grow, if r < 0, it will shrink. This model has proven to be successful for relatively small animal populations. However, there’s one big flaw: there is no limiting value. According to the model, the population would just keep on growing and growing until it consumes the entire universe. Obviously and luckily, bacteria in a Petri dish don’t behave this way. For a more accurate model, we need to take into account the limits of the environment.

The differential equation that forms the basis of the logistic model, called Verhulst equation in honor of the Belgian mathematician Pierre François Verhulst, does just that. Just like the differential equation for exponential growth, it relates the current size N of the population to its rate of change dN/dt, but also takes into account the finite capacity K of the environment:

dN/dt = r·N·(1 – N/K)

Take a careful look at the equation. Even without any calculations a differential equation can tell a vivid story. Suppose for example that the population is very small. In this case N is much smaller than K, so the ratio N/K is close to zero. This means that we are back to the exponential model. Hence, the logistic model contains the exponential model as a special case. Great! The other extreme is N = K, that is, when the size of the population reaches the capacity. In this case the ratio N/K is one and the rate of change dN/dt becomes zero, which is exactly what we were expecting. No more growth at the capacity.
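You can watch this story unfold numerically. Here’s a minimal Python sketch that integrates both models with simple Euler steps (the values for r, K and the initial population are illustrative choices):

```python
r, K = 0.5, 1000.0        # growth rate and capacity (illustrative)
dt, steps = 0.01, 2000    # step size and number of steps (t = 0 ... 20)

N_exp = N_log = 10.0      # small initial population
for _ in range(steps):
    N_exp += r * N_exp * dt                    # dN/dt = r*N
    N_log += r * N_log * (1 - N_log / K) * dt  # Verhulst equation

print(N_exp)   # order 10^5 and still climbing: unbounded growth
print(N_log)   # just below K = 1000: growth stops at the capacity
```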

Definition and Equation of Motion

Now that you have seen two examples of differential equations, let’s generalize the whole thing. For starters, note that we can rewrite the two equations as such:

dN/dt – r·N = 0

dN/dt – r·N·(1 – N/K) = 0

Denoting the dependent variable with x (instead of N) and higher order derivatives with d^nx/dt^n (with n = 2 resulting in the second derivative, n = 3 in the third derivative, and so on), the general form of a differential equation looks like this:

F(x, dx/dt, d^2x/dt^2, … , d^nx/dt^n, t) = 0

Wow, that looks horrible! But don’t worry. We just stated in the broadest way possible that a differential equation is any equation that connects a function x(t) with one or more of its derivatives dx/dt, d^2x/dt^2, and so on. The above differential equation is said to have the order n. Up to now, we’ve only been dealing with first order differential equations.

The following equation is an example of a second order differential equation that you’ll frequently come across in physics. Its solution x(t) describes the position or angle over time of an oscillating object (spring, pendulum).

d^2x/dt^2 = – c·x

with c being a constant. Second order differential equations often arise naturally from Newton’s equation of motion. This law, which even the most ruthless crook will never be able to break, states that the object’s mass m times the acceleration a experienced by it is equal to the applied net force F:

m·a = F

The force can be a function of the object’s location x (spring), the velocity v = dx/dt (air resistance), the acceleration a = d^2x/dt^2 (Bremsstrahlung) and time t (motor):

F = F(x, dx/dt, d^2x/dt^2, t)

Hence the equation of motion becomes:

m·d^2x/dt^2 = F(x, dx/dt, d^2x/dt^2, t)

This is a second order differential equation that leads to the object’s position over time x(t), given the forces involved in shaping its motion. It might not look pretty to some (it does to me), but there’s no doubt that it is extremely powerful and useful.
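To see such an equation in action, you can integrate the oscillation equation d^2x/dt^2 = – c·x from above numerically. A minimal Python sketch (semi-implicit Euler, with illustrative values for the constant and the initial conditions):

```python
c = 4.0                  # the constant in d^2x/dt^2 = -c*x
x, v = 1.0, 0.0          # initial position and velocity
dt = 0.001               # time step

for _ in range(1571):    # integrate up to t = pi/2
    v += -c * x * dt     # update the velocity from the acceleration
    x += v * dt          # update the position from the velocity

print(x)   # ~ -1, matching the exact solution x(t) = cos(2*t)
```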

Equilibrium Points

To demonstrate what equilibrium points are and how to compute them, let’s take the logistic model a step further. In the absence of predators, we can assume the fish in a certain lake to grow according to Verhulst’s equation. The presence of fishermen obviously changes the dynamics of the population. Every time a fisherman goes out, he will remove some of the fish from the population. It is safe to assume that the success of the fisherman depends on the current size of the population: the more fish there are, the more he will be able to catch. We can set up a modified version of Verhulst’s equation to describe the situation mathematically:

dN/dt = r·N·(1 – N/K) – c·N

with a constant c > 0 that depends on the total number of fishermen, the frequency and duration of their fishing trips, the size of their nets, and so on. Solving this differential equation is quite difficult. However, what we can do with relative ease is finding equilibrium points.

Remember that dN/dt describes the rate of change of the population. Hence, by setting dN/dt = 0, we can find out if and when the population reaches a constant size. Let’s do this for the above equation.

0 = r·N·(1 – N/K) – c·N

This leads to two solutions:

N = 0

N = K·(1 – c/r)

The first equilibrium point is quite boring. Once the population reaches zero, it will remain there. You don’t need to do math to see that. However, the second equilibrium point is much more interesting. It tells us how to calculate the size of the population in the long run from the constants. We can also see that a stable population is only possible if c < r.

Note that not all equilibrium points that we find during such an analysis are actually stable (in the sense that the system will return to the equilibrium point after a small disturbance). The easiest way to find out whether an equilibrium point is stable or not is to plot the rate of change, in this case dN/dt, over the dependent variable, in this case N. If the curve goes from positive to negative values at the equilibrium point, the equilibrium is stable, otherwise it is unstable.

(Plot: rate of change dN/dt over population size N)
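If you’d rather not plot, a quick numerical sign check does the same job. A small Python sketch for the fishing model (the constants are illustrative, with c < r):

```python
r, K, c = 0.5, 1000.0, 0.2     # illustrative constants with c < r

def dN_dt(N):
    return r * N * (1 - N / K) - c * N

N_eq = K * (1 - c / r)         # the second equilibrium point, here 600
print(dN_dt(N_eq - 1))         # positive: a slightly smaller population grows back
print(dN_dt(N_eq + 1))         # negative: a slightly larger population shrinks back
```

The rate of change goes from positive to negative at the equilibrium point, so this equilibrium is stable.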

(This was an excerpt from my e-book “Math Shorts – Introduction to Differential Equations”)

Modeling Theme Park Queues

Who doesn’t love a day at the theme park? You can go on thrilling roller-coaster rides, enjoy elaborate shows, have a tasty lunch in between or just relax and take in the scenery. Of course there’s one thing that does spoil the fun a bit: the waiting. For the most popular attractions waiting times of around one hour are not uncommon during peak times, while the ride itself may be over in no more than two or three minutes.

Let’s work towards a basic model for queues in theme parks and other situations in which queues commonly arise. We will assume that the passing rate R(t), that is, the number of people passing the entrance of the attraction per unit time, is given. How many of these will enter the line? This will depend on the popularity of the attraction as well as the current size of the line. The more people are already in the line, the less likely others are to join. We’ll denote the number of people in the line at time t with n(t) and use this ansatz for the rate r(t) at which people join the queue:

r(t) = R(t) · a / (1 + b·n(t))

The constant a expresses the popularity of the attraction (more specifically, it is the percentage of passers-by that will use the attraction if no queue is present) and the constant b is a “line repulsion” factor. The stronger visitors are put off by the presence of a line, the higher its value will be. How does the size of the line develop over time given the above function? We assume that the maximum serving rate is c people per unit time. So the rate of change for the number of people in line is (for n(t) ≥ 0):

dn/dt = R(t) · a / (1 + b·n(t)) – c

In case the numerical evaluation returns a value n(t) < 0 (which is obviously nonsense, but a mathematical possibility given our ansatz), we will force n(t) = 0. An interesting variation, into which we will not dive much further though, is to include a time lag. Usually the expected waiting time is displayed to visitors on a screen. The visitors make their decision on joining the line based on this information. However, the displayed waiting time is not updated in real-time. We have to expect that there’s a certain delay d between actual and displayed length of the line. With this effect included, our equation becomes:

dn/dt = R(t) · a / (1 + b·n(t – d)) – c

Simulation

For the numerical solution we will go back to the delay-free version. We choose one minute as our unit of time. For the passing rate, that is, the people passing by per minute, we set:

R(t) = 0.00046 · t · (720 – t)

We can interpret this function as such: at t = 0 the park opens and the passing rate is zero. It then grows to a maximum of 60 people per minute at t = 360 minutes (or 6 hours) and declines again. At t = 720 minutes (or 12 hours) the park closes and the passing rate is back to zero. We will assume the popularity of the attraction to be:

a = 0.2

So if there’s no line, 20 % of all passers-by will make use of the attraction. We set the maximum service rate to:

c = 5 people per minute

What about the “line repulsion” factor? Let’s assume that if the line grows to 200 people (given the above service rate this would translate into a waiting time of 40 minutes), the willingness to join the line drops from the initial 20 % to 10 %.

0.2 / (1 + 200·b) = 0.1

→ b = 0.005

Given all these inputs and the model equation, here’s how the queue develops over time:

(Plot: queue length n(t) over the course of the day)

It shows that no line will form until around t = 100 minutes into opening the park (at which point the passing rate reaches 29 people per minute). Then the queue size increases roughly linearly for the next several hours until it reaches its maximum value of n = 256 people (waiting time: 51 minutes) at t = 440 minutes. Note that the maximum value of the queue size occurs much later than the maximum value of the passing rate. After reaching a maximum, there’s a sharp decrease in line length until the line ceases to be at around t = 685 minutes. Further simulations show that if you include a delay, there’s no noticeable change as long as the delay is in the order of a few minutes.
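In case you want to reproduce these numbers or play with the inputs, here’s a minimal Python version of the simulation (one Euler step per minute):

```python
a, b, c = 0.2, 0.005, 5.0          # popularity, line repulsion, service rate

def R(t):                          # passing rate in people per minute
    return 0.00046 * t * (720 - t)

n, peak, t_peak = 0.0, 0.0, 0
for t in range(720):               # simulate the whole day, minute by minute
    n += R(t) * a / (1 + b * n) - c
    n = max(n, 0.0)                # a queue can't have negative length
    if n > peak:
        peak, t_peak = n, t

print(round(peak), t_peak)         # compare with the values quoted above
```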

(This was an excerpt from my ebook “Mathematical Shenanigans”)

The Problem With Antimatter Rockets

The distance to our neighboring star Alpha Centauri is roughly 4.3 lightyears or 25.6 trillion km. This is an enormous distance. It would take the Space Shuttle 165,000 years to cover this distance. That’s 6,600 generations of humans who’d know nothing but the darkness of space. Obviously, this is not an option. Do we have the technologies to get there within the lifespan of a person? Surprisingly, yes. The concept of antimatter propulsion might sound futuristic, but all the technologies necessary to build such a rocket exist. Today.

What exactly do you need to build an antimatter rocket? You need to produce antimatter, store antimatter (remember, if it comes in contact with regular matter it explodes, so putting it in a box is certainly not a possibility) and find a way to direct the annihilation products. Large particle accelerators such as the ones at CERN routinely produce antimatter (mostly anti-electrons and anti-protons). Penning traps, a sophisticated arrangement of electric and magnetic fields, can store charged antimatter. And magnetic nozzles, suitable for directing the products of proton / anti-proton annihilations, have already been used in several experiments. It’s all there.

So why are we not on the way to Alpha Centauri? We should be making sweet love with green female aliens, but instead we’re still banging our regular, non-green, non-alien women. What’s the hold-up? It would be expensive. Let me rephrase that. The costs would be blasphemous, downright insane, Charlie Manson style. Producing one gram of antimatter costs around 62.5 trillion $, which makes it by far the most expensive material on Earth. And you’d need tons of the stuff to get to Alpha Centauri. Bummer! And even if we all got a second job to pay for it, we still couldn’t manufacture sufficient amounts in the near future. Currently 1.5 nanograms of antimatter are being produced every year. Even if scientists managed to increase this rate by a factor of one million, it would still take around 700 years to produce one measly gram. And we need tons of it! Argh. Reality be a harsh mistress …

New Release for Kindle: Math Shorts – Integrals

Yesterday I released the second part of my “Math Shorts” series. This time it’s all about integrals. Integrals are among the most useful and fascinating mathematical concepts ever conceived. The ebook is a practical introduction for all those who don’t want to miss out. In it you’ll find down-to-earth explanations, detailed examples and interesting applications. Check out the sample (see link to product page) for a taste of the action.

Important note: to enjoy the book, you need solid prior knowledge in algebra and calculus. This means in particular being able to solve all kinds of equations, finding and interpreting derivatives as well as understanding the notation associated with these topics.

Click the cover to open the product page:

cover

Here’s the TOC:

Section 1: The Big Picture
-Anti-Derivatives
-Integrals
-Applications

Section 2: Basic Anti-Derivatives and Integrals
-Power Functions
-Sums of Functions
-Examples of Definite Integrals
-Exponential Functions
-Trigonometric Functions
-Putting it all Together

Section 3: Applications
-Area – Basics
-Area – Numerical Example
-Area – Parabolic Gate
-Area – To Infinity and Beyond
-Volume – Basics
-Volume – Numerical Example
-Volume – Gabriel’s Horn
-Average Value
-Kinematics

Section 4: Advanced Integration Techniques
-Substitution – Basics
-Substitution – Indefinite Integrals
-Substitution – Definite Integrals
-Integration by Parts – Basics
-Integration by Parts – Indefinite Integrals
-Integration by Parts – Definite Integrals

Section 5: Appendix
-Formulas To Know By Heart
-Greek Alphabet
-Copyright and Disclaimer
-Request to the Reader

Enjoy!

New Release for Kindle: Introduction to Differential Equations

Differential equations are an important and fascinating part of mathematics with numerous applications in almost all fields of science. This book is a gentle introduction to the rich world of differential equations filled with no-nonsense explanations, step-by-step calculations and application-focused examples.

Important note: to enjoy the book, you need solid prior knowledge in algebra and calculus. This means in particular being able to solve all kinds of equations, finding and interpreting derivatives, evaluating integrals as well as understanding the notation that goes along with those.

Click the cover to open the product page:

cover

Here’s the TOC:

Section 1: The Big Picture

-Population Growth
-Definition and Equation of Motion
-Equilibrium Points
-Some More Terminology

Section 2: Separable Differential Equations

-Approach
-Exponential Growth Revisited
-Fluid Friction
-Logistic Growth Revisited

Section 3: First Order Linear Differential Equations

-Approach
-More Fluid Friction
-Heating and Cooling
-Pure, Uncut Mathematics
-Bernoulli Differential Equations

Section 4: Second Order Homogeneous Linear Differential Equations (with Constant Coefficients)

-Wait, what?
-Oscillating Spring
-Numerical Example
-The Next Step – Non-Homogeneous Equations

Section 5: Appendix

-Formulas To Know By Heart
-Greek Alphabet
-Copyright and Disclaimer
-Request to the Reader

Note: With this book release I’m starting my “Math Shorts” series. The next installment “Math Shorts – Integrals” will be available in just a few days! (Yes, I’m working like a madman on it)

Motion With Constant Acceleration (Examples, Exercises, Solutions)

An abstraction often used in physics is motion with constant acceleration. This is a good approximation for many different situations: free fall over small distances or in low-density atmospheres, full braking in car traffic, an object sliding down an inclined plane, etc … The mathematics behind this special case is relatively simple. Assume the object that is subject to the constant acceleration a (in m/s²) initially has a velocity v(0) (in m/s). Since the velocity is the integral of the acceleration function, the object’s velocity after time t (in s) is simply:

1) v(t) = v(0) + a · t

For example, if a car initially goes v(0) = 20 m/s and brakes with a constant a = -10 m/s², which is a realistic value for asphalt, its velocity after a time t is:

v(t) = 20 – 10 · t

After t = 1 second, the car’s speed has decreased to v(1) = 20 – 10 · 1 = 10 m/s and after t = 2 seconds the car has come to a halt: v(2) = 20 – 10 · 2 = 0 m/s. As you can see, it’s all pretty straightforward. Note that the negative acceleration (also called deceleration) has caused the velocity to decrease over time. In a similar manner, a positive acceleration will cause the speed to go up. You can read more on acceleration in this blog post.

What about the distance x (in m) the object covers? We have to integrate the velocity function to find the appropriate formula. The covered distance after time t is:

2) x(t) = v(0) · t + 0.5 · a · t²

While that looks a lot more complicated, it is really just as straightforward. Let’s go back to the car that initially has a speed of v(0) = 20 m/s and brakes with a constant a = -10 m/s². In this case the above formula becomes:

x(t) = 20 · t – 0.5 · 10 · t²

After t = 1 second, the car has traveled x(1) = 20 · 1 – 0.5 · 10 · 1² = 15 meters. By the time it comes to a halt at t = 2 seconds, it has moved x(2) = 20 · 2 – 0.5 · 10 · 2² = 20 meters. Note that we don’t have to use the time as a variable. There’s a way to eliminate it: we can solve equation 1) for t and insert the resulting expression into equation 2). This leads to a formula connecting the velocity v and distance x.

3) v² = v(0)² + 2 · a · x

Solved for x it looks like this:

3)’ x = ( v² – v(0)² ) / (2 · a)

It’s a very useful formula that you should keep in mind. Suppose a tram accelerates at a constant a = 1.3 m/s², which is also a realistic value, from rest (v(0) = 0 m/s). What distance does it need to go to full speed v = 10 m/s? Using equation 3)’ we can easily calculate this:

x = ( 10² – 0² ) / (2 · 1.3) ≈ 38.5 m
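All three equations fit into a few lines of Python. Here’s a sketch that reproduces the braking car and the tram (the function names are my own):

```python
def v(t, v0, a):                # equation 1): velocity after time t
    return v0 + a * t

def x(t, v0, a):                # equation 2): covered distance after time t
    return v0 * t + 0.5 * a * t**2

def x_from_v(v_end, v0, a):     # equation 3)': distance with the time eliminated
    return (v_end**2 - v0**2) / (2 * a)

print(v(1, 20, -10), v(2, 20, -10))    # 10 m/s, 0 m/s
print(x(1, 20, -10), x(2, 20, -10))    # 15 m, 20 m
print(x_from_v(10, 0, 1.3))            # ~38.5 m for the tram
```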

————————————————————————————-

Here are a few exercises and solutions using the equations 1), 2) and 3).

1. During free fall (air resistance neglected) an object accelerates at about a = 10 m/s². Suppose the object is dropped, that is, it is initially at rest (v(0) = 0 m/s).

a) What is its speed after t = 3 seconds?
b) What distance has it traveled after t = 3 seconds?
c) Suppose we drop the object from a tower that is x = 20 meters tall. At what speed will it impact the ground?
d) How long does the drop take?

Hint: in exercise d) solve equation 1) for t and insert the result from c)

2. During the reentry of spacecraft, accelerations can be as high as a = -70 m/s². Suppose the spacecraft initially moves with v(0) = 6000 m/s.

a) What’s the speed and covered distance after t = 10 seconds?
b) How long will it take the spacecraft to halve its initial velocity?
c) What distance will it travel during this time?

3. An investigator arrives at the scene of a car crash. From the skid marks he deduces that it took the car a distance x = 55 meters to come to a halt. Assume full braking (a = -10 m/s²). Was the car initially above the speed limit of 30 m/s?

————————————————————————————-

Solutions to the exercises:

Exercise 1

a) 30 m/s
b) 45 m
c) 20 m/s
d) 2 s

Exercise 2

a) 5,300 m/s and 56,500 m
b) 42.9 s (rounded)
c) 192,860 m (rounded)

Exercise 3

Yes (he was initially going 33.2 m/s)

————————————————————————————-

To learn the basic math you need to succeed in physics, check out the e-book “Algebra – The Very Basics”. For an informal introduction to physics, check out the e-book “Physics! In Quantities and Examples”. Both are available at low prices and exclusively for Kindle.

CTR (Click Through Rate) – Explanation, Results and Tips

A very important metric for banner advertisement is the CTR (click-through rate). It is simply the number of clicks the ad generated divided by the total number of impressions. You can also think of it as the product of the probability of a user noticing the ad and the probability of the user being interested in the ad.

CTR = clicks / impressions = p(notice) · p(interested)

The current average CTR is around 0.09 % or 9 clicks per 10,000 impressions and has been declining for the past several years. What are the reasons for this? For one, the common banner locations are familiar to web users and are thus easy to ignore. There’s also the increased popularity of ad-blocking software.

The attitude of internet users is generally negative towards banner ads. This is caused by advertisers using more and more intrusive formats. These include annoying pop-ups and their even more irritating sisters, the floating ads. Adopting them is not favorable for advertisers. They harm a brand and produce very low CTRs. So hopefully, we will see an end to such nonsense soon.

As for animated ads, their success depends on the type of website and target group. For high-involvement websites that users visit to find specific information (news, weather, education), animated banners perform worse than static banners. In case of low-involvement websites that are put in place for random surfing (entertainment, lists, mini games) the situation is reversed. The target group also plays an important role. For B2C (business-to-consumer) ads animation generally works well, while for B2B (business-to-business) animation was shown to lower the CTR.

The language used in ads has also been extensively studied. One interesting result is that it is often preferable to use English even if the ad is displayed in a country in which English is not the first language. A more obvious result is that catchy words and calls to action (“read more”) increase the CTR.

As for the banner size, the data is inconclusive. Some analyses report that the CTR grows with banner size, while others conclude that banner sizes around 250×250 or 300×250 generate the highest CTRs. There is a clearer picture regarding shape: in terms of CTR, square shapes work better than thin rectangles of the same size. No significant difference was found between vertical and horizontal rectangles.

Here’s another hint: my own theoretical calculations show that higher CTRs can be achieved by advertising on pages that have a low visitor loyalty. The explanation for this counter-intuitive outcome as well as a more sophisticated formula for the CTR can be found here. It is, in a nutshell, a result of the multiplication rule of statistics. The calculation also shows that on sites with a low visitor loyalty the CTR will stay constant, while on websites with a high visitor loyalty it will decrease over time.

Sources and further reading:

  • Study on banner advertisement type and shape effect on click-through-rate and conversion

http://www.aabri.com/manuscripts/131481.pdf

  • The impact of banner ad styles on interaction and click-through-rates

http://iacis.org/iis/2008/S2008_989.pdf

  • Impact of animation and language on banner click-through-rates

http://www.academia.edu/1608289/Impact_of_Animation_and_Language_on_Banner_Click-Through_Rates

Mathematics of Banner Ads: Visitor Loyalty and CTR

First of all: why should a website’s visitor loyalty have any effect at all on the CTR we can expect to achieve with a banner ad? What does the one have to do with the other? To understand the connection, let’s take a look at an overly simplistic example. Suppose we place a banner ad on a website and get in total 3 impressions (granted, not a realistic number, but I’m only trying to make a point here). From previous campaigns we know that a visitor clicks on our ad with a probability of 0.1 = 10 % (which is also quite unrealistic).

The expected number of clicks from these 3 impressions is …

… 0.1 + 0.1 + 0.1 = 0.3 when all impressions come from different visitors.

… 1 – 0.9^3 = 0.27 when all impressions come from only one visitor.

(the symbol ^ stands for “to the power of”)

This demonstrates that we can expect more clicks if the website’s visitor loyalty is low, which might seem counter-intuitive at first. But the great thing about mathematics is that it cuts through bullshit better than the sharpest knife ever could. Math doesn’t lie. Let’s develop a model to show that a higher visitor loyalty translates into a lower CTR.

Suppose we got a number of I impressions on the banner ad in total. We’ll denote the fraction of these impressions that come from visitors who contributed …

… only one impression by f(1)
… two impressions by f(2)
… three impressions by f(3)

And so on. Note that this distribution f(n) must satisfy the condition ∑[n] f(n) = 1 for it all to check out. The symbol ∑[n] stands for the sum over all n.

We’ll assume that the probability of a visitor clicking on the ad during one visit is q. The probability that this visitor clicks on the ad at least once during n visits is then: p(n) = 1 – (1 – q)^n (to understand why, you have to know about the multiplication rule of statistics – if you’re not familiar with it, my ebook “Statistical Snacks” is a good place to start).

Let’s count the expected number of clicks for the I impressions. Visitors …

… contributing only one impression give rise to c(1) = p(1) + p(1) + … [f(1)·I addends in total] = p(1)·f(1)·I clicks

… contributing two impressions give rise to c(2) = p(2) + p(2) + … [f(2)·I/2 addends in total] = p(2)·f(2)·I/2 clicks

… contributing three impressions give rise to c(3) = p(3) + p(3) + … [f(3)·I/3 addends in total] = p(3)·f(3)·I/3 clicks

And so on. So the total number of clicks we can expect is: c = ∑[n] p(n)·f(n)·I/n. Since the CTR is just clicks divided by impressions, we finally get this beautiful formula:

CTR = ∑[n] p(n)·f(n)/n

The expression p(n)/n decreases as n increases. So a higher visitor loyalty (which mathematically means that f(n) has a relatively high value for n greater than one) translates into a lower CTR. One final conclusion: the formula can also tell us a bit about how the CTR develops during a campaign. If a website has no loyal visitors, the CTR will remain at a constant level, while for websites with a lot of loyal visitors, the CTR will decrease over time.
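Here’s a small Python sketch that puts the formula to work. The click probability q and the two distributions f(n) are made-up values for illustration:

```python
q = 0.01   # click probability per visit (assumption)

def ctr(f):
    """CTR = sum over n of p(n)*f(n)/n, where f maps the number of
    visits n to the share of impressions from such visitors."""
    return sum((1 - (1 - q)**n) * share / n for n, share in f.items())

low_loyalty  = {1: 1.0}             # every impression from a unique visitor
high_loyalty = {1: 0.2, 5: 0.8}     # most impressions from five-visit regulars

print(ctr(low_loyalty))    # 0.01
print(ctr(high_loyalty))   # ~0.0098, lower - just as the formula predicts
```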

Decibel – A Short And Simple Explanation

A way of expressing a quantity in relative terms is to do the ratio with respect to a reference value. This helps to put a quantity into perspective. For example, in mechanics the acceleration is often expressed in relation to the gravitational acceleration. Instead of saying the acceleration is 22 m/s² (which is hard to relate to unless you know mechanics), we can also say the acceleration is 22 / 9.81 ≈ 2.2 times the gravitational acceleration or simply 2.2 g’s (which is much easier to comprehend).

The decibel (dB) is also a general way of expressing a quantity in relative terms, sort of a “logarithmic ratio”. And just like the ratio, it is not a physical unit or limited to any field such as mechanics, audio, etc … You can express any quantity in decibels. For example, if we take the reference value to be the gravitational acceleration, the acceleration 22 m/s² corresponds to 3.5 dB.

To calculate the decibel value L of a quantity x relative to the reference value x(0), we can use this formula:

L = 10 · log10( x / x(0) )

In acoustics the decibel is used to express the sound pressure level (SPL), measured in pascals (Pa), using the threshold of hearing (0.00002 Pa) as reference. However, in this case a factor of twenty instead of ten is used. The change in factor is a result of inputting the squares of the pressure values rather than the linear values.

SPL = 20 · log10( p / 0.00002 Pa )

The sound coming from a stun grenade peaks at a sound pressure level of around 15,000 Pa. In decibel terms this is:

SPL = 20 · log10( 15,000 Pa / 0.00002 Pa ) ≈ 177.5 dB

which is way past the threshold of pain, which lies at around 63.2 Pa (130 dB). Here are some typical values to keep in mind:

0 dB → Threshold of Hearing
20 dB → Whispering
60 dB → Normal Conversation
80 dB → Vacuum Cleaner
110 dB → Front Row at Rock Concert
130 dB → Threshold of Pain
160 dB → Bursting Eardrums

Why use the decibel at all? Isn’t the ratio good enough for putting a quantity into perspective? The ratio works fine as long as the quantity doesn’t span many orders of magnitude. This is the case for the speeds or accelerations that we encounter in our daily lives. But when a quantity varies significantly and spans many orders of magnitude (which is what the SPL does), the decibel is much more handy and relatable.

Another reason for using the decibel for audio signals is provided by the Weber-Fechner law. It states that a stimulus is perceived in a logarithmic rather than linear fashion. So expressing the SPL in decibels can be regarded as a first approximation to how loud a sound is perceived by a person as opposed to how loud it is from a purely physical point of view.

Note that when combining two or more sound sources, the decibel values are not simply added. Rather, if we combine two sources that are equally loud and in phase, the volume increases by 6 dB (if they are out of phase, it will be less than that). For example, when adding two sources that are at 50 dB, the resulting sound will have a volume of 56 dB (or less).
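Here’s a short Python helper that reproduces these numbers:

```python
import math

P0 = 0.00002   # threshold of hearing in Pa

def spl(p):    # sound pressure level in dB for a pressure p in Pa
    return 20 * math.log10(p / P0)

print(spl(63.2))     # ~130 dB, the threshold of pain
print(spl(15000))    # ~177.5 dB, the stun grenade

# Two equally loud, in-phase sources double the sound pressure:
print(20 * math.log10(2))   # ~6 dB increase
```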

(This was an excerpt from Audio Effects, Mixing and Mastering. Available for Kindle)

Compressors: Formula for Maximum Volume

Suppose we have an audio signal which peaks at L decibels. We apply a compressor with a threshold T (with T being smaller than L, otherwise the compressor will not spring into action) and ratio r. How does this affect the maximum volume of the audio signal? Let’s derive a formula for that. Remember that the compressor leaves the parts of the signal that are below the threshold unchanged and dampens the excess volume (threshold to signal level) by the ratio we set. So the dynamic range from the threshold to the peak, which is L – T, is compressed to (L – T) / r. Hence, the peak volume after compression is:

L’ = T + (L – T) / r

For example, suppose our mix peaks at L = – 2 dB. We compress it using a threshold of T = – 10 dB and a ratio r = 2:1. The maximum volume after compression is:

L’ = – 10 dB + (– 2 dB – (– 10 dB)) / 2 = – 10 dB + 8 dB / 2 = – 6 dB
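
The formula is easy to check in code. A minimal Python sketch (peak_after_compression is an illustrative name; it assumes the peak L lies above the threshold T):

def peak_after_compression(L, T, r):
    # L' = T + (L - T) / r, valid when the peak L exceeds the threshold T
    return T + (L - T) / r

print(peak_after_compression(-2, -10, 2))  # -6.0 dB, as calculated above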

Braingate – You Thought It Was Science-Fiction, But It’s Not

On April 12, 2011, something extraordinary happened. A 58-year-old woman who was paralyzed from the neck down reached for a bottle of coffee, drank from a straw and put the bottle back on the table. But she didn’t reach with her own hand – she controlled a robotic arm with her mind. Unbelievable? It is. But decades of research made the unbelievable possible. You can watch this exceptional and moving moment in history in the BrainGate YouTube video.

Beautiful

The 58-year-old woman (patient S3) was part of the BrainGate2 project, a collaboration of researchers at the Department of Veterans Affairs, Brown University, the German Aerospace Center (DLR) and others. The scientists implanted a small chip containing 96 electrodes into her motor cortex, the part of the brain responsible for voluntary movement. The chip measures the electrical activity of the brain, and an external computer translates this pattern into the movement of a robotic arm. A brain-computer interface. And it’s not science-fiction, it’s science.

During the study the woman was able to grasp items within the allotted time with a 70 % success rate. Another participant (patient T2) even managed to achieve a 96 % success rate. Besides moving robotic arms, the participants were also given the task of spelling out words and sentences by indicating letters via eye movement. Participant T2 spelt out this sentence: “I just imagined moving my own arm and the [robotic] arm moved where I wanted it to go”.

The future is exciting.

Audio Effects: All About Compressors

Almost all music and recorded speech that you hear has been sent through at least one compressor at some point during the production process. If you are serious about music production, you need to get familiar with this powerful tool. This means understanding the big picture as well as getting to know each of the parameters (Threshold, Ratio, Attack, Release, Make-Up Gain) intimately.

  • How They Work

Throughout any song the volume level varies over time. It might hover around – 6 dB in the verse, rise to – 2 dB in the first chorus, drop to – 8 dB in the interlude, and so on. A term that is worth knowing in this context is the dynamic range. It refers to the difference in volume level from the softest to the loudest part. Some genres of music, such as orchestral music, generally have a large dynamic range, while for mainstream pop and rock a much smaller dynamic range is desired. A symphony might range from – 20 dB in the soft oboe solo to – 2 dB for the exciting final chord (dynamic range: 18 dB), whereas your common pop song will rather go from – 8 dB in the first verse to 0 dB in the last chorus (dynamic range: 8 dB).

During a recording we have some control over what dynamic range we will end up with. We can tell the musicians to take it easy in the verse and really go for it in the chorus. But of course this is not very accurate and we’d like to have full control of the dynamic range rather than just some. We’d also like to be able to change the dynamic range later on. Compressors make this (and much more) possible.

The compressor constantly monitors the volume level. As long as the level is below a certain threshold, the compressor will not do anything. Only when the level exceeds the threshold does it become active and dampen the excess volume by a certain ratio. In short: everything below the threshold stays as it is, everything above the threshold gets compressed. Keep this in mind.

Suppose for example we set the threshold to – 10 dB and the ratio to 4:1. Before applying the compressor, our song varies from a minimum value of – 12 dB in the verse to a maximum value of – 2 dB in the chorus. Let’s look at the verse first. Here the volume does not exceed the threshold and thus the compressor does not spring into action. The signal passes through unchanged. The story is different for the chorus. Its volume level is 8 dB above the threshold. The compressor takes this excess volume and dampens it according to the ratio we set. To be more specific: the compressor turns the 8 dB excess volume into a mere 8 dB / 4 = 2 dB. So the compressed song ranges from – 12 dB in the verse to – 10 dB + 2 dB = – 8 dB in the chorus.

Here’s a summary of the process:

Settings:

Threshold: – 10 dB
Ratio: 4:1

Before:

Minimum: – 12 dB
Maximum: – 2 dB
Dynamic range: 10 dB

Excess volume (threshold to maximum): 8 dB
With ratio applied: 8 dB / 4 = 2 dB

After:

Minimum: – 12 dB
Maximum: – 8 dB
Dynamic range: 4 dB
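
If you prefer code over tables, here is a minimal Python sketch of this static compression curve (compress_level is an illustrative name; real compressors also smooth the gain over time via the attack and release parameters discussed later):

def compress_level(level_db, threshold_db, ratio):
    # Everything below the threshold stays as it is,
    # the excess above the threshold is divided by the ratio
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_level(-12, -10, 4))  # -12 dB (verse, unchanged)
print(compress_level(-2, -10, 4))   # -8 dB (chorus, compressed)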

As you can see, the compressor had a significant effect on the dynamic range. Choosing appropriate values for the threshold and ratio, we are free to compress the song to any dynamic range we desire. When using a DAW (Digital Audio Workstation such as Cubase, FL Studio or Ableton Live), it is possible to see the workings of a compressor with your own eyes. The image below shows the uncompressed file (top) and the compressed file (bottom) with the threshold set to – 12 dB and the ratio to 2:1.

(Waveforms: uncompressed file (top) and compressed file (bottom))

The soft parts are identical, while the louder parts (including the short and possibly problematic peaks) have been reduced in volume. The dynamic range clearly shrank in the process. Note that after applying the compressor, the song’s effective volume (RMS) is much lower. Since this is usually not desired, most compressors have a parameter called make-up gain. Here you can specify by how much you’d like the compressor to raise the volume of the song after the compression process is finished. This increase in volume is applied to all parts of the song, soft or loud, so there will be no further change in the dynamic range. It only makes up for the loss in loudness (hence the name).

  • Usage of Compressors

We already got to know one application of the compressor: controlling the dynamic range of a song. But usually this is just a first step in reaching another goal: increasing the effective volume of the song. Suppose you have a song with a dynamic range of 10 dB and you want to make it as loud as possible. So you move the volume fader until the maximum level is at 0 dB. According to the dynamic range, the minimum level will now be at – 10 dB. The effective volume will obviously be somewhere in-between the two values. For the sake of simplicity, we’ll assume it to be right in the middle, at – 5 dB. But this is too soft for your taste. What to do?

You insert a compressor with a threshold of – 6 dB and a ratio of 3:1. The 4 dB range from the minimum level – 10 dB to the threshold – 6 dB is unchanged, while the 6 dB range from the threshold – 6 dB to the maximum level 0 dB is compressed to 6 dB / 3 = 2 dB. So overall the dynamic range is reduced to 4 dB + 2 dB = 6 dB. Again you move the volume fader until the maximum volume level coincides with 0 dB. However, this time the minimum volume will be higher, at – 6 dB, and the effective volume at – 3 dB (up from the – 5 dB we started with). Mission accomplished, the combination of compression and gain indeed left us with a higher average volume.
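
Here is the same calculation as a Python sketch, using the simplifying assumption from above that the effective volume sits right in the middle of the dynamic range (all names are illustrative):

def effective_volume_after(min_db, max_db, threshold_db, ratio):
    # Compress the part above the threshold, leave the rest unchanged
    new_max = threshold_db + (max_db - threshold_db) / ratio
    new_min = min_db
    # Move the fader so that the new maximum sits at 0 dB
    gain = 0 - new_max
    # Crude stand-in for the effective volume: the midpoint of the range
    return ((new_min + gain) + (new_max + gain)) / 2

print(effective_volume_after(-10, 0, -6, 3))  # -3.0 dB, up from -5 dB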

In theory, this means we can get the effective volume up to almost any value we desire by compressing a song and then making it louder. We could have the whole song close to 0 dB. This possibility has led to a “loudness war” in music production. Why not go along with that? For one, you always want to put as much emphasis as possible on the hook. This is hard to do if the intro and verse are already blaring at maximum volume. Another reason is that severely reducing the dynamic range kills the expressive elements in your song. It is not a coincidence that music which strongly relies on expressive elements (orchestral and acoustic music) usually has the highest dynamic range. It needs the wide range to go from expressing peaceful serenity to expressing destructive desperation. Read the following out loud and memorize it: the more expression it has, the less you should compress. While a techno song might work at maximum volume, a ballad sure won’t.

————————————-

Background Info – SPL and Loudness

Talking about how loud something is can be surprisingly complicated. The problem is that our brain does not process sound inputs in a linear fashion. A sound wave with twice the sound pressure does not necessarily seem twice as loud to us. So when expressing how loud something is, we can either do this by using well-defined physical quantities such as the sound pressure level (which unfortunately does not reflect how loud a person perceives something to be) or by using subjective psycho-acoustic quantities such as loudness (which is hard to define and measure properly).

Sound waves are pressure and density fluctuations that propagate at a material- and temperature-dependent speed in a medium. For air at 20 °C this speed is roughly 340 m/s. The quantity sound pressure expresses the deviation of the sound wave pressure from the pressure of the surrounding air. The sound pressure level, in short: SPL, is proportional to the logarithm of the effective sound pressure. Long story short: the stronger the sound pressure, the higher the SPL. The SPL is used to objectively measure how loud something is. Another important objective quantity for this purpose is the volume. It is a measure of how much energy is contained in an audio signal and thus closely related to the SPL.

A subjective quantity that reflects how loud we perceive something to be is loudness. Due to our highly non-linear brains, the loudness of an audio signal is not simply proportional to its SPL or volume level. Rather, loudness depends in a complex way on the SPL, frequency, duration of the sound, its bandwidth, etc … In the image below you can see an approximation of the relationship between loudness, SPL and frequency.

(Chart: curves of equal loudness as a function of SPL and frequency)

Each red curve is a curve of equal loudness. Here’s how we can read the chart. Take a look at the red curve at the very bottom. It starts at 75 dB SPL at a frequency of 20 Hz and drops to 25 dB SPL at 100 Hz. Since the red curve is a curve of equal loudness, we can conclude that we perceive a 75 dB SPL sound at 20 Hz to be just as loud as a 25 dB SPL sound at 100 Hz, even though the first sound carries far more sound pressure. Note that you cannot simply divide the decibel values to get the physical ratio: since the decibel scale is logarithmic, the 50 dB difference corresponds to a sound pressure ratio of 10^(50/20) ≈ 316.
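
You can check the pressure ratio with a single line of Python:

# A 50 dB difference corresponds to a pressure ratio of 10^(50/20)
print(10 ** (50 / 20))  # ≈ 316.2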

————————————-


(Compressor in Cubase)

  • Threshold and Ratio

What’s the ideal threshold to use? This depends on what you are trying to accomplish. Suppose you set the threshold at a relatively high value (for example – 10 dB in a good mix). In this case the compressor will be inactive for most of the song and only kick in during the hook and short peaks. With the threshold set to a high value, you are thus “taking the top off”. This would be a suitable choice if you are happy with the dynamics in general, but would like to make the mix less aggressive.

What about low thresholds (such as – 25 dB in a good mix)? In this case the compressor will be active for most of the song and will make the entire song quite dense. This is something to consider if you aim to really push the loudness of the song. Once the mix is dense, you can go for a high effective volume. But low-threshold compression can also add warmth to a ballad, so it’s not necessarily a tool restricted to the loudness war.

Onto the ratio. If you set the ratio to a high value (such as 5:1 and higher), you are basically telling the mix: to the threshold and no further. Anything past the threshold will be heavily compressed, which is great if you have pushy peaks that make a mix overly aggressive. This could be the result of a snare that’s way too loud or an inexperienced singer. Whatever the cause, a carefully chosen threshold and a high ratio should take care of it in a satisfying manner. Note though that in this case the compressor should be applied to the track that is causing the problem and not the entire mix.

A low value for the ratio (such as 2:1 or smaller) will have a rather subtle effect. Such values are perfect if you want to apply the compressor to a mix that already sounds good and just needs a finishing touch. The mix will become a little more dense, but its character will be kept intact.

  • Attack and Release

There are two important parameters we have ignored so far: the attack and release. The attack parameter allows you to specify how quickly the compressor sets in once the volume level goes past the threshold. A compressor with a long attack (20 milliseconds or more) will let short peaks pass. As long as these peaks are not over-the-top, this is not necessarily a bad thing. The presence of short peaks, also called transients, is important for a song’s liveliness and natural sound. A long attack makes sure that these qualities are preserved and that the workings of the compressor are less noticeable.

A short attack (5 milliseconds or less) can produce a beautifully crisp sound that is suitable for energetic music. But it is important to note that if the attack is too short, the compressor will kill the transients and the whole mix will sound flat and bland. Even worse, a short attack can lead to clicks and a nervous “pumping effect”. Be sure to watch out for those as you shorten the attack.

The release is the time it takes for the compressor to become inactive once the volume level drops below the threshold. It is usually much longer than the attack, but the overall principles are similar. A long release (600 milliseconds or more) will make sure that the compression happens in a more subtle fashion, while a short release (150 milliseconds or less) can produce a pumping sound.

It is always a good idea to choose the release so that it fits the rhythm of your song (the same of course is true for temporal parameters in reverb and delay). One way to do this is to calculate the time per beat TPB in milliseconds from your song’s tempo as measured in beats per minute BPM and use this value as the point of reference.

TPB [ms] = 60000 / BPM

For example, in a song with the tempo BPM = 120 the duration of one beat is TPB = 60000 / 120 = 500 ms. If you need a longer release, use a multiple of it (1000 ms, 1500 ms, and so on), for a shorter release divide it by any natural number (500 ms / 2 = 250 ms, 500 ms / 3 = 167 ms, and so on). This way the compressor will “breathe” in unison with your music.
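
A quick Python sketch for generating tempo-synced release times (release_times is an illustrative name):

def release_times(bpm):
    # Time per beat in milliseconds
    tpb = 60000 / bpm
    longer = [tpb * n for n in (1, 2, 3)]    # multiples for longer releases
    shorter = [tpb / n for n in (2, 3, 4)]   # divisions for shorter releases
    return tpb, longer, shorter

tpb, longer, shorter = release_times(120)
print(tpb)      # 500.0 ms
print(longer)   # [500.0, 1000.0, 1500.0]
print(shorter)  # [250.0, ~166.7, 125.0]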

If you are not sure where to start regarding attack and release, just make use of the 20/200-rule: Set the attack to 20 ms, the release to 200 ms and work towards the ideal values from there. Alternatively, you can always go through the presets of the compressor to find suitable settings.

 

You can learn about advanced compression techniques as well as other effects from Audio Effects, Mixing and Mastering, available for Kindle for $ 3.95.

New Release: Audio Effects, Mixing and Mastering (Kindle)

This book is a quick guide to effects, mixing and mastering for beginners using Cubase as its platform. The first chapter highlights the most commonly used effects in audio production such as compressors, limiters, equalizers, reverb, delay, gates and others. You will learn about how they work, when to apply them, the story behind the parameters and what traps you might encounter. The chapter also contains a quick peek into automation and what it can do.

In the second chapter we focus on what constitutes a good mix and how to achieve it using a clear and comprehensible strategy. This is followed by a look at the mastering chain that will help to polish and push a mix. The guide is sprinkled with helpful tips and background information to make the learning experience more vivid. You get all of this for a fair price of $ 3.95.

(Book cover)

Table Of Contents:

1. Audio Effects And Automation
1.1. Compressors
1.2. Limiters
1.3. Equalizers
1.4. Reverb and Delay
1.5. Gates
1.6. Chorus
1.7. Other Effects
1.8. Automation

2. Mixing
2.1. The Big Picture
2.2. Mixing Strategy
2.3. Separating Tracks

3. Mastering
3.1. Basic Idea
3.2. Mastering Strategy
3.3. Mid/Side Processing
3.4. Don’t Give Up

4. Appendix
4.1. Calculating Frequencies
4.2. Decibel
4.3. Copyright and Disclaimer
4.4. Request to the Reader