Mathematics

Recurrence Relations – A Simple Explanation And Much More

Recurrence relations are a powerful tool for mathematical modeling and numerically solving differential equations (no matter how complicated). And as luck would have it, they are relatively easy to understand and apply. So let’s dive right into it using a purely mathematical example (for clarity) before looking at a real-world application.

This equation is a typical example of a recurrence relation:

x(t+1) = 5 * x(t) + 2 * x(t-1)

At the heart of the equation is a certain quantity x. It appears three times: x(t+1) stands for the value of this quantity at a time t+1 (next month), x(t) for the value at time t (current month) and x(t-1) the value at time t-1 (previous month). So what the relation allows us to do is to determine the value of said quantity for the next month, given that we know it for the current and previous month. Of course the choice of time span here is just arbitrary, it might as well be a decade or nanosecond. What’s important is that we can use the last two values in the sequence to determine the next value.

Suppose we start with x(0) = 0 and x(1) = 1. With the recurrence relation we can continue the sequence step by step:

x(2) = 5 * x(1) + 2 * x(0) = 5 * 1 + 2 * 0 = 5

x(3) = 5 * x(2) + 2 * x(1) = 5 * 5 + 2 * 1 = 27

x(4) = 5 * x(3) + 2 * x(2) = 5 * 27 + 2 * 5 = 145

And so on. Once we’re given the “seed”, determining the sequence is not that hard. It’s just a matter of plugging in the last two data points and doing the calculation. The downside to defining a sequence recursively is that if you want to know x(500), you have to go through hundreds of steps to get there. Luckily, this is not a problem for computers.
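Translating a recurrence relation into code is indeed straightforward. Here's a minimal Python sketch of the relation above; it reproduces the hand calculation and computes x(500) in a fraction of a second:

```python
def x(n, x0=0, x1=1):
    """n-th value of the sequence x(t+1) = 5*x(t) + 2*x(t-1)."""
    if n == 0:
        return x0
    prev, curr = x0, x1
    for _ in range(n - 1):
        prev, curr = curr, 5 * curr + 2 * prev
    return curr

print(x(4))    # 145, matching the calculation above
print(x(500))  # a huge number, but computed practically instantly
```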

In the most general terms, a recurrence relation relates the value of quantity x at a time t + 1 to the values of this quantity x at earlier times. The time itself could also appear as a factor. So this here would also be a legitimate recurrence relation:

x(t+1) = 5 * t * x(t) – 2 * x(t-10)

Here we calculate the value of x at time t+1 (next month) by its value at a time t (current month) and t – 10 (ten months ago). Note that in this case you need eleven seed values to be able to continue the sequence. If we are only given x(0) = 0 and x(10) = 50, we can do the next step:

x(11) = 5 * 10 * x(10) – 2 * x(0) = 5 * 10 * 50 – 2 * 0 = 2500

But we run into problems after that:

x(12) = 5 * 11 * x(11) – 2 * x(1) = 5 * 11 * 2500 – 2 * x(1) = ?

We already calculated x(11), but there’s nothing we can do to deduce x(1).

Now let’s look at one interesting application of such recurrence relations, modeling the growth of animal populations. We’ll start with a simple model that relates the number of animals x in the next month t+1 to the number of animals x in the current month t as such:

x(t+1) = x(t) + f * x(t)

The factor f is a constant that determines the rate of growth (more specifically: it is the monthly growth expressed as a decimal fraction). So if our population grows by 25 % each month, we get:

x(t+1) = x(t) + 0.25 * x(t)

If we start with x(0) = 100 rabbits at month t = 0 we get:

x(1) = x(0) + 0.25 * x(0) = 100 + 0.25 * 100 = 125 rabbits

x(2) = x(1) + 0.25 * x(1) = 125 + 0.25 * 125 = 156 rabbits

x(3) = x(2) + 0.25 * x(2) = 156 + 0.25 * 156 = 195 rabbits

x(4) = x(3) + 0.25 * x(3) = 195 + 0.25 * 195 = 244 rabbits

x(5) = x(4) + 0.25 * x(4) = 244 + 0.25 * 244 = 305 rabbits

And so on. Maybe you already see the main problem with this exponential model: it just keeps on growing. This is fine as long as the population is small and the environment rich in resources, but every environment has its limits. Let’s fix this problem by including an additional term in the recurrence relation that will lead to this behavior:

– Exponential growth as long as the population is small compared to the capacity
– Slowing growth near the capacity
– No growth at capacity
– Population decline when over the capacity

How can we translate this into mathematics? It takes a lot of practice to be able to tweak a recurrence relation to get the behavior you want. You just learned your first chord and I’m asking you to play Mozart, that’s not fair. But take a look at this bad boy:

x(t+1) = x(t) + a * x(t) * (1 – x(t) / C)

This is called the logistic model and the constant C represents said capacity. If x is much smaller than the capacity C, the ratio x / C will be close to zero and we are left with exponential growth:

x(t+1) ≈ x(t) + a * x(t) * (1 – 0)

x(t+1) ≈ x(t) + a * x(t)

So this admittedly complicated looking recurrence relation fulfills our first demand: exponential growth for small populations. What happens if the population x reaches the capacity C? Then all growth should stop. Let’s see if this is the case. With x = C, the ratio x / C is obviously equal to one, and in this case we get:

x(t+1) = x(t) + a * x(t) * (1 – 1)

x(t+1) = x(t)

The number of animals remains constant, just as we wanted. Last but not least, what happens if (for some reason) the population gets past the capacity, meaning that x is greater than C? In this case the ratio x / C is greater than one (let’s just say x / C = 1.2 for the sake of argument):

x(t+1) = x(t) + a * x(t) * (1 – 1.2)

x(t+1) = x(t) + a * x(t) * (- 0.2)

The second term is now negative and thus x(t+1) will be smaller than x(t) – a decline back to capacity. What an enormous amount of beautiful behavior in such a compact line of mathematics! This is where the power of recurrence relations comes to light. Anyways, let’s go back to our rabbit population. We’ll let them grow by 25 % (a = 0.25), but this time on an island that can only sustain 300 rabbits at most (C = 300). Thus the model looks like this:

x(t+1) = x(t) + 0.25 * x(t) * (1 – x(t) / 300)

If we start with x(0) = 100 rabbits at month t = 0 we get:

x(1) = 100 + 0.25 * 100 * (1 – 100 / 300) = 117 rabbits

x(2) = 117 + 0.25 * 117 * (1 – 117 / 300) = 135 rabbits

x(3) = 135 + 0.25 * 135 * (1 – 135 / 300) = 153 rabbits

x(4) = 153 + 0.25 * 153 * (1 – 153 / 300) = 172 rabbits

x(5) = 172 + 0.25 * 172 * (1 – 172 / 300) = 190 rabbits

Note that now the growth is almost linear rather than exponential and will slow down further the closer we get to the capacity (continue the sequence if you like, it will gently approach 300, but never go past it).
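If you'd rather let the computer continue the sequence, a few lines of Python reproduce it. A sketch, rounding to whole rabbits each month as done above (so expect differences of a rabbit here and there):

```python
a, C = 0.25, 300   # growth rate and island capacity
x = 100            # x(0) rabbits
for t in range(1, 6):
    x = round(x + a * x * (1 - x / C))  # round to whole rabbits each month
    print(t, x)    # 117, 135, 154, 172, 190 (within a rabbit of the text)
```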

We can even go further and include random events in a recurrence relation. Let’s stick to the rabbits and their logistic growth and say that there’s a p = 5 % chance that in a certain month a flood occurs. If this happens, the population will halve. If no flood occurs, it will grow logistically as usual. This is what our new model looks like in mathematical terms:

x(t+1) = x(t) + 0.25 * x(t) * (1 – x(t) / 300)    if no flood occurs

x(t+1) = 0.5 * x(t)    if a flood occurs

To determine if there’s a flood, we let a random number generator spit out a number between 1 and 100 at each step. If the number is 5 or smaller, we use the “flood” equation (in accordance with the 5 % chance for a flood). Again we turn to our initial population of 100 rabbits with the growth rate and capacity unchanged:

x(1) = 100 + 0.25 * 100 * (1 – 100 / 300) = 117 rabbits

x(2) = 117 + 0.25 * 117 * (1 – 117 / 300) = 135 rabbits

x(3) = 135 + 0.25 * 135 * (1 – 135 / 300) = 153 rabbits

x(4) = 0.5 * 153 = 77 rabbits

x(5) = 77 + 0.25 * 77 * (1 – 77 / 300) = 91 rabbits

As you can see, in this run the random number generator gave a number 5 or smaller during the fourth step. Accordingly, the number of rabbits halved. You can do a lot of shenanigans (and some useful stuff as well) with recurrence relations and random numbers, the sky’s the limit. I hope this quick overview was helpful.
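Before moving on, here's what the stochastic model looks like as a small Python sketch; since the floods are random, every run produces a different population history:

```python
import random

a, C, p_flood = 0.25, 300, 0.05    # growth rate, capacity, flood probability
x = 100.0                          # initial population
for t in range(1, 13):             # simulate one year, month by month
    if random.random() < p_flood:  # flood: population halves
        x = 0.5 * x
    else:                          # no flood: ordinary logistic growth
        x = x + a * x * (1 - x / C)
    print(t, round(x))
```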

A note for the advanced: here’s how you turn a differential equation into a recurrence relation. Let’s take this differential equation:

dx/dt = a * x * exp(- b*x)

First multiply by dt:

dx = a * x * exp(- b * x) * dt

We set dx (the change in x) equal to x(t+h) – x(t) and dt (change in time) equal to a small constant h. Of course for x we now use x(t):

x(t+h) – x(t) = a * x(t) * exp(- b * x(t)) * h

Solve for x(t+h):

x(t+h) = x(t) + a * x(t) * exp(- b * x(t)) * h

And done! The smaller your h, the more accurate your numerical results. How low you can go depends on your computer’s computing power.
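As an illustration, here's the scheme in code; the parameter values for a, b, x(0), h and the time horizon are arbitrary demonstration choices:

```python
import math

def euler(x0, a, b, h, t_end):
    """March dx/dt = a*x*exp(-b*x) forward with the recurrence
    x(t+h) = x(t) + a*x(t)*exp(-b*x(t)) * h."""
    x, t = x0, 0.0
    while t < t_end:
        x += a * x * math.exp(-b * x) * h
        t += h
    return x

# Halving h should barely change the result once h is small enough:
print(euler(x0=1.0, a=0.5, b=0.1, h=0.01,  t_end=10))
print(euler(x0=1.0, a=0.5, b=0.1, h=0.005, t_end=10))
```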

How Statistics Turned a Harmless Nurse Into a Vicious Killer

Let’s do a thought experiment. Suppose you have 2 million coins at hand and a machine that will flip them all at the same time. After twenty flips, you evaluate and you come across one particular coin that showed heads twenty times in a row. Suspicious? Alarming? Is there something wrong with this coin? Let’s dig deeper. How likely is it that a coin shows heads twenty times in a row? Luckily, that’s not so hard to compute. For each flip there’s a 0.5 probability that the coin shows heads and the chance of seeing this twenty times in a row is just 0.5^20 = 0.000001 (rounded). So the odds of this happening are incredibly low. Indeed we stumbled across a very suspicious coin. Deep down I always knew there was something up with this coin. He just had this “crazy flip”, you know what I mean? Guilty as charged and end of story.

Not quite, you say? You are right. After all, we flipped 2 million coins. If the odds of twenty heads in a row are 0.000001, we should expect 0.000001 * 2,000,000 = 2 coins to show this unlikely string. It would be much more surprising not to find this string among the large number of trials. Suddenly, the coin with the supposedly “crazy flip” doesn’t seem so guilty anymore.
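You can verify this expectation with a quick simulation; a minimal sketch (2 million simulated coins, 20 flips each, with random.getrandbits drawing all 20 flips of one coin at once):

```python
import random

N_COINS, N_FLIPS = 2_000_000, 20
all_heads = sum(
    random.getrandbits(N_FLIPS) == (1 << N_FLIPS) - 1  # all 20 flips heads?
    for _ in range(N_COINS)
)
print(all_heads)  # typically 1 to 3 coins; the expected value is about 1.9
```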

What’s the point of all this? Recently, I came across the case of Lucia De Berk, a Dutch nurse who was accused of murdering patients in 2003. Over the course of one year, seven of her patients had died and a “sharp” medical expert concluded that there was only a 1 in 342 million chance of this happening. This number and some other pieces of “evidence” (among them, her “odd” diary entries and her “obsession” with Tarot cards) led the court in The Hague to conclude that she must be guilty as charged, end of story.

Not quite, you say? You are right. In 2010 came the not guilty verdict. Turns out (funny story), she never committed any murder; she was just a harmless nurse who was transformed into a vicious killer by faulty statistics. Let’s go back to the thought experiment for a moment, imperfect for this case though it may be. Imagine that each coin represents a nurse and each flip a month of duty. It is estimated that there are around 300,000 hospitals worldwide, so we are talking about a lot of nurses/coins doing a lot of work/flips. Should we become suspicious when seeing a string of several deaths for a particular nurse? No, of course not. By pure chance, this will occur. It would be much more surprising not to find a nurse with a “suspicious” string of deaths among this large number of nurses. Focusing in on one nurse only blurs the big picture.

And, leaving statistics behind, the case also goes to show that you can always find something “odd” about a person if you want to. Faced with new information, even if not reliable, you interpret the present and past behavior in a “new light”. The “odd” diary entries, the “obsession” with Tarot cards … weren’t the signs always there?

Be careful when passing judgment. Benjamin Franklin once said he should consider himself lucky if he’s right 50 % of the time. And that’s a genius talking, so I don’t even want to know my stats …

Code Transmission and Probability

Not long ago, mankind sent its first rovers to Mars to analyze the planet and find out if it ever supported life. The nagging question “Are we alone?” drives us to penetrate deeper into space. A special challenge associated with such journeys is communication. There needs to be a constant flow of digital data, strings of ones and zeros, back and forth to ensure the success of the space mission.

During the process of transmission over the endless distances, errors can occur. There’s always a chance that zeros randomly turn into ones and vice versa. What can we do to make communication more reliable? One way is to send duplicates.

Instead of simply sending a 0, we send the string 00000. If not too many errors occur during the transmission, we can still decode it on arrival. For example, if it arrives as 00010, we can deduce that the originating string was with a high probability a 0 rather than a 1. The single transmission error that occurred did not cause us to incorrectly decode the string.

Assume that the probability of a transmission error is p and that we add to each 0 (or 1) four copies, as in the above paragraph. What is the chance of us being able to decode it correctly? To be able to decode 00000 on arrival correctly, we can’t have more than two transmission errors occurring. So during the n = 5 transmissions, k = 0, k = 1 and k = 2 errors are allowed. Using the binomial distribution we can compute the probability for each of these events:

p(0 errors) = C(5,0) · p^0 · (1-p)^5

p(1 error) = C(5,1) · p^1 · (1-p)^4

p(2 errors) = C(5,2) · p^2 · (1-p)^3

We can simplify these expressions somewhat. A binomial calculator provides us with these values: C(5,0) = 1, C(5,1) = 5 and C(5,2) = 10. This leads to:

p(0 errors) = (1-p)^5

p(1 error) = 5 · p · (1-p)^4

p(2 errors) = 10 · p^2 · (1-p)^3

Adding the probabilities for all these desired events tells us how likely it is that we can correctly decode the string.

p(success) = (1-p)^3 · ((1-p)^2 + 5·p·(1-p) + 10·p^2)

In the graph below you can see the plot of this function. The x-axis represents the transmission error probability p and the y-axis the chance of successfully decoding the string. For p = 10 % (1 in 10 bits arrive incorrectly) the odds of identifying the originating string are still a little more than 99 %. For p = 20 % (1 in 5 bits arrive incorrectly) this drops to about 94 %.

[Plot: probability of successfully decoding the string versus transmission error probability p]

The downside to this gain in accuracy is that the amount of data to be transmitted, and thus the time it takes for the transmission to complete, increases fivefold.
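For those who want to play with other error rates or longer repetition codes, here's a small sketch of the majority-decoding success probability:

```python
from math import comb

def p_success(p, n=5):
    """Probability that an n-fold repetition code is decoded correctly:
    at most (n - 1) // 2 of the n transmitted bits may flip."""
    max_errors = (n - 1) // 2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(max_errors + 1))

print(p_success(0.1))  # ≈ 0.991, the "little more than 99 %" above
print(p_success(0.2))  # ≈ 0.942, about 94 %
```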

The Jeans Mass, or: How are stars born?

No, this has nothing to do with pants. The Jeans mass is a concept used in astrophysics and its unlikely name comes from the British physicist Sir James Jeans, who researched the conditions of star formation. The question at the core is: under what circumstances will a dark and lonely gas cloud floating somewhere in the depth of space turn into a shining star? To answer this, we have to understand what forces are at work.

One obvious factor is gravitation. It will always work towards contracting the gas cloud. If no other forces were present, it would lead the cloud to collapse into a single point. The temperature of the cloud however provides an opposite push. It “equips” the molecules of the cloud with kinetic energy (energy of motion) and given a high enough temperature, the kinetic energy would be sufficient for the molecules to simply fly off into space, never to be seen again.

It is clear that no star will form if the cloud expands and falls apart. Only when gravity wins this battle of inward and outward push can a stable star result. Sir James Jeans did the math and found that it all boils down to one parameter, the Jeans mass. If the actual mass of the interstellar cloud is larger than this critical mass, it will contract and stellar formation occurs. If on the other hand the actual mass is smaller, the cloud will simply dissipate.

The Jeans mass depends mainly on the temperature T (in K) and density D (in kg/m³) of the cloud. The higher the temperature, the larger the Jeans mass will be. This is in line with our previous discussion. When the temperature is high, a larger amount of mass is necessary to overcome the thermal outward push. The value of the Jeans mass M (in kg) can be estimated from this equation:

M ≈ 10^20 · sqrt(T³ / D)

Typical values for the temperature and density of interstellar clouds are T = 10 K and D = 10^-22 kg/m³. This leads to a Jeans mass of M ≈ 3.2 · 10^32 kg. Note that the critical mass turns out to be much greater than the mass of a typical star, indicating that stars generally form in clusters. Rather than the cloud contracting into a single star, which is the picture you probably had in your mind during this discussion, it will fragment at some point during the contraction and form multiple stars. So stars always have brothers and sisters.
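In code the estimate is essentially a one-liner; a sketch (the conversion to solar masses assumes roughly 2 · 10^30 kg per solar mass):

```python
from math import sqrt

def jeans_mass(T, D):
    """Estimated Jeans mass in kg from temperature T (K) and density D (kg/m³)."""
    return 1e20 * sqrt(T**3 / D)

M = jeans_mass(T=10, D=1e-22)
print(M)         # ≈ 3.2e32 kg
print(M / 2e30)  # ≈ 160 solar masses: enough for a whole cluster
```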

(This was an excerpt from the Kindle book Physics! In Quantities and Examples)

The Difference Between Mass and Weight

In general, it is acceptable to use weight as a synonym for mass. However, in a very strict physical sense this is incorrect. Weight is the gravitational force experienced by an object and accordingly measured in Newtons and not kilograms. An object of mass m has the weight F:

F = m · g

with the gravitational acceleration g. On Earth the value of the gravitational acceleration at the surface is g = 9.81 m/s². So a typical adult with a mass of m = 75 kg has a weight of:

F = 75 kg · 9.81 m/s² = 735.75 N

On the moon (or any other point of the universe), the mass would remain at m = 75 kg. But since the gravitational acceleration on the moon is much lower (g = 1.62 m/s²), the weight changes to:

F = 75 kg · 1.62 m/s² = 121.5 N

Keep this distinction in mind. Mass is a fundamental property of an object that does not depend on the conditions outside the object, while weight is a variable that changes with the strength of the surrounding gravitational field.

(This was an excerpt from Physics! In Quantities and Examples)

Distribution of E-Book Sales on Amazon

For e-books on Amazon the relationship between the daily sales rate s and the rank r is approximately given by:

s = 100,000 / r

Such an inverse proportional relationship between a ranked quantity and the rank is called a Zipf distribution. So a book on rank r = 10,000 can be expected to sell s = 100,000 / 10,000 = 10 copies per day. As of November 2013, there are about 2.4 million e-books available on Amazon’s US store (talk about tough competition). In this post we’ll answer two questions. The first one is: how many e-books are sold on Amazon each day? To answer that, we need to add the daily sales rates from r = 1 to r = 2,400,000.

s = 100,000 · ( 1/1 + 1/2 + … + 1/2,400,000 )

We can evaluate that using the approximation formula for harmonic sums:

1/1 + 1/2 + 1/3 + … + 1/r ≈ ln(r) + 0.58

Thus we get:

s ≈ 100,000 · ( ln(2,400,000) + 0.58 ) ≈ 1.5 million

That’s a lot of e-books! And a lot of saved trees for that matter. The second question: What percentage of the e-book sales come from the top 100 books? Have a guess before reading on. Let’s calculate the total daily sales for the top 100 e-books:

s ≈ 100,000 · ( ln(100) + 0.58 ) ≈ 0.5 million

So the top 100 e-books already make up one-third of all sales while the other 2,399,900 e-books have to share the remaining two-thirds. The cake is very unevenly distributed.
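For readers who want to check the arithmetic, here's a small sketch of both calculations using the harmonic-sum approximation:

```python
from math import log

def total_daily_sales(top_rank):
    """Sum of 100,000 / r for r = 1 .. top_rank, via ln(r) + 0.58."""
    return 100_000 * (log(top_rank) + 0.58)

print(total_daily_sales(2_400_000))   # ≈ 1.5 million sales per day
print(total_daily_sales(100)
      / total_daily_sales(2_400_000)) # ≈ 0.34, the top-100 share
```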

This was a slightly altered excerpt from More Great Formulas Explained, available on Amazon for Kindle. For more posts on the ebook market go to my E-Book Market and Sales Analysis Pool.

Car Dynamics – Sliding and Overturning

In this post we will take a look at car performance in curves. Of central importance for our considerations is the centrifugal force. Whenever a body is moving in a curved path, this force comes into play. You probably felt it many times in your car. It is the force that tries to push you out of a curve as you go through it.

The centrifugal force C (in N) depends on three factors: the velocity v (in m/s) of the car, its mass m (in kg) and the radius r (in m) of the curve. Given these quantities, we can easily compute the centrifugal force using this formula:

C = m · v² / r

Note the quadratic dependence on speed. If you double the car’s speed, the centrifugal force quadruples. With this force acting, there must be a counter-force to cancel it for the car not to slide. This force is provided by the sideways friction of the tires. The frictional force F (in N) can be calculated from the so called coefficient of friction μ (dimensionless), the car mass m and the gravitational acceleration g (in m/s²).

F = μ · m · g

The coefficient of friction depends mainly on the road type and condition. On dry asphalt we can set μ ≈ 0.8, on wet asphalt μ ≈ 0.6, on snow μ ≈ 0.2 and on ice μ ≈ 0.1. At low speeds the frictional force exceeds the centrifugal force and the car will be able to go through the curve without any problems. However, as we increase the velocity, so does the centrifugal force and at a certain critical velocity the forces cancel each other out. Any increase in speed from this point on will result in the car sliding.

We can compute the critical speed s (in m/s) by equating the expressions for the forces:

m · s² / r = μ · m · g

s = sqrt (μ · r · g)

This is the speed at which the car begins to slide. Note that there’s no dependence on mass anymore. Since both the centrifugal as well as the frictional force grow proportionally to the car’s mass, it doesn’t play a role in determining the critical speed for sliding. All that’s left in terms of variables is the coefficient of friction (lower friction, lower critical speed) and the radius of the curve (smaller radius, narrower curve, lower critical speed).

However, sliding is not the only problem that can occur in curves. Under certain circumstances a car can also overturn. Again the centrifugal force is the culprit. Assuming the center of gravity (in short: CG) of the car is at a height of h (in m), the centrifugal force will produce a torque T acting to overturn the car:

T = h · C = m · v² · h / r

On the other hand, there’s the weight of the car giving rise to an opposing torque T’ that grows with the width w (in m) and mass m of the car:

T’ = 0.5 · m · g · w

At low speeds, the torque caused by the centrifugal force will be lower than the one caused by the gravitational pull. But at a certain critical speed o (in m/s), the torques will cancel each other and any further increase in speed will result in the car overturning. Equating the above expressions, we get:

m · o² · h / r = 0.5 · m · g · w

o = sqrt (0.5 · r · g · w / h)

Aside from the curve radius, the determining factor here is the ratio of width to height. The larger it is, the harder it will be for the centrifugal force to overturn the car. This is why lowering a car when intending to go fast makes sense. If you lower the CG while keeping the width the same, the ratio w / h, and thus the critical speed for overturning, will increase.

Let’s look at some examples before drawing a final conclusion from these truly great formulas.

—————————

According to caranddriver.com the center of gravity of a 2014 BMW 435i is h = 0.5 m above the ground. The width of the car is about w = 1.8 m. Calculate the critical speed for sliding and overturning in a curve of radius r = 300 m on a dry asphalt road (μ ≈ 0.8).

Nothing to do but to apply the formulas:

s = sqrt (0.8 · 300 m · 9.81 m/s²)

s ≈ 49 m/s ≈ 175 km/h ≈ 108 mph

So with normal driving behavior you certainly won’t get anywhere near sliding. But note that sudden steering in a curve can cause the radius of your car’s path to be considerably smaller than the actual curve radius.

Onto the critical overturning speed:

o = sqrt (0.5 · 300 m · 9.81 m/s² · 3.6)

o ≈ 73 m/s ≈ 262 km/h ≈ 162 mph

Not even Michael Schumacher could bring this car to overturn.

—————————

How would the critical speeds change if we drove the 2014 BMW 435i through the same curve on an icy road? In this case the coefficient is considerably lower (μ ≈ 0.1). For the critical sliding speed we get:

s = sqrt (0.1 · 300 m · 9.81 m/s²)

s ≈ 17 m/s ≈ 62 km/h ≈ 38 mph

So even this sweet sports car is in danger of sliding relatively quickly under these conditions. What about the overturning speed? Well, it has nothing to do with the friction of the tires, so it will still be at 73 m/s.
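If you’d like to try other cars, curves and road conditions, both formulas fit into a few lines; a sketch:

```python
from math import sqrt

G = 9.81  # gravitational acceleration in m/s²

def sliding_speed(mu, r):
    """Critical sliding speed s = sqrt(mu * r * g) in m/s."""
    return sqrt(mu * r * G)

def overturning_speed(r, w, h):
    """Critical overturning speed o = sqrt(0.5 * r * g * w / h) in m/s."""
    return sqrt(0.5 * r * G * w / h)

print(sliding_speed(0.8, 300))            # ≈ 49 m/s on dry asphalt
print(sliding_speed(0.1, 300))            # ≈ 17 m/s on ice
print(overturning_speed(300, 1.8, 0.5))   # ≈ 73 m/s, friction plays no role
```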

—————————

This was an excerpt from More Great Formulas Explained. Interested in more car dynamics? Take a look at my post on How to Compute Maximum Car Speed. For other interesting physics articles, check out my BEST OF. I hope you enjoyed and drive safe!

How To Calculate Maximum Car Speed + Examples (Mercedes C-180, Bugatti Veyron)

How do you determine the maximum possible speed your car can go? Well, one rather straightforward option is to just get into your car, go on the Autobahn and push down the pedal until the needle stops moving. The problem with this option is that there’s not always an Autobahn nearby. So we need to find another way.

Luckily, physics can help us out here. You probably know that whenever a body is moving at constant speed, there must be a balance of forces in play. The force that is aiming to accelerate the object is exactly balanced by the force that wants to decelerate it. Our first job is to find out what forces we are dealing with.

Obvious candidates for the retarding forces are ground friction and air resistance. However, in our case looking at the latter is sufficient since at high speeds, air resistance becomes the dominating factor. This makes things considerably easier for us. So how can we calculate air resistance?

To compute air resistance we need to know several inputs. One of these is the air density D (in kg/m³), which at sea level has the value D = 1.25 kg/m³. We also need to know the projected area A (in m²) of the car, which is just the product of width times height. Of course there’s also the dependence on the velocity v (in m/s) relative to the air. The formula for the drag force is:

F = 0.5 · c · D · A · v²

with c (dimensionless) being the drag coefficient. This is the one quantity in this formula that is tough to determine. You probably don’t know this value for your car and there’s a good chance you will never find it out even if you try. In general, you want to have this value as low as possible.

On ecomodder.com you can find a table of drag coefficients for many common modern car models. Excluding prototype models, the drag coefficient in this list ranges between c = 0.25 for the Honda Insight to c = 0.58 for the Jeep Wrangler TJ Soft Top. The average value is c = 0.33. In first approximation you can estimate your car’s drag coefficient by placing it in this range depending on how streamlined it looks compared to the average car.

Using the relation “power equals force times speed”, we can use the above formula to find out how much power P (in W) we need to provide to counter the air resistance at a certain speed:

P = F · v = 0.5 · c · D · A · v³

Of course we can also reverse this equation. Given that our car is able to provide a certain amount of power P, this is the maximum speed v we can achieve:

v = ( 2 · P / (c · D · A) )^(1/3)

From the formula we can see that the top speed grows with the third root of the car’s power, meaning that when we increase the power eightfold, the maximum speed doubles. So even a slight increase in top speed has to be bought with a significant increase in energy output.

Note that we have to input the power in the standard physical unit watt rather than the often-used unit horsepower. Luckily the conversion is very easy: just multiply horsepower by 746 to get watts.
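Putting it all together in code, here's a sketch of the estimate with the horsepower conversion built in (the printed values anticipate the first example below):

```python
def top_speed(power_hp, c, width, height, D=1.25):
    """Estimated top speed in m/s, assuming air resistance is the only
    retarding force: v = (2 * P / (c * D * A))^(1/3)."""
    P = power_hp * 746   # horsepower -> watt
    A = width * height   # projected area in m²
    return (2 * P / (c * D * A)) ** (1 / 3)

v = top_speed(power_hp=143, c=0.29, width=1.77, height=1.45)
print(v, v * 3.6)        # ≈ 61 m/s ≈ 220 km/h
```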

Let’s put the formula to the test.

—————————

I drive a ten year old Mercedes C180 Compressor. According to the Mercedes-Benz homepage, its drag coefficient is c = 0.29 and its power P = 143 HP ≈ 106,680 W. Its width and height are w = 1.77 m and h = 1.45 m respectively. What is the maximum possible speed?

First we need the projected area of the car:

A = 1.77 m · 1.45 m ≈ 2.57 m²

Now we can use the formula:

v = ( 2 · 106,680 / (0.29 · 1.25 · 2.57) )^(1/3)

v ≈ 61.2 m/s ≈ 220.3 km/h ≈ 136.6 mph

From my experience on the Autobahn, this seems to be very realistic. You can reach 200 km/h quite well, but the acceleration is already noticeably lower at this point.

If you ever get the chance to visit Germany, make sure to rent a ridiculously fast sports car (you can rent a Porsche 911 Carrera for as little as 200 $ per day) and find a nice section on the Autobahn with unlimited speed. But remember: unless you’re overtaking, always use the right lane. The left lanes are reserved for overtaking. Never overtake on the right side, nobody will expect you there. And make sure to check the rear-view mirror often. You might think you’re going fast, but there’s always someone going even faster. Let them pass. Last but not least, stay focused and keep your eyes on the road. Traffic jams can appear out of nowhere and you don’t want to end up in the back of a truck at these speeds.

—————————

The fastest production car at the present time is the Bugatti Veyron Super Sport. It has a drag coefficient of c = 0.35, width w = 2 m, height h = 1.19 m and power P = 1200 HP = 895,200 W. Let’s calculate its maximum possible speed:

v = ( 2 · 895,200 / (0.35 · 1.25 · 2 · 1.19) )^(1/3)

v ≈ 119.8 m/s ≈ 431.3 km/h ≈ 267.4 mph

Does this seem unreasonably high? It does. But the car has actually been recorded going 431 km/h, so we are right on target. If you’d like to purchase this car, make sure you have 4,000,000 $ in your bank account.

—————————

This was an excerpt from the ebook More Great Formulas Explained.

Check out my BEST OF for more interesting physics articles.

Sources:

http://ecomodder.com/wiki/index.php/Vehicle_Coefficient_of_Drag_List

http://www.mercedes-benz.de/content/germany/mpc/mpc_germany_website/de/home_mpc/passengercars/home/_used_cars/technical_data.0006.html

http://www.carfolio.com/specifications/models/car/?car=218999

Law Of The Lever – Explanation and Examples

Imagine a beam sitting on a fulcrum. We apply one force F'(1) = 20 N on the left side at a distance of r(1) = 0.1 m from the fulcrum and another force F'(2) = 5 N on the right side at a distance of r(2) = 0.2 m. In which direction, clockwise or anti-clockwise, will the beam move?

[Image: beam on a fulcrum with the two applied forces]

(Before reading on, please make sure that you understand the concept of torque)

To find that out we can take a look at the corresponding torques. The torque on the left side is:

T(1) = 0.1 m · 20 N = 2 Nm

For the right side we get:

T(2) = 0.2 m · 5 N = 1 Nm

So the rotational push caused by force 1 (left side) exceeds that of force 2 (right side). Hence, the beam will turn anti-clockwise. If we don’t want that to happen and instead want to achieve equilibrium, we need to increase force 2 to F'(2) = 10 N. In this case the torques would be equal and the opposite rotational pushes would cancel each other. So in general, this equation needs to be satisfied to achieve a state of equilibrium:

r(1) · F'(1) = r(2) · F'(2)

This is the law of the lever in its simplest form. Let’s see how and where we can apply it.

—————————

A great example for the usefulness of the law of the lever is provided by cranes. On one side, at a distance of r(1) = 30 m from the axis, the crane lifts objects. Since we don’t want it to fall over, we stabilize it using a 20,000 kg concrete block at a distance of r(2) = 2 m from the axis. What is the maximum mass we can lift with this crane?

First we need to compute the gravitational force of the concrete block.

F'(2) = 20,000 kg · 9.81 m/s² = 196,200 N

Now we can use the law of the lever to find out what maximum force we can apply on the opposite side:

r(1) · F'(1) = r(2) · F'(2)

30 m · F'(1) = 2 m · 196,200 N

30 m · F'(1) = 392,400 Nm

Divide by 30 m:

F'(1) = 13,080 N

As long as we don’t exceed this, the torque caused by the concrete block will exceed that of the lifted object and the crane will not fall over. The maximum mass we can lift is now easy to find. We use the formula for the gravitational force one more time:

13,080 N = m · 9.81 m/s²

Divide by 9.81:

m ≈ 1330 kg

To lift even heavier objects, we need to use either a heavier concrete block or put it at a larger distance from the axis.

—————————

The law of the lever shows why we can interpret a lever as a tool to amplify forces. Suppose you want to use a force of F'(1) = 100 N to lift a heavy object with the gravitational pull F'(2) = 2000 N. Not possible you say? With a lever you can do this by applying the smaller force at a larger distance to the axis and the larger force at a shorter distance.

Suppose the heavy object sits at a distance r(2) = 0.1 m to the axis. At what distance r(1) should we apply the 100 N to be able to lift it? We can use the law of the lever to find the minimum distance required.

r(1) · 100 N = 0.1 m · 2000 N

r(1) · 100 N = 200 Nm

r(1) = 2 m

So as long as we apply the force at a distance of over 2 m, we can lift the object. We effectively amplified the force by a factor of 20. Scientists believe that the principle of force amplification using levers was already used by the Egyptians to build the pyramids. Given a long enough lever, we could lift basically anything even with a moderate force.
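Both examples above boil down to the same balance equation, so one tiny helper covers them; a sketch:

```python
def balancing_force(r1, r2, F2):
    """Force needed at lever arm r1 to balance force F2 at arm r2,
    from r1 * F1 = r2 * F2."""
    return r2 * F2 / r1

print(balancing_force(r1=30, r2=2, F2=196_200))  # ≈ 13,080 N (crane example)
print(balancing_force(r1=2, r2=0.1, F2=2000))    # 100 N lifts the 2000 N load
```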

—————————

This was an excerpt from More Great Formulas Explained.

Check out my BEST OF for more interesting physics articles.

Estimating Temperature Using Cricket Chirps

I stumbled upon a truly great formula on the GLOBE scientists’ blog. It allows you to compute the ambient air temperature from the number of cricket chirps in a fixed time interval, and this with surprising accuracy. The idea is actually quite old: it dates back to 1898, when the physicist Dolbear first analyzed this relationship, and it has been revived from time to time ever since.

Here’s how it works: count the number of chirps N over 13 seconds. Add 40 to that and you get the outside temperature T in Fahrenheit.

T = N + 40

From the picture below you can see that the fit is really good. The error seems to be plus / minus 6 % at most in the range from 50 to 80 °F.

E-Book Market & Sales – Analysis Pool

On this page you can find a collection of all my statistical analysis and research regarding the Kindle ebook market and sales. I’ll keep the page updated.

How E-Book Sales Vary at the End / Beginning of a Month

The E-Book Market in Numbers

Computing and Tracking the Amazon Sales Rank

Typical Per-Page-Prices for E-Books

Quantitative Analysis of Top 60 Kindle Romance Novels

Mathematical Model For E-Book Sales

If you have any suggestions on what to analyze next, just let me know. Share if you like the information.

Mathematical Model For (E-) Book Sales

It seems to be a no-brainer that with more books on the market, an author will see higher revenues. I wanted to know more about how the sales rate varies with the number of books. So I did what I always do when faced with an economic problem: construct a mathematical model. Even though it took me several tries to find the right approach, I’m fairly confident that the following model is able to explain why revenues grow overproportionally with the number of books an author has published. I also stumbled across a way to correct the marketing R/C for number of books.

The basic quantities used are:

  • n = number of books
  • i = impressions per day
  • q = conversion probability (which is the probability that an impression results in a sale)
  • s = sales per buyer
  • r = daily sales rate

Obviously the basic relationship is:

r = i(n) * q(n) * s(n)

with the brackets indicating a dependence of the quantities on the number of books.

1) Let’s start with s(n) = sales per buyer. Suppose there’s a probability p that a buyer, who has purchased an author’s book, will go on to buy yet another book of said author. To visualize this, think of the books as some kind of mirrors: each ray (sale) will either go through the book (no further sales from this buyer) or be reflected on another book of the author. In the latter case, the process repeats. Using this “reflective model”, the number of sales per buyer is:

s(n) = 1 + p + p² + … + p^(n-1) = (1 – p^n) / (1 – p)

For example, if the probability of a reader buying another book from the same author is p = 15 % = 0.15 and the author has n = 3 books available, we get:

s(3) = (1 – 0.15³) / (1 – 0.15) ≈ 1.17 sales per buyer

So the number of sales per buyer increases with the number of books. However, it quickly reaches a limiting value. Letting n go to infinity results in:

s(∞) = 1 / (1 – p)

Hence, this effect is a source for overproportional growth only for the first few books. After that it turns into a constant factor.
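A quick sketch shows how fast s(n) approaches that limit:

```python
def sales_per_buyer(p, n):
    """s(n) = (1 - p**n) / (1 - p), expected sales per buyer with n books."""
    return (1 - p**n) / (1 - p)

for n in (1, 2, 3, 5, 10):
    print(n, round(sales_per_buyer(0.15, n), 4))
# 1.0, 1.15, 1.1725, ... approaching the limit 1 / (1 - 0.15) ≈ 1.1765
```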

2) Let’s turn to q(n) = conversion probability. Why should there be a dependence on the number of books at all for this quantity? Studies show that the probability of making a sale grows with the choice offered. That’s why ridiculously large malls work. When an author offers a large number of books, he is able to provide list impressions (featuring all of his or her books) in addition to the common single impressions (featuring only one book). With more choice, the conversion probability on list impressions will be higher than that on single impressions.

  • qs = single impression conversion probability
  • ps = percentage of impressions that are single impressions
  • ql = list impression conversion probability
  • pl = percentage of impressions that are list impressions

with ps + pl = 1. The overall conversion probability will be:

q(n) = qs(n) * ps(n) + ql(n) * pl(n)

With ql(n) and pl(n) obviously growing with the number of books and ps(n) decreasing accordingly, we get an increase in the overall conversion probability.

3) Finally let’s look at i(n) = impressions per day. Denoting by i(1), i(2), … the number of daily impressions drawn by book number 1, book number 2, …, the average number of impressions per day and book is:

ib = 1/n * ∑[k] i(k)

with ∑[k] meaning the sum over all k. The overall impressions per day are:

i(n) = ib(n) * n

Assuming all books generate the same number of daily impressions, this is a linear growth. However, there might be an overproportional factor at work here. As an author keeps publishing, his experience in writing, editing and marketing will grow. Especially for initially inexperienced authors the quality of the books and the marketing approach will improve with each book. Translated into numbers, this means that later books will generate more impressions per day:

i(k+1) > i(k)

which leads to an overproportional (instead of just linear) growth in overall impressions per day with the number of books. Note that more experience should also translate into a higher single impression conversion probability:

qs(n+1) > qs(n)

4) As a final treat, let’s look at how these effects impact the marketing R/C. The marketing R/C is the ratio of revenues that result from an ad divided by the costs of the ad:

R/C = Revenues / Costs

For an ad to be of worth to an author, this value should be greater than 1. Assume an ad generates a total of i(ad) single impressions. For one book we get the revenues:

R = i(ad) * qs(1)

If more than one book is available, this number changes to:

R = i(ad) * qs(n) * (1 – p^n) / (1 – p)

So if the R/C in the case of one book is (R/C)₁, the corrected R/C for a larger number of books is:

R/C = (R/C)₁ * qs(n) / qs(1) * (1 – p^n) / (1 – p)

In short: ads that aren’t profitable can become profitable as the author offers more books.

For more mathematical modeling check out: Mathematics of Blog Traffic: Model and Tips for High Traffic.

What is Torque? – A Short and Simple Explanation

Often when doing physics we simply say “a force is acting on a body” without specifying which point of the body it is acting on. This is basically point-mass physics. We ignore the fact that the object has a complex three-dimensional shape and assume it to be a single point having a certain mass. Sometimes this is sufficient, other times we need to go beyond that. And this is where the concept of torque comes in.

Let’s define what is meant by torque. Assume a force F (in N) is acting on a body at a distance r (in m) from the axis of rotation. This distance is called the lever arm. Take a look at the image below for an example of such a set up.

[Image: force applied to a wrench at an angle Φ to the lever arm]

(Taken from sdsu-physics.org)

Relevant for the rotation of the body is only the force component perpendicular to the lever arm, which we will denote by F’. If given the angle Φ between the force and the lever arm (as shown in the image), we can easily compute the relevant force component by:

F’ = F · sin(Φ)

For example, if the total force is F = 50 N and it acts at an angle of Φ = 45° to the lever arm, only the component F’ = 50 N · sin(45°) ≈ 35 N will work to rotate the body. So you can see that sometimes it makes sense to break a force down into its components. But this shouldn’t be cause for any worries, with the above formula it can be done quickly and painlessly.

With this out of the way, we can define what torque is in one simple sentence: Torque T (in Nm) is the product of the lever arm r and the force F’ acting perpendicular to it. In form of an equation the definition looks like this:

T = r · F’

We can interpret torque as a measure of rotational push. If there’s a force acting at a large distance from the axis of rotation, the rotational push will be strong. However, if one and the same force is acting very close to said axis, we will see hardly any rotation. So when it comes to rotation, force is just one part of the picture. We also need to take into consideration where the force is applied.

Let’s compute a few values before going to the extremely useful law of the lever.

—————————

We’ll have a look at the wrench from the image. Suppose the wrench is r = 0.2 m long. What’s the resulting torque when applying a force of F = 80 N at an angle of Φ = 70° relative to the lever arm?

To answer the question, we first need to find the component of the force perpendicular to the lever arm.

F’ = 80 N · sin(70°) ≈ 75.18 N

Now onto the torque:

T = 0.2 m · 75.18 N ≈ 15.04 Nm
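In code, the two steps collapse into one line; a minimal sketch:

```python
from math import sin, radians

def torque(r, F, phi_deg):
    """Torque T = r * F * sin(phi) in Nm, with the angle phi in degrees."""
    return r * F * sin(radians(phi_deg))

print(torque(0.2, 80, 70))  # ≈ 15.04 Nm, matching the hand calculation
print(torque(0.2, 80, 90))  # 16 Nm: the same force applied perpendicularly
```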

—————————

If this amount of torque is not sufficient to turn the nut, how could we increase that? Well, we could increase the force F and at the same time make sure that it is applied at a 90° angle to the wrench. Let’s assume that as a measure of last resort, you apply the force by standing on the wrench. Then the force perpendicular to the lever arm is just your gravitational pull:

F’ = F = m · g

Assuming a mass of m = 75 kg, we get:

F’ = 75 kg · 9.81 m/s² = 735.75 N

With this not very elegant, but certainly effective technique, we are able to increase the torque to:

T = 0.2 m · 735.75 N = 147.15 Nm

That should do the trick. If it doesn’t, there’s still one option left and that is using a longer wrench. With a longer wrench you can apply the force at a greater distance to the axis of rotation. And with r increased, the torque T is increased by the same factor.

—————————

This was an excerpt from my Kindle ebook More Great Formulas Explained.

Check out my BEST OF for more interesting physics articles.

Released Today: More Great Formulas Explained (Ebook for Kindle)

I’m happy to announce that today I’ve released the second volume of the series “Great Formulas Explained”. The aim of the series is to gently explain the greatest formulas the fields of physics, mathematics and economics have brought forth. It is suitable for high-school students, freshmen and anyone else with a keen interest in science. I had a lot of fun writing the series and edited both volumes thoroughly, including double-checking all sources and calculations.

Here are the contents of More Great Formulas Explained:

  • Part I: Physics

Law Of The Lever
Sliding and Overturning
Maximum Car Speed
Range Continued
Escape Velocity
Cooling and Wind-Chill
Adiabatic Processes
Draining a Tank
Open-Channel Flow
Wind-Driven Waves
Sailing
Heat Radiation
Main Sequence Stars
Electrical Resistance
Strings and Sound

  • Part II: Mathematics

Cylinders
Arbitrary Triangles
Summation
Standard Deviation and Error
Zipf Distribution

  • Part III: Appendix

Unit Conversion
Unit Prefixes
References
Copyright and Disclaimer
Request to the Reader

I will post excerpts in the days to come. If you are interested, click the cover to get to the Amazon product page. Since I’m enrolled in the KDP Select program, the book is exclusively available through Amazon for a constant price of $ 2.99; I will not be offering it through any other retailers in the near future.

Remember what Benjamin Franklin once said: “Knowledge pays the best interest”. An investment in education (be that time or money) can never be wrong. Knowledge is a powerful tool to make you free and independent. I hope I can contribute to bringing knowledge to people all over the world. In the spirit of this, I have permanently discounted this book, as well as volume I, in India.

Computing the Surface Area of a Person – Mosteller Formula

While doing research for my new book “More Great Formulas Explained”, I came across a neat formula that can be used to calculate the surface area of a person. It goes by the name Mosteller formula and requires two inputs: the mass m (in kg) and the height h (in cm). The surface area S (in m²) is proportional to the square root of m times h:

S = sqrt (m * h / 3600)

For example, a person with the mass m = 75 kg and height h = 175 cm can be expected to have the body surface area S ≈ 1.91 m². A note for American readers: you can use this table to easily convert the height in feet / inches to centimeters.

What’s the use of this? In my book I needed to know this quantity to compute heat loss. According to Newton’s law of cooling, the heat loss rate P (in Watt = Joules per second) is proportional to the surface area S and the temperature difference ΔT (in °C or K):

P = a * S * ΔT

with a being the so called heat transfer coefficient. For calm air it has the value a = 10 W/(m² * K). A person’s body temperature is around 37 °C. So the m = 75 kg and h = 175 cm person from above would lose this amount of heat every second at an air temperature of 20 °C:

P = 10 W/(m² * K) * 1.91 m² * 17 °C = 325 Watt

That is of course assuming the person is naked, clothing will reduce this value significantly. So the surface area formula indeed is useful.
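Both formulas combine naturally in code; a minimal sketch:

```python
from math import sqrt

def surface_area(mass_kg, height_cm):
    """Mosteller formula: body surface area in m²."""
    return sqrt(mass_kg * height_cm / 3600)

def heat_loss(mass_kg, height_cm, air_temp, body_temp=37, a=10):
    """Newton's law of cooling P = a * S * dT, with the heat transfer
    coefficient a = 10 W/(m² * K) for calm air."""
    return a * surface_area(mass_kg, height_cm) * (body_temp - air_temp)

print(surface_area(75, 175))   # ≈ 1.91 m²
print(heat_loss(75, 175, 20))  # ≈ 325 W
```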

Space Shuttle Launch and Sound Suppression

The Space Shuttle’s first flight (STS-1) in 1981 was considered a great success as almost all the technical and scientific goals were achieved. However, post flight analysis showed one potentially fatal problem: 16 heat shield tiles had been destroyed and another 148 damaged. How did that happen? The culprit was quickly determined to be sound. During launch the shuttle’s main engine and the SRBs (Solid Rocket Boosters) produce intense sound waves which cause strong vibrations. A sound suppression system was needed to protect the shuttle from acoustically induced damage such as cracks and mechanical fatigue. But how do you suppress the sound coming from a jet engine?

Let’s take a step back. What is the source of this sound? When the hot exhaust gas meets the ambient air, mixing occurs. This leads to the formation of a large number of eddies. The small-scale eddies close to the engine are responsible for high frequency noise, while the large-scale eddies that appear downstream cause intense low-frequency noise. Lighthill showed that the power P (in W) of the sound increases with the jet velocity v (in m/s) and the size s (in m) of the eddies:

P = K * D * c⁻⁵ * s² * v⁸

with K being a constant, D the exhaust gas density and c the speed of sound. Note the extremely strong dependence of acoustic power on jet velocity: if you double the velocity, the power increases by a factor of 256. Such a strong relationship is very unusual in physics. The dependence on eddy size is also significant, doubling the size leads to a quadrupling in power. The formula tells us what we must do to effectively suppress sound: reduce jet velocity and the size of the eddies. Water injection into the exhaust gas achieves both. The water droplets absorb kinetic energy from the gas molecules, thus slowing them down. At the same time, the water breaks down the eddies.

During the second Space Shuttle launch (STS-2) a water injection system was used to suppress potentially catastrophic acoustic vibrations. This proved to be successful, it reduced the sound level by 10 – 20 dB (depending on location), and accordingly was used during every launch since then. But large amounts of water are needed to accomplish this reduction. The tank at the launch pad holds about 300,000 gallons. The flow starts at T minus 6.6 seconds and lasts for about 20 seconds. The peak flow rate is roughly 15,000 gallons per second. That’s a lot of water!

The video below shows a test run of the sound suppression system:

Sources and further reading:

art09.pdf

http://www-pao.ksc.nasa.gov/nasafact/count4ssws.htm

CAE_XUYue_Investigation-of-Flow-Control-with-Fluidic-injection-for-Jet-Noise-Reduction.pdf

Home Experiment – Impact Speed and Sound Level

A while ago I got my hands on a sound level meter and pondered what to do with it. Sound level versus distance from source? Too boring, there’s already a formula for that (see here: Intensity: How Much Power Will Burst Your Eardrums?). What I noticed though is that I’ve never seen a formula relating impact height or speed to sound level, that seemed interesting. So I bought a small wooden sphere at a local store and dropped it from various heights, at each impact recording the maximum sound level. I dropped the sphere from 8 different heights and, to reduce the effect of random fluctuations, 20 times from each height. So in total I collected 160 data points. I’m not so sure if my neighbors were happy about that.

I calculated the impact speed v from the drop height h using the common v = sqrt (2 * g * h). As you might know, this formula neglects air resistance. However, I’m not concerned about that. The wooden sphere was small and massive and only dropped from heights below about 1 ft. The computed impact speed shouldn’t be off by more than a few percent.

Here’s the resulting plot of impact speed versus sound level (in decibels):

[Plot: impact speed versus maximum sound level in decibels]

The fit turned out to be fantastic and implies that if you increase the impact speed by a factor of five, the sound level doubles. What’s the point of this? I don’t know, but it’s a neat graph and that’s good enough for me.

Acceleration – A Short and Simple Explanation

The three basic quantities used in kinematics are distance, velocity and acceleration. Let’s first look at velocity before moving on to the main topic. The velocity is simply the rate of change in distance. If we cover the distance d in a time span t, then the average velocity during this interval is:

v = d / t

So if we drive d = 800 meters in t = 40 seconds, the average speed is v = 800 meters / 40 seconds = 20 m/s. No surprise here. Note that there are many different units commonly used for velocity: kilometers per hour, feet per second, miles per hour, etc … The SI unit is m/s, so unless otherwise stated, you have to input the velocity in m/s into a formula to get a correct result.

Acceleration is also defined as the rate of change, but this time with respect to velocity. If the velocity changes by the amount v in a time span t, the average acceleration is:

a = v / t

For example, my beloved Mercedes C-180 Compressor can go from 0 to 100 kilometers per hour (or 27.8 meters per second) in about 9 seconds. So the average acceleration during this time is:

a = 27.8 meters per second / 9 seconds = 3.1 m/s²

Is that a lot? Obviously we should know some reference values to be able to judge acceleration.

The one value you should know is: g = 9.81 m/s². This is the acceleration experienced in free fall. And you can take the word “experienced” literally because unlike velocity, we really do feel acceleration. Our inner ear system contains structures that enable us to perceive it. Often times acceleration is compared to this value because it provides a meaningful and easily relatable reference value.

So the acceleration in the Mercedes C-180 Compressor is not quite as thrilling as free fall, it only accelerates with about 3.1 / 9.81 = 0.32 g. How much higher can it go for production cars? Well, meet the Bugatti Veyron Super Sport. It goes from 0 to 100 kilometers per hour (or 27.8 meters per second) in 2.2 seconds. This translates into an acceleration of:

a = 27.8 meters per second / 2.2 seconds = 12.6 m/s²

This is more than the free fall acceleration! To be more specific, it’s 12.6 / 9.81 = 1.28 g. If you got $ 4,000,000 to spare, how about getting one of these? But even this is nothing compared to what astronauts have to endure during launch. Here you can see a typical acceleration profile of a Space Shuttle launch:

[Graph: acceleration profile of a Space Shuttle launch]

Right before the main engine shutoff the acceleration peaks at close to 30 m/s² or 3 g. That’s certainly not for everyone. How much can a person endure by the way? According to “Aerospace Medicine” accelerations of around 5 g and higher can result in death if sustained for more than a few seconds. Very short acceleration bursts can be survivable up to about 50 g, which is a value that can be reached and exceeded in a car crash.

One more thing to keep in mind about acceleration: it is always a result of a force. If a force F (measured in Newtons = N) acts on a body, it responds by accelerating. The stronger the force is, the higher the resulting acceleration. This is just Newton’s Second Law:

a = F / m

So a force of F = 210 N on a body of m = 70 kg leads to an acceleration of a = 210 N / 70 kg = 3 m/s². The same force however on a m = 140 kg mass only leads to the acceleration a = 210 N / 140 kg = 1.5 m/s². Hence, mass provides resistance to acceleration. You need more force to accelerate a massive body at the same rate as a light body.

For more interesting physics articles, check out my BEST OF.

How To Calculate the Elo-Rating (including Examples)

In sports, most notably in chess, baseball and basketball, the Elo-rating system is used to rank players. The rating is also helpful in deducing win probabilities (see my blog post Elo-Rating and Win Probability for more details on that). Suppose two players or teams with the current ratings r(1) and r(2) compete in a match. What will be their updated rating r'(1) and r'(2) after said match? Let’s do this step by step, first in general terms and then in a numerical example.

The first step is to compute the transformed rating for each player or team:

R(1) = 10^(r(1)/400)

R(2) = 10^(r(2)/400)

This is just to simplify the further computations. In the second step we calculate the expected score for each player:

E(1) = R(1) / (R(1) + R(2))

E(2) = R(2) / (R(1) + R(2))

Now we wait for the match to finish and set the actual score in the third step:

S(1) = 1 if player 1 wins / 0.5 if draw / 0 if player 2 wins

S(2) = 0 if player 1 wins / 0.5 if draw / 1 if player 2 wins

Now we can put it all together and in a fourth step find out the updated Elo-rating for each player:

r'(1) = r(1) + K * (S(1) – E(1))

r'(2) = r(2) + K * (S(2) – E(2))

What about the K that suddenly popped up? This is called the K-factor and is basically a measure of how strongly a match will impact the players’ ratings. If you set K too low, the ratings will hardly be impacted by the matches and very stable ratings (too stable) will occur. On the other hand, if you set it too high, the ratings will fluctuate wildly according to the current performance. Different organizations use different K-factors; there’s no universally accepted value. In chess the ICC uses a value of K = 32. Other approaches can be found here.
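The four steps translate directly into a short function; a sketch (the printed values anticipate the two examples below):

```python
def elo_update(r1, r2, score1, K=32):
    """One Elo update. score1 is 1 if player 1 wins, 0.5 for a draw,
    0 if player 2 wins. Returns the two updated ratings."""
    R1, R2 = 10 ** (r1 / 400), 10 ** (r2 / 400)  # transformed ratings
    E1 = R1 / (R1 + R2)                          # expected scores
    E2 = R2 / (R1 + R2)
    score2 = 1 - score1
    return round(r1 + K * (score1 - E1)), round(r2 + K * (score2 - E2))

print(elo_update(2400, 2000, 1))  # (2403, 1997): the favorite wins
print(elo_update(2400, 2000, 0))  # (2371, 2029): the upset
```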

—————————————–

Now let’s do an example. We’ll adopt the value K = 32. Two chess players rated r(1) = 2400 and r(2) = 2000 (so player 2 is the underdog) compete in a single match. What will be the resulting rating if player 1 wins as expected? Let’s see. Here are the transformed ratings:

R(1) = 10^(2400/400) = 1,000,000

R(2) = 10^(2000/400) = 100,000

Onto the expected score for each player:

E(1) = 1,000,000 / (1,000,000 + 100,000) ≈ 0.91

E(2) = 100,000 / (1,000,000 + 100,000) ≈ 0.09

This is the actual score if player 1 wins:

S(1) = 1

S(2) = 0

Now we find out the updated Elo-rating:

r'(1) = 2400 + 32 * (1 – 0.91) = 2403

r'(2) = 2000 + 32 * (0 – 0.09) = 1997

Wow, that’s boring, the rating hardly changed. But this makes sense. By player 1 winning, both players performed according to their ratings. So no need for any significant changes.

—————————————–

What if player 2 won instead? Well, we don’t need to recalculate the transformed ratings and expected scores, these remain the same. However, this is now the actual score for the match:

S(1) = 0

S(2) = 1

Now onto the updated Elo-rating:

r'(1) = 2400 + 32 * (0 – 0.91) = 2371

r'(2) = 2000 + 32 * (1 – 0.09) = 2029

This time the rating changed much more strongly.

—————————————–

Mathematics of Blog Traffic: Model and Tips for High Traffic

Over the last few days I finally did what I long had planned and worked out a mathematical model for blog traffic. Here are the results. First we’ll take a look at the most general form and then use it to derive a practical, easily applicable formula.

We need some quantities as inputs. The time (in days), starting from the first blog entry, is denoted by t. We number the blog posts with the variable k. So k = 1 refers to the first post published, k = 2 to the second, etc … We’ll refer to the day on which entry k is published by t(k).

The initial number of visits entry k draws from the feed is symbolized by i(k), the average number of views per day entry k draws from search engines by s(k). Assuming that the number of feed views declines exponentially for each article with a factor b (my observations put the value for this at around 0.4 – 0.6), this is the number of views V the blog receives on day t:

V(t) = Σ[k] ( s(k) + i(k) · b^(t – t(k)) )

Σ[k] means that we sum over all k. This is the most general form. For it to be of any practical use, we need to make simplifying assumptions. We assume that the entries are published at a constant frequency f (entries per day) and that each article has the same popularity, that is:

i(k) = i = const.
s(k) = s = const.

After a long calculation you can arrive at this formula. It provides the expected number of daily views given that the above assumptions hold true and that the blog consists of n entries in total:

V = s · n + i / ( 1 – b^(1/f) )

Note that according to this formula, blog traffic increases linearly with the number of entries published. Let’s apply the formula. Assume we publish articles at a frequency f = 1 per day and they draw i = 5 views on the first day from the feed and s = 0.1 views per day from search engines. With b = 0.5, this leads to:

V = 0.1 · n + 10

So once we gathered n = 20 entries with this setup, we can expect V = 12 views per day, at n = 40 entries this grows to V = 14 views per day, etc … The theoretical growth of this blog with number of entries is shown below:

[Graph: daily views versus number of entries]

How does the frequency at which entries are being published affect the number of views? You can see this dependency in the graph below (I set n = 40):

[Graph: daily views versus publishing frequency, for n = 40]

The formula is very clear about what to do for higher traffic: get more attention in the feed (good titles, good tagging and a large number of followers all lead to high i and possibly reduced b), optimize the entries for search engines (high s), publish at high frequency (obviously high f) and do this for a long time (high n).

We’ll draw two more conclusions. As you can see the formula neatly separates the search engine traffic (left term) and feed traffic (right term). And while the feed traffic reaches a constant level after a while of constant publishing, it is the search engine traffic that keeps on growing. At a critical number of entries N, the search engine traffic will overtake the feed traffic:

N = i / ( s · ( 1 – b^(1/f) ) )

In the above blog setup, this happens at N = 100 entries. At this point both the search engines as well as the feed will provide 10 views per day.
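Here are the practical formula and the crossover point in code; a sketch using the example values from this post:

```python
def daily_views(n, i=5, s=0.1, b=0.5, f=1):
    """Expected daily views after n entries: V = s*n + i / (1 - b**(1/f))."""
    return s * n + i / (1 - b ** (1 / f))

def crossover_entries(i=5, s=0.1, b=0.5, f=1):
    """Number of entries N at which search traffic overtakes feed traffic."""
    return i / (s * (1 - b ** (1 / f)))

print(daily_views(20))      # 12 views per day
print(daily_views(40))      # 14 views per day
print(crossover_entries())  # 100 entries
```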

Here’s one more conclusion: the daily increase in the average number of views is just the product of the daily search engine views per entry s and the publishing frequency f:

ΔV / Δt = s · f

Thus, our example blog will experience an increase of 0.1 · 1 = 0.1 views per day or 1 additional view per 10 days. If we publish entries at twice the frequency, the blog would grow by 0.1 · 2 = 0.2 views per day or 1 additional view every 5 days.