Math

All about the Gravitational Force (For Beginners)

(This is an excerpt from The Book of Forces)

All objects exert a gravitational pull on all other objects. The Earth pulls you towards its center and you pull the Earth towards your center. Your car pulls you towards its center and you pull your car towards your center (of course in this case the forces involved are much smaller, but they are there). It is this force that invisibly tethers the Moon to Earth, the Earth to the Sun, the Sun to the Milky Way Galaxy and the Milky Way Galaxy to its local galaxy cluster.

Experiments have shown that the magnitude of the gravitational attraction between two bodies depends on their masses. If you double the mass of one of the bodies, the gravitational force doubles as well. The force also depends on the distance between the bodies. More distance means less gravitational pull. To be specific, the gravitational force obeys an inverse-square law. If you double the distance, the pull reduces to 1/2² = 1/4 of its original value. If you triple the distance, it goes down to 1/3² = 1/9 of its original value. And so on. These dependencies can be summarized in this neat formula:

F = G·m·M / r²

With F being the gravitational force in newtons, m and M the masses of the two bodies in kilograms, r the center-to-center distance between the bodies in meters and G = 6.67·10^(-11) N m² kg^(-2) the (somewhat cumbersome) gravitational constant. With this great formula, which was first derived at the end of the seventeenth century and sparked an ugly plagiarism dispute between Newton and Hooke, you can calculate the gravitational pull between two objects in any situation.


(Gravitational attraction between two spherical masses)

If you have trouble applying the formula on your own or just want to play around with it a bit, check out the free web applet Newton’s Law of Gravity Calculator that can be found on the website of the UNL astronomy education group. It allows you to set the required inputs (the masses and the center-to-center distance) using sliders that are marked with special values such as Earth’s mass or the Earth–Moon distance, and calculates the gravitational force for you.

————————————-

Example 3:

Calculate the gravitational force a person of mass m = 72 kg experiences at the surface of Earth. The mass of Earth is M = 5.97·10^24 kg (the sign ^ stands for “to the power”) and the distance from the center to the surface r = 6,370,000 m. Using this, show that the acceleration the person experiences in free fall is roughly 10 m/s².

Solution:

To arrive at the answer, we simply insert all the given inputs into the formula for calculating gravitational force.

F = G·m·M / r²
F = 6.67·10^(-11)·72·5.97·10^24 / 6,370,000² N ≈ 707 N

So the magnitude of the gravitational force experienced by the m = 72 kg person is 707 N. In free fall, he or she is driven by this net force (assuming that we can neglect air resistance). Using Newton’s second law we get the following value for the free fall acceleration:

F = m·a
707 N = 72 kg · a

Divide both sides by 72 kg:

a = 707 / 72 m/s² ≈ 9.82 m/s²

This is roughly (and more precise than) the 10 m/s² we’ve been using in the introduction. Except for the very small and very large numbers involved, calculating gravitational pull is actually quite straightforward.

As mentioned before, gravitation is not a one-way street. As the Earth pulls on the person, the person pulls on the Earth with the same force (707 N). However, Earth’s mass is considerably larger and hence the acceleration it experiences much smaller. Using Newton’s second law again and the value M = 5.97·10^24 kg for the mass of Earth we get:

F = m·a
707 N = 5.97·10^24 kg · a

Divide both sides by 5.97·10^24 kg:

a = 707 / (5.97·10^24) m/s² ≈ 1.18·10^(-22) m/s²

So indeed the acceleration the Earth experiences as a result of the gravitational attraction to the person is tiny.
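The whole example can be reproduced in a few lines of Python; here is a quick sketch using the numbers above (the variable names are mine):

```python
G = 6.67e-11   # gravitational constant in N m^2 kg^-2
m = 72         # mass of the person in kg
M = 5.97e24    # mass of Earth in kg
r = 6_370_000  # center-to-surface distance of Earth in m

F = G * m * M / r**2   # gravitational force in N
print(round(F))        # roughly 707
print(F / m)           # person's free-fall acceleration, roughly 9.8 m/s^2
print(F / M)           # Earth's acceleration, roughly 1.2e-22 m/s^2
```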

————————————-

Example 4:

By how much does the gravitational pull change when the person of mass m = 72 kg is in a plane (altitude 10 km = 10,000 m) instead of at the surface of Earth? For the mass and radius of Earth, use the values from the previous example.

Solution:

In this case the center-to-center distance r between the bodies is a bit larger. To be specific, it is the sum of the radius of Earth 6,370,000 m and the height above the surface 10,000 m:

r = 6,370,000 m + 10,000 m = 6,380,000 m

Again we insert everything:

F = G·m·M / r²
F = 6.67·10^(-11)·72·5.97·10^24 / 6,380,000² N ≈ 705 N

So the gravitational force does not change by much (only by about 0.3 %) when in a plane. An altitude of 10 km is not much by gravity’s standards; the height above the surface needs to be much larger for a noticeable difference to occur.

————————————-

With the gravitational law we can easily show that the gravitational acceleration experienced by an object in free fall does not depend on its mass. All objects are subject to the same 10 m/s² acceleration near the surface of Earth. Suppose we denote the mass of an object by m and the mass of Earth by M. The center-to-center distance between the two is r, the radius of Earth. We can then insert all these values into our formula to find the value of the gravitational force:

F = G·m·M / r²

Once calculated, we can turn to Newton’s second law to find the acceleration a the object experiences in free fall. Using F = m·a and dividing both sides by m we find that:

a = F / m = G·M / r²

So the gravitational acceleration indeed depends only on the mass and radius of Earth, but not the object’s mass. In free fall, a feather is subject to the same 10 m/s² acceleration as a stone. But wait, doesn’t that contradict our experience? Doesn’t a stone fall much faster than a feather? It sure does, but this is only due to the presence of air resistance. Initially, both are accelerated at the same rate. But while the stone hardly feels the effects of air resistance, the feather is almost immediately slowed down by the collisions with air molecules. If you dropped both in a vacuum tube, where no air resistance can build up, the stone and the feather would reach the ground at the same time! Check out an online video that shows this interesting vacuum tube experiment; it is quite enlightening to see a feather literally drop like a stone.


(All bodies are subject to the same gravitational acceleration)

Since all objects experience the same acceleration near the surface of Earth and since this is where the everyday action takes place, it pays to have a simplified equation at hand for this special case. Denoting the gravitational acceleration by g (with g ≈ 10 m/s²) as is commonly done, we can calculate the gravitational force, also called weight, an object of mass m is subject to at the surface of Earth by:

F = m·g

So it’s as simple as multiplying the mass by ten. Depending on the application, you can also use the more accurate factor g ≈ 9.82 m/s² (which I will not do in this book). Up to now we’ve only been dealing with gravitation near the surface of Earth, but of course the formula allows us to compute the gravitational force and acceleration near any other celestial body. I will spare you the trouble of looking up the relevant data and do the tedious calculations for you. In the table below you can see what gravitational force and acceleration a person of mass m = 72 kg would experience at the surface of various celestial objects. The acceleration is listed in g’s, with 1 g being equal to the free-fall acceleration experienced near the surface of Earth.

(Table: gravitational force and acceleration at the surface of various celestial objects)

So while jumping on the Moon would feel like slow motion (the free-fall acceleration experienced is comparable to what you feel when stepping on the gas pedal in a common car), you could hardly stand upright on Jupiter as your muscles would have to support more than twice your weight. Imagine that! On the Sun it would be even worse. Assuming you find a way not to get instantly killed by the hellish thermonuclear inferno, the enormous gravitational force would feel like having a car on top of you. And unlike temperature or pressure, shielding yourself against gravity is not possible.

What about the final entry? What is a neutron star and why does it have such a mind-blowing gravitational pull? A neutron star is the remnant of a massive star that has burned its fuel and exploded in a supernova, no doubt the most spectacular light-show in the universe. Such remnants are extremely dense – the mass of several suns compressed into an almost perfect sphere of just 20 km radius. With the mass being so large and the distance from the surface to the center so small, the gravitational force on the surface is gigantic and not survivable under any circumstances.

If you approached a neutron star, the gravitational pull would actually kill you long before reaching the surface in a process called spaghettification. This unusual term, made popular by the brilliant physicist Stephen Hawking, refers to the fact that in intense gravitational fields objects are vertically stretched and horizontally compressed. The explanation is rather straightforward: since the strength of the gravitational force depends on the distance to the source of said force, one side of the approaching object, the side closer to the source, will experience a stronger pull than the opposite side. This leads to a net force stretching the object. If the gravitational force is large enough, this would make any object look like thin spaghetti. For humans spaghettification would be lethal as the stretching would cause the body to break apart at the weakest spot (which presumably is just above the hips). So my pro-tip is to keep a polite distance from neutron stars.

The Best Math-Related Memes

i-8-sum-pi_c_1488893

(Explanation to the image above: the square root of minus one is the imaginary unit, usually abbreviated by i, 2 to the power of 3 is 8, the third symbol is the summation sign, the last one of course is pi … so it reads: “i 8 sum pi” = “I ate some pie”)


New Release for Kindle: Math Shorts – Integrals

Yesterday I released the second part of my “Math Shorts” series. This time it’s all about integrals. Integrals are among the most useful and fascinating mathematical concepts ever conceived. The ebook is a practical introduction for all those who don’t want to miss out. In it you’ll find down-to-earth explanations, detailed examples and interesting applications. Check out the sample (see link to product page) for a taste of the action.

Important note: to enjoy the book, you need solid prior knowledge in algebra and calculus. This means in particular being able to solve all kinds of equations, finding and interpreting derivatives as well as understanding the notation associated with these topics.


Here’s the TOC:

Section 1: The Big Picture
-Anti-Derivatives
-Integrals
-Applications

Section 2: Basic Anti-Derivatives and Integrals
-Power Functions
-Sums of Functions
-Examples of Definite Integrals
-Exponential Functions
-Trigonometric Functions
-Putting it all Together

Section 3: Applications
-Area – Basics
-Area – Numerical Example
-Area – Parabolic Gate
-Area – To Infinity and Beyond
-Volume – Basics
-Volume – Numerical Example
-Volume – Gabriel’s Horn
-Average Value
-Kinematics

Section 4: Advanced Integration Techniques
-Substitution – Basics
-Substitution – Indefinite Integrals
-Substitution – Definite Integrals
-Integration by Parts – Basics
-Integration by Parts – Indefinite Integrals
-Integration by Parts – Definite Integrals

Section 5: Appendix
-Formulas To Know By Heart
-Greek Alphabet
-Copyright and Disclaimer
-Request to the Reader

Enjoy!

Mathematics of Banner Ads: Visitor Loyalty and CTR

First of all: why should a website’s visitor loyalty have any effect at all on the CTR we can expect to achieve with a banner ad? What does the one have to do with the other? To understand the connection, let’s take a look at an overly simplistic example. Suppose we place a banner ad on a website and get in total 3 impressions (granted, not a realistic number, but I’m only trying to make a point here). From previous campaigns we know that a visitor clicks on our ad with a probability of 0.1 = 10 % (which is also quite unrealistic).

The expected number of clicks from these 3 impressions is …

… 0.1 + 0.1 + 0.1 = 0.3 when all impressions come from different visitors.

… 1 – 0.9^3 ≈ 0.27 when all impressions come from only one visitor.

(the symbol ^ stands for “to the power of”)

This demonstrates that we can expect more clicks if the website’s visitor loyalty is low, which might seem counter-intuitive at first. But the great thing about mathematics is that it cuts through bullshit better than the sharpest knife ever could. Math doesn’t lie. Let’s develop a model to show that a higher visitor loyalty translates into a lower CTR.

Suppose we got a number of I impressions on the banner ad in total. We’ll denote the fraction of these impressions that came from visitors who contributed …

… only one impression by f(1)
… two impressions by f(2)
… three impressions by f(3)

And so on. Note that this distribution f(n) must satisfy the condition ∑[n] f(n) = 1 for it all to check out. The symbol ∑[n] stands for the sum over all n.

We’ll assume that the probability of a visitor clicking on the ad is q. The probability that this visitor clicks on the ad at least once during n visits is just: p(n) = 1 – (1 – q)^n (to understand why, you have to know about the multiplication rule of statistics – if you’re not familiar with it, my ebook “Statistical Snacks” is a good place to start).

Let’s count the expected number of clicks for the I impressions. Visitors …

… contributing only one impression give rise to c(1) = p(1) + p(1) + … [f(1)·I addends in total] = p(1)·f(1)·I clicks

… contributing two impressions give rise to c(2) = p(2) + p(2) + … [f(2)·I/2 addends in total] = p(2)·f(2)·I/2 clicks

… contributing three impressions give rise to c(3) = p(3) + p(3) + … [f(3)·I/3 addends in total] = p(3)·f(3)·I/3 clicks

And so on. So the total number of clicks we can expect is: c = I·∑[n] p(n)·f(n)/n. Since the CTR is just clicks divided by impressions, we finally get this beautiful formula:

CTR = ∑[n] p(n)·f(n)/n

The expression p(n)/n decreases as n increases. So a higher visitor loyalty (which mathematically means that f(n) has a relatively high value for n greater than one) translates into a lower CTR. One final conclusion: the formula can also tell us a bit about how the CTR develops during a campaign. If a website has no loyal visitors, the CTR will remain at a constant level, while for websites with a lot of loyal visitors, the CTR will decrease over time.
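To see the formula in action, here is a small Python sketch (the function and variable names are my own) comparing the two extreme cases from the introduction: every impression from a distinct visitor versus every visitor contributing three impressions:

```python
def ctr(q, f):
    """CTR = sum over n of p(n) * f(n) / n, with p(n) = 1 - (1 - q)**n.
    f maps n to the share of impressions from visitors with n visits."""
    return sum((1 - (1 - q) ** n) * share / n for n, share in f.items())

q = 0.1                  # click probability per visit
low_loyalty = {1: 1.0}   # every impression comes from a distinct visitor
high_loyalty = {3: 1.0}  # every visitor contributes three impressions

print(round(ctr(q, low_loyalty), 3))   # 0.1
print(round(ctr(q, high_loyalty), 3))  # 0.09
```

The loyal-visitor site ends up with the lower CTR, just as the model predicts.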

Simpson’s Paradox And Gender Discrimination

One sunny day we arrive at work in the university administration to find a lot of aggressive emails in our inbox. Just the day before, a news story about gender discrimination in academia was published in a popular local newspaper which included data from our university. The emails are a result of that. Female readers are outraged that men were accepted at the university at a higher rate, while male readers are angry that women were favored in each course the university offers. Somewhat puzzled, we take a look at the data to see what’s going on and who’s wrong.

The university only offers two courses: physics and sociology. In total, 1000 men and 1000 women applied. Here’s the breakdown:

Physics:

800 men applied ‒ 480 accepted (60 %)
100 women applied ‒ 80 accepted (80 %)

Sociology:

200 men applied ‒ 40 accepted (20 %)
900 women applied ‒ 360 accepted (40 %)

Seems like the male readers are right. In each course women were favored. But why the outrage by female readers? Maybe they focused more on the following piece of data. Let’s count how many men and women were accepted overall.

Overall:

1000 men applied ‒ 520 accepted (52 %)
1000 women applied ‒ 440 accepted (44 %)

Wait, what? How did that happen? Suddenly the situation seems reversed. What looked like a clear case of discrimination of male students turned into a case of discrimination of female students by simple addition. How can that be explained?

The paradoxical situation is caused by the different capacities of the two departments as well as the students’ overall preferences. While the physics department, the top choice of male students, could accept 560 students, the smaller sociology department, the top choice of female students, could only take on 400 students. So a higher overall acceptance rate for male students is to be expected even if women are slightly favored in each course.

While this might seem to you like an overly artificial example to demonstrate an obscure statistical phenomenon, I’m sure the University of California (Berkeley) would beg to differ. It was sued in 1973 for bias against women on the basis of these admission rates:

8442 men applied ‒ 3715 accepted (44 %)
4321 women applied ‒ 1512 accepted (35 %)

A further analysis of the data however showed that women were favored in almost all departments ‒ Simpson’s paradox at work. The paradox also appeared (and keeps on appearing) in clinical trials. A certain treatment might be favored in individual groups, but still prove to be inferior in the aggregate data.
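The flip is easy to verify numerically; here is a short Python check using the fictional university's numbers (names are mine):

```python
def rate(accepted, applied):
    """Acceptance rate as a decimal fraction."""
    return accepted / applied

# Per-course rates: women are favored in both courses
assert rate(80, 100) > rate(480, 800)    # physics: 80 % vs 60 %
assert rate(360, 900) > rate(40, 200)    # sociology: 40 % vs 20 %

# Aggregated rates: the comparison reverses
men_overall = rate(480 + 40, 800 + 200)
women_overall = rate(80 + 360, 100 + 900)
assert men_overall > women_overall
print(men_overall, women_overall)  # 0.52 0.44
```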

Recurrence Relations – A Simple Explanation And Much More

Recurrence relations are a powerful tool for mathematical modeling and numerically solving differential equations (no matter how complicated). And as luck would have it, they are relatively easy to understand and apply. So let’s dive right into it using a purely mathematical example (for clarity) before looking at a real-world application.

This equation is a typical example of a recurrence relation:

x(t+1) = 5 * x(t) + 2 * x(t-1)

At the heart of the equation is a certain quantity x. It appears three times: x(t+1) stands for the value of this quantity at a time t+1 (next month), x(t) for the value at time t (current month) and x(t-1) the value at time t-1 (previous month). So what the relation allows us to do is to determine the value of said quantity for the next month, given that we know it for the current and previous month. Of course the choice of time span here is just arbitrary, it might as well be a decade or nanosecond. What’s important is that we can use the last two values in the sequence to determine the next value.

Suppose we start with x(0) = 0 and x(1) = 1. With the recurrence relation we can continue the sequence step by step:

x(2) = 5 * x(1) + 2 * x(0) = 5 * 1 + 2 * 0 = 5

x(3) = 5 * x(2) + 2 * x(1) = 5 * 5 + 2 * 1 = 27

x(4) = 5 * x(3) + 2 * x(2) = 5 * 27 + 2 * 5 = 145

And so on. Once we’re given the “seed”, determining the sequence is not that hard. It’s just a matter of plugging in the last two data points and doing the calculation. The downside to defining a sequence recursively is that if you want to know x(500), you have to go through hundreds of steps to get there. Luckily, this is not a problem for computers.
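Indeed, a computer does this mechanical plugging-in effortlessly. A minimal Python sketch (the function name is mine) for the example relation:

```python
def continue_sequence(x0, x1, steps):
    """Iterate x(t+1) = 5*x(t) + 2*x(t-1) from the two seed values."""
    seq = [x0, x1]
    for _ in range(steps):
        seq.append(5 * seq[-1] + 2 * seq[-2])
    return seq

print(continue_sequence(0, 1, 3))  # [0, 1, 5, 27, 145]
```

Asking for x(500) is now just `continue_sequence(0, 1, 499)[-1]`.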

In the most general terms, a recurrence relation relates the value of quantity x at a time t + 1 to the values of this quantity x at earlier times. The time itself could also appear as a factor. So this here would also be a legitimate recurrence relation:

x(t+1) = 5 * t * x(t) – 2 * x(t-10)

Here we calculate the value of x at time t+1 (next month) by its value at a time t (current month) and t – 10 (ten months ago). Note that in this case you need eleven seed values to be able to continue the sequence. If we are only given x(0) = 0 and x(10) = 50, we can do the next step:

x(11) = 5 * 10 * x(10) – 2 * x(0) = 5 * 10 * 50 – 2 * 0 = 2500

But we run into problems after that:

x(12) = 5 * 11 * x(11) – 2 * x(1) = 5 * 11 * 2500 – 2 * x(1) = ?

We already calculated x(11), but there’s nothing we can do to deduce x(1).

Now let’s look at one interesting application of such recurrence relations, modeling the growth of animal populations. We’ll start with a simple model that relates the number of animals x in the next month t+1 to the number of animals x in the current month t as such:

x(t+1) = x(t) + f * x(t)

The factor f is a constant that determines the rate of growth (to be more specific: its value is the decimal percentage change from one month to the next). So if our population grows by 25 % each month, we get:

x(t+1) = x(t) + 0.25 * x(t)

If we start with x(0) = 100 rabbits at month t = 0 we get:

x(1) = x(0) + 0.25 * x(0) = 100 + 0.25 * 100 = 125 rabbits

x(2) = x(1) + 0.25 * x(1) = 125 + 0.25 * 125 = 156 rabbits

x(3) = x(2) + 0.25 * x(2) = 156 + 0.25 * 156 = 195 rabbits

x(4) = x(3) + 0.25 * x(3) = 195 + 0.25 * 195 = 244 rabbits

x(5) = x(4) + 0.25 * x(4) = 244 + 0.25 * 244 = 305 rabbits

And so on. Maybe you already see the main problem with this exponential model: it just keeps on growing. This is fine as long as the population is small and the environment rich in resources, but every environment has its limits. Let’s fix this problem by including an additional term in the recurrence relation that will lead to this behavior:

– Exponential growth as long as the population is small compared to the capacity
– Slowing growth near the capacity
– No growth at capacity
– Population decline when over the capacity

How can we translate this into mathematics? It takes a lot of practice to be able to tweak a recurrence relation to get the behavior you want. You just learned your first chord and I’m asking you to play Mozart, that’s not fair. But take a look at this bad boy:

x(t+1) = x(t) + a * x(t) * (1 – x(t) / C)

This is called the logistic model and the constant C represents said capacity. If x is much smaller than the capacity C, the ratio x / C will be close to zero and we are left with exponential growth:

x(t+1) ≈ x(t) + a * x(t) * (1 – 0)

x(t+1) ≈ x(t) + a * x(t)

So this admittedly complicated-looking recurrence relation fulfils our first demand: exponential growth for small populations. What happens if the population x reaches the capacity C? Then all growth should stop. Let’s see if this is the case. With x = C, the ratio x / C is obviously equal to one, and in this case we get:

x(t+1) = x(t) + a * x(t) * (1 – 1)

x(t+1) = x(t)

The number of animals remains constant, just as we wanted. Last but not least, what happens if (for some reason) the population gets past the capacity, meaning that x is greater than C? In this case the ratio x / C is greater than one (let’s just say x / C = 1.2 for the sake of argument):

x(t+1) = x(t) + a * x(t) * (1 – 1.2)

x(t+1) = x(t) + a * x(t) * (- 0.2)

The second term is now negative and thus x(t+1) will be smaller than x(t), a decline back to capacity. What an enormous amount of beautiful behavior in such a compact line of mathematics! This is where the power of recurrence relations comes to light. Anyways, let’s go back to our rabbit population. We’ll let them grow by 25 % (a = 0.25), but this time on an island that can only sustain 300 rabbits at most (C = 300). Thus the model looks like this:

x(t+1) = x(t) + 0.25 * x(t) * (1 – x(t) / 300)

If we start with x(0) = 100 rabbits at month t = 0 we get:

x(1) = 100 + 0.25 * 100 * (1 – 100 / 300) = 117 rabbits

x(2) = 117 + 0.25 * 117 * (1 – 117 / 300) = 135 rabbits

x(3) = 135 + 0.25 * 135 * (1 – 135 / 300) = 153 rabbits

x(4) = 153 + 0.25 * 153 * (1 – 153 / 300) = 172 rabbits

x(5) = 172 + 0.25 * 172 * (1 – 172 / 300) = 190 rabbits

Note that now the growth is almost linear rather than exponential and will slow down further the closer we get to the capacity (continue the sequence if you like, it will gently approach 300, but never go past it).
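Here is a sketch of the logistic model in Python (the names are mine); the printed values can differ from the ones above by the odd rabbit, because the text rounds to whole rabbits at every step while the code keeps the exact values:

```python
def logistic_step(x, a=0.25, C=300):
    """One month of logistic growth with growth rate a and capacity C."""
    return x + a * x * (1 - x / C)

x = 100.0
for month in range(1, 6):
    x = logistic_step(x)
    print(month, round(x), "rabbits")
```

Run the loop for a few hundred months and the population creeps up to 300 without ever overshooting it.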

We can even go further and include random events in a recurrence relation. Let’s stick to the rabbits and their logistic growth and say that there’s a p = 5 % chance that in a certain month a flood occurs. If this happens, the population will halve. If no flood occurs, it will grow logistically as usual. This is what our new model looks like in mathematical terms:

x(t+1) = x(t) + 0.25 * x(t) * (1 – x(t) / 300)    if no flood occurs

x(t+1) = 0.5 * x(t)    if a flood occurs

To determine if there’s a flood, we let a random number generator spit out a number between 1 and 100 at each step. If it displays the number 5 or smaller, we use the “flood” equation (in accordance with the 5 % chance for a flood). Again we turn to our initial population of 100 rabbits with the growth rate and capacity unchanged:

x(1) = 100 + 0.25 * 100 * (1 – 100 / 300) = 117 rabbits

x(2) = 117 + 0.25 * 117 * (1 – 117 / 300) = 135 rabbits

x(3) = 135 + 0.25 * 135 * (1 – 135 / 300) = 153 rabbits

x(4) = 0.5 * 153 = 77 rabbits

x(5) = 77 + 0.25 * 77 * (1 – 77 / 300) = 91 rabbits

As you can see, in this run the random number generator gave a number 5 or smaller during the fourth step. Accordingly, the number of rabbits halved. You can do a lot of shenanigans (and some useful stuff as well) with recurrence relations and random numbers, the sky’s the limit. I hope this quick overview was helpful.
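A sketch of the flood model in Python (the function name and the seed are mine; your run will differ depending on the random numbers drawn):

```python
import random

def flood_step(x, a=0.25, C=300, p_flood=0.05):
    """One month: with probability p_flood a flood halves the population,
    otherwise it grows logistically as usual."""
    if random.random() < p_flood:
        return 0.5 * x
    return x + a * x * (1 - x / C)

random.seed(4)  # fix the random numbers so the run is reproducible
x = 100.0
for month in range(1, 6):
    x = flood_step(x)
    print(month, round(x), "rabbits")
```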

A note for the advanced: here’s how you turn a differential equation into a recurrence relation. Let’s take this differential equation:

dx/dt = a * x * exp(- b*x)

First multiply by dt:

dx = a * x * exp(- b * x) * dt

We set dx (the change in x) equal to x(t+h) – x(t) and dt (change in time) equal to a small constant h. Of course for x we now use x(t):

x(t+h) – x(t) = a * x(t) * exp(- b * x(t)) * h

Solve for x(t+h):

x(t+h) = x(t) + a * x(t) * exp(- b * x(t)) * h

And done! The smaller your h, the more accurate your numerical results. How low you can go depends on your computer’s computing power.
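As a sketch, here is the recipe in Python for the differential equation above (the names and sample parameter values are mine); halving h and getting nearly the same answer is a quick sanity check of the accuracy:

```python
import math

def euler_solve(x0, a, b, h, t_end):
    """Step the recurrence x(t+h) = x(t) + a*x(t)*exp(-b*x(t))*h
    from x(0) = x0 up to time t_end."""
    x = x0
    for _ in range(round(t_end / h)):
        x += a * x * math.exp(-b * x) * h
    return x

print(euler_solve(1.0, a=1.0, b=0.5, h=0.01, t_end=2.0))
print(euler_solve(1.0, a=1.0, b=0.5, h=0.005, t_end=2.0))
```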

How To Calculate Maximum Car Speed + Examples (Mercedes C-180, Bugatti Veyron)

How do you determine the maximum possible speed your car can go? Well, one rather straightforward option is to just get into your car, go on the Autobahn and push down the pedal until the needle stops moving. The problem with this option is that there’s not always an Autobahn nearby. So we need to find another way.

Luckily, physics can help us out here. You probably know that whenever a body is moving at constant speed, there must be a balance of forces in play. The force that is aiming to accelerate the object is exactly balanced by the force that wants to decelerate it. Our first job is to find out what forces we are dealing with.

Obvious candidates for the retarding forces are ground friction and air resistance. However, in our case looking at the latter is sufficient since at high speeds, air resistance becomes the dominating factor. This makes things considerably easier for us. So how can we calculate air resistance?

To compute air resistance we need to know several inputs. One of these is the air density D (in kg/m³), which at sea level has the value D = 1.25 kg/m³. We also need to know the projected area A (in m²) of the car, which is just the product of width times height. Of course there’s also the dependence on the velocity v (in m/s) relative to the air. The formula for the drag force is:

F = 0.5 · c · D · A · v²

with c (dimensionless) being the drag coefficient. This is the one quantity in this formula that is tough to determine. You probably don’t know this value for your car and there’s a good chance you will never find it out even if you try. In general, you want to have this value as low as possible.

On ecomodder.com you can find a table of drag coefficients for many common modern car models. Excluding prototype models, the drag coefficient in this list ranges from c = 0.25 for the Honda Insight to c = 0.58 for the Jeep Wrangler TJ Soft Top. The average value is c = 0.33. As a first approximation, you can estimate your car’s drag coefficient by placing it in this range depending on how streamlined it looks compared to the average car.

With the equation: power equals force times speed, we can use the above formula to find out how much power (in W) we need to provide to counter the air resistance at a certain speed:

P = F · v = 0.5 · c · D · A · v³

Of course we can also reverse this equation. Given that our car is able to provide a certain amount of power P, this is the maximum speed v we can achieve:

v = ( 2 · P / (c · D · A) )^(1/3)

From the formula we can see that the top speed grows with the third root of the car’s power, meaning that when we increase the power eightfold, the maximum speed doubles. So even a slight increase in top speed has to be bought with a significant increase in energy output.

Note that we have to input the power in the standard physical unit watt rather than the often-used unit horsepower. Luckily the conversion is very easy: just multiply the horsepower by 746 to get watts.

Let’s put the formula to the test.

—————————

I drive a ten-year-old Mercedes C180 Compressor. According to the Mercedes-Benz homepage, its drag coefficient is c = 0.29 and its power P = 143 HP ≈ 106,680 W. Its width and height are w = 1.77 m and h = 1.45 m, respectively. What is the maximum possible speed?

First we need the projected area of the car:

A = 1.77 m · 1.45 m ≈ 2.57 m²

Now we can use the formula:

v = ( 2 · 106,680 / (0.29 · 1.25 · 2.57) )^(1/3) m/s

v ≈ 61.2 m/s ≈ 220.3 km/h ≈ 136.6 mph

From my experience on the Autobahn, this seems very realistic. You can reach 200 km/h quite easily, but the acceleration is already noticeably lower at that point.

If you ever get the chance to visit Germany, make sure to rent a ridiculously fast sports car (you can rent a Porsche 911 Carrera for as little as 200 $ per day) and find a nice section on the Autobahn with unlimited speed. But remember: unless you’re overtaking, always use the right lane. The left lanes are reserved for overtaking. Never overtake on the right side, nobody will expect you there. And make sure to check the rear-view mirror often. You might think you’re going fast, but there’s always someone going even faster. Let them pass. Last but not least, stay focused and keep your eyes on the road. Traffic jams can appear out of nowhere and you don’t want to end up in the back of a truck at these speeds.

—————————

The fastest production car at the present time is the Bugatti Veyron Super Sport. It has a drag coefficient of c = 0.35, width w = 2 m, height h = 1.19 m and power P = 1200 HP = 895,200 W. Let’s calculate its maximum possible speed:

v = ( 2 · 895,200 / (0.35 · 1.25 · 2 · 1.19) )^(1/3) m/s

v ≈ 119.8 m/s ≈ 431.3 km/h ≈ 267.4 mph

Does this seem unreasonably high? It does. But the car has actually been recorded going 431 km/h, so we are right on target. If you’d like to purchase this car, make sure you have 4,000,000 $ in your bank account.
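Both examples can be checked with a small Python function (the names are mine):

```python
def top_speed(power_hp, c, width, height, rho=1.25):
    """Maximum speed in m/s: v = (2*P / (c * rho * A))^(1/3)."""
    P = power_hp * 746      # convert horsepower to watts
    A = width * height      # projected frontal area in m^2
    return (2 * P / (c * rho * A)) ** (1 / 3)

v_mercedes = top_speed(143, 0.29, 1.77, 1.45)
v_veyron = top_speed(1200, 0.35, 2.0, 1.19)
print(round(v_mercedes * 3.6), "km/h")  # roughly 220
print(round(v_veyron * 3.6), "km/h")    # roughly 431
```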

—————————

This was an excerpt from the ebook More Great Formulas Explained.

Check out my BEST OF for more interesting physics articles.

Sources:

http://ecomodder.com/wiki/index.php/Vehicle_Coefficient_of_Drag_List

http://www.mercedes-benz.de/content/germany/mpc/mpc_germany_website/de/home_mpc/passengercars/home/_used_cars/technical_data.0006.html

http://www.carfolio.com/specifications/models/car/?car=218999

E-Book Market & Sales – Analysis Pool

On this page you can find a collection of all my statistical analyses and research regarding the Kindle ebook market and sales. I’ll keep the page updated.

How E-Book Sales Vary at the End / Beginning of a Month

The E-Book Market in Numbers

Computing and Tracking the Amazon Sales Rank

Typical Per-Page-Prices for E-Books

Quantitative Analysis of Top 60 Kindle Romance Novels

Mathematical Model For E-Book Sales

If you have any suggestions on what to analyze next, just let me know. Share if you like the information.

Mathematical Model For (E-) Book Sales

It seems to be a no-brainer that with more books on the market, an author will see higher revenues. I wanted to know more about how the sales rate varies with the number of books. So I did what I always do when faced with an economic problem: construct a mathematical model. Even though it took me several tries to find the right approach, I’m fairly confident that the following model is able to explain why revenues grow overproportionally with the number of books an author has published. I also stumbled across a way to correct the marketing R/C for the number of books.

The basic quantities used are:

  • n = number of books
  • i = impressions per day
  • q = conversion probability (which is the probability that an impression results in a sale)
  • s = sales per buyer
  • r = daily sales rate

Obviously the basic relationship is:

r = i(n) * q(n) * s(n)

with the brackets indicating a dependence of the quantities on the number of books.

1) Let’s start with s(n) = sales per buyer. Suppose there’s a probability p that a buyer, who has purchased an author’s book, will go on to buy yet another book of said author. To visualize this, think of the books as some kind of mirrors: each ray (sale) will either go through the book (no further sales from this buyer) or be reflected on another book of the author. In the latter case, the process repeats. Using this “reflective model”, the number of sales per buyer is:

s(n) = 1 + p + p² + … + p^(n−1) = (1 – p^n) / (1 – p)

For example, if the probability of a reader buying another book from the same author is p = 15 % = 0.15 and the author has n = 3 books available, we get:

s(3) = (1 – 0.15³) / (1 – 0.15) ≈ 1.17 sales per buyer

So the number of sales per buyer increases with the number of books. However, it quickly reaches a limiting value. Letting n go to infinity results in:

s(∞) = 1 / (1 – p)

Hence, this effect is a source for overproportional growth only for the first few books. After that it turns into a constant factor.
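Here is a quick Python sketch of this “reflective model”, using the numbers from the example above:

```python
def sales_per_buyer(p, n):
    """Expected sales per buyer for n books and re-buy probability p:
    geometric series 1 + p + ... + p**(n-1) = (1 - p**n) / (1 - p)."""
    return (1 - p ** n) / (1 - p)

print(round(sales_per_buyer(0.15, 3), 2))   # ~1.17, as in the example
print(round(1 / (1 - 0.15), 2))             # ~1.18, the limit for n -> infinity
```

As the limit shows, adding more books past the first few barely moves this factor.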

2) Let’s turn to q(n) = conversion probability. Why should this quantity depend on the number of books at all? Studies show that the probability of making a sale grows with the choice offered. That’s why ridiculously large malls work. When an author offers a large number of books, he or she can provide list impressions (featuring all of the author’s books) in addition to the common single impressions (featuring only one book). With more choice, the conversion probability on list impressions will be higher than that on single impressions. Define:

  • qs = single impression conversion probability
  • ps = percentage of impressions that are single impressions
  • ql = list impression conversion probability
  • pl = percentage of impressions that are list impressions

with ps + pl = 1. The overall conversion probability will be:

q(n) = qs(n) * ps(n) + ql(n) * pl(n)

With ql(n) and pl(n) obviously growing with the number of books and ps(n) decreasing accordingly, we get an increase in the overall conversion probability.

3) Finally, let’s look at i(n) = impressions per day. Denoting by i1, i2, … the number of daily impressions generated by book number 1, book number 2, …, the average number of impressions per day and book is:

ib = 1/n * ∑[k] ik

with ∑[k] meaning the sum over all k. The overall impressions per day are:

i(n) = ib(n) * n

Assuming all books generate the same number of daily impressions, this is a linear growth. However, there might be an overproportional factor at work here. As an author keeps publishing, his experience in writing, editing and marketing will grow. Especially for initially inexperienced authors, the quality of the books and the marketing approach will improve with each book. Translated into numbers, this means that later books will generate more impressions per day:

i(k+1) > i(k)

which leads to an overproportional (instead of just linear) growth in overall impressions per day with the number of books. Note that more experience should also translate into a higher single impression conversion probability:

qs(n+1) > qs(n)

4) As a final treat, let’s look at how these effects impact the marketing R/C. The marketing R/C is the revenues that result from an ad divided by the costs of the ad:

R/C = Revenues / Costs

For an ad to be worth it to an author, this value should be greater than 1. Assume an ad generates a total of iad single impressions. For one book we get the revenues:

R = iad * qs(1)

If more than one book is available, this number changes to:

R = iad * qs(n) * (1 – p^n) / (1 – p)

So if the R/C in the case of one book is (R/C)1, the corrected R/C for a larger number of books is:

R/C = (R/C)1 * qs(n) / qs(1) * (1 – p^n) / (1 – p)

In short: ads that aren’t profitable for a single book can become profitable as the author offers more books.
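A small Python sketch of this correction. The one-book R/C of 0.9 and the ratio qs(n)/qs(1) = 1 are hypothetical values, chosen just for illustration:

```python
def corrected_rc(rc_one_book, qs_ratio, p, n):
    """Corrected R/C for n books: (R/C)1 * qs(n)/qs(1) * (1 - p**n)/(1 - p).
    qs_ratio stands for the ratio qs(n)/qs(1)."""
    return rc_one_book * qs_ratio * (1 - p ** n) / (1 - p)

# Hypothetical example: an ad that loses money with one book (R/C = 0.9)
# becomes profitable once three books are available (p = 0.15):
print(round(corrected_rc(0.9, 1.0, 0.15, 3), 2))  # ~1.06 > 1
```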

For more mathematical modeling check out: Mathematics of Blog Traffic: Model and Tips for High Traffic.

Acceleration – A Short and Simple Explanation

The three basic quantities used in kinematics are distance, velocity and acceleration. Let’s first look at velocity before moving on to the main topic. Velocity is simply the rate of change in distance. If we cover the distance d in a time span t, then the average velocity during this interval is:

v = d / t

So if we drive d = 800 meters in t = 40 seconds, the average speed is v = 800 meters / 40 seconds = 20 m/s. No surprise here. Note that there are many different units commonly used for velocity: kilometers per hour, feet per second, miles per hour, etc … The SI unit is m/s, so unless otherwise stated, you have to input the velocity in m/s into a formula to get a correct result.

Acceleration is also defined as the rate of change, but this time with respect to velocity. If the velocity changes by the amount v in a time span t, the average acceleration is:

a = v / t

For example, my beloved Mercedes C-180 Compressor can go from 0 to 100 kilometers per hour (or 27.8 meters per second) in about 9 seconds. So the average acceleration during this time is:

a = 27.8 meters per second / 9 seconds = 3.1 m/s²

Is that a lot? Obviously we should know some reference values to be able to judge acceleration.

The one value you should know is g = 9.81 m/s². This is the acceleration experienced in free fall. And you can take the word “experienced” literally because, unlike velocity, we really do feel acceleration. Our inner ear contains structures that enable us to perceive it. Acceleration is often compared to this value because it provides a meaningful and easily relatable reference.

So the acceleration of the Mercedes C-180 Compressor is not quite as thrilling as free fall; it only accelerates at about 3.1 / 9.81 = 0.32 g. How much higher can a production car go? Well, meet the Bugatti Veyron Super Sport. It goes from 0 to 100 kilometers per hour (or 27.8 meters per second) in 2.2 seconds. This translates into an acceleration of:

a = 27.8 meters per second / 2.2 seconds = 12.6 m/s²

This is more than the free-fall acceleration! To be more specific, it’s 12.6 / 9.81 = 1.28 g. If you’ve got $ 4,000,000 to spare, how about getting one of these? But even this is nothing compared to what astronauts have to endure during launch. Here you can see a typical acceleration profile of a Space Shuttle launch:

(Taken from http://www.russellwestbrook.com)

Right before the main engine shutoff the acceleration peaks at close to 30 m/s² or 3 g. That’s certainly not for everyone. How much can a person endure by the way? According to “Aerospace Medicine” accelerations of around 5 g and higher can result in death if sustained for more than a few seconds. Very short acceleration bursts can be survivable up to about 50 g, which is a value that can be reached and exceeded in a car crash.

One more thing to keep in mind about acceleration: it is always the result of a force. If a force F (measured in Newtons = N) acts on a body, the body responds by accelerating. The stronger the force, the higher the resulting acceleration. This is just Newton’s Second Law:

a = F / m

So a force of F = 210 N on a body of m = 70 kg leads to an acceleration of a = 210 N / 70 kg = 3 m/s². The same force however on a m = 140 kg mass only leads to the acceleration a = 210 N / 140 kg = 1.5 m/s². Hence, mass provides resistance to acceleration. You need more force to accelerate a massive body at the same rate as a light body.
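These examples are easy to verify with a few lines of Python:

```python
def acceleration(force_n, mass_kg):
    """Newton's second law: a = F / m, in m/s^2."""
    return force_n / mass_kg

g = 9.81  # free-fall acceleration in m/s^2

print(acceleration(210, 70))                 # 3.0 m/s^2
print(acceleration(210, 140))                # 1.5 m/s^2 (double the mass, half the acceleration)
print(round(acceleration(210, 70) / g, 2))   # ~0.31 g
```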

For more interesting physics articles, check out my BEST OF.

Mathematics of Blog Traffic: Model and Tips for High Traffic

Over the last few days I finally did what I long had planned and worked out a mathematical model for blog traffic. Here are the results. First we’ll take a look at the most general form and then use it to derive a practical, easily applicable formula.

We need some quantities as inputs. The time (in days), starting from the first blog entry, is denoted by t. We number the blog posts with the variable k. So k = 1 refers to the first post published, k = 2 to the second, etc … We’ll refer to the day on which entry k is published by t(k).

The initial number of visits entry k draws from the feed is symbolized by i(k), the average number of views per day entry k draws from search engines by s(k). Assuming that the number of feed views declines exponentially for each article with a factor b (my observations put the value for this at around 0.4 – 0.6), this is the number of views V the blog receives on day t:

V(t) = Σ[k] ( s(k) + i(k) · b^(t – t(k)) )

Σ[k] means that we sum over all k. This is the most general form. For it to be of any practical use, we need to make simplifying assumptions. We assume that the entries are published at a constant frequency f (entries per day) and that each article has the same popularity, that is:

i(k) = i = const.
s(k) = s = const.

After a long calculation you can arrive at this formula. It provides the expected number of daily views given that the above assumptions hold true and that the blog consists of n entries in total:

V = s · n + i / ( 1 – b^(1/f) )

Note that according to this formula, blog traffic increases linearly with the number of entries published. Let’s apply the formula. Assume we publish articles at a frequency f = 1 per day and they draw i = 5 views on the first day from the feed and s = 0.1 views per day from search engines. With b = 0.5, this leads to:

V = 0.1 · n + 10

So once we have gathered n = 20 entries with this setup, we can expect V = 12 views per day; at n = 40 entries this grows to V = 14 views per day, etc. The theoretical growth of this blog with the number of entries is shown below:

(Chart: daily views vs. number of entries)

How does the frequency at which entries are being published affect the number of views? You can see this dependency in the graph below (I set n = 40):

(Chart: daily views vs. publishing frequency)

The formula is very clear about what to do for higher traffic: get more attention in the feed (good titles, good tagging and a large number of followers all lead to high i and possibly reduced b), optimize the entries for search engines (high s), publish at high frequency (obviously high f) and do this for a long time (high n).

We’ll draw two more conclusions. As you can see the formula neatly separates the search engine traffic (left term) and feed traffic (right term). And while the feed traffic reaches a constant level after a while of constant publishing, it is the search engine traffic that keeps on growing. At a critical number of entries N, the search engine traffic will overtake the feed traffic:

N = i / ( s · ( 1 – b^(1/f) ) )

In the above blog setup, this happens at N = 100 entries. At this point both the search engines as well as the feed will provide 10 views per day.

Here’s one more conclusion: the daily increase in the average number of views is just the product of the daily search engine views per entry s and the publishing frequency f:

ΔV / Δt = s · f

Thus, our example blog will experience an increase of 0.1 · 1 = 0.1 views per day or 1 additional view per 10 days. If we publish entries at twice the frequency, the blog would grow with 0.1 · 2 = 0.2 views per day or 1 additional view every 5 days.
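If you’d like to play with the model yourself, here is a short Python sketch using the example values from above (i = 5, s = 0.1, b = 0.5, f = 1):

```python
def daily_views(n, i=5.0, s=0.1, b=0.5, f=1.0):
    """Expected daily views for a blog with n entries:
    V = s*n + i / (1 - b**(1/f))."""
    return s * n + i / (1 - b ** (1 / f))

def crossover_entries(i=5.0, s=0.1, b=0.5, f=1.0):
    """Number of entries N at which search engine traffic overtakes feed traffic."""
    return i / (s * (1 - b ** (1 / f)))

print(round(daily_views(20), 1))     # 12.0 views per day
print(round(daily_views(40), 1))     # 14.0 views per day
print(round(crossover_entries()))    # 100 entries
```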

Analysis of Viewers for TV Series

I analysed the number of viewers for all the completed seasons of the following TV shows: Fringe, Lost, Heroes, Gossip Girl, Vampire Diaries, True Blood, The Sopranos, How I Met Your Mother, Glee and Family Guy. The data was taken from the respective Wikipedia pages.

My aim was to find simple “rule-of-thumb” formulas to estimate key values from the number of premiere viewers and to see if there’s a pattern for the decline of a show. Below you can see the main results from the analysis.

Result 1: Finale vs. Premiere

The number of finale viewers is about 85 % of the number of premiere viewers.

Result 2: Average vs. Premiere

The average number of viewers during a season is about 83 % of the number of premiere viewers.

(Charts: finale viewers and season average vs. premiere viewers)

Result 3: Decline Pattern

The average number of viewers during a season is about 93 % of the average number of viewers during the previous season.

(Chart: season average vs. previous season average)

This last result implies that the decline in popularity is exponential. If the average number of viewers for the first season is N(1), then the expected number of viewers for season n is: N(n) = N(1) * 0.93^(n-1). We can also express this using a table:

Average season two = 93 % of average season one

Average season three = 86 % of average season one

Average season four = 80 % of average season one

Average season five = 75 % of average season one

Average season six = 70 % of average season one

etc …

Of course, this is all just the aggregate behaviour of the analyzed shows. Individual shows can behave very differently from that.
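Combining Results 2 and 3 gives a quick way to estimate a season average from the premiere viewers. A Python sketch (the 10-million premiere is a hypothetical example, not one of the analyzed shows):

```python
def season_average(premiere_viewers, season):
    """Rule-of-thumb estimate: a season's average is ~83 % of the premiere
    (Result 2) and declines by ~7 % per season thereafter (Result 3)."""
    return 0.83 * premiere_viewers * 0.93 ** (season - 1)

# Hypothetical show premiering with 10 million viewers:
print(round(season_average(10e6, 1) / 1e6, 1))  # ~8.3 million in season one
print(round(season_average(10e6, 4) / 1e6, 1))  # ~6.7 million in season four
```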

Inflation: How long does it take for prices to double?

A question that often comes up is how long it would take for prices to double if the rate of inflation remained constant. The answer, a doubling time, also helps to turn an abstract percentage into a value that is easier to grasp and interpret.

If we start at a certain value for the consumer price index CPI0 and apply a constant annual inflation factor f (which is just the annual inflation rate expressed in decimals plus one), the CPI would grow exponentially according to this formula:

CPIn = CPI0 · f^n

where CPIn symbolizes the Consumer Price Index for year n. The prices have doubled when CPIn equals 2 · CPI0. So we get:

2 · CPI0 = CPI0 · f^n

Or, after solving this equation for n:

n = ln(2) / ln(f)

with ln being the natural logarithm. Using this formula, we can calculate how many years it would take for prices to double given a constant inflation rate (and thus inflation factor). Let’s look at some examples.

——————–

In 1918, the end of World War I and the beginning of the Spanish Flu, the inflation rate in the US rose to a frightening r = 0.204 = 20.4 %. The corresponding inflation factor is f = 1.204. How long would it take for prices to double if it remained constant?

Applying the formula, we get:

n = ln(2) / ln(1.204) = ca. 4 years

More typical values for the annual inflation rate are in the region of several percent. Let’s see how long it takes for prices to double under normal circumstances. We will use r = 0.025 = 2.5 % for the constant inflation rate.

n = ln(2) / ln(1.025) = ca. 28 years

Which is approximately one generation.

One of the highest inflation rates ever measured occurred during the hyperinflation in the Weimar Republic, the democratic predecessor of the Federal Republic of Germany. The monthly (!) inflation rate reached a fantastical value of r = 295 = 29,500 %. To grasp this, it is certainly helpful to express it in the form of the doubling time.

n = ln(2) / ln(296) = ca. 0.12 months = ca. 4 days

Note that since we used the monthly inflation rate as the input, we got the result in months as well. Even worse was the inflation at the beginning of the nineties in Yugoslavia, with a daily (!) inflation rate of r = 0.65 = 65 %, meaning prices doubled every 33 hours.
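Here is the doubling-time formula as a small Python function, reproducing the examples:

```python
import math

def doubling_time(rate):
    """Periods until prices double at a constant inflation rate per period:
    n = ln(2) / ln(1 + rate)."""
    return math.log(2) / math.log(1 + rate)

print(round(doubling_time(0.204), 1))  # ~3.7 years (US, 1918)
print(round(doubling_time(0.025), 1))  # ~28 years (typical inflation)
print(round(doubling_time(295), 2))    # ~0.12 months (Weimar hyperinflation)
```

Remember that the result comes out in whatever period the rate refers to: years for annual rates, months for monthly rates.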

——————–

This was an excerpt from “Business Math Basics – Practical and Simple”. I hope you enjoyed it. For more on inflation check out my post about the Time Value of Money.

The Standard Error – What it is and how it’s used

I smoke electronic cigarettes and recently I wanted to find out how much nicotine liquid I consume per day. I noted the used amount on five consecutive days:

3 ml, 3.4 ml, 7.2 ml, 3.7 ml, 4.3 ml

So how much do I use per day? Well, our best guess is to take the average, that is, to sum all the amounts and divide by the number of measurements:

(3 ml + 3.4 ml + 7.2 ml + 3.7 ml + 4.3 ml) / 5 = 4.3 ml

Most people would stop here. However, there’s one very important piece of information missing: how accurate is that result? Surely an average value of 4.3 ml computed from 100 measurements is much more reliable than the same average computed from 5 measurements. Here’s where the standard error comes in and thanks to the internet, calculating it couldn’t be easier. You can type in the measurements here to get the standard error:

http://www.miniwebtool.com/standard-error-calculator/

It tells us that the standard error (of the mean, to be pedantically precise) of my five measurements is SEM = 0.75. This number is extremely useful because there’s a rule in statistics stating that with 95 % probability, the true average lies within two standard errors of the computed average. For us this means that there’s a 95 % chance, which you could call beyond reasonable doubt, that the true average of my daily liquid consumption lies in this interval:

4.3 ml ± 1.5 ml

or between 2.8 and 5.8 ml. So the computed average is not very accurate. Note that as long as the standard deviation remains more or less constant as further measurements come in, the standard error is inversely proportional to the square root of the number of measurements. In simpler terms: if you quadruple the number of measurements, the size of the error interval halves. With 20 instead of only 5 measurements, we should be able to achieve plus/minus 0.75 accuracy.
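If you’d rather not rely on the online calculator, the whole computation takes only a few lines of Python:

```python
import statistics

data = [3, 3.4, 7.2, 3.7, 4.3]   # daily liquid use in ml

mean = statistics.mean(data)                       # the computed average
sem = statistics.stdev(data) / len(data) ** 0.5    # standard error of the mean
print(round(mean, 2), round(sem, 2))               # ~4.32 and ~0.75

# 95 % confidence interval: mean plus/minus two standard errors
print(round(mean - 2 * sem, 1), round(mean + 2 * sem, 1))  # ~2.8 to ~5.8 ml
```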

So when you have an average value to report, be sure to include the error interval. Your result is much more informative this way and, with the help of the online calculator as well as the above rule, computing it is quick and painless. It took me less than a minute.

A more detailed explanation of the average value, standard deviation and standard error (yes, the latter two are not the same thing) can be found in chapter 7 of my Kindle ebook Statistical Snacks (this was not an excerpt).

Increase Views per Visit by Linking Within your Blog

One of the most basic and useful performance indicators for blogs is the average number of views per visit. If it is high, that means visitors stick around to explore the blog after reading a post. They value the blog for being well-written and informative. But in the fast-paced, content-saturated online world, achieving a lot of views per visit is not easy.

You can help out a little by making exploring your blog easier for readers. A good way to do this is to link within your blog, that is, to provide internal links. Keep in mind though that random links won’t help much. If you link one of your blog posts to another, they should be connected in a meaningful way, for example by covering the same topic or giving relevant additional information to what a visitor just read.

Being mathematically curious, I wanted to find a way to judge what impact such internal links have on the overall views per visit. Assume you start with no internal links and observe a current number of views per visit of x. Now you add n internal links to your blog, which has m entries in total. Given that the probability of a visitor making use of an internal link is p, what will the overall number of views per visit change to? Yesterday night I derived a formula for that:

x’ = x + (n / m) · (1 / (1-p) – 1)

For example, my blog (which has as of now very few internal links) has an average of x = 2.3 views per visit and m = 42 entries. If I were to add n = 30 internal links and assuming a reader makes use of an internal link with the probability p = 20 % = 0.2, this should theoretically change into:

x’ = 2.3 + (30 / 42) · (1 / 0.8 – 1) = 2.5 views per visit

A solid 9 % increase in views per visit, and this just by providing visitors with a simple way to explore. So make sure to go over your blog and connect articles that are relevant to each other. The higher the relevancy of the links, the higher the probability that readers will end up using them. For example, if I only added n = 10 internal links instead of thirty, but had them at such a level of relevancy that the probability of them being used increased to p = 40 % = 0.4, I would end up with the same overall views per visit:

x’ = 2.3 + (10 / 42) · (1 / 0.6 – 1) = 2.5 views per visit

So it’s about relevancy as much as it is about amount. And in the spirit of not spamming, I’d prefer adding a few high-relevancy internal links over a lot of low-relevancy ones.
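The formula is simple enough to sketch in a few lines of Python, reproducing both scenarios:

```python
def views_per_visit(x, n, m, p):
    """Expected views per visit after adding n internal links to a blog
    with m entries, if a link is used with probability p:
    x' = x + (n/m) * (1/(1-p) - 1)."""
    return x + (n / m) * (1 / (1 - p) - 1)

print(round(views_per_visit(2.3, 30, 42, 0.2), 1))  # ~2.5 (many medium-relevancy links)
print(round(views_per_visit(2.3, 10, 42, 0.4), 1))  # ~2.5 (few high-relevancy links)
```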

If you’d like to know more on how to optimize your blog, check out: Setting the Order for your WordPress Blog Posts and Keywords: How To Use Them Properly On a Website or Blog.

The Mach Cone

When an object moves faster than the speed of sound, it will pass an observer before the sound waves it emits do. The waves are compressed so strongly that a shock front forms. So instead of the sound gradually building up to a maximum, as is usually the case, the observer hears nothing until the shock front arrives with a sudden, explosion-like noise.

Geometrically, the shock front forms a cone around the object, which under certain circumstances can even be visible to the naked eye (see image below). The great formula that is featured in this section deals with the opening angle of said cone. This angle, symbolized by the Greek letter θ, is also indicated in the image.

(Image: Mach cone with opening angle θ)

All we need to compute the Mach angle θ is the velocity of the object v (in m/s) and the speed of sound c (in m/s):

sin θ = c / v

Let’s turn to an example.

———————-

A jet fighter flies with a speed of v = 500 m/s toward its destination. It flies close to the ground, so the speed of sound is approximately c = 340 m/s. This leads to:

sin θ = 340 / 500 = 0.68

θ = arcsin(0.68) ≈ 43°

———————-

In the picture above the angle is approximately 62°. How fast was the jet going at the time when the picture was taken? We’ll set the speed of sound to c = 340 m/s and insert all the given data into the formula:

sin 62° = 340 / v

0.88 = 340 / v

Obviously we need to solve for v. To do that, we first multiply both sides by v. This leads to:

0.88 · v = 340

Dividing both sides by 0.88 results in the answer:

v = 340 / 0.88 ≈ 385 m/s ≈ 1390 km/h ≈ 860 mph
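Both directions of the calculation, angle from speed and speed from angle, fit into a short Python sketch:

```python
import math

def mach_angle_deg(v, c=340.0):
    """Opening angle of the Mach cone in degrees, for object speed v (m/s)."""
    return math.degrees(math.asin(c / v))

def speed_from_angle(theta_deg, c=340.0):
    """Invert the formula: object speed in m/s from the observed cone angle."""
    return c / math.sin(math.radians(theta_deg))

print(round(mach_angle_deg(500)))   # ~43 degrees for the jet fighter
print(round(speed_from_angle(62)))  # ~385 m/s for the pictured jet
```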

———————-

This was an excerpt from the ebook “Great Formulas Explained – Physics, Mathematics, Economics”, released yesterday and available here: http://www.amazon.com/dp/B00G807Y00.

The Time Value of Money and Inflation

To make a point, I’ll start this blog entry in an unusual way, that is, by talking about vectors. A vector is basically an ordered row of numbers. Consider this expression for example:

(12, 3, 5)

This vector could represent a lot of things. For example a point in a three dimensional coordinate system, with the vector components being the x-, y- and z-values respectively. Or for a company offering three products, it could stand for the sales of these products in a certain year.

Why this talk about vectors? You were probably very surprised when you heard grandma say that she paid only 150 $ for her first car. It seems so amazingly cheap. But it is not. Your dear grandma is talking about 1950’s money, while you are thinking of today’s money. These two have a very different value.

If you want to specify the costs of a good precisely, merely giving an amount of money will not be sufficient. The value of money changes over time and thus to be absolutely precise, you should always couple this amount with a certain year. For example, this is what grandma’s car really cost:

(150 $, 1950)

This is far from (150 $, 2012), which is what you were thinking of when grandma shared the story with you. Using an online inflation calculator, we can conclude that this is actually what the car would cost in today’s money:

(1410 $, 2012)

Not an expensive car, but certainly more than 150 $ in today’s money. Now you can see why I started this chapter using vectors. They allow us to easily and clearly couple an amount with a year. A true pedant would even ask for one more component since we are still missing the respective months. But let’s not get too pedantic.

How can we justify saying that 150 $ in 1950’s money is the same as 1410 $ in today’s money? We can look at how much of a certain good these amounts would buy in the given year. With 150 $ in 1950 you could fill your basket with about as many apples as you can with 1410 $ today. The same goes for most other common goods: oranges, potatoes, water, cinema tickets, and so on.

This is inflation, goods get more expensive each year. At a later point we will take a look at what reasons there are for inflation to occur. But before that, let’s define the rate of inflation and see how it is measured …

This was an excerpt from the ebook “Business Math Basics – Practical and Simple”, available for Kindle here: http://www.amazon.com/dp/B00FXB8QSO.

Physics (And The Formula That Got Me Hooked)

A long time ago, in my teen years, this was the formula that got me hooked on physics. Why? I can’t say for sure. I guess I was very surprised that you could calculate something like this so easily. So, with some nostalgia, I present another great formula from the field of physics. It continues, and concludes, the topic of energy.

To heat something, you need a certain amount of energy E (in J). How much exactly? To compute this we require three inputs: the mass m (in kg) of the object we want to heat, the temperature difference T (in °C) between the initial and final state and the so-called specific heat c (in J per kg °C) of the material that is heated. The relationship is quite simple:

E = c · m · T

If you double any of the input quantities, the energy required for heating will double as well. A very helpful addition to problems involving heating is this formula:

E = P · t

with P (in watt = W = J/s) being the power of the device that delivers heat and t (in s) the duration of the heat delivery.

———————

The specific heat of water is c = 4200 J per kg °C. How much energy do you need to heat m = 1 kg of water from room temperature (20 °C) to its boiling point (100 °C)? Note that the temperature difference between initial and final state is T = 80 °C. So we have all the quantities we need.

E = 4200 · 1 · 80 = 336,000 J

Additional question: How long will it take a water heater with an output of 2000 W to accomplish this? Let’s set up an equation for this using the second formula:

336,000 = 2000 · t

t ≈ 168 s ≈ 3 minutes
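The two formulas combine into a few lines of Python that reproduce this example:

```python
def heating_energy(c, m, delta_t):
    """Energy in joules to heat mass m (kg) by delta_t (deg C),
    given the specific heat c (J per kg deg C): E = c * m * T."""
    return c * m * delta_t

e = heating_energy(4200, 1, 80)   # water from 20 to 100 deg C
print(e)                          # 336000 J
print(round(e / 2000))            # 168 s with a 2000 W heater (t = E / P)
```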

———————-

We put m = 1 kg of water (c = 4200 J per kg °C) in one container and m = 1 kg of sand (c = 290 J per kg °C) in another next to it. This will serve as an artificial beach. Using a heater we add 10,000 J of heat to each container. By what temperature will the water and the sand be raised?

Let’s turn to the water. From the given data and the great formula we can set up this equation:

10,000 = 4200 · 1 · T

T ≈ 2.4 °C

So the water temperature will be raised by 2.4 °C. What about the sand? It also receives 10,000 J.

10,000 = 290 · 1 · T

T ≈ 34.5 °C

So sand (or any ground in general) will heat up much more strongly than water. In other words: the temperature of the ground reacts quite strongly to changes in energy input, while water is rather sluggish. This explains why the climate near oceans is milder than inland, that is, why the summers are less hot and the winters less cold. The water efficiently dampens the changes in temperature.

It also explains the land-sea-breeze phenomenon (seen in the image below). During the day, the sun’s energy will cause the ground to be hotter than the water. The air above the ground rises, leading to cooler air flowing from the ocean to the land. At night, due to the lack of the sun’s power, the situation reverses. The ground cools off quickly and now it’s the air above the water that rises.

(Image: the land-sea breeze)
———————-

I hope this formula got you hooked as well. It’s simple, useful and can explain quite a lot of physics at the same time. It doesn’t get any better than this. Now it’s time to leave the concept of energy and turn to other topics.

This was an excerpt from my Kindle ebook: Great Formulas Explained – Physics, Mathematics, Economics. For another interesting physics quicky, check out: Intensity (or: How Much Power Will Burst Your Eardrums?).

Physics: Free Fall and Terminal Velocity

After a while of free fall, any object will reach and maintain a terminal velocity. To calculate it, we need a lot of inputs.

The necessary quantities are: the mass of the object m (in kg), the gravitational acceleration g (in m/s²), the density of air D (in kg/m³), the projected area of the object A (in m²) and the drag coefficient c (dimensionless). The latter two quantities need some explaining.

The projected area is the largest cross-section in the direction of fall. You can think of it as the shadow of the object on the ground when the sun’s rays hit the ground at a ninety degree angle. For example, if the falling object is a sphere, the projected area will be a circle with the same radius.

The drag coefficient is a dimensionless number that depends in a very complex way on the geometry of the object. There’s no simple way to compute it; usually it is determined in a wind tunnel. However, you can find the drag coefficients for common shapes in the picture below.

Now that we know all the inputs, let’s look at the formula for the terminal velocity v (in m/s). It is valid for objects dropped from such a great height that they manage to reach this limiting value, which is basically the result of air resistance canceling out gravity.

v = √( 2 * m * g / (c * D * A) )

Let’s do an example.

Skydivers are in free fall after leaving the plane, but soon reach the terminal velocity. We will set the mass to m = 75 kg, g = 9.81 (as usual) and D = 1.2 kg/m³. In a head-first position the skydiver has a drag coefficient of c = 0.8 and a projected area A = 0.3 m². What is the terminal velocity of the skydiver?

v = √( 2 * 75 * 9.81 / (0.8 * 1.2 * 0.3) )

v ≈ 70 m/s ≈ 260 km/h ≈ 160 mph

Let’s take a look at how changing the inputs varies the terminal velocity. Two bullet points will be sufficient here:

  • If you quadruple the mass (or the gravitational acceleration), the terminal velocity doubles. So a very heavy skydiver or a regular skydiver on a massive planet would fall much faster.
  • If you quadruple the drag coefficient (or the density or the projected area), the terminal velocity halves. This is why parachutes work. They have a higher drag coefficient and larger area, thus effectively reducing the terminal velocity.
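Here is the terminal velocity formula as a small Python function, using the skydiver values from above:

```python
def terminal_velocity(m, c, area, g=9.81, air_density=1.2):
    """Terminal velocity in m/s: v = sqrt(2*m*g / (c * D * A))."""
    return (2 * m * g / (c * air_density * area)) ** 0.5

v = terminal_velocity(75, 0.8, 0.3)   # head-first skydiver
print(round(v))         # ~71 m/s
print(round(v * 3.6))   # ~257 km/h
```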

This was an excerpt from the Kindle ebook: Great Formulas Explained – Physics, Mathematics, Economics. Check out my BEST OF for more interesting physics articles.

How much habitable land is there on earth per person?

What is the total area of habitable land on Earth? And how much habitable land does that leave one person? We’ll use the value r = 6400 km as the radius of Earth. According to the corresponding formula for spheres, the surface area of Earth is:

S = 4 * π * (6400 km)^2 ≈ 515 million square km

Since about 30 % of Earth’s surface is land, the total land area is 0.3 * 515 ≈ 155 million square km, about half of which is habitable for humans. With roughly 7 billion people alive today, we can conclude that there are 0.011 square km of habitable land available per person. This corresponds to a square with a side length of about 100 m ≈ 330 ft.
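The whole estimate fits into a few lines of Python:

```python
import math

R = 6400                       # Earth's radius in km
surface = 4 * math.pi * R**2   # surface area of a sphere: ~515 million km^2
land = 0.3 * surface           # ~30 % of the surface is land: ~155 million km^2
habitable = 0.5 * land         # about half the land is habitable
per_person = habitable / 7e9   # ~0.011 km^2 per person

side_m = per_person ** 0.5 * 1000   # side of an equivalent square, in meters
print(round(per_person, 3), "km^2 per person")  # ~0.011
print(round(side_m), "m")                       # ~105 m (roughly 330 ft)
```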