
What are Functions? Excerpt from “Math Dialogue: Functions”

T:

What are functions? I could just insert the standard definition here, but I fear that this might not be the best approach for those who have just started their journey into the fascinating world of mathematics. For one, any common textbook will include the definition, so if that’s all you’re looking for, you don’t need to continue reading here. Secondly, it is much more rewarding to build towards the definition step by step. This approach minimizes the risk of developing deficits and falling prey to misunderstandings.

S:

So where do we start?

T:

We have two options here. We could take the intuitive, concept-focused approach or the more abstract, mathematically rigorous path. My recommendation is to go down both roads, starting with the more intuitive approach and taking care of the strict details later on. This will allow you to get familiar with the concept of the function and apply it to solve real-world problems without first delving into sets, Cartesian products, relations and their properties.

S:

Sounds reasonable.

T:

Then let’s get started. For now we will think of a function as a mathematical expression into which we insert the value of one quantity x and which spits out the value of another quantity y. So it’s basically an input-output system.

S:

Can you give me an example of this?

T:

Certainly. Here is a function: y = 2·x + 4. As you can see, there are two variables in there, the so-called independent variable x and the dependent variable y. The variable x is called independent because we are free to choose any value for it. Once a value is chosen, we do what the mathematical expression tells us to do, in this case multiply two by the value we have chosen for x and add four to that. The result of this is the corresponding value of the dependent variable y.

S:

So I can choose any value for x?

T:

That’s right, try out any value.

S:

Okay, I’ll set x = 1 then. When I insert this into the expression I get: y = 2·1 + 4 = 6. What does this tell me?

T:

This calculation tells you that the function y = 2·x + 4 links the input x = 1 with the output y = 6. Go on, try out another input.

S:

Okay, can I use x = 0?

T:

Certainly. Any real number works here.

S:

For x = 0 I get y = 2·0 + 4 = 4. So the function y = 2·x + 4 links the input x = 0 with the output y = 4.

T:

That’s right. Now it should be clear why x is called the independent variable and y the dependent variable. While you may choose any real number for x (sometimes there are common-sense restrictions, but we’ll get to that later), the value of y is determined by the form of the function. A few more words on terminology and notation. Sometimes the output is also called the value of the function. We’ve just found that the function y = 2·x + 4 links the input x = 1 with the output y = 6. We could restate that as follows: at x = 1 the function takes on the value y = 6. The other input-output pair we found was x = 0 and y = 4. In other words: at x = 0 the value of the function is y = 4. Keep that in mind.

As for notation, it is very common to use f(x) instead of y. This emphasizes that the expression we are dealing with should be interpreted as a function of the independent variable x. It also allows us to note the input-output pairs in a more compact fashion by including specific values of x in the bracket. Here’s what I mean.

For the function we can write: f(x) = 2·x + 4. Inserting x = 1 we get: f(1) = 2·1 + 4 = 6 or, omitting the calculation, f(1) = 6. The latter is just a very compact way of saying that for x = 1 we get the output y = 6. In a similar manner we can write f(0) = 4 to state that at x = 0 the function takes on the value y = 4. Please insert another value for x using this notation.

S:

Will do. I’ll choose x = -1. Using this value I get: f(-1) = 2·(-1) + 4 = 2 or in short f(-1) = 2. So at x = -1 the value of the function is y = 2. Is all of this correct?

T:

Yes, that’s correct.
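As a small aside for readers who know a little programming: this input-output view of a function is quite literal in code. Here is a minimal Python sketch of our example (the name f simply mirrors the notation above):

def f(x):
    # the function y = 2*x + 4: double the input, then add four
    return 2 * x + 4

print(f(1))   # 6 -> at x = 1 the function takes on the value y = 6
print(f(0))   # 4 -> at x = 0 the value of the function is y = 4
print(f(-1))  # 2 -> at x = -1 the value of the function is y = 2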

S:

You just mentioned that sometimes there are common sense restrictions for the independent variable x. Can I see an example of this?

T:

Okay, let’s get to this right now. Consider the function f(x) = 1/x. Please insert the value x = 1.

S:

For x = 1 I get f(1) = 1/1 = 1. So is it a problem that the output is the same as the input?

T:

Not at all, at x = 1 the function f(x) = 1/x takes on the value y = 1 and this is just fine. The input x = 2 also works well: f(2) = 1/2, so x = 2 is linked with the output y = 1/2. But we will run into problems when trying to insert x = 0.

S:

I see, division by zero. For x = 0 we have f(0) = 1/0 and this expression makes no sense.

T:

That’s right, division by zero is strictly verboten. So whenever an input x would lead to division by zero, we have to rule it out. Let’s state this a bit more elegantly. Every function has a domain. This is just the set of all inputs for which the function produces a real-valued output. For example, the domain of the function f(x) = 2·x + 4 is the set of all real numbers since we can insert any real number x without running into problems. The domain of the function f(x) = 1/x is the set of all real numbers with the number zero excluded since we can use all real numbers as inputs except for zero.

Can you see why the domain of the function f(x) = 1/(3·x – 12) is the set of all real numbers excluding the number four? If it is not obvious, try to insert x = 4.

S:

Okay, for x = 4 I get f(4) = 1/(3·4 – 12) = 1/0. Oh yes, division by zero again.

T:

Correct. That’s why we say that the domain of the function f(x) = 1/(3·x – 12) is the set of all real numbers excluding the number four. Any input x works except for x = 4. So whenever there’s an x somewhere in the denominator, watch out for this. Sometimes we have to exclude inputs for other reasons, too. Consider the function f(x) = sqrt(x). The abbreviation “sqrt” refers to the square root of x. Please compute the value of the function for the inputs x = 0, x = 1 and x = 2.

S:

Will do.

f(0) = sqrt(0) = 0

At x = 0 the value of the function is 0.

f(1) = sqrt(1) = 1

At x = 1 the value of the function is 1.

f(2) = sqrt(2) = 1.4142 …

At x = 2 the value of the function is 1.4142 … All of this looks fine to me. Or is there a problem here?

T:

No problem at all. But now try x = -1.

S:

Okay, f(-1) = sqrt(-1) = … Oh, seems like my calculator spits out an error message here. What’s going on?

T:

Seems like your calculator knows math well. There is no square root of a negative number. Think about it. We say that the square root of the number 4 is 2 because when you multiply 2 by itself you get 4. Note that 4 has another square root, and for the same reason: when you multiply -2 by itself, you also get 4, so -2 is also a square root of 4.

Let’s choose another positive number, say 9. The square root of 9 is 3 because when you multiply 3 by itself you get 9. Another square root of 9 is -3 since multiplying -3 by itself leads to 9. So far so good, but what is the square root of -9? Which number can you multiply by itself to produce -9?

S:

Hmmm … 3 doesn’t work since 3 multiplied by itself is 9, -3 also doesn’t work since -3 multiplied by itself is 9. Looks like I can’t think of any number I could multiply by itself to get the result -9.

T:

That’s right, no such real number exists. In other words: there is no real-valued square root of -9. Actually, no negative number has a real-valued square root. That’s why your calculator complained when you gave it the task of finding the square root of -1. For our function f(x) = sqrt(x) all of this means that inserting an x smaller than zero would lead to a nonsensical result. We say that the domain of the function f(x) = sqrt(x) is the set of all positive real numbers including zero.

In summary, when trying to determine the domain of a function, that is, the set of all inputs that lead to a real-valued output, make sure to exclude any values of x that would a) lead to division by zero or b) produce a negative number under a square root sign. Unless faced with a particularly exotic function, the domain of the function is then simply the set of all real numbers excluding values of x that lead to division by zero and those values of x that produce negative numbers under a square root sign.
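If you enjoy programming, you can even let the computer hunt for problematic inputs. Here is a small Python sketch (the helper name in_domain is my own choice, not standard terminology) that checks whether a given input produces a real-valued output:

import math

def in_domain(func, x):
    # True if func yields a real-valued output at x, False if we run
    # into division by zero or a negative number under the root sign
    try:
        func(x)
        return True
    except (ZeroDivisionError, ValueError):
        return False

print(in_domain(lambda x: 1 / x, 0))             # False - division by zero
print(in_domain(lambda x: 1 / (3 * x - 12), 4))  # False - division by zero
print(in_domain(lambda x: math.sqrt(x), -1))     # False - negative number under the root
print(in_domain(lambda x: math.sqrt(x), 2))      # True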

I promise we will get back to this, but I want to return to the concept of the function before doing some exercises. Let’s go back to the introductory example: f(x) = 2·x + 4. Please make an input-output table for the following inputs: x = -3, -2, -1, 0, 1, 2 and 3.

This was an excerpt from “Math Dialogue: Functions”, available on Amazon for Kindle.

New Release for Kindle: Introduction to Stars – Spectra, Formation, Evolution, Collapse

I’m happy to announce my new e-book release “Introduction to Stars – Spectra, Formation, Evolution, Collapse” (126 pages, $ 2.99). It contains the basics of how stars are born, what mechanisms power them, how they evolve and why they often die a spectacular death, leaving only a remnant of highly exotic matter. The book also delves into the methods used by astronomers to gather information from the light reaching us through the depth of space. No prior knowledge is required to follow the text and no mathematics beyond the very basics of algebra is used.

If you are interested in learning more, click the cover to get to the Amazon product page:

(Book cover)

Here’s the table of contents:

Gathering Information
Introduction
Spectrum and Temperature
Gaps in the Spectrum
Doppler Shift

The Life of a Star
Introduction
Stellar Factories
From Protostar to Star
Main Sequence Stars
Giant Space Onions

The Death of a Star
Introduction
Slicing the Space Onion
Electron Degeneracy
Extreme Matter
Supernovae
Black Holes

Appendix
Answers
Excerpt
Sources and Further Reading

Enjoy the reading experience!

The Weirdness of Empty Space – Casimir Force

(This is an excerpt from The Book of Forces – enjoy!)

The forces we have discussed so far are well-understood by the scientific community and are commonly featured in introductory as well as advanced physics books. In this section we will turn to a more exotic and mysterious interaction: the Casimir force. After a series of complex quantum mechanical calculations, the Dutch physicist Hendrik Casimir predicted its existence in 1948. However, detecting the interaction proved to be an enormous challenge, as this required sensors capable of picking up forces on the order of 10^(-15) N and smaller. It wasn’t until 1996 that this technology became available and the existence of the Casimir force was experimentally confirmed.

So what does the Casimir force do? When you place an uncharged, conducting plate at a small distance from an identical plate, the Casimir force will pull them towards each other. The term “conducting” refers to the ability of a material to conduct electricity. For the force, though, it plays no role whether the plates are actually carrying a current at a given moment; what counts is their ability to do so.

The existence of the force can only be explained via quantum theory. Classical physics considers the vacuum to be empty – no particles, no waves, no forces, just absolute nothingness. However, with the rise of quantum mechanics, scientists realized that this is just a crude approximation of reality. The vacuum is actually filled with an ocean of so-called virtual particles (don’t let the name fool you, they are real). These particles are constantly produced in pairs and annihilate after a very short period of time. Each particle carries a certain amount of energy that depends on its wavelength: the shorter the wavelength, the higher the energy of the particle. In theory, there’s no upper limit for the energy such a particle can have when spontaneously coming into existence.

So how does this relate to the Casimir force? The two conducting plates define a boundary in space. They separate the space of finite extension between the plates from the (for all practical purposes) infinite space outside them. Only particles whose half-wavelengths fit a whole number of times into the gap are allowed in the finite space, meaning that the particle density (and thus energy density) in the space between the plates is smaller than the energy density in the pure vacuum surrounding them. This imbalance in energy density gives rise to the Casimir force. In informal terms, the Casimir force is the push of the energy-rich vacuum on the energy-deficient space between the plates.

(Illustration of Casimir force)

It gets even more puzzling though. The nature of the Casimir force depends on the geometry of the plates. If you replace the flat plates by hemispherical shells, the Casimir force suddenly becomes repulsive, meaning that this specific geometry somehow manages to increase the energy density of the enclosed vacuum. Now the even more energy-rich finite space pushes on the surrounding infinite vacuum. Trippy, huh? So which shapes lead to attraction and which lead to repulsion? Unfortunately, there is no intuitive way to decide. Only abstract mathematical calculations and sophisticated experiments can provide an answer.

We can use the following formula to calculate the magnitude of the attractive Casimir force FCS between two flat plates. Its value depends solely on the distance d (in m) between the plates and the area A (in m²) of one plate. The letters h = 6.63·10^(-34) m² kg/s and c = 3.00·10^8 m/s represent Planck’s constant and the speed of light.

FCS = π·h·c·A / (480·d^4) ≈ 1.3·10^(-27)·A / d^4

(The sign ^ stands for “to the power”.) Note that because of the exponent, the strength of the force goes down very rapidly with increasing distance. If you double the size of the gap between the plates, the magnitude of the force reduces to 1/2^4 = 1/16 of its original value. And if you triple the distance, it goes down to 1/3^4 = 1/81 of its original value. This strong dependence on distance and the presence of Planck’s constant as a factor cause the Casimir force to be extremely weak in most real-world situations.

————————————-

Example 24:

a) Calculate the magnitude of the Casimir force experienced by two conducting plates having an area A = 1 m² each and distance d = 0.001 m (one millimeter). Compare this to their mutual gravitational attraction given the mass m = 5 kg of one plate.

b) How close do the plates need to be for the Casimir force to be in the order of unity? Set FCS = 1 N.

Solution:

a)

Inserting the given values into the formula for the Casimir force leads to (units not included):

FCS = 1.3·10^(-27)·A/d^4
FCS = 1.3·10^(-27) · 1 / 0.001^4
FCS ≈ 1.3·10^(-15) N

Their gravitational attraction is:

FG = G·m·M / r²
FG = 6.67·10^(-11)·5·5 / 0.001²
FG ≈ 0.0017 N

This is more than a trillion times the magnitude of the Casimir force – no wonder this exotic force went undetected for so long. I should mention though that the gravitational force calculated above should only be regarded as a rough approximation, as Newton’s law of gravitation is tailored to two attracting spheres, not two attracting plates.

b)

Setting up an equation we get:

FCS = 1.3·10^(-27)·A/d^4
1 = 1.3·10^(-27) · 1 / d^4

Multiply by d^4:

d^4 = 1.3·10^(-27)

And apply the fourth root:

d ≈ 2·10^(-7) m = 200 nanometers

This is roughly the size of a common virus and just a bit longer than the wavelength of violet light.
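If you want to double-check these numbers, a few lines of Python will do; this sketch simply re-evaluates the formulas from above (the variable names are my own):

G = 6.67e-11     # gravitational constant
A = 1.0          # plate area in m^2
d = 0.001        # plate distance in m
m = 5.0          # plate mass in kg

F_CS = 1.3e-27 * A / d**4    # approximate Casimir formula from above
F_G = G * m * m / d**2       # Newtonian attraction (rough, sphere-based)
print(F_CS)                  # about 1.3e-15 N
print(F_G)                   # about 0.0017 N

# part b): solve 1 = 1.3e-27 / d^4 by applying the fourth root
print((1.3e-27) ** 0.25)     # about 2e-7 m, i.e. 200 nanometers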

————————————-

The existence of the Casimir force provides an impressive proof that the abstract mathematics of quantum mechanics is able to accurately describe the workings of the small-scale universe. However, many open questions remain. Quantum theory predicts the energy density of the vacuum to be infinitely large. According to Einstein’s theory of gravitation, such a concentration of energy would produce an infinite space-time curvature and if this were the case, we wouldn’t exist. So either we don’t exist (which I’m pretty sure is not the case) or the most powerful theories in physics are at odds when it comes to the vacuum.

All about the Gravitational Force (For Beginners)

(This is an excerpt from The Book of Forces)

All objects exert a gravitational pull on all other objects. The Earth pulls you towards its center and you pull the Earth towards your center. Your car pulls you towards its center and you pull your car towards your center (of course in this case the forces involved are much smaller, but they are there). It is this force that invisibly tethers the Moon to Earth, the Earth to the Sun, the Sun to the Milky Way Galaxy and the Milky Way Galaxy to its local galaxy cluster.

Experiments have shown that the magnitude of the gravitational attraction between two bodies depends on their masses. If you double the mass of one of the bodies, the gravitational force doubles as well. The force also depends on the distance between the bodies. More distance means less gravitational pull. To be specific, the gravitational force obeys an inverse-square law. If you double the distance, the pull reduces to 1/2² = 1/4 of its original value. If you triple the distance, it goes down to 1/3² = 1/9 of its original value. And so on. These dependencies can be summarized in this neat formula:

F = G·m·M / r²

With F being the gravitational force in Newtons, m and M the masses of the two bodies in kilograms, r the center-to-center distance between the bodies in meters and G = 6.67·10^(-11) N m² kg^(-2) the (somewhat cumbersome) gravitational constant. With this great formula, which was first derived at the end of the seventeenth century and sparked an ugly plagiarism dispute between Newton and Hooke, you can calculate the gravitational pull between two objects in any situation.

(Gravitational attraction between two spherical masses)

If you have trouble applying the formula on your own or just want to play around with it a bit, check out the free web applet Newton’s Law of Gravity Calculator that can be found on the website of the UNL astronomy education group. It allows you to set the required inputs (the masses and the center-to-center distance) using sliders marked with special values such as Earth’s mass or the Earth-Moon distance, and it calculates the gravitational force for you.

————————————-

Example 3:

Calculate the gravitational force a person of mass m = 72 kg experiences at the surface of Earth. The mass of Earth is M = 5.97·10^24 kg (the sign ^ stands for “to the power”) and the distance from the center to the surface r = 6,370,000 m. Using this, show that the acceleration the person experiences in free fall is roughly 10 m/s².

Solution:

To arrive at the answer, we simply insert all the given inputs into the formula for calculating gravitational force.

F = G·m·M / r²
F = 6.67·10^(-11)·72·5.97·10^24 / 6,370,000² N ≈ 707 N

So the magnitude of the gravitational force experienced by the m = 72 kg person is 707 N. In free fall, he or she is driven by this net force (assuming that we can neglect air resistance). Using Newton’s second law we get the following value for the free fall acceleration:

F = m·a
707 N = 72 kg · a

Divide both sides by 72 kg:

a = 707 / 72 m/s² ≈ 9.82 m/s²

This is roughly the 10 m/s² we’ve been using in the introduction, only more exact. Except for the overly small and large numbers involved, calculating gravitational pull is actually quite straightforward.

As mentioned before, gravitation is not a one-way street. As the Earth pulls on the person, the person pulls on the Earth with the same force (707 N). However, Earth’s mass is considerably larger and hence the acceleration it experiences much smaller. Using Newton’s second law again and the value M = 5.97·10^24 kg for the mass of Earth we get:

F = m·a
707 N = 5.97·10^24 kg · a

Divide both sides by 5.97·10^24 kg:

a = 707 / (5.97·10^24) m/s² ≈ 1.18·10^(-22) m/s²

So indeed the acceleration the Earth experiences as a result of the gravitational attraction to the person is tiny.

————————————-

Example 4:

By how much does the gravitational pull change when the person of mass m = 72 kg is in a plane (altitude 10 km = 10,000 m) instead of at the surface of Earth? For the mass and radius of Earth, use the values from the previous example.

Solution:

In this case the center-to-center distance r between the bodies is a bit larger. To be specific, it is the sum of the radius of Earth 6,370,000 m and the height above the surface 10,000 m:

r = 6,370,000 m + 10,000 m = 6,380,000 m

Again we insert everything:

F = G·m·M / r²
F = 6.67·10^(-11)·72·5.97·10^24 / 6,380,000² N ≈ 705 N

So the gravitational force does not change by much (only by 0.3 %) when in a plane. 10 km of altitude is not much by gravity’s standards; the height above the surface needs to be much larger for a noticeable difference to occur.
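Both examples are easy to reproduce in a few lines of Python; the sketch below just re-runs the arithmetic with the values given above:

G = 6.67e-11            # gravitational constant in N m^2 kg^-2
m = 72.0                # mass of the person in kg
M = 5.97e24             # mass of Earth in kg
r_surface = 6370000.0   # radius of Earth in m

F_surface = G * m * M / r_surface**2
print(F_surface)        # about 707 N (Example 3)
print(F_surface / m)    # free-fall acceleration, about 9.82 m/s^2

r_plane = r_surface + 10000.0   # 10 km above the surface
print(G * m * M / r_plane**2)   # about 705 N (Example 4)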

————————————-

With the gravitational law we can easily show that the gravitational acceleration experienced by an object in free fall does not depend on its mass. All objects are subject to the same 10 m/s² acceleration near the surface of Earth. Suppose we denote the mass of an object by m and the mass of Earth by M. The center-to-center distance between the two is r, the radius of Earth. We can then insert all these values into our formula to find the value of the gravitational force:

F = G·m·M / r²

Once calculated, we can turn to Newton’s second law to find the acceleration a the object experiences in free fall. Using F = m·a and dividing both sides by m we find that:

a = F / m = G·M / r²

So the gravitational acceleration indeed depends only on the mass and radius of Earth, but not the object’s mass. In free fall, a feather is subject to the same 10 m/s² acceleration as a stone. But wait, doesn’t that contradict our experience? Doesn’t a stone fall much faster than a feather? It sure does, but this is only due to the presence of air resistance. Initially, both are accelerated at the same rate. But while the stone hardly feels the effects of air resistance, the feather is almost immediately slowed down by the collisions with air molecules. If you dropped both in a vacuum tube, where no air resistance can build up, the stone and the feather would reach the ground at the same time! Check out an online video that shows this interesting vacuum tube experiment, it is quite enlightening to see a feather literally drop like a stone.

(All bodies are subject to the same gravitational acceleration)

Since all objects experience the same acceleration near the surface of Earth and since this is where the everyday action takes place, it pays to have a simplified equation at hand for this special case. Denoting the gravitational acceleration by g (with g ≈ 10 m/s²) as is commonly done, we can calculate the gravitational force, also called weight, an object of mass m is subject to at the surface of Earth by:

F = m·g

So it’s as simple as multiplying the mass by ten. Depending on the application, you can also use the more accurate factor g ≈ 9.82 m/s² (which I will not do in this book). Up to now we’ve only been dealing with gravitation near the surface of Earth, but of course the formula allows us to compute the gravitational force and acceleration near any other celestial body. I will spare you the trouble of looking up the relevant data and do the tedious calculations for you. In the table below you can see what gravitational force and acceleration a person of mass m = 72 kg would experience at the surface of various celestial objects. The acceleration is listed in g’s, with 1 g being equal to the free-fall acceleration experienced near the surface of Earth.

(Table: gravitational force and acceleration experienced by a 72 kg person at the surface of various celestial objects)

So while jumping on the Moon would feel like slow motion (the free-fall acceleration experienced is comparable to what you feel when stepping on the gas pedal in a common car), you could hardly stand upright on Jupiter as your muscles would have to support more than twice your weight. Imagine that! On the Sun it would be even worse. Assuming you find a way not to get instantly killed by the hellish thermonuclear inferno, the enormous gravitational force would feel like having a car on top of you. And unlike temperature or pressure, shielding yourself against gravity is not possible.

What about the final entry? What is a neutron star and why does it have such a mind-blowing gravitational pull? A neutron star is the remnant of a massive star that has burned its fuel and exploded in a supernova, no doubt the most spectacular light-show in the universe. Such remnants are extremely dense – the mass of several suns compressed into an almost perfect sphere of just 20 km radius. With the mass being so large and the distance from the surface to the center so small, the gravitational force on the surface is gigantic and not survivable under any circumstances.

If you approached a neutron star, the gravitational pull would actually kill you long before you reached the surface, in a process called spaghettification. This unusual term, made popular by the brilliant physicist Stephen Hawking, refers to the fact that in intense gravitational fields objects are vertically stretched and horizontally compressed. The explanation is rather straightforward: since the strength of the gravitational force depends on the distance to the source of said force, one side of the approaching object, the side closer to the source, will experience a stronger pull than the opposite side. This leads to a net force stretching the object. If the gravitational force is large enough, this would make any object look like thin spaghetti. For humans spaghettification would be lethal, as the stretching would cause the body to break apart at the weakest spot (which presumably is just above the hips). So my pro-tip is to keep a polite distance from neutron stars.

Antimatter Production – Present and Future

When it comes to using antimatter for propulsion, getting sufficient amounts of the exotic fuel is the biggest challenge. For flights within the solar system, hybrid concepts would require several micrograms of antimatter, while pure antimatter rockets would consume dozens of kilograms per trip. And going beyond the solar system would demand the production of several metric tons and more.

We are very, very far from this. Currently around 10 nanograms of anti-protons are produced in large particle accelerators each year. At this rate it would take 100 years to produce one measly microgram and 100 billion years to accumulate one kilogram. However, the antimatter production rate has seen exponential growth, going up by sixteen orders of magnitude over the past decades, and this general trend will probably continue for some time.

Even with a noticeable slowdown in this exponential growth, gram amounts of anti-protons could be manufactured each year towards the end of the 21st century, making hybrid antimatter propulsion feasible. With no slowdown, the rate could even reach kilograms per year by then. While most physicists view this as an overly optimistic estimate, it is not impossible considering the current trend in antimatter production and the historic growth of liquid hydrogen and uranium production rates (both considered difficult to manufacture in the past).

There is still much to be optimized in the production of antimatter. The energy efficiency at present is only 10^(-9), meaning that you have to put in one gigajoule of pricey electric energy to produce a single joule of antimatter energy. The resulting costs are a staggering 62.5 trillion USD per gram of anti-protons, making antimatter the most expensive material known to man. So if you want to tell your wife how precious she is to you (and want to get rid of her at the same time), how about buying her a nice anti-matter scarf?

Establishing facilities solely dedicated to antimatter production, as opposed to the by-product manufacturing in modern particle accelerators, would significantly improve the situation. NASA experts estimate that an investment of around 5 billion USD is sufficient to build such a first generation antimatter factory. This step could bring the costs of anti-protons down to 25 billion USD per gram and increase the production rate to micrograms per year.

While we might not see kilogram amounts of antimatter or antimatter propulsion systems in our lifetime, the production trend over the next few decades will reveal much about the feasibility of antimatter rockets and interstellar travel. If the optimists are correct, and that’s a big if, the grandchildren of our grandchildren’s grandchildren might watch the launch of the first spacecraft capable of reaching neighboring stars. Sci-fi? I’m sure that’s what people said about the Moon landing and close-up pictures from Mars and Jupiter just a lifetime ago.

For more information, check out my ebook Antimatter Propulsion.

New Release for Kindle: Antimatter Propulsion

I’m very excited to announce the release of my latest ebook called “Antimatter Propulsion”. I’ve been working on it like a madman for the past few months, going through scientific papers and wrestling with the jargon and equations. But I’m quite satisfied with the result. Here’s the blurb, the table of contents and the link to the product page. No prior knowledge is required to enjoy the book.

Many popular science fiction movies and novels feature antimatter propulsion systems, from the classic Star Trek series all the way to Cameron’s hit movie Avatar. But what exactly is antimatter? And how can it be used to accelerate rockets? This book is a gentle introduction to the science behind antimatter propulsion. The first section deals with antimatter in general, detailing its discovery, behavior, production and storage. This is followed by an introduction to propulsion, including a look at the most important quantities involved and the propulsion systems in use or in development today. Finally, the most promising antimatter propulsion and rocket concepts are presented and their feasibility discussed, from the solid core concept to antimatter initiated microfusion engines, from the Valkyrie project to Penn State’s AIMStar spacecraft.

Section 1: Antimatter

The Atom
Dirac’s Idea
Anti-Everything
An Explosive Mix
Proton and Anti-Proton Annihilation
Sources of Antimatter
Storing Antimatter
Getting the Fuel

Section 2: Propulsion Basics

Conservation of Momentum
♪ Throw, Throw, Throw Your Boat ♫
So What’s The Deal?
May The Thrust Be With You
Acceleration
Specific Impulse and Fuel Requirements
Chemical Propulsion
Electric Propulsion
Fusion Propulsion

Section 3: Antimatter Propulsion Concepts

Solid Core Concept
Plasma Core Concept
Beamed Core Concept
Antimatter Catalyzed Micro-Fission / Fusion
Antimatter Initiated Micro-Fusion

Section 4: Antimatter Rocket Concepts

Project Valkyrie
ICAN-II
AIMStar
Dust Shields

You can purchase “Antimatter Propulsion” here for $ 2.99.

The Problem With Antimatter Rockets

The distance to our neighboring star Alpha Centauri is roughly 4.3 lightyears or 25.6 trillion km. This is an enormous distance. It would take the Space Shuttle 165,000 years to cover this distance. That’s 6,600 generations of humans who’d know nothing but the darkness of space. Obviously, this is not an option. Do we have the technologies to get there within the lifespan of a person? Surprisingly, yes. The concept of antimatter propulsion might sound futuristic, but all the technologies necessary to build such a rocket exist. Today.

What exactly do you need to build an antimatter rocket? You need to produce antimatter, store antimatter (remember, if it comes in contact with regular matter it explodes, so putting it in a box is certainly not a possibility) and find a way to direct the annihilation products. Large particle accelerators such as those at CERN routinely produce antimatter (mostly anti-electrons and anti-protons). Penning traps, a sophisticated arrangement of electric and magnetic fields, can store charged antimatter. And magnetic nozzles, suitable for directing the products of proton / anti-proton annihilations, have already been used in several experiments. It’s all there.

So why are we not on the way to Alpha Centauri? We should be making sweet love with green female aliens, but instead we’re still banging our regular, non-green, non-alien women. What’s the hold-up? It would be expensive. Let me rephrase that. The costs would be blasphemous, downright insane, Charlie Manson style. Making one gram of antimatter costs around 62.5 trillion $, it’s by far the most expensive material on Earth. And you’d need tons of the stuff to get to Alpha Centauri. Bummer! And even if we’d all get a second job to pay for it, we still couldn’t manufacture sufficient amounts in the near future. Currently 1.5 nanograms of antimatter are being produced every year. Even if scientists managed to increase this rate by a factor of one million, it would take 1000 years to produce one measly gram. And we need tons of it! Argh. Reality be a harsh mistress …

CTR (Click Through Rate) – Explanation, Results and Tips

A very important metric for banner advertising is the CTR (click-through rate). It is simply the number of clicks the ad generated divided by the total number of impressions. You can also think of it as the product of the probability of a user noticing the ad and the probability of the user being interested in the ad.

CTR = clicks / impressions = p(notice) · p(interested)

The current average CTR is around 0.09 % or 9 clicks per 10,000 impressions and has been declining for the past several years. What are the reasons for this? For one, the common banner locations are familiar to web users and are thus easy to ignore. There’s also the increased popularity of ad-blocking software.

The attitude of internet users is generally negative towards banner ads. This is caused by advertisers using more and more intrusive formats. These include annoying pop-ups and their even more irritating sisters, the floating ads. Adopting them is not favorable for advertisers. They harm a brand and produce very low CTRs. So hopefully, we will see an end to such nonsense soon.

As for animated ads, their success depends on the type of website and target group. For high-involvement websites that users visit to find specific information (news, weather, education), animated banners perform worse than static banners. In case of low-involvement websites that are put in place for random surfing (entertainment, lists, mini games) the situation is reversed. The target group also plays an important role. For B2C (business-to-consumer) ads animation generally works well, while for B2B (business-to-business) animation was shown to lower the CTR.

The language used in ads has also been extensively studied. One interesting result is that often it is preferable to use English language even if the ad is displayed in a country in which English is not the first language. A more obvious result is that catchy words and calls to action (“read more”) increase the CTR.

As for the banner size, there is inconclusive data. Some analyses report that the CTR grows with banner size, while others conclude that banner sizes around 250×250 or 300×250 generate the highest CTRs. There is a clearer picture regarding shape: in terms of CTR, square shapes work better than thin rectangles of the same size. No significant difference was found between vertical and horizontal rectangles.

Here’s another hint: my own theoretical calculations show that higher CTRs can be achieved by advertising on pages that have a low visitor loyalty. The explanation for this counter-intuitive outcome as well as a more sophisticated formula for the CTR can be found here. It is, in a nutshell, a result of the multiplication rule of statistics. The calculation also shows that on sites with a low visitor loyalty the CTR will stay constant, while on websites with a high visitor loyalty it will decrease over time.

Sources and further reading:

  • Study on banner advertisement type and shape effect on click-through-rate and conversion

http://www.aabri.com/manuscripts/131481.pdf

  • The impact of banner ad styles on interaction and click-through-rates

http://iacis.org/iis/2008/S2008_989.pdf

  • Impact of animation and language on banner click-through-rates

http://www.academia.edu/1608289/Impact_of_Animation_and_Language_on_Banner_Click-Through_Rates

Braingate – You Thought It’s Science-Fiction, But It’s Not

On April 12, 2011, something extraordinary happened. A 58-year-old woman who was paralyzed from the neck down reached for a bottle of coffee, drank from a straw and put the bottle back on the table. But she didn’t reach with her own hand – she controlled a robotic arm with her mind. Unbelievable? It is. But decades of research made the unbelievable possible. Watch this exceptional and moving moment in history here (click on picture for Youtube video).


The 58-year-old woman (patient S3) was part of the BrainGate2 project, a collaboration of researchers at the Department of Veterans Affairs, Brown University, the German Aerospace Center (DLR) and others. The scientists implanted a small chip containing 96 electrodes into her motor cortex. This part of the brain is responsible for voluntary movement. The chip measures the electrical activity of the brain and an external computer translates this pattern into the movement of a robotic arm. A brain-computer interface. And it’s not science-fiction, it’s science.

During the study the woman was able to grasp items during the allotted time with a 70 % success rate. Another participant (patient T2) even managed to achieve a 96 % success rate. Besides moving robotic arms, the participants were also given the task of spelling out words and sentences by indicating letters via eye movement. Participant T2 spelled out this sentence: “I just imagined moving my own arm and the [robotic] arm moved where I wanted it to go”.

The future is exciting.

Audio Effects: All About Compressors

Almost all music and recorded speech that you hear has been sent through at least one compressor at some point during the production process. If you are serious about music production, you need to get familiar with this powerful tool. This means understanding the big picture as well as getting to know each of the parameters (Threshold, Ratio, Attack, Release, Make-Up Gain) intimately.

  • How They Work

Throughout any song the volume level varies over time. It might hover around – 6 dB in the verse, rise to – 2 dB in the first chorus, drop to – 8 dB in the interlude, and so on. A term that is worth knowing in this context is the dynamic range. It refers to the difference in volume level from the softest to the loudest part. Some genres of music, such as orchestral music, generally have a large dynamic range, while for mainstream pop and rock a much smaller dynamic range is desired. A symphony might range from – 20 dB in the soft oboe solo to – 2 dB for the exciting final chord (dynamic range: 18 dB), whereas your common pop song will rather go from – 8 dB in the first verse to 0 dB in the last chorus (dynamic range: 8 dB).

During a recording we have some control over what dynamic range we will end up with. We can tell the musicians to take it easy in the verse and really go for it in the chorus. But of course this is not very accurate and we’d like to have full control of the dynamic range rather than just some. We’d also like to be able to change the dynamic range later on. Compressors make this (and much more) possible.

The compressor constantly monitors the volume level. As long as the level is below a certain threshold, the compressor will not do anything. Only when the level exceeds the threshold does it become active and dampen the excess volume by a certain ratio. In short: everything below the threshold stays as it is, everything above the threshold gets compressed. Keep this in mind.

Suppose for example we set the threshold to – 10 dB and the ratio to 4:1. Before applying the compressor, our song varies from a minimum value of – 12 dB in the verse to a maximum value – 2 dB in the chorus. Let’s look at the verse first. Here the volume does not exceed the threshold and thus the compressor does not spring into action. The signal will pass through unchanged. The story is different for the chorus. Its volume level is 8 dB above the threshold. The compressor takes this excess volume and dampens it according to the ratio we set. To be more specific: the compressor turns the 8 dB excess volume into a mere 8 dB / 4 = 2 dB. So the compressed song ranges from – 12 dB in the verse to -10 dB + 2 dB = – 8 dB in the chorus.

Here’s a summary of the process:

Settings:

Threshold: – 10 dB
Ratio: 4:1

Before:

Minimum: – 12 dB
Maximum: – 2 dB
Dynamic range: 10 dB

Excess volume (threshold to maximum): 8 dB
With ratio applied: 8 dB / 4 = 2 dB

After:

Minimum: – 12 dB
Maximum: – 8 dB
Dynamic range: 4 dB

As you can see, the compressor had a significant effect on the dynamic range. Choosing appropriate values for the threshold and ratio, we are free to compress the song to any dynamic range we desire. When using a DAW (Digital Audio Workstation such as Cubase, FL Studio or Ableton Live), it is possible to see the workings of a compressor with your own eyes. The image below shows the uncompressed file (top) and the compressed file (bottom) with the threshold set to – 12 dB and the ratio to 2:1.

(Uncompressed file (top) and compressed file (bottom), threshold – 12 dB, ratio 2:1)

The soft parts are identical, while the louder parts (including the short and possibly problematic peaks) have been reduced in volume. The dynamic range clearly shrunk in the process. Note that after applying the compressor, the song’s effective volume (RMS) is much lower. Since this is usually not desired, most compressors have a parameter called make-up gain. Here you can specify by how much you’d like the compressor to raise the volume of the song after the compression process is finished. This increase in volume is applied to all parts of the song, soft or loud, so there will not be another change in the dynamic range. It only makes up for the loss in loudness (hence the name).
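To make the threshold-and-ratio arithmetic explicit, here is a deliberately simplified Python sketch that works on dB levels directly (a real compressor acts on the audio signal itself and adds attack and release behavior, which we will get to below):

def compress_db(level, threshold, ratio, makeup=0.0):
    # everything below the threshold stays as it is, everything above
    # it is dampened by the ratio; make-up gain is added to all levels
    if level > threshold:
        level = threshold + (level - threshold) / ratio
    return level + makeup

# the example from the text: threshold -10 dB, ratio 4:1
print(compress_db(-12, -10, 4))  # -12.0 dB, below threshold, unchanged
print(compress_db(-2, -10, 4))   # -8.0 dB, the 8 dB excess compressed to 2 dB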

  • Usage of Compressors

We already got to know one application of the compressor: controlling the dynamic range of a song. But usually this is just a first step in reaching another goal: increasing the effective volume of the song. Suppose you have a song with a dynamic range of 10 dB and you want to make it as loud as possible. So you move the volume fader until the maximum level is at 0 dB. According to the dynamic range, the minimum level will now be at – 10 dB. The effective volume will obviously be somewhere in-between the two values. For the sake of simplicity, we’ll assume it to be right in the middle, at – 5 dB. But this is too soft for your taste. What to do?

You insert a compressor with a threshold of – 6 dB and a ratio of 3:1. The 4 dB range from the minimum level – 10 dB to the threshold – 6 dB is unchanged, while the 6 dB range from the threshold – 6 dB to the maximum level 0 dB is compressed to 6 dB / 3 = 2 dB. So overall the dynamic range is reduced to 4 dB + 2 dB = 6 dB. Again you move the volume fader until the maximum volume level coincides with 0 dB. However, this time the minimum volume will be higher, at – 6 dB, and the effective volume at – 3 dB (up from the – 5 dB we started with). Mission accomplished, the combination of compression and gain indeed left us with a higher average volume.

In theory, this means we can get the effective volume up to almost any value we desire by compressing a song and then making it louder. We could have the whole song close to 0 dB. This possibility has led to a “loudness war” in music production. Why not go along with that? For one, you always want to put as much emphasis as possible on the hook. This is hard to do if the intro and verse are already blaring at maximum volume. Another reason is that severely reducing the dynamic range kills the expressive elements in your song. It is not a coincidence that music which strongly relies on expressive elements (orchestral and acoustic music) usually has the highest dynamic range. It needs the wide range to go from expressing peaceful serenity to expressing destructive desperation. Read the following out loud and memorize it: the more expression it has, the less you should compress. While a techno song might work at maximum volume, a ballad sure won’t.

————————————-

Background Info – SPL and Loudness

Talking about how loud something is can be surprisingly complicated. The problem is that our brain does not process sound inputs in a linear fashion. A sound wave with twice the sound pressure does not necessarily seem twice as loud to us. So when expressing how loud something is, we can either do this by using well-defined physical quantities such as the sound pressure level (which unfortunately does not reflect how loud a person perceives something to be) or by using subjective psycho-acoustic quantities such as loudness (which is hard to define and measure properly).

Sound waves are pressure and density fluctuations that propagate at a material- and temperature-dependent speed in a medium. For air at 20 °C this speed is roughly 340 m/s. The quantity sound pressure expresses the deviation of the sound wave pressure from the pressure of the surrounding air. The sound pressure level, in short: SPL, is proportional to the logarithm of the effective sound pressure. Long story short: the stronger the sound pressure, the higher the SPL. The SPL is used to objectively measure how loud something is. Another important objective quantity for this purpose is the volume. It is a measure of how much energy is contained in an audio signal and thus closely related to the SPL.

A subjective quantity that reflects how loud we perceive something to be is loudness. Due to our highly non-linear brains, the loudness of an audio signal is not simply proportional to its SPL or volume level. Rather, loudness depends in a complex way on the SPL, frequency, duration of the sound, its bandwidth, etc … In the image below you can see an approximation of the relationship between loudness, SPL and frequency.

(Equal-loudness contours: loudness as a function of SPL and frequency)

Any red curve is a curve of equal loudness. Here’s how we can read the chart. Take a look at the red curve at the very bottom. It starts at 75 dB SPL and a frequency of 20 Hz and reaches 25 dB SPL at 100 Hz. Since the red curve is a curve of equal loudness, we can conclude that we perceive a 75 dB SPL sound at 20 Hz to be just as loud as a 25 dB SPL sound at 100 Hz, even though from a purely physical point of view the first sound is far more intense (dB values are logarithmic, so they cannot simply be divided; the 50 dB difference corresponds to a sound pressure roughly 316 times higher).

————————————-

(Compressor in Cubase)

  • Threshold and Ratio

What’s the ideal threshold to use? This depends on what you are trying to accomplish. Suppose you set the threshold at a relatively high value (for example – 10 dB in a good mix). In this case the compressor will be inactive for most of the song and only kick in during the hook and short peaks. With the threshold set to a high value, you are thus “taking the top off”. This would be a suitable choice if you are happy with the dynamics in general, but would like to make the mix less aggressive.

What about low thresholds (such as -25 dB in a good mix)? In this case the compressor will be active for the most part of the song and will make the entire song quite dense. This is something to consider if you aim to really push the loudness of the song. Once the mix is dense, you can go for a high effective volume. But a low threshold compression can also add warmth to a ballad, so it’s not necessarily a tool restricted to usage in the loudness war.

Onto the ratio. If you set the ratio to a high value (such as 5:1 and higher), you are basically telling the mix: to the threshold and no further. Anything past the threshold will be heavily compressed, which is great if you have pushy peaks that make a mix overly aggressive. This could be the result of a snare that’s way too loud or an inexperienced singer. Whatever the cause, a carefully chosen threshold and a high ratio should take care of it in a satisfying manner. Note though that in this case the compressor should be applied to the track that is causing the problem and not the entire mix.

A low value for the ratio (such as 2:1 or smaller) will have a rather subtle effect. Such values are perfect if you want to apply the compressor to a mix that already sounds well and just needs a finishing touch. The mix will become a little more dense, but its character will be kept intact.

  • Attack and Release

There are two important parameters we have ignored so far: the attack and release. The attack parameter allows you to specify how quickly the compressor sets in once the volume level goes past the threshold. A compressor with a long attack (20 milliseconds or more) will let short peaks pass. As long as these peaks are not over-the-top, this is not necessarily a bad thing. The presence of short peaks, also called transients, is important for a song’s liveliness and natural sound. A long attack makes sure that these qualities are preserved and that the workings of the compressor are less noticeable.

A short attack (5 milliseconds or less) can produce a beautifully crisp sound that is suitable for energetic music. But it is important to note that if the attack is too short, the compressor will kill the transients and the whole mix will sound flat and bland. Even worse, a short attack can lead to clicks and a nervous “pumping effect”. Be sure to watch out for those as you shorten the attack.

The release is the time for the compressor to become inactive once the volume level goes below the threshold. It is usually much longer than the attack, but the overall principles are similar. A long release (600 milliseconds or more) will make sure that the compression happens in a more subtle fashion, while a short release (150 milliseconds or less) can produce a pumping sound.

It is always a good idea to choose the release so that it fits the rhythm of your song (the same of course is true for temporal parameters in reverb and delay). One way to do this is to calculate the time per beat TPB in milliseconds from your song’s tempo as measured in beats per minute BPM and use this value as the point of reference.

TPB [ms] = 60000 / BPM

For example, in a song with the tempo BPM = 120 the duration of one beat is TPB = 60000 / 120 = 500 ms. If you need a longer release, use a multiple of it (1000 ms, 1500 ms, and so on), for a shorter release divide it by any natural number (500 ms / 2 = 250 ms, 500 ms / 3 = 167 ms, and so on). This way the compressor will “breathe” in unison with your music.
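If you’d rather not do this by hand, a tiny Python helper (the function name is my own invention) lists a few rhythm-friendly release times for a given tempo:

def release_suggestions(bpm):
    # time per beat in ms, plus some divisions and multiples of it
    tpb = 60000.0 / bpm
    return [tpb / 3, tpb / 2, tpb, 2 * tpb, 3 * tpb]

print(release_suggestions(120))  # roughly [167, 250, 500, 1000, 1500] ms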

If you are not sure where to start regarding attack and release, just make use of the 20/200-rule: Set the attack to 20 ms, the release to 200 ms and work towards the ideal values from there. Alternatively, you can always go through the presets of the compressor to find suitable settings.

 

You can learn about advanced compression techniques as well as other effects from Audio Effects, Mixing and Mastering, available for Kindle for $ 3.95.

Recurrence Relations – A Simple Explanation And Much More

Recurrence relations are a powerful tool for mathematical modeling and numerically solving differential equations (no matter how complicated). And as luck would have it, they are relatively easy to understand and apply. So let’s dive right into it using a purely mathematical example (for clarity) before looking at a real-world application.

This equation is a typical example of a recurrence relation:

x(t+1) = 5 * x(t) + 2 * x(t-1)

At the heart of the equation is a certain quantity x. It appears three times: x(t+1) stands for the value of this quantity at a time t+1 (next month), x(t) for the value at time t (current month) and x(t-1) the value at time t-1 (previous month). So what the relation allows us to do is to determine the value of said quantity for the next month, given that we know it for the current and previous month. Of course the choice of time span here is just arbitrary, it might as well be a decade or nanosecond. What’s important is that we can use the last two values in the sequence to determine the next value.

Suppose we start with x(0) = 0 and x(1) = 1. With the recurrence relation we can continue the sequence step by step:

x(2) = 5 * x(1) + 2 * x(0) = 5 * 1 + 2 * 0 = 5

x(3) = 5 * x(2) + 2 * x(1) = 5 * 5 + 2 * 1 = 27

x(4) = 5 * x(3) + 2 * x(2) = 5 * 27 + 2 * 5 = 145

And so on. Once we’re given the “seed”, determining the sequence is not that hard. It’s just a matter of plugging in the last two data points and doing the calculation. The downside to defining a sequence recursively is that if you want to know x(500), you have to go through hundreds of steps to get there. Luckily, this is not a problem for computers.
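Here is a minimal Python sketch of exactly this procedure, continuing the sequence from the two seed values:

def continue_sequence(seed, steps):
    # seed holds x(0) and x(1); every new value follows the
    # recurrence relation x(t+1) = 5 * x(t) + 2 * x(t-1)
    x = list(seed)
    for t in range(1, steps):
        x.append(5 * x[t] + 2 * x[t - 1])
    return x

print(continue_sequence([0, 1], 4))  # [0, 1, 5, 27, 145]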

In the most general terms, a recurrence relation relates the value of quantity x at a time t + 1 to the values of this quantity x at earlier times. The time itself could also appear as a factor. So this here would also be a legitimate recurrence relation:

x(t+1) = 5 * t * x(t) – 2 * x(t-10)

Here we calculate the value of x at time t+1 (next month) by its value at a time t (current month) and t – 10 (ten months ago). Note that in this case you need eleven seed values to be able to continue the sequence. If we are only given x(0) = 0 and x(10) = 50, we can do the next step:

x(11) = 5 * 10 * x(10) – 2 * x(0) = 5 * 10 * 50 – 2 * 0 = 2500

But we run into problems after that:

x(12) = 5 * 11 * x(11) – 2 * x(1) = 5 * 11 * 2500 – 2 * x(1) = ?

We already calculated x(11), but there’s nothing we can do to deduce x(1).

Now let’s look at one interesting application of such recurrence relations, modeling the growth of animal populations. We’ll start with a simple model that relates the number of animals x in the next month t+1 to the number of animals x in the current month t as such:

x(t+1) = x(t) + f * x(t)

The factor f is a constant that determines the rate of growth (to be more specific: its value is the decimal percentage change from one month to the next). So if our population grows by 25 % each month, we get:

x(t+1) = x(t) + 0.25 * x(t)

If we start with x(0) = 100 rabbits at month t = 0 we get:

x(1) = x(0) + 0.25 * x(0) = 100 + 0.25 * 100 = 125 rabbits

x(2) = x(1) + 0.25 * x(1) = 125 + 0.25 * 125 = 156 rabbits

x(3) = x(2) + 0.25 * x(2) = 156 + 0.25 * 156 = 195 rabbits

x(4) = x(3) + 0.25 * x(3) = 195 + 0.25 * 195 = 244 rabbits

x(5) = x(4) + 0.25 * x(4) = 244 + 0.25 * 244 = 305 rabbits

And so on. Maybe you already see the main problem with this exponential model: it just keeps on growing. This is fine as long as the population is small and the environment rich in resources, but every environment has its limits. Let’s fix this problem by including an additional term in the recurrence relation that will lead to this behavior:

– Exponential growth as long as the population is small compared to the capacity
– Slowing growth near the capacity
– No growth at capacity
– Population decline when over the capacity

How can we translate this into mathematics? It takes a lot of practice to be able to tweak a recurrence relation to get the behavior you want. You just learned your first chord and I’m asking you to play Mozart, that’s not fair. But take a look at this bad boy:

x(t+1) = x(t) + a * x(t) * (1 – x(t) / C)

This is called the logistic model and the constant C represents said capacity. If x is much smaller than the capacity C, the ratio x / C will be close to zero and we are left with exponential growth:

x(t+1) ≈ x(t) + a * x(t) * (1 – 0)

x(t+1) ≈ x(t) + a * x(t)

So this admittedly complicated-looking recurrence relation fulfils our first demand: exponential growth for small populations. What happens if the population x reaches the capacity C? Then all growth should stop. Let’s see if this is the case. With x = C, the ratio x / C is obviously equal to one, and in this case we get:

x(t+1) = x(t) + a * x(t) * (1 – 1)

x(t+1) = x(t)

The number of animals remains constant, just as we wanted. Last but not least, what happens if (for some reason) the population gets past the capacity, meaning that x is greater than C? In this case the ratio x / C is greater than one (let’s just say x / C = 1.2 for the sake of argument):

x(t+1) = x(t) + a * x(t) * (1 – 1.2)

x(t+1) = x(t) + a * x(t) * (- 0.2)

The second term is now negative and thus x(t+1) will be smaller than x(t) – a decline back to capacity. What an enormous amount of beautiful behavior in such a compact line of mathematics! This is where the power of recurrence relations comes to light. Anyways, let’s go back to our rabbit population. We’ll let them grow by 25 % (a = 0.25), but this time on an island that can only sustain 300 rabbits at most (C = 300). Thus the model looks like this:

x(t+1) = x(t) + 0.25 * x(t) * (1 – x(t) / 300)

If we start with x(0) = 100 rabbits at month t = 0 we get:

x(1) = 100 + 0.25 * 100 * (1 – 100 / 300) = 117 rabbits

x(2) = 117 + 0.25 * 117 * (1 – 117 / 300) = 135 rabbits

x(3) = 135 + 0.25 * 135 * (1 – 135 / 300) = 153 rabbits

x(4) = 153 + 0.25 * 153 * (1 – 153 / 300) = 172 rabbits

x(5) = 172 + 0.25 * 172 * (1 – 172 / 300) = 190 rabbits

Note that now the growth is almost linear rather than exponential and will slow down further the closer we get to the capacity (continue the sequence if you like, it will gently approach 300, but never go past it).
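
If you don’t feel like iterating by hand, here’s a minimal Python sketch of the logistic model. Because we round to whole rabbits at each step, individual values may differ from the ones above by a rabbit:

# Logistic growth: x(t+1) = x(t) + a * x(t) * (1 - x(t) / C).
a, C = 0.25, 300  # growth rate and capacity of the island
x = 100           # x(0) = 100 rabbits
for t in range(1, 25):
    x = round(x + a * x * (1 - x / C))
    print(f"x({t}) = {x} rabbits")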

We can even go further and include random events in a recurrence relation. Let’s stick to the rabbits and their logistic growth and say that there’s a p = 5 % chance that in a certain month a flood occurs. If this happens, the population will halve. If no flood occurs, it will grow logistically as usual. This is what our new model looks like in mathematical terms:

x(t+1) = x(t) + 0.25 * x(t) * (1 – x(t) / 300)    if no flood occurs

x(t+1) = 0.5 * x(t)    if a flood occurs

To determine whether there’s a flood, we let a random number generator spit out a number between 1 and 100 at each step. If the number is 5 or smaller, we use the “flood” equation (in accordance with the 5 % chance for a flood). Again we turn to our initial population of 100 rabbits, with the growth rate and capacity unchanged:

x(1) = 100 + 0.25 * 100 * (1 – 100 / 300) = 117 rabbits

x(2) = 117 + 0.25 * 117 * (1 – 117 / 300) = 135 rabbits

x(3) = 135 + 0.25 * 135 * (1 – 135 / 300) = 153 rabbits

x(4) = 0.5 * 153 = 77 rabbits

x(5) = 77 + 0.25 * 77 * (1 – 77 / 300) = 91 rabbits

As you can see, in this run the random number generator gave a number of 5 or smaller during the fourth step. Accordingly, the number of rabbits halved. You can do a lot of shenanigans (and some useful stuff as well) with recurrence relations and random numbers; the sky’s the limit. I hope this quick overview was helpful.
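
Here’s a minimal Python sketch of this flood model. Since the floods are random, every run produces a different population history:

import random

# Logistic growth with a 5 % chance per month of a flood that
# halves the population; growth rate and capacity as before.
a, C = 0.25, 300
x = 100  # x(0) = 100 rabbits
for t in range(1, 13):
    if random.randint(1, 100) <= 5:  # flood occurs (5 % chance)
        x = round(0.5 * x)
    else:                            # ordinary logistic growth
        x = round(x + a * x * (1 - x / C))
    print(f"x({t}) = {x} rabbits")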

A note for the advanced: here’s how you turn a differential equation into a recurrence relation. Let’s take this differential equation:

dx/dt = a * x * exp(- b*x)

First multiply by dt:

dx = a * x * exp(- b * x) * dt

We set dx (the change in x) equal to x(t+h) – x(t) and dt (change in time) equal to a small constant h. Of course for x we now use x(t):

x(t+h) – x(t) = a * x(t) * exp(- b * x(t)) * h

Solve for x(t+h):

x(t+h) = x(t) + a * x(t) * exp(- b * x(t)) * h

And done! The smaller your h, the more accurate your numerical results. How low you can go depends on your computer’s computing power.
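
By the way, this procedure is known as the explicit Euler method. Here’s a minimal Python sketch of it for the differential equation above; the values for a, b, the initial condition and the step size h are chosen purely for illustration:

import math

# Step x(t+h) = x(t) + a * x(t) * exp(-b * x(t)) * h forward in time.
a, b = 1.0, 0.01   # equation parameters (illustrative values)
h = 0.001          # step size: the smaller h, the more accurate
x, t = 10.0, 0.0   # initial condition x(0) = 10 (illustrative)
while t < 5.0:
    x += a * x * math.exp(-b * x) * h
    t += h
print(f"x({t:.1f}) is approximately {x:.2f}")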

Released Today for Kindle: Physics! In Quantities and Examples

I finally finished and released my new ebook … it took me longer than usual because I kept finding interesting new topics while researching. Here’s the blurb, link and TOC:

This book is a concept-focused and informal introduction to the field of physics that can be enjoyed without any prior knowledge. Step by step and using many examples and illustrations, the most important quantities in physics are gently explained, from length and mass, through energy and power, all the way to voltage and magnetic flux. The mathematics in the book is strictly limited to basic high school algebra to allow anyone to get in and to ensure that the focus always remains on the core physical concepts.

(Click cover to get to the Amazon Product Page)


Table of Contents:

Length
(Introduction, From the Smallest to the Largest, Wavelength)

Mass
(Introduction, Mass versus Weight, From the Smallest to the Largest, Mass Defect and Einstein, Jeans Mass)

Speed / Velocity
(Introduction, From the Smallest to the Largest, Faster than Light, Speed of Sound for all Purposes)

Acceleration
(Introduction, From the Smallest to the Largest, Car Performance, Accident Investigation)

Force
(Introduction, Thrust and the Space Shuttle, Force of Light and Solar Sails, MoND and Dark Matter, Artificial Gravity and Centrifugal Force, Why do Airplanes Fly?)

Area
(Introduction, Surface Area and Heat, Projected Area and Planetary Temperature)

Pressure
(Introduction, From the Smallest to the Largest, Hydraulic Press, Air Pressure, Magdeburg Hemispheres)

Volume
(Introduction, Poisson’s Ratio)

Density
(Introduction, From the Smallest to the Largest, Bulk Density, Water Anomaly, More Densities)

Temperature
(Introduction, From the Smallest to the Largest, Thermal Expansion, Boiling, Evaporation is Cool, Why Blankets Work, Cricket Temperature)

Energy
(Introduction, Impact Speed, Ice Skating, Dear Radioactive Ladies and Gentlemen!, Space Shuttle Reentry, Radiation Exposure)

Power
(Introduction, From the Smallest to the Largest, Space Shuttle Launch and Sound Suppression)

Intensity
(Introduction, Inverse Square Law, Absorption)

Momentum
(Introduction, Perfectly Inelastic Collisions, Recoil, Hollywood and Physics, Force Revisited)

Frequency / Period
(Introduction, Heart Beat, Neutron Stars, Gravitational Redshift)

Rotational Motion
(Extended Introduction, Moment of Inertia – The Concept, Moment of Inertia – The Computation, Conservation of Angular Momentum)

Electricity
(Extended Introduction, Stewart-Tolman Effect, Piezoelectricity, Lightning)

Magnetism
(Extended Introduction, Lorentz Force, Mass Spectrometers, MHD Generators, Earth’s Magnetic Field)

Appendix:
Scalar and Vector Quantities
Measuring Quantities
Unit Conversion
Unit Prefixes
References
Copyright and Disclaimer

As always, I discounted the book in countries with a low GDP because I think that education should be accessible to all people. Enjoy!

How To Calculate Maximum Car Speed + Examples (Mercedes C-180, Bugatti Veyron)

How do you determine the maximum possible speed your car can go? Well, one rather straightforward option is to just get into your car, go on the Autobahn and push down the pedal until the needle stops moving. The problem with this option is that there’s not always an Autobahn nearby. So we need to find another way.

Luckily, physics can help us out here. You probably know that whenever a body is moving at constant speed, there must be a balance of forces in play. The force that is aiming to accelerate the object is exactly balanced by the force that wants to decelerate it. Our first job is to find out what forces we are dealing with.

Obvious candidates for the retarding forces are ground friction and air resistance. However, in our case looking at the latter is sufficient, since at high speeds air resistance becomes the dominant factor. This makes things considerably easier for us. So how can we calculate air resistance?

To compute air resistance we need to know several inputs. One of these is the air density D (in kg/m³), which at sea level has the value D = 1.25 kg/m³. We also need to know the projected area A (in m²) of the car, which is just the product of width times height. Of course there’s also the dependence on the velocity v (in m/s) relative to the air. The formula for the drag force is:

F = 0.5 · c · D · A · v²

with c (dimensionless) being the drag coefficient. This is the one quantity in this formula that is tough to determine. You probably don’t know this value for your car and there’s a good chance you will never find it out even if you try. In general, you want to have this value as low as possible.

On ecomodder.com you can find a table of drag coefficients for many common modern car models. Excluding prototype models, the drag coefficient in this list ranges from c = 0.25 for the Honda Insight to c = 0.58 for the Jeep Wrangler TJ Soft Top. The average value is c = 0.33. As a first approximation you can estimate your car’s drag coefficient by placing it in this range depending on how streamlined it looks compared to the average car.

Using the relation “power equals force times speed”, we can extend the above formula to find out how much power (in W) we need to provide to counter the air resistance at a certain speed:

P = F · v = 0.5 · c · D · A · v³

Of course we can also reverse this equation. Given that our car is able to provide a certain amount of power P, this is the maximum speed v we can achieve:

v = ( 2 · P / (c · D · A) )^(1/3)

From the formula we can see that the top speed grows with the cube root of the car’s power, meaning that when we increase the power eightfold, the maximum speed only doubles (since 8^(1/3) = 2). So even a slight increase in top speed has to be bought with a significant increase in power output.

Note that we have to input the power in the standard physical unit watt rather than the often used unit horsepower. Luckily the conversion is very easy: just multiply horsepower by 746 to get watt.
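
If you want to spare yourself the manual arithmetic, here’s a minimal Python sketch of the formula; the function name and the default air density are my own choices:

# Top speed v = (2 * P / (c * D * A))^(1/3), returned in km/h.
def top_speed_kmh(power_hp, c, width_m, height_m, air_density=1.25):
    P = power_hp * 746      # convert horsepower to watt
    A = width_m * height_m  # projected area in m^2
    v = (2 * P / (c * air_density * A)) ** (1 / 3)  # speed in m/s
    return 3.6 * v          # convert m/s to km/h

print(top_speed_kmh(143, 0.29, 1.77, 1.45))   # about 220 km/h
print(top_speed_kmh(1200, 0.35, 2.00, 1.19))  # about 431 km/h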

Let’s put the formula to the test.

—————————

I drive a ten-year-old Mercedes C180 Compressor. According to the Mercedes-Benz homepage, its drag coefficient is c = 0.29 and its power P = 143 HP ≈ 106,680 W. Its width and height are w = 1.77 m and h = 1.45 m respectively. What is the maximum possible speed?

First we need the projected area of the car:

A = 1.77 m · 1.45 m ≈ 2.57 m²

Now we can use the formula:

v = ( 2 · 106,680 / (0.29 · 1.25 · 2.57) )^(1/3)

v ≈ 61.2 m/s ≈ 220.3 km/h ≈ 136.9 mph

From my experience on the Autobahn, this seems very realistic. You can reach 200 km/h quite easily, but the acceleration is already noticeably weaker at that point.

If you ever get the chance to visit Germany, make sure to rent a ridiculously fast sports car (you can rent a Porsche 911 Carrera for as little as 200 $ per day) and find a nice section on the Autobahn with unlimited speed. But remember: unless you’re overtaking, always use the right lane. The left lanes are reserved for overtaking. Never overtake on the right side, nobody will expect you there. And make sure to check the rear-view mirror often. You might think you’re going fast, but there’s always someone going even faster. Let them pass. Last but not least, stay focused and keep your eyes on the road. Traffic jams can appear out of nowhere and you don’t want to end up in the back of a truck at these speeds.

—————————

The fastest production car at the present time is the Bugatti Veyron Super Sport. It has a drag coefficient of c = 0.35, width w = 2 m, height h = 1.19 m and power P = 1200 HP = 895,200 W. Let’s calculate its maximum possible speed:

v = ( 2 · 895,200 / (0.35 · 1.25 · 2 · 1.19) )^(1/3)

v ≈ 119.8 m/s ≈ 431.3 km/h ≈ 268.0 mph

Does this seem unreasonably high? It does. But the car has actually been recorded going 431 km/h, so we are right on target. If you’d like to purchase this car, make sure you have $ 4,000,000 in your bank account.

—————————

This was an excerpt from the ebook More Great Formulas Explained.

Check out my BEST OF for more interesting physics articles.

Sources:

http://ecomodder.com/wiki/index.php/Vehicle_Coefficient_of_Drag_List

http://www.mercedes-benz.de/content/germany/mpc/mpc_germany_website/de/home_mpc/passengercars/home/_used_cars/technical_data.0006.html

http://www.carfolio.com/specifications/models/car/?car=218999

Released Today: More Great Formulas Explained (Ebook for Kindle)

I’m happy to announce that today I’ve released the second volume of the series “Great Formulas Explained”. The aim of the series is to gently explain the greatest formulas the fields of physics, mathematics and economics have brought forth. It is suitable for high-school students, freshmen and anyone else with a keen interest in science. I had a lot of fun writing the series and edited both volumes thoroughly, including double-checking all sources and calculations.

Here are the contents of More Great Formulas Explained:

  • Part I: Physics

Law Of The Lever
Sliding and Overturning
Maximum Car Speed
Range Continued
Escape Velocity
Cooling and Wind-Chill
Adiabatic Processes
Draining a Tank
Open-Channel Flow
Wind-Driven Waves
Sailing
Heat Radiation
Main Sequence Stars
Electrical Resistance
Strings and Sound

  • Part II: Mathematics

Cylinders
Arbitrary Triangles
Summation
Standard Deviation and Error
Zipf Distribution

  • Part III: Appendix

Unit Conversion
Unit Prefixes
References
Copyright and Disclaimer
Request to the Reader

I will post excerpts in the days to come. If you are interested, click the cover to get to the Amazon product page. Since I’m enrolled in the KDP Select program, the book is exclusively available through Amazon for a constant price of $ 2.99; I will not be offering it through any other retailers in the near future.

Remember what Benjamin Franklin once said: “Knowledge pays the best interest”. An investment in education (be that time or money) can never be wrong. Knowledge is a powerful tool to make you free and independent. I hope I can contribute to bringing knowledge to people all over the world. In the spirit of this, I have permanently discounted this book, as well as volume I, in India.

NASA’s O-Ring Problem and the Challenger Disaster

In January 1986 the world watched in shock as the Challenger Space Shuttle, on its way to carry the first schoolteacher to space, broke apart just 73 seconds after lift-off. A presidential commission later determined that an O-ring failure in the Solid Rocket Booster (SRB) caused the disaster. This was not a new problem; there’s a long history of issues with the O-rings leading up to Challenger’s loss.

Before the Space Shuttle was declared operational, it performed four test flights to space and back. The first O-ring anomaly occurred on the second test flight, named STS-2 (November 1981). After each flight Thiokol, the company in charge of manufacturing the SRBs, sent a team of engineers to inspect the retrieved boosters. The engineers found that the primary O-ring had eroded by 0.053”. The secondary O-ring, which serves as a back-up for the primary O-ring, showed no signs of erosion. On further inspection the engineers also discovered that the putty protecting the O-rings from the hot gas inside the SRB had blow-holes.

Luckily, the O-rings sealed the SRB despite the erosion. Simulations done by engineers after the STS-2 O-ring anomaly showed that even with 0.095” erosion the primary O-ring would perform its duty up to a pressure of 3000 psi (the pressure inside the SRB only goes up to about 1000 psi). And if the erosion was even stronger, the secondary O-ring could still finish the job. So neither Thiokol nor NASA, neither engineers nor managers, considered the problem to be critical. After the putty composition was slightly altered to prevent blow-holes from forming, the problem was considered solved. The fact that no erosion occurred on the following flights seemed to confirm this.

On STS-41-B (February 1984), the tenth Space Shuttle mission including the four test flights, the anomaly surfaced again. This time two primary O-rings were affected and there were again blow-holes in the putty. However, the erosion was within the experience base (the 0.053” that occurred on STS-2) and within the safety margin (the 0.095” resulting from simulations). So neither Thiokol nor NASA was alarmed over this.

Engineers realized that it was the leak check that caused the blow-holes in the putty. The leak check was an important tool to confirm that the O-rings were properly positioned. It was done by injecting pressurized air into the space between the primary and secondary O-ring. Initially a pressure of 50 psi was used, but this was increased to 200 psi prior to STS-41-B to make the test more reliable. After this change, O-ring erosion occurred more frequently and became a normal aspect of Space Shuttle flights.

On STS-41-C (April 1984), the eleventh overall mission, there was again primary O-ring erosion within the experience base and safety margin. The same was true for STS-41-D (August 1984), the mission following STS-41-C. This time however a new problem accompanied the known erosion anomaly. Engineers found a small amount of soot behind the primary O-ring, meaning that hot gas was able to get through before the O-ring sealed. There was no impact on the secondary O-ring. This blow-by was determined to be an acceptable risk and the flights continued.

The second case of blow-by occurred on STS-51-C (January 1985), the fifteenth mission. There was erosion and blow-by on two primary O-rings, and the blow-by was worse than before. It was the first time that hot gas had reached the secondary O-ring, luckily without causing any erosion. It was also the first time that temperature was discussed as a factor. STS-51-C was launched at 66 °F, and the night before, the temperature had dropped to an unusually low 20 °F. So the Space Shuttle and its components were even colder than the 66 °F air temperature. Estimates by Thiokol engineers put the O-ring temperature at launch at around 53 °F. Since rubber gets harder at low temperatures, low temperatures might reduce the O-rings’ sealing capability. But there was no hard data to back this conclusion up.

Despite the escalation of O-ring anomalies, the risk was again determined to be acceptable, by Thiokol as well as by NASA. The rationale behind this decision was:

  • Experience Base: All primary O-ring erosions that occurred after STS-2 were within the 0.053” experience base.

  • Safety Margin: Even with 0.095” erosion the primary O-ring would seal.

  • Redundancy: If the primary O-ring failed, the secondary O-ring would seal.

The following missions saw more escalation of the problem. On STS-51-D (early April 1985), carrying the first politician to space, primary O-ring erosion reached an unprecedented 0.068”. This was outside the experience base, but still within the safety margin. And on STS-51-B (late April 1985) a primary O-ring eroded by 0.171”, significantly outside experience base and safety margin. It practically burned through. On top of that, the Space Shuttle saw its first case of secondary O-ring erosion (0.032”).

Post-flight analysis showed that the burnt-through primary O-ring on STS-51-B was not properly positioned, which led to changes in the leak check procedure. Simulations showed that O-ring erosion could go up to 0.125” before the ability to seal would be lost and that under worst case conditions the secondary O-ring would erode by no more than 0.075”. So it seemed impossible that the secondary O-ring could fail and the risk again was declared acceptable. Also, the fact that the O-ring temperature at STS-51-B’s launch was 75 °F seemed to contradict the temperature effect.

Despite these reassurances, concerns escalated and O-ring task forces were established at Thiokol and Marshall (responsible for the Solid Rocket Motor). Space Shuttle missions continued while engineers were looking for short- and long-term solutions.

On the day of STS-51-L’s launch (January 1986), the twenty-fifth Space Shuttle mission, the temperature was expected to drop to the low 20s. Prior to launch a telephone conference was organized to discuss the effects of low temperatures on O-ring sealing. Present at the conference were engineers and managers from Thiokol, Marshall and NASA. Thiokol engineers raised concerns that the seal might fail, but were not able to present any conclusive data. Despite that, Thiokol management went along with the engineers’ position and decided not to recommend launch for temperatures below 53 °F.

The facts that there was no conclusive data supporting this new launch criterion, that Thiokol had not raised these concerns before, and that just three weeks earlier Thiokol had recommended launch for STS-61-C at 40 °F caused outrage at Marshall and NASA. Thiokol then went off-line to discuss the matter, and management changed its position despite the warnings of its engineers. After 30 minutes the telcon resumed and Thiokol gave its go to launch the Challenger Space Shuttle. Shortly after lift-off the O-rings failed, hot gas leaked out of the SRB and the shuttle broke apart.

If you’d like to know more, check out this great book (which served as the source for this post):

The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA

Here you can find a thorough accident investigation report by NASA:

For the broader picture you can check out this great documentary:

More on Space Shuttles in general can be found here: Space Shuttle Launch and Sound Suppression.

Amazon Plans to Use Drones to Deliver Packages

Usually I don’t post news on my blog, but this sounds like a fantastic idea. Amazon is testing drones that could deliver packages of up to 5 pounds per flight (which covers 86 % of all Amazon sales). The service, called Prime Air, could be available within five years if the ongoing series of tests is successful and the necessary FAA permissions are obtained. As an ebook author, I do wonder what impact this will have on the ebook market. Will people go back to print?

Amazon: “Amazon customer service, how may we help you?”

Customer: “Your damned drone put my package on the roof again!”

Here’s a picture of Amazon’s “Octocopter”:

(Taken from regmedia.co.uk)

You can find more info on BBC: http://www.bbc.co.uk/news/technology-25180906