New E-Book Release: Math Concepts Everyone Should Know (And Can Learn)

Well, a few weeks ago I broke my toe, which meant that I was forced to sit in my room hour after hour and think about what to do. Luckily, I found something to keep me busy: writing a new math book. The book is called Math Concepts Everyone Should Know (And Can Learn) and was completed yesterday. It is now live on Amazon (click the cover to get to the product page) for the low price of $ 2.99 and can be read without any prior knowledge of mathematics.


I must say that I’m very pleased with the result and I hope you will enjoy the learning experience. The topics are:

– Combinatorics and Probability
– Binomial Distribution
– Banzhaf Power Index
– Trigonometry
– Hypothesis Testing
– Bijective Functions And Infinity

What are Functions? Excerpt from “Math Dialogue: Functions”


What are functions? I could just insert the standard definition here, but I fear that this might not be the best approach for those who have just started their journey into the fascinating world of mathematics. For one, any common textbook will include the definition, so if that’s all you’re looking for, you don’t need to continue reading here. Secondly, it is much more rewarding to build towards the definition step by step. This approach minimizes the risk of developing deficits and falling prey to misunderstandings.


So where do we start?


We have two options here. We could take the intuitive, concept-focused approach or the more abstract, mathematically rigorous path. My recommendation is to go down both roads, starting with the more intuitive approach and taking care of the strict details later on. This will allow you to get familiar with the concept of the function and apply it to solve real-world problems without first delving into sets, Cartesian products, and relations and their properties.


Sounds reasonable.


Then let’s get started. For now we will think of a function as a mathematical expression into which we can insert the value of one quantity x and which then spits out the value of another quantity y. So it’s basically an input-output system.


Can you give me an example of this?


Certainly. Here is a function: y = 2·x + 4. As you can see, there are two variables in there, the so-called independent variable x and the dependent variable y. The variable x is called independent because we are free to choose any value for it. Once a value is chosen, we do what the mathematical expression tells us to do, in this case multiply two by the value we have chosen for x and add four to that. The result of this is the corresponding value of the dependent variable y.


So I can choose any value for x?


That’s right, try out any value.


Okay, I’ll set x = 1 then. When I insert this into the expression I get: y = 2·1 + 4 = 6. What does this tell me?


This calculation tells you that the function y = 2·x + 4 links the input x = 1 with the output y = 6. Go on, try out another input.


Okay, can I use x = 0?


Certainly. Any real number works here.


For x = 0 I get y = 2·0 + 4 = 4. So the function y = 2·x + 4 links the input x = 0 with the output y = 4.


That’s right. Now it should be clear why x is called the independent variable and y the dependent variable. While you may choose any real number for x (sometimes there are common sense restrictions though; we’ll get to that later), the value of y is determined by the form of the function. A few more words on terminology and notation. Sometimes the output is also called the value of the function. We’ve just found that the function y = 2·x + 4 links the input x = 1 with the output y = 6. We could restate that as follows: at x = 1 the function takes on the value y = 6. The other input-output pair we found was x = 0 and y = 4. In other words: at x = 0 the value of the function is y = 4. Keep that in mind.

As for notation, it is very common to use f(x) instead of y. This emphasizes that the expression we are dealing with should be interpreted as a function of the independent variable x. It also allows us to note the input-output pairs in a more compact fashion by including specific values of x in the bracket. Here’s what I mean.

For the function we can write: f(x) = 2·x + 4. Inserting x = 1 we get: f(1) = 2·1 + 4 = 6 or, omitting the calculation, f(1) = 6. The latter is just a very compact way of saying that for x = 1 we get the output y = 6. In a similar manner we can write f(0) = 4 to state that at x = 0 the function takes on the value y = 4. Please insert another value for x using this notation.


Will do. I’ll choose x = -1. Using this value I get: f(-1) = 2·(-1) + 4 = 2 or in short f(-1) = 2. So at x = -1 the value of the function is y = 2. Is all of this correct?


Yes, that’s correct.
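The input-output pairs computed so far translate directly into code. Here is a minimal sketch in Python (purely illustrative, not part of the dialogue), with the function name f mirroring the notation above:

```python
def f(x):
    """The function f(x) = 2*x + 4: multiply the input by 2, then add 4."""
    return 2 * x + 4

# Each input is linked with exactly one output:
print(f(1))   # at x = 1 the function takes on the value 6
print(f(0))   # at x = 0 the function takes on the value 4
print(f(-1))  # at x = -1 the function takes on the value 2
```

Calling f(1) carries out exactly the steps described above: multiply two by the chosen value of x and add four.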


You just mentioned that sometimes there are common sense restrictions for the independent variable x. Can I see an example of this?


Okay, let’s get to this right now. Consider the function f(x) = 1/x. Please insert the value x = 1.


For x = 1 I get f(1) = 1/1 = 1. So is it a problem that the output is the same as the input?


Not at all, at x = 1 the function f(x) = 1/x takes on the value y = 1 and this is just fine. The input x = 2 also works well: f(2) = 1/2, so x = 2 is linked with the output y = 1/2. But we will run into problems when trying to insert x = 0.


I see, division by zero. For x = 0 we have f(0) = 1/0 and this expression makes no sense.


That’s right, division by zero is strictly verboten. So whenever an input x would lead to division by zero, we have to rule it out. Let’s state this a bit more elegantly. Every function has a domain. This is just the set of all inputs for which the function produces a real-valued output. For example, the domain of the function f(x) = 2·x + 4 is the set of all real numbers since we can insert any real number x without running into problems. The domain of the function f(x) = 1/x is the set of all real numbers with the number zero excluded since we can use all real numbers as inputs except for zero.

Can you see why the domain of the function f(x) = 1/(3·x – 12) is the set of all real numbers excluding the number four? If it is not obvious, try to insert x = 4.


Okay, for x = 4 I get f(4) = 1/(3·4 – 12) = 1/0. Oh yes, division by zero again.


Correct. That’s why we say that the domain of the function f(x) = 1/(3·x – 12) is the set of all real numbers excluding the number four. Any input x works except for x = 4. So whenever there’s an x somewhere in the denominator, watch out for this. Sometimes we have to exclude inputs for other reasons, too. Consider the function f(x) = sqrt(x). The abbreviation “sqrt” refers to the square root of x. Please compute the value of the function for the inputs x = 0, x = 1 and x = 2.


Will do.

f(0) = sqrt(0) = 0

At x = 0 the value of the function is 0.

f(1) = sqrt(1) = 1

At x = 1 the value of the function is 1.

f(2) = sqrt(2) = 1.4142 …

At x = 2 the value of the function is 1.4142 … All of this looks fine to me. Or is there a problem here?


No problem at all. But now try x = -1.


Okay, f(-1) = sqrt(-1) = … Oh, seems like my calculator spits out an error message here. What’s going on?


Seems like your calculator knows math well. There is no real-valued square root of a negative number. Think about it. We say that the square root of the number 4 is 2 because when you multiply 2 by itself you get 4. Note that 4 has another square root, and for the same reason: when you multiply -2 by itself, you also get 4, so -2 is also a square root of 4.

Let’s choose another positive number, say 9. The square root of 9 is 3 because when you multiply 3 by itself you get 9. Another square root of 9 is -3 since multiplying -3 by itself leads to 9. So far so good, but what is the square root of -9? Which number can you multiply by itself to produce -9?


Hmmm … 3 doesn’t work since 3 multiplied by itself is 9, -3 also doesn’t work since -3 multiplied by itself is 9. Looks like I can’t think of any number I could multiply by itself to get the result -9.


That’s right, no such real number exists. In other words: there is no real-valued square root of -9. Actually, no negative number has a real-valued square root. That’s why your calculator complained when you gave it the task of finding the square root of -1. For our function f(x) = sqrt(x) all of this means that inserting an x smaller than zero would lead to a nonsensical result. We say that the domain of the function f(x) = sqrt(x) is the set of all positive real numbers including zero.

In summary, when trying to determine the domain of a function, that is, the set of all inputs that lead to a real-valued output, make sure to exclude any values of x that would a) lead to division by zero or b) produce a negative number under a square root sign. Unless faced with a particularly exotic function, the domain of the function is then simply the set of all real numbers excluding values of x that lead to division by zero and those values of x that produce negative numbers under a square root sign.
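The two rules above can be turned into a small mechanical check. Here is a Python sketch using a made-up example function f(x) = 1/(3·x – 12) + sqrt(x) that combines both cases (the helper name in_domain is my own, not from the text):

```python
def in_domain(x):
    """Is x in the domain of f(x) = 1/(3*x - 12) + sqrt(x)?
    Rule a): the denominator must not become zero.
    Rule b): the expression under the square root must not be negative."""
    if 3 * x - 12 == 0:   # would lead to division by zero
        return False
    if x < 0:             # would put a negative number under the root
        return False
    return True

print(in_domain(4))    # False -- division by zero
print(in_domain(-1))   # False -- negative number under the square root
print(in_domain(2))    # True
```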

I promise we will get back to this, but I want to return to the concept of the function before doing some exercises. Let’s go back to the introductory example: f(x) = 2·x + 4. Please make an input-output table for the following inputs: x = -3, -2, -1, 0, 1, 2 and 3.
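For readers who want to check their answers on a computer, the requested table can be generated with a few lines of Python (a sketch, not part of the original dialogue):

```python
def f(x):
    return 2 * x + 4

# Input-output table for x = -3, -2, -1, 0, 1, 2, 3
print("  x | f(x)")
print("----+-----")
for x in range(-3, 4):
    print(f"{x:3d} | {f(x):3d}")
```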

This was an excerpt from “Math Dialogue: Functions“, available on Amazon for Kindle.

Stellar Physics – Gaps in the Spectrum

The following is an excerpt from my e-book “Introduction to Stars: Spectra, Formation, Evolution, Collapse” available here for Kindle.

When you look at the spectrum of the heat radiation coming from glowing metal, you will find a continuous spectrum, that is, one that does not have any gaps. You would expect to see the same when looking at light coming from a star. However, there is one significant difference: all stellar spectra have gaps (dark lines) in them. In other words: photons of certain wavelengths seem to be either completely missing or at least arriving in much smaller numbers. Aside from these gaps though, the spectrum is just what you’d expect to see when looking at a heat radiator. So what’s going on here? What can we learn from these gaps?


Spectrum of the Sun. Note the pronounced gaps.

To understand this, we need to delve into atomic physics, or to be more specific, look at how atoms interact with photons. Every atom can only absorb or emit photons of specific wavelengths. Hydrogen atoms for example will absorb photons having the wavelength 4102 A, but do not care about photons having a wavelength of 4000 A or 4200 A. Those photons just pass through them without any interaction taking place. Sodium atoms prefer photons with a wavelength of 5890 A; when a photon of wavelength 5800 A or 6000 A comes by, the sodium atom is not at all impressed. This is a property you need to keep in mind: every atom absorbs or emits only photons of specific wavelengths.

Suppose a 4102 A photon hits a hydrogen atom. The atom will then absorb the photon, which in crude terms means that the photon “vanishes” and its energy is transferred to one of the atom’s electrons. The electron is now at a higher energy level. However, this state is unstable. After a very short time, the electron returns to a lower energy level and during this process a new photon appears, again having the wavelength 4102 A. So it seems like nothing was gained or lost. Photon comes in, vanishes, electron gains energy, electron loses energy again, photon of same wavelength appears. This seems pointless, why bother mentioning it? Here’s why. The photon that is created when the electron returns to the lower energy level is emitted in a random direction and not the direction the initial photon came from. This is an important point! We can understand the gaps in a spectrum by pondering the consequences of this fact.

Suppose both of us observe a certain heat source. The light from this source reaches me directly while you see the light through a cloud of hydrogen. Both of us are equipped with a device that generates the spectrum of the incoming light. Comparing the resulting spectra, we would see that they are for the most part the same. This is because most photons pass through the hydrogen cloud without any interaction. Consider for example photons of wavelength 5000 A. Hydrogen does not absorb or emit photons of this wavelength, so we will both record the same light intensity at 5000 A. But what about the photons with a 4102 A wavelength?

Imagine a directed stream of these particular photons passing through the hydrogen cloud. As they get absorbed and re-emitted, they get thrown into random directions. Only those photons which do not encounter a hydrogen atom and those which randomly get thrown in your direction will reach your position. Unless the hydrogen cloud is very thin and has a low density, that’s only a very small part of the initial stream. Hence, your spectrum will show a pronounced gap, a line of low light intensity, at λ = 4102 A while in my spectrum no such gap exists.

What if it were a sodium cloud instead of a hydrogen cloud? Using the same logic, we can see that now your spectrum should show a gap at λ = 5890 A since this is the characteristic wavelength at which sodium atoms absorb and emit photons. And if it were a mix of hydrogen and sodium, you’d see two dark lines, one at λ = 4102 A due to the presence of hydrogen atoms and another one at λ = 5890 A due to the sodium atoms. Of course, and here comes the fun part, we can reverse this logic. If you record a spectrum and you see gaps at λ = 4102 A and λ = 5890 A, you know for sure that the light must have passed through a gas that contains hydrogen and sodium. So the seemingly unremarkable gaps in a spectrum are actually a neat way of determining what elements sit in a star’s atmosphere! This means that by just looking at a star’s spectrum we can not only determine its temperature, but also its chemical composition at the surface. Here are the results for the Sun:

– Hydrogen 73.5 %

– Helium 24.9 %

– Oxygen 0.8 %

– Carbon 0.3 %

– Iron 0.2 %

– Neon 0.1 %

There are also traces (< 0.1 %) of nitrogen, silicon, magnesium and sulfur. This composition is quite typical for other stars and the universe as a whole: lots of hydrogen (the lightest element), a bit of helium (the second lightest element) and very little of everything else. Mathematical models suggest that even though the interior composition changes significantly over the lifetime of a star (the reason being fusion, in particular the transformation of hydrogen into helium), its surface composition remains relatively constant over this time.
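The reversed logic described above — dark lines at characteristic wavelengths betray the elements the light has passed through — boils down to a simple lookup. Here is a Python sketch using only the two lines mentioned in the text (a real analysis would involve many lines per element and measured wavelengths with uncertainties):

```python
# Characteristic absorption wavelengths in Angstrom, taken from the text
LINES = {
    4102: "hydrogen",
    5890: "sodium",
}

def elements_from_gaps(gap_wavelengths):
    """Infer which elements the light passed through from the observed dark lines."""
    return sorted({LINES[w] for w in gap_wavelengths if w in LINES})

print(elements_from_gaps([4102, 5890]))  # ['hydrogen', 'sodium']
print(elements_from_gaps([5000]))        # [] -- no known line at this wavelength
```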

New Release for Kindle: Introduction to Stars – Spectra, Formation, Evolution, Collapse

I’m happy to announce my new e-book release “Introduction to Stars – Spectra, Formation, Evolution, Collapse” (126 pages, $ 2.99). It contains the basics of how stars are born, what mechanisms power them, how they evolve and why they often die a spectacular death, leaving only a remnant of highly exotic matter. The book also delves into the methods used by astronomers to gather information from the light reaching us through the depth of space. No prior knowledge is required to follow the text and no mathematics beyond the very basics of algebra is used.

If you are interested in learning more, click the cover to get to the Amazon product page:


Here’s the table of contents:

Gathering Information
Spectrum and Temperature
Gaps in the Spectrum
Doppler Shift

The Life of a Star
Stellar Factories
From Protostar to Star
Main Sequence Stars
Giant Space Onions

The Death of a Star
Slicing the Space Onion
Electron Degeneracy
Extreme Matter
Black Holes

Sources and Further Reading

Enjoy the reading experience!

New Kindle Release: Math Shorts – Exponential and Trigonometric Functions

I’m on a roll here … another math book, comin’ right up … I’m happy to announce that today I’m expanding my “Math Shorts” series with the latest release “Math Shorts – Exponential and Trigonometric Functions”. This time it’s pre-calculus and thus serves as a bridge between my permanently free e-books “Algebra – The Very Basics” and “Math Shorts – Derivatives”. Without further ado, here’s the blurb and table of contents (click cover to get to the product page on Amazon):



Before delving into the exciting fields of calculus and mathematical physics, it is necessary to gain an in-depth understanding of functions. In this book you will get to know two of the most fundamental function classes intimately: the exponential and trigonometric functions. You will learn how to visualize the graph from the equation, how to set up the function from conditions for real-world applications, how to find the roots, and much more. While prior knowledge in linear and quadratic functions is helpful, it is not necessary for understanding the contents of the book as all the core concepts are developed during the discussion and demonstrated using plenty of examples. The book also contains problems along with detailed solutions to each section. So except for the very basics of algebra, no prior knowledge is required.

Once done, you can continue your journey into mathematics, from the basics all the way to differential equations, by following the “Math Shorts” series, with the recommended reading being “Math Shorts – Derivatives” upon completion of this book. From the author of “Great Formulas Explained” and “Statistical Snacks”, here’s another down-to-earth guide to the joys of mathematics.

Table of contents:

1. Exponential Functions
1.1. Definition
1.2. Exercises
1.3. Basics Continued
1.4. Exercises
1.5. A More General Form
1.6. Exercises

2. Trigonometric Functions
2.1. The Sine Function
2.2. Exercises
2.3. The Cosine Function
2.4. Exercises
2.5. Roots
2.6. Exercises
2.7. Sine Squared And More
2.8. The Tangent Function
2.9. Exercises

3. Solutions to the Problems

Have fun reading!

The Placebo Effect – An Overview

There is a major problem with reliance on placebos, like most vitamins and antioxidants. Everyone gets upset about Big Science, Big Pharma, but they love Big Placebo.

– Michael Specter

A Little White Lie

In 1972, Blackwell invited fifty-seven pharmacology students to an hour-long lecture that, unbeknownst to the students, had only one real purpose: bore them. Before the tedious lecture began, the participants were offered a pink or a blue pill and told that one was a stimulant and the other a sedative (though it was not revealed which color corresponded to which effect – the students had to take their chances). When measuring the alertness of the students later on, the researchers found that 1) the pink pills helped students to stay concentrated and 2) two pills worked better than one. The weird thing about these results: both the pink and blue pills were plain ol’ sugar pills containing no active ingredient whatsoever. From a purely pharmacological point of view, neither pill should have had a stimulating or sedative effect. The students were deceived … and yet, those who took the pink pill did a much better job of staying concentrated than those who took the blue pill, outperformed only by those brave individuals who took two of the pink miracle pills. Both effects, of color and of number, have since been reproduced. For example, Luchelli (1972) found that patients with sleeping problems fell asleep faster after taking a blue capsule than after taking an orange one. And red placebos have proven to be more effective pain killers than white, blue or green placebos (Huskisson 1974). As for number, a comprehensive meta-analysis of placebo-controlled trials by Moerman (2002) confirmed that four sugar pills are more beneficial than two. With this we are ready to enter another curious realm of the mind: the placebo effect, where zero is something and two times zero is two times something.

The Oxford Dictionary defines the placebo effect as a beneficial effect produced by a placebo drug or treatment, which cannot be attributed to the properties of the placebo itself and must therefore be due to the patient’s belief in that treatment. In short: mind over matter. The word placebo originates from the Bible (Psalm 116:9, Vulgate version by Jerome) and translates to “I shall please”, which seems to be quite fitting. Until the dawn of modern science, almost all of medicine was, knowingly or unknowingly, based on this effect. Healing spells, astrological rituals, bloodletting … We now know that any improvement in health resulting from such crude treatments can only arise from the fact that the patient’s mind has been sufficiently pleased. Medicine has no doubt come a long way and all of us profit greatly from this. We don’t have to worry about dubious healers drilling holes into our brains to “relieve pressure” (an extremely painful and usually highly ineffective treatment called trepanning), we don’t have to endure the unimaginable pain of a surgeon cutting us open and we live forty years longer than our ancestors. Science has made it possible. However, even in today’s technology-driven world one shouldn’t underestimate the healing powers of the mind.

Before taking a closer look at relevant studies and proposed explanations, we should point out that studying the placebo effect can be a rather tricky affair. It’s not as simple as giving a sugar pill to an ill person and celebrating the resulting improvement in health. All conditions have a certain natural history. Your common cold will build up over several days, peak over the following days and then slowly disappear. Hence, handing a patient a placebo pill (or any other drug for that matter) when the symptoms are at their peak and observing the resulting improvement does not allow you to conclude anything meaningful. In this set-up, separating the effects of the placebo from the natural history of the illness is impossible. To do it right, researchers need one placebo group and one natural history (no-treatment) group. The placebo response is the difference that arises between the two groups. Ignoring natural history is a popular way of “proving” the effectiveness of sham healing rituals and supposed miracle pills. You can literally make any treatment look like a gift from God by knowing the natural history and waiting for the right moment to start the treatment. One can already picture the pamphlet: “93 % of patients were free of symptoms after just three days, so don’t miss out on this revolutionary treatment”. Sounds great, but what they conveniently forget to mention is that the same would have been true had the patients received no treatment.

There are also ethical considerations that need to be taken into account. Suppose you wanted to test how your placebo treatment compares to a drug that is known to be beneficial to a patient’s health. The scientific approach demands setting up one placebo group and one group that receives the well-known drug. How well your placebo treatment performs will be determined by comparing the groups after a predetermined time has passed. However, having one placebo group means that you are depriving people of a treatment that is proven to improve their condition. It goes without saying that this is highly problematic from an ethical point of view. Letting the patient suffer in the quest for knowledge? This approach might be justified if there is sufficient cause to believe that the alternative treatment in question is superior, but this is rarely the case for placebo treatments. While beneficial, their effect is usually much weaker than that of established drugs.

Another source of criticism is the deception of the patient during a placebo treatment. Doctors prefer to be open and honest when discussing a patient’s condition and the methods of treatment. But a placebo therapy requires them to tell patients that the prescribed pill contains an active ingredient and has proven to be highly effective when in reality it’s nothing but sugar wrapped in a thick layer of good-will. Considering the benefits, we can certainly call it a white lie, but telling it still makes many professionals feel uncomfortable. However, they might be in luck. Several studies have suggested that, surprisingly, the placebo effect still works when the patient is fully aware that he is receiving placebo pills.

Experimental Evidence

One example of this is the study by Kaptchuk et al. (2010). The Harvard scientists randomly assigned 80 patients suffering from irritable bowel syndrome (IBS) to either a placebo group or a no-treatment group. The patients in the placebo group received a placebo pill along with the following explanation: “Placebo pills are made of an inert substance, like sugar pills, and have been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes”. As can be seen from the graph below, the pills did their magic. The improvement in the placebo group was roughly 30 % higher than in the no-treatment group and the low p-value (see appendix for an explanation of the p-value) shows that it is extremely unlikely that this result came to be by chance. Unfortunately, there seems to be a downside to the honesty. Hashish (1988) analyzed the effects of real and sham ultrasound treatment on patients whose impacted lower third molars had been surgically removed and concluded that the effectiveness in producing a placebo response is diminished if the patient comes to understand that the therapy is a placebo treatment rather than the “real” one. So while the placebo effect does arise even without the element of deception, a fact that is quite astonishing on its own, deception does strengthen the response to the placebo treatment.


Results of Kaptchuk et al. (2010)

Let’s explore some more experimental studies to fully understand the depth and variety of the placebo effect. A large proportion of the relevant research has focused on the effect’s analgesic nature, that is, its ability to reduce pain without impairing consciousness. Amanzio et al. (2001) examined patients who had undergone thoracotomy, a major surgical procedure to gain access to vital organs in the chest and one that is often associated with severe post-operative pain. As handing out sugar pills would have been irresponsible and unethical in this case, the researchers found a more humane method of unearthing the placebo effect: the open-hidden paradigm. All patients received powerful painkillers such as Morphine, Buprenorphine, Tramadol, … However, while one group received the drug in an open manner, administered by a caring clinician in full view of the patient, another group was given the drug in a hidden manner, by means of a computer-programmed drug infusion pump with no clinician present and no indication that the drug was being administered. This set-up enabled the researchers to determine how much of the pain reduction was due to the caring nature of the clinician and the ritual of injecting the drug. The results: the human touch matters and matters a lot. As can be seen from the graph below, every painkiller became significantly more effective when administered in an open fashion. Several follow-on studies (Benedetti et al. 2003, Colloca et al. 2004) confirmed this finding. This demonstrates that the placebo effect goes far beyond the notorious sugar pill, it can also be induced by the caring words of a professional or a dramatic treatment ritual.

Results of Amanzio et al. (2001)

The fact that the human touch is of major importance in any clinical treatment, placebo or otherwise, seems pretty obvious (though its power in reducing pain might have surprised you). Much less obvious are the roles of administration form and treatment ritual, something we shall further explore. For both we can use the following rule of thumb: the more dramatic the intervention, the stronger the placebo response. For example, several studies have shown that an injection with salt-water is more effective in generating the placebo effect than a sugar pill. This is of course despite the fact that both salt-water and sugar pills do not have any direct medical benefits. The key difference lies in the inconveniences associated with the form of delivery: while swallowing a pill takes only a moment and is a rather uncomplicated process, the injection, preparation included, might take up to several minutes and can be quite painful. There’s no doubt that the latter intervention will leave a much stronger impression. Another study (Kaptchuk et al. 2006) came to the exciting conclusion that a nonsensical therapeutic intervention modeled on acupuncture did a significantly better job in reducing arm pain than the sugar pill. While the average pain score in the sham acupuncture group dropped by 0.33 over the course of one week, the corresponding drop in the sugar pill group was only 0.15. Again the more dramatic treatment came out on top.

The experimental results mentioned above might explain why popular ritualistic treatments found in alternative medicine remain so widespread even when there are numerous studies providing ample proof that the interventions lack biological plausibility and produce no direct medical benefits. Despite their scientific shortcomings, such treatments do work. However, this feat is extremely unlikely to be accomplished by strengthening a person’s aura, enhancing life force or harnessing quantum energy, as the brochure might claim. They work mainly (perhaps even solely) because of their efficiency in inducing the mind’s own placebo effect. Kaptchuk’s study impressively demonstrates that you can take any arbitrary ritual, back it up with any arbitrary theory to give the procedure pseudo-plausibility and let the placebo effect take over from there. Such a treatment might not be able to compete with cutting-edge drugs, but the benefits will be there. Though one has to wonder about the ethics of providing a patient with a certain treatment when demonstrably a more effective one is available, especially in the case of serious diseases.

Don’t Forget Your Lucky Charm

This seems to be a great moment to get in the following entertaining gem. In 2010, Damisch et al. invited twenty-eight people to the University of Cologne to take part in a short but sweet experiment that had them play ten balls on a putting green. Half of the participants were given a regular golf ball and managed to get 4.7 putts out of 10 on average. The other half was told they would be playing a “lucky ball” and, sure enough, this increased performance by an astonishing 36 % to 6.4 putts out of 10. I think we can agree that the researchers hadn’t really gotten hold of some magical performance-enhancing “lucky ball” and that the participants most likely didn’t even believe the story of the blessed ball. Yet, the increase was there and the result statistically significant despite the small sample size. So what happened? As you might have expected, this is just another example of the placebo effect (in this particular case also called the lucky charm effect) in action.

OK, so the ball was not really lucky, but it seems that simply floating the far-fetched idea of a lucky ball was enough to put participants into a different mindset, causing them to approach the task at hand in a different manner. One can assume that the story made them less worried about failing and more focused on the game, in which case the marked increase is no surprise at all. Hence, bringing a lucky charm to an exam might not be so superstitious after all. Though we should mention that a lucky charm can only do its magic if the task to be completed requires some skill. If the outcome is completely random, there simply is nothing to gain from being put into a different mindset. So while a lucky charm might be able to help a golfer, student, chess player or even a race car driver, it is completely useless for dice games, betting or winning the lottery.

Let’s look at a few more studies that show just how curious and complex the placebo effect is before moving on to explanations. Shiv et al. (2008) from the Stanford Business School analyzed the economic side of self-healing. They applied electric shocks to 82 participants and then offered to sell them a painkiller (guess that’s also a way to fund your research). The option: get the cheap painkiller for $ 0.10 per pill or the expensive one for $ 2.50 per pill. What the participants weren’t told was that there was no difference between the pills except for the price. Despite that, the price did have an effect on pain reduction. While 61 % of the subjects taking the cheap painkiller reported a significant pain reduction, an impressive 85 % reported the same after treating themselves to the expensive version. The researchers suspect that this is a result of quality expectations. We associate a high price with good quality and in the case of painkillers good quality equals effective pain reduction. So buying the expensive brand name drug might not be such a bad idea even when there is a chemically identical and lower priced generic drug available. In another study, Shiv et al. also found the same effect for energy drinks. The more expensive energy drink, with price being the only difference, made people report higher alertness and noticeably enhanced their ability to solve word puzzles.


This was an excerpt from my Kindle e-book Curiosities of the Mind. To learn more about the placebo effect, as well as other interesting psychological effects such as the chameleon effect, Mozart effect and the actor-observer bias, click here. (Link to Amazon.com)

The Weirdness of Empty Space – Casimir Force

(This is an excerpt from The Book of Forces – enjoy!)

The forces we have discussed so far are well-understood by the scientific community and are commonly featured in introductory as well as advanced physics books. In this section we will turn to a more exotic and mysterious interaction: the Casimir force. After a series of complex quantum mechanical calculations, the Dutch physicist Hendrik Casimir predicted its existence in 1948. However, detecting the interaction proved to be an enormous challenge, as this required sensors capable of picking up forces on the order of 10^(-15) N and smaller. It wasn’t until 1996 that this technology became available and the existence of the Casimir force was experimentally confirmed.

So what does the Casimir force do? When you place an uncharged, conducting plate at a small distance from an identical plate, the Casimir force will pull them towards each other. The term “conducting” refers to the ability of a material to conduct electricity. Whether the plates are actually carrying a current at a given moment plays no role for the force, though; what counts is their ability to do so.

The existence of the force can only be explained via quantum theory. Classical physics considers the vacuum to be empty – no particles, no waves, no forces, just absolute nothingness. However, with the rise of quantum mechanics, scientists realized that this is just a crude approximation of reality. The vacuum is actually filled with an ocean of so-called virtual particles (don’t let the name fool you, they are real). These particles are constantly produced in pairs and annihilate after a very short period of time. Each particle carries a certain amount of energy that depends on its wavelength: the lower the wavelength, the higher the energy of the particle. In theory, there’s no upper limit for the energy such a particle can have when spontaneously coming into existence.

So how does this relate to the Casimir force? The two conducting plates define a boundary in space. They separate the space of finite extension between the plates from the (for all practical purposes) infinite space outside them. Only particles for which the distance between the plates is a whole-number multiple of half their wavelength fit in the finite space, meaning that the particle density (and thus energy density) in the space between the plates is smaller than the energy density in the pure vacuum surrounding them. This imbalance in energy density gives rise to the Casimir force. In informal terms, the Casimir force is the push of the energy-rich vacuum on the energy-deficient space between the plates.


(Illustration of Casimir force)

It gets even more puzzling though. The nature of the Casimir force depends on the geometry of the plates. If you replace the flat plates by hemispherical shells, the Casimir force suddenly becomes repulsive, meaning that this specific geometry somehow manages to increase the energy density of the enclosed vacuum. Now the even more energy-rich finite space pushes on the surrounding infinite vacuum. Trippy, huh? So which shapes lead to attraction and which lead to repulsion? Unfortunately, there is no intuitive way to decide. Only abstract mathematical calculations and sophisticated experiments can provide an answer.

We can use the following formula to calculate the magnitude of the attractive Casimir force FCS between two flat plates. Its value depends solely on the distance d (in m) between the plates and the area A (in m²) of one plate. The letters h = 6.63·10^(-34) m² kg/s and c = 3.00·10^8 m/s represent Planck’s constant and the speed of light.

FCS = π·h·c·A / (480·d^4) ≈ 1.3·10^(-27)·A / d^4

(The sign ^ stands for “to the power”) Note that because of the exponent, the strength of the force goes down very rapidly with increasing distance. If you double the size of the gap between the plates, the magnitude of the force reduces to 1/2^4 = 1/16 of its original value. And if you triple the distance, it goes down to 1/3^4 = 1/81 of its original value. This strong dependence on distance and the presence of Planck’s constant as a factor cause the Casimir force to be extremely weak in most real-world situations.
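For readers who like to experiment, the formula translates directly into a few lines of Python. This is a sketch of my own, not from the book; the function name is mine and the constants are the rounded values quoted above:

```python
import math

# Rounded constants as used in the text
h = 6.63e-34  # Planck's constant in m^2 kg / s
c = 3.00e8    # speed of light in m / s

def casimir_force(A, d):
    """Magnitude of the attractive Casimir force (in N) between two flat
    conducting plates of area A (in m^2) separated by distance d (in m)."""
    return math.pi * h * c * A / (480 * d**4)

# Doubling the gap reduces the force to 1/16, tripling it to 1/81
F = casimir_force(1.0, 0.001)
print(casimir_force(1.0, 0.002) / F)  # ≈ 1/16 = 0.0625
print(casimir_force(1.0, 0.003) / F)  # ≈ 1/81 ≈ 0.0123
```

Running this also confirms the approximate prefactor: π·h·c/480 ≈ 1.3·10^(-27).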


Example 24:

a) Calculate the magnitude of the Casimir force experienced by two conducting plates having an area A = 1 m² each and distance d = 0.001 m (one millimeter). Compare this to their mutual gravitational attraction given the mass m = 5 kg of one plate.

b) How close do the plates need to be for the Casimir force to be in the order of unity? Set FCS = 1 N.



a) Inserting the given values into the formula for the Casimir force leads to (units not included):

FCS = 1.3·10^(-27)·A/d^4
FCS = 1.3·10^(-27) · 1 / 0.001^4
FCS ≈ 1.3·10^(-15) N

Their gravitational attraction is:

FG = G·m·M / r²
FG = 6.67·10^(-11)·5·5 / 0.001²
FG ≈ 0.0017 N

This is more than a trillion times the magnitude of the Casimir force – no wonder this exotic force went undetected for so long. I should mention though that the gravitational force calculated above should only be regarded as a rough approximation, as Newton’s law of gravitation is tailored to two attracting spheres, not two attracting plates.


b) Setting up an equation we get:

FCS = 1.3·10^(-27)·A/d^4
1 = 1.3·10^(-27) · 1 / d^4

Multiply by d^4:

d^4 = 1.3·10^(-27)

And apply the fourth root:

d ≈ 2·10^(-7) m = 200 nanometers

This is roughly the size of a common virus and just a bit longer than the wavelength of violet light.
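Both parts of the example can be checked in a couple of lines of Python. This is my own sketch (the function name is mine), using the rounded prefactor 1.3·10^(-27) from the text:

```python
# Approximate Casimir formula from the text: F = 1.3e-27 * A / d^4
def F_CS(A, d):
    return 1.3e-27 * A / d**4

# Part a): two plates with A = 1 m^2 at d = 0.001 m
print(F_CS(1, 0.001))            # ≈ 1.3e-15 N

# Gravitational attraction of the two 5 kg plates (rough approximation)
G = 6.67e-11  # gravitational constant in N m^2 / kg^2
print(G * 5 * 5 / 0.001**2)      # ≈ 0.0017 N

# Part b): solve 1 = 1.3e-27 / d^4 for d via the fourth root
print((1.3e-27) ** 0.25)         # ≈ 2e-7 m, i.e. about 200 nanometers
```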


The existence of the Casimir force provides an impressive proof that the abstract mathematics of quantum mechanics is able to accurately describe the workings of the small-scale universe. However, many open questions remain. Quantum theory predicts the energy density of the vacuum to be infinitely large. According to Einstein’s theory of gravitation, such a concentration of energy would produce an infinite space-time curvature and if this were the case, we wouldn’t exist. So either we don’t exist (which I’m pretty sure is not the case) or the most powerful theories in physics are at odds when it comes to the vacuum.

New Release for Kindle: Antimatter Propulsion

I’m very excited to announce the release of my latest ebook called “Antimatter Propulsion”. I’ve been working on it like a madman for the past few months, going through scientific papers and wrestling with the jargon and equations. But I’m quite satisfied with the result. Here’s the blurb, the table of contents and the link to the product page. No prior knowledge is required to enjoy the book.

Many popular science fiction movies and novels feature antimatter propulsion systems, from the classic Star Trek series all the way to Cameron’s hit movie Avatar. But what exactly is antimatter? And how can it be used to accelerate rockets? This book is a gentle introduction to the science behind antimatter propulsion. The first section deals with antimatter in general, detailing its discovery, behavior, production and storage. This is followed by an introduction to propulsion, including a look at the most important quantities involved and the propulsion systems in use or in development today. Finally, the most promising antimatter propulsion and rocket concepts are presented and their feasibility discussed, from the solid core concept to antimatter initiated microfusion engines, from the Valkyrie project to Penn State’s AIMStar spacecraft.

Section 1: Antimatter

The Atom
Dirac’s Idea
An Explosive Mix
Proton and Anti-Proton Annihilation
Sources of Antimatter
Storing Antimatter
Getting the Fuel

Section 2: Propulsion Basics

Conservation of Momentum
♪ Throw, Throw, Throw Your Boat ♫
So What’s The Deal?
May The Thrust Be With You
Specific Impulse and Fuel Requirements
Chemical Propulsion
Electric Propulsion
Fusion Propulsion

Section 3: Antimatter Propulsion Concepts

Solid Core Concept
Plasma Core Concept
Beamed Core Concept
Antimatter Catalyzed Micro-Fission / Fusion
Antimatter Initiated Micro-Fusion

Section 4: Antimatter Rocket Concepts

Project Valkyrie
Dust Shields

You can purchase “Antimatter Propulsion” here for $ 2.99.

New Release for Kindle: Math Shorts – Integrals

Yesterday I released the second part of my “Math Shorts” series. This time it’s all about integrals. Integrals are among the most useful and fascinating mathematical concepts ever conceived. The ebook is a practical introduction for all those who don’t want to miss out. In it you’ll find down-to-earth explanations, detailed examples and interesting applications. Check out the sample (see link to product page) for a taste of the action.

Important note: to enjoy the book, you need solid prior knowledge in algebra and calculus. This means in particular being able to solve all kinds of equations, finding and interpreting derivatives as well as understanding the notation associated with these topics.

Click the cover to open the product page:


Here’s the TOC:

Section 1: The Big Picture

Section 2: Basic Anti-Derivatives and Integrals
-Power Functions
-Sums of Functions
-Examples of Definite Integrals
-Exponential Functions
-Trigonometric Functions
-Putting it all Together

Section 3: Applications
-Area – Basics
-Area – Numerical Example
-Area – Parabolic Gate
-Area – To Infinity and Beyond
-Volume – Basics
-Volume – Numerical Example
-Volume – Gabriel’s Horn
-Average Value

Section 4: Advanced Integration Techniques
-Substitution – Basics
-Substitution – Indefinite Integrals
-Substitution – Definite Integrals
-Integration by Parts – Basics
-Integration by Parts – Indefinite Integrals
-Integration by Parts – Definite Integrals

Section 5: Appendix
-Formulas To Know By Heart
-Greek Alphabet
-Copyright and Disclaimer
-Request to the Reader


New Release for Kindle: Introduction to Differential Equations

Differential equations are an important and fascinating part of mathematics with numerous applications in almost all fields of science. This book is a gentle introduction to the rich world of differential equations filled with no-nonsense explanations, step-by-step calculations and application-focused examples.

Important note: to enjoy the book, you need solid prior knowledge in algebra and calculus. This means in particular being able to solve all kinds of equations, finding and interpreting derivatives, evaluating integrals as well as understanding the notation that goes along with those.

Click the cover to open the product page:


Here’s the TOC:

Section 1: The Big Picture

-Population Growth
-Definition and Equation of Motion
-Equilibrium Points
-Some More Terminology

Section 2: Separable Differential Equations

-Exponential Growth Revisited
-Fluid Friction
-Logistic Growth Revisited

Section 3: First Order Linear Differential Equations

-More Fluid Friction
-Heating and Cooling
-Pure, Uncut Mathematics
-Bernoulli Differential Equations

Section 4: Second Order Homogeneous Linear Differential Equations (with Constant Coefficients)

-Wait, what?
-Oscillating Spring
-Numerical Example
-The Next Step – Non-Homogeneous Equations

Section 5: Appendix

-Formulas To Know By Heart
-Greek Alphabet
-Copyright and Disclaimer
-Request to the Reader

Note: With this book release I’m starting my “Math Shorts” Series. The next installment “Math Shorts – Integrals” will be available in just a few days! (Yes, I’m working like a mad man on it)

Decibel – A Short And Simple Explanation

A way of expressing a quantity in relative terms is to take the ratio with respect to a reference value. This helps to put the quantity into perspective. For example, in mechanics acceleration is often expressed in relation to the gravitational acceleration. Instead of saying the acceleration is 22 m/s² (which is hard to relate to unless you know mechanics), we can also say the acceleration is 22 / 9.81 ≈ 2.2 times the gravitational acceleration or simply 2.2 g’s (which is much easier to comprehend).

The decibel (dB) is also a general way of expressing a quantity in relative terms, sort of a “logarithmic ratio”. And just like the ratio, it is not a physical unit or limited to any field such as mechanics, audio, etc … You can express any quantity in decibels. For example, if we take the reference value to be the gravitational acceleration, the acceleration 22 m/s² corresponds to 3.5 dB.

To calculate the decibel value L of a quantity x relative to the reference value x(0), we can use this formula (with log10 denoting the base-ten logarithm):

L = 10·log10( x / x(0) )
In acoustics the decibel is used to express the sound pressure level (SPL), measured in pascals (Pa), using the threshold of hearing (0.00002 Pa) as reference. However, in this case a factor of twenty instead of ten is used. The change in factor is a result of inputting the squares of the pressure values rather than the linear values:

L = 20·log10( p / 0.00002 )
The sound coming from a stun grenade peaks at a sound pressure level of around 15,000 Pa. In decibel terms this is:

L = 20·log10( 15,000 / 0.00002 ) ≈ 177.5 dB
which is way past the threshold of pain that is around 63.2 Pa (130 dB). Here are some typical values to keep in mind:

0 dB → Threshold of Hearing
20 dB → Whispering
60 dB → Normal Conversation
80 dB → Vacuum Cleaner
110 dB → Front Row at Rock Concert
130 dB → Threshold of Pain
160 dB → Bursting Eardrums

Why use the decibel at all? Isn’t the ratio good enough for putting a quantity into perspective? The ratio works fine as long as the quantity doesn’t go over many orders of magnitude. This is the case for the speeds or accelerations that we encounter in our daily lives. But when a quantity varies significantly and spans many orders of magnitude (which is what the SPL does), the decibel is much more handy and relatable.

Another reason for using the decibel for audio signals is provided by the Weber-Fechner law. It states that a stimulus is perceived in a logarithmic rather than linear fashion. So expressing the SPL in decibels can be regarded as a first approximation to how loud a sound is perceived by a person as opposed to how loud it is from a purely physical point of view.

Note that when combining two or more sound sources, the decibel values are not simply added. Rather, if we combine two sources that are equally loud and in phase, the volume increases by 6 dB (if they are out of phase, it will be less than that). For example, when adding two sources that are at 50 dB, the resulting sound will have a volume of 56 dB (or less).
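For the curious, the decibel arithmetic of this section can be reproduced with a small Python helper. This is a sketch of my own (the function name is mine); the reference values are the ones quoted above:

```python
import math

def db(x, x0, factor=10):
    """Decibel value of quantity x relative to the reference value x0."""
    return factor * math.log10(x / x0)

# Acceleration of 22 m/s^2 relative to g = 9.81 m/s^2
print(db(22, 9.81))            # ≈ 3.5 dB

# Sound pressure level uses a factor of 20 and the threshold
# of hearing (0.00002 Pa) as reference
print(db(63.2, 0.00002, 20))   # threshold of pain → ≈ 130 dB
print(db(15000, 0.00002, 20))  # stun grenade → ≈ 177.5 dB

# Two equally loud, in-phase sources double the pressure: +6 dB
print(db(2 * 63.2, 0.00002, 20) - db(63.2, 0.00002, 20))  # ≈ 6 dB
```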

(This was an excerpt from Audio Effects, Mixing and Mastering. Available for Kindle)

Wavelength (And: Why Is The Sky Blue?)

A very important type of length is wavelength, usually symbolized by the Greek letter λ (in m). It is defined as the distance from crest to crest (one complete cycle) and can easily be calculated for any wave by dividing the speed of the wave c (in m/s) by its frequency f (in Hz):

λ = c / f

What are typical wavelengths for sound? At room temperature, sound travels with a speed of c = 343 m/s. The concert pitch A has a frequency of f = 440 Hz. According to the equation, the corresponding wavelength is:

λ = 343 / 440 ≈ 0.8 m ≈ 2.6 ft

Are you surprised? I bet most people would greatly underestimate this value. Bass sounds are even longer than that. The lowest tone on a four-string bass guitar has a frequency of f = 41.2 Hz, which leads to the wavelength:

λ = 343 / 41.2 ≈ 8.3 m ≈ 27 ft

So the wave coming from the open E string of a bass guitar doesn’t even fit in a common room. In the case of light, the situation is very different. As noted in the introduction, the wavelength of light ranges between 4000 Å (violet light) and 7000 Å (red light), which is just below the size of a bacterium.
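A quick way to play with these numbers is a one-line Python function (my own sketch):

```python
def wavelength(c, f):
    """Wavelength in m for a wave of speed c (in m/s) and frequency f (in Hz)."""
    return c / f

c_sound = 343  # speed of sound at room temperature in m/s

print(wavelength(c_sound, 440))    # concert pitch A → ≈ 0.78 m
print(wavelength(c_sound, 41.2))   # low E on a bass guitar → ≈ 8.3 m
```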

Wavelength plays an important role in explaining why the sky is blue. When light collides with a particle, parts of it are deflected while the rest continues along the initial path. This phenomenon is known as scattering. The smaller the wavelength of the light, the stronger the effect. This means that scattering is particularly pronounced for violet and blue light.

Unless you are looking directly at the Sun, all the light you see when looking at the sky is scattered light coming from the particles in the atmosphere. Since blue light tends to scatter so easily, the sky takes on just this color. But why not violet? This is a legitimate question. After all, due to its smaller wavelength, violet light is even more willing to scatter. While this is true, it is also important to note that the Sun’s rays don’t contain all the colors in the same ratio. In particular, they carry much less violet than blue light. On top of that, our eyes are less sensitive to violet light.

(This is an excerpt from my Kindle book: Physics! In Quantities and Examples)

Released Today for Kindle: Physics! In Quantities and Examples

I finally finished and released my new ebook … took me longer than usual because I always kept finding new interesting topics while researching. Here’s the blurb, link and TOC:

This book is a concept-focused and informal introduction to the field of physics that can be enjoyed without any prior knowledge. Step by step and using many examples and illustrations, the most important quantities in physics are gently explained. From length and mass, through energy and power, all the way to voltage and magnetic flux. The mathematics in the book is strictly limited to basic high school algebra to allow anyone to get in and to ensure that the focus always remains on the core physical concepts.

(Click cover to get to the Amazon Product Page)


Table of Contents:

Length
(Introduction, From the Smallest to the Largest, Wavelength)

Mass
(Introduction, Mass versus Weight, From the Smallest to the Largest, Mass Defect and Einstein, Jeans Mass)

Speed / Velocity
(Introduction, From the Smallest to the Largest, Faster than Light, Speed of Sound for all Purposes)

Acceleration
(Introduction, From the Smallest to the Largest, Car Performance, Accident Investigation)

Force
(Introduction, Thrust and the Space Shuttle, Force of Light and Solar Sails, MoND and Dark Matter, Artificial Gravity and Centrifugal Force, Why do Airplanes Fly?)

Area
(Introduction, Surface Area and Heat, Projected Area and Planetary Temperature)

Pressure
(Introduction, From the Smallest to the Largest, Hydraulic Press, Air Pressure, Magdeburg Hemispheres)

(Introduction, Poisson’s Ratio)

Density
(Introduction, From the Smallest to the Largest, Bulk Density, Water Anomaly, More Densities)

Temperature
(Introduction, From the Smallest to the Largest, Thermal Expansion, Boiling, Evaporation is Cool, Why Blankets Work, Cricket Temperature)

Energy
(Introduction, Impact Speed, Ice Skating, Dear Radioactive Ladies and Gentlemen!, Space Shuttle Reentry, Radiation Exposure)

Power
(Introduction, From the Smallest to the Largest, Space Shuttle Launch and Sound Suppression)

Intensity
(Introduction, Inverse Square Law, Absorption)

Momentum
(Introduction, Perfectly Inelastic Collisions, Recoil, Hollywood and Physics, Force Revisited)

Frequency / Period
(Introduction, Heart Beat, Neutron Stars, Gravitational Redshift)

Rotational Motion
(Extended Introduction, Moment of Inertia – The Concept, Moment of Inertia – The Computation, Conservation of Angular Momentum)

Voltage
(Extended Introduction, Stewart-Tolman Effect, Piezoelectricity, Lightning)

Magnetic Flux
(Extended Introduction, Lorentz Force, Mass Spectrometers, MHD Generators, Earth’s Magnetic Field)

Scalar and Vector Quantities
Measuring Quantities
Unit Conversion
Unit Prefixes
Copyright and Disclaimer

As always, I discounted the book in countries with a low GDP because I think that education should be accessible for all people. Enjoy!

Self-Publishing – A Rat Race

The trouble with the rat race is that even if you win, you’re still a rat. (Lily Tomlin)

Self-publishing seems like a cozy thing to do. You are free to choose any topic, free to write any way you like, free to set your own schedule … in short: a fantastic opportunity to express oneself and to share your ideas with the world. But as with anything on God’s green Earth, you gotta read the small print. There’s a good chance that self-publishing sucks you into a rat race filled with uncertainty, stress, anger and sleepless nights.

You chase the Amazon ranks like an addict chasing the drug that will bring his doom. You rush to finish the next book before the previous one drops out of Amazon’s “new releases” list. You are downright angry at an invisible algorithm that doesn’t make your book appear in the “also bought” list. You spend hours and hours searching for new ways to be seen and hours and hours frustrated when you can’t achieve the visibility you desire. You are happy about a sudden rise in sales and wonder obsessively about the following drop. You feel fantastic about the praise and are devastated for days about a bad review. If you get sucked in, there’s little left of the original idea: to express yourself and to share your ideas.

Any book takes a lot of work. The actual writing and researching, the creative process, is only a part of it. Thorough editing takes patience and time. You have to proof-read your book until you can’t stand the sight of it. This is the only way of making it error-free if you don’t intend to hire an editor. You have to format the whole thing properly for Kindle, including making it pass the EPUB validation (which will most likely cause you to scream at your innocent computer screen on more than one occasion), create the right book cover and write a catchy blurb. Then comes the worst part: marketing. Like a desperate and lonely vacuum cleaner salesman you go from virtual door to virtual door, begging for attention and feeling dirty all along. All for a little gain in visibility and rank, the self-published author’s cocaine. Then come the rollercoaster sales and reviews. If you kept your cool up to now, this will get to you. It’s amazing how much a negative review can hurt. But it’s just part of the job.

The rat race is on and you are rat #2,534,287 trying to find your edge. Remember the original idea? The expressing and sharing thing? Rat #2,534,287 doesn’t, all it wants to do is chase ranks.

I’m not making a case against self-publishing. I’m making a case for remembering why you do it or want to do it. The original idea that made writing your first book a joy. The first book! Just pure creative process. No thought wasted on ranks, promos, new releases, also boughts, pricing, … That’s how it should be. And that’s why I’m taking a break from publishing, to get back to this state of mind. I want to be me, not rat #2,534,287.

Distribution of E-Book Sales on Amazon

For e-books on Amazon the relationship between the daily sales rate s and the rank r is approximately given by:

s = 100,000 / r

Such an inverse proportional relationship between a ranked quantity and the rank is called a Zipf distribution. So a book at rank r = 10,000 can be expected to sell s = 100,000 / 10,000 = 10 copies per day. As of November 2013, there are about 2.4 million e-books available on Amazon’s US store (talk about tough competition). In this post we’ll answer two questions. The first one is: how many e-books are sold on Amazon each day? To answer that, we need to add the daily sales rates from r = 1 to r = 2,400,000.

s = 100,000 · ( 1/1 + 1/2 + … + 1/2,400,000 )

We can evaluate that using the approximation formula for harmonic sums:

1/1 + 1/2 + 1/3 + … + 1/r ≈ ln(r) + 0.58

Thus we get:

s ≈ 100,000 · ( ln(2,400,000) + 0.58 ) ≈ 1.5 million

That’s a lot of e-books! And a lot of saved trees for that matter. The second question: What percentage of the e-book sales come from the top 100 books? Have a guess before reading on. Let’s calculate the total daily sales for the top 100 e-books:

s ≈ 100,000 · ( ln(100) + 0.58 ) ≈ 0.5 million

So the top 100 e-books already make up one-third of all sales while the other 2,399,900 e-books have to share the remaining two-thirds. The cake is very unevenly distributed.
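Both estimates follow from one small Python function. This is my own sketch; the Zipf constant 100,000 and the approximation formula for harmonic sums are taken from the text:

```python
import math

def daily_sales(top_rank):
    """Total daily sales of the top `top_rank` e-books under the Zipf model
    s = 100,000 / r, using the approximation 1/1 + ... + 1/r ≈ ln(r) + 0.58."""
    return 100_000 * (math.log(top_rank) + 0.58)

print(daily_sales(2_400_000))   # ≈ 1.5 million copies per day in total
print(daily_sales(100))         # ≈ 0.5 million from the top 100 alone
print(daily_sales(100) / daily_sales(2_400_000))  # ≈ 1/3
```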

This was a slightly altered excerpt from More Great Formulas Explained, available on Amazon for Kindle. For more posts on the ebook market go to my E-Book Market and Sales Analysis Pool.

Five Biggest Mistakes In E-Book Publishing

Are you thinking about publishing an e-book? If yes, then know that you are entering a highly competitive market. Publishing a book has never been easier and accordingly, many new authors have joined in. To have a chance at being read, you need to make sure to avoid common mistakes.

1. Lack of Writing Experience

Almost everybody can read and write; it’s a skill we learn from an early age. But writing is not the same as writing well. It takes a lot of practice to write articles or books that make a good read. So before you start that novel, grow a blog and gain experience. This provides a chance to see what works and what doesn’t. And the improvement will become noticeable after just a few weeks and months. As a plus, the blog you grew can serve as a marketing platform once your e-book is finished. In such a competitive market, this can be a big advantage.

2. Writing for Quick Cash

Writing for quick and easy cash is a really bad idea. This might have worked for a short while when e-books were new and fresh, but that time is long gone. Just browse any indie author forum for proof. The market is saturated. If your first e-book brings in $ 30 a month or so, you can call yourself lucky. If it’s more, even better, but don’t expect it. Writing and selling e-books is not a get-rich-quick scheme. It’s tough work with a very low ROI. If you do it for the money, you’re in for a disappointment. Do it out of passion.

3. Lack of Editing

If you spend three weeks writing a book, expect to spend another three weeks on fine-tuning and proof-reading. To find the mistakes in the text, you have to go over it again and again until you can’t stand your book anymore. You’ll be amazed that seemingly obvious mistakes (the same words twice, for for example) can be overlooked several times. And no spell checker will find that. Tedious editing is just part of writing and if you try to skip that, you will end up with many deserved one-star reviews.

4. No or Ineffective Marketing

With 2.5 million e-books on Amazon, many of them of high quality, getting noticed is tough. Without any marketing, your sales will most likely just decay in an exponential fashion over time. The common marketing means for indie authors are: growing a blog, establishing a Facebook fan page, joining Facebook groups and interacting, becoming active on Twitter, joining Goodreads and doing giveaways, free promos via KDP Select, banner and other paid ads (notably on BookBub – as expensive as it is effective), and and and … So you’re far from done with just writing, editing and publishing. You should set aside half an hour a day or so for marketing. And always make sure to market to the right people.

5. Stopping After The First Book

Publishing the first e-book can be quite a sobering experience. You just slaved for weeks or even months over your book and your stats hardly move. Was it all worth it? If you did it out of passion, then yes, certainly. But of course you want to be read and so you feel the frustration creeping in. The worst thing you could do is stop there. Usually sales pick up after the third or fourth book. So keep publishing and results will come.

E-Book Market & Sales – Analysis Pool

On this page you can find a collection of all my statistical analysis and research regarding the Kindle ebook market and sales. I’ll keep the page updated.

How E-Book Sales Vary at the End / Beginning of a Month

The E-Book Market in Numbers

Computing and Tracking the Amazon Sales Rank

Typical Per-Page-Prices for E-Books

Quantitative Analysis of Top 60 Kindle Romance Novels

Mathematical Model For E-Book Sales

If you have any suggestions on what to analyze next, just let me know. Share if you like the information.

How E-Book Sales Vary at the End / Beginning of a Month

After getting satisfying data and results on ebook sales over the course of a week, I was also interested in finding out what impact the end or beginning of a month has on sales. For that I looked up the sales of 20 ebooks, all taken from the current top 100 Kindle ebooks list, for November and the beginning of December on Novelrank. Here’s how they performed at the end of November:

  • Strong Increase: 0 %
  • Slight Increase: 0 %
  • Unchanged: 20 %
  • Slight Decrease: 35 %
  • Strong Decrease: 45 %

80 % showed either a slight or strong decrease; none showed any increase. So there’s a very pronounced downward trend in ebook sales at the end of the month. It usually begins around the 20th. On to the performance at the beginning of December:

  • Strong Increase: 50 %
  • Slight Increase: 35 %
  • Unchanged: 10 %
  • Slight Decrease: 5 %
  • Strong Decrease: 0 %

Here 85 % showed either a slight or strong increase, while only 5 % showed any decrease. This of course doesn’t leave much room for interpretation; there’s a clear upward trend at the beginning of the month. It usually lasts only a few days (shorter than the decline period) and after that the elevated level is more or less maintained.

Mathematical Model For (E-) Book Sales

It seems to be a no-brainer that with more books on the market, an author will see higher revenues. I wanted to know more about how the sales rate varies with the number of books. So I did what I always do when faced with an economic problem: construct a mathematical model. Even though it took me several tries to find the right approach, I’m fairly confident that the following model is able to explain why revenues grow overproportionally with the number of books an author has published. I also stumbled across a way to correct the marketing R/C for the number of books.

The basic quantities used are:

  • n = number of books
  • i = impressions per day
  • q = conversion probability (which is the probability that an impression results in a sale)
  • s = sales per buyer
  • r = daily sales rate

Obviously the basic relationship is:

r = i(n) * q(n) * s(n)

with the brackets indicating a dependence of the quantities on the number of books.

1) Let’s start with s(n) = sales per buyer. Suppose there’s a probability p that a buyer, who has purchased one of an author’s books, will go on to buy yet another book by said author. To visualize this, think of the books as a kind of mirror: each ray (sale) will either pass through the book (no further sales from this buyer) or be reflected onto another of the author’s books. In the latter case, the process repeats. Using this “reflective model”, the number of sales per buyer is:

s(n) = 1 + p + p² + … + p^(n-1) = (1 – p^n) / (1 – p)

For example, if the probability of a reader buying another book from the same author is p = 15 % = 0.15 and the author has n = 3 books available, we get:

s(3) = (1 – 0.15³) / (1 – 0.15) ≈ 1.17 sales per buyer

So the number of sales per buyer increases with the number of books. However, it quickly reaches a limiting value. Letting n go to infinity results in:

s(∞) = 1 / (1 – p)

Hence, this effect is a source for overproportional growth only for the first few books. After that it turns into a constant factor.
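Since the “reflective model” boils down to a finite geometric series, it is easy to sketch in Python (the function name is mine):

```python
def sales_per_buyer(p, n):
    """Expected sales per buyer for an author with n books, where p is the
    probability that a buyer goes on to buy another book by the same author."""
    return (1 - p**n) / (1 - p)

print(sales_per_buyer(0.15, 3))    # ≈ 1.17, as in the example above
print(sales_per_buyer(0.15, 50))   # approaches the limit 1/(1-p) ≈ 1.18
```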

2) Let’s turn to q(n) = conversion probability. Why should there be a dependence on the number of books at all for this quantity? Studies show that the probability of making a sale grows with the choice offered. That’s why ridiculously large malls work. When an author offers a large number of books, he is able to provide list impressions (featuring all of his or her books) in addition to the common single impressions (featuring only one book). With more choice, the conversion probability on list impressions will be higher than that on single impressions. Denote:

  • qs = single impression conversion probability
  • ps = percentage of impressions that are single impressions
  • ql = list impression conversion probability
  • pl = percentage of impressions that are list impressions

with ps + pl = 1. The overall conversion probability will be:

q(n) = qs(n) * ps(n) + ql(n) * pl(n)

With ql(n) and pl(n) obviously growing with the number of books and ps(n) decreasing accordingly, we get an increase in the overall conversion probability.
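This blending of the two impression types can be made concrete with a small sketch. All the numbers below are hypothetical, chosen only to illustrate the mechanism (the source gives no concrete values for qs, ql, ps or pl):

```python
def overall_conversion(qs, ps, ql, pl):
    """Blend single- and list-impression conversion probabilities.

    qs, ql: conversion probabilities; ps, pl: impression shares (ps + pl = 1).
    """
    assert abs(ps + pl - 1) < 1e-9
    return qs * ps + ql * pl

# Hypothetical numbers: with more books, list impressions (with their
# higher conversion probability ql) make up a larger share pl.
one_book   = overall_conversion(qs=0.02, ps=1.0, ql=0.05, pl=0.0)
five_books = overall_conversion(qs=0.02, ps=0.7, ql=0.05, pl=0.3)
print(one_book, five_books)
```

With a single book every impression is a single impression, so q(1) = qs; as the list-impression share grows, the overall conversion probability rises towards ql.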

3) Finally let’s look at i(n) = impressions per day. Denoting by i1, i2, … the number of daily impressions generated by book number 1, book number 2, … , the average number of impressions per day and book is:

ib(n) = 1/n * ∑[k] ik

with ∑[k] meaning the sum over all k. The overall impressions per day are:

i(n) = ib(n) * n

Assuming all books generate the same number of daily impressions, this is linear growth. However, there might be an overproportional factor at work here. As an author keeps publishing, his or her experience in writing, editing and marketing will grow. Especially for initially inexperienced authors, the quality of the books and the marketing approach will improve with each book. Translated into numbers, this means that later books will generate more impressions per day:

i(k+1) > i(k)

which leads to an overproportional (instead of just linear) growth in overall impressions per day with the number of books. Note that more experience should also translate into a higher single impression conversion probability:

qs(n+1) > qs(n)
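The impression model of step 3 in code, with a hypothetical list of per-book daily impressions in which each later book outperforms the previous one (growing experience):

```python
def total_daily_impressions(per_book):
    """Overall impressions per day: average per book times number of books."""
    n = len(per_book)
    avg = sum(per_book) / n   # ib(n) = 1/n * sum over k of ik
    return avg * n            # i(n) = ib(n) * n

# Hypothetical daily impressions for books 1 to 4; each later book
# generates more impressions than the previous one.
impressions = [40, 50, 65, 85]
print(total_daily_impressions(impressions))
```

If all four books performed like the first, the total would be 4 * 40 = 160 impressions per day; the improving per-book numbers push the total above that linear baseline.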

4) As a final treat, let’s look at how these effects impact the marketing R/C. The marketing R/C is the ratio of the revenues generated by an ad to the cost of the ad:

R/C = Revenues / Costs

For an ad to be of worth to an author, this value should be greater than 1. Assume an ad generates a total of iad single impressions. For one book we get the revenues:

R = iad * qs(1)

If more than one book is available, this number changes to:

R = iad * qs(n) * (1 – pⁿ) / (1 – p)

So if the R/C in the case of one book is (R/C)1, the corrected R/C for a larger number of books is:

R/C = (R/C)1 * qs(n) / qs(1) * (1 – pⁿ) / (1 – p)

In short: ads that aren’t profitable for a single book can become profitable as the author offers more books.
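The R/C correction can be checked numerically. The single-book R/C of 0.9 and the unchanged single-impression conversion probability (qs(n) / qs(1) = 1) are hypothetical example values:

```python
def corrected_rc(rc_one_book, qs_ratio, p, n):
    """R/C for n books: (R/C)1 * qs(n)/qs(1) * (1 - p^n) / (1 - p)."""
    return rc_one_book * qs_ratio * (1 - p**n) / (1 - p)

# A hypothetical ad that loses money with one book (R/C = 0.9) ...
rc1 = 0.9
# ... turns profitable once three books are available:
print(corrected_rc(rc1, qs_ratio=1.0, p=0.15, n=3))
```

With p = 0.15 and n = 3 the repeat-purchase factor is about 1.17, which is enough to lift this particular ad above the break-even value of 1.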

For more mathematical modeling check out: Mathematics of Blog Traffic: Model and Tips for High Traffic.

Released Today: More Great Formulas Explained (Ebook for Kindle)

I’m happy to announce that today I’ve released the second volume of the series “Great Formulas Explained”. The aim of the series is to gently explain the greatest formulas the fields of physics, mathematics and economics have brought forth. It is suitable for high-school students, freshmen and anyone else with a keen interest in science. I had a lot of fun writing the series and edited both volumes thoroughly, including double-checking all sources and calculations.

Here are the contents of More Great Formulas Explained:

  • Part I: Physics

Law Of The Lever
Sliding and Overturning
Maximum Car Speed
Range Continued
Escape Velocity
Cooling and Wind-Chill
Adiabatic Processes
Draining a Tank
Open-Channel Flow
Wind-Driven Waves
Heat Radiation
Main Sequence Stars
Electrical Resistance
Strings and Sound

  • Part II: Mathematics

Arbitrary Triangles
Standard Deviation and Error
Zipf Distribution

  • Part III: Appendix

Unit Conversion
Unit Prefixes
Copyright and Disclaimer
Request to the Reader

I will post excerpts in the days to come. If you are interested, click the cover to get to the Amazon product page. Since I’m enrolled in the KDP Select program, the book is exclusively available through Amazon for a constant price of $ 2.99; I will not be offering it through any other retailers in the near future.

Remember what Benjamin Franklin once said: “An investment in knowledge pays the best interest”. An investment in education (be it time or money) can never be wrong. Knowledge is a powerful tool to make you free and independent. I hope I can contribute to bringing knowledge to people all over the world. In this spirit, I have permanently discounted this book, as well as volume I, in India.