
Here’s a math problem that everybody can solve: What is 1 − 1? 0. So far so good. If we then add a 1, the sum grows, but if we subtract yet another 1, we’re back at 0. Let’s say we keep doing this forever:
1 – 1 + 1 – 1 + 1 – 1 + ...
What is the resulting sum? The question seems simple, silly even, but it puzzled some of the greatest mathematicians of the 18th century. Paradoxes surround the problem because multiple seemingly sound arguments about the sum reach radically different conclusions. The first person to deeply investigate it thought it explained how God created the universe. Its resolution in modern terms illustrates that mathematics is a more human enterprise than sometimes appreciated.
Take a guess at what you think the infinite sum equals. I’ll give you multiple choices:
A. 0
B. 1
C. ½
D. It does not equal anything
The argument for 0 comes naturally if we include suggestive parentheses:
(1 – 1) + (1 – 1) + (1 – 1) + ...
Recall that in mathematics the order of operations dictates that we evaluate expressions inside parentheses before those outside. Each (1 − 1) cancels to 0, so the above works out to 0 + 0 + 0 + ..., which clearly amounts to nothing.
Yet a slight shift of the brackets yields a different result. If we set aside the first 1, then the second and third terms cancel, as do the fourth and fifth:
1 + (–1 + 1) + (–1 + 1) + (–1 + 1) + …
Again, all the parentheticals add up to 0, but we have this extra positive 1 at the beginning, which suggests that the whole expression sums to 1.
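To see the two arguments side by side, here’s a minimal Python sketch (my own illustration, nothing from Grandi) that applies each grouping to a finite truncation of the series:

```python
# Contrast the two groupings on finite truncations of Grandi's series
# 1 - 1 + 1 - 1 + ...

terms = [(-1) ** k for k in range(10)]  # [1, -1, 1, -1, ...]

# Grouping (1 - 1) + (1 - 1) + ...: pair terms from the start.
pairs = [terms[i] + terms[i + 1] for i in range(0, len(terms), 2)]
print(sum(pairs))  # 0 -- every pair cancels

# Grouping 1 + (-1 + 1) + (-1 + 1) + ...: set aside the first term,
# then pair the rest. (Use 11 terms so the pairing comes out even.)
terms = [(-1) ** k for k in range(11)]
shifted_pairs = [terms[i] + terms[i + 1] for i in range(1, len(terms), 2)]
print(terms[0] + sum(shifted_pairs))  # 1 -- the leading 1 survives
```

Note the quiet trick: each grouping only pairs up cleanly if the series is cut off at the right place, with an even count of terms for the first grouping and an odd count for the second. For any finite cutoff the answer is simply determined by that parity; the paradox only appears when we insist the infinite series has a single value.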
Italian monk and mathematician Luigi Guido Grandi first investigated the series (the sum of infinitely many numbers) in 1703. Grandi, after whom the series is now named, observed that by merely shifting around parentheses he could make the series sum to 0 or 1. According to math historian Giorgio Bagni, this arithmetic inconsistency held theological significance for Grandi, who believed it showed that creation out of nothing was “perfectly plausible.”
The series summing to both 0 and 1 seems paradoxical, but surely option C (½) is no less troubling. How could a sum of infinitely many integers ever yield a fraction? Yet ultimately Grandi and many prominent 18th-century mathematicians after him thought the answer was ½. Grandi argued for this with a parable: Imagine that two brothers inherit a single gem from their father, and each keeps it in their own museum on alternating years. If this tradition of passing the gem back and forth carried on with their descendants, then the two families would each have ½ ownership over the gem.
As proofs go, I wouldn’t recommend putting the gem story on your next math test. German mathematician Gottfried Wilhelm Leibniz agreed with Grandi’s conclusion, but he tried to support it with probabilistic reasoning. Leibniz argued that if you stop summing the series at a random point, then your sum up to that point will be either 0 or 1 with equal probability, so it makes sense to average them to ½. He thought the result was correct but acknowledged that his argument was more “metaphysical than mathematical.”
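Leibniz’s averaging idea is easy to simulate. Here’s a rough sketch (my own, assuming his “stop at a random point” setup):

```python
import random

# Simulate Leibniz's argument: truncate Grandi's series at a random
# point and record the partial sum, which is always 0 or 1.
samples = []
for _ in range(10_000):
    n = random.randint(1, 100)                  # random stopping point
    partial = sum((-1) ** k for k in range(n))  # 1 if n is odd, else 0
    samples.append(partial)

print(sum(samples) / len(samples))  # hovers near 0.5, Leibniz's average
```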
Swiss mathematician Leonhard Euler employed more complicated methods to argue for ½ and addressed those who disagreed in a rather defensive paragraph in his 1760 paper De seriebus divergentibus (translation: “On Divergent Series”). Euler asserted that “no doubt can remain that in fact the series 1 − 1 + 1 − 1 + 1 − 1 + etc. and the fraction ½ are equivalent quantities and that it is always permitted to substitute one for the other without error.” So a lot of smart people were strongly in favor of option C.
Infinite series like this one have flummoxed thinkers dating back at least to the ancient Greeks with Zeno of Elea’s paradoxes of motion. In a well-known example, Zeno observed that to walk a path, one must first traverse half of it, then must traverse half of the remaining distance (¼ of the total path), and then half of the remaining distance (⅛), and so on. One can keep subdividing forever, which suggests that every time we walk a path we complete an infinite number of actions in a finite amount of time—a paradox.
While philosophers still debate the metaphysics of Zeno’s paradoxes some 2,400 years later, mathematicians did make a substantial leap toward resolving them and the mystery of Grandi’s series in the late 19th century. From the foundations of calculus emerged clarifying definitions about when infinite series sum to finite values. Finding the answer begins with looking at partial sums—add the first two terms, then the first three, then the first four, and so on. If these intermediate sums continue to get closer and closer to a fixed value, then we say the series “converges” to that value. Let’s apply this to the series in Zeno’s paradox, which sums half of a path plus a quarter of a path plus an eighth of a path, and so on.
1/2 + 1/4 + 1/8 + 1/16 + ...
The first two terms sum to 0.75, the first three terms sum to 0.875, and the first four sum to 0.9375. If we summed the first 10 terms, we’d get 0.9990234375. The partial sums get closer and closer to 1, so the series converges to 1. Although we can conceive of a path as an infinite number of distances, calculus confirms that it still ultimately amounts to one path.
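Those figures are easy to reproduce. A quick sketch of the partial-sum computation:

```python
# Partial sums of Zeno's series 1/2 + 1/4 + 1/8 + ...: each step adds
# half of the remaining distance.
partial = 0.0
for n in range(1, 11):
    partial += 1 / 2 ** n
    print(n, partial)  # 0.5, 0.75, 0.875, 0.9375, ..., 0.9990234375
```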
The partial sums of Grandi’s series oscillate between 0 and 1 without ever homing in on a single value. So modern mathematicians would choose option D (Grandi’s series does not sum to anything).
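The same check makes the oscillation plain:

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ...: the running total
# alternates between 1 and 0 and never settles on a single value.
partial = 0
for k in range(10):
    partial += (-1) ** k
    print(partial)  # 1, 0, 1, 0, ...
```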
The resolution of Grandi’s series raises a sociological question. Why does the mathematical community accept the partial-sum approach but not Leibniz’s probabilistic argument or some other prescription for summing an infinite series? Although they may look alike and smell alike, summing an infinite series is not the same as addition. Addition does not change when you shift parentheses around—for example, 1 + (2 + 3) = (1 + 2) + 3—but many series, including Grandi’s, do. For convenience, mathematicians borrow words like “summing” and “equals” from addition to discuss series, but under the hood what they really mean when they say Zeno’s series “sums to 1” or “equals 1” is that the partial sums converge to 1, no more and no less.
The partial-sum definition of convergence isn’t arbitrary. The math community prefers it to alternatives for good reasons. It alleviates a lot of the paradoxes that beset earlier mathematicians who studied infinite sums, and it preserves many of the nice properties that finite addition enjoys. But other definitions of convergence are useful as well. For instance, rather than asking what number the partial sums approach, the Cesàro summation method takes the average of the first two partial sums, then the first three partial sums, and then the first four partial sums, and so on ad infinitum, and asks what those averages approach. If you apply this tweaked method to a convergent series like Zeno’s, it will always give you the same answer. But it sometimes will give a different answer when applied to series that do not converge under the standard definition. In particular, Grandi’s series has a Cesàro sum of ½.
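Here’s a small sketch of that averaging procedure applied to Grandi’s series (my illustration of the Cesàro method exactly as described above):

```python
# Cesàro summation of Grandi's series: average the first n partial sums
# and watch what the averages approach as n grows.
partial = 0
partial_sums = []
for n in range(1, 11):
    partial += (-1) ** (n - 1)       # partial sums: 1, 0, 1, 0, ...
    partial_sums.append(partial)
    print(n, sum(partial_sums) / n)  # 1.0, 0.5, 0.667, 0.5, 0.6, ...
# The averages oscillate ever more tightly around 0.5: the Cesàro sum.
```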
Many other summation methods appear in the mathematical literature. In reality, we can’t physically add an infinite number of things, so summation methods simply provide principled ways of assigning values to infinite series. The partial-sum definition holds deserved status as the default, but it occasionally helps to have other options.
Curiously, Grandi’s series sums to ½ under most alternative methods. So a colloquial answer to our opening question might be: Grandi’s series does not sum to anything, but if it did, it would sum to ½.
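One well-known alternative (my example; the article doesn’t name one) is Abel summation: damp the k-th term by a factor x^k with x slightly below 1, sum the now-convergent series, and let x approach 1. For Grandi’s series the damped sum is 1/(1 + x), which tends to ½:

```python
# A sketch of Abel summation applied to Grandi's series: weight the k-th
# term by x**k, sum the convergent result, and let x approach 1 from below.
for x in (0.9, 0.99, 0.999):
    weighted = sum((-1) ** k * x ** k for k in range(10_000))
    print(x, weighted)  # 0.526..., 0.502..., 0.500...: approaching 0.5
```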
21 sats \ 0 replies \ @ken 7 Sep
It's an infinite sum, so unfortunately you run out of time before you can add up all of the terms. Many mathematicians have tried doing the sum but most of them get bored and give up. Perhaps one day, someone will be motivated enough to sum it up. But nobody wants to work anymore
@dontforgetthekeys, what are your thoughts on this article?
I thought it was pretty interesting. Not so much in a mathematical way but in how we develop theories over time. How we add meaning to things and how we sometimes miss the mark trying to prove something because we're not looking at what is right in front of us.
They wanted a definite solution to an infinite problem without working on it indefinitely. How? The numbers keep changing for as long as you observe them. No one thought about how much weight the observation itself carried in the answer.
This is pre-quantum thinking. We now know that the answer can be both 0 and 1 at the same time.
I thought it was an interesting example of "we don't know what we don't know, until we know."