Category Archives: Algebra

Just Beatty It

[This is the 6th post in the current series about Wythoff’s game: see posts #1, #2, #3, #4, and #5. Caveat lector: this post is a bit more difficult than usual. Let me know what you think in the comments!]

Our only remaining task from last week was to prove the mysterious Covering Theorem: we must show that there is exactly one dot in each row and column of the grid (we already covered the diagonal case). Since the rows and columns are symmetric, let’s focus on columns.

The columns really only care about the x-coordinates of the points, so let’s draw just these x-coordinates on the number-line. We’ve drawn \(\phi,2\phi,3\phi,\ldots\) with small dots and \(\phi^2,2\phi^2,3\phi^2,\ldots\) with large dots. We need to show that there’s exactly one dot between 1 and 2, precisely one dot between 2 and 3, just one between 3 and 4, and so on down the line. For terminology’s sake, break the number line into length-1 intervals [1,2], [2,3], [3,4], etc., so we must show that each interval has one and only one dot:

[Figure: Multiples of \(\phi\) (small blue dots) and of \(\phi^2\) (large green dots) perfectly interleave the integers on the number line. This is proved below.]

Why is this true? One explanation hinges on a nice geometric observation: Take any small dot s and large dot t on our number-line above, and cut segment st into two parts in the ratio \(1:\phi\) (with s on the shorter side). Then the point where we cut is always an integer! For example, the upper-left segment in the diagram below has endpoints at \(s=2\cdot\phi\) and \(t=1\cdot\phi^2\), and its cutting point is the integer 3:

[Figure: When a segment formed by a small and a large dot is cut into a \(1:\phi\) ratio (closer to the small dot), the cutting point is always an integer. In fact, the segment between \(j\cdot \phi\) and \(k\cdot \phi^2\) is cut at the integer j+k.]

In general, if s is the jth small dot—i.e., \(s=j\cdot\phi\)—and \(t=k\cdot\phi^2\) is the kth large dot, then the cutting point between s and t is \(\frac{1}{\phi}\cdot s+\frac{1}{\phi^2}\cdot t = j+k\) (Why?![1]). But more importantly, this observation shows that no interval has two or more dots: a small dot and a large dot can’t be in the same interval because they always have an integer between them![2]
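If you’d like to see this observation numerically, here’s a quick check in Python (not part of the argument, just an illustration; floating-point arithmetic can only suggest, not prove, that the cut points are integers):

```python
import math

phi = (1 + math.sqrt(5)) / 2         # the golden ratio; recall 1/phi + 1/phi^2 = 1

for j in range(1, 5):                # j-th small dot:  s = j * phi
    for k in range(1, 5):            # k-th large dot:  t = k * phi^2
        s, t = j * phi, k * phi ** 2
        cut = s / phi + t / phi ** 2          # cut the segment st in the ratio 1:phi
        print(f"j={j}, k={k}:  cut point = {cut:.9f}   (j + k = {j + k})")
```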

So all we have to do now is prove that no interval is empty: for each integer n, some dot lies in the interval [n,n+1]. We will prove this by contradiction. What happens if no dot hits this interval? Then the sequence \(\phi,2\phi,3\phi,\ldots\) jumps over the interval, i.e., for some j, the jth dot in the sequence is less than n but the (j+1)st is greater than n+1. Likewise, the sequence \(\phi^2,2\phi^2,3\phi^2,\ldots\) jumps over the interval: its kth dot is less than n while its (k+1)st dot is greater than n+1:

[Figure: The hypothetical situation where interval [n, n+1] contains no dot. This is used in a proof by contradiction to show that the interval in fact cannot be empty.]

By our observation above applied to the segment between \(s=j\phi\) and \(t=k\phi^2\), the integer j+k lies between two numbers that are both less than n, so \(j+k\le n-1\). Similarly, applying the observation to the segment between the (j+1)st small dot and the (k+1)st large dot shows that \(j+k+2 > n+1\), so \(j+k+2 \ge n+2\), i.e., \(j+k\ge n\). But together these inequalities say that \(n\le j+k\le n-1\), which is clearly absurd! This is the contradiction we were hoping for, so the interval [n,n+1] is in fact not empty. This completes our proof of the Covering Theorem and the Wythoff formula!

It was a long journey, but we’ve finally seen exactly why the Wythoff losing positions are arranged as they are. Thank you for following me through this!

A Few Words on the Column Covering Theorem

Using the floor function \(\lfloor x\rfloor\) that rounds x down to the nearest integer, we can restate the Column Covering Theorem in perhaps a more natural context. The sequence of integers $$\lfloor\phi\rfloor = 1, \lfloor 2\phi\rfloor = 3, \lfloor 3\phi\rfloor = 4, \lfloor 4\phi\rfloor = 6, \ldots$$ is called the Beatty sequence for the number \(\phi\), and similarly, $$\lfloor\phi^2\rfloor = 2, \lfloor 2\phi^2\rfloor = 5, \lfloor 3\phi^2\rfloor = 7, \lfloor 4\phi^2\rfloor = 10,\ldots$$ is the Beatty sequence for \(\phi^2\). Today we proved that these two sequences are complementary, i.e., together they contain each positive integer exactly once. We seemed to use very specific properties of the numbers \(\phi\) and \(\phi^2\), but in fact, a much more general theorem is true:

Beatty’s Theorem: If \(\alpha\) and \(\beta\) are any positive irrational numbers with \(\frac{1}{\alpha}+\frac{1}{\beta}=1\), then their Beatty sequences \(\lfloor\alpha\rfloor, \lfloor 2\alpha\rfloor, \lfloor 3\alpha\rfloor,\ldots\) and \(\lfloor\beta\rfloor, \lfloor 2\beta\rfloor, \lfloor 3\beta\rfloor,\ldots\) are complementary sequences.

Furthermore, our same argument—using \(\alpha\) and \(\beta\) instead of \(\phi\) and \(\phi^2\)—can be used to prove the more general Beatty’s Theorem!
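If you’d like to watch the theorem in action, here’s a small Python check for \(\alpha=\phi\) and \(\beta=\phi^2\) (it tests only finitely many integers, and only up to floating-point rounding, so it is a sanity check rather than a proof):

```python
import math

phi = (1 + math.sqrt(5)) / 2     # here alpha = phi and beta = phi^2, so 1/alpha + 1/beta = 1
alpha, beta = phi, phi ** 2

N = 1000                         # test the integers 1..N
A = {math.floor(n * alpha) for n in range(1, N + 1)}
B = {math.floor(n * beta) for n in range(1, N + 1)}

print("every integer 1..N appears:", set(range(1, N + 1)) <= (A | B))
print("no integer appears in both: ", not (A & B))
```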


Notes

  1. Hint: use the identity \(\frac{1}{\phi}+\frac{1}{\phi^2}=1\).
  2. To be thorough, we should also check that no interval has two small or two large dots. Why can’t this happen?

Logic Under Construction

In last week’s discussion of proofs by contradiction and nonconstructive proofs, we showed:

Theorem: There exist irrational numbers \(x\) and \(y\) with the property that \(x^y\) is rational.

However, our proof was nonconstructive: it did not pinpoint explicit values for \(x\) and \(y\) that satisfy the condition, instead proving only that such numbers must exist. Would a more constructive proof be more satisfying? Let’s see! I claim that \(x=\sqrt{2}\) and \(y=\log_2 9\) work: we already know that \(\sqrt{2}\) is irrational, \(y=\log_2 9\) can be proved irrational in a similar way (try it!), and $$x^y = \sqrt{2}^{\log_2 9} = \sqrt{2}^{\log_{\sqrt{2}}3}=3,$$ which is rational. (The middle step uses the fact that \(\log_2 9 = 2\log_2 3 = \log_{\sqrt{2}} 3\).)
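If you want to see the numbers for yourself, here is a tiny sanity check in Python (floating point, so the printed value is only approximately 3):

```python
import math

x = math.sqrt(2)
y = math.log2(9)     # log_2 9, which is irrational (try proving it!)
print(x ** y)        # prints 3.0 up to floating-point rounding, since sqrt(2)^(log_2 9) = 3
```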

Let’s further discuss why last week’s proof was less satisfying. The following rephrasing of this proof may help shed some light on the situation:

Proof: Assume the theorem were false, so that any time \(x\) and \(y\) were irrational, \(x^y\) would also be irrational. This would imply that \(\sqrt{2}^{\sqrt{2}}\) would be irrational, and by applying our assumption again, \(\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}}\) would also be irrational. But this last number equals 2, which is rational. This contradiction disproves our assumption and thereby proves the theorem, QED.

So perhaps this argument seems less satisfactory simply because it is, at its core, a proof by contradiction. It does not give us evidence for the positive statement “\(x\) and \(y\) exist”, but instead only for the negative statement “\(x\) and \(y\) don’t not exist.” (Note the double negative.) This distinction is subtle, but a similar phenomenon can be found in the English language: the double negative “not bad” does not mean “good” but instead occupies a hazy middle-ground between the two extremes. And even though we don’t usually think of such a middle-ground existing between logic’s “true” and “false”, proofs by contradiction fit naturally into this haze. In fact, these ideas motivate a whole branch of mathematical logic called Constructive logic that disallows double negatives and proofs by contradiction, instead requiring concrete, constructive justifications for all statements.

But wait; last week’s proof that \(\sqrt{2}\) is irrational used contradiction, and therefore is not acceptable in constructive logic. Can we prove this statement constructively? We must show that \(\sqrt{2}\) is not equal to any rational number; what does it even mean to do this constructively? First, we turn it into a positive statement: we must show that \(\sqrt{2}\) is unequal to every rational number. And how do we constructively prove that two numbers are unequal? By showing that they are measurably far apart. So, here is a sketch of a constructive proof: \(\sqrt{2}\) is unequal to every rational number \(a/b\) because $$\left|\sqrt{2} - \frac{a}{b}\right| \ge \frac{1}{3b^2}.$$ See if you can verify this inequality![1]
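Here is a small Python sketch that spot-checks the inequality for every fraction with a modest denominator; a finite search is of course no substitute for the constructive proof, and fractions far from \(\sqrt{2}\) are skipped since they satisfy the bound trivially:

```python
import math

sqrt2 = math.sqrt(2)

# Spot-check |sqrt(2) - a/b| >= 1/(3 b^2) for every fraction a/b with 1 <= b < 500.
# Fractions with a <= 0 or a/b > 3 are at distance more than 1 from sqrt(2), so we skip them.
for b in range(1, 500):
    for a in range(1, 3 * b + 1):
        assert abs(sqrt2 - a / b) >= 1 / (3 * b * b), (a, b)

print("inequality verified for every a/b with 1 <= b < 500")
```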

PS. In case you are still wondering whether \(\sqrt{2}^{\sqrt{2}}\) is rational or irrational: It is irrational (moreover, transcendental), but the only proof that I know uses a very difficult theorem of Gelfond and Schneider.


Notes

  1. This inequality would also have to be proven in a constructive manner. See these Wikipedia articles for more information: Intuitionistic logic (another name for Constructive logic) and Square root of 2: Constructive proof.

Methods of Irrationality

Being a mathematician requires you to think in strange ways.

For starters, you might think that mathematicians spend all day pondering purely theoretical things that only exist in Mathematical abstraction, right? Well, it can get even stranger than that: sometimes we have to think about things that don’t exist, even in the abstraction! It’s called a proof by contradiction, and here’s an example:

Theorem: The number \(\sqrt{2}\) is irrational.

Proof: To prove that \(\sqrt{2}\) is irrational, we have to show that we can never write it as a ratio of integers, \(\sqrt{2}=a/b\). So let’s assume we can find such a fraction and see where it leads us.

Let’s cancel common factors in the numerator and denominator so that \(a/b\) is in lowest terms. Squaring our equation shows that \(a^2=2b^2\), so \(a^2\) is even and so \(a\) itself is even. Since \(a/b\) is in lowest terms, \(b\) must be odd.

Since \(a\) is even, \(a^2\) is in fact divisible by 4. On the other hand, since \(b\) is odd, \(2b^2\) is not divisible by 4. But then the integer \(a^2=2b^2\) is both divisible by 4 and not divisible by 4, which is absurd! The only explanation for this contradiction is that our original assumption—that \(\sqrt{2}\) is rational—is false. So we’re done with the proof! Notice that we spent most of our effort reasoning about integers that never existed (specifically \(a\) and \(b\)).
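No finite computation can replace this argument, but for reassurance, here is a tiny Python search for a counterexample; it finds none, exactly as the proof predicts:

```python
import math

# Look for a denominator b such that 2*b^2 is a perfect square (i.e. sqrt(2) = a/b).
hits = [b for b in range(1, 10 ** 6) if math.isqrt(2 * b * b) ** 2 == 2 * b * b]
print(hits)   # -> [] : no denominator up to a million works
```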

But that’s not the only twisted thing about Mathematical thinking. Here’s a delightfully short yet aggravatingly unsatisfying proof:

Theorem: There exist irrational numbers \(x\) and \(y\) such that \(x^y\) is rational.

Proof: We already know that \(\sqrt{2}\) is irrational, so maybe we can use \(\sqrt{2}^{\sqrt{2}}\). Is this number rational? If it is, then we’re done: \(x=\sqrt{2},y=\sqrt{2}\) solves the problem. But what if \(\sqrt{2}^{\sqrt{2}}\) is irrational? In this case, I claim \(x=\sqrt{2}^{\sqrt{2}},y=\sqrt{2}\) works: indeed, $$\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\sqrt{2} \cdot\sqrt{2}} = \sqrt{2}^2 = 2,$$ which is rational. In either case the required numbers \(x\) and \(y\) exist, so this completes the proof.
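Numerically, the two candidates look like this in Python; note that a decimal approximation of \(\sqrt{2}^{\sqrt{2}}\) cannot tell us whether it is rational or irrational, which is exactly the point:

```python
import math

r = math.sqrt(2)
a = r ** r        # sqrt(2)^sqrt(2) = 1.6325269...; rational or irrational? the proof doesn't say
print(a)
print(a ** r)     # (sqrt(2)^sqrt(2))^sqrt(2) = sqrt(2)^2 = 2, up to rounding
```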

But wait; which is it? The proof shows us that one of the pairs \(x=\sqrt{2},y=\sqrt{2}\) or \(x=\sqrt{2}^{\sqrt{2}},y=\sqrt{2}\) works, but it doesn’t tell us which one! So, have we proven the theorem? Yes, technically, but only nonconstructively.

Frustrating, isn’t it?

Astounding “Natural” Identities

The modest sequence \(1,2,3,4,\ldots\) can do some rather awe-inspiring things, when properly arranged. Here’s a short list of some of its many impressive feats.

There are numerous expressions for \(\pi\) relying on the progression of integers, including the Wallis formula: $$\frac{\pi}{2} = \frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdots$$ (which can be derived from Complex Analysis using an infinite product representation for the sine function) and an elegant alternating sum: $$\frac{\pi-3}{4} = \frac{1}{2\cdot 3\cdot 4}-\frac{1}{4\cdot 5\cdot 6}+\frac{1}{6\cdot 7\cdot 8}-\cdots$$ (try to prove this!). Euler’s number \(e\) has similarly surprising formulas, such as $$\frac{1}{e-2} = 1+\frac{1}{2+\frac{2}{3+\frac{3}{4+\frac{4}{5+\cdots}}}}$$ (listed at MathWorld) and $$\frac{e}{e-1} = 1+\frac{1+\frac{1+\frac{1+\cdots}{2+\cdots}}{2+\frac{2+\cdots}{3+\cdots}}}{2+\frac{2+\frac{2+\cdots}{3+\cdots}}{3+\frac{3+\cdots}{4+\cdots}}}$$ (which is problem 1745 in Mathematics Magazine, posed by Gerald A. Edgar).
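For the curious, here is a rough Python sketch that evaluates truncations of three of these formulas; the truncation depths are arbitrary, and the numerics only illustrate the convergence:

```python
import math

# Wallis' product for pi/2: (2/1)(2/3) * (4/3)(4/5) * (6/5)(6/7) * ...
prod = 1.0
for n in range(1, 100001):
    prod *= (2 * n) / (2 * n - 1) * (2 * n) / (2 * n + 1)
print(2 * prod, math.pi)          # converges, but quite slowly

# The alternating series for (pi - 3)/4.
total, sign = 0.0, 1
for n in range(1, 1001):
    k = 2 * n
    total += sign / (k * (k + 1) * (k + 2))
    sign = -sign
print(3 + 4 * total, math.pi)     # already very close to pi

# The continued fraction 1/(e - 2) = 1 + 1/(2 + 2/(3 + 3/(4 + ...))), built from the bottom up.
tail = 50.0                       # truncate the fraction at depth 50
for n in range(49, 1, -1):
    tail = n + n / tail
print(1 + 1 / tail, 1 / (math.e - 2))
```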

The list doesn’t stop here! The nested square-root identity $$3 = \sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$$ is attributed to Ramanujan (on this Wikipedia page). As another curiosity, the sequence $$\frac{1}{2},\qquad
\frac{1}{2} \Big/ \frac{3}{4},\qquad
\frac{\frac{1}{2} \big/ \frac{3}{4}}{\frac{5}{6} \big/ \frac{7}{8}},\qquad
\frac{\frac{1}{2} \big/ \frac{3}{4}}{\frac{5}{6} \big/ \frac{7}{8}} \bigg/ \frac{\frac{9}{10} \big/ \frac{11}{12}}{\frac{13}{14} \big/ \frac{15}{16}},\qquad\ldots$$ (which relates to the Thue-Morse sequence) can be shown to converge to \(\sqrt{2}/2\).
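Both curiosities can be sampled in a few lines of Python, building the radical from the inside out and taking the Thue–Morse-signed product over a complete block of \(2^{16}\) factors (finite truncations, so the agreement is only approximate):

```python
import math

# Ramanujan's radical sqrt(1 + 2 sqrt(1 + 3 sqrt(1 + 4 sqrt(1 + ...)))), built from the inside out.
t = 1.0                           # arbitrary innermost value; its influence dies off quickly
for k in range(60, 1, -1):
    t = math.sqrt(1 + k * t)
print(t)                          # extremely close to 3

# The Thue-Morse product: use (2n+1)/(2n+2) or its reciprocal according to
# the parity of the number of 1-bits in n.  The limit is sqrt(2)/2.
p = 1.0
for n in range(2 ** 16):
    sign = -1 if bin(n).count("1") % 2 else 1
    p *= ((2 * n + 1) / (2 * n + 2)) ** sign
print(p, math.sqrt(2) / 2)
```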

There are many pretty/unexpected/crazy formulas obtainable from the natural numbers \(1,2,3,4,\ldots\) that could not fit in this post. What are some of your favorites?

Generating Primes with Formulas

Prime numbers—namely, integers greater than 1 with no divisors other than 1 and themselves—have long fascinated mathematicians. As a result, there have been many attempts at formulas to generate prime numbers. For example, the polynomial \(x^2-x+41\) spits out a prime number for \(x = 0, 1, 2, \ldots, 40\). But for \(x = 41\), the value is \(41^2\), which is not prime.

Can we find some nonconstant polynomial \(f(x)\) with integer coefficients that only spits out prime values? (The constant polynomial \(f(x) = 2\) always returns a prime, but this is not very interesting.) In a word: No. There is no such polynomial. To prove this, assume \(f\) is such a polynomial. Then \(f(0) = p\) for some prime \(p\), and all of the numbers \(\{f(p), f(2p), f(3p), \ldots \}\) are divisible by \(p\), since each \(f(kp)\) leaves the same remainder as \(f(0)\) upon division by \(p\). This sequence grows toward infinity, so one of its terms is greater than \(p\). Since that term has a factor of \(p\), it is not prime.
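Here is the whole story in a few lines of Python (the trial-division primality test is just for illustration):

```python
def is_prime(n):
    """Trial-division primality test; fine for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def f(x):
    return x * x - x + 41

print(all(is_prime(f(x)) for x in range(41)))    # True: prime for x = 0, 1, ..., 40
print(f(41), is_prime(f(41)))                    # 1681 = 41^2 is not prime

# The impossibility argument in miniature: f(0) = 41, and every f(41k) is a multiple of 41.
print([f(41 * k) % 41 for k in range(5)])        # [0, 0, 0, 0, 0]
```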

What if we switch from polynomials to exponents? Maybe the function \(2^m + 1\) will give us lots of primes. It can be shown that if \(2^m + 1\) is prime then \(m\) itself has to be a power of 2. (Indeed, if a prime \(p > 2\) divides \(m\), check that \(2^{m/p} + 1\) divides \(2^m + 1\).) So the only possibilities are the numbers \(F_n = 2^{2^n} + 1\), which are called Fermat numbers. The first few are $$\begin{align*}
F_0 &= 3,\\
F_1 &= 5,\\
F_2 &= 17,\\
F_3 &= 257,\\
F_4 &= 65537.
\end{align*}$$ It was long believed that all Fermat numbers were prime (Fermat himself conjectured this!), but in fact the very next one, \(F_5 = 4294967297 = 641 \cdot 6700417\), is composite. (In Fermat’s defense, this is a large number to factor by hand!) Are there infinitely many Fermat primes? We currently don’t know. In fact, the Fermat primes shown above are the only known Fermat primes!
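A few lines of Python confirm the small cases; trial division is fast enough here, and it even stumbles on the factor 641 of \(F_5\):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for k in range(6):
    F = 2 ** (2 ** k) + 1
    print(f"F_{k} = {F}   prime? {is_prime(F)}")   # F_0..F_4 are prime, F_5 is not

print(641 * 6700417 == 2 ** 32 + 1)                # True: the factorization of F_5
```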

Similarly, numbers of the form \(2^n-1\) are called Mersenne numbers. In order for this to be prime, it is necessary for \(n\) itself to be prime. (Indeed, if \(a\) divides \(n\) then \(2^a-1\) divides \(2^n-1\).) So is \(2^p-1\) prime for every prime \(p\)? Unfortunately, no: \(2^{11}-1\) is not prime. But there seem to be many Mersenne primes. Thanks to the “Great Internet Mersenne Prime Search,” a.k.a. GIMPS, there are 47 Mersenne primes known, the largest of which is \(2^{43112609}-1\), which has 12978189 digits (in base 10).
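The Mersenne side of the story can be checked the same way; in the range below, every composite \(2^p-1\) happens to have a small prime factor, so naive trial division finishes quickly:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(2 ** 11 - 1)          # 2047
print(2047 == 23 * 89)      # True, so 2^11 - 1 is not prime even though 11 is

# Which primes p below 60 give Mersenne primes 2^p - 1?
exponents = [p for p in range(2, 60) if is_prime(p) and is_prime(2 ** p - 1)]
print(exponents)            # [2, 3, 5, 7, 13, 17, 19, 31]
```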

OK, so the above “prime-generating” functions don’t work very well. Here’s one that does work. It can be shown that there is some polynomial \(Z(a, b, c, d, e, f, g, h, i, j)\) with 10 variables and integer coefficients such that any positive output from \(Z\) (when given integer inputs) is prime, and furthermore, every prime is obtained in this way. (Note: \(Z\) spits out many non-positive values as well.) Unfortunately, this polynomial \(Z\) is not very practical. For one thing, its degree is about \(38!\). (Yes, I mean 38 factorial.)