Category Archives: Counting/Probability

Bucky’s Dozen

Buckyballs are remarkable structures, and not just to mathematicians. In chemistry, a buckyball—more correctly, a spherical fullerene—is a molecule of carbon atoms forming a hollow spherical “shell.” Many sizes and configurations are possible (and stable!), but due to carbon’s chemical properties they always have a few common characteristics: the faces are rings of 5 or 6 carbon atoms, and each carbon atom bonds to 3 others. The most commonly occurring one, buckminsterfullerene, is made of 60 carbon atoms arranged in the shape of a soccer ball:

Buckminsterfullerene
The buckminsterfullerene molecule has sixty carbon atoms (green spheres) forming a “soccer ball” polyhedron.

These molecules were named after architect Richard Buckminster Fuller due to their resemblance to his famous geodesic domes. Fuller’s domes are also based on spheres built from hexagons and pentagons, and he discovered that these shapes naturally distribute forces and tensions evenly throughout the structure, thus enabling light, sturdy, and enormous constructions, such as the Climatron in St. Louis or the Montreal Biosphère.

Wait, where’s the math?

Mathematically, a fullerene is a convex polyhedron[1] where all faces are either hexagons or pentagons (these need not be regular), and 3 faces (and thus 3 edges) meet at each vertex. The smallest of these mathematical fullerenes is the regular dodecahedron, which has no hexagons at all. The next smallest has two hexagons separated by twelve pentagons.

Smallest Fullerenes
Left: The smallest fullerene polyhedron is the dodecahedron, with 20 vertices and 12 pentagons. Right: The next smallest fullerene is the hexagonal truncated trapezohedron, having 24 vertices, 12 pentagons, and 2 hexagons.

Large and unwieldy fullerenes also exist. Some are highly symmetric with pentagons arranged at the corners of an icosahedron; others form long tubes of hexagons closed at the ends; still others display very little discernible structure whatsoever.

Larger Fullerene Examples
Left: A larger, icosahedrally-symmetric fullerene with 140 vertices and 72 faces. Middle: A tube-like fullerene with 66 vertices and 35 faces whose center is formed by a long spiral of hexagons. Right: a highly asymmetric fullerene with 44 vertices and 24 faces.

Despite their variety of appearances, they all have one thing in common:

All fullerenes have exactly 12 pentagons!

Why is this true?

One explanation comes from Euler’s beautiful formula that relates the number of vertices, edges, and faces in a polyhedron. This formula says, quite simply, that if there are \(v\) vertices, \(e\) edges, and \(f\) faces in any convex polyhedron (not necessarily a fullerene), then \(v-e+f=2\), always! For example, a cube has 8 vertices, 12 edges, and 6 faces, and indeed \(8-12+6=2\).

Let’s apply this to fullerenes. If a fullerene has \(p\) pentagons and \(h\) hexagons, then, since each edge is shared by two faces, there are \(e=\frac{5p+6h}{2}\) edges. Similarly, each vertex is shared by three faces, so there are \(v=\frac{5p+6h}{3}\) vertices. And since there are \(f=p+h\) faces, Euler’s formula tells us that

$$\frac{5p+6h}{3} - \frac{5p+6h}{2} + (p+h) = 2,$$

which indeed simplifies to \(\frac{p}{6}=2\), i.e., \(p=12\): the \(h\) terms cancel completely.
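Here’s a quick symbolic check of that computation, sketched in Python with the sympy library:

```python
from sympy import symbols, solve

p, h = symbols('p h')     # number of pentagons and hexagons
v = (5*p + 6*h) / 3       # each vertex is shared by 3 faces
e = (5*p + 6*h) / 2       # each edge is shared by 2 faces
f = p + h

# Euler's formula v - e + f = 2 forces p = 12, no matter what h is:
print(solve(v - e + f - 2, p))  # -> [12]
```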

So there are always 12 pentagons in a fullerene, but how many hexagons can be present? The two smallest examples tell us that 0 or 2 hexagons are possible, but 1 is not. Can we get 3 hexagons? (Yes. Try this!) What about 33 hexagons? (Harder, but yes. Try this too!) In fact, any number of hexagons is possible, with the one exception we’ve already mentioned. See if you can prove this![2] Furthermore, the number of possible fullerenes grows rapidly as the number of hexagons grows. For example, there are 1812 different fullerenes that, like the soccer ball, have exactly 20 hexagons or, equivalently, 60 vertices. With 50 hexagons (120 vertices), this number jumps to more than 52 billion different fullerenes![3]


Notes

  1. more correctly, the planar graph made by the vertices and edges of such a polyhedron
  2. The “tube-like” fullerene from the last image provides a hint: the spiral of hexagons can have any desired length.
  3. The exact number is 52628839448: OEIS Sequence A007894.

Putting the Why in Wythoff

Last week we drew dots at multiples of the vectors \(v=(\phi,\phi^2)\) and \(w=(\phi^2,\phi)\), and we colored grid-cells green or yellow according to whether they contain a dot or not. Our remaining task was to prove these three facts:

  • Endgame condition: The cell (0,0) is green.
  • Greens lose: From a green cell, there are no other green cells accessible with a single Wythoff’s game move.
  • Yellows win: From any yellow cell, it is possible to move to a green cell with one Wythoff’s game move.

Our guess at the solution to Wythoff's game
Our guess at the solution to Wythoff's game: multiples of vectors v and w (dotted) and their containing cells (dark green).

The “Endgame” condition is easy: the dot at \(0\cdot v = (0,0)\) makes cell (0,0) green. One condition down, two to go!

The “Greens lose” condition asks us to show that no two green cells are in the same row, column, or (slope 1) diagonal—otherwise, you could get from one to the other with a Wythoff move. In fact, we will prove something stronger:

Covering Theorem: Every row, column, and diagonal contains exactly one green cell.

Pictorially, this says that each thick line in the diagrams below will hit one and only one green cell:

Covering the columns and diagonals
All columns (left), diagonals (right), and rows (not pictured) contain exactly one green cell. (Some of these green cells are beyond the image boundaries.) Because the green cell locations are symmetric through the line y=x, the diagram for the rows is redundant and therefore omitted.

It is not difficult to describe why each diagonal is covered, i.e. has exactly one green cell. In the diagram on the right, the central diagonal line has equation \(y-x=0\); the one above it has equation \(y-x=1\); the next is \(y-x=2\); and so on. And because \(\phi^2-\phi=1\), the vector v jumps from one diagonal line to the next at each step. Vector w similarly jumps from line to line in the other direction.

By contrast, I find the fact that each column is covered quite surprising. It says that the upper and lower branches of the “V” perfectly complement each other, each filling exactly the columns that the other skips! Before we prove this fact, let’s see why it helps us.
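Before we get to the proof, we can at least check the theorem numerically. The sketch below builds the green cells directly from the dots: the cell containing \(n\cdot v = (n\phi, n\phi^2)\) is \((\lfloor n\phi\rfloor, \lfloor n\phi^2\rfloor)\), and the \(w\) dots give the mirror images. It then verifies that the first 50 columns and diagonals are each covered exactly once (rows follow by symmetry):

```python
import math

PHI = (1 + math.sqrt(5)) / 2
N = 50  # check columns 0..N and diagonals -N..N

# The cell containing the dot n*v = (n*phi, n*phi^2) is
# (floor(n*phi), floor(n*phi^2)); the w dots give the mirror images.
greens = set()
for n in range(2 * N):
    x, y = math.floor(n * PHI), math.floor(n * PHI ** 2)
    greens.add((x, y))
    greens.add((y, x))

cols = sorted(x for (x, y) in greens if x <= N)
assert cols == list(range(N + 1))        # every column exactly once

diags = sorted(y - x for (x, y) in greens if abs(y - x) <= N)
assert diags == list(range(-N, N + 1))   # every diagonal exactly once

# Rows follow from the columns by the symmetry through y = x.
print("columns and diagonals up to", N, "are each covered exactly once")
```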

If we assume the covering theorem for now, we can finish the three conditions needed for our proof. We already saw that (0,0) is green, and the covering theorem certainly tells us that each row, column, and diagonal has at most one green cell, so the “Greens lose” condition holds. We just have to prove the “Yellows win” property: that you can always move to a green cell from any yellow cell. The idea is simple: if your yellow cell is above the “V”[1] then move down to the unique green cell in your column (which may be on either the upper or lower “V” branch). If you are below the “V”, move left to the green cell in your row. Finally, if you are inside the “V”, move diagonally. In this way, you can always move to a green cell with one Wythoff move.

Proving the "Yellows win" condition
All yellow cells can reach a green cell with one Wythoff move. Yellow cells above the "V" can move down to a green cell; yellow cells below the "V" can move left to a green cell; yellow cells inside the "V" can move diagonally to a green cell.
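Here’s this strategy as a short Python sketch (the function names are mine). It relies on the diagonal observation from earlier: since \(\phi^2 = \phi + 1\), the green cell on diagonal \(n\) sits at \((\lfloor n\phi\rfloor, \lfloor n\phi\rfloor + n)\), which gives a quick test for greenness:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def is_green(x, y):
    """The green cell on diagonal n is (floor(n*phi), floor(n*phi) + n)."""
    x, y = min(x, y), max(x, y)  # green cells are symmetric through y = x
    return x == math.floor((y - x) * PHI)

def winning_move(x, y):
    """From a yellow cell (x, y), find a Wythoff move to a green cell."""
    candidates = ([(x, j) for j in range(y)] +                        # down
                  [(i, y) for i in range(x)] +                        # left
                  [(x - k, y - k) for k in range(1, min(x, y) + 1)])  # diagonal
    for cell in candidates:
        if is_green(*cell):
            return cell
    return None  # no safe move: (x, y) must itself be green

print(winning_move(6, 5))  # -> (3, 5): slide left along the row to a green cell
```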

With this third and last property verified, we’re finally done with our proof! …except that we still need to prove the mysterious covering theorem. We’ll discuss this next week. Come back then for the exciting QED!


Notes

  1. i.e., its lower-left corner is above the line \(y=\phi\cdot x\)

The “Fibonacci”est String

The celebrated Fibonacci Sequence is defined by setting each term equal to the sum of the two previous terms, starting at \(F_0=0\) and \(F_1=1\). So it continues like this:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, …

What’s the relation to last week’s infinite jump sequence,

a b a b b a b a b b a b b a b a b b a b a b b a b b a b a b b a b b … ?

Let’s find out!

Recall from last week that we can group our sequence into blocks of \(A=(ab)\) and \(B=(abb)\), and that the pattern of “short” and “long” blocks is the same as the pattern of as and bs in the original sequence:

Grouping the Jump Sequence, Level 1

This means we can do this grouping again! Specifically, we can group our sequence into “blocks of blocks”, or level 2 blocks, that look like \(A_2=[(ab)(abb)]\) and \(B_2=[(ab)(abb)(abb)]\):

Grouping the Jump Sequence, Level 2

Furthermore, the pattern of short and long level 2 blocks is still the same as the original jump sequence! So we can keep going, grouping our sequence into level 3 blocks \(A_3=A_2 B_2\) and \(B_3=A_2 B_2 B_2\), level 4 blocks \(A_4=A_3 B_3\) and \(B_4=A_3 B_3 B_3\), and so on, thus finding longer and longer patterns that fill up our sequence. Here’s level 3:

Grouping the Jump Sequence, Level 3

One natural question is: what are the lengths of these blocks? Let’s make a table:

| \(n\) | \(A_n\) | \(B_n\) | \(A_n\) length | \(B_n\) length |
| --- | --- | --- | --- | --- |
| 0 | a | b | 1 | 1 |
| 1 | ab | abb | 2 | 3 |
| 2 | ababb | ababbabb | 5 | 8 |
| 3 | ababbababbabb | ababbababbabbababbabb | 13 | 21 |

The lengths are exactly the Fibonacci numbers! (Prove this!) Furthermore, how many as and bs does each block have? Here’s another table:

| \(n\) | \(A_n\) | # of as | # of bs | \(B_n\) | # of as | # of bs |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | a | 1 | 0 | b | 0 | 1 |
| 1 | ab | 1 | 1 | abb | 1 | 2 |
| 2 | ababb | 2 | 3 | ababbabb | 3 | 5 |
| 3 | ababbababbabb | 5 | 8 | ababbababbabbababbabb | 8 | 13 |

Again the Fibonacci numbers! This gives us another way to compute the ratio of as to bs in the jump sequence: The ratios of bs to as in blocks An and Bn are \(F_{2n}/F_{2n-1}\) and \(F_{2n+1}/F_{2n}\) respectively, so as n gets larger these ratios should both approach last week’s answer, the golden ratio \(\phi\).

OK, so our jump sequence is very “Fibonacci”y: it has blocks of Fibonacci lengths with Fibonacci proportions of as and bs. How can it get any “Fibonacci”er than that? Here’s how: Define a different infinite string using the same definition as the Fibonacci numbers themselves: write \(W_0=0\), \(W_1=1\), and after that, each string is formed by writing the previous two next to each other: \(W_n=W_{n-1}W_{n-2}\). The Ws continue like this:

0, 1, 10, 101, 10110, 10110101, 1011010110110, 101101011011010110101, …,

and in the limit they define some infinite string of 0s and 1s that, without a doubt, is as “Fibonacci”y as it gets. But wait: except for the first digit, this is exactly the same as our jump sequence with 0=a and 1=b!
- 1 0 1 1 0 1 0 1 1 0 1 1 0 1 0 1 1 0 1 0 1 1 0 1 1 0 1 0 1 1 0 1 1 ...
a b a b b a b a b b a b b a b a b b a b a b b a b b a b a b b a b b ...
This truly is the Fibonacci numbers reincarnated in a binary string.
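If you want to see this for yourself, here’s a small sketch that builds the strings \(W_n\) and double-checks the Fibonacci statistics from the tables above:

```python
# Fibonacci numbers F_0, F_1, F_2, ...
fib = [0, 1]
while len(fib) < 15:
    fib.append(fib[-1] + fib[-2])

# The strings W_0 = "0", W_1 = "1", and W_n = W_{n-1} W_{n-2}.
W = ['0', '1']
while len(W) < 13:
    W.append(W[-1] + W[-2])

for n, w in enumerate(W):
    assert len(w) == fib[n + 1]    # the lengths are Fibonacci numbers...
    assert w.count('1') == fib[n]  # ...and so are the digit counts
    if n >= 1:
        assert w.count('0') == fib[n - 1]

print(W[7])  # -> 101101011011010110101
```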

A Golden Observation

Last week we saw that the best strategy for Wythoff’s game is to always move to one of the blue squares in the diagram below (if you can). We found the locations of the blue squares with an iterative algorithm that builds outward from (0,0), but we haven’t yet discovered the underlying pattern in how these blue cells are arranged. Let’s go pattern hunting!

As we noticed last week, the blue cells are arranged in a symmetric “V” shape, formed from two seemingly straight lines. What are the slopes of these lines? This is the question du jour.

Jumps in losing positions for Wythoff's game
Jumps between blue cells for Wythoff's game come in just two sizes: a=(1,2) and b=(2,3).

Looking more carefully, the upper branch of the “V” has only two types of “gaps”: consecutive blue cells are separated by either a “short” (1,2) step (a.k.a. a knight’s move) or a “long” (2,3) step. So, if we believe that this “V” branch is a perfect line, then that line’s slope should be somewhere between the slopes of these two steps, namely \(2/1=2\) for the short step and \(3/2=1.5\) for the long one. Knowing more about how these steps are arranged will tell us more about the “V”.

Specifically, label these “short” and “long” steps as a and b respectively. What we care about is the ratio of bs to as. Why? For example, if there were an equal number of as and bs on average, then the slope of the line would be the same as the vector \(a+b=(3,5)\), i.e., the slope would be 5/3. If there were, say, two bs for each a in the long run, then the line would fall in the direction of \(a+2b=(5,8)\), with slope 8/5. Unfortunately, the ratio we seek is neither 1 nor 2; what is this magic number?

Let’s write down the sequence of jumps along our line. From the diagram above we see that it starts a b a b b a …, and using last week’s iterative algorithm, we can compute farther into this infinite sequence:

a b a b b a b a b b a b b a b a b b a b a b b a b b a b a b b a b b …

This sequence seems to be composed of “blocks” of (a b) and (a b b):

(a b)(a b b)(a b)(a b b)(a b b)(a b)(a b b)(a b)(a b b)(a b b)(a b)(a b b)(a b b)…

So for every a there are either one or two bs, meaning the ratio of bs to as is somewhere between 1 and 2. So the slope should be between 5/3 and 8/5. But what is it exactly? To answer this, we need to know how these “short” and “long” blocks are arranged.

Write A = (a b) and B = (a b b), so we can rewrite the sequence of blocks as

A B A B B A B A B B A B B …

Hold on; this sequence looks familiar. It’s exactly the same pattern as our original jump sequence! So it seems that the pattern of blocks is exactly the same as the pattern of the letters themselves! This is a “self-generating” sequence!

In particular, if the ratio of bs to as is some number r, then the ratio of (a b b) blocks to (a b) blocks is also r. But if we have one block of (a b) and r blocks of (a b b) then we have a total of \(1+2r\) bs and \(1+r\) as, so the ratio of bs to as is \(\frac{1+2r}{1+r}=r\). This simplifies to \(1+r=r^2\), and the solution is the shiny number \(r=\frac{1+\sqrt{5}}{2}\), known as the golden ratio and denoted \(\phi\) (Greek letter phi).
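Said differently, the self-generating property means the jump sequence is the fixed point of the substitution a → ab, b → abb (each letter generates its own block). A quick sketch iterating that substitution shows the ratio of bs to as settling down to \(\phi\):

```python
# Iterate the substitution a -> ab, b -> abb, starting from "a",
# and watch the ratio of bs to as converge.
s = 'a'
for _ in range(15):
    s = ''.join('ab' if c == 'a' else 'abb' for c in s)
    print(len(s), s.count('b') / s.count('a'))
# The ratios approach 1.6180339887... = (1 + sqrt(5)) / 2.
```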

Now we know that our purported line should be in the direction of \(a+b\cdot\phi = (1+2\phi,2+3\phi)\), so the slope is \((2+3\phi)/(1+2\phi)=\phi\). And since our “V” shape is symmetric, the other line should have slope \(1/\phi = \frac{\sqrt{5}-1}{2}\). Done and done!

Well, yes and no. We found the slopes of the lines in the “V”, but why do they form lines at all? And what does all this have to do with the Fibonacci numbers?

Wythoff’s Game: Red or Blue?

Let’s play a game; just the two of us. We’ll need a (potentially large!) chess board and a single queen somewhere on the board. We take turns making queen moves, always making progress toward the bottom-left corner of the board. Specifically, in one turn we are allowed to move the queen as far as we want either straight down, straight left, or diagonally down and left (at a 45-degree angle). The first to land the queen on the bottom-left corner wins the game. If you go first and we both play as best as possible, who wins? Well, it depends on where the queen started.

If the queen starts at any of the light-red spaces in the diagram below, you can win on the first move by going straight to the bottom-left corner. On the other hand, if the queen starts at (2,1), then you’re sunk: any move you make lands on a red square and so leaves me in a position to win. So (2,1) is a losing position, indicated by dark blue shading. (The jagged edge reminds us that we could ask the same questions on much bigger grids.)

Wythoff's game: beginning analysis
If the queen starts on a light-red cell, you win on the first move. If she starts on (1,2) or (2,1) (dark blue), you lose.

What should you do if the queen starts at, say, (6,5)? Move to (2,1)! This leaves me in a losing position, so no matter how I respond, you’ll win. So once we find where all the blue losing cells are, your strategy should be: always move to a blue cell, if you can. What if you can only land on red cells? Then you must have started at a blue position, and you’re going to lose.

This also tells us how to find the losing positions, step by step. We know that (2,1) and (1,2) are losing positions, so any position that can reach these in one move is a winning position. But now (3,5) and (5,3) can only reach red cells, so they are blue, and in turn, anything that can directly reach (3,5) or (5,3) is red.

Wythoff's game: continued analysis
Finding blue cells step by step. Left: Because (1,2) and (2,1) are blue, any cells that can reach these directly are red. Right: (3,5) and (5,3) can only reach red cells, so we color them blue. This reveals even more red cells.

Continuing like this, we can fill up our board as follows:

Wythoff's game: completed analysis
Continuing step-by-step as above, all blue cells in the grid are revealed. What patterns can we find?
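For the programmers in the audience, here’s a sketch of that step-by-step computation. Scanning cells outward from (0,0), every cell reachable from \((x,y)\) has already been classified by the time we reach \((x,y)\), so a cell is blue exactly when it shares no row, column, or diagonal with an earlier blue cell:

```python
def blue_cells(limit):
    """Find the blue (losing) cells with both coordinates at most limit."""
    blues = []
    used_rows, used_cols, used_diags = set(), set(), set()
    for x in range(limit + 1):
        for y in range(limit + 1):
            # Every cell reachable from (x, y) was scanned earlier, so
            # (x, y) is blue iff it cannot reach any earlier blue cell.
            if (y not in used_rows and x not in used_cols
                    and x - y not in used_diags):
                blues.append((x, y))
                used_rows.add(y)
                used_cols.add(x)
                used_diags.add(x - y)
    return blues

print(blue_cells(13))
# -> [(0, 0), (1, 2), (2, 1), (3, 5), (4, 7), (5, 3), (6, 10), ...]
```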

Interestingly, the blue positions seem to fit nicely into two straight lines forming a “V”-shape, but the exact pattern governing their placement is less clear (especially if we were to ask about larger grids!). Is there more underlying structure in the blue cells’ positioning?[1] How could we discover the pattern in their placement? And why does that line seem to have slope equal to the golden ratio, \(\phi = (1+\sqrt{5})/2\)?[2] Tune in next week to find out! In the meantime, be sure to play a few rounds of Wythoff’s game with your friends!


Notes

  1. Yes. We call that a leading question…
  2. Correction: that is a leading question.

Dueling Dilettantes

[This is post #4 in a series on the Thue-Morse sequence. See posts #1, #2, and #3.]

What? You don’t agree that the Thue-Morse sequence is awesome? OK, there’s only one way to settle this dispute: I challenge you to a duel. This unfortunately might take a while, because we both have terrible aim.

Let’s say we both have the same probability \(p=.2\) of hitting our target on any given shot, and the first to hit is the winner. I’ll even be gracious and let you go first, and I’ll shoot second. In the first two shots of the game, the probability that you win (i.e., hit your first shot) is \(p=.2\), and the probability that I win (i.e., you miss and then I hit) is \((1-p)\cdot p = .16\). (In the remaining probability \(1-.2-.16=.64\), the duel continues.) This means that you are more likely to win during the first two shots than I am, so it should only be fair that I take the third shot!

How do the probabilities look in the first three shots? Your chances of winning are still \(p=.2\), while mine are \((1-p)\cdot p + (1-p)^2\cdot p = .288\) because I might win on the second or third shot. Now it seems that I have the advantage, so you should take the 4th shot in the interest of fairness. Let’s continue like this, choosing to give the 5th shot to the theoretical underdog from the first 4 rounds, the 6th shot to the underdog in the first 5 rounds, and so on.

You can (and should!) check that the shot order will be Y, M, M, Y, M, Y, Y, M, M, Y, M, Y, Y, M, M, Y, ..., where Y=You and M=Me. Does that sequence look familiar? The first 10 digits look suspiciously like the start of the Thue-Morse sequence, but I don’t recognize the rest of it. Did we make a computation error? Hmm…

The problem, as it turns out, is that our aim is too good! What happens if we make ourselves worse by, say, standing farther apart? Let’s say we now have a \(p=.1\) chance of hitting our mark on each shot. What is the “fair” shot order this time? Our duel will now follow the sequence Y, M, M, Y, M, Y, Y, M, M, Y, Y, M, Y, M, M, Y, M, Y, Y, M, M, Y, Y, ..., which matches the Thue-Morse sequence in the first 20 digits. Similarly, setting \(p=.05\) gives 48 correct Thue-Morse digits, setting \(p=.01\) gives 192 correct Thue-Morse digits, and decreasing to \(p=.001\) gives 1536 matching digits! It can in fact be proved[1] that decreasing \(p\) toward 0 (i.e., making our aim worse) gives more and more Thue-Morse digits, with the full Thue-Morse sequence appearing in the limit.
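If you’d like to experiment with other values of \(p\), here’s a sketch of the greedy “give the next shot to the underdog” computation described above (the function name is mine):

```python
def shot_order(p, num_shots):
    """Always give the next shot to the current theoretical underdog."""
    order = []
    p_you = p_me = 0.0  # each duelist's probability of having won so far
    alive = 1.0         # probability that every shot so far has missed
    for _ in range(num_shots):
        if p_you <= p_me:        # You are the underdog (ties go to You)
            p_you += alive * p
            order.append('Y')
        else:                    # I am the underdog
            p_me += alive * p
            order.append('M')
        alive *= 1 - p
    return ''.join(order)

print(shot_order(0.2, 16))   # -> YMMYMYYMMYMYYMMY, as computed above
print(shot_order(0.01, 16))  # -> YMMYMYYMMYYMYMMY, matching Thue-Morse
```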

See? Even when arguing about whether the Thue-Morse sequence is cool, we find evidence of its greatness! I hereby declare myself the winner of this duel.

This concludes our series on the Thue-Morse sequence. Tune in next week for, well, something else!


Notes

  1. Joshua Cooper and Aaron Dutle. Greedy Galois Games. arXiv, 2011.

Chess, Bored

[This is post #3 in a series on the Thue-Morse sequence, which was introduced two weeks ago and discussed further last week.]

While the Thue-Morse sequence appears in many branches of Mathematics[1], this sequence has also captured a place in the history of chess.

With only the basic rules of chess (specifically, how pieces move and capture and how a game may end by checkmate or stalemate), there is no guarantee that a game of chess will ever finish. For example, two (very bored!) players could theoretically move their knights back and forth ad infinitum without ever engaging each other or capturing a piece (but why would they want to?). A more realistic example is drawn below. In this game, black will soon checkmate white if given the freedom to move, and white has no way to win. However, white can avoid losing by keeping black in an unending runaround with repeated checks, as illustrated. In this case (a phenomenon known as perpetual check), forcing an infinite game is more advantageous for white than allowing an otherwise inevitable loss.

Perpetual Check
Animation frames were made with the WiseBoard Chess Board Editor.

Because of this possibility of never finishing, a search began for new chess rules to guarantee that every game of chess eventually ends. One suggested rule was:

A chess game ends with a draw if a sequence of moves—with all pieces in exactly the same positions—is played three times successively.[2]

The hope was that any sufficiently long chess game would eventually repeat itself enough to break this rule. However, as Mathematician and Chess Grandmaster Max Euwe showed in 1929[3], this hope is not correct. Indeed, there exist infinite chess games that never repeat the same block of moves three times in a row!

How did he prove this? With the Thue-Morse Sequence, of course! He proved the very unintuitive fact that the Thue-Morse sequence has no chunk repeated three times consecutively. Such a thrice-repeated sequence is called a cube, so this property can be restated by saying that the Thue-Morse sequence is cube-free. For example, you will never find the sequences 1 1 1 or 011 011 011 in the Thue-Morse sequence. (Challenge: show that these two sequences do not appear. Harder challenge: show that the Thue-Morse sequence is cube-free!) So, if the “very bored” knight-flippers described above decide to move their right knights or left knights according to the 1s and 0s of the Thue-Morse sequence, the resulting game will have no tripled move sequences.
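You can test Euwe’s fact by brute force on any prefix of the sequence. Here’s a minimal sketch, generating the digits by one standard rule: the \(n\)th Thue-Morse digit is the parity of the number of 1s in the binary expansion of \(n\):

```python
def thue_morse(n):
    """First n Thue-Morse digits: digit k is the parity of 1-bits in k."""
    return [bin(k).count('1') % 2 for k in range(n)]

def has_cube(seq):
    """Does seq contain some block repeated three times in a row?"""
    m = len(seq)
    for start in range(m):
        for size in range(1, (m - start) // 3 + 1):
            if (seq[start:start + size]
                    == seq[start + size:start + 2 * size]
                    == seq[start + 2 * size:start + 3 * size]):
                return True
    return False

print(has_cube(thue_morse(500)))  # -> False: no cube in the first 500 digits
```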

Luckily, the above rule is not part of the current official chess rule set. There are now two rules on the books that can be used to eventually end any chess game. (In fact, either one would suffice. [Prove this!]) A draw may be declared in either of these situations:

  • The exact same board position appears three times ever during a game. (Every detail must be the same, such as whose turn it is, which kings have the ability to castle, etc.)
  • No pawn is moved and no piece is captured in 50 consecutive turns. (A turn consists of one move by each player.)

However, there’s (always) a catch. These two rules only take effect if a player notices and decides to declare a draw. So even with all modern rules in effect, the bored knight-flippers are still allowed to play their infinite game of chess—cube-free or not.


Notes

  1. A great survey called The ubiquitous Prouhet-Thue-Morse sequence by Allouche and Shallit documents many sightings of this sequence throughout Mathematics, including Combinatorics, Differential Geometry, Number Theory, Analysis, and as discussed here, Combinatorial Game Theory.
  2. Manfred Börgens, Max Euwe’s sequence
  3. in his paper titled Mengentheoretische Betrachtungen über das Schachspiel

A Distinctly Odd Coincidence

This week’s topic comes from partition theory. Consider all the ways to write 7 as the sum of positive integers:
$$\begin{align*}
&7, && 3+2+2,\\
&6+1, && 3+2+1+1,\\
&5+2, && 3+1+1+1+1,\\
&5+1+1, && 2+2+2+1,\\
&4+3, && 2+2+1+1+1,\\
&4+2+1, && 2+1+1+1+1+1,\\
&4+1+1+1, && 1+1+1+1+1+1+1.\\
&3+3+1, &&
\end{align*}$$ (We don’t care about order: for example, \(5+2\) and \(2+5\) are the same.) These are called the partitions of 7, and there are 15 of them.

We can also consider imposing constraints on these partitions: For example, how many of these partitions of 7 use only odd integers? These “odd” partitions are
$$\begin{align*}
& 7, && 3+1+1+1+1, \\
& 5+1+1, && 1+1+1+1+1+1+1, \\
& 3+3+1, &&
\end{align*}$$ and there are 5 of them. Here’s a very different question: how many partitions of 7 have no repeated parts? These “distinct” partitions are
$$\begin{align*}
& 7, && 6+1, && 5+2, && 4+3, && 4+2+1,
\end{align*}$$ and there are 5 of these as well.

This must be coincidence, right? Let’s try a bigger example. How many partitions of 11 use only odd parts? You can (and should!) check that there are 12. And how many partitions of 11 use only distinct parts? Again, there are 12. Surprisingly, these numbers will always match:

Theorem (Euler): For any positive integer \(n\), the number of “odd” partitions of \(n\) is the same as the number of “distinct” partitions of \(n\).
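Before we see why, here’s a brute-force check of the theorem for small \(n\) (a quick sketch):

```python
def partitions(n, max_part=None):
    """Yield every partition of n as a weakly decreasing tuple."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

for n in range(1, 16):
    odd = sum(1 for q in partitions(n) if all(x % 2 == 1 for x in q))
    distinct = sum(1 for q in partitions(n) if len(set(q)) == len(q))
    assert odd == distinct  # e.g., n = 7 gives 5 = 5; n = 11 gives 12 = 12
```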

Transforming Between “Odd” and “Distinct” Partitions

Let’s try to make sense of this. Given a partition of \(n\) into odd parts, there is a simple way to transform it into a partition with distinct parts: whenever two parts are the same, add them together, and repeat until they are all distinct. For example, the partition \(7+3+3+3+3+3+3+1+1+1\) of 28 is transformed into
$$\begin{align*}
& 7+(3+3)+(3+3)+(3+3)+(1+1)+1 \\
& \quad \to 7+(6+6)+6+2+1 \\
& \quad \to 12+7+6+2+1,
\end{align*}$$ which has distinct parts. In the other direction, to transform a “distinct” partition to an “odd” one, we can reverse this process: whenever there is an even part, cut it in half and write it twice. Our example \(12+7+6+2+1\) from above becomes
$$\begin{align*}
& (6+6)+7+(3+3)+(1+1)+1 \\
&\quad \to (3+3)+(3+3)+7+3+3+1+1+1,
\end{align*}$$ which is the “odd” partition we started with. These transformations can be shown to match “odd” partitions with “distinct” partitions in pairs, proving that these sets have the same size. (A careful argument uses binary representation.) For example, the 5 partitions of 7 above are paired as follows:
$$\begin{align*}
& 7 &&\leftrightarrow && 7 \\
& 6+1 &&\leftrightarrow && 3+3+1 \\
& 5+2 &&\leftrightarrow && 5+1+1 \\
& 4+3 &&\leftrightarrow && 3+1+1+1+1 \\
& 4+2+1 &&\leftrightarrow && 1+1+1+1+1+1+1.
\end{align*}$$ (If you know something about generating functions, then a different proof follows by proving the identity
$$\frac{1}{(1-x)(1-x^3)(1-x^5)(1-x^7)\cdots} = (1+x)(1+x^2)(1+x^3)(1+x^4)\cdots$$ and noting that the coefficients of \(x^n\) in the expansions of the left and right products equal the number of “odd” and “distinct” partitions of \(n\), respectively.)
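The merging transformation is also easy to carry out by computer. Here’s a sketch of the “odd to distinct” direction (the helper name is mine):

```python
from collections import Counter

def odd_to_distinct(parts):
    """Repeatedly merge pairs of equal parts until all parts are distinct."""
    parts = list(parts)
    while len(set(parts)) < len(parts):
        x = next(p for p, c in Counter(parts).items() if c >= 2)
        parts.remove(x)
        parts.remove(x)
        parts.append(2 * x)
    return sorted(parts, reverse=True)

print(odd_to_distinct([7, 3, 3, 3, 3, 3, 3, 1, 1, 1]))
# -> [12, 7, 6, 2, 1], just as in the example above
```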

More Coincidences

These apparent coincidences are characteristic of many results in partition theory. Try your hand at proving the following generalization:

Theorem: The number of partitions of \(n\) into parts that are not divisible by 10 equals the number of partitions of \(n\) where no part is repeated 10 or more times. The same is true for any positive integer in place of 10.

(Hint: both methods used above can be extended to prove this.) If instead we want the parts to be really distinct and make sure no two parts are equal or consecutive, then we want my favorite partition theory result:

Theorem (Rogers-Ramanujan Identity): The number of partitions of \(n\) where any two parts differ by at least 2 is the same as the number of partitions of \(n\) where every part is one more or one less than a multiple of 5.

Despite the simplicity of the statement, this is much harder to prove than the previous theorems.

Secret Santa (Now with Fewer Hats!)

As we approach the holiday season, some of you may soon participate in—or perhaps even organize—a “Secret Santa” exchange. Specifically, some number of people enter the exchange, and each is assigned one person (their Santee) to give a gift to (or multiple gifts… or other craziness…), in such a way that each person also receives from one person (their Santa). As organizers of such an exchange, how should we choose a permutation of the participants, i.e., an assignment of each person to his/her Santee? Here’s one method: put all names in a hat, and let each player extract a name at random. What could go wrong?

For starters, someone might pick their own name! Shopping for yourself might make the task easier, but this seems to contradict the (Christmas) spirit of the exchange. If we pick randomly as above, and if there are \(n\) people in the exchange, what is the probability that no one picks him/herself? For large \(n\), this answer tends toward \(1/e = .3679\ldots\), where \(e = 2.718\ldots\) is Euler’s number. So, with more than a 63 percent chance, someone is left out. (See below for a proof of this fact.)

What else might go wrong? Perhaps we also want to avoid two people being assigned each other, so that this pair doesn’t feel excluded. What is the chance of a randomly chosen permutation having no excluded person or pair? This can be computed to tend toward \(1/e^{3/2} = .2231\ldots\), so there is some left out person or pair with almost 80% probability. And if we wish to make sure there are no excluded groups of size at most 6, say, then the chance of failure limits to $$1-e^{-1-\frac{1}{2}-\frac{1}{3}-\frac{1}{4}-\frac{1}{5}-\frac{1}{6}} = .9137$$ for large \(n\).

So, while Santa hats may be fluffy and festive, Secret Santa hats are a bad idea. What should we do to organize our exchange? Here’s one option: just make a big cycle. Specifically, write all names in some random order and chain them up, e.g., \(2\to 6\to 7\to 3\to 1\to 4\to 5\to 2\) (if the “names” are just 1, 2, …, 7).

Where do these numbers come from?

How can we compute the probabilities mentioned above? One common method to prove the \(1/e\) probability uses the principle of Inclusion-Exclusion, but here is a sketch of an argument that doesn’t rely on this technique. Let \(A_n\) be the number of permutations of \(n\) people with no person assigned to him/herself. Such a permutation is called a Derangement, and a direct counting argument can be used to show that \(A_n = (n-1)(A_{n-1} + A_{n-2})\) (see the Wikipedia page for derangements, for example). As there are \(n!\) total permutations of the players, the desired probability is \(P_n = A_n/n!\), so the above formula can be manipulated to \(P_n-P_{n-1} = -\frac{1}{n}(P_{n-1}-P_{n-2})\). This produces \(P_n-P_{n-1} = \frac{(-1)^n}{n!}\) by induction, so we obtain $$P_n = 1-\frac{1}{1!}+\frac{1}{2!}-\frac{1}{3!}+\cdots+\frac{(-1)^n}{n!}.$$ As \(n\) gets big this value tends toward the infinite sum \(\sum_{i=0}^\infty \frac{(-1)^i}{i!} = 1/e\).
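Here’s a quick numerical check of that convergence (a small sketch):

```python
import math

# Derangement counts via A_n = (n - 1) * (A_{n-1} + A_{n-2}).
A = [1, 0]  # A_0 = 1 (the empty assignment), A_1 = 0
for n in range(2, 16):
    A.append((n - 1) * (A[n - 1] + A[n - 2]))

for n in (5, 10, 15):
    print(n, A[n] / math.factorial(n))  # -> 0.36666..., 0.36787946..., ...
print(1 / math.e)                       # -> 0.36787944117144233
```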

The other probabilities mentioned take more work. A well-suited tool is that of (Exponential) Generating Functions, which in this case help organize a more complicated Inclusion-Exclusion argument. Using these, it can be shown that the probability that a random permutation of \(n\) players has no groups of size 6 or less is equal to the coefficient of \(x^n\) in the power series expansion of $$\frac{1}{1-x}\cdot e^{-\left(x+\frac{x^2}{2} + \cdots + \frac{x^6}{6}\right)}.$$
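If you’d like to extract those coefficients without expanding the series by hand, a computer algebra system does it nicely. Here’s a sketch with sympy, using \(n=20\) players for illustration (that choice of \(n\) is mine):

```python
from sympy import Rational, exp, symbols

x = symbols('x')

# Generating function for "no excluded group of size <= 6":
f = exp(-sum(x**k / k for k in range(1, 7))) / (1 - x)

# Coefficient of x^20: the probability for n = 20 players.
c20 = f.series(x, 0, 21).removeO().coeff(x, 20)
print(float(c20))

# The limit as n grows: e^{-(1 + 1/2 + ... + 1/6)} = e^{-49/20} ~ 0.0863.
print(float(exp(-Rational(49, 20))))
```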