## Two sample tests as correlation tests

Suppose we have two samples $Y_1^{(0)}, Y_2^{(0)},\ldots, Y_{n_0}^{(0)}$ and $Y_1^{(1)},Y_2^{(1)},\ldots, Y_{n_1}^{(1)}$ and we want to test if they are from the same distribution. Many popular tests can be reinterpreted as correlation tests by pooling the two samples and introducing a dummy variable that encodes which sample each data point comes from. In this post we will see how this plays out in a simple t-test.

### The equal variance t-test

In the equal variance t-test, we assume that $Y_i^{(0)} \stackrel{\text{iid}}{\sim} \mathcal{N}(\mu_0,\sigma^2)$ and $Y_i^{(1)} \stackrel{\text{iid}}{\sim} \mathcal{N}(\mu_1,\sigma^2)$, where $\sigma^2$ is unknown. Our hypothesis that $Y_1^{(0)}, Y_2^{(0)},\ldots, Y_{n_0}^{(0)}$ and $Y_1^{(1)},Y_2^{(1)},\ldots, Y_{n_1}^{(1)}$ are from the same distribution becomes the hypothesis $\mu_0 = \mu_1$. The test statistic is

$t = \frac{\displaystyle \overline{Y}^{(1)} - \overline{Y}^{(0)}}{\displaystyle \hat{\sigma}\sqrt{\frac{1}{n_0}+\frac{1}{n_1}}}$,

where $\overline{Y}^{(0)}$ and $\overline{Y}^{(1)}$ are the two sample means. The variable $\hat{\sigma}$ is the pooled estimate of the standard deviation and is given by

$\hat{\sigma}^2 = \displaystyle\frac{1}{n_0+n_1-2}\left(\sum_{i=1}^{n_0}\left(Y_i^{(0)}-\overline{Y}^{(0)}\right)^2 + \sum_{i=1}^{n_1}\left(Y_i^{(1)}-\overline{Y}^{(1)}\right)^2\right)$.

Under the null hypothesis, $t$ follows the t-distribution with $n_0+n_1-2$ degrees of freedom. We thus reject the null $\mu_0=\mu_1$ when $|t|$ exceeds the $1-\alpha/2$ quantile of this t-distribution.
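As a quick numerical check (on simulated data, which is my own illustration rather than part of the derivation), we can compute the statistic above by hand and compare it with SciPy's equal-variance two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y0 = rng.normal(loc=0.0, scale=1.0, size=12)  # sample 0
y1 = rng.normal(loc=0.5, scale=1.0, size=15)  # sample 1
n0, n1 = len(y0), len(y1)

# Pooled variance estimate with n0 + n1 - 2 degrees of freedom
sigma2_hat = (((y0 - y0.mean()) ** 2).sum()
              + ((y1 - y1.mean()) ** 2).sum()) / (n0 + n1 - 2)
t = (y1.mean() - y0.mean()) / np.sqrt(sigma2_hat * (1 / n0 + 1 / n1))

# Two-sided p-value from the t-distribution with n0 + n1 - 2 degrees of freedom
p = 2 * stats.t.sf(abs(t), df=n0 + n1 - 2)

# SciPy's equal-variance two-sample t-test gives the same numbers
t_scipy, p_scipy = stats.ttest_ind(y1, y0, equal_var=True)
assert np.isclose(t, t_scipy) and np.isclose(p, p_scipy)
```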

### Pooling the data

We can turn this two sample test into a correlation test by pooling the data and using a linear model. Let $Y_1,\ldots,Y_{n_0}, Y_{n_0+1},\ldots,Y_{n_0+n_1}$ be the pooled data and for $i = 1,\ldots, n_0+n_1$, define $x_i \in \{0,1\}$ by

$x_i = \begin{cases} 0 & \text{if } 1 \le i \le n_0,\\ 1 & \text{if } n_0+1 \le i \le n_0+n_1.\end{cases}$

The assumptions that $Y_i^{(0)} \stackrel{\text{iid}}{\sim} \mathcal{N}(\mu_0,\sigma^2)$ and $Y_i^{(1)} \stackrel{\text{iid}}{\sim} \mathcal{N}(\mu_1,\sigma^2)$ can be rewritten as

$Y_i = \beta_0+\beta_1x_i + \varepsilon_i,$

where $\varepsilon_i \stackrel{\text{iid}}{\sim} \mathcal{N}(0,\sigma^2)$. That is, we have expressed our modelling assumptions as a linear model. In this linear model, the hypothesis $\mu_0 = \mu_1$ is equivalent to $\beta_1 = 0$. To test $\beta_1 = 0$ we can use the standard t-test for a coefficient in a linear model. The test statistic in this case is

$t' = \displaystyle\frac{\hat{\beta}_1}{\hat{\sigma}_{OLS}\sqrt{(X^TX)^{-1}_{11}}},$

where $\hat{\beta}_1$ is the ordinary least squares estimate of $\beta_1$, $X \in \mathbb{R}^{(n_0+n_1)\times 2}$ is the design matrix and $\hat{\sigma}_{OLS}$ is an estimate of $\sigma$ given by

$\hat{\sigma}_{OLS}^2 = \displaystyle\frac{1}{n_0+n_1-2}\sum_{i=1}^{n_0+n_1} (Y_i-\hat{Y}_i)^2,$

where $\hat{Y}_i = \hat{\beta}_0+\hat{\beta}_1x_i$ is the fitted value of $Y_i$.

It turns out that $t'$ is exactly equal to $t$. We can see this by writing out the design matrix and calculating everything above. The design matrix has rows $[1,x_i]$ and is thus equal to

$X = \begin{bmatrix} 1&x_1\\ 1&x_2\\ \vdots&\vdots\\ 1&x_{n_0}\\ 1&x_{n_0+1}\\ \vdots&\vdots\\ 1&x_{n_0+n_1}\end{bmatrix} = \begin{bmatrix} 1&0\\ 1&0\\ \vdots&\vdots\\ 1&0\\ 1&1\\ \vdots&\vdots\\ 1&1\end{bmatrix}.$

This implies that

$X^TX = \begin{bmatrix} n_0+n_1 &n_1\\n_1&n_1 \end{bmatrix},$

and therefore,

$(X^TX)^{-1} = \frac{1}{(n_0+n_1)n_1-n_1^2}\begin{bmatrix} n_1 &-n_1\\-n_1&n_0+n_1 \end{bmatrix} = \frac{1}{n_0n_1}\begin{bmatrix} n_1&-n_1\\-n_1&n_0+n_1\end{bmatrix} =\begin{bmatrix} \frac{1}{n_0}&-\frac{1}{n_0}\\-\frac{1}{n_0}&\frac{1}{n_0}+\frac{1}{n_1}\end{bmatrix} .$

Thus, $(X^TX)^{-1}_{11} = \frac{1}{n_0}+\frac{1}{n_1}$, the bottom-right entry (we index the entries from $0$ to match the coefficients $\beta_0,\beta_1$). So,

$t' = \displaystyle\frac{\hat{\beta}_1}{\hat{\sigma}_{OLS}\sqrt{\frac{1}{n_0}+\frac{1}{n_1}}},$

which is starting to look like $t$ from the two-sample test. Now

$X^TY = \begin{bmatrix} \displaystyle\sum_{i=1}^{n_0+n_1} Y_i\\ \displaystyle \sum_{i=n_0+1}^{n_0+n_1} Y_i \end{bmatrix} = \begin{bmatrix} n_0\overline{Y}^{(0)} + n_1\overline{Y}^{(1)}\\ n_1\overline{Y}^{(1)} \end{bmatrix}.$

And so

$\hat{\beta} = (X^TX)^{-1}X^TY = \begin{bmatrix} \frac{1}{n_0}&-\frac{1}{n_0}\\-\frac{1}{n_0}&\frac{1}{n_0}+\frac{1}{n_1}\end{bmatrix}\begin{bmatrix} n_0\overline{Y}^{(0)} + n_1\overline{Y}^{(1)}\\ n_1\overline{Y}^{(1)} \end{bmatrix}=\begin{bmatrix} \overline{Y}^{(0)}\\ \overline{Y}^{(1)} -\overline{Y}^{(0)}\end{bmatrix}.$

Thus, $\hat{\beta}_1 = \overline{Y}^{(1)} -\overline{Y}^{(0)}$ and

$t' = \displaystyle\frac{\overline{Y}^{(1)}-\overline{Y}^{(0)}}{\hat{\sigma}_{OLS}\sqrt{\frac{1}{n_0}+\frac{1}{n_1}}}.$

This means to show that $t' = t$, we only need to show that $\hat{\sigma}_{OLS}^2=\hat{\sigma}^2$. To do this, note that the fitted values $\hat{Y}$ are equal to

$\displaystyle\hat{Y}_i=\hat{\beta}_0+x_i\hat{\beta}_1 = \begin{cases} \overline{Y}^{(0)} & \text{if } 1 \le i \le n_0,\\ \overline{Y}^{(1)} & \text{if } n_0+1\le i \le n_0+n_1\end{cases}.$

Thus,

$\hat{\sigma}^2_{OLS} = \displaystyle\frac{1}{n_0+n_1-2}\sum_{i=1}^{n_0+n_1}\left(Y_i-\hat{Y}_i\right)^2=\displaystyle\frac{1}{n_0+n_1-2}\left(\sum_{i=1}^{n_0}\left(Y_i^{(0)}-\overline{Y}^{(0)}\right)^2 + \sum_{i=1}^{n_1}\left(Y_i^{(1)}-\overline{Y}^{(1)}\right)^2\right),$

which is exactly $\hat{\sigma}^2$. Therefore, $t'=t$ and the two-sample t-test is equivalent to a correlation test.
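The equality $t' = t$ can also be verified numerically. The sketch below (again on simulated data) builds the design matrix from the dummy variable, fits the linear model by ordinary least squares, and compares the coefficient t-statistic with the two-sample statistic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n0, n1 = 10, 14
y0 = rng.normal(0.0, 1.0, size=n0)
y1 = rng.normal(1.0, 1.0, size=n1)

# Pool the data and build the design matrix with rows [1, x_i]
y = np.concatenate([y0, y1])
x = np.concatenate([np.zeros(n0), np.ones(n1)])
X = np.column_stack([np.ones(n0 + n1), x])

# Ordinary least squares fit and the coefficient t-statistic t'
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma2_ols = resid @ resid / (n0 + n1 - 2)
t_prime = beta_hat[1] / np.sqrt(sigma2_ols * np.linalg.inv(X.T @ X)[1, 1])

# It matches the two-sample statistic exactly
t_two_sample, _ = stats.ttest_ind(y1, y0, equal_var=True)
assert np.isclose(t_prime, t_two_sample)
```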

### The Friedman-Rafsky test

In the example above, we saw that the two-sample t-test is a special case of the t-test for a regression coefficient. This is neat, but both tests make very strong assumptions about the data. The same phenomenon, however, occurs in a more interesting non-parametric setting.

In their 1979 paper, Jerome Friedman and Lawrence Rafsky introduced a two-sample test that makes no assumptions about the distribution of the data. The two samples do not even have to be real-valued and can instead come from any metric space. It turns out that their test is a special case of another procedure they devised for testing for association (Friedman & Rafsky, 1983). As with the t-tests above, this connection comes from pooling the two samples and introducing a dummy variable.

I plan to write a follow-up post explaining these procedures, but you can also read about them in Chapter 6 of Group Representations in Probability and Statistics by Persi Diaconis.

### References

Persi Diaconis, “Group Representations in Probability and Statistics,” pp. 104–106, Hayward, CA: Institute of Mathematical Statistics (1988).

Jerome H. Friedman and Lawrence C. Rafsky, “Multivariate Generalizations of the Wald-Wolfowitz and Smirnov Two-Sample Tests,” The Annals of Statistics 7(4), 697–717 (1979).

Jerome H. Friedman and Lawrence C. Rafsky, “Graph-Theoretic Measures of Multivariate Association and Prediction,” The Annals of Statistics 11(2), 377–391 (1983).

## Non-measurable sets, cardinality and the axiom of choice

The following post is based on a talk I gave at the 2022 Stanford statistics retreat. The talk was titled “Another non-measurable monster”.

The material was based on the discussion and references given in this stackexchange post. The title is a reference to a Halloween lecture on measurability given by Professor Persi Diaconis.

### What’s scarier than a non-measurable set?

Making every set measurable. Or rather one particular consequence of making every set measurable.

In my talk, I argued that if you make every set measurable, then there exists a set $\Omega$ and an equivalence relation $\sim$ on $\Omega$ such that $|\Omega| < |\Omega / \sim|$. That is, the set $\Omega$ has strictly smaller cardinality than the set of equivalence classes $\Omega/\sim$. The contradictory nature of this statement is illustrated in the picture below.

To make sense of this we’ll first have to be a bit more precise about what we mean by cardinality.

### What do we mean by bigger and smaller?

Let $A$ and $B$ be two sets. We say that $A$ and $B$ have the same cardinality, and write $|A| = |B|$, if there exists a bijective function $f:A \to B$. We can think of the function $f$ as a way of pairing each element of $A$ with a unique element of $B$ such that every element of $B$ is paired with an element of $A$.

We next want to define $|A|\le |B|$, which means that $A$ has cardinality at most that of $B$. There are two reasonable ways in which we could try to define this relationship:

1. We could say $|A|\le |B|$ means that there exists an injective function $f : A \to B$.
2. Alternatively, we could say $|A|\le |B|$ means that there exists a surjective function $g:B \to A$.

Definitions 1 and 2 say similar things and, in the presence of the axiom of choice, they are equivalent. Since we are going to be making every set measurable in this talk, we won’t be assuming the axiom of choice. Definitions 1 and 2 are thus no longer equivalent, and we have a decision to make. We will use definition 1 in this talk. For justification, note that definition 1 implies that there exists a subset $B' \subseteq B$ such that $|A|=|B'|$: we simply take $B'$ to be the range of $f$. This is a desirable property of the relation $|A|\le |B|$, and it’s not clear how it could be achieved using definition 2.

### Infinite binary sequences

It’s time to introduce the set $\Omega$ and the equivalence relation we will be working with. The set $\Omega$ is $\{0,1\}^\mathbb{Z}$, the set of all functions $\omega : \mathbb{Z} \to \{0,1\}$. We can think of each element $\omega \in \Omega$ as an infinite sequence of zeros and ones stretching off in both directions. For example,

$\omega = \ldots 1110110100111\ldots$.

But this picture hides something important. Each $\omega \in \Omega$ has a “middle”, which is the point $\omega_0$. For instance, the two sequences below look the same, but when we make $\omega_0$ bold we see that they are different.

$\omega = \ldots 111011\mathbf{0}100111\ldots$,

$\omega' = \ldots 1110110\mathbf{1}00111\ldots$.

The equivalence relation $\sim$ on $\Omega$ can be thought of as forgetting the location of $\omega_0$. More formally, we have $\omega \sim \omega'$ if and only if there exists $n \in \mathbb{Z}$ such that $\omega_{n+k} = \omega_{k}'$ for all $k \in \mathbb{Z}$. That is, if we shift the sequence $\omega$ by $n$ we get the sequence $\omega'$. We will use $[\omega]$ to denote the equivalence class of $\omega$ and $\Omega/\sim$ for the set of all equivalence classes.

### Some probability

Associated with the space $\Omega$ are functions $X_k : \Omega \to \{0,1\}$, one for each integer $k \in \mathbb{Z}$. These functions simply evaluate $\omega$ at $k$; that is, $X_k(\omega)=\omega_k$. A probabilist or statistician would think of $X_k$ as reporting the result of one of infinitely many independent coin tosses. Normally, to make this formal we would have to first define a $\sigma$-algebra on $\Omega$ and then define a probability on this $\sigma$-algebra. Today we’re working in a world where every set is measurable, and so we don’t have to worry about $\sigma$-algebras. Indeed, we have the following result:

(Solovay, 1970)1 There exists a model of the Zermelo-Fraenkel axioms of set theory in which there is a probability $\mathbb{P}$ defined on all subsets of $\Omega$ under which the $X_k$ are i.i.d. $\mathrm{Bernoulli}(0.5)$.

This result says that there is a world in which all the usual axioms of set theory, other than the axiom of choice, hold. And in this world, we can assign a probability to every subset $A \subseteq \Omega$ in such a way that the events $\{X_k=1\}$ are all independent and each has probability $0.5$. It’s important to note that this is a true countably additive probability, and we can apply all our familiar probability results to $\mathbb{P}$. We are now ready to state and prove the spooky result claimed at the start of this talk.

Proposition: Given the existence of such a probability $\mathbb{P}$, $|\Omega | < |\Omega /\sim|$.

Proof: Let $f:\Omega/\sim \to \Omega$ be any function. To show that $|\Omega|<|\Omega /\sim|$ we need to show that $f$ is not injective. To do this, we’ll first define another function $g:\Omega \to \Omega$ given by $g(\omega)=f([\omega])$. That is, $g$ first maps $\omega$ to $\omega$’s equivalence class and then applies $f$ to this equivalence class. This is illustrated below.

We will show that $g : \Omega \to \Omega$ is almost surely constant with respect to $\mathbb{P}$. That is, there exists $\omega^\star \in \Omega$ such that $\mathbb{P}(g(\omega)=\omega^\star)=1$. Each equivalence class $[\omega]$ is at most countable and, since $\mathbb{P}$ assigns probability zero to each individual sequence, has probability zero under $\mathbb{P}$. This means that if $g$ is almost surely constant, then $f$ cannot be injective and must map multiple (in fact infinitely many) equivalence classes to $\omega^\star$.

It thus remains to show that $g:\Omega \to \Omega$ is almost surely constant. To do this we will introduce a third function $\varphi : \Omega \to \Omega$. The map $\varphi$ is simply the shift map, given by $\varphi(\omega)_k = \omega_{k+1}$. Note that $\omega$ and $\varphi(\omega)$ are in the same equivalence class for every $\omega\in \Omega$. Thus, the map $g$ satisfies $g\circ \varphi = g$. That is, $g$ is $\varphi$-invariant.

The map $\varphi$ is ergodic. This means that if $A \subseteq \Omega$ satisfies $\varphi(A)=A$, then $\mathbb{P}(A)$ equals $0$ or $1$. For example, if $A$ is the event that $10110$ appears at some point in $\omega$, then $\varphi(A)=A$ and $\mathbb{P}(A)=1$. Likewise, if $A$ is the event that the relative frequency of heads converges to a number strictly greater than $0.5$, then $\varphi(A)=A$ and $\mathbb{P}(A)=0$. The general claim that all $\varphi$-invariant events have probability $0$ or $1$ can be proved using the independence of the $X_k$.
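As a finite-sample illustration of these two events (this is a simulation aside, not part of the proof), we can toss a fair coin many times and check both the limiting frequency and the appearance of the pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
flips = rng.integers(0, 2, size=100_000)  # i.i.d. Bernoulli(0.5) coin flips

# The relative frequency of heads converges to 0.5 (strong law of large
# numbers), so the invariant event "the limit exceeds 0.5" has probability 0
freq = flips.mean()
assert abs(freq - 0.5) < 0.01

# The pattern 10110 appears somewhere with probability 1 in an infinite
# sequence; it already shows up in a modest finite sample
s = ''.join(map(str, flips[:1000]))
assert '10110' in s
```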

For each $k$, define an event $A_k$ by $A_k = \{\omega : g(\omega)_k = 1\}$. Since $g$ is $\varphi$-invariant we have that $\varphi(A_k)=A_k$. Thus, $\mathbb{P}(A_k)=0$ or $1$. This gives us a function $\omega^\star :\mathbb{Z} \to \{0,1\}$ given by $\omega^\star_k = \mathbb{P}(A_k)$. Note that for every $k$, $\mathbb{P}(\{\omega : g(\omega)_k = \omega_k^\star\}) = 1$. This is because if $\omega_{k}^\star=1$, then $\mathbb{P}(\{\omega: g(\omega)_k = 1\})=1$, by definition of $\omega_k^\star$. Likewise, if $\omega_k^\star =0$, then $\mathbb{P}(\{\omega:g(\omega)_k=1\})=0$ and hence $\mathbb{P}(\{\omega:g(\omega)_k=0\})=1$. Thus, in both cases, $\mathbb{P}(\{\omega : g(\omega)_k = \omega_k^\star\})= 1$.

Since $\mathbb{P}$ is a countably additive probability measure, a countable intersection of probability-one events has probability one. We can therefore conclude that

$\mathbb{P}(\{\omega : g(\omega)=\omega^\star\}) = \mathbb{P}\left(\bigcap_{k \in \mathbb{Z}} \{\omega : g(\omega)_k = \omega_k^\star\}\right)=1$.

Thus, $g$ maps $\Omega$ to $\omega^\star$ with probability one, showing that $g$ is almost surely constant and hence that $f$ is not injective. $\square$

### There’s a catch!

So we have proved that there cannot be an injective map $f : \Omega/\sim \to \Omega$. Does this mean we have proved $|\Omega| < |\Omega/\sim|$? Technically, no. We have proved the negation of $|\Omega/\sim|\le |\Omega|$, which does not imply $|\Omega| \le |\Omega/\sim|$. To argue that $|\Omega| < |\Omega/\sim|$ we need to produce an injective map $h: \Omega \to \Omega/\sim$. Surprisingly, this is possible and not too difficult. The idea is to find a map $h : \Omega \to \Omega$ such that $h(\omega)\sim h(\omega')$ implies $\omega = \omega'$; the map $\omega \mapsto [h(\omega)]$ is then injective. This can be done by somehow encoding in $h(\omega)$ where the centre of $\omega$ is.

### A simpler proof and other examples

Our proof was nice because we explicitly calculated the value $\omega^\star$ to which $g$ sent almost all of $\Omega$. We could have been less explicit and simply noted that the function $g:\Omega \to \Omega$ was measurable with respect to the invariant $\sigma$-algebra of $\varphi$ and hence almost surely constant by the ergodicity of $\varphi$.

This quicker proof allows us to generalise our “spooky result” to other sets. Below are two examples where $\Omega = [0,1)$:

• Fix $\theta \in [0,1)\setminus \mathbb{Q}$ and define $\omega \sim \omega'$ if and only if $\omega + n \theta = \omega' \pmod 1$ for some $n \in \mathbb{Z}$.
• $\omega \sim \omega'$ if and only if $\omega - \omega' \in \mathbb{Q}$.

A similar argument shows that in Solovay’s world $|\Omega| < |\Omega/\sim|$ for these examples too; the argument follows from the ergodicity of the corresponding actions on $\Omega$ under the uniform measure.

### Three takeaways

I hope you agree that this example is good fun and surprising. I’d like to end with some remarks.

• The first remark is some mathematical context. The argument given today is linked to an interesting area of mathematics called descriptive set theory. This field studies the properties of well-behaved subsets (such as Borel subsets) of topological spaces. Descriptive set theory incorporates logic, topology and ergodic theory. I don’t know much about the field, but in Persi’s Halloween talk he said that one “monster” was that few people are interested in the subject.
• The next remark is a better way to think about our “spooky result”. The result is really saying something about cardinality. When we no longer use the axiom of choice, cardinality becomes a subtle concept. The statement $|A|\le |B|$ no longer corresponds to $A$ being “smaller” than $B$ but rather that $A$ is “less complex” than $B$. This is perhaps analogous to some statistical models which may be “large” but do not overfit due to subtle constraints on the model complexity.
• In light of the previous remark, I would invite you to think about whether the example I gave is truly spookier than non-measurable sets. It might seem to you that it is simply a reasonable consequence of removing the axiom of choice and restricting ourselves to functions we could actually write down or understand. I’ll let you decide.

### Footnotes

1. Technically, Solovay proved that there exists a model of set theory in which every subset of $\mathbb{R}$ is Lebesgue measurable. To get the result for binary sequences we have to restrict to $[0,1)$ and use the binary expansion of $x \in [0,1)$ to define a function $[0,1) \to \Omega$. Solovay’s paper is available here: https://www.jstor.org/stable/1970696?seq=1

## Why is the fundamental theorem of arithmetic a theorem?

The fundamental theorem of arithmetic states that every natural number can be factorized uniquely as a product of prime numbers. The word “uniquely” here means unique up to rearranging. The theorem means that if you and I take the same number $n$ and I write $n = p_1p_2\ldots p_k$ and you write $n = q_1q_2\ldots q_l$ where each $p_i$ and $q_i$ is a prime number, then in fact $k=l$ and we wrote the same prime numbers (but maybe in a different order).

Most people happily accept this theorem as self-evident and believe it without proof. Indeed, some people take it to be so self-evident that they feel it doesn’t really deserve the name “theorem” – hence the title of this blog post. In this post I want to highlight two situations where an analogous theorem fails.

## Situation One: The Even Numbers

Imagine a world where everything comes in twos. In this world nobody knows of the number one or indeed any odd number. Their counting numbers are the even numbers $\mathbb{E} = \{2,4,6,8,\ldots\}$. People in this world can add numbers and multiply numbers just like we can. They can even talk about divisibility, for example $2$ divides $8$ since $8 = 4\cdot 2$. Note that things are already getting a bit strange in this world. Since there is no number one, numbers in this world do not divide themselves.

Once people can talk about divisibility, they can talk about prime numbers. A number is prime in this world if it is not divisible by any other number. For example $2$ is prime but as we saw $8$ is not prime. Surprisingly the number $6$ is also prime in this world. This is because there are no two even numbers that multiply together to make $6$.
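This notion of primality is easy to check by brute force. A minimal sketch (the function name is mine, not from the text): a number in this world is prime exactly when no two even numbers multiply to give it.

```python
def is_prime_in_E(n: int) -> bool:
    """In the even-number world E, n is prime when no two even
    numbers multiply together to give n."""
    assert n > 0 and n % 2 == 0
    return not any(n % a == 0 and (n // a) % 2 == 0 for a in range(2, n, 2))

# The primes among the first few even numbers turn out to be exactly
# the numbers of the form 2 * (odd), i.e. those congruent to 2 mod 4
primes = [n for n in range(2, 21, 2) if is_prime_in_E(n)]
print(primes)  # [2, 6, 10, 14, 18]
```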

If a number is not prime in this world, we can reduce it to a product of primes. This is because if $n$ is not prime, then there are two numbers $a$ and $b$ such that $n = ab$. Since $a$ and $b$ are both smaller than $n$, we can apply the same argument and recursively write $n$ as a product of primes.

Now we can ask whether the fundamental theorem of arithmetic holds in this world. Namely, we want to know if there is a unique way to factorize each number in this world. To get an idea, we can start with some small even numbers.

• $2$ is prime.
• $4 = 2 \cdot 2$ can be factorized uniquely.
• $6$ is prime.
• $8 = 2\cdot 2 \cdot 2$ can be factorized uniquely.
• $10$ is prime.
• $12 = 2 \cdot 6$ can be factorized uniquely.
• $14$ is prime.
• $16 = 2\cdot 2 \cdot 2 \cdot 2$ can be factorized uniquely.
• $18$ is prime.
• $20 = 2 \cdot 10$ can be factorized uniquely.

Thus it seems as though there might be some hope for this theorem. It at least holds for the first handful of numbers. Unfortunately we eventually get to $36$ and we have:

$36 = 2 \cdot 18$ and $36 = 6 \cdot 6$.

Thus there are two distinct ways of writing $36$ as a product of primes in this world, and so the fundamental theorem of arithmetic does not hold.
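A small brute-force search confirms this. The sketch below (the helper names are mine) enumerates every way of writing $36$ as a product of primes of the even-number world:

```python
def is_prime_in_E(n):
    # An even n is prime in this world if no two even numbers multiply to n
    return n % 2 == 0 and not any(n % a == 0 and (n // a) % 2 == 0
                                  for a in range(2, n, 2))

def factorizations(n, smallest=2):
    """Yield every non-decreasing tuple of even-world primes with product n."""
    if is_prime_in_E(n) and n >= smallest:
        yield (n,)
    for p in range(smallest, n // 2 + 1, 2):
        if n % p == 0 and is_prime_in_E(p):
            for rest in factorizations(n // p, p):
                yield (p,) + rest

print(sorted(factorizations(36)))  # [(2, 18), (6, 6)] -- two distinct factorizations
```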

## Situation Two: A Number Ring

While the first example is fun and interesting, it is somewhat artificial. We are unlikely to encounter a situation where we only have the even numbers. It is, however, common and natural for mathematicians to be led into certain worlds called number rings. We will see one example here, and see what effect the fundamental theorem of arithmetic can have.

Consider wanting to solve the equation $x^2+19=y^3$ where $x$ and $y$ are both integers. One way to try to solve this is by rewriting the equation as $(x+\sqrt{-19})(x-\sqrt{-19}) = y^3$. With this rewriting we have left the familiar world of the whole numbers and entered the number ring $\mathbb{Z}[\sqrt{-19}]$.

In $\mathbb{Z}[\sqrt{-19}]$ all numbers have the form $a + b \sqrt{-19}$, where $a$ and $b$ are integers. Addition of two such numbers is defined like so

$(a+b\sqrt{-19}) + (c + d \sqrt{-19}) = (a+c) + (b+d)\sqrt{-19}$.

Multiplication is defined by using the distributive law and the fact that $\sqrt{-19}^2 = -19$. Thus

$(a+b\sqrt{-19})(c+d\sqrt{-19}) = (ac-19bd) + (ad+bc)\sqrt{-19}$.
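These rules are easy to encode. The sketch below (a hypothetical `Z19` class of my own naming) implements addition and multiplication in $\mathbb{Z}[\sqrt{-19}]$ and sanity-checks the arithmetic:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Z19:
    """The number a + b*sqrt(-19) with integer a, b."""
    a: int
    b: int

    def __add__(self, other):
        return Z19(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*sqrt(-19))(c + d*sqrt(-19)) = (ac - 19bd) + (ad + bc)*sqrt(-19)
        return Z19(self.a * other.a - 19 * self.b * other.b,
                   self.a * other.b + self.b * other.a)

# sqrt(-19) squared is -19
root = Z19(0, 1)
assert root * root == Z19(-19, 0)

# Cube expansion: (a + b*sqrt(-19))^3 = (a^3 - 57ab^2) + (3a^2b - 19b^3)*sqrt(-19)
a, b = 3, 2
z = Z19(a, b)
assert z * z * z == Z19(a**3 - 57 * a * b**2, 3 * a**2 * b - 19 * b**3)
```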

Since we have multiplication we can talk about when a number in $\mathbb{Z}[\sqrt{-19}]$ divides another and hence define primes in $\mathbb{Z}[\sqrt{-19}]$. One can show that if $x^2 + 19 = y^3$, then $x+\sqrt{-19}$ and $x-\sqrt{-19}$ are coprime in $\mathbb{Z}[\sqrt{-19}]$ (see the references at the end of this post).

This means that there are no primes in $\mathbb{Z}[\sqrt{-19}]$ that divide both $x+\sqrt{-19}$ and $x-\sqrt{-19}$. If we assume that the fundamental theorem of arithmetic holds in $\mathbb{Z}[\sqrt{-19}]$, then this implies that $x+\sqrt{-19}$ must itself be a cube. This is because $(x+\sqrt{-19})(x-\sqrt{-19})=y^3$ is a cube, and if two coprime numbers multiply to give a cube, then both of those coprime numbers must be cubes.

Thus we can conclude that there are integers $a$ and $b$ such that $x+\sqrt{-19} = (a+b\sqrt{-19})^3$. If we expand out this cube we can conclude that

$x+\sqrt{-19} = (a^3-57ab^2)+(3a^2b-19b^3)\sqrt{-19}$.

Thus in particular we have $1=3a^2b-19b^3=(3a^2-19b^2)b$. This implies that $b = \pm 1$ and $3a^2-19b^2=\pm 1$. Hence $b^2=1$ and $3a^2-19 = \pm 1$. Now if $3a^2 -19 =-1$, then $a^2=6$ – a contradiction. Similarly if $3a^2-19=1$, then $3a^2=20$ – another contradiction. Thus we can conclude there are no integer solutions to the equation $x^2+19=y^3$!

Unfortunately, however, a bit of searching reveals that $18^2+19=343=7^3$. Thus simply assuming that the ring $\mathbb{Z}[\sqrt{-19}]$ has unique factorization led us to incorrectly conclude that an equation had no solutions. The question of unique factorization in number rings such as $\mathbb{Z}[\sqrt{-19}]$ is a subtle and important one. Some of the flawed proofs of Fermat’s Last Theorem incorrectly assume that certain number rings have unique factorization, just as we did above.
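Both the counterexample and the arithmetic above can be confirmed with a short search (an illustration, not part of the original argument):

```python
# Brute-force search for integer solutions of x^2 + 19 = y^3
solutions = [(x, y) for x in range(100) for y in range(50) if x * x + 19 == y ** 3]
print(solutions)  # [(18, 7)], since 18^2 + 19 = 343 = 7^3
assert (18, 7) in solutions

# As claimed above, 3a^2 - 19 = +/-1 has no integer solutions, so the flaw
# lies in assuming unique factorization in Z[sqrt(-19)]
assert not any(3 * a * a - 19 in (1, -1) for a in range(-100, 101))
```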

## References

The lecturer David Smyth showed us that the even integers do not have unique factorization during a lecture of the great course MATH2222.

The example of $\mathbb{Z}[\sqrt{-19}]$ failing to have unique factorization, and the consequences of this failure, was shown in a lecture for a course on algebraic number theory by James Borger. In that class we followed the (freely available) textbook “Number Rings” by P. Stevenhagen. Problem 1.4 on page 8 is the example I used in this post. By viewing the textbook you can see a complete solution to the problem.