Blog Feed

Two sample tests as correlation tests

Suppose we have two samples Y_1^{(0)}, Y_2^{(0)},\ldots, Y_{n_0}^{(0)} and Y_1^{(1)},Y_2^{(1)},\ldots, Y_{n_1}^{(1)} and we want to test if they are from the same distribution. Many popular tests can be reinterpreted as correlation tests by pooling the two samples and introducing a dummy variable that encodes which sample each data point comes from. In this post we will see how this plays out in a simple t-test.

The equal variance t-test

In the equal variance t-test, we assume that Y_i^{(0)} \stackrel{\text{iid}}{\sim} \mathcal{N}(\mu_0,\sigma^2) and Y_i^{(1)} \stackrel{\text{iid}}{\sim}  \mathcal{N}(\mu_1,\sigma^2), where \sigma^2 is unknown. Our hypothesis that Y_1^{(0)}, Y_2^{(0)},\ldots, Y_{n_0}^{(0)} and Y_1^{(1)},Y_2^{(1)},\ldots, Y_{n_1}^{(1)} are from the same distribution becomes the hypothesis \mu_0 = \mu_1. The test statistic is

t = \frac{\displaystyle \overline{Y}^{(1)} - \overline{Y}^{(0)}}{\displaystyle \hat{\sigma}\sqrt{\frac{1}{n_0}+\frac{1}{n_1}}},

where \overline{Y}^{(0)} and \overline{Y}^{(1)} are the two sample means. The variable \hat{\sigma} is the pooled estimate of the standard deviation and is given by

\hat{\sigma}^2 = \displaystyle\frac{1}{n_0+n_1-2}\left(\sum_{i=1}^{n_0}\left(Y_i^{(0)}-\overline{Y}^{(0)}\right)^2 + \sum_{i=1}^{n_1}\left(Y_i^{(1)}-\overline{Y}^{(1)}\right)^2\right).

Under the null hypothesis, t follows the t-distribution with n_0+n_1-2 degrees of freedom. We thus reject the null \mu_0=\mu_1 at significance level \alpha when |t| exceeds the 1-\alpha/2 quantile of this t-distribution.

Pooling the data

We can turn this two sample test into a correlation test by pooling the data and using a linear model. Let Y_1,\ldots,Y_{n_0}, Y_{n_0+1},\ldots,Y_{n_0+n_1} be the pooled data and for i = 1,\ldots, n_0+n_1, define x_i \in \{0,1\} by

x_i = \begin{cases} 0 & \text{if } 1 \le i \le n_0,\\ 1 & \text{if } n_0+1 \le i \le n_0+n_1.\end{cases}

The assumptions that Y_i^{(0)} \stackrel{\text{iid}}{\sim} \mathcal{N}(\mu_0,\sigma^2) and Y_i^{(1)} \stackrel{\text{iid}}{\sim} \mathcal{N}(\mu_1,\sigma^2) can be rewritten as

Y_i = \beta_0+\beta_1x_i + \varepsilon_i,

where \varepsilon_i \stackrel{\text{iid}}{\sim} \mathcal{N}(0,\sigma^2). That is, we have expressed our modelling assumptions as a linear model. When working with this linear model, the hypothesis \mu_0 = \mu_1 is equivalent to \beta_1 = 0. To test \beta_1 = 0 we can use the standard t-test for a coefficient in a linear model. The test statistic in this case is

t' = \displaystyle\frac{\hat{\beta}_1}{\hat{\sigma}_{OLS}\sqrt{(X^TX)^{-1}_{11}}},

where \hat{\beta}_1 is the ordinary least squares estimate of \beta_1, X \in \mathbb{R}^{(n_0+n_1)\times 2} is the design matrix, (X^TX)^{-1}_{11} is the diagonal entry of (X^TX)^{-1} corresponding to \beta_1 (indexing the rows and columns by 0 and 1) and \hat{\sigma}_{OLS} is an estimate of \sigma given by

\hat{\sigma}_{OLS}^2 = \displaystyle\frac{1}{n_0+n_1-2}\sum_{i=1}^{n_0+n_1} (Y_i-\hat{Y}_i)^2,

where \hat{Y}_i = \hat{\beta}_0+\hat{\beta}_1x_i is the fitted value of Y_i.

It turns out that t' is exactly equal to t. We can see this by writing out the design matrix and calculating everything above. The design matrix has rows [1,x_i] and is thus equal to

X = \begin{bmatrix} 1&x_1\\ 1&x_2\\  \vdots&\vdots\\ 1&x_{n_0}\\  1&x_{n_0+1}\\ \vdots&\vdots\\ 1&x_{n_0+n_1}\end{bmatrix} = \begin{bmatrix} 1&0\\ 1&0\\  \vdots&\vdots\\ 1&0\\  1&1\\ \vdots&\vdots\\ 1&1\end{bmatrix}.

This implies that

X^TX = \begin{bmatrix} n_0+n_1 &n_1\\n_1&n_1 \end{bmatrix},

and therefore,

(X^TX)^{-1} = \frac{1}{(n_0+n_1)n_1-n_1^2}\begin{bmatrix} n_1 &-n_1\\-n_1&n_0+n_1 \end{bmatrix} = \frac{1}{n_0n_1}\begin{bmatrix} n_1&-n_1\\-n_1&n_0+n_1\end{bmatrix} =\begin{bmatrix} \frac{1}{n_0}&-\frac{1}{n_0}\\-\frac{1}{n_0}&\frac{1}{n_0}+\frac{1}{n_1}\end{bmatrix} .

Thus, (X^TX)^{-1}_{11} = \frac{1}{n_0}+\frac{1}{n_1}. So,

t' = \displaystyle\frac{\hat{\beta}_1}{\hat{\sigma}_{OLS}\sqrt{\frac{1}{n_0}+\frac{1}{n_1}}},

which is starting to look like t from the two-sample test. Now

X^TY = \begin{bmatrix} \displaystyle\sum_{i=1}^{n_0+n_1} Y_i\\ \displaystyle \sum_{i=n_0+1}^{n_0+n_1} Y_i \end{bmatrix} = \begin{bmatrix} n_0\overline{Y}^{(0)} + n_1\overline{Y}^{(1)}\\  n_1\overline{Y}^{(1)} \end{bmatrix}.

And so

\hat{\beta} = (X^TX)^{-1}X^TY = \begin{bmatrix} \frac{1}{n_0}&-\frac{1}{n_0}\\-\frac{1}{n_0}&\frac{1}{n_0}+\frac{1}{n_1}\end{bmatrix}\begin{bmatrix} n_0\overline{Y}^{(0)} + n_1\overline{Y}^{(1)}\\  n_1\overline{Y}^{(1)} \end{bmatrix}=\begin{bmatrix} \overline{Y}^{(0)}\\  \overline{Y}^{(1)} -\overline{Y}^{(0)}\end{bmatrix}.

Thus, \hat{\beta}_1 = \overline{Y}^{(1)} -\overline{Y}^{(0)} and

t' = \displaystyle\frac{\overline{Y}^{(1)}-\overline{Y}^{(0)}}{\hat{\sigma}_{OLS}\sqrt{\frac{1}{n_0}+\frac{1}{n_1}}}.

This means to show that t' = t, we only need to show that \hat{\sigma}_{OLS}^2=\hat{\sigma}^2. To do this, note that the fitted values \hat{Y} are equal to

\displaystyle\hat{Y}_i=\hat{\beta}_0+x_i\hat{\beta}_1 = \begin{cases} \overline{Y}^{(0)} & \text{if } 1 \le i \le n_0,\\ \overline{Y}^{(1)} & \text{if } n_0+1\le i \le n_0+n_1\end{cases}.

Thus,

\hat{\sigma}^2_{OLS} = \displaystyle\frac{1}{n_0+n_1-2}\sum_{i=1}^{n_0+n_1}\left(Y_i-\hat{Y}_i\right)^2=\displaystyle\frac{1}{n_0+n_1-2}\left(\sum_{i=1}^{n_0}\left(Y_i^{(0)}-\overline{Y}^{(0)}\right)^2 + \sum_{i=1}^{n_1}\left(Y_i^{(1)}-\overline{Y}^{(1)}\right)^2\right),

which is exactly \hat{\sigma}^2. Therefore, t'=t and the two-sample t-test is equivalent to a correlation test.
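We can check this equivalence numerically. Below is a minimal sketch in R with simulated data, comparing the two-sample t statistic from t.test() with the coefficient t statistic from lm().

# A numerical check that t' = t
set.seed(42)
n0 <- 20; n1 <- 30
y0 <- rnorm(n0, mean = 0)               # sample 0
y1 <- rnorm(n1, mean = 0.5)             # sample 1
y  <- c(y0, y1)                         # pooled data
x  <- rep(c(0, 1), times = c(n0, n1))   # dummy variable encoding the sample

t.test(y1, y0, var.equal = TRUE)$statistic        # the two-sample t statistic
summary(lm(y ~ x))$coefficients["x", "t value"]   # the regression t' statistic
# The two numbers agree exactly.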

The Friedman-Rafsky test

In the above example, we saw that the two-sample t-test is a special case of the t-test for regression coefficients. This is neat, but both tests make very strong assumptions about the data. The same pooling trick, however, also works in a more interesting non-parametric setting.

In their 1979 paper, Jerome Friedman and Lawrence Rafsky introduced a two-sample test that makes no assumptions about the distribution of the data. The two samples do not even have to be real-valued and can instead be from any metric space. It turns out that their test is a special case of another procedure they devised for testing for association (Friedman & Rafsky, 1983). As with the t-tests above, this connection comes from pooling the two samples and introducing a dummy variable.

I plan to write a follow-up post explaining these procedures, but you can also read about them in Chapter 6 of Group Representations in Probability and Statistics by Persi Diaconis.

References

Persi Diaconis, Group Representations in Probability and Statistics, pp. 104-106, Hayward, CA: Institute of Mathematical Statistics (1988).

Jerome H. Friedman and Lawrence C. Rafsky, "Multivariate Generalizations of the Wald-Wolfowitz and Smirnov Two-Sample Tests," The Annals of Statistics 7(4), 697-717 (1979).

Jerome H. Friedman and Lawrence C. Rafsky, "Graph-Theoretic Measures of Multivariate Association and Prediction," The Annals of Statistics 11(2), 377-391 (1983).

MCMC for hypothesis testing

A Monte Carlo significance test of the null hypothesis X_0 \sim H requires creating independent samples X_1,\ldots,X_B \sim H. The idea is that if X_0 \sim H and independently X_1,\ldots,X_B are i.i.d. from H, then for any test statistic T, the rank of T(X_0) among T(X_0), T(X_1),\ldots, T(X_B) is uniformly distributed on \{1,\ldots, B+1\}. This means that if T(X_0) is one of the k largest values of T(X_0), T(X_1),\ldots, T(X_B), then we can reject the hypothesis X_0 \sim H at the significance level k/(B+1).
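As a concrete illustration, here is a minimal sketch of a Monte Carlo significance test in R; taking H to be the standard normal and T to be the absolute sample mean is my choice for illustration, not part of the general method.

# A sketch of a Monte Carlo significance test
set.seed(1)
B <- 999
T_stat <- function(x) abs(mean(x))          # an example test statistic T
x0 <- rnorm(100)                            # the observed data X_0
T_sim <- replicate(B, T_stat(rnorm(100)))   # T applied to B i.i.d. samples from H
# p-value from the rank of T(X_0): k counts statistics at least as large as T(X_0)
(sum(T_sim >= T_stat(x0)) + 1) / (B + 1)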

The advantage of Monte Carlo significance tests is that we do not need an analytic expression for the distribution of T(X_0) under X_0 \sim H. By generating the i.i.d. samples X_1,\ldots,X_B, we are making an empirical distribution that approximates the theoretical distribution. However, sometimes sampling X_1,\ldots, X_B is just as intractable as theoretically studying the distribution of T(X_0). Often approximate samples based on Markov chain Monte Carlo (MCMC) are used instead. However, these samples are not independent and may not be sampling from the true distribution. This means that a test using MCMC may not be statistically valid.

In the 1989 paper Generalized Monte Carlo significance tests, Besag and Clifford propose two methods that solve this exact problem. Their methods can be used in the same settings as MCMC, but they are statistically valid and correctly control the Type I error. In this post, I will describe just one of the methods – the serial test.

Background on Markov chains

To describe the serial test we will need to introduce some notation. Let P denote a transition matrix for a Markov chain on a discrete state space \mathcal{X}. A Markov chain with transition matrix P thus satisfies,

\mathbb{P}(X_n = x_n \mid X_0 = x_0,\ldots, X_{n-1} = x_{n-1}) = P(x_{n-1}, x_n)

Suppose that the transition matrix P has a stationary distribution \pi. This means that if (X_0, X_1,\ldots) is a Markov chain with transition matrix P and X_0 is distributed according to \pi, then X_1 is also distributed according to \pi. This implies that all X_i are distributed according to \pi.

We can construct a new transition matrix Q from P and \pi by Q(y,x) = P(x,y) \frac{\pi(x)}{\pi(y)}. The transition matrix Q is called the reversal of P. This is because for all x and y in \mathcal{X}, \pi(x)P(x,y) = \pi(y)Q(y,x). That is, the chance of drawing x from \pi and then transitioning to y according to P is equal to the chance of drawing y from \pi and then transitioning to x according to Q.
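In code, constructing the reversal is a one-liner. Below is a small R sketch; reversal() is a hypothetical helper written for this post, and the example chain is a deterministic 3-cycle, whose reversal is the same cycle run backwards.

# Q(y, x) = P(x, y) * pi(x) / pi(y), computed entrywise
reversal <- function(P, pi) t(P * pi) / pi

P  <- matrix(c(0, 1, 0,
               0, 0, 1,
               1, 0, 0), nrow = 3, byrow = TRUE)
pi <- rep(1/3, 3)   # the uniform distribution is stationary for this chain
reversal(P, pi)     # equals t(P): the 3-cycle traversed in the opposite direction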

The new transition matrix Q also allows us to reverse longer runs of the Markov chain. Fix n \ge 0 and let (X_0,X_1,\ldots,X_n) be a Markov chain with transition matrix P and initial distribution \pi. Also let (Y_0,Y_1,\ldots,Y_n) be a Markov chain with transition matrix Q and initial distribution \pi, then

(X_0,X_1,\ldots,X_{n-1}, X_n) \stackrel{dist}{=} (Y_n,Y_{n-1},\ldots, Y_1,Y_0) ,

where \stackrel{dist}{=} means equal in distribution.

The serial test

Suppose we want to test the hypothesis x \sim \pi, where x \in \mathcal{X} is our observed data and \pi is some distribution on \mathcal{X}. To conduct the serial test, we need to construct a transition matrix P for which \pi is a stationary distribution, along with its reversal Q described above. There are many ways to construct such a P, for example via the Metropolis-Hastings algorithm.

We also need a test statistic T. This is a function T:\mathcal{X} \to \mathbb{R} which we will use to detect outliers. This function is the same function we would use in a regular Monte Carlo test. Namely, we want to reject the null hypothesis when T(x) is much larger than we would expect under x \sim \pi.

The serial test then proceeds as follows. First we pick \xi uniformly in \{0,1,\ldots,B\} and set X_\xi = x. We then generate (X_\xi, X_{\xi+1},\ldots,X_B) as a Markov chain with transition matrix P that starts at X_\xi = x. Likewise we generate (X_\xi, X_{\xi-1},\ldots, X_0) as a Markov chain with transition matrix Q, also starting at X_\xi = x.

We then apply T to each of (X_0,X_1,\ldots,X_\xi,\ldots,X_B) and count the number of b \in \{0,1,\ldots,B\} such that T(X_b) \ge T(X_\xi) = T(x). If there are k such b, then the reported p-value of our test is k/(B+1).
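Below is a sketch of this procedure in R. The functions step_P(), step_Q() and T_stat() are hypothetical user-supplied helpers: one transition under P, one transition under Q and the test statistic T.

# A sketch of the serial test
serial_test <- function(x, B, step_P, step_Q, T_stat) {
  xi <- sample(0:B, 1)                 # pick xi uniformly in {0, 1, ..., B}
  chain <- vector("list", B + 1)       # chain[[b + 1]] stores X_b
  chain[[xi + 1]] <- x                 # set X_xi = x
  if (xi < B)                          # run forwards from X_xi using P
    for (b in (xi + 1):B) chain[[b + 1]] <- step_P(chain[[b]])
  if (xi > 0)                          # run backwards from X_xi using Q
    for (b in (xi - 1):0) chain[[b + 1]] <- step_Q(chain[[b + 2]])
  T_vals <- vapply(chain, T_stat, numeric(1))
  sum(T_vals >= T_stat(x)) / (B + 1)   # the reported p-value
}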

We will now show that this test produces a valid p-value. That is, when x \sim \pi, the probability that k/(B+1) is less than \alpha is at most \alpha. In symbols,

\mathbb{P}\left( \frac{\#\{b: T(X_b) \ge T(X_\xi)\}}{B+1} \le \alpha \right) \le \alpha

Under the null hypothesis x \sim \pi, the reversal property above implies that (X_\xi, X_{\xi-1},\ldots, X_0) is equal in distribution to a chain generated by drawing X_0 from \pi and using the transition matrix P to go from X_i to X_{i+1}.

Thus, under the null hypothesis, the distribution of (X_0,X_1,\ldots,X_B) does not depend on \xi. The entire procedure is equivalent to generating a Markov chain \mathbf{X} = (X_0,X_1,\ldots,X_B) with initial distribution \pi and transition matrix P, and then choosing \xi \in \{0,1,\ldots,B\} independently of \mathbf{X}. This is enough to show that the serial method produces valid p-values. The idea is that since the distribution of \mathbf{X} does not depend on \xi and \xi is uniformly distributed on \{0,1,\ldots,B\}, the probability that T(X_\xi) is in the top \alpha proportion of T(X_b) should be at most \alpha. This is proved more formally below.

For each i \in \{0,1,\ldots,B\}, let A_i be the event that T(X_i) is in the top \alpha proportion of T(X_0),\ldots,T(X_B). That is,

A_i = \left\{ \frac{\# \{b : T(X_b) \ge T(X_i)\} }{B+1} \le \alpha \right\}.

Let I_{A_i} be the indicator function for the event A_i. Since at most \alpha(B+1) values of i can be in the top \alpha fraction of T(X_0),\ldots,T(X_B), we have that

\sum_{i=0}^B I_{A_i} \le \alpha(B+1),

and therefore, by linearity of expectation,

\sum_{i=0}^B \mathbb{P}(A_i) \le \alpha(B+1).

By the law of total probability we have,

\mathbb{P}\left(\frac{\#\{b: T(X_b) \ge T(X_\xi)\}}{B+1} \le \alpha \right) = \sum_{i=0}^B \mathbb{P}\left(\frac{\#\{b: T(X_b) \ge T(X_\xi)\}}{B+1} \le \alpha | \xi = i\right)\mathbb{P}(\xi = i).

Since \xi is uniform on \{0,\ldots,B\}, \mathbb{P}(\xi = i) is \frac{1}{B+1} for all i. Furthermore, by independence of \xi and \mathbf{X}, we have

\mathbb{P}\left(\frac{\#\{b: T(X_b) \ge T(X_\xi)\}}{B+1} \le \alpha | \xi = i\right) = \mathbb{P}\left(\frac{\#\{b: T(X_b) \ge T(X_i)\}}{B+1} \le \alpha | \xi = i\right) = \mathbb{P}(A_i).

Thus, by our previous bound,

\mathbb{P}\left(\frac{\#\{b: T(X_b) \ge T(X_\xi)\}}{B+1} \le \alpha \right) = \frac{1}{B+1}\sum_{i=0}^B \mathbb{P}(A_i) \le \alpha.

Applications

In the original paper by Besag and Clifford, the authors discuss how this procedure can be used to perform goodness-of-fit tests. They construct Markov chains that can test the Rasch model or the Ising model. More broadly, the method can be used to perform goodness-of-fit tests for any exponential family by using the Markov chains developed by Diaconis and Sturmfels.

A similar method has also been applied more recently to detect gerrymandering. In this setting, the null hypothesis is the uniform distribution on all valid redistrictings of a state and the test statistic T measures the political advantage of a given districting.

References

  1. Julian Besag and Peter Clifford, "Generalized Monte Carlo Significance Tests," Biometrika 76(4) (1989).
  2. Persi Diaconis and Bernd Sturmfels, "Algebraic algorithms for sampling from conditional distributions," The Annals of Statistics 26(1), 363-397 (1998).
  3. Maria Chikina et al., "Assessing significance in a Markov chain without mixing," Proceedings of the National Academy of Sciences 114(11) (2017).

How to Bake Pi, Sherman-Morrison and log-sum-exp

A few months ago, I had the pleasure of reading Eugenia Cheng's book How to Bake Pi. Each chapter starts with a recipe which Cheng links to the mathematical concepts contained in the chapter. The book is full of interesting connections between mathematics and the rest of the world.

One of my favourite ideas in the book is something Cheng writes about equations and the humble equals sign: =. She explains that when an equation says two things are equal we very rarely mean that they are exactly the same thing. What we really mean is that the two things are the same in some ways even though they may be different in others.

One example that Cheng gives is the equation a + b = b+a. This is such a familiar statement that you might really think that a+b and b+a are the same thing. Indeed, if a and b are any numbers, then the number you get when you calculate a + b is the same as the number you get when you calculate b + a. But calculating a+b could be very different from calculating b+a. A young child might calculate a+b by starting with a and then counting one-by-one from a to a + b. If a is 1 and b is 20, then calculating a + b requires counting from 1 to 21 but calculating b+a simply amounts to counting from 20 to 21. The first process takes way longer than the second and the child might disagree that 1 + 20 is the same as 20 + 1.

In How to Bake Pi, Cheng explains that a crucial idea behind equality is context. When someone says that two things are equal we really mean that they are equal in the context we care about. Cheng talks about how context is crucial throughout mathematics and introduces a little bit of category theory as a tool for moving between different contexts. I think that this idea of context is really illuminating and I wanted to share some examples where "=" doesn't mean "exactly the same as".

The Sherman-Morrison formula

The Sherman-Morrison formula is a result from linear algebra that says for any invertible matrix A \in \mathbb{R}^{n\times n} and any pair of vectors u,v \in \mathbb{R}^n, if v^TA^{-1}u \neq -1, then A + uv^T is invertible and

(A + uv^T)^{-1} = A^{-1} - \frac{A^{-1}uv^TA^{-1}}{1 + v^TA^{-1}u}.

Here “=” means the following:

  1. You can take any natural number n, any invertible n \times n matrix A, and any length-n vectors u and v that satisfy the above condition.
  2. If you take all those things and carry out all the matrix multiplications, additions and inversions on the left and all the matrix multiplications, additions and inversions on the right, then you will end up with exactly the same matrix in both cases.

But depending on the context, computing one side of "=" may be much easier than the other. Although the right hand side looks a lot more complicated, it is much easier to compute in one important context: when we have already calculated A^{-1} and now want the inverse of A + uv^T. The left hand side naively inverts the n \times n matrix A + uv^T, which takes O(n^3) operations. On the right hand side, we only need to compute a small number of matrix-vector products and then add two matrices together, which brings the computational cost down to O(n^2).
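Here is a small numerical sketch of this in R: it builds a random invertible matrix, applies both sides of the formula and confirms that they agree up to rounding error.

# Verifying the Sherman-Morrison formula numerically
set.seed(1)
n <- 5
A <- diag(n) + crossprod(matrix(rnorm(n^2), n))   # a positive definite, invertible matrix
u <- rnorm(n); v <- rnorm(n)
A_inv <- solve(A)

# Right hand side: a rank-one update of the precomputed inverse, O(n^2)
denom <- as.numeric(1 + t(v) %*% A_inv %*% u)
rhs <- A_inv - (A_inv %*% u) %*% (t(v) %*% A_inv) / denom

# Left hand side: direct inversion of A + u v^T, O(n^3)
lhs <- solve(A + u %*% t(v))

max(abs(lhs - rhs))   # of order 1e-15: the two sides agree up to rounding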

These cost saving measures come up a lot when studying linear regression. The Sherman-Morrison formula can be used to update regression coefficients when a new data point is added. Similarly, the Sherman-Morrison formula can be used to quickly calculate the fitted values in leave-one-out cross validation.

log-sum-exp

This second example also has connections to statistics. In a mixture model, we assume that each data point Y comes from a distribution of the form:

p(y|\pi,\theta) = \sum_{k=1}^K \pi_k p(y | \theta_k),

where \pi is a probability vector and \pi_k is equal to the probability that Y came from class k. The parameters \theta_k \in \mathbb{R}^p are the parameters for the k^{th} group. The log-likelihood is thus,

\log\left(\sum_{k=1}^K \pi_k p(y | \theta_k)\right) = \log\left(\sum_{k=1}^K \exp(\eta_{k})\right),

where \eta_{k} = \log(\pi_k p(y| \theta_k)). We can see that the log-likelihood is of the form log-sum-exp. Calculating a log-sum-exp can cause issues with numerical stability. For instance if K = 3 and \eta_k = 1000, for all k=1,2,3, then the final answer is simply \log(3)+1000. However, as soon as we try to calculate \exp(1000) on a computer, we’ll be in trouble.

The solution is to use the following equality, for any \beta \in \mathbb{R},

\log\left(\sum_{k=1}^K \exp(\eta_k) \right) = \beta + \log\left(\sum_{k=1}^K \exp(\eta_k - \beta)\right).

Proving the above identity is a nice exercise in the laws of logarithms and exponentials, and with a clever choice of \beta we can safely compute the log-sum-exp expression. For instance, in the documentation for pytorch's implementation of logsumexp() they take \beta to be the maximum of the \eta_k. This (hopefully) makes each of the terms \eta_k - \beta a reasonable size and avoids any numerical issues.
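The following R sketch implements the trick; log_sum_exp() is a hypothetical helper written for illustration, not pytorch's implementation.

# The log-sum-exp trick: shift by the maximum before exponentiating
log_sum_exp <- function(eta) {
  beta <- max(eta)                   # the choice of beta used by pytorch's logsumexp()
  beta + log(sum(exp(eta - beta)))   # each eta - beta is <= 0, so exp() cannot overflow
}

eta <- c(1000, 1000, 1000)
log(sum(exp(eta)))   # naive computation: exp(1000) overflows and this returns Inf
log_sum_exp(eta)     # 1000 + log(3), approximately 1001.0986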

Again, the left and right hand sides of the above equation might be the same number, but in the context of having to use computers with limited precision, they represent very different calculations.

Beyond How to Bake Pi

Eugenia Cheng has recently published a new book called The Joy of Abstraction. I’m just over half way through and it’s been a really engaging and interesting introduction to category theory. I’m looking forward to reading the rest of it and getting more insight from Eugenia Cheng’s great mathematical writing.

Looking back on the blog

Next week I’ll be starting the second year of my statistics PhD. I’ve learnt a lot from taking the first year classes and studying for the qualifying exams. Some of what I’ve learnt has given me a new perspective on some of my old blog posts. Here are three things that I’ve written about before and I now understand better.

1. The Pi-Lambda Theorem

An early post on this blog was titled "A minimal counterexample in probability theory". The post was about a theorem from the probability course offered at the Australian National University. The theorem states that if two probability measures agree on a collection of subsets and the collection is closed under finite intersections, then the two probability measures agree on the \sigma-algebra generated by the collection. In my post I give an example which shows that you need the collection to be closed under finite intersections. I also show that you need to have at least four points in the space to find such an example.

What I didn't know then is that the above theorem is really a corollary of Dynkin's \pi - \lambda theorem. This theorem was proved in my first graduate probability course which was taught by Persi Diaconis. Professor Diaconis kept a running tally of how many times we used the \pi - \lambda theorem in his course and we got up to at least 10. (For more appearances by Professor Diaconis on this blog see here, here and here.)

If I were to write the above post again, I would talk about the \pi - \lambda theorem and rename the post “The smallest \lambda-system”. The example given in my post is really about needing at least four points to find a \lambda-system that is not a \sigma-algebra.

2. Mallows' Cp statistic

The very first blog post I wrote was called "Complexity Penalties in Statistical Learning". I wasn't sure if I would write a second and so I didn't set up a WordPress account. I instead put the post on the website LessWrong. I no longer associate myself with the rationality community but posting to LessWrong was straightforward and helped me reach more people.

The post was inspired in two ways by the 2019 AMSI summer school. First, the content is from the statistical learning course I took at the summer school. Second, at the career fair many employers advised us to work on our writing skills. I don't know if I would have started blogging if not for the AMSI Summer School.

I didn't know it at the time but the blog post is really about Mallows' Cp statistic. Mallows' Cp statistic is an estimate of the test error of a regression model fit using ordinary least squares. It is equal to the training error plus a "complexity penalty" which takes into account the number of parameters. In the blog post I talk about model complexity and over-fitting. I also write down and explain Mallows' Cp in the special case of polynomial regression.

In the summer school course I took, I don't remember the name Mallows' Cp being used, but I thought it was a great idea and enjoyed writing about it. The next time I encountered Mallows' Cp was in the linear models course I took last fall. I was delighted to see it again and learn how it fit into a bigger picture. More recently, I read Brad Efron's paper "How Biased is the Apparent Error Rate of a Prediction Rule?". The paper introduces the idea of "effective degrees of freedom" and expands on the ideas behind the Cp statistic.

Incidentally, enrolment is now open for the 2023 AMSI Summer School! This summer it will be hosted at the University of Melbourne. I encourage any Australian mathematics or statistics students reading this to take a look and apply. I really enjoyed going in both 2019 and 2020. (Also, if you click on the above link you can try to spot me in the group photo of everyone wearing red shirts!)

3. Finitely additive probability measures

In "Finitely additive measures" I talk about how hard it is to define a finitely additive measure on the set of natural numbers that is not countably additive. In particular, I talked about needing to use the Hahn-Banach extension theorem to extend the natural density from the collection of sets with density to the collection of all subsets of the natural numbers.

There were a number of homework problems in my first graduate probability course that relate to this post. We proved that the sets with density are not closed under finite unions and we showed that the square free numbers have density 6/\pi^2.

We also proved that any finite measure defined on an algebra of subsets can be extended to the collection of all subsets. This proof used Zorn's lemma and the resulting measure is far from unique. The use of Zorn's lemma relates to the main idea in my blog post: that defining such a finitely additive probability measure is in some sense non-constructive.

Other posts

Going forward, I hope to continue publishing at least one new post every month. I look forward to one day writing another post like this when I can look back and reflect on how much I have learnt.

Total Variation and Marathon Running

Total variation is a way of measuring how much a function f:[a,b] \to \mathbb{R} “wiggles”. In this post, I want to motivate the definition of total variation by talking about elevation in marathon running.

Comparing marathon courses

On July 24th I ran the 2022 San Francisco (SF) marathon. All marathons are the same distance, 42.2 kilometres (26.2 miles) but individual courses can vary greatly. Some marathons are on road and others are on trails. Some locations can be hot and others can be rainy. And some, such as the SF marathon, can be much hillier than others. Below is a plot comparing the elevation of the Canberra marathon I ran last year to the elevation of the SF marathon:

A plot showing the relative elevation over the course of the Canberra and San Francisco marathons. Try to spot the two times I ran over the Golden Gate Bridge during the SF marathon.

Immediately, you can see that the range of elevation during the San Francisco marathon was much higher than the range of elevation during the Canberra marathon. However, what made the SF marathon hard wasn’t any individual incline but rather the sheer number of ups and downs. For comparison, the plot below shows elevation during a 32 km training run and elevation during the SF marathon:

A plot showing the relative elevation over the course of a training run and the San Francisco marathon. The big climb during my training run is the Stanford dish.

You can see that my training run was mostly flat but had one big hill in the last 10 kilometres. The maximum relative elevation on my training run was about 50 meters higher than the maximum relative elevation of the marathon, but overall the training run graph is a lot less wiggly. This meant there were far more individual hills during the marathon and so the first 32 km of the marathon felt a lot tougher than the training run. By comparing these two runs, you can see that the elevation range can hide important information about the difficulty of a run. We also need to pay attention to how wiggly the elevation curve is.

Wiggliness Scores

So far our definition of wiggliness has been imprecise and has relied on looking at a graph of the elevation. This makes it hard to compare two runs and quickly decide which one is wigglier. It would be convenient if there was a "wiggliness score" – a single number we could assign to each run which measured the wiggliness of the run's elevation. Below we'll see that total variation does exactly this.

If we zoom in on one of the graphs above, we would see that it actually consists of tiny straight line segments. For example, let’s look at the 22nd kilometre of the SF marathon. In this plot it looks like elevation is a smooth function of distance:

The relative elevation of the 22nd kilometre during the SF marathon.

But if we zoom in on a 100 m stretch, then we see that the graph is actually a series of straight lines glued together:

The relative elevation over a 100 metres during the marathon.

This is because these graphs are made using my GPS watch which makes one recording per second. If we place dots at each of these times, then the straight lines become clearer:

The relative elevation over the same 100 metre stretch. Each blue dot marks a point when a measurement was made.

We can use these blue dots to define the graph’s wiggliness score. The wiggliness score should capture how much the graph varies across its domain. This suggests that wiggliness scores should be additive. By additive, I mean that if we split the domain into a finite number of pieces, then the wiggliness score across the whole domain should be the sum of the wiggliness score of each segment.

In particular, the wiggliness score for the SF marathon is equal to the sum of the wiggliness score of each section between two consecutive blue dots. This means we only need to quantify how much the graph varies between consecutive blue dots. Fortunately, between two such dots, the graph is a straight line. The amount that a straight line varies is simply the distance between the y-value at the start and the y-value at the end. Thus, by adding up all these little distances we can get a wiggliness score for the whole graph. This wiggliness score is used in mathematics, where it is called the total variation.
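For data recorded at discrete points, like the GPS measurements above, this wiggliness score is a one-line computation. Here is a minimal R sketch with made-up elevation numbers.

# Total variation of a sampled profile: the sum of the absolute differences
# between consecutive measurements
total_variation <- function(elevation) sum(abs(diff(elevation)))

elev <- c(10, 12, 15, 13, 13, 18, 16)   # hypothetical elevations in metres
total_variation(elev)                   # 2 + 3 + 2 + 0 + 5 + 2 = 14 metres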

Here are the wiggliness scores for the three runs shown above:

Run                       Wiggliness score
Canberra Marathon 2021    617 m
Training run              742 m
SF Marathon 2022          2140 m

The total variation or wiggliness score for the three graphs shown above.

Total Variation

We’ve seen that by breaking up a run into little pieces, we can calculate the total variation over the course of the run. But how can we calculate the total variation of an arbitrary function f:[a,b] \to \mathbb{R}?

Our previous approach won’t work because the function f might not be made up of straight lines. But we can approximate f with other functions that are made of straight lines. We can calculate the total variation of these approximations using the approach we used for the marathon runs. Then we define the total variation of f as the limit of the total variation of each of these approximations.

To make this precise, we will work with partitions of [a,b]. A partition of [a,b] is a finite set of points P = \{x_0, x_1,\ldots,x_n\} such that:

a = x_0 < x_1 < \ldots < x_n = b.

That is, x_0, x_1,\ldots,x_n is a collection of increasing points in [a,b] that start at a and end at b. For a given partition P of [a,b], we calculate how much the function f varies over the points in the partition P. As with the blue dots above, we can simply add up the distances between consecutive y-values f(x_i) and f(x_{i-1}). In symbols, we define V_P(f) (the variation of f over the partition P) to be:

V_P(f) = \sum\limits_{i=1}^n |f(x_i)-f(x_{i-1})|.

To define the variation of f over the interval [a,b], we can imagine taking finer and finer partitions of [a,b]. To do this, note that whenever we add more points to a partition, the total variation over that partition can only increase. Thus, we can define the total variation of f as the supremum of the variation over all partitions. We denote the total variation of f by V(f) and define it as:

V(f) = \sup\{V_P(f) : P \text{ is a partition of } [a,b]\}.

Surprisingly, there exist continuous functions for which the total variation is infinite. Sample paths of Brownian motion are canonical examples of continuous functions with infinite total variation. Such functions would make for very challenging runs.

Some Limitations

Total variation does a good job of measuring how wiggly a function is but it has some limitations when applied to course elevation. The biggest issue is that total variation treats inclines and declines symmetrically. A steep line sloping down increases the total variation by the same amount as a line with the same slope going upwards. This obviously isn’t true when running; an uphill is very different to a downhill.

To quantify how much a function wiggles upwards, we could use the same ideas but replace the absolute value |f(x_i)-f(x_{i-1})| with the positive part (f(x_i)-f(x_{i-1}))_+ = \max\{f(x_i)-f(x_{i-1}),0\}. This means that only the lines that slope upwards will count towards the wiggliness score. Lines that slope downwards get a wiggliness score of zero.
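In R, this one-sided score amounts to replacing the absolute value with the positive part in the earlier sketch:

# One-sided wiggliness score: only upward slopes contribute
upward_variation <- function(elevation) sum(pmax(diff(elevation), 0))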

Another limitation of total variation is that it measures total wiggliness across the whole domain rather than average wiggliness. This isn’t much of a problem when comparing runs of a similar length, but when comparing runs of different lengths, total variation can give surprising results. Below is a comparison between the Australian Alpine Ascent and the SF marathon:

The Australian Alpine Ascent is a 25 km run that goes up Australia’s tallest mountain. Despite the huge climbs during the Australian Alpine Ascent, the SF marathon has a higher total variation. Since the Australian Alpine Ascent was shorter, it gets a lower wiggliness score (1674 m vs 2140 m). For this comparison it would be better to divide each wiggliness score by the runs’ distance.

Summary

Despite these limitations, I still think that total variation is a useful metric for comparing two runs. It doesn’t tell you exactly how tough a run will be but if you already know the run’s distance and starting/finishing elevation, then the total variation helps you know what to expect.

Finding Australia’s youngest electorates with R

My partner recently wrote an article for Changing Times, a grassroots newspaper that focuses on social change. Her article, Who's not voting? Engaging with First Nations voters and young voters, is about voter turnout in Australia and is an important read.

While doing research for the article, she wanted to know which electorates in Australia had the highest proportion of young voters. Fortunately, the Australian Electoral Commission (AEC) keeps a detailed record of the number of electors in each electorate, available here. The records list the number of voters of a given sex and age bracket in each of Australia's electorates. To calculate and sort the proportion of young voters (18-24) in each electorate using the most recent records, I wrote the below R code for her:

library(tidyverse)

voters <- read_csv("elector-count-fe-2022.csv", 
                   skip = 8,
                   col_names = c("Description",
                                 "18-19", 
                                 "20-24", 
                                 "25-29", 
                                 "30-34", 
                                 "35-39", 
                                 "40-44", 
                                 "45-49", 
                                 "50-54", 
                                 "55-59", 
                                 "60-64", 
                                 "65-69", 
                                 "70+",  
                                 "Total"),
                   col_select = (1:14),
                   col_types = cols(Description = "c",
                                    .default = "n"))


# Rows that describe states, totals or sexes rather than electorates
not_electorates = c("NSW", "VIC", "ACT", "WA", "SA", "TAS", 
                    "QLD", "NT", "Grand Total","Female", 
                    "Male", "Indeterminate/Unknown")

# Keep only the rows corresponding to electorates
electorates <- voters %>% 
  filter(!(Description %in% not_electorates),
         !is.na(Description))  

# Count young voters (ages 18-24), compute their proportion and rank electorates
young_voters <- electorates %>% 
  mutate(Young = `18-19` + `20-24`,
         Proportion = Young/Total,
         rank = min_rank(desc(Proportion))) %>% 
  arrange(rank) %>% 
  select(Electorate = Description, Total, Young, Proportion, rank) 

young_voters

To explain what I did, I’ll first describe the format of the data set from the AEC. The first column contained a description of each row. All the other columns contained the number of voters in a given age bracket. Rows either corresponded to an electorate or a state and either contained the total electorate or a given sex.

For the article my partner wanted to know which electorate had the highest proportion of young voters (18-24) in total so we removed the rows for each state and the rows that selected a specific sex. The next step was to calculate the number of young voters across the age brackets 18-19 and 20-24 and then calculate the proportion of such voters. Once ranked, the five electorates with the highest proportion of young voters were

Electorate    Proportion of young voters
Ryan          0.146
Brisbane      0.132
Griffith      0.132
Canberra      0.131
Kooyong       0.125

In Who's not Voting?, my partner points out that the three seats with the highest proportion of young voters all swung to the Greens at the most recent election. To see the proportion of young voters in every electorate, you can download this spreadsheet.

A clock is a one-dimensional subgroup of the torus

Recently, my partner and I installed a clock in our home. The clock previously belonged to my grandparents and we have owned it for a while. We hadn’t put it up earlier because the original clock movement ticked and the sound would disrupt our small studio apartment. After much procrastinating, I bought a new clock movement, replaced the old one and proudly hung up our clock.

Our new clock. We still need to reattach the 5 and 10 which fell off when we moved.

When I first put on the clock hands I made the mistake of not putting them both on at exactly 12 o’clock. This meant that the minute and hour hands were not synchronised. The hands were in an impossible position. At times, the minute hand was at 12 and the hour hand was between 3 and 4. It took some time for me to register my mistake as at some times of the day it can be hard to tell that the hands are out of sync (how often do you look at a clock at 12:00 exactly?). Fortunately, I did notice the mistake and we have a correct clock. Now I can’t help noticing when others make the same mistake such as in this piece of clip art.

An incorrect clock where the minute hand points to 6 but the hour hand is exactly at 10. From 30 Clipart – Clock Face Clip Art @clipartmax.com

After fixing the clock, I was still thinking about how only some clock hand positions correspond to actual times. This led me to think “a clock is a one-dimensional subgroup of the torus”. Let me explain why.

The torus

The minute and hour hands on a clock can be thought of as two points on two different circles. For instance, if the time is 9:30, then the minute hand corresponds to a point at the very bottom of the circle and the hour hand corresponds to a point 15 degrees clockwise of the leftmost point of the circle. As a clock goes through a 12 hour cycle the minute-hand-point goes around the circle 12 times and the hour-hand-point goes around the circle once. This is shown below.

The blue point goes around its blue circle in time with the minute hand on the clock in the middle. The red point goes around its red circle in time with the hour hand.

If you take the collection of all pairs of points on a circle you get what mathematicians call a torus. The torus is a geometric shape that looks like the surface of a donut. The torus is defined as the Cartesian product of two circles. That is, a single point on the torus corresponds to two points on two different circles. A torus is plotted below.

The green surface above is a torus. The black lines aren’t a part of the torus, they are just there to help the visualisation.

To understand the torus, it’s helpful to consider a more familiar example, the 2-dimensional plane. If we have points x and y on two different lines, then we can produce the point (x,y) in the two dimensional plane. Likewise, if we have a point p and a point q on two different circles, then we can produce a point (p,q) on the torus. Both of these concepts are illustrated below. I have added two circles to the torus which are analogous to the x and y axes of the plane. The blue and red points on the blue and red circle produce the black point on the torus.

Mapping the clock to the torus

The points on the torus are in one-to-one correspondence with possible arrangements of the two clock hands. However, as I learnt putting up our clock, not all arrangements of clock hands correspond to an actual time. This means that only some points on the torus correspond to an actual time but how can we identify these points?

Keeping with our previous convention, let’s use the blue circle to represent the position of the minute hand and the red circle to represent the position of the hour hand. This means that the point where the two circles meet corresponds to 12 o’clock.

The point where the two circles meet corresponds to both hands pointing to 12, that is, 12 o’clock.

There are eleven other points on the red line that correspond to the other times when the minute hand is at 12. That is, there’s a point for 1 o’clock, 2 o’clock, 3 o’clock and so on. Once we add in those points, our torus looks like this:

Each black dot corresponds to when the minute hand is at 12. That is, the dots represent 12 o’clock, 1 o’clock, 2 o’clock and so on.

Finally, we have to join these points together. We know that when the hour hand moves from 12 to 1, the minute hand does one full rotation. This means that we have to join the black points by making one full rotation in the direction of the blue circle. The result is the black curve below that snakes around the torus.

Points on the black curve correspond to actual times on the clock.

The picture above should explain most of this blog’s title – “a clock is a one-dimensional subgroup of the torus”. We now know what the torus is and why certain points on the torus correspond to positions of the hands on a clock. We can see that these “clock points” correspond to a line that snakes around the torus. While the torus is a surface and hence two dimensional, the line is one-dimensional. The last missing part is the word “subgroup”. I won’t go into the details here but the torus has some extra structure that makes it something called a group. Our map from the clock to the torus interacts nicely with this structure and this makes the black line a “subgroup”.

Another perspective

While the above pictures of the torus are pretty, they can be a bit hard to understand and hard to draw. Mathematicians have another perspective of the torus that is often easier to work with. Imagine that you have a square sheet of rubber. If you rolled up the rubber and joined a pair of opposite sides, you would get a rubber tube. If you then bent the tube to join the remaining opposite sides, you would get a torus! The gif below illustrates this idea.

By joining opposite sides, we can turn a square into a torus. This gif is taken from https://en.wikipedia.org/wiki/Torus

This means that we can simply view the torus as a square. We just have to remember that the opposite sides of the squares have been glued together. So like a game of snake on a phone, if you leave the top of the square, you come out at the same place on the bottom of the square. If we use this idea to redraw our torus it now looks like this:

A drawing of a flat torus. To make a donut shaped torus, the two red lines and then the two blue lines have to be glued together. As before, the blue line corresponds to the minute hand and the red line to the hour hand. When we glue the opposite sides of this square, the four corners all get glued together. This point is where the two circles intersect and corresponds to 12 o’clock.

As before we can draw in the other points when the minute hand is at 12. These points correspond to 1 o’clock, 2 o’clock, 3 o’clock…

Each black dot corresponds to a time when the minute hand is at 12. Remember that each dot on the top is actually the same point as the corresponding dot on the bottom. These opposite points get glued together when we turn the square into a torus.

Finally we can draw in all the other times on the clock. This is the result:

Points on the black line correspond to actual times on the clock. Although it looks like there are 12 different lines, there is actually only one line once we glue the opposite sides together.

One nice thing about this picture is that it can help us answer a classic riddle. In a 12-hour cycle, how many times are the minute and hour hands on top of each other? We can answer this riddle by adding a second line to the above square. The bottom-left to top-right diagonal is the collection of all hand positions where the two hands are on top of each other. Let’s add that line in green and add the points where this new line intersects the black line.

The green line is the collection of all hand positions when the two hands are pointing in the same direction. The black points are where the green and black lines intersect each other.

The points where the green and black lines intersect are hand positions where the clock hands are directly on top of each other and which correspond to actual times. Thus we can count that there are exactly 11 times when the hands are on top of each other in a 12-hour cycle. It might look like there are 12 such times but we have to remember that the corners of the square are all the same point on the torus.
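We can double-check this count algebraically. Measuring each hand's position as a fraction of a full turn, after t hours the hour hand is at t/12 \bmod 1 and the minute hand is at t \bmod 1. The hands point in the same direction exactly when

t \equiv \frac{t}{12} \pmod 1 \iff \frac{11t}{12} \equiv 0 \pmod 1 \iff t = \frac{12k}{11} \text{ for } k = 0,1,\ldots,10,

giving exactly 11 coincidence times in the 12-hour cycle t \in [0,12), matching the picture.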

Adding the second hand

So far I have ignored the second hand on the clock. If we included the second hand, we would have three points on three different circles. The corresponding geometric object is a 3-dimensional torus. The 3-dimensional torus is what you get when you take a cube and glue together the three pairs of opposite faces (don’t worry if you have trouble visualising such a shape!).

The points on the 3-dimensional torus which correspond to actual times will again be a line that wraps around the 3-dimensional torus. You could use this line to find out how many times the three hands are all on top of each other! Let me know if you work it out.

I hope that if you’re ever asked to define a clock, you’d at least consider saying “a clock is a one-dimensional subgroup of the torus” and you could even tell them which subgroup!

Sampling from the non-central chi-squared distribution

The non-central chi-squared distribution is a generalisation of the regular chi-squared distribution. The chi-squared distribution turns up in many statistical tests as the (approximate) distribution of a test statistic under the null hypothesis. Under alternative hypotheses, those same statistics often have approximate non-central chi-squared distributions.

This means that the non-central chi-squared distribution is often used to study the power of said statistical tests. In this post I give the definition of the non-central chi-squared distribution, discuss an important invariance property and show how to efficiently sample from this distribution.

Definition

Let Z be a normally distributed random vector with mean 0 and covariance I_n. Given a vector \mu \in \mathbb{R}^n, the non-central chi-squared distribution with n degrees of freedom and non-centrality parameter \Vert \mu\Vert_2^2 is the distribution of the quantity

\Vert Z+\mu \Vert_2^2 = \sum\limits_{i=1}^n (Z_i+\mu_i)^2.

This distribution is denoted by \chi^2_n(\Vert \mu \Vert_2^2). As this notation suggests, the distribution of \Vert Z+\mu \Vert_2^2 depends on \mu only through the squared norm \Vert \mu \Vert_2^2. The first few times I heard this fact, I had no idea why it would be true (and even found it a little spooky). But, as we will see below, the result is actually a simple consequence of the fact that standard normal vectors are invariant under rotations.

Rotational invariance

Suppose that we have two vectors \mu, \nu \in \mathbb{R}^n such that \Vert \mu\Vert_2^2 = \Vert \nu \Vert_2^2. We wish to show that if Z \sim \mathcal{N}(0,I_n), then

\Vert Z+\mu \Vert_2^2 has the same distribution as \Vert Z + \nu \Vert_2^2.

Since \mu and \nu have the same norm there exists an orthogonal matrix U \in \mathbb{R}^{n \times n} such that U\mu = \nu. Since U is orthogonal and Z \sim \mathcal{N}(0,I_n), we have Z'=UZ \sim \mathcal{N}(U0,UU^T) = \mathcal{N}(0,I_n). Furthermore, since U is orthogonal, U preserves the norm \Vert \cdot \Vert_2^2. This is because, for all x \in \mathbb{R}^n,

\Vert Ux\Vert_2^2 = (Ux)^TUx = x^TU^TUx=x^Tx=\Vert x\Vert_2^2.

Putting all these pieces together we have

\Vert Z+\mu \Vert_2^2 = \Vert U(Z+\mu)\Vert_2^2 = \Vert UZ + U\mu \Vert_2^2 = \Vert Z'+\nu \Vert_2^2.

Since Z and Z' have the same distribution, we can conclude that \Vert Z'+\nu \Vert_2^2 has the same distribution as \Vert Z + \nu \Vert_2^2. Since \Vert Z + \mu \Vert_2^2 = \Vert Z'+\nu \Vert_2^2, we are done.

Sampling

Above we showed that the non-central chi-squared distribution \chi^2_n(\Vert \mu\Vert_2^2) depends only on the norm of the vector \mu. We will now use this fact to provide an algorithm that can efficiently generate samples from \chi^2_n(\Vert \mu \Vert_2^2).

A naive way to sample from \chi^2_n(\Vert \mu \Vert_2^2) would be to sample n independent standard normal random variables Z_i and then return \sum_{i=1}^n (Z_i+\mu_i)^2. But for large values of n this would be very slow as we have to simulate n auxiliary random variables Z_i for each sample from \chi^2_n(\Vert \mu \Vert_2^2). This approach would not scale well if we needed many samples.

An alternative approach uses the rotation invariance described above. The distribution \chi^2_n(\Vert \mu \Vert_2^2) depends only on \Vert \mu \Vert_2^2 and not directly on \mu. Thus, given \mu, we could instead work with \nu = \Vert \mu \Vert_2 e_1 where e_1 is the vector with a 1 in the first coordinate and 0s in all other coordinates. If we use \nu instead of \mu, we have

\sum\limits_{i=1}^n (Z_i+\nu_i)^2 = (Z_1+\Vert \mu \Vert_2)^2 + \sum\limits_{i=2}^{n}Z_i^2.

The sum \sum_{i=2}^n Z_i^2 follows the regular chi-squared distribution with n-1 degrees of freedom and is independent of Z_1. The regular chi-squared distribution is a special case of the gamma distribution and can be effectively sampled with rejection sampling for large shape parameter (see here).

The shape parameter for \sum_{i=2}^n Z_i^2 is \frac{n-1}{2}, so for large values of n we can efficiently sample a value Y that follows that same distribution as \sum_{i=2}^n Z_i^2 \sim \chi^2_{n-1}. Finally to get a sample from \chi^2_n(\Vert \mu \Vert_2^2) we independently sample Z_1, and then return the sum (Z_1+\Vert \mu\Vert_2)^2 +Y.
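Here is what this sampler looks like in R; rnoncentral_chisq() is a hypothetical name used for this sketch (R's built-in rchisq() also accepts a non-centrality parameter, which makes for an easy sanity check).

# Sampling chi^2_n(||mu||_2^2) using one normal draw plus a central chi-squared draw
rnoncentral_chisq <- function(n_samples, n, mu_norm2) {
  Z1 <- rnorm(n_samples)               # the coordinate aligned with nu = ||mu|| e_1
  Y  <- rchisq(n_samples, df = n - 1)  # the remaining n - 1 squared normals
  (Z1 + sqrt(mu_norm2))^2 + Y
}

samples <- rnoncentral_chisq(1e5, n = 10, mu_norm2 = 4)
mean(samples)   # should be close to n + ||mu||_2^2 = 14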

Conclusion

In this post, we saw that the rotational invariance of the standard normal distribution gives a similar invariance for the non-central chi-squared distribution.

This invariance allowed us to efficiently sample from the non-central chi-squared distribution. The sampling procedure worked by reducing the problem to sampling from the regular chi-squared distribution.

The same invariance property is also used to calculate the cumulative distribution function and density of the non-central chi-squared distribution, although the resulting formulas are not for the faint of heart.

Non-measurable sets, cardinality and the axiom of choice

The following post is based on a talk I gave at the 2022 Stanford statistics retreat. The talk was titled “Another non-measurable monster”.

The opening slide from my presentation. The text reads "another non-measurable monster". There are spooky pumpkins in the background. Image from https://www.shutterstock.com/image-photo/pumpkins-graveyard-spooky-night-halloween-backdrop-1803950377

The material was based on the discussion and references given in this stackexchange post. The title is a reference to a Halloween lecture on measurability given by Professor Persi Diaconis.

What’s scarier than a non-measurable set?

Making every set measurable. Or rather one particular consequence of making every set measurable.

In my talk, I argued that if you make every set measurable, then there exists a set \Omega and an equivalence relation \sim on \Omega such that |\Omega| < |\Omega / \sim|. That is, the set \Omega has strictly smaller cardinality than the set of equivalence classes \Omega/\sim. The contradictory nature of this statement is illustrated in the picture below

We can think of the set \Omega as the collection of crosses drawn above. The equivalence relation \sim divides \Omega into the regions drawn above. The statement |\Omega|<|\Omega /\sim| means that in some sense there are more regions than crosses.

To make sense of this we’ll first have to be a bit more precise about what we mean by cardinality.

What do we mean by bigger and smaller?

Let A and B be two sets. We say that A and B have the same cardinality and write |A| = |B| if there exists a bijection f:A \to B. We can think of the function f as a way of pairing each element of A with a unique element of B such that every element of B is paired with an element of A.

We next want to define |A|\le |B| which means A has cardinality at most the cardinality of B. There are two reasonable ways in which we could try to define this relationship

  1. We could say |A|\le |B| means that there exists an injective function f : A \to B.
  2. Alternatively, we could say |A|\le |B| means that there exists a surjective function g:B \to A.

Definitions 1 and 2 say similar things and, in the presence of the axiom of choice, they are equivalent. Since we are going to be making every set measurable in this talk, we won't be assuming the axiom of choice. Definitions 1 and 2 are thus no longer equivalent and we have a decision to make. We will use definition 1 in this talk. For justification, note that definition 1 implies that there exists a subset B' \subseteq B such that |A|=|B'|. We simply take B' to be the range of f. This is a desirable property of the relation |A|\le |B| and it's not clear how this could be achieved using definition 2.

Infinite binary sequences

It's time to introduce the set \Omega and the equivalence relation we will be working with. The set \Omega is \{0,1\}^\mathbb{Z}, the set of all functions \omega : \mathbb{Z} \to \{0,1\}. We can think of each element \omega \in \Omega as an infinite sequence of zeros and ones stretching off in both directions. For example

\omega = \ldots 1110110100111\ldots.

But this analogy hides something important. Each \omega \in \Omega has a “middle” which is the point \omega_0. For instance, the two sequences below look the same but when we make \omega_0 bold we see that they are different.

\omega = \ldots 111011\mathbf{0}100111\ldots,

\omega' = \ldots 1110110\mathbf{1}00111\ldots.

The equivalence relation \sim on \Omega can be thought of as forgetting the location \omega_0. More formally we have \omega \sim \omega' if and only if there exists n \in \mathbb{Z} such that \omega_{n+k} = \omega_{k}' for all k \in \mathbb{Z}. That is, if we shift the sequence \omega by n we get the sequence \omega'. We will use [\omega] to denote the equivalence class of \omega and \Omega/\sim for the set of all equivalences classes.

Some probability

Associated with the space \Omega are functions X_k : \Omega \to \{0,1\}, one for each integer k \in \mathbb{Z}. These functions simply evaluate \omega at k. That is, X_k(\omega)=\omega_k. A probabilist or statistician would think of X_k as reporting the result of one of infinitely many independent coin tosses. Normally to make this formal we would have to first define a \sigma-algebra on \Omega and then define a probability on this \sigma-algebra. Today we're working in a world where every set is measurable and so we don't have to worry about \sigma-algebras. Indeed we have the following result:

(Solovay, 1970) There exists a model of the Zermelo-Fraenkel axioms of set theory in which there is a probability \mathbb{P} defined on all subsets of \Omega such that the X_k are i.i.d. \mathrm{Bernoulli}(0.5).

This result says that there is a world in which, other than the axiom of choice, all the regular axioms of set theory hold. And in this world, we can assign a probability to every subset A \subseteq \Omega in such a way that the events \{X_k=1\} are all independent and have probability 0.5. It's important to note that this is a true countably additive probability and we can apply all our familiar probability results to \mathbb{P}. We are now ready to state and prove the spooky result claimed at the start of this talk.

Proposition: Given the existence of such a probability \mathbb{P}, |\Omega | < |\Omega /\sim|.

Proof: Let f:\Omega/\sim \to \Omega be any function. To show that |\Omega|<|\Omega /\sim| we need to show that f is not injective. To do this, we’ll first define another function g:\Omega \to \Omega given by g(\omega)=f([\omega]). That is, g first maps \omega to \omega‘s equivalence class and then applies f to this equivalence class. This is illustrated below.

A commutative diagram showing the definition of g as g(\omega)=f([\omega]).

We will show that g : \Omega \to \Omega is almost surely constant with respect to \mathbb{P}. That is, there exists \omega^\star \in \Omega such that \mathbb{P}(g(\omega)=\omega^\star)=1. Each equivalence class [\omega] is finite or countable and thus has probability zero under \mathbb{P} (each singleton has probability at most 2^{-m} for every m \ge 1, hence probability zero, and a countable union of probability-zero sets has probability zero). This means that if g is almost surely constant, then f cannot be injective: if it were, the event \{\omega : g(\omega)=\omega^\star\} would be contained in a single equivalence class and could not have probability one. Thus f must map multiple (in fact infinitely many) equivalence classes to \omega^\star.

It thus remains to show that g:\Omega  \to \Omega is almost surely constant. To do this we will introduce a third function \varphi : \Omega \to \Omega. The map \varphi is simply the shift map and is given by \varphi(\omega)_k = \omega_{k+1}. Note that \omega and \varphi(\omega) are in the same equivalence class for every \omega\in \Omega. Thus, the map g satisfies g\circ \varphi = g. That is, g is \varphi-invariant.

The map \varphi is ergodic. This means that if A \subseteq \Omega satisfies \varphi(A)=A, then \mathbb{P}(A) equals 0 or 1. For example, if A is the event that 10110 appears at some point in \omega, then \varphi(A)=A and \mathbb{P}(A)=1. Likewise, if A is the event that the relative frequency of heads converges to a number strictly greater than 0.5, then \varphi(A)=A and \mathbb{P}(A)=0. The general claim that all \varphi-invariant events have probability 0 or 1 can be proved using the independence of the X_k.
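
As a quick sanity check on the first example (a Monte Carlo aside of my own, not part of the talk), the chance that 10110 appears within a finite window of fair coin flips should climb towards 1 as the window grows.

import random

def pattern_freq(pattern: str, length: int, trials: int = 2000) -> float:
    # Estimate the probability that `pattern` appears in `length` fair coin flips.
    hits = sum(pattern in "".join(random.choice("01") for _ in range(length))
               for _ in range(trials))
    return hits / trials

for length in [20, 100, 500]:
    print(length, pattern_freq("10110", length))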

For each k, define an event A_k by A_k = \{\omega : g(\omega)_k = 1\}. Since g is \varphi-invariant we have that \varphi(A_k)=A_k. Thus, \mathbb{P}(A_k)=0 or 1. This gives us a function \omega^\star :\mathbb{Z} \to \{0,1\} given by \omega^\star_k = \mathbb{P}(A_k). Note that for every k, \mathbb{P}(\{\omega : g(\omega)_k = \omega_k^\star\}) = 1. This is because if \omega_{k}^\star=1, then \mathbb{P}(\{\omega: g(\omega)_k = 1\})=1, by definition of \omega_k^\star. Likewise, if \omega_k^\star =0, then \mathbb{P}(\{\omega:g(\omega)_k=1\})=0 and hence \mathbb{P}(\{\omega:g(\omega)_k=0\})=1. Thus, in both cases, \mathbb{P}(\{\omega : g(\omega)_k = \omega_k^\star\})= 1.

Since \mathbb{Z} is countable and \mathbb{P} is countably additive, we can take the intersection of these probability-one events and conclude that

\mathbb{P}(\{\omega : g(\omega)=\omega^\star\}) = \mathbb{P}\left(\bigcap_{k \in \mathbb{Z}} \{\omega : g(\omega)_k = \omega_k^\star\}\right)=1.

Thus, g maps \Omega to \omega^\star with probability one, showing that g is almost surely constant and hence that f is not injective. \square

There’s a catch!

So we have proved that there cannot be an injective map f : \Omega/\sim \to \Omega. Does this mean we have proved |\Omega| < |\Omega/\sim|? Technically no. We have proved the negation of |\Omega/\sim|\le |\Omega|, which does not imply |\Omega| \le |\Omega/\sim|. To argue that |\Omega| < |\Omega/\sim| we also need to produce an injective map h: \Omega \to \Omega/\sim. Surprisingly, this is possible and not too difficult. The idea is to find a map g : \Omega \to \Omega such that g(\omega)\sim g(\omega') implies \omega = \omega'; then h(\omega) = [g(\omega)] is injective. This can be done by somehow encoding in g(\omega) where the centre of \omega is.

A simpler proof and other examples

Our proof was nice because we explicitly calculated the value \omega^\star to which g sent almost all of \Omega. We could have been less explicit and simply noted that the function g:\Omega \to \Omega is measurable with respect to the invariant \sigma-algebra of \varphi and hence almost surely constant by the ergodicity of \varphi.

This quicker proof allows us to generalise our “spooky result” to other sets. Below are two examples where \Omega = [0,1):

  • Fix \theta \in [0,1)\setminus \mathbb{Q} and define \omega \sim \omega' if and only if \omega + n \theta= \omega' for some n \in \mathbb{Z}.
  • \omega \sim \omega' if and only if \omega - \omega' \in \mathbb{Q}.

In both cases, a similar argument shows that in Solovay’s world we again have |\Omega| < |\Omega/\sim|; the proof goes through because the corresponding action on \Omega is ergodic under the uniform measure.

Three takeaways

I hope you agree that this example is good fun and surprising. I’d like to end with some remarks.

  • The first remark is some mathematical context. The argument given today is linked to an interesting area of mathematics called descriptive set theory. This field studies the properties of well-behaved subsets (such as Borel subsets) of topological spaces. Descriptive set theory incorporates logic, topology and ergodic theory. I don’t know much about the field, but in Persi’s Halloween talk he said that one “monster” was that few people are interested in the subject.
  • The next remark is a better way to think about our “spooky result”. The result is really saying something about cardinality. When we no longer use the axiom of choice, cardinality becomes a subtle concept. The statement |A|\le |B| no longer corresponds to A being “smaller” than B but rather that A is “less complex” than B. This is perhaps analogous to some statistical models which may be “large” but do not overfit due to subtle constraints on the model complexity.
  • In light of the previous remark, I would invite you to think about whether the example I gave is truly spookier than non-measurable sets. It might seem to you that it is simply a reasonable consequence of removing the axiom of choice and restricting ourselves to functions we could actually write down or understand. I’ll let you decide.

Footnotes

  1. Technically, Solovay proved that there exists a model of set theory in which every subset of \mathbb{R} is Lebesgue measurable. To get the result for binary sequences, we have to restrict to [0,1) and use the binary expansion of x \in [0,1) to define a function [0,1) \to \Omega. Solovay’s paper is available here: https://www.jstor.org/stable/1970696?seq=1

The Singular Value Decomposition (SVD)

The singular value decomposition (SVD) is a powerful matrix decomposition. It is used all the time in statistics and numerical linear algebra. The SVD is at the heart of principal component analysis, it demonstrates what’s going on in ridge regression and it is one way to construct the Moore-Penrose inverse of a matrix. For more SVD love, see the tweets below.

A tweet by Women in Statistics and Data Science about the SVD.
The full thread is here https://twitter.com/WomenInStat/status/1285610321747611653?s=20&t=Elj62mGSc9gINHvbwt82Zw

In this post I’ll define the SVD and prove that it always exists. At the end we’ll look at some pictures to better understand what’s going on.

Definition

Let X be an n \times p matrix. We will define the singular value decomposition first in the case n \ge p. The SVD consists of three matrices U \in \mathbb{R}^{n \times p}, \Sigma \in \mathbb{R}^{p \times p} and V \in \mathbb{R}^{p \times p} such that X = U\Sigma V^T. The matrix \Sigma is required to be diagonal with non-negative diagonal entries \sigma_1 \ge \sigma_2 \ge \ldots \ge \sigma_p \ge 0. These numbers are called the singular values of X. The matrices U and V are required to have orthonormal columns, so that U^TU=V^TV = I_p, the p \times p identity matrix. Note that since V is square we also have VV^T=I_p; however, we won’t have UU^T = I_n unless n = p.

In the case when n \le p, we can define the SVD of X in terms of the SVD of X^T. Let \widetilde{U} \in \mathbb{R}^{p \times n}, \widetilde{\Sigma} \in \mathbb{R}^{n \times n} and \widetilde{V} \in \mathbb{R}^{n \times n} be the SVD of X^T so that X^T=\widetilde{U}\widetilde{\Sigma}\widetilde{V}^T. The SVD of X is then obtained by transposing both sides of this equation, giving U = \widetilde{V}, \Sigma = \widetilde{\Sigma}^T=\widetilde{\Sigma} and V = \widetilde{U}.
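
Before constructing the SVD by hand, here is a quick numerical check of the definition. This is my own sketch using numpy’s built-in SVD; with full_matrices=False, numpy returns exactly the “thin” shapes used above.

import numpy as np

rng = np.random.default_rng(0)
n, p = 7, 3
X = rng.standard_normal((n, p))
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # thin SVD: U is n x p

print(U.shape, s.shape, Vt.shape)                  # (7, 3) (3,) (3, 3)
print(np.allclose(X, U @ np.diag(s) @ Vt))         # X = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(p)))             # U^T U = I_p
print(np.allclose(Vt @ Vt.T, np.eye(p)))           # V^T V = I_p
print(np.all(s[:-1] >= s[1:]) and np.all(s >= 0))  # sigma_1 >= ... >= sigma_p >= 0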

Construction

The SVD of a matrix can be found by iteratively solving an optimisation problem. We will first describe an iterative procedure that produces matrices U \in \mathbb{R}^{n \times p}, \Sigma \in \mathbb{R}^{p \times p} and V \in \mathbb{R}^{p \times p}. We will then verify that U,\Sigma and V satisfy the defining properties of the SVD.

We will construct the matrices U and V one column at a time and we will construct the diagonal matrix \Sigma one entry at a time. To construct the first columns and entries, recall that the matrix X is really a linear function from \mathbb{R}^p to \mathbb{R}^n given by v \mapsto Xv. We can thus define the operator norm of X via

\Vert X \Vert = \sup\left\{ \|Xv\|_2 : \|v\|_2 =1\right\},

where \|v\|_2 represents the Euclidean norm of v \in \mathbb{R}^p and \|Xv\|_2 is the Euclidean norm of Xv \in \mathbb{R}^n. The set of vectors \{v \in \mathbb{R}^p : \|v\|_2 = 1 \} is compact and the function v \mapsto \|Xv\|_2 is continuous. Thus, the supremum used to define \Vert X \Vert is achieved at some vector v_1 \in \mathbb{R}^p. Define \sigma_1 = \|X v_1\|_2. If \sigma_1 \neq 0, then define u_1 = Xv_1/\sigma_1 \in \mathbb{R}^n. If \sigma_1 = 0, then define u_1 to be an arbitrary vector in \mathbb{R}^n with \|u_1\|_2 = 1. To summarise, we have

  • v_1 \in \mathbb{R}^p with \|v_1\|_2 = 1.
  • \sigma_1 = \|X\| = \|Xv_1\|_2.
  • u_1 \in \mathbb{R}^n with \|u_1\|_2=1 and Xv_1 = \sigma_1u_1.

We have now started to fill in our SVD. The number \sigma_1 \ge 0 is the first singular value of X and the vectors v_1 and u_1 will be the first columns of the matrices V and U respectively.

Now suppose that we have found the first k singular values \sigma_1,\ldots,\sigma_k and the first k columns of V and U. If k = p, then we are done. Otherwise we repeat a similar process.

Let v_1,\ldots,v_k and u_1,\ldots,u_k be the first k columns of V and U. The vectors v_1,\ldots,v_k split \mathbb{R}^p into two subspaces. These subspaces are S_1 = \text{span}\{v_1,\ldots,v_k\} and S_2 = S_1^\perp, the orthogonal complement of S_1. By restricting X to S_2 we get a new linear map X_{|S_2} : S_2 \to \mathbb{R}^n. As before, the operator norm of X_{|S_2} is defined to be

\|X_{|S_2}\| = \sup\{\|X_{|S_2}v\|_2:v \in S_2, \|v\|_2=1\}.

Since S_2 = \text{span}\{v_1,\ldots,v_k\}^\perp we must have

\|X_{|S_2}\| =  \sup\{\|Xv\|_2:v \in \mathbb{R}^p, \|v\|_2=1, v_j^Tv = 0 \text{ for } j=1,\ldots,k\}.

The set \{v \in \mathbb{R}^p : \|v\|_2=1, v_j^Tv=0\text{ for } j=1,\ldots,k\} is compact, and thus there exists a vector v_{k+1} in this set such that \|Xv_{k+1}\|_2 = \|X_{|S_2}\|. As before, define \sigma_{k+1} = \|Xv_{k+1}\|_2 and u_{k+1} = Xv_{k+1}/\sigma_{k+1} if \sigma_{k+1}\neq 0. If \sigma_{k+1} = 0, then define u_{k+1} to be any unit vector in \mathbb{R}^{n} that is orthogonal to u_1,u_2,\ldots,u_k.

This process repeats until eventually k = p and we have produced matrices U \in \mathbb{R}^{n \times p}, \Sigma \in \mathbb{R}^{p \times p} and V \in \mathbb{R}^{p \times p}. In the next section, we will argue that these three matrices satisfy the properties of the SVD.
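
Here is a numerical sketch of this greedy construction (an illustration of my own, assuming for simplicity that all singular values of X are nonzero, as happens with probability one for the random X below). At each step, the maximiser of \|Xv\|_2 over unit vectors orthogonal to v_1,\ldots,v_k is found as the top eigenvector of PX^TXP, where P is the projection onto the orthogonal complement.

import numpy as np

def greedy_svd(X):
    n, p = X.shape
    U, V, sigma = np.zeros((n, p)), np.zeros((p, p)), np.zeros(p)
    P = np.eye(p)  # projector onto span{v_1, ..., v_k}^perp
    for k in range(p):
        M = X @ P                          # X restricted to the complement S_2
        _, vecs = np.linalg.eigh(M.T @ M)  # eigenvalues in ascending order
        v = vecs[:, -1]                    # unit maximiser of ||Mv||_2
        sigma[k] = np.linalg.norm(X @ v)   # sigma_{k+1} = ||X v_{k+1}||_2
        V[:, k] = v
        U[:, k] = X @ v / sigma[k]         # u_{k+1} = X v_{k+1} / sigma_{k+1}
        P -= np.outer(v, v)                # remove v from the search space
    return U, sigma, V

X = np.random.default_rng(1).standard_normal((6, 3))
U, s, V = greedy_svd(X)
print(np.allclose(X, U @ np.diag(s) @ V.T))  # X = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(3)))       # columns of U are orthonormal
print(np.all(s[:-1] >= s[1:]))               # singular values are decreasing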

Correctness

The defining properties of the SVD were given at the start of this post. We will see that most of the properties follow immediately from the construction but one of them requires a bit more analysis. Let U = [u_1,\ldots,u_p], \Sigma = \text{diag}(\sigma_1,\ldots,\sigma_p) and V= [v_1,\ldots,v_p] be the output from the above construction.

First note that by construction v_1,\ldots, v_p are orthonormal: each is a unit vector and we always had v_{k+1} \in \text{span}\{v_1,\ldots,v_k\}^\perp. It follows that the matrix V is orthogonal and so V^TV=VV^T=I_p.

The matrix \Sigma is diagonal by construction. Furthermore, we have that \sigma_{k+1} \le \sigma_k for every k. This is because both \sigma_k and \sigma_{k+1} were defined as the maximum value of \|Xv\|_2 over subsets of \mathbb{R}^p, and the subset for \sigma_k contains the subset for \sigma_{k+1}. Thus \sigma_k \ge \sigma_{k+1}.

We’ll next verify that X = U\Sigma V^T. Since V is orthogonal, the vectors v_1,\ldots,v_p form an orthonormal basis for \mathbb{R}^p. It thus suffices to check that Xv_k = U\Sigma V^Tv_k for k = 1,\ldots,p. Again by the orthogonality of V we have that V^Tv_k = e_k, the k^{th} standard basis vector. Thus,

U\Sigma V^Tv_k = U\Sigma e_k = U\sigma_k e_k = \sigma_k u_k.

Above, we used that \Sigma was a diagonal matrix and that u_k is the k^{th} column of U. If \sigma_k \neq 0, then \sigma_k u_k = Xv_k by definition. If \sigma_k =0, then \|Xv_k\|_2=0 and so Xv_k = 0 = \sigma_ku_k also. Thus, in either case, U\Sigma V^Tv_k = Xv_k and so U\Sigma V^T = X.

The last property we need to verify is that U is orthogonal. Note that this isn’t obvious. At each stage of the process, we made sure that v_{k+1} \in \text{span}\{v_1,\ldots,v_k\}^\perp. However, in the case that \sigma_{k+1} \neq 0, we simply defined u_{k+1} = Xv_{k+1}/\sigma_{k+1}. It is not clear why this would imply that u_{k+1} is orthogonal to u_1,\ldots,u_k.

It turns out that a geometric argument is needed to show this. The idea is that if u_{k+1} was not orthogonal to u_j for some j \le k, then v_j couldn’t have been the value that maximises \|Xv\|_2.

Let u_{k} and u_j be two columns of U with j < k and \sigma_j,\sigma_k > 0. We wish to show that u_j^Tu_k = 0. To show this, we will use the fact that v_j and v_k are orthonormal and perform “polar interpolation”. That is, for \lambda \in [0,1], define

v_\lambda = \sqrt{1-\lambda}\cdot v_{j}+\sqrt{\lambda} \cdot v_k.

Since v_{j} and v_k are orthogonal, we have that

\|v_\lambda\|_2^2 = (1-\lambda)\|v_{j}\|_2^2+\lambda\|v_k\|_2^2 = (1-\lambda)+\lambda = 1.

Furthermore, v_\lambda is orthogonal to v_1,\ldots,v_{j-1}. Thus, by the definition of v_j,

\|Xv_\lambda\|_2^2 \le \sigma_j^2 = \|Xv_j\|_2^2.

By the linearity of X and the definitions of u_j,u_k,

\|Xv_\lambda\|_2^2 = \|\sqrt{1-\lambda}\cdot Xv_j+\sqrt{\lambda}\cdot Xv_{k}\|_2^2 = \|\sigma_j\sqrt{1-\lambda}\cdot u_j+\sigma_{k}\sqrt{\lambda}\cdot u_{k}\|_2^2.

Expanding the square and using \|u_j\|_2=\|u_k\|_2=1, we have

(1-\lambda)\sigma_j^2 + 2\sqrt{\lambda(1-\lambda)}\cdot \sigma_j\sigma_k\, u_j^Tu_{k}+\lambda\sigma_k^2 = \|Xv_\lambda\|_2^2 \le \sigma_j^2.

Rearranging and dividing by \sqrt{\lambda} gives,

2\sqrt{1-\lambda}\cdot \sigma_j\sigma_k\, u_j^Tu_k \le \sqrt{\lambda}\cdot(\sigma_j^2-\sigma_k^2) \quad \text{for all } \lambda \in (0,1].

Taking \lambda \searrow 0 gives u_j^Tu_k \le 0. Performing the same polar interpolation with v_\lambda' = \sqrt{1-\lambda}v_j - \sqrt{\lambda}v_k shows that -u_j^Tu_k \le 0 and hence u_j^Tu_k = 0.
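
To see the algebra above in action, here is a small numerical check of my own of the polar interpolation expansion \|\sqrt{1-\lambda}\,a+\sqrt{\lambda}\,b\|_2^2 = (1-\lambda)\|a\|_2^2+2\sqrt{\lambda(1-\lambda)}\,a^Tb+\lambda\|b\|_2^2.

import numpy as np

rng = np.random.default_rng(2)
a, b = rng.standard_normal(4), rng.standard_normal(4)
for lam in [0.1, 0.5, 0.9]:
    lhs = np.linalg.norm(np.sqrt(1 - lam) * a + np.sqrt(lam) * b) ** 2
    rhs = (1 - lam) * (a @ a) + 2 * np.sqrt(lam * (1 - lam)) * (a @ b) + lam * (b @ b)
    print(np.isclose(lhs, rhs))  # True for every lambda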

We have thus proved that U is orthogonal. This proof is pretty “slick” but it isn’t very illuminating. To better demonstrate the concept, I made an interactive Desmos graph that you can access here.

This graph shows example vectors u_j, u_k \in \mathbb{R}^2. The vector u_j is fixed at (1,0) and a quarter circle of radius 1 is drawn. Any vectors u that are outside this circle have \|u\|_2 > 1 = \|u_j\|_2.

The vector u_k can be moved around inside this quarter circle. This can be done either by clicking and dragging on the point or by changing the values of a and b on the left. The red curve is the path of

\lambda \mapsto \sqrt{1-\lambda}\cdot u_j+\sqrt{\lambda}\cdot u_k.

As \lambda goes from 0 to 1, the path travels from u_j to u_k.

Note that there is a portion of the red curve near u_j that is outside the black circle. This corresponds to a small value of \lambda > 0 that results in \|X v_\lambda\|_2 > \|Xv_j\|_2, contradicting the definition of v_j. By moving the point u_k around in the plot, you can see that this always happens unless u_k lies exactly on the y-axis. That is, unless u_k is orthogonal to u_j.