I recently gave a talk on the Yang-Baxter equation. The focus of the talk was to state the connection between the Yang-Baxter equation and the braid relation. This connection comes from a system of interacting particles. In this post, I’ll go over part of my talk. You can access the full set of notes here.
Interacting particles
Imagine two particles on a line, each with a state that can be any element of a set . Suppose that the only way particles can change their states is by interacting with each other. An interaction occurs when two particles pass by each other. We could define a function that describes how the states change after interaction. Specifically, if the first particle is in state and the second particle is in state , then their states after interacting will be
where are the components of . Recall that the particles move past each other when they interact. Thus, to keep track of the whole system we need an element of to keep track of the states and a permutation to keep track of the positions.
Three particles
Now suppose that we have particles labelled . As before, each particle has a state in . We can thus keep track of the state of each particle with an element of . The particles also have a position which is described by a permutation . The order the entries of corresponds to the labels of the particles not their positions. A possible configuration is shown below:
A possible configuration of the three particles. The above configuration is described as having states and positions .
As before, the particles can interact with each other. However, we’ll now add the restriction that the particles can only interact two at a time and interacting particles must have adjacent positions. When two particles interact, they swap positions and their states change according to . The state and position of the remaining particle are unchanged. For example, in the above picture we could interact particles and . This will produce the below configuration:
The new configuration after interacting particles and in the first diagram. The configuration is now described by the states and the permutation .
To keep track of how the states of the particles change over time we will introduce three functions from to . These functions are . The function is given by applying to the coordinates of and acting by the identity on the remaining coordinate. In symbols,
The function exactly describes how the states of the three particles change when particles and interact. Now suppose that three particles begin in position and states . We cannot directly interact particles and since they are not adjacent. We first have to pass one of the particles through particle . This means that there are two ways we can interact particles and . These are illustrated below.
The two ways of passing through particle 2 to interact particles 1 and 3.
In the top chain of interactions, we first interact particles and . In this chain of interactions, the states evolve as follows:
In the bottom chain of interactions, we first interact particles and . On this chain, the states evolve in a different way:
Note that after both of these chains of interactions the particles are in position . The function is said to solve the Yang–Baxter equation if the two chains of interactions also result in the same states.
Definition: A function is a solution to the set theoretic Yang–Baxter equation if,
This equation can be visualized as the “braid relation” shown below. Here the strings represent the three particles and interacting two particles corresponds to crossing one string over the other.
The braid relation.
Here are some examples of solutions to the set theoretic Yang-Baxter equation,
The identity on .
The swap map, . (A small computational check of this example is sketched after this list.)
If commute, then the function is a solution to the Yang-Baxter equation.
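To make the definition concrete, here is a minimal R sketch written for this post rather than taken from the original talk. It checks the swap map (the second example above) on the small set {1, 2, 3}; the order of interactions in the two chains is my reading of the description above, since the equation itself is not reproduced here.

# A minimal sketch (not from the original talk): for the swap map
# R(x, y) = (y, x), check that interacting particles 1 and 2, then 1 and 3,
# then 2 and 3 changes the states in the same way as interacting 2 and 3,
# then 1 and 3, then 1 and 2.
R_swap <- function(x, y) c(y, x)

# apply_R applies the map R to coordinates i and j of a state triple and
# leaves the remaining coordinate unchanged.
apply_R <- function(s, i, j) { s[c(i, j)] <- R_swap(s[i], s[j]); s }
R12 <- function(s) apply_R(s, 1, 2)
R13 <- function(s) apply_R(s, 1, 3)
R23 <- function(s) apply_R(s, 2, 3)

# Check that the two chains of interactions agree on every triple of states.
triples <- expand.grid(x = 1:3, y = 1:3, z = 1:3)
all(apply(triples, 1, function(s) {
  identical(unname(R23(R13(R12(s)))), unname(R12(R13(R23(s)))))
}))  # TRUE: the swap map solves the equation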
In the full set of notes I talk about a number of extensions and variations of the Yang-Baxter equation. These include having more than three particles, allowing for the particle states to be entangled, and the parametric Yang-Baxter equation.
Suppose you want to calculate an expression of the form
where . Such expressions can be difficult to evaluate directly since the exponentials can easily cause overflow errors. In this post, I’ll talk about a clever way to avoid such errors.
If there were no terms in the second sum we could use the log-sum-exp trick. That is, to calculate
we set and use the identity
Since for all , the left hand side of the above equation can be computed without the risk of overflow. To calculate,
we can use the above method to separately calculate
and
The final result we want is
Since $A > B$, the right hand side of the above expression can be evaluated safely and we will have our final answer.
R code
The R code below defines a function that performs the above procedure
# Safely compute log(sum(exp(pos)) - sum(exp(neg)))
# The default value for neg is an empty vector.
logSumExp <- function(pos, neg = c()) {
  max_pos <- max(pos)
  A <- max_pos + log(sum(exp(pos - max_pos)))
  # If neg is empty, the calculation is done
  if (length(neg) == 0) {
    return(A)
  }
  # If neg is non-empty, calculate B.
  max_neg <- max(neg)
  B <- max_neg + log(sum(exp(neg - max_neg)))
  # Check that A is bigger than B
  if (A <= B) {
    stop("sum(exp(pos)) must be larger than sum(exp(neg))")
  }
  # log1p() is a built-in function that accurately calculates log(1 + x) for |x| << 1
  return(A + log1p(-exp(B - A)))
}
An example
The above procedure can be used to evaluate
Evaluating this directly would quickly lead to errors since R (and most other programming languages) cannot compute . However, R has the functions lfactorial() and lchoose() which can compute and for large values of and . We can thus put this expression in the general form at the start of this post
The following R code thus gives us exactly what we want:
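The exact expression from the original example isn’t reproduced above, so here is a hypothetical stand-in with the same flavour; the numbers 300, 250 and 150 are my own, chosen so that the individual terms overflow double precision while the logSumExp() function defined above evaluates the expression safely.

pos <- lfactorial(c(300, 250))   # log(300!) and log(250!); 300! itself overflows double precision
neg <- lchoose(300, 150)         # log(choose(300, 150))
logSumExp(pos, neg)              # log(300! + 250! - choose(300, 150)), computed safely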
Suppose we have two samples and and we want to test if they are from the same distribution. Many popular tests can be reinterpreted as correlation tests by pooling the two samples and introducing a dummy variable that encodes which sample each data point comes from. In this post we will see how this plays out in a simple t-test.
The equal variance t-test
In the equal variance t-test, we assume that and , where is unknown. Our hypothesis that and are from the same distribution becomes the hypothesis . The test statistic is
,
where and are the two sample means. The variable is the pooled estimate of the standard deviation and is given by
.
Under the null hypothesis, follows the T-distribution with degrees of freedom. We thus reject the null when exceeds the quantile of the T-distribution.
Pooling the data
We can turn this two sample test into a correlation test by pooling the data and using a linear model. Let be the pooled data and for , define by
The assumptions that and can be rewritten as
where . That is, we have expressed our modelling assumptions as a linear model. When working with this linear model, the hypothesis is equivalent to . To test we can use the standard t-test for a coefficient in a linear model. The test statistic in this case is
where is the ordinary least squares estimate of , is the design matrix and is an estimate of given by
where is the fitted value of .
It turns out that is exactly equal to . We can see this by writing out the design matrix and calculating everything above. The design matrix has rows and is thus equal to
This implies that
And therefore,
Thus, . So,
which is starting to look like the test statistic from the two-sample test. Now
And so
Thus, and
This means to show that , we only need to show that . To do this, note that the fitted values are equal to
Thus,
Which is exactly . Therefore, and the two sample t-test is equivalent to a correlation test.
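To make the equivalence concrete, here is a small numerical check in R (my own sketch, not code from the original post) comparing the equal-variance two-sample t statistic with the t statistic of the dummy-variable coefficient in a linear model fit to the pooled data.

set.seed(1)
x <- rnorm(20, mean = 0)
y <- rnorm(30, mean = 0.5)
pooled <- c(x, y)
z <- c(rep(0, length(x)), rep(1, length(y)))   # dummy variable: 0 for the first sample, 1 for the second

t_two_sample <- t.test(x, y, var.equal = TRUE)$statistic
t_regression <- summary(lm(pooled ~ z))$coefficients["z", "t value"]

# The two statistics agree up to sign: the dummy coding measures the second
# sample minus the first, while t.test() measures the first minus the second.
c(two_sample = unname(t_two_sample), regression = unname(t_regression))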
The Friedman-Rafsky test
In the above example, we saw that the two sample t-test was a special case of the t-test for regressions. This is neat but both tests make very strong assumptions about the data. However, the same thing happens in a more interesting non-parametric setting.
In their 1979 paper, Jerome Friedman and Lawrence Rafsky introduced a two sample test that makes no assumptions about the distribution of the data. The two samples do not even have to be real-valued and can instead be from any metric space. It turns out that their test is a special case of another procedure they devised for testing for association (Friedman & Rafsky, 1983). As with the t-tests above, this connection comes from pooling the two samples and introducing a dummy variable.
A few months ago, I had the pleasure of reading Eugenia Cheng‘s book How to Bake Pi. Each chapter starts with a recipe which Cheng links to the mathematical concepts contained in the chapter. The book is full of interesting connections between mathematics and the rest of the world.
One of my favourite ideas in the book is something Cheng writes about equations and the humble equals sign: “=”. She explains that when an equation says two things are equal we very rarely mean that they are exactly the same thing. What we really mean is that the two things are the same in some ways even though they may be different in others.
One example that Cheng gives is the equation . This is such a familiar statement that you might really think that and are the same thing. Indeed, if and are any numbers, then the number you get when you calculate is the same as the number you get when you calculate . But calculating could be very different from calculating . A young child might calculate by starting with and then counting one-by-one from to . If is and is , then calculating requires counting from to but calculating simply amounts to counting from to . The first process takes way longer than the second and the child might disagree that is the same as .
In How to Bake Pi, Cheng explains that a crucial idea behind equality is context. When someone says that two things are equal we really mean that they are equal in the context we care about. Cheng talks about how context is crucial throughout mathematics and introduces a little bit of category theory as a tool for moving between different contexts. I think that this idea of context is really illuminating and I wanted to share some examples where “=” doesn’t mean “exactly the same as”.
The Sherman-Morrison formula
The Sherman-Morrison formula is a result from linear algebra that says for any invertible matrix and any pair of vectors , if , then is invertible and
Here “=” means the following:
You can take any natural number , any matrix of size by , and any length -vectors and that satisfy the above condition.
If you take all those things and carry out all the matrix multiplications, additions and inversions on the left and all the matrix multiplications, additions and inversions on the right, then you will end up with exactly the same matrix in both cases.
But depending on the context, the expression on one side of “=” may be much easier to work with than the other. Although the right hand side looks a lot more complicated, it is much easier to compute in one important context. This context is when we have already calculated the matrix and now want the inverse of . The left hand side naively computes which takes computations since we have to invert a matrix. On the right hand side, we only need to compute a small number of matrix-vector products and then add two matrices together. This brings the computational cost down to .
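As a quick numerical sanity check (my own sketch, not from the original post), the two sides of the Sherman-Morrison formula can be compared directly in R; the vectors u and v are scaled down only to keep the denominator 1 + v^T A^{-1} u safely away from zero.

set.seed(1)
n <- 5
A <- diag(n) + crossprod(matrix(rnorm(n * n), n, n))   # a positive definite, hence invertible, matrix
u <- 0.1 * rnorm(n)   # small vectors keep 1 + t(v) %*% solve(A) %*% u away from zero
v <- 0.1 * rnorm(n)

A_inv <- solve(A)
lhs <- solve(A + u %*% t(v))   # direct inversion of the rank-one update
rhs <- A_inv - (A_inv %*% u %*% t(v) %*% A_inv) / as.numeric(1 + t(v) %*% A_inv %*% u)
max(abs(lhs - rhs))   # essentially zero, up to floating point error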
These cost saving measures come up a lot when studying linear regression. The Sherman-Morrison formula can be used to update regression coefficients when a new data point is added. Similarly, the Sherman-Morrison formula can be used to quickly calculate the fitted values in leave-one-out cross validation.
log-sum-exp
This second example also has connections to statistics. In a mixture model, we assume that each data point comes from a distribution of the form:
,
where is a vector and is equal to the probability that came from class . The parameters are the parameters for the group. The log-likelihood is thus,
,
where . We can see that the log-likelihood is of the form log-sum-exp. Calculating a log-sum-exp can cause issues with numerical stability. For instance if and , for all , then the final answer is simply . However, as soon as we try to calculate on a computer, we’ll be in trouble.
The solution is to use the following equality, for any ,
.
Proving the above identity is a nice exercise in the laws of logarithms and exponentials, and with a clever choice of we can more safely compute the log-sum-exp expression. For instance, in the documentation for pytorch’s implementation of logsumexp() they take to be the maximum of . This (hopefully) makes each of the terms a reasonable size and avoids any numerical issues.
Again, the left and right hand sides of the above equation might be the same number, but in the context of having to use computers with limited precision, they represent very different calculations.
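As a tiny illustration of that difference (my own sketch), here is the shift-by-the-maximum trick in R:

x <- c(1000, 1000)         # log(exp(1000) + exp(1000)) should be 1000 + log(2)
log(sum(exp(x)))           # Inf, because exp(1000) overflows
m <- max(x)
m + log(sum(exp(x - m)))   # 1000.6931..., computed safely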
Beyond How to Bake Pi
Eugenia Cheng has recently published a new book called The Joy of Abstraction. I’m just over half way through and it’s been a really engaging and interesting introduction to category theory. I’m looking forward to reading the rest of it and getting more insight from Eugenia Cheng’s great mathematical writing.
Next week I’ll be starting the second year of my statistics PhD. I’ve learnt a lot from taking the first year classes and studying for the qualifying exams. Some of what I’ve learnt has given me a new perspective on some of my old blog posts. Here are three things that I’ve written about before and I now understand better.
1. The Pi-Lambda Theorem
An early post on this blog was titled “A minimal counterexample in probability theory“. The post was about a theorem from the probability course offered at the Australian National University. The theorem states that if two probability measures agree on a collection of subsets and the collection is closed under finite intersections, then the two probability measures agree on the -algebra generated by the collection. In my post I give an example which shows that you need the collection to be closed under finite intersections. I also show that you need to have at least four points in the space to find such an example.
What I didn’t know then is that the above theorem is really a corollary of Dynkin’s theorem. This theorem was proved in my first graduate probability course which was taught by Persi Diaconis. Professor Diaconis kept a running tally of how many times we used the theorem in his course and we got up to at least 10. (For more appearances by Professor Diaconis on this blog see here, here and here).
If I were to write the above post again, I would talk about the theorem and rename the post “The smallest -system”. The example given in my post is really about needing at least four points to find a -system that is not a -algebra.
2. Mallows’ Cp statistic
The very first blog post I wrote was called “Complexity Penalties in Statistical Learning“. I wasn’t sure if I would write a second and so I didn’t set up a WordPress account. I instead put the post on the website LessWrong. I no longer associate myself with the rationality community but posting to LessWrong was straightforward and helped me reach more people.
The post was inspired in two ways by the 2019 AMSI summer school. First, the content is from the statistical learning course I took at the summer school. Second, at the career fair many employers advised us to work on our writing skills. I don’t know if I would have started blogging if not for the AMSI Summer School.
I didn’t know it at the time but the blog post is really about Mallows’ Cp statistic. Mallows’ Cp statistic is an estimate of the test error of a regression model fit using ordinary least squares. Mallows’ Cp is equal to the training error plus a “complexity penalty” which takes into account the number of parameters. In the blog post I talk about model complexity and over-fitting. I also write down and explain Mallows’ Cp in the special case of polynomial regression.
In the summer school course I took, I don’t remember the name Mallows’ Cp being used but I thought it was a great idea and enjoyed writing about it. The next time I encountered Mallows’ Cp was in the linear models course I took last fall. I was delighted to see it again and learn how it fit into a bigger picture. More recently, I read Brad Efron’s paper “How Biased is the Apparent Error Rate of a Prediction Rule?“. The paper introduces the idea of “effective degrees of freedom” and expands on the ideas behind the Cp statistic.
Incidentally, enrolment is now open for the 2023 AMSI Summer School! This summer it will be hosted at the University of Melbourne. I encourage any Australian mathematics or statistics students reading this to take a look and apply. I really enjoyed going in both 2019 and 2020. (Also if you click on the above link you can try to spot me in the group photo of everyone wearing red shirts!)
3. Finitely additive probability measures
In “Finitely additive measures” I talk about how hard it is to define a finitely additive measure on the set of natural numbers that is not countably additive. In particular, I talked about needing to use the Hahn–Banach extension theorem to extend the natural density from the collection of sets with density to the collection of all subsets of the natural numbers.
There were a number of homework problems in my first graduate probability course that relate to this post. We proved that the sets with density are not closed under finite unions and we showed that the square free numbers have density .
We also proved that any finite measure defined on an algebra of subsets can be extended to the collection of all subsets. This proof used Zorn’s lemma and the resulting measure is far from unique. The use of Zorn’s lemma relates to the main idea in my blog post, that defining such a finitely additive measure is in some sense non-constructive.
Other posts
Going forward, I hope to continue publishing at least one new post every month. I look forward to one day writing another post like this when I can look back and reflect on how much I have learnt.
Total variation is a way of measuring how much a function “wiggles”. In this post, I want to motivate the definition of total variation by talking about elevation in marathon running.
Comparing marathon courses
On July 24th I ran the 2022 San Francisco (SF) marathon. All marathons are the same distance, 42.2 kilometres (26.2 miles) but individual courses can vary greatly. Some marathons are on road and others are on trails. Some locations can be hot and others can be rainy. And some, such as the SF marathon, can be much hillier than others. Below is a plot comparing the elevation of the Canberra marathon I ran last year to the elevation of the SF marathon:
A plot showing the relative elevation over the course of the Canberra and San Francisco marathons. Try to spot the two times I ran over the Golden Gate Bridge during the SF marathon.
Immediately, you can see that the range of elevation during the San Francisco marathon was much higher than the range of elevation during the Canberra marathon. However, what made the SF marathon hard wasn’t any individual incline but rather the sheer number of ups and downs. For comparison, the plot below shows elevation during a 32 km training run and elevation during the SF marathon:
A plot showing the relative elevation over the course of a training run and the San Francisco marathon. The big climb during my training run is the Stanford dish.
You can see that my training run was mostly flat but had one big hill in the last 10 kilometres. The maximum relative elevation on my training run was about 50 meters higher than the maximum relative elevation of the marathon, but overall the training run graph is a lot less wiggly. This meant there were far more individual hills during the marathon and so the first 32 km of the marathon felt a lot tougher than the training run. By comparing these two runs, you can see that the elevation range can hide important information about the difficulty of a run. We also need to pay attention to how wiggly the elevation curve is.
Wiggliness Scores
So far our definition of wiggliness has been imprecise and has relied on looking at a graph of the elevation. This makes it hard to compare two runs and quickly decide which one is wigglier. It would be convenient if there was a “wiggliness score” – a single number we could assign to each run which measured the wiggliness of the run’s elevation. Below we’ll see that total variation does exactly this.
If we zoom in on one of the graphs above, we would see that it actually consists of tiny straight line segments. For example, let’s look at the 22nd kilometre of the SF marathon. In this plot it looks like elevation is a smooth function of distance:
The relative elevation of the 22nd kilometre during the SF marathon.
But if we zoom in on a 100 m stretch, then we see that the graph is actually a series of straight lines glued together:
The relative elevation over a 100 metres during the marathon.
This is because these graphs are made using my GPS watch which makes one recording per second. If we place dots at each of these times, then the straight lines become clearer:
The relative elevation over the same 100 metre stretch. Each blue dot marks a point when a measurement was made.
We can use these blue dots to define the graph’s wiggliness score. The wiggliness score should capture how much the graph varies across its domain. This suggests that wiggliness scores should be additive. By additive, I mean that if we split the domain into a finite number of pieces, then the wiggliness score across the whole domain should be the sum of the wiggliness score of each segment.
In particular, the wiggliness score for the SF marathon is equal to the sum of the wiggliness score of each section between two consecutive blue dots. This means we only need to quantify how much the graph varies between consecutive blue dots. Fortunately, between two such dots, the graph is a straight line. The amount that a straight line varies is simply the distance between the y-value at the start and the y-value at the end. Thus, by adding up all these little distances we can get a wiggliness score for the whole graph. This wiggliness score is used in mathematics, where it is called the total variation.
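In code this computation is a one-liner. The sketch below is my own illustration, not the code used to produce the numbers that follow; it assumes the elevation has been sampled at regular intervals, as with the GPS recordings above.

# The wiggliness score is the sum of the absolute changes between
# consecutive elevation recordings.
wiggliness <- function(elevation) sum(abs(diff(elevation)))

# A toy elevation series: climb 30 m, descend 20 m, then climb another 10 m.
elevation <- c(seq(0, 30), seq(29, 10), seq(11, 20))
wiggliness(elevation)   # 30 + 20 + 10 = 60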
Here are the wiggliness scores for the three runs shown above:
Run                      Wiggliness score
Canberra Marathon 2021   617 m
Training run             742 m
SF Marathon 2022         2140 m
The total variation or wiggliness score for the three graphs shown above.
Total Variation
We’ve seen that by breaking up a run into little pieces, we can calculate the total variation over the course of the run. But how can we calculate the total variation of an arbitrary function ?
Our previous approach won’t work because the function might not be made up of straight lines. But we can approximate with other functions that are made of straight lines. We can calculate the total variation of these approximations using the approach we used for the marathon runs. Then we define the total variation of as the limit of the total variation of each of these approximations.
To make this precise, we will work with partitions of . A partition of is a finite set of points such that:
.
That is, is a collection of increasing points in that start at and end at . For a given partition of , we calculate how much the function varies over the points in the partition . As with the blue dots above, we can simply add up the distance between consecutive values and . In symbols, we define (the variation of over the partition ) to be:
.
To define the variation of over the interval , we can imagine taking finer and finer partitions of . To do this, note that whenever we add more points to a partition, the total variation over that partition can only increase. Thus, we can think of the total variation of as the supremum of the total variation over all partitions. We denote the total variation of by and define it as:
.
Surprisingly, there exist continuous functions for which the total variation is infinite. Sample paths of Brownian motion are canonical examples of continuous functions with infinite total variation. Such functions would be very challenging runs.
Some Limitations
Total variation does a good job of measuring how wiggly a function is but it has some limitations when applied to course elevation. The biggest issue is that total variation treats inclines and declines symmetrically. A steep line sloping down increases the total variation by the same amount as a line with the same slope going upwards. This obviously isn’t true when running; an uphill is very different to a downhill.
To quantify how much a function wiggles upwards, we could use the same ideas but replace the absolute value with the positive part . This means that only the lines that slope upwards will count towards the wiggliness score. Lines that slope downwards get a wiggliness score of zero.
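A sketch of this upward-only variant (again my own illustration), reusing the toy elevation series from the earlier sketch:

# Keep only the positive elevation changes: the total amount of climbing.
total_climb <- function(elevation) sum(pmax(diff(elevation), 0))
total_climb(elevation)   # 40 for the toy series above: 30 m + 10 m of climbing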
Another limitation of total variation is that it measures total wiggliness across the whole domain rather than average wiggliness. This isn’t much of a problem when comparing runs of a similar length, but when comparing runs of different lengths, total variation can give surprising results. Below is a comparison between the Australian Alpine Ascent and the SF marathon:
The Australian Alpine Ascent is a 25 km run that goes up Australia’s tallest mountain. Despite the huge climbs during the Australian Alpine Ascent, the SF marathon has a higher total variation. Since the Australian Alpine Ascent was shorter, it gets a lower wiggliness score (1674 m vs 2140 m). For this comparison it would be better to divide each wiggliness score by the runs’ distance.
Summary
Despite these limitations, I still think that total variation is a useful metric for comparing two runs. It doesn’t tell you exactly how tough a run will be but if you already know the run’s distance and starting/finishing elevation, then the total variation helps you know what to expect.
Recently, my partner and I installed a clock in our home. The clock previously belonged to my grandparents and we have owned it for a while. We hadn’t put it up earlier because the original clock movement ticked and the sound would disrupt our small studio apartment. After much procrastinating, I bought a new clock movement, replaced the old one and proudly hung up our clock.
Our new clock. We still need to reattach the 5 and 10 which fell off when we moved.
When I first put on the clock hands I made the mistake of not putting them both on at exactly 12 o’clock. This meant that the minute and hour hands were not synchronised. The hands were in an impossible position. At times, the minute hand was at 12 and the hour hand was between 3 and 4. It took some time for me to register my mistake as at some times of the day it can be hard to tell that the hands are out of sync (how often do you look at a clock at 12:00 exactly?). Fortunately, I did notice the mistake and we have a correct clock. Now I can’t help noticing when others make the same mistake such as in this piece of clip art.
After fixing the clock, I was still thinking about how only some clock hand positions correspond to actual times. This led me to think “a clock is a one-dimensional subgroup of the torus”. Let me explain why.
The torus
The minute and hour hands on a clock can be thought of as two points on two different circles. For instance, if the time is 9:30, then the minute hand corresponds to a point at the very bottom of the circle and the hour hand corresponds to a point 15 degrees clockwise of the leftmost point of the circle. As a clock goes through a 12 hour cycle the minute-hand-point goes around the circle 12 times and the hour-hand-point goes around the circle once. This is shown below.
The blue point goes around its blue circle in time with the minute hand on the clock in the middle. The red point goes around its red circle in time with the hour hand.
If you take the collection of all pairs of points on a circle you get what mathematicians call a torus. The torus is a geometric shape that looks like the surface of a donut. The torus is defined as the Cartesian product of two circles. That is, a single point on the torus corresponds to two points on two different circles. A torus is plotted below.
The green surface above is a torus. The black lines aren’t a part of the torus, they are just there to help the visualisation.
To understand the torus, it’s helpful to consider a more familiar example, the 2-dimensional plane. If we have points and on two different lines, then we can produce the point in the two dimensional plane. Likewise, if we have a point and a point on two different circles, then we can produce a point on the torus. Both of these concepts are illustrated below. I have added two circles to the torus which are analogous to the x and y axes of the plane. The blue and red points on the blue and red circle produce the black point on the torus.
Mapping the clock to the torus
The points on the torus are in one-to-one correspondence with possible arrangements of the two clock hands. However, as I learnt putting up our clock, not all arrangements of clock hands correspond to an actual time. This means that only some points on the torus correspond to an actual time but how can we identify these points?
Keeping with our previous convention, let’s use the blue circle to represent the position of the minute hand and the red circle to represent the position of the hour hand. This means that the point where the two circles meet corresponds to 12 o’clock.
The point where the two circles meet corresponds to both hands pointing to 12, that is, 12 o’clock.
There are eleven other points on the red line that correspond to the other times when the minute hand is at 12. That is, there’s a point for 1 o’clock, 2 o’clock, 3 o’clock and so on. Once we add in those points, our torus looks like this:
Each black dot corresponds to when the minute hand is at 12. That is, the dots represent 12 o’clock, 1 o’clock, 2 o’clock and so on.
Finally, we have to join these points together. We know that when the hour hand moves from 12 to 1, the minute hand does one full rotation. This means that we have to join the black points by making one full rotation in the direction of the blue circle. The result is the black curve below that snakes around the torus.
Points on the black curve correspond to actual times on the clock.
The picture above should explain most of this blog’s title – “a clock is a one-dimensional subgroup of the torus”. We now know what the torus is and why certain points on the torus correspond to positions of the hands on a clock. We can see that these “clock points” correspond to a line that snakes around the torus. While the torus is a surface and hence two dimensional, the line is one-dimensional. The last missing part is the word “subgroup”. I won’t go into the details here but the torus has some extra structure that makes it something called a group. Our map from the clock to the torus interacts nicely with this structure and this makes the black line a “subgroup”.
Another perspective
While the above pictures of the torus are pretty, they can be a bit hard to understand and hard to draw. Mathematicians have another perspective of the torus that is often easier to work with. Imagine that you have a square sheet of rubber. If you rolled up the rubber and joined a pair of opposite sides, you would get a rubber tube. If you then bent the tube to join the opposite sides again, you would get a torus! The gif below illustrates this idea.
This means that we can simply view the torus as a square. We just have to remember that the opposite sides of the squares have been glued together. So like a game of snake on a phone, if you leave the top of the square, you come out at the same place on the bottom of the square. If we use this idea to redraw our torus it now looks like this:
A drawing of a flat torus. To make a donut shaped torus, the two red lines and then the two blue lines have to be glued together. As before, the blue line corresponds to the minute hand and the red line to the hour hand. When we glue the opposite sides of this square, the four corners all get glued together. This point is where the two circles intersect and corresponds to 12 o’clock.
As before we can draw in the other points when the minute hand is at 12. These points correspond to 1 o’clock, 2 o’clock, 3 o’clock…
Each black dot corresponds to a time when the minute hand is at 12. Remember that each dot on the top is actually the same point as the corresponding dot on the bottom. These opposite points get glued together when we turn the square into a torus.
Finally we can draw in all the other times on the clock. This is the result:
Points on the black line correspond to actual times on the clock. Although it looks like there are 12 different lines, there is actually only one line once we glue the opposite sides together.
One nice thing about this picture is that it can help us answer a classic riddle. In a 12-hour cycle, how many times are the minute and hour hands on top of each other? We can answer this riddle by adding a second line to the above square. The bottom-left to top-right diagonal is the collection of all hand positions where the two hands are on top of each other. Let’s add that line in green and add the points where this new line intersects the black line.
The green line is the collection of all hand positions when the two hands are pointing in the same direction. The black points are where the green and black lines intersect each other.
The points where the green and black lines intersect are hand positions where the clock hands are directly on top of each other and which correspond to actual times. Thus we can count that there are exactly 11 times when the hands are on top of each other in a 12-hour cycle. It might look like there are 12 such times but we have to remember that the corners of the square are all the same point on the torus.
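For a computational companion to the picture (my own sketch, not from the original post): writing t for the fraction of the 12-hour cycle that has elapsed, the minute hand has made 12t turns and the hour hand t turns, so the hands coincide exactly when 11t is an integer.

# The hands coincide at t = k/11 for k = 0, ..., 10: exactly 11 times per cycle.
hours_after_twelve <- 12 * (0:10) / 11
data.frame(
  hour   = ifelse(floor(hours_after_twelve) == 0, 12, floor(hours_after_twelve)),
  minute = round(60 * (hours_after_twelve %% 1), 1)
)   # 11 rows: 12:00.0, 1:05.5, 2:10.9, ..., 10:54.5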
Adding the second hand
So far I have ignored the second hand on the clock. If we included the second hand, we would have three points on three different circles. The corresponding geometric object is a 3-dimensional torus. The 3-dimensional torus is what you get when you take a cube and glue together the three pairs of opposite faces (don’t worry if you have trouble visualising such a shape!).
The points on the 3-dimensional torus which correspond to actual times will again be a line that wraps around the 3-dimensional torus. You could use this line to find out how many times the three hands are all on top of each other! Let me know if you work it out.
I hope that if you’re ever asked to define a clock, you’d at least consider saying “a clock is a one-dimensional subgroup of the torus” and you could even tell them which subgroup!
The singular value decomposition (SVD) is a powerful matrix decomposition. It is used all the time in statistics and numerical linear algebra. The SVD is at the heart of the principal component analysis, it demonstrates what’s going on in ridge regression and it is one way to construct the Moore-Penrose inverse of a matrix. For more SVD love, see the tweets below.
In this post I’ll define the SVD and prove that it always exists. At the end we’ll look at some pictures to better understand what’s going on.
Definition
Let be a matrix. We will define the singular value decomposition first in the case . The SVD consists of three matrices and such that . The matrix is required to be diagonal with non-negative diagonal entries . These numbers are called the singular values of . The matrices and are required to be orthogonal matrices so that , the identity matrix. Note that since is square we also have ; however, we won’t have unless .
In the case when , we can define the SVD of in terms of the SVD of . Let and be the SVD of so that . The SVD of is then given by transposing both sides of this equation giving and .
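As a quick illustration (not part of the original post), base R’s svd() computes a decomposition of this form, and the defining properties can be checked numerically:

set.seed(1)
A <- matrix(rnorm(5 * 3), nrow = 5, ncol = 3)   # a 5 x 3 matrix
s <- svd(A)
U <- s$u; D <- diag(s$d); V <- s$v

max(abs(A - U %*% D %*% t(V)))   # ~ 0: A equals U D V^T
max(abs(t(U) %*% U - diag(3)))   # ~ 0: the columns of U are orthonormal
max(abs(t(V) %*% V - diag(3)))   # ~ 0: V is orthogonal
s$d                              # the singular values, in decreasing order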
Construction
The SVD of a matrix can be found by iteratively solving an optimisation problem. We will first describe an iterative procedure that produces matrices and . We will then verify that and satisfy the defining properties of the SVD.
We will construct the matrices and one column at a time and we will construct the diagonal matrix one entry at a time. To construct the first columns and entries, recall that the matrix is really a linear function from to given by . We can thus define the operator norm of via
where represents the Euclidean norm of and is the Euclidean norm of . The set of vectors is a compact set and the function is continuous. Thus, the supremum used to define is achieved at some vector . Define . If , then define . If , then define to be an arbitrary vector in with . To summarise we have
with .
.
with and .
We have now started to fill in our SVD. The number is the first singular value of and the vectors and will be the first columns of the matrices and respectively.
Now suppose that we have found the first singular values and the first columns of and . If , then we are done. Otherwise we repeat a similar process.
Let and be the first columns of and . The vectors split into two subspaces. These subspaces are and , the orthogonal complement of . By restricting to we get a new linear map . Like before, the operator norm of is defined to be
.
Since we must have
The set is a compact set and thus there exists a vector such that . As before define and if . If , then define to be any vector in that is orthogonal to .
This process repeats until eventually and we have produced matrices and . In the next section, we will argue that these three matrices satisfy the properties of the SVD.
Correctness
The defining properties of the SVD were given at the start of this post. We will see that most of the properties follow immediately from the construction but one of them requires a bit more analysis. Let , and be the output from the above construction.
First note that by construction are orthogonal since we always had . It follows that the matrix is orthogonal and so .
The matrix is diagonal by construction. Furthermore, we have that for every . This is because both and were defined as the maximum value of over different subsets of . The subset for contained the subset for and thus .
We’ll next verify that . Since is orthogonal, the vectors form an orthonormal basis for . It thus suffices to check that for . Again by the orthogonality of we have that , the standard basis vector. Thus,
Above, we used that was a diagonal matrix and that is the column of . If , then by definition. If , then and so also. Thus, in either case, and so .
The last property we need to verify is that is orthogonal. Note that this isn’t obvious. At each stage of the process, we made sure that . However, in the case that , we simply defined . It is not clear why this would imply that is orthogonal to .
It turns out that a geometric argument is needed to show this. The idea is that if was not orthogonal to for some , then couldn’t have been the value that maximises .
Let and be two columns of with and . We wish to show that . To show this we will use the fact that and are orthonormal and perform “polar-interpolation“. That is, for , define
Since and are orthogonal, we have that
Furthermore is orthogonal to . Thus, by definition of ,
By the linearity of and the definitions of ,
.
Since and , we have
Rearranging and dividing by gives,
for all
Taking gives . Performing the same polar interpolation with shows that and hence .
We have thus proved that is orthogonal. This proof is pretty “slick” but it isn’t very illuminating. To better demonstrate the concept, I made an interactive Desmos graph that you can access here.
This graph shows example vectors . The vector is fixed at and a quarter circle of radius is drawn. Any vectors that are outside this circle have .
The vector can be moved around inside this quarter circle. This can be done either by clicking and dragging on the point or changing the values of and on the left. The red curve is the path of
.
As goes from to , the path travels from to .
Note that there is a portion of the red curve near that is outside the black circle. This corresponds to a small value of that results in contradicting the definition of . By moving the point around in the plot you can see that this always happens unless lies exactly on the y-axis. That is, unless is orthogonal to .
Like my previous post, this blog is also motivated by a comment by Professor Persi Diaconis in his recent Stanford probability seminar. The seminar was about a way of “collapsing” a random walk on a group to a random walk on the set of double cosets. In this post, I’ll first define double cosets and then go over the example Professor Diaconis used to make us probabilists and statisticians more comfortable with all the group theory he was discussing.
Double cosets
Let be a group and let and be two subgroups of . For each , the -double coset containing is defined to be the set
To simplify notation, we will simply write double coset instead of -double coset. The double coset of can also be defined as the equivalence class of under the relation
for some and
Like regular cosets, the above relation is indeed an equivalence relation. Thus, the group can be written as a disjoint union of double cosets. The set of all double cosets of is denoted by . That is,
Note that if we take , the trivial subgroup, then the double cosets are simply the left cosets of , . Likewise if , then the double cosets are the right cosets of , . Thus, double cosets generalise both left and right cosets.
Double cosets in
Fix a natural number . A partition of is a finite sequence such that , and . For each partition of , , we can form a subgroup of the symmetric group . The subgroup contains all permutations such that fixes the sets , meaning that for all . Thus, a permutation must individually permute the elements of , the elements of and so on. This means that, in a natural way,
If we have two partitions and , then we can form two subgroups and and consider the double cosets . The claim made in the seminar was that the double cosets are in one-to-one correspondence with contingency tables with row sums equal to and column sums equal to . Before we explain this correspondence and properly define contingency tables, let’s first consider the cases when either or is the trivial subgroup.
Left cosets in
Note that if , then is the trivial subgroup and, as noted above, is simply equal to . We will see that the cosets in can be described by forgetting something about the permutations in .
We can think of the permutations in as all the ways of drawing without replacement balls labelled . We can think of the partition as a colouring of the balls by colours. We colour balls by the first colour , then we colour the second colour and so on until we colour the final colour . Below is an example when is equal to 6 and .
The first three balls are coloured green, the next two are coloured red and the last ball is coloured blue.
Note that a permutation is in if and only if we draw the balls by colour groups, i.e. we first draw all the balls with colour , then we draw all the balls with colour and so on. Thus, continuing with the previous example, the permutation below is in but is not in .
The permutation is in because the colours are in their original order but is not in because the colours are rearranged.
It turns out that we can think of the cosets in as what happens when we “forget” the labels and only remember the colours of the balls. By “forgetting” the labels we mean only paying attention to the list of colours. That is for all , if and only if the list of colours from the draw is the same as the list of colours from the draw . Thus, the below two permutations define the same coset of
When we forget the labels and only remember the colours, the permutations and look the same and thus are in the same left coset of .
To see why this is true, note that if and only if for some . Furthermore, if and only if maps each colour group to itself. Recall that function composition is read right to left. Thus, the equation means that if we first relabel the balls according to and then draw the balls according to , then we get the same result as just drawing by . That is, for some if and only if drawing by is the same as first relabelling the balls within each colour group and then drawing the balls according to . Thus, , if and only if when we forget the labels of the balls and only look at the colours, and give the same list of colours. This is illustrated with our running example below.
If we permute the balls according to and the draw according to , then the resulting draw is the same as if we had not permuted and drawn according to . That is, .
Right cosets of
Typically, the subgroup is not a normal subgroup of . This means that the right coset will not equal the left coset . Thus, colouring the balls and forgetting the labelling won’t describe the right cosets . We’ll see that a different type of forgetting can be used to describe .
Fix a partition and now, instead of considering colours, think of different people . As before, a permutation can be thought of as drawing balls labelled without replacement. We can imagine giving the first balls drawn to person , then giving the next balls to the person and so on until we give the last balls to person . An example with and is drawn below.
Person receives the ball labelled by 6 followed by the ball labelled 3, person receives ball 2 and then ball 1 and finally person receives ball 4 followed by ball 5.
Note that if and only if person receives the balls with labels in any order. Thus, in the below example but .
When the balls are drawn according to , person receives the balls with labels and , and thus . On the other hand, if the balls are drawn according to , the people receive different balls and thus .
It turns out the cosets are exactly determined by “forgetting” the order in which each person received their balls and only remembering which balls they received. Thus, the two permutations below belong to the same coset in .
When we forget the order in which each person received their balls, the permutations and become the same and thus . Note that if we coloured the balls according to the permutation , then we could see that .
To see why this is true in general, consider two permutations . The permutations result in each person receiving the same balls if and only if after we can apply a permutation that fixes each subset and get . That is, and result in each person receiving the same balls if and only if for some . Thus, are the same after forgetting the order in which each person received their balls if and only if . This is illustrated below,
If we first draw the balls according to and then permute the balls according to , then the resulting draw is the same as if we had drawn according to and not permuted afterwards. That is, .
We can thus see why . A left coset corresponds to pre-composing with elements of and a right coset corresponds to post-composing with elements of .
Contingency tables
With the last two sections under our belts, describing the double cosets is straightforward. We simply have to combine our two types of forgetting. That is, we first colour the balls with colours according to . We then draw the balls without replacement and give the balls to different people according to . We then forget both the original labels and the order in which each person received their balls. That is, we only remember the number of balls of each colour each person receives. Describing the double cosets by “double forgetting” is illustrated below with and .
The permutations and both result in person receiving one green ball and one blue ball. The two permutations also result in and both receiving one green ball and one red ball. Thus, and are both in the same -double coset. Note however that and .
The proof that double forgetting does indeed describe the double cosets is simply a combination of the two arguments given above. After double forgetting, the number of balls given to each person can be recorded in an table. The entry of the table is simply the number of balls person receives of colour . Two permutations are the same after double forgetting if and only if they produce the same table. For example, and above both produce the following table
            Green   Red   Blue   Total
Person 1      1      0      1      2
Person 2      1      1      0      2
Person 3      1      1      0      2
Total         3      2      1      6
By the definition of how the balls are coloured and distributed to each person we must have for all and
and
An table with entries satisfying the above conditions is called a contingency table. Given such a contingency table with entries where the rows sum to and the columns sum to , there always exists at least one permutation such that is the number of balls received by person of colour . We have already seen that two permutations produce the same table if and only if they are in the same double coset. Thus, the double cosets are in one-to-one correspondence with such contingency tables.
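Here is a minimal R sketch of the “double forgetting” map from a permutation to its contingency table. It is my own illustration, not code from the original post; the function name, the colour and person labels, and the particular permutation are all illustrative choices that happen to reproduce the table above.

# perm[i] is the label of the ball drawn in position i, colour_of_ball gives
# each ball's colour and person_of_position says which person receives each
# draw position. Double forgetting keeps only the counts of each colour per person.
double_coset_table <- function(perm, colour_of_ball, person_of_position) {
  table(person = person_of_position, colour = colour_of_ball[perm])
}

colour_of_ball     <- c("green", "green", "green", "red", "red", "blue")     # colours (3, 2, 1)
person_of_position <- rep(c("person 1", "person 2", "person 3"), each = 2)   # people (2, 2, 2)

# A permutation matching the table above: person 1 gets balls 1 and 6,
# person 2 gets balls 2 and 4, person 3 gets balls 3 and 5.
double_coset_table(c(1, 6, 2, 4, 3, 5), colour_of_ball, person_of_position)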
The hypergeometric distribution
I would like to end this blog post with a little bit of probability and relate the contingency tables above to the hypergeometric distribution. If for some , then the contingency tables described above have two rows and are determined by the values in the first row. The numbers are the number of balls of colour the first person receives. Since the balls are drawn without replacement, this means that if we put the uniform distribution on , then the vector follows the multivariate hypergeometric distribution. Thus, if we have a random walk on that quickly converges to the uniform distribution on , then we could use the double cosets to get a random walk that converges to the multivariate hypergeometric distribution (although there are smarter ways to do such sampling).
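As a small simulation sketch (my own, reusing the running example with colours (3, 2, 1) and taking the first person to receive the first two draws), the colour counts under a uniformly random permutation do indeed have the multivariate hypergeometric means:

set.seed(1)
colour_of_ball <- c("green", "green", "green", "red", "red", "blue")
counts <- replicate(10000, {
  perm <- sample(6)                                   # a uniform random permutation
  first_two <- colour_of_ball[perm[1:2]]              # colours the first person receives
  table(factor(first_two, levels = c("green", "red", "blue")))
})
rowMeans(counts)   # close to the hypergeometric means 2 * c(3, 2, 1) / 6 = (1, 0.667, 0.333)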
Something very exciting happened this afternoon. Professor Persi Diaconis was presenting at the Stanford probability seminar and the field with one element made an appearance. The talk was about joint work with Mackenzie Simper and Arun Ram. They had developed a way of “collapsing” a random walk on a group to a random walk on the set of double cosets. As an illustrative example, Persi discussed a random walk on given by multiplication by a random transvection (a map of the form , where ).
The Bruhat decomposition can be used to match double cosets of with elements of the symmetric group . So by collapsing the random walk on we get a random walk on for all prime powers . As Professor Diaconis said, you can’t stop him from taking and asking what the resulting random walk on is. The answer? Multiplication by a random transposition. As pointed sets are vector spaces over the field with one element and the symmetric groups are the matrix groups, this all fits with what’s expected of the field with one element.
This was just one small part of a very enjoyable seminar. There was plenty of group theory, probability, some general theory and engaging examples.
Update: I have written another post about some of the group theory from the seminar! You can read it here: Double cosets and contingency tables.