## A not so simple conditional expectation

It is winter 2022 and my PhD cohort has moved on to the second quarter of our first-year statistics courses. This means we’ll be learning about generalised linear models in our applied course, asymptotic statistics in our theory course, and conditional expectations and martingales in our probability course.

In the first week of our probability course we’ve been busy defining and proving the existence of the conditional expectation. Our approach has been similar to how we constructed the Lebesgue integral in the previous course. Last quarter, we first defined the Lebesgue integral for simple functions, then used a limiting argument to extend the definition to non-negative functions, and finally defined the Lebesgue integral for arbitrary functions by considering their positive and negative parts.

Our approach to the conditional expectation has been similar, but the journey has been different. We again started with simple random variables, then progressed to non-negative random variables, and finally proved the existence of the conditional expectation of an arbitrary integrable random variable. Unlike for the Lebesgue integral, the hardest step was proving the existence of the conditional expectation of a simple random variable. Progressing from simple random variables to arbitrary random variables was a straightforward application of the monotone convergence theorem and linearity of expectation. But to prove the existence of the conditional expectation of a simple random variable we needed to work with projections in the Hilbert space $L^2(\Omega, \mathbb{P}, \mathcal{F})$.

Unlike the Lebesgue integral of a simple function, the conditional expectation of a simple random variable is not straightforward to define. One reason for this is that the conditional expectation of a simple random variable need not itself be a simple random variable. This comment was made offhand by our professor and sparked my curiosity. The following example is what I came up with. Below I first go over some definitions and then we dive into the example.

## A simple random variable with a conditional expectation that is not simple

Let $(\Omega, \mathbb{P}, \mathcal{F})$ be a probability space and let $\mathcal{G} \subseteq \mathcal{F}$ be a sub-$\sigma$-algebra. The conditional expectation of an integrable random variable $X$ is a random variable $\mathbb{E}(X|\mathcal{G})$ that satisfies the following two conditions:

1. The random variable $\mathbb{E}(X|\mathcal{G})$ is $\mathcal{G}$-measurable.
2. For all $B \in \mathcal{G}$, $\mathbb{E}[X1_B] = \mathbb{E}[\mathbb{E}(X|\mathcal{G})1_B]$, where $1_B$ is the indicator function of $B$.

The conditional expectation of an integrable random variable always exists and is unique up to almost sure equality. One can think of $\mathbb{E}(X|\mathcal{G})$ as the expected value of $X$ given the information in $\mathcal{G}$.

A simple random variable is a random variable $X$ that takes only finitely many values. Simple random variables are always integrable and so $\mathbb{E}(X|\mathcal{G})$ always exists, but we will see that $\mathbb{E}(X|\mathcal{G})$ need not be simple.

Consider a random vector $(U,V)$ uniformly distributed on the square $[-1,1]^2 \subseteq \mathbb{R}^2$. Let $D$ be the unit disc $D = \{(u,v) \in \mathbb{R}^2:u^2+v^2 \le 1\}$. The random variable $X = 1_D(U,V)$ is a simple random variable since $X$ equals $1$ if $(U,V) \in D$ and $X$ equals $0$ otherwise. Let $\mathcal{G} = \sigma(U)$ be the $\sigma$-algebra generated by $U$. It turns out that

$\mathbb{E}(X|\mathcal{G}) = \sqrt{1-U^2}$.

Thus $\mathbb{E}(X|\mathcal{G})$ is not a simple random variable. Let $Y = \sqrt{1-U^2}$. Since $Y$ is a continuous function of $U$, the random variable $Y$ is $\mathcal{G}$-measurable and so $Y$ satisfies condition 1. Furthermore, if $B \in \mathcal{G}$, then $B = \{U \in A\}$ for some measurable set $A\subseteq [-1,1]$. Thus $X1_B$ equals $1$ if and only if $U \in A$ and $V \in [-\sqrt{1-U^2}, \sqrt{1-U^2}]$. Since $(U,V)$ is uniformly distributed we thus have

$\mathbb{E}[X1_B] = \int_A \int_{-\sqrt{1-u^2}}^{\sqrt{1-u^2}} \frac{1}{4}\,dv\,du = \int_A \frac{1}{2}\sqrt{1-u^2}\,du$.
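As a quick numerical sanity check on this formula (purely illustrative, with the arbitrary choice $A = [0, 1/2]$), we can compare a Monte Carlo estimate of $\mathbb{E}[X1_B]$ against the integral $\int_A \frac{1}{2}\sqrt{1-u^2}\,du$ evaluated in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# (U, V) uniform on the square [-1, 1]^2 and X = 1_D(U, V)
U = rng.uniform(-1, 1, n)
V = rng.uniform(-1, 1, n)
X = (U**2 + V**2 <= 1).astype(float)

# B = {U in A} with A = [0, 1/2] (an arbitrary choice for illustration)
mc = (X * ((U >= 0.0) & (U <= 0.5))).mean()

# closed form, using int sqrt(1-u^2) du = (u*sqrt(1-u^2) + arcsin(u)) / 2
exact = 0.5 * (0.5 * np.sqrt(1 - 0.25) + np.arcsin(0.5)) / 2
print(mc, exact)  # the two agree to Monte Carlo accuracy
```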

The random variable $U$ is uniformly distributed on $[-1,1]$ and thus has density $\frac{1}{2}1_{[-1,1]}$. Therefore,

$\mathbb{E}[Y1_B] = \mathbb{E}[\sqrt{1-U^2}1_{\{U \in A\}}] = \int_A \frac{1}{2}\sqrt{1-u^2}\,du$.

Thus $\mathbb{E}[X1_B] = \mathbb{E}[Y1_B]$ and therefore $Y = \sqrt{1-U^2}$ equals $\mathbb{E}(X|\mathcal{G})$. Intuitively we can see this because given $U=u$, we know that $X$ is $1$ when $V \in [-\sqrt{1-u^2},\sqrt{1-u^2}]$ and that $X$ is $0$ otherwise. Since $V$ is uniformly distributed on $[-1,1]$ with density $\frac{1}{2}$, the probability that $V$ lies in $[-\sqrt{1-u^2},\sqrt{1-u^2}]$ is $\frac{1}{2} \cdot 2\sqrt{1-u^2} = \sqrt{1-u^2}$. Thus given $U=u$, the expected value of $X$ is $\sqrt{1-u^2}$.
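A small simulation makes this intuition concrete (a sketch of my own, not part of the proof): sample $(U,V)$, bin the samples by the value of $U$, and compare the average of $X$ within each bin to $\sqrt{1-u^2}$ at the bin centre.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# (U, V) uniform on [-1, 1]^2 and X = 1_D(U, V)
U = rng.uniform(-1, 1, n)
V = rng.uniform(-1, 1, n)
X = (U**2 + V**2 <= 1).astype(float)

# estimate E(X | U = u) by averaging X over narrow bins of U
edges = np.linspace(-1, 1, 41)   # 40 bins of width 0.05
idx = np.digitize(U, edges) - 1
for b in [4, 12, 20, 28, 36]:
    centre = (edges[b] + edges[b + 1]) / 2
    est = X[idx == b].mean()
    print(f"u ~ {centre:+.3f}: binned mean {est:.3f}, sqrt(1-u^2) = {np.sqrt(1 - centre**2):.3f}")
```

The binned means track $\sqrt{1-u^2}$ across the range of $u$, which is exactly the claim that $\mathbb{E}(X|\mathcal{G}) = \sqrt{1-U^2}$.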

## An extension

The previous example suggests an extension that shows just how “complicated” the conditional expectation of a simple random variable can be. I’ll state the extension as an exercise:

Let $f:[-1,1]\to \mathbb{R}$ be any continuous function with $f(x) \in [0,1]$ for all $x$. With $(U,V)$ and $\mathcal{G}$ as above, show that there exists a measurable set $A \subseteq [-1,1]^2$ such that $\mathbb{E}(1_A(U,V)|\mathcal{G}) = f(U)$.
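One way to build intuition before attempting a proof is to test a candidate set numerically. Below I pick the concrete choice $f(u) = u^2$ (my own example, not part of the exercise) and a natural candidate region between the graphs of $-f$ and $f$, then check by simulation that the binned average of $1_A(U,V)$ given $U \approx u$ tracks $f(u)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
U = rng.uniform(-1, 1, n)
V = rng.uniform(-1, 1, n)

def f(u):
    return u**2  # a concrete continuous f with values in [0, 1], my choice

# candidate set: A = {(u, v) : |v| <= f(u)}, the region between -f and f
inA = (np.abs(V) <= f(U)).astype(float)

# estimate E(1_A | U = u) by binning on U, as in the disc example
edges = np.linspace(-1, 1, 41)
idx = np.digitize(U, edges) - 1
for b in [4, 20, 36]:
    centre = (edges[b] + edges[b + 1]) / 2
    print(f"u ~ {centre:+.3f}: binned mean {inA[idx == b].mean():.3f}, f(u) = {f(centre):.3f}")
```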