The $p$-norm of a vector $x \in \mathbb{R}^n$ is defined to be:

$$\|x\|_p = \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}.$$
If $p \geq 1$, then the $p$-norm is convex. When $0 < p < 1$, this function is not convex, and is actually concave when all the entries of $x$ are non-negative. On a recent exam for the course Convex Optimization, we were asked to prove this when $p = 1/2$. In this special case, the mathematics simplifies nicely.
When $p = 1/2$, the $p$-norm can be described as the squared sum of square roots. Specifically,

$$\|x\|_{1/2} = \left( \sum_{i=1}^n \sqrt{|x_i|} \right)^2.$$
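As a quick numerical sanity check (my own sketch, not from the original post; the helper `p_norm` is a name I've introduced), the general $p$-norm formula with $p = 1/2$ agrees with the squared-sum-of-square-roots form:

```python
def p_norm(x, p):
    # (sum_i |x_i|^p)^(1/p); a true norm only when p >= 1
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

x = [4.0, 9.0, 25.0]
lhs = p_norm(x, 0.5)                        # general formula with p = 1/2
rhs = sum(abs(xi) ** 0.5 for xi in x) ** 2  # squared sum of square roots
assert abs(lhs - rhs) < 1e-9                # both equal (2 + 3 + 5)^2 = 100
```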
Note that we can expand the square and rewrite the function as follows:

$$\|x\|_{1/2} = \sum_{i=1}^n \sum_{j=1}^n \sqrt{|x_i|} \sqrt{|x_j|} = \sum_{i,j=1}^n \sqrt{|x_i x_j|}.$$
If we restrict to $x$ with $x_i \geq 0$ for all $i$, then this function simplifies to

$$\|x\|_{1/2} = \sum_{i,j=1}^n \sqrt{x_i x_j},$$
which is a sum of geometric means. The geometric mean function $g(a, b) = \sqrt{ab}$ is concave when $a, b \geq 0$. This can be proved by calculating the Hessian of this function and verifying that it is negative semi-definite.
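For the two-variable geometric mean $g(a, b) = \sqrt{ab}$ with $a, b > 0$, this calculation is short (my own sketch of the verification described above). Differentiating twice gives

$$\nabla^2 g(a, b) = \frac{1}{4\sqrt{ab}} \begin{pmatrix} -b/a & 1 \\ 1 & -a/b \end{pmatrix},$$

and for any $v = (v_1, v_2)$,

$$v^T \nabla^2 g(a, b) \, v = -\frac{1}{4\sqrt{ab}} \left( \sqrt{b/a} \, v_1 - \sqrt{a/b} \, v_2 \right)^2 \leq 0,$$

so the Hessian is indeed negative semi-definite.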
From this, we can conclude that each function $h_{ij}(x) = \sqrt{x_i x_j}$ is also concave. This is because $h_{ij}$ is a linear function (the map $x \mapsto (x_i, x_j)$) followed by a concave function ($g$), and composing a concave function with a linear map preserves concavity. Finally, any sum of concave functions is also concave, and thus $\|x\|_{1/2}$ is concave on the non-negative orthant.
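This conclusion can be spot-checked numerically (a sketch of my own; the helper `half_norm` is not from the post) by testing the midpoint concavity inequality on random non-negative vectors:

```python
import random

def half_norm(x):
    # ||x||_{1/2} = (sum_i sqrt(x_i))^2, assuming all x_i >= 0
    return sum(xi ** 0.5 for xi in x) ** 2

random.seed(0)
for _ in range(1000):
    x = [random.random() for _ in range(5)]
    y = [random.random() for _ in range(5)]
    mid = [(a + b) / 2 for a, b in zip(x, y)]
    # concavity at the midpoint: f((x + y)/2) >= (f(x) + f(y))/2
    assert half_norm(mid) >= (half_norm(x) + half_norm(y)) / 2 - 1e-9
```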
A similar argument can be used to show that the squared Hellinger distance is a convex function. The Hellinger distance $H(P, Q)$ is defined on pairs of probability distributions $P$ and $Q$ on a common set $\mathcal{X}$. For such a pair,

$$H^2(P, Q) = \frac{1}{2} \sum_{x \in \mathcal{X}} \left( \sqrt{P(x)} - \sqrt{Q(x)} \right)^2,$$
which certainly doesn't look convex. However, we can expand the square and use the fact that $P$ and $Q$ are probability distributions, so $\sum_x P(x) = \sum_x Q(x) = 1$. This shows us that the squared Hellinger distance can be written as

$$H^2(P, Q) = 1 - \sum_{x \in \mathcal{X}} \sqrt{P(x) Q(x)}.$$
Again, each function $(P, Q) \mapsto \sqrt{P(x) Q(x)}$ is concave (it is a geometric mean), and so the negative sum of such functions is convex. Adding the constant $1$ changes nothing, and thus $H^2(P, Q)$ is convex.
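The same kind of midpoint check works here (my own sketch; `hellinger_sq` and `rand_dist` are names I've introduced, and I use the convention with the $\frac{1}{2}$ factor, so $H^2 = 1 - \sum_x \sqrt{P(x)Q(x)}$). Note that a midpoint of two probability distributions is again a probability distribution, so the convexity inequality makes sense jointly in the pair $(P, Q)$:

```python
import random

def hellinger_sq(p, q):
    # squared Hellinger distance: 1 - sum_x sqrt(p(x) q(x))
    return 1.0 - sum((pi * qi) ** 0.5 for pi, qi in zip(p, q))

def rand_dist(n):
    # a random probability distribution on n points
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [wi / s for wi in w]

random.seed(1)
for _ in range(1000):
    p1, q1 = rand_dist(4), rand_dist(4)
    p2, q2 = rand_dist(4), rand_dist(4)
    pm = [(a + b) / 2 for a, b in zip(p1, p2)]
    qm = [(a + b) / 2 for a, b in zip(q1, q2)]
    # convexity at the midpoint, jointly in the pair (P, Q)
    avg = (hellinger_sq(p1, q1) + hellinger_sq(p2, q2)) / 2
    assert hellinger_sq(pm, qm) <= avg + 1e-9
```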
As a final comment, I'd just like to say how much I am enjoying the class. Prof. Stephen Boyd is a great lecturer and we've seen a wide variety of applications in the class. I've recently been reading a bit of John D. Cook's blog and agree with all he says about the course here.