Sunday 27 January 2013

Today I learned: A Pain in the Anal-ysis

I'm currently watching the Aussie Open men's final between Djokovic and Murray, and it's shaping up to be a reasonably long match (I may have written this when they were still at 6-7, 7-6). Speaking of long matches, I hear that the longest women's match ran for about six and a half hours and featured a single 643-shot rally that lasted roughly half an hour! And they only played two sets.

So let's take our minds off such painful, laborious sports and do something relaxing, like analysis. Yes. Analysis.

I feel that most pure mathematicians that I know have very little tolerance or appreciation for analysis, and I am certainly no hipster-analysis-lover. But I've been looking at Osgood, Phillips and Sarnak's paper on finding the (unique) nicest metric compatible with the complex structure of a Riemann surface \(R\). To be bluntly honest, I haven't been exposed to very much functional analysis, and the idea here is sorta cool. Let's try to really crudely summarise the schematic of what I suspect is a generic strategy in this area:
  1. Take a big vector space \(W\) of functions, like a Sobolev space, and define a functional \(F:W\rightarrow\mathbb{R}\) on it. Often the functional measures some sort of energy of these functions, which are typically defined on a Riemannian manifold.
  2. Show that \(F:W\rightarrow\mathbb{R}\) is strictly convex (respectively concave); this means that the global minimum (maximum) of \(F\), if it is attained, is unique.
  3. Plonk \(W\) in a bigger space \(H\) - often an \(L^p\) space. Something like the Rellich-Kondrachov theorem is then used to show that a sequence of functions in \(W\) whose energies tend to the infimum of \(F\) has a subsequence converging to some \(\varphi\in H\).
  4. Use stuff like the elliptic regularity theorem to show that \(\varphi\) is a lot smoother than your average function in \(H\), is in fact in \(W\), and is hence unique by point 2.
Well, that's sorta my take on what appears to be important. I'll probably realise at some point that I'm pretty wrong.
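To make the schematic a little more concrete, here's roughly how I picture it playing out for the bog-standard Dirichlet problem - this is not the Osgood-Phillips-Sarnak functional, just the simplest instance of the pattern I can think of, and I'm sweeping plenty of technicalities under the rug. Take a nice bounded connected domain \(\Omega\subset\mathbb{R}^n\), fix boundary data by working over \(W=\{u\in H^1(\Omega):\,u-g\in H_0^1(\Omega)\}\) for some fixed \(g\), and consider the energy \begin{align}F(u)=\int_{\Omega}|\nabla u|^2\,\mathrm{d}V.\end{align} This is strictly convex on \(W\): two elements of \(W\) with the same gradient differ by a constant lying in \(H_0^1(\Omega)\), hence are equal. A minimising sequence is bounded in \(H^1(\Omega)\) (via Poincaré), so Rellich-Kondrachov hands us a subsequence converging in \(L^2(\Omega)\) to some \(\varphi\), and after some lower-semicontinuity fiddling one argues that \(\varphi\) is a weak solution of Laplace's equation. Elliptic regularity - here just Weyl's lemma - then says that \(\varphi\) is smooth, sits back inside \(W\), and is the unique minimiser by strict convexity.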

During the process of going through and verifying each step of this proof there were two things that I couldn't get my head around. The first I'm still paranoid about, and I'll probably ask about it on MathOverflow in a day or two; it has to do with whether elliptic regularity holds for sufficiently nice nonlinear elliptic operators. The second had to do with why a strongly convergent sequence of measurable functions in \(L^p(\Omega)\), for a domain \(\Omega\subset\mathbb{R}^n\) with volume measure \(\mathrm{d}V\), always has a subsequence that converges pointwise almost everywhere (a.e.) to a measurable function.

Let's prove this jazz.

So we've got a sequence of measurable real functions \(f_n:\Omega\rightarrow\mathbb{R}\) in \(L^p(\Omega)\) and we want to show that there's a subsequence that converges pointwise a.e. Let's begin by figuring out what pointwise convergence actually means.
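Actually, before that, it's worth remembering why we can only hope for a subsequence: the full sequence can converge strongly without converging pointwise at a single point. The usual example (the "typewriter" sequence, assuming I've set the indices up correctly) lives on \(\Omega=[0,1]\): for \(0\leq j<2^k\) take the indicator functions \begin{align}f_{2^k+j}=\chi_{[j2^{-k},(j+1)2^{-k}]}.\end{align} Then \(\|f_{2^k+j}\|_p^p=2^{-k}\rightarrow0\), so the sequence converges strongly to \(0\) in \(L^p([0,1])\), but every \(x\in[0,1]\) lands inside infinitely many of these little intervals and outside infinitely many others, so \(f_n(x)\) converges at no point whatsoever. The subsequence \(f_{2^k}=\chi_{[0,2^{-k}]}\), on the other hand, converges to \(0\) at every \(x>0\), i.e. pointwise a.e.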

In order for the sequence \(\{f_n\}\) to converge to some function \(f\), it needs to get arbitrarily close to \(f\) as \(n\) blows up. That is: for any \(x\in\Omega\) and any \(\epsilon>0\), \begin{align}\limsup_{n\rightarrow\infty}|f_n(x)-f(x)|\ngtr\epsilon. \end{align} And since we're dealing with a.e. pointwise convergence, let's just turn this into a measure theoretic statement: for any \(\epsilon>0\), \begin{align}\mathrm{Vol}(\{x\in\Omega:\,\limsup_{n\rightarrow\infty}|f_n(x)-f(x)|>\epsilon\})=0.\end{align}
Okay, good, we've heuristically justified why this condition is equivalent to pointwise a.e. convergence.
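For the pedantic (myself included), here's the bookkeeping I have in mind behind that "heuristically": the set of points where \(f_n(x)\) fails to converge to \(f(x)\) is \begin{align}\{x\in\Omega:\,\limsup_{n\rightarrow\infty}|f_n(x)-f(x)|>0\}=\bigcup_{m=1}^{\infty}\left\{x\in\Omega:\,\limsup_{n\rightarrow\infty}|f_n(x)-f(x)|>\tfrac{1}{m}\right\},\end{align} and a countable union of null sets is null. So demanding the volume condition for every \(\epsilon>0\) (it's enough to take \(\epsilon=1/m\)) is exactly the same as demanding that \(f_n\rightarrow f\) pointwise a.e.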

Now, just by unpacking what \(\limsup\) is actually doing: if \(\limsup_{n\rightarrow\infty}|f_n(x)-f(x)|>\epsilon\), then \(|f_k(x)-f(x)|>\epsilon\) for infinitely many \(k\), and so \(x\) lies in every one of the tail unions \(\bigcup_{k=n}^{\infty}\{x:\,|f_k(x)-f(x)|>\epsilon\}\). This gives us, for every \(n\), \begin{align}\mathrm{Vol}(\{x\in\Omega:\,\limsup_{m\rightarrow\infty}|f_m(x)-f(x)|>\epsilon\})&\leq\mathrm{Vol}\left(\bigcup_{k=n}^{\infty}\{x:\,|f_k(x)-f(x)|>\epsilon\}\right)\\&\leq\sum_{k=n}^{\infty}\mathrm{Vol}(\{x:\,|f_k(x)-f(x)|>\epsilon\}),\end{align} the second inequality being countable subadditivity. So if we can show that these tail sums tend to \(0\) as \(n\rightarrow\infty\) then we're done. But of course, one way of showing that the tail of a series converges to \(0\) is to show that the series converges.

Now, if we knew that \(\mathrm{Vol}(\{x:\,|f_k(x)-f(x)|>\epsilon\})\) tended to \(0\) as \(k\rightarrow\infty\), we could easily take a subsequence \(\{f_{n_m}\}\) so that \begin{align}\mathrm{Vol}(\{x:\,|f_{n_m}(x)-f(x)|>\epsilon\})<2^{-m},\end{align} and since these volumes form a convergent series, the argument above says that this subsequence converges pointwise a.e. to \(f\) - which is precisely the type of conclusion that we'd like (well, almost: the subsequence we've chosen depends on \(\epsilon\), but I'll patch that up at the end). So everything boils down to using strong convergence to get that \begin{align}\mathrm{Vol}(\{x:\,|f_k(x)-f(x)|>\epsilon\})\rightarrow0\text{ as }k\rightarrow\infty.\end{align} This, however, is just a Chebyshev-type estimate:\begin{align}\mathrm{Vol}(\{x:\,|f_k(x)-f(x)|>\epsilon\})&=\mathrm{Vol}(\{x:\,\epsilon^{-1}|f_k(x)-f(x)|>1\})\\&=\int_{\{x:\,\epsilon^{-1}|f_k(x)-f(x)|>1\}}\mathrm{d}V\\&\leq\int_{\{x:\,\epsilon^{-1}|f_k(x)-f(x)|>1\}}\epsilon^{-p}|f_k(x)-f(x)|^p\,\mathrm{d}V\\&\leq\int_{\Omega}\epsilon^{-p}|f_k(x)-f(x)|^p\,\mathrm{d}V\\&=\epsilon^{-p}\|f_k-f\|_p^p.\end{align}Since strong convergence is defined to mean that \(\|f_k-f\|_p\) tends to \(0\) and \(\epsilon\) is just some fixed constant, this proves our claim. Oh, and I should probably point out that \(\|\cdot\|_p\) is notation for the norm on \(L^p(\Omega)\).
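And here's the promised patch for the wrinkle above: the subsequence constructed there depends on the fixed \(\epsilon\), whereas we want a single subsequence that does the job for every \(\epsilon\) simultaneously. The easy fix, as far as I can tell, is to pick the subsequence using norms instead of volumes: choose \(n_m\) with \begin{align}\|f_{n_m}-f\|_p<2^{-m}.\end{align} Then for any \(\epsilon>0\) the Chebyshev-type estimate above gives \begin{align}\sum_{m=1}^{\infty}\mathrm{Vol}(\{x:\,|f_{n_m}(x)-f(x)|>\epsilon\})\leq\epsilon^{-p}\sum_{m=1}^{\infty}2^{-mp}<\infty,\end{align} so the tail-sum argument runs for every \(\epsilon\) at once, and this one subsequence converges to \(f\) pointwise a.e.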

I really ought to have finished writing this post before the match ended. Oh well. =)
 
