The hunt for a Bellman Function.

This is a beautiful and powerful mathematical technique in Harmonic Analysis that allows one, among other things, to prove very complicated inequalities in the theory of Singular Integral Operators without using much of the classical machinery of the field.

The Bellman function was the tool that allowed its creators (Fedor Nazarov and Sergei Treil) to crack the problem of weighted norm inequalities with matrix weights in the case \( \boldsymbol{p} \neq \boldsymbol{2} \), finally solving it completely.

Copies of the original paper can be found on the authors’ pages; e.g. [www.math.brown.edu/~treil/papers/bellman/bell3.ps] (note that the PostScript file is large, as the article runs to more than 100 pages).

Let me illustrate the use of Bellman functions to solve a simple problem:

Dyadic-\( \boldsymbol{L}_\mathbf{2}(\mathbb{R}) \) version of the Carleson Imbedding Theorem

Let \( \mathcal{D} \) be the set of all dyadic intervals of the real line. Given a function \( f \in L_1^{\text{loc}}(\mathbb{R}) \), consider the averages \( \langle f \rangle_I = \lvert I\rvert^{-1} \int_I f \), on each dyadic interval \( I \in \mathcal{D} \).

Let \( \{ \mu_I \geq 0 \colon I \in \mathcal{D} \} \) be a family of non-negative real values satisfying the Carleson measure condition: for any dyadic interval \( I \in \mathcal{D} \), \( \sum_{J \subset I, J~\text{dyadic}} \mu_J \leq \lvert I \rvert. \)

Then, there is a constant \( C > 0 \) such that for any \( f \in L_2(\mathbb{R}) \),

\begin{equation*} \displaystyle{\sum_{ I \in \mathcal{D} } \mu_I \lvert \langle f \rangle_{I} \rvert^2 \leq C \lVert f \rVert_{L_2(\mathbb{R})}^2} \end{equation*}
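Before hunting for the Bellman function, we can convince ourselves numerically that the inequality is plausible. The sketch below (not part of the proof) restricts to functions supported on \( [0,1) \) and to a finite dyadic tree of depth 10; the recursive "budget" scheme for generating a random Carleson family is my own construction, chosen so that every dyadic subtree satisfies \( \sum_{K \subset J} \mu_K \leq \lvert J \rvert \) by induction.

```python
import numpy as np

# Sanity check of the dyadic Carleson Imbedding Theorem on [0, 1):
# sum_J mu_J <f>_J^2  should stay below  4 * ||f||_2^2.
rng = np.random.default_rng(0)
N = 10                                  # finest dyadic level
leaves = rng.normal(size=2 ** N)        # f is constant on each leaf interval

# Build mu_J top-down: each interval J receives a "budget" b <= |J| that
# bounds the total mass of its dyadic subtree; it spends a random fraction
# of b on mu_J itself and passes at most half of the remainder to each of
# its two halves.  By induction, sum_{K subset J} mu_K <= |J| for every J.
budgets = np.array([1.0])               # budget of the root [0, 1)
lhs = 0.0
for level in range(N + 1):
    length = 2.0 ** (-level)
    averages = leaves.reshape(2 ** level, -1).mean(axis=1)   # <f>_J
    mu = rng.uniform(0.0, 1.0, size=budgets.size) * budgets
    lhs += np.sum(mu * averages ** 2)
    child = np.minimum(length / 2, (budgets - mu) / 2)
    budgets = np.repeat(child, 2)       # budgets for the next level

norm2 = np.mean(leaves ** 2)            # ||f||_2^2, each leaf has width 2^-N
print(lhs, "<=", 4 * norm2)
assert lhs <= 4 * norm2
```

Running this with other seeds and depths leaves a comfortable margin below the constant \( C = 4 \) that the Bellman-function proof below produces.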

Fix a dyadic interval \( I \in \mathcal{D} \), and a vector \( (x_1, x_2, x_3) \in \mathbb{R}^3 \). Consider all families \( \{ \mu_J \colon J \in \mathcal{D} \} \) satisfying the Carleson condition

\begin{equation*} \frac{1}{\lvert J \rvert} \displaystyle{\sum_{K \subset J}} \mu_{K} \leq 1, \text{ for all }J \in \mathcal{D} \end{equation*}

and such that

\begin{equation} \frac{1}{\lvert I \rvert} \sum_{J \subset I} \mu_J = x_1 \label{(eq1)} \end{equation}

Also, consider all functions \( f \in L_2(\mathbb{R}) \) for which the following quantities are fixed:

\begin{align} \langle f^2 \rangle_I &= \frac{1}{\lvert I \rvert} \int_I f^2 = x_2, &\langle f \rangle_I &= \frac{1}{\lvert I \rvert} \int_I f = x_3 \label{(eq2)} \end{align}

If we believe that the Theorem is true, then the quantity

\begin{equation*} \displaystyle{\mathcal{B}(x_1,x_2,x_3)=\frac{1}{\lvert I \rvert} \sup \bigg\{ \sum_{J \subset I} \mu_J \langle f \rangle^2_J \colon f, \{ \mu_I \} \text{ satisfy }\eqref{(eq1)}, \eqref{(eq2)} \bigg\}} \end{equation*}

is finite and, moreover, satisfies the inequality \( \mathcal{B}(x_1,x_2,x_3) \leq C x_2 \).

Since \( \mathcal{B}(x_1,x_2,x_3) \) does not depend on the choice of an interval \( I \in \mathcal{D} \), we obtain a function of three real variables; this is the Bellman function associated with the Carleson Imbedding Theorem.

Notice that:

  1. The domain of \( \mathcal{B} \) is the set \( \{ (x_1, x_2, x_3) \in \mathbb{R}^3 \colon 0 \leq x_1 \leq 1,\ x_3^2 \leq x_2 \} \) (the first constraint comes from the Carleson condition, the second from the Cauchy–Schwarz inequality).
  2. For each \( (x_1,x_2,x_3) \) in the domain of \( \mathcal{B} \), we have \( 0 \leq \mathcal{B}(x_1, x_2, x_3) \leq C x_2. \)
  3. If \( 0 \leq \lambda \leq x_1 \), then

    \begin{equation*} \mathcal{B}(x_1, x_2, x_3)\geq \lambda x_3^2 + \frac{1}{2} \big\{ \mathcal{B}(x_1^+, x_2^+, x_3^+) + \mathcal{B}(x_1^-, x_2^-, x_3^-)\big\} \end{equation*}

    whenever the triples \( (x_1,x_2,x_3) \), \( (x_1^+,x_2^+,x_3^+) \) and \( (x_1^-,x_2^-,x_3^-) \) belong to the domain and

    • \( x_1 = (x_1^+ + x_1^-)/2 + \lambda \),
    • \( x_2 = (x_2^+ + x_2^-)/2 \),
    • \( x_3 = (x_3^+ + x_3^-)/2. \)
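Property 3 simply encodes the splitting of a dyadic interval into its two halves \( I^{\pm} \): writing \( \lambda = \mu_I / \lvert I \rvert \) and using \( \lvert I^{\pm} \rvert = \lvert I \rvert / 2 \), one can check that

\begin{equation*} \frac{1}{\lvert I \rvert} \sum_{J \subset I} \mu_J \langle f \rangle_J^2 = \lambda \langle f \rangle_I^2 + \frac{1}{2} \bigg\{ \frac{1}{\lvert I^+ \rvert} \sum_{J \subset I^+} \mu_J \langle f \rangle_J^2 + \frac{1}{\lvert I^- \rvert} \sum_{J \subset I^-} \mu_J \langle f \rangle_J^2 \bigg\} \end{equation*}

and taking suprema over the admissible \( f \) and \( \{ \mu_J \} \) yields the inequality, since \( \langle f \rangle_I = x_3 \). The same splitting applied to \( \sum_{J \subset I} \mu_J \) explains the relation \( x_1 = (x_1^+ + x_1^-)/2 + \lambda \).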

The entire machine can be run backward: if we exhibit any function \( \mathcal{B} \) of three real variables that satisfies properties 1–3, the proof of the Theorem follows immediately. The key property 3 is not very pleasant to verify. Fortunately, it can be replaced by “infinitesimal” conditions (conditions on derivatives), which are easier to check. Indeed, taking \( \lambda = 0 \) in property 3, so that \( x_1 = \frac{1}{2}(x_1^+ + x_1^-) \), \( x_2 = \frac{1}{2}(x_2^+ + x_2^-) \) and \( x_3 = \frac{1}{2}(x_3^+ + x_3^-) \), with all triples in the domain of \( \mathcal{B} \), we see that property 3 implies the midpoint concavity of \( \mathcal{B} \):

\begin{equation*} \mathcal{B}(x_1,x_2,x_3) \geq \frac{1}{2} \big\{ \mathcal{B}(x_1^+,x_2^+,x_3^+) + \mathcal{B}(x_1^-,x_2^-,x_3^-)\big\} \end{equation*}

and furthermore,

\begin{equation} d^2 \mathcal{B} \leq 0, \qquad \frac{\partial \mathcal{B}}{\partial x_1} \geq x_3^2 \label{(eq3)} \end{equation}

Notice that, for smooth \( \mathcal{B} \), condition 3 is equivalent to \( \eqref{(eq3)} \). The following function satisfies 1, 2 and \( \eqref{(eq3)} \), and thus proves the Theorem with \( C=4 \):

\begin{equation*} \mathcal{B}(x_1, x_2, x_3) = 4\bigg( \displaystyle{x_2 - \frac{x_3^2}{1+x_1}}\bigg) \end{equation*}
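As a quick sanity check (again, not a substitute for the calculus verification), we can sample random admissible configurations and test that this explicit candidate really satisfies properties 2 and 3; the sampling ranges below are arbitrary choices.

```python
import numpy as np

def bellman(x1, x2, x3):
    """The explicit candidate B(x1, x2, x3) = 4*(x2 - x3^2/(1 + x1))."""
    return 4.0 * (x2 - x3 ** 2 / (1.0 + x1))

rng = np.random.default_rng(1)
for _ in range(100_000):
    # Two random points (x1^+-, x2^+-, x3^+-) of the domain
    # {0 <= x1 <= 1, x3^2 <= x2}.
    x1p, x1m = rng.uniform(0, 1, size=2)
    x3p, x3m = rng.uniform(-2, 2, size=2)
    x2p = x3p ** 2 + rng.uniform(0, 4)
    x2m = x3m ** 2 + rng.uniform(0, 4)
    # Pick lambda so that x1 = (x1^+ + x1^-)/2 + lambda stays <= 1;
    # then 0 <= lambda <= x1 automatically, and (x1, x2, x3) is in the
    # domain because x2 >= ((x3^+ + x3^-)/2)^2 by convexity of t -> t^2.
    lam = rng.uniform(0, 1 - (x1p + x1m) / 2)
    x1 = (x1p + x1m) / 2 + lam
    x2 = (x2p + x2m) / 2
    x3 = (x3p + x3m) / 2
    b = bellman(x1, x2, x3)
    # Property 2 with C = 4.
    assert 0.0 <= b <= 4 * x2 + 1e-12
    # Key property 3 (up to floating-point slack).
    half_sum = 0.5 * (bellman(x1p, x2p, x3p) + bellman(x1m, x2m, x3m))
    assert b >= lam * x3 ** 2 + half_sum - 1e-9

print("all checks passed")
```

The loop finishing without an assertion error is consistent with the calculus facts behind the proof: \( \partial \mathcal{B} / \partial x_1 = 4 x_3^2/(1+x_1)^2 \geq x_3^2 \) on the domain, and the Hessian of \( \mathcal{B} \) is negative semidefinite.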