Entropy

  • [definition]
    Given a discrete random variable $X$ with pmf $p(x)$, the entropy $H(X)$ of $X$ is defined by \[ H(X)=-\sum_x p(x)\log p(x) \] with the convention $0\log 0=0$.

  • [non negative]
    For any discrete random variable $X$, $H(X)\geq 0$, since $0\leq p(x)\leq 1$ implies $-\log p(x)\geq 0$ for every $x$.

  • [example]
    Let $X$ be a Bernoulli random variable with $p(X=0)=1-r$, $p(X=1)=r$, $0\leq r\leq 1$. Then the entropy of $X$ is \[ H(X)=-r\log r -(1-r)\log(1-r) \]

  • [joint entropy]
    \[ H(X,Y)=-\sum_x \sum_y p(x,y) \log p(x,y) = -E_{p(x,y)}[\log p(x,y)] \]

  • [conditional entropy] \[ \begin{split} H(Y|X) &= \sum_x p(x) H(Y|X=x) \\
    &= -\sum_x p(x) \sum_y p(y|x)\log p(y|x) \\
    &= -\sum_x\sum_y p(x,y)\log p(y|x) \\
    &= -E_{p(x,y)}[\log p(y|x)] \end{split} \]

  • [chain rule]
    \[ H(X,Y)=H(X)+H(Y|X) \]
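
A minimal numerical check of these definitions and of the chain rule (natural logarithms; the joint pmf below is an arbitrary illustrative choice):

```python
import numpy as np

# Joint pmf of (X, Y) on {0,1} x {0,1}; entries are arbitrary but sum to 1.
pxy = np.array([[0.3, 0.2],
                [0.1, 0.4]])
px = pxy.sum(axis=1)                      # marginal pmf of X

H = lambda p: -np.sum(p * np.log(p))      # entropy; all entries here are > 0
H_XY = H(pxy)                             # joint entropy H(X,Y)
H_Y_given_X = -np.sum(pxy * np.log(pxy / px[:, None]))   # H(Y|X)
print(np.isclose(H_XY, H(px) + H_Y_given_X))             # True: chain rule
```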

Relative Entropy and Mutual Information

  • [relative entropy (KL divergence)]
    Relative entropy, or KL divergence, between pmfs (pdfs) $p(x)$ and $q(x)$ is \[ KL(p||q)=\sum_x p(x)\log\left(\frac{p(x)}{q(x)}\right)=E_{p}\left[\log\left( \frac{p(x)}{q(x)} \right)\right] \] In general the KL divergence is not symmetric, but the symmetrized divergence $\frac{1}{2} KL(p||q)+\frac{1}{2} KL(q||p)$ is symmetric in $p$ and $q$.

  • [example]
    Let $\mathcal{X}=\{0,1\}$ and let $p(x),q(x)$ be pmfs (pdfs) with $p(X=0)=1-r$, $p(X=1)=r$, $q(X=0)=1-s$, $q(X=1)=s$. Then, \[ \begin{split} KL(p||q) &= (1-r)\log\left( \frac{1-r}{1-s} \right) + r\log\left(\frac{r}{s} \right) \\
    KL(q||p) &= s \log\left(\frac{s}{r}\right) +(1-s)\log\left( \frac{1-s}{1-r} \right) \end{split} \]

  • [mutual information]
    Given two random variables $X,Y$ with joint pmf (pdf) $p(x,y)$ and marginals $p(x),p(y)$, the mutual information $I(X,Y)$ is \[ I(X,Y) = \sum_x\sum_y p(x,y)\log\left( \frac{p(x,y)}{p(x)p(y)}\right)=KL(p(x,y)||p(x)p(y)) \]

  • [theorem] \[ \begin{split} I(X,Y)&=I(Y,X) \\
    I(X,X)&=H(X) \\
    I(X,Y)&=H(X)-H(X|Y) \\
    &= H(Y)-H(Y|X) \\
    &=H(X)+H(Y)-H(X,Y) \end{split} \]

  • [conditional mutual information]
    The conditional mutual information of random variables $X$ and $Y$ given $Z$ is \[ I(X,Y|Z)=H(X|Z)-H(X|Y,Z) \]

  • [theorem]
    Let $p(x)$ and $q(x)$, $x\in \mathcal{X}$, be two pmfs (pdfs). Then $KL(p||q)\geq 0$, with equality if and only if $p(x)=q(x)$ for all $x\in \mathcal{X}$.
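
A quick numerical illustration of the Bernoulli example above (asymmetry of KL) and of the identity $I(X,Y)=H(X)+H(Y)-H(X,Y)$; the values of $r$, $s$ and the joint pmf are arbitrary illustrative choices:

```python
import numpy as np

# KL divergence between two Bernoulli pmfs: nonnegative and asymmetric.
r, s = 0.3, 0.6
p, q = np.array([1 - r, r]), np.array([1 - s, s])
kl = lambda p, q: np.sum(p * np.log(p / q))
print(kl(p, q), kl(q, p))                 # both >= 0, generally unequal

# Mutual information as KL between the joint and the product of marginals.
pxy = np.array([[0.3, 0.2],
                [0.1, 0.4]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)
I = np.sum(pxy * np.log(pxy / np.outer(px, py)))
H = lambda p: -np.sum(p * np.log(p))
print(np.isclose(I, H(px) + H(py) - H(pxy)))   # True
```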

Differential Entropy

\[ H(X)=-\int_\mathcal{S} f(x)\log f(x) dx \] where $\mathcal{S}$ is the support set of random variable $X$, and $f(x)$ is pdf of $X$.
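
A standard worked example, for concreteness: if $X$ is uniform on $[0,a]$, then $f(x)=1/a$ on $\mathcal{S}=[0,a]$ and \[ H(X)=-\int_0^a \frac{1}{a}\log\frac{1}{a}\,dx=\log a, \] which is negative whenever $a<1$; unlike discrete entropy, differential entropy need not be nonnegative.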

  • [conditional differential entropy]
    \[ H(X|Y)=-\int f(x,y)\log f(x|y)\,dx\,dy \]

  • [KL divergence]
    Suppose $X\sim f(x)$ and $Y\sim g(x)$; then \[ KL(f||g)=\int f(x)\log \left( \frac{f(x)}{g(x)}\right)dx \] Note that $KL(f||g)$ is finite only if the support of $f$ is contained in the support of $g$.

  • [mutual information]
    \[ I(X,Y)=\int f(x,y)\log \left( \frac{f(x,y)}{f(x)f(y)} \right)dx\,dy = KL(f(x,y)||f(x)f(y)) \] \[ I(X,Y)=H(X)-H(X|Y)=H(Y)-H(Y|X) \]

  • [theorem]
    \[ KL(f||g) \geq 0 \] with equality iff $f=g$ almost everywhere.

  • [corollary]
    • $I(X,Y)\geq 0$, with equality iff $X$ and $Y$ are independent.
    • $H(X|Y)\leq H(X)$, with equality iff $X$ and $Y$ are independent.
  • [chain rule for differential entropy]
    \[ H(X_1,…,X_n)=\sum_{i=1}^n H(X_i|X_1,…,X_{i-1}) \]

  • [corollary]
    \[ H(X_1,…,X_n) \leq \sum_{i=1}^n H(X_i) \]

  • [theorem]
    \[ H(aX+c)=H(X)+\log |a| \] where $a\neq 0$.
    Proof hint: use the transformation $Y=aX+c$, whose density is $f_Y(y)=\frac{1}{|a|}f_X\left(\frac{y-c}{a}\right)$.

  • [corollary]
    Suppose $A$ is a nonsingular matrix; then $H(AX)=H(X)+\log |\det A|$.

  • [theorem]
    Let $X\in R^n$ have zero mean and covariance $\Sigma=E[XX^T]$; then \[ H(X)\leq \frac{1}{2}\log((2\pi)^n |\Sigma|)+\frac{n}{2} \]
    Proof hint: let $g$ be the density of $X$ and $f$ the Gaussian density with covariance $\Sigma$; then $KL(g||f)=-H(X)-\int g\log f\geq 0$.
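
A minimal numerical check of this bound in one dimension (a standard fact is that the Gaussian attains it) together with the scaling theorem $H(aX)=H(X)+\log|a|$, obtained by integrating $-f\log f$ on a grid; $\sigma$ and $a$ are arbitrary illustrative values:

```python
import numpy as np

# Differential entropy of N(0, s^2) by numerical integration on a wide grid,
# compared with the closed form 0.5*log(2*pi*e*s^2) (= the bound above for n = 1).
def H_gauss(s, n=400001):
    x, dx = np.linspace(-12 * s, 12 * s, n, retstep=True)
    f = np.exp(-x**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)
    return -np.sum(f * np.log(f)) * dx

sigma, a = 1.7, 3.0                           # arbitrary illustrative values
print(np.isclose(H_gauss(sigma), 0.5 * np.log(2 * np.pi * np.e * sigma**2)))  # True

# Scaling theorem: H(aX) = H(X) + log|a|, since aX ~ N(0, (a*sigma)^2).
print(np.isclose(H_gauss(a * sigma), H_gauss(sigma) + np.log(a)))             # True
```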

Exponential Family and the Optimal pdf under Moment Constraints

Consider a pdf $p(x)$ which satisfies $k$ (independent) constraints, \[ \int_{\mathcal{X}} h_i(x)p(x)dx=m_i < \infty, \quad i=1,\dots,k \] where the $m_i$ are specified constants. We want to find the pdf $p(x)$ satisfying these constraints that is closest (in KL divergence) to a given pdf $f(x)$. That is, \[ \begin{split} \min_p\ &\int p(x)\log\frac{p(x)}{f(x)}dx \\
\text{s.t. } &\int_\mathcal{X} h_i(x)p(x)dx=m_i,\quad i=1,\dots,k \\
&\int p(x)dx=1 \end{split} \]

Using the method of Lagrange multipliers, we obtain the objective function \[ F(p)=\int p(x)\log\frac{p(x)}{f(x)}dx +\sum_i \theta_i \left(\int_\mathcal{X} h_i(x)p(x)dx-m_i\right)+c\left( \int_\mathcal{X} p(x)dx -1 \right) \]

Calculate the functional derivative using the following formula, \[ \int \frac{\delta F }{\delta p}\phi(x)dx=\lim_{\epsilon\rightarrow 0} \frac{F(p+\epsilon \phi) -F(p)}{\epsilon} \] where $\phi(x)$ is an arbitrary function of $x$.

Then, expanding $F(p+\epsilon\phi)-F(p)$ to first order in $\epsilon$ (using $\log(1+\epsilon\phi/p)\approx \epsilon\phi/p$), we have \[ \int \frac{\delta F }{\delta p}\phi(x)dx = \lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon}\left(\int \left[\epsilon\phi\log\frac{p}{f}+(p+\epsilon\phi)\log\left(1+\epsilon \frac{\phi}{p}\right)\right]dx +\epsilon\sum_{i=1}^k \theta_i\int h_i(x)\phi(x)dx+\epsilon c\int \phi(x)dx\right) = \int\left(\log\frac{p(x)}{f(x)}+1+\sum_{i=1}^k\theta_i h_i(x)+c\right)\phi(x)dx \] Setting this to zero for every $\phi$ gives the stationarity condition below.

Thus, \[ c+1+\log\frac{p(x)}{f(x)}+\sum_{i=1}^k\theta_ih_i(x)=0 \] which means $p(x)=\frac{1}{g(\theta)}f(x)\exp(\sum_{i=1}^k\theta_ih_i(x))$ (absorbing the sign of the multipliers into the $\theta_i$), where $g(\theta)=\int_{\mathcal{X}}f(x)\exp(\sum_{i=1}^k\theta_ih_i(x))dx$ is the normalizer: an exponential family built on the carrier $f$.
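
As a sanity check of this closed form, here is a minimal discretized sketch, assuming $f$ uniform on a grid over $[0,1]$ and a single mean constraint $h_1(x)=x$ (the target mean $m=0.7$ is an arbitrary illustrative value); the multiplier $\theta$ is tuned by bisection so the constraint holds:

```python
import numpy as np

# Closed form from above with f uniform and h_1(x) = x: p(x) ∝ f(x)*exp(theta*x).
x = np.linspace(0.0, 1.0, 2001)
f = np.ones_like(x) / len(x)      # uniform reference pmf on the grid
m = 0.7                           # target mean (illustrative)

def tilt(theta):
    p = f * np.exp(theta * x)
    return p / p.sum()            # the 1/g(theta) normalization

lo, hi = -50.0, 50.0              # E_p[x] is increasing in theta
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if (tilt(mid) * x).sum() < m:
        lo = mid
    else:
        hi = mid
p = tilt(0.5 * (lo + hi))
print((p * x).sum())              # ≈ 0.7: the tilted family meets the constraint
```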

Functional

  • [definition]
    The mapping $x\rightarrow f(x)$ is a function, where $x$ is an argument of the function $f$. At the same time, the mapping of a function to the value of the function at a point, \[ f\rightarrow f(x) \] is a functional; here $x$ is a parameter.
  • [integral form]
    \[ f\rightarrow I(f)=\int_{\Omega} H(x,f(x),\triangledown f(x),\dots)\,\mu(dx) \] These form a special class of functionals: they map a function $f$ into a real number, provided that $H$ is real-valued. Examples:
    • the area underneath the graph of a positive function $f$. \[ f\rightarrow \int_{x_0}^{x_1}f(x)dx \]
    • the arclength of a curve in 2-dimensional Euclidean space. \[ f\rightarrow \int_{x_0}^{x_1}\sqrt{1+|\triangledown f(x)|^2} dx \]
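
Both example functionals can be evaluated numerically; here is a small sketch on the (arbitrarily chosen) function $f(x)=x^2$ over $[0,1]$:

```python
import numpy as np

# Evaluate the area and arclength functionals on f(x) = x^2 over [0, 1].
x, dx = np.linspace(0.0, 1.0, 100001, retstep=True)
f = x**2

area = np.sum(f) * dx                                     # ∫ f dx -> 1/3
arclength = np.sum(np.sqrt(1 + np.gradient(f, dx)**2)) * dx
print(area, arclength)   # ≈ 0.3333 and ≈ 1.4789; each maps the function f to a number
```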

Functional Derivative

  • [definition]
    Given a manifold $M$ representing functions $p$ (with certain boundary conditions etc.) and a functional $F$ defined as \[ F:M\rightarrow R \] the functional derivative of $F(p)$, denoted $\delta F/\delta p$, is defined by \[ \int \frac{\delta F}{\delta p}\phi(x) dx=\lim_{\epsilon\rightarrow 0}\frac{F(p+\epsilon\phi)-F(p)}{\epsilon}=\left[\frac{d}{d\epsilon}F(p+\epsilon\phi)\right]_{\epsilon =0} \] where $\phi(x)$ is an arbitrary function; the quantity $\epsilon\phi$ is called the variation of $p$.

  • [functional differential]
    The differential of the functional $F(p)$ is, \[ \delta F(p,\phi)=\int \frac{\delta F}{\delta p}\phi(x)dx \] $\phi$ is the change in $p$, so we ‘formally’ have $\phi=\delta p$.

  • [properties]
    Like the derivative of a function, the functional derivative satisfies the following properties, where $F(p)$ and $G(p)$ are functionals,
    • [linearity]
      \[ \frac{\delta(\lambda F+\mu G)[p]}{\delta p}=\lambda \frac{\delta F}{\delta p}+\mu\frac{\delta G}{\delta p} \]
    • [product rule]
      \[ \frac{\delta FG}{\delta p}=\frac{\delta F}{\delta p}G + F\frac{\delta G}{\delta p} \]
    • [chain rule] If $F$ is a functional and $G$ is an operator, then \[ \frac{\delta F(G)}{\delta p(y)}=\int dx\frac{\delta F(G)}{\delta G(x)}\cdot \frac{\delta G(x)}{\delta p(y)} \] If $G$ is an ordinary differentiable function, then this reduces to \[ \frac{\delta F(G)}{\delta p(y)}=\frac{\delta F(G)}{\delta G}\cdot\frac{\delta G}{\delta p} \]
  • [determining functional derivatives]
    We give a formula for determining the functional derivative of a common class of functionals that can be written as the integral of a function and its derivatives.
    • [formula]
      Given a functional \[ F(p)=\int f(r,p(r),\triangledown p(r))dr \] and a function $\phi(r)$ that vanishes on the boundary of the region of integration, we have \[ \begin{split} \int \frac{\delta F}{\delta p}\phi(r)dr &=\left[ \frac{d}{d\epsilon} \int f(r,p+\epsilon\phi,\triangledown p+\epsilon\triangledown \phi)dr \right]_{\epsilon=0} \\
      &=\int \left[\frac{\partial f}{\partial p}\phi+\frac{\partial f}{\partial \triangledown p}\cdot\triangledown \phi\right] dr \\
      &=\int \left[\frac{\partial f}{\partial p}\phi+\triangledown\cdot\left(\frac{\partial f}{\partial \triangledown p}\phi\right)-\left(\triangledown\cdot \frac{\partial f}{\partial \triangledown p}\right)\phi \right]dr \\
      &=\int \left[\frac{\partial f}{\partial p}\phi-\left(\triangledown\cdot \frac{\partial f}{\partial \triangledown p}\right)\phi \right]dr \\
      &=\int \phi\left[\frac{\partial f}{\partial p}-\triangledown\cdot \frac{\partial f}{\partial \triangledown p} \right]dr \end{split} \] where the second-to-last step uses the divergence theorem: the total-divergence term integrates to zero because $\phi$ vanishes on the boundary. The functional derivative is \[ \frac{\delta F}{\delta p}=\frac{\partial f}{\partial p}-\triangledown\cdot\frac{\partial f}{\partial \triangledown p} \] which is also called the Euler-Lagrange equation.

    • [example]
      For the electron-nucleus potential, Thomas and Fermi employed the Coulomb potential energy functional, \[ V[p]=\int \frac{p(r)}{|r|}dr \] Then the derivative can be calculated by \[ \int \frac{\delta V}{\delta p}\phi(r)dr=\left[\frac{d}{d\epsilon} \int \frac{p(r)+\epsilon\phi}{|r|}dr \right]_{\epsilon=0}=\int \frac{1}{|r|}\phi(r)dr \] So $\frac{\delta V}{\delta p}=\frac{1}{|r|}$. Alternatively, we can employ the Euler-Lagrange equation to obtain the functional derivative.
      Now, let $f(r,p(r),\triangledown p(r))=\frac{p(r)}{|r|}$: \[ \begin{split} \frac{\delta V}{\delta p}&=\frac{\partial f}{\partial p}-\triangledown\cdot \frac{\partial f}{\partial \triangledown p} \\
      &=\frac{1}{|r|}-0 \\
      &=\frac{1}{|r|} \end{split} \]
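
A numerical check of the limit definition for this example in one dimension, on a grid away from the singularity at $r=0$ (the density $p$ and variation $\phi$ below are arbitrary illustrative choices):

```python
import numpy as np

# V[p] = ∫ p(r)/|r| dr on [1, 5]; check (V[p+εφ]-V[p])/ε against ∫ (1/|r|) φ dr.
r, dr = np.linspace(1.0, 5.0, 40001, retstep=True)
p = np.exp(-r)                    # arbitrary test function
phi = np.sin(r)                   # arbitrary variation

V = lambda q: np.sum(q / np.abs(r)) * dr
eps = 1e-6
lhs = (V(p + eps * phi) - V(p)) / eps      # limit definition with small ε
rhs = np.sum(phi / np.abs(r)) * dr         # ∫ (δV/δp) φ dr with δV/δp = 1/|r|
print(np.isclose(lhs, rhs))                # True: V is linear in p, so exact
```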