In a previous post of this blog we derived the functional form of a **bivariate normal distribution** [**BND**] of two 1-dimensional random variables *X* and *Y*. By rewriting the probability density function [**pdf**] in terms of vectors *(x, y)*^{T} and a matrix **Σ**^{-1} we recognized that a coefficient appearing in the central exponential of the pdf could be identified as the so-called **correlation coefficient** describing a linear coupling of our two random variables *X* and *Y*. In this post I shall show how one can derive the *covariance* and the *correlation coefficient* the *hard way*, namely by directly integrating the BND's probability density function.

## A bit of motivation first

Just to motivate those of my readers who may ask themselves what bivariate distributions have to do with Machine Learning: the following feature image shows data points forming a bivariate normal distribution whose pdf can be described by the type of function discussed in the preceding post and below.

The data points correspond to vectors in the multidimensional latent space of a *Convolutional AutoEncoder* [CAE]. Each vector represents an image of a face after having been encoded by the encoder part of the trained CAE. The vectors' endpoints were projected onto a 2-dim coordinate plane of the basic Euclidean Coordinate System [ECS] spanning the latent space. We see that the contours of the resulting (projected) 2-dimensional probability density are concentric ellipses whose main axes all point in the same directions. The axes of the ellipses obviously are rotated against the ECS axes. In future posts of this blog we will prove that the rotation angle of such ellipses directly depends on the *correlation coefficient* between the underlying 1-dimensional distributions. The math of bivariate and multivariate distributions is also of further importance in ML-applications applied to data of industrial processes.

## Formal definition of the covariance of two random variables *X* and *Y*

Let us abbreviate the expectation values of the random variables *X* and *Y* as *E[X]* and *E[Y]*, respectively. Then the formal definition of the covariance of *X* and *Y* is:
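Written out, this is the standard definition of the covariance as the expectation value of the product of the deviations from the means:

```latex
\operatorname{cov}(X, Y) \;=\; E\big[\,(X - E[X])\,(Y - E[Y])\,\big] \;=\; E[XY] \;-\; E[X]\,E[Y]
```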

*XY* is a combined two-dimensional distribution, which is defined by a probability density function *g(x,y)* resulting from a probability density *g*_{x}*(x)* of *X* assuming the value *x* and a conditional probability density *c*_{y}*(y|x)* of *Y* assuming the value *y* under the condition *X = x*. The second factor is what will give rise to symmetry arguments later on. Then in general we have
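In integral form, the covariance of the combined distribution reads:

```latex
\operatorname{cov}(X, Y) \;=\; \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \big(x - E[X]\big)\,\big(y - E[Y]\big)\; g(x,y)\; dx\, dy
```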

The correlation coefficient *ρ* of the two random variables is defined by looking at the standardized distributions:
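With each variable centered and scaled by its standard deviation, this definition reads:

```latex
\rho \;=\; E\!\left[\,\frac{X - E[X]}{\sigma_x}\,\cdot\,\frac{Y - E[Y]}{\sigma_y}\,\right]
```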

which after some elementary consideration gives us
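Pulling the constant standard deviations out of the expectation value yields:

```latex
\rho \;=\; \frac{\operatorname{cov}(X, Y)}{\sigma_x\, \sigma_y}
```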

To simplify the following steps we choose a 2-dimensional Euclidean coordinate system [ECS] centered such that *E[X]* = 0 and *E[Y]* = 0. In this centered ECS the covariance reduces to *E[XY]*.

In the case of a bivariate normal distribution we call the respective probability density function [**pdf**] *g*_{2}*(x,y)*. Using this pdf we find that we have to solve the following integral:
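In our centered ECS the covariance is the expectation value of the product *xy*:

```latex
\operatorname{cov}(X, Y) \;=\; E[XY] \;=\; \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} x\, y\; g_2(x,y)\; dx\, dy
```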

## Useful integral formulas

We need formulas for two integrals:
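Both are standard Gaussian integrals; the tags (5) and (6) below are assigned so as to match the references made to them further down in the derivation:

```latex
\begin{align}
\int_{-\infty}^{\infty} y\; e^{-\,(y - m)^2 / (2 s^2)}\; dy \;&=\; m\, s\, \sqrt{2\pi} \tag{5} \\
\int_{-\infty}^{\infty} x^2\; e^{-\,x^2 / (2\sigma^2)}\; dx \;&=\; \sigma^3\, \sqrt{2\pi} \tag{6}
\end{align}
```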

## Performing the integration

Assumptions on the *marginal distributions* of *X* and *Y* (having variances *σ*_{x}^{2} and *σ*_{y}^{2}) of a BND and the application of symmetry arguments plus normalization conditions have revealed the general form of *g*_{2}*(x,y)*; see the preceding post in this blog. In our centered ECS the pdf of the BND is given by

*ρ* is a parameter and a constant in this formula. We know already that it may have to do with the *correlation* of the random variables *X* and *Y*. To prove this analytically we now perform the integration to get an analytical expression for the covariance.
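Explicitly, the pdf we integrate over is (in our centered ECS):

```latex
g_2(x, y) \;=\; \frac{1}{2\pi\, \sigma_x \sigma_y \sqrt{1 - \rho^2}}\; \exp\!\left[ -\,\frac{1}{2\,(1 - \rho^2)} \left( \frac{x^2}{\sigma_x^2} \;-\; \frac{2\rho\, x y}{\sigma_x \sigma_y} \;+\; \frac{y^2}{\sigma_y^2} \right) \right]
```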

The trick to solve the integral over *y* is to complete the expression in the exponential such that we get a full square. First we redefine some variables

The term in the exponential thus can be rewritten as
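Completing the square in *y* separates the exponent into a pure *x*-term and a shifted quadratic term in *y*:

```latex
-\,\frac{1}{2\,(1-\rho^2)}\left(\frac{x^2}{\sigma_x^2} - \frac{2\rho\, x y}{\sigma_x \sigma_y} + \frac{y^2}{\sigma_y^2}\right)
\;=\;
-\,\frac{x^2}{2\sigma_x^2}
\;-\;
\frac{\left(y \,-\, \rho\,\frac{\sigma_y}{\sigma_x}\, x\right)^2}{2\,\sigma_y^2\,(1-\rho^2)}
```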

This gives us
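The double integral for the covariance then factorizes into an inner integral over *y* and an outer integral over *x*:

```latex
\operatorname{cov}(X, Y) \;=\; \frac{1}{2\pi\, \sigma_x \sigma_y \sqrt{1-\rho^2}} \int_{-\infty}^{\infty} x\; e^{-\,x^2/(2\sigma_x^2)} \left[\; \int_{-\infty}^{\infty} y\; e^{-\,\left(y - \rho \frac{\sigma_y}{\sigma_x} x\right)^2 \!/ \left(2\, \sigma_y^2 (1-\rho^2)\right)}\; dy \;\right] dx
```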

We solve the inner integral with the help of (5) to get:
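The inner integral over *y* contributes a factor proportional to the shifted mean *ρ* (*σ*_{y}/*σ*_{x}) *x*, which leaves a second-moment integral over *x*:

```latex
\operatorname{cov}(X, Y) \;=\; \frac{\rho\, \sigma_y}{\sqrt{2\pi}\; \sigma_x^2} \int_{-\infty}^{\infty} x^2\; e^{-\,x^2/(2\sigma_x^2)}\; dx
```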

By application of (6) we arrive at
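The remaining integral is exactly the second moment *σ*_{x}^{3} √(2π), so all the square roots cancel:

```latex
\operatorname{cov}(X, Y) \;=\; \frac{\rho\, \sigma_y}{\sqrt{2\pi}\; \sigma_x^2} \;\cdot\; \sigma_x^3\, \sqrt{2\pi} \;=\; \rho\; \sigma_x\, \sigma_y
```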

This is the result we hoped for. We can obviously identify the parameter *ρ* appearing in the probability density function of a bivariate normal distribution as the correlation coefficient of the two underlying 1-dimensional normal distributions.

In the preceding post we have seen that *ρ* appears in the matrix **Σ**^{-1} defining the quadratic form in the exponential. With *v* = (*x, y*)^{T} we have
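The quadratic form and the covariance matrix, written with the just-derived result *ρσ*_{x}*σ*_{y} on the off-diagonal:

```latex
g_2(v) \;\propto\; \exp\!\left( -\tfrac{1}{2}\; v^{T}\, \Sigma^{-1}\, v \right),
\qquad
\Sigma \;=\;
\begin{pmatrix}
\sigma_x^2 & \rho\, \sigma_x \sigma_y \\[2pt]
\rho\, \sigma_x \sigma_y & \sigma_y^2
\end{pmatrix}
```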

For *standardized* Gaussian distributions *X* and *Y* (with *σ*_{x} = *σ*_{y} = 1) we get a simple covariance matrix which is completely determined by *ρ*:
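In the standardized case the covariance matrix and its inverse take the simple forms:

```latex
\Sigma \;=\;
\begin{pmatrix}
1 & \rho \\
\rho & 1
\end{pmatrix},
\qquad
\Sigma^{-1} \;=\; \frac{1}{1-\rho^2}
\begin{pmatrix}
1 & -\rho \\
-\rho & 1
\end{pmatrix}
```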

## Conclusion

One can directly derive the correlation coefficient of the Gaussian distributions underlying a Bivariate Normal Distribution by deriving the expectation value for the product *xy* from the BND’s probability density function. The integrals over *x* and *y* have analytical solutions.
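As a quick sanity check one can also verify the analytical result cov(*X*, *Y*) = *ρσ*_{x}*σ*_{y} numerically by sampling a BND. The parameter values in the following sketch are illustrative choices, not taken from the posts:

```python
import numpy as np

# Illustrative parameters (chosen for this check, not taken from the post)
rho, sigma_x, sigma_y = 0.6, 2.0, 0.5

# Covariance matrix of the centered BND (E[X] = E[Y] = 0)
Sigma = np.array([
    [sigma_x**2,              rho * sigma_x * sigma_y],
    [rho * sigma_x * sigma_y, sigma_y**2],
])

# Draw a large sample from the BND
rng = np.random.default_rng(42)
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=200_000)
x, y = samples[:, 0], samples[:, 1]

# For the centered distribution the covariance is simply E[XY]
cov_xy = np.mean(x * y)
corr = cov_xy / (x.std() * y.std())

print(cov_xy)  # close to rho * sigma_x * sigma_y = 0.6
print(corr)    # close to rho = 0.6
```

With a sample of this size the empirical values agree with *ρσ*_{x}*σ*_{y} and *ρ* to within a few thousandths.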

Conversely, a Bivariate Normal Distribution [BND] can be constructed by defining a correlation between two 1-dimensional normal distributions and using the inverse of the covariance matrix to define a quadratic form of *x* and *y* in an exponential, which after normalization gives the probability density function of the BND. We shall see in further posts that such a construction has a direct *geometrical* interpretation. This will later on also motivate the abstract definition of general multidimensional or *Multivariate Normal Distributions* and their probability density functions.

But in the next post on BNDs I will first look at some specific properties characterizing the hyper-curves (of constant probability density) and the expectation values of the BND’s conditional distributions.