
Bivariate Normal Distribution – Mahalanobis distance and contour ellipses

I continue with my posts on Bivariate Normal Distributions [BVDs]. In this post we consider the exponent of a BVD's probability density function [pdf]. This exponent is governed by a central matrix Σ⁻¹, the inverse of the variance-covariance matrix of the BVD's random vector. We define the so-called Mahalanobis distance d_m for BVD vectors. A constant value of the… Read More » Bivariate Normal Distribution – Mahalanobis distance and contour ellipses
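As a quick illustration of the quantity named in the excerpt above, here is a minimal sketch of the Mahalanobis distance computed with NumPy. The mean vector mu and the covariance matrix Sigma are illustrative assumptions, not values taken from the post.

```python
# Sketch: Mahalanobis distance d_m for a bivariate normal.
# mu and Sigma below are assumed, illustrative values.
import numpy as np

mu = np.array([0.0, 0.0])            # assumed mean vector
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])       # assumed variance-covariance matrix
Sigma_inv = np.linalg.inv(Sigma)     # central matrix of the pdf's exponent

def mahalanobis(v, mu=mu, Sigma_inv=Sigma_inv):
    """d_m(v) = sqrt((v - mu)^T Sigma^{-1} (v - mu))."""
    d = v - mu
    return np.sqrt(d @ Sigma_inv @ d)

# All points v with the same d_m lie on one contour ellipse of the pdf.
v = np.array([1.0, -0.5])
print(mahalanobis(v))
```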

Bivariate normal distribution – derivation by linear transformation of a random vector of two independent Gaussians

In another post on the properties of a Bivariate Normal Distribution [BVD] I motivated the form of its probability density function [pdf] by symmetry arguments and by the underlying probability density functions of its marginals, namely 1-dimensional Gaussians. In this post we will derive the probability density function by following the line of argumentation for a general Multivariate Normal Distribution… Read More » Bivariate normal distribution – derivation by linear transformation of a random vector of two independent Gaussians
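The construction this excerpt alludes to can be sketched numerically: if z is a random vector of two independent standardized Gaussians, then x = A z + μ is bivariate normal with covariance Σ = A Aᵀ. A minimal sketch, assuming an illustrative matrix A and shift μ (not taken from the post):

```python
# Sketch: bivariate normal samples via a linear transformation of a
# vector z of two independent standard Gaussians; x = A z + mu then
# has covariance Sigma = A A^T. A and mu are assumed values.
import numpy as np

rng = np.random.default_rng(42)
A = np.array([[1.5, 0.0],
              [0.6, 0.8]])           # assumed transformation matrix
mu = np.array([1.0, -1.0])           # assumed shift vector

Z = rng.standard_normal((10_000, 2)) # independent standardized normals
X = Z @ A.T + mu                     # linearly transformed random vectors

print(np.cov(X, rowvar=False))       # sample covariance, close to A A^T
print(A @ A.T)
```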

Multivariate Normal Distributions – II – Linear transformation of a random vector with independent standardized normal components

In Machine Learning we typically deal with huge but finite vector distributions defined in ℝⁿ. At least in certain regions of ℝⁿ these distributions may approximate an underlying continuous distribution. In the first post of this series we worked with a special type of continuous vector distribution based on independent 1-dimensional standardized normal distributions for the vector components… Read More » Multivariate Normal Distributions – II – Linear transformation of a random vector with independent standardized normal components
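The same construction carries over to ℝⁿ. A minimal sketch for n = 3, in the spirit of the linearly transformed 3-dim Z-distribution the post discusses; the transformation matrix M below is an illustrative assumption:

```python
# Sketch: linear transformation of a random vector in R^n whose
# components are independent standardized normals (a "Z-distribution").
# M is an assumed, illustrative transformation matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 3
M = rng.normal(size=(n, n))          # assumed (almost surely invertible) matrix
Z = rng.standard_normal((50_000, n)) # independent N(0,1) components
Y = Z @ M.T                          # finite sample of the transformed vector

# For large samples the sample covariance approaches M M^T.
print(np.round(np.cov(Y, rowvar=False), 2))
print(np.round(M @ M.T, 2))
```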