# Search Results

## You are looking at 1–4 of 4 items:

• "Univariate and Multivariate Distributions"

## Abstract

The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes’ theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived in which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives that would otherwise have to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
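The Monte Carlo error propagation described above can be sketched in a few lines. This is a minimal illustration, not the author's implementation: the input expectation, covariance matrix, and the nonlinear transformation `f` are made-up values chosen only to show the mechanics of estimating the expectation and covariance matrix of the transformed vector from samples, with no derivatives or linearization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed expectation and covariance matrix of the input random vector x
mu_x = np.array([1.0, 2.0])
cov_x = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

# An arbitrary nonlinear transformation y = f(x), for illustration only
def f(x):
    return np.array([x[0] * x[1], x[0] ** 2 + np.sin(x[1])])

# Generate random variates of x and propagate each one through f
samples = rng.multivariate_normal(mu_x, cov_x, size=100_000)
y = np.apply_along_axis(f, 1, samples)

# Monte Carlo estimates of the expectation and the covariance matrix of y;
# no Jacobian is needed, so linearization errors are avoided entirely
mu_y = y.mean(axis=0)
cov_y = np.cov(y, rowvar=False)
```

The same sampling loop covers the sum-of-vectors case at the end of the abstract: drawing each vector from its own distribution and summing the samples yields the expectation and covariance matrix of the sum even when the distribution of the sum has no closed form.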

about 10 smaller than the standard deviations of the y coordinates. The correlations of the y coordinates of different points cause the differences between the results of the univariate and multivariate distribution. Coordinate differences, which are computed for Example 1, are less affected than the coordinates themselves used for Example 2. This can be seen from the findings of Table 3. The rise of the standard deviation for Example 2 from 19.6 mm for uncorrelated measurements to 55.9 mm for correlated observations and the corresponding increase of the confidence

textbooks, such as Bar89, Lup93, WJ03, Wass10, mentioned in §1.3. The chapter starts with a brief overview of probability and random variables, then it reviews the most common univariate and multivariate distribution functions, and follows with correlation coefficients. We also summarize the central limit theorem and discuss how to generate mock samples (random number generation) for a given distribution function. Notation Notation in probability and statistics is highly variable and ambiguous, and can make things confusing all on its own (and even more so in data mining publications
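Generating mock samples for a given distribution function, as mentioned in the snippet above, is commonly done by inverse-transform sampling. The sketch below is an assumed illustration rather than the chapter's own example: it uses the exponential distribution, whose CDF F(x) = 1 − exp(−λx) inverts in closed form to F⁻¹(u) = −ln(1 − u)/λ, so uniform variates map directly to exponential ones.

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 1.5  # illustrative rate parameter

# Draw uniform variates on [0, 1) and push them through the inverse CDF.
# log1p(-u) computes ln(1 - u) accurately even for u close to 0.
u = rng.uniform(size=200_000)
x = -np.log1p(-u) / lam

# The sample mean should approach the theoretical mean 1/lam
print(x.mean())  # close to 1/1.5 ≈ 0.667
```

The same recipe works for any distribution whose CDF can be inverted, numerically if need be; for distributions without a tractable inverse, rejection sampling is the usual fallback.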
