December 20, 2010
November 18, 2010
### Abstract

Many studies of randomly packed hyperspheres in multiple dimensions have been performed using Monte Carlo or Molecular Dynamics simulations to probe the behaviour of these systems. The calculations are usually initiated by randomly placing the hyperspheres in a D-dimensional box until some random loose-packing density is achieved. Then either a compression algorithm or a particle-scaling technique is used to reach higher packing fractions. The interesting aspect of the initial random placement of the hyperspheres is that it is closely related to a test of random number generators proposed by Marsaglia, the “parking lot” test. It is this relationship that is investigated in this paper.
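The initial placement stage is essentially Marsaglia's test in one dimension: objects of fixed length are dropped uniformly at random and kept only if they do not overlap an already parked one, until further attempts almost always fail. A minimal sketch (box length, car length and attempt count are illustrative choices, not values from the paper):

```python
import random

def parking_lot_fill(length, n_attempts, car_len=1.0, seed=0):
    """Random sequential addition in 1D: the 'parking lot' test.

    Cars of length car_len are dropped uniformly at random and kept
    only if they do not overlap a previously parked car.
    """
    rng = random.Random(seed)
    parked = []  # left endpoints of accepted cars
    for _ in range(n_attempts):
        x = rng.uniform(0.0, length - car_len)
        if all(abs(x - p) >= car_len for p in parked):
            parked.append(x)
    return len(parked) * car_len / length  # achieved packing fraction

frac = parking_lot_fill(100.0, 20000)
```

With many attempts the fraction approaches the 1D saturation density (Rényi's constant, about 0.7476); in higher dimensions the same scheme gives the "randomly loosely packed" starting configurations mentioned above.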

October 20, 2010
### Abstract

We are interested in Monte Carlo (MC) methods for solving the diffusion equation. In the case of a constant diffusion coefficient, the solution is approximated using particles: in every time step, a constant stepsize is added to or subtracted from the coordinates of each particle with equal probability. For a spatially dependent diffusion coefficient, the naive extension of this method using a spatially variable stepsize introduces a systematic error: particles migrate in the direction of decreasing diffusivity. A correction of stepsizes and stepping probabilities has recently been proposed, and numerical tests have given satisfactory results. In this paper, we describe a quasi-Monte Carlo (QMC) method for solving the diffusion equation in a spatially nonhomogeneous medium: we replace the random samples in the corrected MC scheme by low-discrepancy point sets. In order to make proper use of the better uniformity of these point sets, the particles are reordered according to their successive coordinates at each time step. We illustrate the method with numerical examples in dimensions 1 and 2, showing that the QMC approach leads to improved accuracy compared with the original MC method using the same number of particles.
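For the constant-coefficient case, the scheme described above is just a fixed-step random walk; the spread of the particle cloud after n steps reproduces the diffusive variance 2·D·t when the stepsize equals sqrt(2·D·Δt). A hedged sketch (particle count and stepsize are illustrative):

```python
import random

def diffuse(n_particles, n_steps, step, seed=0):
    """Fixed-step random walk for the constant-coefficient diffusion
    equation: each particle moves +step or -step with probability 1/2."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    for _ in range(n_steps):
        xs = [x + step if rng.random() < 0.5 else x - step for x in xs]
    return xs

# After n steps the walker variance is n * step**2, matching the
# diffusive spread 2*D*t for step = sqrt(2*D*dt).
positions = diffuse(5000, 100, 0.1)
var = sum(x * x for x in positions) / len(positions)
```

The corrected variable-stepsize scheme and the QMC reordering of particles are refinements on top of exactly this loop.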

November 5, 2010
### Abstract

For about fifteen years, the surprising success of quasi-Monte Carlo methods in finance has been raising questions that challenge our understanding of these methods. At the origin are numerical experiments performed with so-called GSobol' and GFaure sequences by J. Traub and his team at Columbia University, following the pioneering work of S. Tezuka in 1993 on generalizations of Niederreiter (t, s)-sequences, especially with t = 0 (Faure sequences). Then, in the early 2000s, another breakthrough was achieved by E. Atanassov, who found clever generalizations of Halton sequences by means of permutations that are even asymptotically better than Niederreiter–Xing sequences in high dimensions. Unfortunately, detailed investigations of these GHalton sequences, together with numerical experiments, show that this good asymptotic behavior is obtained at the expense of the remaining terms and is not sensitive to different choices of Atanassov's permutations. As the theory fails here, the reasons for the success of GHalton, as well as GFaure, must be sought elsewhere, for instance in specific selections of good scramblings by means of tailor-made permutations. In this paper, we report on the assertions above and suggest some directions that may remove part of the mystery of QMC success.
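The generalized Halton sequences discussed here scramble the digits of the radical-inverse expansion with one permutation per base. A minimal sketch (the identity permutations below recover the plain Halton sequence; Atanassov's or tailor-made permutations would replace them):

```python
def gvdc(n, base, perm):
    """Generalized van der Corput value: radical inverse of n in the
    given base, with digit permutation perm (a list of length base)."""
    x, denom = 0.0, 1.0
    while n > 0:
        n, d = divmod(n, base)
        denom *= base
        x += perm[d] / denom
    return x

def ghalton(n_points, bases, perms):
    """Generalized Halton sequence: one permuted radical inverse
    per coordinate, with pairwise coprime bases."""
    return [[gvdc(i, b, p) for b, p in zip(bases, perms)]
            for i in range(1, n_points + 1)]

identity2, identity3 = list(range(2)), list(range(3))
pts = ghalton(4, [2, 3], [identity2, identity3])
# with identity permutations this is plain Halton:
# first coordinate 1/2, 1/4, 3/4, 1/8; second 1/3, 2/3, 1/9, 4/9
```

The choice of `perms` is precisely the "scrambling" whose influence on finite-sample quality the paper investigates.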

December 20, 2010
### Abstract

The well-known logic puzzle Sudoku can be generalized from two to three dimensions by designing a puzzle that is played on the faces of a cube. One variation, already introduced as a puzzle by Dion Church, uses three adjacent faces; another uses all six faces. We have developed a set of rules and constraints for both three-dimensional Sudoku variations and have studied their properties using the method of simulated annealing.
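Simulated annealing itself is straightforward to sketch: minimize a count of constraint violations under a cooling schedule with Metropolis acceptance. Below is a generic skeleton applied to a toy 3×3 Latin-square-style conflict count; the actual cube-face constraints of the 3D Sudoku variants are not reproduced here, and all schedule parameters are illustrative:

```python
import math
import random

def anneal(energy, neighbour, state, n_steps, t0=2.0, t1=0.01, seed=0):
    """Generic simulated annealing with a geometric cooling schedule
    and Metropolis acceptance; returns the best state found."""
    rng = random.Random(seed)
    best, best_e = state, energy(state)
    e = best_e
    for k in range(n_steps):
        t = t0 * (t1 / t0) ** (k / max(1, n_steps - 1))
        cand = neighbour(state, rng)
        ce = energy(cand)
        if ce <= e or rng.random() < math.exp((e - ce) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
    return best, best_e

def conflicts(grid):
    """Toy objective: duplicated symbols per row and column."""
    bad = 0
    for line in list(grid) + list(zip(*grid)):
        bad += len(line) - len(set(line))
    return bad

def tweak(grid, rng):
    """Neighbour move: overwrite one random cell with a random symbol."""
    g = [list(row) for row in grid]
    g[rng.randrange(3)][rng.randrange(3)] = rng.randrange(3)
    return tuple(tuple(row) for row in g)

start = tuple(tuple(0 for _ in range(3)) for _ in range(3))
solution, e = anneal(conflicts, tweak, start, 5000)
```

For the cube puzzles, `conflicts` would instead count violations of the row, column and band constraints across adjacent faces.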

October 20, 2010
### Abstract

We describe an adaptive algorithm to compute piecewise sparse polynomial approximations and the integral of a multivariate function over hyper-rectangular regions in medium dimensions. The key ingredient is a quasi-Monte Carlo quadrature rule which can handle the numerical integration of both very regular and less regular functions. Numerical tests are performed on functions taken from the Genz package in dimensions up to 5 and on basket option pricing.
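The quadrature ingredient can be illustrated with a plain (non-adaptive) quasi-Monte Carlo rule on a smooth Genz-style product integrand; the adaptive splitting and sparse polynomial pieces of the paper are not reproduced, and the integrand and point count below are illustrative:

```python
import math

def radical_inverse(n, base):
    """Radical inverse of n in the given base (van der Corput)."""
    x, denom = 0.0, 1.0
    while n > 0:
        n, d = divmod(n, base)
        denom *= base
        x += d / denom
    return x

def qmc_integrate(f, n, bases=(2, 3)):
    """Estimate the integral of f over the unit cube using the first
    n Halton points (one low-discrepancy coordinate per base)."""
    total = 0.0
    for i in range(1, n + 1):
        total += f([radical_inverse(i, b) for b in bases])
    return total / n

# smooth product-of-cosines test integrand;
# its exact integral over [0,1]^2 is sin(1)**2
est = qmc_integrate(lambda u: math.cos(u[0]) * math.cos(u[1]), 4096)
```

An adaptive version would apply such a rule on each sub-rectangle and refine where the local error estimate is large.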

November 5, 2010
### Abstract

We consider the exact path sampling of the squared Bessel process and other continuous-time Markov processes, such as the Cox–Ingersoll–Ross model, constant elasticity of variance diffusion model, and confluent hypergeometric diffusions, which can all be obtained from a squared Bessel process by using a change of variable, time and scale transformation, and change of measure. All these diffusions are broadly used in mathematical finance for modeling asset prices, market indices, and interest rates. We show how the probability distributions of a squared Bessel bridge and a squared Bessel process with or without absorption at zero are reduced to randomized gamma distributions. Moreover, for absorbing stochastic processes, we develop a new bridge sampling technique based on conditioning on the first hitting time at the boundary of the state space. Such an approach allows us to simplify simulation schemes. New methods are illustrated with pricing path-dependent options.
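The reduction to (randomized) gamma and chi-square laws underlies exact path sampling for these diffusions. For instance, the CIR transition density is a scaled noncentral chi-square, which can be sampled in one step without discretization error, as in this hedged sketch (parameter values are illustrative, and the bridge and absorption constructions of the paper are not reproduced):

```python
import numpy as np

def cir_exact_step(x0, kappa, theta, sigma, dt, rng):
    """One exact transition of the CIR process
    dX = kappa*(theta - X) dt + sigma*sqrt(X) dW,
    sampled via its noncentral chi-square transition law
    (CIR is a time/scale-changed squared Bessel process)."""
    c = sigma**2 * (1.0 - np.exp(-kappa * dt)) / (4.0 * kappa)
    df = 4.0 * kappa * theta / sigma**2
    nc = x0 * np.exp(-kappa * dt) / c
    return c * rng.noncentral_chisquare(df, nc)

rng = np.random.default_rng(0)
x1 = cir_exact_step(np.full(100_000, 0.04), 1.5, 0.05, 0.3, 1.0, rng)
# theoretical mean of X_dt: theta + (x0 - theta) * exp(-kappa * dt)
```

Chaining such steps along a time grid gives exact path samples for path-dependent option pricing.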

November 5, 2010
### Abstract

In this article we focus on two aspects of the one-dimensional diaphony of generalised van der Corput sequences in arbitrary bases. First we give a permutation with the best distribution behaviour with respect to the diaphony known so far: we improve a result of Chaix and Faure from 1993, from a value of 1.31574… for a permutation in base 19 to 1.13794… for our permutation in base 57. Moreover, for an infinite sequence X and its symmetric version, we analyse the connection between the diaphony F(X, N) and the L2-discrepancy using another result of Chaix and Faure. From this we outline an idea for obtaining a lower bound for the diaphony of generalised van der Corput sequences in arbitrary base b.
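A direct way to experiment with these quantities: the generalized van der Corput sequence with a digit permutation, and the one-dimensional diaphony computed from its closed form in terms of the second Bernoulli polynomial. This is a sketch; the identity permutation below is a placeholder, not the base-57 permutation of the paper:

```python
import math

def vdc(n, base, perm):
    """Generalized van der Corput value: radical inverse of n in the
    given base with digit permutation perm applied."""
    x, denom = 0.0, 1.0
    while n > 0:
        n, d = divmod(n, base)
        denom *= base
        x += perm[d] / denom
    return x

def diaphony(points):
    """One-dimensional diaphony via the closed form
    F_N^2 = (2*pi^2/N^2) * sum_{n,m} B2({x_n - x_m}),
    where B2(x) = x^2 - x + 1/6 is the second Bernoulli polynomial."""
    n = len(points)
    s = 0.0
    for x in points:
        for y in points:
            d = (x - y) % 1.0
            s += d * d - d + 1.0 / 6.0
    return math.sqrt(2.0 * math.pi**2 * s) / n

pts = [vdc(i, 2, [0, 1]) for i in range(1, 65)]
f = diaphony(pts)
```

Sweeping `perm` over digit permutations of a base and comparing `diaphony` values is exactly the kind of experiment behind the improved constant reported above.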

October 20, 2010
### Abstract

We study the error of reversible Markov chain Monte Carlo methods for approximating the expectation of a function. Explicit error bounds with respect to the l2-, l4- and l∞-norm of the function are proven. Our estimates attain the well-known asymptotic limit of the error, i.e. the bounds are correct to first order as n → ∞. We discuss the dependence of the error on the burn-in of the Markov chain. Furthermore, we suggest and justify a specific burn-in for optimizing the algorithm.
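The role of the burn-in is visible in a toy random-walk Metropolis chain for a standard normal target started far from the mode: the discarded initial segment removes the transient bias before averaging. A hedged sketch (step size, chain length and burn-in are illustrative, not the optimized burn-in of the paper):

```python
import math
import random

def metropolis_mean(f, log_density, x0, n, burn_in, step=1.0, seed=0):
    """Random-walk Metropolis estimate of E[f(X)] under the target
    density, discarding the first burn_in states of the chain."""
    rng = random.Random(seed)
    x, lp = x0, log_density(x0)
    total, count = 0.0, 0
    for k in range(n):
        y = x + rng.gauss(0.0, step)
        lq = log_density(y)
        if math.log(rng.random()) < lq - lp:  # Metropolis acceptance
            x, lp = y, lq
        if k >= burn_in:  # only post-burn-in states enter the average
            total += f(x)
            count += 1
    return total / count

# standard normal target, chain started far from the mode at x0 = 10
est = metropolis_mean(lambda x: x, lambda x: -0.5 * x * x, 10.0, 60_000, 5_000)
```

Without the burn-in, the average would be visibly biased toward the starting point.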

November 18, 2010
### Abstract

We suggest a randomized version of the projection methods belonging to the class of “row-action” methods, which work well both for systems with square nonsingular matrices and for overdetermined systems. These methods belong to the type known as Projection onto Convex Sets methods. Here we present a method beyond the conventional Markov chain based Neumann–Ulam scheme. The main idea is a random choice of blocks of rows in the projection method, so that on average the convergence is improved compared to the conventional periodic choice of rows. We suggest an acceleration of the row-projection method by using the Johnson–Lindenstrauss (J–L) theorem to find, among the randomly chosen rows, an optimal row in a certain sense. We extend this randomized method to linear systems coupled with systems of linear inequalities. Applied to finite-difference approximations of boundary value problems, the method yields an extremely efficient random walk algorithm whose convergence is exponential and whose cost does not depend on the dimension of the matrix. In addition, the algorithm computes the solution at all grid points and is easily parallelizable.
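The core of the randomized row-projection idea is easy to sketch: project the current iterate onto the hyperplane of one randomly chosen row of A x = b (the randomized Kaczmarz step). The block choice and J–L-based row selection of the paper are omitted here, and uniform row sampling is used, so this is a baseline sketch only:

```python
import random

def randomized_kaczmarz(a, b, n_iters, seed=0):
    """Randomized row-projection (Kaczmarz) method: at each step,
    project the iterate onto the hyperplane of one randomly chosen
    row of the (possibly overdetermined) system A x = b."""
    rng = random.Random(seed)
    m, n = len(a), len(a[0])
    x = [0.0] * n
    for _ in range(n_iters):
        i = rng.randrange(m)  # uniform row choice (simplest variant)
        row = a[i]
        residual = b[i] - sum(r * xi for r, xi in zip(row, x))
        scale = residual / sum(r * r for r in row)
        x = [xi + scale * r for xi, r in zip(x, row)]
    return x

# small consistent overdetermined system with solution (1, 2)
a = [[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]]
b = [4.0, 7.0, -1.0]
x = randomized_kaczmarz(a, b, 500)
```

The paper's refinements replace the uniform `rng.randrange(m)` choice with a randomly drawn block of rows, screened via a J–L projection for a near-optimal row.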

October 20, 2010
### Abstract

We consider the problem of simulating tail loss probabilities and expected losses conditioned on exceeding a large threshold (expected shortfall) for credit portfolios. Instead of the commonly used normal copula framework for the dependence structure between obligors, we use the t-copula model. We increase the number of inner replications using the so-called geometric shortcut idea to improve the efficiency of the simulations. The paper contains all details for simulating the risk of the t-copula credit risk model by combining outer importance sampling (IS) with the geometric shortcut. Numerical results show that the method is efficient in assessing tail loss probabilities and expected shortfalls for credit risk portfolios. We also compare the tail loss probabilities and expected shortfalls under the normal and the t-copula model.
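The heavier joint tails that motivate the t-copula can be checked directly: a bivariate t sample is a correlated normal pair divided by a common sqrt(chi-square/df) mixing variable, and joint tail events are markedly more frequent than under the normal copula. A sketch (correlation, degrees of freedom and quantile level are illustrative; the IS and geometric-shortcut machinery of the paper is not reproduced):

```python
import numpy as np

def joint_tail_prob(n, rho, df, q, seed=0):
    """Estimate the probability that two obligors' latent variables
    both fall below their q-quantiles, under a normal copula versus
    a bivariate t-copula with df degrees of freedom."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    w = rng.chisquare(df, size=n) / df
    t = z / np.sqrt(w)[:, None]          # bivariate t sample
    thr_n = np.quantile(z, q, axis=0)    # empirical marginal quantiles
    thr_t = np.quantile(t, q, axis=0)
    p_norm = np.mean((z[:, 0] < thr_n[0]) & (z[:, 1] < thr_n[1]))
    p_t = np.mean((t[:, 0] < thr_t[0]) & (t[:, 1] < thr_t[1]))
    return p_norm, p_t

p_norm, p_t = joint_tail_prob(200_000, 0.3, 4, 0.05)
```

The shared mixing variable is what couples all obligors' tails at once, which is why t-copula portfolio losses are heavier-tailed than their normal-copula counterparts.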

October 20, 2010
### Abstract

We consider a new method using genetic algorithms to obtain lower bounds for the star discrepancy for any number of points in [0, 1]^s. We compute lower bounds for the star discrepancy of samples of a number of sequences in several dimensions and successfully compare with existing results from the literature. Despite statements in the quasi-Monte Carlo literature that computing the star discrepancy is either intractable or requires substantial computational work for s ≥ 3, we show that it is possible to compute the star discrepancy exactly, or at the very least obtain reasonable lower bounds, without a huge computational burden. Our method is fast and consistent, and can easily be extended to estimate lower bounds for other discrepancy measures. It allows researchers to measure the uniformity of point sets by the star discrepancy itself rather than relying on the L2-discrepancy, which is easy to compute but well known to be flawed.
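The search space for such lower bounds is the set of anchored boxes: any box yields the valid bound D*(P) ≥ |count/N − volume|, and it suffices to consider boxes whose corner coordinates come from the point set itself. A genetic algorithm explores this corner space for larger N and s; the small exhaustive sketch below enumerates it instead, so it is a baseline rather than the paper's method:

```python
from itertools import product

def star_discrepancy_lower_bound(points):
    """Lower bound on the star discrepancy: evaluate the local
    discrepancy |#{x in [0, c)} / N - vol([0, c))| over boxes whose
    corners are built from coordinates appearing in the point set."""
    n = len(points)
    s = len(points[0])
    # candidate corner coordinates per dimension
    grids = [sorted({p[d] for p in points} | {1.0}) for d in range(s)]
    best = 0.0
    for corner in product(*grids):
        vol = 1.0
        for c in corner:
            vol *= c
        open_cnt = sum(all(p[d] < corner[d] for d in range(s))
                       for p in points)
        closed_cnt = sum(all(p[d] <= corner[d] for d in range(s))
                         for p in points)
        best = max(best, abs(open_cnt / n - vol), abs(closed_cnt / n - vol))
    return best

pts = [(i / 8 + 1 / 16, ((i * 3) % 8) / 8 + 1 / 16) for i in range(8)]
lb = star_discrepancy_lower_bound(pts)
```

Both open and closed boxes are checked because the supremum in the star discrepancy is attained on one of the two at a critical corner.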

October 20, 2010
### Abstract

Random and deterministic fragmentation models are considered. Their relationship is studied by deriving different forms of the kinetic fragmentation equation from the corresponding stochastic models. Results related to the problem of non-conservation of mass (phase transition into dust) are discussed. Illustrative examples are given and some open problems are mentioned.
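A minimal stochastic counterpart of such models is mass-biased binary fragmentation with a dust cutoff. Note that this finite simulation conserves total mass exactly; the "phase transition into dust" (shattering) discussed above is a property of the continuum kinetic equation, not of this sketch, and the cutoff and event count are illustrative:

```python
import random

def fragment(masses, n_events, min_mass, rng):
    """Random binary fragmentation: repeatedly pick a fragment with
    probability proportional to its mass and split it at a uniform
    point; fragments below min_mass are no longer split ('dust')."""
    masses = list(masses)
    for _ in range(n_events):
        splittable = [m for m in masses if m >= min_mass]
        if not splittable:
            break
        # mass-biased choice among the splittable fragments
        pick = rng.choices(splittable, weights=splittable)[0]
        masses.remove(pick)
        u = rng.uniform(0.0, 1.0)
        masses += [pick * u, pick * (1.0 - u)]
    return masses

rng = random.Random(0)
frags = fragment([1.0], 200, 1e-3, rng)
# each event replaces one fragment by two, so mass is conserved
```

Deriving the kinetic fragmentation equation amounts to taking the law of large numbers limit of exactly this kind of particle system.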

October 20, 2010
### Abstract

We consider statistical inference from incomplete sets of binary data. Our approach is based on the autologistic model, which is very flexible and well suited for medical applications. We propose a Bayesian approach, essentially using Monte Carlo techniques. The method developed in this paper is a special version of the Gibbs sampler. We repeat the following two steps intermittently: first, missing values are generated from the predictive distribution; second, unknown parameters are estimated from the completed data. The Monte Carlo method of computing maximum likelihood estimates due to Geyer and Thompson (J. R. Statist. Soc. B 54: 657–699, 1992) is modified for the Bayesian setting and for missing-data problems. We include results of some small-scale simulation experiments: we artificially introduce missing values into a real data set and then use our algorithm to fill them in. The rate of correct imputations is quite satisfactory.
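The two-step scheme can be sketched for the simplest case of iid Bernoulli data with a Beta prior; the autologistic model of the paper replaces this likelihood with a spatially dependent one, and all numbers below are illustrative:

```python
import random

def gibbs_impute(observed, n_missing, n_iters, a=1.0, b=1.0, seed=0):
    """Toy two-step Gibbs sampler for binary data with missing entries:
    (1) draw each missing value from Bernoulli(theta), the predictive
        distribution given the current parameter;
    (2) draw theta from its Beta posterior given the completed data.
    Returns the posterior mean of theta over the second half of the chain."""
    rng = random.Random(seed)
    theta = 0.5
    missing = [0] * n_missing
    theta_draws = []
    for _ in range(n_iters):
        missing = [1 if rng.random() < theta else 0 for _ in missing]
        ones = sum(observed) + sum(missing)
        zeros = len(observed) + len(missing) - ones
        theta = rng.betavariate(a + ones, b + zeros)
        theta_draws.append(theta)
    half = len(theta_draws) // 2
    return sum(theta_draws[half:]) / half

observed = [1] * 70 + [0] * 30   # 70% ones among the observed entries
post_mean = gibbs_impute(observed, 20, 4000)
```

The imputation rate reported in the paper corresponds to comparing the drawn `missing` values against artificially deleted entries of a real data set.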