Publicly Available
May 9, 2008
### Abstract

The classical model of probability theory, due principally to Kolmogorov, defines probability as a measure of total mass one on a sigma-algebra of subsets (events) of a given set (the sample space), and random variables as real-valued functions on the sample space such that the inverse images of all Borel sets are events. From this model, all the results of probability theory are derived. However, the assertion that any given concrete situation is subject to probability theory is a scientific hypothesis, verifiable only experimentally by appropriate sampling, and never totally certain. Furthermore, classical probability theory allows for the possibility of “outliers”: sampled values which are misleading. In particular, Kolmogorov's Strong Law of Large Numbers asserts that if, as is usually the case, a random variable has a finite expectation (its integral over the sample space), then the average of N independently sampled values of this function converges to the expectation with probability 1 as N tends to infinity. This implies that there may be sample sequences (belonging to a set of total probability 0) for which this convergence does not occur. It is proposed to derive a large and important part of the classical probabilistic results on the simple basis that the sample sequences are so constructed that the corresponding averages do converge to the mathematical expectation as N tends to infinity, for all Riemann-integrable random variables. A number of important results have already been proved, and further investigations are proceeding with much promise. By this device, the stochastic nature of some concrete situations is no longer a likely scientific hypothesis but a proven mathematical fact, and the problem of outliers is eliminated. This model may be referred to as “quasi-probability theory”; it is particularly appropriate for the large class of computations that are referred to as “quasi-Monte Carlo”.
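As a concrete illustration of the averaging property this abstract builds on (our own sketch, not from the paper), the following averages a Riemann-integrable function over a van der Corput low-discrepancy sequence; the running average approaches the Riemann integral as the sample size grows. All function names are ours.

```python
def van_der_corput(n, base=2):
    """Radical inverse of n in the given base: the n-th point of the
    van der Corput low-discrepancy sequence in [0, 1)."""
    q, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

def quasi_average(f, n_samples):
    """Average of f over a van der Corput sample sequence; for a
    Riemann-integrable f this converges to the integral over [0, 1]."""
    return sum(f(van_der_corput(i)) for i in range(1, n_samples + 1)) / n_samples

# The average of x^2 over the sequence approaches the Riemann integral 1/3.
approx = quasi_average(lambda x: x * x, 4096)
```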

### Abstract

Pricing of catastrophe bonds leads to integrals with discontinuous and formally infinite-dimensional integrands. We investigate the suitability of Quasi-Monte Carlo methods for the numerical evaluation of these integrals and develop several variance-reduction algorithms. Furthermore, the performance of Quasi-Monte Carlo sequences for asymptotically efficient rare event simulation is examined. Various numerical illustrations are given.

### Abstract

In this paper we propose an improved quasi-Monte Carlo method for solving linear algebra problems. We show that by using low-discrepancy sequences both the convergence and the CPU time of the algorithm are improved. Two parallelization schemes using the Message Passing Interface, with static and dynamic load balancing, are proposed. The dynamic scheme is useful for computing in the GRID environment.

### Abstract

The paper considers dynamic systems with a distributed change of structure, where the structure number process is a Markov or a conditional Markov process. A statistical algorithm is constructed for the probabilistic analysis of such systems. It is based on numerical methods for the solution of stochastic differential equations (SDEs) and on the “maximum cross-section method”. Some results of numerical experiments are given.

### Abstract

The goal of this work is to use a particle-based simulation tool to perform a comparative study of two techniques used to calculate the small-signal response of semiconductor devices. Several GaAs and Si devices have been simulated in the frequency domain to derive their frequency dependent complex output impedance. Conclusions are drawn regarding the applicability and advantages of both approaches.

### Abstract

We consider the problem of strong approximations of the solution of Itô stochastic functional differential equations involving a distributed delay term. The mean-square consistency of a class of schemes, the θ-Maruyama methods, is analysed using an appropriate Itô formula. In particular, we investigate the consequences of the choice of a quadrature formula. Numerical examples illustrate the theoretical results.
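For orientation, here is a minimal sketch of a θ-Maruyama step for a plain (non-delay) linear test SDE; the paper's schemes additionally handle the distributed delay term and a quadrature formula, which this sketch omits. All names and parameter values are our own illustrative choices.

```python
import random

def theta_maruyama_linear(x0, lam, sigma, theta, t_end, n_steps, rng):
    """One path of the theta-Maruyama scheme for the linear test SDE
        dX = lam * X dt + sigma * X dW,
    with the drift taken implicitly with weight theta (theta = 0 is the
    explicit Euler-Maruyama method).  For a linear drift the implicit
    equation for X_{n+1} can be solved in closed form."""
    h = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, h ** 0.5)
        # X_{n+1} = X_n + h*((1 - theta)*lam*X_n + theta*lam*X_{n+1}) + sigma*X_n*dW
        x = (x + (1 - theta) * lam * x * h + sigma * x * dw) / (1 - theta * lam * h)
    return x

rng = random.Random(0)
# With sigma = 0 and theta = 1/2 the scheme is the deterministic trapezoidal
# rule for dx = -x dt, so the result is close to exp(-1) ≈ 0.3679.
x_det = theta_maruyama_linear(1.0, -1.0, 0.0, 0.5, 1.0, 100, rng)
```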

### Abstract

To better understand the capacity of Lagrangian Stochastic Models to simulate superdiffusive and subdiffusive tracer motion in oceanic turbulence, we examine their performance on a simple class of model velocity fields which support subdiffusive or superdiffusive regimes of tracer transport associated with power-law regions of the Lagrangian power spectrum. We focus on how well the Lagrangian Stochastic Models can replicate the subdiffusion and superdiffusion in these models when they are provided with exact Lagrangian information. This simple test reveals fundamental limitations in the type of subdiffusion and superdiffusion which a standard hierarchy of Lagrangian Stochastic Models is able to quantitatively approximate.

### Abstract

An approach to the approximate evaluation of mathematical expectations of nonlinear functionals of solutions of stochastic differential equations is developed; equations with jump components are included. The method is based on a functional integral representation of the expectations, on substituting suitable approximations for the processes, and on using approximate formulas of a given accuracy for the functional integrals.

### Abstract

The spectral test provides a reliable measure for lattice assessment and can be computed very efficiently. It has been extensively applied to find good lattices for several MC and QMC applications. In order to enable comparisons across dimensions, a normalized spectral test is widely used. We empirically demonstrate significant shortcomings of this normalization in high dimensions, discuss the empirical distribution of the normalized spectral test values, and propose a new normalization strategy. The new normalization is shown to give more reliable results, especially concerning the comparability of the values across dimensions.

### Abstract

Using a sequential Monte Carlo algorithm, we compute a spectral approximation of the solution of the Poisson equation in dimensions 1 and 2. The Feynman-Kac computation of the pointwise solution is achieved using either an integral representation or a modified walk on spheres method. The variances decrease geometrically with the number of steps. A global solution is obtained, accurate up to the interpolation error. Surprisingly, the accuracy depends very little on the absorption layer thickness of the walk on spheres.

### Abstract

The paper is devoted to the study of a modification of random walks on spheres in a finite domain G ⊂ ℝ^m, m ≥ 2. It is proved that the considered spherical process with shifted centres converges to the boundary of G very rapidly. Namely, the average number of steps before hitting the ε-neighborhood of the boundary is of order ln|ln ε| as ε → 0, instead of the standard order |ln ε|. Thus, the spherical process with shifted centres can be used effectively for the Monte Carlo solution of various problems of mathematical physics related to the Laplace operator.
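For context, this is a sketch of the standard (unshifted) walk on spheres that the shifted-centre modification accelerates, here for the Dirichlet problem in the unit disk; function names, boundary data and tolerances are our illustrative choices.

```python
import math, random

def walk_on_spheres_disk(x, y, g, eps, rng):
    """Standard walk-on-spheres estimator of the harmonic function with
    boundary data g on the unit circle, evaluated at (x, y): each step
    jumps to a uniform point on the largest circle centred at the current
    position that fits inside the unit disk, and the walk stops once it
    reaches the eps-neighborhood of the boundary."""
    while True:
        r = 1.0 - math.hypot(x, y)          # distance to the boundary
        if r < eps:
            norm = math.hypot(x, y)         # project onto the boundary
            return g(x / norm, y / norm)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(phi)
        y += r * math.sin(phi)

rng = random.Random(1)
# u(x, y) = x is harmonic, so the estimator should average to 0.3 here.
est = sum(walk_on_spheres_disk(0.3, 0.0, lambda bx, by: bx, 1e-3, rng)
          for _ in range(4000)) / 4000
```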

### Abstract

We study computation of the mean of sequences with values in finite dimensional normed spaces and compare the computational power of classical randomized with that of quantum algorithms for this problem. It turns out that in contrast to the known superiority of quantum algorithms in the scalar case, in high-dimensional L_p^M spaces classical randomized algorithms are essentially as powerful as quantum algorithms.

### Abstract

The Monte Carlo method called “random walks on boundary” has been successfully used for solving boundary-value problems. This method has significant advantages compared with random walks on spheres, balls or discrete grids when an exterior Dirichlet or Neumann problem is solved, or when we are interested in computing the solution to a problem at an arbitrary number of points using a single random walk. In this paper we investigate ways:

- to increase the convergence rate of this method by using quasirandom sequences instead of pseudorandom numbers for the construction of the boundary walks,
- to find an efficient parallel implementation of this method on a cluster using MPI.

In our parallel implementation we use disjoint contiguous blocks of quasirandom numbers, extracted from a given quasirandom sequence, for each processor. In this case, the increased convergence rate does not come at the cost of less trustworthy answers. We also present some numerical examples confirming both the increased rate of convergence and the good parallel efficiency of the method.
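The "disjoint contiguous blocks" strategy can be sketched without MPI as follows (our own illustration, using a van der Corput sequence rather than the paper's boundary walks): each rank consumes its own contiguous index range of the quasirandom sequence, so the combined estimate coincides with the serial one.

```python
def van_der_corput(n, base=2):
    """Radical inverse of n in the given base (n-th van der Corput point)."""
    q, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

def block_partial_sum(f, rank, block_size):
    """Partial sum over the disjoint contiguous block of quasirandom
    indices [rank*block_size + 1, (rank+1)*block_size] assigned to one
    process (an MPI rank, simulated here sequentially)."""
    start = rank * block_size
    return sum(f(van_der_corput(i)) for i in range(start + 1, start + block_size + 1))

# With 4 simulated processes the combined estimate coincides with the
# serial one, so the parallel split costs nothing in accuracy.
n_proc, block = 4, 1024
parallel = sum(block_partial_sum(lambda x: x * x, r, block) for r in range(n_proc))
serial = sum(van_der_corput(i) ** 2 for i in range(1, n_proc * block + 1))
```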

### Abstract

New algorithms for efficient trajectory splitting are presented. These techniques are derived from randomized quasi-Monte Carlo integration by applying parameterized replication techniques to only selected dimensions of the integrand.

### Abstract

Based on a duality approach for Monte Carlo construction of upper bounds for American/Bermudan derivatives (Rogers, Haugh & Kogan), we present a new algorithm for computing dual upper bounds in a more efficient way. The method is applied to Bermudan swaptions in the context of a LIBOR market model, where the dual upper bound is constructed from the maximum of still alive swaptions. We give a numerical comparison with Andersen's lower bound method.

### Abstract

This work deals with a stochastic unconfined aquifer flow simulation in statistically isotropic saturated porous media. This approach is a generalization of the 3D model we developed in [13]. In this paper we deal with a 2D model obtained via depth-averaging of the 3D model. The average hydraulic conductivity is assumed to be a random field with a lognormal distribution. Assuming the fluctuations in the hydraulic conductivity to be small we construct a stochastic Eulerian model for the flow as a Gaussian random field with a spectral tensor of a special structure derived from Darcy's law. A randomized spectral representation is then used to simulate this random field. A series of test calculations confirmed the high accuracy and computational efficiency of the method.

### Abstract

The Wigner equation is well suited for numerical modeling of quantum electronic devices. In this work, the stationary, position-dependent Wigner equation is considered. Carrier scattering is described semi-classically by the Boltzmann collision operator. The development of Monte Carlo algorithms is complicated by the fact that, as opposed to the semiclassical case, the integral kernel is no longer positive semi-definite. Particle models are presented which interpret the potential operator as a generation term of numerical particles of positive and negative statistical weight. The problem arising from the avalanche of numerical particles is thereby solved for the steady state. When constructing the algorithms particular emphasis has been put on the conservation laws implied by the Wigner equation. If particles of opposite sign are generated pairwise, charge is conserved exactly. If the free-flight time is reduced such that only one particle is generated each time, then the sign of the particle weight is selected randomly, and charge is conserved only on average.

### Abstract

Subgrid modeling of a filtration flow of a fluid in an inhomogeneous porous medium is considered. An expression for the effective permeability coefficient for the large-scale component of the flow is derived using a scale-invariance hypothesis. The permeability coefficient possesses a log-stable distribution.

### Abstract

The Monte Carlo (MC) method is probably the most widespread simulation technique due to its ease of use. Quasi-Monte Carlo (QMC) methods have been designed to speed up the convergence rate of MC, but their implementation requires more stringent assumptions. For instance, the direct QMC simulation of Markov chains is inefficient due to the correlation of the points used. We propose here to survey the QMC-based methods that have been developed to tackle the QMC simulation of Markov chains. Most of these methods are hybrid MC/QMC methods. We compare them with a recently developed pure QMC method and illustrate the faster convergence of the latter.

### Abstract

In this article, we present two Monte Carlo methods to solve some problems related to Darcy's law in geophysics. Neither method requires any discretization, and both are exact.

### Abstract

There are situations in the framework of quasi-Monte Carlo integration where nonuniform low-discrepancy sequences are required. Using the inversion method for this task usually results in the best performance in terms of the integration errors. However, this method requires a fast algorithm for evaluating the inverse of the cumulative distribution function which is often not available. Then a smoothed version of transformed density rejection is a good alternative as it is a fast method and its speed hardly depends on the distribution. It can easily be adjusted such that it is almost as good as the inversion method. For importance sampling it is even better to use the hat distribution as importance distribution directly. Then the resulting algorithm is as good as using the inversion method for the original importance distribution but its generation time is much shorter.
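A minimal sketch of the inversion method the abstract compares against: applying the inverse of the cumulative distribution function to a low-discrepancy sequence yields a nonuniform low-discrepancy sample. The exponential distribution is our illustrative choice, since its inverse CDF has a simple closed form.

```python
import math

def van_der_corput(n, base=2):
    """Radical inverse of n in the given base (n-th van der Corput point)."""
    q, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

def exponential_inverse_cdf(u, rate):
    """Inverse CDF of the exponential distribution: the transformation
    applied by the inversion method to a uniform point u."""
    return -math.log(1.0 - u) / rate

# Applying the inverse CDF to a low-discrepancy sequence gives a nonuniform
# low-discrepancy sample; its mean approaches 1/rate = 0.5.
n = 4096
sample = [exponential_inverse_cdf(van_der_corput(i), 2.0) for i in range(1, n + 1)]
mean = sum(sample) / n
```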

### Abstract

In order to take into account uncertainties in the values of the hydro-geological parameters of the rock hosting a deep geological repository, probabilistic methods are used in the risk assessment of radioactive waste repositories. Random generators are globally invoked twice in an adjoint Monte Carlo (AMC) simulation: once for sampling hydro-geological parameters from known probability density functions (pdf), and then, for each selected set of parameters, to simulate random walks for the evaluation of contaminant concentrations. With a moderate number of random walks (batch size), the AMC method is efficient for computing mean values of concentrations. However, the higher moments of the concentration distribution and the distribution tails are in general not evaluated accurately. To cope with these shortcomings, we propose an adaptive AMC method in which the batch size is dynamically increased. The new approach is applied to the accurate assessment of the probability of exceeding some imposed critical concentrations.

### Abstract

In this paper we propound a non-destructive and non-invasive technique for measuring the depth of water penetration in a homogeneous slab. The proposed device consists of a ²⁵²Cf spontaneous fission source and a set of ³He detectors located on the same side of a concrete slab subject to water infiltration. The effectiveness of this technique has been investigated by Monte Carlo simulation with the MCNP4C code.

### Abstract

The quantitative analysis of the reliability and availability of an engineered system can be carried out analytically only for simple systems and under simplifying assumptions, such as time-homogeneity in the Poisson processes governing the components' failure and repair behaviours. However, real systems are complex, the failure and repair behaviours of the components of a system may be quite different, and time-dependencies may become significant in the various phases of the components' life. The most suitable method to account for time-dependent failure and repair behaviours of components is direct Monte Carlo simulation, which amounts to sampling, from the corresponding distributions, one occurrence time for each possible transition of each component from its present state. These times are then ordered in a master schedule, and the actually occurring transition is the one corresponding to the shortest time. If the new system configuration belongs to a cut set, which makes the system fail, the occurring failure is recorded in appropriate counters. Otherwise, the simulation of the system trial proceeds by sampling a new transition time for the component which has undergone the transition, and the master schedule is then updated accordingly. In this paper, the realistic time behaviours of components' failure and repair rates are described by means of various distributions of the transition times, such as the exponential, the Weibull and the normal distributions. The effects of these different modelling hypotheses are examined. Moreover, an efficient biasing technique is introduced to guide the system to generate several realizations of failure, which would otherwise constitute a rather rare event.
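A minimal sketch of the master-schedule scheme described above, for a two-component redundant system with a Weibull time-to-failure and an exponential time-to-repair (the distributions, parameters and function names here are our arbitrary choices, not the paper's): each component keeps one sampled transition time in the schedule, the earliest event fires, and only that entry is resampled.

```python
import random

def next_time(state_up, rng):
    """Sample one occurrence time (a duration) for the next transition of
    a component: Weibull time-to-failure if up, exponential repair time
    if down."""
    return rng.weibullvariate(10.0, 2.0) if state_up else rng.expovariate(1.0)

def simulate_parallel_system(t_end, rng):
    """Direct Monte Carlo of a two-component redundant system: the master
    schedule holds one sampled transition time per component, the earliest
    event fires, that component flips state, and only its schedule entry
    is resampled.  Returns the fraction of time both components were down
    (the system's only cut set)."""
    t, up = 0.0, [True, True]
    schedule = [next_time(up[0], rng), next_time(up[1], rng)]
    down_time = 0.0
    while t < t_end:
        i = 0 if schedule[0] <= schedule[1] else 1      # earliest event
        t_next = min(schedule[i], t_end)
        if not (up[0] or up[1]):                        # both components down
            down_time += t_next - t
        t = t_next
        if t >= t_end:
            break
        up[i] = not up[i]
        schedule[i] = t + next_time(up[i], rng)
    return down_time / t_end

rng = random.Random(7)
unavailability = simulate_parallel_system(200000.0, rng)
```

With these parameters each component's stationary unavailability is about 0.101, so the fraction of time both are down should be near 0.101² ≈ 0.010.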

### Abstract

The Halton sequence is one of the standard low-discrepancy sequences (along with (t, s)-sequences and lattice points), and thus is widely used in quasi-Monte Carlo applications. One of its important advantages is that the Halton sequence is easy to implement due to its definition via the radical inverse function. However, the original Halton sequence suffers from correlations between radical inverse functions with different bases used for different dimensions. These correlations result in poorly distributed two-dimensional projections. A standard solution is to use a randomized (scrambled) version of the Halton sequence. Here, we analyze the correlations in the standard Halton sequence, and based on this analysis propose a new and simpler modified scrambling algorithm. We also provide a number-theoretic criterion to choose the optimal scrambling from among a large family of random scramblings. Based on this criterion, we have found the optimal scrambling for up to 60 dimensions of the Halton sequence. This derandomized Halton sequence is then numerically tested and shown empirically to be far superior to the original sequence.
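The radical inverse definition, and the digit-permutation form of scrambling the abstract discusses, can be sketched as follows (a generic illustration; the optimal permutations found in the paper are not reproduced here):

```python
def radical_inverse(n, base, perm=None):
    """Radical inverse of n in `base`: reflect the base-`base` digits of n
    about the radix point.  An optional digit permutation `perm` yields a
    deterministically scrambled Halton coordinate."""
    q, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        if perm is not None:
            digit = perm[digit]
        denom *= base
        q += digit / denom
    return q

def halton(n, bases=(2, 3)):
    """n-th point of the (unscrambled) Halton sequence: one radical
    inverse per dimension, each with its own prime base."""
    return tuple(radical_inverse(n, b) for b in bases)
```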

### Abstract

The study of chord length distributions across various kinds of geometrical shapes, including stochastic mixtures, is a topic of great interest in many research fields ranging from ecology to neutronics. We have tried here to draw links between theoretical results and actual simulations for simple objects like disks, circular rings, spheres and hollow spheres, as well as for random media consisting of stochastic mono- or polydisperse sphere packings (three different packing algorithms were tested). The Monte Carlo simulations performed for simple objects fit the theoretical formulas perfectly. For stochastic binary mixtures the simulations were still in rather good agreement with known analytical results.

### Abstract

The standard stochastic approach for the simulation of carrier transport described by the Wigner equation introduces hard computational requirements. Averaged values of physical quantities are evaluated by means of numerical trajectories which accumulate statistical weight. The weight can take large positive and negative values, which introduces a large variance in the calculations. Aiming at variance reduction, we utilize the idea of splitting the weight so that a part is assigned to the trajectory and a part is left on a phase space grid for future processing. Formally this corresponds to a splitting of the kernel of the Wigner equation into two components. An operator equation is derived which couples the two kernel components and shows how to further process the stored weight. The obtained Monte Carlo algorithm resembles a physical process of generation or annihilation of positive and negative particles. Variance reduction is achieved due to the partial annihilation of particles with opposite sign. Simulation results for a resonant-tunneling diode are presented.

### Abstract

The paper is devoted to a stochastic solution of balance differential equations for measures. Using the technique developed by N. Golyandina and V. Nekrutkin [Monte Carlo Methods Appl. 5 (1999), No. 3], we show that a regular-grid version of a stochastic Euler method has asymptotically smaller variance than the corresponding Poisson-grid version.

### Abstract

A quantum mechanical extension of the full band ensemble Monte Carlo (MC) simulation method is presented. The new approach goes beyond the traditional semi-classical method generally used in MC simulations of charge transport in semiconductor materials and devices. The extension is necessary in high-field simulations of semiconductor materials with a complex unit cell, such as the hexagonal SiC polytypes or wurtzite GaN. Instead of complex unit cells the approach can also be used for super-cells, in order to understand charge transport at surfaces, around point defects, or in quantum wells.

### Abstract

Integral equations with Lipschitz kernels and right-hand sides are intractable for deterministic methods: the complexity increases exponentially in the dimension d. This is true even if we only want to compute a single function value of the solution. For this latter problem we study coin tossing algorithms (or restricted Monte Carlo methods), where only random bits are allowed. We construct a restricted Monte Carlo method with error ε that uses roughly ε⁻² function values and only d log²ε random bits. The number of arithmetic operations is of order ε⁻² + d log²ε. Hence, the cost of our algorithm increases only mildly with the dimension d, and we obtain the upper bound C · (ε⁻² + d log²ε) for the complexity. In particular, the problem is tractable for coin tossing algorithms.

### Abstract

The optimal coefficients in the sense of Korobov serve for obtaining good lattice point sets. By using these sets for multidimensional numerical integration, the numerical solution of integral equations and other related applications, the relative error of the corresponding quasi-Monte Carlo algorithms can be kept relatively small. This study deals with finding optimal coefficients of good lattice points for high-dimensional problems in weighted Sobolev and Korobov spaces. Two quasi-Monte Carlo algorithms for boundary value problems are proposed and analyzed. For the first of them, the coefficients that characterize the good lattice points are found “component-by-component”: the (k + 1)st coefficient is obtained by a one-dimensional search with all previous k coefficients kept unchanged. For the second algorithm, the coefficients, which depend on a single parameter, are found in Korobov's form. Some numerical experiments are made to illustrate the obtained results.

### Abstract

Sequences of points with a low discrepancy are the basic building blocks of quasi-Monte Carlo methods. Traditionally these points are generated in a unit cube. Not much theory exists on generating low-discrepancy point sets on other domains, for example a simplex. We introduce a variation and a star discrepancy for the simplex and derive a Koksma-Hlawka inequality for point sets on the simplex.
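For context, one simple and standard way to carry unit-cube points to the standard simplex (a generic construction, not necessarily the one studied in the paper): the spacings of a cube point's sorted coordinates are uniform on the simplex whenever the cube point is uniform, so the map can also be applied to a low-discrepancy cube sequence.

```python
def cube_to_simplex(point):
    """Map a point of the d-dimensional unit cube to the standard simplex
    {y_i >= 0, sum(y) <= 1} via the spacings of its sorted coordinates;
    if the input is uniform on the cube, the image is uniform on the
    simplex."""
    u = sorted(point)
    return tuple(b - a for a, b in zip([0.0] + u[:-1], u))

# (0.7, 0.2) sorts to (0.2, 0.7); the spacings are (0.2, 0.5).
y = cube_to_simplex((0.7, 0.2))
```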

### Abstract

In this paper we propose a novel Weighted Monte Carlo scheme which overcomes the intrinsic limitations of the conventional Monte Carlo method in describing the electro-optical response of some semiconductor-based quantum devices (e.g., huge photocurrent fluctuations in photodetectors). More specifically, to avoid potential numerical instabilities of existing weighted Monte Carlo approaches at long simulation times, we derive a version of the method specifically designed for the study of the steady-state regime. The latter, based again on the particle-counting paradigm, turns out to be statistically stable in the long-time limit as well.

### Abstract

On the basis of a variational principle, a method for the dynamic-probabilistic numerical modeling of ensembles of independent samples of a family of space-time stochastic fields is proposed. The ensemble of samples satisfies the statistical structure of the real fields, and each sample of this ensemble satisfies the numerical dynamic model.

### Abstract

We consider the Neumann boundary-value problem for a nonlinear Helmholtz equation. Using Green's formula, the problem can be converted into solving a nonlinear integral equation with a polynomial nonlinearity. This equation can be solved numerically using a Monte Carlo method based on “branching random walks” occurring on specially defined domains, with the parameters of the branching process depending on the coefficients of the integral equation. In this paper, we study the properties of this method when using quasirandom instead of pseudorandom numbers to construct the branching random walks. Theoretical estimates of the convergence rate have been investigated, and numerical experiments with a model problem were also performed.

### Abstract

A new general stochastic-deterministic approach for the numerical solution of boundary value problems of potential and elasticity theories is suggested. It is based on the use of Poisson-like integral formulae for overlapping spheres. An equivalent system of integral equations is derived and then approximated by a system of linear algebraic equations. We develop two classes of special Monte Carlo iterative methods for solving these systems of equations, which are stochastic versions of the Chebyshev iteration method and of the successive overrelaxation method (SOR). In the case of classical potential theory this approach accelerates the convergence of the well-known Random Walk on Spheres method (RWS). What is much more important, however, is that this approach provides a first construction of a fast convergent finite-variance Monte Carlo method for the system of Lamé equations.

### Abstract

Random walk solutions are commonly used to solve Fredholm equations of the second kind in various linear transport problems such as neutron transport and light transport for photorealistic computer image synthesis. However, they have the drawback that many paths have to be simulated before an acceptable solution is obtained. Often in such applications, the solution is needed at many nearby locations in state space. We present in this talk a technique to re-use random walks for computing results at different locations where a solution is needed. This technique, which was previously introduced by the authors in the context of ray-tracing, can dramatically reduce computation times by amortizing the cost of tracing each random walk over a set of neighboring locations. We will present the technique as a general unbiased estimator for second kind Fredholm equations first. Next, applications in the field of image synthesis are presented, as well as lines for future research.

### Abstract

In still image compression, the wavelet transform is a state-of-the-art tool. Recently, wavelet packet decomposition has received considerable interest. One popular approach to wavelet packet decomposition is the near-best basis algorithm of Taswell. We extend his set of non-additive cost functions by measures of uniform distribution, i.e. discrepancy and probability distribution distance. In contrast to the usual application of these measures, we are interested in point sets that are maximally non-uniformly distributed, in order to enhance entropy encoding.

### Abstract

In this paper we describe a Monte Carlo method for permeability calculations in complex digitized porous structures. The relation between the permeability and the diffusion penetration depth is established. The corresponding Dirichlet boundary value problem is solved by random walk algorithms. The results of computational experiments for some random models of porous media confirm the log-normality hypothesis for the permeability distribution.

### Abstract

Monte Carlo and quasi-Monte Carlo methods are simulation techniques designed, for instance, to estimate integrals efficiently. Quasi-Monte Carlo asymptotically outperforms Monte Carlo, but its error can hardly be estimated. We recall here how hybrid Monte Carlo/quasi-Monte Carlo methods have been developed to obtain error estimates easily, with special emphasis on the so-called randomly shifted low-discrepancy sequences. Two additional points are investigated: we illustrate that the convergence rate is not always improved with respect to Monte Carlo, and we discuss the confidence interval coverage problem.
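The random-shift idea behind such hybrid methods is simple enough to sketch: apply independent uniform shifts (mod 1) to one low-discrepancy point set, and use the spread of the shifted estimates as an error estimate. The following minimal illustration uses a rank-1 lattice with a Fibonacci generating vector, chosen here purely for demonstration.

```python
import math
import random

def shifted_lattice_estimate(f, n, z, shift):
    # QMC estimate with one random shift of a rank-1 lattice rule:
    # points x_i = (i * z / n + shift) mod 1, for i = 0 .. n-1.
    total = 0.0
    for i in range(n):
        x = tuple((i * zj / n + sj) % 1.0 for zj, sj in zip(z, shift))
        total += f(x)
    return total / n

def randomized_qmc(f, n, z, dim, m_shifts=20, seed=3):
    # m independent random shifts give an unbiased estimate together
    # with a sample-based standard error, unavailable for plain QMC.
    rng = random.Random(seed)
    ests = [shifted_lattice_estimate(
                f, n, z, [rng.random() for _ in range(dim)])
            for _ in range(m_shifts)]
    mean = sum(ests) / m_shifts
    var = sum((e - mean) ** 2 for e in ests) / (m_shifts - 1)
    return mean, math.sqrt(var / m_shifts)   # estimate, std. error

# \int_{[0,1]^2} x*y dx dy = 0.25; Fibonacci lattice n=987, z=(1, 610)
mean, stderr = randomized_qmc(lambda x: x[0] * x[1], 987, (1, 610), 2)
```

The coverage question raised in the abstract is whether intervals of the form mean ± c·stderr attain their nominal probability, which is not guaranteed for the small number of shifts typically used.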

### Abstract

The ultimate limits in the scaling of conventional MOSFET devices have led researchers from all over the world to look for novel device concepts, such as dual-gate SOI devices, FinFETs, focused ion beam MOSFETs, etc. These novel devices suppress some of the short-channel effects exhibited by conventional MOSFET devices. However, many of the old issues remain and new issues begin to appear. For example, in both dual-gate MOSFETs and FinFET devices, quantum mechanical size quantization effects significantly affect the overall device behavior. In addition, unintentional doping leads to considerable fluctuation in the device parameters, and electron-electron interactions affect the thermalization of the carriers at the drain end of the device. In this work we investigate the role of a single impurity in the operation of narrow-width SOI devices. Our investigations suggest that impurities near the middle portion of the source end of the channel have the most significant impact on the device drive current. Regarding the electron-electron interactions, we find that they affect the carrier velocity near the drain end of the channel. Note that in our 3D Monte Carlo particle-based device simulator we have implemented two schemes that properly account for the short-range electron-ion and electron-electron interactions: the corrected Coulomb approach and the P³M method.

### Abstract

In this paper a variance-reduction technique for Monte Carlo reliability analysis, named Dagger Sampling, is extended to deal with components which may fail in more than one mode. Particular attention is given to the numerical implementation of the procedure, with respect to both memory requirements and computational burden, which is found to play a crucial role in the overall efficiency of the method. An application is provided in the context of the reliability assessment of a nuclear safety system.
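The single-mode version of dagger sampling that the paper extends can be sketched as follows: for a component with failure probability p per trial, one uniform draw decides in which of n = ⌊1/p⌋ packed trials the component fails (possibly none), instead of n independent Bernoulli draws. This is a hedged illustration of our understanding of the basic scheme; the paper's multi-mode extension is not reproduced.

```python
import random

def dagger_failure_pattern(p, rng=random):
    # Dagger sampling for one component with failure probability p:
    # partition [0, n*p) into n = floor(1/p) sub-intervals of length p;
    # a single uniform draw selects the (at most one) trial in which
    # the component fails, inducing negative correlation across trials.
    n = int(1.0 / p)
    u = rng.random()
    fails = [False] * n
    k = int(u / p)            # index of the sub-interval u falls into
    if k < n:                 # u beyond the last interval: no failure
        fails[k] = True
    return fails

# Over many packs, the per-trial failure frequency should approach p.
rng = random.Random(11)
p = 0.15
count = trials = 0
for _ in range(5000):
    pattern = dagger_failure_pattern(p, rng)
    count += sum(pattern)
    trials += len(pattern)
freq = count / trials
```

Because at most one failure occurs per pack of trials, the sampled trials are negatively correlated, which is the source of the variance reduction.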