Papers by Christine Choirat
Lecture Notes in Computer Science, 2004
The objective of this paper is to develop a set of reliable methods to build confidence sets for the Aumann mean of a random closed set estimated through the Minkowski empirical mean. In order to do so, we introduce a procedure to build a confidence set based on Weil's result for the Hausdorff distance between the empirical and the Aumann means; then, we introduce another procedure based on the support function.
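In the one-dimensional case the objects involved are easy to compute: a random closed interval has Minkowski empirical mean equal to the interval of averaged endpoints, and the Hausdorff distance between two intervals is the larger of the two endpoint gaps. A minimal sketch of these two ingredients (the interval model and sample below are illustrative, not taken from the paper):

```python
import numpy as np

def minkowski_mean(intervals):
    """Minkowski (element-wise) average of intervals given as rows [a_i, b_i]."""
    return np.asarray(intervals, dtype=float).mean(axis=0)

def hausdorff_interval(I, J):
    """Hausdorff distance between compact intervals: the larger endpoint gap."""
    (a, b), (c, d) = I, J
    return max(abs(a - c), abs(b - d))

rng = np.random.default_rng(0)
# toy random set: the interval [U, U + 1] with U uniform on [0, 1];
# its Aumann mean is [0.5, 1.5]
u = rng.uniform(0.0, 1.0, size=10_000)
sample = np.column_stack([u, u + 1.0])
emp = minkowski_mean(sample)                # close to [0.5, 1.5]
err = hausdorff_interval(emp, (0.5, 1.5))   # shrinks at the usual n^(-1/2) rate
```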
Monte Carlo and Quasi-Monte Carlo Methods 2004, 2006
The objective is to develop a reliable method to build confidence sets for the Aumann mean of a random closed set as estimated through the Minkowski empirical mean. First, a general definition of the confidence set for the mean of a random set is provided. Then, a method using a characterization of the confidence set through the support function is proposed and a bootstrap algorithm is described, whose performance is investigated in Monte Carlo simulations.
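For intervals, the support function takes only two values (h(1) is the upper endpoint and h(-1) the negated lower endpoint), so the sup-distance between support functions is the Hausdorff distance and a bootstrap confidence radius can be sketched in a few lines. This is illustrative only; `n_boot`, the resampling scheme, and the interval model are our assumptions, not the paper's algorithm:

```python
import numpy as np

def bootstrap_radius(sample, n_boot=500, level=0.95, seed=1):
    """Bootstrap quantile of the sup-distance between the support function of
    the empirical Minkowski mean and that of a resampled mean (interval case,
    where this sup-distance coincides with the Hausdorff distance)."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    emp = sample.mean(axis=0)
    dists = np.empty(n_boot)
    for b in range(n_boot):
        boot = sample[rng.integers(0, n, size=n)].mean(axis=0)
        dists[b] = np.max(np.abs(boot - emp))
    return float(np.quantile(dists, level))

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=2_000)
sample = np.column_stack([u, u + 1.0])
r = bootstrap_radius(sample)  # radius of a confidence ball around the empirical mean
```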
Quantifying uniformity of a configuration of points on the sphere is an interesting topic that is receiving growing attention in numerical analysis. An elegant solution has been provided by Cui and Freeden [J. Cui, W. Freeden, Equidistribution on the sphere, SIAM J. Sci. Comput. 18 (2) (1997) 595-609], where a class of discrepancies, called generalized discrepancies and originally associated with pseudodifferential operators on the unit sphere in R^3, has been introduced. The objective of this paper is to extend this class of discrepancies to the sphere of arbitrary dimension and to study their numerical properties. First we show that generalized discrepancies are diaphonies on the hypersphere. This allows us to completely characterize the sequences of points for which convergence to zero of these discrepancies takes place. Then we discuss the worst-case error of quadrature rules and we derive a result on tractability of multivariate integration on the hypersphere. Finally we provide several versions of Koksma-Hlawka type inequalities for integration of functions defined on the sphere.
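A diaphony aggregates squared spherical-harmonic averages of the point configuration; its lowest-degree contribution is simply the squared norm of the mean direction, which already separates clustered from well-spread point sets. A toy sketch of that single term (not the full generalized discrepancy of Cui and Freeden):

```python
import numpy as np

def degree_one_term(points):
    """Squared norm of the average direction of unit vectors: the degree-1
    contribution to a spherical diaphony (up to a constant factor).
    It tends to zero along any equidistributed sequence."""
    X = np.asarray(points, dtype=float)
    return float(np.sum(X.mean(axis=0) ** 2))

rng = np.random.default_rng(0)
g = rng.standard_normal((20_000, 3))
uniform_pts = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform on S^2
clustered_pts = np.tile([0.0, 0.0, 1.0], (100, 1))          # all at the north pole
```

For the uniform sample the term is close to zero, while the fully clustered configuration gives the maximal value 1.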
Stochastic Processes and Their Applications, 2010
In this paper, we prove a new version of the Birkhoff Ergodic Theorem (BET) for random variables depending on a parameter (alias integrands). This involves variational convergences, namely epigraphical, hypographical and uniform convergence, and requires a suitable definition of the conditional expectation of integrands. We also have to establish the measurability of the epigraphical lower and upper limits with respect to the σ-field of invariant subsets. From the main result, applications to uniform versions of the BET, to sequences of random sets and to the strong consistency of estimators are briefly derived.
Stochastic Processes and their Applications, 2010
We first establish a general version of the Birkhoff Ergodic Theorem for quasi-integrable extended real-valued random variables without assuming ergodicity. The key argument involves the Poincaré Recurrence Theorem. Our extension of the Birkhoff Ergodic Theorem is also shown to hold for asymptotic mean stationary sequences. This is formulated in terms of necessary and sufficient conditions. In particular, we examine the case where the probability space is endowed with a metric and we discuss the validity of the Birkhoff Ergodic Theorem for continuous random variables. The interest of our results is illustrated by an application to the convergence of statistical transforms, such as the moment generating function or the characteristic function, to their theoretical counterparts.
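The closing application can be illustrated concretely: the empirical moment generating function is an ergodic average of exp(tX), so a Birkhoff-type theorem gives its almost-sure convergence to the population MGF. A minimal i.i.d. sketch (the standard Gaussian example is ours):

```python
import numpy as np

def empirical_mgf(sample, t):
    """Empirical moment generating function M_n(t) = (1/n) sum exp(t * X_i),
    an ergodic average of the integrand x -> exp(t * x)."""
    return float(np.mean(np.exp(t * np.asarray(sample, dtype=float))))

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)
t = 0.5
approx = empirical_mgf(x, t)
exact = float(np.exp(t**2 / 2))  # MGF of N(0, 1) evaluated at t
```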
Most treatments of the model selection problem are either restricted to special situations (lag selection in AR, MA or ARMA models, regression selection, selection of a model out of a nested sequence) or to special selection methods (selection through testing or penalization). Our aim is to provide some basic tools for the analysis of model selection as a statistical decision problem, independently of the situation and of the method used. In order to achieve this objective, we embed model selection in the theoretical decision framework offered by modern Decision Theory. This allows us to obtain simple conditions under which pairwise comparison of models and penalization of objective functions arise naturally from preferences defined on the collection of statistical models under scrutiny. As a major application of our framework, we derive necessary and sufficient conditions for an information criterion to satisfy in the case of independent and identically distributed realizations ...
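The penalization side of such a framework has a generic statement: a criterion attaches to each candidate model its maximized objective minus a complexity penalty, and selection is an argmax over models. A schematic sketch, where the penalty constants are the familiar AIC/BIC choices used purely as examples (the log-likelihood values are made up):

```python
import numpy as np

def select_model(logliks, dims, penalty_per_dim):
    """Generic penalized selection: pick the model maximizing
    loglik_m - penalty_per_dim * dim_m.  penalty_per_dim = 1 is AIC-like
    and penalty_per_dim = log(n) / 2 is BIC-like (up to the usual factor 2)."""
    scores = np.asarray(logliks, dtype=float) - penalty_per_dim * np.asarray(dims, dtype=float)
    return int(np.argmax(scores))

# three nested models: the fit improves slightly with dimension
logliks = [-115.0, -112.0, -111.5]
dims = [1, 3, 6]
n = 100
aic_choice = select_model(logliks, dims, 1.0)             # light penalty, rewards fit
bic_choice = select_model(logliks, dims, np.log(n) / 2)   # heavier penalty, favors parsimony
```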
Mathematics of Computation, 2013
In this paper, we derive the asymptotic statistical properties of a class of generalized discrepancies introduced by Cui and Freeden (SIAM J. Sci. Comput., 1997) to test equidistribution on the sphere. We show that they have highly desirable properties and encompass several statistics already proposed in the literature. In particular, it turns out that the limiting distribution is an (infinite) weighted sum of chi-squared random variables. Issues concerning the approximation of this distribution are considered in detail and explicit bounds for the approximation error are given. The statistics are then applied to assess the equidistribution of Hammersley low discrepancy sequences on the sphere and the uniformity of a dataset concerning magnetic orientations.
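The limiting law is an infinite weighted sum of independent chi-squared(1) variables; truncating to finitely many terms and simulating is the simplest route to its quantiles. A sketch under illustrative weights (the actual weights come from the eigenvalues of the underlying operator and are not reproduced here):

```python
import numpy as np

def weighted_chisq_draws(weights, size, seed=0):
    """Monte Carlo draws from sum_k w_k * Z_k^2 with Z_k iid N(0, 1):
    a finite truncation of the weighted chi-squared limit."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    Z = rng.standard_normal((size, len(w)))
    return (Z**2) @ w

w = 1.0 / np.arange(1, 51) ** 2          # illustrative summable weights
draws = weighted_chisq_draws(w, 100_000)
mean_th = w.sum()                         # mean of the truncated limit
var_th = 2.0 * np.sum(w**2)               # variance of the truncated limit
```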
Management Science, 2010
The analytic hierarchy process (AHP) is a decision-making procedure widely used in management for establishing priorities in multicriteria decision problems. Underlying the AHP is the theory of ratio-scale measures developed in psychophysics since the middle of the last century. It is, however, well known that classical ratio-scaling approaches have several problems. We reconsider the AHP in the light of the modern theory of measurement based on the so-called separable representations recently axiomatized in mathematical psychology. We provide various theoretical and empirical results on the extent to which the AHP can be considered a reliable decision-making procedure in terms of the modern theory of subjective measurement.
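The classical AHP step under scrutiny is easy to state: priorities are the normalized principal eigenvector of the positive reciprocal pairwise-comparison matrix. A minimal sketch of that step (the 3x3 example is ours; the paper's contribution concerns the measurement theory behind the matrix entries, not this computation):

```python
import numpy as np

def ahp_priorities(A):
    """Priority weights: normalized principal eigenvector of a positive
    reciprocal pairwise-comparison matrix (Saaty's eigenvector method)."""
    A = np.asarray(A, dtype=float)
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

# perfectly consistent comparisons generated from known weights
w = np.array([0.6, 0.3, 0.1])
A = np.outer(w, 1.0 / w)   # A[i, j] = w_i / w_j, hence A[j, i] = 1 / A[i, j]
p = ahp_priorities(A)      # recovers w up to rounding
```

With inconsistent judgments the eigenvector no longer reproduces any underlying weight vector exactly, which is where the measurement-theoretic questions raised in the paper enter.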
Journal of Mathematical Psychology, 2011
Journal of Mathematical Psychology, 2008
arXiv preprint arXiv: …, 2010
As in the Koksma-Hlawka inequality, D({z_j}) is a sort of discrepancy of the sampling points and V(f) is a generalized variation of the function. In particular, we give sharp quantitative estimates for quadrature rules of functions in Sobolev classes. ... In what follows, M is a smooth ...
The Annals of Probability, 2003
In this paper, we prove a new version of the Birkhoff ergodic theorem (BET) for random variables depending on a parameter (alias integrands). This involves variational convergences, namely epigraphical, hypographical and uniform convergence and requires a suitable definition ...
In this paper, we compare the error in several approximation methods for the cumulative aggregate claim distribution customarily used in the collective model of insurance theory. In this model, it is usually supposed that a portfolio is at risk for a time period of length t. The occurrences of the claims are governed by a Poisson process of intensity μ so that the number of claims in [0, t] is a Poisson random variable with parameter λ = μt. Each single claim is an independent replication of the random variable X, representing the claim severity. The aggregate claim or total claim amount process in [0, t] is represented by the random sum of N independent replications of X, whose cumulative distribution function (cdf) is the object of study. Due to its computational complexity, several approximation methods for this cdf have been proposed. In this paper, we consider 15 approximations put forward in the literature that only use information on the lower order moments of the involved distributions. For each approximation, we consider the difference between the true distribution and the approximating one and we propose to use expansions of this difference related to Edgeworth series to measure their accuracy as λ = μt diverges to infinity. Using these expansions, several statements concerning the quality of approximations for the distribution of the aggregate claim process can find theoretical support. Other statements can be disproved on the same grounds. Finally, we investigate numerically the accuracy of the proposed formulas.
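The distribution being approximated can also be simulated directly, which is how closed-form approximations are typically benchmarked. A Monte Carlo sketch of the compound Poisson total claim, with an Exp(1) severity as an illustrative choice of claim-size law:

```python
import numpy as np

def simulate_aggregate_claims(lam, n_sim, seed=0):
    """Simulate S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i iid
    Exp(1) severities (illustrative choice of claim-size distribution)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, size=n_sim)
    return np.array([rng.exponential(1.0, size=n).sum() for n in counts])

lam = 50.0
S = simulate_aggregate_claims(lam, 20_000)
# compound Poisson moments: E[S] = lam * E[X] = 50, Var[S] = lam * E[X^2] = 100
mean_th, var_th = lam * 1.0, lam * 2.0
```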