Statistical Learning Theory and Stochastic Optimization [electronic resource] : École d'Été de Probabilités de Saint-Flour XXXI - 2001 / by Olivier Catoni ; edited by Jean Picard.

By: Catoni, Olivier [author]
Contributor(s): Picard, Jean [editor] | SpringerLink (Online service)
Material type: Text
Series: Lecture Notes in Mathematics ; 1851
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg, 2004
Description: VIII, 284 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783540445074
Subject(s): Statistics | Artificial intelligence | Mathematics | Numerical analysis | Mathematical optimization | Distribution (Probability theory) | Mathematical statistics | Statistical Theory and Methods | Optimization | Artificial Intelligence (incl. Robotics) | Information and Communication, Circuits | Probability Theory and Stochastic Processes | Numerical Analysis
Additional physical formats: Printed edition
DDC classification: 519.5
LOC classification: QA276-280
Online resources: Click here to access online
Contents:
Universal Lossless Data Compression -- Links Between Data Compression and Statistical Estimation -- Non Cumulated Mean Risk -- Gibbs Estimators -- Randomized Estimators and Empirical Complexity -- Deviation Inequalities -- Markov Chains with Exponential Transitions -- References -- Index.
In: Springer eBooks
Summary: Statistical learning theory aims to analyze complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea to use, as is often done in practice, a notoriously "wrong" (i.e., over-simplified) model to predict, estimate, or classify. This point of view has its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included; they are meant to provide a better understanding of the stochastic optimization algorithms commonly used to compute estimators. The author focuses on non-asymptotic bounds on the statistical risk, allowing one to choose adaptively between rich and structured families of models and the corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results.
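As a minimal illustration of the two objects named in the summary, in standard PAC-Bayesian notation rather than the book's own: a Gibbs estimator draws a parameter from the posterior

\[
\rho_\beta(d\theta) = \frac{\exp\{-\beta\, r_n(\theta)\}\, \pi(d\theta)}{\int \exp\{-\beta\, r_n(\theta')\}\, \pi(d\theta')},
\]

where \(\pi\) is a prior on the parameter space, \(r_n\) the empirical risk on \(n\) observations, and \(\beta > 0\) an inverse temperature. The relative entropy \(K(\rho, \pi) = \int \log\frac{d\rho}{d\pi}\, d\rho\) then appears as the complexity term in non-asymptotic risk bounds of the schematic form \(\int R\, d\rho \le \int r_n\, d\rho + \big(K(\rho, \pi) + \log(1/\epsilon)\big)/\beta\), holding with probability at least \(1 - \epsilon\) up to loss-dependent terms. The symbols \(r_n\), \(\pi\), \(\beta\) here are illustrative conventions, not quoted from the text.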
Item type: E-BOOKS
Current library: IMSc Library
Home library: IMSc Library
URL: Link to resource
Status: Available
Barcode: EBK1261
