Learning Theory [electronic resource] : 20th Annual Conference on Learning Theory, COLT 2007, San Diego, CA, USA, June 13-15, 2007. Proceedings / edited by Nader H. Bshouty, Claudio Gentile.

Contributor(s): Bshouty, Nader H. [editor] | Gentile, Claudio [editor] | SpringerLink (Online service)
Material type: Text
Series: Lecture Notes in Computer Science ; 4539
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg, 2007
Description: XII, 636 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783540729273
Subject(s): Computer science | Computer software | Artificial intelligence | Computer Science | Computation by Abstract Devices | Algorithm Analysis and Problem Complexity | Mathematical Logic and Formal Languages | Artificial Intelligence (incl. Robotics)
Additional physical formats: Printed edition
DDC classification: 004.0151
LOC classification: QA75.5-76.95
Online resources: Link to resource
Contents:
Invited Presentations -- Property Testing: A Learning Theory Perspective -- Spectral Algorithms for Learning and Clustering -- Unsupervised, Semisupervised and Active Learning I -- Minimax Bounds for Active Learning -- Stability of k-Means Clustering -- Margin Based Active Learning -- Unsupervised, Semisupervised and Active Learning II -- Learning Large-Alphabet and Analog Circuits with Value Injection Queries -- Teaching Dimension and the Complexity of Active Learning -- Multi-view Regression Via Canonical Correlation Analysis -- Statistical Learning Theory -- Aggregation by Exponential Weighting and Sharp Oracle Inequalities -- Occam’s Hammer -- Resampling-Based Confidence Regions and Multiple Tests for a Correlated Random Vector -- Suboptimality of Penalized Empirical Risk Minimization in Classification -- Transductive Rademacher Complexity and Its Applications -- Inductive Inference -- U-Shaped, Iterative, and Iterative-with-Counter Learning -- Mind Change Optimal Learning of Bayes Net Structure -- Learning Correction Grammars -- Mitotic Classes -- Online and Reinforcement Learning I -- Regret to the Best vs. Regret to the Average -- Strategies for Prediction Under Imperfect Monitoring -- Bounded Parameter Markov Decision Processes with Average Reward Criterion -- Online and Reinforcement Learning II -- On-Line Estimation with the Multivariate Gaussian Distribution -- Generalised Entropy and Asymptotic Complexities of Languages -- Q-Learning with Linear Function Approximation -- Regularized Learning, Kernel Methods, SVM -- How Good Is a Kernel When Used as a Similarity Measure? -- Gaps in Support Vector Optimization -- Learning Languages with Rational Kernels -- Generalized SMO-Style Decomposition Algorithms -- Learning Algorithms and Limitations on Learning -- Learning Nested Halfspaces and Uphill Decision Trees -- An Efficient Re-scaled Perceptron Algorithm for Conic Systems -- A Lower Bound for Agnostically Learning Disjunctions -- Sketching Information Divergences -- Competing with Stationary Prediction Strategies -- Online and Reinforcement Learning III -- Improved Rates for the Stochastic Continuum-Armed Bandit Problem -- Learning Permutations with Exponential Weights -- Online and Reinforcement Learning IV -- Multitask Learning with Expert Advice -- Online Learning with Prior Knowledge -- Dimensionality Reduction -- Nonlinear Estimators and Tail Bounds for Dimension Reduction in ℓ1 Using Cauchy Random Projections -- Sparse Density Estimation with ℓ1 Penalties -- ℓ1 Regularization in Infinite Dimensional Feature Spaces -- Prediction by Categorical Features: Generalization Properties and Application to Feature Ranking -- Other Approaches -- Observational Learning in Random Networks -- The Loss Rank Principle for Model Selection -- Robust Reductions from Ranking to Classification -- Open Problems -- Rademacher Margin Complexity -- Open Problems in Efficient Semi-supervised PAC Learning -- Resource-Bounded Information Gathering for Correlation Clustering -- Are There Local Maxima in the Infinite-Sample Likelihood of Gaussian Mixture Estimation? -- When Is There a Free Matrix Lunch?
In: Springer eBooks
Item type: E-BOOKS
Holdings:
Current library: IMSc Library
Home library: IMSc Library
URL: Link to resource
Status: Available
Barcode: EBK7617
