Machine Learning: From Theory to Applications [electronic resource] : Cooperative Research at Siemens and MIT / edited by Stephen José Hanson, Werner Remmele, Ronald L. Rivest.

Contributor(s): Hanson, Stephen José [editor] | Remmele, Werner [editor] | Rivest, Ronald L. [editor] | SpringerLink (Online service)
Material type: Text
Series: Lecture Notes in Computer Science ; 661
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg, 1993
Description: VIII, 276 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783540475682
Subject(s): Computer science | Artificial intelligence | Computer Science | Artificial Intelligence (incl. Robotics) | Computation by Abstract Devices | Processor Architectures
Additional physical formats: Printed edition: No title
DDC classification: 006.3
LOC classification: Q334-342 | TJ210.2-211.495
Online resources: Click here to access online
Contents:
Strategic directions in machine learning -- Training a 3-node neural network is NP-complete -- Cryptographic limitations on learning Boolean formulae and finite automata -- Inference of finite automata using homing sequences -- Adaptive search by learning from incomplete explanations of failures -- Learning of rules for fault diagnosis in power supply networks -- Cross references are features -- The schema mechanism -- L-ATMS: A tight integration of EBL and the ATMS -- Massively parallel symbolic induction of protein structure/function relationships -- Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks -- Phoneme discrimination using connectionist networks -- Behavior-based learning to control IR oven heating: Preliminary investigations -- Trellis codes, receptive fields, and fault tolerant, self-repairing neural networks.
In: Springer eBooks
Summary: This volume includes some of the key research papers in the area of machine learning produced at MIT and Siemens during a three-year joint research effort. It includes papers on many different styles of machine learning, organized into three parts. Part I, theory, includes three papers on theoretical aspects of machine learning. The first two use the theory of computational complexity to derive some fundamental limits on what is efficiently learnable. The third provides an efficient algorithm for identifying finite automata. Part II, artificial intelligence and symbolic learning methods, includes five papers giving an overview of the state of the art and future developments in the field of machine learning, a subfield of artificial intelligence dealing with automated knowledge acquisition and knowledge revision. Part III, neural and collective computation, includes five papers sampling the theoretical diversity and trends in the vigorous new research field of neural networks: massively parallel symbolic induction, task decomposition through competition, phoneme discrimination, behavior-based learning, and self-repairing neural networks.
Item type: E-BOOKS
Current library: IMSc Library
Home library: IMSc Library
URL: Link to resource
Status: Available
Barcode: EBK6112
