
Linear algebra and learning from data

By: Strang, Gilbert Language: English Publication details: Wellesley, United States : Wellesley-Cambridge Press, 2019 Description: xiii, 432 p. : ill. ISBN:
  • 9780692196380 (HB)
Contents:
Part I. Highlights of Linear Algebra
Part II. Computations with Large Matrices
Part III. Low Rank and Compressed Sensing
Part IV. Special Matrices
Part V. Probability and Statistics
Part VI. Optimization
Part VII. Learning from Data
Books on machine learning
Summary: This is a textbook to help readers understand the steps that lead to deep learning. Linear algebra comes first, especially singular values, least squares, and matrix factorizations. Often the goal is a low-rank approximation A = CR (column-row) to a large matrix of data, to see its most important part. This uses the full array of applied linear algebra, including randomization for very large matrices. Then deep learning creates a large-scale optimization problem for the weights, solved by gradient descent or, better, stochastic gradient descent. Finally, the book develops the architectures of fully connected neural nets and of Convolutional Neural Nets (CNNs) to find patterns in data.
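The low-rank approximation mentioned in the summary can be sketched in a few lines of NumPy. This is an illustration, not the book's own code: it uses truncated SVD, one standard route to a best rank-k approximation, whereas the book's A = CR form is built from actual columns and rows of A. The matrix values below are invented for the example.

```python
import numpy as np

# Small data matrix with illustrative values (not from the book).
A = np.array([[3.0, 1.0, 2.0],
              [1.0, 3.0, 2.0],
              [2.0, 2.0, 2.0],
              [4.0, 0.0, 2.0]])

# Truncated SVD: keep only the k largest singular values.
# By the Eckart-Young theorem this is the best rank-k
# approximation to A in the Frobenius norm.
k = 1
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] * s[:k] @ Vt[:k, :]

# The error equals the energy in the discarded singular values:
# ||A - A_k||_F = sqrt(s[k]^2 + s[k+1]^2 + ...).
err = np.linalg.norm(A - A_k, "fro")
```

Increasing k shrinks `err` monotonically; when k reaches the rank of A the error is zero.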
Item type: BOOKS List(s) this item appears in: New Arrivals (06 June 2022) | New Arrivals (01 December 2025)
Holdings:
Home library: IMSc Library
Call number: 512.64+519.2 STR
Materials specified: 1
Status: New Arrival
Notes: Display up to 15 December 2025
Barcode: 78839

Includes index


The Institute of Mathematical Sciences, Chennai, India