Mathematics of Information

Prof. Dr. Helmut Bölcskei

Offered in:
Lecture: Thursday, 9:15-12:00, ETZ E6
Discussion session: Monday, 13:15-15:00, ML F 38
Instructor: Prof. Dr. Helmut Bölcskei
Teaching assistants: Verner Vlačić, Dmytro Perekrestenko
Office hours: Wednesday, 13:00-14:00, ETF E 117 (Verner Vlačić)
Tuesday, 13:00-14:00, ETF E 119 (Dmytro Perekrestenko)
Lecture notes: Detailed lecture and exercise notes, as well as problem sets with documented solutions, will be made available as we go along.
Credits: 8 ECTS credits
Course structure:


News

We will post important announcements, links, and other information here in the course of the semester, so please check back often! The first lecture will take place on Monday, 19 Feb, 13:00-16:00, in HG D7.2, and the first discussion session will take place on Thursday, 22 Feb, 10:00-12:00, in HG E3. The second discussion session will take place on Monday, 26 Feb, 13:00-15:00, in HG G3. Starting from March 1st, the lectures and discussion sessions will take place as announced in the course catalogue. On May 3rd, the class will take place 8:15-11:00 instead of 9:15-12:00 in the same room, i.e., ETZ E6.



Course Info

The class focuses on fundamental mathematical aspects of data science: information theory (lossless and lossy compression), sampling theory, compressed sensing, dimensionality reduction, randomized algorithms for large-scale numerical linear algebra, approximation theory, neural networks as function approximators, and the mathematical foundations of deep learning.

Signal representations: Frames in finite-dimensional spaces, frames in Hilbert spaces, wavelets, Gabor expansions
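
As a small numerical illustration (a sketch for intuition, not official course material; the test vector and numbers are arbitrary choices): the three-element "Mercedes-Benz" frame in R^2 is a tight frame with frame bound 3/2, so any x is recovered exactly from its redundant frame coefficients.

    import numpy as np

    # Three unit vectors in R^2 at 120-degree spacing: a tight frame with
    # frame bound A = 3/2, so synthesis only needs the scaling 1/A = 2/3.
    angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
    E = np.column_stack([np.cos(angles), np.sin(angles)])  # rows are e_k

    x = np.array([0.7, -1.3])          # arbitrary test vector
    coeffs = E @ x                     # analysis: <x, e_k>
    x_rec = (2 / 3) * (E.T @ coeffs)   # synthesis: x = (2/3) sum_k <x,e_k> e_k
    print(np.allclose(x, x_rec))       # True: exact recovery despite redundancy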

Sampling theorems: The sampling theorem as a frame expansion, irregular sampling, multi-band sampling, density theorems, spectrum-blind sampling
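
For a first feel for the sampling theorem (a minimal sketch; the signal, rates, and truncation length below are arbitrary choices): a signal bandlimited to B Hz and sampled at rate fs >= 2B is reconstructed by sinc interpolation, here with a truncated interpolation series.

    import numpy as np

    # Signal bandlimited to B = 2.5 Hz, sampled at fs = 10 Hz >= 2B.
    fs = 10.0
    f = lambda t: np.cos(2 * np.pi * 2.5 * t) + np.sin(2 * np.pi * 1.0 * t)

    n = np.arange(-50, 51)                # sample indices (truncated series)
    t = np.linspace(-2, 2, 1000)          # dense evaluation grid
    samples = f(n / fs)

    # Reconstruction: x(t) = sum_n x(n/fs) sinc(fs*t - n); np.sinc is normalized.
    x_rec = samples @ np.sinc(fs * t[None, :] - n[:, None])
    print(np.max(np.abs(x_rec - f(t))))   # small; -> 0 as the truncation grows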

Sparse signals and compressed sensing: Uncertainty principles, recovery algorithms, Lasso, matching pursuits, compressed sensing, nonlinear approximation, best k-term approximation, super-resolution
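
As a taste of the recovery-algorithm side (a sketch under generic assumptions; all dimensions are made up for illustration): orthogonal matching pursuit greedily recovers a k-sparse vector from far fewer random measurements than the ambient dimension.

    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = A @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))  # best-matching column
            xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit on support
            residual = y - A[:, support] @ xs
        x = np.zeros(A.shape[1])
        x[support] = xs
        return x

    rng = np.random.default_rng(0)
    m, n, k = 40, 100, 3
    A = rng.standard_normal((m, n)) / np.sqrt(m)            # random sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    print(np.linalg.norm(omp(A, A @ x_true, k) - x_true))   # ~0 with high probability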

High-dimensional data and dimensionality reduction: Random projections, the Johnson-Lindenstrauss Lemma, sketching
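
To see the Johnson-Lindenstrauss phenomenon numerically (a sketch with arbitrarily chosen dimensions): a scaled Gaussian random projection to far fewer dimensions nearly preserves all pairwise distances of a point cloud.

    import numpy as np

    rng = np.random.default_rng(0)
    npts, d, k = 30, 2000, 200                     # points, ambient dim, target dim
    X = rng.standard_normal((npts, d))
    A = rng.standard_normal((k, d)) / np.sqrt(k)   # scaled Gaussian projection
    Y = X @ A.T

    def pdist(Z):
        # All pairwise Euclidean distances (upper triangle).
        G = Z[:, None, :] - Z[None, :, :]
        return np.linalg.norm(G, axis=-1)[np.triu_indices(len(Z), k=1)]

    ratios = pdist(Y) / pdist(X)
    print(ratios.min(), ratios.max())              # both close to 1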

Randomized algorithms for large-scale numerical linear algebra: Large-scale matrix computations, randomized algorithms for approximate matrix factorizations, matrix sketching, fast algorithms for large-scale FFTs
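
A representative randomized algorithm (a minimal sketch in the spirit of Halko-Martinsson-Tropp; matrix sizes and the oversampling parameter are illustrative): approximate a low-rank factorization by first sketching the range of the matrix with a random test matrix.

    import numpy as np

    def randomized_svd(A, r, p=10, seed=0):
        """Rank-r SVD via a random range sketch with oversampling p."""
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((A.shape[1], r + p))   # random test matrix
        Q, _ = np.linalg.qr(A @ Omega)                     # basis for the sketched range
        U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)  # small exact SVD
        return (Q @ U)[:, :r], s[:r], Vt[:r]

    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 400))  # rank 20
    U, s, Vt = randomized_svd(A, 20)
    print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # ~machine precision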

Information theory: Entropy, mutual information, lossy compression, rate-distortion theory, lossless compression, arithmetic coding, Lempel-Ziv compression
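
As a one-line taste of the compression side (the distribution is made up): the entropy of a source lower-bounds the expected length of any prefix-free code, and for dyadic probabilities a Huffman code meets the bound exactly.

    import numpy as np

    p = np.array([0.5, 0.25, 0.125, 0.125])   # dyadic source distribution
    H = -(p * np.log2(p)).sum()               # entropy in bits per symbol
    print(H)                                  # 1.75; Huffman lengths (1,2,3,3) achieve it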

Approximation theory: Kolmogorov epsilon-entropy of signal classes, fundamental limits on compressibility of signal classes
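
For orientation, the standard definition (going back to Kolmogorov and Tikhomirov) is

    \[
      H_\varepsilon(\mathcal{C}) = \log_2 N_\varepsilon(\mathcal{C}),
      \qquad
      N_\varepsilon(\mathcal{C}) = \min\Bigl\{ N \in \mathbb{N} : \mathcal{C} \subseteq \bigcup_{i=1}^{N} B_\varepsilon(x_i) \Bigr\},
    \]

i.e., N_ε(C) is the minimal number of ε-balls needed to cover the signal class C, and the growth of H_ε(C) as ε → 0 captures the fundamental limit on how compactly signals in C can be described.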

Mathematics of (deep) neural networks: Universal function approximation with single- and multi-layer networks, geometry of decision surfaces, convolutional neural networks, scattering networks
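
A hands-on glimpse of universal approximation (a sketch for intuition, not the course's formal treatment; the target function and width are arbitrary choices): a single-hidden-layer ReLU network with weights set in closed form realizes the piecewise-linear interpolant of a continuous function on [0, 1], with error vanishing as the width grows.

    import numpy as np

    relu = lambda z: np.maximum(z, 0.0)
    f = np.sin                                 # target function (arbitrary choice)

    knots = np.linspace(0.0, 1.0, 21)          # 21 knots -> 20 ReLU neurons
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)
    c = np.concatenate(([slopes[0]], np.diff(slopes)))   # output-layer weights

    def network(x):
        # f(0) + sum_j c_j * relu(x - knot_j): piecewise-linear interpolant of f.
        return vals[0] + relu(x[:, None] - knots[:-1][None, :]) @ c

    x = np.linspace(0.0, 1.0, 500)
    print(np.max(np.abs(network(x) - f(x))))   # ~3e-4; shrinks as width grows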

Prerequisites

This course is aimed at students with a background in basic linear algebra, analysis, and probability. We will, however, review the required mathematical basics throughout the semester in the discussion sessions.


Lecture and exercise notes

We will use the following book chapter as material for the chapters "Signal representations" and "Sampling theorems". We will e-mail you the material for the remaining lectures as we go along.

Homework Assignments

There will be six homework assignments. You may hand in your solutions to receive feedback from us, but turning them in is not mandatory. Complete solutions to the homework assignments will be posted on the course web page.

Homework Problem Sets

Problems        Solutions
Homework 1      Solutions to Homework 1
Homework 2      Solutions to Homework 2 (Matlab file)
Homework 3      Solutions to Homework 3
Homework 4      Solutions to Homework 4
Homework 5      Solutions to Homework 5 (Matlab files)
Homework 6      Solutions to Homework 6
Handouts



Recommended reading

If you want to go into more depth or need additional background material, please check out these books: