Course label : | Advanced Machine Learning 2 - Machine Learning for Signal Processing |
---|---|
Teaching department : | EEA / Electrotechnics - Electronics - Control Systems |
Teaching manager : | Mr PIERRE-ANTOINE THOUVENIN / Mr PIERRE CHAINAIS |
Education language : | |
Potential ECTS : | 0 |
Results grid : | |
Code and label (hp) : | MR_DS_S3_AM2 - Advanced machine learning 2 |
Education team
Teachers : Mr PIERRE-ANTOINE THOUVENIN / Mr PIERRE CHAINAIS
External contributors (business, research, secondary education): various temporary teachers
Summary
Applications of machine learning to statistical signal processing through the lens of inverse problems. Methods and notions in this scope are summarized below:
- definition of an inverse problem (Bayesian formulation and estimators, sparse regularization, quality assessment in signal and image processing)
- sparsity and compressed sensing
- refreshers in (convex) optimization (duality, proximal operator, Legendre-Fenchel conjugate function, first and second proximal theorems)
- splitting methods in optimization: application with the ADMM and PnP-ADMM algorithms
- splitting approaches for MCMC (AXDA, SGS, PnP-SGS)
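To illustrate two of the notions listed above (sparse regularization and the proximal operator), here is a minimal NumPy sketch, not part of the course material: the proximal operator of the l1 norm is elementwise soft-thresholding, and iterating a gradient step followed by this prox (the ISTA scheme) solves a sparse linear inverse problem. All function names are illustrative.

```python
import numpy as np

def prox_l1(v, tau):
    """Proximal operator of tau * ||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam, n_iter=200):
    """Proximal-gradient (ISTA) iterations for the sparse inverse problem
        min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term
        x = prox_l1(x - step * grad, step * lam)  # prox of the l1 regularizer
    return x
```

With a small random forward operator `A` and a sparse ground truth, a few hundred iterations recover the nonzero entries up to the small bias induced by the l1 penalty.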
Educational goals
After successfully completing this course, a student should be able to:
- understand the methodological connections between models in machine learning and in statistical signal processing;
- identify relevant machine learning models and techniques for statistical signal processing applications;
- understand representative algorithms at the interface between machine learning and standard statistical signal processing (PnP optimization algorithms, SPA Gibbs sampler, PnP MCMC);
- implement representative algorithms to solve inverse problems arising in statistical signal processing.
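As a concrete example of the splitting methods mentioned in the syllabus, here is a hedged sketch (illustrative names, not the course's reference implementation) of ADMM for the l1-regularized inverse problem: the variable is split as x = z, the quadratic subproblem in x is solved exactly, and the z-update is the prox of the l1 norm. In PnP-ADMM, that z-update would be replaced by an off-the-shelf denoiser.

```python
import numpy as np

def admm_lasso(A, y, lam, rho=1.0, n_iter=300):
    """ADMM splitting for  min_{x,z} 0.5||Ax - y||^2 + lam||z||_1  s.t. x = z."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                       # scaled dual variable
    Q = A.T @ A + rho * np.eye(n)         # matrix of the ridge-type x-subproblem
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(Q, Aty + rho * (z - u))    # exact quadratic solve
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of l1
        u = u + x - z                                   # dual update
    return z
```

The factorization of `Q` could be cached outside the loop for large problems; the sketch keeps a plain solve for readability.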
Sustainable development goals
Knowledge control procedures
Continuous Assessment
Comments: Continuous evaluation, based on:
- lab report(s), 50% of the overall grade, grading scale: (min) 0 – 20 (max)
- exams, 50% of the overall grade, grading scale: (min) 0 – 20 (max)
2nd chance exam (session 2):
- grade on 20 points
- final grade for the course: 50% session 1 (continuous assessment grade), 50% session 2
Online resources
- Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.
- Beck, A. (2017). First-Order Methods in Optimization. SIAM.
Pedagogy
- Lectures (6x2h), labs (4x2h), and tutorial sessions (2x2h).
- Exams: 2x1h.
- The language of instruction is specified in the course offering information in the course and programme directory. English is the default language.
Sequencing / learning methods
Number of hours - Lectures : | 12 |
---|---|
Number of hours - Tutorial : | 12 |
Number of hours - Practical work : | 0 |
Number of hours - Seminar : | 0 |
Number of hours - Half-group seminar : | 0 |
Number of student hours in TEA (Autonomous learning) : | 0 |
Number of student hours in TNE (Non-supervised activities) : | 0 |
Number of hours in CB (Fixed exams) : | 0 |
Number of student hours in PER (Personal work) : | 0 |
Number of hours - Projects : | 0 |
Prerequisites
Lectures from the M1 Data Science programme (or equivalent): Python and tools for research; Machine Learning 1 (or equivalent); Probability 1 & 2; Statistics 1 & 2; Signal Processing; Models for Machine Learning; notions in optimization.