November 30-December 1, 2020

The two-day doctoral school will be co-located with the iTWIST’20 workshop at École Centrale de Nantes. It aims to give Ph.D. (or M.Sc.) students and postdoctoral fellows the opportunity to learn some of the theoretical and applied concepts discussed in the workshop. The school is divided into four courses and will take place before the workshop.

Schedule

Call for participation

Registration for the doctoral school is free but mandatory, and covers access to the school. Accepted participants are expected to register for the workshop as well and are encouraged to submit papers. Registration, originally scheduled to open on Mar. 9, 2020, will open after mid-May, 2020.

Speakers and topics

Jérémy Cohen (IRISA, Rennes):
Nonnegative and Low-rank Approximations (for unsupervised machine learning)

This course will present in detail the nonnegative and low-rank approximation problems, and show how these problems are key to understanding nonnegative matrix and tensor factorizations. The focus will be both on the properties of these problems and on (non)convex optimization algorithms for solving them. Examples of unsupervised machine learning using NMF and NTF in audio processing, remote sensing and text mining will also be introduced.
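As a small, purely illustrative taste of the topic (not course material), the following NumPy sketch computes a nonnegative low-rank approximation with the classical Lee–Seung multiplicative updates; the function name and toy matrix are hypothetical:

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=500, eps=1e-9, seed=0):
    """Approximate a nonnegative X by W @ H with W, H >= 0 and rank r,
    using Lee-Seung multiplicative updates for the Frobenius loss."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Each update keeps the factors nonnegative and decreases the loss.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage: this matrix has an exact rank-2 nonnegative factorization,
# so the relative error should become small.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = nmf_multiplicative(X, r=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Multiplicative updates only find a local minimum of a non-convex problem, which is one reason the course's discussion of problem properties and algorithms matters.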

Jordan Ninin (ENSTA Bretagne, Brest):
Global Optimization based on Branch-and-Bound algorithm

For about thirty years, branch-and-bound algorithms have been increasingly used to solve non-convex global optimization problems in a deterministic and reliable way. However, these methods can require considerable CPU time, so acceleration techniques are essential and have enabled significant breakthroughs.
In this talk, we will give an overview of different techniques for finding the global minimum:

  • Branch-and-bound technique,
  • Reformulation, linearization and convexification technique,
  • Applications to combinatorial optimization,
  • Applications to nonlinear problems.

Gabriel Peyré (CNRS/École Normale Supérieure, Paris):
Computational optimal transport

Optimal transport (OT) is a fundamental mathematical theory at the interface between optimization, partial differential equations and probability. It has recently emerged as an important tool to tackle a surprisingly large range of problems in data sciences, such as shape registration in medical imaging, structured prediction problems in supervised learning and training deep generative networks. This course will interleave the description of the mathematical theory with the recent developments of scalable numerical solvers. This will highlight the importance of recent advances in regularized approaches for OT which allow one to tackle high dimensional learning problems. Material for the course (including a small book, slides and computational resources) can be found online at https://optimaltransport.github.io/.

  1. Foundations of Optimal Transport
    • The basics of Optimal Transport
    • Overview of applications in imaging and learning
    • Special cases: 1-D, Gaussians
    • Network flows solvers
    • Semi-discrete, auction
  2. Entropic regularization
    • Regularization and approximation
    • Sinkhorn’s algorithm
    • Hilbert’s metric, Perron-Frobenius
    • Extensions: multimarginal, unbalanced
  3. Density fitting and generative modeling
    • Statistical divergences
    • Sample complexity
    • Minimum Kantorovich Estimator
    • Deep learning and generative models
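As a small illustration of the entropic-regularization part of the outline (a sketch under simplifying assumptions, not the course's reference code), the following NumPy snippet runs Sinkhorn's algorithm on two toy histograms; the regularization strength and grid are arbitrary:

```python
import numpy as np

def sinkhorn(a, b, C, reg=1.0, n_iter=500):
    """Entropic-regularized optimal transport between histograms a and b
    with cost matrix C. Alternately rescales rows and columns of the
    Gibbs kernel K = exp(-C / reg) until the coupling's marginals match."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)  # match column marginals
        u = a / (K @ v)    # match row marginals
    return u[:, None] * K * v[None, :]  # coupling P = diag(u) K diag(v)

# Toy usage: transport between two 3-bin histograms on a line.
x = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - x[None, :]) ** 2  # squared-distance ground cost
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
P = sinkhorn(a, b, C)
```

The linear convergence of these scaling iterations is exactly what the "Hilbert's metric, Perron–Frobenius" item above analyzes; smaller `reg` approximates unregularized OT better but slows convergence and risks numerical underflow in `K`.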

Diana Mateus (École Centrale de Nantes): A journey through supervised, weakly-supervised and curriculum deep learning approaches for assisting fracture classification

In this talk we will present recent advances in deep learning methods for classification through their application to a medical image analysis task: the classification of fractures in clinical X-ray images. The problem is very challenging, as evidenced by the low reported intra- and inter-expert agreement rates. The talk will review classical supervised deep learning techniques for localizing and classifying proximal femur fractures in X-ray images. We will then move towards weakly supervised approaches to localization based on attention models, which reduce the need for large numbers of annotations and favor the interpretability of results. Finally, we will present our most recent results on a curriculum learning approach that takes expert knowledge into account to guide the CNN weight optimization. The talk will end with some perspectives on measuring the uncertainty of class predictions.

The talk summarizes results of a collaborative project between Centrale Nantes, University Pompeu Fabra, Barcelona, the Klinikum Rechts der Isar and the Technical University of Munich, as well as a local industrial collaboration with Herami.