This defense will be held in the “Salle des voutes” on the St-Charles campus, from 14:00 to 18:00. See you there!
The current era of enthusiastic data gathering has made datasets with non-standard structures more common. This includes the well-known multi-task framework, where each data sample is associated with multiple output labels, as well as the multi-view learning paradigm, in which each data sample comes with several, possibly heterogeneous, descriptions. To perform well on such tasks, it is important to model the interactions present among the views or output variables.
Kernel methods offer a principled and elegant way to solve many machine learning problems. Operator-valued kernels, which generalize the well-known scalar-valued kernels, have recently attracted attention as a way to learn vector-valued functions. For both scalar- and operator-valued kernel methods, the choice of a kernel function well suited to the data plays a crucial role in the success of the learning task, and a natural question to ask is: can the process of choosing the kernel be automated? Kernel learning tries to answer this question by treating it as a machine learning problem in its own right.
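To make the idea of an operator-valued kernel concrete, here is a minimal sketch of vector-valued kernel ridge regression with a textbook *separable* kernel K(x, x′) = k(x, x′)·A, where k is a scalar Gaussian kernel and A is a PSD matrix encoding output interactions. This is an illustration only, with a fixed (not learned) kernel; the function names and parameters are my own, not from the thesis, which develops learnable and inseparable kernels.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Scalar Gaussian kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_vector_valued_krr(X, Y, A, lam=0.1, gamma=1.0):
    # Separable operator-valued kernel K(x, x') = k(x, x') * A.
    # The block Gram matrix is G ⊗ A; ridge regression solves
    # (G ⊗ A + lam * I) c = vec(Y) for the coefficient vectors c_i.
    n, p = Y.shape
    G = rbf(X, X, gamma)                       # (n, n) scalar Gram matrix
    K_block = np.kron(G, A)                    # (n*p, n*p) operator-valued Gram
    c = np.linalg.solve(K_block + lam * np.eye(n * p), Y.reshape(-1))
    return c.reshape(n, p)                     # row i holds c_i

def predict(X_train, C, A, X_test, gamma=1.0):
    # Prediction: f(x) = sum_i k(x, x_i) * A @ c_i
    k = rbf(X_test, X_train, gamma)            # (m, n)
    return k @ (C @ A.T)                       # rows are f(x)^T, shape (m, p)
```

With A equal to the identity matrix the outputs decouple and this reduces to independent scalar-valued kernel ridge regression per output; a non-diagonal A couples the outputs, which is exactly the kind of interaction the kernel is meant to capture.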
This thesis offers kernel learning as a solution to various machine learning problems. The problems range from supervised to unsupervised, but in each case the data is described under multiple views or has multiple output variables, and modelling the interactions present is important for obtaining good learning results. Chapters two and three investigate learning these interactions in multi-view data. In the first of these, the focus is on supervised inductive learning, and the interactions are modelled with operator-valued kernels. These kernels are learnable, adapting to the data at hand during the learning stage. We give a generalization bound for the algorithm developed to jointly learn this kernel and the predictive function, and illustrate its performance experimentally. Chapter three tackles multi-view data and kernel learning in an unsupervised context, and proposes a scalar-valued kernel learning method for completing missing entries in the kernel matrices of a multi-view problem. In the last chapter we turn from multi-view to multi-output learning, and return to the supervised inductive learning paradigm. We propose a method for learning inseparable operator-valued kernels that model interactions between inputs and multiple output variables. We also provide insight into the current state of operator-valued kernel learning and introduce a general framework for studying such methods.
The jury is composed of:
- Juho Rousu (Prof, Aalto University) : rapporteur
- Amaury Habrard (Prof, Université de Saint-Etienne) : rapporteur
- Alain Rakotomamonjy (Prof, Université de Rouen) : examinateur
- Massih-Reza Amini (Prof, Université Grenoble Alpes) : examinateur
- Liva Ralaivola (Université d’Aix-Marseille) : examinateur
- Cecile Capponi (Université d’Aix-Marseille) : supervisor
- Hachem Kadri (Université d’Aix-Marseille) : supervisor