On Deep Learning for Low-Dimensional Representations


Bibliographic details
Author: Gedon, Daniel
Format: Dissertation
Language: English
Description
Summary: In science and engineering, we are often concerned with creating mathematical models from data. These models are abstractions of observed real-world processes, where the goal is often to understand these processes or to use the models to predict future instances of the observed process. Natural processes often exhibit low-dimensional structures which we can embed into the model. In mechanistic models, we directly include this structure through mathematical equations, often inspired by physical constraints. In contrast, within machine learning, and particularly in deep learning, we often deal with high-dimensional data such as images and learn a model without imposing a low-dimensional structure. Instead, we learn some kind of representations that are useful for the task at hand. While representation learning arguably enables the power of deep neural networks, it is less clear how to understand real-world processes from these models, or whether we can benefit from including a low-dimensional structure in the model.

This dissertation studies learning from data with intrinsic low-dimensional structure and how to replicate this structure in machine learning models. While we put specific emphasis on deep neural networks, we also consider kernel machines in the context of Gaussian processes, as well as linear models, for example by studying the generalisation of models with an explicit low-dimensional structure. First, we argue that many real-world observations have an intrinsic low-dimensional structure. We can find evidence of this structure, for example, through low-rank approximations of many real-world data sets. We then face two open-ended research questions. First, we study the behaviour of machine learning models when they are trained on data with low-dimensional structures. Here we investigate fundamental aspects of learning low-dimensional representations and how well models with explicit low-dimensional structures perform.
Second, we focus on applications in the modelling of dynamical systems and in the medical domain. We investigate how we can benefit from low-dimensional representations in these applications and explore the potential of low-dimensional model structures for predictive tasks. Finally, we give a brief outlook on how to go beyond learning low-dimensional structures and identify the underlying mechanisms that generate the data, in order to better model and understand these processes. This dissertation provides an overview of learning low-dimensional representations.
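The claim that low-rank approximations reveal intrinsic low-dimensional structure can be illustrated with a minimal sketch. The example below (my own illustration, not from the dissertation) generates synthetic 50-dimensional data driven by only three latent factors and shows that a rank-3 truncated SVD reconstructs it almost perfectly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "high-dimensional" data with intrinsic low-dimensional structure:
# 500 samples in 50 dimensions, generated from only 3 latent factors plus noise.
latent = rng.normal(size=(500, 3))   # low-dimensional latent factors
mixing = rng.normal(size=(3, 50))    # linear embedding into 50 dimensions
X = latent @ mixing + 0.01 * rng.normal(size=(500, 50))

# Low-rank approximation via truncated SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
rank = 3
X_approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Relative reconstruction error is tiny because the data is essentially rank 3.
rel_err = np.linalg.norm(X - X_approx) / np.linalg.norm(X)
print(f"relative error of rank-{rank} approximation: {rel_err:.4f}")
```

For genuinely low-dimensional data the singular value spectrum drops sharply after the intrinsic rank, which is the kind of evidence the abstract refers to; real-world data sets typically show an approximate, rather than exact, drop.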