Learning from and with Pretrained Models
Contact persons:
Supriti Sinhamahapatra, Danni Liu, Jan Niehues
Recently, pretrained models have become an essential part of deep learning-based AI systems, as shown, for example, by their successful integration into our speech translation systems. The availability of many such models enables new ways of learning: instead of learning directly from data, task-specific systems can be trained on top of pretrained models.
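As a minimal illustration of this idea, the following PyTorch sketch trains only a small task-specific head on top of a frozen encoder. The encoder here is a randomly initialized stand-in and all dimensions are hypothetical; in practice, the weights of a real pretrained model would be loaded from a checkpoint.

import torch
import torch.nn as nn

hidden_dim, num_classes = 256, 10

# Stand-in for a real pretrained encoder (e.g. a speech or text model);
# in practice its weights would come from a pretrained checkpoint.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4, batch_first=True),
    num_layers=2,
)
for p in encoder.parameters():
    p.requires_grad = False  # the pretrained model stays frozen

head = nn.Linear(hidden_dim, num_classes)  # only this small head is trained
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(8, 50, hidden_dim)       # a batch of 8 sequences of length 50
y = torch.randint(0, num_classes, (8,))  # task labels

with torch.no_grad():
    features = encoder(x)                # representations from the frozen model
logits = head(features.mean(dim=1))      # mean-pool over time, then classify
loss = nn.functional.cross_entropy(logits, y)
loss.backward()                          # gradients reach only the task head
optimizer.step()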
In this project, we will investigate how these new resources can push the state of the art. To this end, we need to address several research questions that are essential for learning efficiently from such models. How can we efficiently combine the complementary strengths and abilities of the models (e.g. across different modalities)? Here, we will especially focus on sharing and aligning representations between the models (sketched below). How can we add abilities to existing models, and which properties of the models are important? In doing so, we will investigate self-supervised learning as well as supervised tasks. In terms of applications, the focus will be on research data and on integrating outcomes from the project “Embeddings in Hyperbolic and Weighted Spaces and Applications in Natural Language Processing”.
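To make the idea of aligning representations concrete, here is a minimal PyTorch sketch of one common approach, CLIP-style contrastive alignment: small trainable projections map the outputs of two pretrained models (e.g. a speech model and a text model) into a shared space where paired inputs are pulled together. The dimensions, the temperature value, and the choice of contrastive loss below are illustrative assumptions, not decisions of the project.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical output dimensions of two pretrained models from different
# modalities, and the dimension of the shared space they are aligned in.
speech_dim, text_dim, shared_dim = 512, 768, 256

# Small trainable projections map each model's representations into the
# shared space; the pretrained models themselves can stay frozen.
speech_proj = nn.Linear(speech_dim, shared_dim)
text_proj = nn.Linear(text_dim, shared_dim)

# Paired representations (e.g. an utterance and its transcript), pooled to
# one vector each; random tensors stand in for real model outputs here.
speech_repr = torch.randn(8, speech_dim)
text_repr = torch.randn(8, text_dim)

s = F.normalize(speech_proj(speech_repr), dim=-1)
t = F.normalize(text_proj(text_repr), dim=-1)

# Contrastive (InfoNCE) objective: the i-th speech vector should be closest
# to the i-th text vector in the shared space, and vice versa.
logits = s @ t.T / 0.07                  # cosine similarities with temperature
targets = torch.arange(len(s))
loss = (F.cross_entropy(logits, targets)
        + F.cross_entropy(logits.T, targets)) / 2
loss.backward()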