School of Engineering

Machine Perception and Cognition Group

“AI is THE key technology of the digital transformation, across sectors and industries, with major effects on our societies. Our research thus makes major contributions to the development of robust and trustworthy AI methods, and we enthusiastically teach their safe implementation and application.”

Professor Dr Thilo Stadelmann

Fields of expertise

  • Pattern recognition with deep learning
  • Machine perception, computer vision and speaker recognition
  • Neural system development

The MPC group conducts pattern recognition research, working on a wide variety of tasks involving image, audio, and other signal data. We focus on deep neural network and reinforcement learning methodology, inspired by biological learning. Each task we study has its own learning target (e.g., detection, classification, clustering, segmentation, novelty detection, control) and corresponding use case (e.g., predictive maintenance, speaker recognition for multimedia indexing, document analysis, optical music recognition, computer vision for industrial quality control, automated machine learning, deep reinforcement learning for automated game play or building control), which in turn sheds light on different aspects of the learning process. We use this experience to create increasingly general AI systems built on neural architectures.
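To make one of the learning targets mentioned above concrete, here is a minimal, self-contained sketch of a supervised image-classification setup with a small convolutional network in PyTorch. It is illustrative only, not the group's actual code: the architecture, input size, number of classes, and the random placeholder data are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallConvNet(nn.Module):
    """Tiny CNN for, e.g., 32x32 grayscale inputs and 10 classes (placeholder values)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 16x16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 16x16 -> 8x8
        return self.fc(x.flatten(1))

def train_step(model, batch, labels, optimizer):
    """One supervised training step with a cross-entropy loss."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = SmallConvNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Random placeholder data standing in for a real image dataset.
    x = torch.randn(8, 1, 32, 32)
    y = torch.randint(0, 10, (8,))
    print("loss:", train_step(model, x, y, opt))

For other learning targets the head and loss would change, for example box-regression heads and detection losses for music object detection, or a policy network trained by reinforcement learning for game play or building control.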

Services

Team

Projects

As part of the reorganization of the research database, the previous lists of research projects are no longer available. In the future, full-text search and filtering will provide the best possible search results for our visitors.
In the meantime, you can easily find the projects via text search using the following link: «To the new search in the project database»

Publications

Other releases

When Type Content
2023 Extended Abstract Thilo Stadelmann. KI als Chance für die angewandten Wissenschaften im Wettbewerb der Hochschulen. Workshop (“Atelier”) at the Bürgenstock-Konferenz der Schweizer Fachhochschulen und Pädagogischen Hochschulen 2023, Lucerne, Switzerland, 20 January 2023.
2022 Extended Abstract Christoph von der Malsburg, Benjamin F. Grewe, and Thilo Stadelmann. Making Sense of the Natural Environment. Proceedings of the KogWis 2022 - Understanding Minds Biannual Conference of the German Cognitive Science Society, Freiburg, Germany, September 5-7, 2022.
2022 Open Research Data Felix M. Schmitt-Koopmann, Elaine M. Huang, Hans-Peter Hutter, Thilo Stadelmann, and Alireza Darvishy. FormulaNet: A Benchmark Dataset for Mathematical Formula Detection. One unsolved sub-task of document analysis is mathematical formula detection (MFD). Research by ourselves and others has shown that existing MFD datasets with inline and display formula labels are small and have insufficient labeling quality. There is therefore an urgent need for datasets with better quality labeling for future research in the MFD field, as they have a high impact on the performance of the models trained on them. We present an advanced labeling pipeline and a new dataset called FormulaNet. With over 45k pages, FormulaNet is, to our knowledge, the largest MFD dataset with inline formula labels. Our dataset is intended to help address the MFD task and may enable the development of new applications, such as making mathematical formulae accessible in PDFs for visually impaired screen reader users.
2020 Open Research Data Lukas Tuggener, Yvan Putra Satyawan, Alexander Pacha, Jürgen Schmidhuber, and Thilo Stadelmann. DeepScoresV2. The DeepScoresV2 Dataset for Music Object Detection contains digitally rendered images of written sheet music, together with the corresponding ground truth to fit various types of machine learning models. A total of 151 million instances of music symbols, belonging to 135 different classes, are annotated. The full dataset contains 255,385 images. For most research purposes, the dense version, containing 1,714 of the most diverse and interesting images, is a good starting point.