
MPC Group excels at the IEEE Swiss Conference on Data Science (SDS) 2024

CAI’s Machine Perception and Cognition Group had a remarkable presence at IEEE SDS 2024, held in Zürich on May 31st. Three papers from our team were presented, and the event featured several highlights, including an award.

The 11th IEEE Swiss Conference on Data Science is a premier academic gathering in the field of data science and the annual meeting of the Swiss data science industry. The CAI-based Machine Perception and Cognition group of Prof. Thilo Stadelmann presented recent results from its research:

Our PhD candidate Benjamin Meyer presented his poster on the implementation of ScalaGrad, an automatic differentiation library for the programming language Scala. Scala is a type-safe language used for mission-critical code. ScalaGrad brings automatic differentiation, previously unavailable, to this type-safe programming paradigm and opens up new applications for Scala, particularly in mathematical modeling and machine learning. ScalaGrad offers several key features: it is asymptotically efficient, can differentiate general Scala code, supports higher-order derivatives, and is open source. Moreover, ScalaGrad promotes safer code development by design through its robust type system. It thus simplifies the development of complex mathematical tasks such as machine learning, making life easier and more secure for software developers working with Scala. For more detailed insights, you can explore the paper.
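To give a flavor of what automatic differentiation does, here is a minimal Python sketch of forward-mode differentiation with dual numbers, one of the core techniques behind libraries like ScalaGrad. This is purely illustrative and not ScalaGrad's actual API, which is written in type-safe Scala:

```python
from dataclasses import dataclass

@dataclass
class Dual:
    value: float   # function value
    deriv: float   # derivative carried alongside the value

    def __add__(self, other):
        # sum rule: (f + g)' = f' + g'
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        # product rule: (f * g)' = f'g + fg'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def derive(f, x):
    """Derivative of f at x: seed the tangent with 1.0 and read it back."""
    return f(Dual(x, 1.0)).deriv

# f(x) = x^3, so f'(2) = 3 * 2^2 = 12
print(derive(lambda x: x * x * x, 2.0))  # 12.0
```

The derivative is computed exactly (up to floating point), not by finite differences; a type-safe implementation such as ScalaGrad additionally catches misuse of the differentiation API at compile time.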

Dr. Ahmed Abdulkadir and Peng Yan gave a talk on their key findings in “smart manufacturing”, outlining how AI can help maintain the quality of plastic parts during the injection molding process. Together with Kistler Instrumente AG, within the framework of an Innosuisse project, the team developed a new method for the automated monitoring of injection molding based on representation learning and setpoint regression. Their approach uses pressure time series data to detect anomalies early during production. Its application has the potential to mitigate the negative effects of the expected shop-floor worker shortage. In a nutshell, this AI-driven system paves the way for more efficient injection molding manufacturing. Read their paper here.
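The monitoring idea can be sketched in a few lines of Python: compare each production cycle's pressure curve against a reference (setpoint) curve and flag cycles that deviate too much. The function names and the threshold below are illustrative assumptions, not the method from the paper, which learns representations rather than comparing raw curves:

```python
import math

def rmse(cycle, reference):
    """Root-mean-square deviation between a pressure curve and a reference curve."""
    assert len(cycle) == len(reference)
    return math.sqrt(sum((c - r) ** 2 for c, r in zip(cycle, reference))
                     / len(cycle))

def is_anomalous(cycle, reference, threshold=0.5):
    # Flag the cycle when its deviation from the setpoint curve is too large.
    # The threshold would be calibrated on known-good production cycles.
    return rmse(cycle, reference) > threshold

reference = [1.0, 2.0, 3.0, 2.0]   # idealized setpoint pressure curve
good_cycle = [1.0, 2.1, 2.9, 2.0]
bad_cycle = [1.0, 2.0, 9.0, 2.0]   # pressure spike mid-cycle
print(is_anomalous(good_cycle, reference))  # False
print(is_anomalous(bad_cycle, reference))   # True
```

Learned representations make this comparison robust to harmless variation (e.g., machine drift) that a raw distance to the setpoint would misclassify.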

We congratulate Lukas Tuggener and Pascal Sager on receiving an honorable mention in the best paper competition for their paper titled “So you want your private LLM at home? A survey and benchmark of methods for efficient GPTs”. The objective of their work was to enable the use of Large Language Models (LLMs) for personal use at home. Lukas’ presentation unsurprisingly attracted a lot of attention. He highlighted key findings, emphasizing that existing LLM APIs may not always be suitable due to data protection concerns, while training a private LLM from scratch is impractical. In his entertaining talk, Lukas also outlined how quantization reduces GPU memory needs with acceptable impact on text generation quality, and how low-rank adapters provide a means for effective fine-tuning with moderate resources. The bottom line: it is possible to effectively use a powerful LLM on a single consumer GPU. As a practical tip, the team suggests that for those with less than 16 GB of GPU memory, easy-to-use Jupyter notebooks on Google Colab allow deploying state-of-the-art LLMs. Learn more by reading the paper here.
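To illustrate why quantization shrinks memory needs, here is a toy Python sketch of symmetric 8-bit quantization: each weight is stored as a signed byte plus one shared scale factor, a quarter of the memory of 32-bit floats. This is a simplified illustration of the general technique, not code from the paper or from any specific LLM library:

```python
def quantize(weights):
    """Map nonzero float weights to signed 8-bit integers with a shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]  # each value fits in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 8-bit representation."""
    return [v * scale for v in q]

weights = [0.8, -0.3, 0.05]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# approx is close to the original weights, stored in a quarter of the memory
```

Real LLM quantization schemes are more sophisticated (per-channel scales, 4-bit formats, outlier handling), but the memory-versus-precision trade-off shown here is the same one the survey benchmarks.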