Research Themes
Adaptive, Stable, Robust and Explainable AI for Control
Adaptive, stable, robust, and explainable AI is vital for advancing control systems, particularly in dynamic and uncertain environments. AI algorithms that adapt to changing conditions allow a control system to maintain performance and stability despite unexpected disturbances. Robustness rests on theoretical guarantees combined with training across representative operating conditions, so the AI handles the noise and variability of real-world data effectively. In addition, integrating explainability into AI models fosters transparency and trust, enabling human operators to understand and interpret the decisions these systems make. Our research in this area aims to create intelligent control solutions that perform reliably and offer insight into their own functioning, enhancing the overall effectiveness of autonomous applications.
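To make the idea of adaptation with a stability guarantee concrete, the following is a minimal sketch of a classical Lyapunov-based model-reference adaptive controller (MRAC) for a scalar plant x' = a*x + u with unknown a. All specifics here (the parameter values, the sinusoidal reference, the function name `simulate`) are illustrative assumptions, not taken from the text above.

```python
import math

def simulate(T=30.0, dt=1e-3, a=1.0, am=2.0, gamma=2.0):
    # a: unknown (and here unstable) plant pole; am: desired reference-model pole;
    # gamma: adaptation gain. All values are illustrative assumptions.
    x, xm, k_hat = 0.0, 0.0, 0.0
    t = 0.0
    while t < T:
        r = math.sin(t)              # persistently exciting reference command
        u = -k_hat * x + r           # certainty-equivalence control law
        e = x - xm                   # tracking error against the reference model
        # Adaptation law k_hat' = gamma*e*x, chosen so the Lyapunov function
        # V = e^2/2 + (k_hat - k*)^2/(2*gamma), with ideal gain k* = a + am,
        # has derivative V' = -am*e^2 <= 0: adaptation cannot destabilize.
        k_hat += dt * gamma * e * x
        x += dt * (a * x + u)        # plant with unknown dynamics
        xm += dt * (-am * xm + r)    # stable reference model to track
        t += dt
    return x, xm, k_hat

x, xm, k_hat = simulate()
print(f"final tracking error {abs(x - xm):.4f}, adapted gain {k_hat:.3f}")
```

The controller never needs the true value of a: the gain k_hat is adjusted online from the measured tracking error, and the Lyapunov argument in the comments is what makes the adaptation provably stable rather than merely heuristic.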