explainability
Latest
Justus Piater
The Intelligent and Interactive Systems lab uses machine learning to enhance the flexibility, robustness, generalization, and explainability of robots and vision systems, focusing on methods for learning about structure, function, and other concepts that describe the world in actionable ways. Three University Assistant Positions involve minor teaching duties, with research topics negotiable within the lab's scope. One Project Position involves integrating robotic perception and execution mechanisms for task-oriented object manipulation in everyday environments, with a focus on affordance-driven object part segmentation and object manipulation learned through reinforcement learning.
N/A
The PhD research focuses on the fairness, explainability, and robustness of machine learning systems, studied through causal counterfactual analysis using formalisms from probabilistic graphical models, probabilistic circuits, and structural causal models.
Dr. Robert Legenstein
For the recently established Cluster of Excellence CoE Bilateral Artificial Intelligence (BILAI), funded by the Austrian Science Fund (FWF), we are looking for more than 50 PhD students and 10 Post-Doc researchers (m/f/d) to join our team at one of the six leading research institutions across Austria. In BILAI, major Austrian players in Artificial Intelligence (AI) are teaming up to work towards Broad AI. As opposed to Narrow AI, which is characterized by task-specific skills, Broad AI seeks to address a wide array of problems rather than being limited to a single task or domain. To develop its foundations, BILAI employs a Bilateral AI approach, combining sub-symbolic AI (neural networks and machine learning) with symbolic AI (logic, knowledge representation, and reasoning) in various ways. Harnessing the full potential of both symbolic and sub-symbolic approaches can open new avenues for AI, enhancing its ability to solve novel problems, adapt to diverse environments, improve reasoning skills, and increase efficiency in computation and data use. These key features enable a broad range of applications for Broad AI, from drug development and medicine to planning and scheduling, autonomous traffic management, and recommendation systems. Prioritizing fairness, transparency, and explainability, the development of Broad AI is crucial for addressing ethical concerns and ensuring a positive impact on society. The research team is committed to cross-disciplinary work in order to provide theory and models for future AI and to deploy them in applications.
Seeing things clearly: Image understanding through hard attention and reasoning with structured knowledge
In this talk, Jonathan aims to frame the current challenges of explainability and understanding in ML-driven approaches to image processing, and to show how explicit inference techniques could address them.