
explainability

3 Positions · 1 Seminar

Latest

Position: Artificial Intelligence

Justus Piater

University of Innsbruck, Digital Science Center, Department of Computer Science, Intelligent and Interactive Systems
University of Innsbruck, Austria
Jan 4, 2026

The Intelligent and Interactive Systems lab uses machine learning to enhance the flexibility, robustness, generalization and explainability of robots and vision systems, focusing on methods for learning about structure, function, and other concepts that describe the world in actionable ways. Three University-Assistant Positions involve minor teaching duties with negotiable research topics within the lab's scope. One Project Position involves the integration of robotic perception and execution mechanisms for task-oriented object manipulation in everyday environments, with a focus on affordance-driven object part segmentation and object manipulation using reinforcement learning.

Position: Artificial Intelligence

N/A

Dalle Molle Institute for Artificial Intelligence (IDSIA)
Lugano, Switzerland
Jan 4, 2026

The PhD research focuses on the fairness, explainability, and robustness of machine learning systems within the framework of causal counterfactual analysis using formalisms from probabilistic graphical models, probabilistic circuits, and structural causal models.

Position: Artificial Intelligence

Dr. Robert Legenstein

Graz University of Technology
Austria
Jan 4, 2026

For the recently established Cluster of Excellence CoE Bilateral Artificial Intelligence (BILAI), funded by the Austrian Science Fund (FWF), we are looking for more than 50 PhD students and 10 Post-Doc researchers (m/f/d) to join our team at one of the six leading research institutions across Austria. In BILAI, major Austrian players in Artificial Intelligence (AI) are teaming up to work towards Broad AI.

As opposed to Narrow AI, which is characterized by task-specific skills, Broad AI seeks to address a wide array of problems rather than being limited to a single task or domain. To develop its foundations, BILAI employs a Bilateral AI approach, effectively combining sub-symbolic AI (neural networks and machine learning) with symbolic AI (logic, knowledge representation, and reasoning) in various ways. Harnessing the full potential of both symbolic and sub-symbolic approaches can open new avenues for AI, enhancing its ability to solve novel problems, adapt to diverse environments, improve reasoning skills, and increase efficiency in computation and data use.

These key features enable a broad range of applications for Broad AI, from drug development and medicine to planning and scheduling, autonomous traffic management, and recommendation systems. Prioritizing fairness, transparency, and explainability, the development of Broad AI is crucial for addressing ethical concerns and ensuring a positive impact on society. The research team is committed to cross-disciplinary work in order to provide theory and models for future AI and deployment to applications.
