Ensuring the robustness of, and providing explanations for, Artificial Intelligence (AI) solutions applied in industry is imperative, particularly in the context of securing trustworthy AI. Increasingly complex real-world applications of AI utilise black-box models based on deep learning to achieve high predictive accuracy and enhance industrial process efficiency. However, the decisions made by these black-box models are often difficult for human experts to understand and, consequently, to act upon. The complete action plan to be carried out based on, for example, detected symptoms of damage and wear often requires complex reasoning and planning, involving many actors and balancing different priorities. Operators, technicians and managers therefore require insights to understand what is happening, why it is happening, how uncertain the observation is, and how to react. The effectiveness of an industrial system hinges on the relevance of the actions operators take in response to alarms. Establishing trustworthy AI thus involves not only accurate detection but also the provision of understandable, reliable, and comprehensive insights that facilitate informed decision-making and enhance the overall performance and robustness of the industrial system, as demonstrated by adaptive and efficient decision-making in complex, fast-changing environments.