The world is keen to leverage multi-faceted AI techniques and tools to deploy and deliver the next generation of business and IT applications. Resource-intensive gadgets, machines, instruments, appliances and equipment spread across a variety of environments are being empowered with AI capabilities. Connected products are enabled, individually and collectively, to be intelligent in their operations, offerings and outputs.
AI is being touted as the next-generation technology for visualizing and realizing a bevy of intelligent systems, networks and environments. However, there are challenges associated with the widespread adoption of AI methods. As we hand greater control to AI systems, we need to know how these models reach their decisions. The trust and transparency of AI systems are therefore seen as a critical challenge. Building knowledge graphs and linking them with AI systems is recommended as a viable way to overcome this trust issue and as the way forward to fulfilling the ideals of explainable AI.
The authors focus on explainable AI concepts, tools, frameworks and techniques. To make the workings of AI more transparent, they introduce knowledge graphs (KGs) as a means of bringing trust and transparency to the functioning of AI systems. They show how these technologies can be used to explain data fabric solutions, and how intelligent applications can be applied to greater effect in finance and healthcare.
Explainable Artificial Intelligence (XAI): Concepts, enabling tools, technologies and applications is aimed primarily at industry and academic researchers, scientists, engineers, lecturers and advanced students in the fields of IT and computer science, soft computing, AI/ML/DL, data science, semantic web, knowledge engineering and IoT. It will also prove a useful resource for software, product and project managers and developers in these fields.