The increasing complexity of systems and technologies in everyday use makes it hard, or even impossible, for humans to comprehend their function and behavior or to make sense of surprising observations. Explanation support can ease humans’ interactions with technology: explanations can help users understand a system’s function, justify system results, and increase trust in automated decisions. In this book, the authors provide an overview of existing work on explanation support for data-driven processes. They classify explainability requirements along three dimensions: the target of the explanation (“What”), the audience of the explanation (“Who”), and the purpose of the explanation (“Why”). They identify dominant themes across these dimensions and the high-level desiderata each implies, accompanied by several examples that motivate various problem settings. Finally, they discuss explainability solutions through the lens of the “How” dimension: how something is explained (the form of the explanation) and how explanations are derived (methodology). The book provides researchers and system developers with a high-level overview of the complex problems encountered in improving user interaction with modern large-scale data-driven computing systems and outlines a roadmap for solving these problems in the future.