Support vector machines (SVMs) are among the most actively researched topics in machine learning. SVMs have shown good performance in a number of applications, including text and image classification. However, the learning capability of SVMs comes at a cost: an inherent inability to explain, in a comprehensible form, the process by which a learning result was reached. The situation is therefore similar to that of neural networks, where the apparent lack of an explanation capability has led to various approaches for extracting symbolic rules from trained networks. For SVMs to gain wider acceptance in fields such as medical diagnosis and security-sensitive areas, it is desirable to offer an explanation capability. User explanation is often a legal requirement, since it must be possible to show how a decision was reached and why it was made. This book provides an overview of the field and introduces a number of approaches, developed by key researchers, for extracting rules from support vector machines. In addition, successful applications are outlined and future research opportunities are discussed. The book is a valuable reference for researchers and graduate students, and since it also serves as an introduction to the topic, it is well suited for classroom use. Because of the significance of both SVMs and user explanation, the book is also relevant to data mining practitioners and data analysts.