This volume collects key research papers in machine learning
produced at MIT and Siemens during a three-year joint
research effort. The papers span many different styles of
machine learning and are organized into three parts.
Part I, theory, includes three papers on theoretical aspects
of machine learning. The first two use the theory of
computational complexity to derive some fundamental limits
on what is efficiently learnable. The third provides an
efficient algorithm for identifying finite automata.
Part II, artificial intelligence and symbolic learning
methods, includes five papers giving an overview of the
state of the art and future developments in machine
learning, the subfield of artificial intelligence concerned
with automated knowledge acquisition and knowledge
revision.
Part III, neural and collective computation, includes five
papers sampling the theoretical diversity and trends in the
vigorous new research field of neural networks: massively
parallel symbolic induction, task decomposition through
competition, phoneme discrimination, behavior-based
learning, and self-repairing neural networks.