Now recognized as a classic that was ahead of its time, this volume has been republished to make available a rich source of background material for the field of neural networks. It serves as a bridge between today's "second wave" of neural networks and the "first wave" of the 1960s, collecting in one place a wealth of material of great interest to "second wavers." The book explores the seminal thinking behind the development of such areas as pattern recognition as the foundation of both supervised and unsupervised feedforward neural networks; optimization; gradient optimization algorithms within the contexts of general stochastic approximation theory and equation-error system identification; and reinforcement learning control systems, including short-term and long-term memory, goals and subgoals, and stochastic automata. The Preface places this content in the context of today's advances in neural networks.