This book examines how nonlinear optimization techniques can be applied to the training and testing of neural networks. It covers both well-established and recently developed training methods, including deterministic nonlinear optimization methods, stochastic nonlinear optimization methods, and advanced training schemes that combine deterministic and stochastic components. Convergence analyses and proofs for these techniques are presented, along with real applications of neural networks in areas such as pattern classification, bioinformatics, biomedicine, and finance.

Nonlinear optimization methods are applied extensively in the design of training protocols for artificial neural networks in both industry and academia. Such techniques allow dynamic, unsupervised neural network training to be implemented without the fine-tuning of several heuristic parameters. "Nonlinear Optimization Approaches for Training Neural Networks" is a response to the growing demand for innovations in this area of research.

This monograph presents a wide range of approaches to neural network training, providing theoretical justification for network behavior based on the theory of nonlinear optimization. It presents training algorithms, theoretical results on their convergence, and implementations in pseudocode. This approach explains the performance of the various methods and clarifies their individual characteristics, differences, advantages, and interrelationships. With this perspective, the reader can choose the most suitable training method without spending excessive effort configuring highly sensitive heuristic parameters.

This book can serve as a guide for researchers, graduate students, and lecturers interested in the development of neural networks and their training.
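The contrast the book draws between deterministic and stochastic optimization methods can be sketched on a toy problem. The example below is illustrative only and is not taken from the book: it fits a least-squares model once with full-batch gradient descent (a deterministic method, the whole dataset per step) and once with mini-batch stochastic gradient descent (random subsets per step, with a decaying step size). All variable names and hyperparameters are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = X @ w_true + small noise.
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)

def mse(w):
    """Mean squared error of weights w on the full dataset."""
    r = X @ w - y
    return float(r @ r) / len(y)

# Deterministic method: full-batch gradient descent.
# Every step uses the exact gradient over all 200 samples.
w_det = np.zeros(3)
for _ in range(500):
    grad = 2.0 * X.T @ (X @ w_det - y) / len(y)
    w_det -= 0.1 * grad

# Stochastic method: mini-batch SGD with a decaying step size.
# Each step uses a noisy gradient estimate from 16 random samples.
w_sto = np.zeros(3)
for t in range(2000):
    idx = rng.integers(0, len(y), size=16)
    Xb, yb = X[idx], y[idx]
    grad = 2.0 * Xb.T @ (Xb @ w_sto - yb) / len(yb)
    w_sto -= 0.1 / (1.0 + 0.01 * t) * grad

print("full-batch GD  MSE:", mse(w_det))
print("mini-batch SGD MSE:", mse(w_sto))
```

Both runs recover weights close to `w_true`; the deterministic run converges smoothly, while the stochastic run trades per-step accuracy for cheaper iterations, which is the trade-off the book's combined deterministic/stochastic schemes aim to balance.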