The book presents a comprehensive development of effective numerical methods for stochastic control problems in continuous time. The process models are diffusions, jump-diffusions or reflected diffusions of the type that occur in the majority of current applications. All the usual problem formulations are included, as well as those of more recent interest such as ergodic control, singular control and the types of reflected diffusions used as models of queuing networks. Convergence of the numerical approximations is proved via the efficient probabilistic methods of weak convergence theory. The methods apply to the calculation of functionals of uncontrolled processes and to optimal nonlinear filtering as well. The power of the approach for complex deterministic problems is illustrated via application to a large class of problems from the calculus of variations. The general approach is known as the Markov Chain Approximation Method. Essentially all that is required of the approximations are some natural local consistency conditions. The approximations are consistent with standard methods of numerical analysis.
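To make the local consistency conditions concrete, here is a minimal sketch for a one-dimensional diffusion dx = b(x) dt + σ(x) dw, using a standard upwind-style construction of the approximating chain on a grid of spacing h; the chain's conditional mean and variance per step should match b(x) Δt and σ²(x) Δt to leading order. The drift b, coefficient σ, test point, and function names below are illustrative choices, not taken from the book.

```python
import numpy as np

def mcam_step(b, sigma, x, h):
    """Transition probabilities to x +/- h and the interpolation interval
    at state x, for the chain approximating dx = b dt + sigma dw.
    Uses the drift-upwinded construction so probabilities stay in [0, 1]."""
    q = sigma(x) ** 2 + h * abs(b(x))                     # normalizer
    p_up = (sigma(x) ** 2 / 2 + h * max(b(x), 0.0)) / q   # move to x + h
    p_down = (sigma(x) ** 2 / 2 + h * max(-b(x), 0.0)) / q  # move to x - h
    dt = h ** 2 / q                                       # interpolation interval
    return p_up, p_down, dt

# Illustrative coefficients: an Ornstein-Uhlenbeck-type drift, unit diffusion.
b = lambda x: -x
sigma = lambda x: 1.0
x0, h = 0.5, 1e-3

p_up, p_down, dt = mcam_step(b, sigma, x0, h)

# Local consistency: conditional mean ~ b(x) dt, conditional variance
# ~ sigma(x)^2 dt as h -> 0 (the variance matches to O(h)).
mean = h * (p_up - p_down)
var = h ** 2 * (p_up + p_down) - mean ** 2
print(abs(mean - b(x0) * dt) < 1e-12)        # drift consistency (exact here)
print(abs(var - sigma(x0) ** 2 * dt) < 1e-6)  # variance consistency to O(h)
```

With these one-step statistics matching the diffusion's drift and variance locally, the chain's interpolated paths converge weakly to the diffusion as h → 0, which is the sense in which the numerical approximations converge.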
The required background in stochastic processes is surveyed, methods of approximation are developed extensively, and a chapter is devoted to computational techniques. The book is written on two levels: that of practice (algorithms and applications), and that of the mathematical development. Thus both the methods and their use should be broadly accessible.