This book considers three basic questions:
1. Why are vision systems fundamental and critical to autonomous flight?
2. What are the vision system tasks required for autonomous flight?
3. How can those tasks be approached?
It addresses the role of vision systems in autonomous operations and discusses the critical tasks required of a vision system, including taxi, takeoff, en-route navigation, detect and avoid, and landing, as well as formation flight and approach-and-docking operations at a terminal or with other vehicles. These tasks are analyzed to develop field-of-view, resolution, latency, and other sensing requirements, and to identify when a single sensor can serve multiple applications. Airspace classifications, landing visibility categories, decision height criteria, and typical runway dimensions are introduced.
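As a rough illustration of how such sensing requirements follow from task parameters, the sketch below works through a hypothetical detect-and-avoid sizing case. All of the numbers (intruder size, detection range, pixels on target, field of view, closure rate, reaction time) are assumed values chosen for the example, not figures from the book.

```python
import math

# Illustrative assumptions (not from the book): a detect-and-avoid case
# where a small aircraft must be detected early enough to maneuver.
target_size_m = 8.0          # assumed wingspan of the intruder aircraft
detection_range_m = 5000.0   # assumed range at which detection is required
pixels_on_target = 4         # assumed pixels across the target needed to detect it
sensor_fov_deg = 30.0        # assumed horizontal field of view of one camera
closure_speed_mps = 150.0    # assumed head-on closure rate
reaction_time_s = 10.0       # assumed time to detect, decide, and maneuver

# Angular subtense of the target at the required detection range (small-angle).
target_subtense_rad = target_size_m / detection_range_m

# Instantaneous field of view (angle per pixel) needed to place the desired
# number of pixels across the target.
required_ifov_rad = target_subtense_rad / pixels_on_target

# Number of pixels the sensor must span across its field of view.
required_pixels = math.radians(sensor_fov_deg) / required_ifov_rad

# Distance closed during the avoidance reaction time; the detection range
# must exceed this with margin.
closure_distance_m = closure_speed_mps * reaction_time_s

print(f"Required IFOV: {math.degrees(required_ifov_rad) * 3600:.1f} arcsec/pixel")
print(f"Required pixels across a {sensor_fov_deg:.0f} deg FOV: {required_pixels:.0f}")
print(f"Range closed during a {reaction_time_s:.0f} s reaction: {closure_distance_m:.0f} m")
```

The same arithmetic, run with different ranges, target sizes, and fields of view, is what drives the trade between wide coverage and fine resolution that the book develops for each task.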
The book provides an overview of sensors and phenomenology from the visible through the infrared, extending into the radar bands and covering both passive and active systems. Human visual system performance is discussed as a comparison benchmark. System architectures are also examined, including distributed aperture sensor systems and multiuse sensors. Finally, various algorithms for extracting information from sensor data are presented, such as moving target detection for detect and avoid, shape from motion, multisensor triangulation, model-based pose estimation, wire and cable detection, and geo-location techniques.
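To make one of these algorithm classes concrete, the sketch below shows a minimal moving-target detector based on simple frame differencing. It is an illustrative toy, not the book's method: it assumes grayscale frames that are already registered (camera motion compensated), and the threshold values are arbitrary placeholders.

```python
import numpy as np

def detect_moving_targets(prev_frame, curr_frame, diff_threshold=25, min_pixels=5):
    """Flag moving targets by frame differencing.

    A minimal illustration of the moving-target-detection idea for detect
    and avoid. Assumes registered grayscale frames; a real system would
    also have to compensate for ownship and camera motion.
    """
    # Absolute per-pixel change between consecutive frames.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))

    # Pixels whose change exceeds the threshold are candidate movers.
    mask = diff > diff_threshold

    # Declare a detection only if enough pixels moved, to reject noise.
    return mask if mask.sum() >= min_pixels else np.zeros_like(mask)

# Synthetic example: a small bright object shifts by two pixels between frames.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[30:33, 30:33] = 200
curr[30:33, 32:35] = 200

mask = detect_moving_targets(prev, curr)
print("moving pixels detected:", int(mask.sum()))
```

The other listed techniques, from model-based pose estimation to geo-location, follow the same pattern of turning raw sensor data into the specific quantities each flight task requires.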