If robots are to act intelligently in everyday environments, they must be able to perceive motion and its consequences. This book describes experimental advances made in the interpretation of visual motion over the last few years, advances that have moved researchers closer to emulating the way in which we recover information about the surrounding world. It describes algorithms that form a complete, implemented, and tested system developed by the authors: one that measures two-dimensional motion in an image sequence, computes three-dimensional structure and motion from it, and finally recognizes the moving objects.
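To make the first stage of that pipeline concrete, the following is a minimal sketch of measuring two-dimensional image motion at a single pixel by solving the brightness-constancy equations over a small window in a least-squares sense (a standard Lucas-Kanade-style estimate). It is illustrative only and not the authors' own algorithm; the window size and array names are assumptions.

import numpy as np

def flow_at(I0, I1, x, y, half=7):
    """Estimate the image velocity (u, v) at pixel (x, y) between two
    grayscale frames I0 and I1 by solving Ix*u + Iy*v + It = 0 over a
    (2*half+1)-pixel square window in a least-squares sense."""
    # Spatial gradients of the first frame and the temporal difference.
    Iy, Ix = np.gradient(I0.astype(float))
    It = I1.astype(float) - I0.astype(float)

    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    ix, iy, it = Ix[win].ravel(), Iy[win].ravel(), It[win].ravel()

    A = np.stack([ix, iy], axis=1)      # one brightness-constancy constraint per pixel
    b = -it
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# e.g. u, v = flow_at(frame0, frame1, x=120, y=80)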
The authors develop their algorithms for interpreting visual motion around four principal constraints. The first and simplest allows the scene structure to be recovered on a pointwise basis. The second constrains the scene to a set of connected straight edges. The third makes the transition between edge and surface representations by demanding that the recovered wireframe be strictly polyhedral. The final constraint assumes that the scene is composed of planar surfaces and recovers them directly.
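As a concrete illustration of the first, pointwise constraint: under a known pure camera translation T = (Tx, Ty, Tz) and unit focal length, an image point (x, y) with measured velocity (u, v) satisfies u = (x*Tz - Tx)/Z and v = (y*Tz - Ty)/Z, so its depth Z can be recovered directly, one point at a time. The sketch below assumes this simple translational model and is not the book's more general formulation.

import numpy as np

def depth_from_translation(x, y, u, v, T):
    """Recover the depth Z of one image point (x, y) from its measured
    image velocity (u, v) under a known pure camera translation
    T = (Tx, Ty, Tz), assuming unit focal length.
    Model: (u, v) = ((x*Tz - Tx)/Z, (y*Tz - Ty)/Z)."""
    Tx, Ty, Tz = T
    a = np.array([x * Tz - Tx, y * Tz - Ty])  # predicted motion, scaled by 1/Z
    m = np.array([u, v])                      # measured image motion
    inv_z = a @ m / (a @ a)                   # least-squares fit of 1/Z to m = a/Z
    return 1.0 / inv_z

# e.g. a point at (0.2, 0.1) moving by (0.05, 0.025) under T = (0, 0, 0.5)
# gives depth_from_translation(0.2, 0.1, 0.05, 0.025, (0.0, 0.0, 0.5)) == 2.0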
Contents
Image, Scene, and Motion
Computing Image Motion
Structure from Motion of Points
The Structure and Motion of Edges
From Edges to Surfaces
Structure and Motion of Planes
Visual Motion Segmentation
Matching to Edge Models
Matching to Planar Surfaces