Utility-Based Control for Computer Vision
Main authors: | , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Several key issues arise in implementing computer vision recognition of world objects in terms of Bayesian networks. Computational efficiency is a driving force. Perceptual networks are very deep, typically fifteen levels of structure. Images are wide, e.g., an unspecified number of edges may appear anywhere in an image of 512 x 512 pixels or larger. For efficiency, we dynamically instantiate hypotheses of observed objects: the network is not fixed, but is created incrementally at runtime. Generation of hypotheses of world objects and indexing of models for recognition are important, but they are not considered here [4, 11]. This work is aimed at near-term implementation with parallel computation in a radar surveillance system, ADRIES [5, 15], and a system for industrial part recognition, SUCCESSOR [2]. For many applications, vision must be faster to be practical, so efficient control of the machine vision process is critical. Perceptual operators may scan megapixels and may require minutes of computation time, so superfluous sensor actions and computation must be avoided. Parallel computation is available at several levels of processor capability. The potential for parallel, distributed computation in high-level vision means distributing non-homogeneous computations. This paper addresses the problem of task control in machine vision systems based on Bayesian probability models. We separate control from inference, extending previous work [3] to maximize utility instead of probability. Maximizing utility allows adopting perceptual strategies for efficient information gathering with sensors and analysis of sensor data. Results of controlling machine vision via utility to recognize military situations are presented. Future work will extend this approach to industrial part recognition for SUCCESSOR. |
DOI: | 10.48550/arxiv.1304.2367 |
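The abstract's efficiency point, that hypothesis nodes are instantiated dynamically rather than enumerated in a fixed network over every possible image event, can be illustrated with a minimal sketch. Everything below (the `PerceptualNetwork` and `HypothesisNode` names, the toy belief update) is an assumption made for illustration, not the paper's implementation; it only shows a network being grown at runtime as evidence is observed.

```python
"""Hedged sketch: instantiating hypothesis nodes incrementally at runtime.

Class and function names (HypothesisNode, PerceptualNetwork, observe_edge)
are illustrative assumptions; the paper gives no code. The point mirrors the
abstract: rather than building a fixed network over every possible edge in a
512 x 512 image, nodes are created only when an image event is observed and
linked to the object hypotheses they could support.
"""

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class HypothesisNode:
    label: str                      # e.g., "edge", "vehicle-outline", "convoy"
    belief: float                   # current probability the hypothesis holds
    children: List["HypothesisNode"] = field(default_factory=list)


class PerceptualNetwork:
    """A network grown on demand as evidence arrives, not enumerated up front."""

    def __init__(self) -> None:
        self.nodes: Dict[str, HypothesisNode] = {}

    def get_or_create(self, label: str, prior: float) -> HypothesisNode:
        if label not in self.nodes:
            self.nodes[label] = HypothesisNode(label=label, belief=prior)
        return self.nodes[label]

    def observe_edge(self, position: Tuple[int, int], strength: float) -> None:
        """Dynamically instantiate an edge-evidence node and a higher-level
        object hypothesis it could support, then link them."""
        edge = self.get_or_create(f"edge@{position}", prior=strength)
        # Hypothetical indexing step: an observed edge suggests an object
        # hypothesis; model indexing itself is outside this sketch.
        obj = self.get_or_create("vehicle-outline", prior=0.1)
        obj.children.append(edge)
        # Placeholder update: a real system would propagate belief through the
        # Bayesian network; here we only illustrate the incremental structure.
        obj.belief = min(1.0, obj.belief + 0.05 * strength)


if __name__ == "__main__":
    net = PerceptualNetwork()
    for pos, s in [((17, 240), 0.8), ((18, 241), 0.6)]:
        net.observe_edge(pos, s)
    print(sorted(net.nodes))  # only nodes for observed evidence exist
```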
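The central control idea, maximizing expected utility rather than probability when deciding which perceptual operator to run next, can likewise be sketched. The names and the simple utility model below (`PerceptualAction`, `expected_utility`, a linear cost penalty) are illustrative assumptions, not the paper's formulation; the sketch only shows ranking candidate sensor or analysis actions by the expected value of the resulting belief state minus the cost of running the operator.

```python
"""Hedged sketch: choosing the next perceptual operator by expected utility.

All names here (PerceptualAction, expected_utility, choose_action, ...) are
illustrative assumptions, not the paper's actual formulation. The idea is the
abstract's: rank candidate sensor/analysis actions by expected utility, trading
the value of the information gained against the cost of running the operator
(which may scan megapixels and take minutes of computation).
"""

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class PerceptualAction:
    """A candidate sensing or analysis step (e.g., run an edge operator on a region)."""
    name: str
    cost: float                                 # e.g., expected computation time
    outcome_probs: Dict[str, float]             # P(outcome | current evidence)
    posterior_given_outcome: Dict[str, float]   # P(hypothesis | outcome, evidence)


def state_value(p_hypothesis: float) -> float:
    """Assumed task utility of a belief state; here, simply the confidence in the
    target hypothesis. A real system would use a task-specific utility over
    decisions, not just the probability."""
    return p_hypothesis


def expected_utility(action: PerceptualAction, cost_weight: float = 1.0) -> float:
    """Expected value of the belief state after acting, minus weighted cost."""
    ev = sum(p * state_value(action.posterior_given_outcome[o])
             for o, p in action.outcome_probs.items())
    return ev - cost_weight * action.cost


def choose_action(actions: List[PerceptualAction]) -> PerceptualAction:
    """Control step: pick the perceptual operator with maximal expected utility."""
    return max(actions, key=expected_utility)


if __name__ == "__main__":
    # Two hypothetical operators: a cheap coarse screen vs. an expensive fine match.
    coarse = PerceptualAction(
        name="coarse_screen", cost=0.01,
        outcome_probs={"hit": 0.4, "miss": 0.6},
        posterior_given_outcome={"hit": 0.7, "miss": 0.2},
    )
    fine = PerceptualAction(
        name="fine_match", cost=0.30,
        outcome_probs={"hit": 0.35, "miss": 0.65},
        posterior_given_outcome={"hit": 0.95, "miss": 0.05},
    )
    best = choose_action([coarse, fine])
    print(f"next operator: {best.name} (EU={expected_utility(best):.3f})")
```

With these toy numbers the cheap coarse screen wins: the fine match gives a sharper posterior, but its cost outweighs the marginal information, which is the kind of trade-off the utility-based controller is meant to make explicit.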