Camera field coverage estimation through common event sensing
Format: Conference paper
Language: English
Abstract: We propose a technique to estimate camera pose by using common observations of events. Pulsed point-source lights (LEDs) with limited visibility in range and angle serve as surrogate viewable events. A camera yaw estimate with an error of +/-5 degrees can be achieved from a single event observation. Using multiple observations weighted by their expected accuracy further reduces this error. The effects of camera occlusions caused by 3-D terrain elements are also examined. When more than two cameras observe an event, or when multiple events are visible from the same camera, a secondary refinement pass is possible. Finally, given an occlusion map, the orientation of cameras that observe no events may be estimated by excluding the pointing angles at which events would otherwise be visible. Our results explore the dependence of system accuracy on factors such as camera density, visibility range, occlusion rate, LED emission angles, camera field of view, and GPS accuracy. For each system setup we compute a 3-D coverage map as the end product.
DOI: 10.1109/THS.2013.6699015
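The abstract's idea of combining multiple yaw observations weighted by their expected accuracy could be sketched as below. This is only an illustrative inverse-variance weighted circular mean, not the paper's actual estimator; the function name, the per-observation error model, and the weighting scheme are all assumptions for the sketch.

```python
import math

def fuse_yaw_estimates(estimates):
    """Combine yaw estimates (radians) weighted by expected accuracy.

    `estimates` is a list of (yaw, sigma) pairs, where sigma is the
    expected angular error of that observation (e.g. roughly 5 degrees
    for a single event observation, per the abstract). Uses an
    inverse-variance weighted circular mean so angles near the +/-pi
    wraparound fuse correctly.
    """
    # Weight each observation by 1/sigma^2 and average on the unit
    # circle, then recover the fused angle with atan2.
    sx = sum(math.cos(yaw) / sigma**2 for yaw, sigma in estimates)
    sy = sum(math.sin(yaw) / sigma**2 for yaw, sigma in estimates)
    return math.atan2(sy, sx)
```

As a usage example, fusing two accurate observations near 0 rad with one less accurate observation pulls the result toward the accurate pair; averaging on the unit circle rather than on raw angles is what keeps estimates straddling +/-pi from cancelling to a meaningless mean near zero.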