See the wind: Wind scale estimation with optical flow and VisualWind dataset
Published in: The Science of the Total Environment, 2022-11, Vol. 846, p. 157204, Article 157204
Format: Article
Language: English
Online Access: Full text
Abstract: Wind sensing by learning from video clips could empower cameras to sense the wind scale and significantly improve the spatiotemporal resolution of existing professional weather records, which are often at the city scale. Humans can interpret the wind scale from the motion of objects in the surrounding environment, especially the swaying dynamics of trees in the wind. The goal of this paper is to train cameras to sense the wind by capturing such motion information using optical flow and machine learning models. To this end, we introduce a novel video dataset of over 6000 labeled video clips covering eleven wind classes of the Beaufort scale. The videos are collected from social media, public cameras, and self-recording, with varying numbers of clips in each class. Every video clip is 10 s long with a varied frame rate and contains scenes of various trees swaying in different scales of wind from an approximately fixed viewpoint; the variation in scene over the course of a single video is minimal. We propose a dual-branch model to estimate the wind scale: a motion branch, which uses optical flow to extract the tree movement dynamics, and a visual branch, which provides complementary visual cues for wind scale estimation. The two branches are fused adaptively at the end of the network, achieving 86.69 % accuracy, which confirms the effectiveness of the proposed method. In experiments against a two-stage baseline model and a model consisting of only the motion branch, the proposed model achieves the best accuracy, with the potential to significantly improve efficiency in time and storage. The dataset and the code are publicly accessible online.
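The motion branch described in the abstract rests on dense optical flow between consecutive frames, reduced to a motion signal that tracks how strongly the trees sway. The sketch below illustrates that first step only; the paper's actual extraction pipeline is not detailed in this record, so the use of OpenCV's Farneback flow and the helper name flow_magnitude_series are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the motion cue: dense optical flow between consecutive
# frames, summarized as one mean flow magnitude per frame pair. The choice
# of Farneback flow is an assumption; the paper's pipeline may differ.
import cv2
import numpy as np

def flow_magnitude_series(video_path: str) -> np.ndarray:
    """Return the mean optical-flow magnitude for each consecutive frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback flow: one 2-D displacement vector per pixel.
        flow = cv2.calcOpticalFlowFarneback(
            prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)  # per-pixel displacement length
        magnitudes.append(mag.mean())
        prev = gray
    cap.release()
    return np.asarray(magnitudes)
```

A 10 s clip thus yields a short time series whose amplitude and variability grow with the wind scale, which is the kind of signal a motion branch can classify.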
Highlights:
• A large video dataset, VisualWind, focusing on trees in wind (first of its kind)
• Extracting motion dynamics of trees in wind using optical flow
• Magnitude normalization to eliminate the impact of unknown imaging distance (see the sketch after this list)
• A novel deep learning model achieving 86.69 % estimation accuracy on wind scales
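The magnitude-normalization highlight addresses a real ambiguity: the same wind produces larger pixel displacements when a tree is closer to the camera, so raw flow magnitudes are not comparable across clips. Below is a minimal sketch of one plausible scheme that rescales each clip by its own high-percentile magnitude; the record does not describe the paper's actual normalization, so normalize_magnitudes and the percentile choice are assumptions.

```python
# Hypothetical per-clip magnitude normalization. Dividing by a robust
# per-clip statistic cancels the distance-dependent scale factor, leaving
# the relative motion dynamics. The exact scheme used by the paper is not
# stated in this record.
import numpy as np

def normalize_magnitudes(mags: np.ndarray, pct: float = 95.0) -> np.ndarray:
    """Rescale a clip's flow-magnitude series to a distance-free range."""
    scale = np.percentile(mags, pct)
    return mags / max(scale, 1e-8)  # guard against near-static clips
```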
ISSN: 0048-9697, 1879-1026
DOI: 10.1016/j.scitotenv.2022.157204