See the wind: Wind scale estimation with optical flow and VisualWind dataset
Wind sensing by learning from video clips could empower cameras to sense the wind scale and significantly improve the spatiotemporal resolution of existing professional weather records, which are often at the city scale. Humans can interpret the wind scale from the motion of surrounding objects, especially the dynamics of trees moving in the wind. The goal of this paper is to train cameras to sense the wind by capturing such motion information using optical flow and machine learning models. To this end, we introduce a novel video dataset of over 6000 labeled video clips covering eleven wind classes of the Beaufort scale. The videos were collected from social media, public cameras, and self-recording, with varying numbers of clips in each class. Every video clip is 10 s long with a varied frame rate and shows various trees swaying in different scales of wind from an approximately fixed viewpoint; the scene varies minimally over the course of a single video. We propose a dual-branch model to estimate the wind scale: a motion branch uses optical flow to extract the tree movement dynamics, and a visual branch provides complementary visual cues. The two branches are fused adaptively at the end of the network, achieving 86.69 % accuracy, which confirms the effectiveness of the proposed method. In experiments against a two-stage baseline model and a model consisting only of the motion branch, the proposed model achieves the best accuracy, with the potential to significantly improve time and storage efficiency. The dataset and the code are publicly accessible online.
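The motion branch described in the abstract starts from dense optical flow computed between consecutive frames. The record does not specify the flow algorithm or its parameters, so the following is a minimal sketch assuming OpenCV's Farnebäck dense flow; the function parameters and sampling choices here are illustrative assumptions, not the authors' published pipeline:

```python
import cv2
import numpy as np

def clip_flow_magnitudes(path, max_pairs=100):
    """Compute dense optical-flow magnitude fields for consecutive
    frame pairs of a video clip using the Farneback method."""
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"cannot read {path}")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mags = []
    while len(mags) < max_pairs:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        # flow[..., 0] / flow[..., 1] are per-pixel x / y displacements;
        # the Euclidean norm gives each pixel's motion magnitude.
        mags.append(np.linalg.norm(flow, axis=-1))
        prev = gray
    cap.release()
    return np.stack(mags)  # shape: (num_pairs, H, W)
```

Swaying trees produce oscillating patterns in such a magnitude stack whose strength grows with the wind scale, which is the kind of motion signal a motion branch can exploit.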
Saved in:
Published in: | The Science of the total environment, 2022-11, Vol. 846, p. 157204, Article 157204 |
---|---|
Main authors: | Zhang, Qin; Xu, Jialang; Crane, Matthew; Luo, Chunbo |
Format: | Article |
Language: | English |
Subjects: | Motion feature; Optical flow; Visual feature; Wind scale estimation |
Online access: | Full text |
Highlights:
• A large video dataset, VisualWind, focusing on trees in the wind (the first of its kind)
• Extraction of the motion dynamics of trees in the wind using optical flow
• Magnitude normalization to eliminate the impact of unknown imaging distance (sketched after the record details below)
• A novel deep learning model achieving 86.69 % estimation accuracy on wind scales
doi | 10.1016/j.scitotenv.2022.157204 |
issn | 0048-9697 |
eissn | 1879-1026 |
publisher | Elsevier B.V. |
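One of the highlights above is magnitude normalization: the same wind moves a nearby tree across far more pixels than a distant one, so raw flow magnitudes confound wind scale with imaging distance. The record does not give the authors' formula; a minimal sketch of the idea, assuming a per-clip percentile divisor (my assumption, not the published scheme), could look like this:

```python
import numpy as np

def normalize_magnitudes(mags, eps=1e-6):
    """Rescale a clip's optical-flow magnitude stack by a per-clip
    statistic so that only relative motion dynamics remain, removing
    the dependence on the unknown camera-to-tree distance.
    The 99th-percentile divisor is an assumption for illustration."""
    scale = np.percentile(mags, 99)
    return mags / (scale + eps)
```

After such rescaling, clips of the same wind scale filmed from different distances yield comparable magnitude distributions, which is the stated purpose of the normalization.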
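The abstract also describes a dual-branch network whose motion and visual branches are fused adaptively at the end. No architectural details appear in this record, so the sketch below is a generic PyTorch rendering of that idea; the placeholder encoders, feature width, learned softmax gate, and layer sizes are all assumptions:

```python
import torch
import torch.nn as nn

class DualBranchWindNet(nn.Module):
    """Generic two-branch classifier: motion features plus visual
    features, adaptively fused before an 11-way Beaufort output."""
    def __init__(self, feat_dim=256, num_classes=11):
        super().__init__()
        # Placeholder encoders; the paper's actual backbones are
        # not specified in this record.
        self.motion_branch = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        self.visual_branch = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        # "Adaptive fusion" here: a learned gate weighs the two branches.
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, motion_feats, visual_feats):
        m = self.motion_branch(motion_feats)      # (batch, feat_dim)
        v = self.visual_branch(visual_feats)      # (batch, feat_dim)
        w = self.gate(torch.cat([m, v], dim=-1))  # (batch, 2) fusion weights
        fused = w[:, :1] * m + w[:, 1:] * v
        return self.classifier(fused)             # logits over 11 wind classes
```

A learned softmax gate is only one plausible reading of "fused adaptively"; attention or feature-wise weighting would fit the abstract's description equally well.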