Scalable Semantic 3D Mapping of Coral Reefs with Deep Learning

Coral reefs are among the most diverse ecosystems on our planet, and hundreds of millions of people depend on them. Unfortunately, most coral reefs are existentially threatened by global climate change and local anthropogenic pressures. To better understand the dynamics underlying the deterioration of reefs, monitoring at high spatial and temporal resolution is key. However, conventional monitoring methods for quantifying coral cover and species abundance are limited in scale because of the extensive manual labor they require. Although computer vision tools have been employed to aid in this process, in particular Structure-from-Motion (SfM) photogrammetry for 3D mapping and deep neural networks for image segmentation, analysis of the resulting data products creates a bottleneck that effectively limits their scalability. This paper presents a new paradigm for mapping underwater environments from ego-motion video, unifying 3D mapping systems that use machine learning to adapt to challenging underwater conditions with a modern approach to semantic segmentation of images. The method is demonstrated on coral reefs in the northern Gulf of Aqaba, Red Sea, achieving high-precision 3D semantic mapping at unprecedented scale and with significantly reduced labor costs: a 100 m video transect acquired within 5 minutes of diving with a cheap consumer-grade camera can be analyzed fully automatically within 5 minutes. Our approach significantly scales up coral reef monitoring by taking a leap towards fully automatic analysis of video transects, and it democratizes reef transects by reducing the labor, equipment, logistics, and computing costs, which can help to inform conservation policies more efficiently. The underlying computational method of learning-based Structure-from-Motion has broad implications for fast, low-cost mapping of underwater environments beyond coral reefs.
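The approach described above combines two learned components per video frame: a depth-and-pose estimate from learning-based Structure-from-Motion, and a per-pixel semantic segmentation. The following is a minimal sketch of how such per-frame outputs could be fused into a labeled 3D point cloud; it is not the authors' implementation, and estimate_depth_and_pose, segment_frame, and the intrinsics matrix K are hypothetical stand-ins for the trained models and camera calibration described in the paper.

# Minimal sketch (not the authors' implementation): fuse per-frame semantic
# segmentation with depth and camera poses from a learning-based SfM system
# into a labeled 3D point cloud. estimate_depth_and_pose and segment_frame
# are hypothetical stand-ins for the trained models.
import numpy as np

def backproject(depth, K):
    """Back-project a depth map (H, W) into camera-space 3D points (H*W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                  # normalized camera rays
    return rays * depth.reshape(-1, 1)                               # scale rays by depth

def semantic_point_cloud(frames, K, estimate_depth_and_pose, segment_frame):
    """Accumulate (x, y, z) points with class labels over a video transect."""
    points, labels = [], []
    for frame in frames:
        depth, cam_to_world = estimate_depth_and_pose(frame)  # (H, W) depth, (4, 4) pose
        seg = segment_frame(frame)                            # (H, W) class ids
        pts_cam = backproject(depth, K)
        pts_hom = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
        pts_world = (pts_hom @ cam_to_world.T)[:, :3]         # move points into world frame
        points.append(pts_world)
        labels.append(seg.reshape(-1))
    return np.concatenate(points), np.concatenate(labels)

Accumulating back-projected, labeled points per frame only illustrates the general idea; a practical system additionally has to deal with metric scale, drift, and the fusion of overlapping observations.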

Bibliographic details
Main authors: Sauder, Jonathan; Banc-Prandi, Guilhem; Meibom, Anders; Tuia, Devis
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
DOI: 10.48550/arxiv.2309.12804
Date: 2023-09-22
Source: arXiv.org
Rights: http://creativecommons.org/licenses/by/4.0 (open access)
Online access: https://arxiv.org/abs/2309.12804