Underwater images collected by an Autonomous Surface Vehicle in Aldabra-Arm01, Seychelles - 2022-10-21
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Sylvain Bonhommeau ; Julien Barde ; Matteo Contini |
description | This dataset was collected by an Autonomous Surface Vehicle in Aldabra-Arm01, Seychelles, on 2022-10-21.
Underwater or aerial images collected by scientists or citizens can have a wide variety of uses for science, management, or conservation.
These images can be annotated and shared to train AI models, which can in turn predict the objects present in the images.
We provide a set of tools (hardware and software) to collect marine data, predict species or habitats, and produce maps.
Image acquisition
This session comprises 17.5 GB of MP4 files, from which 4360 frames were extracted (at 2997/1000 fps, i.e. about 3 fps).
The frames are georeferenced.
According to predictions made by the Jacques model, 99.4% of the extracted images are useful and 0.6% are useless.
Multilabel predictions were then made on the useful frames with the DinoVd'eau model.
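As an illustrative sketch (not part of the actual pipeline), the exact capture time of each frame extracted at the rational rate 2997/1000 fps can be computed with Python's `fractions`, avoiding floating-point drift over long videos:

```python
from fractions import Fraction

def frame_timestamps(n_frames, fps=Fraction(2997, 1000)):
    """Return capture times (seconds) of the first n_frames
    extracted at the given rational frame rate."""
    period = 1 / fps  # exact seconds between consecutive frames
    return [float(i * period) for i in range(n_frames)]

# 4360 frames at ~3 fps correspond to roughly 24 minutes of footage.
times = frame_timestamps(4360)
```

Keeping the rate as a fraction means the thousandth timestamp is still exact, which matters when matching frames to GPS fixes.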
GPS information:
The data was processed with a PPK workflow to achieve centimeter-level GPS accuracy.
Base : files coming from an RTK GPS fixed station, or from any static positioning instrument that can provide correction frames.
Device GPS : Emlid Reach M2
Quality of our data - Q1: 99.01 %, Q2: 0.96 %, Q5: 0.03 %
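A quality breakdown like the one above can be reproduced from the PPK solution flags; the following is a minimal sketch (the input is a plain list of flags, not the exact Emlid/RTKLIB `.pos` file format):

```python
from collections import Counter

def quality_breakdown(fix_flags):
    """Percentage of positions per RTKLIB quality flag
    (1 = fixed, 2 = float, 5 = single)."""
    counts = Counter(fix_flags)
    total = len(fix_flags)
    return {q: 100 * n / total for q, n in counts.items()}

# e.g. a track with 97 fixed, 2 float and 1 single solution:
pct = quality_breakdown([1] * 97 + [2] * 2 + [5])
# pct[1] == 97.0, pct[2] == 2.0, pct[5] == 1.0
```

The session's Q1 share of 99.01 % indicates that nearly all positions reached a fixed (centimeter-level) solution.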
Bathymetry
The data are collected with an S500 single-beam echosounder.
We keep only the values whose GPS correction is in Q1.
We keep the points that correspond to waypoints.
We keep the raw data where the estimated depth lies between 0.2 m and 40.0 m.
The data are first referenced against the WGS84 ellipsoid.
At the end of processing, the data are projected onto a homogeneous grid to create a raster and shapefiles.
The size of the grid cells is 0.121 m.
The raster and shapefiles are generated by linear interpolation. The 3D reconstruction algorithm is Ball-Pivoting.
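The depth filter and gridding steps above can be sketched as follows (the real pipeline interpolates linearly between soundings; this simplified version only averages the soundings that fall into each 0.121 m cell, and the function name is illustrative):

```python
def grid_soundings(points, cell=0.121, dmin=0.2, dmax=40.0):
    """Keep soundings with depth in [dmin, dmax] and average them
    per square grid cell of side `cell` (projected coordinates, metres)."""
    cells = {}
    for x, y, depth in points:
        if not dmin <= depth <= dmax:
            continue  # discard out-of-range depth estimates
        key = (int(x // cell), int(y // cell))
        cells.setdefault(key, []).append(depth)
    return {k: sum(v) / len(v) for k, v in cells.items()}

# two valid soundings fall in the same cell and are averaged;
# the 50 m sounding is outside the 0.2-40.0 m range and is dropped
grid = grid_soundings([(0.01, 0.01, 1.0), (0.05, 0.05, 3.0), (1.0, 1.0, 50.0)])
```

The resulting per-cell values are what a raster layer would store before interpolation fills the empty cells.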
Generic folder structure
YYYYMMDD_COUNTRYCODE-optionalplace_device_session-number
├── DCIM : folder to store videos and photos depending on the media collected.
├── GPS : folder to store any positioning-related file. If any correction of the files is possible (e.g. Post-Processed Kinematic using RINEX data), the distinction between device data and base data is made. If, on the other hand, only device position data are present and the files cannot be corrected by post-processing (e.g. GPX files), then the distinction between base and device is not made and the files are placed directly at the root of the GPS folder.
│ ├── BASE : files coming from an RTK station or any static positioning instrument.
│ └── DEVICE : files coming from the device.
├── METADATA : folder with general information files about the session.
├── PROCESSED_DATA : contains all the folders needed to store the results of the data processing of the current session.
│ ├── BATHY : output folder for bathymetry raw data extracted from mission logs.
│ ├── FRAMES : output folder for georeferenced frames extracted from DCIM videos.
│ ├── IA : destination folder for image recognition predictions.
│ └── PHOTOGRAMMETRY : destination folder for models reconstructed by photogrammetry.
└── SENSORS : folder to store files coming from other sources (bathymetry data from the echosounder, log file from the autopilot, mission plan, etc.).
Software
All the raw data were processed using our plancha-workflow.
All predictions were generated by our inference pipeline.
You can find all the scripts needed to download this data in this repository.
Enjoy your data with SeatizenDOI! |
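A session tree following this generic layout could be scaffolded with a few lines of Python (a sketch; the example session name is illustrative, built from the naming pattern above with SYC as the Seychelles country code):

```python
from pathlib import Path

# subfolders taken from the generic session layout described above
SESSION_LAYOUT = [
    "DCIM",
    "GPS/BASE",
    "GPS/DEVICE",
    "METADATA",
    "PROCESSED_DATA/BATHY",
    "PROCESSED_DATA/FRAMES",
    "PROCESSED_DATA/IA",
    "PROCESSED_DATA/PHOTOGRAMMETRY",
    "SENSORS",
]

def create_session(root, name="20221021_SYC-aldabra-arm01_ASV-1_01"):
    """Create an empty session tree under root; the name follows
    YYYYMMDD_COUNTRYCODE-optionalplace_device_session-number."""
    session = Path(root) / name
    for sub in SESSION_LAYOUT:
        (session / sub).mkdir(parents=True, exist_ok=True)
    return session
```

Note that GPS/BASE and GPS/DEVICE are only meaningful when PPK correction is possible; for GPX-only sessions the files go directly under GPS.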
doi_str_mv | 10.5281/zenodo.11130577 |
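As a small illustrative helper (not part of the dataset's tooling), the Zenodo record page can be derived from a DOI of this shape, since Zenodo DOIs follow the `10.5281/zenodo.<id>` pattern:

```python
def zenodo_record_url(doi):
    """Derive the Zenodo record page URL from a Zenodo DOI
    (assumes the usual 10.5281/zenodo.<id> pattern)."""
    prefix = "10.5281/zenodo."
    if not doi.startswith(prefix):
        raise ValueError(f"not a Zenodo DOI: {doi}")
    record_id = doi[len(prefix):]
    return f"https://zenodo.org/records/{record_id}"

url = zenodo_record_url("10.5281/zenodo.11130577")
# url == "https://zenodo.org/records/11130577"
```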
format | Dataset |
fullrecord | DataCite record for DOI 10.5281/zenodo.11130577; its abstract duplicates the description above. Publisher: Zenodo. Publication date: 2024-07-17. Creator ORCID iDs: 0000-0002-0882-5918 ; 0000-0002-3519-6141 ; 0009-0007-8665-6201. |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.5281/zenodo.11130577 |
ispartof | |
issn | |
language | eng |
recordid | cdi_datacite_primary_10_5281_zenodo_11130577 |
source | DataCite |
subjects | Artificial Intelligence ; ASV ; Autonomous Surface Vehicle ; Bathymetry ; Citizen Sciences ; Computer Vision ; Coral Reef ; Coral Reef Habitat ; Deep Learning ; Ecology ; FOS: Biological sciences ; GeoAI ; Global Coral Reef Monitoring Network ; Indian Ocean ; Machine Learning ; Mapping ; Plancha ; Reef Ecosystem ; Remote Sensing ; Seychelles |
title | Underwater images collected by an Autonomous Surface Vehicle in Aldabra-Arm01, Seychelles - 2022-10-21 |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T15%3A23%3A07IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-datacite_PQ8&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=unknown&rft.au=Sylvain%20Bonhommeau&rft.date=2024-07-17&rft_id=info:doi/10.5281/zenodo.11130577&rft_dat=%3Cdatacite_PQ8%3E10_5281_zenodo_11130577%3C/datacite_PQ8%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |