An algorithm for cattle counting in rangeland based on multi‐scale perception and image association
Published in: | IET Image Processing 2024-11, Vol.18 (13), p.4151-4167 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Summary: | To effectively address common issues in ranch settings, such as cattle obscured by fences and images prone to colour shifts and high brightness, this paper proposes an algorithm for counting cattle based on multi‐scale perception and image correlation. The algorithm first adjusts the model output scale to enhance cattle detection under these conditions. It incorporates efficient Partial Convolution (PConv) to replace the 3 × 3 convolutions in the Neck segment of the YOLOv7 network, boosting computational speed and reducing complexity. To streamline feature fusion, Dynamic Head (DyHead) unifies multiple attention operations in the Neck segment, improving efficiency. Additionally, it introduces MPDIoU, a novel bounding‐box similarity metric based on minimum point distance, which encompasses the factors considered by existing loss functions while simplifying computation. Experimental results demonstrate that the algorithm significantly improves detection, achieving 98.8% accuracy, 99.0% recall, and a 92.1% mAP. Compared with mainstream SOTA models, precision increases by 0.4%, recall by 2.0%, and mAP by 2.2%, while model size decreases by 23.9%, parameter count by 23.0%, and computational load by 6.1%. The algorithm shows improvements across all indices, meeting the challenge of real‐time cattle counting on ranches under complex conditions.
In this study, YOLO+P is proposed, which successfully addresses these difficulties. Experimental results show that the recognition accuracy of the proposed YOLO+P algorithm surpasses that of state‐of‐the‐art deep learning methods. This paper is well suited to IET Image Processing and should interest a wide range of readers, including computer scientists and researchers in related fields. |
---|---|
ISSN: | 1751-9659, 1751-9667 |
DOI: | 10.1049/ipr2.13240 |
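The MPDIoU metric mentioned in the summary scores box overlap by combining standard IoU with the distances between matching corner points. The sketch below illustrates that idea for axis‐aligned boxes; the function name, corner pairing, and normalisation by the squared image diagonal are illustrative assumptions based on the general minimum‐point‐distance formulation, not the paper's exact implementation.

```python
def mpdiou(box_a, box_b, img_w, img_h):
    """Illustrative MPDIoU for boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Plain IoU: intersection area over union area.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0

    # Squared distances between the top-left and bottom-right corner pairs,
    # normalised by the squared image diagonal so the penalty is scale-free.
    d1 = (ax1 - bx1) ** 2 + (ay1 - by1) ** 2
    d2 = (ax2 - bx2) ** 2 + (ay2 - by2) ** 2
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm
```

Identical boxes score 1.0, and the corner-distance terms push the score below plain IoU as the boxes drift apart, which is what lets a loss based on this metric keep a useful gradient even when boxes do not overlap.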