84 Low-cost machine learning algorithms to predict growth and carcass traits in pigs
Published in: Journal of animal science, 2024-09, Vol. 102 (Supplement_3), p. 21-22
Main authors: , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Genomic prediction involves analyzing pedigree, phenotype, and genomic data, facilitating the early prediction of the genetic merit of an individual. Phenotyping is an essential component of a genomic selection program. However, depending on the trait, traditional phenotyping can be time-consuming and labor-intensive. On the other hand, with advances in digital phenotyping, obtaining regular images of animals has become increasingly accessible. Coupled with computer vision and machine learning algorithms, phenotypes for traits of interest can be extracted from the images and used for different purposes, including genomic predictions. Previous attempts to predict genetic merit using machine learning were ineffective and computationally expensive for routine genetic evaluations. This research aimed to present a low-cost algorithm for predicting traits commonly used in pig breeding programs, such as weight (WT), loin depth (LD), and backfat thickness (BF), from image data. The dataset included two-dimensional side views of 9,283 pigs captured between 2021 and 2023, along with manually recorded phenotypes for WT, LD, and BF. We proposed a pipeline consisting of two major tasks to predict phenotypes: body segmentation and feature extraction using a convolutional neural network (CNN). Initially, we used the YOLOv7 model, leveraging its pre-trained layers, to extract the region of interest for each individual from the source image. Then, low-variance images were removed to ensure a wide range of diversity within each individual. The images were then annotated with the prerecorded traits and divided into training and test sets (80% and 20%, respectively), with no overlap. Our study benchmarked the VGG16, InceptionV3, and ResNet50 architectures, extracting features from the outputs of their final convolutional blocks or layers to reduce potential feature loss in the fully connected layers.
The results show that features extracted from the ResNet50 model had the lowest mean absolute error, with values of 5.22 for weight, 5.28 for loin depth, and 1.31 for backfat thickness. Performance could be improved further by fine-tuning the model parameters. As a future step, the predicted phenotypes will be utilized in genomic evaluations using a multiple-trait model, where a subset of the predicted animals will also have prerecorded traits to account for prediction error.
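The data-preparation steps described in the abstract (low-variance image filtering, a non-overlapping 80/20 split, and mean-absolute-error evaluation) can be sketched as follows. This is a minimal illustration, not the authors' code: the random arrays stand in for the YOLOv7 crops and recorded weights, the 5% variance cutoff is a hypothetical choice (the abstract does not state the threshold used), and flattened pixels stand in for the ResNet50 convolutional features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the cropped side-view images produced by the
# segmentation step, and the manually recorded weights (WT).
images = rng.random((200, 64, 64))            # 200 grayscale crops, 64x64 px
weights = rng.normal(120.0, 15.0, size=200)   # illustrative WT values

# 1) Remove low-variance images (cutoff below is a hypothetical choice).
pixel_var = images.reshape(len(images), -1).var(axis=1)
keep = pixel_var > np.quantile(pixel_var, 0.05)
images, weights = images[keep], weights[keep]

# 2) Split into training and test sets (80% / 20%) with no overlap.
idx = rng.permutation(len(images))
cut = int(0.8 * len(images))
train_idx, test_idx = idx[:cut], idx[cut:]

# 3) Stand-in "features": flattened pixels; in the study these came
#    from the final convolutional block of a pre-trained CNN.
X_train = images[train_idx].reshape(len(train_idx), -1)
X_test = images[test_idx].reshape(len(test_idx), -1)

# 4) Mean absolute error, the metric reported in the abstract.
def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

# A trivial baseline predictor (train-set mean), for illustration only.
baseline_pred = np.full(len(test_idx), weights[train_idx].mean())
test_mae = mae(weights[test_idx], baseline_pred)
```

In the actual pipeline, a regression head fitted on the CNN features would replace the mean baseline, and `test_mae` would be compared across the VGG16, InceptionV3, and ResNet50 feature sets.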
ISSN: 0021-8812, 1525-3163
DOI: 10.1093/jas/skae234.024