When Code Smells Meet ML: On the Lifecycle of ML-specific Code Smells in ML-enabled Systems
Saved in:
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Context. The adoption of Machine Learning (ML)-enabled systems is steadily increasing. Nevertheless, there is a shortage of ML-specific quality assurance approaches, possibly because of the limited knowledge of how quality-related concerns emerge and evolve in ML-enabled systems. Objective. We aim to investigate the emergence and evolution of specific types of quality-related concerns known as ML-specific code smells, i.e., sub-optimal implementation solutions applied to ML pipelines that may significantly decrease both the quality and maintainability of ML-enabled systems. More specifically, we present a plan to study ML-specific code smells by empirically analyzing (i) their prevalence in real ML-enabled systems, (ii) how they are introduced and removed, and (iii) their survivability. Method. We will conduct an exploratory study, mining a large dataset of ML-enabled systems and analyzing over 400k commits across 337 projects. We will track and inspect the introduction and evolution of ML smells through CodeSmile, a novel ML smell detector that we will build to enable our investigation and to detect ML-specific code smells. |
DOI: | 10.48550/arxiv.2403.08311 |