Interpreting Audiograms with Multi-stage Neural Networks
Format: Article
Language: English
Abstract: Audiograms are a particular type of line chart representing an individual's hearing level at various frequencies. Audiologists use them to diagnose hearing loss and then to select and tune appropriate hearing aids for customers. Several projects, such as Autoaudio, aim to accelerate this process by means of machine learning, but all existing models can at best detect audiograms in images and classify them into general categories; they are unable to extract hearing level information from detected audiograms by interpreting the marks, axes, and lines. To address this issue, we propose a Multi-stage Audiogram Interpretation Network (MAIN) that reads hearing level data directly from photos of audiograms. We also established Open Audiogram, an open dataset of audiogram images annotated with marks and axes, on which we trained and evaluated the proposed model. Experiments show that our model is feasible and reliable.
DOI: 10.48550/arxiv.2112.09357
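The abstract does not detail how the interpretation stage converts detected symbols into hearing levels, but the core idea of reading values off calibrated chart axes can be illustrated. Below is a minimal sketch, assuming an upstream detector has already located axis tick marks (with OCR-read labels) and mark centers in pixel coordinates; all names and values here (`calibrate_axis`, the tick arrays, `marks_px`) are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def calibrate_axis(tick_pixels, tick_values, log_scale=False):
    """Return a function mapping a pixel coordinate to an axis value
    by interpolating between detected tick marks."""
    tick_pixels = np.asarray(tick_pixels, dtype=float)
    tick_values = np.asarray(tick_values, dtype=float)
    if log_scale:
        # Audiogram frequency axes are logarithmic: interpolate in log space.
        log_values = np.log2(tick_values)
        return lambda px: float(2.0 ** np.interp(px, tick_pixels, log_values))
    return lambda px: float(np.interp(px, tick_pixels, tick_values))

# Hypothetical detector output: pixel positions of axis ticks and their
# OCR-read labels (frequency in Hz on x, hearing level in dB HL on y).
freq_ticks_px  = [60, 140, 220, 300, 380, 460]
freq_ticks_hz  = [250, 500, 1000, 2000, 4000, 8000]
level_ticks_px = [40, 120, 200, 280, 360]
level_ticks_db = [0, 20, 40, 60, 80]   # dB HL increases downward, as pixel y does

px_to_hz = calibrate_axis(freq_ticks_px, freq_ticks_hz, log_scale=True)
px_to_db = calibrate_axis(level_ticks_px, level_ticks_db)

# Detected air-conduction marks ('o' symbols) as (x, y) pixel centers.
marks_px = [(60, 80), (220, 160), (460, 300)]
readings = [(round(px_to_hz(x)), round(px_to_db(y))) for x, y in marks_px]
print(readings)  # -> [(250, 10), (1000, 30), (8000, 65)]
```

Interpolating frequencies in log space reflects the standard audiogram convention of octave-spaced frequencies along the x-axis, while hearing levels interpolate linearly in dB HL.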