Variance-Reduced Stochastic Optimization for Efficient Inference of Hidden Markov Models
| Field | Value |
|---|---|
| Main authors | |
| Format | Dataset |
| Language | eng |
| Subjects | |
| Online access | Order full text |
| Summary | Hidden Markov models (HMMs) are popular models for identifying a finite number of latent states from sequential data. However, fitting them to large datasets can be computationally demanding because most likelihood maximization techniques require iterating through the entire underlying dataset for every parameter update. We propose a novel optimization algorithm that updates the parameters of an HMM without iterating through the entire dataset. Namely, we combine a partial E step with variance-reduced stochastic optimization within the M step. We prove the algorithm converges under certain regularity conditions. We test our algorithm empirically using a simulation study as well as a case study of kinematic data collected using suction-cup-attached biologgers from eight northern resident killer whales (Orcinus orca) off the western coast of Canada. In both, our algorithm converges in fewer epochs, with less computation time, and to regions of higher likelihood compared to standard numerical optimization techniques. Our algorithm allows practitioners to fit complicated HMMs to large time-series datasets more efficiently than existing baselines. Supplemental materials are available online. |
| DOI | 10.6084/m9.figshare.25768631 |
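The variance-reduced stochastic optimization the summary refers to within the M step is in the family of control-variate methods such as SVRG, which correct each stochastic gradient using a full gradient computed at a periodic snapshot. The sketch below is a generic SVRG-style minimizer for an average of per-observation losses, not the paper's HMM-specific algorithm; the function `svrg` and its parameters are illustrative assumptions.

```python
import numpy as np

def svrg(grad_i, w0, n, step=0.1, epochs=20, inner=None, rng=None):
    """SVRG-style minimizer of (1/n) * sum_i f_i(w).

    grad_i(w, i) -> gradient of the i-th per-observation loss at w.
    Generic illustration of the variance-reduction idea; the paper's
    HMM algorithm pairs an update like this with a partial E step.
    """
    rng = np.random.default_rng(rng)
    inner = inner or n
    w = np.asarray(w0, dtype=float)
    for _ in range(epochs):
        snapshot = w.copy()
        # One full pass per epoch: exact gradient at the snapshot.
        full_grad = sum(grad_i(snapshot, i) for i in range(n)) / n
        for _ in range(inner):
            i = rng.integers(n)
            # Control-variate update: the stochastic gradient, minus its
            # value at the snapshot, plus the full snapshot gradient.
            # Unbiased, with variance shrinking as w nears the snapshot.
            g = grad_i(w, i) - grad_i(snapshot, i) + full_grad
            w = w - step * g
    return w
```

In the HMM setting, each `grad_i` would plausibly be the gradient of one observation's contribution to the expected complete-data log-likelihood from the partial E step (an assumption here); the snapshot full-gradient pass is what lets the inner updates avoid touching the whole dataset.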