Analyzing Hypersensitive AI: Instability in Corporate-Scale Machine Learning
| Main authors: | , , , , , |
| --- | --- |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Published in: Proceedings of the 2nd Workshop on Explainable Artificial Intelligence (XAI 2018)

Abstract: Predictive geometric models deliver excellent results for many machine learning use cases. Despite their strong performance, neural predictive algorithms can show unexpected degrees of instability and variance, particularly when applied to large datasets. We present an approach for measuring changes in geometric models with respect to both output consistency and topological stability. Using the example of a recommender system built on word2vec, we analyze the influence of single data points, approximation methods, and parameter settings. Our findings can help to stabilize models where needed and to detect differences in the informational value of data points at large scale.
DOI: 10.48550/arxiv.1807.07404
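
The record itself contains no code. As a rough sketch of how the output consistency of a word2vec model could be quantified (not the authors' implementation), the Python example below trains two models that differ only in their random seed and compares the top-k nearest-neighbor sets they assign to the same items. The toy corpus, the hyperparameters, and the use of gensim are assumptions for illustration.

```python
# Minimal sketch (assumed setup, not the paper's code): measure the output
# consistency of word2vec by retraining with a different seed and comparing
# the top-k nearest-neighbor sets per item. Assumes gensim 4.x.
from gensim.models import Word2Vec

# Toy "interaction sequences" standing in for a recommender-system corpus;
# repeated so word2vec sees enough co-occurrence statistics.
corpus = [
    ["item_a", "item_b", "item_c"],
    ["item_b", "item_c", "item_d"],
    ["item_a", "item_d", "item_e"],
] * 200

def train(seed):
    # workers=1 keeps training single-threaded so the seed (mostly)
    # determines the result; multithreaded training is nondeterministic.
    return Word2Vec(corpus, vector_size=32, window=2, min_count=1,
                    workers=1, seed=seed)

def neighbor_overlap(m1, m2, item, k=3):
    """Jaccard overlap of the two models' top-k neighbor sets for `item`."""
    n1 = {w for w, _ in m1.wv.most_similar(item, topn=k)}
    n2 = {w for w, _ in m2.wv.most_similar(item, topn=k)}
    return len(n1 & n2) / len(n1 | n2)

m1, m2 = train(seed=1), train(seed=2)
for item in ["item_a", "item_c"]:
    print(item, neighbor_overlap(m1, m2, item))
```

An overlap near 1 means an item's neighborhood, and hence any recommendations derived from it, survives retraining; a low overlap flags the kind of instability the paper investigates. Repeating the comparison with a single data point removed, rather than a changed seed, would probe the influence of individual data points in the same way.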