Neural Networks for Partially Linear Quantile Regression
Format: Article
Language: English
Abstract: Deep learning has enjoyed tremendous success in a variety of applications, but its application to quantile regression remains scarce. A major advantage of the deep learning approach is its flexibility to model complex data more parsimoniously than nonparametric smoothing methods. However, while deep learning has brought breakthroughs in prediction, it often lacks interpretability due to the black-box nature of multilayer structures with millions of parameters, and hence is not well suited for statistical inference. In this paper, we leverage the advantages of deep learning and apply it to quantile regression, where the goal is to produce interpretable results and perform statistical inference. We achieve this by adopting a semiparametric approach based on the partially linear quantile regression model, in which covariates of primary interest for statistical inference are modelled linearly and all other covariates are modelled nonparametrically by means of a deep neural network. In addition to the new methodology, we provide theoretical justification for the proposed model by establishing the root-$n$ consistency and asymptotic normality of the parametric coefficient estimator and the minimax-optimal convergence rate of the neural nonparametric function estimator. Across several simulated and real data examples, our proposed model empirically produces superior estimates and more accurate predictions than various alternative approaches.
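The partially linear structure described in the abstract can be illustrated with a minimal numerical sketch. The quantile (check) loss drives the fit, one coefficient of interest enters linearly, and a small one-hidden-layer ReLU network stands in for the paper's deep nonparametric component. All specifics here (network width, learning rate, iteration count, the toy data-generating process) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pinball_loss(u, tau):
    """Check loss rho_tau(u) = u * (tau - 1{u < 0}) used in quantile regression."""
    return u * (tau - (u < 0))

# Toy data (assumed for illustration): y depends linearly on z and nonlinearly on x.
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)               # covariate of primary interest (linear part)
x = rng.uniform(-2.0, 2.0, size=n)   # remaining covariate (nonparametric part)
y = 1.5 * z + np.sin(x) + rng.normal(scale=0.1, size=n)

tau = 0.5   # median regression
H = 16      # hidden width (hypothetical)
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)
beta = 0.0  # parametric coefficient to be estimated
lr = 0.05

# Full-batch subgradient descent on the average check loss.
for _ in range(2000):
    h = np.maximum(x[:, None] @ W1 + b1, 0.0)   # ReLU hidden layer
    g = (h @ W2 + b2).ravel()                   # nonparametric component g(x)
    u = y - beta * z - g                        # quantile residuals
    # Subgradient of the mean check loss w.r.t. each fitted value.
    d = -(tau - (u < 0).astype(float)) / n
    grad_beta = np.sum(d * z)
    dg = d[:, None]
    grad_W2 = h.T @ dg; grad_b2 = dg.sum(0)
    dh = (dg @ W2.T) * (h > 0)                  # backprop through ReLU
    grad_W1 = x[:, None].T @ dh; grad_b1 = dh.sum(0)
    beta -= lr * grad_beta
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print(beta)  # should move toward the true linear coefficient, 1.5
```

This is only a caricature of the method: the paper uses a deep network for the nonparametric part and establishes root-$n$ consistency and asymptotic normality for the estimator of `beta`, whereas this sketch merely shows why the linear coefficient remains interpretable while the network absorbs the nonlinear signal.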
DOI: 10.48550/arxiv.2106.06225