Neurocomputing, v.399, 2020, pp. 141-152
Coscrato, Victor (Federal University of São Carlos; corresponding author), Inácio, Marco Henrique de Almeida (Federal University of São Carlos), Izbicki, Rafael (Federal University of São Carlos)
Abstract: Stacking methods improve the prediction performance of regression models. A simple way to stack base regression estimators is to combine them linearly, as done by Breiman [1]. Even though this approach is useful from an interpretative perspective, it often does not lead to high predicti...
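As context for the abstract, the linear stacking it refers to can be sketched in a few lines. Below is a minimal sketch in the spirit of Breiman's stacked regressions [1], assuming scikit-learn and SciPy are available; the synthetic dataset, the three base estimators, and the stacked_predict helper are illustrative assumptions, not the authors' actual setup.

# Illustrative sketch of Breiman-style linear stacking [1]: combine base
# regression estimators linearly, with weights fit on out-of-fold
# predictions. Dataset and model choices here are assumptions.
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

base_models = [
    LinearRegression(),
    KNeighborsRegressor(n_neighbors=10),
    DecisionTreeRegressor(max_depth=5, random_state=0),
]

# Each column holds one base estimator's cross-validated predictions on
# the training set; out-of-fold predictions keep the weight fit honest.
Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in base_models])

# Breiman constrains the combination weights to be non-negative, which a
# non-negative least squares solve gives directly.
weights, _ = nnls(Z, y)

# Refit every base model on the full training set before prediction.
for m in base_models:
    m.fit(X, y)

def stacked_predict(X_new):
    """Linear combination of the base models' predictions."""
    P = np.column_stack([m.predict(X_new) for m in base_models])
    return P @ weights

print("stacking weights:", weights)
print("first predictions:", stacked_predict(X[:3]))

Fitting the weights on cross-validated rather than in-sample predictions is what keeps an overfit base model from dominating the combination; this fixed-weight linear scheme is the baseline the truncated abstract is discussing.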
References
[1] L. Breiman, Stacked regressions, Mach. Learn. 24(1), 49, 1996. doi:10.1007/BF00117832
[2] S. Džeroski, Is combining classifiers with stacking better than selecting the best one?, Mach. Learn. 54(3), 255, 2004. doi:10.1023/B:MACH.0000015881.36452.6e
[3] T. Dietterich, Ensemble methods in machine learning, in: Proceedings of the International Workshop on Multiple Classifier Systems, p. 1, 2000.
[4] J. Sill, G. Takacs, L. Mackey, D. Lin, Feature-weighted linear stacking, arXiv preprint arXiv:0911.0460, 2009.
[5] Z.-H. Zhou, Ensemble Methods: Foundations and Algorithms, 2012.
[6] J. Fan, Variable bandwidth and local linear regression smoothers, Ann. Stat., p. 2008, 1992. doi:10.1214/aos/1176348900
[7] P.E. Freeman, A unified framework for constructing, tuning and assessing photometric redshift density estimates in a selection bias setting, Mon. Not. R. Astron. Soc. 468(4), 4556, 2017. doi:10.1093/mnras/stx764
[8] R. Izbicki, Photo-z estimation: an example of nonparametric conditional density estimation under selection bias, Ann. Appl. Stat. 11(2), 698, 2017. doi:10.1214/16-AOAS1013
[9] V. Coscrato, NLS: an accurate and yet easy-to-interpret regression method, 2019.
[10] B.C. Csáji, Approximation with Artificial Neural Networks, 24(48), 2001.
[11] D.P. Kingma, Adam: a method for stochastic optimization, CoRR, 2014.
[12] X. Glorot, Understanding the difficulty of training deep feedforward neural networks, J. Mach. Learn. Res. Proc. Track 9, 249, 2010.
[13] D.-A. Clevert, T. Unterthiner, S. Hochreiter, Fast and accurate deep network learning by exponential linear units (ELUs), 2015.
[14] S. Ioffe, Batch normalization: accelerating deep network training by reducing internal covariate shift, in: Proceedings of the 32nd International Conference on Machine Learning, 37, 448, 2015.
[15] G. Hinton, Improving neural networks by preventing co-adaptation of feature detectors, CoRR, 2012.
[16] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, A. Lerer, Automatic differentiation in PyTorch, 2017.
[17] C. Nugteren, CLTune: a generic auto-tuner for OpenCL kernels, in: Proceedings of the IEEE 9th International Symposium on Embedded Multicore/Many-core Systems-on-Chip, p. 195, 2015.
[18] D. Dheeru, E. Karra Taniskidou, UCI Machine Learning Repository, 2017. http://archive.ics.uci.edu/ml
[19] K. Buza, Feedback prediction for blogs, in: Data Analysis, Machine Learning and Knowledge Discovery, p. 145, 2014.
[20] K. Hamidieh, A data-driven statistical model for predicting the critical temperature of a superconductor, arXiv preprint arXiv:1803.10260, 2018. doi:10.1016/j.commatsci.2018.07.052
[21] J. Friedman, The Elements of Statistical Learning, 2001.
[22] R. Meir, An introduction to boosting and leveraging, in: Advanced Lectures on Machine Learning, p. 118, 2003.
[23] F. Pedregosa, Scikit-learn: machine learning in Python, J. Mach. Learn. Res. 12, 2825, 2011.
[24] M.J. van der Laan, Super learner, Stat. Appl. Genet. Mol. Biol. 6(1), 2007. doi:10.2202/1544-6115.1309