Implicit Deep Learning
SIAM Journal on Mathematics of Data Science, v.3 no.3, 2021, pp. 930-958
El Ghaoui, Laurent; Gu, Fangda; Travacca, Bertrand; Askari, Armin; Tsai, Alicia
Implicit deep learning prediction rules generalize the recursive rules of feedforward neural networks. Such rules are based on the solution of a fixed-point equation involving a single vector of hidden features, which is thus only implicitly defined. The implicit framework greatly simplifies the not...
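To give a concrete sense of the kind of rule the abstract describes, the sketch below iterates a fixed-point equation of the form x = phi(A x + B u) for the hidden-feature vector x and then reads out a prediction y = C x + D u. This is only a minimal illustration under assumed notation, not the authors' implementation; the matrices, dimensions, and activation phi are hypothetical placeholders, and the scaling of A is chosen so the iteration converges.

```python
import numpy as np

def implicit_predict(u, A, B, C, D, phi=np.tanh, tol=1e-8, max_iter=500):
    """Toy implicit prediction rule: solve x = phi(A x + B u) by fixed-point
    iteration, then return y = C x + D u.

    Illustrative sketch only; the matrices and activation are hypothetical,
    not taken from the paper.
    """
    x = np.zeros(A.shape[0])
    for _ in range(max_iter):
        x_next = phi(A @ x + B @ u)
        if np.linalg.norm(x_next - x) < tol:
            break
        x = x_next
    return C @ x + D @ u

# Small random example; shrinking A keeps the iteration contractive.
rng = np.random.default_rng(0)
n, p, q = 8, 4, 3  # hidden, input, output dimensions
A = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, p))
C = rng.standard_normal((q, n))
D = rng.standard_normal((q, p))
u = rng.standard_normal(p)
print(implicit_predict(u, A, B, C, D))
```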