References
- Yang, Kai, Jiang, Geng-Hui, Qu, Qiang, et al. A new modified conjugate gradient method to identify thermal conductivity of transient non-homogeneous problems based on radial integration boundary element method. International Journal of Heat and Mass Transfer, 2019, vol. 133, p. 669-676. doi: https://doi.org/10.1016/j.ijheatmasstransfer.2018.12.145
- Keshtegar, Behrooz. Limited conjugate gradient method for structural reliability analysis. Engineering with Computers, 2017, vol. 33, no 3, p. 621-629. doi: https://doi.org/10.1007/s00366-016-0493-7
- Andrei, Neculai. A simple three-term conjugate gradient algorithm for unconstrained optimization. Journal of Computational and Applied Mathematics, 2013, vol. 241, p. 19-29. doi: https://doi.org/10.1016/j.cam.2012.10.002
- Yuan, Gonglin, Wei, Zengxin, and Li, Guoyin. A modified Polak–Ribière–Polyak conjugate gradient algorithm for nonsmooth convex programs. Journal of Computational and Applied Mathematics, 2014, vol. 255, p. 86-96. doi: https://doi.org/10.1016/j.cam.2013.04.032
- Wang, Xiao, Ma, Shiqian, Goldfarb, Donald, et al. Stochastic quasi-Newton methods for nonconvex stochastic optimization. SIAM Journal on Optimization, 2017, vol. 27, no 2, p. 927-956. doi: https://doi.org/10.1137/15M1053141
- Lewis, Adrian S. and Overton, Michael L. Nonsmooth optimization via quasi-Newton methods. Mathematical Programming, 2013, vol. 141, no 1-2, p. 135-163. doi: https://doi.org/10.1007/s10107-012-0514-2
- Byrd, Richard H., Hansen, Samantha L., Nocedal, Jorge, et al. A stochastic quasi-Newton method for large-scale optimization. SIAM Journal on Optimization, 2016, vol. 26, no 2, p. 1008-1031. doi: https://doi.org/10.1137/140954362
- Wei, Zengxin, Li, Guoyin, and Qi, Liqun. New quasi-Newton methods for unconstrained optimization problems. Applied Mathematics and Computation, 2006, vol. 175, no 2, p. 1156-1188. doi: https://doi.org/10.1016/j.amc.2005.08.027
- Zhang, Jinlei, Tao, Xiao, Sun, Peng, et al. A position misalignment correction method for Fourier ptychographic microscopy based on the quasi-Newton method with a global optimization module. Optics Communications, 2019, vol. 452, p. 296-305. doi: https://doi.org/10.1016/j.optcom.2019.07.046
- Xie, Xinhao, Xu, Lijun, Li, Xiaolu, et al. Online Gauss-Newton-based parallel-pipeline method for real-time in-situ laser ranging. IEEE Sensors Journal, 2020.
- Loke, M. H. et Dahlin, Torleif. A comparison of the Gauss–Newton and quasi-Newton methods in resistivity imaging inversion. Journal of applied geophysics, 2002, vol. 49, no 3, p. 149-162. doi: https://doi.org/10.1016/S0926-9851(01)00106-9
- Rubæk, Tonny, Meaney, Paul M., Meincke, Peter, et al. Nonlinear microwave imaging for breast-cancer screening using Gauss–Newton's method and the CGLS inversion algorithm. IEEE Transactions on Antennas and Propagation, 2007, vol. 55, no 8, p. 2320-2331. doi: https://doi.org/10.1109/TAP.2007.901993
- Bottou, Léon. Large-scale machine learning with stochastic gradient descent. In : Proceedings of COMPSTAT’2010. Physica-Verlag HD, 2010. p. 177-186.
- Johnson, Rie and Zhang, Tong. Accelerating stochastic gradient descent using predictive variance reduction. In : Advances in Neural Information Processing Systems. 2013. p. 315-323.
- Smith, Samuel L. and Le, Quoc V. A Bayesian perspective on generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451, 2017.
- De Sa, Christopher, Re, Christopher, and Olukotun, Kunle. Global convergence of stochastic gradient descent for some non-convex matrix problems. In : International Conference on Machine Learning. 2015. p. 2332-2341.
- Zhang, Tong. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In : Proceedings of the Twenty-First International Conference on Machine Learning. 2004. p. 116. doi: https://doi.org/10.1145/1015330.1015332
- Bordes, Antoine, Bottou, Léon, and Gallinari, Patrick. SGD-QN: Careful quasi-Newton stochastic gradient descent. Journal of Machine Learning Research, 2009, vol. 10, p. 1737-1754. doi: https://doi.org/10.1145/1577069.1755842
- Gemulla, Rainer, Nijkamp, Erik, Haas, Peter J., et al. Large-scale matrix factorization with distributed stochastic gradient descent. In : Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. 2011. p. 69-77.
- Konečný, Jakub, Liu, Jie, Richtárik, Peter, et al. Mini-batch semi-stochastic gradient descent in the proximal setting. IEEE Journal of Selected Topics in Signal Processing, 2015, vol. 10, no 2, p. 242-255. doi: https://doi.org/10.1109/JSTSP.2015.2505682
- Peng, Xinyu, Li, Li, and Wang, Fei-Yue. Accelerating minibatch stochastic gradient descent using typicality sampling. IEEE Transactions on Neural Networks and Learning Systems, 2020, vol. 31, no 11, p. 4649-4659. doi: https://doi.org/10.1109/TNNLS.2019.2957003
- Cui, Yuqi, Wu, Dongrui, et Huang, Jian. Optimize TSK fuzzy systems for classification problems: Mini-batch gradient descent with uniform regularization and batch normalization. IEEE Transactions on Fuzzy Systems, 2020.
- Perrone, Michael P., Khan, Haidar, Kim, Changhoan, et al. Optimal Mini-Batch Size Selection for Fast Gradient Descent. arXiv preprint arXiv:1911.06459, 2019.
- Messaoud, Seifeddine, Bradai, Abbas, and Moulay, Emmanuel. Online GMM clustering and mini-batch gradient descent based optimization for Industrial IoT 4.0. IEEE Transactions on Industrial Informatics, 2019, vol. 16, no 2, p. 1427-1435. doi: https://doi.org/10.1109/TII.2019.2945012
- Liu, J. and Du, X. Global convergence of an efficient hybrid conjugate gradient method for unconstrained optimization. Bulletin of the Korean Mathematical Society, 2013, vol. 50, no 1, p. 73-81. doi: https://doi.org/10.4134/BKMS.2013.50.1.073
- Mtagulwa, Peter and Kaelo, P. A convergent modified HS-DY hybrid conjugate gradient method for unconstrained optimization problems. Journal of Information and Optimization Sciences, 2019, vol. 40, no 1, p. 97-113. doi: https://doi.org/10.1080/02522667.2018.1424087
- Dong, J., Jiao, B., and Chen, L. A new hybrid HS-DY conjugate gradient method. In : Fourth International Joint Conference on Computational Sciences and Optimization. 2011. p. 94-98. doi: https://doi.org/10.1109/CSO.2011.47