Original Articles

An inexact first-order method for constrained nonlinear optimization

Pages 79-112 | Received 22 May 2019, Accepted 21 Dec 2019, Published online: 15 Jan 2020

References

  • P.-A. Absil, R. Mahony, and R. Sepulchre, Optimization algorithms on matrix manifolds. Princeton Univ. Press, Princeton, NJ, 2008.
  • R. Andreani, J.M. Martínez, A. Ramos, and P.J.S. Silva, A cone-continuity constraint qualification and algorithmic consequences, SIAM. J. Optim. 26 (2016), pp. 96–110. doi: 10.1137/15M1008488
  • T.E. Baker and L.S. Lasdon, Successive linear programming at Exxon, Manage. Sci. 31 (1985), pp. 264–274. doi: 10.1287/mnsc.31.3.264
  • A. Beck and M. Teboulle, Mirror descent and nonlinear projected subgradient methods for convex optimization, Oper. Res. Lett. 31 (2003), pp. 167–175. doi: 10.1016/S0167-6377(02)00231-6
  • A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM. J. Imaging. Sci. 2 (2009), pp. 183–202. doi: 10.1137/080716542
  • E.G. Birgin and J.M. Martínez, Practical Augmented Lagrangian Methods for Constrained Optimization, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2014. doi: 10.1137/1.9781611973365
  • L. Bottou, Stochastic learning, in Advanced lectures on machine learning, Springer, 2004, pp. 146–168.
  • L. Bottou, Large-scale machine learning with stochastic gradient descent, in Proceedings of COMPSTAT'2010, Springer, 2010, pp. 177–186.
  • L. Bottou, Stochastic gradient descent tricks, in Neural networks: Tricks of the trade, Springer, 2012, pp. 421–436.
  • J. Burke, A sequential quadratic programming method for potentially infeasible mathematical programs, J. Math. Anal. Appl. 139 (1989), pp. 319–351. doi: 10.1016/0022-247X(89)90111-X
  • J.V. Burke, F.E. Curtis, and H. Wang, A sequential quadratic optimization algorithm with rapid infeasibility detection, SIAM. J. Optim. 24 (2014), pp. 839–872. doi: 10.1137/120880045
  • J.V. Burke, F.E. Curtis, H. Wang, and J. Wang, A dynamic penalty parameter updating strategy for matrix-free sequential quadratic optimization, Preprint (2018). Available at arXiv:1803.09224.
  • R.H. Byrd, F.E. Curtis, and J. Nocedal, An inexact SQP method for equality constrained optimization, SIAM. J. Optim. 19 (2008), pp. 351–369. doi: 10.1137/060674004
  • R.H. Byrd, F.E. Curtis, and J. Nocedal, Infeasibility detection and SQP methods for nonlinear optimization, SIAM. J. Optim. 20 (2010), pp. 2281–2299. doi: 10.1137/080738222
  • R.H. Byrd, N.I. Gould, J. Nocedal, and R.A. Waltz, An algorithm for nonlinear optimization using linear programming and equality constrained subproblems, Math. Program. 100 (2003), pp. 27–48. doi: 10.1007/s10107-003-0485-4
  • R.H. Byrd, J. Nocedal, and R.A. Waltz, Steering exact penalty methods for nonlinear programming, Opt. Methods Softw. 23 (2008), pp. 197–213. doi: 10.1080/10556780701394169
  • F.H. Clarke, Generalized gradients and applications, Trans. Am. Math. Soc. 205 (1975), pp. 247–262. doi: 10.1090/S0002-9947-1975-0367131-6
  • I. Daubechies, M. Defrise, and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math. J. Courant Inst. Math. Sci. 57 (2004), pp. 1413–1457. doi: 10.1002/cpa.20042
  • J. Duchi, E. Hazan, and Y. Singer, Adaptive subgradient methods for online learning and stochastic optimization, J. Mach. Learn. Res. 12 (2011), pp. 2121–2159.
  • R. Fletcher, Practical methods of optimization. Wiley, Hoboken, NJ, 2013.
  • R. Fletcher and E.S. de la Maza, Nonlinear programming and nonsmooth optimization by successive linear programming, Math. Program. 43 (1989), pp. 235–256. doi: 10.1007/BF01582292
  • N.I.M. Gould, D. Orban, and P.L. Toint, CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization, Comput. Optim. Appl. 60 (2015), pp. 545–557. doi: 10.1007/s10589-014-9687-3
  • S.P. Han, A globally convergent method for nonlinear programming, J. Optim. Theory. Appl. 22 (1977), pp. 297–309. doi: 10.1007/BF00932858
  • S.P. Han and O.L. Mangasarian, Exact penalty functions in nonlinear programming, Math. Program. 17 (1979), pp. 251–269. doi: 10.1007/BF01588250
  • W. Hock and K. Schittkowski, Test examples for nonlinear programming codes, J. Optim. Theory. Appl. 30 (1980), pp. 127–129. doi: 10.1007/BF00934594
  • R. Johnson and T. Zhang, Accelerating stochastic gradient descent using predictive variance reduction, in Advances in neural information processing systems. 2013, pp. 315–323.
  • A. Juditsky, A. Nemirovski, and C. Tauvel, Solving variational inequalities with stochastic mirror-prox algorithm, Stochast. Syst. 1 (2011), pp. 17–58. doi: 10.1287/10-SSY011
  • W. Karush, Minima of functions of several variables with inequalities as side conditions, in Traces and Emergence of Nonlinear Programming, Springer, 2014, pp. 217–245.
  • H.W. Kuhn and A.W. Tucker, Nonlinear programming, in Traces and emergence of nonlinear programming, Springer, 2014, pp. 247–258.
  • L. Lasdon, A. Waren, S. Sarkar, and F. Palacios, Solving the pooling problem using generalized reduced gradient and successive linear programming algorithms, ACM Sigmap Bull. 27 (1979), pp. 9–15. doi: 10.1145/1111246.1111247
  • R. Luss and M. Teboulle, Conditional gradient algorithms for rank-one matrix approximations with a sparsity constraint, SIAM Rev. 55 (2013), pp. 65–98. doi: 10.1137/110839072
  • C. Oberlin and S.J. Wright, Active set identification in nonlinear programming, SIAM. J. Optim. 17 (2006), pp. 577–605. doi: 10.1137/050626776
  • B.N. Pshenichnyj, The Linearization Method for Constrained Optimization, Springer Series in Computational Mathematics, Vol. 22, translated from the 1983 Russian original by Stephen S. Wilson, Springer-Verlag, 1994.
  • C. Udriste, Convex functions and optimization methods on Riemannian manifolds, Math. Appl. 297. Kluwer Academic Publishers, Dordrecht, 1994.
  • L. Xiao and T. Zhang, A proximal stochastic gradient method with progressive variance reduction, SIAM. J. Optim. 24 (2014), pp. 2057–2075. doi: 10.1137/140961791
