Constrained Optimization and Lagrange Multiplier Methods
Table of Contents:
Introduction
- General Remarks
- Notation and Mathematical Background
- Unconstrained Optimization
- Convergence Analysis of Gradient Methods
- Steepest Descent and Scaling
- Newton's Method and Its Modifications
- Conjugate Direction and Conjugate Gradient Methods
- Quasi-Newton Methods
- Methods Not Requiring Evaluation of Derivatives
- Constrained Minimization
- Algorithms for Minimization Subject to Simple Constraints
- Notes and Sources
The Method of Multipliers for Equality Constrained Problems
- The Quadratic Penalty Function Method
- The Original Method of Multipliers
- Geometric Interpretation
- Existence of Local Minima of the Augmented Lagrangian
- The Primal Functional
- Convergence Analysis
- Comparison with the Penalty Method - Computational Aspects
- Duality Framework for the Method of Multipliers
- Stepsize Analysis for the Method of Multipliers
- The Second-Order Multiplier Iteration
- Quasi-Newton Versions of the Second-Order Iteration
- Geometric Interpretation of the Second-Order Multiplier Iteration
- Multiplier Methods with Partial Elimination of Constraints
- Asymptotically Exact Minimization in the Method of Multipliers
- Primal-Dual Methods Not Utilizing a Penalty Function
- Notes and Sources
The Method of Multipliers for Inequality Constrained and Nondifferentiable Optimization Problems
- One-Sided Inequality Constraints
- Two-Sided Inequality Constraints
- Approximation Procedures for Nondifferentiable and Ill-Conditioned Optimization Problems
- Notes and Sources
Exact Penalty Methods and Lagrangian Methods
- Nondifferentiable Exact Penalty Functions
- Linearization Algorithms Based on Nondifferentiable Exact Penalty Functions
- Algorithms for Minimax Problems
- Algorithms for Constrained Optimization Problems
- Differentiable Exact Penalty Functions
- Exact Penalty Functions Depending on x and lambda
- Exact Penalty Functions Depending Only on x
- Algorithms Based on Differentiable Exact Penalty Functions
- Lagrangian Methods - Local Convergence
- First-Order Methods
- Newton-like Methods for Equality Constraints
- Newton-like Methods for Inequality Constraints
- Quasi-Newton Versions
- Lagrangian Methods - Global Convergence
- Combinations with Penalty and Multiplier Methods
- Combinations with Differentiable Exact Penalty Methods - Newton and Quasi-Newton Versions
- Combinations with Nondifferentiable Exact Penalty Methods - Powell's Variable Metric Approach
- Notes and Sources
Nonquadratic Penalty Functions - Convex Programming
- Classes of Penalty Functions and Corresponding Methods of Multipliers
- Penalty Functions for Equality Constraints
- Penalty Functions for Inequality Constraints
- Approximation Procedures Based on Nonquadratic Penalty Functions
- Convex Programming and Duality
- Convergence Analysis of Multiplier Methods
- Rate of Convergence Analysis
- Conditions for Penalty Methods to Be Exact
- Large Scale Integer Programming Problems and the Exponential Method of Multipliers
- An Estimate of the Duality Gap
- Solution of the Dual and Relaxed Problems
- Notes and Sources
References
Index