Unconstrained Optimization: Basic Methods
- Optimality Conditions
  - Variational Ideas
  - Main Optimality Conditions
- Gradient Methods - Convergence
  - Descent Directions and Stepsize Rules
  - Convergence Results
- Gradient Methods - Rate of Convergence
  - The Local Analysis Approach
  - The Role of the Condition Number
  - Convergence Rate Results
- Newton's Method and Variations
  - Modified Cholesky Factorization
  - Trust Region Methods
  - Variants of Newton's Method
  - Least Squares and the Gauss-Newton Method
- Notes and Sources
Unconstrained Optimization: Additional Methods
- Conjugate Direction Methods
  - The Conjugate Gradient Method
  - Convergence Rate of Conjugate Gradient Method
- Quasi-Newton Methods
- Nonderivative Methods
  - Coordinate Descent
  - Direct Search Methods
- Incremental Methods
  - Incremental Gradient Methods
  - Incremental Aggregated Gradient Methods
  - Incremental Gauss-Newton Methods
  - Incremental Newton Methods
- Distributed Asynchronous Algorithms
  - Totally and Partially Asynchronous Algorithms
  - Totally Asynchronous Convergence
  - Partially Asynchronous Gradient-Like Algorithms
  - Convergence Rate of Asynchronous Algorithms
- Discrete-Time Optimal Control
  - Gradient and Conjugate Gradient Methods for Optimal Control
  - Newton's Method for Optimal Control
- Solving Nonlinear Programming Problems - Some Practical Guidelines
- Notes and Sources
Optimization Over a Convex Set
- Constrained Optimization Problems
  - Necessary and Sufficient Conditions for Optimality
  - Existence of Optimal Solutions
- Feasible Directions - Conditional Gradient Method
  - Descent Directions and Stepsize Rules
  - The Conditional Gradient Method
- Gradient Projection Methods
  - Feasible Directions and Stepsize Rules Based on Projection
  - Convergence Analysis
- Two-Metric Projection Methods
- Manifold Suboptimization Methods
- Proximal Algorithms
  - Rate of Convergence
  - Variants of the Proximal Algorithm
- Block Coordinate Descent Methods
  - Variants of Coordinate Descent
- Network Optimization Algorithms
- Notes and Sources
Lagrange Multiplier Theory
- Necessary Conditions for Equality Constraints
  - The Penalty Approach
  - The Elimination Approach
  - The Lagrangian Function
- Sufficient Conditions and Sensitivity Analysis
  - The Augmented Lagrangian Approach
  - The Feasible Direction Approach
  - Sensitivity
- Inequality Constraints
  - Karush-Kuhn-Tucker Optimality Conditions
  - Sufficient Conditions and Sensitivity
  - Fritz John Optimality Conditions
  - Constraint Qualifications and Pseudonormality
  - Abstract Set Constraints and the Tangent Cone
  - Abstract Set Constraints, Equality, and Inequality Constraints
- Linear Constraints and Duality
  - Convex Cost Functions and Linear Constraints
  - Duality Theory: A Simple Form for Linear Constraints
- Notes and Sources
Lagrange Multiplier Algorithms
- Barrier and Interior Point Methods
  - Path Following Methods for Linear Programming
  - Primal-Dual Methods for Linear Programming
- Penalty and Augmented Lagrangian Methods
  - The Quadratic Penalty Function Method
  - Multiplier Methods - Main Ideas
  - Convergence Analysis of Multiplier Methods
  - Duality and Second Order Multiplier Methods
  - Nonquadratic Augmented Lagrangians - The Exponential Method of Multipliers
- Exact Penalties - Sequential Quadratic Programming
  - Nondifferentiable Exact Penalty Functions
  - Sequential Quadratic Programming
  - Differentiable Exact Penalty Functions
- Lagrangian Methods
  - First-Order Lagrangian Methods
  - Newton-Like Methods for Equality Constraints
  - Global Convergence
- A Comparison of Various Methods
- Notes and Sources
Duality and Convex Programming
- Duality and Dual Problems
  - Geometric Multipliers
  - The Weak Duality Theorem
  - Primal and Dual Optimal Solutions
  - Treatment of Equality Constraints
  - Separable Problems and Their Geometry
  - Additional Issues About Duality
- Convex Cost - Linear Constraints
- Convex Cost - Convex Constraints
- Conjugate Functions and Fenchel Duality
  - Conic Programming
  - Monotropic Programming
  - Network Optimization
  - Games and the Minimax Theorem
  - The Primal Function and Sensitivity Analysis
- Discrete Optimization and Duality
  - Examples of Discrete Optimization Problems
  - Branch-and-Bound
  - Lagrangian Relaxation
- Notes and Sources
Dual Methods
- Dual Derivatives and Subgradients
- Dual Ascent Methods for Differentiable Dual Problems
  - Coordinate Ascent for Quadratic Programming
  - Separable Problems and Primal Strict Convexity
  - Partitioning and Dual Strict Concavity
- Proximal and Augmented Lagrangian Methods
  - The Method of Multipliers as a Dual Proximal Algorithm
  - Entropy Minimization and Exponential Method of Multipliers
  - Incremental Augmented Lagrangian Methods
- Alternating Direction Methods of Multipliers
  - ADMM Applied to Separable Problems
  - Connections Between Augmented Lagrangian-Related Methods
- Subgradient-Based Optimization Methods
  - Subgradient Methods
  - Approximate and Incremental Subgradient Methods
  - Cutting Plane Methods
  - Ascent and Approximate Ascent Methods
- Decomposition Methods
  - Lagrangian Relaxation of the Coupling Constraints
  - Decomposition by Right-Hand Side Allocation
- Notes and Sources
Appendix A: Mathematical Background
Appendix B: Convex Analysis
Appendix C: Line Search Methods
Appendix D: Implementation of Newton's Method
References
Index