This repository contains the implementation of various iterative optimization algorithms and a testing environment to assess their performance across three specific functions.
The following optimization methods are implemented:
- Nonlinear Conjugate Gradient Methods (NLCGM):
  - Hestenes–Stiefel (HS)
  - Restarted Polak–Ribière (PR)
  - Fletcher–Reeves (FR)
- Newton's Method
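For concreteness, the update rules that distinguish the three NLCGM variants can be sketched in Python (a standalone illustration, not the repository's MATLAB code; the fixed step length stands in for the line search an actual implementation would use):

```python
import numpy as np

def beta_fr(g_new, g_old, d):
    # Fletcher-Reeves: ratio of squared gradient norms
    return (g_new @ g_new) / (g_old @ g_old)

def beta_pr_plus(g_new, g_old, d):
    # Restarted Polak-Ribiere: reset to steepest descent when beta < 0
    return max(0.0, g_new @ (g_new - g_old) / (g_old @ g_old))

def beta_hs(g_new, g_old, d):
    # Hestenes-Stiefel: uses the gradient difference y in both numerator and denominator
    y = g_new - g_old
    return (g_new @ y) / (d @ y)

def nlcg(grad, x0, beta_rule, step=1e-2, tol=1e-8, max_iter=10000):
    """Minimal NLCG loop: d_{k+1} = -g_{k+1} + beta_k * d_k."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x = x + step * d
        g_new = grad(x)
        d = -g_new + beta_rule(g_new, g, d) * d
        g = g_new
    return x, k
```

The three `beta_*` functions are interchangeable, so a single driver loop can benchmark all variants under identical conditions.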
The performance of the implemented algorithms is evaluated on the following benchmark problems:
The repository includes scripts for the symbolic computation of gradients and Hessians to streamline the optimization process. These scripts use MATLAB's syms facility for symbolic computation:

- Gradient Computation:
  - The script "Generate [problem's name] Gradient.m" computes the generic form of the Gradient's entries and saves it as a MATLAB function.
  - This function is then used iteratively in "Eval [problem's name] Gradient.m" to compute the entire Gradient efficiently.
- Hessian Computation:
  - The Hessians are computed similarly. Since all three benchmark problems have banded Hessian matrices, only a few symbolic derivatives are required (at most 10 per problem) to derive a general expression for each diagonal of the Hessian.
  - Specific expressions are derived and applied to the boundary entries of the Gradient and Hessian, based on each problem's definition.
By using symbolic computation for only a few entries and reusing the resulting generic functions, the repository computes both the Gradient and the Hessian efficiently, speeding up the optimization process.
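The generate-once, evaluate-many pattern can be illustrated with a Python/SymPy analogue (the repository uses MATLAB's syms; the chained objective below is a hypothetical stand-in for the actual benchmark problems, and lambdify plays the role MATLAB's matlabFunction would play):

```python
import sympy as sp

# Hypothetical chained objective: f(x) = sum_i (x[i+1] - x[i]**2)**2.
# Only the local terms containing x_i are needed for its derivatives.
xm1, xi, xp1 = sp.symbols('xm1 xi xp1')
f_local = (xi - xm1**2)**2 + (xp1 - xi**2)**2

# One symbolic derivative gives the generic i-th Gradient entry.
grad_entry_fn = sp.lambdify((xm1, xi, xp1), sp.diff(f_local, xi))

# The Hessian is tridiagonal here, so two more symbolic derivatives
# suffice: one expression per diagonal, as described above.
hess_diag_fn = sp.lambdify((xm1, xi, xp1), sp.diff(f_local, xi, 2))
hess_off_fn = sp.lambdify((xi,), sp.diff(sp.diff(f_local, xi), xp1))

def eval_gradient(x):
    """Reuse the generic entry along the whole vector."""
    n = len(x)
    g = [0.0] * n
    for i in range(1, n - 1):
        g[i] = grad_entry_fn(x[i - 1], x[i], x[i + 1])
    # Boundary entries come from reduced local sums of f.
    g[0] = float(sp.lambdify((xi, xp1), sp.diff((xp1 - xi**2)**2, xi))(x[0], x[1]))
    g[n - 1] = float(sp.lambdify((xm1, xi), sp.diff((xi - xm1**2)**2, xi))(x[n - 2], x[n - 1]))
    return g
```

The symbolic differentiation runs a fixed, small number of times regardless of the problem dimension; only the cheap compiled functions are called inside the optimization loop.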
The performance of the algorithms can be assessed through the following indicators:
- Efficiency: Quantified by both computational time and iterations to convergence.
- Reliability: Measured using the Success Rate, i.e., the fraction of runs in which the algorithm converges successfully.
- Quality of the Solution: Evaluated based on whether a known minimum point exists. If so, the Fixed-Target Method is used; otherwise, the Fixed-Cost Approach is applied, with metrics such as the norm of the Gradient and the function value at the computed minimum.

The repository computes these metrics to evaluate both the computational effort required and the quality of the results obtained.
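A minimal sketch of how such indicators could be aggregated over repeated runs is shown below (the run-record fields and tolerances are illustrative assumptions, not the repository's actual data format):

```python
import numpy as np

def summarize_runs(runs, f_target=None, tol=1e-6):
    """Aggregate performance indicators over repeated optimization runs.

    Each run is a dict with keys: 'iters', 'time', 'f_final', 'grad_norm'
    (hypothetical fields). With a known minimum, a fixed-target criterion
    is used; otherwise a fixed-cost view based on the gradient norm.
    """
    if f_target is not None:
        successes = [r for r in runs if r['f_final'] <= f_target + tol]
    else:
        successes = [r for r in runs if r['grad_norm'] <= tol]
    return {
        'success_rate': len(successes) / len(runs),
        'mean_iters': float(np.mean([r['iters'] for r in successes])) if successes else float('nan'),
        'mean_time': float(np.mean([r['time'] for r in successes])) if successes else float('nan'),
        'mean_grad_norm': float(np.mean([r['grad_norm'] for r in runs])),
    }
```

Averaging iteration counts and times only over successful runs keeps the efficiency figures comparable across methods with different success rates.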
The repository provides functionality to generate random starting points for a fair comparison of the algorithms. These points are generated by the script "Generate [problem's name] Random start pts.m", which samples from multiple Gaussian distributions with a fixed mean and standard deviation.
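The idea can be sketched in a few lines of Python (a hedged analogue of the MATLAB script; the function name and seeding scheme are assumptions for illustration):

```python
import numpy as np

def generate_start_points(n_vars, n_points, mean=0.0, std=1.0, seed=0):
    """Draw random starting points from a Gaussian with fixed mean and std.

    A fixed seed makes the same starting points reusable across all
    algorithms, which is what enables a fair comparison.
    """
    rng = np.random.default_rng(seed)
    return rng.normal(loc=mean, scale=std, size=(n_points, n_vars))
```

Reusing one seeded batch of points for every method removes starting-point luck as a confounding factor in the comparison.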
The interface includes a plotting function to visualize the performance and progress of the optimization methods. Additionally, a Python script is provided for processing the data obtained from the optimization runs, enabling further analysis and evaluation of the results.