There are several algorithms for minimizing the energy of the system. They all involve calculating the derivative of the potential energy, and possibly the second derivative, and using that information to adjust the coordinates in search of a lower energy. In the descriptions of the algorithms below, the capitalized keyword at the beginning of each paragraph is the keyword used to invoke that method. After the descriptions is a listing of all keywords associated with minimization.
The simplest minimization algorithm is steepest descents (SD). In each step of this iterative procedure, we adjust the coordinates in the negative direction of the gradient. It has one adjustable parameter, the step size, which determines how far to shift the coordinates at each step. The step size is adjusted depending on whether a step results in a lower energy: if the energy drops, we increase the step size by 20% to accelerate the convergence; if the energy rises, we overshot a minimum, so the step size is halved. Steepest descents does not converge in general, but it will rapidly improve a very poor conformation.
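The adaptive step-size rule above can be sketched as follows. This is not the CONGEN routine, just a minimal illustration on a toy quadratic energy; the `energy` and `gradient` callables and the 100-step limit are assumptions, and only the grow-by-20%/halve-on-overshoot rule comes from the text.

```python
import numpy as np

def steepest_descent(energy, gradient, x, step=0.02, nsteps=100):
    """Steepest descents with the adaptive step size described above."""
    e_best = energy(x)
    for _ in range(nsteps):
        trial = x - step * gradient(x)   # move along the negative gradient
        e_trial = energy(trial)
        if e_trial < e_best:             # energy dropped: accept, grow step 20%
            x, e_best = trial, e_trial
            step *= 1.2
        else:                            # overshot a minimum: halve the step
            step *= 0.5
    return x, e_best

# Toy quadratic energy E(x) = |x - c|^2 with minimum at c = (1, 2)
c = np.array([1.0, 2.0])
x, e = steepest_descent(lambda x: np.sum((x - c) ** 2),
                        lambda x: 2.0 * (x - c),
                        np.zeros(2))
```

On a true quadratic this converges quickly; the point of SD in practice is the early steps, which cheaply relieve the worst strain in a poor conformation.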
A second method is the conjugate gradient technique (CONJ), which has better convergence characteristics (R. Fletcher & C. M. Reeves, The Computer Journal 7:149 (1964)). The method is iterative and makes use of the previous history of minimization steps as well as the current gradient to determine the next step. It can be shown that the method converges to the minimum energy in N steps for a quadratic energy surface, where N is the number of degrees of freedom in the energy. Since several terms in the potential are quadratic, it requires fewer evaluations of the energy and gradient to achieve the same reduction in energy in comparison to steepest descents. Its main drawback is that with very poor conformations, it is more likely to generate numerical overflows than steepest descents. The algorithm used in CONGEN has a slightly better interpolation scheme and automatic step size selection.
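The N-step convergence on a quadratic surface can be seen in a minimal Fletcher-Reeves sketch. This is not the CONGEN implementation (which adds interpolation and automatic step sizing); it minimizes the assumed quadratic energy E(x) = x.A.x/2 - b.x with exact line searches, where the matrix A and vector b are illustrative stand-ins for the second and first derivatives.

```python
import numpy as np

def conjugate_gradient(A, b, x):
    """Fletcher-Reeves CG on the quadratic energy E(x) = x.A.x/2 - b.x."""
    r = b - A @ x                # negative gradient of E at x
    p = r.copy()                 # first direction is steepest descent
    for _ in range(len(x)):      # converges in N = len(x) steps
        if r @ r < 1e-20:        # gradient already (numerically) zero
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves coefficient
        p = r_new + beta * p              # mix history into the new direction
        r = r_new
    return x

# Three degrees of freedom: the minimum A^-1 b is reached in three steps
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = conjugate_gradient(A, b, np.zeros(3))
```

The `beta` term is where the "previous history" enters: each new search direction is the current negative gradient plus a multiple of the old direction.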
A third method involves solving the Newton-Raphson minimization equations iteratively (NRAP). This procedure requires the computation of the derivative of the gradient, which is a matrix of size O(n**2). The procedure here is to find a point where the gradient will be zero (hopefully a minimum in energy) assuming that the potential is quadratic. The Newton-Raphson equations can be solved by a number of means, but the method adopted for CONGEN involves diagonalizing the second derivative matrix and then finding the optimum step size along each eigenvector. When there are one or more negative eigenvalues, a blind application of the equations will find a saddle point in the potential. To overcome this problem, a single additional energy and gradient determination is performed along the eigenvector displacement for each small or negative eigenvalue. From this additional data, the energy function is approximated by a cubic potential and the step size that minimizes this function is adopted. The advantages of this method are that the geometry cannot remain at a saddle point, as sometimes occurs with the previous procedures, and that the procedure converges rapidly when the potential is nearly quadratic (or cubic). The major disadvantage is that this procedure can require excessive storage, O(n**2), and computation time, O(n**3), for large molecules. Thus, we are currently restricted to systems of about 200 atoms or fewer for this method. This method may not be used when Images are used (see Images).
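The diagonalize-then-step structure of one NRAP iteration can be sketched as below. This is an illustration, not the CONGEN routine: the cubic fitting along small or negative eigenvalue modes is replaced here by a simple bounded downhill displacement (an assumption), and `tfreq` plays the role of the TFREQ threshold described later.

```python
import numpy as np

def newton_raphson_step(grad, hess, x, tfreq=1.0):
    """One Newton-Raphson step via diagonalizing the second-derivative matrix."""
    evals, evecs = np.linalg.eigh(hess)   # eigenvalues/eigenvectors of the Hessian
    g = evecs.T @ grad                    # gradient in the eigenvector basis
    step = np.zeros_like(g)
    ok = evals > tfreq                    # well-behaved positive modes
    step[ok] = -g[ok] / evals[ok]         # optimum step for a quadratic surface
    # Small or negative eigenvalues would get the cubic-fit treatment in
    # CONGEN; here we just take a downhill displacement of bounded size.
    step[~ok] = -np.sign(g[~ok]) * np.minimum(np.abs(g[~ok]), 1.0)
    return x + evecs @ step

# For a purely quadratic energy E = (x-c).H.(x-c)/2, one step lands on c
H = np.diag([3.0, 5.0])
c = np.array([1.0, -1.0])
x0 = np.zeros(2)
x1 = newton_raphson_step(H @ (x0 - c), H, x0)
```

Because every mode gets a nonzero displacement, the geometry cannot simply sit at a saddle point, which is the property the text emphasizes.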
The fourth method available is an adopted basis Newton-Raphson method (ABNR) (D. J. States). This routine performs energy minimization using a Newton-Raphson algorithm applied to a subspace of the coordinate vector spanned by the displacement coordinates of the last (MINDIM) positions. The second derivative matrix is constructed numerically from the change in the gradient vectors, and is inverted by an eigenvector analysis allowing the routine to recognize and avoid saddle points in the energy surface. At each step, the residual gradient vector is calculated and used to add a steepest descent step onto the Newton-Raphson step, incorporating the new direction into the basis set.
A fifth method, an amalgam of conjugate gradient and steepest descents (CGSD), is currently unavailable due to a bug in the command parser.
In the table which follows, keywords enclosed in square brackets indicate that one may choose any one of the set. Such enclosed keywords do not expect a value after them. All other keywords are used for specifying values; see Energy Manipulation Syntax. The method column shows which method the keyword affects. See General Energy Operands for common variables.
Keyword  Default  Method  Purpose

[ CONJ ] CONJ             Do conjugate gradient minimization.
[ SD ]                    Do steepest descent minimization.
[ NRAP ]                  Do Newton-Raphson minimization.
[ ABNR ] [MASS]           Do Adopted Basis Newton-Raphson minimization,
                          with mass weighted forces if specified.
[ CGSD ]                  Combine CONJ and SD (not available).

STEP     .02      ALL     Initial step size for the minimization
                          algorithms. Reasonable values for the various
                          methods are best determined by trial and error.

PRTMIN   1        ALL     A flag indicating how much to print during
                          minimization.
                  SD      No effect.
                  CONJ    If less than 2, the energy is printed only
                          once each cycle. A setting of 2 shows the
                          energy for each evaluation plus variables used
                          in the method.
                  NRAP    If greater than 1, a brief overview of the
                          least squares cubic fitting procedure is given
                          for eigenvalues less than TFREQ.
                  ABNR    If less than 2, the energy is printed only for
                          successful steps (improvements in total
                          energy). Some description of how the step was
                          chosen is printed if it is set to 2, and a
                          very verbose description is given for values
                          of 3 or higher.

NCGCYC   100      CONJ    The number of conjugate gradient cycles
                          executed before the algorithm restarts. This
                          number will be automatically lowered to the
                          shortest hydrogen bond or non-bonded list
                          update frequency. The algorithm will fail if
                          the potential is changed while it is running.

PCUT     .9999    CONJ    If the cosine of the angle between the old
                          and new P vector is greater than PCUT, the
                          algorithm will be restarted. This prevents the
                          algorithm from plodding down the same path
                          repeatedly. If PRTMIN is less than 2, one
                          effect of the restart is that the step size
                          will return to its initial value. If this
                          happens many times, you've converged.

EIGRNG   .0005    ABNR    The smallest eigenvalue (relative to the
                          largest) that will be considered nonsingular.

MINDIM   5        ABNR    The dimension of the basis set stored.

STPLIM   1.0      ABNR    The maximum Newton-Raphson step that will be
                          allowed.

STRICT   0.1      ABNR    The strictness of descent. The energy of a new
                          step must be less than the previous best
                          energy + STRICT for the new step to be
                          accepted.

MASS     ---      ABNR    Use unweighted forces by default, or mass
                          weighted if specified. Mass weighting
                          converges more slowly but allows association
                          with normal mode frequencies.

TFREQ    1.0      NRAP    The smallest eigenvalue that is considered to
                          be non-negative (i.e., do cubic fitting on all
                          eigenvalues smaller than this).

NFREQ    NATOM*3  NRAP    The number of degrees of freedom to be
                          optimized (the number of lowest eigenvalues).
                          Use the default whenever practical.

TOLENR   0.0      ABNR    A tolerance applied to the change in total
                          energy during a cycle of minimization (NCYCLE
                          steps). If the energy change is less than or
                          equal to TOLENR, the minimization routine will
                          exit.

TOLGRD   0.0      ABNR    A tolerance applied to the average gradient
                          during a cycle of minimization. If the average
                          gradient is less than or equal to TOLGRD, the
                          routine will exit.

TOLITR   100      ABNR,   The maximum number of energy evaluations
                  CONJ    allowed for a single step of minimization.

TOLSTP   0.0      ABNR    A tolerance applied to the average step size
                          during a cycle of minimization. If the average
                          step size is less than or equal to TOLSTP, the
                          routine will exit.