da Cunha, Rudnei Dias and Hopkins, Tim
A Comparison of Acceleration Techniques Applied to the SOR Method.
Computing Laboratory, University of Kent, Canterbury, UK
In this paper we investigate the performance of four SOR acceleration techniques on a variety of linear systems: Dancis's accelerations, Wynn's epsilon algorithm, and Graves-Morris's generalisation of Aitken's delta-squared algorithm. The experimental results show that these accelerations can reduce the amount of work required to obtain a solution, and that their rates of convergence are generally less sensitive to the value of the relaxation parameter than the straightforward SOR method. Necessary conditions for a reduction in the computational work required for convergence, measured in floating-point operations, are given for each of the accelerations. It is shown experimentally that the reduction in the number of iterations is related to the separation between the two largest eigenvalues of the SOR iteration matrix for a given relaxation parameter omega; this separation influences the convergence of all of the acceleration techniques above. Another important characteristic of these accelerations is that even when they do not significantly reduce the number of iterations compared to the SOR method, they remain competitive in the number of floating-point operations used and thus reduce the overall computational workload.
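As a minimal illustration of the kind of acceleration studied here (not the paper's implementation), the sketch below runs plain SOR sweeps on a small symmetric system and then applies a component-wise Aitken delta-squared extrapolation to the last three iterates. The test matrix, relaxation parameter, and denominator guard tolerance are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sor(A, b, omega, x0, iters):
    """Plain SOR sweeps on Ax = b; returns the list of iterates."""
    n = len(b)
    x = np.asarray(x0, dtype=float).copy()
    history = [x.copy()]
    for _ in range(iters):
        for i in range(n):
            # Sum over off-diagonal terms using the freshest values.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        history.append(x.copy())
    return history

def aitken(history):
    """Component-wise Aitken delta-squared on the last three iterates."""
    x0, x1, x2 = history[-3], history[-2], history[-1]
    d1, d2 = x1 - x0, x2 - x1
    denom = d2 - d1
    # Guard tiny denominators (illustrative tolerance) to avoid blow-up.
    safe = np.where(np.abs(denom) > 1e-14, denom, 1.0)
    return np.where(np.abs(denom) > 1e-14, x2 - d2**2 / safe, x2)

# Illustrative 2x2 diagonally dominant system; omega = 1.0 reduces SOR
# to Gauss-Seidel, a valid special case of the iteration.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
hist = sor(A, b, omega=1.0, x0=np.zeros(2), iters=3)
accelerated = aitken(hist)
```

For a sequence converging geometrically with a single dominant error mode, the extrapolated iterate is typically much closer to the true solution than the last plain SOR iterate, which mirrors the reduction in iterations reported in the paper when the two largest eigenvalues of the iteration matrix are well separated.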