
What Is The Significance Of Omega In Successive Over Relaxation Rate Method?

I have the following matrix. I have transformed it into a strictly diagonally dominant matrix and applied the Gauss-Seidel and Successive Over-Relaxation (SOR) methods with omega = 1.1 and a tolerance of

Solution 1:

This is actually a question I had myself while trying to solve the same problem. Below I include my results from the 6th iteration of both the Gauss-Seidel (GS) and SOR methods and give my opinion on why this is the case. For both methods the initial vector was x = (0, 0, 0, 0). Practically speaking, we see that the L-infinity norm differs between the two methods (see below).

For Gauss-Seidel:

The solution vector in iteration 6 is: 
[[ 1.0001]
[ 2.    ]
[-1.    ]
[ 1.    ]]
The L infinity norm in iteration 6 is: [4.1458e-05]

For SOR:

The solution vector in iteration 6 is: 
[[ 1.0002]
[ 2.0001]
[-1.0001]
[ 1.    ]]
The L infinity norm in iteration 6 is: [7.8879e-05]
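As a side note, a natural way to compute such an L-infinity stopping test in NumPy is the maximum absolute change between successive iterates, compared against the tolerance. A minimal sketch (the names x_old, x_new, and tol are only illustrative, not necessarily those used in my code above):

import numpy as np

def converged(x_old, x_new, tol):
    # Stop once the largest componentwise change between iterates
    # falls below the chosen tolerance.
    return np.linalg.norm(x_new - x_old, np.inf) < tol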

Academically speaking, "SOR can provide a convenient means to speed up both the Jacobi and Gauss-Seidel methods of solving our linear system. The parameter ω is referred to as the relaxation parameter. Clearly for ω = 1 we restore the original equations. If ω < 1 we talk of under-relaxation, and this can be important for some systems which will not converge under normal Jacobi relaxation. If ω > 1, we have over-relaxation, with which we will be more concerned. It was discovered during the years of hand computation that convergence is faster if we go beyond the Gauss-Seidel correction. Roughly speaking, those approximations stay on the same side of the solution x. An over-relaxation factor ω moves us closer to the solution. With ω = 1, we recover Gauss-Seidel; with ω > 1, the method is known as SOR. The optimal choice of ω never exceeds 2. It is often in the neighborhood of 1.9."

For more information on ω, you can refer to page 410 of Strang, G., 2006, "Linear Algebra and Its Applications", as well as to the paper "A rapid finite difference algorithm, utilizing successive over-relaxation to solve the Poisson–Boltzmann equation".
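To make the role of ω concrete, here is a minimal sketch of one SOR sweep (my own illustration, not necessarily the exact code behind the numbers above): each component is first updated as in Gauss-Seidel, and the resulting correction is then scaled by ω, so ω = 1 recovers Gauss-Seidel and ω > 1 overshoots the Gauss-Seidel step.

import numpy as np

def sor_sweep(A, b, x, omega):
    # One in-place SOR sweep over all components of x.
    n = len(b)
    for i in range(n):
        # Sum over the other components, using the most recent values.
        sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
        x_gs = (b[i] - sigma) / A[i, i]        # plain Gauss-Seidel value for x[i]
        x[i] = x[i] + omega * (x_gs - x[i])    # omega = 1 -> Gauss-Seidel update
    return x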

Based on the academic description above, I believe both methods take 6 iterations because 1.1 is not the optimal ω value. Changing ω to a value closer to the optimal ω could yield a better result, since the whole point of over-relaxation is to find this optimal ω. (Again, my belief is that 1.1 is not the optimal omega; I will update once I have done the calculation.) The image is from Strang, G., 2006, "Linear Algebra and Its Applications", 4th edition, page 411.

Edit: Indeed, by plotting the number of SOR iterations against omega, it appears that my optimal omega lies in the range 1.0300 to 1.0440. Every omega in that range converges in 5 iterations, which is more efficient than pure Gauss-Seidel at omega = 1, which takes 6.

[efficiency plot: SOR iterations versus omega]
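For completeness, a plot like the one above can be produced by counting iterations to convergence over a range of ω values. A rough sketch of that kind of experiment, reusing the sor_sweep helper from the earlier sketch; the tolerance and the 4x4 system below are only stand-ins (the system shown has solution (1, 2, -1, 1), but it is not necessarily the matrix from the question, which is not reproduced here):

import numpy as np

def iterations_to_converge(A, b, omega, tol=1e-4, max_iter=100):
    # Count SOR sweeps until the L-infinity change drops below tol.
    x = np.zeros(len(b))
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        x = sor_sweep(A, b, x, omega)          # helper from the sketch above
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return k
    return max_iter

# Stand-in strictly diagonally dominant system (illustrative only).
A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])

omegas = np.linspace(1.00, 1.20, 41)
counts = [iterations_to_converge(A, b, w) for w in omegas]
# Plotting counts against omegas then shows where the iteration count dips.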

