GLOBAL CONVERGENCE OF THE TMR METHOD FOR UNCONSTRAINED OPTIMIZATION PROBLEMS

Title: GLOBAL CONVERGENCE OF THE TMR METHOD FOR UNCONSTRAINED OPTIMIZATION PROBLEMS
Publication Type: Journal Article
Year of Publication: 2017
Authors: BOUALI, T, BELLOUFI, M, GUEFAIFIA, R
Secondary Title: Communications in Applied Analysis
Volume: 21
Issue: 1
Start Page: 15
Pagination: 15
Date Published: 01/2017
Type of Work: scientific: mathematics
ISSN: 1083-2564
AMS: 65H10, 90C26
Abstract

Conjugate gradient methods are among the best-known iterative methods for solving large-scale optimization problems in scientific and engineering computation, owing to the simplicity of their iteration and their low memory requirements. It is well known that the search direction plays a central role in line search methods. In this paper, we propose a new search direction combined with the Wolfe line search technique for solving unconstrained optimization problems. Under this line search and some standard assumptions, the global convergence properties of the proposed method are established. Numerical results show that the proposed formula is more efficient than other CG coefficients.
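For context, the following is a minimal sketch of a generic nonlinear conjugate gradient iteration with a Wolfe line search, using the classical Fletcher-Reeves coefficient as a placeholder; it is not the TMR search direction proposed in the paper, and the Rosenbrock test function, tolerance, and iteration limit are illustrative assumptions only.

# Minimal sketch of a nonlinear conjugate gradient method with a Wolfe line
# search. The Fletcher-Reeves coefficient below is a classical stand-in; the
# TMR search direction proposed in the paper is NOT reproduced here.
import numpy as np
from scipy.optimize import line_search  # step lengths satisfying the Wolfe conditions


def cg_fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                              # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:     # stop when the gradient is small
            break
        # Step length along d satisfying the Wolfe conditions
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:               # line search failed; restart along -g
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d             # new conjugate search direction
        x, g = x_new, g_new
    return x


# Illustrative usage on the Rosenbrock function (an assumed test problem).
def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2


def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])


print(cg_fletcher_reeves(rosenbrock, rosenbrock_grad, [-1.2, 1.0]))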

URL: http://www.acadsol.eu/en/articles/21/1/2.pdf
DOI: 10.12732/caa.v21i1.2
Short Title: TMR Method for Optimization Problems
Alternate Journal: CAA
Refereed Designation: Refereed