In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this produces a system of nonlinear equations that is difficult to solve analytically, so an approximate numerical solution is needed. Two popular numerical methods are Newton's method and the Quasi-Newton (QN) method. Newton's method is time-consuming to execute because every iteration requires computing the Jacobian (derivative) matrix. The QN method overcomes this drawback by replacing the derivative computation with a direct update formula. One QN approach approximates the Hessian matrix with the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method also belongs to the QN family and shares the DFP property of maintaining a positive definite Hessian approximation. However, the BFGS method requires a large amount of memory, so an algorithm with lower memory usage is needed, namely the Limited Memory BFGS (LBFGS) method. The purpose of this research is to assess the efficiency of the LBFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. The research findings show that the BFGS and LBFGS methods have arithmetic operation counts of O(n²) and O(nm), respectively.
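The O(nm) figure for LBFGS comes from applying the inverse-Hessian approximation implicitly through the last m curvature pairs (s_i, y_i) rather than storing an explicit n × n matrix. As a rough illustration only (not the paper's code), the sketch below shows the standard LBFGS two-loop recursion in Python; the function name lbfgs_two_loop and the initial scaling H₀ = γI are assumptions made for the example.

```python
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    """Apply the implicit LBFGS inverse-Hessian approximation to grad.

    s_list and y_list hold the last m curvature pairs s_i = x_{i+1} - x_i
    and y_i = g_{i+1} - g_i, ordered oldest to newest (at least one pair).
    Returns r ≈ H_k @ grad; the search direction is then -r.
    """
    q = np.array(grad, dtype=float)          # working copy of the gradient
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []

    # First loop: newest pair back to the oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append(alpha)                  # stored newest-to-oldest

    # Initial approximation H_0 = gamma * I (a common scaling choice).
    gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    r = gamma * q

    # Second loop: oldest pair forward to the newest.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s

    return r
```

Each loop performs m dot products over length-n vectors, so one application costs O(nm) arithmetic operations and O(nm) storage, whereas forming and applying a dense BFGS inverse Hessian costs O(n²) per iteration.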