Seemingly Unrelated Regression Equations Models
The problem of estimating a system of regression equations whose random disturbances are correlated with one another is investigated. That is, the regression equations are linked statistically, though not structurally, through the non-diagonality of the associated variance-covariance matrix. The expression Seemingly Unrelated Regression Equations (SURE) reflects the fact that the individual equations are related to one another even though, superficially, they may not seem to be. The SURE model comprising G regression equations can be written as
$$ y_i = X_i \beta_i + u_i, \qquad i = 1, \ldots, G, $$
(5.1)
where the \(y_i \in \Re^T\) are the response vectors, the \(X_i \in \Re^{T \times k_i}\) are the exogenous matrices with full column rank, the \(\beta_i \in \Re^{k_i}\) are the coefficient vectors, and the \(u_i \in \Re^T\) are the disturbance terms. The basic assumptions underlying the SURE model (5.1) are \(E(u_i) = 0\), \(E(u_i u_j^T) = \sigma_{ij} I_T\), and that \(\lim_{T \to \infty}(X_i^T X_j / T)\) exists \((i, j = 1, \ldots, G)\). In compact form the SURE model can be written as
$$ \left( \begin{array}{c} y_1 \\ \vdots \\ y_G \end{array} \right) = \left( \begin{array}{ccc} X_1 & & \\ & \ddots & \\ & & X_G \end{array} \right) \left( \begin{array}{c} \beta_1 \\ \vdots \\ \beta_G \end{array} \right) + \left( \begin{array}{c} u_1 \\ \vdots \\ u_G \end{array} \right), $$
(5.2)
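To fix ideas, the following minimal NumPy sketch simulates a SURE system obeying these assumptions; the dimensions, coefficients, and the non-diagonal covariance matrix Sigma are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200                    # observations per equation (illustrative)
k = [2, 3]                 # column dimensions k_1, ..., k_G (illustrative)
G = len(k)                 # number of equations

# Exogenous matrices X_i with full column rank, and coefficients beta_i
X = [rng.standard_normal((T, k_i)) for k_i in k]
beta = [rng.standard_normal(k_i) for k_i in k]

# Disturbances satisfying E(u_i) = 0 and E(u_i u_j^T) = sigma_ij I_T:
# each period's G-vector of disturbances is drawn i.i.d. from N(0, Sigma).
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])    # non-diagonal: the equations are linked
U = rng.multivariate_normal(np.zeros(G), Sigma, size=T)   # T x G

# y_i = X_i beta_i + u_i,  i = 1, ..., G   (model (5.1))
y = [X[i] @ beta[i] + U[:, i] for i in range(G)]
```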
In (5.2), \(Y = (y_1 \; \cdots \; y_G)\) and \(U = (u_1 \; \cdots \; u_G)\) denote the \(T \times G\) matrices whose columns are the response and disturbance vectors, respectively. The direct sum of matrices \(\oplus_{i = 1}^G X_i\) defines the \(GT \times K\) block-diagonal matrix
$$ \mathop{\oplus}\limits_{i = 1}^{G} X_i = X_1 \oplus X_2 \oplus \cdots \oplus X_G = \left( \begin{array}{cccc} X_1 & & & \\ & X_2 & & \\ & & \ddots & \\ & & & X_G \end{array} \right), $$
(5.3)
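Continuing the sketch above, the direct sum in (5.3) can be assembled with scipy.linalg.block_diag, used here merely as a convenient block-diagonal constructor:

```python
from scipy.linalg import block_diag

# Direct sum X_1 ⊕ ... ⊕ X_G: a GT x K block-diagonal matrix with
# K = k_1 + ... + k_G; the blocks need not share dimensions.
X_oplus = block_diag(*X)
assert X_oplus.shape == (G * T, sum(k))
```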
In (5.3), \(K = \sum\nolimits_{i = 1}^G k_i\) is the total number of coefficients [125]. The matrices entering the direct sum need not be of the same dimension; it should be noted, however, that some properties of the direct sum given in the literature hold only for square matrices [134, pages 260-261]. The set of vectors \(\beta_1, \beta_2, \ldots, \beta_G\) is denoted by \(\{\beta_i\}_G\). The vec(·) operator stacks the columns of its matrix (or set-of-vectors) argument into a single column vector, that is,
$$ vec(Y) = \left( \begin{array}{c} y_1 \\ \vdots \\ y_G \end{array} \right) $$
and
$$ vec(\{\beta_i\}_G) = \left( \begin{array}{c} \beta_1 \\ \vdots \\ \beta_G \end{array} \right). $$
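In NumPy terms, vec(·) is column-major (Fortran-order) flattening; a brief sketch, again continuing the toy example:

```python
Y = np.column_stack(y)              # T x G matrix Y = (y_1 ... y_G)

# vec(Y): stack the columns of Y into a single GT-vector
vec_Y = Y.reshape(-1, order='F')

# vec({beta_i}): stack the coefficient vectors into a K-vector
vec_beta = np.concatenate(beta)
```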
Hereafter, the subscript G in the set operator \(\{\cdot\}\) is dropped, and the direct sum of matrices \(\oplus_{i = 1}^G\) is abbreviated to \(\oplus_i\) for notational convenience.
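Putting the pieces together, the compact form (5.2) states that \(vec(Y) = (\oplus_i X_i)\, vec(\{\beta_i\}) + vec(U)\); the toy example confirms this identity numerically:

```python
vec_U = U.reshape(-1, order='F')    # vec(U), a GT-vector

# Compact SURE model (5.2): vec(Y) = (⊕_i X_i) vec({β_i}) + vec(U)
assert np.allclose(vec_Y, X_oplus @ vec_beta + vec_U)
```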