Line: 1 to 1  

 
Line: 305 to 305  
I_{g_k} \beta^{(k)} = (X_{g_k}^T W X_{g_k})^{-1}_E X^T W ( y - X I_{ \bar{g_k} } \beta^{(k-1)} ) \qquad . \qquad \qquad (4.2.10)
Changed:  
< <  This is a representation of the first line of (4.3.1).  
> >  This is a representation of the first line of (4.1.3).  
Changed:  
< <  The second line of (4.3.1) can be represented with  
> >  The second line of (4.1.3) can be represented with  

Line: 1 to 1  

 
Line: 173 to 173  
that are complementary to indices in g_k .
Consider the following iterative procedure.  
 
Line: 181 to 181  
\beta^{(k)}_i = \left\{ \begin{array}{rl}  
Changed:  
< <  (\hat{ \beta }_{(g_k)})_i & \text{if } i \in g_k ,\\
> >  (\hat{ \beta }_{(g_k)}^{(k)})_i & \text{if } i \in g_k ,\\
\beta^{(k-1)}_i & \text{if } i \notin g_k .
\end{array} \right. \qquad , \qquad\qquad (4.1.3)
 
Changed:  
< <  where \hat{\beta}_{(g_k)} is the "point"
> >  where \hat{\beta}_{(g_k)}^{(k)} is the "point"
where S , as a function of the parameters \beta_{(g_k)} , takes its minimum (while the rest of the parameters are fixed at the values obtained in the previous iteration:
Changed:  
< <  )
 
> > 
) .
 
(i.e. ) .
One can expect that

Line: 1 to 1  

 
Line: 313 to 313  
Summing up (4.2.10) and (4.2.11) gives
 
Changed:  
< <  \beta^{(k)}  
> >  \boxed { \beta^{(k)} }  
& = & ( I_{g_k} + I_{ \bar{g_k} } ) \beta^{(k)} = \\ & = & (X_{g_k}^T W X_{g_k})^{-1}_E X^T W y + [I - (X_{g_k}^T W X_{g_k})^{-1}_E X^T W X ] I_{ \bar{g_k} } \beta^{(k-1)} = \\
Changed:  
< <  & = & A_{g_k} X^T W y + B_{g_k} \beta^{(k-1)}
> >  & = & \boxed {A_{g_k} X^T W y + B_{g_k} \beta^{(k-1)} }
\qquad . \qquad \qquad (4.2.12)
where we denoted
 
Changed:  
< <  \boxed{ B_{g_k}} & = & (I - A_{g_k} X_T W X ) I_{ \bar{g_k} } & \qquad . & \qquad \qquad (4.2.14)
> >  \boxed{ B_{g_k}} & = & (I - A_{g_k} X^T W X ) I_{ \bar{g_k} } & \qquad . & \qquad \qquad (4.2.14)
\end{array}  
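Eqs.(4.2.12)-(4.2.14) can be cross-checked numerically. Below is a minimal numpy sketch (the helper name A_B_for_group, the problem sizes and the random test data are made up for illustration), verifying that the one-matrix form (4.2.12) reproduces the direct group update (4.2.5):

<verbatim>
import numpy as np

def A_B_for_group(X, W, g):
    """Matrices of eqs.(4.2.13)-(4.2.14): A_g is the 'extended' inverse
    (X_g^T W X_g)^{-1}_E and B_g = (I - A_g X^T W X) I_gbar."""
    n = X.shape[1]
    A = np.zeros((n, n))
    Xg = X[:, g]
    A[np.ix_(g, g)] = np.linalg.inv(Xg.T @ W @ Xg)   # small inverse embedded at rows/cols g
    I_gbar = np.eye(n)
    I_gbar[g, g] = 0.0                               # selector of the complementary indices
    B = (np.eye(n) - A @ X.T @ W @ X) @ I_gbar
    return A, B

rng = np.random.default_rng(0)
m, n = 8, 4
X = rng.normal(size=(m, n))
y = rng.normal(size=m)
W = np.diag(rng.uniform(0.5, 2.0, size=m))           # diagonal weights
g, gbar = np.array([0, 2]), np.array([1, 3])         # one group and its complement
beta_prev = rng.normal(size=n)

A_g, B_g = A_B_for_group(X, W, g)
beta_new = A_g @ X.T @ W @ y + B_g @ beta_prev       # eq.(4.2.12)

# direct group update, eq.(4.2.5)
Xg = X[:, g]
direct = np.linalg.solve(Xg.T @ W @ Xg,
                         Xg.T @ W @ (y - X[:, gbar] @ beta_prev[gbar]))
print(np.allclose(beta_new[g], direct),              # True
      np.allclose(beta_new[gbar], beta_prev[gbar]))  # True: fixed parameters kept
</verbatim>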
Line: 334 to 334  
4.3.1 \beta^{(k)} as a function of \beta^{(0)}
Changed:  
< <  Let us define matrices D_k and B_k as
> >  Let us define matrices A_k and B_k as
 
Changed:  
< <  \boxed{ D_k } = \left\{  
> >  \boxed{ A_k } = \left\{  
\begin{array}{lll} 0 & , & k=0 \\
Changed:  
< <  A_{g_k} + B_{g_k} D_{k-1} & , & k > 0 .
> >  A_{g_k} + B_{g_k} A_{k-1} & , & k > 0 .
\end{array} \right. \qquad , \qquad\qquad (4.3.1)
 
Line: 351 to 351  
Then, from (4.2.12),
 
Changed:  
< <  \boxed{ \beta^{(k)} = D_k X^T W y + B_k \beta^{(0)} }  
> >  \boxed{ \beta^{(k)} = A_k X^T W y + B_k \beta^{(0)} }  
\qquad . \qquad \qquad (4.3.3)
Indeed, by induction:
 
Changed:  
< <  \beta^{(1)} = A_{g_1} X^T W y + B_{g_1} \beta^{(0)} = D_1 X^T W y + B_1 \beta^{(0)}  
> >  \beta^{(1)} = A_{g_1} X^T W y + B_{g_1} \beta^{(0)} = A_1 X^T W y + B_1 \beta^{(0)}  
\qquad , \qquad \qquad (4.3.4)
 
Changed:  
< <  & = & A_{g_{p+1}} X^T W y + B_{g_{p+1}} ( D_p X^T W y + B_p \beta^{(0)} ) = \\ & = & ( A_{g_{p+1}} + B_{g_{p+1}} D_p ) X^T W y + B_{g_{p+1}} B_p \beta^{(0)} = \\ & = & D_{p+1} X^T W y + B_{p+1} \beta^{(0)}
> >  & = & A_{g_{p+1}} X^T W y + B_{g_{p+1}} ( A_p X^T W y + B_p \beta^{(0)} ) = \\ & = & ( A_{g_{p+1}} + B_{g_{p+1}} A_p ) X^T W y + B_{g_{p+1}} B_p \beta^{(0)} = \\ & = & A_{p+1} X^T W y + B_{p+1} \beta^{(0)}
\qquad , \qquad \qquad (4.3.5)
In particular, eq.(4.3.3) is valid for k = N :
Changed:  
< <  \boxed{ \beta^{(N)} = D_N X^T W y + B_N \beta^{(0)} }  
> >  \boxed{ \beta^{(N)} = A_N X^T W y + B_N \beta^{(0)} }  
\qquad . \qquad \qquad (4.3.6)  
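A short numerical check of the induction (4.3.1)-(4.3.6) on made-up data; the A_B_for_group helper from the sketch after (4.2.14) is repeated so the snippet is self-contained. The closed form must coincide with the step-by-step iteration:

<verbatim>
import numpy as np

def A_B_for_group(X, W, g):                          # eqs.(4.2.13)-(4.2.14)
    n = X.shape[1]
    A = np.zeros((n, n))
    Xg = X[:, g]
    A[np.ix_(g, g)] = np.linalg.inv(Xg.T @ W @ Xg)
    I_gbar = np.eye(n)
    I_gbar[g, g] = 0.0
    return A, (np.eye(n) - A @ X.T @ W @ X) @ I_gbar

rng = np.random.default_rng(1)
m, n = 8, 4
X = rng.normal(size=(m, n))
y = rng.normal(size=m)
W = np.diag(rng.uniform(0.5, 2.0, size=m))
groups = [np.array([0, 1]), np.array([2, 3])]        # N = 2 groups, an arbitrary split
beta0 = rng.normal(size=n)

A_k, B_k = np.zeros((n, n)), np.eye(n)               # A_0 = 0; empty product B_0 = I
beta = beta0.copy()
for g in groups:                                     # k = 1 ... N
    A_g, B_g = A_B_for_group(X, W, g)
    A_k = A_g + B_g @ A_k                            # recursion (4.3.1)
    B_k = B_g @ B_k                                  # product (4.3.2)
    beta = A_g @ X.T @ W @ y + B_g @ beta            # step by step, eq.(4.2.12)

closed = A_k @ X.T @ W @ y + B_k @ beta0             # closed form (4.3.6)
print(np.allclose(closed, beta))                     # True
</verbatim>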
Line: 384 to 384  
eq.(4.3.3) gives rise to
 
Changed:  
< <  & = & D_k X^T W M_y (D_k X^T W)^T = D_k X^T W M_y W^T X D_k^T = \\ & = & D_k X^T W X D_k^T
> >  & = & A_k X^T W M_y (A_k X^T W)^T = A_k X^T W M_y W^T X A_k^T = \\ & = & A_k X^T W X A_k^T
\qquad . \qquad \qquad (4.3.7)
Comparing this with the covariance matrix of the exact solution, eq.(2.6), yields
 
Changed:  
< <  \boxed{ \lim_{k \to \infty} D_k X^T W X D_k^T = M_{\hat{\beta}} = (X^T W X ) ^{-1} }
> >  \boxed{ \lim_{k \to \infty} A_k X^T W X A_k^T = M_{\hat{\beta}} = (X^T W X ) ^{-1} }
\qquad . \qquad \qquad (4.3.8)  
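The limit (4.3.8) is easy to observe numerically; a sketch with made-up data (same illustrative helper), where A_k is accumulated over many sweeps through the groups:

<verbatim>
import numpy as np

def A_B_for_group(X, W, g):                          # eqs.(4.2.13)-(4.2.14)
    n = X.shape[1]
    A = np.zeros((n, n))
    Xg = X[:, g]
    A[np.ix_(g, g)] = np.linalg.inv(Xg.T @ W @ Xg)
    I_gbar = np.eye(n)
    I_gbar[g, g] = 0.0
    return A, (np.eye(n) - A @ X.T @ W @ X) @ I_gbar

rng = np.random.default_rng(2)
m, n = 8, 4
X = rng.normal(size=(m, n))
W = np.diag(rng.uniform(0.5, 2.0, size=m))
groups = [np.array([0, 1]), np.array([2, 3])]

A_k = np.zeros((n, n))                               # A_0 = 0
for _ in range(500):                                 # many sweeps over the groups
    for g in groups:
        A_g, B_g = A_B_for_group(X, W, g)
        A_k = A_g + B_g @ A_k                        # recursion (4.3.1)

H = X.T @ W @ X
print(np.max(np.abs(A_k @ H @ A_k.T - np.linalg.inv(H))))   # -> ~0, eq.(4.3.8)
</verbatim>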
Line: 402 to 402  
\beta^{(N)} = F(\beta^{(0)}) \qquad . \qquad \qquad (4.4.1)  
Changed:  
< <  we can build the expression for the k = 2N case in one step as
> >  one can build the expression for the k = 2N case in one step as
 
Line: 410 to 410  
Substituting (4.3.6) into (4.4.2) gives
 
Changed:  
< <  & = & D_N X^T W y + B_N ( D_N X^T W y + B_N \beta^{(0)} ) = \\ & = & (I + B_N)D_N X^T W y + B_N^2 \beta^{(0)}
> >  & = & A_N X^T W y + B_N ( A_N X^T W y + B_N \beta^{(0)} ) = \\ & = & (I + B_N)A_N X^T W y + B_N^2 \beta^{(0)}
\qquad . \qquad \qquad (4.4.3)
Transforming similarly the k = 2N formula to the k = 4N one, gives
 
Changed:  
< <  \beta^{(4N)} = (I + B_N^2)(I + B_N)D_N X^T W y + B_N^4 \beta^{(0)}  
> >  \beta^{(4N)} = (I + B_N^2)(I + B_N)A_N X^T W y + B_N^4 \beta^{(0)}  
\qquad . \qquad \qquad (4.4.4)
Then for the case k = 8N one has
 
Changed:  
< <  \beta^{(8N)} = (I + B_N^4)(I + B_N^2)(I + B_N)D_N X^T W y + B_N^8 \beta^{(0)}  
> >  \beta^{(8N)} = (I + B_N^4)(I + B_N^2)(I + B_N)A_N X^T W y + B_N^8 \beta^{(0)}  
\qquad , \qquad \qquad and so on ...
Line: 429 to 429  
This generalizes to
 
Changed:  
< <  \beta^{(2^p N)} = \left\lgroup \prod_{i=p-1}^0 (I + B_N^{2^i}) \right\rgroup D_N X^T W y
> >  \beta^{(2^p N)} = \left\lgroup \prod_{i=p-1}^0 (I + B_N^{2^i}) \right\rgroup A_N X^T W y
+ B_N^{2^p} \beta^{(0)} \quad , \quad p = 1,2, \: ... \quad } 
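A numpy sketch of the doubling formula (4.4.5) on made-up data. All factors (I + B_N^{2^i}) are polynomials in B_N and therefore commute, so the product can be accumulated while B_N is repeatedly squared; p iterations of the loop stand in for 2^p plain sweeps:

<verbatim>
import numpy as np

def A_B_for_group(X, W, g):                          # eqs.(4.2.13)-(4.2.14)
    n = X.shape[1]
    A = np.zeros((n, n))
    Xg = X[:, g]
    A[np.ix_(g, g)] = np.linalg.inv(Xg.T @ W @ Xg)
    I_gbar = np.eye(n)
    I_gbar[g, g] = 0.0
    return A, (np.eye(n) - A @ X.T @ W @ X) @ I_gbar

rng = np.random.default_rng(3)
m, n = 8, 4
X = rng.normal(size=(m, n))
y = rng.normal(size=m)
W = np.diag(rng.uniform(0.5, 2.0, size=m))
groups = [np.array([0, 1]), np.array([2, 3])]
beta0 = rng.normal(size=n)

A_N, B_N = np.zeros((n, n)), np.eye(n)               # one full sweep: eqs.(4.3.1)-(4.3.2)
for g in groups:
    A_g, B_g = A_B_for_group(X, W, g)
    A_N, B_N = A_g + B_g @ A_N, B_g @ B_N

p = 6                                                # target: 2^p * N steps
prod, Bpow = np.eye(n), B_N
for _ in range(p):
    prod = prod @ (np.eye(n) + Bpow)                 # accumulate prod of (I + B_N^{2^i})
    Bpow = Bpow @ Bpow                               # B_N^{2^{i+1}}

beta_doubling = prod @ A_N @ (X.T @ W @ y) + Bpow @ beta0    # eq.(4.4.5)

beta = beta0.copy()                                  # reference: 2^p applications of the N-step map
for _ in range(2 ** p):
    beta = A_N @ (X.T @ W @ y) + B_N @ beta
print(np.allclose(beta_doubling, beta))              # True
</verbatim>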
Line: 1 to 1  

 
Line: 397 to 397  
4.4 2^p N steps for p = 1, 2, ...
Changed:  
< <  Having the expression (4.3.6)  
> >  With the expression (4.3.6) of the form  
 
Changed:  
< <  we can build the expression for the k = 2N case in one step as
> >  we can build the expression for the k = 2N case in one step as
 
Line: 435 to 435  
} \qquad , \qquad \qquad (4.4.5)  
Deleted:  
< <  asd 
Line: 1 to 1  

 
Line: 331 to 331  
4.3 k steps
Added:  
> >  4.3.1 \beta^{(k)} as a function of \beta^{(0)}
Let us define matrices A_k and B_k as
 
Line: 343 to 346  
and
 
Changed:  
< <  \boxed{ B_k} & = & \prod_{i=k}^1 B_{g_i} = B_k \cdot ... \cdot B_1 \qquad . \qquad \qquad (4.3.2)  
> >  \boxed{ B_k} = \prod_{i=k}^1 B_{g_i} = B_{g_k} \cdot ... \cdot B_{g_1}
\qquad . \qquad \qquad (4.3.2)
Then, from (4.2.12),
In particular, eq.(4.3.3) is valid for k = N :
4.3.2 The covariance matrix of \beta^{(k)}
According to the general rule of covariance matrix transformation ( M_{L y} = L M_y L^T ), eq.(4.3.3) gives rise to
Added:  
> >  Comparing this with the covariance matrix of the exact solution, eq.(2.6), yields
4.4 2^p N steps for p = 1, 2, ...
Added:  
> >  Having the expression (4.3.6)
 
Added:  
> >  This generalizes to
 
Line: 1 to 1  

 
Line: 299 to 299  
\end{array} \right. \qquad . \qquad\qquad (4.2.9)
 
Added:  
> >  Then (4.2.5) can be written as
The second line of (4.3.1) can be represented with
4.3 k steps
Let us define matrices A_k and B_k as
Line: 1 to 1  

 
Line: 246 to 246  
& = & (X_{g_k}^T W X_{g_k})^{-1} X_{g_k}^T W ( y - X_{\bar{g_k}} \beta^{(k-1)} ) \qquad . \qquad \qquad (4.2.5)
Added:  
> >  Let us write the last expression via matrices of dimensions n \times n .
We define matrices I_{g_k} and I_{ \bar{g_k} } as follows
We also define the matrix (X_{g_k}^T W X_{g_k})^{-1}_E (the subscript E stands for "extended") via the matrix (X_{g_k}^T W X_{g_k})^{-1} :
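A minimal numpy sketch of these definitions with made-up data (the helper name extended_matrices is not from the text): it builds I_{g_k}, I_{ \bar{g_k} } and the extended inverse, and checks that eq.(4.2.10) reproduces the group solution (4.2.5):

<verbatim>
import numpy as np

def extended_matrices(X, W, g):
    """Build the n x n matrices used in eq.(4.2.10): the selector I_g,
    its complement I_gbar, and (X_g^T W X_g)^{-1}_E, i.e. the small
    inverse embedded into the rows/columns indexed by g, zeros elsewhere."""
    n = X.shape[1]
    I_g = np.zeros((n, n))
    I_g[g, g] = 1.0
    I_gbar = np.eye(n) - I_g
    Xg = X[:, g]
    inv_E = np.zeros((n, n))
    inv_E[np.ix_(g, g)] = np.linalg.inv(Xg.T @ W @ Xg)
    return I_g, I_gbar, inv_E

rng = np.random.default_rng(4)
m, n = 8, 4
X = rng.normal(size=(m, n))
y = rng.normal(size=m)
W = np.diag(rng.uniform(0.5, 2.0, size=m))
g, gbar = np.array([1, 3]), np.array([0, 2])
beta_prev = rng.normal(size=n)

I_g, I_gbar, inv_E = extended_matrices(X, W, g)
lhs = inv_E @ X.T @ W @ (y - X @ I_gbar @ beta_prev)     # right side of eq.(4.2.10)

Xg = X[:, g]                                             # direct small system, eq.(4.2.5)
small = np.linalg.solve(Xg.T @ W @ Xg,
                        Xg.T @ W @ (y - X[:, gbar] @ beta_prev[gbar]))
print(np.allclose(lhs[g], small), np.allclose(lhs[gbar], 0.0))   # True True
</verbatim>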
Line: 1 to 1  

 
Line: 155 to 155  
4. An iterative solution  
Added:  
> >  4.1 The procedure  
Let indices of parameters \beta_1, ... , \beta_n be distributed among N groups
with sizes n(g_1), ... , n(g_N) respectively.
 
Changed:  
< <  g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad . \qquad\qquad (4.1)  
> >  g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad . \qquad\qquad (4.1.1)  
The subset of the parameters corresponding to the g_k group
of indices can be considered as an n(g_k) \times 1 column \beta_{(g_k)}
 
Changed:  
< <  \qquad\qquad (4.2)  
> >  \qquad\qquad (4.1.2)  
Let \bar{g_k} denote the set of indices that are complementary to indices in g_k .
Line: 174 to 176  
 
Changed:  
< <  by minimizing S over \beta_{(g_k)} at step k :
> >  by minimizing S over \beta_{(g_k)} at the step k :
\beta^{(k)}_i = \left\{ \begin{array}{rl} (\hat{ \beta }_{(g_k)})_i & \text{if } i \in g_k ,\\ \beta^{(k-1)}_i & \text{if } i \notin g_k .
Changed:  
< <  \end{array} \right. \qquad , \qquad\qquad (4.3)  
> >  \end{array} \right. \qquad , \qquad\qquad (4.1.3)  
where \hat{\beta}_{(g_k)} is the "point"
Line: 195 to 197  
(i.e. ) .
One can expect that
 
Changed:  
< <  \lim_{k \to \infty} \beta^{(k)} = \hat{\beta} \qquad . \qquad\qquad (4.4)  
> >  \lim_{k \to \infty} \beta^{(k)} = \hat{\beta} \qquad . \qquad\qquad (4.1.4)  
Added:  
> > 
4.2 The k-th step
The k-th iteration consists in finding \hat{\beta}_{(g_k)} , a set of n(g_k) parameters, that minimizes S .
Let X_{g_k}
be an m \times n(g_k) submatrix of X
built from the columns of X with indices belonging to g_k
(in other words, it is what remains of X
after removing columns which have indices not in g_k ).
Therefore, the solution is given by formulae (2.4):  
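A minimal numpy sketch of this step (the function name step_update, the data and the two-group split are illustrative only): one call re-fits the parameters of one group via (2.4) applied to X_{g_k}, and cycling over the groups approaches the exact solution, as expected from (4.1.4):

<verbatim>
import numpy as np

def step_update(X, W, y, beta_prev, g):
    """One iteration, eqs.(4.1.3)/(4.2.5): re-fit the parameters in group g
    while the remaining parameters stay at their previous values."""
    gbar = np.setdiff1d(np.arange(X.shape[1]), g)
    Xg = X[:, g]
    resid = y - X[:, gbar] @ beta_prev[gbar]        # subtract the fixed contribution
    beta = beta_prev.copy()
    beta[g] = np.linalg.solve(Xg.T @ W @ Xg, Xg.T @ W @ resid)
    return beta

rng = np.random.default_rng(5)
m, n = 8, 4
X = rng.normal(size=(m, n))
y = rng.normal(size=m)
W = np.diag(rng.uniform(0.5, 2.0, size=m))
groups = [np.array([0, 1]), np.array([2, 3])]       # N = 2 groups, arbitrary split

beta = np.zeros(n)                                  # beta^(0)
for _ in range(500):                                # many sweeps over the groups
    for g in groups:
        beta = step_update(X, W, y, beta, g)

beta_exact = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # eq.(2.4)
print(np.max(np.abs(beta - beta_exact)))            # -> ~0, cf. eq.(4.1.4)
</verbatim>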
Line: 1 to 1  

 
Line: 24 to 24  
Changed:  
< <  Wikipedia links  
> >  1. Wikipedia links  
 
Line: 35 to 35  
http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Weighted_linear_least_squares
 
Changed:  
< <  Uncorrelated measurements  
> >  2. Uncorrelated measurements  
Let y_i , i = 1, ... , m , with variances \sigma_i^2 be measurements of functions
Line: 46 to 46  
by minimizing over \beta the expression
 
Changed:  
< <  S = \sum_{i=1}^{m} W_{ii}(y_i - \sum_{j=1}^{n} X_{ij}\beta_j)^2 \qquad ,
> >  S = \sum_{i=1}^{m} W_{ii}(y_i - \sum_{j=1}^{n} X_{ij}\beta_j)^2 \qquad , \qquad \qquad (2.1)
where the weight matrix W of dimension m \times m
Line: 58 to 58  
 
Added:  
> >  \qquad \qquad (2.2)  
Line: 66 to 67  
 
Changed:  
< <  = 0 \quad , \quad p=1,... \; ,n  
> >  = 0 \quad , \quad p=1,... \; ,n \qquad \qquad (2.3)  
or  
Line: 74 to 75  
 
Changed:  
< <  \boxed{ \hat{\beta} = (X^T W X)^{1} X^T W y} \qquad .  
> >  \boxed{ \hat{\beta} = (X^T W X)^{1} X^T W y} \qquad . \qquad \qquad (2.4)  
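As an illustration, a minimal numpy sketch of eqs.(2.4) and (2.6) on synthetic data (the sizes, X, sigma and beta_true below are made up):

<verbatim>
import numpy as np

rng = np.random.default_rng(6)
m, n = 6, 3
X = rng.normal(size=(m, n))                      # known m x n matrix
beta_true = np.array([1.0, -2.0, 0.5])           # parameters to recover
sigma = rng.uniform(0.5, 2.0, size=m)            # per-measurement standard deviations
y = X @ beta_true + rng.normal(scale=sigma)      # uncorrelated measurements

W = np.diag(1.0 / sigma**2)                      # W = M_y^{-1}, W_ii = 1/sigma_i^2

# eq.(2.4): weighted least squares estimate
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
# eq.(2.6): covariance matrix of the estimate
M_beta_hat = np.linalg.inv(X.T @ W @ X)
print(beta_hat, np.sqrt(np.diag(M_beta_hat)))    # estimates and their uncertainties
</verbatim>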
In a general case of a linear transformation,
Line: 84 to 85  
 
Added:  
> >  \qquad \qquad (2.5)  
With W = M_y^{-1} by the definition of W , that simplifies to
 
Changed:  
< <  \boxed{ M_{\hat{\beta}} = (X^T W X)^{-1} } \qquad .
> >  \boxed{ M_{\hat{\beta}} = (X^T W X)^{-1} } \qquad . \qquad \qquad (2.6)
Note that
Changed:  
< <  \sum_{i=1}^m W_{ii} ( 2 X_{iq} X_{ip} )= 2(X^T W X)_{qp} \qquad ,  
> >  \sum_{i=1}^m W_{ii} ( 2 X_{iq} X_{ip} )= 2(X^T W X)_{qp} \qquad , \qquad \qquad (2.7)  
and
 
Added:  
> >  \qquad \qquad (2.8)  
Changed:  
< <  Correlated measurements  
> >  3. Correlated measurements  
Let y be uncorrelated measurements as those in the previous section, and y = A y'
Line: 122 to 125  
& = & (Ay'-X\beta)^T (A^{-1})^T M_{y'}^{-1} A^{-1} (Ay'-X\beta) = \\ & = & (y'-A^{-1}X\beta )^T M_{y'}^{-1}(y'-A^{-1}X\beta ) \\ & = & \boxed{ (y'-X'\beta)^T M_{y'}^{-1}(y'-X'\beta) } \qquad ,
Added:  
> >  \qquad\qquad (3.1)  
where X' = A^{-1} X .
Line: 132 to 136  
& = & (X^T [M_y^{-1}] X)^{-1} X^T [M_y^{-1}][y] = \\ & = & (X'^T A^T [(A^{-1})^T M_{y'}^{-1} A^{-1}] AX')^{-1} X'^T A^T[(A^{-1})^T M_{y'}^{-1} A^{-1}][Ay'] = \\ & = & \boxed{ (X'^T M_{y'}^{-1} X')^{-1} X'^T M_{y'}^{-1}y' } \qquad ,
Added:  
> >  \qquad\qquad (3.2)  
and
 
Added:  
> >  \qquad\qquad (3.3)  
Thus, all the formulae for the correlated measurements are similar  
Line: 143 to 149  
with the only complication being the replacement of a diagonal weight matrix with a nondiagonal one:
 
Changed:  
< <  \text{nondiagonal}\quad W_{y'} = M_{y'}^{-1} } \qquad .
> >  \text{nondiagonal}\quad W_{y'} = M_{y'}^{-1} } \qquad . \qquad\qquad (3.4)
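A small numpy check of this equivalence on made-up data, assuming y = A y' with an arbitrarily chosen invertible A: the solution (3.2) computed from the correlated y' with the nondiagonal weight (3.4) coincides with the uncorrelated solution (2.4):

<verbatim>
import numpy as np

rng = np.random.default_rng(7)
m, n = 5, 2
X = rng.normal(size=(m, n))
y = rng.normal(size=m)                           # uncorrelated measurements
M_y = np.diag(rng.uniform(0.5, 2.0, size=m))     # diagonal covariance of y
A = rng.normal(size=(m, m)) + 3.0 * np.eye(m)    # some invertible matrix, y = A y'

Ainv = np.linalg.inv(A)
y_p = Ainv @ y                                   # correlated measurements y'
X_p = Ainv @ X                                   # X' = A^{-1} X
M_yp = Ainv @ M_y @ Ainv.T                       # covariance of y', nondiagonal
W_yp = np.linalg.inv(M_yp)                       # eq.(3.4): nondiagonal weight matrix

# eq.(3.2): same formula as (2.4), written in primed quantities
beta_p = np.linalg.solve(X_p.T @ W_yp @ X_p, X_p.T @ W_yp @ y_p)
W = np.linalg.inv(M_y)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y) # eq.(2.4)
print(np.allclose(beta_p, beta))                 # True
</verbatim>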
Changed:  
< <  An iterative solution  
> >  4. An iterative solution  
Let indices of parameters \beta_1, ... , \beta_n be distributed among N groups
with sizes n(g_1), ... , n(g_N) respectively.
 
Changed:  
< <  g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad . \qquad\qquad (3.1)  
> >  g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad . \qquad\qquad (4.1)  
The subset of the parameters corresponding to the g_k group
of indices can be considered as an n(g_k) \times 1 column \beta_{(g_k)}
 
Changed:  
< <  \qquad\qquad (3.2)  
> >  \qquad\qquad (4.2)  
Let \bar{g_k} denote the set of indices that are complementary to indices in g_k .
Line: 175 to 181  
\begin{array}{rl} (\hat{ \beta }_{(g_k)})_i & \text{if } i \in g_k ,\\ \beta^{(k-1)}_i & \text{if } i \notin g_k .
Changed:  
< <  \end{array} \right. \qquad , \qquad\qquad (3.3)  
> >  \end{array} \right. \qquad , \qquad\qquad (4.3)  
where \hat{\beta}_{(g_k)} is the "point"
Line: 185 to 191  
in the previous iteration: )  
(i.e. ) .
One can expect that
 
Changed:  
< <  \lim_{k \to \infty} \beta^{(k)} = \hat{\beta} \qquad . \qquad\qquad (3.4)  
> >  \lim_{k \to \infty} \beta^{(k)} = \hat{\beta} \qquad . \qquad\qquad (4.4)  
Line: 1 to 1  

 
Line: 153 to 153  
with sizes n(g_1), ... , n(g_N) respectively.
 
Changed:  
< <  g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad .  
> >  g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad . \qquad\qquad (3.1)  
The subset of the parameters corresponding to the g_k group
of indices can be considered as an n(g_k) \times 1 column \beta_{(g_k)}
 
Added:  
> >  \qquad\qquad (3.2)  
Added:  
> >  Let \bar{g_k} denote the set of indices
that are complementary to indices in g_k .
Consider the following iterative procedure.

Line: 1 to 1  

 
Line: 146 to 146  
\text{nondiagonal}\quad W_{y'} = M_{y'}^{-1} } \qquad .
Added:  
> > 
An iterative solution
Let indices of parameters \beta_1, ... , \beta_n be distributed among N groups
with sizes n(g_1), ... , n(g_N) respectively.

Line: 1 to 1  

 
Line: 52 to 52  
where the weight matrix W of dimension m \times m is diagonal and defined as the inverse of the diagonal covariance matrix M_y for y : i.e. W_{ii} = 1/\sigma_i^2 .
Changed:  
< <  In matrix notation (considering y and \beta
> >  In matrix notation (considering y and \beta
as columns m \times 1 and n \times 1 respectively), one has
 
Changed:  
< <  S = (y-X\beta)^T W (y-X\beta) \qquad .
> >  S = (y-X\beta)^T W (y-X\beta) = y^T W y - 2 \beta^T X^T W y +\beta^T X^T W X \beta \qquad .
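A quick numerical check of this expansion on made-up data (the quadratic form is evaluated three equivalent ways):

<verbatim>
import numpy as np

rng = np.random.default_rng(8)
m, n = 5, 2
X = rng.normal(size=(m, n))
y = rng.normal(size=m)
W = np.diag(rng.uniform(0.5, 2.0, size=m))
beta = rng.normal(size=n)

# component form: weighted sum of squared residuals
S_sum = sum(W[i, i] * (y[i] - X[i] @ beta) ** 2 for i in range(m))
# matrix form and its expansion
r = y - X @ beta
S_matrix = r @ W @ r
S_expanded = y @ W @ y - 2 * beta @ (X.T @ W @ y) + beta @ (X.T @ W @ X) @ beta
print(np.allclose([S_sum, S_matrix], S_expanded))   # True
</verbatim>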
Line: 92 to 93  
Added:  
> >  Note that
Correlated measurements
Let y be uncorrelated measurements as those in the previous section, and y = A y' ( A is an invertible matrix). Then y' are generally correlated and have the covariance matrix M_{y'} = A^{-1} M_y (A^{-1})^T . With y = A y' and X' = A^{-1} X , one gets Similarly, Thus, all the formulae for the correlated measurements are similar to those for the uncorrelated y , with the only complication being the replacement of a diagonal weight matrix with a nondiagonal one:
Line: 1 to 1  

 
Deleted:  
< <  
-- AlexanderFedotov - 2015-03-03
<!--
Line: 38 to 37  
Uncorrelated measurements  
Changed:  
< <  Let y_i with variances \sigma_i^2 be measurements
> >  Let y_i with variances \sigma_i^2 be measurements
of functions  
Changed:  
< <  with the known matrix X and unknown parameters \beta .
> >  with the known matrix X and unknown parameters \beta .
In the linear least squares method, one estimates the parameter vector \beta by minimizing over \beta the expression
Line: 51 to 51  
where the weight matrix W of dimension m \times m is diagonal and defined as the inverse of the diagonal covariance matrix M_y for y :
Changed:  
< <  i.e. W_{ii} = 1/\sigma_i^2 .
> >  i.e. W_{ii} = 1/\sigma_i^2 .
In matrix notation (considering y and \beta as columns m \times 1 and n \times 1 respectively), one has
Line: 64 to 64  
 
Changed:  
< <  = \sum_{i=1}^{m} W_{ii}(-2y_iX_{ip} + 2 \sum_{j=1}^{n}X_{ij}X_{ip}\beta_j) = 0 \quad , \quad p=1,...,n
> >  = \sum_{i=1}^{m} W_{ii}(-2y_iX_{ip} + 2 \sum_{j=1}^{n}X_{ij}X_{ip}\hat{\beta_j}) = 0 \quad , \quad p=1,... \; ,n
or  
Line: 72 to 73  
 
Changed:  
< <  \hat{\beta} = (X^T W X)^{-1} X^T W y
> >  \boxed{ \hat{\beta} = (X^T W X)^{-1} X^T W y} \qquad .
In a general case of a linear transformation,
Line: 80 to 81  
via M_y . Hence,
 
Changed:  
< <  M_{\hat{\beta}} = (X^T W X)^{-1} X^T W M_y [(X^T W X)^{-1} X^T W]^T = (X^T W X)^{-1} X^T W M_y W^T X (X^T W^T X)^{-1} \qquad .
> >  M_{\hat{\beta}} & = & (X^T W X)^{-1} X^T W M_y [(X^T W X)^{-1} X^T W]^T = \\ & = &(X^T W X)^{-1} X^T W M_y W^T X (X^T W^T X)^{-1} \qquad .
Changed:  
< <  As W = M_y^{-1} by the definition of W , this simplifies to
> >  With W = M_y^{-1} by the definition of W , that simplifies to
 
Changed:  
< <  M_{\hat{\beta}} = (X^T W X)^{-1} \qquad .
> >  \boxed{ M_{\hat{\beta}} = (X^T W X)^{-1} } \qquad .
Line: 1 to 1  

Added:  
> > 
-- AlexanderFedotov - 2015-03-03
<!--
LATEX VIA MATHMODEPLUGIN
DEFINE A VARIABLE (reference with )
-->
Sections:
Wikipedia links
Uncorrelated measurements
Let y_i with variances \sigma_i^2 be measurements of functions \sum_j X_{ij} \beta_j , with the known matrix X and unknown parameters \beta .
In the linear least squares method, one estimates the parameter vector \beta by minimizing over \beta the expression
In matrix notation (considering y and \beta as columns m \times 1 and n \times 1 respectively), one has
The estimate \hat{\beta} is the solution of the system of equations