# Difference: AVFLogA012LeastSquares (1 vs. 14)

#### Revision 14: 2017-01-15 - AlexanderFedotov

Line: 1 to 1

 META TOPICPARENT name="AVFedotovLogA"
-- AlexanderFedotov - 2015-03-03
Line: 305 to 305
I_{g_k} \beta^{(k)} = (X_{g_k}^T W X_{g_k})^{-1}_E X^T W ( y - X I_{ \bar{g_k} } \beta^{(k-1)} ) \qquad . \qquad \qquad (4.2.10)
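Eq. (4.2.10) is the extended-matrix form of the elementary step (4.2.5): re-fit only the parameters of group g_k against the residual left by the parameters held fixed. A minimal numpy sketch of that step (the function and variable names are illustrative, not part of the original derivation; W is the diagonal weight matrix):

```python
import numpy as np

def group_step(X, W, y, beta_prev, g):
    """One iteration of eq. (4.2.5): re-fit the parameters with indices in
    group g, keeping the complementary parameters fixed at beta_prev."""
    g = np.asarray(g)
    gbar = np.setdiff1d(np.arange(X.shape[1]), g)   # complementary indices
    # residual after subtracting the contribution of the fixed parameters
    r = y - X[:, gbar] @ beta_prev[gbar]
    Xg = X[:, g]
    beta = beta_prev.copy()
    beta[g] = np.linalg.solve(Xg.T @ W @ Xg, Xg.T @ W @ r)
    return beta
```

Cycling this step over all groups and repeating the sweeps is the iterative procedure of section 4.1; since S is a convex quadratic, the cycle converges to the exact solution of the normal equations.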
Changed:
< This is a representation of the first line of (4.3.1).
> This is a representation of the first line of (4.1.3).

Changed:
< The second line of (4.3.1) can be represented with
> The second line of (4.1.3) can be represented with

#### Revision 13: 2015-03-27 - AlexanderFedotov

Line: 173 to 173
that are complementary to indices in .

Consider the following iterative procedure.


1. Make N steps or, equivalently, iterations for , thus finding vectors by minimizing over at the step :
Line: 181 to 181
\beta^{(k)}_i = \left\{ \begin{array}{rl}
Changed:
< (\hat{ \beta }_{(g_k)})_i & \text{if } i \in g_k ,\
> (\hat{ \beta }_{(g_k)}^{(k)})_i & \text{if } i \in g_k ,\

where is the "point"
where as a function of parameters , takes minimum (while the rest parameters are fixed at the values obtained in the previous iteration:
Changed:
< )
< 1. Repeat (4.3) infinitely, defining for
> ) .
> 1. Repeat (4.1.3) infinitely, defining for
(i.e. ) . One can expect that

#### Revision 12: 2015-03-25 - AlexanderFedotov

Line: 313 to 313
Summing up (4.2.10) and (4.2.11) gives
Changed:
< \beta^{(k)}
> \boxed { \beta^{(k)} }
& = & ( I_{g_k} + I_{ \bar{g_k} } ) \beta^{(k)} = \ & = & (X_{g_k}^T W X_{g_k})^{-1}_E X^T W y + [I - (X_{g_k}^T W X_{g_k})^{-1}_E X^T W X ] I_{ \bar{g_k} } \beta^{(k-1)} =\
Changed:
< & = & A_{g_k} X^T W y + B_{g_k} \beta^{(k-1)}
> & = & \boxed {A_{g_k} X^T W y + B_{g_k} \beta^{(k-1)} }
\begin{array}{lllrr} \boxed{ A_{g_k}} & = & (X_{g_k}^T W X_{g_k})^{-1}_E & \qquad , & \qquad \qquad (4.2.13) \
Changed:
< \boxed{ B_{g_k}} & = & (I - A_{g_k} X_T W X ) I_{ \bar{g_k} } & \qquad . & \qquad \qquad (4.2.14)
> \boxed{ B_{g_k}} & = & (I - A_{g_k} X^T W X ) I_{ \bar{g_k} } & \qquad . & \qquad \qquad (4.2.14)
\end{array}
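The boxed one-step form (4.2.12), with the matrices of (4.2.13)-(4.2.14), can be checked numerically. A sketch that builds A_{g_k} and B_{g_k}, embedding the "extended" inverse (…)^{-1}_E as an n x n matrix (illustrative names; diagonal W assumed):

```python
import numpy as np

def step_matrices(X, W, g):
    """Build A_{g_k} and B_{g_k} of eqs. (4.2.13)-(4.2.14).

    A is the n x n 'extended' inverse (X_g^T W X_g)^{-1}_E: the inverse of
    the group block, embedded in the rows/columns of g, zeros elsewhere.
    """
    n = X.shape[1]
    g = np.asarray(g)
    Xg = X[:, g]
    A = np.zeros((n, n))
    A[np.ix_(g, g)] = np.linalg.inv(Xg.T @ W @ Xg)
    I_gbar = np.eye(n)
    I_gbar[g, g] = 0.0          # projector onto the complementary indices
    B = (np.eye(n) - A @ X.T @ W @ X) @ I_gbar
    return A, B
```

One step (4.2.12) is then `beta_new = A @ X.T @ W @ y + B @ beta_prev`, which reproduces the direct block re-fit of (4.2.5).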
Line: 334 to 334

#### 4.3.1 as a function of

Let us define matrices and as

Changed:
< \boxed{ D_k } = \left\{
> \boxed{ A_k } = \left\{
\begin{array}{lll} 0 & , & k=0 \
Changed:
< A_{g_k} + B_{g_k} D_{k-1} & , & k > 0 .
> A_{g_k} + B_{g_k} A_{k-1} & , & k > 0 .

Line: 351 to 351
Then, from (4.2.12),
Changed:
< \boxed{ \beta^{(k)} = D_k X^T W y + B_k \beta^{(0)} }
> \boxed{ \beta^{(k)} = A_k X^T W y + B_k \beta^{(0)} }
• eq.(4.3.3) holds for :
Changed:
< \beta^{(1)} = A_{g_1} X^T W y + B_{g_1} \beta^{(0)} = D_1 X^T W y + B_1 \beta^{(0)}
> \beta^{(1)} = A_{g_1} X^T W y + B_{g_1} \beta^{(0)} = A_1 X^T W y + B_1 \beta^{(0)}
• and assuming it is true for , leads to
\beta^{(p+1)} & = & A_{g_{p+1}} X^T W y + B_{g_{p+1}} \beta^{(p)} = \
Changed:
< & = & A_{g_{p+1}} X^T W y + B_{g_{p+1}} ( D_p X^T W y + B_p \beta^{(0)} ) = \ & = & ( A_{g_{p+1}} + B_{g_{p+1}} D_p ) X^T W y + B_{g_{p+1}} B_p \beta^{(0)} = \ & = & D_{p+1} X^T W y + B_{p+1} \beta^{(0)}
> & = & A_{g_{p+1}} X^T W y + B_{g_{p+1}} ( A_p X^T W y + B_p \beta^{(0)} ) = \ & = & ( A_{g_{p+1}} + B_{g_{p+1}} A_p ) X^T W y + B_{g_{p+1}} B_p \beta^{(0)} = \ & = & A_{p+1} X^T W y + B_{p+1} \beta^{(0)}

In particular, the eq.(4.3.3) is valid for :

Changed:
< \boxed{ \beta^{(N)} = D_N X^T W y + B_N \beta^{(0)} }
> \boxed{ \beta^{(N)} = A_N X^T W y + B_N \beta^{(0)} }
Line: 384 to 384
eq.(4.3.3) gives rise to
M_{ \beta^{(k)} }
Changed:
< & = & D_k X^T W M_y (D_k X^T W)^T = D_k X^T W M_y W^T X D_k^T = \ & = & D_k X^T W X D_k^T
> & = & A_k X^T W M_y (A_k X^T W)^T = A_k X^T W M_y W^T X A_k^T = \ & = & A_k X^T W X A_k^T
\qquad . \qquad \qquad (4.3.7) Comparing this with the covariance matrix of the exact solution, eq.(2.6), yields
Changed:
< \boxed{ \lim_{k \to \infty} D_k X^T W X D_k^T = M_{\hat{\beta}} = (X^T W X ) ^{-1} }
> \boxed{ \lim_{k \to \infty} A_k X^T W X A_k^T = M_{\hat{\beta}} = (X^T W X ) ^{-1} }
Line: 402 to 402
Changed:
< we can build the expression for the case in one step as
> one can build the expression for the case in one step as

Line: 410 to 410
Substituting (4.3.6) into (4.4.2) gives
\beta^{(2N)}
Changed:
< & = & D_N X^T W y + B_N ( D_N X^T W y + B_N \beta^{(0)} ) =\ & = & (I + B_N)D_N X^T W y + B_N^2 \beta^{(0)}
> & = & A_N X^T W y + B_N ( A_N X^T W y + B_N \beta^{(0)} ) =\ & = & (I + B_N)A_N X^T W y + B_N^2 \beta^{(0)}
Changed:
< \beta^{(4N)} = (I + B_N^2)(I + B_N)D_N X^T W y + B_N^4 \beta^{(0)}
> \beta^{(4N)} = (I + B_N^2)(I + B_N)A_N X^T W y + B_N^4 \beta^{(0)}
Changed:
< \beta^{(8N)} = (I + B_N^4)(I + B_N^2)(I + B_N)D_N X^T W y + B_N^8 \beta^{(0)}
> \beta^{(8N)} = (I + B_N^4)(I + B_N^2)(I + B_N)A_N X^T W y + B_N^8 \beta^{(0)}
Line: 429 to 429
This generalizes to
\boxed{
Changed:
< \beta^{(2^p N)} = \left\lgroup \prod_{i=p-1}^0 (I + B_N^{2^i}) \right\rgroup D_N X^T W y
> \beta^{(2^p N)} = \left\lgroup \prod_{i=p-1}^0 (I + B_N^{2^i}) \right\rgroup A_N X^T W y

#### Revision 11: 2015-03-25 - AlexanderFedotov

Line: 397 to 397

### 4.4 steps for

Changed:
< Having the expression (4.3.6)
> With the expression (4.3.6) of the form

we can build the expression for the case in one step as

Line: 435 to 435
Deleted:
< asd

#### Revision 10: 2015-03-25 - AlexanderFedotov

Line: 331 to 331

### 4.3 steps


#### 4.3.1 as a function of

Let us define matrices and as
Line: 343 to 346
and
Changed:
< \boxed{ B_k} & = & \prod_{i=k}^1 B_{g_i} = B_k \cdot ... \cdot B_1 \qquad . \qquad \qquad (4.3.2)
> \boxed{ B_k} = \prod_{i=k}^1 B_{g_i} = B_{g_k} \cdot ... \cdot B_{g_1} \qquad . \qquad \qquad (4.3.2) Then, from (4.2.12),
Indeed, by induction:
• eq.(4.3.3) holds for :
• and assuming it is true for , leads to

In particular, the eq.(4.3.3) is valid for :

#### 4.3.2 The covariance matrix of

According to the general rule of covariance matrix transformation ( ), eq.(4.3.3) gives rise to

M_{ \beta^{(k)} } & = & D_k X^T W M_y (D_k X^T W)^T = D_k X^T W M_y W^T X D_k^T = \ & = & D_k X^T W X D_k^T \qquad . \qquad \qquad (4.3.7)

Comparing this with the covariance matrix of the exact solution, eq.(2.6), yields

### 4.4 steps for

Having the expression (4.3.6)
we can build the expression for the case in one step as
Substituting (4.3.6) into (4.4.2) gives
Transforming similarly the formula to the one, gives
Then for the case one has
and so on...

This generalizes to
asd

#### Revision 9: 2015-03-24 - AlexanderFedotov

Line: 299 to 299

Then (4.2.5) can be written as
This is a representation of the first line of (4.3.1).
The second line of (4.3.1) can be represented with
Summing up (4.2.10) and (4.2.11) gives
where we denoted

### 4.3 steps

Let us define matrices and as

and
asd

#### Revision 8: 2015-03-20 - AlexanderFedotov

Line: 246 to 246
& = & (X_{g_k}^T W X_{g_k})^{-1} X_{g_k}^T W ( y - X_{\bar{g_k}} \beta^{(k-1)} ) \qquad . \qquad \qquad (4.2.5)
Let us write the last expression via matrices of dimensions
and
such that all the arithmetic is done in rows / columns (or ) while complementary rows / columns contain zeros.

We define matrices and as follows

It is noteworthy that

We also define the matrix (subscript stands for extended) via the matrix :

asd

#### Revision 7: 2015-03-19 - AlexanderFedotov

Line: 155 to 155

## 4. An iterative solution


### 4.1 The procedure

Let indices of parameters be distributed among groups with sizes respectively.
Changed:
< g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad . \qquad\qquad (4.1)
> g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad . \qquad\qquad (4.1.1)
The subset of the parameters corresponding to the group of indices, can be considered as an column
\beta_{(g_k)} = \{ \beta_{i_1(g_k)}, \: ... \:, \beta_{i_{n(g_k)}(g_k)} \}^T , \quad k=1, \: ... \: , N \qquad .
Let denote the set of indices that are complementary to indices in .
Line: 174 to 176

2. Make N steps or, equivalently, iterations for , thus finding vectors
Changed:
< by minimizing over at step :
> by minimizing over at the step :

\beta^{(k)}_i = \left\{ \begin{array}{rl} (\hat{ \beta }_{(g_k)})_i & \text{if } i \in g_k ,\ \beta^{(k-1)}_i & \text{if } i \notin g_k .

where is the "point"
Line: 195 to 197
(i.e. ) . One can expect that

### 4.2 The step

The iteration consists in finding , a set of parameters, that minimizes

Let be an submatrix of built from the columns of with indices belonging to (in other words, it is what remains after removing columns which have indices not in ).
Similarly, let be an submatrix consisting of the columns of with indices belonging to .
Introducing

the (4.2.1) can be written as
which is analogous to eq.(2.2).
Therefore, the solution is given by formulae (2.4):
or
asd

#### Revision 6: 2015-03-18 - AlexanderFedotov

Line: 24 to 24

Line: 35 to 35
http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Weighted_linear_least_squares
## 2. Uncorrelated measurements

Let with variances be measurements of functions
Line: 46 to 46
by minimizing over the expression
Changed:
< S = \sum_{i=1}^{m} W_{ii}(y_i - \sum_{j=1}^{n} X_{ij}\beta_j)^2 \qquad ,
> where the weight matrix of dimension
Line: 58 to 58

S = (y-X\beta)^T W (y-X\beta) = y^T W y -2 \beta^T X^T W y +\beta^T X^T W X \beta \qquad .
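The matrix form of S above leads directly to the normal equations X^T W X beta = X^T W y. A minimal numpy sketch of the uncorrelated (diagonal-W) fit, with illustrative names:

```python
import numpy as np

def wls_fit(X, y, variances):
    """Minimize S = (y - X b)^T W (y - X b) with W = diag(1 / variances)
    by solving the normal equations X^T W X b = X^T W y."""
    W = np.diag(1.0 / np.asarray(variances, dtype=float))
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Using `solve` on the normal equations keeps the sketch close to the boxed formula; in production one would typically prefer a QR-based solver for numerical stability.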

Line: 66 to 67
\frac{\partial S}{\partial \beta_p} = \sum_{i=1}^{m} W_{ii}(-2y_iX_{ip} + 2 \sum_{j=1}^{n}X_{ij}X_{ip}\hat{\beta_j})
or
Line: 74 to 75
Changed:
< \boxed{ \hat{\beta} = (X^T W X)^{-1} X^T W y} \qquad .
> In a general case of linear transformation ,
Line: 84 to 85
M_{\hat{\beta}} & = & (X^T W X)^{-1} X^T W M_y [(X^T W X)^{-1} X^T W]^T = \ & = &(X^T W X)^{-1} X^T W M_y W^T X (X^T W^T X)^{-1} \qquad .
With by the definition of , that simplifies to
Changed:
< \boxed{ M_{\hat{\beta}} = (X^T W X)^{-1} } \qquad .
Note, that

\partial{^2S} / \partial{\beta_p} \partial{\beta_q} =
Changed:
< \sum_{i=1}^m W_{ii} ( 2 X_{iq} X_{ip} )= 2(X^T W X)_{qp} \qquad ,
> and
S(\beta_p ; \beta_{q \ne p}=\hat{\beta_q} ) = S_{min} + \frac{1}{2} \frac {\partial{^2S}} {\partial{\beta_p}^2} \cdot ( \beta_p - \hat{\beta_p} )^2 \qquad .

## 3. Correlated measurements

Let be uncorrelated measurements as those in the previous section, and

Line: 122 to 125
& = & (Ay'-X\beta)^T (A^{-1})^T M_{y'}^{-1} A^{-1} (Ay'-X\beta) = \ & = & (y'-A^{-1}X\beta )^T M_{y'}^{-1}(y'-A^{-1}X\beta ) \ & = & \boxed{ (y'-X'\beta)^T M_{y'}^{-1}(y'-X'\beta) } \qquad ,
where .
Line: 132 to 136
& = & (X^T [M_y^{-1}] X)^{-1} X^T [M_y^{-1}][y] = \ & = & (X'^T A^T [(A^{-1})^T M_{y'}^{-1} A^{-1}] AX')^{-1} X'^T A^T[(A^{-1})^T M_{y'}^{-1} A^{-1}][Ay'] = \ & = & \boxed{ (X'^T M_{y'}^{-1} X')^{-1} X'^T M_{y'}^{-1}y' } \qquad ,
and
\boxed{ M_{\hat{\beta}} } = (X^T M_y^{-1} X)^{-1} = \boxed{ (X'^T M_{y'}^{-1} X')^{-1} } \qquad ,

Thus, all the formulae for the correlated measurements are similar

Line: 143 to 149
with the only complication being the replacement of a diagonal weight matrix with a non-diagonal one:
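In code, that replacement is the only change: the diagonal W becomes the full inverse covariance M_y^{-1}. A sketch (illustrative names), which by the invariance derived above gives the same estimate whether fed the correlated problem (X, y, M_y) or its whitened counterpart (X', y', M_{y'}):

```python
import numpy as np

def gls_fit(X, y, M_y):
    """Least squares with correlated measurements:
    beta = (X^T M_y^{-1} X)^{-1} X^T M_y^{-1} y."""
    MinvX = np.linalg.solve(M_y, X)   # M_y^{-1} X, without forming the inverse
    Minvy = np.linalg.solve(M_y, y)   # M_y^{-1} y
    return np.linalg.solve(X.T @ MinvX, X.T @ Minvy)
```

Solving with `np.linalg.solve` instead of explicitly inverting M_y is the usual numerically safer choice.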

## 4. An iterative solution

Let indices of parameters be distributed among groups with sizes respectively.
Changed:
< g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad . \qquad\qquad (3.1)
> g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad . \qquad\qquad (4.1)
The subset of the parameters corresponding to the group of indices, can be considered as an column
\beta_{(g_k)} = \{ \beta_{i_1(g_k)}, \: ... \:, \beta_{i_{n(g_k)}(g_k)} \}^T , \quad k=1, \: ... \: , N \qquad .
Let denote the set of indices that are complementary to indices in .
Line: 175 to 181
\begin{array}{rl} (\hat{ \beta }_{(g_k)})_i & \text{if } i \in g_k ,\ \beta^{(k-1)}_i & \text{if } i \notin g_k .
where is the "point"
Line: 185 to 191
in the previous iteration: )
Changed:
< 1. Repeat (3.3) infinitely, defining for
> 1. Repeat (4.3) infinitely, defining for
(i.e. ) . One can expect that
asd

#### Revision 5: 2015-03-11 - AlexanderFedotov

Line: 153 to 153
with sizes respectively.
Changed:
< g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad .
> g_k = \{ i_1 (g_k), \: ... \:, i_{n(g_k)} (g_k) \} , \quad k=1, \: ... \: , N \qquad . \qquad\qquad (3.1)
The subset of the parameters corresponding to the group of indices, can be considered as an column
\beta_{(g_k)} = \{ \beta_{i_1(g_k)}, \: ... \:, \beta_{i_{n(g_k)}(g_k)} \}^T , \quad k=1, \: ... \: , N \qquad .
Let denote the set of indices that are complementary to indices in .

Consider the following iterative procedure.

2. Make N steps or, equivalently, iterations for , thus finding vectors by minimizing over at step :
where is the "point" where as a function of parameters , takes minimum (while the rest parameters are fixed at the values obtained in the previous iteration: )
3. Repeat (3.3) infinitely, defining for    (i.e. ) .
One can expect that
asd

#### Revision 4: 2015-03-10 - AlexanderFedotov

Line: 146 to 146

## An iterative solution

Let indices of parameters be distributed among groups with sizes respectively.

The subset of the parameters corresponding to the group of indices, can be considered as an column

#### Revision 3: 2015-03-05 - AlexanderFedotov

Line: 52 to 52
where the weight matrix of dimension is diagonal and defined as the inverse of the diagonal covariance matrix for : i.e. .
In matrix notation (considering and
as columns and respectively), one has
Changed:
< S = (y-X\beta)^T W (y-X\beta) \qquad .
> S = (y-X\beta)^T W (y-X\beta) = y^T W y -2 \beta^T X^T W y +\beta^T X^T W X \beta \qquad .

Line: 92 to 93

Note, that
and

## Correlated measurements

Let be uncorrelated measurements as those in the previous section, and ( is an invertible matrix). Then are generally correlated and have the covariance matrix .

With and , one gets

where .

Similarly,

and

Thus, all the formulae for the correlated measurements are similar to those for the uncorrelated , with the only complication being the replacement of a diagonal weight matrix with a non-diagonal one:

#### Revision 2: 2015-03-04 - AlexanderFedotov

Line: 1 to 1

 META TOPICPARENT name="AVFedotovLogA"
Deleted:
< -- AlexanderFedotov - 2015-03-03

<!--
Line: 38 to 37

## Uncorrelated measurements

Let with variances be measurements
of functions
with the known matrix and unknown parameters .
In the linear least-squares method, one estimates the parameter vector by minimizing over the expression
Line: 51 to 51
where the weight matrix of dimension is diagonal and defined as the inverse of the diagonal covariance matrix for :
i.e. .

In matrix notation (considering and as columns and respectively), one has
Line: 64 to 64

\frac{\partial S}{\partial \beta_p}
Changed:
< = \sum_{i=1}^{m} W_{ii}(-2y_iX_{ip} + 2 \sum_{j=1}^{n}X_{ij}X_{ip}\beta_j) = 0 \quad , \quad p=1,...,n
> = \sum_{i=1}^{m} W_{ii}(-2y_iX_{ip} + 2 \sum_{j=1}^{n}X_{ij}X_{ip}\hat{\beta_j}) = 0 \quad , \quad p=1,... \; ,n
or
Line: 72 to 73
Changed:
< \hat{\beta} = (X^T W X)^{-1} X^T W y
> \boxed{ \hat{\beta} = (X^T W X)^{-1} X^T W y} \qquad .
In a general case of linear transformation ,
Line: 80 to 81
via . Hence,
Changed:
< M_{\hat{\beta}} = (X^T W X)^{-1} X^T W M_y [(X^T W X)^{-1} X^T W]^T = (X^T W X)^{-1} X^T W M_y W^T X (X^T W^T X)^{-1} \qquad .
> M_{\hat{\beta}} & = & (X^T W X)^{-1} X^T W M_y [(X^T W X)^{-1} X^T W]^T = \ & = &(X^T W X)^{-1} X^T W M_y W^T X (X^T W^T X)^{-1} \qquad .

Changed:
< As by the definition of , this simplifies to
> With by the definition of , that simplifies to

Changed:
< M_{\hat{\beta}} = (X^T W X)^{-1} \qquad .
> \boxed{ M_{\hat{\beta}} = (X^T W X)^{-1} } \qquad .

#### Revision 1: 2015-03-04 - AlexanderFedotov

Line: 1 to 1
 META TOPICPARENT name="AVFedotovLogA"

-- AlexanderFedotov - 2015-03-03

<!--
==================
SETTINGS FOR THIS PAGE: (active are only those with 3 spaces before *')
==================
HIGHLIGT + HIDE LEFT BAR
Set USERSTYLEURL = https://twiki.cern.ch/twiki/pub/Main/AlexanderFedotov/My_Highlight_Hideleftbar.css

LATEX VIA MATHMODEPLUGIN
Set DISABLEDPLUGINS = LatexModePlugin
Set LATEXFONTSIZE = footnotesize

DEFINE A VARIABLE (reference with                                         )
Set My40Blanks =

-->

# Linear Least Squares

## Uncorrelated measurements

Let with variances be measurements of functions with the known matrix and unknown parameters .

In the linear least-squares method, one estimates the parameter vector by minimizing over the expression

where the weight matrix of dimension is diagonal and defined as the inverse of the diagonal covariance matrix for : i.e. .
In matrix notation (considering and as columns and respectively), one has

The estimate is the solution of the system of equations

or
In a general case of linear transformation , the covariance matrix for is transformed into that for via . Hence,
As by the definition of , this simplifies to

Copyright © 2008-2019 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.
Ideas, requests, problems regarding TWiki? Send feedback