Thursday, March 27, 2008

Straight Talk Bluetooth

4: A general matrix method



It is obvious that, as the complexity of the model we are using increases, carrying out the summations of the various terms, together with the growing number of simultaneous equations that must be solved, can become a cumbersome clerical chore. There is another, more elegant procedure that hides many of the tedious details which distract us from the analytical work by turning it into intensive computation. This procedure consists of representing the numerical data as vectors and matrices and then carrying out a series of methodical steps which, with the help of one of the many software packages available today, can take us from a data set to a set of formulas relatively quickly, in a time that until recently would not have been possible.

The method set out below is general in nature and assumes that the reader already has some familiarity with the handling of vectors and matrices and already knows what the inverse and the transpose of a matrix are. (If a student ever tired of asking what matrices could possibly be good for, never seeing an immediate practical application in what he was studying, here is a good example.) The internal details of the technique will be immediately obvious to anyone who has already taken a good course in linear algebra, and they will not be repeated here, because what interests us is the practical application of the concepts rather than the theoretical considerations behind them.
Suppose we have a set of five pairs of data {(Xi, Yi)}. We can define a data vector consisting of the five values of Y arranged as a row vector:

Y = [ Y1  Y2  Y3  Y4  Y5 ]

Having done this, we can also arrange the five values of X as a matrix of values that has this form:

Note carefully that each row of the data matrix corresponds directly to one of the data points, while each column of the matrix corresponds directly to one term of the equation being modeled. The data matrix therefore has five rows and two columns. The first column of the data matrix contains only the value 1, since that is the value obtained when each of the X data values is raised to the zero power, which corresponds to the fact that the parameter A in the regression equation is multiplied by unity. The second column of the data matrix contains the X values themselves, since this is how the values of X appear in the second term of the linear regression equation, multiplying the parameter B.

Anticipating a bit, the data matrix for a least-squares parabola will have three columns. The first two columns would be the same as for the fit to a least-squares line. The third column, however, would contain the square of each respective X value, since this is how the values of X appear in the third term of a least-squares regression equation for a parabola, multiplying a third parameter C.
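The construction of the data matrix described above can be sketched in a few lines of NumPy. This is only an illustration; the X values below are hypothetical, invented to show the shapes involved:

```python
import numpy as np

# Hypothetical X values for a five-point data set, used only to
# illustrate the shape of the data matrix.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Data matrix for the straight line Y = A + BX: one row per data
# point, one column per term of the model (X^0 and X^1).
X_line = np.column_stack([np.ones_like(x), x])

# Data matrix for the parabola Y = A + BX + CX^2: a third column
# holds the square of each X value.
X_parab = np.column_stack([np.ones_like(x), x, x**2])

print(X_line.shape, X_parab.shape)  # (5, 2) (5, 3)
```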
The next step is the formation of a square matrix from the data matrix. The rectangular data matrix can be converted into a square matrix, with the same number of rows as columns, in one simple operation. To do this, we take the transpose of the data matrix X, which we denote Xᵀ, and post-multiply it by the same matrix X to produce what is known as the coefficient matrix:

K = Xᵀ ∙ X

For the linear regression equation with five data points that we have been using as an example, multiplying the transpose Xᵀ by the data matrix X gives us a coefficient matrix K that turns out to be a 2 × 2 matrix.

After that, we form what textbooks call the constant vector V by multiplying the row vector Y by the data matrix X, in that order (remember that in the multiplication of vectors and matrices the order of the factors does alter the product):

V = Y ∙ X

The solution vector S, which contains the parameters we are looking for, is then obtained as:

S = K⁻¹ ∙ Vᵀ

This solution vector consists of a single column with several rows, and if a methodical order has been followed, the first parameter A can be read directly from the first row, the second parameter B from the second row, and so on.

It is important to note that there is no magic in what has just been described. The arithmetic that must be carried out remains the same as before. All we have done is hide the details, leaving to the software package we will be using the burden of carrying out the summations. The need to solve simultaneous equations is still there; it is disguised in the step at which we obtain the inverse of the coefficient matrix K.
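The entire sequence of steps just described can be sketched as follows. The data set here is hypothetical, chosen only to make the example self-contained:

```python
import numpy as np

# A small hypothetical data set, used only to illustrate the steps.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])  # row vector of Y values

# Data matrix X: a column of ones and a column of the X values.
X = np.column_stack([np.ones_like(x), x])

# Coefficient matrix K = X^T . X (2 x 2 for a straight line).
K = X.T @ X

# Constant vector V = Y . X (a row vector with two entries).
V = Y @ X

# Solution vector S = K^-1 . V^T: the first entry is A, the second B.
S = np.linalg.inv(K) @ V
A, B = S
print(A, B)  # A ≈ 1.04, B ≈ 0.99
```

In practice one would let the package solve the system directly rather than form the inverse explicitly, but the steps above follow the method exactly as stated.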
We will now solve again the first problem presented in a previous section, which called for fitting a set of data to a linear regression formula of Y on X using the normal equations, so that the mechanics of the steps to be followed with the matrix method become clear.

PROBLEM: Given the following data set, obtain the regression line of Y on X using the matrix method.


We first form a row vector with the values of the dependent variable Y, which can be regarded as a matrix of one row and eight columns:


Then we form the data matrix X. Since the fit we want to carry out is to the formula

Y = A + BX

the data matrix X, based on what appears on the right-hand side of the equation, will be a matrix of two columns (and eight rows), with ones in the first column representing the values of X raised to the zero power in every case, and the eight X values placed in the second column:



The coefficient matrix
K is given by:


while the constant vector V is:



Finally, the solution vector S is:



Thus, the regression equation of Y on X obtained by the matrix method is:

Y = 0.545 + 0.636X

This is the same result as that obtained previously when we used the normal equations.
The matrix method can be generalized into something much more powerful, and therein lies the true value of the technique. Take the case of a non-linear equation such as the following:

where A, B and C are the parameters to be determined in carrying out the "fit". We can apply the matrix method to this case as follows: the first column of the data matrix X simply contains the numeric value 1, for the same reasons given in the problem we have just seen; the second column contains the reciprocals of the corresponding data values t; and the third column contains the natural logarithms of those data values. For its part, the row vector contains the logarithms of the data values ρ. After this, the same steps as above are applied. Note that there is no magic here either: what has been done in setting things up this way is simply a linearization of the formula.
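The assembly of the linearized data matrix just described can be sketched as follows. The (t, ρ) values here are hypothetical, invented only to show how the columns are built:

```python
import numpy as np

# Hypothetical (t, rho) measurements, invented only to show how the
# linearized data matrix is assembled.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
rho = np.array([2.5, 1.8, 1.5, 1.3, 1.2])

# Linearized left-hand side: the natural logarithm of the rho data.
Y = np.log(rho)

# Columns of ones, reciprocals of t, and natural logs of t, matching
# the three parameters A, B and C of the linearized model.
X = np.column_stack([np.ones_like(t), 1.0 / t, np.log(t)])

K = X.T @ X               # coefficient matrix
V = Y @ X                 # constant vector
S = np.linalg.inv(K) @ V  # solution vector [A, B, C]
```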
PROBLEM: Obtain the equation of the line of "best fit" for the following data using the technique of least squares.

We first form the row vector with the nine values of Y, which is actually a 1 × 9 matrix:

Y = [ 11.35  16.98  24.54  22.23  8.36  34.22  32.19  38.72  42.21 ]

and then assemble the data matrix:


After this, with the help of a software package for handling matrices, we form the coefficient matrix:

K = Xᵀ ∙ X

and the constant vector:

V = Y ∙ X

This gives us directly the solution vector through the following operation:

S = K⁻¹ ∙ Vᵀ

which turns out to be:



The upper value is the parameter A of the least-squares line, which is the intercept of the line with the Y axis, while the lower value is B, the slope of the line. The desired equation is therefore:

Y = 4.008 + 4.327X


The graph of this regression line, superimposed on the discrete data from which it was derived, is shown below:

PROBLEM: Obtain the equations of:

a) the parabola of "best fit"

b) the cubic polynomial of best fit



a) In the first case, we seek the coefficients A, B and C of the least-squares equation:

Y = A + BX + CX²

We first form the row vector with the nine values of Y:

Y = [ 2.07  8.60  14.42  15.80  18.92  17.96  12.98  6.45  0.27 ]

and then assemble the data matrix for a quadratic polynomial fit:



As we did in the previous problem, with the help of a software package for handling matrices, we form the coefficient matrix:

K = Xᵀ ∙ X

and the constant vector:

V = Y ∙ X

which gives us directly the solution vector through the operation:

S = K⁻¹ ∙ Vᵀ

giving:



In this solution vector, the top value is the coefficient A, the middle value is the coefficient B, and the bottom value is the coefficient C. On this basis, the equation of the parabola of "best fit" for the given data is:

Y = −7.827 + 10.59X − 1.083X²


Following is the graph of this parabola plotted together with the discrete data from which it was generated:
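The quadratic fit above can be reproduced with the matrix method in a few lines. Note one assumption: the X values of this problem did not survive the conversion of the page, so X = 1, 2, ..., 9 is used here purely for illustration.

```python
import numpy as np

# The nine Y values from the problem; the X values did not survive
# the page conversion, so X = 1, 2, ..., 9 is assumed here purely
# for illustration.
x = np.arange(1.0, 10.0)
Y = np.array([2.07, 8.60, 14.42, 15.80, 18.92,
              17.96, 12.98, 6.45, 0.27])

# Data matrix for Y = A + BX + CX^2.
X = np.column_stack([np.ones_like(x), x, x**2])

K = X.T @ X
V = Y @ X
S = np.linalg.inv(K) @ V  # [A, B, C]
```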

To make the "best fit" to a cubic polynomial, all we have to do is modify the matrix X by adding an extra column at the far right, filling that column with the cubes of the X values that appear in the second column:




Repeating exactly the same steps as above, we obtain the following solution vector:

The values in the solution vector are, in order, the coefficients A, B, C and D of the least-squares cubic polynomial:

Y = A + BX + CX² + DX³

so the cubic polynomial of best fit is:

Y = −7.223 + 10.012X − 0.946X² − 0.009X³
Following is the graph of this cubic polynomial together with the discrete data from which the formula was generated:


We can see that, within the range of interest (from X = 0 to X = 10), the cubic term adds a very small, almost insignificant correction to the parabola traced by the quadratic term, and it is doubtful that using a polynomial of higher degree would produce a better fit in this case.
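The cubic fit requires only the one change noted above: an extra column of cubes in the data matrix. As before, X = 1..9 is an assumption made for illustration, since the original X values were lost:

```python
import numpy as np

# Same Y data as the quadratic fit; X = 1..9 is again an assumption.
x = np.arange(1.0, 10.0)
Y = np.array([2.07, 8.60, 14.42, 15.80, 18.92,
              17.96, 12.98, 6.45, 0.27])

# The only change from the quadratic fit: one extra column of cubes.
X = np.column_stack([np.ones_like(x), x, x**2, x**3])

S = np.linalg.inv(X.T @ X) @ (Y @ X)
A, B, C, D = S  # the cubic coefficient D should be comparatively small
```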

PROBLEM: Solve again, this time with the help of the matrix method, an earlier problem in which a set of experimental data was fitted to a least-squares parabola to obtain an approximate value for the acceleration of gravity of the Earth. The data, repeated here, are:

The fit we want to accomplish in this case is a fit of the data to a formula of the form:


Y = AX²


As a first step, we form the data vector Y:



As the next step, we form the data matrix X. In this case, since only one term appears on the right-hand side of the formula we are using as a model, the data matrix consists of a single column, made up of the squares of the X values:



K = Xᵀ ∙ X

K = [ 1032 ]

The constant vector likewise turns out in this case to be a single numerical value:

V = Y ∙ X

V = [ 5254 ]

Thus the solution vector S, which gives us the parameter value A as the only solution, is:

S = [ 5.089 ]

With this, the parabola of best fit according to the matrix method is:

Y = 5.089X²

This is the same result obtained previously.
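A one-parameter fit of this kind reduces the matrix method to its simplest form, since K and V are single numbers. The free-fall data below are hypothetical, invented for illustration (the model is Y = AX² with A ≈ g/2):

```python
import numpy as np

# Hypothetical free-fall measurements (time in seconds, distance in
# meters), invented for illustration; the model is Y = A X^2.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
Y = np.array([1.2, 4.8, 11.1, 19.5, 30.8])

# One-term model, so the data matrix has a single column: X^2.
X = (x**2).reshape(-1, 1)

K = X.T @ X               # a 1 x 1 matrix
V = Y @ X                 # a single value
S = np.linalg.inv(K) @ V  # A is the only entry
A = S[0]
g = 2 * A                 # estimate of the acceleration of gravity
```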


