
Matrix multiplication properties and elementary matrix transformations. Elementary transformations of matrix rows

Elementary matrix transformations find wide application in various mathematical problems. For example, they form the basis of the well-known Gauss method (method of eliminating unknowns) for solving a system of linear equations.

Elementary transformations include:

1) permutation of two rows (columns);

2) multiplication of all elements of a row (column) of a matrix by some number that is not equal to zero;

3) adding to one row (column) of the matrix another row (column) multiplied by some number.

Two matrices are called equivalent if one of them can be obtained from the other by a finite number of elementary transformations. In general, equivalent matrices are not equal, but they have the same rank.
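As an illustration, here is a minimal Python sketch of these three transformations (NumPy and the helper names swap_rows, scale_row and add_row are assumptions of this example, not part of the text); the final check shows that the rank of an illustrative matrix does not change under them.

import numpy as np

def swap_rows(A, i, k):
    """Elementary transformation 1: swap rows i and k."""
    B = A.astype(float).copy()
    B[[i, k]] = B[[k, i]]
    return B

def scale_row(A, i, c):
    """Elementary transformation 2: multiply row i by a nonzero number c."""
    assert c != 0
    B = A.astype(float).copy()
    B[i] *= c
    return B

def add_row(A, i, k, c):
    """Elementary transformation 3: add row k multiplied by c to row i."""
    B = A.astype(float).copy()
    B[i] += c * B[k]
    return B

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
B = add_row(scale_row(swap_rows(A, 0, 1), 2, 5.0), 2, 0, -1.0)
# Equivalent matrices have the same rank:
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 3 3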

Computing determinants using elementary transformations

Using elementary transformations, it is easy to calculate the determinant of a matrix. Suppose, for example, that we need to calculate the determinant Δ of an n-th order matrix with elements a_ij and with a11 ≠ 0.

Then the factor a11 can be taken out of the first row, so that the first row becomes (1, a12/a11, …, a1n/a11).

Now, subtracting from the elements of the j-th column the corresponding elements of the first column multiplied by a1j/a11, we get a determinant whose first row is (1, 0, …, 0),

which equals Δ = a11 · Δ1, where Δ1 is the determinant of order n − 1 with elements b_ij = a_ij − a_i1 · a_1j / a11 (i, j = 2, …, n).

Then we repeat the same steps for Δ1, Δ2 and so on; if all the leading elements a11, b22, … are nonzero, then we finally get Δ as the product of the leading elements obtained at each step.

If for some intermediate determinant its top-left element turns out to be zero, then the rows or columns must be rearranged so that the new top-left element is not equal to zero; if Δ ≠ 0, this can always be done. It should be kept in mind that every such swap of two rows or two columns changes the sign of the determinant, so the sign of the final product must be multiplied by (−1) for each swap performed.
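A minimal Python sketch of this procedure follows (the 3 × 3 matrix and the helper name det_by_elimination are illustrative assumptions, not the example from the text): the determinant is accumulated as the product of the leading elements, and the sign is flipped for every row swap.

import numpy as np

def det_by_elimination(A):
    """Compute det(A) by Gaussian elimination, tracking row swaps."""
    M = np.array(A, dtype=float)
    n = M.shape[0]
    sign = 1.0
    for j in range(n):
        if abs(M[j, j]) < 1e-12:
            # the pivot is zero: look for a nonzero element below it and swap rows
            below = np.nonzero(np.abs(M[j + 1:, j]) > 1e-12)[0]
            if below.size == 0:
                return 0.0               # nothing to pivot on: the determinant is 0
            k = j + 1 + int(below[0])
            M[[j, k]] = M[[k, j]]
            sign = -sign                 # each row swap changes the sign
        for i in range(j + 1, n):        # make zeros below the pivot
            M[i, j:] -= (M[i, j] / M[j, j]) * M[j, j:]
    return sign * float(np.prod(np.diag(M)))

A = [[2.0, 1.0, 3.0],
     [4.0, 2.0, 1.0],
     [1.0, 5.0, 2.0]]
print(det_by_elimination(A))           # 45.0
print(np.linalg.det(np.array(A)))      # same value, up to rounding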

EXAMPLE. Using elementary transformations, reduce the matrix

to triangular form.

Solution. First, we multiply the first row of the matrix by 4, and the second by (–1) and add the first row to the second:

Now let's multiply the first row by 6, and the third by (–1) and add the first row to the third:

Finally, multiply the 2nd row by 2, and the 3rd by (–9) and add the second row to the third:

The result is an upper triangular matrix

Example. Solve a system of linear equations using the matrix method:

Solution. Let us write this system of linear equations in matrix form:

The solution of this system of linear equations in matrix form is X = A⁻¹ × B,

where A⁻¹ is the matrix inverse to the coefficient matrix A and B is the column of free terms.

The determinant of the coefficient matrix A is:

Since it is nonzero, the matrix A has an inverse matrix.
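Since the numbers of this example are not reproduced above, the sketch below solves an arbitrary illustrative 3 × 3 system in Python with the same steps: check that det(A) ≠ 0, then compute X = A⁻¹ × B (NumPy is an assumption of this sketch).

import numpy as np

# An arbitrary illustrative system (not the one from the example above):
# 2x +  y -  z = 1
#  x + 3y + 2z = 13
#  x -  y + 4z = 11
A = np.array([[2.0,  1.0, -1.0],
              [1.0,  3.0,  2.0],
              [1.0, -1.0,  4.0]])
B = np.array([1.0, 13.0, 11.0])

d = np.linalg.det(A)
if abs(d) > 1e-12:                 # det(A) != 0, so the inverse exists
    X = np.linalg.inv(A) @ B       # X = A^(-1) * B
    print(X)                       # [1. 2. 3.]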


Elementary matrix transformations are transformations of a matrix under which matrix equivalence is preserved. In particular, elementary transformations do not change the solution set of the system of linear algebraic equations that the matrix represents.

Elementary transformations are used in the Gaussian method to bring a matrix to a triangular or stepped form.

Definition

The following are called elementary row transformations:

  • permutation of any two rows of the matrix;
  • multiplication of any row of the matrix by a nonzero constant;
  • addition to any row of the matrix of another row multiplied by a constant.

In some courses of linear algebra, the permutation of matrix rows is not singled out as a separate elementary transformation, because the permutation of any two rows can be obtained by combining the other two operations: multiplying a row of the matrix by a constant k, k ≠ 0, and adding to a row of the matrix another row multiplied by a constant.

Elementary column transformations are defined similarly.

Elementary transformations are invertible.

The notation A ∼ B indicates that the matrix A can be obtained from B by elementary transformations (or vice versa).

Properties

Rank invariance under elementary transformations

Theorem (on rank invariance under elementary transformations). If A ∼ B, then rank A = rank B.

Equivalence of SLAEs under Elementary Transformations

Let us call the following operations elementary transformations of a system of linear algebraic equations:
  • rearrangement of equations;
  • multiplying an equation by a nonzero constant;
  • addition of one equation to another multiplied by some constant.
That is, they are elementary transformations of its augmented matrix. Then the following statement is true: if one system is obtained from another by a finite number of elementary transformations, the two systems are equivalent. (Recall that two systems are said to be equivalent if their solution sets coincide.)

Finding Inverse Matrices

Theorem (on finding the inverse matrix).
Let the determinant of the matrix A (of size n × n) be nonzero, and let the matrix B be defined by the expression B = [A | E] (of size n × 2n). Then, when the rows of the matrix A are reduced to the identity matrix E by elementary row transformations within B, the block E is simultaneously transformed into A⁻¹.
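A minimal Python sketch of this theorem, assuming det(A) ≠ 0 (the function name inverse_by_row_ops and the 2 × 2 test matrix are illustrative): the block [A | E] is reduced by elementary row transformations until the left half becomes E, at which point the right half is A⁻¹.

import numpy as np

def inverse_by_row_ops(A):
    """Reduce [A | E] to [E | A^(-1)] by elementary row transformations."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    B = np.hstack([A, np.eye(n)])          # B = [A | E], size n x 2n
    for j in range(n):
        # choose a nonzero pivot in column j (swap rows if necessary)
        p = j + int(np.argmax(np.abs(B[j:, j])))
        if abs(B[p, j]) < 1e-12:
            raise ValueError("matrix is singular")
        B[[j, p]] = B[[p, j]]
        B[j] /= B[j, j]                    # scale the pivot row so the pivot is 1
        for i in range(n):
            if i != j:
                B[i] -= B[i, j] * B[j]     # zero out the rest of column j
    return B[:, n:]                        # the right block is now A^(-1)

A = [[2.0, 1.0], [7.0, 4.0]]
print(inverse_by_row_ops(A))               # [[ 4. -1.]  [-7.  2.]]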

Reducing matrices to a stepped form

Let us introduce the concept of a stepped matrix: a matrix has stepped form if every zero row lies below all nonzero rows, and the leading (first nonzero) element of each nonzero row lies strictly to the right of the leading element of the previous row. Then the following statement is true: any matrix can be reduced to stepped form by elementary row transformations.
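These conditions can be checked mechanically; the small Python sketch below (the function name is_stepped is an illustrative assumption) tests whether a given matrix is in stepped form.

import numpy as np

def is_stepped(A, eps=1e-12):
    """Return True if A is in stepped (row echelon) form."""
    A = np.asarray(A, dtype=float)
    last_lead = -1                        # column of the previous row's leading entry
    seen_zero_row = False
    for row in A:
        nonzero = np.nonzero(np.abs(row) > eps)[0]
        if nonzero.size == 0:
            seen_zero_row = True          # zero rows must all be at the bottom
            continue
        if seen_zero_row:
            return False                  # a nonzero row below a zero row
        lead = int(nonzero[0])
        if lead <= last_lead:
            return False                  # leading entry must move strictly right
        last_lead = lead
    return True

print(is_stepped([[1, 2, 3], [0, 4, 5], [0, 0, 0]]))   # True
print(is_stepped([[1, 2, 3], [2, 0, 1]]))              # False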

Related definitions

Elementary matrix. A matrix A is called elementary if multiplying an arbitrary matrix B by it performs an elementary row transformation of B.




The following three operations are called elementary transformations of matrix rows:

1) Multiplication of the i-th row of the matrix by a number λ ≠ 0:

which we will write in the form (i) → λ (i).

2) Permutation of two rows in the matrix, for example the i-th and k-th rows:


which we will write in the form (i) ↔ (k).

3) Adding to the i-th row of the matrix its k-th row with the coefficient λ:


which we will write in the form (i) → (i) + λ (k).

Similar operations on columns of a matrix are called elementary column transformations.

Each elementary transformation of the rows or columns of a matrix has an inverse elementary transformation, which turns the transformed matrix back into the original one. For example, the inverse transformation for a permutation of two rows is the permutation of the same two rows.

Each elementary transformation of the rows (columns) of a matrix A can be interpreted as multiplication of A on the left (right) by a matrix of a special type. This matrix is obtained by performing the same transformation on the identity matrix. Let us take a closer look at elementary row transformations.

Let the matrix B be obtained by multiplying the i-th row of the m × n matrix A by a number λ ≠ 0. Then B = E_i(λ) A, where the matrix E_i(λ) is obtained from the identity matrix E of order m by multiplying its i-th row by the number λ.

Let the matrix B be obtained as a result of permuting the i-th and k-th rows of the m × n matrix A. Then B = F_ik A, where the matrix F_ik is obtained from the identity matrix E of order m by permuting its i-th and k-th rows.

Let the matrix B be obtained by adding to the i-th row of the m × n matrix A its k-th row with coefficient λ. Then B = G_ik(λ) A, where the matrix G_ik(λ) is obtained from the identity matrix E of order m by adding the k-th row with coefficient λ to the i-th row, i.e. at the intersection of the i-th row and the k-th column of the matrix E, the zero element is replaced by the number λ.

Elementary transformations of the columns of the matrix A are implemented in the same way, except that A is multiplied by the matrices of the special type on the right rather than on the left.
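A short Python sketch of this interpretation (the constructor names mirror the notation E_i(λ), F_ik, G_ik(λ) used above; NumPy and the sample 3 × 2 matrix are assumptions): each matrix is built by applying the transformation to the identity matrix, and left multiplication reproduces the corresponding row transformation of A.

import numpy as np

def E_i(m, i, lam):
    """E_i(lam): identity of order m with row i multiplied by lam != 0."""
    M = np.eye(m); M[i, i] = lam; return M

def F_ik(m, i, k):
    """F_ik: identity of order m with rows i and k swapped."""
    M = np.eye(m); M[[i, k]] = M[[k, i]]; return M

def G_ik(m, i, k, lam):
    """G_ik(lam): identity of order m with lam placed at position (i, k)."""
    M = np.eye(m); M[i, k] = lam; return M

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # a 3 x 2 matrix
print(E_i(3, 0, 2.0) @ A)        # first row multiplied by 2
print(F_ik(3, 0, 2) @ A)         # first and third rows swapped
print(G_ik(3, 2, 0, -5.0) @ A)   # (-5) * row 1 added to row 3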

Using algorithms that are based on elementary transformations of rows and columns, matrices can be transformed to various forms. One of the most important such algorithms forms the basis of the proof of the following theorem.

Theorem 10.1. Using elementary row transformations, any matrix can be reduced to stepped form.

◄ The proof of the theorem consists in constructing a specific algorithm for reducing a matrix to stepped form. The algorithm repeats, in a certain order, three operations associated with a current element of the matrix, which is selected according to its location in the matrix. At the first step of the algorithm, the current element is taken to be the upper left element of the matrix, i.e. [A]_11.

1*. If the current element is zero, go to operation 2*. If it is not zero, then the row containing the current element (the current row) is added, with appropriate coefficients, to the rows below it, so that all matrix elements in the column below the current element become zero. For example, if the current element is [A]_ij, then for the k-th row, k = i + 1, ..., the coefficient to take is −[A]_kj / [A]_ij. We then select a new current element by shifting one column to the right and one row down in the matrix and proceed to the next step, repeating operation 1*. If such a shift is not possible, i.e. the last column or row has been reached, we stop the transformations.

2*. If the current element in some row of the matrix is equal to zero, we look through the matrix elements located in the column below the current element. If there are no nonzero elements among them, go to operation 3*. If there is a nonzero element below the current element, say in the k-th row, we swap the current row and the k-th row and return to operation 1*.

3*. If the current element and all the elements below it (in the same column) are equal to zero, we change the current element by shifting one column to the right in the matrix. If such a shift is possible, that is, the current element is not in the rightmost column of the matrix, we repeat operation 1*. If we have already reached the right edge of the matrix and changing the current element is impossible, then the matrix is in stepped form, and we can stop the transformations.

Since the matrix has finite dimensions, and at each step of the algorithm the position of the current element shifts to the right by at least one column, the transformation process terminates after at most n steps (n is the number of columns of the matrix). This means that a moment will come when the matrix has stepped form.
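Below is a Python sketch of the algorithm from the proof, a straightforward transcription of operations 1*, 2* and 3* rather than the textbook's own code: the current element walks through the matrix, the rows below it are zeroed out, rows are swapped when the current element is zero, and the column index advances when the whole column below is zero.

import numpy as np

def to_stepped_form(A, eps=1e-12):
    """Reduce A to stepped form by elementary row transformations."""
    M = np.array(A, dtype=float)
    m, n = M.shape
    i, j = 0, 0                      # position of the current element
    while i < m and j < n:
        if abs(M[i, j]) > eps:                       # operation 1*
            for k in range(i + 1, m):
                M[k] += (-M[k, j] / M[i, j]) * M[i]  # zero out column j below row i
            i += 1
            j += 1                                   # shift right and down
        else:
            below = np.nonzero(np.abs(M[i + 1:, j]) > eps)[0]
            if below.size > 0:                       # operation 2*: swap rows
                k = i + 1 + int(below[0])
                M[[i, k]] = M[[k, i]]
            else:                                    # operation 3*: shift right only
                j += 1
    return M

A = [[0.0, 2.0, 1.0],
     [1.0, 1.0, 3.0],
     [2.0, 2.0, 6.0]]
print(to_stepped_form(A))            # [[1. 1. 3.]  [0. 2. 1.]  [0. 0. 0.]]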

Example 10.10. Let us transform a matrix to stepped form using elementary row transformations.

Using the algorithm from the proof of Theorem 10.1 and writing out the matrices obtained after its operations, we get

Elementary matrix transformations include:

1. Changing the order of rows (columns).

2. Discarding zero rows (columns).

3. Multiplying the elements of any row (column) by some nonzero number.

4. Adding to the elements of any row (column) the elements of another row (column) multiplied by some number.

Systems of linear algebraic equations (Basic concepts and definitions).

1. A system of m linear equations with n unknowns is a system of equations of the form:

a11 x1 + a12 x2 + … + a1n xn = b1
a21 x1 + a22 x2 + … + a2n xn = b2
. . .
am1 x1 + am2 x2 + … + amn xn = bm        (1)

2. A solution of the system of equations (1) is a set of numbers x1, x2, ..., xn that turns each equation of the system into an identity.

3. The system of equations (1) is called consistent if it has at least one solution; if the system has no solutions, it is called inconsistent.

4. The system of equations (1) is called determinate if it has exactly one solution, and indeterminate if it has more than one solution.

5. As a result of elementary transformations, system (1) is transformed into a system equivalent to it (i.e., having the same set of solutions).

Elementary transformations of a system of linear equations include:

1. Discarding zero rows (equations).

2. Changing the order of the rows (equations).

3. Adding to the elements of any row the elements of another row multiplied by some number.

Methods for solving systems of linear equations.

1) The inverse matrix method (matrix method) for solving systems of n linear equations with n unknowns.

A system of n linear equations with n unknowns is a system of equations of the form:

a11 x1 + a12 x2 + … + a1n xn = b1
a21 x1 + a22 x2 + … + a2n xn = b2
. . .
an1 x1 + an2 x2 + … + ann xn = bn        (2)

Let us write system (2) in matrix form; to do this, we introduce the following notation.

A is the matrix of coefficients of the variables;

X is the column matrix of the variables x1, x2, …, xn;

B is the column matrix of the free terms b1, b2, …, bn.

Then system (2) takes the matrix form:

A × X = B.

Solving this matrix equation, we get:

X = A⁻¹ × B

Example:

1) |A| = 15 + 8 − 18 − 9 − 12 + 20 = 4 ≠ 0, so the inverse matrix A⁻¹ exists.

3) Ã =

4) A⁻¹ = (1/|A|) × Ã =

X = A⁻¹ × B

Answer:

2) Cramer's rule for solving systems of n linear equations with n unknowns.

Consider a system of 2 linear equations with 2 unknowns:

a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2

Let us solve this system using the substitution method.

From the first equation it follows that x1 = (b1 − a12 x2) / a11.

Substituting into the second equation, we get x2 = (a11 b2 − a21 b1) / (a11 a22 − a12 a21).

Substituting this value into the formula for x1, we get x1 = (b1 a22 − a12 b2) / (a11 a22 − a12 a21).

Here Δ = a11 a22 − a12 a21 is the determinant of the matrix of the system;

Δx1 = b1 a22 − a12 b2 is the determinant for the variable x1;

Δx2 = a11 b2 − a21 b1 is the determinant for the variable x2.

The formulas

x1 = Δx1 / Δ; x2 = Δx2 / Δ; …; xn = Δxn / Δ (Δ ≠ 0)

are called Cramer's formulas.

When finding the determinants for the unknowns x1, x2, …, xn, the column of coefficients of the variable whose determinant is being found is replaced by the column of free terms.
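A short Python sketch of Cramer's formulas (the 3 × 3 system is an arbitrary illustration, not the example that follows): each Δi is the determinant of the matrix obtained by replacing the i-th column with the column of free terms.

import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's formulas (requires det(A) != 0)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("Cramer's rule needs a nonzero determinant")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                # replace column i with the free terms
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[2.0,  1.0, -1.0],
     [1.0,  3.0,  2.0],
     [1.0, -1.0,  4.0]]
b = [1.0, 13.0, 11.0]
print(cramer(A, b))                 # [1. 2. 3.]

For large systems Cramer's rule is mainly of theoretical interest, since it requires computing n + 1 determinants.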

Example: Solve a system of equations by Cramer's method

Solution:

Let us first compose and calculate the main determinant of this system:

Since Δ ≠ 0, the system has a unique solution that can be found by Cramer's rule:

where Δ1, Δ2, Δ3 are obtained from the determinant Δ by replacing the 1st, 2nd or 3rd column, respectively, with the column of free terms.

Thus:

Gauss method for solving systems of linear equations.

Consider the system:

The augmented matrix of system (1) is the matrix of the form:

The Gauss method is a method of successive elimination of unknowns from the equations of the system, starting with the second equation and going through the m-th equation.

In this case, the matrix of the system is reduced by elementary transformations to triangular form (if m = n and the determinant of the system is nonzero) or to stepped form (if m < n).

Then, starting from the last equation, all the unknowns are found one by one.

Gaussian method algorithm:

1) Form the augmented matrix of the system, including the column of free terms.

2) If a11 ≠ 0, divide the first row by a11, multiply it by (−a21) and add it to the second row. Proceed similarly up to the m-th row:

the first row is divided by a11, multiplied by (−am1) and added to the m-th row.

In this case, the variable x1 is eliminated from all equations from the second to the m-th.

3) At the next step, the second row is used for similar elementary transformations of the rows from the 3rd to the m-th. This eliminates the variable x2 from the rows from the 3rd to the m-th, and so on.

As a result of these transformations, the system is reduced to triangular or stepped form (in the triangular case, there are only zeros below the main diagonal).

Reducing the system to triangular or stepped form is called the forward pass of the Gauss method, and finding the unknowns from the resulting system is called the backward pass.
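A Python sketch of both passes, assuming m = n and a nonzero determinant of the system (the sample system is an arbitrary illustration): the forward pass reduces the augmented matrix to triangular form, and the backward pass recovers the unknowns starting from the last equation.

import numpy as np

def gauss_solve(A, b, eps=1e-12):
    """Solve A x = b by the Gauss method (forward pass + backward pass)."""
    M = np.hstack([np.array(A, dtype=float),
                   np.array(b, dtype=float).reshape(-1, 1)])  # augmented matrix
    n = M.shape[0]
    # Forward pass: reduce to triangular form.
    for j in range(n):
        p = j + int(np.argmax(np.abs(M[j:, j])))   # swap in a nonzero pivot row
        if abs(M[p, j]) < eps:
            raise ValueError("zero pivot: the determinant of the system is 0")
        M[[j, p]] = M[[p, j]]
        for i in range(j + 1, n):
            M[i] -= (M[i, j] / M[j, j]) * M[j]
    # Backward pass: from the last equation upwards.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = [[2.0,  1.0, -1.0],
     [1.0,  3.0,  2.0],
     [1.0, -1.0,  4.0]]
b = [1.0, 13.0, 11.0]
print(gauss_solve(A, b))            # [1. 2. 3.]

In practice one would call numpy.linalg.solve, which performs an equivalent elimination with pivoting.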

Example:

Forward pass. Let us bring the augmented matrix of the system

to stepped form by means of elementary transformations. Rearranging the first and second rows of the matrix, we obtain the matrix:

Add to the second row of the resulting matrix the first row multiplied by (−2), and to its third row the first row multiplied by (−7). We obtain the matrix

To the third row of the resulting matrix, add the second row multiplied by (−3), as a result of which we obtain a stepped matrix

Thus, we have brought this system of equations to stepped form:

Backward pass. Starting from the last equation of the resulting stepped system of equations, we successively find the values of the unknowns:
