The inverse matrix and how to find it. Solving matrix equations. The matrix method in economic analysis

ALGEBRAIC COMPLEMENTS AND MINORS

Let us have a third-order determinant:

$$\Delta = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}.$$

The minor corresponding to the element $a_{ij}$ of a third-order determinant is the second-order determinant obtained from the given one by deleting the row and the column at whose intersection the element stands, i.e. the $i$-th row and the $j$-th column. The minor corresponding to the element $a_{ij}$ is denoted $M_{ij}$.

For example, the minor $M_{12}$ corresponding to the element $a_{12}$ is the determinant $M_{12} = \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}$, which is obtained by deleting the 1st row and the 2nd column from the given determinant.

Thus, the formula defining the third-order determinant shows that this determinant is equal to the sum of the products of the elements of the 1st row by their corresponding minors; in this case the minor corresponding to the element $a_{12}$ is taken with the sign “–”, i.e. we can write that

$$\Delta = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13}. \quad (1)$$

Similarly, one can introduce definitions of minors for second-order and higher-order determinants.

Let's introduce one more concept.

The algebraic complement of the element $a_{ij}$ of a determinant is its minor $M_{ij}$ multiplied by $(-1)^{i+j}$.

The algebraic complement of the element $a_{ij}$ is denoted $A_{ij}$.

From the definition we obtain that the connection between the algebraic complement of an element and its minor is expressed by the equality $A_{ij} = (-1)^{i+j} M_{ij}$.

Example. For the determinant above, find $A_{13}$, $A_{21}$, $A_{32}$. By the definition, $A_{13} = (-1)^{1+3}M_{13} = M_{13}$, $A_{21} = (-1)^{2+1}M_{21} = -M_{21}$, $A_{32} = (-1)^{3+2}M_{32} = -M_{32}$.

It is easy to see that, using algebraic complements of elements, formula (1) can be written as:

$$\Delta = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13}.$$

Similarly, one can obtain the expansion of the determinant along the elements of any row or column.

For example, the expansion of the determinant along the elements of the 2nd row can be obtained as follows. By property 2 of determinants (interchanging two rows changes the sign of the determinant), we have:

$$\Delta = -\begin{vmatrix} a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}.$$

Let us expand the resulting determinant along the elements of its 1st row:

$$\Delta = -\left( a_{21}\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} - a_{22}\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} + a_{23}\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} \right). \quad (2)$$

The second-order determinants in formula (2) are precisely the minors of the elements $a_{21}$, $a_{22}$, $a_{23}$. Thus,

$$\Delta = -a_{21}M_{21} + a_{22}M_{22} - a_{23}M_{23} = a_{21}A_{21} + a_{22}A_{22} + a_{23}A_{23},$$

i.e. we have obtained the expansion of the determinant along the elements of the 2nd row.

Similarly, we can obtain the expansion of the determinant along the elements of the third row. Using property 1 of determinants (on transposition), one can show that analogous expansions are also valid along the elements of the columns.

Thus, the following theorem is valid.

Theorem (on the expansion of a determinant along a given row or column). The determinant is equal to the sum of the products of the elements of any of its rows (or columns) by their algebraic complements.

All of the above is also true for determinants of any higher order.

Examples.
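Here is a minimal Python sketch (our own illustration, not the original worked examples) of this theorem: the determinant computed by cofactor expansion along any chosen row gives the same value.

import numpy as np

def det_by_expansion(A, row=0):
    # Determinant via cofactor expansion along the given (0-based) row.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor M_{row,j}: delete the row and the j-th column.
        M = np.delete(np.delete(A, row, axis=0), j, axis=1)
        # Algebraic complement: (-1)^(row+j) times the minor.
        total += A[row, j] * (-1) ** (row + j) * det_by_expansion(M)
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det_by_expansion(A, row=0), det_by_expansion(A, row=1))  # both print -3.0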

INVERSE MATRIX

The concept of an inverse matrix is introduced only for square matrices.

If $A$ is a square matrix, then the matrix inverse to it is a matrix, denoted $A^{-1}$, satisfying the condition $A \cdot A^{-1} = A^{-1} \cdot A = E$. (This definition is introduced by analogy with the multiplication of numbers.)


The inverse matrix is a matrix that, when multiplied by a given matrix both on the right and on the left, gives the identity matrix.
Let us denote the matrix inverse to the matrix $A$ by $A^{-1}$; then, according to the definition, we get:

$$A \cdot A^{-1} = A^{-1} \cdot A = E,$$

where $E$ is the identity matrix.
A square matrix is called non-special (non-degenerate) if its determinant is not zero. Otherwise it is called special (degenerate), or singular.

The following theorem holds: every non-singular matrix has an inverse matrix.

The operation of finding the inverse matrix is called inversion of the matrix. Let us consider the matrix inversion algorithm. Let a non-singular matrix of order $n$ be given:

$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix},$$

where $\Delta = \det A \neq 0$.

The algebraic complement $A_{ij}$ of an element of the matrix $A$ of order $n$ is the determinant, taken with the appropriate sign, of the matrix of order $(n-1)$ obtained by deleting the $i$-th row and the $j$-th column of the matrix $A$:

$$A_{ij} = (-1)^{i+j} M_{ij}.$$

Let us form the so-called adjugate (adjoint) matrix:

$$\tilde{A} = \begin{pmatrix} A_{11} & A_{21} & \dots & A_{n1} \\ A_{12} & A_{22} & \dots & A_{n2} \\ \vdots & \vdots & & \vdots \\ A_{1n} & A_{2n} & \dots & A_{nn} \end{pmatrix},$$

where $A_{ij}$ are the algebraic complements of the corresponding elements of the matrix $A$.
Note that the algebraic complements of the elements of the rows of the matrix $A$ are placed in the corresponding columns of the matrix $\tilde{A}$; that is, the matrix is transposed at the same time.
Dividing all the elements of the matrix $\tilde{A}$ by $\Delta$, the value of the determinant of the matrix $A$, we obtain the inverse matrix as a result:

$$A^{-1} = \frac{1}{\Delta}\,\tilde{A}.$$
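This algorithm translates directly into code. A minimal Python sketch, assuming numpy is available for the determinants (the function name is our own):

import numpy as np

def inverse_via_adjugate(A):
    # Invert a square matrix via A^{-1} = (1/det A) * adjugate(A).
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det = np.linalg.det(A)
    if abs(det) < 1e-12:
        raise ValueError("matrix is singular; no inverse exists")
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            # The cofactor goes to position (j, i): the transposition is built in.
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj / det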

Let us note a number of special properties of the inverse matrix:
1) for a given matrix $A$, its inverse matrix is unique;
2) if an inverse matrix exists, then the right inverse and the left inverse matrices coincide with it;
3) a special (singular) square matrix does not have an inverse matrix.

Basic properties of the inverse matrix:
1) the determinant of the inverse matrix and the determinant of the original matrix are reciprocals: $\det A^{-1} = 1/\det A$;
2) the inverse matrix of a product of square matrices is equal to the product of the inverse matrices of the factors, taken in reverse order: $(AB)^{-1} = B^{-1}A^{-1}$;
3) the transposed inverse matrix is equal to the inverse matrix of the given transposed matrix: $(A^{-1})^T = (A^T)^{-1}$.
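These properties are easy to verify numerically. A short sketch (our own illustration, using numpy and arbitrary non-singular matrices):

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3)) + 3 * np.eye(3)   # diagonally dominant, hence non-singular
B = rng.random((3, 3)) + 3 * np.eye(3)

# 1) det(A^{-1}) = 1 / det(A)
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))
# 2) (AB)^{-1} = B^{-1} A^{-1}
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A)))
# 3) (A^{-1})^T = (A^T)^{-1}
print(np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T)))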

EXAMPLE. Calculate the inverse of the given matrix.

Let there be a square matrix of $n$-th order.

The matrix $A^{-1}$ is called the inverse matrix with respect to the matrix $A$ if $A \cdot A^{-1} = E$, where $E$ is the identity matrix of $n$-th order.

The identity matrix is a square matrix in which all the elements along the main diagonal, running from the upper left corner to the lower right corner, are ones, and the rest are zeros, for example:

$$E = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

An inverse matrix can exist only for square matrices, i.e. for matrices in which the number of rows and the number of columns coincide.

Theorem (existence condition for the inverse matrix)

In order for a matrix to have an inverse matrix, it is necessary and sufficient that it be non-singular.

The matrix $A = (A_1, A_2, \dots, A_n)$ is called non-degenerate if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, we can say that for an inverse matrix to exist, it is necessary and sufficient that the rank of the matrix be equal to its dimension, i.e. $r = n$.
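This rank criterion is easy to test numerically; a small sketch (our own illustration, using numpy's matrix_rank):

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # the second column is twice the first
# rank < dimension, so A is singular and has no inverse
print(np.linalg.matrix_rank(A) == A.shape[0])   # False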

Algorithm for finding the inverse matrix

  1. Write matrix A into the table for solving systems of equations by the Gaussian method, and append the matrix E to it on the right (in place of the right-hand sides of the equations).
  2. Using Jordan transformations, reduce matrix A to a matrix consisting of unit columns; the same transformations must be applied simultaneously to the matrix E.
  3. If necessary, rearrange the rows (equations) of the last table so that the identity matrix E appears under the matrix A of the original table.
  4. Write down the inverse matrix $A^{-1}$, which stands in the last table under the matrix E of the original table (see the sketch after this list).
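A minimal Python sketch of this Gauss–Jordan procedure (the function name and the partial-pivoting detail are our own choices):

import numpy as np

def inverse_gauss_jordan(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Step 1: append the identity matrix on the right: [A | E].
    T = np.hstack([A, np.eye(n)])
    for col in range(n):
        # Swap up a row with the largest pivot (handles zero pivots).
        pivot = col + int(np.argmax(np.abs(T[col:, col])))
        if np.isclose(T[pivot, col], 0.0):
            raise ValueError("matrix is singular; no inverse exists")
        T[[col, pivot]] = T[[pivot, col]]
        # Steps 2-3: Jordan elimination makes this column a unit column.
        T[col] /= T[col, col]
        for row in range(n):
            if row != col:
                T[row] -= T[row, col] * T[col]
    # Step 4: the right half of the table now holds A^{-1}.
    return T[:, n:]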
Example 1

For the matrix A, find the inverse matrix $A^{-1}$.

Solution: We write matrix A and append the identity matrix E to it on the right. Using Jordan transformations, we reduce matrix A to the identity matrix E. The calculations are given in Table 31.1.

Let's check the correctness of the calculations by multiplying the original matrix A and the inverse matrix $A^{-1}$.

As a result of matrix multiplication, the identity matrix was obtained. Therefore, the calculations were performed correctly.

Answer:

Solving matrix equations

Matrix equations can look like:

AX = B, XA = B, AXB = C,

where A, B, C are the specified matrices, X is the desired matrix.

Matrix equations are solved by multiplying both sides of the equation by the appropriate inverse matrices.

For example, to find the matrix $X$ from the equation $AX = B$, you need to multiply this equation by $A^{-1}$ on the left: $A^{-1}AX = A^{-1}B$, whence $X = A^{-1}B$.

Therefore, to find the solution of the equation $AX = B$, you need to find the inverse matrix $A^{-1}$ and multiply it by the matrix $B$ on the right-hand side of the equation.

The other equations are solved similarly: from $XA = B$ we obtain $X = BA^{-1}$, and from $AXB = C$ we obtain $X = A^{-1}CB^{-1}$.

Example 2

Solve the equation AX = B if

Solution: Since the inverse matrix $A^{-1}$ is already known (see Example 1), the solution is $X = A^{-1}B$.
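The concrete matrices of this example were not preserved, so here is a Python sketch of the same procedure with arbitrary stand-in data (any non-singular A and conformable B will do):

import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = np.array([[4.0],
              [7.0]])

X = np.linalg.inv(A) @ B          # X = A^{-1} B solves AX = B
print(np.allclose(A @ X, B))      # check: AX reproduces B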

Matrix method in economic analysis

Along with other methods, matrix methods are also used in economic analysis. These methods are based on linear and vector-matrix algebra. They are applied to the analysis of complex and multidimensional economic phenomena. Most often, these methods are used when a comparative assessment of the functioning of organizations and their structural divisions is required.

In the process of applying matrix analysis methods, several stages can be distinguished.

At the first stage, a system of economic indicators is formed, and on its basis a matrix of initial data is compiled: a table in which the rows contain the numbers of the systems under comparison (i = 1, 2, ..., n) and the columns contain the numbers of the indicators (j = 1, 2, ..., m).

At the second stage, for each column the largest of the available indicator values is identified and taken as one.

After this, all the values in the column are divided by this largest value, and a matrix of standardized coefficients is formed.

At the third stage, all components of the matrix are squared. If the indicators differ in significance, each matrix indicator is assigned a certain weight coefficient k, whose value is determined by expert judgment.

At the fourth and last stage, the rating values $R_i$, found by summing the weighted squared coefficients of each system, are arranged in order of their increase or decrease.
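As an illustration only, a minimal Python sketch of these four stages with made-up indicator data (the summation of the weighted squares into the rating $R_i$ follows the standard comparative-rating scheme):

import numpy as np

# Rows = systems/organizations (i = 1..n), columns = indicators (j = 1..m).
X = np.array([[3.0, 0.8, 120.0],
              [4.5, 0.6,  95.0],
              [2.5, 0.9, 150.0]])
k = np.array([1.0, 2.0, 1.5])        # expert weight coefficients

S = X / X.max(axis=0)                # stage 2: standardized coefficients
R = (k * S ** 2).sum(axis=1)         # stage 3: weighted squares summed per system
ranking = np.argsort(-R)             # stage 4: systems in decreasing rating order
print(R, ranking)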

The matrix methods outlined can be used, for example, in comparative analysis of various investment projects, as well as in assessing other economic indicators of organizations' activities.

The inverse of the original matrix is found by the formula $A^{-1} = A^*/\det A$, where $A^*$ is the adjugate matrix and $\det A$ is the determinant of the original matrix. The adjugate matrix is the transposed matrix of algebraic complements of the elements of the original matrix.

First of all, find the determinant of the matrix; it must be different from zero, since later the determinant will be used as a divisor. Let, for example, a matrix of the third order (consisting of three rows and three columns) be given. If its determinant is not equal to zero, then the inverse matrix exists.

Find the algebraic complements of each element of the matrix A. The complement $A_{ij}$ is the determinant of the submatrix obtained from the original by deleting the $i$-th row and the $j$-th column, and this determinant is taken with a sign. The sign is determined by multiplying the determinant by $(-1)^{i+j}$. Thus, for example, for the complement $A_{21}$ the sign is $(-1)^{2+1} = -1$.

As a result you will get the matrix of algebraic complements; now transpose it. Transposition is the operation of reflecting a matrix about its main diagonal: the columns and the rows are swapped. Thus, you have found the adjugate matrix $A^*$.


Algorithm for finding the inverse matrix

  1. Find the transposed matrix $A^T$.
  2. Determine the algebraic complements: replace each element of the transposed matrix with its algebraic complement.
  3. Compile the inverse matrix from the algebraic complements: divide each element of the resulting matrix by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
The next algorithm for finding the inverse matrix is similar to the previous one, except for the order of some steps: first the algebraic complements are calculated, and then the adjugate matrix C is determined.
  1. Determine whether the matrix is square. If not, then there is no inverse matrix for it.
  2. Calculate the determinant of the matrix A. If it is not equal to zero, we continue the solution; otherwise the inverse matrix does not exist.
  3. Determine the algebraic complements.
  4. Fill out the adjugate (mutual, adjoint) matrix C.
  5. Compile the inverse matrix from the algebraic complements: divide each element of the adjugate matrix C by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
  6. Perform a check: multiply the original matrix and the resulting matrix. The result should be the identity matrix.

Example No. 1. Let us write the matrix in the form:

$$A = \begin{pmatrix} -1 & 2 & -2 \\ 2 & -1 & 5 \\ 3 & -2 & 4 \end{pmatrix}, \qquad \Delta = \det A = 10, \qquad A^T = \begin{pmatrix} -1 & 2 & 3 \\ 2 & -1 & -2 \\ -2 & 5 & 4 \end{pmatrix}.$$

Algebraic complements of the elements of $A^T$: $\Delta_{11} = 6$, $\Delta_{12} = -(2 \cdot 4 - (-2) \cdot (-2)) = -4$, $\Delta_{13} = 8$, $\Delta_{21} = -(2 \cdot 4 - 5 \cdot 3) = 7$, $\Delta_{22} = 2$, $\Delta_{23} = -(-1 \cdot 5 - 2 \cdot (-2)) = 1$, $\Delta_{31} = -1$, $\Delta_{32} = -(-1 \cdot (-2) - 2 \cdot 3) = 4$, $\Delta_{33} = -3$. Dividing each by $\Delta = 10$, we obtain

$$A^{-1} = \begin{pmatrix} 0.6 & -0.4 & 0.8 \\ 0.7 & 0.2 & 0.1 \\ -0.1 & 0.4 & -0.3 \end{pmatrix}.$$
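A quick numerical check of this result (our own addition, using numpy):

import numpy as np

A = np.array([[-1.0, 2.0, -2.0],
              [2.0, -1.0, 5.0],
              [3.0, -2.0, 4.0]])
print(np.linalg.inv(A))   # reproduces the matrix A^{-1} above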

Another algorithm for finding the inverse matrix

Let us present another scheme for finding the inverse matrix.
  1. Find the determinant of this square matrix A.
  2. We find algebraic complements to all elements of the matrix A.
  3. We write the algebraic complements of the row elements into the columns (transposition).
  4. We divide each element of the resulting matrix by the determinant of the matrix A.
As we can see, the transposition operation can be applied both at the beginning, to the original matrix, and at the end, to the resulting algebraic complements.

A special case: The inverse of the identity matrix E is the identity matrix E.