Optimal behavior and its formalization in economic and mathematical models. Psychological mechanisms of motivation development

Part III. Law of Optimal Behavior

So, we can state the discovery of the Law of Optimal Behavior, the very Law that reflects the general principle inherent in the behavior of any person.  

From the Law of Optimal Behavior it is clear that a person is not able to go against himself, i.e. against his own interests.

It would seem that a person should live in complete harmony with the outside world - nature and society. But this is far from true. Apparently, there is a certain reason for disharmony, which we have to identify, given that people’s behavior, being a consequence of their thinking, is subject to an objective Law - the Law of Optimal Behavior.  

It cannot be otherwise, because people’s behavior is subject to the Law of Optimal Behavior, and it is only possible to control it by introducing various conditions.  

Firstly, it is obvious that the regulatory conditions within which ordinary workers are placed do not determine for them all, without exception, favorable consequences in the case of good work and unfavorable consequences in the case of bad work; that is, they introduce uncertainty into the sphere of labor relations. Employees, obeying the Law of Optimal Behavior, follow the path of least resistance and choose the type of behavior that is optimal for them at the moment, i.e. that allows them to avoid those adverse consequences that are in one way or another determined by the existing conditions. But despite this kind of partial satisfaction of their interests, employees are not able to choose the type of behavior that the administration expects from them, because their behavior is subject not to the intentions or requirements of the administration, but to the Law. Of course, employees are capable of giving much more than they give under existing conditions, and, as a rule, they are well aware of this. All criticism by employees of the administration is nothing more than an expression of their demand to supplement the regulatory conditions for the fullest satisfaction of their interest in productive work. In fact, employees unconsciously strive for certainty in labor relations, i.e. for all the favorable and unfavorable consequences of one or another of their actions to always be clear to them.

Moreover, by introducing any regulatory conditions, it is possible to one degree or another - depending on the degree of completeness of these conditions - to control human behavior. In fact, this is what happens in all spheres of social life, because the Law of Optimal Behavior is universal for human society.  

From now on, we know the main property inherent in any person, and therefore in any employee of an organization: to always act optimally, with the greatest benefit for oneself, taking into account all the consequences determined by regulatory conditions. We also know the Law of Optimal Behavior, which we cannot change. There is only one thing left for us: to purposefully change the regulatory conditions, to change them so that a person naturally - precisely thanks to this main property - always acts rationally, with the greatest benefit for the organization. Only in this case does a person become a quality labor resource, completely manageable. As an object of management, he will himself benefit from management that is always aimed at the rational use of all available resources.

On the other hand, these same people, being subject to the Law of Optimal Behavior and having committed an act that ultimately caused their repentance, certainly faced a series of adverse consequences - a negative reaction from the external environment. Their optimal behavior turned out to be unreasonable (irrational) in relation to it.

Indeed, since any person is objectively subject to the Law of Optimal Behavior, it can be unequivocally stated that not a single person will act for the benefit of the external environment until this leads him to obtain benefit for himself, until what is rational in relation to the external environment becomes optimal for him.  

If R is always equal to 1, i.e. the initial degree of internal rationality is determined by the action of the Law of Optimal Behavior, then this or that actual degree of general rationality (R of the external environment, of the person, daily, ...

Employee behavior is formed under the influence of the Law of Optimal Behavior.  

In accordance with the Law of Optimal Behavior, the intellect of each individual tirelessly guards his own interests. At the moment of infringement of these interests, all of his intellectual potential is objectively directed towards their protection. And if the interests of two subjects of labor relations - an entrepreneur and an employee - contradict each other, it is simply impossible, and even unthinkable, to organize constructive and productive work in such a situation.  

Each person is individual, but, regardless of certain character traits, everyone is always inclined to justify their actions. If something goes wrong, a person, as a rule, considers his failure to be a consequence of the erroneous actions of the people around him. And in this he is right in his own way, because his behavior is always built taking into account his own interests - it is always subject to the Law of Optimal Behavior.  

At first glance, the situation described is paradoxical. It is completely unclear who is really right and who is wrong. The Law of Optimal Behavior justifies everyone.

Thus, in the absence of criterion conditions, the manifestation of the Law of Optimal Behavior becomes negative, "destructive": everyone justifies - and, as it seems to him, quite justifiably - only himself and blames - just as justifiably - others. In this case, what is irrational in relation to others turns out to be optimal for the individual.

Knowledge of the Law of Optimal Behavior again allows us to understand the reason for the pattern correctly noted by Parkinson.

So, only the presence of criterion conditions allows us to avoid the negative manifestation of the Law of Optimal Behavior, and it is the presence of these conditions that leads to the Law beginning to "do its creative work" in all areas of social relations, without exception, where such conditions are introduced.

In order to more clearly imagine the negative manifestation of the Law of Optimal Behavior in the field of labor relations, let us consider the negative consequences generated by the force of this Law, using the example of the most pressing problems existing today in this area.  


MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN FEDERATION

Astrakhan State Technical University

Institute of Information Technologies and Communications

Speciality: Automation of Technological Processes and Production

Form of study: correspondence

Essay

in the discipline "Optimal control systems"

on the topic "Optimal and adaptive control systems"

Performed by: student

Teacher

Kokuev A.G.

Astrakhan 2016

1. Management system and its principles

2. Optimal control

3. Optimal control problems

4. Adaptive control

5. Classification of adaptive systems

6. Adaptation

7. Levels of adaptation

8. Features of adaptive control systems

9. Structure of adaptive control systems

10. Software for adaptive robot control systems

11. Main functions of the software

Conclusion

Bibliography

1. Control system and its principles

A system is a whole created from parts and elements that interact with each other for purposeful activity. Among its main features are: the multiplicity of elements, integrity and unity between them, the presence of a certain structure, etc. At the same time, the system has properties that differ from the properties of its elements. Any system, in general, has an input effect, a processing system, final results and feedback.

Control is the process of influencing a system in order to maintain a given state or transfer it to a new state.

A management system is a set of all elements, subsystems and communications between them, as well as processes that ensure the specified (purposeful) functioning of the organization.

One of the first definitions of the concept “system” is found in the work of K. Baldwing: “a system is a collection of two or more elements that satisfy the following conditions:

The behavior of each element influences the behavior of the whole;

The behavior of elements and their interaction on the whole are interdependent;

If there are subgroups of elements, then each of them influences the behavior of the whole and none of them exerts such influence independently.”

It follows from this that the system is a kind of unity, and not a simple collection of its constituent parts. In this regard, R. Ackoff notes that “when a system is dismembered, it loses its essential properties.”

The main system principles are:

1) integrity - the irreducibility of the properties of a system to the sum of the properties of its constituent elements, and the non-derivability of the properties of the whole from the properties of its elements;

2) structurality - the possibility of describing a system through establishing its structure, i.e., the behavior of the system is determined not so much by the behavior of its individual elements as by the properties of its structure;

3) interdependence of structure and environment - the system forms and manifests its properties in the process of interaction with the environment, being at the same time an active element of interaction;

4) hierarchy - each element of the system in turn can be considered as a system, and the system under study is one of the elements of a broader, global system;

5) multiplicity of descriptions of each system, giving a macroscopic, microscopic, hierarchical, functional and procedural view of the system. In this regard, the task of the systems approach is to identify the whole, explain the behavior and properties of the whole in terms of its role and function.

A management system from the perspective of a systems approach can be defined as:

Conceptual, if it is considered as a model of a management system;

Empirical, if a specific organization is considered;

Artificial because it is created and used by people;

“Man-machine” (“man-computer”), since an automated information system is included in the control loop;

Closed or open, depending on the tasks being solved and the information used for this - only internal or related to the environment;

Temporary because it is periodically subject to formal or informal changes.

There are two main types of systems: closed and open. A closed system has rigid, fixed boundaries; its actions are relatively independent of the environment surrounding the system. A clock is a familiar example of a closed system.

An open system is characterized by interaction with the external environment. Energy, information, materials are objects of exchange with the external environment, the permeable boundaries of the system. Such a system is not self-sustaining; it depends on energy, information and materials coming from outside. In addition, an open system has the ability to adapt to changes in the external environment and must do so in order to continue to function.

Managers are primarily concerned with open systems because all organizations are open systems. The survival of any organization depends on the outside world.

Management principles are nothing more than the initial, fundamental, basic ideas of management activities, as well as the most important requirements, compliance with which ensures its effectiveness. Management principles are the most important element of the management mechanism, since they grow from the laws and patterns of management, and thereby reflect objective reality. At the same time, the principles belong to the subject, and in this regard they have a subjective nature. This duality in the nature of management principles requires a certain, balanced attitude towards them as a guide to action.

The principles of the management system are very diverse and they quite strictly determine the nature of the connections in the system, the structure of management bodies, the adoption and implementation of management decisions.

The basic principles of the management system may include:

The principle of scientific validity.

The principle of consistency and complexity.

The principle of unity of command and collegiality in management.

The principle of democratic centralism.

The principle of unity of sectoral and territorial management.

The principle of hierarchy and feedback.

The main purpose of the management system is to ensure the sustainability and integrity of operations. We can talk about sustainability of activity when a given direction is maintained despite constantly changing circumstances. About integrity - when all the organs of a large complex organism act as one, only in this case is it possible to move without deviating from a given course. To ensure sustainability and integrity of operations, the management system must be adaptive and manageable accordingly.

The adaptability of a management system is a property that reflects its ability to quickly and flexibly respond to any changes and develop adequate management teams to minimize the effect of disturbing factors. Adaptability is the main property that ensures the sustainability of activities.

In this case, adaptability refers to the ability of a control system not only to respond to external changes, but also to anticipate them. The fact is that changes, especially in the economic sphere, do not occur instantaneously, since they are based on the preferences of many people; we are talking about trends that are not obvious at the very beginning and become increasingly obvious as they unfold. Adaptability is higher the earlier these trends are noticed and correctly assessed. Accordingly, the higher the adaptability, the more effectively the control system learns and improves itself, and the faster and more flexibly it responds to external changes.

Adaptability is not only the ability to respond in a timely manner to external changes, it is also the ability to take into account the internal capabilities of the system. What exactly should be changed in the activity so that it remains sustainable, how to rebuild the management system so that it meets the changed requirements, who and what needs to be trained in order to keep up with life - all this and much more depends on how adequate and acceptable each moment in time will be the decisions made by managers. Adaptability is higher the more accurately the system’s capabilities for change are assessed and the more fully these capabilities are used to change activities in the desired direction.

Thus, ensuring high adaptability of activities places special demands on the organization of management: key positions in the system should be occupied by the most capable managers. A position is considered key if the powers of the manager in this position allow him to significantly influence the activities of the enterprise as a whole. And the abilities of this leader must be such that he does not depend on the opinion of the majority, dogmas and stereotypes, is able to think independently and knows how to make optimal decisions in non-standard situations, is responsible and knows how to lead people.

Manageability is a property of a management system that reflects its ability to convey management commands to every employee of the enterprise in a timely manner and without distortion. Only in this case, the activities of all employees of the enterprise will be aimed at achieving a common result, and they will all act in a coordinated and harmonious manner. Controllability is the main property that ensures the integrity of the activity.

As already mentioned, with a large number of employees, management becomes a special activity with its own result, requiring the coordination of managers' efforts and ensuring their effectiveness. In this regard, it becomes important how coordinated the managers themselves are: as soon as one of them begins to act on his own, in isolation from everyone else and pursuing his own goals, the integrity of the activity immediately begins to collapse. As a rule, this happens when such a leader is focused on achieving personal interests as opposed to production interests, acts selfishly and irresponsibly, cannot cope with his moods and depends on his preferences. Situations are also possible when the manager is responsible enough and focused on solving production problems, but his abilities are insufficient to successfully manage the more capable managers subordinate to him. In this case, such a leader has only extremely ineffective formal levers of influence.

In fact, in both cases, one or more management connections are broken, which means it is impossible to fully achieve coordination of activities in individual areas of production: management influences are distorted or simply do not reach from the top level of management to the bottom. This leads to a decrease in controllability, that is, the “cut off” part of the activity begins to deviate from the general direction of movement. Ultimately, the integrity of the activity is disrupted and its sustainability is reduced.

Thus, ensuring high controllability also places special demands on the organization of the management system: the placement of managers should be carried out taking into account their management abilities so as to ensure the integrity of management connections everywhere. Therefore, the minimum necessary condition for ensuring controllability is that all managers are reliable and responsible, do not depend on their moods and preferences, and are focused on the interests of production, and not on meeting their own needs. Only in this case can you be reasonably sure that orders will be processed reliably.

2. Optimal control

Optimal control is the task of designing a system that provides, for a given control object or process, a control law or a control sequence of influences that ensures the maximum or minimum of a given set of system quality criteria.

To solve the optimal control problem, a mathematical model of the controlled object or process is constructed, describing its behavior over time under the influence of control actions and its own current state. The mathematical model for the optimal control problem includes: the formulation of the control goal, expressed through the control quality criterion; the differential or difference equations describing the possible ways of movement of the control object; and restrictions on the resources used, in the form of equations or inequalities.
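To make these three ingredients concrete, here is a minimal Python sketch (our own illustration, not part of the essay) that writes them down for a hypothetical double-integrator plant: the quality criterion, the differential equations of motion and a resource restriction on the control.

# A minimal sketch of the three ingredients listed above, written for an
# assumed double-integrator plant: state x = (position, velocity), scalar control u.
import numpy as np
def dynamics(x, u):
    """Differential equations of motion: dx/dt = f(x, u)."""
    position, velocity = x
    return np.array([velocity, u])
def running_cost(x, u):
    """Control quality criterion (integrand): penalize control effort."""
    return u ** 2
def terminal_cost(x_final):
    """Penalty for missing the target state (the origin) at the final time."""
    return 100.0 * np.dot(x_final, x_final)
def control_constraint(u):
    """Resource restriction in inequality form: |u| <= 1 means this value is >= 0."""
    return 1.0 - abs(u)

Any concrete problem in the sense described above is specified by such a triple together with boundary conditions on the state.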

According to the point of view that has prevailed lately, optimal control is a branch of the theory of extremal problems (optimization theory) devoted to the study and solution of questions of maximizing and minimizing functionals on sets of functions of a special type. On the other hand, optimal control is closely related to the choice of the most profitable (optimal) control modes for complex objects, which are described by systems of ordinary differential equations. If the first point of view is directly consistent with the classification adopted in "classical" mathematics, the second is more applied, since it is focused on solving a variety of problems in economics and technology. When presenting the material in this manual, preference is given to the second point of view.

3. Optimal control problems

Optimal control problems relate to the theory of extremal problems, that is, problems of determining maximum and minimum values. The very fact that this phrase contains several words of Latin origin (maximum - greatest, minimum - smallest, extremum - extreme, optimus - best) indicates that the theory of extremal problems has been a subject of research since ancient times. Aristotle (384-322 BC), Euclid (3rd century BC) and Archimedes (287-212 BC) wrote about some of these problems. Legend associates the founding of the city of Carthage (825 BC) with the ancient problem of determining a closed plane curve enclosing a figure of the maximum possible area. Such problems are called isoperimetric.

A characteristic feature of extremal problems is that their formulation was driven by the pressing demands of the development of society. Moreover, starting from the 17th century, the dominant idea became that the laws of the world around us are a consequence of certain variational principles. The first of them was the principle of P. Fermat (1660), according to which the trajectory of light propagating from one point to another must be such that the time of passage of light along this trajectory is as short as possible. Subsequently, various variational principles widely used in natural science were proposed, for example: the principle of stationary action of W.R. Hamilton (1834), the principle of virtual displacements, the principle of least constraint, etc. At the same time, methods for solving extremal problems were developed. Around 1630, Fermat formulated a method for studying the extremum of polynomials, which consists in the fact that at the extremum point the derivative is equal to zero. For the general case, this method was obtained by I. Newton (1671) and G.W. Leibniz (1684), whose works mark the birth of mathematical analysis. The beginning of the development of the classical calculus of variations dates back to the appearance in 1696 of an article by Johann Bernoulli (a student of Leibniz), which formulated the problem of finding a curve connecting two points A and B such that a material point, moving along it from A to B under the influence of gravity, reaches B in the shortest possible time.

Within the framework of the classical calculus of variations in the 18th-19th centuries, the necessary first-order condition for an extremum was obtained (L. Euler, J.L. Lagrange), and later necessary and sufficient second-order conditions were developed (K.T.W. Weierstrass, A.M. Legendre, C.G.J. Jacobi); the Hamilton-Jacobi theory and field theory were also constructed (D. Hilbert, A. Kneser). Further development of the theory of extremal problems led in the 20th century to the creation of linear programming, convex analysis, mathematical programming, minimax theory and several other branches, one of which is the theory of optimal control.

This theory, like other areas of the theory of extremal problems, arose in connection with practical problems of automatic control in the late 1940s (controlling an elevator in a mine so as to stop it as quickly as possible, controlling the movement of rockets, stabilizing the power of hydroelectric power plants, etc.). Note that statements of individual problems that can be interpreted as optimal control problems had been encountered earlier, for example in I. Newton's "Mathematical Principles of Natural Philosophy" (1687). This also includes R. Goddard's problem (1919) of lifting a rocket to a given height with minimal fuel consumption and its dual problem of lifting a rocket to the maximum height with a given quantity of fuel. Over the time since then, the fundamental principles of optimal control theory have been established: the maximum principle and the dynamic programming method.

These principles represent a development of the classical calculus of variations for the study of problems containing complex control constraints.

Now the theory of optimal control is experiencing a period of rapid development, both due to the presence of difficult and interesting mathematical problems, and due to the abundance of applications, including in such areas as economics, biology, medicine, nuclear energy, etc.

All optimal control problems can be considered as mathematical programming problems and, in this form, can be solved using numerical methods.
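As a rough illustration of this route, the sketch below (a toy example under the same assumed double-integrator plant as above, not a method taken from the essay) discretizes the horizon and hands the resulting nonlinear program to scipy.optimize.minimize; the control sequence itself is the decision vector.

# Direct transcription of the toy problem to a mathematical program.
import numpy as np
from scipy.optimize import minimize
DT, N = 0.1, 50                      # step size and number of steps (5 s horizon)
def simulate(u_seq, x0=np.array([1.0, 0.0])):
    """Forward Euler rollout of the double integrator under a control sequence."""
    x = x0.copy()
    for u in u_seq:
        x = x + DT * np.array([x[1], u])
    return x
def objective(u_seq):
    """Discretized cost: control effort plus a terminal penalty on the final state."""
    x_final = simulate(u_seq)
    return DT * np.sum(u_seq ** 2) + 100.0 * x_final @ x_final
result = minimize(objective, x0=np.zeros(N), bounds=[(-1.0, 1.0)] * N)
print("optimal cost:", result.fun)

The bounds express the control restriction |u| <= 1; a general-purpose optimizer then plays the role of the numerical method mentioned above.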

For the optimal control of hierarchical multi-level systems, for example large chemical plants, metallurgical and energy complexes, multi-purpose and multi-level hierarchical optimal control systems are used. Control quality criteria for each control level and for the entire system as a whole, as well as the coordination of actions between control levels, are introduced into the mathematical model.

If the controlled object or process is deterministic, then differential equations are used to describe it. The most commonly used are ordinary differential equations of the form dx/dt = f(x, u, t), where x is the state, u the control and t time. In more complex mathematical models (for systems with distributed parameters), partial differential equations are used to describe the object. If the controlled object is stochastic, then stochastic differential equations are used to describe it.

If the solution to a given optimal control problem is not continuously dependent on the initial data (an ill-posed problem), then such a problem is solved by special numerical methods.

An optimal control system that is capable of accumulating experience and improving its work on this basis is called a learning optimal control system.

The real behavior of an object or system always differs from the program one due to inaccuracy in the initial conditions, incomplete information about external disturbances acting on the object, inaccuracy in the implementation of program control, etc. Therefore, to minimize the deviation of an object’s behavior from the optimal one, an automatic control system is usually used.
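The following sketch (an invented first-order plant and disturbance value, for illustration only) contrasts pure program control with feedback correction: the open loop drifts away from the setpoint under an unmodelled disturbance, while the closed loop keeps the deviation small.

# Open-loop (program) control versus feedback correction under a disturbance.
import numpy as np
from scipy.integrate import solve_ivp
X_REF, DISTURBANCE, K = 1.0, 0.3, 10.0   # setpoint, unmodelled disturbance, feedback gain
def open_loop(t, x):
    # Program control computed for the disturbance-free model: u = 1 should hold x at 1.
    u = 1.0
    return [-x[0] + u + DISTURBANCE]
def closed_loop(t, x):
    # Feedback corrects the deviation caused by the unmodelled disturbance.
    u = 1.0 + K * (X_REF - x[0])
    return [-x[0] + u + DISTURBANCE]
t_span, x0 = (0.0, 5.0), [0.0]
ol = solve_ivp(open_loop, t_span, x0)
cl = solve_ivp(closed_loop, t_span, x0)
print("final deviation, open loop :", abs(ol.y[0, -1] - X_REF))
print("final deviation, closed loop:", abs(cl.y[0, -1] - X_REF))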

Sometimes (for example, when controlling complex objects such as a blast furnace in metallurgy, or when analyzing economic information), the initial data and the knowledge about the controlled object available when setting the optimal control problem contain uncertain or fuzzy information that cannot be processed by traditional quantitative methods. In such cases, one can use optimal control algorithms based on the mathematical theory of fuzzy sets (fuzzy control). The concepts and knowledge used are converted into a fuzzy form, fuzzy rules for deriving the decisions are defined, and the fuzzy decisions are then converted back into physical control variables.
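A minimal sketch of that pipeline, assuming triangular membership functions and a three-rule base chosen purely for illustration, might look as follows.

# Fuzzify a crisp error, apply simple rules, defuzzify back into a control value.
def triangle(x, left, peak, right):
    """Triangular membership function."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)
def fuzzy_controller(error):
    # Fuzzification: membership of the error in three linguistic terms.
    negative = triangle(error, -2.0, -1.0, 0.0)
    zero     = triangle(error, -1.0,  0.0, 1.0)
    positive = triangle(error,  0.0,  1.0, 2.0)
    # Rule base: each term recommends a control action (an output singleton):
    #   IF error is negative THEN decrease output (-1)
    #   IF error is zero     THEN hold output      (0)
    #   IF error is positive THEN increase output (+1)
    weights = [negative, zero, positive]
    actions = [-1.0, 0.0, 1.0]
    # Defuzzification: weighted average of the recommended actions.
    total = sum(weights)
    return sum(w * a for w, a in zip(weights, actions)) / total if total else 0.0
print(fuzzy_controller(0.4))   # a small positive correction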

4. Adaptive control

Adaptive control is a set of control theory methods that make it possible to synthesize control systems that have the ability to change the parameters of the controller or the structure of the controller depending on changes in the parameters of the control object or external disturbances acting on the control object. Such control systems are called adaptive.

5. Classification of adaptive systems

Based on the nature of changes in the control device, adaptive systems are divided into two large groups:

Self-adjusting (only the values of the controller parameters change);

Self-organizing (the structure of the regulator itself changes).

Based on the method of studying an object, systems are divided into search and non-search systems.

Among the first group, extremal systems are the best known; their control goal is to maintain the system at the extremum point of the static characteristics of the object. In such systems, to determine the control actions that ensure movement towards the extremum, a search signal is added to the control signal. Search-free adaptive control systems are divided into:

Reference model (RM) systems

Systems with an identifier, which are sometimes referred to in the literature as systems with an adjustable (tunable) model.

Adaptive systems with a reference model contain a dynamic model of the system possessing the required quality. Adaptive systems with an identifier are divided, according to the control method, into direct and indirect.
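As one possible illustration of a reference-model system, the sketch below implements the classic MIT gain-adaptation rule for a first-order plant; the plant gain, model gain and adaptation rate are assumed values, not taken from the text.

# Reference-model adaptation of a single controller gain (MIT rule).
import numpy as np
DT, STEPS = 0.01, 20000
K_TRUE, K_MODEL, GAMMA = 2.0, 1.0, 0.5   # unknown plant gain, model gain, adaptation speed
y = ym = 0.0      # plant and reference-model outputs
theta = 0.0       # adjustable controller gain; the ideal value is K_MODEL / K_TRUE = 0.5
for step in range(STEPS):
    uc = 1.0 if (step * DT) % 20 < 10 else -1.0   # square-wave command signal
    u = theta * uc                                 # adjustable controller
    # First-order plant and reference model, integrated with Euler steps.
    y  += DT * (-y  + K_TRUE  * u)
    ym += DT * (-ym + K_MODEL * uc)
    e = y - ym                                     # model-following error
    theta += DT * (-GAMMA * e * ym)                # MIT rule: adjust theta to reduce e**2
print("adapted gain:", round(theta, 3), " ideal:", K_MODEL / K_TRUE)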

With indirect adaptive control, the parameters of the object are first estimated, after which the required values of the controller parameters are determined from the obtained estimates and the controller is adjusted. With direct adaptive control, the relationship between the parameters of the object and of the controller is taken into account so that the controller parameters are estimated and adjusted directly, eliminating the stage of identifying the parameters of the object. According to the method of achieving the self-tuning effect, systems with a model are divided into:

Systems with signal (passive)

Systems with parametric (active) adaptation.

In systems with signal adaptation, the self-tuning effect is achieved without changing the parameters of the control device, by means of compensating signals. Systems that combine both types of adaptation are called combined.
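The indirect (identifier-based) scheme described above can be sketched as follows: recursive least squares estimates the parameters of an assumed first-order discrete plant on-line, and a simple deadbeat controller is recomputed from the estimates at every step. The plant numbers are invented for illustration.

# Indirect adaptive control: on-line identification followed by controller redesign.
import numpy as np
A_TRUE, B_TRUE = 0.8, 0.5          # unknown discrete plant: y[t+1] = a*y[t] + b*u[t]
theta_hat = np.array([0.0, 0.1])   # estimates [a_hat, b_hat]; b_hat != 0 to start
P = np.eye(2) * 1000.0             # RLS covariance (large = low initial confidence)
y, r = 0.0, 1.0                    # plant output and constant reference
for t in range(50):
    # Controller recomputed from the current estimates (deadbeat law).
    a_hat, b_hat = theta_hat
    u = (r - a_hat * y) / b_hat
    y_next = A_TRUE * y + B_TRUE * u          # true plant response
    # Recursive least squares update of the parameter estimates.
    phi = np.array([y, u])                    # regressor
    denom = 1.0 + phi @ P @ phi
    gain = P @ phi / denom
    theta_hat = theta_hat + gain * (y_next - phi @ theta_hat)
    P = P - np.outer(gain, phi @ P)
    y = y_next
print("estimates:", theta_hat, " true:", [A_TRUE, B_TRUE])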

Application

Adaptive control is used for nonlinear systems and for systems with variable parameters. Examples of such systems include induction (asynchronous) machines, magnetically levitated vehicles, magnetic bearings, etc. Among mechanical systems one can name the inverted pendulum, lifting and transport machines, robots, walking machines, underwater vehicles, aircraft, missiles, many types of precision-guided weapons, etc.

6. Adaptation

Adaptation (accommodation) is the main reaction of a living organism, providing it with the ability to survive. It means the body's adaptation to changing external and internal conditions. The implementation of this principle in technical systems, namely in robotics, apparently has many advantages, and sometimes is simply necessary. The concept of adaptation or adaptability in technology is very broad and therefore has many interpretations. Unfortunately, there is still no exact generally accepted definition of an adaptive system, so we will try to explain the meaning of this term with the following reasoning.

As is known, with the help of open-loop control without feedback, it is possible to exclude the influence of some predictable external disturbances on the output parameters of the object, provided that the characteristics of the individual components and elements of the control system are quite simple and their properties do not change.

It is possible to eliminate the influence of unpredictable external disturbances on the behavior of an object within the framework of traditional control theory. To do this, it is necessary to use the feedback principle, i.e. to organize a closed control system, the properties of all elements of which are assumed to be known and unchanging over time. Sometimes drift of some characteristics may be allowed, but within very small limits. However, in practice there are often control objects whose amplitude and frequency parameters vary widely over time, both under the influence of external causes and due to the properties of the object itself. The moment of inertia of a manipulator in the folded state can differ several times from that in the fully extended state; the viscosity of the working fluid in the cavities of the hydraulic cylinders of an underwater robot changes with the depth and temperature of the water; friction in the engine supports changes as the lubricant becomes contaminated and ages; and there are many other such characteristics. At the same time, when controlling complex objects - flexible production modules, lines or sections consisting of many pieces of equipment - the number of external and internal factors that have a disturbing effect on their operation increases sharply. Among them may be errors in the positioning of workpieces or even their absence at the right moment, wear of the processing tool, deviation of the joint of the welded parts from the specified trajectory of the welding machine's electrode, swaying of parts on an overhead conveyor as the robot picks them up, and other similar factors that require adaptation of the control system, i.e. self-tuning and adjustment to the real operating conditions. The response of the control system is manifested in changes in its structure, parameters and sometimes its algorithm of actions, so as to guarantee the achievement of the set goal.

There are general properties that characterize the adaptation process:

The output parameters of the regulated object and the characteristics of disturbing factors are under constant monitoring and control using devices additionally included in the control system;

The observed behavior of the object is described by a certain quality indicator that evaluates, in quantitative form, the nature of the control process;

Deviation of a quality indicator beyond the tolerance limits entails automatic adjustment of the controller parameters or replacement of the control algorithm, the result of which is the achievement of the desired quality indicator or the implementation of the set goal.

The described properties are inherent in a more or less pronounced form to all adaptive control systems, which are always feedback systems.

7. Levels of adaptation

Depending on the control goal, adaptive systems in robotics can be divided into the following levels.

The first level is characterized by the ability to self-adjust controller parameters based on information about the state of the object under the disturbing influence of the external environment. Assessment of the state of an object can be carried out either by direct measurement of the required parameters or by their identification. In the latter case, certain test control actions are applied to the object, its reaction is recorded, and, based on an analysis of the object's behavior, an assessment is made of its a priori unknown or changed parameters. A typical example of this level of adaptation of a robotic system is a controller that controls a position-locked electro-hydraulic drive of an underwater vehicle's manipulator. A peculiarity of the operation of underwater robots is the need to maintain the static and dynamic parameters of the hydraulic drives at a given level over a wide range of temperatures and pressures of the environment. The temperature of the water layers can vary significantly, which can lead to a change in the viscosity of the working fluid and, as a result, to an unpredictable drift in the drive characteristics. An adaptive control system can eliminate this unpleasant phenomenon by identifying the changes in the characteristics and providing appropriate self-tuning of the controller parameters.
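A rough sketch of this first level, under the assumption of a first-order drive model: a test step is applied, the drifted time constant is identified from the recorded reaction, and the proportional gain is re-tuned to preserve the desired closed-loop speed. All numbers are illustrative.

# Identification from a test action followed by controller self-tuning.
import numpy as np
PLANT_GAIN = 1.0
T_DESIRED = 0.2          # closed-loop time constant we want to preserve, seconds
def step_response(time_constant, dt=0.001, t_end=20.0):
    """Record the open-loop reaction to a unit test step."""
    y, history = 0.0, []
    for _ in range(int(t_end / dt)):
        y += dt * (PLANT_GAIN * 1.0 - y) / time_constant
        history.append(y)
    return np.array(history), dt
def identify_time_constant(response, dt):
    """Estimate T as the time to reach 63.2% of the final value."""
    final = response[-1]
    idx = np.argmax(response >= 0.632 * final)
    return idx * dt
def retune_gain(t_identified):
    """P-gain that keeps the closed loop at T_DESIRED: T_cl = T / (1 + K*Kc)."""
    return (t_identified / T_DESIRED - 1.0) / PLANT_GAIN
for t_true in (1.0, 2.5):                 # time constant drifts as viscosity changes
    response, dt = step_response(t_true)
    t_hat = identify_time_constant(response, dt)
    print(f"true T={t_true}, identified T={t_hat:.2f}, new gain={retune_gain(t_hat):.2f}")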

The second level of adaptation of robotic systems is characterized by the inclusion in the control device of additional information tools that ensure the collection and processing of data on the state of the external environment. Based on the analysis of changes in the external environment, the robot's control program is corrected, allowing it to achieve the goal under the new conditions. Although at this level of adaptation correction of the program actions is allowed only within small limits, the practical effect of using such adaptive control systems is significant. An example is robotic electric arc welding of large-sized products. In this technological process, it is difficult to ensure the constancy of the spatial location of the joint line of the welded parts from product to product. Therefore, the welding robot must be able to adjust the programmed trajectory of the electrode in accordance with the actual position of the joint line, measured by special sensors.

The concept of the control goal for third-level adaptive robotic systems stems from the requirement to achieve maximum productivity while ensuring the absence of defects. Characteristic of this level of adaptation are developed means for collecting information about the external environment, self-diagnosis and, possibly, self-repair of components of the controlled production system. Let us explain this with examples.

One of the most difficult operations from an automation point of view is the abrasive cleaning of castings. Its features are the curvilinear shape of the castings, the absence of base surfaces on them that could be taken as a reference for subsequent precise movements, and the wear of the abrasive tool, so performing abrasive cleaning or grinding of products with a program-controlled robot is almost impossible. The solution to this problem can only be found in the class of adaptive systems, by supplementing the robot control device with means for monitoring the quality of the casting's surface treatment, sensors for the cutting forces and for the wear of the abrasive tool.

The control system of the adaptive abrasive cleaning module, analyzing the degree of surface roughness, can decide to repeat the processing cycle for the current section of the part or command the robot to move the next section into the grinding zone. At the same time, using information about the cutting forces and assessing the wear of the abrasive tool, the adaptive control system can organize processing modes that are optimal in terms of productivity.

Another example of adaptation of a robotic system, in which a change in the control algorithm occurs, is a flexible production system, for example a machining one, which includes several units or dozens of metal-cutting machines united by an automatic transport and warehouse system. Such a system operates according to a given program until a failure occurs. If, for example, one of the machining centers fails, the control system of the flexible production system must quickly assess the situation, make a decision on subsequent actions, develop - possibly at the cost of reduced productivity - a new technological scheme for the sequential processing of the products produced by this flexible production system, and ensure the operation of the machines and vehicles according to the new route scheme until the repair team returns the failed machine to operation.

The considered levels of adaptation of robotic systems differ not so much in the number of additional devices that provide the collection and processing of information about changes in equipment parameters, the external environment and the nature of their interaction, as in the ability to organize systems capable of functioning under increasingly complex and unpredictable changes in operating conditions.

8. Features of adaptive control systems

The general principles of organizing an adaptive control system can be seen in the example of an industrial robot that removes parts from an overhead conveyor and places them in containers.

If the moment when a part passes a given position is known, then the task can be performed by a robot controlled according to a rigid program. To do this, it is enough to specify the coordinates of the positioning points: the initial position, the gripping position and the position of the container into which the parts are placed. The algorithm underlying the robot's action program can be represented as follows:

1 - set the coordinates of positioning points;

2 - move the gripping device to the initial position;

3 - move to the position of gripping the part;

4 - turn on the pneumatic gripping device;

5 - go to the starting position;

6 - move the gripping device with the part to the container;

7 - turn off the pneumatic gripping device;

8 - repeat from mark 2.

However, successful reloading of parts from the conveyor into containers will continue until even a slight deviation of the position of the part from that specified in the program occurs. The reason for the deviation may be uneven conveyor speed or swaying of the part. In this case, the part will be picked up incorrectly by the robot or not picked up at all. Naturally, the robot will not notice such a failure and will continue to make erroneous actions until a human operator intervenes and turns it off.

Failures due to uneven movement of the conveyor can be eliminated while maintaining the program operating mode. To do this, it is enough to equip the conveyor with a sensor that is triggered when a conveyor hook passes the specified gripping position, and to place a conditional operator in the control program between marks 2 and 3 that allows the transition to mark 3 only after the sensor signal is received. However, the introduction of a hook position sensor into the control system does not exclude failures due to swinging parts. In addition, program control is powerless if parts are hung on the conveyor incorrectly. It is obvious that only an adaptive control system can solve this task. To do this, the existing robotic module must be equipped not only with a hook position sensor, but also with means for recognizing parts and measuring the coordinates of the point at which to grip them. In this case, the algorithm of robot actions discussed above is modified into the following sequence:

1 - set the coordinates of the positioning points: the initial position and the container;

2 - move the gripping device to its original position;

3 - based on the signal from the hook position sensor, recognize the part, measure the coordinates of the gripping point and the orientation of the part;

4 - move to the gripping position, orient the gripping device relative to the axis of the part;

5 - turn on the pneumatic gripping device;

6 - go to the starting position;

7 - move the gripping device with the part to the container;

8 - turn off the pneumatic gripping device;

9 - repeat from mark 2.

Thus, the additional devices introduced into the control system and the modification of the original program make it possible to service a conveyor moving at an a priori unknown speed, with an arbitrary (within certain limits) arrangement of the parts.
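The control flow of the modified algorithm can be sketched as follows; the sensor and drive functions are hypothetical stubs standing in for the robot's real interfaces, and only the sequence of steps mirrors the list above.

# A compact sketch of the adaptive pick-and-place cycle.
import random
HOME = (0.0, 0.0, 0.5)              # step 1: positioning points set in advance
CONTAINER = (1.0, 0.5, 0.2)
def hook_sensor_triggered():        # fires when a conveyor hook passes the gripping zone
    return True
def recognize_part():               # vision system: part present? grip point? orientation?
    detected = random.random() > 0.1
    grip_point = (0.5 + random.uniform(-0.02, 0.02), 0.3, 0.3)   # swaying part
    orientation = random.uniform(-5.0, 5.0)                      # degrees
    return detected, grip_point, orientation
def move_to(point, orientation=None):
    print("moving to", point, "orientation", orientation)
def gripper(on):
    print("gripper", "on" if on else "off")
def adaptive_cycle():
    move_to(HOME)                                         # step 2
    if not hook_sensor_triggered():
        return
    detected, grip_point, orientation = recognize_part()  # step 3
    if not detected:                                      # part missing: skip, do not fail
        return
    move_to(grip_point, orientation)                      # step 4: orient gripper to the part
    gripper(True)                                         # step 5
    move_to(HOME)                                         # step 6
    move_to(CONTAINER)                                    # step 7
    gripper(False)                                        # step 8
for _ in range(3):                                        # step 9: repeat
    adaptive_cycle()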

9. Structure of adaptive control systems

Analyzing the functions of the program-controlled and the adaptive control system of the robot solving the problem under consideration, one can notice that they differ only in the devices that perceive information about the external environment, process this information and select the sequence in which the positioning points already present in the robot's program are visited.

The components of the adaptive and the program control system responsible for executing the selected sequence of visiting the given points turn out to be similar.

Thus, the main property of adaptive systems - achievement of the control goal in conditions of a non-deterministic external environment and drift of the robot's parameters - is reflected in the structure by two new elements: an information system, and a device for calculating the coordinates of the target points and the sequence of visiting them, using information about the changes that have occurred in the external environment and in the components of the robot.

The control functions of an adaptive robot are performed by a computing device, the level of complexity of which is determined by the level of adaptation of the robot. In the simplest case, it can be a microprocessor or microcomputer; for complex adaptive robotic systems, the computing device can be a multimicroprocessor network.

Modern adaptive robotic systems are characterized by combining, in one computing device, the function of adaptation to changes in the external environment and in the parameters of the robot's drives with a wide range of hardware and software for self-diagnosis and for the elimination of minor faults in the control system itself.

Although the microprocessor modules have the same structure, they perform different functions. Thus, one of them collects and processes external information, another provides communication with the terminal and interprets operator commands, a third calculates control actions and controls the operation of the robot's drives, and a fourth is responsible for communication with external technological equipment and with the upper control level of an automatic line or section.

A feature of this computing device structure is the capability of self-diagnosis and self-repair, which is implemented using a bus control unit (BKM). The functions of self-diagnosis and minor self-repair are among the most important in modern control systems for adaptive robots, since their implementation ensures trouble-free operation of the flexible production module even under conditions of failures and partial breakdown of the equipment.

By analyzing the order and parameters of the signals passing along the common bus, the BKM evaluates the serviceability of individual microprocessor modules and of the transceivers connecting the microprocessors to the common bus. If any microprocessor module fails, then, along with a message about the detected malfunction transmitted to the operator at the upper control level, the bus control unit generates a command to switch off the faulty module and transfer its functions to the operational microprocessors. If only a microprocessor's transceiver fails, then at the command of the BKM the structure of connections between the microprocessors can change. For example, using backup input/output channels, which, as a rule, have lower bandwidth than the common bus, it is possible to transfer information between microprocessors by connecting them according to the "each to each" principle.

Of course, self-repair of an adaptive system is a temporary, forced measure, since it somewhat reduces the performance of the computing device; but the survivability of the robotic module turns out to be very high.
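A toy sketch of the reassignment logic attributed above to the bus control unit (our own illustration of the idea, not the actual BKM logic): when a module stops responding, its functions are handed to a module that is still operational.

# Function reassignment after a module failure is detected on the common bus.
modules = {
    "sensor_processing": {"healthy": True,  "functions": ["collect external data"]},
    "operator_terminal": {"healthy": True,  "functions": ["interpret operator commands"]},
    "drive_control":     {"healthy": False, "functions": ["compute control actions"]},
    "equipment_link":    {"healthy": True,  "functions": ["talk to upper control level"]},
}
def reassign_failed_modules(modules):
    healthy = [name for name, m in modules.items() if m["healthy"]]
    for name, module in modules.items():
        if module["healthy"] or not module["functions"]:
            continue
        backup = healthy[0]                      # simplest policy: first healthy module
        print(f"BKM: {name} failed, moving {module['functions']} to {backup}")
        modules[backup]["functions"] += module["functions"]
        module["functions"] = []                 # the faulty module is switched off
reassign_failed_modules(modules)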

10. Software for control systems of adaptive robots

The functions of the adaptive robot software consist of servicing objects external to the control system: human operator, robot drives, information system, technological equipment and upper-level control computing device.

The control system interacts with the human operator in an active dialogue mode, during which the person performs the following actions:

Forms the work program, which can be presented either as a set of data describing the positioning points of the robot's gripping device and the control signals to the technological equipment, or as a set of instructions in a problem-oriented language;

Edits the work program using a data editor program or a text editor program, since, as stated above, a program can be either data or instructions;

Creates object and load modules of the working program, ensures the removal of old files, the inclusion of new ones, renaming and storing programs in the library;

Debugs the working program, i.e. with the support of the software, carries out its step-by-step execution, analyzes the debugging results and, if the quality of the program is satisfactory, gives a command for its execution;

Implements equipment health monitoring functions, in particular, checks communication channels with technological equipment, calibrates robot measuring systems and performs other diagnostic operations.

11. Main functions of the software

In relation to the robot's actuator - the manipulator - the functions of the software are wide and varied. Depending on the robot's level of intelligence, these may include: detailed analysis of the task; breaking it down into subtasks and elementary actions; planning the movement of a tool or gripping device to implement these actions; determining a sequence of positioning points, passing through which will reproduce the desired trajectory; and, finally, converting the coordinates of the tool positioning points into the required positions of the manipulator joints and generating drive control commands.

Important from the point of view of organizing the interaction of the flexible production modules that make up lines and sections is the support, by the robot's software, of information exchange with the control level above it.

Of course, there are flexible production cells with adaptive robots that operate completely autonomously. However, in this case the robot's control system and its software take on the coordination of all the components of the flexible production module. In addition, if malfunctions or failures occur, it is impossible to send a request for help to a higher-level control system.

On the other hand, if there is a communication channel between the adaptive robot and a top-level computer and the exchange process is supported on both sides by software, a unique opportunity arises to create a hierarchy of control levels with a clear division of tasks and an accompanying unification of the software and programming languages for each level.

In this case, the computer that controls the flexible production module, which, as a rule, is the upper level in relation to the robot, takes upon itself the coordination of the actions of the module's equipment, eliminating the possibility of emergency situations - for example, a collision of a manipulator with the moving parts of other devices or a collision of two manipulators operating in one zone - as well as diagnosing the module's equipment and a number of other functions performed by the adaptive robot's software when the module operates autonomously under its control.

When servicing information systems, the functions of an adaptive robot's software depend on the level of intelligence of its sensors. If the processing of information about the external environment is carried out by the sensory system itself, then the robot software only has to organize the reception of the data. Otherwise, its functions also include processing and selecting the information suitable for control purposes, as well as determining the addressee, among the program modules responsible for control, for whom this sensory information is intended.

In addition to the listed functions, the software must solve system-wide problems of processing interrupt signals, managing input/output of information, distributing computing resources, etc.

Assessing the above basic functions of adaptive robot software, one can notice their similarity with the functions of universal real-time operating systems. Indeed, if we compare the main components of universal operating systems and programming systems for adaptive robots, we can see their analogy.

Programming system of an adaptive robot:

Operator commands;

Work assignment;

Problem-oriented robot programming language;

Maintenance of external devices;

Ensuring exchange with the top level of management.

Real time operating system:

Monitor commands;

File system;

Programming languages;

I/O management;

Network sharing support.

This analogy makes it possible, when designing robot programming systems, to use not only the experience accumulated in the theory of universal operating systems, but also the operating systems themselves.

Conclusion

The development of optimal control theory is associated with increasing requirements for the speed and accuracy of control systems. Increasing performance is possible only with the correct distribution of limited control resources, and therefore taking control constraints into account has become one of the central problems in the theory of optimal control. On the other hand, the construction of high-precision control systems has led to the need to take into account the mutual influence of individual parts (channels) of the system when synthesizing regulators. The synthesis of such complex multidimensional (multivariable) systems is also a subject of optimal control theory.

To date, a mathematical theory of optimal control has been constructed. On its basis, methods for constructing time-optimal systems and procedures for the analytical design of optimal regulators have been developed. Analytical design of regulators, together with the theory of optimal observers (optimal filters), forms a set of methods that are widely used in the design of modern complex control systems.
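As a small example of the analytical design of an optimal regulator, the sketch below computes a continuous-time LQR gain for a double integrator by solving the algebraic Riccati equation with SciPy; the weighting matrices are illustrative choices.

# Linear-quadratic regulator design via the algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # double integrator: x1' = x2, x2' = u
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])          # state weighting in the quadratic criterion
R = np.array([[1.0]])             # control weighting
P = solve_continuous_are(A, B, Q, R)          # solve A'P + PA - PB R^-1 B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)               # optimal state-feedback gain, u = -Kx
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))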

The complexity of the problems of optimal control theory required a broader mathematical base for its construction. This theory uses the calculus of variations, the theory of differential equations and matrix theory. The development of optimal control on this basis led to the revision of many sections of the theory of automatic control, and therefore optimal control theory is sometimes called modern control theory. Although this exaggerates the role of just one of its branches, the development of the theory of automatic control in recent decades has been determined largely by the development of this branch.

Soviet scientists A. N. Kolmogorov, L. S. Pontryagin, N. N. Krasovsky and A. M. Letov, and foreign scientists N. Wiener, R. Bellman and R. E. Kalman, made a great contribution to the construction of the theory of optimal control.

The development of the theory of adaptive control has been driven by the ever-increasing number of complex control objects of various physical natures whose parameters are not defined. The reasons for this uncertainty may be the variety of operating modes of the objects, the impossibility of their experimental study to determine parameters without disrupting the technological process, and, finally, short design deadlines that do not allow time to be spent on research and calculations to determine the parameters of the object's dynamic model.

The controller of an object with uncertain and changing parameters must change (adapt) so that the performance and accuracy of the system remain unchanged.

Bibliography

1. Tabak D., Kuo B. Optimal control and mathematical programming. - M.: Nauka, 1975.

2. Tyukin I. Yu., Terekhov V. A., Adaptation in nonlinear dynamic systems, (Series: Synergetics: from past to future), St. Petersburg: LKI, 2008.

3. Aleksandrov A.G. Optimal and Adaptive Systems. M.: Vysshaya Shkola, 1989.

4. Fundamentals of Robotics / Ed. E.P. Popov and G.V. Pismenny. M., 1990.

5. Sensing Systems and Adaptive Industrial Robots / Ed. E.P. Popov and V.V. Klyuev. M., 1985.

6. Control Systems of Industrial Robots / Ed. I.M. Makarov and V.A. Chiganov. M., 1984.


Introduction to the work

Relevance of the problem. One of the characteristic trends of the modern stage of the development of science is the formation of new directions that unite various areas of scientific knowledge. Research in the field of system analysis related to decision-making problems belongs to precisely such areas. As academician N.N. Moiseev notes, system analysis is a discipline that deals with decision-making problems in conditions where the choice of an alternative requires the analysis of a large amount of complex information. Therefore, the methodological concepts and the methods for implementing system analysis are based on the general theory of systems and on the disciplines dealing with decision-making problems - operations research, general control theory and game theory.

When constructing mathematical models of the functioning or development of even relatively small real systems, researchers are faced with the need to take into account the complex interrelations of the model components that have an effective impact on the implementation of development alternatives and the achievement of set goals. A significant number of complex management systems are characterized by conflict in the decision-making process, which is a consequence of the presence of a number of management subjects with a system of different ideas about both the global goal of the system’s development and the local goals and criteria for the development of its elements.

Another characteristic feature of complex systems is their hierarchical structure, which is expressed both in the presence of vertically subordinate subsystems in the system, and in the hierarchy of the management process.

The hierarchical management structure in a complex system is a sequence of management levels following each other in order of a certain priority. One of the reasons for the emergence of a hierarchical structure in control and decision-making systems is the large amount of information about controlled processes in the system and the impossibility of processing this information by a single control center. Another reason is the decentralization of the decision-making process that exists in real systems, when the subsystems subordinate to the center develop control actions on the basis of the center's decisions while taking into account their own interests.

To date, two-level static and dynamic systems, as well as some classes of systems with a hierarchical structure of a particular type, have been best studied.

Management problems in conflict two-level systems, first formulated in the 30s in connection with research in the field of economics, were then developed and studied by many Soviet and foreign researchers.

The fundamentals of the information theory of hierarchical systems were developed by N.N. Moiseev, Yu.B. Germeyer, A.F. Kononenko and their students. One of the main tasks of modeling conflict systems, as N.N. Vorobyov notes, is the design and analysis of optimality principles. The question of the principles of optimality and various modifications of equilibrium continues to attract the attention of specialists and is widely discussed in the literature, for example, in the works of E.I. Vilkas, N.N. Vorobyov, V.S. Vubelis, E. van Damme, D.M. Kreps and others.

The development of the problems of conflict dynamic systems with a hierarchical structure is based on the fundamental results in the theory of differential games obtained in the late 1960s and early 1970s by L.S. Pontryagin and N.N. Krasovsky and developed in the works of A.F. Kononenko, V.N. Lagunov, M.S. Nikolsky, Yu.S. Osipov, N.N. Petrov, L.A. Petrosyan, B.N. Pshenichny, N.Yu. Satimov, A.I. Subbotin, G.V. Tomsky and many other Soviet and foreign mathematicians.

The need to study hierarchical differential games was emphasized in the works of A.F. Kononenko, A.F. Kleimenov, N.S. Kukushkin, A. Bagchi and T. Basar, who made a significant contribution to the development of the problems arising here.

The fundamental problem that in recent years has attracted the attention of many specialists in the theory of conflict dynamic systems is the problem of dynamic stability of solutions. This is due to the fact that dynamic stability is the most important factor in the feasibility over time of the chosen principles of optimality. The formulation of this problem was first carried out by L.A. Petrosyan in the late 1970s and was further developed in the works of N.N. Danilov, S.V. Chistyakov, V.V. Zakharov and other researchers. Also worth noting are later works of foreign scientists D. Kras, S. Holley and V. Hiller, in which the problem of "time consistency" of optimal solutions, similar to the problem of dynamic stability, is studied. Analysis of this problem for various classes of differential games shows the urgent need to obtain conditions under which dynamic stability takes place. Considering that almost all optimality principles used in conflict hierarchical control systems lack this property, the pressing problem is to develop methods for regularizing solutions of hierarchical differential games that ensure dynamic stability of solutions.

In the proposed work, a significant place is given to these problems. From an applied point of view, the value of any approach is determined by the practical value of the developed methods. Therefore, in this work, theoretical results are applied in the study of specific hierarchical models of decision making.

The goal of the work consists in the development of the theory of control and decision-making in conflict hierarchical systems of a general type;

the application of traditional, and the construction and study of new, principles of optimality in static and dynamic conflict control systems with a hierarchical structure;

studying the properties of solutions to hierarchical games of a general form, comparing the principles of optimality, identifying patterns in the implementation of solutions for an arbitrary optimality principle;

studying the problem of dynamic stability of optimality principles and solutions in hierarchical differential games and developing methods for regularizing solutions on various classes of strategies;

building specific models of hierarchical systems and applying the results obtained to analyze and find optimal solutions.

Scientific novelty. For the first time, the problem of control in conflict hierarchical systems of a general form is formulated. Based on the proposed approaches, new principles of optimality and procedures for selecting optimal solutions in static and dynamic systems were developed. A new concept was introduced, and necessary and sufficient conditions for the hierarchical stability of solutions to games of general form for an arbitrary principle of optimality were obtained.

The problem of dynamic stability of solutions of hierarchical differential games is formulated, and theorems on necessary and sufficient conditions for the dynamic stability of solutions are proved for various classes of games. For the first time, methods for regularizing solutions of hierarchical differential games have been developed that ensure their dynamic, and also monotonic dynamic, stability.

A class of hierarchical games with discontinuous payoff functions is considered, for which necessary and sufficient conditions for the existence and uniqueness of solutions are formulated, and constructive methods for finding solutions in these games are proposed.

Practical value. The methods and approaches developed in the dissertation, the obtained conditions for the existence and uniqueness of solutions to hierarchical games, and the methods for regularizing solutions were used in the study of a number of game-theoretic models of decision-making in conflicting ecological-economic systems. The theoretical results obtained can be used for the further development of the theory of hierarchical systems and for the study of the problem of dynamic stability of solutions and of optimality principles in hierarchical differential games.

Research methods. The work uses the methodology and formal apparatus of system analysis, based on the concepts and results of the general theory of dynamic systems, control theory and game theory. The construction and analysis of a number of models of hierarchical control systems relied on methods of the theory of ordinary differential equations, partial differential equations and linear programming.

Approbation of the work. The main provisions and results of the work were reported and discussed at the 1st Leningrad Symposium on Game Theory (1978), at the 3rd All-Union Conference on Operations Research (Gorky, 1978), at the All-Union Seminar "Applied Aspects of Control of Complex Systems" (Kemerovo, 1983), at the All-Union Conference "Problems of Theoretical Cybernetics" (Irkutsk, 1985), at the All-Union School "Optimal Control. Geometry and Analysis" (Kemerovo, 1986), at the All-Union School "Mathematical Methods in Ecology" (Chita, 1986), at the All-Union School "Mathematical Problems of Biology" (Chita, 1988), and at scientific seminars of the Computing Center of the USSR Academy of Sciences, the Institute of Cybernetics of the Ukrainian Academy of Sciences, the Institute of Socio-Economic Problems of the USSR Academy of Sciences, the Faculty of Computational Mathematics and Cybernetics of Moscow State University named after M.V. Lomonosov, the Irkutsk Computing Center of the Siberian Branch of the USSR Academy of Sciences, the Institute of Natural Resources of the Siberian Branch of the USSR Academy of Sciences, the Scientific Research Center for Environmental Safety of the USSR Academy of Sciences, the Computing Center of the Academy of Sciences of the Armenian SSR, and the Department of Control Theory and the Department of Mathematical Statistics, Reliability Theory and Queuing of the Faculty of Applied Mathematics and Control Processes, Leningrad State University.

Structure of the work. The dissertation consists of an introduction, three chapters, nineteen paragraphs, a conclusion and a list of references. The total volume of the dissertation is 317 pages. The list of references includes 133 titles.

The principle of optimality is understood as that set of rules with the help of which the decision maker determines his action (decision, alternative, strategy, management decision) that best contributes to the achievement of his goal. The principle of optimality is chosen based on the specific conditions of decision-making: the number of participants, their capabilities and goals, the nature of the conflict of interests (antagonism, non-antagonism, cooperation, etc.).

In decision-making models, especially in game theory, a large number of formal principles of optimal behavior have been developed. We will focus here on just a few of them.

The principle of maximization (minimization). This principle applies mainly in mathematical programming problems (see (2)-(4)).

The principle of criteria convolution. It is used when "optimizing" many criteria by one coordinating center (the multi-criteria optimization problem (5)). For each of the criteria (objective functions)

f_1(u), ..., f_n(u)

experts assign "weights" (numbers)

alpha_1, ..., alpha_n (as a rule, alpha_i >= 0 and alpha_1 + ... + alpha_n = 1),

where alpha_i shows the "importance or significance" of the i-th criterion. Next, a solution x* from the set of feasible solutions X is selected so as to maximize (or minimize) the convolution of criteria:

alpha_1 f_1(x) + alpha_2 f_2(x) + ... + alpha_n f_n(x) -> max (min), x in X.
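By way of illustration, here is a minimal sketch of the convolution principle in Python; the criteria functions, weights and feasible set below are hypothetical examples, not taken from the text.

```python
# The criteria convolution principle: among feasible solutions, pick the one
# that maximizes the weighted sum of the individual criteria.

def convolution_optimum(feasible, criteria, weights):
    """Return the x in `feasible` maximizing sum_i weights[i] * f_i(x)."""
    def score(x):
        return sum(w * f(x) for w, f in zip(weights, criteria))
    return max(feasible, key=score)

if __name__ == "__main__":
    # Hypothetical two-criteria example: "profit" f1 and "quality" f2 of a plan x.
    f1 = lambda x: 10 * x - x ** 2       # profit
    f2 = lambda x: 5 - abs(x - 3)        # quality
    X = range(0, 11)                     # feasible set of plans
    alpha = [0.7, 0.3]                   # expert weights for f1 and f2
    print("best plan:", convolution_optimum(X, [f1, f2], alpha))
```

Changing the expert weights alpha_i shifts the optimum, which is the main practical difficulty with this principle.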

The principle of lexicographic preference. This is another principle of optimality in multicriteria optimization problems. First, the criteria are ranked by "importance". Let the following ranking be made:

f_1(x), f_2(x), ..., f_n(x).

A solution x* in X is "better" than a solution x in X in the sense of lexicographic preference if one of the following n+1 conditions is met (a sketch in code follows the list):

1) f_1(x*) > f_1(x);

2) f_1(x*) = f_1(x), f_2(x*) > f_2(x);

3) f_1(x*) = f_1(x), f_2(x*) = f_2(x), f_3(x*) > f_3(x);

...

n) f_i(x*) = f_i(x) for i = 1, ..., n-1, f_n(x*) > f_n(x);

n+1) f_i(x*) = f_i(x) for i = 1, ..., n.
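A minimal sketch, assuming hypothetical criteria, of how the n+1 conditions above can be checked in Python:

```python
# Lexicographic preference: x_star is at least as good as x if the first
# criterion (in order of importance) on which the two solutions differ
# favors x_star, or if they coincide on all criteria (condition n+1).

def lex_at_least_as_good(x_star, x, criteria):
    """Check whether one of the n+1 conditions above holds."""
    for f in criteria:                 # criteria are listed in order of importance
        if f(x_star) > f(x):
            return True
        if f(x_star) < f(x):
            return False
    return True                        # all n criteria coincide (condition n+1)

if __name__ == "__main__":
    # Hypothetical ranking: f1 is the most important criterion, then f2.
    f1 = lambda x: x[0]
    f2 = lambda x: -x[1]
    print(lex_at_least_as_good((3, 1), (3, 2), [f1, f2]))  # True: f1 ties, f2 decides
```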

The minimax principle. It is used when the interests of two opposing sides collide (an antagonistic conflict). Each decision maker first calculates the "guaranteed" result of each of its strategies (alternatives) and then selects the strategy for which this guaranteed result is the greatest among its strategies. Such a choice does not give the decision maker the "maximum gain", but it is the only reasonable principle of optimality in conditions of an antagonistic conflict; in particular, any risk is excluded.
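For a finite antagonistic (matrix) game this principle reduces to a maximin computation over the payoff matrix. A minimal sketch with a hypothetical matrix:

```python
# The minimax (maximin) principle for a matrix game: each row strategy i is
# valued by its guaranteed payoff min_j A[i][j]; the decision maker picks the
# row whose guaranteed payoff is the largest.

def maximin_strategy(payoff):
    """Return (best_row, guaranteed_value) for the row player."""
    guarantees = [min(row) for row in payoff]
    best_row = max(range(len(payoff)), key=lambda i: guarantees[i])
    return best_row, guarantees[best_row]

if __name__ == "__main__":
    # Hypothetical payoff matrix of the row player (the opponent minimizes).
    A = [[4, 1, 3],
         [2, 2, 2],
         [0, 5, 1]]
    print(maximin_strategy(A))  # (1, 2): row 1 guarantees a payoff of 2
```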

The principle of equilibrium. This is a generalization of the minimax principle to the case when many parties participate in the interaction, each pursuing its own goal (there is no direct confrontation). Let the number of decision makers (participants in a non-antagonistic conflict) be n. A set of chosen strategies (a situation) x_1*, x_2*, ..., x_n* is called an equilibrium if a unilateral deviation of any decision maker from this situation can only lead to a decrease in his own "gain". In an equilibrium situation the participants do not receive a "maximum" payoff, but they are forced to stick to it.
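For two participants with finite strategy sets the equilibrium condition can be checked directly on a pair of payoff matrices. A minimal sketch with hypothetical payoffs:

```python
# The equilibrium principle for a two-player bimatrix game: the pair of
# strategies (i, j) is an equilibrium if neither player can increase his own
# payoff by a unilateral deviation.

def is_equilibrium(i, j, payoff1, payoff2):
    row_ok = all(payoff1[i][j] >= payoff1[k][j] for k in range(len(payoff1)))
    col_ok = all(payoff2[i][j] >= payoff2[i][l] for l in range(len(payoff2[0])))
    return row_ok and col_ok

def pure_equilibria(payoff1, payoff2):
    return [(i, j)
            for i in range(len(payoff1))
            for j in range(len(payoff1[0]))
            if is_equilibrium(i, j, payoff1, payoff2)]

if __name__ == "__main__":
    # Hypothetical "prisoner's dilemma"-style payoffs (0 = cooperate, 1 = defect).
    P1 = [[3, 0],
          [5, 1]]
    P2 = [[3, 5],
          [0, 1]]
    print(pure_equilibria(P1, P2))  # [(1, 1)]: mutual defection is the only equilibrium
```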

The Pareto optimality principle. This principle regards as optimal those situations (sets of strategies x_1, ..., x_n) in which improving the "payoff" of an individual participant is impossible without worsening the "payoff" of the other participants. This principle makes weaker demands on the concept of optimality than the equilibrium principle, so Pareto-optimal situations almost always exist.
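A minimal sketch of filtering out Pareto-dominated situations, given a hypothetical finite list of payoff vectors:

```python
# Pareto optimality: keep only those payoff vectors that are not dominated,
# i.e. no other feasible vector is at least as good for everyone and strictly
# better for someone.

def dominates(u, v):
    """u Pareto-dominates v: u >= v componentwise and u > v in some component."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_optimal(payoff_vectors):
    return [u for u in payoff_vectors
            if not any(dominates(v, u) for v in payoff_vectors if v != u)]

if __name__ == "__main__":
    # Hypothetical payoff vectors of three participants in four situations.
    outcomes = [(3, 2, 1), (2, 2, 2), (3, 2, 2), (1, 1, 3)]
    print(pareto_optimal(outcomes))  # [(3, 2, 2), (1, 1, 3)]
```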

The principle of non-dominated outcomes. This principle is representative of many optimality principles in cooperative games (collective decision making) and leads to the concept of a "core" of decisions. All participants unite and, through joint coordinated actions, maximize the "total gain". The principle of non-dominance is one of the principles of "fair" division between participants: a division belonging to the "core" is one to which none of the participants can reasonably object. There are other principles for the "optimal" division of the total winnings.
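For a cooperative game with a finite number of participants, membership of a proposed division in the "core" can be checked by enumerating the coalitions. A minimal sketch with a hypothetical characteristic function v:

```python
from itertools import combinations

# The "core" of a cooperative game: a division x of the total gain v(N)
# belongs to the core if the whole gain is divided and no coalition S
# receives less than its own guaranteed value v(S), so no coalition can
# reasonably object.

def in_core(x, v, players, eps=1e-9):
    """x: dict player -> share; v: dict frozenset(coalition) -> value."""
    grand = frozenset(players)
    if abs(sum(x.values()) - v[grand]) > eps:               # whole gain divided
        return False
    for r in range(1, len(players)):
        for coalition in combinations(players, r):
            s = frozenset(coalition)
            if sum(x[i] for i in s) < v.get(s, 0.0) - eps:  # coalition objects
                return False
    return True

if __name__ == "__main__":
    # Hypothetical 3-player game: any pair earns 60, the grand coalition 90.
    players = (1, 2, 3)
    v = {frozenset(c): 60.0 for c in combinations(players, 2)}
    v[frozenset(players)] = 90.0
    print(in_core({1: 30.0, 2: 30.0, 3: 30.0}, v, players))  # True: equal split is in the core
```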

Principles of stability (threats and counter-threats). The idea behind all threat-counter-threat stability principles is the following. Each coalition of participants puts forward its proposal, accompanying it with a real threat: if the proposal is not accepted by the remaining participants, actions will be taken that worsen the position of the other participants and do not worsen (and possibly improve) the position of the threatening coalition. The optimal solution is one in which, against any threat by any coalition, there is a counter-threat from some other coalition.

Arbitration schemes. Economic conflicts suggest the idea of a "social arbiter". It is undesirable for conflicts of interest to escalate into, for example, open threats and counter-threats. There must be social mechanisms that allow the preferences and strategic capabilities of each participant to be taken into account and that provide a "fair" resolution of the conflict. Such a mechanism, agreed upon in advance, whether an individual or a voting system, is called an arbiter. In game theory, an optimal solution in the sense of an arbitration scheme is constructed using a system of axioms involving such concepts as the status quo, Pareto optimality, linearity of alternatives, independence from "ranks", etc.
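One classical arbitration scheme built on axioms of this kind is the Nash bargaining solution, which selects, among the alternatives acceptable to everyone, the one that maximizes the product of the participants' gains over the status quo. It is used here only as an illustration, with a hypothetical set of alternatives and status quo point:

```python
# Nash's arbitration (bargaining) scheme over a finite set of alternatives:
# among the alternatives giving every participant at least the status quo
# payoff, choose the one maximizing the product of gains over the status quo.

def nash_arbitration(alternatives, status_quo):
    """alternatives: list of payoff tuples; status_quo: the disagreement point."""
    acceptable = [u for u in alternatives
                  if all(a >= d for a, d in zip(u, status_quo))]

    def nash_product(u):
        prod = 1.0
        for a, d in zip(u, status_quo):
            prod *= (a - d)
        return prod

    return max(acceptable, key=nash_product)

if __name__ == "__main__":
    d = (1, 1)                                  # hypothetical status quo point
    options = [(4, 2), (3, 3), (2, 4), (1, 5)]  # hypothetical feasible agreements
    print(nash_arbitration(options, d))         # (3, 3): gain product (3-1)*(3-1) = 4 is maximal
```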

Let us further consider the issues of optimal decision-making under conditions of uncertainty. To develop optimal behavior of the decision maker, it is useful to model such a situation as an antagonistic game of two persons, where nature is considered the opponent of the decision maker. Nature is endowed with all the possibilities conceivable under the given conditions.

“Games with nature” have their own specific (albeit reminiscent of the minimax principle) principles for the optimal choice of solution.

The principle of extreme pessimism (the Wald criterion). According to this principle, the game with nature (decision-making under conditions of uncertainty) is played as a game against a reasonable, aggressive opponent who does everything to prevent us from achieving success. A strategy of the decision maker is considered optimal if it guarantees a gain no less than that "allowed by nature".

The minimax risk principle (the Savage criterion). This principle is also pessimistic, but when choosing the optimal strategy it advises focusing not on the "gain" but on the risk. Risk is defined as the difference between the maximum gain of the decision maker (given complete information about the state of nature) and the actual gain (given ignorance of the state of nature). The optimal strategy is the one for which the maximum risk is minimal.

The principle of pessimism-optimism (the Hurwicz criterion). This criterion recommends that, when choosing a solution, one should be guided neither by extreme pessimism ("always count on the worst!") nor by extreme optimism ("maybe things will work out by themselves!"). According to this criterion, a weighted average of the payoffs corresponding to extreme pessimism and extreme optimism is maximized, with the "weight" chosen on the basis of subjective considerations about the danger of the situation.
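The three criteria for games with nature can be computed on one and the same payoff matrix. A minimal sketch with a hypothetical matrix (rows are the decision maker's strategies, columns are states of nature):

```python
# Decision criteria under uncertainty on a single payoff matrix: rows are the
# decision maker's strategies, columns are states of nature, A[i][j] is the
# payoff of strategy i in state j.

def wald(A):
    """Extreme pessimism: maximize the guaranteed (minimum) payoff."""
    return max(range(len(A)), key=lambda i: min(A[i]))

def savage(A):
    """Minimax risk: minimize the maximum regret over the states of nature."""
    col_max = [max(A[i][j] for i in range(len(A))) for j in range(len(A[0]))]
    regret = [[col_max[j] - A[i][j] for j in range(len(A[0]))] for i in range(len(A))]
    return min(range(len(A)), key=lambda i: max(regret[i]))

def hurwicz(A, pessimism=0.6):
    """Weighted average of the worst and the best payoff of each strategy."""
    score = lambda i: pessimism * min(A[i]) + (1 - pessimism) * max(A[i])
    return max(range(len(A)), key=score)

if __name__ == "__main__":
    # Hypothetical payoff matrix: 3 strategies, 4 states of nature.
    A = [[2, 2, 2, 2],
         [1, 1, 1, 9],
         [0, 4, 0, 8]]
    print(wald(A), savage(A), hurwicz(A, 0.6))  # prints 0 2 1: the criteria disagree
```

On this matrix the Wald, Savage and Hurwicz criteria point to three different strategies, which illustrates how strongly the choice of optimality principle shapes the final decision.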

Dynamic stability concept. All the above principles of optimality are formulated in relation to static decision-making problems. An attempt to use them in dynamic problems can be accompanied by all sorts of complications.

The main complication stems from the specific features of dynamic processes. It is necessary that one or another principle of optimality, chosen in the initial state of the process (at the initial point in time), remain optimal in any current state (at any subsequent point in time) until the end of the dynamic process. This property is called dynamic stability.

The ethological study of behavior is carried out on the basis of a structural-dynamic approach. The most important sections of ethology are:

  1. morphology of behavior - description and analysis of elements of behavior (postures and movements);
  2. functional analysis - analysis of external and internal factors of behavior;
  3. comparative studies - evolutionary genetic analysis of behavior [Deryagina, Butovskaya, 1992, p. 6].

Within the framework of the systems approach, behavior is defined as a system of interconnected components that provides an integrated optimal response of the body when interacting with the environment; this is a process that occurs in a certain period of time [Deryagina, Butovskaya 1992, p.7]. The components of the system are “external” motor reactions of the body that arise in response to changes in the environment. The object of ethological research is both instinctive forms of behavior and those associated with long-term learning processes (social traditions, tool activity, non-ritual forms of communication).

Modern behavior analysis is based on the following principles: 1) hierarchy; 2) dynamism; 3) quantitative accounting; 4) systems approach, taking into account that forms of behavior are closely interrelated.

Behavior is organized according to a hierarchical principle. Therefore, different levels of integration are distinguished in the behavioral system:

  1. elementary motor acts;
  2. postures and movements;
  3. sequences of interrelated poses and movements;
  4. ensembles represented by complexes of action chains;
  5. functional spheres are complexes of ensembles associated with a specific type of activity [Panov, 1978].

The central property of a behavioral system is the orderly interaction of its components to achieve the final goal. The relationship is ensured through chains of transitions between elements and can be considered as a specific ethological mechanism for the functioning of this system [Deryagina, Butovskaya, 1992, p. 9].

The basic concepts and methods of human ethology are borrowed from animal ethology, but they are adapted to take into account the unique position of man among other members of the animal kingdom. An important feature of ethology, in contrast to cultural anthropology, is the use of methods of direct non-participant observation (although methods of participant observation are also used). Observations are organized in such a way that the observed person does not suspect it or has no idea about the purpose of the observations. The traditional object of study by ethologists is behavior characteristic of humans as a species. Human ethology pays special attention to the analysis of universal manifestations of nonverbal behavior. The second aspect of research is the analysis of models of social behavior (aggression, altruism, social dominance, parental behavior).

An interesting question is about the boundaries of individual and cultural variability of behavior. Behavioral observations can also be made in the laboratory. But in this case, most often we are talking about applied ethology (the use of ethological methods in psychiatry, psychotherapy, or for experimental testing of a specific hypothesis) [Samokhvalov et al., 1990; Cashdan, 1998; Grummer et al., 1998].

Whereas human ethology initially focused on questions about how and to what extent human actions and deeds are programmed, which led to the opposition of phylogenetic adaptations to the processes of individual learning, attention is now paid to the study of behavioral patterns in different cultures (and subcultures) and to the analysis of the processes of behavior formation in the course of individual development. Thus, at the present stage this science studies not only behavior that has a phylogenetic origin, but also takes into account how behavioral universals can be transformed within a culture. The latter circumstance contributed to the development of close cooperation between ethologists and art historians, architects, historians, sociologists and psychologists. As a result of such cooperation, it was shown that unique ethological data can be obtained through a thorough analysis of historical materials: chronicles, epics, annals, literature, the press, painting, architecture and other objects of art [Eibl-Eibesfeldt, 1989; Dunbar et al, 1995; Dunbar, Spoors, 1995].

Levels of Social Complexity

In modern ethology it is considered obvious that the behavior of individuals in social animals and humans largely depends on the social context [Hinde, 1990]. Social influences are complex, so R. Hinde proposed distinguishing several levels of social complexity. In addition to the individual, the level of social interactions, the level of relationships, the group level and the level of society are distinguished. All levels mutually influence each other and develop under the constant influence of the physical environment and culture. It should be clearly understood that the patterns of behavior functioning at a more complex social level are not reducible to the sum of the manifestations of behavior at a lower level of organization. A separate complementary concept is required to explain the behavioral phenomenon at each level. Thus, aggressive interactions between siblings are analyzed in terms of the immediate stimuli underlying this behavior, while the aggressive nature of the relationship between siblings can be considered from the point of view of the concept of "sibling competition".

Within the framework of this approach, the behavior of an individual is considered a consequence of his interaction with other members of the group. It is assumed that each of the interacting individuals has certain ideas about the probable behavior of the partner in a given situation. An individual obtains the necessary ideas on the basis of previous experience of communication with other representatives of its species. Contacts between two unfamiliar individuals, which are clearly hostile in nature, are often limited to a series of demonstrations. Such communication is often enough for one of the partners to admit defeat and demonstrate submission. If specific individuals interact many times, then certain relationships arise between them, which unfold against the general background of social contacts. The social environment, for both humans and animals, is a kind of "shell" that surrounds individuals and transforms the impact of the physical environment on them. Sociality in animals can be considered a universal adaptation to the environment. The more complex and flexible the social organization, the greater the role it plays in protecting individuals of a given species. The plasticity of social organization could have served as a basic adaptation of our common ancestors with chimpanzees and bonobos, providing the initial prerequisites for hominization [Butovskaya, Fainberg, 1993].

The most important problem of modern ethology is the search for the reasons why the social systems of animals and humans are always structured, most often according to a hierarchical principle. The real role of the concept of dominance in understanding the essence of social connections in society is constantly discussed. Networks of relationships between individuals are described, in both animals and humans, in terms of kinship and reproductive ties, systems of dominance, and individual selectivity. They can overlap (for example, rank, kinship and reproductive relationships), but they can also exist independently of each other (for example, the networks of a teenager's relationships in the family and at school with peers in modern human society).

Of course, one should be very careful when drawing direct parallels in a comparative analysis of animal and human behavior, because all levels of social complexity influence each other. Many types of human activity are specific and symbolic in nature and can be understood only with knowledge of the social experience of a given individual and of the characteristics of the socio-cultural structure of the society [Eibl-Eibesfeldt, 1989]. However, the undeniable advantage of the ethological approach when discussing the problems of continuity of the principles of social organization is the unification of methods for assessing and describing the behavior of primates, including humans, which allows an objective assessment of the basic parameters of similarities and differences. R. Hinde's scheme makes it possible to eliminate the main misunderstandings between representatives of the biological and social sciences regarding the possibilities of comparative analysis of human and animal behavior and to predict at what levels of organization real similarities can be sought.