
The concept of model and simulation. Concepts of model, physical phenomenon and environment

A model is a material or ideal object that replaces the system under study and adequately reflects its essential aspects. The model of an object reflects its most important qualities while neglecting minor ones.

A computer model, or computational (numerical) model, is a computer program running on a single computer, a supercomputer, or a set of interacting computers (computing nodes) that implements a representation of an object, system, or concept in a form different from the real one but amenable to algorithmic description; it also includes the set of data characterizing the properties of the system and the dynamics of their change over time.

By computer reconstruction we will mean the development of a computer model of a certain physical phenomenon or environment.

A physical phenomenon is a process of change in the position or state of a physical system. A physical phenomenon is characterized by changes in certain physical quantities related to each other. For example, physical phenomena include all known kinds of interactions between material particles.

Figure 1 shows a computer dynamic model of the changes in the magnetic field formed by two magnets, depending on the position and orientation of the magnets relative to each other.

Figure 1 - Computer dynamic model of magnetic field change

The presented computer model reflects the dynamics of the magnetic field parameters by means of graphic visualization with isolines. The isolines of the magnetic field are constructed in accordance with physical dependencies that take into account the polarity of the magnets at their specific location and orientation in the plane.
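The construction described above can be sketched in code. The fragment below is a minimal illustration, not the model of Figure 1 itself: it superposes the planar fields of two point dipoles on a grid, and the isolines of the model would be the level sets of the resulting field magnitude. The dipole formula (with constants folded into the moment) and all numbers are assumptions for illustration.

```python
import numpy as np

def dipole_field(x, y, pos, m):
    """Planar field of a point dipole at `pos` with moment `m`
    (illustrative formula; physical constants folded into the moment)."""
    rx, ry = x - pos[0], y - pos[1]
    r2 = rx ** 2 + ry ** 2
    r = np.sqrt(r2)
    m_dot_rhat = (m[0] * rx + m[1] * ry) / r            # m . r_hat
    bx = (3 * m_dot_rhat * rx / r - m[0]) / r2 ** 1.5   # (3(m.r_hat)r_hat - m)/r^3
    by = (3 * m_dot_rhat * ry / r - m[1]) / r2 ** 1.5
    return bx, by

def field_magnitude(x, y, magnets):
    """Superpose the fields of all magnets and return |B| at (x, y)."""
    bx = by = 0.0
    for pos, m in magnets:
        dbx, dby = dipole_field(x, y, pos, m)
        bx, by = bx + dbx, by + dby
    return np.hypot(bx, by)

# two magnets on the x axis with parallel moments; the "dynamic" part of the
# model amounts to editing this list and recomputing the field
magnets = [((-1.0, 0.0), (1.0, 0.0)), ((1.0, 0.0), (1.0, 0.0))]
xs, ys = np.meshgrid(np.linspace(-2, 2, 80), np.linspace(-2, 2, 80))
B = field_magnitude(xs, ys, magnets)   # isolines = level sets of this array
```

Passing the resulting array to any contouring routine would reproduce an isoline picture; moving or rotating the magnets corresponds to changing `magnets` and recomputing.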

Figure 2 illustrates a computer simulation model of water flow in an open channel bounded by the walls of a long glass tray.

Figure 2- Computer simulation model of water flow in an open channel

Calculation of the open-flow parameters (free-surface shape, flow rate and pressure, etc.) in this model is performed in accordance with the laws of open-flow hydrodynamics. The calculated dependencies form the basis of the algorithm by which the water-flow model is built in a virtual three-dimensional space in real time. The presented computer model makes it possible to take geometric measurements of water-surface marks at various points along the length of the stream, as well as to determine the flow rate and other auxiliary parameters. Based on the data obtained, the real physical process can be investigated.
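The text does not state which specific relations the model of Figure 2 uses, so as an illustration here is a sketch of one standard open-channel relation, Manning's formula, for a rectangular channel such as a glass tray. The channel dimensions, roughness coefficient, and slope are assumed numbers, not data from the model.

```python
def manning_flow(b, h, n, slope):
    """Discharge Q (m^3/s) of a rectangular open channel by Manning's formula:
    Q = (1/n) * A * R^(2/3) * sqrt(S),
    where A = b*h is the flow area and R = A / (b + 2*h) is the hydraulic
    radius (wetted perimeter = bottom + two side walls)."""
    area = b * h
    radius = area / (b + 2 * h)
    return area * radius ** (2 / 3) * slope ** 0.5 / n

# assumed example: a 0.3 m wide glass tray, 0.1 m water depth,
# smooth walls (n ~ 0.010) and a mild slope
q = manning_flow(b=0.3, h=0.1, n=0.010, slope=0.001)
```

Such a relation, evaluated along the channel, is the kind of "calculated dependency" on which a flow model's algorithm can be built.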

The examples given are computer simulation models with graphic visualization of a physical phenomenon. However, computer models need not contain visual or graphic information about the object of study. The same physical process or phenomenon can also be represented as a set of discrete data using the same algorithm on which the visual simulation model was built.

Thus, the main task of building computer models is the functional study of a physical phenomenon or process with the aim of obtaining exhaustive analytical data; there can be many secondary tasks, including graphical interpretation of the model with the possibility of interactive user interaction with it.

A mechanical system (or system of material points) is a set of material points (or bodies that, under the conditions of the problem, may be treated as material points).

In the technical sciences, media are divided into continuous and discrete. This division is itself an approximation, since physical matter is inherently discrete, while the concept of continuity (a continuum) strictly applies to a quantity such as time. In other words, a "continuous" medium such as a liquid or gas consists of discrete elements (molecules, atoms, ions, etc.); however, describing the change of these structural elements in time mathematically is extremely difficult, so the methods of continuum mechanics are quite reasonably applied to such systems.


Let us consider some properties of models that allow one, to one degree or another, either to distinguish the model from the original (object, process) or to identify them. Many researchers distinguish the following properties of models: adequacy, complexity, finiteness, visibility, truth, and proximity.

The problem of adequacy. The most important requirement for a model is that it be adequate to (correspond to) the real object (process, system, etc.) with respect to the selected set of its characteristics and properties.

Model adequacy is understood as a correct qualitative and quantitative description of an object (process) according to a selected set of characteristics with a certain reasonable degree of accuracy. This means adequacy not in general, but adequacy in terms of those properties of the model that are essential for the researcher. Full adequacy means the identity between the model and the prototype.

A mathematical model can be adequate with respect to one class of situations (system state + state of the external environment) and inadequate with respect to another. A "black box" model is adequate if, within the chosen degree of accuracy, it functions in the same way as the real system, i.e. it defines the same operator converting input signals into output signals.

You can introduce the concept of the degree (measure) of adequacy, which will vary from 0 (lack of adequacy) to 1 (full adequacy). The degree of adequacy characterizes the proportion of the truth of the model with respect to the selected characteristic (property) of the object under study. The introduction of a quantitative measure of adequacy makes it possible to quantitatively set and solve such problems as identification, stability, sensitivity, adaptation, model training.

Note that in some simple situations the numerical estimation of the degree of adequacy presents no particular difficulty: for example, in the problem of approximating a given set of experimental points by some function.
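As a sketch of such a numerical estimate, the fragment below fits experimental points by least squares and then evaluates one possible 0-to-1 adequacy measure. The specific measure (1 minus the relative RMS error) and the sample data are assumptions for illustration.

```python
import numpy as np

def degree_of_adequacy(y_true, y_model):
    """One possible quantitative adequacy measure on [0, 1]:
    1 minus the RMS error relative to the RMS level of the data
    (0 = no adequacy, 1 = full adequacy)."""
    y_true, y_model = np.asarray(y_true, float), np.asarray(y_model, float)
    rmse = np.sqrt(np.mean((y_true - y_model) ** 2))
    scale = np.sqrt(np.mean(y_true ** 2))
    return max(0.0, 1.0 - rmse / scale)

# "experimental" points: a parabola with small alternating noise
x = np.linspace(0, 1, 20)
y = 2 * x ** 2 + 0.5 + 0.01 * (-1.0) ** np.arange(20)
coeffs = np.polyfit(x, y, deg=2)                 # least-squares approximation
adequacy = degree_of_adequacy(y, np.polyval(coeffs, x))
```

Here the fitted model nearly reproduces the data, so the adequacy measure comes out close to 1.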

Any adequacy is relative and has its limits of application. For example, the differential equation of gas turbine engine (GTE) turbocharger dynamics reflects only the change of the turbocharger rotational frequency with a change in fuel consumption G_T, and nothing more. It cannot reflect such processes as gas-dynamic instability (surging) of the compressor or vibration of the turbine blades. If in simple cases everything is clear, then in complex cases the inadequacy of the model is not so obvious. The use of an inadequate model leads either to a significant distortion of the real process or of the properties (characteristics) of the object under study, or to the study of non-existent phenomena, processes, properties, and characteristics. In the latter case the adequacy check cannot be carried out at a purely deductive (logical, speculative) level; the model must be refined using information from other sources.

The difficulty of assessing the degree of adequacy in the general case arises from the ambiguity and fuzziness of the adequacy criteria themselves, as well as from the difficulty of choosing the features, properties, and characteristics by which adequacy is assessed. Adequacy is a rational concept, and the increase of its degree is also carried out at the rational level; therefore the adequacy of the model must be checked, controlled, and refined in the course of research using particular examples, analogies, experiments, etc. As a result of the adequacy check, it becomes clear what the assumptions made lead to: either an acceptable loss of accuracy or a loss of quality. The adequacy check can also establish the validity of the accepted working hypotheses for the problem under consideration.

Sometimes a model M possesses side adequacy, i.e. it gives a correct quantitative and qualitative description not only of the characteristics for which it was built, but also of a number of secondary characteristics, the need for studying which may arise in the future. The effect of side adequacy increases if the model reflects well-tested physical laws, system principles, basic propositions of geometry, proven techniques and methods, etc. This may be why structural models, as a rule, have higher side adequacy than functional ones.

Some researchers consider the goal as an object of modeling. Then the adequacy of the model, with the help of which the set goal is achieved, is considered either as a measure of proximity to the goal, or as a measure of the effectiveness of achieving the goal. For example, in an adaptive control system according to the model, the model reflects the form of system movement that is the best in the current situation in the sense of the accepted criterion. As the situation changes, the model must change its parameters in order to be more adequate to the new situation.

Thus, the adequacy property is the most important requirement for the model, but the development of highly accurate and reliable methods for checking the adequacy remains a formidable task.

Simplicity and complexity. The simultaneous requirements of simplicity and adequacy of the model are contradictory. From the point of view of adequacy, complex models are preferable to simple ones: in a complex model a larger number of factors affecting the studied characteristics of objects can be taken into account. Although complex models reflect the simulated properties of the original more accurately, they are more cumbersome, harder to survey, and less convenient to use. Therefore the researcher seeks to simplify the model, since simple models are easier to operate with. For example, approximation theory is the theory of the correct construction of simplified mathematical models. When striving to build a simple model, one follows the basic principle of model simplification:

the model can be simplified as long as the basic properties, characteristics and patterns inherent in the original are preserved.

This principle points to the limit of simplification.

At the same time, the simplicity (or complexity) of a model is a relative concept. A model is considered fairly simple if the modern means of research (mathematical, informational, physical) make it possible to conduct a qualitative and quantitative analysis with the required accuracy. And since the capabilities of research tools are constantly growing, tasks that were previously considered difficult can now be classed as simple. In the general case the concept of model simplicity also includes the psychological perception of the model by the researcher.

The "Adequacy-Simplicity" table

One can also quantify the degree of simplicity of a model, like the degree of adequacy, from 0 to 1. Here the value 0 corresponds to inaccessible, very complex models, and the value 1 to very simple ones. Let us divide the degree of simplicity into three intervals: very simple, accessible, and inaccessible (very complex), and likewise divide the degree of adequacy into three intervals: very high, acceptable, and unsatisfactory. Let us build Table 1.1, in which the degree of adequacy is plotted horizontally and the degree of simplicity vertically.

Table 1.1 - Regions of the "adequacy-simplicity" plane
(adequacy, horizontally: 1 - very high, 2 - acceptable, 3 - unsatisfactory;
simplicity, vertically: 1 - very simple, 2 - accessible, 3 - inaccessible)

    (11) (12) (13)
    (21) (22) (23)
    (31) (32) (33)

In this table, regions (13), (31), (23), (32), and (33) should be excluded from consideration, either because of unsatisfactory adequacy or because of the very high complexity of the model and the inaccessibility of its study by modern means of research. Region (11) should also be excluded, since it gives trivial results: any model there is very simple and highly accurate. Such a situation may arise, for example, when studying simple phenomena subject to known physical laws (Archimedes, Newton, Ohm, etc.).
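The exclusion rules of the 3x3 "adequacy-simplicity" classification can be written as a small sketch. The 1/3 and 2/3 band thresholds are an illustrative assumption, since the text only names the three intervals.

```python
def region(simplicity, adequacy):
    """Classify a model into the 3x3 'simplicity-adequacy' table.
    Both inputs are degrees in [0, 1]; returns (row, col):
    row 1..3 = very simple / accessible / inaccessible,
    col 1..3 = very high / acceptable / unsatisfactory adequacy.
    The 1/3 and 2/3 thresholds are assumed for illustration."""
    def band(v):                       # high degree -> band 1, low -> band 3
        return 1 if v > 2 / 3 else (2 if v > 1 / 3 else 3)
    return band(simplicity), band(adequacy)

def worth_studying(simplicity, adequacy):
    """Keep only regions (12), (21), (22); exclude the trivial region (11)
    and the regions rejected for complexity or unsatisfactory adequacy."""
    return region(simplicity, adequacy) in {(1, 2), (2, 1), (2, 2)}
```

For example, a very simple and highly adequate model falls into the trivial region (11) and is excluded, while a model of medium simplicity and acceptable adequacy falls into region (22), where the two-criteria optimization is performed.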

The formation of models in regions (12), (21), and (22) must be carried out in accordance with certain criteria. For example, in region (12) one should strive for the maximum degree of adequacy, and in region (21) for the minimum degree of complexity. Only in region (22) is it necessary to optimize the formation of the model according to two conflicting criteria: minimum complexity (maximum simplicity) and maximum accuracy (degree of adequacy). In the general case this optimization problem reduces to the choice of the optimal structure and parameters of the model. A more difficult task is to optimize the model as a complex system consisting of separate subsystems connected into a certain hierarchical and multiply connected structure. Here each subsystem and each level has its own local criteria of complexity and adequacy, different from the global criteria of the system.

It should be noted that in order to reduce the loss of adequacy, it is more expedient to simplify the models:

a) at the physical level, preserving the basic physical relationships;

b) at the structural level, preserving the main system properties.

Simplification of models at the mathematical (abstract) level can lead to a significant loss of the degree of adequacy. For example, truncation of a high-order characteristic equation to the 2nd or 3rd order can lead to completely wrong conclusions about the dynamic properties of the system.
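A minimal numerical illustration of this effect: a third-order characteristic equation with a small parameter at the highest power. Truncating it keeps the slow roots but silently discards the fast root near -1/eps, which is exactly the kind of dynamics a truncated model can no longer show. The coefficients below are assumed for illustration.

```python
import numpy as np

# full characteristic equation: eps*s^3 + s^2 + 2*s + 1 = 0
eps = 0.01
full_roots = np.roots([eps, 1.0, 2.0, 1.0])

# truncated to 2nd order by dropping the eps term: s^2 + 2*s + 1 = 0
truncated_roots = np.roots([1.0, 2.0, 1.0])

# the truncated model keeps the two slow roots near s = -1,
# but loses the fast root near s = -1/eps
fast_root = min(full_roots.real)
```

In this particular example the truncation is harmless for stability, but the fast mode disappears entirely; under feedback or with other coefficients such a discarded mode can change the qualitative conclusions about the system's dynamics.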

Note that simpler (rougher) models are used when solving the synthesis problem, while more complex, exact models are used when solving the analysis problem.

Finiteness of models. It is known that the world is infinite, like any object, not only in space and time but also in its structure, properties, and relations with other objects. Infinity is manifested in the hierarchical structure of systems of different physical nature. However, when studying an object, the researcher is limited to a finite number of its properties, connections, resources used, etc. He, as it were, "cuts out" some finite piece from the infinite world in the form of a specific object, system, or process, and tries to cognize the infinite world through a finite model of this piece. Is such an approach to the study of the infinite world justified? Practice answers this question positively, based on the properties of the human mind and the laws of Nature: although the mind itself is finite, the ways it generates to know the world are endless. The process of cognition proceeds through the continuous expansion of our knowledge. This can be observed in the evolution of the mind, in the evolution of science and technology, and, in particular, in the development of both the concept of a system model and the types of models themselves.

Thus, the finiteness of system models lies, first, in the fact that they reflect the original in a finite number of relations, i.e. with a finite number of connections to other objects, a finite structure, and a finite number of properties at the given level of study, research, and description with the available resources. Second, the resources (informational, financial, energy, time, technical, etc.) of modeling, and our knowledge as an intellectual resource, are finite and therefore objectively limit the possibilities of modeling and of knowing the world through models at the present stage of mankind's development. Hence the researcher (with rare exceptions) deals with finite-dimensional models. However, the choice of model dimension (its degrees of freedom, state variables) is closely related to the class of problems to be solved. An increase in the dimension of the model raises the problems of complexity and adequacy, and it becomes necessary to know the functional relationship between the degree of complexity and the dimension of the model. If this dependence is power-law, the problem can be solved by using high-performance computing systems. If it is exponential, the "curse of dimensionality" is inevitable and practically impossible to escape; in particular, this applies to the creation of a universal method of searching for the extremum of functions of several variables.

As noted above, an increase in the dimension of the model leads to an increase in the degree of adequacy and, at the same time, to the complication of the model. At the same time, the degree of complexity is limited by the possibility of operating with the model, i.e. the modeling tools available to the researcher. The need to move from a rough simple model to a more accurate one is realized by increasing the dimension of the model by involving new variables that are qualitatively different from the main ones and which were neglected when constructing a rough model. These variables can be assigned to one of the following three classes:

    fast variables, whose extent in time or space is so small that in a rough examination they were accounted for only by their integral or averaged characteristics;

    slow variables, whose extent of change is so great that in rough models they were considered constant;

    small variables (small parameters), whose values and influence on the main characteristics of the system are so small that they were ignored in rough models.

Note that the division of the complex motion of the system in terms of velocity into fast and slow motion makes it possible to study them in a rough approximation independently of each other, which simplifies the solution of the original problem. As for small variables, they are usually neglected when solving the synthesis problem, but they try to take into account their influence on the properties of the system when solving the analysis problem.
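A sketch of this separation of fast and slow motion (all equations and numbers are assumed for illustration): in the two-timescale system x' = -y, eps*y' = x - y, the fast variable y quickly settles onto y ~ x, so the rough model is simply x' = -x. Integrating the full system confirms the rough model's answer.

```python
import math

def simulate_full(eps, dt=1e-4, t_end=2.0):
    """Euler integration of the full two-timescale system:
    x' = -y (slow variable), eps * y' = x - y (fast variable)."""
    x, y = 1.0, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        x, y = x + dt * (-y), y + dt * (x - y) / eps
    return x

x_full = simulate_full(eps=1e-3)
x_rough = math.exp(-2.0)   # rough model x' = -x, x(0) = 1, evaluated at t = 2
```

For small eps the two answers agree closely, which is what justifies studying the fast and slow motions independently of each other.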

When modeling, one tries to single out the smallest possible number of main factors whose influence is of the same order and is not too difficult to describe mathematically, while the influence of the remaining factors is taken into account using averaged, integral, or "frozen" characteristics. Note that the same factors can affect different characteristics and properties of the system quite differently. Usually, taking into account the influence of the three classes of variables listed above turns out to be quite sufficient.

Approximation of models. From the foregoing it follows that the finiteness and simplicity (simplification) of a model characterize the qualitative difference (at the structural level) between the original and the model. The approximation of the model then characterizes the quantitative side of this difference. A quantitative measure of approximation can be introduced by comparing, for example, a rough model with a more accurate reference (complete, ideal) model or with a real model. The approximate character of the model relative to the original is inevitable and exists objectively, since the model, being a different object, reflects only certain properties of the original. Therefore the degree of approximation (proximity, accuracy) of the model to the original is determined by the formulation of the problem and the purpose of modeling. The pursuit of greater model accuracy leads to excessive complication and hence to a decrease in the model's practical value, i.e. the possibilities of its practical use. Therefore, when modeling complex (man-machine, organizational) systems, accuracy and practical meaning are incompatible and exclude each other (L.A. Zadeh's principle). The reason for this incompatibility of the requirements of accuracy and practicality lies in the uncertainty and fuzziness of knowledge about the original itself: its behavior, properties, and characteristics, the behavior of the environment, the thinking and behavior of a person, the mechanisms of goal formation, and the ways and means of achieving goals.

Model Truth. Each model has a grain of truth, i.e. any model in some way correctly reflects the original. The degree of truth of the model is revealed only by its practical comparison with the original, because only practice is the criterion of truth.

On the one hand, any model contains the unconditionally true, i.e. what is definitely known and correct. On the other hand, a model also contains the conditionally true, i.e. what is true only under certain conditions. A typical modeling error is that researchers apply models without checking the conditions of their truth or the limits of their applicability; this approach obviously leads to incorrect results.

Note that any model also contains the supposedly true (plausible), i.e. something that can be either true or false under conditions of uncertainty. Only in practice is the actual relationship between true and false in specific conditions established. For example, in hypotheses as abstract cognitive models, it is difficult to identify the relationship between true and false. Only a practical test of hypotheses can reveal this relationship.

When analyzing the level of truth of a model, it is necessary to find out what knowledge it contains: 1) accurate, reliable knowledge; 2) knowledge that is reliable under certain conditions; 3) knowledge estimated with a certain degree of uncertainty (with a known probability for stochastic models, or with a known membership function for fuzzy models); 4) knowledge that cannot be assessed even with some degree of uncertainty; 5) ignorance, i.e. what is unknown.

Thus, assessing the truth of a model as a form of knowledge comes down to identifying the content in it of both objective reliable knowledge that correctly reflects the original, and knowledge that approximately evaluates the original, as well as what constitutes ignorance.

Model control. When constructing mathematical models of objects, systems, processes, it is advisable to adhere to the following recommendations:

    Modeling should begin with the construction of the roughest models based on the selection of the most significant factors. At the same time, it is necessary to clearly represent both the purpose of modeling and the purpose of cognition with the help of these models.

    It is advisable not to involve artificial and difficult-to-verify hypotheses in the work.

    It is necessary to control the dimensions of variables, adhering to the rule: only quantities of the same dimension may be added or equated. This rule must be applied at all stages of the derivation of any relationship.

    It is necessary to control the order of magnitude of the quantities being added, in order to single out the main terms (variables, factors) and discard the insignificant ones. In doing so, the "roughness" property of the model should be preserved: discarding small quantities must lead only to a small change in the quantitative conclusions and to the preservation of the qualitative results. The same applies to controlling the order of the correction terms when approximating nonlinear characteristics.

    It is necessary to control the character of the functional dependencies, adhering to the rule: check that the direction and rate of change of some variables retain their dependence on changes in the others. This rule allows a deeper understanding of the physical meaning and a check of the correctness of the derived relationships.

    It is necessary to control the behavior of the variables, or of certain ratios, as the model parameters or their combinations approach limiting admissible (singular) points. Usually at an extreme point the model simplifies or degenerates, the relations acquire a more transparent meaning and can be verified more easily, and the final conclusions can be duplicated by some other method. The study of extreme cases can serve for asymptotic representations of the behavior of the system (of the solutions) under conditions close to extreme.

    It is necessary to control the behavior of the model under certain conditions: satisfaction of the function as a model with the set boundary conditions; the behavior of the system as a model under the action of typical input signals on it.

    It is necessary to monitor the receipt of side effects and results, the analysis of which may give new directions in research or require restructuring of the model itself.

Thus, constant monitoring of the correct functioning of the models in the process of research makes it possible to avoid gross errors in the final result. The shortcomings of the model identified in this way are corrected in the course of the modeling rather than anticipated in advance.
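The dimension-control rule from the recommendations above (add or equate only quantities of the same dimension) can be mechanized in a minimal sketch; the tuple-of-base-units representation of a dimension is an assumption for illustration.

```python
class Quantity:
    """Minimal dimensional-analysis sketch: a value plus a dimension tag.
    Addition is allowed only between quantities of the same dimension."""
    def __init__(self, value, dim):
        self.value, self.dim = value, dim

    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError(f"cannot add {self.dim} to {other.dim}")
        return Quantity(self.value + other.value, self.dim)

    def __mul__(self, other):
        # dimensions multiply: represent the product as a sorted
        # tuple of base units (an illustrative convention)
        return Quantity(self.value * other.value,
                        tuple(sorted(self.dim + other.dim)))

length = Quantity(3.0, ("m",))
time = Quantity(2.0, ("s",))
total = length + Quantity(1.0, ("m",))   # fine: same dimension
```

Attempting `length + time` raises a `TypeError`, which is exactly the control the recommendation calls for applied automatically at every step of a derivation.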


1. What is meant by the adequacy of the model:

1) The residual component E satisfies the four conditions formulated in the Gauss-Markov theorem, and the model corresponds to the most important (for the researcher) properties.

2. The coefficient of elasticity shows:

1) By how many percent the result will change, on average, when the factor changes by 1%.

3. When the instrumental variable method is used:

39. A time series is a set of values

1) economic indicator for several consecutive moments (periods) of time.

40. The analysis of the possibility of numerically estimating the unknown coefficients of the structural equations from the estimates of the coefficients of the reduced-form equations is

1) the problem of identification.

41. The stage of correlation analysis at which the forms of connection between the studied economic indicator and the selected factor-arguments are determined is called:

1) Model specification

42. What is the essence of the method of instrumental variables:

1) The partial replacement of an unsuitable explanatory variable by a variable that largely reflects the influence of the original explanatory variable on the resulting variable, but does not correlate with the random component

43. Determine in which system of equations the unidentifiable regression equation is located:

1) C_t = a + b*Y_t + u_t;  Y_t = C_t + I_t

44. The formula for determining the value of the time series level when using exponential smoothing is:

1) S_t = a*y_t + (1 - a)*S_{t-1}, where S_t is the smoothed level
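As an illustration of the exponential smoothing recurrence in question 44 (smoothed level = a * current value + (1 - a) * previous smoothed level); the sample series and a = 0.5 are assumed.

```python
def exponential_smoothing(series, alpha):
    """Exponentially smoothed levels S_t = alpha*y_t + (1 - alpha)*S_{t-1},
    starting from the first observation."""
    smoothed = [series[0]]
    for y in series[1:]:
        smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
    return smoothed

levels = exponential_smoothing([10, 12, 11, 15, 14], alpha=0.5)
```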

45. An economic model that is a system of simultaneous equations consists, in the general case:

1) of behavioral equations and identities

46. Choose the true statements about a system of simultaneous equations:

1) Can be presented in structural form model and reduced form

2) In it, the same dependent variables are included in the left-hand side in some equations and in the right-hand side in others.

47. In the linear pair-regression equation y = a + b*x + E, the following are not variables:

1) a and b

48. What is meant by indicators characterizing the accuracy of the model:

1) The difference between the values ​​of the actual levels of the series and their theoretical levels, estimated using statistical indicators.

49.Under the anomalous level of the time series is understood:

1) A separate value of the level of the time series, which does not correspond to the potential capabilities of the studied economic system and, remaining as a level of the series, has a significant impact on the value of the main indicators.

50. The value of the correlation coefficient is 0.81. It can be concluded that the relationship between the effective feature and the factor is:

1) quite close.

51. The formula for determining the smoothed value of the time series level when using a moving average is:

1) S_t = (y_{t-m+1} + y_{t-m+2} + ... + y_t) / m
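A sketch of the moving-average smoothing from question 51: each smoothed level is the mean of the last m observed levels (the window size and the sample series are assumed).

```python
def moving_average(series, m):
    """Simple moving average of window m:
    S_t = (y_{t-m+1} + ... + y_t) / m, defined from t = m-1 onward."""
    return [sum(series[t - m + 1 : t + 1]) / m
            for t in range(m - 1, len(series))]

ma = moving_average([2, 4, 6, 8, 10], m=3)
```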

52. The value of the d-criterion of Durbin-Watson statistics in large samples is associated with the autocorrelation coefficient of the random term of the regression equation approximately as follows:

1) d ≈ 2 - 2ρ
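The relation d ≈ 2 - 2ρ from question 52 can be checked numerically; the alternating residual series below is an assumed example with strong negative autocorrelation.

```python
def durbin_watson(e):
    """d = sum_{t=2}^{n} (e_t - e_{t-1})^2 / sum_{t=1}^{n} e_t^2."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    return num / sum(v * v for v in e)

def lag1_autocorr(e):
    """First-order autocorrelation coefficient of the residuals
    (uncentered sample form)."""
    num = sum(e[t] * e[t - 1] for t in range(1, len(e)))
    return num / sum(v * v for v in e)

e = [(-1.0) ** t for t in range(100)]         # alternating residuals, rho near -1
d, rho = durbin_watson(e), lag1_autocorr(e)   # d near 4, and 2 - 2*rho near 4
```

In large samples d and 2 - 2ρ agree closely, as the quiz answer states.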

53. What is meant by the dispersion of a random member of the regression equation:

1) Possible behavior of the random term of the regression equation before the sample is made.

54. Choose the formal counting rule that reflects the necessary condition for identifiability of the equations included in a system of simultaneous equations:

1) H = D + 1

55. In which case it is impossible to reject the null hypothesis about the absence of autocorrelation of a random term of the regression equation:

1) If the calculated value of the criterion d falls into the zone of uncertainty.

56. When is the Chow test used:

1) When deciding whether it is expedient to divide the sample into two sub-samples and build, accordingly, two regression models.

57. A non-linear regression equation is considered to be non-linear with respect to its constituents:

1) parameters.

58. The reason for the positive autocorrelation of the random term of the regression equation is usually:

1) The constant direction of the impact of any factor not included in the regression equation.

59. What is the subject of econometrics:

1) Factors shaping the development of economic phenomena and processes.

60. Errors of the first kind are eliminated by:

1) Replacing the anomalous observation with the arithmetic mean of two neighboring levels of the series.

61. A dummy variable can take on the following values:

1) 0 and 1

62. According to the Spearman rank-correlation test, the null hypothesis of the absence of heteroscedasticity of the random term in the regression equation will be rejected at the 5% significance level if the test statistic:

1) Is greater than 1.96

63. Correlation implies the existence of a connection between:

1) variables

64. The selection of factors in an econometric multiple regression model can be based on:

1) The matrix of paired correlation coefficients.

65. How to eliminate the autocorrelation of random members of the regression equation, if it is described by a first-order autoregressive scheme:

1) It is necessary to exclude from the regression equation all factors that cause autocorrelation.

66. What is meant by “perfect multicollinearity” of explanatory variables in the regression equation:

1) Functional relationship with each other of the explanatory variables in the regression equation.

67. The indirect least squares method (KMNK) is applicable for:

1) an identifiable system of simultaneous equations.

68. The econometric model is:

1) an economic model presented in mathematical form

69. Using what formula can you calculate the pair correlation coefficient:

1) r_{x,y} = cov(x, y) / (Var(x) * Var(y))^0.5
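The pair correlation coefficient formula from question 69, written out directly (the sample data are assumed):

```python
def pearson_r(x, y):
    """r_xy = cov(x, y) / (Var(x) * Var(y))^0.5, population (1/n) form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    var_x = sum((a - mx) ** 2 for a in x) / n
    var_y = sum((b - my) ** 2 for b in y) / n
    return cov / (var_x * var_y) ** 0.5

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])   # exactly linear relation
```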

70. The efficiency of the least squares method of estimating the parameters of the regression equation means that:

1) Estimates have the smallest dispersion compared to any other estimates of these parameters.

1) The random component of the series corresponds to the normal distribution law; the mathematical expectation of the random component is not equal to zero; the values of the levels of the random component are independent;

2) compliance of the model with the normal distribution law;

3) correspondence of the numerical-series model to the properties of the object under study that are most important for the researcher.


  1. What is tested when using the runs (series) test?

1) The randomness of fluctuations in the levels of the residual sequence.

2) The compliance of the distribution of the random component with the normal distribution law.

3) The statistical reliability of the regression equation.


  1. What is the sample median?

1) The middle value of an ordered series when n is odd, or the arithmetic mean of the two middle values when n is even.

2) The length of the longest run.

3) The total number of runs.

20. In the test based on the runs criterion, the values K and u are:

1) The length of the longest run and the total number of runs.

2) The mean value of the series and the median of the sample.

3) The asymmetry and the total number of runs.

21. When checking the compliance of the distribution of a random component with the normal distribution law:


  1. The probability of the predominance of negative deviations over positive ones;

  2. Probability of positive deviations prevailing over negative ones;

  3. Probability of accepting the null hypothesis.

  1. Probability of increasing small deviations;

  2. The probability of reducing large deviations;

  3. The probability of decreasing small deviations, the probability of increasing large deviations.

  1. Standard deviations random variables b 0 and b 1;

  2. Statistical dependence between factorial features;

  3. Influence of individual factors on y.

    1. The hypothesis about the normal distribution of the random component is accepted if the following inequalities are satisfied:




22. When checking the equality of the mathematical expectation of a random component to zero:
22.1 - the calculated value of Student's t-criterion is found by the formula:
1) t = |ē| · √n / S_ε (where ē is the mean of the residual sequence and S_ε its standard deviation; the original formula image was lost)

22.2 - the standard deviation of the residual sequence is:

2) S_ε = √( Σ(ε_t − ē)² / (n − 1) ) (the original formula image was lost)

22.3 - the hypothesis that the mathematical expectation is equal to zero, for a given significance level α and number of degrees of freedom k = n - 1, is accepted if:
1) the calculated value of t does not depend on the standard deviation of the residual sequence;

2) the calculated value of t is less than the tabular value of Student's statistic;

3) the calculated value of t is greater than the tabular value of Student's statistic.
23. The calculated value of the Durbin-Watson criterion (d-criterion) is found by the formula:

a)
;

b)
;

c)
.
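The formula images for the three options did not survive extraction. For reference, the standard Durbin-Watson statistic is d = Σ_{t=2..n}(e_t − e_{t−1})² / Σ_{t=1..n} e_t²; a minimal sketch under that assumption (the helper name is mine):

```python
# Standard Durbin-Watson statistic over a residual sequence e:
# d = sum_{t=2}^{n} (e_t - e_{t-1})^2 / sum_{t=1}^{n} e_t^2.
# d near 2 suggests no first-order autocorrelation; d near 0 suggests
# positive autocorrelation, d near 4 negative autocorrelation.

def durbin_watson(e):
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(v * v for v in e)
    return num / den

# Perfectly positively autocorrelated (constant) residuals give d = 0
print(durbin_watson([1.0, 1.0, 1.0, 1.0]))  # 0.0
```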
24. The Durbin-Watson test is used to test:
1) independence of the values of the levels of the random component;

2) random fluctuations in the levels of the residual sequence;

3) equality of the mathematical expectation of the random component to zero.
25. Verification by the d (Durbin-Watson) criterion is performed by comparing:
1) the calculated value d_p with the upper critical (d_2) and lower critical (d_1) values of the Durbin-Watson statistic;

2) the calculated value d_p with a range of the d-statistic inside which the critical value d_cr lies;

3) the calculated value d_p with the critical value d_cr for a given significance level and k = n - 1 degrees of freedom.
26. What is the accuracy of the model:
1) the degree of conformity of the model to the process or object under study;

2) the degree of correct reflection of the systematic components of the series: the trend, seasonal, cyclic and random components;

3) the degree of coincidence of theoretical values with actual ones.
27. What statistics are used to evaluate the accuracy of the model?
1) Root-mean-square deviation σ, mean relative error of approximation ε avg., coefficient of convergence φ, coefficient of multiple determination R 2

2) Convergence coefficient φ, standard deviation σ, coefficient of multiple determination R 2

3) Root-mean-square deviation σ, mean relative error of approximation ε avg.
28. What is the disadvantage of the standard deviation as a model accuracy indicator?
1) It does not depend on the scale of y, and hence different σ can be obtained only for identical objects;

2) It depends on the scale of y, but for different objects we cannot obtain different σ;

3) It depends on the scale of y, i.e., for different objects we can obtain different σ.

29. What does the coefficient of convergence show?

1) Shows the proportion of the change in y explained by the change in the factors included in the model

2) Shows what share in the change in the resulting feature can be explained by a change in factors not included in the model

30. What does the coefficient of multiple determination R 2 show?
1) Shows the proportion of the change in y explained by the change in the factors included in the model

2) Shows what share in the change in the resulting feature can be explained by a change in factors not included in the model

3) Shows the proportion of the change in y that is explained by the change in factors not included in the model
31. What formula is used to determine the value of the coefficient of multiple determination?

1)
;

2)
;

3)
.
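The three formula options above were lost in extraction. The textbook definition the question relies on is R² = 1 − SS_res / SS_tot; a sketch under that assumption (the helper name is mine):

```python
# Coefficient of multiple determination:
# R^2 = 1 - sum (y_i - yhat_i)^2 / sum (y_i - ybar)^2,
# the share of the variation of y explained by the model.

def r_squared(y, y_hat):
    ybar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# A perfect fit explains all variation in y
print(r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
```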


32. Why, in most cases, is the regression equation used in the form of a linear algebraic equation?

1) because all economic processes are described by linear algebraic regression equations;

2) to avoid biased estimates;

3) because it is necessary to use linear regression analysis, which can be applied only to linear equations.

33. The law of addition of variances for a function:
1) the total variance is equal to the sum of the variance of the theoretical values of the resulting indicator and the variance of the actual values of the resulting indicator;

2) the total variance is equal to the sum of the variance of the theoretical values of the resulting indicator and the variance of the residuals;

3) the total variance is equal to the sum of the variances arising under the influence of the factorial features included in the model.

34. What formula expresses the residual variance?

a)
;

b)
;

c)
.

35. What characterizes the coefficient of multiple correlation?
1) The coefficient of multiple correlation characterizes the influence of various factors on the resulting trait and the relationship of factors among themselves.

2) The coefficient of multiple correlation characterizes the closeness and linearity of the statistical connection of the considered set of factors with the studied feature; in other words, it estimates the closeness of the joint influence of the factors on the result.

3) The coefficient of multiple correlation characterizes the share of the change in the resulting feature, which can be explained by the change in the factors included in the model.
36. What formula can be used to calculate the pair correlation coefficient?
1)
2)
3)

37. What does the pair correlation coefficient show?
1) The pair correlation coefficient shows the closeness of the connection between the function y and the argument x_i, and the relationship of the arguments among themselves, provided that the other arguments of the function not included in the regression equation vary independently of the argument x_i.

2) The pair correlation coefficient characterizes the share of the change in the resulting feature, which can be explained by the change in factors not included in the model.

3) The pair correlation coefficient characterizes the closeness of the relationship between the result and the corresponding factor.
38. What does the partial correlation coefficient show?
1) The partial correlation coefficient best characterizes the strength of the individual influence of each factor included in the regression equation on the resulting trait.

2) The coefficient of partial correlation characterizes the closeness of the connection of the considered set of factors with the trait under study, or, in other words, estimates the closeness of the joint influence of factors on the result.

3) The partial correlation coefficient shows that two or more factors are related by a linear dependence, i.e., there is a cumulative effect of the factors on each other.
39. The value of the partial correlation coefficient is determined by the formula:
1.
2.
3.

40. What is the elasticity coefficient for a linear algebraic equation?

1.
2.
3.
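The formula options for questions 39 and 40 were lost in extraction. For the linear case, the standard average elasticity coefficient is E = a₁ · x̄ / ȳ; a sketch under that assumption (the helper name is mine):

```python
# Average elasticity coefficient for a linear equation y = a0 + a1 * x:
# E = a1 * x_mean / y_mean, i.e. the approximate percentage change in y
# per one-percent change in x around the sample means.

def elasticity(a1, x_mean, y_mean):
    return a1 * x_mean / y_mean

# a1 = 2, x_mean = 5, y_mean = 20: a 1% rise in x raises y by about 0.5%
print(elasticity(2.0, 5.0, 20.0))  # 0.5
```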

41. What is meant by the significance of sample statistical indicators?
1) the probability of accepting the null hypothesis;

2) the degree of coincidence of y_fact and y_theor;

3) compliance of the indicator with the most significant properties or phenomena.

42. How is the significance of the regression equation as a whole checked?

43. How is the “null hypothesis” formulated when determining the statistical significance of the regression equation as a whole?
1) Each coefficient of the regression equation in population equals zero.

2) Pair correlation coefficients in the general population are equal to zero.

3) The coefficients of the regression equation in the general population are equal to zero, and a_0 = .
44. What formula is used to calculate Fisher's F-test?

1) F = σ²_y + σ²_ε

2) F =

3) F=
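The formulas for options 2) and 3) did not survive extraction. A common textbook form of the F-statistic for the significance of the equation as a whole is F = (R²/m) / ((1 − R²)/(n − m − 1)); a sketch under that assumption (the helper name is mine):

```python
# Fisher's F-test for the regression equation as a whole:
# F = (R^2 / m) / ((1 - R^2) / (n - m - 1)),
# with n observations and m factors; a large F rejects the null
# hypothesis that all regression coefficients are zero.

def fisher_f(r2, n, m):
    return (r2 / m) / ((1 - r2) / (n - m - 1))

# R^2 = 0.8 with n = 22 observations and m = 1 factor gives F of about 80
print(fisher_f(0.8, 22, 1))
```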

45. How is the “null hypothesis” formulated when determining the statistical significance of individual coefficients of the regression equation?
1) Pair correlation coefficients in the general population are equal to zero.

2) Each coefficient of the regression equation in the general population is equal to zero.

3) The coefficients of the regression equation in the general population are equal to zero, and a_0 = .
46. What formula is used to calculate Student's t-test?

1)

2) t_p = r_x|ε| ×

3) t_f =
47. What conditions must the residual component of the regression equation satisfy in order for the equation to adequately reflect the studied relationships between the indicators:
1) random fluctuations in the levels of the residual sequence;

2) the mathematical expectation of the random component is not equal to 0;

3) correspondence of the distribution of the random component to the normal distribution law;

4) the values of the levels of the random component are independent;
48. What formula is used to determine the confidence interval for individual coefficients of the regression equation:
1) a_j − s_aj·t_cr ≤ a_j ≤ a_j + s_aj·t_cr;

2) a_j − s_aj·t_cr ≥ a_j ≥ a_j + s_aj·t_cr;

3) a_j + s_aj·t_cr ≤ a_j ≤ a_j + s_aj·t_cr;

4) a_j − s_aj·t_cr ≥ a_j ≥ a_j − s_aj·t_cr;
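Option 1) is the standard interval a_j ± s_aj·t_cr around the estimated coefficient. A minimal sketch (the helper name and the numbers are mine):

```python
# Confidence interval for a regression coefficient:
# a_j - s_aj * t_cr <= alpha_j <= a_j + s_aj * t_cr,
# where s_aj is the standard error of the estimate and t_cr the
# critical Student value for the chosen significance level.

def coef_confidence_interval(a_j, s_aj, t_cr):
    half = s_aj * t_cr
    return a_j - half, a_j + half

lo, hi = coef_confidence_interval(2.0, 0.5, 2.086)
print(lo < 2.0 < hi)  # True: the point estimate lies inside its interval
```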
49. What coefficients characterize the strength of influence of individual factors on the resulting feature and their combined influence:
1) pair correlation coefficient;

2) coefficient of multiple correlation;

3) partial correlation coefficient;

4) coefficient of multiple determination;

5) all answers are correct
50. Why does it make no sense to try to make the residual random component equal to 0 by increasing the order of the regression equation:
1) because with an increase in the order of the regression equation, the value of the residual random component will increase;

2) it will not change;

3) because it is impossible to make the residual random component equal to 0;

4) all answers are wrong.

So, we have established: the model is intended to replace the original in studies to which it is impossible or inappropriate to subject the original. But replacing the original with a model is possible if they are sufficiently similar or adequate.

Adequacy refers to whether the results obtained during simulation reflect the true state of affairs well enough from the point of view of the objectives of the study. The term comes from the Latin adaequatus, "equated".

A model is said to be adequate to the original if its interpretation results in a "portrait" that is highly similar to the original.

Until the question is resolved whether the model correctly reflects the system under study (that is, whether it is adequate), the value of the model is zero!

The term "adequacy" seems to have a very vague meaning. It is clear that the effectiveness of modeling will increase significantly if, when building a model and transferring results from the model to the original system, one can use some theory that refines the idea of ​​similarity associated with the modeling procedure used.

Unfortunately, there is no theory that allows assessing the adequacy of the mathematical model and the system being modeled, in contrast to the well-developed theory of the similarity of phenomena of the same physical nature.

The adequacy check is carried out at all stages of model building, starting from the very first stage, conceptual analysis. If the system description does not properly correspond to the real system, then the model, no matter how accurately it reflects that description, will not be adequate to the original. We say "no matter how accurately" because there are no mathematical models at all that absolutely accurately reflect the processes existing in reality.

If the study of the system has been carried out thoroughly and the conceptual model accurately reflects the real state of affairs, then the only problem facing the developers is the equivalent transformation of one description into another.

So, we can speak of adequacy between the model, in any of its forms, and the original if:

  • the description of the behavior created at any stage coincides quite accurately with the behavior of the modeled system in the same situations;
  • the description is convincingly representative of the properties of the system to be predicted by the model.

The preliminary version of the mathematical model is subjected to the following checks:

  • whether all relevant parameters are included in the model;
  • whether there are any insignificant parameters in the model;
  • whether the functional connections between parameters are correctly reflected;
  • whether the restrictions on parameter values are correctly defined;
  • whether the model gives absurd answers when its parameters take extreme values.

Such a preliminary adequacy assessment makes it possible to identify the grossest errors in the model.

But all these recommendations are informal and advisory in nature; formal methods of adequacy assessment do not exist! Therefore, the quality of the model (and, above all, the degree of its adequacy to the system) depends mainly on the experience, intuition and erudition of the model developer, and on other subjective factors.

The final judgment on the adequacy of the model can only be given by practice, that is, by comparing the model with the original on the basis of experiments with both the object and the model. The model and the object are subjected to the same influences and their reactions are compared. If the reactions coincide (within acceptable accuracy), it is concluded that the model is adequate to the original. However, keep the following in mind:

  • impacts on the object are limited because of possible destruction of the object, inaccessibility of system elements, etc.;
  • impacts on the object are of a physical nature (changes in supply currents and voltages, temperature, shaft rotation speed, etc.), while impacts on a mathematical model are numerical analogues of physical influences.

To assess the degree of similarity of object structures (physical or mathematical), there is the concept of isomorphism (from the Greek iso, "identical, equal", and morphe, "form").

Two systems are isomorphic if there is a one-to-one correspondence between the elements and relations (connections) of these systems.

Isomorphic, for example, are the set of positive real numbers and the set of their logarithms. Each element of the first set, a number, corresponds to the value of its logarithm in the second; the multiplication of two numbers in the first set corresponds to the addition of their logarithms in the second. From the passenger's point of view, the subway map found in each car of a subway train is isomorphic to the actual geographic layout of the tracks and stations, although for a worker repairing the tracks this plan is naturally not isomorphic. A photograph is an isomorphic representation of a real person for a policeman, but not for an artist.
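The logarithm example can be checked directly in code; a small sketch:

```python
import math

# The map x -> log(x) is an isomorphism between the positive reals
# under multiplication and all reals under addition: the product of
# two numbers corresponds to the sum of their logarithms.

a, b = 3.0, 7.0
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
print("multiplication corresponds to addition under log")
```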

When modeling complex systems, it is difficult and impractical to achieve such complete correspondence. In modeling, absolute similarity does not occur; one only strives to ensure that the model reflects the studied aspect of the object's functioning well enough. Otherwise the model may become comparable in complexity to the system under study, and the study will not be simplified at all.

To assess the similarity in the behavior (functioning) of systems, there is the concept of isofunctionalism.

Two systems of arbitrary, and sometimes unknown, structure are isofunctional if, under the same influences, they exhibit the same reactions. Such modeling is called functional or cybernetic and in recent years has become more and more common, for example, in modeling human intelligence (chess playing, theorem proving, pattern recognition, etc.). Functional models do not copy structures. But by copying behavior, researchers consistently "get closer" to understanding the structures of objects (the human brain, the Sun, etc.).

1.5. Requirements for models

So, the general requirements for models are as follows.

  1. The model must be relevant. This means the model should address issues important to decision makers.
  2. The model must be productive. This means the obtained simulation results can be successfully applied. This requirement can be met only if the desired result is correctly formulated.
  3. The model must be credible. This means the simulation results should not cause doubts. This requirement is closely related to the concept of adequacy: if the model is inadequate, it cannot give reliable results.
  4. The model must be economical. This means the effect of using the simulation results should exceed the cost of the resources spent on its creation and study.

These requirements (usually referred to as external) are achievable provided that the model possesses the corresponding internal properties.

The model must be:

  1. Essential, i.e., allowing one to reveal the essence of the system's behavior and uncover non-obvious, non-trivial details.
  2. Powerful, i.e., allowing one to obtain a wide range of essential information.
  3. Simple to learn and use, and easily computed on a computer.
  4. Open, i.e., allowing modification.

Let us conclude the topic with a few remarks. It is difficult to limit the scope of mathematical modeling. In the study and creation of industrial and military systems, it is almost always possible to define goals and limitations and to ensure that the design or process obeys natural, technical and (or) economic laws.

The range of analogies that can be used as models is also practically unlimited. Therefore, one must constantly broaden one's education, in one's particular field and, above all, in mathematics.

In recent decades, problems have arisen with unclear and conflicting goals dictated by political and social factors. Mathematical modeling in this area is still problematic. What are these problems? Protection of the environment from pollution; prediction of volcanic eruptions, earthquakes and tsunamis; urban growth; the conduct of military operations; and a number of others. Nevertheless, "the process has begun": progress cannot be stopped, and modeling problems of such supercomplex systems are steadily finding their solutions. Here one should note the leading role of domestic scientists and, first of all, Academician N. N. Moiseev, his students and followers.

Questions for self-control

  1. What is a model? Explain the meaning of the phrase: "a model is both an object and a means of experiment."
  2. Justify the need for modeling.
  3. What theory is the simulation based on?
  4. What are the general classification features of models?
  5. Is it necessary to strive for absolute similarity between the model and the original?
  6. Name and explain three aspects of the modeling process.
  7. What is meant by a structural model?
  8. What is a functional model?
  9. Classification of models according to the nature of the processes occurring in the simulated objects.
  10. The essence of mathematical modeling and its main classes: analytical and simulation.
  11. Name the stages of modeling and give them a brief description.
  12. What is model adequacy? Explain the concepts of isomorphism and isofunctionalism.
  13. General requirements (external) for models.
  14. Internal properties of the model.
  15. Give examples of objects and their possible models in your subject area.

In general, adequacy is understood as the degree of conformity of the model to the real phenomenon or object for whose description it is built. However, the created model is focused, as a rule, on the study of a certain subset of the properties of this object. Therefore, we can assume that the adequacy of the model is determined by the degree of its compliance not so much with the real object as with the goals of the study. This statement is most true for models of designed systems (i.e., in situations where the real system does not exist at all).

Nevertheless, in many cases it is useful to have a formal confirmation (or justification) of the adequacy of the developed model. One of the most common ways to obtain it is to use the methods of mathematical statistics. The essence of these methods is to test the hypothesis put forward (in this case, on the adequacy of the model) on the basis of some statistical criteria. At the same time, it should be borne in mind that when testing hypotheses by the methods of mathematical statistics, statistical criteria cannot prove a single hypothesis: they can only indicate the absence of a refutation.

So, how can one evaluate the adequacy of the developed model to a really existing system?

The evaluation procedure is based on a comparison of measurements on a real system and the results of experiments on a model and can be carried out in various ways. The most common ones are:

– by the average values of the model and system responses;

– by the variances of the deviations of the model responses from the average value of the system responses;

– by the maximum value of the relative deviations of the model responses from the system responses.

The above methods of evaluation are, in fact, quite close to each other, so we will restrict ourselves to considering the first of them. This method tests the hypothesis that the mean value of the observed variable in the model is close to the mean value of the response of the real system.

As a result of experiments on the real system, a set of values (a sample) is obtained. Having performed experiments on the model, one likewise obtains a set of values of the observed variable.

Then estimates of the mathematical expectation and variance of the responses of the model and the system are calculated, after which a hypothesis is put forward about the closeness (in the statistical sense) of the average values of these quantities. The basis for testing the hypothesis is the t-statistic (Student's distribution). Its value, calculated from the test results, is compared with the critical value taken from a reference table. If the calculated value does not exceed the critical one, the hypothesis is accepted. It must be emphasized again that statistical methods are applicable only when the adequacy of the model to an existing system is assessed: it is obviously impossible to carry out measurements on a system that is still being designed. The only way around this obstacle is to take the conceptual model of the designed system as the reference object. Then the assessment of the adequacy of the software-implemented model consists in checking how correctly it reflects the conceptual model.
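The mean-value comparison described above can be sketched with a pooled two-sample t-statistic (the data and the critical value below are illustrative, not from the text):

```python
import math

# Adequacy check by average responses: a pooled two-sample t-statistic
# comparing measurements on the real system with model responses; the
# hypothesis of equal means is accepted when |t| < t_critical.

def pooled_t(sample_a, sample_b):
    n1, n2 = len(sample_a), len(sample_b)
    m1, m2 = sum(sample_a) / n1, sum(sample_b) / n2
    v1 = sum((x - m1) ** 2 for x in sample_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample_b) / (n2 - 1)
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))

system_responses = [10.1, 9.8, 10.3, 10.0, 9.9]   # illustrative data
model_responses = [10.0, 10.2, 9.9, 10.1, 10.0]
t = pooled_t(system_responses, model_responses)
t_critical = 2.306  # two-sided, alpha = 0.05, 8 degrees of freedom
print(abs(t) < t_critical)  # True: the adequacy hypothesis is accepted
```

In practice one would take the critical value from a Student's t table (or a statistics library) for the chosen significance level and degrees of freedom.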
