
Series "Economics and Management"


Construction of a one-parameter stochastic model of the production process

Ph.D. Assoc. Mordasov Yu.P.

University of Mechanical Engineering, 8-916-853-13-32, [email protected]

Annotation. The author has developed a one-parameter stochastic mathematical model of the production process. The model has been tested: for this, a simulation model of a machine-building production process was created that takes into account the influence of random disturbances (failures). Comparison of the results of mathematical and simulation modeling confirms that the mathematical model is suitable for practical use.

Key words: technological process, mathematical model, simulation model, operational control, approbation, random perturbations.

The costs of operational management can be significantly reduced by developing a methodology that allows you to find the optimum between the costs of operational planning and the losses that result from the discrepancy between planned indicators and indicators of real production processes. This means finding the optimal duration of the signal in the feedback loop. In practice, this means a reduction in the number of calculations of calendar schedules for launching assembly units into production and, due to this, saving material resources.

The course of the production process in mechanical engineering is probabilistic in nature. The constant influence of continuously changing factors makes it impossible to predict the course of the production process in space and time over a given horizon (a month, a quarter). In statistical scheduling models, the state of a part at each specific point in time should be given in the form of an appropriate probability (probability distribution) of its being at different workplaces. However, it is necessary to ensure the determinism of the final result of the enterprise. This, in turn, implies the possibility of using deterministic methods to plan certain periods for parts to be in production. However, experience shows that the various interconnections and mutual transitions of real production processes are diverse and numerous. When developing deterministic models, this creates significant difficulties.

An attempt to take into account all the factors that affect the course of production makes the model cumbersome, and it ceases to function as a tool for planning, accounting and regulation.

A simpler method for constructing mathematical models of complex real processes that depend on a large number of different factors, which are difficult or even impossible to take into account, is the construction of stochastic models. In this case, when analyzing the principles of functioning of a real system or when observing its individual characteristics, probability distribution functions are built for some parameters. In the presence of high statistical stability of the quantitative characteristics of the process and their small dispersion, the results obtained using the constructed model are in good agreement with the performance of the real system.

The main prerequisites for building statistical models of economic processes are:

Excessive complexity and associated economic inefficiency of the corresponding deterministic model;

Large deviations of the theoretical indicators obtained as a result of the experiment on the model from the indicators of actually functioning objects.

Therefore, it is desirable to have a simple mathematical apparatus that describes the impact of stochastic disturbances on the global characteristics of the production process (commercial output, volume of work in progress, etc.). That is, to build a mathematical model of the production process, which depends on a small number of parameters and reflects the total influence of many factors of a different nature on the course of the production process. The main task that a researcher should set himself when building a model is not passive observation of the parameters of a real system, but the construction of such a model that, with any deviation under the influence of disturbances, would bring the parameters of the displayed processes to a given mode. That is, under the action of any random factor, a process must be established in the system that converges to a planned solution. At present, in automated control systems, this function is mainly assigned to a person, who is one of the links in the feedback chain in the management of production processes.

Let us turn to the analysis of the real production process. Usually, the duration of the planning period (the frequency of issuing plans to workshops) is selected based on the traditionally established calendar time intervals: shift, day, five days, etc. They are guided mainly by practical considerations. The minimum duration of the planning period is determined by the operational capabilities of the planned bodies. If the production and dispatching department of the enterprise copes with the issuance of adjusted shift tasks to the shops, then the calculation is made for each shift (that is, the costs associated with the calculation and analysis of planned targets are incurred every shift).

To determine the numerical characteristics of the probability distribution of random disturbances, we will build a probabilistic model of a real technological process of manufacturing one assembly unit. Here and hereinafter, the technological process of manufacturing an assembly unit means a sequence of operations (works for the manufacture of these parts or assemblies), documented in the technology. Each technological operation of manufacturing products in accordance with the technological route can be performed only after the previous one. Consequently, the technological process of manufacturing an assembly unit is a sequence of events-operations. Under the influence of various stochastic causes, the duration of an individual operation may change. In some cases, an operation may not be completed during the validity of the given shift task. It is obvious that these events can be decomposed into elementary components: performance and non-performance of individual operations, to which probabilities of performance and non-performance can be assigned.

For a specific technological process, the probability of performing a sequence of K operations can be expressed by the following formula:

P(ξ = k) = (1 − p_{k+1}) · ∏_{i=1}^{k} p_i ,    (1)

where p_i is the probability of performing the i-th operation, taken separately, and i is the order number of the operation in the technological process.
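Formula (1) can be sketched in code. The per-operation probabilities below are assumed illustration values, not data from the article:

```python
def prob_exactly_k(p, k):
    """P(xi = k): the first k operations succeed and, for k < n,
    the (k+1)-th operation fails; for k = n the whole process succeeds."""
    prod = 1.0
    for pi in p[:k]:          # product of p_1..p_k
        prod *= pi
    if k < len(p):            # factor (1 - p_{k+1}) unless k = n
        prod *= 1.0 - p[k]
    return prod

p = [0.95, 0.92, 0.97, 0.90]                       # assumed example values
dist = [prob_exactly_k(p, k) for k in range(len(p) + 1)]
assert abs(sum(dist) - 1.0) < 1e-12                # forms a probability distribution
```

The normalization check holds because the events "exactly k operations performed", k = 0…n, are mutually exclusive and exhaustive.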

This formula can be used to determine the stochastic characteristics of a specific planning period, when the range of products launched into production and the list of works that must be performed in a given planning period, as well as their stochastic characteristics, which are determined empirically, are known. In practice, only certain types of mass production, which have a high statistical stability of characteristics, satisfy the listed requirements.

The probability of performing one single operation depends not only on external factors, but also on the specific nature of the work performed and on the type of assembly unit.

To determine the parameters of the above formula, even with a relatively small set of assembly units, with small changes in the range of manufactured products, a significant amount of experimental data is required, which causes significant material and organizational costs and makes this method for determining the probability of uninterrupted production of products hardly applicable.

Let us subject the obtained model to the study for the possibility of its simplification. The initial value of the analysis is the probability of failure-free execution of one operation of the technological process of manufacturing products. In real production conditions, the probabilities of performing operations of each type are different. For a specific technological process, this probability depends on:

From the type of operation performed;

From a specific assembly unit;

From products manufactured in parallel;

from external factors.

Let us analyze the influence of fluctuations in the probability of performing one operation on the aggregated characteristics of the production process (the volume of commercial output, the volume of work in progress, etc.) determined using this model. The aim of the study is to analyze the possibility of replacing the various probabilities of performing one operation in the model with a single average value.

The combined effect of all these factors is taken into account when calculating the average geometric probability of performing one operation of the averaged technological process. An analysis of modern production shows that it fluctuates slightly: practically within 0.9 - 1.0.

A clear illustration of what a per-operation success probability as low as 0.9 implies is the following abstract example. Suppose we have ten parts to make. The technological process of manufacturing each of them contains ten operations. The probability of performing each operation is 0.9. Let us find the probabilities of lagging behind the schedule for different numbers of technological processes.

A random event consisting in a specific technological process of manufacturing an assembly unit falling behind the schedule corresponds to the non-performance of at least one operation in this process. It is the complement of the event that all operations are executed without failure, whose probability is 0.9^10, so the delay probability is 1 − 0.9^10 ≈ 0.65. Since schedule delays of individual processes are independent events, the binomial (Bernoulli) distribution can be used to determine the probability that a given number of processes falls behind the schedule. The calculation results are shown in Table 1.

Table 1

Calculation of the probabilities of lagging behind the schedule of technological processes

k    C_10^k · 0.65^k · 0.35^(10−k)    Sum

The table shows that with a probability of 0.92, five technological processes will fall behind the schedule, that is, half. The mathematical expectation of the number of technological processes lagging behind the schedule will be 6.5. This means that, on average, 6.5 assembly units out of 10 will lag behind the schedule. That is, on average, from 3 to 4 parts will be produced without failures. The author is unaware of examples of such a low level of labor organization in real production. The considered example clearly shows that the imposed restriction on the value of the probability of performing one operation without failures does not contradict practice. All of the above requirements are met by the production processes of machine-assembly shops of machine-building production.
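The numbers in this example are easy to reproduce. The sketch below recomputes the delay probability and the expected number of lagging processes; the binomial tail it computes comes out close to, though not exactly, the 0.92 quoted in the text:

```python
import math

p_op = 0.9        # probability of performing one operation
n_ops = 10        # operations in one technological process
n_proc = 10       # number of processes launched

p_delay = 1 - p_op ** n_ops          # probability one process falls behind, ~0.65
expected = n_proc * p_delay          # expected number of delayed processes, ~6.5

# P(at least 5 of the 10 processes fall behind): binomial tail
p_at_least_5 = sum(
    math.comb(n_proc, k) * p_delay ** k * (1 - p_delay) ** (n_proc - k)
    for k in range(5, n_proc + 1)
)
```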

Thus, to determine the stochastic characteristics of production processes, it is proposed to construct a probability distribution for the operational execution of one technological process, which expresses the probability of performing a sequence of technological operations for manufacturing an assembly unit through the geometric mean probability of performing one operation. The probability of performing K operations in this case equals the product of the probabilities of performing each operation, multiplied by the probability of not performing the rest of the technological process, which coincides with the probability of not performing the (K + 1)-th operation. This is explained by the fact that if any operation is not performed, the following ones cannot be executed either. The last entry differs from the rest, as it expresses the probability of the complete failure-free passage of the entire technological process. The probability of performing the first K operations of the technological process is uniquely related to the probability of not performing the remaining operations. Thus, the probability distribution has the following form:

P(ξ = 0) = p^0 (1 − p),

P(ξ = 1) = p^1 (1 − p),    (2)

…

P(ξ = n − 1) = p^(n−1) (1 − p),

P(ξ = n) = p^n,

where ξ is a random variable, the number of performed operations; p is the geometric mean probability of performing one operation; n is the number of operations in the technological process.
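Distribution (2) is a truncated geometric distribution, which is easy to tabulate and check for normalization (the values of p and n below are assumed for illustration):

```python
# Truncated geometric distribution of the number of completed operations:
# P(xi = k) = p^k * (1 - p) for k < n, and P(xi = n) = p^n.
def one_param_dist(p, n):
    probs = [p ** k * (1 - p) for k in range(n)]
    probs.append(p ** n)                  # the whole process succeeds
    return probs

probs = one_param_dist(0.95, 20)          # assumed p and n
assert abs(sum(probs) - 1.0) < 1e-12      # normalization check
p_complete = probs[-1]                    # probability of a failure-free process
```

The normalization holds because (1 − p)(1 + p + … + p^(n−1)) = 1 − p^n, which the last term p^n completes to 1.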

The validity of applying the obtained one-parameter probability distribution is intuitively evident from the following reasoning. Let us assume that we have calculated the geometric mean probability of performing one operation from a sample of n elements, where n is sufficiently large.

p = (∏_{i=1}^{n} p_i)^(1/n) = (∏_{j=1}^{m} p_j^{l_j})^(1/n),    (3)

where: l_j is the number of operations that have the same probability of execution; j is the index of a group of operations with the same probability of execution; m is the number of groups consisting of operations with the same probability of execution;

ν_j = l_j / n is the relative frequency of occurrence of operations with execution probability p_j.

According to the law of large numbers, with an unlimited number of operations, the relative frequency of occurrence in a sequence of operations with certain stochastic characteristics tends in probability to the probability of this event. Whence it follows that

for two sufficiently large samples ν_j^(1) ≈ ν_j^(2), and therefore

p_1 = (∏_{j=1}^{m_1} p_j^{l_j^(1)})^(1/n_1) ≈ (∏_{j=1}^{m_2} p_j^{l_j^(2)})^(1/n_2) = p_2,

where: m_1, m_2 are the numbers of groups in the first and second samples, respectively; l_j^(1), l_j^(2) are the numbers of elements in the j-th group of the first and second samples; n_1, n_2 are the sample sizes.

It can be seen from this that if the parameter p is calculated from a large number of trials, it will be close to the parameter p calculated from another sufficiently large sample.

Attention should be paid to the differing accuracy of the probabilities of performing different numbers of process operations. All elements of the distribution except the last contain the factor (1 − p). Since the value of the parameter p lies in the range 0.9 – 1.0, the factor (1 − p) varies between 0 and 0.1. This factor corresponds to the factor (1 − p_i) in the original model. Experience shows that this correspondence for a particular probability can cause an error of up to 300%. However, in practice one is usually interested not in the probabilities of performing a particular number of operations, but in the probability of the complete failure-free execution of the technological process. This probability does not contain the factor (1 − p), and therefore its deviation from the actual value is small (practically no more than 3%). For economic tasks this is fairly high accuracy.

The probability distribution of a random variable constructed in this way is a stochastic dynamic model of the manufacturing process of an assembly unit. Time participates in it implicitly, as the duration of one operation. The model allows you to determine the probability that after a certain period of time (the corresponding number of operations) the production process of manufacturing an assembly unit will not be interrupted. For mechanical assembly shops of machine-building production, the average number of operations of one technological process is quite large (15 - 80). If we consider this number as a base number and assume that, on average, in the manufacture of one assembly unit, a small set of enlarged types of work is used (turning, locksmith, milling, etc.),

then the resulting distribution can be successfully used to assess the impact of stochastic disturbances on the course of the production process.

The author conducted a simulation experiment built on this principle. To generate a sequence of pseudo-random variables uniformly distributed over the interval 0.9 – 1.0, a pseudo-random number generator was used, described in . The software for the experiment is written in the COBOL programming language.

In the experiment, products of generated random variables are formed, simulating the real probabilities of the complete execution of a specific technological process. They are compared with the probability of performing the technological process, obtained using the geometric mean value, which was calculated for a certain sequence of random numbers of the same distribution. The geometric mean is raised to a power equal to the number of factors in the product. Between these two results, the relative difference in percent is calculated. The experiment is repeated for a different number of factors in the products and the number of numbers for which the geometric mean is calculated. A fragment of the results of the experiment is shown in Table 2.
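The scheme of this experiment can be sketched as follows. The COBOL sources are not available, so this is a loose reconstruction with assumed sample sizes, and the original generator is replaced by the standard library one:

```python
import math
import random

random.seed(1)                         # reproducibility of the sketch

n = 16                                 # numbers used for the geometric mean (assumed)
k = 25                                 # factors in the product (assumed)

# Per-operation success probabilities, uniform on [0.9, 1.0] as in the text.
sample = [random.uniform(0.9, 1.0) for _ in range(n)]
geo_mean = math.prod(sample) ** (1.0 / n)

# "True" probability of performing k operations without failure: the product
# of k freshly drawn per-operation probabilities.
ops = [random.uniform(0.9, 1.0) for _ in range(k)]
true_prob = math.prod(ops)

approx = geo_mean ** k                 # one-parameter approximation
deviation = 100.0 * (approx - true_prob) / true_prob   # relative deviation, %
```

For small n and large k a single run can deviate noticeably; the article's Table 2 reports deviations within 9% for its particular generator and sample sizes.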

Table 2

Simulation experiment results:

n is the number of values from which the geometric mean is calculated; k is the number of factors in the product

n    k    Product    Deviation    k    Product    Deviation    k    Product    Deviation

10 1 0,9680 0% 7 0,7200 3% 13 0,6277 -7%

10 19 0,4620 -1% 25 0,3577 -1% 31 0,2453 2%

10 37 0,2004 6% 43 0,1333 4% 49 0,0888 6%

10 55 0,0598 8% 61 0,0475 5% 67 0,0376 2%

10 73 0,0277 1% 79 0,0196 9% 85 0,0143 2%

10 91 0,0094 9% 97 0,0058 0%

13 7 0,7200 8% 13 0,6277 0% 19 0,4620 0%

13 25 0,3577 5% 31 0,2453 6% 37 0,2004 4%

13 43 0,1333 3% 49 0,0888 8% 55 0,0598 8%

13 61 0,0475 2% 67 0,0376 8% 73 0,0277 2%

13 79 0,0196 1% 85 0,0143 5% 91 0,0094 5%

16 1 0,9680 0% 7 0,7200 9%

16 13 0,6277 2% 19 0,4620 3% 25 0,3577 0%

16 31 0,2453 2% 37 0,2004 2% 43 0,1333 5%

16 49 0,0888 4% 55 0,0598 0% 61 0,0475 7%

16 67 0,0376 5% 73 0,0277 5% 79 0,0196 2%

16 85 0,0143 4% 91 0,0094 0% 97 0,0058 4%

19 4 0,8157 4% 10 0,6591 1% 16 0,5795 -9%

19 22 0,4373 -5% 28 0,2814 5% 34 0,2256 3%

19 40 0,1591 6% 46 0,1118 1% 52 0,0757 3%

19 58 0,0529 4% 64 0,0418 3% 70 0,0330 2%

19 76 0,0241 6% 82 0,0160 1% 88 0,0117 8%

19 94 0,0075 7% 100 0,0048 3%

22 10 0,6591 4% 16 0,5795 -4% 22 0,4373 0%

22 28 0,2814 5% 34 0,2256 5% 40 0,1591 1%

22 46 0,1118 1% 52 0,0757 0% 58 0,0529 8%

22 64 0,0418 1% 70 0,0330 3% 76 0,0241 5%

22 82 0,0160 4% 88 0,0117 2% 94 0,0075 5%

22 100 0,0048 1%

25 4 0,8157 3% 10 0,6591 0%

25 16 0,5795 0% 22 0,4373 -7% 28 0,2814 2%

25 34 0,2256 9% 40 0,1591 1% 46 0,1118 4%

25 52 0,0757 5% 58 0,0529 4% 64 0,0418 2%

25 70 0,0330 0% 76 0,0241 2% 82 0,0160 4%

28 4 0,8157 2% 10 0,6591 -2% 16 0,5795 -5%

28 22 0,4373 -3% 28 0,2814 2% 34 0,2256 -1%

28 40 0,1591 6% 46 0,1118 6% 52 0,0757 1%

28 58 0,0529 4% 64 0,0418 9% 70 0,0330 5%

28 76 0,0241 2% 82 0,0160 3% 88 0,0117 1%

28 94 0,0075 100 0,0048 5%

31 10 0,6591 -3% 16 0,5795 -5% 22 0,4373 -4%

31 28 0,2814 0% 34 0,2256 -3% 40 0,1591 4%

31 46 0,1118 3% 52 0,0757 7% 58 0,0529 9%

31 64 0,0418 4% 70 0,0330 0% 76 0,0241 6%

31 82 0,0160 6% 88 0,0117 2% 94 0,0075 5%

When setting up this simulation experiment, the goal was to explore the possibility of obtaining, using the probability distribution (2), one of the enlarged statistical characteristics of the production process - the probability of performing one technological process of manufacturing an assembly unit consisting of K operations without failures. For a specific technological process, this probability is equal to the product of the probabilities of performing all its operations. As the simulation experiment shows, its relative deviations from the probability obtained using the developed probabilistic model do not exceed 9%.

Since the simulation experiment uses a probability distribution less favorable than the real one, the practical discrepancies will be even smaller. Deviations are observed both below and above the value obtained from the averaged characteristics. This suggests that if we consider the deviation of the probability of failure-free execution not of a single technological process but of several, it will be much smaller, and the smaller, the more technological processes are considered. Thus, the simulation experiment shows good agreement between the probability of failure-free execution of the technological process of manufacturing products and the probability obtained using the one-parameter mathematical model.

In addition, simulation experiments were carried out:

To study the statistical convergence of the probability distribution parameter estimate;

To study the statistical stability of the mathematical expectation of the number of operations performed without failures;

To analyze methods for determining the duration of the minimum planning period and assessing the discrepancy between planned and actual indicators of the production process, if the planned and production periods do not coincide in time.

Experiments have shown good agreement between the theoretical data obtained through the use of the techniques and the empirical data obtained by computer simulation of real production processes.

Based on the application of the constructed mathematical model, the author has developed three specific methods for improving the efficiency of operational management. For their approbation, separate simulation experiments were carried out.

1. Methodology for determining the rational volume of the production task for the planning period.

2. Methodology for determining the most effective duration of the operational planning period.

3. Evaluation of the discrepancy in the event of a mismatch in time between the planned and production periods.

Literature

1. Mordasov Yu.P. Determining the duration of the minimum operational planning period under the action of random disturbances // Economic-mathematical and simulation modeling using computers. M.: MIU im. S. Ordzhonikidze, 1984.

2. Naylor T. Machine simulation experiments with models of economic systems. M.: Mir, 1975.

The transition from concentration to diversification is an effective way to develop the economy of small and medium-sized businesses

prof. Kozlenko N. N. University of Mechanical Engineering

Annotation. This article considers the problem of choosing the most effective development of Russian small and medium-sized businesses through the transition from a concentration strategy to a diversification strategy. The issues of diversification feasibility, its advantages, criteria for choosing the path of diversification are considered, a classification of diversification strategies is given.

Key words: small and medium businesses; diversification; strategic fit; competitive advantages.

An active change in the parameters of the macro environment (changes in market conditions, the emergence of new competitors in related industries, an increase in the level of competition in general) often leads to non-fulfillment of the strategic plans of small and medium-sized businesses and to a loss of financial and economic stability of enterprises, due to a significant gap between the objective conditions for the activities of small and medium-sized enterprises and the level of technology of their management.

The main conditions for economic stability and the possibility of maintaining competitive advantages are the ability of the management system to respond in a timely manner and change internal production processes (change the assortment taking into account diversification, rebuild production and technological processes, change the structure of the organization, use innovative marketing and management tools).

A study of the practice of Russian small and medium-sized enterprises of production type and service has revealed the following features and basic cause-and-effect relationships regarding the current trend in the transition of small enterprises from concentration to diversification.

Most small and medium-sized businesses start out as small single-business companies serving local or regional markets. At the beginning of its activity, the product range of such a company is very limited, its capital base is weak, and its competitive position is vulnerable. Typically, the strategy of such companies focuses on sales growth and market share, as well as

4. Scheme for constructing stochastic models

The construction of a stochastic model includes the development, quality assessment and study of the system behavior using equations that describe the process under study. To do this, by conducting a special experiment with a real system, the initial information is obtained. In this case, methods of planning an experiment, processing results, as well as criteria for evaluating the obtained models, based on such sections of mathematical statistics as dispersion, correlation, regression analysis, etc., are used.

Stages of development of a stochastic model:

1. formulation of the problem;
2. choice of factors and parameters;
3. selection of the model type;
4. planning of the experiment;
5. implementation of the experiment according to the plan;
6. building a statistical model;
7. model validation (related to steps 8, 9, 2, 3, 4);
8. model adjustment;
9. process exploration with the model (related to step 11);
10. definition of optimization parameters and constraints;
11. process optimization with the model (related to steps 10 and 13);
12. collecting experimental information from automation equipment;
13. process control with the model (related to step 12).

Combining steps 1 to 9 gives an information model, steps 1 to 11 give an optimization model, and all 13 steps together give a management model.

5. Tools for processing models

Using CAE systems, you can perform the following procedures for processing models:

    overlaying a finite element mesh on a 3D model,

    problems of the heat-stressed state;

    problems of fluid dynamics;

    problems of heat and mass transfer;

    contact tasks;

    kinematic and dynamic calculations, etc.

    simulation modeling of complex production systems based on queuing models and Petri nets

Typically, CAE modules provide color and halftone rendering of results, superposition of the original and deformed parts, and visualization of liquid and gas flows.

Examples of systems for modeling fields of physical quantities in accordance with the FEM: Nastran, Ansys, Cosmos, Nisa, Moldflow.

Examples of systems for modeling dynamic processes at the macro level: Adams and Dyna - in mechanical systems, Spice - in electronic circuits, PA9 - for multidimensional modeling, i.e. for modeling systems, the principles of which are based on the mutual influence of physical processes of various nature.

6. Mathematical modeling. Analytical and simulation models

Mathematical model - a set of mathematical objects (numbers, variables, sets, etc.) and relations between them, which adequately reflects some (essential) properties of the designed technical object. Mathematical models can be geometric, topological, dynamic, logical, etc.

The main requirements for a mathematical model are:

- adequacy of the representation of the simulated objects (the area of adequacy is the region in the parameter space within which the errors of the model remain within acceptable limits);

- economy (computational efficiency) is determined by the cost of resources required for the implementation of the model (computer time, memory used, etc.);

- accuracy - determines the degree of coincidence of the calculated and true results (the degree of correspondence between the estimates of the properties of the same name of the object and the model).

Mathematical modeling is the process of building mathematical models. It includes the following steps: setting the task; building the model and analyzing it; developing methods for obtaining design solutions on the model; experimental verification and correction of the model and methods.

The quality of the created mathematical models largely depends on the correct formulation of the problem. It is necessary to determine the technical and economic goals of the problem being solved, to collect and analyze all the initial information, to determine the technical limitations. In the process of building models, methods of system analysis should be used.

The modeling process, as a rule, is iterative in nature, which provides for refinement of previous decisions made at the previous stages of model development at each iteration step.

Analytical Models - numerical mathematical models that can be represented as explicit dependences of output parameters on internal and external parameters. Simulation models - numerical algorithmic models that display the processes in the system in the presence of external influences on the system. Algorithmic models are models in which the relationship between output, internal and external parameters is implicitly specified in the form of a modeling algorithm. Simulation models are often used at the system design level. Simulation modeling is performed by reproducing events that occur simultaneously or sequentially in model time. An example of a simulation model can be considered the use of a Petri net to simulate a queuing system.
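As an illustration of the distinction, a queuing system of the kind mentioned above can be given as a minimal simulation model. This is a generic single-server queue sketch (rates and job count are arbitrary choices), not an implementation from the text:

```python
import random

random.seed(42)  # reproducibility

# Event-by-event simulation of a single-server queue (M/M/1): the mean wait
# emerges from reproducing arrivals and services in model time, not from a
# closed-form expression, which is what makes this a simulation model.
def mm1_mean_wait(arrival_rate, service_rate, n_jobs):
    t_arrive = 0.0
    server_free_at = 0.0
    total_wait = 0.0
    for _ in range(n_jobs):
        t_arrive += random.expovariate(arrival_rate)   # next arrival
        start = max(t_arrive, server_free_at)          # wait if server busy
        total_wait += start - t_arrive
        server_free_at = start + random.expovariate(service_rate)
    return total_wait / n_jobs

w = mm1_mean_wait(0.8, 1.0, 50_000)
# the analytical model of the same system gives Wq = 0.8 / (1.0 - 0.8) = 4.0
```

The comment on the last line shows the corresponding analytical model: the same output parameter expressed as an explicit dependence on the input rates.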

7. Basic principles for constructing mathematical models

Classical (inductive) approach. The real object to be modeled is divided into separate subsystems, i.e. initial data for modeling are selected and goals are set that reflect certain aspects of the modeling process. Based on a separate set of initial data, the goal is to model a separate aspect of the system's functioning; on the basis of this goal, a certain component of the future model is formed. The set of components is combined into a model.

Such a classical approach can be used to create fairly simple models in which separation and mutually independent consideration of individual aspects of the functioning of a real object is possible. Implements the movement from the particular to the general.

Systems approach. Based on the initial data that are known from the analysis of the external system, those restrictions that are imposed on the system from above or based on the possibilities of its implementation, and on the basis of the purpose of functioning, the initial requirements for the system model are formulated. On the basis of these requirements, approximately some subsystems and elements are formed and the most difficult stage of synthesis is carried out - the choice of system components, for which special selection criteria are used. The system approach also implies a certain sequence of model development, which consists in distinguishing two main design stages: macro-design and micro-design.

Macro-design stage. On the basis of data about the real system and the external environment, a model of the external environment is built, resources and limitations for building the system model are identified, the system model is selected, and criteria are chosen to assess its adequacy to the real system. Once the system model and the environment model have been built, the optimal control strategy is chosen in the course of modeling, based on the criterion of the efficiency of the system's functioning; this realizes the model's ability to reproduce particular aspects of the functioning of the real system.

Micro-design stage. This stage largely depends on the particular type of model chosen. For a simulation model, it is necessary to create the information, mathematical, technical, and software components of the modeling system. At this stage, the main characteristics of the created model can be established, and the time of working with it and the cost of resources needed to obtain a given quality of correspondence between the model and the functioning of the system can be estimated. Regardless of the type of model used, its construction must be guided by a number of principles of the systems approach:

    proportionally-sequential progress through the stages and directions of model creation;

    coordination of information, resource, reliability and other characteristics;

    the correct ratio of individual levels of the hierarchy in the modeling system;

    the integrity of individual isolated stages of model building.

8. Analysis of the methods used in mathematical modeling

In mathematical modeling, partial differential or integro-differential equations are solved by numerical methods. These methods are based on the discretization of the independent variables: their representation by a finite set of values at selected nodal points of the space under study. These points are considered nodes of some grid.

Among the grid methods, two are most widely used: the finite difference method (FDM) and the finite element method (FEM). Usually the spatial independent variables are discretized, i.e. a spatial grid is used. In this case, discretization yields a system of ordinary differential equations, which is then reduced to a system of algebraic equations using the boundary conditions.
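The reduction of a differential equation to an algebraic system can be sketched for the simplest one-dimensional case. The example below (a hypothetical boundary-value problem, not one taken from the text) discretizes -u'' = f on [0, 1] with zero boundary conditions and solves the resulting tridiagonal system:

```python
import math

def solve_poisson_1d(f, n):
    """Finite-difference solution of -u'' = f on [0, 1] with u(0) = u(1) = 0.

    A uniform grid with step h reduces the differential equation to the
    tridiagonal algebraic system -u[i-1] + 2*u[i] - u[i+1] = h^2 * f(x_i),
    which is solved by the Thomas algorithm (forward sweep, back substitution).
    """
    h = 1.0 / (n + 1)
    d = [f((i + 1) * h) * h * h for i in range(n)]
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, d[0] / 2.0
    for i in range(1, n):
        m = 2.0 + cp[i - 1]          # pivot after eliminating the subdiagonal
        cp[i] = -1.0 / m
        dp[i] = (d[i] + dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# f is chosen so that the exact solution is u(x) = sin(pi * x)
u = solve_poisson_1d(lambda x: math.pi ** 2 * math.sin(math.pi * x), 99)
print(abs(u[49] - 1.0) < 1e-3)
```

The three-node pattern -u[i-1] + 2u[i] - u[i+1] in the comment is exactly the "template" (stencil) discussed below for the FDM.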

Let it be necessary to solve the equation

LV(z) = f(z)

with the boundary conditions

MV(z) = ψ(z),

where L and M are differential operators, V(z) is a phase variable, z = (x1, x2, x3, t) is the vector of independent variables, and f(z) and ψ(z) are given functions of the independent variables.

In the FDM, the algebraization of derivatives with respect to the spatial coordinates is based on approximating derivatives by finite-difference expressions. When using the method, one must select the grid step for each coordinate and the type of template. A template (stencil) is a set of nodal points whose variable values are used to approximate the derivative at one particular point.

The FEM is based on approximating not the derivatives but the solution V(z) itself. Since the solution is unknown, the approximation is performed by expressions with undefined coefficients.

In this case, the solution is approximated within finite elements, and given their small size, relatively simple approximating expressions (for example, low-degree polynomials) can be used. Substituting such polynomials into the original differential equation and performing the differentiation operations yields the values of the phase variables at the given points.

Polynomial approximation. The use of these methods rests on the possibility of approximating a smooth function by a polynomial and then using the approximating polynomial to estimate the coordinate of the optimum point. The necessary conditions for the effective implementation of this approach are unimodality and continuity of the function under study. By the Weierstrass approximation theorem, a function continuous on an interval can be approximated to any degree of accuracy by a polynomial of sufficiently high order. Accordingly, the quality of the optimum-point estimates obtained with the approximating polynomial can be improved in two ways: by using a polynomial of higher order and by decreasing the approximation interval. The simplest variant of polynomial interpolation is quadratic approximation, based on the fact that a function taking its minimum value at an interior point of an interval must be at least quadratic.
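The three-point quadratic estimate can be written down directly; the code below uses the standard successive-quadratic-estimation formula, and the test function is purely illustrative:

```python
def quadratic_min_estimate(f, x1, x2, x3):
    """Estimate the minimizer of a unimodal f from the parabola through
    three points (the standard successive quadratic estimation formula)."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# for a function that is itself quadratic the estimate is exact
est = quadratic_min_estimate(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 1.0, 3.0)
print(est)  # → 2.0
```

For non-quadratic functions the estimate is inexact, and accuracy is improved exactly as stated above: by shrinking the bracketing interval around the current estimate.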

Discipline "Models and methods of analysis of design solutions" (Kazakov Yu.M.)

    1. Classification of mathematical models.

    2. Levels of abstraction of mathematical models.

    3. Requirements for mathematical models.

    4. Scheme for constructing stochastic models.

    5. Model processing tools.

    6. Mathematical modeling. Analytical and simulation models.

    7. Basic principles for constructing mathematical models.

    8. Analysis of the methods used in mathematical modeling.

1. Classification of mathematical models

Mathematical model (MM) of a technical object is a set of mathematical objects (numbers, variables, matrices, sets, etc.) and relations between them, which adequately reflects the properties of a technical object that are of interest to an engineer developing this object.

By the nature of displaying the properties of the object:

    Functional - designed to display the physical or information processes occurring in technical systems during their operation. A typical functional model is a system of equations describing either electrical, thermal, mechanical processes or information transformation processes.

    Structural - display the structural properties of the object (topological, geometric). Structural models are most often represented as graphs.

By belonging to the hierarchical level:

    Models of the microlevel - display of physical processes in continuous space and time. For modeling, the apparatus of equations of mathematical physics is used. Examples of such equations are partial differential equations.

    Macrolevel models. The description is enlarged: space is considered in discretized rather than continuous form. Functional models at the macrolevel are systems of algebraic or ordinary differential equations; appropriate numerical methods are used to form and solve them.

    Metalevel models. An enlarged description of the objects under consideration. Mathematical models at the metalevel are systems of ordinary differential equations, systems of logical equations, and simulation models of queuing systems.

How to get the model:

    Theoretical - built on the basis of studying regularities. Unlike empirical models, theoretical models are in most cases more universal and applicable to a wider range of problems. Theoretical models may be linear or nonlinear, continuous or discrete, dynamic or static.

    Empirical - built on the basis of experimental data and the processing of their results.

The main requirements for mathematical models in CAD:

    adequacy of the representation of the simulated objects;

Adequacy holds if the model reflects the given properties of the object with acceptable accuracy; it is evaluated by the list of reflected properties and the regions of adequacy. A region of adequacy is the region in parameter space within which the errors of the model remain within acceptable limits.

    economy (computational efficiency) - determined by the cost of resources required to implement the model (computer time, memory used, etc.);

    accuracy - determines the degree of coincidence of the calculated and true results (the degree of correspondence between estimates of the same properties of the object and of the model).

A number of other requirements are also imposed on mathematical models:

    Computability, i.e. the possibility of manual or with the help of a computer to study the qualitative and quantitative patterns of the functioning of an object (system).

    Modularity, i.e. correspondence of the model constructions to the structural components of the object (system).

    Algorithmizability, i.e. the possibility of developing an appropriate algorithm and a program that implements a mathematical model on a computer.

    visibility, i.e. convenient visual perception of the model.

Table. Classification of mathematical models

Classification feature                         Types of mathematical models
1. Belonging to a hierarchical level           Microlevel, macrolevel, metalevel models
2. Nature of the displayed object properties   Structural, functional
3. Way of representing object properties       Analytical, algorithmic, simulation
4. Way of obtaining the model                  Theoretical, empirical
5. Features of the object's behavior           Deterministic, probabilistic

Mathematical models at the micro level of the production process reflect the physical processes that occur, for example, when cutting metals. They describe processes at the transition level.

Mathematical models at the macrolevel of the production process describe technological processes.

Mathematical models at the metalevel of the production process describe technological systems (sections, workshops, the enterprise as a whole).

Structural mathematical models are designed to display the structural properties of objects. For example, in CAD of technological processes, structural-logical models are used to represent the structure of a technological process and of product packaging.

Functional mathematical models are designed to display the information, physical, and temporal processes occurring in operating equipment, in the course of technological processes, etc.

Theoretical mathematical models are created as a result of the study of objects (processes) at the theoretical level.

Empirical mathematical models are created as a result of experiments (studying the external manifestations of the properties of an object by measuring its parameters at the input and output) and processing their results using mathematical statistics methods.

Deterministic mathematical models describe the behavior of an object from the standpoint of complete certainty in the present and future. Examples of such models: formulas of physical laws, technological processes for processing parts, etc.

Probabilistic mathematical models take into account the influence of random factors on the behavior of the object, i.e. assess its future in terms of the likelihood of certain events.

Analytical Models - numerical mathematical models that can be represented as explicit dependences of output parameters on internal and external parameters.

Algorithmic mathematical models express the relationship between the output parameters and the input and internal parameters in the form of an algorithm.

Simulation mathematical models are algorithmic models that reflect the development of a process (the behavior of the object under study) in time for given external influences on the process (object). Examples are models of queuing systems given in algorithmic form.


1. An example of building a stochastic process model

In the course of a bank's operation it is very often necessary to solve the problem of choosing an asset vector, i.e. the bank's investment portfolio, and the uncertain parameters that must be taken into account in this problem are primarily related to the uncertainty of asset prices (securities, real investments, etc.). As an illustration, consider the formation of a portfolio of government short-term obligations.

For problems of this class, the fundamental issue is the construction of a model of the stochastic process of price changes, since the operation researcher, of course, has only a finite series of observations of realizations of random variables - prices. Next, one of the approaches to solving this problem is presented, which is being developed at the Computing Center of the Russian Academy of Sciences in connection with solving control problems for stochastic Markov processes.

M types of securities, i = 1, …, M, are considered; they are traded at special exchange sessions. Each security is characterized by its yield during the current session, expressed as a percentage. If a security of a given type is bought at the beginning of a session at one price and sold at the end of the session at another, its yield is determined by the ratio of these prices.

The yields are random variables formed as follows. The existence of basic returns is assumed: random variables that form a Markov process and are determined by the following formula:

Here the coefficients are constants, and the driving terms are standard normally distributed random variables (i.e., with zero mathematical expectation and unit variance).

where a certain scale factor appears, and the random variable, which has the meaning of a deviation from the base value, is determined similarly:

where the driving terms are also standard normally distributed random variables.
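Formulas (1)-(3) themselves are not reproduced in the text above, so the following sketch only assumes an AR(1)-type Markov structure consistent with the description: a shared base return plus per-security deviations, both driven by standard normal noise. All parameter names and values are illustrative.

```python
import random

def generate_yields(n_sessions, n_securities, a=0.7, b=0.1, c=0.5, d=0.05,
                    kappa=1.0, seed=0):
    """Sketch of a yield-generating process of the kind described above.

    Assumed structure (formulas (1)-(3) are not shown in the text): the base
    return follows an AR(1) Markov recursion, each security has its own AR(1)
    deviation from it, and both recursions are driven by standard normal noise.
    """
    rng = random.Random(seed)
    base = 0.0
    dev = [0.0] * n_securities
    rows = []
    for _ in range(n_sessions):
        base = a * base + b * rng.gauss(0.0, 1.0)
        dev = [c * x + d * rng.gauss(0.0, 1.0) for x in dev]
        rows.append([base + kappa * x for x in dev])
    return rows  # rows correspond to sessions, columns to securities

yields = generate_yields(1000, 5)
```

A matrix generated this way plays the role of the Monte Carlo "polygon" discussed below; a second, shorter matrix of the same kind stands in for the observed series.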

It is assumed that some operating party, hereinafter referred to as the operator, manages its capital invested in securities (at any moment the capital is held in securities of exactly one type) for some time, selling them at the end of the current session and immediately buying other securities with the proceeds. The management, i.e. the selection of the securities to buy, is carried out by an algorithm that depends on the operator's awareness of the process forming the yields of the securities. We will consider various hypotheses about this awareness and, accordingly, various control algorithms.

We will assume that the researcher of the operation develops and optimizes the control algorithm using the available series of observations of the process, i.e., using information about closing prices at exchange sessions and, possibly, about the deviations, over a certain time interval corresponding to sessions with given numbers.

The purpose of the experiments is to compare estimates of the expected efficiency of various control algorithms with their theoretical mathematical expectations under conditions when the algorithms are tuned and evaluated on the same series of observations. The theoretical mathematical expectation is estimated by the Monte Carlo method, by "sweeping" the control over a sufficiently large generated series, i.e. over a matrix whose columns correspond to realizations of the yields by sessions; the number of rows is determined by the available computational capacity, provided that the number of matrix elements is at least 10,000. The "polygon" must be the same in all experiments. The available series of observations is simulated by a generated matrix of the same kind, whose cell values have the same meaning as above; the dimensions and values of this matrix will be varied in what follows.
Matrices of both types are formed by a procedure that generates random numbers simulating the realizations of the random variables and then computes the required matrix elements from these realizations using formulas (1)-(3).

Evaluation of control efficiency on a series of observations is made according to the formula

where the index of the last session in the series of observations enters, together with the number of the bond selected by the algorithm at each step, i.e. the type of bond in which, according to the algorithm, the operator's capital is held during the session. In addition, we will also calculate the monthly efficiency. The number 22 roughly corresponds to the number of trading sessions per month.
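Formula (4) is likewise not shown in the text, so the sketch below assumes the natural reading: efficiency is the capital growth factor obtained by compounding the per-session yields of the chosen bonds, with the monthly figure normalized to 22 sessions.

```python
def control_efficiency(yields, choices):
    """Capital growth over the series: at session t all capital is held in
    the bond choices[t]; yields[t][i] is that bond's per-session return.
    (Assumed reading of formula (4), which is not shown in the text.)"""
    capital = 1.0
    for t, i in enumerate(choices):
        capital *= 1.0 + yields[t][i]
    return capital

def monthly_efficiency(total_efficiency, n_sessions):
    # 22 trading sessions roughly correspond to one month
    return total_efficiency ** (22.0 / n_sessions)

print(control_efficiency([[0.10, 0.00], [0.00, 0.20]], [0, 1]))
```

Here holding bond 0 in the first session (+10%) and bond 1 in the second (+20%) compounds to a growth factor of about 1.32.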

Computational experiments and analysis of results

Hypotheses

1. Exact knowledge by the operator of future returns.

The index is chosen accordingly. This option gives an upper estimate for all possible control algorithms, even if additional information (taking some additional factors into account) would allow refining the price forecast model.

2. Random control.

The operator does not know the law of pricing and conducts operations by random selection. Theoretically, in this model the mathematical expectation of the result of the operations is the same as if the operator invested not in one security but equally in all of them. With zero mathematical expectations of the deviations, the mathematical expectation of the result equals 1. Calculations under this hypothesis are useful only in that they allow, to some extent, checking the correctness of the written programs and of the generated matrix of yields.

3. Management with accurate knowledge of the profitability model, all its parameters, and the observed value.

In this case, at the end of the session, knowing the corresponding values for both sessions (in our calculations, the corresponding rows of the matrices), the operator calculates the mathematical expectations by formulas (1)-(3).

where, according to (2), . (6)

4. Control with knowledge of the structure of the yield model and the observed value, but with unknown coefficients.

We will assume that the researcher of the operation knows neither the values of the coefficients nor the number of preceding values that influence the formation of these parameters (the memory depth of the Markov processes). Nor does the researcher know whether the coefficients are the same or different for different securities. Let us consider different variants of the researcher's actions: 4.1, 4.2, and 4.3, where the second index denotes the researcher's assumption about the memory depth of the processes (the same for both processes). For example, in case 4.3 the researcher assumes that the yield is formed according to the equation

Here, for the sake of completeness, a free term has been added. However, this term can be excluded either on substantive grounds or by statistical methods. Therefore, to simplify the calculations, we exclude free terms from consideration when tuning the parameters, and formula (7) takes the form:

Depending on whether the researcher assumes the coefficients to be the same or different for different securities, we consider subcases 4.m.1 and 4.m.2, m = 1, 2, 3. In cases 4.m.1 the coefficients are tuned from the observed values for all securities together. In cases 4.m.2 the coefficients are tuned for each security separately, the researcher working under the hypothesis that the coefficients differ across securities; for example, in case 4.2.2 the values are determined by the modified formula (3)

The first tuning method is the classical method of least squares. Let us consider it on the example of tuning the coefficients in variants 4.3.

According to formula (8),

It is required to find the values of the coefficients that minimize the sample variance over the realizations on the known series of observations (the array), provided that the mathematical expectation of the values is determined by formula (9).

Here and in what follows, the sign "" indicates the realization of a random variable.

The minimum of the quadratic form (10) is attained at the unique point where all partial derivatives vanish. From this we obtain a system of three linear algebraic equations:

the solution of which gives the desired values of the coefficients.
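For case 4.3 this least-squares step can be sketched as follows, assuming (consistently with formula (8)) that the expected value is a linear combination of the three preceding values; the coefficient layout and names are illustrative:

```python
import random

def fit_ar3(series):
    """Least-squares fit of x[t] ~ k1*x[t-1] + k2*x[t-2] + k3*x[t-3].

    Setting the partial derivatives of the sum of squared residuals to zero
    gives a 3x3 linear system (the normal equations), solved by elimination.
    """
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for t in range(3, len(series)):
        lags = [series[t - 1], series[t - 2], series[t - 3]]
        for i in range(3):
            b[i] += lags[i] * series[t]
            for j in range(3):
                A[i][j] += lags[i] * lags[j]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            m = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= m * A[col][j]
            b[r] -= m * b[col]
    k = [0.0] * 3
    for r in (2, 1, 0):
        k[r] = (b[r] - sum(A[r][j] * k[j] for j in range(r + 1, 3))) / A[r][r]
    return k

# recover known coefficients from a synthetic noisy series
rng = random.Random(1)
xs = [rng.gauss(0.0, 1.0) for _ in range(3)]
for _ in range(5000):
    xs.append(0.5 * xs[-1] + 0.3 * xs[-2] + 0.1 * xs[-3] + rng.gauss(0.0, 1.0))
print([round(v, 2) for v in fit_ar3(xs)])
```

On a sufficiently long series the fitted coefficients approach the true generating values, which is exactly why the least-squares point is a sensible starting point for the second tuning method below.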

After the coefficients have been tuned, the choice of controls is carried out in the same way as in case 3.

Comment. To simplify the programming, it is convenient to write the control selection procedure described for hypothesis 3 using not formula (5) but its modified version in the form

In this case, in the calculations for cases 4.1.m and 4.2.m, m = 1, 2, the extra coefficients are set to zero.

The second tuning method consists in choosing the parameter values so as to maximize the estimate from formula (4). This problem is analytically and computationally intractable, so one can only speak of methods that somewhat improve the criterion value relative to a starting point. The starting point can be taken from the least-squares values, with subsequent computation around these values on a grid.

The sequence of actions is as follows. First, a grid (a square or a cube) is computed over one group of parameters with the remaining parameters fixed. Then, for cases 4.m.1, the grid is computed over the next group of parameters, and for cases 4.m.2 over the per-security parameters, again with the remaining parameters fixed; in cases 4.m.2 the further parameters are optimized as well. When all parameters have been exhausted by this process, the process is repeated. Repetitions continue as long as a new cycle improves the criterion value compared to the previous one.

To keep the number of iterations from becoming too large, the following technique is applied. Inside each block of computations over a 2- or 3-dimensional parameter space, a rather coarse grid is taken first; if the best point lies on the edge of the grid, the square (cube) under study is shifted and the computation is repeated, while if the best point is interior, a new grid with a smaller step but the same total number of points is built around it, and so on a reasonable number of times.
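The shifting/shrinking grid search just described can be sketched as follows (two parameters for brevity; grid sizes and the shrink factor are illustrative, and minimization of a toy criterion stands in for maximization, which only flips the sign):

```python
def grid_refine(f, center, span, half_width=5, shrink=0.5, rounds=8):
    """Grid search with a shifting/shrinking window (two parameters for brevity).

    A coarse grid is scanned around `center`; if the best point is interior,
    the grid is rebuilt around it with a smaller step, while an edge winner
    shifts the window without changing the step.
    """
    best = list(center)
    for _ in range(rounds):
        candidates = [
            (best[0] + i * span, best[1] + j * span)
            for i in range(-half_width, half_width + 1)
            for j in range(-half_width, half_width + 1)
        ]
        winner = min(candidates, key=lambda p: f(p[0], p[1]))
        on_edge = any(abs(winner[k] - best[k]) >= half_width * span - 1e-12
                      for k in (0, 1))
        best = list(winner)
        if not on_edge:
            span *= shrink   # interior winner: refine with a finer grid
    return best

# refines toward the minimum of an illustrative criterion at (1.3, -2.1)
best = grid_refine(lambda x, y: (x - 1.3) ** 2 + (y + 2.1) ** 2, (0.0, 0.0), 1.0)
```

Each round keeps the same number of grid points, so the cost per round is constant while the resolution doubles whenever the winner is interior.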

5. Management with unobserved values and without taking into account the dependence between the yields of different securities.

This means that the researcher of the operation does not notice the relationship between different securities, knows nothing of the underlying common process, and tries to predict the behavior of each security separately. As usual, consider three cases in which the researcher models the yield-generating process as a Markov process of depth 1, 2, or 3:

The naming of the coefficients used to predict the expected return is not important; the coefficients are tuned in the two ways described in paragraph 4. The controls are chosen in the same way as above.

Note: as with the choice of control, for the least-squares method it makes sense to write a single procedure with the maximum number of variables (three). When fewer variables are tuned, the formula for the solution of the linear system is written in terms of the constants actually present, and the values of the superfluous variables are set to zero.

Although the calculations in the different variants are carried out similarly, the number of variants is quite large. If preparing the tools for calculations in all of the above variants proves difficult, the question of reducing their number is decided at the expert level.

6. Management with unobserved values, taking into account the dependence between the yields of different securities.

This series of experiments imitates the manipulations that were performed in the GKO problem. We assume that the researcher knows practically nothing about the mechanism by which the yields are formed, having only the series of observations (the matrix). From substantive considerations, the researcher assumes that the current yields of different securities are interdependent and grouped around a certain basic yield determined by the state of the market as a whole. Examining the graphs of securities yields from session to session, the researcher assumes that at each moment of time the points whose coordinates are the security numbers and yields (in reality these were the maturities of the securities and their prices) are grouped near a certain curve (in the case of GKOs, a parabola).

Here one parameter is the point of intersection of the theoretical line with the y-axis (the base return), and the other is its slope (which should be equal to 0.05).

By constructing the theoretical lines in this way, the researcher of the operation can calculate the deviations of the observed values from their theoretical values.

(Note that these deviations have a slightly different meaning than in formula (2): there is no scale coefficient, and the deviations are measured not from the base value but from the theoretical straight line.)

The next task is to predict the future values from the currently known ones. Since

to predict the values, the researcher needs a hypothesis about how the parameters are formed. Using the matrix, the researcher can establish a significant correlation between them. One can accept the hypothesis of a linear relationship between the quantities; from substantive considerations one coefficient is immediately set equal to zero, and the least-squares fit is sought in the form:

Further, as above, the parameters are modeled by a Markov process and described by formulas similar to (1) and (3), with a different number of variables depending on the memory depth of the Markov process in the variant under consideration (here the deviation is determined not by formula (2) but by formula (16)).

Finally, as above, the two ways of tuning the parameters by the least-squares method are implemented, and estimates are made by directly maximizing the criterion.

Experiments

For all the described variants, the criterion scores were calculated for different matrices (matrices with 1003, 503, and 103 rows; about a hundred matrices were generated for each dimension variant). From the results of the calculations, for each dimension the mathematical expectation and dispersion of the scores, and their deviations from the reference values, were estimated for each of the prepared variants.

As the first series of computational experiments showed, with a small number of tunable parameters (about 4) the choice of the tuning method does not significantly affect the value of the criterion in the problem.

2. Classification of modeling tools


The classification of modeling methods and models can be carried out according to the degree of detail of the models, the nature of their features, the scope of application, etc.

Consider one of the most common classifications of models: classification by the means of modeling. This aspect is the most important in the analysis of various phenomena and systems.

By the means of modeling, modeling methods are divided into two groups: material modeling methods and ideal modeling methods. Modeling is called material when the study is conducted on models whose connection with the object under study exists objectively and has a material nature. In this case, the models are built by the researcher or selected from the surrounding world. Material modeling in turn comprises spatial, physical, and analog modeling.

In spatial modeling, models are used that reproduce or display the spatial properties of the object under study. The models in this case are geometrically similar to the objects of study (e.g., mock-ups).

The models used in physical modeling are designed to reproduce the dynamics of the processes occurring in the object under study. The commonality of the processes in the object and in the model is based on the similarity of their physical nature. This modeling method is widely used in engineering when designing technical systems of various types, for example, the study of aircraft through wind-tunnel experiments.

Analog modeling is associated with the use of material models that have a different physical nature but are described by the same mathematical relationships as the object under study. It is based on an analogy in the mathematical description of the model and the object (e.g., the study of mechanical vibrations by means of an electrical circuit described by the same differential equations but more convenient for experiments).
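The analogy can be made concrete: a mass-spring-damper m·x'' + c·x' + k·x = 0 and a series RLC circuit L·q'' + R·q' + q/C = 0 obey the same equation, so integrating one integrates the other. A minimal sketch with arbitrary parameter values:

```python
def simulate_second_order(m, c, k, x0=1.0, v0=0.0, dt=1e-4, t_end=1.0):
    """Integrate m*x'' + c*x' + k*x = 0 by semi-implicit Euler.

    The same equation describes a mass-spring-damper (m, c, k) and a series
    RLC circuit with L = m, R = c, 1/C = k, which is what analog modeling
    exploits: only the coefficient ratios matter.
    """
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        v += (-(c / m) * v - (k / m) * x) * dt
        x += v * dt
    return x

# a mechanical system and its electrical analog with identical coefficient ratios
mech = simulate_second_order(m=1.0, c=0.4, k=25.0)
elec = simulate_second_order(m=2.0, c=0.8, k=50.0)   # L = 2, R = 0.8, 1/C = 50
print(abs(mech - elec) < 1e-9)
```

Because the two parameter sets reduce to the same ratios c/m and k/m, the two trajectories coincide: measuring the charge in the circuit "measures" the displacement of the mass.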

In all cases of material modeling, the model is a material reflection of the original object, and the study consists in the material impact on the model, that is, in the experiment with the model. Material modeling by its nature is an experimental method and is not used in economic research.

Ideal modeling, based on an ideal, conceivable connection between the object and the model, is fundamentally different from material modeling. Ideal modeling methods are widely used in economic research. They can be conditionally divided into two groups: formalized and non-formalized.

In formalized modeling, systems of signs or images serve as the model, together with rules for their transformation and interpretation. If systems of signs are used as models, the modeling is called sign-based (drawings, graphs, diagrams, formulas).

An important type of sign modeling is mathematical modeling, based on the fact that different objects and phenomena under study can have the same mathematical description in the form of a set of formulas and equations, transformed according to the rules of logic and mathematics.

Another form of formalized modeling is figurative modeling, in which models are built from visual elements (elastic balls, fluid flows, trajectories of bodies). The analysis of figurative models is carried out mentally, so they can be attributed to formalized modeling when the rules of interaction of the objects used in the model are clearly fixed (for example, in an ideal gas the collision of two molecules is considered as a collision of balls, and the result of the collision is conceived by everyone in the same way). Models of this type are widely used in physics and are called "thought experiments".

Non-formalized modeling. This includes the analysis of problems of various types in which no model is formed; instead, some not precisely fixed mental representation of reality is used as the basis for reasoning and decision-making. Thus, any reasoning that does not use a formal model can be considered non-formalized modeling: the thinking individual has some image of the object of study, which can be interpreted as a non-formalized model of reality.

The study of economic objects was long carried out only on the basis of such vague representations. Even now, the analysis of non-formalized models remains the most common means of economic modeling: everyone who makes an economic decision without using mathematical models is forced to rely on one or another description of the situation based on experience and intuition.

The main disadvantage of this approach is that the solutions may turn out to be ineffective or erroneous. For a long time to come, apparently, these methods will remain the principal means of decision-making, not only in most everyday situations but also in economic decision-making.

A stochastic model describes a situation involving uncertainty; in other words, the process is characterized by some degree of randomness. The adjective "stochastic" itself derives from the Greek word for "to guess." Since uncertainty is a key characteristic of everyday life, such a model can, in principle, describe anything.

However, each time we apply it, the result will be different, so deterministic models are often used instead. Although they are not as close to the real state of affairs, they always give the same result and make the situation easier to understand, simplifying it by reducing it to a set of mathematical equations.
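The contrast can be sketched in a few lines of Python (a hypothetical demand model, purely illustrative; the function names and parameters are our invention):

```python
import random

# Deterministic model: the same input always yields the same output.
def deterministic_demand(price):
    return 100 - 2 * price

# Stochastic model: a random disturbance makes every run different.
def stochastic_demand(price, rng):
    return 100 - 2 * price + rng.gauss(0, 5)

rng = random.Random(42)
print(deterministic_demand(10))    # always 80
print(stochastic_demand(10, rng))  # differs from run to run
```

The deterministic version is easier to analyze; the stochastic version is closer to reality but must be studied through its distribution of outcomes rather than a single answer.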

Main features

A stochastic model always includes one or more random variables. It seeks to reflect real life in all its manifestations: unlike a deterministic model, it does not aim to simplify everything and reduce it to known values. Uncertainty is therefore its key characteristic. Stochastic models are suitable for describing almost anything, but they all share the following common features:

  • Any stochastic model reflects all aspects of the problem for which it was created.
  • The outcome of each of the phenomena is uncertain. Therefore, the model includes probabilities. The correctness of the overall results depends on the accuracy of their calculation.
  • These probabilities can be used to predict or describe the processes themselves.

Deterministic and stochastic models

For some, life seems a succession of random events; for others, a chain of processes in which the cause determines the effect. In fact, it is characterized by uncertainty, but not always and not in everything. It is therefore sometimes difficult to draw a clear line between stochastic and deterministic models, and probabilities themselves are fairly subjective.

For example, consider a coin toss. At first glance there appears to be a 50% chance of tails, so a stochastic model seems the natural choice. Yet in reality much depends on the dexterity of the tosser's hands and on how perfectly the coin is balanced, which suggests the process is actually deterministic. There are always parameters, however, that we do not know. In real life the cause always determines the effect, but some degree of uncertainty remains. The choice between a deterministic and a stochastic model therefore depends on what we are willing to give up: simplicity of analysis or realism.
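The subjectivity of such probabilities can be made concrete in a small simulation (an illustrative sketch; the bias parameter p_tails, standing in for hand dexterity and coin balance, is our assumption):

```python
import random

def toss(p_tails, rng):
    """One coin toss; p_tails captures imperfect balancing of the coin."""
    return "tails" if rng.random() < p_tails else "heads"

rng = random.Random(1)
n = 100_000
# An idealized fair coin versus a slightly imbalanced one.
fair = sum(toss(0.50, rng) == "tails" for _ in range(n)) / n
biased = sum(toss(0.51, rng) == "tails" for _ in range(n)) / n
print(fair)    # close to 0.50
print(biased)  # close to 0.51
```

Over many tosses the hidden parameter shows up in the frequencies, even though any single toss looks like a fair 50/50 event.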

In chaos theory

Recently, the notion of which models count as stochastic has become even more blurred. This is due to the development of so-called chaos theory, which describes deterministic models that can give very different results after a slight change in the initial parameters. This amounts to introducing uncertainty into the calculation, and many scientists have even argued that such a model is already stochastic.

Lothar Breuer explained it all elegantly with poetic images. He wrote: "A mountain stream, a beating heart, an epidemic of smallpox, a column of rising smoke - all this is an example of a dynamic phenomenon, which, as it seems, is sometimes characterized by chance. In reality, such processes are always subject to a certain order, which scientists and engineers are only just beginning to understand. This is the so-called deterministic chaos." The new theory sounds very plausible, which is why many modern scientists support it. However, it remains little developed, and it is rather difficult to apply in statistical calculations. Therefore, stochastic or deterministic models are still the ones most often used.
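The logistic map is a standard illustration of this sensitivity to initial conditions (a minimal sketch; the example itself is not from this text):

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): fully deterministic, yet at
# r = 4 it is chaotic, so tiny changes in the starting point diverge rapidly.
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)  # perturb the start by one part in a billion

# Early on the trajectories agree; after a few dozen steps they separate
# by an order-one amount, despite containing no randomness at all.
print(abs(a[1] - b[1]))
print(max(abs(u - v) for u, v in zip(a[40:], b[40:])))
```

No random variable appears anywhere, yet after enough steps the outcome is as unpredictable in practice as that of a stochastic model.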

Building

Stochastic modeling begins with the choice of the space of elementary outcomes, the term statisticians use for the list of possible results of the process or event under study. The researcher then determines the probability of each elementary outcome, usually on the basis of some established technique.

However, the probabilities remain a fairly subjective parameter. The researcher then determines which events are most relevant to solving the problem, and finally determines their probability.

Example

Consider the construction of the simplest stochastic model. Suppose we roll a die, and if a six or a one comes up, our winnings are ten dollars. The construction of the stochastic model then looks like this:

  • Let us define the space of elementary outcomes. The die has six faces, so one, two, three, four, five, or six can come up.
  • The probability of each outcome is 1/6, no matter how many times we roll the die.
  • Now we determine the outcomes of interest to us: rolling a face with the number six or one.
  • Finally, we can determine the probability of the event of interest. We sum the probabilities of both elementary events: 1/6 + 1/6 = 2/6 = 1/3.
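The four steps above can be written out directly (illustrative code; the simulation check at the end is our addition):

```python
from fractions import Fraction
import random

# Step 1: the space of elementary outcomes.
outcomes = [1, 2, 3, 4, 5, 6]
# Step 2: the probability of each elementary outcome.
p = {o: Fraction(1, 6) for o in outcomes}
# Step 3: the event of interest, rolling a one or a six.
event = {1, 6}
# Step 4: sum the probabilities of the elementary events in it.
p_event = sum(p[o] for o in event)
print(p_event)  # 1/3

# Simulation check: the winning frequency approaches 1/3.
rng = random.Random(7)
n = 60_000
wins = sum(rng.choice(outcomes) in event for _ in range(n))
print(wins / n)
```

Using exact fractions keeps the analytic answer free of rounding error, while the simulation shows the long-run frequency converging to the same value.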

Concept and result

Stochastic simulation is often used in gambling, but it is also indispensable in economic forecasting, since it allows the situation to be understood more deeply than deterministic models do. Stochastic models in economics are often used in making investment decisions: they allow assumptions to be made about the profitability of investments in particular assets or groups of assets.

Modeling makes financial planning more efficient: with its help, investors and traders optimize the allocation of their assets. Using stochastic modeling always pays off in the long run, and in some industries refusing or being unable to apply it can even lead to the bankruptcy of an enterprise. This is because new important parameters appear in real life daily, and failing to account for them can have disastrous consequences.
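A stochastic model of investment returns can be sketched with a small Monte Carlo simulation (illustrative only; the 5% mean annual return, 15% volatility, and the normal-return assumption are ours, not from the source):

```python
import random
import statistics

def final_value(start, years, mu, sigma, rng):
    """Compound a portfolio under normally distributed annual returns."""
    value = start
    for _ in range(years):
        value *= 1 + rng.gauss(mu, sigma)
    return value

rng = random.Random(3)
# Assumed parameters: 5% mean annual return, 15% volatility, 10 years.
runs = [final_value(1000, 10, 0.05, 0.15, rng) for _ in range(20_000)]
print(statistics.median(runs))        # a typical outcome
print(sorted(runs)[len(runs) // 20])  # a pessimistic (5th percentile) outcome
```

A deterministic model would report a single compounded figure; the stochastic model instead yields a whole distribution, from which an investor can read off both typical and worst-case outcomes.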

In the last chapters of this book, stochastic processes are almost always represented using linear differential systems excited by white noise. This representation of the stochastic process usually takes the following form. Suppose that

dx(t)/dt = A(t) x(t) + B(t) w(t),
v(t) = C(t) x(t),

where w(t) is white noise. By choosing such a representation of the stochastic process v, it can be simulated. The use of such models can be justified as follows.

a) Stochastic phenomena associated with the action of rapidly varying fluctuations on an inertial differential system are often encountered in nature. A typical example of white noise acting on a differential system is thermal noise in an electronic circuit.

b) As will be seen from what follows, in linear control theory almost always only the mean value and the covariance of the stochastic process are considered. With a linear model it is always possible to approximate any experimentally obtained characteristics of the mean value and the covariance matrix with arbitrary accuracy.

c) Sometimes the problem arises of modeling a stationary stochastic process with a known spectral energy density. In this case it is always possible to generate the stochastic process as the output of a linear differential system whose matrix of spectral energy densities approximates the matrix of spectral energy densities of the initial stochastic process with arbitrary accuracy.

Examples 1.36 and 1.37, as well as problem 1.11, illustrate the modeling method.

Example 1.36. First-order differential system

Suppose that the measured covariance function of a stochastic scalar process, known to be stationary, is described by the exponential function

R(τ) = σ² exp(−|τ|/θ).

This process can be modeled as the state of a first-order differential system (see Example 1.35)

dx(t)/dt = −(1/θ) x(t) + w(t),

where w(t) is white noise of intensity 2σ²/θ, and the initial state x(0) is a stochastic quantity with zero mean and variance σ².
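A discrete-time simulation (a minimal sketch, not from the book; Euler-Maruyama discretization with illustrative parameters θ = 1, σ = 1) shows that a first-order system driven by white noise of intensity 2σ²/θ indeed reproduces a stationary variance of σ² and an exponential decay of the covariance:

```python
import math
import random

# Euler-Maruyama simulation of dx = -(1/theta) * x dt + dw, where the white
# noise has intensity 2 * sigma**2 / theta.  The stationary variance should
# then equal sigma**2 and the covariance should decay as exp(-|tau| / theta).
theta, sigma = 1.0, 1.0  # illustrative parameters
dt, n, burn = 0.01, 1_000_000, 100_000
rng = random.Random(5)

noise_scale = math.sqrt(2 * sigma**2 / theta * dt)
x, xs = 0.0, []
for _ in range(n):
    x += -(x / theta) * dt + noise_scale * rng.gauss(0, 1)
    xs.append(x)
xs = xs[burn:]  # discard the initial transient

var = sum(v * v for v in xs) / len(xs)
lag = int(theta / dt)  # covariance at tau = theta
cov = sum(xs[i] * xs[i + lag] for i in range(len(xs) - lag)) / (len(xs) - lag)
print(var)        # close to sigma**2 = 1
print(cov / var)  # close to exp(-1), about 0.37
```

This is exactly the sense in which a simple differential system "shapes" white noise into a process with a prescribed covariance function.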

Example 1.37. Mixing tank

Consider the mixing tank from Example 1.31 (Sec. 1.10.3) and calculate the variance matrix of its output variable when the disturbances are modeled as exponentially correlated noise. Adding the equations of the stochastic process models to the differential equation of the mixing tank yields an augmented system of state equations. Here the intensity of the scalar white noise is chosen so as to obtain the required variance of the disturbance process, and a similar model is used for the second disturbance process. Thus we obtain a combined system of equations.
