Integral distribution function of a random variable and its properties. Integral probability distribution function of a random variable. Differential and integral distribution laws

Let us consider the result of observing a certain deterministic physical quantity (PV) Q as a random variable (RV) that takes different values X in different observations.

The most universal way to describe random variables is to find their integral or differential distribution functions.

The integral distribution function of the observation results is the dependence on the value x of the probability P that the observation result X will be less than x. It is written as follows:

F(x) = P(X < x).

In other words, the integral distribution function of the random variable X is the probability that the inequality X < x is fulfilled.

The integral function F(x) has the following properties.

  • 1. F(x) is a non-decreasing function.
  • 2. F(x) tends to unity as x → +∞.
  • 3. F(x) tends to zero as x → −∞.
  • 4. F(x) is a continuous function, since the observation result can take any value in a certain interval.
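These properties are easy to check numerically. The sketch below (an added illustration, assuming a normally distributed observation result; none of the numbers come from the text) evaluates the integral function of the standard normal law through the error function:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # Integral distribution function F(x) = P(X < x) of a normal random variable
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Property 1: F(x) is non-decreasing
xs = [-3.0, -1.0, 0.0, 1.0, 3.0]
vals = [normal_cdf(x) for x in xs]
assert all(a <= b for a, b in zip(vals, vals[1:]))

# Properties 2 and 3: F(x) -> 1 as x -> +inf and F(x) -> 0 as x -> -inf
print(normal_cdf(10.0))   # ~1.0
print(normal_cdf(-10.0))  # ~0.0
```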

However, the fourth property is usually not realized in practice. This is because the measuring instruments used have a finite resolution: for pointer instruments it is the scale division value (the PV quantum); for digital instruments it is the value of the least significant code digit. Therefore, in reality, the distribution function has a stepwise form (Fig. 4.4).


Despite this, in metrological practice, the integral distribution function is often assumed to be continuous, which greatly simplifies the analysis.

For the random error Δ, just as for any random variable, there is its own integral distribution function:

F(x) = P(Δ < x).

The integral function F(x), like any probability, is a dimensionless quantity.

It is more convenient and visual to describe the properties of the observation results using the differential distribution function, which is called the probability distribution density. It should be noted that the differential functions of the observation results X and of the random error Δ coincide in shape; only the origin of the graph for Δ is located at the zero point.

The graph of the differential distribution function, or distribution curve, is most often a symmetric function with a maximum at the point Q for the observation results (Fig. 4.5). The distribution curve for the random error is also most often a symmetric function, but with its maximum at the point 0 (Fig. 4.6).

For the observation results: p(x) = dF(x)/dx.

For the random error: p(Δ) = dF(Δ)/dΔ.

Thus, the differential distribution function of the observation results or of the random error is obtained by differentiating the corresponding integral distribution function.

There are also asymmetric distribution functions, for example the Rayleigh distribution (Fig. 4.7), and functions that have no maximum, such as the uniform or trapezoidal ones (Figs. 4.8, 4.9).


The integral function is related to the differential function as follows:

F(x) = ∫_{−∞}^{x} p(t) dt.

Since F(+∞) = 1, it follows that ∫_{−∞}^{+∞} p(x) dx = 1, i.e., the area under the distribution density curve is equal to one. This is the so-called normalization condition.
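The normalization condition can be verified numerically. The following sketch (an added illustration with an assumed normal density; the interval and step count are arbitrary) integrates p(x) over a wide interval by the trapezoidal rule:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Probability distribution density p(x) of a normal random variable
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

# Trapezoidal-rule integration of p(x) over [-10, 10] (wide enough for sigma = 1)
n, a, b = 100_000, -10.0, 10.0
h = (b - a) / n
area = (normal_pdf(a) + normal_pdf(b)) / 2.0
area += sum(normal_pdf(a + i * h) for i in range(1, n))
area *= h
print(area)  # ~1.0: the normalization condition holds
```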

The dimension of the probability distribution density is the inverse of the dimension of the measured physical quantity, since the integral distribution function is dimensionless. Using the concept of the distribution function, one can obtain an expression for the probability that the observation result falls in the half-open interval [x₁, x₂) or [Δ₁, Δ₂):

P(x₁ ≤ X < x₂) = F(x₂) − F(x₁).

This expression says that the probability of the observation result X or the random measurement error Δ falling in a given interval is equal to the difference between the values of the integral distribution function at the boundaries of that interval.

If we express this probability in terms of the differential distribution function, i.e., the probability distribution density, we get:

P(x₁ ≤ X < x₂) = ∫_{x₁}^{x₂} p(x) dx,

i.e., the probability that the observation result X or the random error Δ falls within a given interval is numerically equal to the area under the probability density curve bounded by the interval boundaries (Fig. 4.10).


The product p(x)dx is called the element of probability. When the probability density distribution law is close to the so-called normal law, the graph of the differential distribution function shows that small error values are the most probable, while the probability of large errors is much smaller. The observation results are concentrated around the true value of the measured PV, and the elements of probability increase as it is approached. This gives grounds to take, as an estimate of the true value of the PV, the abscissa of the center of gravity of the figure formed by the abscissa axis and the distribution density curve. This characteristic of a random variable is called the mathematical expectation (Fig. 4.11):

m_x = ∫_{−∞}^{+∞} x p(x) dx.
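The "center of gravity" interpretation can be illustrated numerically. The sketch below (an added illustration, assuming a normal density and an arbitrary true value Q = 5.0) evaluates m_x = ∫ x p(x) dx by a simple rectangle rule:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

Q, sigma = 5.0, 0.2          # assumed true value and spread (illustrative only)
n = 100_000
a, b = Q - 10.0 * sigma, Q + 10.0 * sigma
h = (b - a) / n

# m_x = integral of x * p(x) dx: the center of gravity of the area under p(x)
m_x = sum((a + i * h) * normal_pdf(a + i * h, Q, sigma) for i in range(n)) * h
print(m_x)  # ~5.0: for a symmetric density the expectation coincides with Q
```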

Now we can give a mathematically rigorous definition of random and systematic error.

The systematic error θ (Fig. 4.11) is the deviation of the mathematical expectation of the observation results from the true value of the measured physical quantity: θ = m_x − Q.

The random error Δ is the difference between the result of a single observation and the mathematical expectation of the observation results: Δ = X − m_x.

Hence, the actual value of the measured physical quantity is Q = X − θ − Δ.
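A small simulation (an added sketch; the true value, bias, and noise level are all assumed for illustration) shows this decomposition of the total error:

```python
import random

random.seed(1)

Q = 10.0       # true value of the measured quantity (assumed)
theta = 0.3    # constant instrument bias playing the role of systematic error (assumed)

# Simulated observations: X = Q + theta + Delta, with Delta a zero-mean random error
observations = [Q + theta + random.gauss(0.0, 0.05) for _ in range(100_000)]

m_x = sum(observations) / len(observations)  # estimate of the mathematical expectation
print(m_x - Q)                               # ~0.3: theta = m_x - Q
print(observations[0] - m_x)                 # Delta = X - m_x for a single observation
```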

Test questions

  • 1. What is meant by discrete and continuous random variables?
  • 2. The integral distribution function and its properties.
  • 3. The differential distribution function; the connection between the integral and differential distribution functions.
  • 4. The normalization condition.
  • 5. What is the graphical meaning of the mathematical expectation of a random variable?
  • 6. What are the systematic and random components of the total error, from the physical and the mathematical points of view?
  • 7. What is meant by the element of probability?
  • 8. How can one determine, from a graph of the probability distribution density, the probability that the observation result X or the random error Δ falls within a given interval?

Under the conditions of the local Moivre–Laplace theorem, the probability that the number of successes m lies between m₁ and m₂ can be found approximately by the integral Moivre–Laplace formula:

P(m₁ ≤ m ≤ m₂) ≈ Φ(x₂) − Φ(x₁),

where x₁ = (m₁ − np)/√(npq), x₂ = (m₂ − np)/√(npq), and Φ(x) = (1/√(2π)) ∫_{0}^{x} e^{−t²/2} dt is the Laplace function.

Values of the Laplace function are tabulated in the appendices of probability-theory textbooks.
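For illustration (an added sketch, not from the original text; the values of n, p, m₁, m₂ are made up), the integral Moivre–Laplace approximation can be compared with the exact binomial probability:

```python
import math

def laplace_phi(x):
    # Laplace function: (1 / sqrt(2*pi)) * integral from 0 to x of exp(-t^2 / 2) dt
    return 0.5 * math.erf(x / math.sqrt(2.0))

n, p = 600, 0.5
q = 1.0 - p
m1, m2 = 280, 320

x1 = (m1 - n * p) / math.sqrt(n * p * q)
x2 = (m2 - n * p) / math.sqrt(n * p * q)
approx = laplace_phi(x2) - laplace_phi(x1)

# Exact binomial probability for comparison
exact = sum(math.comb(n, m) * p**m * q**(n - m) for m in range(m1, m2 + 1))
print(approx, exact)  # the two values agree closely for large n
```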

A graphical specification of the distribution law is shown in Fig. 1.

Fig. 1. Distribution polygon of a discrete random variable.

The method of describing the distribution of a random variable in the form of a table, in the form of a formula, or graphically is applicable only to discrete random variables.

1.5. Cumulative distribution function

The integral distribution function allows you to specify both a discrete and a continuous random variable.

The cumulative (integral) distribution function (CDF) is the function F(x) that determines, for each possible value x, the probability that the random variable X takes a value less than x, i.e., F(x) = P(X < x).

The geometric meaning of the integral distribution function is the probability that the random variable X will take on a value that lies to the left of the point x on the real axis.

For a discrete random variable X, which can take the values x₁, x₂, …, xₙ, the distribution function has the form

F(x) = Σ_{xᵢ < x} P(X = xᵢ),

where the inequality under the summation sign means that the summation extends over all values xᵢ that are less than x. Let us explain this formula based on the definition of the function F(x). Suppose the argument x has taken some definite value satisfying the inequality xᵢ < x ≤ xᵢ₊₁. Then to the left of the number x on the number axis lie only those values of the random variable that have the indices 1, 2, …, i. Therefore, the inequality X < x holds if X takes one of the values xₖ, where k = 1, 2, …, i. Thus, the event X < x occurs if any one, no matter which, of the events X = x₁, X = x₂, X = x₃, …, X = xᵢ occurs. Since these events are incompatible, by the probability addition theorem we have

F(x) = P(X = x₁) + P(X = x₂) + … + P(X = xᵢ).
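This summation is straightforward to express in code. The sketch below (an added illustration; the values and probabilities are made up) builds F(x) = P(X < x) for a discrete random variable:

```python
def make_cdf(values, probs):
    # Builds F(x) = P(X < x) for a discrete random variable
    # given its distribution series: values x_i and probabilities p_i
    pairs = sorted(zip(values, probs))
    def F(x):
        return sum(p for v, p in pairs if v < x)  # sum over all x_i < x
    return F

F = make_cdf([1, 2, 3], [0.2, 0.5, 0.3])
print(F(1), F(2.5), F(10))  # 0 0.7 1.0
```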

Properties of the cumulative distribution function:

1. The values of the integral distribution function belong to the interval [0, 1]: 0 ≤ F(x) ≤ 1.

2. The probability that the random variable X takes a value contained in the interval (a, b) is equal to the increment of the integral distribution function on this interval: P(a ≤ X < b) = F(b) − F(a).

3. If all possible values x of the random variable belong to the interval (a, b), then

F(x) = 0 if x ≤ a;

F(x) = 1 if x ≥ b.

The graph of the CDF of a continuous random variable is shown in Fig. 2.

Fig. 2. Graph of the CDF of a continuous random variable.

The graph of the CDF of a discrete random variable is shown in Fig. 3.

Fig. 3. Graph of the CDF of a discrete random variable.

1.6. Differential distribution function

The differential distribution function is used to describe the probability distribution of a continuous random variable.

The differential distribution function (DDF), or probability density, is the first derivative of the integral function: f(x) = F′(x).

The cumulative distribution function is an antiderivative of the differential distribution function. Then

F(x) = ∫_{−∞}^{x} f(t) dt.

The probability that a continuous random variable X takes a value belonging to the interval (a, b) is equal to the definite integral of the differential function taken from a to b:

P(a < X < b) = ∫_{a}^{b} f(x) dx.

The geometric meaning of the DDF is as follows: the probability that a continuous random variable X takes a value belonging to the interval (a, b) is equal to the area of the curvilinear trapezoid bounded by the x axis, the distribution curve f(x), and the straight lines x = a and x = b (Fig. 4).

Fig. 4. The graph of the differential distribution function, commonly called the distribution curve.

Properties of the differential distribution function:

1. The differential distribution function is non-negative, i.e., f(x) ≥ 0.

2. If all possible values of the random variable belong to the interval (a, b), then ∫_{a}^{b} f(x) dx = 1.

The differential distribution function is often called the law of the probability distribution of continuous random variables.

When solving applied problems, one encounters various probability distribution laws of continuous random variables. The uniform and normal distribution laws are met especially often.

Differential and integral laws of distribution

The distribution law of a random variable establishes a connection between the possible values of this variable and the probabilities corresponding to these values. There are two forms of describing the distribution law of a random variable: differential and integral. In metrology, it is mainly the differential form that is used: the distribution law expressed by the probability density of the random variable.
The differential distribution law is characterized by the distribution density f(x). In this case, the probability P of the random variable falling in the interval from x₁ to x₂ is:

P(x₁ < X < x₂) = ∫_{x₁}^{x₂} f(x) dx.

Graphically, this probability is the ratio of the area under the curve f(x) in the interval from x₁ to x₂ to the total area bounded by the entire distribution curve.

In this case, we are dealing with the distribution of a continuous random variable. Besides these, there are discrete random variables, which take a set of specific values that can be numbered.

The integral distribution law of a random variable is the function F(x) defined by the formula

F(x) = P(X < x) = ∫_{−∞}^{x} f(t) dt.

The probability that the random variable will be less than x₁ is given by the value of the function F(x) at x = x₁: P(X < x₁) = F(x₁).

F(x) is a non-decreasing function, and F(x) → 1 as x → +∞.

F(x) → 0 as x → −∞.

F(x) is a continuous function, because the observation result can take any value in a certain interval.

However, the last property is usually not realized in practice. This is because the measuring instruments used have a finite resolution: for a pointer instrument it is the scale division value (the PV quantum); for digital instruments it is the value of the least significant code digit. Therefore, in reality, the distribution function of the error has a stepwise form.

Nevertheless, in metrological practice, the integral function is considered continuous, which simplifies the processing of errors.

Uniform law of distribution of a continuous random variable.

A continuous random variable obeys the uniform distribution law if its possible values lie within a certain interval on which all values are equally probable, i.e., have the same probability density. In other words, a probability distribution is called uniform if, on the interval containing all possible values of the random variable, the differential function has a constant value.

Random variables having a uniform probability distribution are encountered in practice, for example, when taking readings from measuring instruments. The error of rounding a reading to the nearest whole scale division is a random variable that can take, with constant probability density, any value between two adjacent divisions. Thus, this random variable has a uniform distribution.

Let us find the differential function (density) of the uniform distribution, assuming that all possible values of the random variable X are confined to the interval (a, b), on which the differential function retains a constant value:

f(x) = C.

By assumption, X does not take values outside the interval (a, b); therefore f(x) = 0 for all x < a and x > b.

Let us find the value of the constant C. Since all possible values of the random variable belong to the interval (a, b), the normalization condition gives:

∫_{a}^{b} C dx = C(b − a) = 1, whence C = 1/(b − a).

Thus, the uniform distribution law of a random variable on the interval (a, b) (here a < b) can be written analytically as follows:

f(x) = 1/(b − a) for a ≤ x ≤ b; f(x) = 0 for x < a and x > b.

Let us now find the integral function of the uniform distribution of a continuous random variable. To do this, we use the formula

F(x) = ∫_{−∞}^{x} f(t) dt.

If x < a, then f(x) = 0 and hence F(x) = 0.

If a ≤ x ≤ b, then F(x) = ∫_{a}^{x} dt/(b − a) = (x − a)/(b − a).

If x > b, then F(x) = ∫_{a}^{b} dt/(b − a) = 1.

So, the desired integral distribution function can be written analytically as follows:

F(x) = 0 for x < a;

F(x) = (x − a)/(b − a) for a ≤ x ≤ b;

F(x) = 1 for x > b.
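A minimal sketch (added for illustration, with arbitrary interval bounds a = 2, b = 6) implements both the differential and the integral functions of the uniform law exactly as written above:

```python
def uniform_pdf(x, a, b):
    # Differential function: constant 1/(b - a) inside [a, b], zero outside
    return 1.0 / (b - a) if a <= x <= b else 0.0

def uniform_cdf(x, a, b):
    # Integral function, written piecewise exactly as above
    if x < a:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return 1.0

a, b = 2.0, 6.0
print(uniform_pdf(3.0, a, b))  # 0.25
print(uniform_cdf(3.0, a, b))  # 0.25 = (3 - 2) / (6 - 2)
print(uniform_cdf(7.0, a, b))  # 1.0
```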

Properties of uniform continuous distribution:

1. First moment (mathematical expectation): M(X) = (a + b)/2.

2. Median: Me = M(X) = (a + b)/2.

3. Mode: any number in the segment [a, b] (the mode is the most probable value of a distribution).

Denote by F(x) the probability that the random variable takes a value less than x; this function is called the integral distribution function. Since any probability must lie between 0 and 1, for all values of x we have 0 ≤ F(x) ≤ 1. If x₁ and x₂ are such that x₁ < x₂, then the probability that the variable is less than x₂ is greater than or equal to the probability that it is less than x₁, i.e., F(x₂) ≥ F(x₁). In other words, the function F(x) cannot decrease with increasing x.

A typical form of the integral distribution function is shown in Fig. 1, where x is plotted along the horizontal axis and F(x) along the vertical axis.

Knowing the integral distribution function, we can easily determine, for any given x₁ < x₂, the probability that x₁ ≤ x < x₂. Indeed, since the events x < x₁ and x₁ ≤ x < x₂ are incompatible, the probability of the occurrence of either of these events equals the sum of the probabilities of each of them, i.e.,

P(x < x₁) + P(x₁ ≤ x < x₂).

Since the occurrence of either of these two events coincides with the occurrence of the event x < x₂, then, in accordance with relation (1.1), we have

F(x₂) = F(x₁) + P(x₁ ≤ x < x₂).

Therefore, the desired probability of the occurrence of the event x₁ ≤ x < x₂ will be equal to

P(x₁ ≤ x < x₂) = F(x₂) − F(x₁). (1.2)

In the case when the random variable x is the result of measuring some characteristic of an object randomly selected from a group of objects, a simple interpretation of the integral distribution function can be given. As indicated in Section 1.1.1, in this case the probability that the observed value x satisfies some equality or inequality (say, x < x₀) is equal to the relative proportion, in the given group of objects, of those objects for which the value of x satisfies the corresponding equality or inequality. Thus, F(x₀) simply determines the relative proportion of those objects for which x < x₀. With this interpretation of probabilities, relation (1.2) becomes obvious: it states that the relative number of objects for which x < x₂ is equal to the relative number of objects for which x < x₁ plus the relative number of objects for which x₁ ≤ x < x₂. A group of objects is often called a population. So far, we have considered only populations containing a finite number of objects; such populations are called finite.

The interpretation of the probability of an event, for which a certain relation (equality or inequality) is satisfied, as the relative proportion of those elements of a given population for which the value of x satisfies this relation turns out to be very useful in many cases, and we shall often use it. However, such an interpretation of probabilities is not always possible if we do not restrict ourselves to finite populations. Indeed, the integral distribution function associated with a finite population has its own characteristic features.

Let us assume that the population consists of N elements. Then the random variable x can take no more than N different values. Let x₁ < x₂ < … < xₙ be the distinct values that x can take, arranged in ascending order; clearly n ≤ N, and if the value of x is the same for several elements, then n < N. The cumulative distribution function in this case will have the form of the step curve shown in Fig. 2.

The distribution function will have exactly n jumps, and the magnitude of each jump will be equal to 1/N or to an integer multiple of 1/N. The cumulative distribution function represented by the continuous curve in Fig. 1 is obviously not of this type.

Thus, if the integral distribution function is a continuous curve, then the interpretation of probabilities as the relative proportion of certain elements of a finite population is impossible. However, any continuous cumulative distribution function can be approximated, to any given accuracy, by a stepwise cumulative distribution function associated with a finite population, provided that the number of elements in the latter is large enough. Thus, any continuous cumulative distribution function can be considered the limiting form of a cumulative distribution function associated with a finite population, the limit being reached as the number of elements of the population increases without bound. This means that if we allow the existence of an infinite population (a population with an infinite number of elements), then any probability associated with this population can always be interpreted as the relative proportion of the corresponding elements of the population. Of course, the concept of an infinite population is just a useful abstraction, introduced only to simplify the theory.
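A short sketch (an added illustration, assuming a normally distributed variable) shows how the step distribution function of a finite population approaches a continuous curve as the number of elements grows:

```python
import random

random.seed(0)

def empirical_cdf(sample):
    # Step distribution function of a finite population: a jump of 1/N at each element
    data = sorted(sample)
    N = len(data)
    def F(x):
        return sum(1 for v in data if v < x) / N
    return F

# As the number of elements N grows, the step curve approaches the
# continuous integral distribution function of the underlying variable.
for N in (10, 100, 10_000):
    F = empirical_cdf([random.gauss(0.0, 1.0) for _ in range(N)])
    print(N, F(0.0))  # approaches the continuous value 0.5
```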

As an example of an infinite population, consider an experiment that consists in measuring the length of a certain rod. The outcome of each measurement can be considered a random variable characterized by an integral distribution function; the infinite population is then the infinite sequence of repeated measurements of the rod's length, so that each measurement actually made can be considered an element of this population. Sometimes the population is finite, but the number of its elements is so large that it is more convenient to treat the problems associated with it as if the population were infinite. Suppose, for example, that we are interested in the distribution of the height of all women aged 20 and over living in the United States. The number of such individuals is obviously so large that significant mathematical simplifications can be expected if we regard this population as infinite.

Integral probability distribution function of a random variable

TZR-3. Integral probability distribution function of a random variable

This is the most universal way to specify the distribution law. It can be used for both discrete and continuous random variables. Often, when speaking of this method, the words "integral" and "probability" are dropped and the term "distribution function of a random variable" is used.

The integral probability distribution function is the probability that the random variable X takes a value less than the current value x:

F(x) = P(X < x). (20)

For example, if for a random variable such as the current in a power line the distribution function F(90) = 0.3, this means that the probability that the current in the power line takes a value less than 90 A is 0.3.

If, for the mains voltage, the distribution function F(215) = 0.4, then 0.4 is the probability that the voltage in the mains is less than 215 V.
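Such values are computed from a concrete model of the distribution function. As a purely hypothetical sketch (the text gives only the value F(90) = 0.3, not the underlying model; the normal law and its parameters below are assumed for illustration), the probability can be obtained from a normal CDF:

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical model, not given in the text: line current ~ N(95 A, 10 A)
print(normal_cdf(90.0, 95.0, 10.0))   # ~0.31: P(current < 90 A)
```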

The probability distribution function can be specified analytically, in tabular form, or graphically.

Example 27

Given the distribution series of students' exam grades (Table 8, rows 1 and 2), write down the integral distribution function (Table 8, row 3) and plot its graph.

Table 8

x      2     3     4     5
p      0.1   0.5   0.3   0.1
F(x)   0     0.1   0.6   0.9

To find the values of the distribution function, one uses its definition (20):

· for x = 2: F(2) = P(X < 2) = 0, since there are no grades less than 2 on the exam;

· for x = 3: F(3) = P(X < 3) = P(X = 2) = 0.1, because the only grade less than 3 is the grade 2;

· for x = 4: F(4) = P(X < 4) = P(X = 2) + P(X = 3) = 0.1 + 0.5 = 0.6, because there are two grades less than 4 (2 or 3); getting a grade less than 4 is equivalent to getting either grade 2 or grade 3, so F(4) can be found by the addition formula for the probabilities of incompatible events;

· for x = 5: F(5) = P(X < 5) = P(X < 4) + P(X = 4) = 0.6 + 0.3 = 0.9, i.e., the probability that the grade is 4 is added to F(4).

Analyzing the order in which the values of F(x) are found, we see that the probability of the smallest value of the random variable is first added to the probability of the second value, then of the third, and so on. That is, the probabilities accumulate. For this reason, the integral distribution function is also called the "function of accumulated probabilities".

In the statistics literature, the function of accumulated probabilities is quite often called the cumulative function.

Based on the data of Table 8, the graph of the integral function of a discrete random variable can be plotted (Fig. 29). This function is discontinuous: the jumps correspond to the individual discrete values of X, and the heights of the "steps" correspond to the respective probabilities. At the break points the function takes the values indicated by dots (Fig. 29), i.e., it is left-continuous. In general form, for a discrete random variable one can write:

F(x) = P(X < x) = Σ_{xᵢ < x} pᵢ. (21)
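The accumulation described above is easy to reproduce. The sketch below (an added illustration) recomputes row 3 of Table 8 from rows 1 and 2 by formula (21):

```python
grades = [2, 3, 4, 5]
probs  = [0.1, 0.5, 0.3, 0.1]   # distribution series from Table 8

def F(x):
    # Formula (21): accumulate the probabilities of all grades strictly below x
    return sum(p for g, p in zip(grades, probs) if g < x)

for x in grades:
    print(x, round(F(x), 1))    # 2 0.0, 3 0.1, 4 0.6, 5 0.9
```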

To understand what the graph of the integral distribution function of a continuous random variable looks like, one can reason as follows. If we imagine that the number of values of a discrete random variable increases, there will be more jumps, and the heights of the steps will decrease. In the limit, when the number of possible values becomes infinite (which is the case for a continuous random variable), the step graph turns into a continuous one (Fig. 30).

Since the integral probability distribution function of a random variable is of paramount importance, let us consider its properties in more detail:

Property 1. This way of specifying the distribution law is universal: it is suitable for both discrete and continuous random variables.

Property 2. Since the integral distribution function is a probability, its values lie on the segment from 0 to 1.

Property 3. The distribution function is dimensionless, like any probability.

Property 4. The distribution function is a non-decreasing function, i.e., a larger value of the argument corresponds to the same or a larger value of the function: for x₂ > x₁, F(x₂) ≥ F(x₁).

This property follows from the fact (Fig. 31) that the probability of falling in the larger interval (from −∞ to x₂) cannot be less than the probability of falling in the smaller interval (from −∞ to x₁).

If in the region from x₁ to x₂ (Fig. 32) there are no possible values of the random variable (which can happen for a discrete random variable), then F(x₂) = F(x₁).

For the distribution function of a continuous random variable (Fig. 33), F(x₂) is always greater than F(x₁).

Property 4 has two consequences.

Corollary 1

The probability that the random variable X takes a value in the interval (x₁; x₂) is equal to the difference of the values of the integral function at the boundaries of the interval:

P(x₁ ≤ X < x₂) = F(x₂) − F(x₁). (15)

This corollary can be explained as follows (Fig. 31):

F(x₂) = P(X < x₂) is the probability that the random variable takes values to the left of the point x₂;

F(x₁) = P(X < x₁) is the probability that the random variable takes values to the left of the point x₁.

Hence the difference P(X < x₂) − P(X < x₁) is the probability that the values of the random variable lie in the region from x₁ to x₂ (Fig. 34).

