Multi-zone images. Theory of interpretation of aerial and space images

Comparative interpretation of a series of zonal images is based on the spectral images of the objects depicted. The spectral image of an object in a photograph is determined visually from the tone of its image in a series of zonal black-and-white photographs; tone is evaluated on a standardized scale in units of optical density. From these data a spectral-image curve is constructed, reflecting how the optical density of the object's image changes across the spectral zones. In this case the print optical-density values plotted along the ordinate axis D, contrary to the usual convention, decrease up the axis, so that the spectral-image curve corresponds to the spectral brightness curve. Some commercial programs plot spectral images from digital imagery automatically. The logical scheme of comparative interpretation of multi-zone images thus includes the following steps: determining the spectral image of an object from the images; comparing it with known spectral reflectance curves; identifying the object.
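The scheme just described (measure the spectral image, compare it with known curves, identify the object) can be sketched minimally in Python. The optical-density values and the reference signatures below are hypothetical illustrations; a real workflow would measure them from the zonal images themselves:

```python
import numpy as np

# Hypothetical mean optical densities of one object in four spectral
# zones (e.g. green, red, and two near-infrared bands).
object_signature = np.array([0.30, 0.45, 0.15, 0.12])

# Hypothetical reference spectral images of known classes.
references = {
    "water":      np.array([0.25, 0.35, 0.80, 0.85]),
    "vegetation": np.array([0.35, 0.50, 0.10, 0.12]),
    "bare_soil":  np.array([0.20, 0.18, 0.25, 0.30]),
}

def identify(signature, refs):
    """Match a measured spectral image against the references by
    Euclidean distance between the curves."""
    return min(refs, key=lambda name: np.linalg.norm(signature - refs[name]))

print(identify(object_signature, references))
```

Here the object's curve (dark in the red zone, bright in the near infrared) is closest to the vegetation reference, which is exactly the comparison step of the logical scheme performed numerically instead of visually.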

When contours are interpreted over the entire image area, the spectral image is also used successfully to determine the boundaries of the distribution of the objects being deciphered, which is done by methods of comparative interpretation. Let us explain them. On each zonal image, certain sets of objects are separated by image tone, and these sets differ from zone to zone. Comparing the zonal images makes it possible to separate these sets and identify individual objects. Such a comparison can be implemented either by combining ("subtracting") the interpretation schemes of the zonal images, on each of which different sets of objects are identified, or by deriving difference images from the zonal images. Comparative interpretation is most applicable in the study of vegetation, primarily forests and crops.
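The "subtracting" of zonal images can be illustrated with a toy difference image. The brightness values and the threshold below are hypothetical; the point is only that vegetation, dark in the red zone but bright in the near infrared, produces a large positive difference while other surfaces cancel out:

```python
import numpy as np

# Two hypothetical zonal images (red and near-infrared) as brightness arrays.
red = np.array([[ 40,  42, 200],
                [ 38, 180, 205]], dtype=np.int16)
nir = np.array([[210, 205, 198],
                [208,  60, 200]], dtype=np.int16)

# Difference image: similar tones cancel, vegetation stands out strongly.
diff = nir - red

# Threshold chosen by eye for this toy example.
vegetation_mask = diff > 100
print(vegetation_mask)
```

The resulting mask delineates the set of pixels that behave differently in the two zones, which is the machine analogue of comparing the two interpretation schemes.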

Sequential interpretation of multi-zone images also exploits the fact that the dark contours of vegetation, seen against a lighter background in the red zone, seem to "disappear" from the image in the near-infrared zone because their brightness increases there, and thus no longer interfere with the perception of large features of tectonic structure and relief. This makes it possible, for example in geomorphological studies, to decipher landforms of different genesis from different zonal images: endogenous forms from images in the near-infrared zone and exogenous forms from images in the red zone. Sequential interpretation involves technologically simple operations of stepwise summation of results.



Interpreting multi-temporal images. Multi-temporal images make possible the qualitative study of changes in the objects under study and the indirect interpretation of objects by their dynamic features.

Dynamics research. Extracting dynamic information from images involves identifying changes, displaying them graphically, and interpreting them meaningfully. To identify changes in multi-temporal images, the images must be compared with each other, either by alternate (separate) or by simultaneous (joint) observation. Technically, the simplest visual comparison of multi-temporal images is to view them one after another. The very old "blinking" method makes it possible, for example, to detect a newly appeared object simply by quickly viewing two images of different dates in turn. From a series of images of a changing object an illustrative cinegram can be assembled. For instance, if images of the Earth received every 0.5 h from geostationary satellites at the same viewing angle are assembled into an animation file, the daily development of cloud cover can be replayed on the screen repeatedly.

For identifying small changes, joint rather than sequential observation of multi-temporal images proves more effective; special techniques are used for this: combining images (monocular and binocular), synthesizing a difference or sum image (usually in color), and stereoscopic observation.

In monocular observation, images reduced to the same scale and projection and printed on a transparent base are superimposed on one another and viewed against the light. In computer interpretation, joint viewing is best done with programs that present the combined images as translucent, or that "open" areas of one image against the background of another.
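The translucent combination of two co-registered images amounts to a simple alpha blend. The brightness arrays below are hypothetical; unchanged areas keep their tone in the blend, while changed pixels take intermediate values and stand out:

```python
import numpy as np

# Two hypothetical co-registered multi-temporal images (brightness values).
older = np.array([[200.0,  40.0],
                  [ 40.0, 200.0]])
newer = np.array([[ 40.0,  40.0],
                  [200.0, 200.0]])

alpha = 0.5  # show the upper image with 50 % transparency
combined = alpha * newer + (1 - alpha) * older

print(combined)
```

The two pixels that changed between the dates blend to the intermediate value 120, while the two unchanged pixels remain 40 and 200, which is why changed contours are visible against the combined background.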

In binocular observation, each of two images taken at different times is viewed with one eye; this is most conveniently done with a stereoscope whose observation channels have independent adjustment of magnification and image brightness. Binocular observation is good at detecting changes in distinct objects against a relatively uniform background, such as changes in the course of a river.

From black-and-white photographs of different dates a synthesized color image can be obtained. Experience shows, however, that interpreting such a color image is difficult; the technique is effective only for studying the dynamics of objects that are simple in structure and have sharp boundaries.

When studying changes caused by the movement of objects, the best results are given by stereoscopic observation of multi-temporal images (the pseudo-stereo effect). It allows the nature of the movement to be assessed and the boundaries of a moving object, for example an active landslide on a mountain slope, to be perceived stereoscopically.

Unlike sequential viewing, the methods of joint observation of multi-temporal images require preliminary corrections (bringing the images to the same scale, transformation), and these procedures are often more complex and time-consuming than the detection of changes itself.

Interpretation by dynamic features. The patterns of temporal change of geographical objects whose states vary with time can serve as interpretation features; as already noted, such a pattern is called the temporal image of the object. For example, thermal images taken at different times of day make it possible to recognize objects with a specific diurnal temperature course. The same techniques are used for multi-temporal images as for multi-zone ones: they are based on sequential and comparative analysis and synthesis and are common to work with any series of images.

Field and cameral interpretation. In field interpretation, objects are identified directly on the ground by comparing the object in situ with its image in the photograph. The interpretation results are drawn on the photograph or on a transparent overlay attached to it. This is the most reliable, but also the most expensive, type of interpretation. Field interpretation can be performed not only on photographic prints but also on screen (digital) images. In the latter case a field microcomputer with a touch-sensitive tablet screen and special software is usually used. The interpretation results are marked on the screen in the field with a stylus, fixed with a set of conventional symbols, and recorded in text or tabular form in several layers of the microcomputer's memory; additional audio notes about the interpreted object can also be entered. During field interpretation it is often necessary to add missing objects to the images. Such supplementary surveying is done by eye or instrumentally. Satellite positioning receivers are used for this: they make it possible to determine in the field, with almost any required accuracy, the coordinates of objects absent from the image. When interpreting images at a scale of 1:25,000 or smaller, it is convenient to use portable satellite receivers connected to a microcomputer as a single field interpretation kit.

A variety of field interpretation is aerovisual interpretation, which is most effective in the tundra and desert. The flight height and speed of the helicopter or light aircraft are chosen according to the scale of the images: the smaller the scale, the greater they are. Aerovisual interpretation is also effective when working with satellite images, but it is not easy to carry out: the interpreter must be able to orient quickly and recognize objects rapidly.

In cameral interpretation, the main and most common type, the object is recognized by direct and indirect interpretation features without going into the field and comparing the image with the object directly. In practice the two types are usually combined. A rational scheme combines preliminary cameral, selective field, and final cameral interpretation of the aerospace images. The ratio of field to cameral interpretation also depends on image scale: large-scale aerial photographs are interpreted mainly in the field, while for satellite images covering large areas the role of cameral interpretation increases. When working with space images, ground field information is often replaced by cartographic information taken from topographic, geological, soil, geobotanical and other maps.

Reference interpretation. Cameral interpretation relies on interpretation standards (keys) created in the field on key areas typical of the given territory. Interpretation standards are thus images of characteristic areas with the interpretation results for typical objects printed on them, accompanied by a description of the interpretation features. The standards are then used in cameral interpretation, which is performed by geographical interpolation and extrapolation, i.e. by extending the identified interpretation features to the areas between the standards and beyond them. Cameral interpretation using standards was developed in topographic mapping of hard-to-reach areas, when photo libraries of standards were created in a number of organizations. The cartographic service of our country has published albums of interpretation samples for various types of objects in aerial photographs. In thematic interpretation of space images, most of them multi-zone, a similar teaching role is played by the scientific-methodological atlases "Deciphering multi-zone aerospace images" prepared at Lomonosov Moscow State University, which contain guidelines and examples of the interpretation of various components of the natural environment, socio-economic objects, and the consequences of anthropogenic impact on nature.

Preparation of images for visual interpretation. Original images are rarely used for geographic interpretation. Aerial photographs are usually interpreted from contact prints, while satellite images are best interpreted by transmitted light using film transparencies, which convey the small and low-contrast details of a space image more fully.

Image transformation. To extract the necessary information faster, more easily and more completely, the image is transformed, which amounts to obtaining another image with specified properties. The transformation aims to emphasize the necessary information and remove the unnecessary. It should be emphasized that image transformation adds no new information; it only brings the existing information into a form convenient for further use.

Image transformation can be performed by photographic, optical and computer methods, or by combinations of them. Photographic methods are based on various modes of photochemical processing; optical methods, on transforming the light flux passed through the image. Computer transformations are the most common; at present there is effectively no alternative to them. Common computer transformations of images for visual interpretation include compression-decompression, contrast transformation, color image synthesis, quantization and filtering, as well as the creation of new derivative geoimages.

Image enlargement. In visual interpretation it is customary to use technical aids that extend the capabilities of the eye, such as magnifiers of various powers, from 2x to 10x; a measuring magnifier with a scale in the field of view is useful. The need for magnification becomes clear from comparing the resolution of images with that of the eye. The resolving power of the eye at the distance of best vision (250 mm) is taken to be 5 mm⁻¹. To distinguish, for example, all the details in a space photograph with a resolution of 100 mm⁻¹, it must be enlarged 100/5 = 20 times; only then can all the information contained in the photograph be used. It must be borne in mind that obtaining high-magnification prints (more than 10x) by photographic or optical methods is not easy: it requires large photographic enlargers or very intense, hard-to-implement illumination of the original images.
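The magnification arithmetic above can be wrapped in a small helper (the function name is ours; the 5 mm⁻¹ eye resolution is the value assumed in the text):

```python
def required_magnification(image_resolution_per_mm, eye_resolution_per_mm=5):
    """Magnification needed so the eye can resolve all image detail.

    image_resolution_per_mm: resolving power of the photograph, mm^-1.
    eye_resolution_per_mm: resolving power of the eye at 250 mm, mm^-1.
    """
    return image_resolution_per_mm / eye_resolution_per_mm

# The example from the text: a 100 mm^-1 space photograph needs 20x.
print(required_magnification(100))  # 20.0
```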

Features of observing images on a computer screen. The characteristics of the display matter for image perception: the best interpretation results are achieved on large screens that reproduce the maximum number of colors at a high refresh rate. Enlargement of a digital image on a computer screen is close to optimal when one pixel of the image corresponds to one screen pixel (pix).

If the pixel size on the terrain PIX (the spatial resolution) is known, then the image scale 1:M on the display screen is 1:M = pix/PIX, where pix is the screen pixel size expressed in the same units.

For example, a digital TM/Landsat satellite image with a ground pixel size PIX = 30 m will be reproduced on a display with pix = 0.3 mm at a scale of 1:100,000. If small details must be examined, the screen image can be additionally enlarged 2, 3, 4 or more times by the software; in this case one image pixel is displayed by 4, 9, 16 or more screen pixels, but the image takes on a "pixel" structure noticeable to the eye. In practice an additional enlargement of 2-3x is most common. To view the whole image on the screen at once, it has to be reduced; in that case, however, only every 2nd, 3rd, 4th, etc. row and column of the image is displayed, and loss of detail and small objects is inevitable.
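The on-screen scale relationship can be checked numerically; the 0.3 mm screen pixel is the value used in the example above:

```python
def screen_scale_denominator(ground_pixel_m, screen_pixel_mm=0.3):
    """Denominator M of the on-screen scale 1:M when one image pixel
    is shown by one screen pixel: M = PIX / pix (in the same units)."""
    return ground_pixel_m * 1000 / screen_pixel_mm  # metres -> millimetres

# Landsat TM: a 30 m ground pixel on a 0.3 mm screen pixel gives 1:100,000.
print(int(screen_scale_denominator(30)))  # 100000
```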

The period of effective work is shorter when interpreting on-screen images than when interpreting prints. Current sanitary norms for computer work must also be observed; they regulate, in particular, the minimum distance from the interpreter's eyes to the screen (at least 500 mm), the duration of continuous work, the intensity of electromagnetic fields, noise, etc.

Instruments and aids. In visual interpretation it is often necessary to make simple measurements and quantitative estimates. Various auxiliary aids are used for this: palettes, tone scales and tables, nomograms, etc. Stereoscopes of various designs are used to view images stereoscopically. The best instrument for cameral interpretation is considered to be a stereoscope with a double observation system that allows a stereo pair to be viewed by two interpreters at once. The transfer of interpretation results from individual images to a common cartographic base is usually performed with small special optical-mechanical devices.

Presentation of interpretation results. The results of visual interpretation are most often presented in graphic and textual, less often in digital, form. Usually the result is an image on which the studied objects are marked with conventional signs; the results are also fixed on a transparent overlay. When working on a computer it is convenient to present the results as printer output (hard copies). From satellite images, so-called interpretation schemes are produced, which in content are fragments of thematic maps compiled to the scale and projection of the image.

Automated interpretation is the interpretation of image data performed by computer. It has developed owing to factors such as the huge volumes of data to be processed and the spread of digital technologies that deliver imagery in a format suited to automated processing. Special software is used to interpret images: ArcGIS, ENVI (see Fig. 5), Panorama, SOCET SET, etc.

Fig.5. ENVI 4.7.01 program interface

Despite all the advantages of computers and specialized software and the constant development of technology, the automated approach has its problems, notably pattern recognition in machine classification based on narrowly formalized interpretation features.

To identify objects, they are divided into classes with certain properties; this process of partitioning the image space into segments and object classes is called segmentation. Because objects at the time of imaging are often partly obscured and affected by "noise" (clouds, smoke, dust, etc.), machine segmentation is probabilistic in nature. To improve its quality, information about the shape, texture, location and relative position of objects is added to their spectral features (color, reflectance, tone).

For machine segmentation and classification of objects, algorithms based on different classification rules have been developed:

    with training (supervised classification);

    without training (unsupervised classification).

A classification algorithm without training can segment an image quite quickly, but with a large number of errors. Supervised classification requires the indication of reference areas containing objects of the same types as those being classified; this algorithm demands more computation but gives more accurate results.
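A minimal sketch of unsupervised classification in the K-means style, on synthetic two-band pixel vectors (the two brightness clusters are hypothetical stand-ins for, say, water and bare ground; a real run would use the pixels of an actual image):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-band image flattened to (pixels, bands) brightness vectors:
# a dark cluster (e.g. water) and a bright cluster (e.g. bare ground).
pixels = np.vstack([
    rng.normal(30, 5, size=(100, 2)),
    rng.normal(180, 5, size=(100, 2)),
])

def kmeans(data, k, iters=20):
    """Minimal unsupervised classification: assign each pixel to the
    nearest class centre, recompute the centres, and repeat."""
    centres = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centres = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centres[j]
            for j in range(k)
        ])
    return labels, centres

labels, centres = kmeans(pixels, k=2)
```

This is the sense in which unsupervised classification is fast but error-prone: the number of classes k and the meaning of each class are not known to the algorithm and must be assigned by the interpreter afterwards.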

3.1. Automated interpretation using ENVI 4.7.01

To study the methods of interpretation and processing of space images, an image of the territory of the Udmurt Republic from the Landsat-8 satellite was interpreted. The image was obtained from the US Geological Survey website. The city of Izhevsk and the Izhevsk Pond are clearly visible, and the course of the Kama River from Votkinsk to Sarapul is also read without distortion. The acquisition dates are 05/15/2013 and 05/10/2017. Cloud cover of the 2013 image is 45%, and its upper part is difficult to interpret (although almost the entire spring-summer survey period yields heavily clouded imagery). The main analysis was therefore carried out on the more recent image.

Cloud cover of the 2017 image is 15%, and its upper right corner is unsuitable for processing because of a group of clouds covering the surface.

The coordinate system adopted in the image is UTM (Universal Transverse Mercator), based on the WGS84 ellipsoid.

The ENVI software package is a product that provides a full cycle of processing of optoelectronic and radar Earth remote sensing data, as well as their integration with geographic information system (GIS) data.

ENVI's advantages also include an intuitive graphical interface that lets a novice user quickly master all the necessary data processing algorithms. Logical drop-down menus make it easy to find the function needed when analyzing or processing data. ENVI menu items can be simplified, rearranged, localized or renamed, and new functions can be added. Version 4.7 introduced integration between ENVI and ArcGIS products.

To prepare the image for interpretation, it must be processed to obtain the spectral image for analysis. To assemble an image from a series of band files, all channels are combined into a single stream/container with the Layer Stacking command on the control panel (see Fig. 6). After these transformations we obtain a multi-channel container/image that can be worked with further: filtering, georeferencing, unsupervised classification, change detection, vectorization. All image channels are brought to the same resolution and the same projection. The command is invoked via Basic Tools > Layer Stacking or Map > Layer Stacking.

Fig. 6. ENVI program interface: channel stacking with Layer Stacking

To visualize a multispectral image, select File > Open External File > QuickBird in the ENVI menu. In the new Available Bands List window (see Fig. 7), to synthesize the image in RGB, select the red, green and blue channels respectively, the band sequence "4,3,2". The result is an image familiar to the human eye (see Fig. 8), and three new windows appear on the screen: Image, Scroll and Zoom.

Fig. 7. Available Bands List window

Fig. 8. Synthesized image acquired on May 15, 2013, band sequence "4,3,2"

Recently, for Landsat-8 imagery in ENVI, the band sequence "3,2,1" has more often been used to obtain an image in near-natural colors. To compare the two sequences, let us perform filtering (the Filter tab in the Image window), displaying both results on the screen (see Fig. 9).

Fig. 9. Filtering the image with the band sequence "3,2,1"

This command improves image quality: in this case the transparency of the clouds has increased, and distinct contours separating surfaces (water areas, forests, anthropogenic territories) have appeared. In effect, Filter helps to correct image "noise".

Unsupervised classification distributes pixels into classes with similar brightness characteristics. ENVI has two unsupervised classification algorithms: K-Means and IsoData. The K-Means command is considerably more demanding, requiring certain skill in selecting image settings and output parameters. The IsoData command is simpler and requires only changing the parameters specified in the system (see Fig. 10): main panel, Classification > Unsupervised > K-Means / IsoData (see Fig. 11).

Fig.10. IsoData settings window in ENVI

In the resulting example of unsupervised classification, the infrared and blue channels dominate, providing detailed information about the hydrographic network in the imaged area.

Fig.11. Unsupervised classification

The ENVI package makes it easy and convenient to register an image to a georeferenced image; the result can then be used in MapInfo. To do this, select Map > Registration > Select GCPs: Image to Map from the main menu. The result can immediately be displayed in MapInfo for comparison after saving in a suitable format (see Fig. 12).

Fig.12. Image georeference for use in MapInfo

Vectorization of an image in ENVI uses the same data set as georeferencing from ENVI into MapInfo, via the vectorization command: the projection, ellipsoid and zone number must be specified (see Fig. 13).

The dynamics of change in the selected territory is tracked using multi-temporal multi-zone images (for 2013 and 2017). Dynamics can be tracked in three ways:

    the blinking method;

    the "sandwich" method (combining layers in MapInfo);

    the change map.
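The change-map idea (the third method) reduces to a per-pixel difference with a noise threshold. The band values and the threshold below are hypothetical; on real co-registered imagery the threshold is what the subsequent filter adjustment tunes:

```python
import numpy as np

# The same (e.g. green) band from two hypothetical co-registered images.
band_2013 = np.array([[50, 52, 51],
                      [49, 50, 52],
                      [51, 50, 49]], dtype=np.int16)
band_2017 = np.array([[51, 50, 52],
                      [50, 120, 51],
                      [52, 49, 50]], dtype=np.int16)

# Large absolute differences mark real change; small ones are
# registration/illumination noise suppressed by the threshold.
threshold = 20  # chosen empirically for a real image
change_map = np.abs(band_2017 - band_2013) > threshold

print(int(change_map.sum()))  # number of changed pixels
```

With a threshold that is too low, the small residual differences all pass and the map fills with the kind of noise described below for real imagery.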

Fig.13. Image vectorization

In the blinking method, two separate windows with the two images are created using the New Display command in the layer selection window. The images are linked with the Link Displays command in the Image window, so that both images, showing the same area at different dates, move together on the screen (see Fig. 14). At a mouse click the displays swap places ("blink"), which allows changes (dynamics) to be detected.

Fig. 14. Detecting dynamics: the blinking method

The "sandwich" method consists in superimposing both images, previously saved in JPEG2000 (.jp2) format using the File - Save Images command. The images are opened one after another in MapInfo in a single projection (Universal Transverse Mercator). For a comfortable comparison, the transparency of the upper layer/image is set to 50% and a visual search for changes is carried out, followed by delineation of the areas of dynamics (see Fig. 15).

If the two images are georeferenced, separated into layers and in GeoTIFF/TIFF format, a modern method is available: the change map. In both images the same type of layer must be selected, for example the third (green) one. The transformations yield a map with a large amount of noise, which requires filter adjustment.

Fig. 15. Detecting dynamics: the "sandwich" method

Comparing all three methods, the author prefers the "sandwich" method: the blinking method strains the eyes heavily and causes premature visual fatigue, while creating a change map is not always effective because the noise cannot be removed completely.

For example, for images taken by an aerial camera with a focal length f = 70 mm, the vertical exaggeration coefficient is 250/f = 250/70 ≈ 3.5. Consequently, in stereoscopic viewing of photographs taken with short-focus aerial cameras the terrain is perceived as exaggerated, which facilitates the study of its various microforms. It should be borne in mind that with stereoscopic perception of such images the slopes appear much steeper than they actually are.

In visual interpretation it is useful, exploiting the properties of binocular vision, to observe not only stereoscopic pairs of images but also pairs made up of images of different color (binocular color mixing), black-and-white and color images, sharp (glossy) and soft (matte) prints, etc.

3.1.3. Types and methods of visual interpretation of images

During visual interpretation the interpreter recognizes objects in an aerospace image, determines their qualitative and some quantitative characteristics, reveals the relationships between objects, phenomena and processes, and fixes the interpretation results in graphic form.

An important methodological approach in geographic interpretation is the analysis of the interpreted objects in their development and in close connection with their environment. Interpretation proceeds from the general to the particular. For a geographer, an aerospace image is first of all an information model of the area under study, perceived as a whole. During targeted interpretation, however, the interpreter usually encounters both redundant (superfluous) information in the image and a lack of necessary information. It should be emphasized once again that the interpretation of aerospace images requires certain knowledge and skills: the deeper the interpreter's professional knowledge of the subject of research, the more accurate, complete and reliable the information extracted from the image. The results of visual interpretation, which is an intellectual activity bordering on art, depend significantly not only on the properties of the images but also on the experience, erudition, insight and often the intuition of the interpreter.

Technological schemes of interpretation. The interpretation of images, whether for research or production, is always carried out purposefully. Using images, geographers study geosystems of different ranks and their components, as well as individual objects, phenomena and processes, performing landscape, geomorphological, hydrological, glaciological and other types of interpretation.

The technology and organization of interpretation work depend significantly on its tasks, the territory, and the scale and type of the images (photographic or scanner, thermal, radar, etc.), and on whether single images or series (multi-zone, multi-temporal) are used. There are various organizational and technological schemes of interpretation, but they all include the following steps:

1) formulation of the interpretation task;

2) determining the set of objects to be interpreted (drawing up a preliminary legend for the future interpretation scheme or map);

3) selection of images for interpretation; transformation of the images to increase their expressiveness; preparation of instruments and interpretation aids. It should be borne in mind that images optimal for one task may not be effective for another;

4) the actual interpretation of the aerospace images and assessment of its reliability;

5) presentation of the interpretation results.

The central stage of any such work is the actual interpretation of the aerospace images. Thematic interpretation can follow two principal logical schemes. The first provides for recognizing objects first and then delineating them graphically; the second, for first delineating areas of uniform image appearance and then recognizing them. Both schemes end with interpretation proper, the scientific explanation of the results. When working with images, especially space images, the interpreter makes extensive use of additional, usually cartographic, material, which serves to refine the interpretation features and evaluate the results.

The first scheme proves universal for most tasks and has gained wide recognition in the practice of visual interpretation. The second is very effective for interpreting relatively simple objects by brightness features but has limited application. In computer interpretation these two schemes are implemented in classification technologies with and without training, respectively.

Interpretation features. In an aerospace image, objects differ from one another in a number of interpretation (unmasking) features. The main features are customarily divided into direct (simple and complex) and indirect (color plate I, 5). Direct simple interpretation features are the shape, size and tone (color) of the image and the shadow; the image pattern is a complex feature combining them. Indirect features are based on the relationships between objects, on the possibility of identifying objects invisible in the image through other, well-depicted objects. Indirect features also include the location of an object, geographical proximity, and traces of the object's impact on the environment.

Each object has its own characteristics, which are manifested in direct and indirect interpretation features. These features are generally not constant but depend on the season, the time and spectral ranges of the survey, image scale, etc. Developed mostly for visible-range images, they have their own specifics in thermal and radar images. Thus, image tone depends on object brightness in the visible range, on temperature in the thermal infrared, and on surface roughness, moisture content and the geometry of radio-beam illumination in the radio range. Thermal infrared images lack such a feature as the shadow, and in radar images the use of the image structure of flat areas is complicated by speckle noise. Depending on the specific conditions, the relative importance of the interpretation features, and the features themselves, change. A novice interpreter works more with direct features; skillful use of indirect features is evidence of an interpreter's high qualification.

In direct (immediate) interpretation, direct features are used. We present their characteristics for visible-range images.

Shape is an effective direct feature in visual interpretation; the main part of the information about an object is contained in the shape of its contour. Anthropogenic objects have geometrically regular, standard shapes: agricultural fields are distinguished by their rectangular form (color plate I, 5, a), airfields are identified by their crossing runways. Three-dimensional shape allows objects to be recognized stereoscopically.

Size is a feature used mainly with large-scale images. Buildings of different functional purposes are distinguished by size (color plate I, 5, b); fields of grain and fodder crop rotations are separated. In interpretation, size is usually estimated by visual comparison with the size of a known object; both absolute dimensions and their ratios matter.

The tone (degree of blackness) of the image, determined by the brightness of the object and the spectral region of the survey, helps to separate the main types of surface: snow, open ground, vegetation. Spots of sun glint in an image often point to water bodies. However, tone is not a stable feature: even under the same lighting, the same object can appear in different parts of an image in a different tone, and vice versa. Tone ratios - tone contrasts - are much more stable. In a multi-zone image, the tone of the same object reproduced in a series of zonal images will differ; correlated with the spectral brightness curve, it turns into a complex direct sign - the spectral image of the object.

Color is a more informative and reliable feature than the tone of a black-and-white image. Water bodies, forests, meadows, and plowed fields are well distinguished by color (color incl. I, 5, c). Images with purposefully distorted colors are used to separate different types of vegetation, rocks, etc.

Shadow can be attributed to both direct and indirect deciphering signs. In photographic and scanner images, it is subdivided into the object's own shadow and the cast (falling) shadow. The shadow in detailed photographs reflects the silhouette of the photographed object and makes it possible to estimate its height (color incl. I, 5, d). Since a shadow always has a relative contrast much greater than the object itself, often only the cast shadow makes it possible to detect objects that are small in plan but tall, such as factory chimneys. In mountainous regions, deep shadows make deciphering difficult. Shadows significantly affect the image pattern.

Image pattern is a stable complex deciphering feature that provides unmistakable identification not only of objects such as agricultural fields and settlements, but also of different types of geosystems. There are several classifications of aerospace image patterns, in which they are described using terms with one or two adjectives: granular, mosaic, radial-jet, etc. Each natural-territorial complex is characterized by a certain pattern in the image, reflecting its morphological structure (color incl. I, 6). Within the image pattern, a distinction is made between texture - the shape of the pattern-forming elements - and structure - the spatial arrangement of the texture elements. Sometimes the image pattern is characterized by quantitative indicators, which serves as the basis for morphometric interpretation.

In computer interpretation, the texture of a digital image is usually understood as the spatial variability of pixel brightness values, which partially combines the concepts of texture and structure as they are distinguished in visual interpretation.
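As a minimal sketch of this idea (assuming the image is a grayscale NumPy array; the window size is an illustrative choice, not taken from the text), texture can be quantified as the local variance of pixel brightness in a sliding window:

```python
import numpy as np

def local_variance(img, win=3):
    """Texture measure: variance of pixel brightness in a win x win
    neighborhood, computed with a simple sliding window (no padding)."""
    h, w = img.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + win, j:j + win].var()
    return out

# A flat (smooth) patch has zero texture; a checkerboard patch does not.
flat = np.full((5, 5), 100.0)
rough = np.indices((5, 5)).sum(axis=0) % 2 * 255.0
```

A uniform patch yields zero local variance everywhere, while the checkerboard yields a uniformly high value, which is the sense in which "spatial variability" merges texture and structure into one number.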

Morphometric interpretation. The deciphering attribute of objects - shape - is usually determined visually during interpretation, but a more accurate separation of objects by shape is possible on the basis of measurements. In addition to the shape of individual objects, quantitative statistical characteristics of the shape of mass objects and of their distribution are determined; these can also serve as signs of certain types of objects.

Recognition and study of objects based on the determination of quantitative indicators characterizing their shape, size, features of spatial distribution, and image pattern - its texture and structure - is called morphometric interpretation. Methods for determining morphometric parameters, the number of which in different areas of research runs into dozens, vary from the simplest visual and instrumental measurements to computer processing of images.

Morphometric interpretation is used when working with images over a wide range of scales - from large-scale aerial photographs to survey satellite images - and in various thematic areas of research. For example, in forest inventory, one of the important tasks of stand assessment - determining the bonitet of forest stands (i.e., their quality and timber reserves) - is solved indirectly from an analysis of crown diameter and canopy density using large-scale aerial photographs; statistical indicators of these characteristics are obtained by measuring profiles on stereophotogrammetric instruments.

Another type of morphometric analysis of images, used in geological and geomorphological studies, is the analysis of the distribution of fault tectonics elements (length, direction, and density of lineaments). Diagrams of their distribution, obtained from the results of lineament deciphering, serve as the basis for identifying areas with different basement structures that differ in their prospects for mineral exploration. Software-based computer processing is widely used for such image analysis. A related task is zoning a territory by the intensity of erosional dissection, for example, by the density of the ravine-gully network. The isolation in images of areas with different density and depth of dissection, slope angles, and slope aspect, based on a stereo model and a digital model created from the images, is now also provided by computer programs. Morphometric interpretation of the image pattern, used in landscape studies, is more difficult, since the characteristics of the pattern are harder to formalize and quantify. Nevertheless, the quantitative characteristics of landscape patterns are being studied in order to develop algorithms for landscape morphometric computer interpretation based on them.
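The density-zoning idea can be sketched as follows (a deliberate simplification, not the text's method: each deciphered lineament or gully is a line segment, and its whole length is credited to the grid cell containing its midpoint):

```python
import math

def density_grid(segments, cell, nx, ny):
    """Total lineament/gully length per grid cell, as a crude density
    measure for zoning a territory by intensity of dissection.
    Each segment's length is assigned to the cell of its midpoint."""
    grid = [[0.0] * nx for _ in range(ny)]
    for (x1, y1), (x2, y2) in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        mx, my = (x1 + x2) / 2, (y1 + y2) / 2
        ix = min(int(mx // cell), nx - 1)
        iy = min(int(my // cell), ny - 1)
        grid[iy][ix] += length
    return grid

# Hypothetical deciphered segments: two short ones near the origin,
# one long one farther out.
segs = [((0, 0), (3, 4)), ((1, 1), (4, 5)), ((15, 15), (15, 25))]
dens = density_grid(segs, cell=10, nx=2, ny=2)
```

Cells with the largest accumulated length mark the zones of most intense dissection; a production workflow would normally use length-weighted overlap with cells rather than midpoints.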

Indirect interpretation. In contrast to direct interpretation, indirect interpretation is based on the interconnections and interdependencies between objects and phenomena that objectively exist in nature: the interpreter identifies not the object itself, which may not even be depicted in the image, but its pointer, or indicator. Vegetation cover, as well as relief and hydrography, are most often used as indicators. Indirect signs underlie the landscape method of interpretation, which is based on the multilateral connections between individual components of the landscape, and between the object being deciphered and the natural complex as a whole. Usually, as the scale of images decreases, the role of indirect deciphering features increases.

Color incl. I, 5 shows examples of objects deciphered by indirect signs. Spots of waterlogged soil in the fields indicate the development of a subsidence microrelief and a shallow groundwater table. Loops and folds of surface moraines on a glacier indicate that it is a surging (pulsating) glacier whose advance is to be expected.

Indirect interpretation using indicators is called indication interpretation; in it, components or processes that are less accessible to observation are identified from the observable "physiognomic" components of the landscape. The geographical basis of such interpretation is the theory of indication (indication landscape science). Indication interpretation plays a particularly important role when working with satellite images, when direct features lose their significance owing to the strong generalization of the image. On satellite images of flat areas, it is primarily the outer cover of the earth's surface - vegetation - that is displayed, through which the microrelief shows; vegetation can also be used to judge soils and ground conditions. In indication interpretation, so-called indicator tables are compiled, in which, for each type or state of the indicator, the corresponding type of the indicated object is given. Such a technique has been worked out especially carefully for hydrogeological interpretation, where the distribution of vegetation makes it possible to determine not only the presence but also the depth and mineralization of groundwater.

Objects whose connections with the phenomenon under study are not obvious at first glance can also act as indicators. Thus, the formation of linear ridges of cumulus clouds over large tectonic faults has been noted repeatedly. Field geophysical studies have shown that additional heat flows rise along such faults, which explains the formation of the clouds; the clouds can therefore serve as an indicator of faults.

In indication interpretation, a transition from spatial characteristics to temporal ones is possible. By identifying spatio-temporal series from indication signs, it is possible to establish the relative age of a process or the stage of its development. The various forms of alases in

Fig. 3.9. Movement tracers:

a - median moraines on the surface of a glacier; b - sandy ridges in the desert, elongated in the direction of the prevailing winds; c - water flows of different turbidity carried by a river into the sea; d - phytoplankton on the sea surface, visualizing a mushroom-shaped current

satellite images of the permafrost zone and their relationship with thermokarst lakes indicate the stages of development of thermokarst processes, making it possible to distinguish young, mature, and decrepit thermokarst relief.

Mass objects (tracers), which together visualize the direction and nature of movement, often serve as indicators of the movement of water masses in the ocean, of surface winds, and of glacier ice (Fig. 3.9). This role can be played by broken ice, suspended sediment, and phytoplankton tracing the movement of waters in the sea, or by median moraines and the pattern of cracks or layering on the surface of a mountain glacier. The movement of waters is well visualized by temperature contrasts of the water surface - it is from thermal infrared images that the vortex structure of the World Ocean has been revealed. Eolian landforms of sandy massifs and sastrugi on the snow-covered surface of ice sheets indicate the predominant direction of surface winds. Not only the direction but also some quantitative characteristics of the movement, such as its speed, can be revealed. For example, the arcs of ogives on a mountain glacier, which form below an icefall and move down together with the ice, are stretched along the axis of the glacier; this indicates a higher speed in the middle of the glacier than at its sides, i.e., laminar rather than block-type ice movement.

Interpretation of multizone images. A multi-zone aerospace image usually consists of 4-6 images obtained in relatively narrow spectral zones. Radar images obtained by recording reflected radio waves of different lengths, or with different polarizations, can also be assigned to this type. Working with a series of zonal images is more complicated than with a single image, and the interpretation of multizone images requires special methodological approaches. The most versatile approach is color image synthesis, including the choice of the color synthesis variant that is optimal for a specific interpretation task. Additional results can also be obtained by working with a series of achromatic (black-and-white) zonal images. In this case, two main methodological approaches are used: comparative and sequential interpretation.
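A minimal sketch of color synthesis from zonal images (the channel assignment NIR to red, red zone to green, green zone to blue is the common false-color convention; the tiny arrays and the min-max normalization are illustrative, not from the text):

```python
import numpy as np

def synthesize_rgb(nir, red, green):
    """False-color synthesis: stack three zonal images into the R, G, B
    channels and normalize to [0, 1] for display. Vegetation, bright in
    the near-IR, comes out red in the synthesized image."""
    rgb = np.stack([nir, red, green], axis=-1).astype(float)
    rgb -= rgb.min()
    if rgb.max() > 0:
        rgb /= rgb.max()
    return rgb

# Two pixels: a vegetated one (bright in NIR) and a dark one.
nir = np.array([[200.0, 10.0]])
red = np.array([[30.0, 30.0]])
green = np.array([[40.0, 40.0]])
img = synthesize_rgb(nir, red, green)
```

Trying different zone-to-channel assignments is precisely the "choice of the color synthesis variant" mentioned above: each variant emphasizes a different class of objects.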

Comparative interpretation of a series of zonal images is based on the use of the spectral images of the objects depicted. The spectral image of an object in a photographic image is determined visually from the tone of its image in a series of zonal black-and-white pictures; tone is evaluated on a standardized scale in units of optical density. From the data obtained, a spectral image curve is constructed (Fig. 3.10), reflecting the change in the optical density of the image in the

Fig. 3.10. Spectral image curves of the main forest-forming species and other objects, obtained from a series of photographic prints of MKF-6/Soyuz-22 zonal images (vertical lines in the graphs correspond to the survey zones):

1 - sand; 2 - meadows (alases); 3 - pine; 4 - larch; 5 - birch, willow, poplar; 6 - spruce; 7 - burnt area; 8 - water

images in different spectral zones. In this case, the optical density values of the prints D, plotted along the ordinate axis, decrease upward along the axis, contrary to convention, so that the spectral image curve corresponds to the spectral brightness curve. Some commercial programs provide automatic plotting of spectral images from digital images. The logical scheme of comparative interpretation of multizone images includes the following steps: determination of the spectral image of an object from the images - comparison with known spectral reflectances - identification of the object.

When deciphering contours over the entire area of an image, the spectral image is also successfully used to determine the boundaries of distribution of the objects being deciphered, which is done by methods of comparative interpretation. Let us explain them. On each of the zonal images, certain sets of objects are separated by image tone, and these sets differ between the zones. For example, in the case shown in Fig. 3.11, the image in the red zone (R) shows pine forests, spruce forests, and burnt areas as one set, while the near-infrared (IR) image shows spruce forests and burnt areas. Comparison of the zonal images makes it possible to separate these sets and single out individual objects, in this case pine forests. Such a comparison can be implemented by combining ("subtracting") the interpretation schemes of the zonal images, on each of which different sets of objects are identified, or by obtaining difference images from the zonal images. The sequence of operations for subtracting zonal images or their interpretation schemes can be written as interpretation formulas (see Fig. 3.11). Comparative interpretation is most applicable in the study of vegetation, primarily forests and crops.
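The "subtraction" of zonal interpretation schemes can be sketched with boolean masks (the masks are synthetic stand-ins for the deciphering schemes of the Fig. 3.11 example; real schemes would come from tone thresholding of the zonal images):

```python
import numpy as np

# Synthetic interpretation schemes: True where tone identifies the set.
# Four hypothetical pixels: pine, spruce, burnt area, alas.
# The red zone separates pine + spruce + burnt areas as one dark set;
# the near-IR zone separates spruce + burnt areas only.
red_dark = np.array([[True, True, True, False]])
ir_dark = np.array([[False, True, True, False]])

# "Subtracting" the IR scheme from the red one isolates pine forests.
pine = red_dark & ~ir_dark
```

The same set algebra, applied pixel by pixel to brightness-thresholded zonal images rather than hand-made masks, is what a difference image implements.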

R - IR or IR - R

Larch forests (L), pine forests (P), spruce forests and burnt areas (S + B), alases

L = (L + P)_IR - P = (L + P)_IR - [(P + S + B)_R - (S + B)_IR]

Fig. 3.11. Comparative interpretation of MKF-6/Soyuz-22 multizone images for separating the forests of the middle taiga zone by species composition (Central Yakutian Plain, middle course of the Vilyui River)

Sequential interpretation is based on the fact that different objects are displayed optimally in images of different spectral zones. For example, in photographs of shallow water, owing to the different penetration of rays of different spectral zones (red, orange, green) into the aquatic environment, objects located at different depths are displayed, and interpretation of a series of multi-zone images makes a multi-depth analysis possible (Fig. 3.12).

Fig. 3.12. Sequential interpretation of MKF-6/Soyuz-22 multizone images for multi-depth analysis of bottom relief forms in the shallow northeastern part of the Caspian Sea:

1 - crests of underwater ridges; 2 - upper parts of the slopes; 3 - lower parts of the slopes; 4 - flattened inter-ridge depressions; 5 - inter-ridge hollows

In sequential interpretation of multi-zone images, use is also made of the fact that the dark contours of vegetation seen in the red zone against a lighter background seem to "disappear" from the image in the near-infrared zone, owing to the increased brightness of vegetation there, and so do not interfere with the perception of the large features of tectonic structure and relief. This opens up the possibility, for example in geomorphological studies, of deciphering landforms of different genesis from different zonal images: endogenous forms from images in the near-infrared zone, and exogenous forms in the red. Sequential interpretation involves technologically simple operations of stepwise summation of results.

Interpretation of multi-temporal images. Multi-temporal images make it possible to study changes in the objects of interest qualitatively and to interpret objects indirectly by their dynamic features.

Study of dynamics. The process of extracting dynamic information from images includes the identification of changes, their graphic display, and their substantive interpretation. To identify changes, multi-temporal images must be compared with each other, which is done by alternate (separate) or simultaneous (joint) observation. Technically, visual comparison of multi-temporal images is carried out most simply by viewing them in turn. The long-established "blinking" (flicker) method allows, for example, a newly appeared object to be detected quite simply by quickly examining two images of different dates in turn. From a series of images of a changing object, an illustrative cinegram can be assembled. Thus, if images of the Earth received every half hour from geostationary satellites at the same view angle are assembled into a looped film or an animation file, the daily development of cloud cover can be replayed on the screen.

For detecting small changes, joint rather than alternate observation of multi-temporal images turns out to be more effective; special techniques are used for this: superimposing images (monocular and binocular viewing); synthesizing a difference or sum image (usually in color); and stereoscopic observation.
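The difference-image technique can be sketched as follows (assuming two co-registered images of the same scale as NumPy arrays; the change threshold is an illustrative parameter, not from the text):

```python
import numpy as np

def change_mask(img_t1, img_t2, threshold=20):
    """Difference image of two co-registered multi-temporal images:
    flag pixels whose brightness changed by more than the threshold,
    ignoring small variations (noise, slight illumination differences)."""
    diff = np.abs(img_t2.astype(int) - img_t1.astype(int))
    return diff > threshold

t1 = np.array([[100, 100], [100, 100]])
t2 = np.array([[102, 100], [180, 100]])   # one pixel changed strongly
mask = change_mask(t1, t2)
```

In practice the two acquisitions must first be brought to the same scale and projection; without that preliminary correction the difference image marks misregistration, not real change.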

In monocular observation, images reduced to the same scale and projection and printed on a transparent base are superimposed one on top of the other and viewed against the light. In computer interpretation, it is advisable to use programs that render the combined images as translucent, or that "reveal" areas of one image against the background of another.

In binocular observation, each of two images of different dates is viewed with one eye, most conveniently using a stereoscope whose observation channels have independent adjustment of magnification and image brightness. Binocular observation is good at detecting changes in distinct objects against a relatively uniform background, such as changes in the course of a river.

From multi-temporal black-and-white images, a synthesized color image can be obtained. True, as experience shows, the interpretation of such a color image is difficult; this technique is effective only when studying the dynamics of objects that are simple in structure and have sharp boundaries.

When studying changes associated with the movement of objects, the best results are given by stereoscopic observation of multi-temporal images (the pseudo-stereo effect). Here one can evaluate the nature of the movement and stereoscopically perceive the boundaries of a moving object, for example, the boundaries of an active landslide on a mountain slope.

Unlike alternate observation, the methods of joint observation of multi-temporal images require preliminary corrections - bringing the images to the same scale and transforming them - and these procedures are often more complex and time-consuming than the determination of the changes itself.

Interpretation by dynamic features. The patterns of temporal change of geographical objects whose states change with time can serve as their deciphering features; as already noted, these are called the temporal image of the object. For example, thermal images obtained at different times of day make it possible to recognize objects with a specific diurnal temperature variation. When working with multi-temporal images, the same techniques are used as in deciphering multi-zone images; they are based on sequential and comparative analysis and synthesis and are common to work with any series of images.

Field and cameral interpretation. In field interpretation, objects are identified directly on the ground by comparing the object in nature with its image in the photograph. The results are drawn on the image or on a transparent overlay attached to it. This is the most reliable, but also the most expensive, type of interpretation. Field interpretation can be performed not only on photographic prints but also on screen (digital) images. In the latter case, a field microcomputer with a touch-sensitive screen tablet is usually used, together with special software. The results of interpretation are marked in the field on the screen with a computer pen, fixed with a set of conventional symbols, and recorded in text or tabular form in several layers of the microcomputer's memory. Additional audio information about the object being interpreted can be entered. During field interpretation, it is often necessary to plot missing objects on the images. This supplementary survey is carried out by eye or instrumentally. For this, satellite positioning receivers are used, which make it possible to determine in the field the coordinates of objects absent from the image with almost any required accuracy. When deciphering images at a scale of 1:25,000 and smaller, it is convenient to use portable satellite receivers connected to a microcomputer in a single field interpretation kit.

A type of field interpretation is aerovisual interpretation, which is most effective in the tundra and desert. The flight height and speed of a helicopter or light aircraft are chosen according to the scale of the images: the smaller the scale, the greater they are. Aerovisual interpretation is effective when working with satellite images. However, it is not easy to perform: the interpreter must be able to orient himself and recognize objects quickly.

In cameral interpretation, which is the main and most common type, an object is recognized by direct and indirect deciphering features without going into the field and directly comparing the image with the object. In practice, the two types of interpretation are usually combined. A rational scheme combines preliminary cameral, selective field, and final cameral interpretation of aerospace images. The ratio of field to cameral interpretation also depends on the scale of the images. Large-scale aerial photographs are interpreted mainly in the field. When working with satellite images covering large areas, the role of cameral interpretation increases. When working with space images, ground field information is often replaced by cartographic information obtained from maps - topographic, geological, soil, geobotanical, etc.

Reference-based interpretation. Cameral interpretation relies on interpretation references (keys) created in the field on key areas typical of the given territory. Interpretation references are images of characteristic areas on which the results of interpreting typical objects are printed, accompanied by a description of the deciphering features. The references are then used in cameral interpretation, which is performed by the method of geographic interpolation and extrapolation, i.e., by extending the identified deciphering features to the areas between the references and beyond them. Cameral interpretation with references was developed in topographic mapping of hard-to-reach areas, when photo libraries of references were created in a number of organizations. The cartographic service of our country has published albums of sample interpretations of various types of objects on aerial photographs. In the thematic interpretation of space images, most of which are multi-zone, a similar instructional role is played by the scientific-methodological atlases "Deciphering Multi-Zone Aerospace Images" prepared at Lomonosov Moscow State University, which contain methodological recommendations and examples of the results of interpreting various components of the natural environment, socio-economic objects, and the consequences of anthropogenic impact on nature.

Preparing images for visual interpretation. Original images are rarely used for geographic interpretation. When interpreting aerial photographs, contact prints are usually used, while satellite images are best interpreted in transmitted light, using transparencies on film, which convey the small and low-contrast details of a space image more fully.

Image transformation. To extract the necessary information from an image faster, more simply, and more completely, the image is transformed, which amounts to obtaining another image with specified properties. The transformation is aimed at emphasizing the necessary information and removing the unnecessary. It should be emphasized that image transformation adds no new information; it only brings the existing information into a form convenient for further use.

Image transformation can be performed by photographic, optical, and computer methods, or by combinations of them. Photographic methods are based on various modes of photochemical processing; optical methods, on transforming the light flux transmitted through the image. Computer image transformations are the most common; at present there is essentially no alternative to them. Common computer transformations of images for visual interpretation - compression-decompression, contrast transformation, color image synthesis, quantization and filtering, as well as the creation of new derivative geoimages - are discussed in Sec. 3.2.

Image magnification. In visual interpretation, it is customary to use technical means that extend the capabilities of the eye, for example magnifiers of various powers, from 2x to 10x. A measuring magnifier with a scale in the field of view is useful. The need for magnification becomes clear from a comparison of the resolution of images and of the eye. The resolving power of the eye at the distance of best vision (250 mm) is taken to be 5 mm⁻¹. To distinguish, for example, all the details in a space photographic image with a resolution of 100 mm⁻¹, it must be magnified 100/5 = 20 times. Only in this case can all the information contained in the photograph be used. It should be borne in mind that it is not easy to obtain photographs at high magnification (more than 10x) by photographic or optical methods: large-format photographic enlargers or very high illumination of the original photographs are required.
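The magnification ratio above (image resolving power divided by that of the eye) can be expressed as a trivial helper; the function name and the default eye resolution of 5 mm⁻¹ follow the text's assumption for the distance of best vision:

```python
def required_magnification(image_res_per_mm, eye_res_per_mm=5):
    """Magnification needed so the eye resolves all detail in an image:
    the ratio of the image's resolving power to that of the eye
    (5 lines/mm at the 250 mm distance of best vision)."""
    return image_res_per_mm / eye_res_per_mm

m = required_magnification(100)   # the text's example: 100/5 = 20x
```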

Features of observing images on a computer screen. The characteristics of the display are important for the perception of images: the best interpretation results are achieved on large screens that reproduce the maximum number of colors and have a high refresh rate. Magnification of a digital image on a computer screen is close to optimal when one screen pixel pix_d corresponds to one image pixel pix_c. In this case the on-screen magnification v is

v = pix_d / pix_c.

If the pixel size on the terrain PIX (the spatial resolution) is known, then the image scale 1 : M_d on the display screen is given by

1 / M_d = pix_d / PIX.

For example, a TM/Landsat digital space image with a ground pixel size PIX = 30 m will be reproduced on a display with pix_d = 0.3 mm at a scale of 1:100,000. The on-screen image can be additionally enlarged 2, 3, 4 times or more; in this case one image pixel is displayed by 4, 9, 16 or more screen pixels, but the image takes on a "pixelated" structure noticeable to the eye. In practice, an additional magnification of 2-3x is most common. To view the whole image on the screen at once, it has to be reduced; in that case, however, only every 2nd, 3rd, 4th, etc., row and column of the image is displayed, and loss of detail and small objects is inevitable.
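The two screen formulas can be sketched as helpers (function names are illustrative; units are converted explicitly so the text's Landsat example can be checked):

```python
def screen_magnification(pix_display_mm, pix_image_mm):
    """v = pix_d / pix_c: how much larger a pixel appears on the screen
    than on the original image."""
    return pix_display_mm / pix_image_mm

def display_scale_denominator(ground_pixel_m, screen_pixel_mm):
    """M_d from 1/M_d = pix_d / PIX, with both sizes brought to mm."""
    return ground_pixel_m * 1000 / screen_pixel_mm

# The text's Landsat TM example: a 30 m ground pixel shown as a 0.3 mm
# screen pixel gives a display scale of about 1:100,000.
md = display_scale_denominator(30, 0.3)
```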

The time of effective work in deciphering on-screen images is shorter than in deciphering prints visually. The current sanitary standards for computer work must also be taken into account; they regulate, in particular, the minimum distance of the interpreter's eyes from the screen (at least 500 mm), the duration of continuous work, the intensity of electromagnetic fields, noise, etc.

Instruments and aids. In visual interpretation, it is often necessary to make simple measurements and quantitative estimates. For this, auxiliary aids of various kinds are used: palettes, tone scales and tables, nomograms, etc. (Fig. 3.13). Stereoscopes of various designs are used for stereoscopic viewing of images. The best instrument for cameral interpretation should be considered a stereoscope with a double observation system, which allows a stereo pair to be viewed by two interpreters. The transfer of interpretation results from individual images to a common cartographic base is usually performed with a small special opto-mechanical device.

Presentation of interpretation results. The results of visual interpretation are most often presented in graphic and textual forms, and less often in digital form. Usually, the result of the interpretation work is an image on which the objects under study are graphically highlighted and indicated by conventional signs. The results may also be fixed on a transparent overlay. When working on a computer, it is convenient to present the results as printer output (hard copies). On the basis of satellite images, so-called interpretation schemes are compiled, which in content are fragments of thematic maps drawn to the scale and projection of the image.

Fig. 3.13. The simplest measuring aids: a - measuring wedge; b - scale of circles

In the age of the scientific and technological revolution and space exploration, mankind continues to study the Earth attentively, monitoring the state of the natural environment, seeing to the rational use of natural resources, and constantly improving methods for assessing the now limited natural resources. Among the developing methods of studying the Earth from space and of space monitoring, multi-zone photographic survey is firmly entering practice, opening up additional opportunities for increasing the reliability of image interpretation.

In September 1976, within the framework of international cooperation under the Interkosmos program, specialists from the USSR and the GDR jointly conducted the Raduga space experiment, during which USSR pilot-cosmonauts V.F. Bykovsky and V.V. Aksenov, in the eight-day flight of the Soyuz-22 spacecraft, obtained more than 2,500 multispectral images of the earth's surface. The survey was carried out with the MKF-6 multi-zone space camera, developed jointly by specialists from the Carl Zeiss Jena people's enterprise of the GDR and the Space Research Institute of the USSR Academy of Sciences, and manufactured in the GDR. Multi-zone imaging with the MKF-6 was also carried out from laboratory aircraft, and later from the Salyut-6 manned orbital station. Simultaneously with the MKF-6, the MSP-4 multi-zone synthesizing projector was developed, which made it possible to produce high-quality color synthesized images, now widely used in scientific, practical, and educational work.

This atlas of images and of maps compiled from them illustrates, with typical examples, the possibilities of using multi-zone aerospace photography in various studies of the natural environment, in the planning and operational management of economic activity, and in many branches of thematic mapping. The atlas presents a wide range of areas of Earth research, covering the study of natural conditions and resources not only on land but also in shallow seas. The interpretation technique for geological studies of mountain-fold areas is presented on the example of the Pamir-Alay region. Geomorphological-glaciological and hydrological aspects of research are considered on the examples of the tectonic structure and relief of the southern Cis-Baikal region, the relief of the coasts of the Sea of Okhotsk, the relief of river floodplains and the permafrost thermokarst relief of central Yakutia, the glaciation of the Pamir-Alay, the distribution of solid river runoff in Lake Baikal, and glacial landscapes in the northern part of the GDR. Vegetation studies are illustrated by the semi-desert and desert vegetation of southeastern Kazakhstan and the forest vegetation of the southern Cis-Baikal region and central Yakutia. Landscape mapping covers the arid landscapes of foothill areas and intermountain basins of southeastern Kazakhstan and Central Asia, the mountain taiga landscapes of the northern

Baikal region, as well as landscapes of the middle part of the GDR. On the examples of southeastern Kazakhstan and a site in the central part of the GDR, the possibilities of using satellite images for the purpose of physical and geographical zoning of the territory are shown. In addition to studies of natural resources, the atlas also presents some areas of socio-economic research - mapping of agricultural land use and settlement, as well as the study of human impact on the natural environment using the example of mapping modern landscapes with their anthropogenic modifications. These studies were carried out in the Central Asian regions of the Soviet Union and in the GDR.

The literature describes in sufficient detail the methods of interpreting "classical" aerial photographs, and the traditional, well-established technology for processing such images is successfully used in practice. The atlas presents a set of methodological techniques for processing multi-zone aerial and space images at different levels of technical equipment: visual, instrumental and automated. In visual interpretation, color synthesized images are the most versatile material to work with. When using a series of zonal images, several techniques are applied. The simplest, choosing the optimal spectral zone for interpreting specific phenomena, is effective only for some objects, for example the coastline of shallow water bodies, and therefore has relatively limited application. Comparison of a series of zonal images using the spectral image of the surveyed objects, determined approximately with a standardized density scale, is advisable when interpreting objects characterized by a specific course of spectral brightness: in particular, for separating forest-forming tree species when mapping forest vegetation, or for identifying the boundaries of glaciers and the firn line from differences in the image of snow with different moisture content.

Sequential interpretation of a series of zonal images, which uses the effect of the optimal display of different objects in particular zones of the spectrum, is applied to separate tectonic faults of different ranks, to study water areas successively at different depths, and so on.

Interpretation of multi-zone space images is carried out with the selective use of aerial photographs obtained in sub-satellite experiments. To identify subtle differences between interpreted objects that are not captured visually, for example those associated with the state of agricultural crops, measurement interpretation is used, based on photometric determinations of the spectral brightness of objects from zonal images, taking into account distortions caused by the imaging conditions. This provides spectrophotometric determinations with an error of 3-5%.

For more complex data analysis, including when solving operational problems associated with a large amount of processed information, automated image processing is required, the capabilities of which are illustrated by the example of land use and the classification of cotton crops depending on their condition.

All the maps included in the atlas, compiled from multi-zone images, are cartographic works of a new type and demonstrate the possibilities for improving thematic maps based on aerospace surveys.

A special role in solving various problems in relatively small territories that are well studied by classical methods is played by multi-zone images obtained from aircraft. This method of detailed study of natural resources and environmental monitoring is promising, for example, for the territory of the GDR. The presented examples of multi-zone aerial images cover the test site in the area of Lake Süsser See in the central part of the GDR, as well as areas of the Ferghana Valley, the Okhotsk coast, and others in the USSR. Space images, in turn, have the well-known advantages of broad coverage and of spectral and spatial generalization of the image. The presented space images cover the coasts of the Baltic Sea, the northeastern Caspian and the Sea of Okhotsk, the southern Cis-Baikal and northern Baikal regions, central Yakutia, southeastern Kazakhstan and Central Asia.

The aerospace method of studying the Earth is, by its principle, complex and interdisciplinary. Each image is suitable, as a rule, for multi-purpose use in various areas of Earth exploration. This is also consistent with the regional structure of the atlas, in which, for each image, a deciphering technique is presented in those directions where it turned out to be the most effective. Each section, which opens with a color synthesized image of the study area with a reference scheme and a textual description of the territory, presents the results of interpretation of the images in the form of thematic maps, mainly at a scale of 1:400,000-1:500,000, with brief text comments. On the main topics, explanations and recommendations are given on the method of thematic interpretation of multi-zone images.

The atlas can serve as a scientific and methodological guide to the interpretation of multi-zone images for specialists involved in the study of natural resources by remote methods, and more broadly as a visual aid on the use of satellite imagery in compiling thematic maps for cartographers, geologists, soil scientists, and specialists in agriculture and forestry, as well as for conservationists. It will undoubtedly find wide application in universities: students will be able to use it when studying the theory and practice of aerospace methods, and to master the skills of working with space images in the development and compilation of maps and in the study of natural resources.

The main work on the preparation of the atlas was carried out by the Faculty of Geography of Moscow State University, the Space Research Institute of the USSR Academy of Sciences, and the Central Institute of Earth Physics of the Academy of Sciences of the GDR.

The atlas was compiled in the laboratory of aerospace methods of the Department of Cartography, Faculty of Geography, Moscow University, with the participation of the departments of geomorphology, cartography, glaciology and cryolithology, physical geography of the USSR, and physical geography of foreign countries, and of the problem laboratories for complex mapping and atlases and for soil erosion and channel processes of the same faculty, as well as the Faculty of Geology and the Department of Scientific Photography and Cinematography of Moscow State University, the All-Union Association "Aerogeology", the Center for Remote Earth Exploration Methods of the Central Institute of Physics of the Earth of the Academy of Sciences of the GDR, the Department of Geography of the Pedagogical Institute of Potsdam, and the Department of Geography of the Martin Luther University of Halle-Wittenberg.

Interpretation of space images is the recognition of the studied natural complexes and ecological processes, or their indicators, from the pattern of the photographic image (tone, color, structure), its size, and its combination with other objects (the texture of the photographic image). These external characteristics are inherent only in those physiognomic components of landscapes that are directly reflected in the image.

For this reason, only a small number of natural components can be interpreted by direct signs: landforms, vegetation cover, and sometimes surface deposits.

Interpretation includes the detection, recognition and interpretation of objects, the determination of their qualitative and quantitative characteristics, and the presentation of the results in graphic (cartographic), digital or textual form.

A distinction is made between general geographic (topographic), landscape, and thematic (sectoral) interpretation of images; the latter includes geological, soil, forest, glaciological, agricultural and other kinds.

The main stages of interpretation of space images: binding; detection; recognition; interpretation; extrapolation.

Binding of an image is the determination of the spatial position of its boundaries, that is, the exact geographical identification of the territory depicted in the image. It is carried out using topographic maps whose scale corresponds to the scale of the image. Characteristic contours used for binding are the coastlines of water bodies, the pattern of the hydrographic network, and macrorelief forms (mountains, large depressions).
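Computationally, binding amounts to relating pixel coordinates to map coordinates once a transform has been estimated from such control contours. A minimal sketch, assuming a GDAL-style affine geotransform; the origin and pixel size below are hypothetical:

```python
# Binding an image, sketched: once a transform from pixel to map
# coordinates has been estimated from control contours (coastlines,
# river network), every pixel can be placed geographically.
# A GDAL-style affine geotransform is assumed.

def pixel_to_map(col, row, gt):
    """gt = (x0, dx, rx, y0, ry, dy): map coordinates of the upper-left
    corner, pixel size along x, row rotation, and the same for y."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical scene: 30 m pixels, north-up, UTM-like origin.
gt = (500000.0, 30.0, 0.0, 4650000.0, 0.0, -30.0)
print(pixel_to_map(100, 200, gt))  # prints (503000.0, 4644000.0)
```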

Detection consists in comparing the different patterns of the photographic image. By the features of the image (tone, color, structure of the pattern), the photophysiognomic components of landscapes are separated.

Recognition, or identification of the objects of interpretation, includes analysis of the structure and texture of the photographic image, by which the photophysiognomic components of landscapes, technogenic structures, the nature of land use, and technogenic disturbance of the physiognomic components are identified. At this stage, the direct interpretive signs of the photophysiognomic components are established.

Interpretation consists in classifying the identified objects according to a certain principle, depending on the thematic focus of the work. Thus, in landscape interpretation the physiognomic components of geosystems are interpreted, and the identified technogenic objects serve only for correct orientation; when interpreting economic use, attention is directed to the identified objects of land use: fields, roads, settlements, etc. The interpretation of the decipient (hidden) components of landscapes, or of their technogenic changes, is carried out by the landscape-indication method. A complete and reliable interpretation of images is possible only through the combined use of direct and indirect interpretive signs. The process of interpretation is accompanied by the drawing of contours, i.e., the creation of interpretation schemes from individual images.

Extrapolation includes the identification of similar objects throughout the study area and the preparation of a preliminary map layout. For this, all the data obtained during the interpretation of individual images are used. In the course of extrapolation, similar objects, phenomena and processes are identified in other areas, and analogous landscapes are established.

Interpretation is carried out on the principle of proceeding from the general to the particular. Every image is, first of all, an information model of the area, perceived by the researcher as a whole, and objects are analyzed in their development and in inseparable connection with their environment.

The following types of interpretation are distinguished.

Thematic interpretation is performed according to two logical schemes. The first provides for the recognition of objects first and then their graphical delineation; the second, the graphical delineation of similar areas in the image first and then their recognition. Both schemes end with interpretation proper: a scientific explanation of the results. In computer processing, these schemes are implemented in the technologies of supervised classification and of unsupervised clustering, respectively.
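The two schemes can be sketched in code: supervised classification with known class means (scheme 1, recognition first) versus unsupervised grouping of similar pixels (scheme 2, delineation first). All spectra, class names and cluster seeds below are invented for illustration:

```python
# The two logical schemes of thematic interpretation in miniature.
# Pixel "spectra" are pairs of brightness values in two zones; all
# numbers and class names are hypothetical.

def dist2(p, q):
    """Squared Euclidean distance between two spectra."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def classify(pixel, class_means):
    """Scheme 1 (supervised): recognize first - assign the pixel to the
    nearest known class mean, then delineate the areas of each class."""
    return min(class_means, key=lambda name: dist2(pixel, class_means[name]))

def kmeans(pixels, centers, iters=10):
    """Scheme 2 (unsupervised): delineate first - group similar pixels by
    k-means; the groups are recognized (labeled) only afterwards."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in pixels:
            nearest = min(range(len(centers)), key=lambda j: dist2(p, centers[j]))
            groups[nearest].append(p)
        centers = [
            tuple(sum(band) / len(g) for band in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

class_means = {"water": (10, 5), "vegetation": (30, 90)}
print(classify((12, 8), class_means))  # prints "water"
```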

Objects in images are distinguished by interpretive features, which are divided into direct and indirect. Direct features include shape, size, color, tone and shadow, as well as a complex unifying feature, the pattern of the image. Indirect features are the location of an object, its geographic neighborhood, and traces of its interaction with the environment.

In indirect interpretation, based on the objectively existing connections and interdependence of objects and phenomena, the interpreter identifies in the image not the object itself, which may not be depicted at all, but its indicator. Such indirect interpretation is called indicative, and its geographical basis is indicative landscape science. Its role is especially great when direct signs lose their significance because of the strong generalization of the image. In this case, special indicator tables are compiled, in which, for each type or state of an indicator, the corresponding type of the indicated object is given.

Indicative interpretation makes it possible to pass from spatial characteristics to temporal ones. On the basis of space-time series, one can establish the relative age of a process or the stage of its development. For example, from the size and shape of the giant abandoned meanders preserved in the valleys of many Siberian rivers, the water discharge of the past and the changes that have taken place can be estimated.

Broken ice, suspended sediment and the like often serve as indicators of the movement of water masses in the ocean. The movement of water is also well visualized by the temperature contrasts of the water surface: it was from thermal infrared images that the eddy structure of the World Ocean was revealed.

Interpretation of multizone images. Working with a series of four to six zonal images is more difficult than working with a single image, and their interpretation requires some special methodological approaches. A distinction is made between comparative and sequential interpretation.

Comparative interpretation consists in determining the spectral image of an object from the photographs, comparing it with known spectral reflectances, and identifying the object. First, sets of objects that differ between zones are identified on the zonal images; then, by comparing these sets (subtracting the zonal interpretation schemes), individual objects are isolated within them. Such interpretation is most effective for vegetation.
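A minimal sketch of this logic, assuming a small library of reference spectral reflectance curves; the zone set, class names and all values are illustrative, not measured data:

```python
# Comparative interpretation in miniature: the spectral image of an
# object (its brightness in each zonal image) is matched against
# reference spectral reflectance curves. The zones (green, red,
# near-IR) and all reference values are hypothetical.

REFERENCE = {
    # relative brightness in the (green, red, near-IR) zones
    "water":      (0.08, 0.05, 0.02),
    "soil":       (0.15, 0.20, 0.25),
    "vegetation": (0.12, 0.08, 0.45),  # dark in red, bright in near-IR
}

def identify(spectral_image):
    """Assign the object to the reference curve with the least-squares misfit."""
    def misfit(name):
        return sum((s - r) ** 2 for s, r in zip(spectral_image, REFERENCE[name]))
    return min(REFERENCE, key=misfit)

print(identify((0.11, 0.07, 0.40)))  # prints "vegetation"
```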

Sequential interpretation is based on the fact that different zonal images optimally display different objects. For example, in images of shallow water, owing to the unequal penetration of rays of different spectral ranges into the aquatic environment, objects lying at different depths are visible, and a series of images makes it possible to perform a layer-by-layer analysis and then gradually summarize the results.
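The layer-by-layer idea can be sketched as follows, under the simplifying assumptions that each zone has a fixed penetration depth and that bottom visibility in each zonal image has already been judged; the depths are hypothetical round numbers:

```python
# Layer-by-layer analysis of shallow water, sketched. Each spectral zone
# is assumed to penetrate water to a fixed depth (hypothetical values),
# so the set of zones in which the bottom is still visible brackets the
# depth at a given pixel.

PENETRATION_M = {"red": 2, "green": 10, "blue": 20}

def depth_range(bottom_visible):
    """bottom_visible: dict zone -> True if the bottom shows in that zone.
    Returns (deeper_than, at_most); at_most is None when the bottom is
    hidden in every zone, i.e. deeper than the deepest-seeing zone."""
    visible = [PENETRATION_M[z] for z, v in bottom_visible.items() if v]
    hidden = [PENETRATION_M[z] for z, v in bottom_visible.items() if not v]
    deeper_than = max(hidden) if hidden else 0
    at_most = min(visible) if visible else None
    return (deeper_than, at_most)

# Bottom visible in green and blue but not in red: depth between 2 and 10 m.
print(depth_range({"red": False, "green": True, "blue": True}))  # prints (2, 10)
```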

Interpretation of multi-temporal images makes it possible to study changes in objects and their dynamics, as well as to interpret changeable objects indirectly according to their dynamic features. For example, agricultural crops are identified by the change of their image during the growing season, taking into account the agricultural calendar.
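The basic operation behind such dynamic analysis can be sketched as pixelwise differencing of two co-registered images taken on different dates, with a threshold separating real change from noise; the arrays and threshold are illustrative:

```python
# Change detection between multi-temporal images, sketched as pixelwise
# differencing of two co-registered zonal images. The brightness arrays
# and the threshold are hypothetical.

def changed(img_t1, img_t2, threshold=0.15):
    """Mask of pixels whose brightness changed by more than the threshold."""
    return [
        [abs(b - a) > threshold for a, b in zip(row1, row2)]
        for row1, row2 in zip(img_t1, img_t2)
    ]

spring = [[0.2, 0.2], [0.6, 0.2]]
summer = [[0.2, 0.7], [0.6, 0.3]]
print(changed(spring, summer))  # prints [[False, True], [False, False]]
```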
