Dynamic characteristics of sound. Sound waves. General sound theory and musical terminology

Sponsored material

Introduction

One of the five senses available to man is hearing. We use it to hear the world around us.

Most of us have sounds we remember from childhood. For some it is the voices of relatives and friends, or the creak of wooden floorboards in their grandmother's house, or perhaps the sound of train wheels on a nearby railway. Everyone has their own.

What do you feel when you hear or remember sounds familiar from childhood? Joy, nostalgia, sadness, warmth? Sound is able to convey emotions, mood, encourage action or, conversely, soothe and relax.

In addition, sound is used in many areas of human life: in medicine, in materials processing, in deep-sea research, and many, many others.

At the same time, from the point of view of physics, sound is simply a natural phenomenon: vibrations of an elastic medium. And like any natural phenomenon, sound has characteristics, some of which can be measured, while others can only be heard.

When choosing musical equipment and reading reviews and descriptions, we often come across a large number of these characteristics and terms, which authors use without proper clarification or explanation. And while some of them are clear and obvious to everyone, others mean nothing to an unprepared person. So we decided to explain these seemingly incomprehensible and complex words in simple terms.

My acquaintance with portable sound began quite a long time ago, with a cassette player that my parents gave me for the New Year.

It sometimes chewed the tape, which then had to be untangled with a paper clip and a strong word. It devoured batteries with an appetite that Robin Bobbin Barabek (who ate forty people) would envy, and with them my then very meager schoolboy savings. But all the inconveniences paled next to the main plus - the player gave an indescribable feeling of freedom and joy! So I "caught the bug" of sound you can take with you.

However, I would be lying if I said that since then I have been inseparable from music. There were periods when there was no time for it, when priorities lay elsewhere. Still, all this time I tried to keep abreast of what was happening in the world of portable audio and, so to speak, keep my finger on the pulse.

When smartphones appeared, it turned out that these multimedia combines could not only make calls and process large amounts of data but also - much more importantly for me - store and play a huge amount of music.

I first got hooked on "telephone" sound when I heard one of the musical smartphones that used the most advanced sound-processing components of its time (before that, I confess, I did not take the smartphone seriously as a device for listening to music). I really wanted that phone, but I could not afford it. I began to follow the model range of this company, which had established itself in my eyes as a maker of high-quality sound, but our paths kept diverging. Since then I have owned various pieces of musical equipment, but I have not stopped looking for a truly musical smartphone that could rightfully bear that name.

Characteristics

Among all the characteristics of sound, a professional can stun you right away with a dozen definitions and parameters which, in his opinion, you absolutely must pay attention to - and God forbid you overlook one of them, or there will be trouble...

I will say right away that I am not a supporter of this approach. After all, we usually choose equipment not for an "international audiophile competition" but for ourselves, for the soul.

We are all different, and we all value different things in sound. Some like the sound "bassier"; others, on the contrary, want it clean and transparent; some parameters will matter to one person and completely different ones to another. Are all parameters equally important, and what are they? Let's figure it out.

Have you ever noticed that some headphones play so loudly on your phone that you have to turn them down, while with others you turn the volume all the way up and it is still not enough?

In portable equipment, impedance plays an important role here. Often it is by the value of this parameter that you can tell whether you will have enough volume.

Resistance

It is measured in ohms (Ω).

Georg Simon Ohm was a German physicist who derived and experimentally confirmed the law expressing the relationship between current, voltage, and resistance in a circuit (known as Ohm's law).

This parameter is also called impedance.

The value is almost always indicated on the box or in the instructions for the equipment.

There is an opinion that high-impedance headphones play quietly and low-impedance ones play loudly, and that high-impedance headphones need a more powerful source while a smartphone is enough for low-impedance ones. You can also often hear the expression that not every player can "drive" a given pair of headphones.

Remember: on the same source, low-impedance headphones will sound louder. Although from the standpoint of physics this is not entirely accurate and there are nuances, in practice it is the easiest way to describe the meaning of this parameter.

For portable equipment (portable players, smartphones), headphones with an impedance of 32 ohms or below are most often produced. Bear in mind, however, that what counts as low impedance differs between headphone types. For full-size headphones, an impedance of up to 100 ohms is considered low and above 100 ohms high; for in-ear headphones ("plugs" or earbuds), up to 32 ohms is considered low and above 32 ohms high. So when choosing headphones, pay attention not only to the impedance value itself but also to the type of headphones.

Important: the higher the headphone impedance, the cleaner the sound and the longer the player or smartphone will run in playback mode, since high-impedance headphones draw less current, which in turn means less signal distortion.
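
As a rough illustration of why impedance affects loudness, here is a small sketch based on Ohm's law and the common convention of specifying sensitivity in dB SPL per milliwatt. The voltage and sensitivity figures are assumed, illustrative values, not measurements of any real device:

```python
import math

def spl_from_source(voltage_rms, impedance_ohm, sensitivity_db_per_mw):
    """Estimate the SPL headphones produce at a given RMS source voltage.

    Sensitivity is assumed to be specified in dB SPL per 1 mW of input power.
    """
    power_mw = (voltage_rms ** 2 / impedance_ohm) * 1000.0  # P = V^2 / R, in mW
    return sensitivity_db_per_mw + 10.0 * math.log10(power_mw)

# A smartphone output of ~0.5 V RMS (assumed figure) into two hypothetical
# pairs of headphones with the same sensitivity but different impedance:
low_z = spl_from_source(0.5, 32.0, 100.0)    # 32-ohm pair
high_z = spl_from_source(0.5, 300.0, 100.0)  # 300-ohm pair
print(round(low_z, 1), round(high_z, 1))     # the low-impedance pair plays louder
```

With the same voltage swing, the 32-ohm load draws more power and therefore plays noticeably louder, which is exactly the everyday observation described above.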

Frequency response (AFC)

In discussions of a particular device - headphones, speakers, or a car subwoofer - you can often hear the characterization "it pumps / it doesn't pump". You can find out whether a device will "pump" or is better suited to vocal lovers without even listening to it.

To do this, it is enough to find its frequency response in the description of the device.

The graph lets you understand how the device reproduces different frequencies. The fewer the dips and peaks, the more accurately the equipment conveys the original sound, and thus the closer its output will be to the original.

If there are no pronounced "humps" in the first third of the graph, the headphones are not very "bassy"; if there are, they will "pump". The same applies to the other parts of the frequency response.

Thus, looking at the frequency response, we can understand what kind of timbre / tonal balance the equipment has. On the one hand, you might think that a straight line would be considered an ideal balance, but is it?

Let's look at this in more detail. It so happens that people use mainly the middle frequencies (MF) for communication and, accordingly, distinguish this frequency band best. If you made a device with an "ideal" balance in the form of a straight line, I am afraid you would not much enjoy listening to music on it, since the high and low frequencies would most likely not sound as good as the mids. The way out is to look for your own balance, taking into account the physiological characteristics of hearing and the purpose of the equipment: there is one balance for voice, another for classical music, and a third for dance music.

The graph above shows the balance of these headphones: the low and high frequencies are more pronounced than the mids, which is typical of most products. However, a "hump" at the low frequencies does not by itself mean those low frequencies are good; they may be plentiful yet poor in quality - mumbling and droning.

The final result is influenced by many factors, from how well the geometry of the enclosure was calculated to what materials the structural elements are made of, and often you can only find this out by listening to the headphones.

To get a rough idea of how good the sound will be before listening, after the frequency response you should pay attention to a parameter called total harmonic distortion.

Harmonic Distortion


In fact, this is the main parameter determining sound quality. The only question is what quality means to you. For example, the well-known Beats by Dr. Dre have a total harmonic distortion of almost 1.5% at 1 kHz (anything above 1.0% is considered fairly mediocre). Yet, oddly enough, these headphones are popular with consumers.

It is desirable to know this parameter for each specific frequency group, because acceptable values differ across frequencies. For example, 10% can be considered acceptable for low frequencies, but no more than 1% for high frequencies.

Not all manufacturers like to state this parameter for their products because, unlike loudness, it is quite difficult to keep low. So if the device you are considering comes with such a graph and the value on it does not exceed 0.5%, take a closer look at that device - this is a very good figure.
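
For the curious, total harmonic distortion can be sketched as the RMS sum of the harmonic amplitudes relative to the fundamental. The measurement values below are made-up illustrative numbers, not figures for any real headphones:

```python
import math

def thd_percent(fundamental, harmonics):
    """Total harmonic distortion: the RMS sum of harmonic amplitudes
    relative to the fundamental amplitude, as a percentage."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Hypothetical measurement at 1 kHz: fundamental amplitude 1.0, with the
# 2nd, 3rd, and 4th harmonics at 0.008, 0.004, and 0.002 (arbitrary units).
print(round(thd_percent(1.0, [0.008, 0.004, 0.002]), 3))
```

A result under 0.5% by this formula would correspond to the "very good indicator" mentioned above.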

We already know how to choose the headphones/speakers that will play louder on your device. But how do you know how loud they will play?

There is a parameter for this, one you have most likely heard of more than once. Nightclubs love to quote it in their promotional materials to show how loud the party will be. This parameter is measured in decibels.

Sensitivity (loudness, noise level)

The decibel (dB), a logarithmic unit of sound level, is named after Alexander Graham Bell.

Alexander Graham Bell was a scientist, inventor, and businessman of Scottish origin, one of the founders of telephony and the founder of Bell Labs (formerly the Bell Telephone Company), which shaped the entire subsequent development of the telecommunications industry in the United States.

This parameter is inextricably linked with impedance. A level of 95-100 dB is considered sufficient (in fact, that is quite a lot).

For example, the loudness record was set by Kiss on July 15, 2009 at a concert in Ottawa, where the volume reached 136 dB. By this measure Kiss outdid a number of famous rivals, including The Who, Metallica, and Manowar.

Meanwhile, the unofficial record belongs to the American band Swans. According to unconfirmed reports, at several of their concerts the sound reached 140 dB.

If you want to match or beat this record, remember that loud sound can be treated as a breach of public order - in Moscow, for example, the standards allow an equivalent sound level of 30 dBA at night and 40 dBA during the day, with maximums of 45 dBA at night and 55 dBA during the day.

And while volume is more or less clear, the next parameter is not as easy to understand and track as the previous ones. It is dynamic range.

Dynamic Range

It is essentially the difference between the loudest and the quietest sound the equipment can reproduce without clipping (overload).

Anyone who has been to a modern cinema has experienced first-hand what a wide dynamic range is. It is the parameter thanks to which you hear, for example, a gunshot in all its glory as well as the rustle of the boots of the sniper who fired it, creeping along the roof.

The greater your equipment's dynamic range, the more sounds it can convey without loss.
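
In digital audio, the theoretical dynamic range follows directly from the bit depth of the recording. A short sketch of that relationship:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM audio: the ratio between
    the largest and smallest representable amplitude, in decibels."""
    return 20.0 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 16-bit (CD audio): about 96 dB
print(round(dynamic_range_db(24), 1))  # 24-bit (studio recordings): about 144 dB
```

Each extra bit adds roughly 6 dB of range, which is why higher bit depths are used where quiet details must survive next to loud peaks.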

At the same time, it turns out that conveying the widest possible dynamic range is not enough; you must also manage it so that each frequency is not merely audible but audible with high quality. Responsible for this is a parameter that almost anyone can assess by listening to a high-quality recording on the equipment in question. It is detail.

Detail

This is the equipment's ability to separate sound into its frequency components - low, mid, and high (LF, MF, HF).


This parameter determines how clearly individual instruments can be heard, how detailed the music will be, and whether it turns into a mere jumble of sounds.

However, even with the best detail, different equipment can produce very different listening experiences.

It depends on the equipment's ability to localize sound sources.

In reviews of music equipment, this parameter is often divided into two components: stereo panorama and depth.

Stereo panorama

In reviews, this parameter is usually described as wide or narrow. Let's see what it is.

From the name it is clear that we are talking about the width of something, but what?

Imagine that you are sitting (or standing) at a concert of your favorite band or artist. In front of you on the stage, the instruments are arranged in a certain order: some closer to the center, others further away.


Pictured it? Now let them start playing.

Now close your eyes and try to tell where each instrument is located. I think you can do it easily.

And what if the instruments were placed in front of you in a single line, one behind another?

Let's take the situation to the point of absurdity and push the instruments right up against each other. And... let's seat the trumpeter on the piano.

Do you think you would like that sound? Could you tell which instrument is which?

The last two variants are what you most often hear from low-quality equipment whose manufacturer does not care what sound the product makes (and, as practice shows, price is no indicator here at all).

High-quality headphones, speakers, music systems should be able to build the correct stereo panorama in your head. Thanks to this, when listening to music through good equipment, you can hear where each instrument is located.

However, even if the equipment can create a magnificent stereo panorama, the sound will still feel unnatural and flat, because in real life we perceive sound not only in the horizontal plane. That is why an equally important parameter is the depth of sound.

Sound depth

Let's return to our imaginary concert. We'll move the pianist and the violinist a little deeper into the stage and bring the guitarist and the saxophonist slightly forward. The vocalist takes his rightful place in front of all the instruments.


Have you heard this on your musical equipment?

Congratulations - your device can create the effect of spatial sound by synthesizing a panorama of imaginary sound sources. Put more simply, your equipment localizes sound well.

If we are not talking about headphones, this problem is solved quite simply: several drivers placed around the listener separate the sound sources. If we are talking about headphones and you can hear this in them, congratulations a second time - by this parameter you have very good headphones.

Your equipment has a wide dynamic range, is well balanced, and localizes sound well - but is it ready for sharp transitions and for rapid rises and falls of impulses?

How good is its attack?

Attack

From the name it should, in theory, be clear that this is something swift and inescapable, like a salvo from a Katyusha battery.

But seriously, here is what Wikipedia says about it: sound attack is the initial impulse of sound production necessary for forming a sound when playing a musical instrument or singing; it also covers certain nuances of various methods of sound production, performance strokes, articulation, and phrasing.

Translated into plain language, it is the rate at which the amplitude of a sound rises to a given value. And to put it even more simply: if your equipment has a poor attack, then lively tracks with guitars, live drums, and rapid changes of sound will come out muffled and dull - which means goodbye to good hard rock and the like...

Among other things, in articles you can often find such a term as sibilants.

Sibilants

Literally, hissing or whistling sounds: consonants produced when the air stream passes rapidly between the teeth.

Remember this friend from the Disney cartoon about Robin Hood?

There are a lot of sibilants in his speech. And if your equipment also whistles and hisses, then alas, this is not a very good sound.

Remark: by the way, Robin Hood himself from this cartoon is suspiciously similar to the Fox from the recently released Disney cartoon Zootopia. Disney, you're repeating yourself :)

Sand

Another subjective parameter that cannot be measured - only heard.


In essence it is close to sibilance: at high volume, under overload, the high frequencies begin to fall apart and an effect of pouring sand appears, sometimes a high-frequency rattle. The sound becomes rough and at the same time loose. The earlier this happens, the worse - and vice versa.

Try this at home: from a height of a few centimeters, slowly pour a handful of granulated sugar onto the metal lid of a saucepan. Hear it? That is exactly it.

Look for a sound that doesn't contain sand.

Frequency range

The last sound parameter proper that I would like to consider is the frequency range.

It is measured in hertz (Hz).

Heinrich Rudolf Hertz was a German physicist whose main achievement was the experimental confirmation of James Clerk Maxwell's electromagnetic theory of light: Hertz proved the existence of electromagnetic waves. Since 1933, the unit of frequency in the international SI system of units has borne his name.

This is the parameter you will find, with 99% probability, in the description of almost any piece of audio equipment. Why did I leave it for last?

We should start with the fact that humans hear sounds within a certain frequency range, namely from 20 Hz to 20,000 Hz. Anything above that is ultrasound; anything below is infrasound. They are inaccessible to human hearing but audible to many animals. We know this from school physics and biology.


In fact, for most people the real audible range is much more modest. Moreover, in women the audible range is shifted upward relative to men's, so men tend to distinguish low frequencies better and women high frequencies.

Why, then, do manufacturers indicate on their products a range that goes beyond our perception? Maybe it's just marketing?

Yes and no. A person not only hears sound but also feels it.

Have you ever stood next to a large speaker or a playing subwoofer? Remember the sensation: the sound is not only heard, it is felt by the whole body - it has pressure and power. So, the wider the range stated for your equipment, the better.


However, you should not attach too much importance to this figure - you will rarely see equipment whose frequency range is narrower than the limits of human perception.

Additional characteristics

All of the characteristics above relate directly to the quality of the reproduced sound. However, the final result - and hence the pleasure of watching or listening - is also affected by the quality of the source file and by the sound source you use.

Formats

This is common knowledge, and most people are already aware of it, but let's recap just in case.

In total, there are three main groups of audio file formats:

  • uncompressed audio formats such as WAV, AIFF
  • lossless audio formats (APE, FLAC)
  • lossy audio formats (MP3, Ogg)

We recommend reading more about this by referring to Wikipedia.

Let us note that it makes sense to use the APE and FLAC formats if you have professional or semi-professional equipment. In other cases, the capabilities of MP3 compressed from a high-quality source at a bitrate of 256 kbps or higher (the higher the bitrate, the smaller the losses from audio compression) are usually enough. However, this is largely a matter of taste, hearing, and individual preference.
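
To get a feel for what these bitrates mean in practice, here is a small sketch comparing the approximate file size of a track stored as uncompressed CD-quality audio versus MP3 at the 256 kbps mentioned above. The four-minute duration is an assumed example:

```python
def size_mb(bitrate_kbps, seconds):
    """Approximate file size for a given audio bitrate and duration."""
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000  # bits -> bytes -> MB

track = 4 * 60  # a four-minute track, in seconds
# Uncompressed CD-quality audio: 44,100 samples/s * 16 bits * 2 channels
wav = size_mb(44_100 * 16 * 2 / 1000, track)  # ~1411 kbps
mp3 = size_mb(256, track)                     # MP3 at 256 kbps
print(round(wav, 1), round(mp3, 1))           # roughly 42 MB vs under 8 MB
```

The roughly five-to-one difference is the whole point of lossy compression: the question is only how much of it your ears (and equipment) will notice.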

Source

Equally important is the quality of the sound source.

Since we were originally talking about music on smartphones, let's consider this particular option.

Not so long ago, the sound was analog. Remember reels, cassettes? This is analog audio.


In your headphones you hear analog audio that has gone through two stages of conversion: first it was converted from analog to digital, and then back to analog before being fed to the earphone or speaker. The quality of that conversion ultimately determines the result - the sound quality.

In a smartphone, the DAC is responsible for this process - a digital-to-analog converter.

The better the DAC, the better the sound you will hear. And vice versa. If the DAC in the device is mediocre, then no matter what your speakers or headphones are, you can forget about the high sound quality.
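
One reason conversion quality matters is quantization: a converter can only represent a signal with a finite number of levels, and the fewer the levels, the larger the rounding error it adds. A minimal sketch of this idea (the sample value and bit depths are arbitrary illustrations, not a model of any particular DAC):

```python
import math

def quantize(sample, bits):
    """Round a sample in [-1, 1] to the nearest level of an N-bit converter."""
    levels = 2 ** (bits - 1)
    return round(sample * levels) / levels

# The coarser the converter, the larger the rounding error added to the signal.
x = math.sin(1.0)  # an arbitrary "analog" sample value
for bits in (8, 16):
    error = abs(x - quantize(x, bits))
    print(f"{bits}-bit error: {error:.6f}")
```

Real converters differ in far more than bit depth (clocking, noise, analog output stages), but the sketch shows why "the better the DAC, the better the sound" is not just marketing.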

All smartphones can be divided into two main categories:

  1. Smartphones with a dedicated DAC
  2. Smartphones with built-in DAC

At the moment, many manufacturers produce DACs for smartphones. You can decide what to choose by searching and reading the descriptions of specific devices. Keep in mind, however, that both among smartphones with a built-in DAC and among those with a dedicated DAC there are samples with very good sound and with not-so-good sound, because much also depends on the optimization of the operating system, the firmware version, and the application you use to listen to music. In addition, there are software audio mods at the kernel level that improve the final sound quality. And when a company's engineers and programmers do one thing and do it competently, the result is noteworthy.

However, it is important to know that in a head-to-head comparison of two devices, one with a good built-in DAC and the other with a good dedicated DAC, the latter will always win.

Conclusion

Sound is an inexhaustible topic.

I hope that, thanks to this material, much in music reviews and texts has become clearer and simpler for you, and previously unfamiliar terminology has gained meaning - after all, everything is easy once you know it.

Both parts of our educational series about sound were written with the support of Meizu. Instead of the usual praise for devices, we decided to make articles that are useful and interesting for you and to draw attention to the importance of the playback source in obtaining high-quality sound.

Why does Meizu need this? Pre-orders for the new musical flagship, the Meizu Pro 6 Plus, have recently opened, so it is important to the company that the average user understands the nuances of high-quality sound and the key role of the playback source. By the way, if you place a paid pre-order before the end of the year, you will receive a Meizu HD50 headset for your smartphone as a gift.

We have also prepared a music quiz with detailed comments on each question; we recommend trying your hand at it:

February 18, 2016

The world of home entertainment is quite varied and can include watching a movie on a good home theater system, fun and engrossing gameplay, or listening to music. As a rule, everyone finds something of their own here, or combines everything at once. But whatever a person's goals in organizing their leisure, all these pursuits are firmly connected by one simple and understandable word: "sound". Indeed, in all these cases the soundtrack leads us by the hand. The question, however, is not so simple and trivial, especially when the aim is to achieve high-quality sound in a room or any other conditions. This does not always require buying expensive hi-fi or hi-end components (although they will be very useful); a good knowledge of physical theory is often sufficient, and it can eliminate most of the problems facing anyone who sets out to get high-quality sound reproduction.

Next, the theory of sound and acoustics will be considered from the point of view of physics. I will try to make it as accessible as possible to anyone who may be far from knowing physical laws and formulas but who nonetheless passionately dreams of building a perfect acoustic system. I do not presume to claim that you must know these theories thoroughly to achieve good results at home (or in a car, for example); however, understanding the basics will let you avoid many silly and absurd mistakes and get the maximum sound quality from a system of any level.

General sound theory and musical terminology

What is sound? It is the sensation perceived by the hearing organ, the "ear" (the phenomenon itself exists without the ear's participation in the process, but this is the easier way to understand it), which arises when the eardrum is excited by a sound wave. The ear in this case acts as a "receiver" of sound waves of various frequencies.
A sound wave is, in fact, a successive series of compressions and rarefactions of the medium (most often air, under normal conditions) at various frequencies. Sound waves are oscillatory in nature, caused and produced by the vibration of some body. The emergence and propagation of a classical sound wave is possible in three elastic media: gaseous, liquid, and solid. When a sound wave arises in one of these media, certain changes inevitably occur in the medium itself - for example, changes in air density or pressure, movement of air particles, and so on.

Since a sound wave is oscillatory in nature, it has such a characteristic as frequency. Frequency is measured in hertz (after the German physicist Heinrich Rudolf Hertz) and denotes the number of oscillations per second. That is, a frequency of 20 Hz means 20 oscillations in one second. The subjective notion of pitch also depends on frequency: the more oscillations per second, the "higher" the sound seems. The sound wave also has another important characteristic: the wavelength. The wavelength is the distance a wave of a given frequency travels in one period of oscillation. For example, the wavelength of the lowest sound audible to humans, at 20 Hz, is 16.5 meters, while the wavelength of the highest, at 20,000 Hz, is 1.7 centimeters.
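
The relationship between frequency and wavelength in the paragraph above can be sketched in a couple of lines. The figures match the text if we take the speed of sound in air to be roughly 330 m/s:

```python
def wavelength_m(frequency_hz, speed_of_sound=330.0):
    """Wavelength = propagation speed / frequency. The text's figures
    assume a speed of sound of roughly 330 m/s in air."""
    return speed_of_sound / frequency_hz

print(wavelength_m(20))      # lowest audible tone: 16.5 m
print(wavelength_m(20_000))  # highest audible tone: 0.0165 m, i.e. about 1.7 cm
```

Note that the wavelength depends on the medium: in water or steel, where sound travels faster, the same frequency has a longer wavelength.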

The human ear is designed to perceive waves only within a limited range, approximately 20 Hz - 20,000 Hz (depending on the individual, some people can hear a little more, some less). This does not mean that sounds below or above these frequencies do not exist; they are simply not perceived by the human ear, lying outside the audible range. Sound above the audible range is called ultrasound; sound below it, infrasound. Some animals can perceive ultra- and infrasound, and some even use these ranges for orientation in space (bats, dolphins). If sound passes through a medium that is not in direct contact with the human hearing organ, it may not be heard at all or may be greatly attenuated.

In musical terminology there are such important notions as the octave, the tone, and the overtone. An octave is an interval in which the ratio of frequencies between sounds is 1 to 2. An octave is usually very clearly audible, and sounds within this interval can be very similar to each other. An octave can also be described as a sound that makes twice as many vibrations as another in the same time period. For example, a frequency of 800 Hz is nothing other than the higher octave of 400 Hz, and 400 Hz is in turn the next octave up from 200 Hz. An octave, in turn, consists of tones and overtones. Oscillations at a single frequency in a harmonic sound wave are perceived by the human ear as a musical tone. High-frequency vibrations are heard as high-pitched sounds, low-frequency vibrations as low-pitched ones. The human ear can clearly distinguish sounds that differ by one tone (in the range up to 4000 Hz). Despite this, only a very small number of tones are used in music. This is explained by considerations of harmonic consonance; everything is based on the principle of octaves.

Let us consider the theory of musical tones using the example of a string stretched in a certain way. Depending on the tension force, such a string will be "tuned" to one specific frequency. When the string is struck with a specific force, setting it vibrating, one specific tone will be steadily heard: the desired tuning frequency. This sound is called the fundamental tone. In music, the frequency of the note "la" (A) of the first octave, 440 Hz, is officially adopted as the reference tone. Most musical instruments, however, never reproduce a pure fundamental tone alone; it is inevitably accompanied by overtones. Here it is appropriate to recall an important definition of musical acoustics: the concept of timbre. Timbre is the feature of musical sounds that gives musical instruments and voices their unique, recognizable specificity of sound, even when comparing sounds of the same pitch and loudness. The timbre of each musical instrument depends on the distribution of sound energy among the overtones at the moment the sound appears.

Overtones form a specific coloring of the fundamental tone by which we can easily identify and recognize a particular instrument and clearly distinguish its sound from that of another. Overtones are of two types: harmonic and inharmonic. Harmonic overtones are, by definition, integer multiples of the fundamental frequency. If, on the contrary, the overtones are not multiples of it and deviate noticeably from those values, they are called inharmonic. In music, non-multiple overtones are practically never used, so the term is reduced to the notion of "overtone" in the sense of a harmonic one. In some instruments, such as the piano, the fundamental tone does not even have time to form: over a short period the sound energy of the overtones rises and then falls just as rapidly. Many instruments create a so-called "transition tone" effect, in which the energy of certain overtones peaks at a certain moment, usually at the very beginning, and then changes abruptly, passing to other overtones. The frequency range of each instrument can be considered separately and is usually limited by the frequencies of the fundamental tones that the particular instrument can reproduce.
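
Since harmonic overtones are integer multiples of the fundamental, the harmonic series of the reference tone from the text can be listed directly:

```python
fundamental = 440.0  # the note "la" (A) of the first octave, as in the text

# Harmonic overtones are integer multiples of the fundamental frequency;
# the second harmonic (880 Hz) is also exactly one octave above the fundamental.
harmonics = [fundamental * n for n in range(1, 6)]
print(harmonics)  # the fundamental plus its first four harmonic multiples
```

This also illustrates the octave rule from the previous paragraph: doubling the frequency (440 Hz to 880 Hz) gives the same note one octave higher.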

In sound theory there is also the concept of noise. Noise is any sound created by a combination of sources that are inconsistent with one another. Everyone is familiar with the noise of tree leaves swaying in the wind, and so on.

What determines the loudness of a sound? Obviously, it depends directly on the amount of energy carried by the sound wave. To quantify loudness, there is the concept of sound intensity. Sound intensity is defined as the flow of energy passing through some area of space (for example, a cm²) per unit of time (for example, per second). In normal conversation, the intensity is about 10⁻⁹ to 10⁻¹⁰ W/cm². The human ear is capable of perceiving sounds over a fairly wide range, but its sensitivity is not uniform across the sound spectrum. The best-perceived range is 1000 Hz to 4000 Hz, which covers most of human speech.

Since sounds vary so greatly in intensity, it is more convenient to treat intensity as a logarithmic quantity and measure it in decibels (named after the Scottish-born scientist Alexander Graham Bell). The lower threshold of hearing sensitivity of the human ear is 0 dB; the upper, 120 dB, is also called the "pain threshold". The pain threshold, too, is not the same at all frequencies but depends on the specific frequency. Low-frequency sounds must have much greater intensity than high-frequency ones to cause pain. For example, at a low frequency of 31.5 Hz pain occurs at a sound intensity level of 135 dB, whereas at 2000 Hz the sensation of pain appears already at 112 dB. There is also the concept of sound pressure, which expands the usual explanation of how a sound wave propagates in air. Sound pressure is the variable excess pressure that arises in an elastic medium as a sound wave passes through it.
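The logarithmic decibel scale is easy to see in numbers. A sketch, assuming the conventional 0 dB reference intensity of 10⁻¹² W/m² (the reference value is my assumption; it is not stated in the text):

```python
import math

REF_INTENSITY = 1e-12  # W/m^2: assumed 0 dB reference (threshold of hearing)

def intensity_level_db(intensity_w_m2):
    """Sound intensity level in decibels relative to the hearing threshold."""
    return 10 * math.log10(intensity_w_m2 / REF_INTENSITY)

print(intensity_level_db(1e-12))  # 0.0  -> the lower threshold of hearing
print(intensity_level_db(1.0))    # 120.0 -> the "pain threshold"
```

Each step of 10 dB corresponds to a tenfold change in intensity, which is why the 0-120 dB range spans twelve orders of magnitude.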

Wave nature of sound

To better understand the system of sound wave generation, imagine a classic speaker located in a tube filled with air. If the speaker makes a sharp forward movement, then the air in the immediate vicinity of the diffuser is compressed for a moment. After that, the air will expand, thereby pushing the compressed air region along the pipe.
It is this wave movement that subsequently becomes sound when it reaches the auditory organ and "excites" the eardrum. When a sound wave travels through a gas, regions of excess pressure and density are created, and the particles of the medium oscillate about their equilibrium positions. The important thing to remember about sound waves is that the substance itself does not move along with the wave; only a temporary perturbation of the air masses occurs.

If we imagine a piston suspended in free space on a spring and making repeated movements "forward and backward", such oscillations are called harmonic or sinusoidal (if we represent the wave as a graph, we get a pure sine wave with repeated rises and falls). If we imagine a speaker in a pipe (as in the example above) performing harmonic oscillations, then at the moment the speaker moves "forward" we get the already familiar effect of air compression, and when the speaker moves "back", the reverse effect of rarefaction. A wave of alternating compressions and rarefactions will propagate through the pipe. The distance along the pipe between adjacent maxima or minima (that is, between points of identical phase) is called the wavelength. If the particles oscillate parallel to the direction of wave propagation, the wave is called longitudinal; if they oscillate perpendicular to it, the wave is called transverse. Sound waves in gases and liquids are usually longitudinal, while in solids waves of both types can occur. Transverse waves in solids arise due to resistance to change of shape. The main difference between the two types is that a transverse wave has the property of polarization (the oscillations occur in a certain plane), while a longitudinal wave does not.
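Wavelength, frequency, and propagation speed are tied together by the relation λ = v/f. A small sketch, taking the 343 m/s speed of sound in air quoted later in the text:

```python
def wavelength_m(speed_m_s, frequency_hz):
    """Wavelength = propagation speed / frequency."""
    return speed_m_s / frequency_hz

V_AIR = 343.0  # m/s, speed of sound in air at 20 C
for f_hz in (20, 1000, 20_000):
    print(f"{f_hz} Hz -> {wavelength_m(V_AIR, f_hz):.3f} m")
```

At the edges of the audible range the wavelengths differ a thousandfold: roughly 17 m at 20 Hz and 17 mm at 20 kHz, which is why low and high frequencies interact so differently with rooms and obstacles.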

Sound speed

The speed of sound directly depends on the characteristics of the medium in which it propagates. It is determined by two properties of the medium: its elasticity and its density. The speed of sound in solids accordingly depends directly on the type of material and its properties. The speed in gaseous media depends on only one type of deformation of the medium: compression-rarefaction. The change in pressure in a sound wave occurs without heat exchange with the surrounding particles and is called adiabatic.
The speed of sound in a gas depends mainly on temperature - it increases with increasing temperature and decreases with decreasing. Also, the speed of sound in a gaseous medium depends on the size and mass of the gas molecules themselves - the smaller the mass and size of the particles, the greater the "conductivity" of the wave and the greater the speed, respectively.
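The temperature dependence in air can be sketched with the common approximation v ≈ 331.3·√(1 + t/273.15) m/s (the formula is a standard approximation, not taken from the text):

```python
import math

def speed_of_sound_air(t_celsius):
    """Approximate speed of sound in dry air (m/s) at temperature t."""
    return 331.3 * math.sqrt(1 + t_celsius / 273.15)

print(round(speed_of_sound_air(0), 1))   # ~331.3 m/s at freezing
print(round(speed_of_sound_air(20), 1))  # ~343 m/s, the commonly quoted figure
```

The square-root dependence means the speed grows by roughly 0.6 m/s per degree Celsius near room temperature.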

In liquid and solid media, the principle of propagation and the speed of sound are similar to how a wave propagates in air: by compression-rarefaction. But in these media, besides the same dependence on temperature, the density of the medium and its composition/structure are quite important. Other things being equal, the lower the density of the substance, the higher the speed of sound, and vice versa. The dependence on the composition of the medium is more complicated and is determined in each specific case, taking into account the arrangement and interaction of the molecules/atoms.

Speed of sound in air at 20 °C: 343 m/s
Speed of sound in distilled water at 20 °C: 1481 m/s
Speed of sound in steel at 20 °C: 5000 m/s

Standing waves and interference

When a speaker creates sound waves in a confined space, the effect of wave reflection from the boundaries inevitably arises. As a result, the interference effect most often occurs: the superposition of two or more sound waves on one another. Special cases of interference are the formation of (1) beats and (2) standing waves. Beats arise when waves of close frequencies and amplitudes are added together. When two waves similar in frequency are superimposed, at some moments their amplitude peaks coincide "in phase", and at other moments their troughs coincide as well, while in between the waves fall into "antiphase". This is what characterizes sound beats. It is important to remember that, unlike standing waves, these phase coincidences do not hold constantly but recur at certain time intervals. By ear, such beats are distinguished quite clearly and are heard as a periodic rise and fall of volume. The mechanism of this effect is extremely simple: when the peaks coincide the volume increases, and when the waves fall out of step the volume decreases.

Standing waves arise from the superposition of two waves of the same amplitude, phase, and frequency when, as the waves "meet", one moves forward and the other in the opposite direction. In the region of space where the standing wave forms, a superposition pattern arises with alternating amplitude maxima (so-called antinodes) and minima (so-called nodes). When this phenomenon occurs, the frequency, phase, and attenuation coefficient of the wave at the place of reflection are extremely important. Unlike traveling waves, a standing wave transfers no energy, because the forward and backward waves that form it carry equal amounts of energy in the forward and opposite directions. For a visual understanding of standing waves, let's take an example from home acoustics. Say we have floor-standing speakers in a limited space (a room). Having them play a song with a lot of bass, let's try changing the listener's position in the room. A listener who gets into a minimum (subtraction) zone of the standing wave will feel that the bass has become very weak, while a listener who gets into a maximum (addition) zone gets the opposite effect: a significant boost in the bass region. The effect is observed in all octaves of the base frequency. For example, if the base frequency is 440 Hz, the "addition" or "subtraction" phenomenon will also be observed at 880 Hz, 1760 Hz, 3520 Hz, and so on.
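The frequencies at which standing waves form between two parallel walls follow f_n = n·v/(2L). A sketch under the simplifying assumption of rigid, perfectly reflective walls:

```python
def axial_mode_frequencies(wall_distance_m, count=4, speed_m_s=343.0):
    """Frequencies at which standing waves form between two parallel walls."""
    return [n * speed_m_s / (2 * wall_distance_m) for n in range(1, count + 1)]

# A room with walls 4 m apart: the lowest standing-wave frequencies
print([round(f, 1) for f in axial_mode_frequencies(4.0)])
```

These are the "room mode" frequencies at which the bass boost or cancellation described above is strongest, depending on where the listener stands relative to the nodes and antinodes.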

Resonance phenomenon

Most solid bodies have their own resonance frequency. This effect is easiest to understand with the example of an ordinary pipe open at one end only. Imagine that a speaker is attached to the other end of the pipe, playing one constant frequency, which can later be changed. The pipe has its own resonance frequency: in simple terms, the frequency at which it "resonates" or makes its own sound. If the frequency of the speaker (as a result of adjustment) coincides with the resonance frequency of the pipe, the volume increases several times over. This is because the loudspeaker excites vibrations of the air column in the pipe with significant amplitude; once the "resonance frequency" is found, the addition effect occurs. The resulting phenomenon can be described as follows: in this example the pipe "helps" the speaker by resonating at a specific frequency; their efforts add up and "pour out" into an audibly loud effect. In musical instruments this phenomenon is easy to trace, since the design of most of them contains elements called resonators. It is not difficult to guess their purpose: amplifying a certain frequency or musical tone. For example: the guitar body with its sound hole, matched to the body volume; the design of the tube in a flute (and of all pipes in general); the cylindrical shape of the drum body, which is itself a resonator of a certain frequency.
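For a pipe open at one end, the resonances sit at odd multiples of v/(4L). A sketch of this idealized model (it ignores the end correction a real pipe would need):

```python
def closed_pipe_resonances(length_m, count=3, speed_m_s=343.0):
    """Resonant frequencies of a pipe closed at one end: odd harmonics of v/(4L)."""
    fundamental = speed_m_s / (4 * length_m)
    return [(2 * n - 1) * fundamental for n in range(1, count + 1)]

# A 0.5 m pipe: the fundamental and the first two overtones
print(closed_pipe_resonances(0.5))  # [171.5, 514.5, 857.5]
```

Driving the pipe at any of these frequencies produces the volume-addition effect described above; only odd harmonics appear because the closed end must be a node and the open end an antinode.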

Frequency spectrum of sound and frequency response

Since in practice waves of a single frequency virtually never occur, it becomes necessary to decompose the entire audible sound spectrum into overtones or harmonics. For this purpose there are graphs that display the dependence of the relative energy of sound vibrations on frequency. Such a graph is called a sound frequency spectrum graph. Frequency spectra come in two types: discrete and continuous. A discrete spectrum plot displays individual frequencies separated by blank spaces. In a continuous spectrum, all sound frequencies are present at once.
In the case of music or acoustics, the most commonly used graph is the amplitude-frequency characteristic (abbreviated "AFC", better known as the frequency response). This graph shows the dependence of the amplitude of sound vibrations on frequency over the entire frequency spectrum (20 Hz - 20 kHz). Looking at such a graph, it is easy to understand, for example, the strengths or weaknesses of a particular speaker or of a speaker system as a whole: the strongest areas of energy output, frequency dips and rises, attenuation, and the steepness of any roll-off.
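A discrete spectrum can be illustrated with a naive Fourier transform of a two-tone signal. The sketch below uses only the standard library; because exactly one second of signal is analyzed, bin k corresponds directly to k Hz (the signal itself, a 5 Hz tone plus a quieter 12 Hz tone, is my example):

```python
import cmath
import math

FS = 64  # sampling rate, Hz; one second of signal -> bin k is k Hz
signal = [math.sin(2 * math.pi * 5 * n / FS) + 0.5 * math.sin(2 * math.pi * 12 * n / FS)
          for n in range(FS)]

def dft_magnitudes(samples):
    """Magnitudes of a naive discrete Fourier transform, up to the Nyquist bin."""
    n_total = len(samples)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * i / n_total)
                    for i, s in enumerate(samples)))
            for k in range(n_total // 2)]

mags = dft_magnitudes(signal)
strongest = sorted(range(len(mags)), key=mags.__getitem__, reverse=True)[:2]
print(sorted(strongest))  # [5, 12]: the two discrete spectral lines, in Hz
```

The output shows exactly the discrete picture described above: two isolated lines separated by empty bins, with the line heights reflecting the relative energy of each component.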

Propagation of sound waves, phase and antiphase

Sound waves propagate in all directions from the source. The simplest example for understanding this phenomenon is a pebble thrown into water: from the place where the stone fell, waves diverge across the surface in all directions. Now imagine a speaker in a certain volume, say a closed box, connected to an amplifier and playing some musical signal. It is easy to notice (especially with a powerful low-frequency signal, such as a bass drum) that the speaker makes a rapid movement "forward" and then an equally rapid movement "back". When the speaker moves forward, it emits the sound wave that we hear afterwards. But what happens when the speaker moves backwards? Paradoxically, the same thing: the speaker makes the same sound, only it propagates, in our example, entirely within the volume of the box, without going beyond it (the box is closed). In this example one can observe quite a few interesting physical phenomena, the most significant of which is the concept of phase.

The sound wave that the speaker radiates toward the listener is "in phase". The reverse wave, which goes into the volume of the box, is correspondingly in antiphase. It remains only to understand what these concepts mean. Signal phase is the sound pressure level at the current moment of time at some point in space. Phase is easiest to understand through the example of musical material played back by an ordinary stereo pair of floor-standing home speakers. Imagine that two such floor-standing speakers are installed in a certain room and are playing. Both speakers reproduce a synchronous signal of variable sound pressure, and the sound pressure of one speaker adds to the sound pressure of the other. This effect occurs because the left and right speakers reproduce the signal synchronously; in other words, the peaks and troughs of the waves emitted by the left and right speakers coincide.

Now let's imagine that the sound pressures still change in the same way (they have not changed), but are now opposite to each other. This can happen if one of the two speakers is connected in reverse polarity (the "+" cable from the amplifier to the "-" terminal of the speaker system, and the "-" cable from the amplifier to the "+" terminal). In this case, the oppositely directed signal causes a pressure difference that can be expressed in numbers as follows: the left speaker creates a pressure of "1 Pa" and the right speaker a pressure of "minus 1 Pa". As a result, the total sound volume at the listener's position will be zero. This phenomenon is called antiphase. Looking at the example in more detail: two speakers playing "in phase" create identical regions of air compression and rarefaction, which in effect help each other. In the case of idealized antiphase, the region of air compression created by one speaker is accompanied by a region of air rarefaction created by the second speaker. This looks roughly like mutual, synchronous damping of the waves. True, in practice the volume does not drop to zero, and we hear a heavily distorted and attenuated sound.
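The in-phase and antiphase addition described above is easy to model numerically. A sketch with two identical sampled sine waves, one optionally inverted, as if wired in reverse polarity:

```python
import math

def sine_samples(freq_hz, n_samples, fs_hz, inverted=False):
    """One sampled sine wave; `inverted=True` models reversed speaker polarity."""
    sign = -1.0 if inverted else 1.0
    return [sign * math.sin(2 * math.pi * freq_hz * n / fs_hz) for n in range(n_samples)]

left = sine_samples(50, 200, 1000)
right = sine_samples(50, 200, 1000)
right_reversed = sine_samples(50, 200, 1000, inverted=True)

in_phase = [a + b for a, b in zip(left, right)]
antiphase = [a + b for a, b in zip(left, right_reversed)]

print(max(abs(s) for s in in_phase))   # ~2.0: the pressures add up
print(max(abs(s) for s in antiphase))  # 0.0: ideal mutual cancellation
```

As the text notes, the perfect zero holds only in this idealized model; in a real room the cancellation is partial and the result is a weak, distorted sound.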

In the most accessible terms, this phenomenon can be described as two signals with the same oscillation (frequency) but shifted in time. It is convenient to picture this shift with ordinary round clocks. Imagine several identical round clocks hanging on a wall. When the second hands of these clocks run in sync, 30 seconds on one clock and 30 seconds on the other, we have an example of signals that are in phase. If the second hands run with a shift but at the same speed, say 30 seconds on one clock and 24 seconds on the other, we have a classic example of a phase shift. Phase is measured in degrees, within a virtual circle. When the signals are shifted relative to each other by 180 degrees (half a period), we get the classical antiphase. In practice, minor phase shifts often occur; they, too, can be measured in degrees and successfully eliminated.

Waves can be plane or spherical. A plane wavefront propagates in only one direction and is rarely encountered in practice. A spherical wavefront is the simplest type of wave: it radiates from a single point and propagates in all directions. Sound waves have the property of diffraction, i.e., the ability to bend around obstacles and objects. The degree of bending depends on the ratio of the sound wavelength to the dimensions of the obstacle or opening. Diffraction also occurs when there is an obstacle in the path of the sound. In this case, two scenarios are possible: 1) If the dimensions of the obstacle are much larger than the wavelength, the sound is reflected or absorbed (depending on the absorption of the material, the thickness of the obstacle, etc.), and an "acoustic shadow" zone forms behind the obstacle. 2) If the dimensions of the obstacle are comparable to the wavelength or smaller, the sound diffracts to some extent in all directions. If a sound wave, while traveling in one medium, hits the interface with another medium (for example, air with a solid), three scenarios are possible: 1) the wave is reflected from the interface; 2) the wave passes into the other medium without changing direction; 3) the wave passes into the other medium with a change of direction at the boundary, which is called "wave refraction".

The ratio of the excess pressure of a sound wave to the oscillatory volumetric velocity is called the wave (acoustic) impedance. In simple terms, the wave impedance of a medium describes its ability to absorb sound waves or to "resist" them. The reflection and transmission coefficients depend directly on the ratio of the wave impedances of the two media. The wave impedance of a gas is much lower than that of water or solids. Therefore, if a sound wave in air falls on a solid object or on the surface of deep water, the sound is either reflected from the surface or absorbed to a large extent. This depends on the thickness of the surface (water or solid) on which the sound wave falls: when the solid or liquid medium is thin, sound waves almost completely "pass through" it, and conversely, when the medium is thick, the waves are more often reflected. The reflection of sound waves obeys the well-known physical law: the angle of incidence is equal to the angle of reflection. When a wave passes from a medium of lower density into a medium of higher density, the phenomenon of refraction occurs: the sound wave bends after "meeting" the boundary, and this is necessarily accompanied by a change in speed. Refraction also depends on the temperature of the medium in which it occurs.

As sound waves propagate through space, their intensity inevitably decreases: the waves attenuate and the sound weakens. In practice this effect is simple to encounter: if two people stand in a field at some close distance (a meter or closer) and begin talking, then, as the distance between them grows, the same conversational volume becomes less and less audible. This example clearly demonstrates the reduction in the intensity of sound waves. Why does this happen? The reason lies in various processes of heat exchange, molecular interaction, and internal friction within the medium. In practice, sound energy is most often converted into thermal energy. Such processes inevitably arise in any of the three sound-propagation media and can be characterized as absorption of sound waves.

The intensity and degree of absorption of sound waves depend on many factors, such as the pressure and temperature of the medium. Absorption also depends on the specific frequency of the sound. When a sound wave propagates in a liquid or gas, friction arises between the different particles, which is called viscosity. As a result of this friction at the molecular level, the wave's energy is converted from sound into heat. The greater the viscosity and thermal conductivity of the medium, the greater the absorption of the wave. Sound absorption in gases also depends on pressure (atmospheric pressure changes with altitude above sea level). As for the dependence of absorption on frequency: given the above dependences on viscosity and thermal conductivity, the absorption of sound is higher the higher its frequency. For example, at normal temperature and pressure, the absorption in air of a wave with a frequency of 5000 Hz is 3 dB/km, while the absorption of a wave with a frequency of 50,000 Hz is already about 300 dB/km.
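Expressed in decibels, atmospheric absorption is a simple linear subtraction over distance. A sketch using the absorption figures quoted above:

```python
def level_after(level_db, absorption_db_per_km, distance_km):
    """Remaining sound level after atmospheric absorption over a distance."""
    return level_db - absorption_db_per_km * distance_km

# The same 100 dB source at 5 kHz vs 50 kHz, after 1 km of air
print(level_after(100, 3, 1))    # 97: a 5 kHz tone barely weakens
print(level_after(100, 300, 1))  # -200: a 50 kHz tone is absorbed entirely
```

This frequency dependence is why distant sounds (thunder, for example) reach us with only their low-frequency content intact.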

In solid media, all the above dependences (thermal conductivity and viscosity) remain valid, but a few more conditions are added. They are connected with the molecular structure of solid materials, which varies and has its own inhomogeneities. Depending on this internal molecular structure, the absorption of sound waves may differ and depends on the specific material. When sound passes through a solid body, the wave undergoes a series of transformations and distortions, which most often leads to scattering and absorption of sound energy. At the molecular level, dislocation effects can occur, when a sound wave displaces atomic planes, which then return to their original position. The movement of dislocations may also lead to collisions with dislocations perpendicular to them or with defects in the crystal structure, which slows them down and, as a result, causes some absorption of the sound wave. However, the sound wave may also resonate with these defects, which distorts the original wave. The energy of the sound wave, at the moment of interaction with the elements of the molecular structure of the material, is dissipated through internal friction processes.

Next, I will try to analyze the features of human auditory perception and some of the subtleties and features of sound propagation.

> Sound characteristic

Let's explore the characteristics and properties of sound as a wave: the sinusoidal motion of sound waves, frequency, tone and amplitude, the perception of sound, and the speed of sound.

Sound is a longitudinal pressure wave passing through a liquid, solid, gas, or plasma.

Learning task

  • Understand how people characterize sound.

Key Points

Terms

  • Medium is a general term for the various materials through which a wave can travel.
  • Hertz (Hz) is the unit of audio frequency: one oscillation per second.
  • Frequency is the ratio of the number of occurrences (n) of a periodic event to time (t): f = n/t.

Let's get familiar with the basics of sound. We are talking about a longitudinal pressure wave passing through a compressible medium. In a vacuum (free of particles and matter) sound is impossible: a vacuum contains no medium, so sound simply cannot travel.

Sound characteristics:

  • Sound is carried by longitudinal waves; in a graphical representation they are shown as sinusoids.
  • Sound waves possess frequency (the pitch rises and falls with it).
  • Amplitude describes loudness.
  • Tone (timbre) is a measure of the quality of a sound wave.
  • Sound travels faster in warm air than in cold, and faster in solids than in gases. The speed in air is higher at sea level (where the air is denser and warmer).
  • Intensity is the energy transmitted through a particular area per unit time; it is a measure of loudness rather than frequency.
  • Ultrasound uses high-frequency waves to find what is normally hidden (for example, tumors). Bats and dolphins also use ultrasound to navigate and find objects; ships use the same scheme (sonar).

Sound perception

Each sound wave has properties, including length, intensity, and amplitude. In addition, each listener has a hearing range, that is, the band of frequencies it can perceive. For example:

  • People: 20 - 20,000 Hz.
  • Dogs: 50 - 45,000 Hz.
  • Bats: 20 - 120,000 Hz.

It can be seen that, of the three, humans have the narrowest hearing range.

Sound speed

The speed of propagation depends on the medium: it is highest in solids and lower in liquids and gases. Formula:

v = √(K/ρ), where K is the stiffness factor (bulk modulus) of the material and ρ is the density.

If something is said to be "faster than the speed of sound", the comparison is with a figure of 344 m/s. This reference measurement is taken at sea level, at a temperature of 21 °C, under normal atmospheric conditions.
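The formula above can be tried with rough handbook values; the bulk modulus figure for water below is my assumption, not from the text:

```python
import math

def speed_of_sound(stiffness_pa, density_kg_m3):
    """v = sqrt(K / rho): speed from the stiffness factor and the density."""
    return math.sqrt(stiffness_pa / density_kg_m3)

# Water: K ~ 2.2e9 Pa, rho ~ 1000 kg/m^3
print(round(speed_of_sound(2.2e9, 1000)))  # ~1483 m/s, near the tabulated value
```

The result agrees well with the ~1481 m/s figure for distilled water quoted earlier, which is a good sanity check on the formula.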

(Figure: an airplane moving faster than the speed of sound.)

Basic characteristics of sound. Transmission of sound over long distances.

Main characteristics of sound:

1. Sound pitch (the number of oscillations per second). Low-pitched sounds (such as the sound of a bass drum) and high-pitched sounds (such as a whistle) are easily distinguished by the ear. Simple measurements (sweeping the oscillation) show that a low-pitched sound corresponds to low-frequency oscillations in the sound wave, and a high-pitched sound to a higher vibration frequency. The frequency of vibrations in a sound wave determines the pitch of the sound.

2. Sound volume (amplitude). The loudness of a sound, judged by its effect on the ear, is a subjective assessment. The greater the flow of energy reaching the ear, the greater the loudness. A convenient quantity to measure is the sound intensity: the energy transferred by the wave per unit time through a unit area perpendicular to the direction of propagation. The intensity of the sound grows with the amplitude of the vibrations and with the area of the oscillating body. Loudness is also measured in decibels (dB). For example, the loudness of rustling leaves is estimated at 10 dB, a whisper at 20 dB, street noise at 70 dB, the pain threshold at 120 dB, and a lethal level at 180 dB.

3. Sound timbre. The second subjective assessment. The timbre of a sound is determined by a combination of overtones. A different number of overtones inherent in a particular sound gives it a special color - timbre. The difference between one timbre and another is due not only to the number, but also to the intensity of the overtones that accompany the sound of the fundamental tone. By timbre, one can easily distinguish the sounds of various musical instruments, the voices of people.

Sound vibrations with a frequency of less than 20 Hz are not perceived by the human ear.

The sound range of the ear is 20 Hz - 20 thousand Hz.

Transmission of sound over long distances.

The problem of transmitting sound over a distance was successfully solved with the creation of the telephone and radio. Using a microphone, which imitates the human ear, the acoustic vibrations of the air (sound) at a given point are converted into synchronous changes in the amplitude of an electric current (an electrical signal); this signal is delivered to the right place by wires or by electromagnetic waves (radio waves) and converted back into acoustic vibrations similar to the original ones.

Scheme for transmitting sound over a distance

1. Converter "sound - electrical signal" (microphone)

2. Electrical signal amplifier and electrical communication line (wires or radio waves)

3. Converter "electrical signal - sound" (loudspeaker)

Volumetric acoustic vibrations are perceived by a person at a single point and can therefore be represented as a point signal source. The signal has two parameters related by a function of time: the vibration frequency (tone) and the vibration amplitude (loudness). The amplitude of the acoustic signal must be converted proportionally into the amplitude of the electric current while preserving the oscillation frequency.

Sound sources are any phenomena that cause a local change in pressure or mechanical stress. Oscillating solid bodies are widespread sources of sound. Vibrations of limited volumes of the medium itself can also serve as sources (for example, in organ pipes, wind instruments, whistles, etc.). The human and animal vocal apparatus is a complex oscillatory system. An extensive class of sound sources are electroacoustic transducers, in which mechanical vibrations are created by converting oscillations of electric current of the same frequency. In nature, sound is excited when air flows around solid bodies, due to the formation and shedding of vortices, for example when the wind blows past wires, pipes, or the crests of sea waves. Sounds of low and infra-low frequencies arise during explosions and collapses. There are various sources of acoustic noise, including the machines and mechanisms used in technology, and gas and water jets. Much attention is paid to the study of sources of industrial, transport, and aerodynamic noise because of their harmful effects on the human body and on technical equipment.

Sound receivers serve to perceive sound energy and convert it into other forms. Sound receivers include, in particular, the hearing apparatus of humans and animals. In reception technology, electroacoustic transducers, such as microphones, are mainly used.
The propagation of sound waves is characterized primarily by the speed of sound. In a number of cases, sound dispersion is observed, i.e., a dependence of the propagation speed on frequency. Dispersion leads to a change in the shape of complex acoustic signals containing a number of harmonic components, in particular to the distortion of sound pulses. During the propagation of sound waves, the phenomena of interference and diffraction, common to all types of waves, take place. When the size of obstacles and inhomogeneities in the medium is large compared to the wavelength, sound propagation obeys the usual laws of reflection and refraction of waves and can be considered from the standpoint of geometric acoustics.

When a sound wave propagates in a given direction, its gradual attenuation occurs, i.e., a decrease in intensity and amplitude. Knowing the laws of attenuation is practically important for determining the maximum range of propagation of an audio signal.

Ways of communication:

· Images

The coding system must be understandable to the addressee.

Sound communications appeared first.

Sound (carrier - air)

Sound wave: drops (variations) of air pressure

Encoded information - eardrums

hearing sensitivity

Decibel: a relative logarithmic unit

Sound properties:

Volume (dB)

Key

0 dB = 2·10⁻⁵ Pa

Hearing threshold - pain threshold

Dynamic range is the ratio of the loudest sound to the quietest

Pain threshold = 120 dB

Frequency (Hz)
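These note fragments tie together as follows: sound pressure level in dB is 20·log10(p/p0) with the reference p0 = 2·10⁻⁵ Pa, and the ~120 dB span from the hearing threshold to the pain threshold is the ear's dynamic range. A sketch (the formula is the standard SPL definition, assumed rather than spelled out in the notes):

```python
import math

P0 = 2e-5  # Pa: reference pressure for 0 dB (hearing threshold)

def spl_db(pressure_pa):
    """Sound pressure level relative to the 2e-5 Pa reference."""
    return 20 * math.log10(pressure_pa / P0)

print(spl_db(2e-5))  # 0.0: the hearing threshold
print(spl_db(20.0))  # ~120: pain threshold, i.e. a dynamic range of ~120 dB
```

Note the factor of 20 rather than 10: intensity is proportional to the square of pressure, so each doubling of pressure adds about 6 dB.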

Parameters and spectrum of the sound signal: speech, music. Reverberation.

Sound: an oscillation that has its own frequency and amplitude

The sensitivity of our ear to different frequencies is different

Hz: 1 oscillation per second

20 Hz to 20,000 Hz - audio range

Infrasounds - sounds less than 20 Hz

Sounds over 20 thousand Hz and less than 20 Hz are not perceived

Intermediate encoding and decoding system

Any process can be described by a set of harmonic oscillations

Spectrum of the audio signal: a set of harmonic oscillations of the corresponding frequencies and amplitudes

Amplitude changes

Frequency is constant

Sound vibration: the change of amplitude in time

Dependence of mutual amplitudes

Frequency response is the dependence of the amplitude on the frequency

Our ear has a frequency response

No device is perfect; every device has its own frequency response

Frequency response matters for everything related to the conversion and transmission of sound

Equalizer adjusts frequency response

340 m / s - the speed of sound in air

Reverberation: the blurring of sound

Reverberation time: the time over which the signal level decreases by 60 dB

Compression: a sound-processing technique in which loud sounds are reduced and soft sounds are made louder

Reverberation is a characteristic of the room in which the sound propagates

Sampling frequency: the number of samples per second

Phonetic coding

Fragments of an information image - coding - phonetic apparatus - human hearing

Waves can't travel far

You can increase the volume of the sound

Electricity

Wavelength - distance

Sound=function A(t)

Converting the amplitude of sound vibrations into the amplitude of an electric current = secondary encoding

Phase: the delay of one oscillation relative to another in time, measured in angular units

Amplitude modulation– information is contained in the amplitude change

Frequency modulation- in frequency

Phase modulation- in phase
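The three modulation types can be illustrated numerically; the sketch below generates AM and FM versions of a hypothetical 1 kHz message (numpy assumed; the carrier frequency and deviation are arbitrary illustrative values):

```python
import numpy as np

fs = 100_000                     # sampling rate, Hz
t = np.arange(fs) / fs
msg = np.sin(2*np.pi*1000*t)     # 1 kHz information signal
fc = 10_000                      # carrier frequency (must exceed the message frequency)

# Amplitude modulation: the information sits in the envelope.
am = (1 + 0.5*msg) * np.cos(2*np.pi*fc*t)

# Frequency modulation: the information sits in the instantaneous frequency.
dev = 2000                                           # frequency deviation, Hz
phase = 2*np.pi*fc*t + 2*np.pi*dev*np.cumsum(msg)/fs
fm = np.cos(phase)

# (Phase modulation would instead add the message directly to the phase.)
print(am.shape, fm.shape)  # (100000,) (100000,)
```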

Electromagnetic oscillations propagate without a medium

Earth's circumference: 40 thousand km

Earth's radius: 6.4 thousand km

A radio wave circles the Earth almost instantly!

Frequency (linear) distortions occur at each stage of information transmission

Amplitude transfer coefficient

Linear - signals are transmitted with some loss of information

Can be compensated for

Nonlinear– cannot be prevented, associated with unrecoverable amplitude distortion

Maxwell predicted that electromagnetic oscillations can propagate; Hertz demonstrated it experimentally

1895 - Popov invented the radio

1896 - abroad, Marconi obtained a patent and the right to use Tesla's work

Real application at the beginning of the twentieth century

The fluctuation of electric current is not difficult to superimpose on electromagnetic oscillations

The frequency must be higher than the information frequency

Early 20s

Signal transmission by amplitude modulation of radio waves

Range up to 7000 Hz

AM Broadcasting, longwave

Short waves: from 2.5 MHz to 26 MHz

Frequencies above 26 MHz belong to the VHF range

No boundaries of distribution

VHF (frequency modulation), stereo broadcast (2 channels)

FM - frequency

Phase not used

Radio carrier frequency

Broadcast range

carrier frequency

Reception zone- the territory in which radio waves propagate with energy sufficient for high-quality information reception

D(km) = 3.57(√H + √h)

H - height of the transmitting antenna (m)

h - height of the receiving antenna (m)

The reception zone depends on the height of the antenna, given sufficient power
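The reception-range formula above, wrapped in a small helper (the antenna heights are hypothetical examples):

```python
import math

def reception_range_km(h_tx_m: float, h_rx_m: float) -> float:
    """Radio-horizon estimate D(km) = 3.57 * (sqrt(H) + sqrt(h)),
    with H, h the transmitting/receiving antenna heights in metres."""
    return 3.57 * (math.sqrt(h_tx_m) + math.sqrt(h_rx_m))

# A hypothetical 100 m mast and a receiving antenna 4 m above ground:
print(round(reception_range_km(100, 4), 1))  # 42.8 km
```

The square roots reflect that range grows with height much more slowly than linearly: quadrupling the mast height only doubles its contribution.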

radio transmitter– carrier frequency, power and height of the transmitting antenna

Licensed

A license is required to distribute radio waves

Broadcast network:

Source sound content (content)

Connecting lines

Transmitters (Lunacharsky Street, near the circus, Asbest)

Radio

Power redundancy

radio program- a set of audio messages

radio station– radio program broadcast source

Traditional: Radio editorial office (creative team), Radiohouse (a set of technical and technological means)

radio house

radio studio– a room with suitable acoustic parameters, soundproofed

Discretization by frequency

The analog signal is divided in time into intervals; the rate is measured in hertz. On each interval the amplitude is measured

Bit quantization. Sampling frequency - splitting the signal in time into equal segments in accordance with the Kotelnikov theorem

For undistorted transmission of a continuous signal occupying a certain frequency band, it is necessary that the sampling frequency be at least twice the upper frequency of the reproducible frequency range

30 Hz to 15 kHz

CD: 44,100 Hz (44.1 kHz)
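The Kotelnikov criterion above in code form: the required sampling rate is simply twice the highest frequency in the signal.

```python
def min_sampling_rate_hz(max_signal_freq_hz: float) -> float:
    """Kotelnikov/Nyquist criterion: sample at least twice
    the highest frequency present in the signal."""
    return 2 * max_signal_freq_hz

print(min_sampling_rate_hz(15_000))  # 30000 -- enough for 30 Hz..15 kHz broadcast audio
print(min_sampling_rate_hz(20_000))  # 40000 -- CD's 44100 Hz sits comfortably above this
```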

Digital compression of information

Compression - the ultimate goal is the removal of redundant information from the digital stream.

The sound signal is a random process; its levels at different moments in time are linked by correlation

Correlative- links describing events in time intervals: previous, present and future

Long-term - spring, summer, autumn

short-term

extrapolation method. From digital to sine wave

Only the difference between the next signal and the previous one is transmitted.
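Transmitting only the difference between the next sample and the previous one can be sketched as simple delta encoding (the integer samples are illustrative):

```python
def delta_encode(samples):
    """Transmit only the difference between each sample and the previous one."""
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def delta_decode(deltas):
    """Rebuild the original samples by accumulating the differences."""
    prev, out = 0, []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

signal = [10, 12, 13, 13, 11, 8]
deltas = delta_encode(signal)
print(deltas)                          # [10, 2, 1, 0, -2, -3]
print(delta_decode(deltas) == signal)  # True
```

Because neighbouring samples of a correlated signal are close in value, the differences are small numbers and need fewer bits than the samples themselves.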

Psychophysical properties of sound - allows the ear to select signals

Specific gravity in signal volume

Real/impulsive

The system is noise-resistant; nothing depends on the shape of the pulse. The pulse is easy to recover

AFC (amplitude-frequency characteristic, i.e. frequency response) - the dependence of amplitude on frequency

Adjusting the AFC changes the tone of the sound

Equalizer - frequency response corrector

Low, medium, high frequencies

Bass, mids, highs

Equalizer 10, 20, 40, 256 bands

Spectrum analyzer - delete, recognize voice

Psychoacoustic devices

Forces are a process

Frequency-processing devices - plug-ins: modules that, when the program is open source, can be refined and distributed

Dynamic Signal Processing

Applications– devices that regulate dynamic devices

Volume– signal level

Level controls

Faders / mixers

Fade in / Fade out

Noise reduction

Peak cutter (limiter)

Compressor

Squelch

color vision

The human eye contains two types of light-sensitive cells (photoreceptors): highly sensitive rods responsible for night vision and less sensitive cones responsible for color vision.

In the human retina, there are three types of cones, the sensitivity maxima of which fall on the red, green and blue parts of the spectrum.

binocular

The visual analyzer of a person under normal conditions provides binocular vision, that is, vision with two eyes with a single visual perception.

AM (LW, MW, HF) and FM (VHF and FM) broadcasting frequency bands.

Radio- a type of wireless communication in which radio waves freely propagating in space are used as a signal carrier.

The transmission takes place as follows: a signal with the required characteristics (frequency and amplitude of the signal) is formed on the transmitting side. Further transmitted signal modulates a higher frequency oscillation (carrier). The received modulated signal is radiated by the antenna into space. On the receiving side of the radio wave, a modulated signal is induced in the antenna, after which it is demodulated (detected) and filtered by the low-pass filter (thus getting rid of the high-frequency component - the carrier). Thus, the useful signal is extracted. The received signal may differ slightly from that transmitted by the transmitter (distortion due to interference and interference).
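The transmit/receive chain just described can be sketched end to end: a message modulates a carrier, then the receiving side detects the envelope and low-pass filters it to discard the high-frequency carrier component. This is a crude illustrative sketch (numpy assumed; a moving average stands in for a real low-pass filter):

```python
import numpy as np

fs = 200_000                          # sampling rate, Hz
t = np.arange(fs) / fs
msg = np.sin(2*np.pi*500*t)           # the useful (information) signal
carrier = np.cos(2*np.pi*20_000*t)    # higher-frequency carrier

# Transmitting side: the signal modulates the carrier (AM).
tx = (1 + 0.5*msg) * carrier

# Receiving side: rectify to detect the envelope, then low-pass
# filter to get rid of the high-frequency carrier component.
# A moving average over 5 carrier cycles stands in for a real filter.
rectified = np.abs(tx)
kernel = np.ones(50) / 50
recovered = np.convolve(rectified, kernel, mode="same")

# The recovered waveform tracks the original message (up to scale),
# as the correlation coefficient confirms.
corr = np.corrcoef(recovered[1000:-1000], msg[1000:-1000])[0, 1]
print(corr)
```

In a real receiver the same steps are performed by a detector diode and an analog low-pass filter; interference along the path is what makes the received signal differ slightly from the transmitted one.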

In the practice of broadcasting and television, a simplified classification of radio bands is used:

Extra long waves (VLW)- myriameter waves

Long waves (LW)- kilometer waves

Medium waves (MW)- hectometric waves

Short waves (HF) - decameter waves

Ultrashort waves (VHF) - high-frequency waves, the wavelength of which is less than 10 m.

Depending on the range, radio waves have their own characteristics and propagation laws:

LW are strongly absorbed by the ionosphere; ground waves propagating around the earth are of main importance. Their intensity decreases relatively quickly with increasing distance from the transmitter.

MW are strongly absorbed by the ionosphere during the day, and the coverage area is determined by the surface wave; in the evening they are well reflected from the ionosphere, and the coverage area is determined by the reflected wave.

HF propagate exclusively through reflection by the ionosphere, therefore, around the transmitter there is a so-called. radio silence zone. Shorter waves (30 MHz) propagate better during the day, longer ones (3 MHz) at night. Short waves can travel long distances with low transmitter power.

VHF propagate rectilinearly and, as a rule, are not reflected by the ionosphere, however, under certain conditions, they are able to go around the globe due to the difference in air densities in different layers of the atmosphere. Easily bend around obstacles and have a high penetrating power.

Radio waves propagate in a vacuum and in the atmosphere; the Earth itself and water are opaque to them. However, due to the effects of diffraction and reflection, communication is possible between points on the earth's surface that have no direct line of sight (in particular, points at a great distance from each other).

New TV broadcasting bands

· MMDS range 2500-2700 MHz, 24 channels for analog TV broadcasting. Used in cable TV systems

· LMDS: 27.5-29.5 GHz. 124 TV analog channels. Since the digital revolution. Acquired by mobile operators

· MWS - MWDS: 40.5-42.4 GHz. Cellular broadcasting system. These high frequencies are rapidly absorbed (cell radius about 5 km)

2. Decompose the image into pixels

256 levels

Key frame, then its changes

Analog to digital converter

At the input - analog, at the output - a digital stream. Digital compression formats

Uncompressed video - three colors per pixel, 25 fps, 256 Mbit/s
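The uncompressed-video figure quoted above can be checked by direct arithmetic (the 720×576 SD frame size is an assumption; the notes only give the bitrate):

```python
# Uncompressed SD video: 720x576 pixels, 3 colour bytes per pixel, 25 fps.
width, height, bytes_per_pixel, fps = 720, 576, 3, 25

bits_per_second = width * height * bytes_per_pixel * 8 * fps
print(round(bits_per_second / 1e6))  # 249 -- megabits per second, close to the ~256 quoted
```

This is exactly why digital TV needs the compression steps listed below: a ~25 Mbit/s DVD-class stream is already a tenfold reduction.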

DVD, AVI - a stream of 25 Mbit/s

MPEG-2 - additional compression of 3-4 times in satellite broadcasting

Digital TV

1. Simplify, reduce the number of points

2. Simplify color selection

3. Apply Compression

256 levels - Luminance dynamic range

Digital 4 times larger horizontally and vertically

Flaws

· A sharply limited signal coverage area within which reception is possible. But this territory, with equal transmitter power, is larger than that of an analog system.

· Fading and scattering of the picture into "squares" with an insufficient level of the received signal.

Both "disadvantages" are a consequence of the advantages of digital data transmission: data is either received with 100% quality or restored, or received poorly and cannot be restored.

Digital radio- technology of wireless transmission of a digital signal by means of electromagnetic waves of the radio range.

Advantages:

· Better sound quality than FM broadcast. Currently not implemented due to low bit rate (typically 96 kbps).

· In addition to sound, texts, pictures and other data can be transmitted. (More than RDS)

· Weak radio interference does not change the sound in any way.

· More economical use of frequency space through signaling.

· Transmitter power can be reduced by 10 to 100 times.

Flaws:

· In case of insufficient signal power, interference appears in analog broadcasting, and in digital broadcasting, the broadcast disappears altogether.

· Audio delay due to the time it takes to process the digital signal.

· “Field trials” are currently being carried out in many countries around the world.

· Now in the world the transition to digital is gradually beginning, but it is much slower than that of television due to shortcomings. So far, there are no mass blackouts of radio stations in the analog mode, although their number in the AM band is decreasing due to more efficient FM.

In 2012, the SCRF signed a protocol, according to which the radio frequency band 148.5-283.5 kHz is allocated for the creation of DRM standard digital broadcasting networks in the Russian Federation. Also, in accordance with paragraph 5.2 of the minutes of the meeting of the SCRF dated January 20, 2009 No. 09-01, a research work was carried out “Study of the possibility and conditions for using DRM standard digital broadcasting in the Russian Federation in the frequency band 0.1485-0.2835 MHz (long waves)”.

Thus, for an indefinite time, FM broadcasting will be carried out in an analog format.

In Russia, the federal radio stations Radio Russia, Mayak and Vesti FM are broadcast in the first DVB-T2 digital terrestrial television multiplex.

Internet radio or web radio- a group of technologies for transmitting streaming audio data over the Internet. Also, the term Internet radio or web radio can be understood as a radio station using Internet streaming technology for broadcasting.

There are three elements in the technological basis of the system:

Station- generates an audio stream (either from a list of audio files, or by direct digitization from an audio card, or by copying an existing stream in the network) and sends it to the server. (The station consumes a minimum of traffic because it creates one stream)

Server (flow repeater)- receives an audio stream from the station and redirects copies of it to all clients connected to the server, in fact it is a data replicator. (Server traffic is proportional to the number of listeners + 1)

Client- receives an audio stream from the server and converts it into an audio signal that the listener of the Internet radio station hears. It is possible to organize cascade broadcasting systems using a stream repeater as a client. (The client, like the station, consumes a minimum of traffic. The traffic of the client-server of the cascade system depends on the number of listeners of such a client.)
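The traffic proportions noted for the station and server can be sketched with a toy calculation (the 128 kbit/s bitrate and the listener count are hypothetical):

```python
def station_streams(listeners: int) -> int:
    """The station always sends exactly one stream (to the server)."""
    return 1

def server_streams(listeners: int) -> int:
    """Server traffic is proportional to the number of listeners + 1:
    one incoming stream from the station plus one copy per listener."""
    return listeners + 1

bitrate_kbps = 128   # hypothetical stream bitrate
listeners = 500      # hypothetical audience size

print(station_streams(listeners) * bitrate_kbps)  # 128   kbit/s at the station
print(server_streams(listeners) * bitrate_kbps)   # 64128 kbit/s at the server
```

This asymmetry is the point of the architecture: the station and each client need only one stream's worth of bandwidth, while all the fan-out cost is concentrated at the server (or spread across cascaded repeaters).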

In addition to the audio data stream, text data is usually also transmitted so that the player displays information about the station and the current song.

The station can be a regular audio player program with a special codec plug-in or a specialized program (for example, ICes, EzStream, SAM Broadcaster), as well as a hardware device that converts an analog audio stream into a digital one.

As a client, you can use any media player that supports streaming audio and is able to decode the format in which the radio broadcasts.

It should be noted that Internet radio, as a rule, has nothing to do with over-the-air broadcasting, although rare exceptions exist; in the CIS they are uncommon.

Internet protocol television(Internet television or on-line TV) - a system based on two-way digital transmission of a television signal over Internet connections through a broadband connection.

The Internet TV system allows you to implement:

· Manage the subscription package of each user

Broadcasting of channels in MPEG-2, MPEG-4 format

Presentation of television programs

The function of registering television programs

Search for past TV shows to watch

· Pause function for live TV channel

Individual package of TV channels for each user

New media - a term that came into use at the end of the 20th century for interactive electronic publications and new forms of communication between content producers and consumers, to mark the difference from traditional media such as newspapers; it refers to the development of digital, network technologies and communications. Convergence and multimedia editorial offices have become commonplace elements of today's journalism.

This is primarily about digital technologies and these trends are associated with the computerization of society, since until the 80s media relied on analog media.

It should be noted that, according to Riepl's law, more highly developed mass media do not replace the previous ones, so the task of new media is also to recruit its own consumers and to find other areas of application: "the online version of a printed publication is hardly capable of replacing the printed publication itself."

It is necessary to distinguish between the concepts of "new media" and "digital media", although both use digital means of encoding information.

Anyone can become a "new media" publisher in terms of process technology. Vin Crosbie, who describes "mass media" as a "one-to-many" broadcasting tool, considers new media to be "many-to-many" communication.

The digital age creates a different media environment. Reporters are getting used to working in cyberspace. As noted, earlier “covering international events was a simple matter”

Speaking about the relationship between the information society and new media, Yasen Zasursky focuses on three aspects, highlighting new media precisely as an aspect:

· Possibilities of the media at the present stage of development of information and communication technologies and the Internet.

Traditional media in the context of "internetization"

· New media.

Radio studio. Structure.

How to organize faculty radio?

Content

What to have and be able to? Broadcasting zones, composition of equipment, number of people

License not required

(Territorial authority "Roskomnadzor", registration fee, ensure periodicity, at least once a year, certificate to a legal entity, a radio program is registered)

Creative team

Chief editor and legal entity

Less than 10 people - contract, more than 10 - charter

The technical basis for the production of radio products is a set of equipment on which radio programs are recorded, processed and subsequently broadcast. The main technical task of the radio stations is to ensure a clear, uninterrupted and high-quality operation of the technological equipment for broadcasting and sound recording.

Radio houses and television centers are the organizational form of the channel for the formation of programs. Employees of radio and television centers are divided into creative specialists (journalists, sound and video directors, employees of production departments, coordination departments, etc.) and technical specialties - a hardware-studio complex (employees of studios, hardware and some auxiliary services).

Hardware-studio complex- these are interconnected blocks and services, united by technical means, with the help of which the process of formation and release of audio and television broadcasting programs is carried out. The hardware-studio complex includes a hardware-studio block (for creating parts of programs), a broadcasting hardware (for RV) and a hardware-software block (for TV). In turn, the equipment-studio block consists of studios and technical and director's equipment rooms, which is due to different technologies for direct broadcasting and recording.

radio studios- these are special rooms for radio broadcasts that meet a number of requirements for acoustic processing in order to maintain a low level of noise from external sound sources, to create a sound field that is uniform in the volume of the room. With the advent of electronic devices for controlling phase and time characteristics, small, completely "muted" studios are increasingly used.

Depending on the purpose, the studios are divided into small (on-air) (8-25 sq. m), medium-sized studios (60-120 sq. m), large studios (200-300 sq. m).

In accordance with the plan of the sound engineer, microphones are installed in the studio, their optimal characteristics are selected (type, directivity diagram, output signal level).

Editing control rooms are designed to prepare parts of future programs, from simple editing of musical and speech phonograms after the initial recording to mixing multi-channel sound down to mono or stereo. Next, in the program-preparation control rooms, parts of the future transmission are formed from the originals of individual works; in this way a fund of ready-made phonograms is built up. The entire program is assembled from individual transmissions and sent to the central control room. The publication and coordination departments coordinate the actions of the editorial offices. In large radio houses and television centers, to ensure that old recordings meet modern broadcasting requirements, there are phonogram-restoration rooms, where the noise level and various distortions are corrected.

After the complete formation of the program, electrical signals enter the broadcasting equipment.

The hardware-studio block is equipped with a director's console, a loudspeaker control unit, tape recorders and sound-effects devices. Illuminated signs are installed in front of the entrance to the studio: "Rehearsal", "Get ready", "Microphone on". The studios are equipped with microphones and an announcer's console with microphone activation buttons, signal lamps, and telephone sets with a light-based ringer. Announcers can contact the control room, production department, editorial office, and some other services.

The main device of the director's room is the sound engineer's console, with the help of which technical and creative tasks are solved at the same time: editing and signal conversion.

In the broadcast control room of the radio house, a program is formed from the various transmissions. The parts of the program that have undergone sound processing and editing do not require additional technical control, but different signals (speech, musical accompaniment, sound idents, etc.) need to be combined. In addition, equipment for automated production of programs is installed in modern broadcast control rooms.

The final control of programs is carried out in the central control room, where additional regulation of electrical signals and their distribution to consumers takes place on the sound control panel. Here the frequency processing of the signal, its amplification to the required level, compression or expansion, the introduction of call signs of the program and exact time signals are performed.

The composition of the hardware complex of the radio station.

The main expressive means of radio broadcasting are music, speech and service signals. To bring together in the correct balance (mixing) of all sound signals, the main element of the broadcasting hardware complex is used - Mixer(mixing console). The signal formed on the console from the console output passes through a number of special signal processing devices (compressor, modulator, etc.) and is fed (via a communication line or directly) to the transmitter. Signals from all sources are fed to the console inputs: microphones that transmit the speech of the presenters and guests on the air; sound reproduction devices; signal playback devices. In a modern radio studio, the number of microphones can be different - from 1 to 6 or even more. However, for most cases, 2-3 is enough. Various types of microphones are used.
Prior to being input to the console, the microphone signal can be subjected to various processing (compression, frequency correction, in some special cases - reverberation, tonal shift, etc.) in order to increase speech intelligibility, equalize the signal level, etc.
Sound reproduction devices at most stations are represented by CD players and tape recorders. The range of used tape recorders depends on the specifics of the station: it can be digital (DAT - digital cassette recorder; MD - recording and playback device for digital minidisk) and analog devices (reel-to-reel studio tape recorders, as well as professional cassette decks). Some stations also use playback from vinyl discs; for this, either professional "gram tables" are used, or - more often - simply high-quality players, and sometimes special "DJ" turntables, similar to those used in discos.
Some stations, where the principle of song rotation is widely used, play music directly from the computer's hard drive, where a certain set of songs rotated this week is pre-recorded in the form of wave files (usually in WAV format). Service signal playback devices are used in various types. As in foreign broadcasting, analog cassette devices (jingles) are widely used, the sound carrier in which is a special tape cassette. On each cassette, as a rule, one signal is recorded (intro, jingle, beat, substrate, etc.); the tape in the cassettes of the jingle guide is looped, therefore, immediately after use, it is again ready for playback. On many radio stations that use the traditional type of broadcasting organizations, the signals are played from reel-to-reel tape recorders. Digital devices are either devices where the carrier of each individual signal is floppy disks or special cartridges, or devices where signals are played directly from a computer hard drive.
Various recording devices are also used in the broadcasting hardware complex: these can be both analog and digital tape recorders. These devices are used both for recording individual fragments of the air into the radio station's archive or for subsequent repetition, and for continuous control recording of the entire air (the so-called police tape). In addition, the radio broadcasting hardware complex includes monitor loudspeaker systems for listening to the program signal (the mix at the output of the console) and for pre-listening ("eavesdropping") to the signal from various media before it goes on air, as well as headphones into which the program signal is fed, etc. Part of the hardware complex can also be an RDS (Radio Data System) device - a system that allows a listener with a special receiving device to receive not only an audio signal but also text (the name of the radio station, sometimes the title and artist of the current work, and other information) shown on a dedicated display.

Classification

By sensitivity

Highly sensitive

Medium sensitive

Low sensitive (contact)

By dynamic range

· Speech

· Office communication

By direction

Every microphone has a frequency response

Not directed

One-way directional

Stationary

Friday

TV studio

Special lighting - lighting in the studio

Sound-absorbing flooring

· Scenery

· Means of communication

soundproof room for the sound engineer

· Producer

· Video monitors

Sound control 1 mono 2 stereo

· Technical staff

Mobile TV station

Mobile reporting station

video recorder

Sound path

Video camera

TS time code

Color- the brightness of the three points of red, green, blue

clarity or resolution

Bitrate- digital stream

· Discretization of 2200 lines

quantization

TVL (TV Line)

Broadcast (broadcast)

Line- unit of measurement of resolution

Analog to Digital Converter - Digital

VHS up to 300 TVL

Broadcast over 400 TVL

DPI - dots per inch

Gloss=600 DPI

Photos, portraits=1200 DPI

TV picture=72 DPI

Camera resolution

Lens - megapixels - quality electr. block

720 × 576 (SD frame size)

Digital video DV

HD (High Definition) 1920×1080 - 25 Mbit/s

Objective

To study the basics of the theory of sound recording and playback, the main characteristics of sound, methods of sound conversion, the device and features of the use of equipment for converting and amplifying sound, to gain skills in their practical application.

Theoretical reference

Sound is the oscillatory motion of particles of an elastic medium, propagating in the form of waves in a gaseous, liquid or solid medium, which, acting on the human auditory analyzer, causes auditory sensations. The sound source is an oscillating body, for example: a vibrating string, a vibrating tuning fork, the moving cone of a loudspeaker, etc.

A sound wave is the process of directed propagation of vibrations of an elastic medium from a sound source. The region of space in which a sound wave propagates is called the sound field. A sound wave is an alternation of compressions and rarefactions of air. In the region of compression the air pressure exceeds atmospheric pressure; in the region of rarefaction it is below it. The variable part of atmospheric pressure is called sound pressure, p. The unit of sound pressure is the pascal (Pa; 1 Pa = 1 N/m²). Oscillations that have a sinusoidal shape (Fig. 1) are called harmonic. If a sound-emitting body oscillates sinusoidally, then the sound pressure also changes sinusoidally. It is known that any complex oscillation can be represented as a sum of simple harmonic oscillations. The sets of amplitudes and frequencies of these harmonic oscillations are called, respectively, the amplitude spectrum and the frequency spectrum.

The oscillatory motion of air particles in a sound wave is characterized by a number of parameters:

Oscillation period (T) - the smallest period of time after which the values of all physical quantities characterizing the oscillatory motion repeat; during this time one complete oscillation occurs. The oscillation period is measured in seconds (s).

Oscillation frequency (f) - the number of complete oscillations per unit time:

f = 1/T

where f is the oscillation frequency and T is the oscillation period.

The unit of frequency is the hertz (Hz) - one complete oscillation per second (1 kHz = 1000 Hz).

Fig. 1. Simple harmonic oscillation:
A is the amplitude of the oscillation, T is the period of the oscillation

Wavelength (λ) - the distance over which one period of oscillation fits. Wavelength is measured in meters (m). Wavelength and oscillation frequency are related by:

λ = c / f

where c is the speed of sound propagation.
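The wavelength relation can be wrapped in a small helper; at c ≈ 340 m/s the audible range spans wavelengths from about 17 m down to 1.7 cm:

```python
def wavelength_m(freq_hz: float, c: float = 340.0) -> float:
    """lambda = c / f, with c the speed of sound in air (~340 m/s at 20 C)."""
    return c / freq_hz

print(wavelength_m(20))      # 17.0  -- lowest audible frequency: a 17 m wave
print(wavelength_m(20_000))  # 0.017 -- highest audible frequency: 1.7 cm
```

This range matters for diffraction below: low-frequency waves, being metres long, easily bend around everyday obstacles, while high-frequency waves do not.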

Oscillation amplitude (A) - the largest deviation of the oscillating quantity from the state of rest.

Oscillation phase.

Imagine a circle whose length is equal to the distance between points A and E (Fig. 2), or the wavelength at a certain frequency. As this circle “rotates”, its radial line in each individual place of the sinusoid will be at a certain angular distance from the starting point, which will be the phase value at each such point. Phase is measured in degrees.

When a sound wave collides with a surface, it is partially reflected at the same angle at which it strikes the surface; its phase does not change. Fig. 3 illustrates the phase dependence of the reflected waves.

Fig. 2. Sine wave: amplitude and phase.
If the circumference is equal to the wavelength at a certain frequency (the distance from A to E), then as it rotates, the radial line of this circle will show an angle corresponding to the phase value of the sinusoid at a particular point

Fig. 3. Phase dependence of reflected waves.
Sound waves of different frequencies emitted by a sound source with the same phase, after passing the same distance, reach the surface with different phases

A sound wave is able to bend around obstacles if its length is greater than the dimensions of the obstacle. This phenomenon is called diffraction. Diffraction is especially noticeable on low-frequency oscillations having a significant wavelength.

If two sound waves have the same frequency, then they interact with each other. The process of interaction is called interference. When the in-phase (coinciding in phase) oscillations interact, the sound wave is amplified. In the case of interaction of antiphase oscillations, the resulting sound wave weakens (Fig. 4). Sound waves whose frequencies differ significantly from each other do not interact with each other.

Fig. 4. Interaction of oscillations in phase (a) and in antiphase (b):
1, 2 - interacting oscillations, 3 - resulting oscillations

Sound vibrations can be damped and undamped. The amplitude of damped oscillations gradually decreases. An example of damped vibrations is the sound that occurs when a string is excited once or a gong is struck. The reason for the damping of the vibrations of a string is the friction of the string against the air, as well as the friction between the particles of the vibrating string. Continuous oscillations can exist if friction losses are compensated by an influx of energy from outside. An example of undamped oscillations are the oscillations of the cup of a school bell. While the power button is pressed, there are undamped vibrations in the call. After the cessation of the energy supply to the bell, the oscillations die out.

Propagating through the room from its source, the sound wave transfers energy and expands until it reaches the boundary surfaces of the room: walls, floor, ceiling, etc. The propagation of sound waves is accompanied by a decrease in their intensity. This is due to the loss of sound energy in overcoming friction between air particles. In addition, propagating in all directions from the source, the wave covers an ever larger area of space, which reduces the amount of sound energy per unit area: with each doubling of the distance from a spherical source, the intensity of the vibrations of air particles drops by 6 dB (four times in power) (Fig. 5).

Fig. 5. The energy of a spherical sound wave is distributed over an ever-increasing area of the wave front, due to which the sound pressure decreases by 6 dB with each doubling of the distance from the source
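The 6 dB-per-doubling rule follows directly from spherical spreading; a minimal sketch:

```python
import math

def spl_drop_db(distance_ratio: float) -> float:
    """Free-field spherical spreading: the level drops by
    20*log10(r2/r1) dB when moving from distance r1 to r2."""
    return 20 * math.log10(distance_ratio)

print(round(spl_drop_db(2), 2))  # 6.02  -- per doubling of distance
print(round(spl_drop_db(4), 2))  # 12.04 -- over two doublings
```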

Encountering an obstacle in its path, part of the energy of the sound wave passes through the walls, part is absorbed inside the walls, and part is reflected back into the room. The energy of the reflected and absorbed sound waves together equals the energy of the incident sound wave. To varying degrees, all three types of sound energy distribution are present in almost all cases (Fig. 6).

Fig. 6. Reflection and absorption of sound energy

The reflected sound wave, having lost part of the energy, will change direction and will propagate until it reaches other surfaces of the room, from which it will be reflected again, losing some more energy, etc. This will continue until the energy of the sound wave finally fades away.

The reflection of a sound wave occurs according to the laws of geometric optics. High-density substances (concrete, metal, etc.) reflect the sound well. Sound wave absorption is due to several reasons. The sound wave expends its energy on vibrations of the obstacle itself and on vibrations of air in the pores of the surface layer of the obstacle. It follows that porous materials (felt, foam rubber, etc.) strongly absorb sound. In a room filled with spectators, sound absorption is greater than in an empty one. The degree of reflection and absorption of sound by a substance is characterized by the coefficients of reflection and absorption. These coefficients can range from zero to one. A coefficient equal to one indicates ideal sound reflection or absorption.

If the sound source is in a room, the listener receives not only direct sound energy but also energy reflected from various surfaces. The loudness of sound in the room depends on the power of the source and on the amount of sound-absorbing material: the more absorbing material the room contains, the lower the loudness.

After the sound source is switched off, the sound field persists for some time owing to reflections of sound energy from the various surfaces. The gradual decay of sound in an enclosed space after its source is switched off is called reverberation. The duration of reverberation is characterized by the reverberation time, i.e. the time during which the sound intensity decreases by a factor of 10⁶ and its level by 60 dB. For example, if an orchestra in a concert hall reaches a level of 100 dB against about 40 dB of background noise, the final chords of the orchestra will fade into the noise when their level has dropped by about 60 dB. Reverberation time is the most important factor determining the acoustic quality of a room: it increases with the room's volume and decreases with the absorption at its bounding surfaces.
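A widely used estimate of the 60 dB decay time (not stated explicitly in the text above) is Sabine's formula, RT60 = 0.161·V/A, where V is the room volume in m³ and A is the total equivalent absorption in m². A minimal sketch, with illustrative input values:

```python
def rt60_sabine(volume_m3, absorption_m2):
    """Sabine's estimate of the time for sound to decay by 60 dB:
    RT60 = 0.161 * V / A (V in cubic metres, A in square metres
    of equivalent absorption)."""
    return 0.161 * volume_m3 / absorption_m2

# A 5000 m^3 hall with 600 m^2 of equivalent absorption:
print(round(rt60_sabine(5000, 600), 2))  # about 1.3 s, near the stated optimum
```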

The reverberation time affects both speech intelligibility and the sound quality of music. If the reverberation time is too long, speech becomes slurred; if it is too short, speech remains intelligible but music sounds unnatural. The optimal reverberation time, depending on the volume of the room, is about 1–2 s.

Basic characteristics of sound.

Speed of sound. The speed of sound in air is 332.5 m/s at 0 °C. At room temperature (20 °C) it is about 340 m/s. The speed of sound is denoted by the symbol c.
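The temperature dependence of the speed of sound in air can be approximated by the well-known formula c ≈ 331.3·√(1 + T/273.15) m/s, with T in °C (the constant differs slightly from the 332.5 m/s quoted above, as published values vary a little):

```python
import math

def speed_of_sound(temp_celsius):
    """Approximate speed of sound in dry air, m/s:
    c = 331.3 * sqrt(1 + T / 273.15)."""
    return 331.3 * math.sqrt(1 + temp_celsius / 273.15)

print(round(speed_of_sound(0), 1))   # about 331 m/s at 0 C
print(round(speed_of_sound(20), 1))  # about 343 m/s at room temperature
```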

Frequency. The sounds perceived by the human auditory analyzer form a range of sound frequencies. It is generally accepted that this range is limited to frequencies from 16 to 20,000 Hz. These boundaries are very conditional, which is associated with the individual characteristics of people's hearing, age-related changes in the sensitivity of the auditory analyzer and the method of recording auditory sensations. A person can distinguish a frequency change of 0.3% at a frequency of about 1 kHz.

The physical concept of sound covers both audible and inaudible vibration frequencies. Sound waves with frequencies below 16 Hz are conventionally called infrasound, and those above 20 kHz, ultrasound. The region of infrasonic frequencies is practically unbounded from below: in nature, infrasonic vibrations occur with frequencies of tenths and even hundredths of a hertz.

The sound range is conventionally divided into several narrower ranges (Table 1).

Table 1. Conventional subranges of the audible frequency range

Sound intensity (W/m²) is the amount of energy carried by the wave per unit time through a unit of surface area perpendicular to the direction of wave propagation. The human ear perceives sound over a very wide range of intensities, from the faintest audible sounds to the loudest, such as those produced by a jet aircraft engine.

The minimum sound intensity at which an auditory sensation arises is called the hearing threshold. It depends on the frequency of the sound (Fig. 7). The human ear is most sensitive to sound in the frequency range from 1 to 5 kHz, where the threshold of auditory perception takes its lowest value, 10⁻¹² W/m². This value is taken as the zero level of audibility. Under the action of noise and other sound stimuli, the hearing threshold for a given sound rises (sound masking is the physiological phenomenon whereby, when two or more sounds of different loudness are perceived simultaneously, the quieter sounds cease to be audible), and the raised value persists for some time after the interfering factor ceases, then gradually returns to its original level. The hearing threshold varies between people, and for the same person over time, depending on age, physiological state, and training.

Fig. 7. Frequency dependence of the standard hearing threshold for a sinusoidal signal

High-intensity sounds cause a sensation of pressing pain in the ears. The minimum sound intensity at which this sensation arises (~10 W/m²) is called the pain threshold. Like the threshold of auditory perception, the pain threshold depends on the frequency of the sound vibrations. Sounds approaching the pain threshold are harmful to hearing.

A normal sensation of sound is possible if the sound intensity is between the threshold of hearing and the threshold of pain.

It is convenient to evaluate sound by its intensity (sound pressure) level L, calculated by the formula:

L = 10 lg(J / J₀) dB,

where J₀ is the hearing-threshold intensity and J is the intensity of the sound being evaluated (Table 2).
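The formula can be checked against the values in Table 2. A minimal sketch in Python:

```python
import math

J0 = 1e-12  # hearing-threshold intensity, W/m^2

def intensity_level(intensity):
    """Intensity level L = 10 * lg(J / J0), in dB."""
    return 10 * math.log10(intensity / J0)

print(round(intensity_level(1e-12)))  # hearing threshold: 0 dB
print(round(intensity_level(1e-6)))   # calm conversation: 60 dB
print(round(intensity_level(10)))     # pain threshold: 130 dB
```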

Table 2. Characteristic sounds, their intensities, and their intensity levels relative to the threshold of auditory perception

Sound characteristic | Intensity (W/m²) | Level above hearing threshold (dB)
Hearing threshold | 10⁻¹² | 0
Heart sounds heard through a stethoscope | 10⁻¹¹ | 10
Whisper | 10⁻¹⁰–10⁻⁹ | 20–30
Speech in calm conversation | 10⁻⁷–10⁻⁶ | 50–60
Heavy street traffic | 10⁻⁵–10⁻⁴ | 70–80
Rock music concert | 10⁻³–10⁻² | 90–100
Near a running aircraft engine | 0.1–1.0 | 110–120
Pain threshold | ~10 | ~130

Our auditory system is capable of handling a huge dynamic range. The changes in air pressure caused by the quietest perceptible sounds are on the order of 2×10⁻⁵ Pa, while the sound pressure at levels approaching the pain threshold is about 20 Pa. As a result, the ratio between the quietest and loudest sounds we can perceive is 1:1,000,000. Measuring signals of such different levels on a linear scale is quite inconvenient.

To compress such a wide dynamic range, the concept of the bel was introduced. A bel is the base-10 logarithm of the ratio of two powers, and a decibel is one tenth of a bel.

To express acoustic pressure in decibels, the pressure (in pascals) must be squared and divided by the square of the reference pressure, because decibels compare powers, and power is proportional to the square of pressure. By a property of logarithms, the squaring can conveniently be moved outside the logarithm as a factor of two.

To convert acoustic pressure to decibels, the following formula is used:

SPL = 20 lg(P / P₀) dB,

where P is the acoustic pressure of interest and P₀ is the reference pressure.

When 2×10⁻⁵ Pa is taken as the reference pressure, the sound pressure expressed in decibels is called the sound pressure level (SPL). Thus, a sound pressure of 3 Pa is equivalent to a sound pressure level of 103.5 dB: 20 lg(3 / 2×10⁻⁵) ≈ 103.5 dB.
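The same conversion in code, reproducing the 3 Pa example:

```python
import math

P0 = 2e-5  # reference sound pressure, Pa

def spl(pressure_pa):
    """Sound pressure level: SPL = 20 * lg(P / P0), in dB."""
    return 20 * math.log10(pressure_pa / P0)

print(round(spl(3), 1))   # 3 Pa: 103.5 dB, as in the text
print(round(spl(20), 1))  # roughly the pain threshold: 120.0 dB
```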

The acoustic dynamic range described above can be expressed in decibels as sound pressure levels from 0 dB for the quietest sounds, through 120 dB at the pain threshold, up to 180 dB for the loudest. At 140 dB severe pain is felt, and at 150 dB the ears are damaged.

Loudness is a quantity characterizing the auditory sensation produced by a given sound. Loudness depends in a complex way on the sound pressure (or sound intensity), the frequency, and the waveform. At constant frequency and waveform, loudness increases with sound pressure (Fig. 8). The loudness of a sound at a given frequency is estimated by comparing it with the loudness of a simple tone at 1000 Hz: the sound pressure level (in dB) of a pure 1000 Hz tone that sounds as loud (by ear) as the sound being measured is called the loudness level of that sound, expressed in phons (Fig. 8).

Fig. 8. Equal-loudness curves: the dependence of sound pressure level (in dB) on frequency at a given loudness (in phons)

Spectrum of sound.

The nature of the perception of sound by the organs of hearing depends on its frequency spectrum.

Noises have a continuous spectrum, i.e. the frequencies of the simple sinusoidal oscillations they contain form a continuous series of values that completely fill a certain interval.

Musical (tonal) sounds have a line spectrum of frequencies. The frequencies of the simple harmonic oscillations included in them form a series of discrete values.

Each harmonic vibration is called a tone (a simple tone). Pitch is determined by frequency: the higher the frequency, the higher the tone. A smooth change in the frequency of sound vibrations from 16 to 20,000 Hz is perceived at first as a low-frequency hum, then as a whistle, gradually turning into a squeak.

The fundamental tone of a complex musical sound is the tone corresponding to the lowest frequency in its spectrum. The tones corresponding to the remaining frequencies in the spectrum are called overtones. If the overtone frequencies are integer multiples of the fundamental frequency f₀, the overtones are called harmonic: the fundamental tone with frequency f₀ is the first harmonic, the overtone with the next higher frequency 2f₀ is the second harmonic, and so on.
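The harmonic series is easy to generate. A short sketch (the 110 Hz fundamental is just an example):

```python
def harmonics(f0, count):
    """Frequencies of the first `count` harmonics of fundamental f0 (Hz):
    the n-th harmonic has frequency n * f0."""
    return [n * f0 for n in range(1, count + 1)]

# First five harmonics of a 110 Hz fundamental (the note A2):
print(harmonics(110, 5))  # [110, 220, 330, 440, 550]
```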

Musical sounds with the same fundamental tone can differ in timbre. Timbre is determined by the composition of the overtones (their frequencies and amplitudes), as well as by the way the amplitudes rise at the beginning of the sound and decay at its end.
