8.5.1 Sampling frequency
There are many audio codecs, each supporting different sampling frequencies and levels of
compression. The sampling frequency is the number of times per second a sample of the
analog audio signal is taken, expressed in hertz (Hz). In general, the higher the sampling
frequency, the better the audio quality, but also the greater the bandwidth and storage needs.
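The bandwidth cost of a higher sampling frequency can be seen in the standard formula for an uncompressed (PCM) stream: data rate = sampling frequency × bits per sample × channels. A minimal sketch (the 16-bit mono defaults are illustrative, not from the text):

```python
def pcm_bit_rate(sampling_hz: int, bits_per_sample: int = 16, channels: int = 1) -> int:
    """Raw bit rate in bit/s before any codec compression is applied."""
    return sampling_hz * bits_per_sample * channels

# Doubling the sampling frequency doubles the uncompressed data rate:
for hz in (8_000, 16_000, 48_000):
    print(f"{hz} Hz -> {pcm_bit_rate(hz) // 1000} kbit/s uncompressed")
```

A codec then compresses this raw stream down to the configured bit rate, which is why sampling frequency and bit rate are separate settings.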
8.5.2 Bit rate
The bit rate is an important audio setting since it determines the level of compression and,
thereby, the quality of the audio. In general, the higher the compression level (the lower the bit
rate), the lower the audio quality. Differences in audio quality between codecs may be
particularly noticeable at high compression levels (low bit rates), but not at low compression
levels (high bit rates). Higher compression levels may also introduce more latency, but they
enable greater savings in bandwidth and storage.
The bit rates most often selected with audio codecs are between 32 kbit/s and 64 kbit/s. Audio
bit rates, like video bit rates, must be taken into account when calculating total bandwidth
and storage requirements.
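The storage contribution of an audio stream follows directly from its bit rate and recording time. A minimal sketch of that calculation (the function name and the 24-hour scenario are illustrative):

```python
def audio_storage_mb(bit_rate_kbps: float, hours: float) -> float:
    """Storage in megabytes for a continuous audio stream at a given bit rate."""
    total_bits = bit_rate_kbps * 1000 * hours * 3600  # kbit/s -> bits over the period
    return total_bits / 8 / 1_000_000                 # bits -> bytes -> megabytes

# A 64 kbit/s stream recorded around the clock for one day:
print(f"{audio_storage_mb(64, 24):.1f} MB")  # 691.2 MB
```

At the commonly selected 32-64 kbit/s range, a single always-on audio channel therefore adds roughly 0.3-0.7 GB per day to the storage budget.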
8.5.3 Audio codecs
Axis network video products support three audio codecs. The first is AAC-LC (Advanced Audio
Coding - Low Complexity), also known as MPEG-4 AAC, which requires a license. AAC-LC,
particularly at a sampling frequency of 16 kHz or higher and a bit rate of 64 kbit/s, is the
recommended codec when the best possible audio quality is required. The other two codecs,
G.711 and G.726, are non-licensed technologies.
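The trade-off above can be summarized as a small lookup. The G.711 and G.726 bit-rate figures below are the usual ITU-T values, not stated in the text, and the selection helper is purely illustrative:

```python
# Codec summary; "typical_kbps" for G.711/G.726 are standard ITU-T figures
# (an assumption, not taken from the text above).
CODECS = {
    "AAC-LC": {"licensed": True,  "typical_kbps": 64, "note": "best quality at >= 16 kHz sampling"},
    "G.711":  {"licensed": False, "typical_kbps": 64, "note": "companded PCM, fixed rate"},
    "G.726":  {"licensed": False, "typical_kbps": 32, "note": "ADPCM, lower bandwidth"},
}

def pick_codec(need_best_quality: bool, license_ok: bool) -> str:
    """Illustrative selection: AAC-LC for quality, otherwise an unlicensed codec."""
    if need_best_quality and license_ok:
        return "AAC-LC"
    return "G.726"  # lower-bandwidth unlicensed fallback

print(pick_codec(need_best_quality=True, license_ok=True))
```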
8.6 Audio and video synchronization
Synchronization of audio and video data is handled by a media player (a software program
used for playing back multimedia files) or by a multimedia framework such as Microsoft
DirectX, a collection of application programming interfaces for handling multimedia.
Audio and video are sent over a network as two separate packet streams. For the client or
player to synchronize the audio and video streams perfectly, the audio and video packets
must be timestamped. Timestamping of video packets using Motion JPEG compression may
not always be supported in a network camera. If this is the case and synchronized video and
audio is important, the video format to choose is MPEG-4 or H.264, since such video
streams, along with the audio stream, are sent using RTP (Real-time Transport Protocol),
which timestamps the video and audio packets. There are many situations, however, where
synchronized audio is less important or even undesirable; for example, if audio is to be
monitored but not recorded.
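The RTP timestamping mentioned above can be sketched as follows. RTP timestamps count ticks of a per-stream clock (90 kHz for video; typically the sampling frequency for audio), so a player converts each stream's timestamps to seconds before aligning them. In practice, RTCP sender reports are also needed to map each stream's clock onto a common wall clock; this sketch, with illustrative function names, shows only the timestamp conversion:

```python
def rtp_to_seconds(timestamp: int, first_timestamp: int, clock_rate_hz: int) -> float:
    """Convert an RTP timestamp to seconds elapsed since the first packet."""
    return (timestamp - first_timestamp) / clock_rate_hz

# A video packet 90,000 ticks in (90 kHz clock) and an audio packet
# 8,000 ticks in (8 kHz clock) both represent t = 1.0 s, so the player
# schedules them for the same presentation instant.
video_t = rtp_to_seconds(90_000, 0, 90_000)
audio_t = rtp_to_seconds(8_000, 0, 8_000)
print(video_t == audio_t)  # True
```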
AUDIO - CHAPTER 8