different sizes and frame rates) or for optimizing iCAT video analytics performance (see 15.2.1 Considerations for setting up a system with iCAT on page 142).
MJPEG cameras can usually deliver several MJPEG streams, while MPEG cameras (MPEG-4, H.264, and MxPEG) can usually deliver only one or two MPEG streams; some camera types can deliver several MJPEG streams in addition to the MPEG stream(s).
However, there are a few important restrictions with multi-streaming:

• Some cameras have performance limitations in providing multiple streams, depending on the streaming format, resolution, and frame rate. Some cameras simply stop streaming when their streaming processors are overloaded by certain resolution and frame rate settings. Please refer to the camera data sheet and documentation.

• In the current version, Observer supports one format setting for MPEG streams (MPEG-4, H.264, and MxPEG) and multiple format settings for MJPEG streams.
Please note: Refer to the camera data sheet and documentation for camera limitations. The document NETAVIS Observer Supported Video Sources may also provide further details on camera restrictions.
1.4.2 Motion JPEG
A network camera captures individual images and compresses them into JPEG format. The network camera can capture and compress, for example, 30 such individual images per second (30 fps) and make them available as a continuous flow of images over a network to an Observer server, which then distributes the stream to Observer clients and / or stores it in the camera archive. At a frame rate of about 16 fps and above, the viewer perceives full motion video.
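
To illustrate the principle, the following sketch shows how a client could read individual JPEG frames from an MJPEG stream delivered over HTTP. It is written in Python using the common requests library; the camera URL is a hypothetical placeholder, and Observer performs this work internally, so the sketch is purely illustrative.

import requests

STREAM_URL = "http://camera.example.com/mjpeg"  # hypothetical camera endpoint

def read_mjpeg_frames(url):
    """Yield complete JPEG images from a multipart MJPEG HTTP stream."""
    buffer = b""
    with requests.get(url, stream=True, timeout=10) as response:
        response.raise_for_status()
        for chunk in response.iter_content(chunk_size=4096):
            buffer += chunk
            while True:
                # Every JPEG image starts with FF D8 and ends with FF D9.
                start = buffer.find(b"\xff\xd8")
                end = buffer.find(b"\xff\xd9", start + 2)
                if start == -1 or end == -1:
                    break
                yield buffer[start:end + 2]   # one complete JPEG image
                buffer = buffer[end + 2:]     # keep the leftover bytes

for number, frame in enumerate(read_mjpeg_frames(STREAM_URL)):
    print("frame", number, "-", len(frame), "bytes")
    if number >= 29:  # stop after about one second at 30 fps
        break

Each yielded frame is a self-contained JPEG image, which is why the stream can be cut apart, displayed, or archived frame by frame.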
As each individual image is a complete JPEG-compressed image, all images have the same guaranteed quality, determined by the compression level defined for the network camera or network video server.
Example of a sequence of three complete JPEG images:
1.4.3 MPEG (MPEG-4, H.264, and MxPEG)
Some of the best-known audio and video streaming techniques are defined by the MPEG consortium (Moving Picture Experts Group). Under the MPEG umbrella, several streaming methods are available, such as MPEG-4, H.264, and MxPEG (strictly speaking, MxPEG is not part of the standards defined by the MPEG group but a proprietary format of the company Mobotix; for simplicity, however, we also refer to MxPEG as an MPEG format). MPEG-4 and H.264 are well-known and widely supported MPEG streaming standards.
Simply described, MPEG’s basic principle is to compare successive compressed images to be transmitted over the network: the first compressed image serves as a reference image (called an I-frame), and for the following images (B- and P-frames) only the parts that differ from the reference image are sent. A viewing client then reconstructs all images from the reference image and the “difference data”.
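
The following toy sketch (again in Python, with NumPy) illustrates this principle with plain arrays standing in for video frames; it is not a real codec, only a demonstration of why sending differences against a reference image saves bandwidth.

import numpy as np

rng = np.random.default_rng(seed=0)

# Reference image (I-frame): an 8 x 8 grayscale frame.
i_frame = rng.integers(0, 256, size=(8, 8), dtype=np.int16)

# The next frame is identical except for a small region that "moved".
next_frame = i_frame.copy()
next_frame[2:4, 2:4] += 10

# Only the difference against the reference is "transmitted"
# (the role played by P- and B-frames).
difference = next_frame - i_frame
print("changed pixels:", np.count_nonzero(difference), "of", difference.size)

# The viewing client reconstructs the full frame from the
# reference image plus the difference data.
reconstructed = i_frame + difference
assert np.array_equal(reconstructed, next_frame)

Because the difference array is mostly zeros, it compresses far better than a complete image, which is the source of MPEG’s bandwidth savings compared with Motion JPEG.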