MG-IP TDM Over IP Gateway Reference Manual
MG-IP configuration supports one protocol at a time, and mixing of PSN headers is not
allowed. In other words, CES sessions may be SAToP, CESoPSN, or CESoETH sessions,
with no mixing allowed.
Jitter Buffer/Underrun/Overrun
As packets traverse the packet network, the arrival delay varies from packet to packet. To
accommodate this packet delay variation (PDV), the MG-IP uses a jitter buffer, whose main purpose is
to smooth out variation in CES frame arrival time. Data is played out of the jitter buffer onto the TDM
service at constant rate. The delay through this buffer needs to be as short as possible, in order to
reduce TDM service latency, but long enough to absorb known variation in the network packet delay
(PDV).
The MG-IP supports a jitter buffer that can be sized according to the maximum PDV expected for the
specific network in order to avoid underrun and overrun conditions.
An “overrun” condition occurs when the jitter buffer cannot accommodate the newly arrived
packet due to insufficient storage space. Packets are then discarded and counted as overrun
packets.
An “underrun” condition occurs when there is no correctly received CES payload ready to be
played out on the TDM interface, and filler packets are played out instead. This may occur
due to a frame getting lost on the Ethernet network, or discarded due to error conditions.
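The overrun and underrun behavior described above can be illustrated with a small model: a fixed-capacity buffer that discards and counts arrivals when full, and substitutes filler data when played out empty. This is a sketch for illustration only, not MG-IP internals; the class name, capacity parameter, and all-ones filler payload are assumptions.

```python
from collections import deque

class JitterBuffer:
    """Illustrative fixed-capacity jitter buffer (not MG-IP internals)."""
    FILLER = b"\xff"  # hypothetical all-ones filler payload

    def __init__(self, capacity_packets):
        self.buf = deque()
        self.capacity = capacity_packets
        self.overruns = 0
        self.underruns = 0

    def arrive(self, packet):
        """Overrun: no room for the newly arrived packet, so discard it."""
        if len(self.buf) >= self.capacity:
            self.overruns += 1
        else:
            self.buf.append(packet)

    def play_out(self):
        """Underrun: nothing to play on the TDM side, so emit filler."""
        if self.buf:
            return self.buf.popleft()
        self.underruns += 1
        return self.FILLER

jb = JitterBuffer(capacity_packets=2)
jb.arrive(b"p1"); jb.arrive(b"p2"); jb.arrive(b"p3")  # third arrival overruns
out = [jb.play_out() for _ in range(3)]               # third play-out underruns
print(jb.overruns, jb.underruns)  # 1 1
```

In the real gateway the corresponding counters are what the overrun and underrun statistics report.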
Typically, in order to minimize end-to-end delay, the maximum jitter is set to the lowest value possible,
given the conditions of the network. Based on this value, a number of packets received over the
network are buffered before bitstream transmission begins.
The number of packets in the jitter buffer is calculated based on the maximum jitter in milliseconds and
the packet payload length. For example, with a packet payload of three frames on an E1 circuit, one
packet is transmitted every 375 microseconds. If the maximum jitter setting is 10 milliseconds, the
MG-IP creates an initial 27-packet backlog (10 ms / 0.375 ms ≈ 26.7, rounded up to 27).
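The backlog calculation above can be reproduced directly, using the fact that one E1 frame lasts 125 microseconds. This is a sketch of the worked example only; the function name is an assumption, not an MG-IP command.

```python
import math

E1_FRAME_US = 125  # one E1 frame lasts 125 microseconds

def initial_backlog(max_jitter_ms, frames_per_packet):
    """Packets buffered before play-out begins, per the manual's
    worked example (illustrative, not guaranteed MG-IP behavior)."""
    packet_period_us = frames_per_packet * E1_FRAME_US
    return math.ceil(max_jitter_ms * 1000 / packet_period_us)

# Manual's example: 3 frames per packet -> one packet every 375 us;
# 10 ms maximum jitter -> 10000 / 375 = 26.7, rounded up.
print(initial_backlog(10, 3))  # 27
```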
While setting the jitter buffer size, the user configures the normal jitter buffer operating point. This
configuration should correspond to the previously measured PDV of the network.
The total size of the jitter buffer is actually larger than the configured operating point, to
accommodate the larger latencies that occur from time to time. However, if the delay persists beyond
what the total buffer size can absorb, the buffer will overflow and packets will be discarded.
In a low-PDV environment, the jitter buffer fill level varies by only one or two packets. When the
network PDV approaches the maximum jitter setting, the fill level may swing over a larger span.
Underrun and overrun occurrences indicate that the MG-IP parameters should be adjusted.
See Using the Get Status Command to Evaluate Performance on page 168.