Customers regularly ask us what acceptable network performance looks like and what can be done to improve things when performance is sub-par.  The unfortunate answer is that there is no silver bullet for all network issues.  Instead, let’s start by understanding latency a bit better.

“Netflix recommends 3.0 Mbps in order to get DVD quality video (1.5Mbps is the minimum recommended).  This means that with even 2% packet loss and only a 60ms round trip time, users will be BELOW the recommendation!  (Even if nothing else is happening on the network)”

Definition of Latency:

For the purpose of our discussions, we will consider Network Latency to be the time it takes for a packet to travel from one device to another.  Latency is much like the time it takes for your voice to travel from your mouth to the ear of the person you are speaking with.

A more complete definition of latency would take into account many factors that are beyond the scope of this exercise, including “jitter,” which measures how much variation there is in packet delay over time.

Where Does Latency Come From:

Latency is the cumulative effect of the individual delays along the end-to-end network path.  That path includes every network segment and device (like a switch or access point) between the two endpoints.  Every segment represents another opportunity to introduce additional latency into the network.

Network routers are the devices that create the most latency of any device on the end-to-end path.  Additionally, packet queuing due to link congestion is often the culprit for large amounts of latency.  Some types of network technology, such as satellite communications, add large amounts of latency because of the time it takes for a packet to travel across the link.  Since latency is cumulative, the more links and router hops there are, the larger the end-to-end latency will be.
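To make that cumulative effect concrete, here is a tiny sketch that simply adds up a delay for each segment along a hypothetical path.  The per-segment numbers are made-up, order-of-magnitude illustrations, not measurements:

    # Hypothetical one-way delay contributed by each segment along a path.
    # These values are illustrative only; real numbers vary widely.
    segment_delay_ms = {
        "laptop -> access point (Wi-Fi)": 2.0,
        "access point -> switch": 0.5,
        "switch -> router": 0.5,
        "router queuing under congestion": 20.0,
        "ISP links to the far end": 15.0,
    }

    one_way_ms = sum(segment_delay_ms.values())
    print(f"one-way latency  ~ {one_way_ms:.1f} ms")
    print(f"round trip time  ~ {2 * one_way_ms:.1f} ms")

Remove any one of those segments (or the congestion behind it) and the end-to-end number drops accordingly.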

What Happens With High Latency:

Bear with me for a second as I get a bit technical…

TCP (Transmission Control Protocol) traffic represents the majority of network traffic on your local network.  TCP is a “guaranteed” delivery protocol, meaning that the receiving device sends back an acknowledgment packet to let the sender know that the information arrived.  If the sender does not receive an acknowledgment within a certain period of time, it will resend the “lost” packets.

For simplicity, let’s call the amount of data the sender is allowed to have “in flight” (sent but not yet acknowledged) the “window size.”  Once the window is full, the sender stops sending new information and waits for acknowledgments to come back.  The window size is adjusted over time, but at any given moment the sender can move at most one window’s worth of data per round trip.

As latency increases, the sending device spends more and more time waiting on acknowledgments rather than sending packets!
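Put another way, throughput is capped at roughly the window size divided by the round trip time.  Here is a minimal sketch of that ceiling, assuming a classic 64 KB TCP window with no window scaling (an assumption for illustration, not a property of any particular system):

    # Throughput ceiling imposed by latency: at most one window per round trip.
    WINDOW_BYTES = 65_535  # classic TCP window without window scaling (assumed)

    def max_throughput_mbps(rtt_ms):
        """Upper bound on TCP throughput, in Mbps, for a given round trip time."""
        rtt_s = rtt_ms / 1000.0
        return (WINDOW_BYTES * 8) / rtt_s / 1_000_000

    for rtt in (30, 60, 90):
        print(f"{rtt} ms RTT -> at most {max_throughput_mbps(rtt):.1f} Mbps")

Notice that the link speed never appears in that calculation; once the window is full, latency alone sets the ceiling.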

But Does It Really Affect Anything?

Because the sender can move at most one window’s worth of data per round trip, there is an inverse relationship between latency and throughput on the network.  Let’s look at an example of two devices that are directly connected via a 100 Mbps Ethernet network (nothing in between).  The theoretical maximum throughput of this network is 100 Mbps.  Let’s take a look at what happens to that throughput as latency increases.  The results were obtained by placing a latency generator between the two devices.

Round trip latency    TCP Throughput
0 ms                  93.5 Mbps
30 ms                 16.2 Mbps
60 ms                 8.07 Mbps
90 ms                 5.32 Mbps

Notice how drastic the drop in throughput is with round trip times as low as 30ms!
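Those measurements line up with the simple one-window-per-round-trip ceiling sketched above.  Assuming a 64 KB window, 65,535 bytes × 8 bits works out to roughly 17.5 Mbps at 30ms, 8.7 Mbps at 60ms, and 5.8 Mbps at 90ms.  The measured numbers sit slightly lower, most likely because of protocol overhead and the time TCP spends ramping its window up.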

It Gets Worse!

Remember when I mentioned that some packets become “lost”?  These lost packets have to be resent, increasing the amount of data that must be transmitted.  Packet loss also causes the sender to sit idle for longer periods of time waiting for acknowledgments to come back from the receiver.  The packets that get lost might even be the acknowledgments themselves, meaning that the sender will re-send information that was already received successfully.  The net result is a further, significant decrease in throughput.

Taking the same test system from above and introducing 2% packet loss through a packet loss generator gives you the following results.

Round trip latency    TCP Throughput with no packet loss    TCP Throughput with 2% packet loss
0 ms                  93.50 Mbps                            3.72 Mbps
30 ms                 16.20 Mbps                            1.63 Mbps
60 ms                 8.07 Mbps                             1.33 Mbps
90 ms                 5.32 Mbps                             0.85 Mbps

To put this into perspective, Netflix recommends 3.0 Mbps in order to get DVD quality video (1.5Mbps is the minimum recommended).  This means that with even 2% packet loss and only a 60ms round trip time, users will be BELOW the recommendation!  (Even if nothing else is happening on the network)
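A widely cited rule of thumb for TCP under sustained packet loss is the Mathis approximation: throughput is limited to roughly the maximum segment size divided by the round trip time, divided again by the square root of the loss rate.  The sketch below applies it with an assumed 1460-byte maximum segment size and the constant factor treated as 1, so treat the output as an order-of-magnitude estimate rather than a prediction for any specific network; it does land in the same ballpark as the measured numbers above.

    import math

    MSS_BITS = 1460 * 8  # assumed Ethernet-sized maximum segment size

    def loss_limited_mbps(rtt_ms, loss_rate):
        """Rough Mathis-style ceiling on TCP throughput, in Mbps."""
        rtt_s = rtt_ms / 1000.0
        return MSS_BITS / (rtt_s * math.sqrt(loss_rate)) / 1_000_000

    for rtt in (30, 60, 90):
        print(f"{rtt} ms RTT with 2% loss -> about {loss_limited_mbps(rtt, 0.02):.2f} Mbps")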

What Latency Should We See?

This is a very difficult question to answer with a blanket rule.  There are some situations where increased latency is unavoidable.  What is critical is that you are monitoring that latency so that you can identify what is typical and respond to issues quickly.

Finally, what you have been waiting for – guidelines for acceptable latency on your local network:

  • A round trip latency of 30ms or less is healthy.
  • Round trip latency between 30ms and 50ms should be monitored closely.  Consider looking deeper at the network for potential issues.
  • Round trip latency over 50ms requires immediate attention to determine the cause of the latency and potential remedies.  Continue monitoring to track improvements.
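If you want to keep an eye on these numbers automatically, here is a minimal monitoring sketch using the thresholds above.  It approximates round trip latency by timing TCP connections, since plain Python cannot send ICMP pings without elevated privileges, and the device address and port are hypothetical placeholders for something on your network that accepts TCP connections:

    import socket
    import statistics
    import time

    DEVICE = "192.168.1.1"  # hypothetical device on the local network
    PORT = 80               # hypothetical open TCP port on that device
    SAMPLES = 10

    def sample_rtt_ms():
        """Time one TCP handshake as a rough stand-in for round trip latency."""
        start = time.perf_counter()
        with socket.create_connection((DEVICE, PORT), timeout=2):
            pass
        return (time.perf_counter() - start) * 1000.0

    rtts = [sample_rtt_ms() for _ in range(SAMPLES)]
    avg = statistics.mean(rtts)
    jitter = statistics.pstdev(rtts)  # variation in delay over time

    if avg <= 30:
        status = "healthy"
    elif avg <= 50:
        status = "monitor closely"
    else:
        status = "needs immediate attention"

    print(f"average RTT {avg:.1f} ms, jitter {jitter:.1f} ms -> {status}")

Run it on a schedule and log the output, and you will quickly learn what “typical” looks like for your network.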

 

What Can I Do To Lower Latency?

Answering this fully is beyond the scope of this post, but here are some pointers to get you started.

If you see high latency through Ihiji invision, you first need to identify the full path of communication between the Ihiji appliance and the device in question.  Next, you will want to look at the reported latency between the appliance and any of the devices that are in that path.  If one or more devices appear to be contributing significantly to the latency, you should begin to devise strategies to test possible changes that might improve performance (i.e., firmware, wireless signal, etc.).

At this point, you may want to use a computer on the local network to do more granular tests like ping and traceroute.
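For example, from a macOS or Linux computer on the same network (driven from Python here to keep the examples consistent; the address is a hypothetical placeholder, and Windows users would run ping -n and tracert instead):

    import subprocess

    DEVICE = "192.168.1.50"  # hypothetical address of the device in question

    # Ten ICMP echo requests; look at the min/avg/max round trip summary.
    subprocess.run(["ping", "-c", "10", DEVICE])

    # Round trip times to every hop along the path to the device.
    subprocess.run(["traceroute", DEVICE])

The traceroute output makes the cumulative, per-hop nature of latency easy to see: the hop where the times jump is usually where to start looking.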

 
