We are often asked about what is acceptable network performance and what can be done to improve things when performance is sub-par. The unfortunate answer is that there is not a Silver Bullet to all network issues but armed with enough knowledge and Lead Bullets you can make your customers’ networks hum.


Definition of Latency:

For the purpose of our discussions, we will consider Network Latency to be the time it takes for a packet to travel from one device to another.  Latency is much like the time it takes for your voice to travel from your mouth to the ear of the person you are speaking with.

Where Does Latency Come From:

Latency is a cumulative effect of the individual latencies along the end-to-end network path.  This includes every device along the path between the two endpoints (such as a switch or access point).  Every segment, or hop, represents another opportunity to introduce additional latency into the network.

Network routers tend to create the most latency of any device on the end-to-end path.  Additionally, packet queuing due to link congestion is often the culprit for large amounts of latency.  When a switch, access point, or router becomes loaded, the time it takes to process each packet increases, driving up latency.  Some network technologies, such as satellite communications, add large amounts of latency simply because of the time it takes a packet to travel across the link.  Since latency is cumulative, the more links and router hops there are, the larger the end-to-end latency will be.

What Happens With High Latency:

TCP (Transmission Control Protocol) traffic represents the majority of the traffic on your local network.  TCP is a “guaranteed” delivery protocol: the receiving device sends back acknowledgment packets to let the sender know that its data arrived.  If the sender does not receive an acknowledgment within a certain period of time, it resends the “lost” packets.

For simplicity, let’s call the amount of data the sender is allowed to have “in flight” (sent but not yet acknowledged) the “window size.”  Once a full window has been sent, the sender must stop and wait for acknowledgments before it can send any new information.  The window size is adjusted over time, but how quickly those acknowledgments come back is governed by the latency between the two devices.

As latency increases, the sending device spends more and more time waiting on acknowledgments rather than sending packets!
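The effect can be seen with a little arithmetic.  Because TCP moves at most one window of data per round trip, the throughput ceiling is roughly the window size divided by the round-trip time.  Here is a minimal sketch of that calculation; the 64 KB window is an assumption (a common value when TCP window scaling is not in play), not a measurement:

```python
# Rough TCP throughput ceiling: one window of data per round trip.
# WINDOW_BYTES = 64 KB is an assumed value, not a measured one.
WINDOW_BYTES = 64 * 1024

def max_throughput_mbps(rtt_ms, window_bytes=WINDOW_BYTES):
    """Upper bound on TCP throughput (Mbps) for a given round-trip time."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1_000_000

for rtt in (30, 60, 90):
    print(f"{rtt} ms RTT -> {max_throughput_mbps(rtt):.1f} Mbps ceiling")
```

Notice that the link speed does not even appear in the formula: past a certain latency, the window, not the wire, is the bottleneck.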

But Does It Really Affect Anything?

Since the sender can only move one window of data per round trip, there is a direct inverse relationship between latency and throughput on the network.  Let’s look at an example of two devices that are directly connected via a 100 Mbps Ethernet network (nothing in between).  The theoretical maximum throughput of this network is 100 Mbps.  Now take a look at what happens to that throughput as latency increases.  The results were obtained by placing a latency generator between the two devices.

Round trip latency      TCP Throughput
0 ms                    93.5 Mbps
30 ms                   16.2 Mbps
60 ms                   8.07 Mbps
90 ms                   5.32 Mbps

Notice how drastic the drop in throughput is with round trip times as low as 30ms!
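Another way to read the table is through the bandwidth-delay product: how large would the window have to be to keep the 100 Mbps link full at each latency?  A quick sketch of that calculation (the 100 Mbps figure comes from the test above; everything else is simple arithmetic):

```python
# Bandwidth-delay product: the window size needed to keep a link
# of a given capacity full at a given round-trip time.
LINK_MBPS = 100  # link speed from the test setup above

def window_needed_kb(rtt_ms, link_mbps=LINK_MBPS):
    """Window (in KB) required to fill the link at this RTT."""
    bits_in_flight = link_mbps * 1_000_000 * (rtt_ms / 1000.0)
    return bits_in_flight / 8 / 1024

for rtt in (30, 60, 90):
    print(f"{rtt} ms RTT -> ~{window_needed_kb(rtt):.0f} KB window to fill 100 Mbps")
```

At 90 ms you would need a window of well over a megabyte to fill the pipe, which is far beyond what many devices negotiate; hence the drop-off in the table.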

It Gets Worse!

Remember when I mentioned that some packets become “lost”?  These lost packets have to be resent, thus increasing the amount of data that must be transmitted.  Packet loss will cause the sender to sit idle for longer periods of time waiting for the acknowledgments to come back from the receiver.  The packets that get lost might even be the acknowledgment back from the receiver, meaning that the sender will be re-sending information that was already sent successfully.  The simple result is a further significant decrease in throughput.

Taking the same test system from above and introducing 2% packet loss through a packet loss generator gives you the following results.

Round trip latency      TCP Throughput with no packet loss      TCP Throughput with 2% packet loss
0 ms                    93.50 Mbps                              3.72 Mbps
30 ms                   16.20 Mbps                              1.63 Mbps
60 ms                   8.07 Mbps                               1.33 Mbps
90 ms                   5.32 Mbps                               0.85 Mbps

To put this into perspective, Netflix recommends 25 Mbps in order to get a single stream of 4K HDR quality video (1.5 Mbps is the minimum recommended).  This means that with just a 30 ms round trip time, users will be BELOW the recommendation!  (Even if nothing else is happening on the network)
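The shape of the loss column matches a well-known rule of thumb, the Mathis model, which says TCP throughput scales with MSS / (RTT × √loss).  A hedged sketch follows; the 1460-byte MSS and the constant 1.22 are textbook assumptions, not values from this test rig, so expect the right shape rather than the exact measured numbers:

```python
import math

# Mathis et al. approximation of TCP throughput under random loss:
#   throughput <= (MSS / RTT) * (C / sqrt(p))
# MSS = 1460 bytes and C = 1.22 are textbook assumptions,
# not measurements from the test setup above.
MSS_BYTES = 1460
C = 1.22

def mathis_mbps(rtt_ms, loss_rate):
    """Approximate TCP throughput (Mbps) under a given loss rate."""
    rtt_s = rtt_ms / 1000.0
    bps = (MSS_BYTES * 8 / rtt_s) * (C / math.sqrt(loss_rate))
    return bps / 1_000_000

for rtt in (30, 60, 90):
    print(f"{rtt} ms, 2% loss -> ~{mathis_mbps(rtt, 0.02):.2f} Mbps")
```

The key takeaway is the square root: cutting packet loss from 2% to 0.5% roughly doubles achievable throughput, independent of anything else on the network.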


What Network Performance Should We See?

This is a very difficult question to answer with a blanket rule.  There are some situations where increased latency is unavoidable.  What is critical is that you are monitoring that latency and packet loss so that you can identify what is typical and respond to issues quickly.

Finally, what you have been waiting for – guidelines for acceptable performance on your networks:

  • Latency on a local area wired Ethernet network should be in the single-digit milliseconds, 1-2 ms
  • Wireless networks often have higher latency and packet loss. Maximize signal strength and coverage, and minimize RF interference, to keep latency and packet loss to a minimum.
  • A round trip latency of 30ms or less is healthy on a typical broadband WAN connection (fiber has much lower latency)
  • Round trip latency between 30ms and 50ms should be monitored closely.  Consider looking deeper at the network for potential issues.
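The guidelines above can be encoded as a simple check for your monitoring scripts.  A minimal sketch; the label strings are my own illustrative names, not Ihiji terminology:

```python
def classify_wan_latency(rtt_ms):
    """Bucket a WAN round-trip time per the guidelines above.

    The labels are illustrative, not product terminology.
    """
    if rtt_ms <= 30:
        return "healthy"
    if rtt_ms <= 50:
        return "monitor closely"
    return "investigate"

print(classify_wan_latency(22))
print(classify_wan_latency(45))
print(classify_wan_latency(80))
```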

What Can I Do To Lower Latency?

A full answer is a bit outside the scope of this post, but here are some pointers to get you started.

If you see high latency through Ihiji invision, first identify the full path of communication between the Ihiji appliance and the device in question.  Next, look at the reported latency between the appliance and each of the devices in that path.  If one or more devices appear to be contributing significantly to the latency, begin devising strategies to test possible changes that might improve performance (e.g., firmware, wireless signal).

At this point, you may want to use a computer on the local network to do more granular tests like ping and traceroute.
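If you would rather script the measurement than eyeball ping output, the round trip of a TCP handshake gives a comparable number from any computer on the local network.  A minimal sketch; the gateway address and port in the example are placeholders for your own devices:

```python
import socket
import time

def tcp_rtt_ms(host, port, timeout=2.0):
    """Time a TCP connect() to approximate one network round trip, in ms."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection closed immediately; we only want the handshake time
    return (time.perf_counter() - start) * 1000

# Example: measure against a device on the local network.
# "192.168.1.1" and port 80 are placeholders for your own gateway.
# print(f"RTT: {tcp_rtt_ms('192.168.1.1', 80):.1f} ms")
```

Note that this includes connection-setup overhead on the target device, so treat it as a rough complement to ping and traceroute, not a replacement.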

Want more training on this topic? Contact your account manager for access to our technical network training webinars, or catch us on the road at the many industry training events we conduct year-round.

