Latency is the time that passes between user action and the resulting response. Network latency refers specifically to delays that take place within a network, or on the Internet.

In practical terms, latency is the time between user action and the response from the website or application to this action – for instance, the delay between when a user clicks a link to a webpage and when the browser displays that webpage.

What Are Typical Values For Latency?

Typical approximate latency values you might experience include:

  • 800ms for satellite
  • 120ms for 3G cellular data
  • 60ms for 4G cellular data which is often used for 4G WAN and internet connections
  • 20ms for an MPLS network such as BT IP Connect, when using Class of Service to prioritize traffic
  • 10ms for a modern Carrier Ethernet network such as BT Ethernet Connect or BT Wholesale Ethernet in the UK

More precisely, latency is the length of time it takes for data fed into one end of a network to emerge at the other end. In practice, we usually measure the round-trip time: how long it takes data to reach the far end and come back again.
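One rough way to observe round-trip time yourself is to time a TCP handshake, which requires one round trip to complete. The sketch below is illustrative, not a precise measurement tool; the host and port are placeholders, and the result also includes small overheads from the operating system and Python itself.

```python
import socket
import time

def measure_rtt(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Approximate round-trip latency in milliseconds by timing a TCP handshake.

    The TCP three-way handshake takes one round trip to establish the
    connection, so the elapsed time is a rough proxy for network RTT.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.perf_counter() - start
    return elapsed * 1000  # convert seconds to milliseconds

# Example usage (hostname is illustrative):
# print(f"RTT: {measure_rtt('example.com'):.1f} ms")
```

Dedicated tools such as `ping` (which uses ICMP rather than TCP) give more accurate figures, but the principle is the same: send something, wait for the reply, and measure the elapsed time.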

Although data on the Internet travels at close to the speed of light, the effects of distance and the delays introduced by network infrastructure equipment mean that latency can never be eliminated completely. It can and should, however, be minimized. High latency results in poor website performance, negatively affects SEO, and can drive users to abandon the site or application altogether.
