It’s easy to forget that every online activity we undertake involves a two-way stream of information travelling around the world.
Even if you live in London, accessing the website of a local business with a British web hosting provider, the information you send and receive is likely to travel far and wide.
That’s because the internet is configured to distribute individual packets of digital data along the quickest route, rather than the shortest one.
The fastest data transit route from London to London at that precise instant might be via America, or through Europe.
And even though the internet’s backbone is largely composed of fibre optic cables carrying data at not far below the speed of light, these packets still have to negotiate various obstacles along the way – routers, switches and other network relays.
The technical function of these nodes and relays is a topic for another day. What matters is that each time data passes through one, it slows down slightly.
As a result, there is often a marked delay between asking a web-enabled device to do something and a response being returned.
This delay in data’s round-trip journey from device to server and back to device is known as latency. And in many industries, it can be a killer.
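Distance alone imposes a hard floor on latency, before any of those obstacles are counted. As a rough sketch (assuming light travels through fibre at roughly two-thirds of its speed in a vacuum, about 200,000 km/s, and using approximate great-circle distances), you can estimate the best-case round trip like this:

```python
# Rough lower bound on round-trip latency imposed by distance alone.
# Assumes signals travel through fibre at ~200,000 km/s (about two-thirds
# the speed of light in a vacuum) and ignores routing hops entirely.

SPEED_IN_FIBRE_KM_PER_S = 200_000  # approximate figure for optical fibre

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds over a given distance."""
    return (2 * distance_km / SPEED_IN_FIBRE_KM_PER_S) * 1000

# Illustrative distances (approximate, as the crow flies)
print(f"London to Paris:    {min_round_trip_ms(344):.1f} ms")    # ~3.4 ms
print(f"London to New York: {min_round_trip_ms(5_570):.1f} ms")  # ~55.7 ms
```

Note that a London-to-New-York round trip already exceeds 50ms before a single router has touched the data – which is why the routing detours described above matter so much.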
Examples of latency
Imagine playing an online game where split-second decision-making is crucial to success, like a war game.
Action is controlled by a central server, and what appears on end-user screens may depict events that actually happened a moment earlier – like the satellite delay on TV interviews.
(Indeed, satellite links are a particularly extreme example of latency, especially when the video and audio aren’t in perfect sync.)
If your internet connection is slow, by the time you’ve instructed your weapon to fire, the target may have moved.
And by the time you’ve turned around, a character that’s appeared on the game’s servers while your internet connection was processing previous events will have ended your game.
This is latency in action.
It also manifests through jerky or buffering online streams, delays in webpages responding to action requests, and other fractional – yet noticeable – delays.
Despite being measured in thousandths of a second, a latency of just 50ms can significantly affect the user experience.
Indeed, tech giants including Microsoft and Oracle have suggested 50ms is the largest acceptable period of latency from a provider’s perspective.
Streaming media services tackle the challenges of minimising latency with clever technology called adaptive bitrates.
This sends each fragment of programming at the highest picture quality the line speed can safely deliver, from 360p to 4K (depending on the maximum resolution available).
Even so, if latency increases, you might notice a drop in picture quality, or even a temporary pause while the service increases its buffer of available content.
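The selection logic behind adaptive bitrates can be sketched in a few lines. This is a toy illustration, not any real streaming service’s algorithm: the rendition labels, bitrate figures and safety margin below are all assumptions chosen for the example.

```python
# Toy sketch of adaptive-bitrate selection: measure recent throughput,
# then pick the highest rendition the connection can safely sustain.
# Renditions and bitrate requirements are illustrative, not a real service's.

RENDITIONS = [           # (label, required bitrate in Mbps)
    ("360p", 1.0),
    ("720p", 3.0),
    ("1080p", 6.0),
    ("4K", 16.0),
]

def choose_rendition(measured_mbps: float, headroom: float = 0.8) -> str:
    """Pick the best quality whose bitrate fits within a safety margin
    of the measured line speed; fall back to the lowest tier otherwise."""
    usable = measured_mbps * headroom  # keep some headroom for fluctuations
    best = RENDITIONS[0][0]            # worst case: lowest quality
    for label, required in RENDITIONS:
        if required <= usable:
            best = label
    return best

print(choose_rendition(25.0))  # ample bandwidth -> "4K"
print(choose_rendition(4.5))   # 4.5 * 0.8 = 3.6 Mbps usable -> "720p"
print(choose_rendition(0.8))   # below every tier -> falls back to "360p"
```

The headroom margin is the key design choice: by deliberately under-filling the line, the player leaves room for momentary slowdowns, trading a little picture quality for fewer pauses.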
Tips for minimising latency
Firstly, it’s important to ensure your own hardware is running optimally.
If device storage is nearly full, or if there are lots of programs running, this could increase latency. Delete non-essential programs and move files into the cloud.
Next, try to optimise your internet connection. Eliminate sluggish WiFi wherever possible by connecting devices directly to your router with Ethernet cables, or by using Powerline plug-socket adaptors where cabling isn’t practical.
WiFi can be the single biggest cause of latency; wireless transmission is far less efficient than a wired connection, and there’s a high risk of interference from other devices.
If you’re planning an activity like video calling, where latency could damage the experience, turn off bandwidth-hogging hardware like Sky TV boxes and Alexa smart speakers first.
Don’t sign up to services if domestic bandwidth isn’t up to it – minimum connection speeds are there for a reason. That’s especially applicable to online gaming platforms.
Minimising latency also involves performing regular malware scans. Many malware variants harness your internet connection for nefarious purposes, making it hard to do anything else online.
Finally, if you’re planning to host your own content (such as a hobby or community website), choose hosting companies with multiple data centres.
This should ensure data is always readily available when audiences request it, regardless of where they are or how much traffic is being generated at any given moment.
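The principle behind multi-data-centre hosting can be shown in miniature: serve each visitor from whichever location responds fastest. The data-centre names and latency figures below are purely hypothetical, chosen only to illustrate the idea.

```python
# Minimal sketch of why multiple data centres help: route each visitor
# to the location with the lowest measured round-trip time.
# Names and latency figures are illustrative placeholders.

def nearest_datacentre(latencies_ms: dict[str, float]) -> str:
    """Return the data centre with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical measurements for a visitor in Sydney:
measured = {"London": 280.0, "Virginia": 210.0, "Singapore": 95.0}
print(nearest_datacentre(measured))  # -> Singapore
```

Real hosting platforms and content delivery networks do this continuously and automatically, but the effect is the same: the shorter the physical journey, the lower the floor on latency.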