Across the globe, businesses in all sectors are integrating innovative, often cloud-based technologies into their day-to-day operations.
Enabled by artificial intelligence (AI), virtual or augmented reality (VR or AR), and the Internet of Things (IoT), these technologies help businesses by:
- Increasing efficiency and productivity
- Reducing costs
- Providing better customer experiences
- Improving operational sustainability
In other words, they allow you to keep up with – or, ideally, get ahead of – your competitors.
But to make the most of these technologies, your business needs robust network connectivity and, more specifically, high-speed data flows. Reducing latency – which is simply the time it takes data packets to travel from source to destination – is no longer just about speeding up website loading times or stopping a video from buffering on a streaming service. Network latency can limit the performance of everyday business applications, and in some cases it can stop them from functioning altogether. This means minimizing latency has become business critical.
Evolving latency expectations
Expectations for latency vary widely between different industries and applications. The rollout of 5G has significantly raised expectations about the latency and speeds that networks should deliver as standard. The development of 6G towards the end of the decade will likely bring another step change.
Latency is usually measured in milliseconds (ms), and during speed tests it can be referred to as a ping rate. A couple of example recommendations for acceptable latency include:
- Azure Remote Rendering, which is used for 3D, mixed-reality content, requires latency to be consistently less than 80 ms (although the recommendation is less than 40 ms).
- Virgin Media suggests a good latency for a professional gamer is 10-20 ms, although 20-50 ms is acceptable. A latency of 100-300 ms is described as an “unplayable ping”.
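To see where your own connection sits against figures like these, you can time a round trip yourself. The sketch below approximates a ping by timing a TCP handshake in Python (a true ICMP ping usually needs elevated privileges); the host and port are illustrative assumptions, and the result includes connection-setup overhead, so treat it as a rough upper bound rather than a precise measurement.

```python
import socket
import time

def tcp_round_trip_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate network latency by timing a TCP handshake.

    A rough proxy for an ICMP ping; includes connection-setup
    overhead on top of pure network round-trip time.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # handshake complete; close the connection immediately
    return (time.perf_counter() - start) * 1000  # convert seconds to ms

if __name__ == "__main__":
    # "example.com" is just an illustrative target
    print(f"{tcp_round_trip_ms('example.com'):.1f} ms")
```

Running this a few times against servers in different regions makes the link between geographic distance and latency very tangible.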
Check out our recent article to discover how quickly data will need to travel to support the immersive Internet of the future.
Low latency use cases
While low latency is desirable for all business applications, it’s more critical to those that require real-time response to inform decision making. Applications for financial trading based on rapidly changing market information are one example. In fact, low-latency trading has become an established method for responding rapidly to short-term market opportunities.
Any application that requires two systems to communicate via application programming interfaces (APIs) will also require low latency, as processing often comes to a halt until the API returns a response. And any application that relies on remote robotics will undoubtedly require ultra-low latency.
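The impact of blocking API calls is easy to demonstrate. In this minimal sketch, `call_api` is a hypothetical stand-in for a synchronous request whose sleep models network latency; each call stalls the caller until the response arrives, so per-call latencies add up directly in the critical path.

```python
import time

def call_api(simulated_latency_ms: float) -> str:
    """Hypothetical stand-in for a synchronous API request.

    The sleep models network round-trip latency; a real call would
    block in exactly the same way while waiting for the response.
    """
    time.sleep(simulated_latency_ms / 1000)
    return "response"

def process_order() -> float:
    """Processing halts at each API call until the response arrives."""
    start = time.perf_counter()
    call_api(50)   # e.g. an inventory check: ~50 ms of waiting
    call_api(50)   # e.g. a payment authorization: another ~50 ms
    return (time.perf_counter() - start) * 1000

# Two 50 ms round trips add at least 100 ms to every single order.
print(f"blocked for {process_order():.0f} ms")
```

Halving network latency here halves the stall, which is why API-heavy workflows are so sensitive to the figures discussed above.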
Read our case study to discover how dm-drogerie markt, Germany’s biggest drugstore chain, reduced latency to cloud-hosted servers to less than 3 ms.
Let’s take a look at some specific use cases for low latency applications in the fields of healthcare, manufacturing, and automotive:
Minimizing latency in healthcare
Healthcare is one sector where rapid advances in technology must be accompanied by ultra-low latency to ensure its safe and effective use. The most obvious example is remote surgery, often called telesurgery, where an operation is carried out by a surgeon in a remote geographic location using a robotic-assisted surgery platform. An extremely low level of latency is vital to the success and safety of such an operation.
But there are many other healthcare applications that also require low latency connectivity, including:
- Connected ambulances that enable medical care to be provided on the way to a hospital by exchanging relevant real-time data about the patient with that hospital.
- Devices such as wearables or implants that monitor health in real time to identify changes and trigger the necessary action.
- Robot assistants that monitor the needs of patients to help them with daily living tasks.
Reducing lag in manufacturing
Automated production plants that use the latest technologies are becoming the norm. But the data flows that enable those technologies rarely remain within the factory walls.
Smart factories need ultra-low latency communications between machines, sensors, data centers, and cloud-based applications, as well as with the networks of their suppliers, service providers, and customers. If they are using a Robotics-as-a-Service (RaaS) model, where the machine manufacturer operates, monitors, and services the machines remotely, factories need data to flow instantaneously between those machines and the operator’s data center.
A specific use case within manufacturing is digital-twin technology – a virtual model of a machine used for simulation, design, and testing. To stay accurate, a digital twin relies on real-time data generated by sensors on the machine it replicates. Digital twins can also support customizable design, where a customer adjusts a design that is then produced via additive manufacturing (3D printing) – a process that requires an ultra-low latency connection to end-user access networks.
Take a look at our article to discover more about the factory of the future.
Another low-latency use case that spans manufacturing and healthcare, as well as other sectors, is training using VR or AR technologies. Rapid data transmission is vital for these applications to run smoothly, delivering a fully immersive experience – free of lag or buffering – that enables realistic real-world simulations in a risk-free environment. It also allows trainers to give real-time feedback, making the learning process more effective.
Rapid response in automotive
Today’s connected vehicles already need low-latency connectivity. When a connected car sends data about road conditions and the vehicle’s status to the cloud, for example, this data has to move as quickly as possible to maximize safety and performance. But ultra-low latency will become critical as we move towards self-driving features, and ultimately full autonomy where cars can navigate with no driver intervention.
Autonomous vehicles are fitted with an array of sensors and cameras that generate multi-directional information about the immediate environment. This data needs to be continually analyzed using AI-based perception processing so the vehicle can make sense of what is happening around it. It then needs to use this information for path planning, allowing it to avoid collisions, change lanes, or find an appropriate speed.
This data must flow at exceptional speeds to allow ample time for vehicle reactions, especially taking into account that visibility, reaction times, and braking distances can be significantly impacted by adverse weather conditions.
Interconnection platforms enable high performance
The low latency required by today’s critical business applications can be achieved using an interconnection platform, such as an Internet Exchange or a Cloud Exchange. An Internet Exchange facilitates the movement of data between networks via a process known as peering (the exchange of data on a cost-neutral basis). It reduces latency in two key ways:
1. Controlled data exchange
The traditional method of sending data between networks via the Internet means it often takes a long, complex, and unpredictable path to find a point where both networks happen to be in the same data center. With one connection to an Internet Exchange, an organization can get rid of unpredictable paths and reach almost any network, not only in the data center it’s in but in many other data centers in the region. It becomes part of an ecosystem with the lowest possible latency connection to all kinds of networks, and gains controlled, end-to-end handling of valuable traffic streams.
2. Geographic presence
Minimizing latency requires your applications to be physically located as close to the end user as possible. Effective video conferencing, for example, requires the application to be hosted less than 1,200 km from the user, equating to around 15 ms of latency. A user in Europe will typically experience a lag of at least 65 ms if the application is hosted on a US server. For applications such as robotic manufacturing or autonomous vehicles, a distance of under 80 km is required to achieve a latency of under 1 ms.
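The physics behind these figures can be sketched with a quick propagation-delay calculation. Light travels through optical fiber at roughly two-thirds of its vacuum speed (about 200,000 km/s), which sets a hard lower bound on round-trip latency for any given distance; real-world figures come out higher once routing detours, switching, and queuing are added.

```python
# Approximate speed of light in optical fiber (~2/3 of c in vacuum), in km/s
FIBER_SPEED_KM_S = 200_000

def min_round_trip_ms(distance_km: float) -> float:
    """Theoretical best-case round-trip propagation delay over fiber.

    Real-world latency is always higher: routing detours, switching,
    and queuing delays all add to this physical lower bound.
    """
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000  # seconds -> ms

for km in (80, 1_200, 6_000):  # 6,000 km is a rough transatlantic-scale path
    print(f"{km:>6} km -> at least {min_round_trip_ms(km):.1f} ms")
```

The outputs line up with the figures above: 80 km gives a floor of 0.8 ms (hence sub-1 ms is only achievable locally), 1,200 km gives 12 ms before any routing overhead, and a transatlantic-scale path of 6,000 km already costs 60 ms before a single router is traversed.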
Because Internet Exchanges are located in data centers all over the world, and you control the path your data traffic takes, you can bring applications close to the end user, minimizing latency and maximizing performance.
Check out our guide Building connectivity for the world of tomorrow to learn more about how Internet Exchanges and Cloud Exchanges enable low latency as well as all the other elements you need to transform your digital infrastructure and support your business long into the future.