AI with the potential to revolutionize entire industries has long since passed the demo stage: it now runs through the veins of businesses and will soon become the lifeblood of our economy. From fraud detection and humanoid robotics to autonomous vehicles and real-time language processing, AI is expected to operate instantly, intuitively, and everywhere. But with this shift comes a new bottleneck: latency. No matter how powerful the model or how abundant the compute, if the network can’t deliver data within single-digit milliseconds, AI won’t deliver the intended results. The reality is simple: without ultra-low-latency connectivity, there is no viable future for AI at scale.
That’s why I was pleased to join Tonya Witherspoon of Wichita State University and Hunter Newby of Connected Nation Internet Exchange Points (CNIXP) for a webinar on one of the most overlooked constraints in digital infrastructure: round-trip delay (RTD). While our conversation covered AI, network design, and public-private collaboration, the central message was clear: we cannot solve tomorrow’s challenges with yesterday’s networks. Latency isn’t just a technical metric; it’s an economic limiter, a competitive differentiator, and now a make-or-break component of AI. Below are five key discussion points from our webinar, titled “Latency Kills: Solving the bottleneck of RTD to unlock the future of AI”, on why solving the latency challenge – both locally and nationally – is the next critical step on the road to AI mastery.
1. “Low latency is no longer optional”
AI applications are no longer abstract, back-end computations; they are real-time, front-line systems that increasingly underpin daily life. Whether it’s a fintech company performing fraud detection at a keystroke, a vehicle processing sensory data on the move, or a manufacturing plant using robotics for precision tasks, latency has become the hard ceiling on performance. As I’ve said many times before, latency is not just a metric; it’s currency. For 4K streaming, the threshold is around 15 milliseconds. For high-frequency trading and autonomous driving, it’s under five. And when we enter the realm of humanoid robotics and AI agents that interact like humans, we’re talking about single-digit millisecond responsiveness, which translates to a physical radius of roughly 50 to 150 miles between the user and the compute. Beyond that range, the round-trip delay is too high and the application breaks down.
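To see where a figure like that 50-to-150-mile radius comes from, here is a minimal back-of-envelope sketch in Python. The fiber propagation speed, the one-millisecond processing allowance, and the path factor are illustrative assumptions on my part, not numbers from the webinar; real budgets depend on equipment, routing, and the model’s own inference time.

```python
# Back-of-envelope check of the latency-to-distance numbers above.
# Assumptions (illustrative, not from the webinar): light in optical fiber
# travels at roughly 2/3 the speed of light in a vacuum, i.e. ~200 km per
# millisecond, and real fiber routes are longer than straight lines, so we
# apply a path factor.

C_FIBER_KM_PER_MS = 200.0   # approximate speed of light in fiber
KM_PER_MILE = 1.609

def max_radius_miles(rtd_budget_ms, processing_ms=1.0, path_factor=1.5):
    """Largest user-to-compute radius that fits inside an RTD budget.

    rtd_budget_ms: total round-trip delay the application tolerates
    processing_ms: time consumed by inference, switching, serialization
    path_factor:   how much longer the fiber route is than a straight line
    """
    propagation_ms = max(rtd_budget_ms - processing_ms, 0)
    one_way_km = (propagation_ms / 2) * C_FIBER_KM_PER_MS / path_factor
    return one_way_km / KM_PER_MILE

for budget in (3, 5, 10):
    print(f"{budget} ms RTD budget -> ~{max_radius_miles(budget):.0f} mile radius")
```

Run with a 3 ms budget, the sketch yields a radius on the order of 80 miles; relaxing the processing allowance or the path factor pushes it toward the upper end of the quoted range. Either way, single-digit budgets keep compute regional, not continental.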
As Hunter puts it, “Fraud detection from the major banks is something that they want to do at the keystroke, on the phone, as it's occurring. That’s a three or sub-three millisecond requirement. Without the right physical infrastructure in place – land, buildings, fiber, and an Internet Exchange – it simply can’t happen. We’re talking about needing thousands of facilities like that across the US, and they don’t exist.” This is the kind of performance that enterprises must now design for, and it’s impossible to achieve without rethinking where infrastructure lives and how data moves. Adding more compute capacity won’t solve the problem if the data can’t get there and back in time. The only way forward is through proximity; by building the right interconnection points in the right places, we can shrink that round-trip delay and make real-time AI a reality.
2. “Geography matters”
For decades, network infrastructure has mirrored population density and economic gravity, clustering around coastal metros and skipping over large swathes of the country. The result is what Hunter described during the webinar as “flyover cities” – places where fiber may pass through but never breaks out. These cities aren’t disconnected; fiber runs through them, but without a local point of interconnection their traffic has to travel to distant hubs and back. And that has real consequences for latency. Every extra mile a packet travels adds delay, which adds cost, which in turn limits the viability of emerging AI services. This is why geography matters. If we want real-time digital experiences, whether for a bank customer in Kansas or an autonomous vehicle in rural Texas, then we need physical infrastructure built where the people, machines, and data actually are.
Hunter offered an analogy that resonated with many: “Think of the Internet like air travel. Nobody wants to take three connecting flights to get somewhere. Everyone wants a direct flight. But right now, for many parts of the US, we’ve built the equivalent of runways with no airports.”
And the stakes are rising. Applications like autonomous vehicles and robotics don’t just benefit from direct interconnection; they require it. That is why projects like the new IXP at Wichita State University are so important. By bringing neutral, high-performance interconnection into the heartland, we’re not just solving the latency problem; we’re rewiring the connectivity map. As Tonya put it, “It’s not just about bringing fast connectivity to the middle of the country in areas that don’t have it, but also for the redundancy and resilience that the entire country needs so that we’re not solely dependent on the few nodes that have already been built.”
3. “Round-trip delay will define AI connectivity”
Round-trip delay (RTD) is becoming the defining metric for AI viability. It measures the time it takes for data to travel from the user to the compute instance and back again. In the context of AI inference, that journey needs to be completed in just a few milliseconds. And yet, RTD remains poorly understood, often confused with “fast fiber” or dismissed as a non-issue by those who focus solely on power and compute capacity.
Hunter made the point bluntly: “Gone are the days where you could say, ‘I've got powered land and fiber is a mile away, so latency isn’t a problem.’ None of that makes sense anymore.” He’s right. The location of compute power, the route of fiber, and the presence of a neutral interconnection point all now define whether an AI application can function as intended. This is especially true for applications that require deterministic routing, where the AI doesn’t just need access to data, but needs it from a specific source, at a specific time, through a specific path. That’s what RTD is really about: not just speed, but certainty. And to achieve that, we need to bring routing, interconnection, and AI workloads into much closer physical alignment.
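As a rough illustration of what RTD means in practice, the sketch below times TCP connection establishment, which costs one network round trip, from a client to a compute endpoint. This is my own minimal example, not tooling discussed in the webinar; example.com stands in for a real inference endpoint, and a production measurement would probe continuously and include the request/response exchange itself, since the handshake captures only the network path.

```python
# A minimal sketch of measuring round-trip delay (RTD) at the application
# layer, by timing TCP connection establishment to a compute endpoint.
import socket
import statistics
import time

def measure_rtd_ms(host, port=443, samples=5):
    """Estimate RTD by timing the TCP three-way handshake several times."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        # Connection setup completes after one SYN -> SYN/ACK round trip.
        with socket.create_connection((host, port), timeout=2):
            pass
        results.append((time.perf_counter() - start) * 1000)
    # Median damps one-off spikes from queuing or retransmission.
    return statistics.median(results)

# 'example.com' is a placeholder; point this at your actual endpoint.
print(f"median RTD: {measure_rtd_ms('example.com'):.1f} ms")
```

A single-digit millisecond result from a probe like this is only achievable when the endpoint is physically nearby; against a distant hub the same probe reports tens of milliseconds, which is exactly the gap that local interconnection closes.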
4. “Enterprises must control their data journey, or be disrupted by it”
In the AI era, the enterprises that succeed will be those that treat digital infrastructure as a core business asset rather than a back-end IT function. Enterprises must think about their networks the same way they think about physical supply chains: something to control, optimize, and secure. That means building presence in regional interconnection hubs, engaging directly with Internet Exchanges, and designing architectures that support real-time performance at scale. If you want to monetize a digital asset effectively – whether it’s a vehicle generating real-time telemetry or a financial service delivering AI-driven insights – you need to control how and where that data flows.
And this isn’t just theoretical. As I said in our discussion, if automakers don’t control the data flows into and out of the car, someone else will. The same logic applies across every industry. Tonya summarized this well when she said, “It’s not nearly enough to just think about your own building, service, or product, because with inference – with all the data that’s moving – you don’t know where you’re going to connect or who you’re going to connect to.” Enterprises must extend their digital presence beyond their walls and build infrastructure strategies that align with how AI actually operates. Those who don’t will soon find themselves disrupted by those who do.
5. “We need a new Internet for the AI era – and it starts locally”
We’ve entered a new phase of digital infrastructure – one that requires us to rethink not just how the Internet works, but where it works. AI can’t be built solely on hyperscale data centers or global cloud backbones. It needs a new layer: local, neutral, high-performance interconnection that sits closer to users, devices, and machines. We’re not talking about replacing the global Internet – just filling in the gaps. The future will depend on a distributed mesh of IXs, regional edge hubs, and localized routing platforms that can support deterministic, low-latency data flows at scale. That’s why we’re building smaller, more accessible Internet Exchange models – what I like to call the “pizza box” IX – that can be deployed anywhere from shopping malls and universities to roadside fiber huts.
Infrastructure isn’t just an enabler of AI – it’s the foundation. And unless we build it where people actually live and work, AI will remain an uneven promise: powerful in some places, inaccessible in others. The good news is that we now have a blueprint – combining neutral facilities, regional IXs, and public-private cooperation – to ensure that the AI revolution reaches every corner of the map.
Readers can watch the webinar, titled “Latency Kills: Solving the bottleneck of RTD to unlock the future of AI”, in full here.