The switch to broadband was an eye-opener for many consumers. Their internet connections got faster, transforming their digital experience. But it wasn’t just about speed: all the digital content and services they used became richer, more dynamic, and more personalized. And what makes that possible is not speed but capacity.
Now take that feeling of consumers discovering broadband and multiply it, and you begin to approach what cloud providers, communication service providers (CSPs), and enterprises can expect from 400 Gigabit Ethernet (400GbE) connectivity. Upgrading data centers and WAN transport links to 400GbE – four times the capacity of the largest pipes most organizations use today – opens up a new field of possibilities. Just as important, because 400GbE delivers that capacity at only about 2.5 times the energy consumption of 100GbE, all of these players can respond far more efficiently to their customers’ insatiable appetite for digital content and services.
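A quick back-of-the-envelope check of that efficiency claim. The 4× capacity and 2.5× power figures come from the text; the per-bit comparison below is my own arithmetic, sketched in Python:

```python
# Energy per bit of 400GbE relative to 100GbE, using the figures in the text.
capacity_ratio = 4.0   # 400GbE carries 4x the traffic of 100GbE
power_ratio = 2.5      # ...while drawing only 2.5x the power

energy_per_bit_ratio = power_ratio / capacity_ratio   # 0.625
savings = 1 - energy_per_bit_ratio                    # 0.375

print(f"400GbE uses {energy_per_bit_ratio:.3f}x the energy per bit of 100GbE")
print(f"i.e. roughly {savings:.0%} less energy for every bit moved")
```

In other words, each bit moved over 400GbE costs roughly 37% less energy than over 100GbE, which is the real driver behind the efficiency argument.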
But while vendors and analysts have been talking about 400GbE for some time, we have yet to see large-scale commercial 400GbE deployments. Have the benefits of 400GbE – or the urgency to adopt it – been overstated? The answer is an emphatic no. The huge gains in capacity, density, and energy efficiency that come with 400GbE are in strong demand, and that demand is only growing. The delay has more prosaic explanations. First, the industry must agree on standards. More importantly, 400GbE pluggable optics must become available in volume and at a reasonable cost. Fortunately, the industry is making progress on both fronts.
On the road to 400GbE
To understand the push toward 400GbE, start with the players who demand it most ardently: hyperscale cloud providers. Companies like Google, Amazon, and Facebook face exponential growth in data center traffic. Facebook, for example, generates 4 petabytes of new data every day, and Google’s data center network capacity needs double every 12 to 15 months. Trying to meet these demands with 100GbE links is like trying to push the flow of a fire hose through a straw. Clearly, another approach is needed.
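To put that doubling rate in perspective, here is a small compound-growth sketch. The 12-to-15-month doubling period comes from the text; the five-year horizon is an illustrative assumption of mine:

```python
# Compound growth of traffic that doubles every 12-15 months (per the text).
def growth_factor(years: float, doubling_months: float) -> float:
    """How many times traffic multiplies over `years`, given a doubling period."""
    return 2 ** (years * 12 / doubling_months)

for doubling in (12, 15):
    factor = growth_factor(5, doubling)
    print(f"doubling every {doubling} months -> {factor:.0f}x traffic in 5 years")
```

Even at the slower end of that range, a network carries 16 times more traffic after five years; at the faster end, 32 times more. Link speeds that merely quadruple cannot keep up for long.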
And hyperscale cloud providers are far from the only ones asking. Service providers face similar growth trends in their own networks, and large enterprise data centers are not far behind. So what is driving this growth?
- New demands: We are witnessing exponential growth in digital and cloud applications, video in particular. HD and ultra-HD 4K video content currently consumes 73% of global bandwidth, a share expected to exceed 82% by 2021. The growing use of artificial intelligence (AI) and machine learning in cloud data centers is also creating a need for substantially more capacity and speed.
- Evolving data center backbones: As hyperscale providers adopt new architectures to meet these demands, they are moving from 50GbE to 100GbE server connections. The only way to transport this traffic volume economically is with 400GbE optics.
- Emergence of 5G networks: The rollout of 5G brings mobile operators’ network requirements closer to those of hyperscale providers. Many 5G applications will demand faster links: mobile broadband, telemedicine, immersive virtual reality experiences, and more. But merely connecting 5G cell sites will require huge capacity increases. With multiple-input, multiple-output (MIMO) antenna technologies, a cell site that once needed four 10GbE interfaces to serve a dense urban area could soon need 250 interfaces.
- Rapid growth of digital home services: The COVID-19 pandemic triggered a massive, sudden spike in the use of digital home services. With more people working from home, the high-capacity services they rely on – video conferencing, remote workspaces, streaming video, gaming, and more – are profoundly reshaping traffic patterns worldwide. Some of these changes will be fleeting; others will have lasting effects. We can also expect timelines for upgrading residential network capacity to be accelerated.
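The cell-site arithmetic in the 5G point above can be made concrete. A minimal sketch, assuming each of the 250 future interfaces is 10GbE like today’s four (the text does not specify their rate):

```python
# Aggregate cell-site capacity implied by the interface counts in the text.
# Assumption: each of the 250 future interfaces is 10GbE, like today's four.
old_gbps = 4 * 10      # four 10GbE interfaces -> 40 Gb/s
new_gbps = 250 * 10    # 250 x 10GbE interfaces -> 2,500 Gb/s

links_400 = -(-new_gbps // 400)  # ceiling division: 400GbE uplinks to carry it

print(f"today: {old_gbps} Gb/s, soon: {new_gbps} Gb/s ({new_gbps // old_gbps}x)")
print(f"the new load fits on {links_400} x 400GbE links")
```

Under that assumption, a single dense urban cell site jumps from 40 Gb/s to 2.5 Tb/s of backhaul demand, a load that is far more tractable as a handful of 400GbE links than as dozens of 100GbE ones.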
Overcoming the obstacles
For the players involved, the prospects for additional revenue are substantial. But to reach the next level, they need more density and scale and better energy efficiency, at lower cost. 400GbE seems to be the logical answer. So why isn’t the technology in widespread use yet?
First, suppliers need time to align on optical multi-source agreements (MSAs), which then require extensive testing. This kind of process can be long and complex, but the result is usually hugely beneficial for the whole ecosystem. On this front, we can expect to see progress within the next six months.
The main obstacle is therefore financial. Customers need 400GbE solutions they can deploy at a reasonable price. And while network equipment vendors and ASIC manufacturers have steadily driven down cost per bit, optical costs have not kept pace. Ten years ago, 10GbE optics accounted for about 10% of the cost of data center networking equipment; today, optics account for more than half.
The high-volume manufacturing processes, automation, and economies of scale that make semiconductors more affordable with each new generation must now be brought to optics. The industry is getting there, but slowly. As hyperscale providers begin deploying 400GbE optics at scale (probably by the end of the year), prices should start to fall.
Preparing for the future
With 400GbE on the verge of commercial deployment, we can expect major advances in optics this year. The continued development of silicon photonics, for example, will advance the convergence of optical transport with routing and switching, delivering pluggable transceiver capacity on fixed-configuration platforms or directly on line cards in modular platforms. Also on the horizon is 400ZR, which can transform the economics of WAN, metro, and data center interconnect (DCI) use cases by extending the reach of high-speed Ethernet with coherent optics up to 120 kilometers.
As you prepare for these advances and wait for 400GbE economics to catch up with demand, here are a few things to think about:
- Openness and interoperability, no lock-in. Use technologies based on open standards with multi-vendor interoperability. This saves on investment costs over the long term and guarantees operational flexibility.
- 400GbE-ready. As part of your network refresh cycles, look for fixed or modular configuration solutions that support QSFP56-DD interfaces for 400GbE services, so you can transition quickly and easily – for example, by swapping one pluggable module for another.
- Inline security. Many organizations require that all traffic leaving the data center be encrypted. Look for 400GbE solutions that provide inline MACsec encryption, so you don’t have to rely on third-party components that add power consumption and cost while undermining performance.
- Telemetry at scale. To manage and monitor your network as it grows, you need network equipment whose telemetry capabilities can scale with it. That means supporting millions of counters and millions of filter operations per second.
In any case, if you are running into network capacity limits, it’s time to take a serious look at 400GbE. We are rapidly approaching the tipping point where commercial solutions become viable. Boarding the 400GbE train today means being ready to deliver the next generation of digital services and applications.
By Michaël Melloul, Technical Director, Juniper Networks France