by Shavit Baruch
With Communications Service Providers (CSPs) today challenged to deliver ever more advanced services to subscribers, higher speeds alone won’t be the answer. Services such as high-definition video, multi-player online gaming, augmented and virtual reality, machine learning, artificial intelligence, and IoT are forcing a transition to smarter, more flexible, lower-latency, virtualized networks.
The Central Office (CO), at the edge of the CSP’s network, is the focus for this transition to a virtualized, distributed network. Initiatives such as OPNFV and CORD (Central Office Re-Architected as a Datacenter) are beginning to foster the economies of a data center, with the agility of software-defined networking, by applying cloud design principles to the CO. This involves moving away from proprietary hardware such as switch-routers to commercial off-the-shelf server arrays, allowing software to be ported onto any virtual switch, with standard DPDK APIs creating an open environment for NFVi (network functions virtualization infrastructure).
Thanks to their proximity to end users, disaggregated virtual COs (vCOs) give CSPs a huge latency advantage over centralized data centers. By replacing purpose-built hardware with servers that offer general-purpose compute resources and can run any function, providers move computing closer to the access points and slash that latency.
From the CSP’s view, vCOs are a great opportunity to take advantage of the physical assets they already possess at the network edge. It becomes far easier to offer value-added services such as AR/VR applications when virtualization and cloud design transform the CO.
Take augmented reality, for example. It works only when geographically dispersed servers deliver an on-demand augmented reality environment with high-quality content to web-connected devices. Serving this application from a central data center introduces delays that make it impractical. The same goes for autonomous cars, online gaming, CDN video streaming, and many other applications that demand extremely low latency.
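A back-of-the-envelope calculation shows why proximity matters. The sketch below models round-trip fiber propagation delay only (queuing and processing delays would come on top); the distances and the ~20 ms AR motion-to-photon budget are illustrative assumptions, not figures from the text.

```python
# Round-trip propagation delay over fiber: central data center vs. edge vCO.
# Assumption: light in fiber travels roughly 200 km per millisecond.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in ms over a fiber path of distance_km."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

central_dc = round_trip_ms(1000)  # subscriber to a distant central data center
edge_vco = round_trip_ms(20)      # subscriber to a nearby virtual CO

print(f"central DC: {central_dc:.1f} ms, edge vCO: {edge_vco:.1f} ms")
# -> central DC: 10.0 ms, edge vCO: 0.2 ms
```

On these assumed distances, propagation alone consumes half of a 20 ms AR budget when served centrally, before any processing has happened, while the edge path leaves the budget almost untouched.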
CSPs have added server CPU cores to meet demand, but while CPUs handle compute functions ably, they are poorly suited to data-plane packet processing. ASICs process packets better than CPUs, but lack programmability and are closed, proprietary systems.
Rather than add more servers, the answer is NFVi acceleration – through the use of programmable FPGAs – which helps the virtualized solution run more efficiently and with the same flexibility as software, enabling greater functionality while reducing operating expenses.
As the VNF (virtual network function) data plane is offloaded from the CPU and network and security functions are ported onto the FPGA, CPUs can be dedicated to the compute functions and user applications for which they were designed. Further benefits are gained when multiple VNFs are consolidated on a single server. This multi-access edge computing (MEC) solution reduces the number of physical servers needed and saves even more power.
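The offload split described above can be sketched conceptually. The plain-Python model below is illustrative only (all names are invented for the example, and real offload is programmed through NIC/FPGA APIs): a hardware fast path handles packets for flows it already knows, so only the first packet of each new flow consumes CPU cycles.

```python
# Conceptual model of data-plane offload: the CPU classifies a flow once,
# then installs a rule so that subsequent packets bypass the CPU entirely.
fast_path_rules = {}   # flow -> action, as if programmed into the FPGA
cpu_hits = 0           # count of packets that needed software processing

def cpu_slow_path(flow):
    """CPU slow path: classify the flow once, then offload the decision."""
    global cpu_hits
    cpu_hits += 1
    action = "drop" if flow.startswith("blocked") else "forward"
    fast_path_rules[flow] = action   # offload: later packets skip the CPU
    return action

def handle_packet(flow):
    """Try the hardware fast path first; fall back to the CPU on a miss."""
    if flow in fast_path_rules:
        return fast_path_rules[flow]
    return cpu_slow_path(flow)

# Ten packets on one flow: only the first reaches the CPU.
actions = [handle_packet("flow-a") for _ in range(10)]
print(cpu_hits, actions[-1])  # -> 1 forward
```

The same pattern scales to many VNFs sharing one server: the CPU budget is spent on application logic and flow setup, not on per-packet forwarding.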
Other FPGA advantages are scalability even at high bandwidth; enhanced security, with flow isolation and the option to bypass the CPU entirely for encryption and decryption tasks; and the ability to deliver ASIC-like deterministic performance with the programmability of a software solution.

When it comes to deterministic performance, parallel processing is widely agreed to enhance it, which is why most ASIC solutions today are built on a parallel pipeline architecture: by cutting the number of processing cycles per packet, the pipeline lowers both latency and power consumption. ASICs, however, cannot be upgraded in the field to adapt to changing requirements and conditions. FPGA acceleration now makes it possible to use parallel packet processing in a pipeline architecture while adding programmability. Dedicated network processing functions can be designed in this manner on low-cost FPGAs to offer a compact solution with all of the architecture’s advantages.
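The throughput benefit of a pipeline can be seen with simple cycle counting. The toy model below is not an FPGA design; it merely assumes S single-cycle stages (the stage names are illustrative) and compares running stages back-to-back on one engine against a full pipeline, which emits one packet per cycle once filled.

```python
# Toy cycle-count model of pipelined vs. sequential packet processing.
def sequential_cycles(packets: int, stages: int) -> int:
    """One engine runs every stage in turn: S cycles per packet."""
    return packets * stages

def pipelined_cycles(packets: int, stages: int) -> int:
    """All stages work in parallel on different packets:
    S cycles to fill the pipe, then one packet completes per cycle."""
    return stages + packets - 1

S = 5      # e.g., parse -> classify -> police -> modify -> queue
N = 1000   # packets

print(sequential_cycles(N, S), pipelined_cycles(N, S))  # -> 5000 1004
```

For 1,000 packets through five stages, the pipeline needs roughly a fifth of the cycles, and, just as important for determinism, exactly one packet leaves per clock once the pipe is full, regardless of traffic mix.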