Network function virtualization infrastructure (NFVI) is the platform that supports virtual network functions (VNFs), such as virtual switches and routers, virtual firewalls, and load balancers. NFVI includes the underlying hardware as well as the hypervisor and the virtual resources for storage, compute, and networking. Together, these allow standard CPU-based servers to perform traditional networking functions in software, employing only the functions that are required at any given time.
The hypervisor, a major part of NFVI, is responsible for virtualizing the underlying hardware for the software running above it. Within the hypervisor, a virtual switch is used to direct traffic to the VNFs. This switch could be an Open vSwitch (OVS), Titanium AVS, Vector Packet Processor (VPP), or Tungsten Fabric (TF).
In a typical virtualization scheme without any acceleration, traffic flows through the network interface card (NIC) to the NFVI hypervisor, where the virtual switch directs it to the appropriate VNFs, which perform networking functions on the data. The NFVI hypervisor is frequently a bottleneck in the data path.
To bypass that bottleneck, PCIe passthrough can circumvent the NFVI entirely. A direct memory access (DMA) engine on the NIC uses single-root I/O virtualization (SR-IOV) to pass traffic directly from the NIC to the VNFs. The main drawback of this approach is that the NFVI no longer knows which traffic went to which VNF, which hamstrings its ability to properly manage traffic flow.
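On Linux, SR-IOV virtual functions (VFs) are typically created through sysfs and then handed to guests via PCIe passthrough. A minimal sketch of that setup follows; the interface name `enp3s0f0` and the VF count are placeholders to be replaced for a given NIC:

```shell
# Check how many virtual functions this NIC supports
# (enp3s0f0 is a placeholder interface name)
cat /sys/class/net/enp3s0f0/device/sriov_totalvfs

# Create 4 VFs; each appears as its own PCIe function that can be
# passed through directly to a VNF's virtual machine
echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs

# List the resulting virtual functions
lspci | grep -i "virtual function"
```

Once a VF is attached to a guest, packets DMA directly between the NIC and the guest's memory, which is precisely why the hypervisor's virtual switch never sees that traffic.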
Alternatively, some companies offer to offload the entire NFVI to a CPU sitting on a multi-core SmartNIC on a bare-metal server. The traffic flows through the NFVI, which now sits on the NIC, and bypasses the hypervisor entirely on its way to the VNFs. This works well for north-south traffic (to/from the greater network), but it adds overhead for east-west traffic (between virtual machines on the same server). That east-west traffic must now be routed unnecessarily through the NIC rather than flowing directly between VNFs, which can introduce a bottleneck, especially since the PCIe bandwidth to and from the NIC is limited.
Ethernity Networks provides a solution that avoids these drawbacks. We seamlessly offload the NFVI functions to an FPGA-based SmartNIC using the standard DPDK rte_flow API. The traffic runs through the FPGA SmartNIC to the VNFs on the server, and a representor port keeps the NFVI informed of which flows went to which VNFs. Moreover, the switch-router engine on Ethernity's ACE-NIC accelerates performance, achieving optimal traffic flow for VNFs. East-west traffic continues to run through the hypervisor on the server, so it is never limited by the PCIe bus.
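In DPDK, such offloads are expressed as rte_flow rules: a match pattern plus a list of actions installed on a NIC port. The sketch below is illustrative only, not Ethernity's actual offload logic; it assumes an initialized DPDK environment with a configured port, and the port and queue IDs are placeholders. It shows the general shape of a rule an NFVI layer might program, steering IPv4/UDP traffic to a specific receive queue:

```c
#include <stddef.h>
#include <rte_flow.h>

/* Sketch: install a flow rule that matches IPv4/UDP traffic on an
 * ingress port and steers it to a given RX queue. Assumes
 * rte_eal_init() and port setup have already run. */
static struct rte_flow *
offload_udp_to_queue(uint16_t port_id, uint16_t queue_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    /* Match pattern: Ethernet / IPv4 / UDP, no field masks, so any
     * packet of that shape matches. */
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* Action: deliver matching packets to the chosen queue. */
    struct rte_flow_action_queue queue = { .index = queue_id };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    struct rte_flow_error err;
    /* Validate first so a rule the hardware cannot offload fails
     * cleanly before creation. */
    if (rte_flow_validate(port_id, &attr, pattern, actions, &err) != 0)
        return NULL;
    return rte_flow_create(port_id, &attr, pattern, actions, &err);
}
```

When the underlying port is hardware-backed, rules created this way are programmed into the NIC itself, which is what allows the data plane to run at hardware speed while the NFVI software retains visibility and control.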
At Ethernity Networks, our NFVI acceleration solution includes support for OVS, VPP, Titanium AVS, and Tungsten Fabric. We accelerate NFVI by offloading the data plane for advanced networking functions, such as switching, routing, VXLAN processing, hierarchical QoS (H-QoS), and MPLS. ACE-NIC SmartNICs provide high-speed packet processing, overlay tunnel termination, and data plane offload from CPU cores, all with hardware-level performance as well as the flexibility and total cost of ownership (TCO) to rival any software-based solution.
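For OVS in particular, steering the data path onto offload-capable hardware is controlled through OVS's standard hardware-offload setting; a minimal sketch (the exact behavior depends on the NIC and its driver):

```shell
# Enable hardware offload in Open vSwitch; ovs-vswitchd must be
# restarted for the setting to take effect
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

# Verify the setting
ovs-vsctl get Open_vSwitch . other_config:hw-offload
```

With this enabled, OVS attempts to push eligible flows down to the NIC instead of processing every packet in software.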
FPGA SmartNICs make it possible to virtualize at the hardware level, combining the agility of software-based networking with the high bandwidth, low latency, and deterministic performance of hardware. FPGAs also strengthen security, with hardware support for features such as IPsec and VPN termination. At the same time, FPGAs are reprogrammable rather than fixed-function, so they do not compromise the flexibility and low TCO of NFVI.
Thus, Ethernity’s solution offers both enhanced, bottleneck-free performance and greater efficiency.
By Brian Klaff