The Future of Broadband: A Converged Approach with DOCSIS and PON https://ethernitynet.com/the-future-of-broadband-a-converged-approach-with-docsis-and-pon/ Mon, 06 May 2024 11:45:38 +0000 https://ethernitynet.com/?p=38635

The Future of Broadband: A Converged Approach with DOCSIS and PON

For years, DOCSIS has been the workhorse technology for cable internet service providers (ISPs), delivering reliable broadband access to millions of customers. However, Passive Optical Network (PON) technology is experiencing a surge in popularity, particularly with the advent of XGS-PON, which offers superior throughput capabilities. This raises the question: what does this mean for the future of DOCSIS and the companies that rely on it?

The answer lies not in competition, but in convergence. Here’s how DOCSIS and PON can coexist and create a more robust and future-proof broadband landscape:

The Advantages of PON

PON technology offers several technical and economic advantages over DOCSIS. Notably, PON boasts significantly faster and symmetrical speeds, ideal for today’s bandwidth-intensive applications like video conferencing and cloud storage. Additionally, PON utilizes fiber optics, resulting in lower operational costs (OPEX) and the ability to seamlessly scale to meet future bandwidth demands.

The Future of Cable: A Hybrid Approach

The rise of PON doesn’t necessitate a forklift replacement of DOCSIS infrastructure. Instead, in brownfield deployments, we’re likely to see a future where these two technologies work together in a hybrid DOCSIS over PON and Virtualized PON approach. Hybrid solutions leverage existing DOCSIS management systems for user provisioning and configuration, streamlining integration, while data transmission occurs over the faster and more reliable PON infrastructure. This allows ISPs to offer the benefits of fiber optics while retaining their expertise in DOCSIS management. Additionally, virtualized PON solutions integrate seamlessly with existing DOCSIS networks, delivering a significant performance boost for businesses and heavy internet users and effectively eliminating potential bottlenecks within traditional DOCSIS networks.
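To make the division of labor concrete, here is a minimal, purely illustrative sketch in C of how a subscriber record provisioned through an existing DOCSIS back office might be translated into the PON-side resources that actually carry the traffic. The structures, field names, and mapping scheme are illustrative assumptions only; they are not taken from any product data model or from the DOCSIS or PON standards.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a DOCSIS service flow as the operator's existing
 * provisioning system might describe it. Field names are assumptions. */
struct docsis_service_flow {
    uint32_t sfid;            /* service flow ID from the CMTS/orchestrator */
    uint32_t max_rate_kbps;   /* maximum sustained traffic rate */
    uint32_t min_rate_kbps;   /* minimum reserved rate */
    uint8_t  priority;        /* traffic priority 0-7 */
};

/* Illustrative PON-side resources the same subscriber would consume
 * once the data path moves to an OLT/ONU. */
struct pon_flow {
    uint16_t alloc_id;        /* T-CONT allocation ID (upstream scheduling) */
    uint16_t gem_port;        /* GEM port carrying the subscriber's traffic */
    uint32_t assured_kbps;    /* assured bandwidth for DBA */
    uint32_t max_kbps;        /* maximum bandwidth for DBA */
    uint8_t  tc;              /* traffic class on the PON scheduler */
};

/* Translate a DOCSIS provisioning record into PON terms, so the DOCSIS
 * back office keeps driving provisioning while PON carries the data. */
static struct pon_flow map_docsis_to_pon(const struct docsis_service_flow *sf)
{
    struct pon_flow pf = {
        .alloc_id     = (uint16_t)(1024 + (sf->sfid & 0x3FF)), /* arbitrary illustrative scheme */
        .gem_port     = (uint16_t)(2048 + (sf->sfid & 0x7FF)),
        .assured_kbps = sf->min_rate_kbps,
        .max_kbps     = sf->max_rate_kbps,
        .tc           = sf->priority,
    };
    return pf;
}

int main(void)
{
    struct docsis_service_flow sf = { .sfid = 42, .max_rate_kbps = 1000000,
                                      .min_rate_kbps = 50000, .priority = 5 };
    struct pon_flow pf = map_docsis_to_pon(&sf);
    printf("SFID %u -> alloc-id %u, GEM %u, %u/%u kbps, tc %u\n",
           sf.sfid, pf.alloc_id, pf.gem_port,
           pf.assured_kbps, pf.max_kbps, pf.tc);
    return 0;
}
```

The point is simply that provisioning continues to speak DOCSIS while the data path speaks PON.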

The Power of Remote OLTs

Remote Optical Line Terminals (OLTs) are compact devices that can be deployed within existing networks. These powerhouses connect to the central DOCSIS orchestrator via a “PON Stack” and network configuration tools. This allows for seamless management of a hybrid DOCSIS/PON network as a single, unified infrastructure. With remote OLT technology, ISPs can raise the efficiency of their existing infrastructure to the level of the DOCSIS 4.0 standard, without having to lay additional cable or deploy more active equipment throughout their networks.

The Bottom Line: A Collaborative Future

By embracing PON technologies, DOCSIS providers can leverage their existing infrastructure and expertise while evolving towards a future-proof network with a powerful technology by their side. This convergence translates to higher bandwidth, lower operational costs, and ultimately, a superior internet experience for their customers. It’s a win-win for ISPs and consumers alike.

The future of broadband is bright thanks to Remote OLTs with programmable routing capabilities. These devices empower ISPs to deliver high-quality, flexible broadband that’s ready for future demands. As ISPs increasingly adopt new fiber-optic technologies and leverage existing infrastructure through incumbent overlay networks, programmable Remote OLTs will become the new standard.

Want to learn more?

Come visit us May 21-23 at Informatech Light Reading Network X Americas

Irving Convention Center, Texas USA

Ilan Tevet

VP Marketing and Business Development

Contact Us

The Advantages of Putting a Router on a NIC https://ethernitynet.com/the-advantages-of-putting-a-router-on-a-nic-2/ Wed, 01 Apr 2020 06:41:10 +0000 https://ethernitynet.com/?p=36436

Despite all that software has done to transform the networking landscape, it does have its limitations.

Traditional hardware appliances are rapidly being replaced by software-defined networking and network virtualization, offering service providers and others tremendous flexibility in features and vendor choice.

The software/virtual approach can speed time to market, allow more to be done in less space – a critical consideration as the network moves to the edge – and reduce costs. But there are a few things that it does not do as efficiently.

For instance, software is less efficient than a dedicated hardware packet-forwarding device at performing networking functions that require deterministic performance, high bandwidth, low latency, and a high level of security.

Routing is a perfect example. A physical router uses a dedicated packet processor in hardware to forward traffic while a virtual router performs those functions in a software instance. The physical router is more efficient, but the challenge is to gain that efficiency in a way that doesn’t take up space or add significant costs.

The answer lies in putting a router on a network interface card (NIC), to gain the agility of a virtual router with the performance of a physical unit. Ethernity has achieved that with our Router-on-NIC capabilities, in which the physical router is implemented on an FPGA-based network adapter. It accomplishes the task in a way that clearly differentiates this approach from anything else in the market.

Router-on-NIC provides a high-performance switch/router data plane, including Carrier Ethernet Switch, Layer 3 forwarding, protocol interworking, and traffic management. It is a unique approach to delivering all the benefits of a traditional router with standard NIC functionality, in a smaller footprint with less power consumption. It is all enabled by Ethernity’s patented packet processing and traffic manager design, ported onto an FPGA.

What differentiates Router-on-NIC from other solutions is the scope of the offload: while other solutions can offload only part of the data plane – Layer 2 only – to accelerate packet processing and delivery, Router-on-NIC can offload the entire data plane, including Layer 3 forwarding and features. The result is a far more efficient and much more compact solution.

Other approaches need to handle Layer 3 functions in software, and as a result use CPU cores. The Ethernity approach – in which everything is done on the FPGA on the NIC – requires no CPU core resources. It needs software only for control.
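As a rough illustration of what offloading Layer 3 forwarding looks like from the software side, the sketch below uses DPDK’s generic rte_flow API to push a single route entry into hardware: match a destination prefix, decrement the TTL, rewrite the MAC addresses, and forward the packet out another port. This is a generic, hedged example rather than the Ethernity SDK, and whether a particular NIC and driver accept such a rule is device-dependent.

```c
#include <string.h>
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Offload one L3 forwarding entry: packets received on in_port whose IPv4
 * destination matches the prefix are routed entirely in hardware. */
static struct rte_flow *
offload_route(uint16_t in_port, uint16_t out_port,
              rte_be32_t dst_net_be, rte_be32_t dst_mask_be,
              const uint8_t next_hop_mac[6], const uint8_t port_mac[6])
{
    /* transfer = 1 places the rule in the switch domain so the PORT_ID
     * action can redirect traffic between physical ports. */
    struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
    struct rte_flow_error err;

    /* Match: IPv4 packets whose destination falls inside the prefix. */
    struct rte_flow_item_ipv4 ip_spec = { .hdr.dst_addr = dst_net_be };
    struct rte_flow_item_ipv4 ip_mask = { .hdr.dst_addr = dst_mask_be };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* Actions: the usual per-hop routing work, done by the NIC. */
    struct rte_flow_action_set_mac dmac, smac;
    memcpy(dmac.mac_addr, next_hop_mac, 6);
    memcpy(smac.mac_addr, port_mac, 6);
    struct rte_flow_action_port_id out = { .id = out_port };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DEC_TTL },
        { .type = RTE_FLOW_ACTION_TYPE_SET_MAC_DST, .conf = &dmac },
        { .type = RTE_FLOW_ACTION_TYPE_SET_MAC_SRC, .conf = &smac },
        { .type = RTE_FLOW_ACTION_TYPE_PORT_ID,     .conf = &out },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(in_port, &attr, pattern, actions, &err);
}
```

Software remains responsible for the control plane (learning the routes), while rules like this keep the per-packet work off the CPU.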

Router-on-NIC provides full routing functionality from an FPGA-based NIC on a standard x86 server for any scenario where a router is needed but space is limited.

It is ideal in use cases where multiple forwarding schemes are needed beyond what a typical SmartNIC can provide. Those include applications such as virtual Broadband Network Gateway (vBNG), 5G User Plane Function (UPF), SD-WAN acceleration, and Cell Site Router.

Virtualizing broadband network gateways allows service providers to keep up with rapidly increasing internet traffic in their network, aggregate access devices, bridge the gap between core and access networks, and enforce QoS policies. But relying solely on software virtualization to accomplish routing tasks can compromise performance speed, determinism, and latency.

For 5G User Plane Function (UPF), Ethernity’s FPGA SmartNIC is ideal for accelerating the 5G data plane. Many of the key operations associated with UPF are handled much more efficiently by the ENET Flow Processor on an FPGA, which helps keep latency to a minimum. Layer 3 traffic offload and other functions are critical for full UPF.

Similarly, Router-on-NIC addresses virtualization of the 5G Distributed Unit with hardware offload for fronthaul aggregation with full Carrier Ethernet Switch, OAM/CFM, and router functionality.

In SD-WAN environments, each connected campus or branch needs an access router in addition to its own enterprise routers. Instead of adding another physical router, Router-on-NIC can be used inside uCPE to effectively create a “double router,” connecting to the enterprise and carrier networks and allowing customers to use their own uCPE to connect to the carrier network.

In each instance, with its offload of Layer 3 networking functions, Router-on-NIC creates a virtualized solution that can adhere to the strictest requirements of bandwidth, latency, determinism, and security.

The features and capabilities of Router-on-NIC are detailed here.

By Brian Klaff

ENET-D FPGA-based Ethernet Controller Advances Network Aggregation https://ethernitynet.com/enet-d-fpga-based-ethernet-controller-advances-network-aggregation/ Thu, 19 Mar 2020 14:01:50 +0000 https://ethernitynet.com/?p=36390

Earlier this week, Ethernity Networks introduced the new ENET-D, an FPGA-based Ethernet Controller DMA engine IP core that efficiently processes millions of data flows and offers performance acceleration for networking and security appliances.

ENET-D is an add-on firmware technology for the ACE-NIC100 SmartNIC that enables customers to further avoid ASIC-based components, reduce power consumption, and conserve valuable silicon real estate. As an FPGA-based Ethernet adapter and DMA (direct memory access) engine, ENET-D eliminates the need for proprietary hardware serving as the Ethernet controller on a network interface card. This allows complete disaggregation of the Ethernet controller on the FPGA SmartNIC.

This disaggregation adds flexibility to the network and saves operators significantly on operating expenses such as power consumption and physical space on the NIC. Moreover, because ENET-D is implemented on fully programmable FPGA hardware, it eliminates the need to replace monolithic ASIC-based hardware with upgrades or new devices that include a broader feature set. Replacing field-deployed hardware to support new functionality can cost up to five times the initial cost of the hardware itself.

When ENET-D is combined with Ethernity’s ENET Flow Processor and run on Ethernity’s cost-optimized and affordable ACE-NIC100, it delivers a complete Router-on-NIC with integrated Ethernet controller. ENET-D can fit into different FPGAs and scales the number of queues, physical functions and virtual functions that can be handled by the NIC. It is capable of connecting to multiple virtual machines, containers, or virtual networking functions. ENET-D also provides both Linux kernel drivers and DPDK drivers.
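For readers who work with DPDK, the snippet below shows the standard, driver-agnostic bring-up that any Ethernet controller with a DPDK poll-mode driver supports: configure a port with several RX and TX queues, attach a packet buffer pool, and start it. It is a generic sketch that assumes a working PMD for the device; it is not ENET-D-specific code.

```c
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return EXIT_FAILURE;
    }

    uint16_t port = 0;
    const uint16_t nb_rxq = 4, nb_txq = 4;   /* scaled per deployment */

    struct rte_eth_dev_info info;
    rte_eth_dev_info_get(port, &info);       /* PMD reports max_rx_queues / max_tx_queues */

    struct rte_eth_conf conf = { 0 };
    if (rte_eth_dev_configure(port, nb_rxq, nb_txq, &conf) < 0) {
        fprintf(stderr, "port configure failed\n");
        return EXIT_FAILURE;
    }

    /* Packet buffer pool shared by all RX queues on this port. */
    struct rte_mempool *mp = rte_pktmbuf_pool_create("mbufs", 8192, 256, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (mp == NULL) {
        fprintf(stderr, "mbuf pool creation failed\n");
        return EXIT_FAILURE;
    }

    for (uint16_t q = 0; q < nb_rxq; q++)
        rte_eth_rx_queue_setup(port, q, 1024, rte_socket_id(), NULL, mp);
    for (uint16_t q = 0; q < nb_txq; q++)
        rte_eth_tx_queue_setup(port, q, 1024, rte_socket_id(), NULL);

    rte_eth_dev_start(port);
    /* ... RX/TX burst loop would follow ... */
    return 0;
}
```

Scaling the queue counts, and the number of physical and virtual functions exposed to guests, is exactly the dimension along which an Ethernet controller such as ENET-D is sized.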

ENET-D supports NFV and 5G customers who demand further disaggregation of their networks. This is an important step toward Ethernity’s vision: that users can purchase an x86 server with a bare-metal FPGA-based NIC, then select the best FPGA firmware for intelligent NIC functionality, in the same way they can currently run any Linux package on a bare-metal server. This will entirely eliminate ASIC vendor lock-in for the emerging edge compute market.

By using Ethernity’s patented technology that reduces the amount of logic used on the FPGA, the ENET-D Ethernet controller firmware consumes much less space on the chip than its competitors. This enables it to fit into a small FPGA and to consume one-half to one-third the power of similar Ethernet controllers on the market.

Ultimately, this is yet another positive development in the advancement of network disaggregation, ensuring that users can achieve top-tier performance with maximum flexibility and choice of vendors while futureproofing their networks.

By Brian Klaff

ACE-NIC100 Accelerates 5G Networks with Wind River Titanium Cloud https://ethernitynet.com/ace-nic100-accelerates-5g-networks-with-wind-river-titanium-cloud/ Thu, 12 Dec 2019 16:31:00 +0000 https://ethernitynet.com/?p=34679

by Barak Perlman

There is an increasing need for flexible cloud-based infrastructure and orchestration solutions to allow for dynamic placement of functions where and when they are needed. When it comes to the Telco edge, the gold standard for OpenStack-based virtualization software platforms is Wind River Titanium Cloud. Titanium Cloud is an ultrareliable deployment-ready cloud platform, capable of handling the rigorous demands of telecommunications and critical infrastructure networks.

Ethernity Networks and Wind River have worked closely together to show that Ethernity’s ACE-NIC100 FPGA SmartNIC can easily integrate with the Titanium Cloud virtualization framework. Titanium Cloud provides an edge-optimized all-in-one installation that runs an operating system with a hypervisor that enables the use of virtual machines (VMs) for network functions that are both lightweight and optimized for Telco.

The result: the ACE-NIC100 can accelerate any VNF on Titanium Cloud using Ethernity’s Router-on-NIC capabilities, with very little integration effort. With the ACE-NIC using standard DPDK API calls for the ENET Flow Processor embedded on the FPGA, an Intel XL710 controller, and well-known i40e drivers, integration is seamless.

The ACE-NIC100 can be a separately managed entity. The ACE-NIC features the ENET Flow Processor, which runs on an FPGA on the ACE-NIC card and can be configured as a full router. This enables the ACE-NIC100 to provide both standard Intel-based Ethernet controller and router functions, achieving a true “Router-on-NIC.” As a Router-on-NIC, the ACE-NIC can provide a wide variety of Telco features through hardware implementation.  

The ENET Flow Processor is configured using the Ethernity SDK (Software Development Kit). The Ethernity SDK for ACE-NIC100 configuration is on-boarded on the Titanium Cloud controller, which is then used to configure policing, classification, and TAG/Tunnel editing. Two Linux virtual machines are instantiated with DPDK, each running the open-source TRex packet generator to emulate realistic traffic flow and provide per-stream statistics.

The integration demonstrates three distinct test configurations:

  1. PCI-Passthrough + DPDK
    In this scenario, both of the ACE-NIC100’s 40GbE interfaces are configured with PCI-Passthrough, binding to DPDK drivers in the virtual machine.
  2. SR-IOV + DPDK
    In this configuration, both of the ACE-NIC100’s 40GbE interfaces are configured with SR-IOV and a VLAN-based provider network, binding to DPDK drivers in the virtual machine. The traffic is run and validated twice – once at full rate, and once when the ACE-NIC100 is configured with per-flow rate-limiting policies using MEA CLI.
  3. SR-IOV + DPDK + QoS
    In this scenario, one ACE-NIC100 40GbE interface is configured with SR-IOV, and the second 40GbE interface is configured as a data interface using Wind River’s Accelerated Virtual Switch (AVS). Wind River’s AVS ports connect a virtual machine to the AVS bound to the DPDK-AVP drivers in the VM. The traffic is validated twice, once at full rate and once when the ACE-NIC100 is configured with per-flow rate-limiting policies and VLAN-based provider networks defined for both interfaces.

The test emulates 40Gbps traffic with high diversity generated by multiple guest instances, connected to multiple virtual functions. The result is that the ACE-NIC100 easily couples with the Wind River AVS and enables cross-VM connectivity, while enforcing the rate-limit policy and other functions based on the virtual function traffic. In fact, the ACE-NIC100 is capable of fully offloading the VMs with high networking load to the FPGA, saving CPU cycles on Titanium Cloud.

The ACE-NIC introduces a router entity to the virtualization environment that allows Access Network traffic to enter Titanium Cloud’s virtual networks (for example, termination of PPPoE traffic or GTP tunnels), provides traffic management toward the external network, and adds VxLAN termination and translation to VLAN. And yet, the Router-on-NIC can be managed as a standalone entity, and the FPGA is transparent to the applications unless configured otherwise.
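As an illustration of the tunnel handling described above, the hedged sketch below expresses a VXLAN-to-VLAN translation through DPDK’s generic rte_flow API: decapsulate traffic carrying a given VNI, push a VLAN tag, and forward it out another port. It is a generic DPDK example, not Ethernity’s configuration interface, and PMD support for these actions varies by device.

```c
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Terminate a VXLAN tunnel (matching one VNI) and hand the inner frame to
 * a VLAN on the egress port, entirely in hardware. */
static struct rte_flow *
vxlan_to_vlan(uint16_t in_port, uint16_t out_port, uint32_t vni, uint16_t vlan_id)
{
    struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
    struct rte_flow_error err;

    /* Match: outer UDP port 4789 and the 24-bit VXLAN network identifier. */
    struct rte_flow_item_udp udp_spec = { .hdr.dst_port = RTE_BE16(4789) };
    struct rte_flow_item_udp udp_mask = { .hdr.dst_port = RTE_BE16(0xffff) };
    struct rte_flow_item_vxlan vx_spec = {
        .vni = { (uint8_t)(vni >> 16), (uint8_t)(vni >> 8), (uint8_t)vni },
    };
    struct rte_flow_item_vxlan vx_mask = { .vni = { 0xff, 0xff, 0xff } };

    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP,   .spec = &udp_spec, .mask = &udp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vx_spec,  .mask = &vx_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* Strip the VXLAN encapsulation, tag the inner frame, and forward it. */
    struct rte_flow_action_of_push_vlan push = { .ethertype = RTE_BE16(0x8100) };
    struct rte_flow_action_of_set_vlan_vid vid = {
        .vlan_vid = rte_cpu_to_be_16(vlan_id),
    };
    struct rte_flow_action_port_id out = { .id = out_port };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
        { .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN,    .conf = &push },
        { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &vid },
        { .type = RTE_FLOW_ACTION_TYPE_PORT_ID,         .conf = &out },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(in_port, &attr, pattern, actions, &err);
}
```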

Moreover, by using SR-IOV for flows that do not run through the AVS, Ethernity has overcome the difficulty of supporting container-based virtualization. By instantiating multiple containers within a single VM, Ethernity was able to achieve per-container provisioning in hardware, classifying the different flows arriving at the containers by MAC address or VLAN while applying per-flow policies. Ethernity can enable provisioning per container by providing logic through the FPGA SmartNIC.

Furthermore, Ethernity was able to overcome the limitations of SR-IOV in terms of number of virtual functions. The FPGA SmartNIC can enable scalability to thousands of virtual functions (and therefore thousands of containers) within a single VM – classified, provisioned, and with policy application per-flow in the hardware.

“Service providers are seeking validated and market-ready end-to-end cloud solutions. To address this need, collaboration across the ecosystem is vital. We are working with innovators such as Ethernity Networks to create optimized, interoperable solutions for service providers and TEMs. By leveraging pre-validated virtual network elements, service providers can quickly achieve their goals such as reducing OPEX while accelerating the introduction of new high-value services,” noted Paul Miller, vice president of Telecommunications at Wind River.

By integrating Wind River AVS with the FPGA-based ACE-NIC100, Ethernity can deliver an approach capable of achieving efficient, scalable, high-performance virtualized networking.

By working smoothly with Titanium Cloud, the ACE-NIC100 accelerates VNFs and adds routing functionality with very little required effort.

By Barak Perlman, CTO Ethernity Networks

Wind River is a global leader in delivering software for the intelligent edge. The company’s technology has been powering the safest, most secure devices in the world since 1981, and is found in more than 2 billion products. Wind River offers a comprehensive portfolio supported by world-class global professional services and support and a broad partner ecosystem. Wind River software and expertise are accelerating digital transformation of critical infrastructure systems that demand the highest levels of safety, security, performance, and reliability. To learn more, visit Wind River at www.windriver.com.

Hot Topics at DPDK Summit 2019 https://ethernitynet.com/hot-topics-at-dpdk-summit-2019/ Thu, 05 Dec 2019 13:18:46 +0000 https://ethernitynet.com/?p=34529

by Brian Klaff

At the annual DPDK Summit North America, Ethernity Networks CTO Barak Perlman was a featured speaker, providing insight into the role DPDK plays in 5G UPF hardware offload.

In fact, 5G was a topic mentioned in a couple of presentations at the DPDK Summit this year. While few companies in attendance were as intimately aware of the trends within the telecommunications industry as Ethernity, a few speakers addressed edge virtualization, UPF offload, and/or GTP tunnels, all of which are integral components of a complete 5G deployment.

Barak’s presentation provided both an informative overview of UPF for those who were less familiar with it and insightful ideas on how UPF offload can be achieved for those who work with FPGA SmartNICs. The main discussion centered on using DPDK hardware offload APIs as the interface between the UPF software and the SmartNIC. This seems to be a natural choice for DPDK users, and yet this is the first time the topic was formally presented.

Perhaps most relevant to the DPDK community were Barak’s suggested improvements to rte_flow in order to facilitate full UPF offload. Rte_flow was another hot topic at the Summit, with Intel, Mellanox, and Microsoft addressing the benefits and challenges of working with this hardware offload API. Intel presented a good overview of the rte_flow library and discussed the practical implications of using rte_flow for partial or full offload of OVS traffic.
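To ground the discussion, here is a hedged, generic rte_flow sketch of the most basic building block of UPF offload: classifying a single GTP-U session by its TEID in the NIC and steering it to a dedicated queue. It deliberately stops at classification; a complete UPF offload also needs decapsulation, QoS enforcement, and usage counting in hardware, which is exactly where the suggested rte_flow extensions come in.

```c
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Match one GTP-U session (outer UDP port 2152, exact TEID) and steer it. */
static struct rte_flow *
steer_gtpu_session(uint16_t port_id, uint32_t teid, uint16_t rx_queue)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_error err;

    struct rte_flow_item_udp udp_spec = { .hdr.dst_port = RTE_BE16(2152) };
    struct rte_flow_item_udp udp_mask = { .hdr.dst_port = RTE_BE16(0xffff) };
    struct rte_flow_item_gtp gtp_spec = { .teid = rte_cpu_to_be_32(teid) };
    struct rte_flow_item_gtp gtp_mask = { .teid = RTE_BE32(0xffffffff) };

    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP,  .spec = &udp_spec, .mask = &udp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_GTPU, .spec = &gtp_spec, .mask = &gtp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* Tag the session with its TEID and pin it to one RX queue; with hairpin
     * queues the packets need never touch the CPU at all. */
    struct rte_flow_action_mark mark = { .id = teid };
    struct rte_flow_action_queue queue = { .index = rx_queue };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK,  .conf = &mark },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, &err);
}
```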

Microsoft brought significant weight to the discussion as an rte_flow user in its cloud environments, and highlighted the fact that rte_flow is the de facto standard for hardware offload, supported by numerous NIC vendors. However, Microsoft spoke of their concern that some APIs might behave differently with different vendors, and suggested ideas for testing and resolving this issue.

Mellanox also spoke of rte_flow in relation to enabling hairpin offloading, in which data never needs to reach the CPU, but rather is entirely processed within the NIC. This was validation of the method that Ethernity suggested in its UPF presentation, offering the same packets-per-second performance with zero CPU involvement compared to the many cores required by partial offload. This allows organizations to Stop Burning CPU Cores!

One more hot topic at the conference was containers, which are rapidly supplanting virtual machines as the go-to technology for embedding applications in a virtual environment. Multiple sessions were devoted to applying DPDK to container applications and scenarios, as well as DPDK’s interplay with Kubernetes, the open source system for orchestrating and managing containers.

Overall, the DPDK Summit was an opportunity for some of the leading companies in the open-source networking world to share their experience and expertise toward improving this primary set of API libraries and drivers for network processing. Ethernity was happy to take an active role in these discussions.

Open and Virtualized Access Was Trending at BBWF2019 https://ethernitynet.com/open-and-virtualized-access-was-trending-at-bbwf2019/ Thu, 10 Oct 2019 16:56:00 +0000 https://ethernitynet.com/?p=33570

by Eugene Zetserov

Broadband World Forum has traditionally been an event for fixed networks, but in recent years it has addressed the needs of fixed, mobile, and cable networks, covering a variety of topics, including 5G, virtualization, open source, NG PON, DOCSIS, and edge computing. BBWF2019, earlier this month in Amsterdam, was all about providing reliable connectivity and delivering new high-speed services in broadband access networks that leverage a variety of technologies, virtualization, and open source principles.

5G and new services that require ultra-low latency are totally changing the required network infrastructure, demanding gigabit connectivity and 100% coverage, which are not so simple to achieve. The paradigm of the Network Edge has been conceptualized as a solution for implementing IoT and handling the special requirements of high-resolution video content and low-latency applications, such as interactive gaming, autonomous cars, and AR/VR. But the Edge differs from the core in its physical infrastructure, its ownership, and in the tasks necessary to fulfill its connectivity and service requirements. At the same time, the Edge must be a natural extension of the core network from the perspective of management and operations. It needs to run applications migrated from the core and provide the relevant resources despite its restricted capabilities. This means it must offer high throughput on the one hand and low latency and low power consumption on the other.

To this end, the Broadband Forum and the Open Networking Foundation (ONF) announced at the start of the 2019 event in Amsterdam a new agreement to pair some of their projects that provide abstraction of broadband access, namely Open Broadband-Broadband Access Abstraction (OB-BAA), SDN Enabled Broadband Access (SEBA), and Virtual OLT Hardware Abstraction (VOLTHA), helping operators who are looking to interconnect different parts of their network with open source solutions and systems from various suppliers.

A primary example was showcased by ONF at BBWF2019. ONF presented its SEBA platform for vBNG, which supports a multitude of virtualized access technologies at the edge of the carrier network. SEBA integrates software components and common APIs for handling interoperability, together with acceleration products, including SDN switches and SmartNICs that support separation of the control and user planes (CUPS). The SmartNICs provide the necessary acceleration at the Edge, where compute and power resources are especially limited, but high performance and low latency are still required.

ONF, of course, was founded under the premise that telecom operators want to avoid any vendor lock-in, not only of system vendors, but also chipmakers. The market has evolved to demand open solutions not only for software, but also for acceleration, and the FPGA is the top option for providing a solution. There are numerous FPGA flavors, ranging from low-throughput models up to versions that offer hundreds of gigabits of throughput with integrated ARM and DDR for a simplified, optimized data processing experience. FPGAs are programmable, even once field-deployed, meaning they are future-proof, and they are becoming much more accessible with newly available tools to enable the community to contribute to further FPGA development. This is exactly where Ethernity Networks enters the picture by offering FPGA-based acceleration solutions to the portfolio of the Network Edge.

ONF is now planning to show a 5G User Plane Functionality (UPF) SEBA platform at Mobile World Congress in February. This should be of particular interest to operators and system integrators.

Ethernity is ready with its 5G UPF Acceleration solution, which leverages our patented ENET Flow Processor technology along with standard DPDK APIs to offload the data plane to an FPGA-based SmartNIC. Our ACE-NIC100  integrates easily with third-party UPF software networking elements from any vendor to offload user plane data, thereby releasing server CPU cores, enhancing scalability, assuring deterministic performance, improving latency, and providing future-ready programmability. This accelerates the entire 5G network with the lowest possible TCO.

Who Will Own the Network Edge? https://ethernitynet.com/who-will-own-network-edge/ Tue, 22 Oct 2019 10:59:27 +0000

by Mark Reichenberg

As both telecom operators and cloud providers move their computing resources to the network edge, to bring applications closer to users and reduce latency, the question has arisen as to who the dominant players will be.

Whereas the trend for the past decade or so has been for the big three cloud companies to control much of the technological advancement and revenue streams related to cloud services, the move to edge computing is suggesting a shift of that industry trend toward the telecom operators.

Since operators already own so much existing infrastructure at the network edge, the pendulum is swinging in their direction. We are seeing operators turning existing central offices into advanced, next generation versions that serve as new mini-data centers nearer to the biggest sources of required resources.

You can be sure the big three cloud companies – Amazon, Microsoft, and Google – are concerned. They have built truly impressive centralized data centers, which still have tremendous value, but which are just too far away in a world that is becoming increasingly concerned with communication latency. Latency is a critical issue in the age of IoT and real-time healthcare, manufacturing, and other applications.

In a recent Bloomberg article, industry analyst Chetan Sharma had this to say: “Over time, cloud will be primarily used for storage and running longer computational models, while most of the processing of data and AI inference will take place at the edge.” Sharma sees a huge edge market in the next decade – worth more than $4 trillion by 2030.

And as we noted, who dominates the edge? It is the operators and owners of cellular towers, who control the valuable real estate that comprises today’s edge, and these assets will only become more precious in the coming years.

Sharma contends that the big cloud players are realizing that it’s in their best interests to partner with operators in order to get access to the edge. But again, that reinforces the controlling position that operators could hold going forward.

We have talked a lot about the edge, both here and in our white paper, Enabling the Virtualized Edge with SmartNIC Data Acceleration. The white paper addresses important issues about the edge, including ways that operators can meet the huge demand for delivering virtualized services efficiently given the scarcity of space and power at the network edge. It also touches on the need for the cloud providers to extend their cloud networks closer to the edge to benefit from the low latency provided by being physically close to end users.

Because edge sites are – and will be – compact, with a reduced physical and power footprint, operators are going to need to provide maximum levels of computing, networking, and security in a small space. That is where FPGA-based SmartNICs are the ideal solution, as they are much more space-conscious and power-efficient than adding servers, and they are optimized for networking and security functions, freeing CPUs to handle the control functions and user applications for which they were intended.

 

Bringing 5G Broadband to Developing Regions Requires Network Acceleration https://ethernitynet.com/bringing-5g-broadband-to-developing-regions-requires-network-acceleration/ Mon, 14 Oct 2019 17:36:00 +0000 https://ethernitynet.com/?p=34690

by David Levi

Residents of technologically advanced countries sometimes take for granted the blessings of instantaneous access to the wealth of information on the internet, the almost infinite options for media and entertainment, and connections to friends, family, and colleagues anywhere. All those advantages are only getting better with the advent of 5G broadband. With 5G will come interconnectivity between devices, the Internet of Things (IoT), augmented/virtual learning and gaming, and other advanced latency-sensitive applications.

But what about the developing world? 5G networks’ potential to replace physical fiber networks and better penetrate areas currently lacking a real broadband infrastructure could transform these regions economically.

Broadband is clearly a driver for economic growth. A World Bank report calculated that for every 10 percentage points by which a developing economy improves broadband penetration, it benefits from nearly 1.4 percent additional GDP growth.

This advancement creates new jobs in cities and rural areas. Existing businesses gain access to innovations and new business models. Small businesses can grow and even become globally competitive. The quality, skills, and technological savvy of local workforces improve.

5G broadband can also improve education, health care, and emergency services. The ultra-low latency anticipated with 5G will enable instantaneous communications for medical imaging, monitoring, and screening, and make remote surgery possible. With 5G-connected IoT sensors, agricultural growth can be optimized through better water and fertilizer management, reducing the risks of droughts and agricultural crises and aiding the environment.

But there are challenges to bringing 5G broadband to the developing world. First is a lack of infrastructure; there is little in the way of cell towers and base stations, and much of the equipment currently deployed is too antiquated to ever support 4G. We need a significant leap – essentially skipping a generation of technology – but this requires a large investment and some hard choices.

Financial realities inhibit service providers when it comes to investing in rural and underdeveloped areas. In the face of the high costs of network deployment and management, plus a generally low level of revenue per user, providers must keep costs as low as possible. And with concerns about the reliability and availability of power in many developing areas, a low-power solution is an absolute requirement for 5G broadband projects.

If service providers focus on deploying distributed compute resources closer to the network’s edge instead of laying fiber, they will be more likely to be able to support the high-bandwidth, low-latency services that will most benefit the developing world. The key is to move from a heavy network infrastructure to the network of the future – one that is agile, virtualized, and cloud-based, with smaller and more efficient virtual central offices strategically located at the edge of the network.

Network acceleration is critical for reducing latency and minimizing costs associated with large rural 5G deployments. Such solutions can save space and power, especially when the network acceleration is performed with hardware that can replace traditional server cores. For example, by using FPGAs embedded in network adapters – SmartNICs – to handle all data plane functions, operators can significantly reduce the need for servers, saving both space and power.

Programmable for a wide range of functions, FPGAs offer deterministic, low latency performance and low power and space requirements, along with high flexibility and long-term cost-effectiveness for carriers. This enables easy future upgrades compared with other solutions.

Ultimately, to achieve the great benefits of 5G broadband, FPGA-based network acceleration will be a key element in reducing power, space, and cost. It will further enable the high bandwidth and low latency needed for delay-sensitive services – the kinds of services that can positively transform remote rural communities and underprivileged urban centers.

Filling the Needs at Network Edge https://ethernitynet.com/filling-the-needs-at-network-edge/ Wed, 05 Jun 2019 14:09:00 +0000 https://ethernitynet.com/?p=34454

by Shavit Baruch

With the imminent spread of 5G and IoT as well as greater demands on enterprise WANs, there is a need for network edge platforms that provide efficient and secure high-speed connectivity.

That is how Roy Chua, principal at the research and analyst firm AvidThink, describes the situation. And that is the rationale behind our new ENET Universal Edge Platform, which we introduced last week.

The ENET UEP is optimized for network edge applications. It offers high performance, solid security, and nearly unlimited flexibility in its protocol and port configurations. Those are critical needs at the edge, where service providers need to be able to do more in less space and with lower power consumption.

That is true for whatever form the network edge takes, whether that is a cell tower, a street cabinet, a multi-dwelling distribution/cable box, or a remote business office.

The ENET UEP is an edge-optimized, compact, low-power, and FPGA-based (so it is highly programmable) network appliance. On top of its 40 Gbps of networking capacity and 10 Gbps of IPSec security, its modular design makes it easily adaptable for use cases such as:

  • High-end network interface device (NID) for demarcation of the WAN from the LAN networks. With its dual-core ARM processors, the ENET UEP can handle all control functions, while the onboard FPGA handles the data path.
  • Mobile backhaul with XGS-PON. Because it is compact and power-conscious, the ENET UEP can be located at cellular base stations to provide cell site aggregation and XGS-PON connectivity with the optical line termination.
  • Distribution point unit or multi-dwelling unit. For DPU or MDU applications, the ENET UEP can be converted via its modularity to handle G.fast, and in addition it can offer cascading switching for even greater distribution capacity.
  • Internet of Things. The ENET UEP can support IoT aggregation elements, such as a radio modem for the IoT sensor network.

One major differentiator for the ENET UEP is that it offers a unique PCIe connection to any standard server, which enables it to be used for NFVI acceleration. Analyst Roy Chua drew attention to that, noting that the flexibility afforded by the PCIe connection to a standard server allows the ENET UEP to act as a viable accelerator to NFV workloads at the network edge.

More information about the ENET UEP is available here.

Thoughts on Edge Computing Congress https://ethernitynet.com/thoughts-edge-computing-congress/ Thu, 26 Sep 2019 10:51:39 +0000

by Brian Klaff

At the 2019 edition of the Edge Computing Congress last week, Ethernity Networks was in attendance to promote our 5G UPF Acceleration solution and Universal Edge Platform network appliance. The conference had been held in Berlin the previous two years, but it moved to London this year, with an impressive agenda of top speakers and a small but power-packed exhibition area.

The most prevalent topic of conversation was, of course, Multi-access Edge Computing (MEC), with emphasis on both the required infrastructure to enable the edge and the development of the applications that will benefit from edge computing. The clear takeaway from the two days of sessions was that operators are ready to invest heavily in the edges of their networks.

There are various reasons for this, but first and foremost among them is the push to implement 5G. There is no doubt that 5G is coming, with initial trials already underway and widespread rollouts expected in 2020. But from the analysts that Ethernity spoke to, the overarching sense is that operators have both high expectations and serious flaws in their planning. For example, many 5G deployments are barreling ahead and relying heavily on virtualization without considering whether the existing hardware is able to support such a network. In one presentation, Julian Bright of Ovum emphasized the need for flexible, scalable hardware that is capable of being hosted in any location, from cell sites to remote closets.

Of course, this plays perfectly into Ethernity’s strength. Our ACE-NIC100 FPGA SmartNIC is ideal for deployment at the edge, saving customers precious space and power, while providing high performance that can easily scale to handle even the most demanding access networks.

Despite the commitment to spend on edge computing, a number of major concerns still exist.  For example, given the need to build hundreds of edge facilities, operators worry about the total cost of ownership, including maintaining a workforce across the edge. Operators therefore need networking automation at the edge, and Ethernity’s flow processors can help support such automation with programmable OAM features on the FPGAs.

Another issue that operators face at the edge is that of standards, regulation, and SLAs. The ETSI MEC Industry Specification Group is working on yet another phase of specifications and edge computing use cases for telecom companies that, unlike their cloud provider counterparts, are heavily regulated and obliged to maintain a high quality of service experience. Ethernity’s 15 years of networking experience and our support for carrier-grade SLAs and hierarchical quality of service on our ACE-NICs make us an ideal partner for operators with edge ambitions.

A couple of other questions regarding 5G permeated the event:

  • Is the race to implement 5G-capable infrastructure driving developments in edge computing, or are developments in low-latency-dependent applications, such as Industrial IoT, autonomous vehicles, augmented/virtual reality, and gaming, the stronger drivers for progress at the edge?
  • Are the 5G trials and initial deployments realistic indicators of success, or are the current bandwidths still too low to predict success upon full rollout?

Ethernity is confident that the ideal 5G deployment will combine network virtualization with hardware acceleration. By offloading the 5G User Plane Function to an FPGA SmartNIC, the server CPUs can be dedicated to the functions they are better at handling, namely control functions and user applications.

FPGAs are better optimized for data transport than CPUs, and Ethernity’s patented ability to reduce the space required for flow processor logic allows our FPGA SmartNIC to include full router capability directly on the NIC. This unlocks the potential for tremendous savings in space, power, and CPU utilization for cost-conscious telecom operators.

After many positive conversations with industry analysts and potential partners and customers, we are already looking forward to the 2020 Edge Computing Congress. By then we expect to know a lot more about 5G progress and the need for Ethernity’s solutions.
