by Mark Reichenberg
As both telecom operators and cloud providers move computing resources to the network edge to bring applications closer to users and reduce latency, a question arises: who will the dominant players be?
For the past decade or so, the big three cloud companies have controlled much of the technological advancement and revenue tied to cloud services, but the move to edge computing suggests that the balance is shifting toward the telecom operators.
Since operators already own so much of the existing infrastructure at the network edge, the pendulum is swinging in their direction. We are seeing operators turn existing central offices into advanced, next-generation facilities that serve as mini data centers located close to where data is generated and consumed.
You can be sure the big three cloud companies – Amazon, Microsoft, and Google – are concerned. They have built truly impressive centralized data centers, which still have tremendous value but which are simply too far from end users in a world increasingly sensitive to communication latency. Latency is a critical issue in the age of IoT and of real-time healthcare, manufacturing, and other applications.
In a recent Bloomberg article, industry analyst Chetan Sharma had this to say: “Over time, cloud will be primarily used for storage and running longer computational models, while most of the processing of data and AI inference will take place at the edge.” Sharma sees a huge edge market in the next decade – worth more than $4 trillion by 2030.
So who dominates the edge? As we noted, it is the operators and the owners of cellular towers, who control the valuable real estate that makes up today’s edge, and those assets will only become more precious in the coming years.
Sharma contends that the big cloud players are realizing it is in their best interest to partner with operators to gain access to the edge. But again, that only reinforces the controlling position operators could hold going forward.
We have talked a lot about the edge, both here and in our white paper, Enabling the Virtualized Edge with SmartNIC Data Acceleration. The white paper addresses important issues about the edge, including ways that operators can meet the huge demand for delivering virtualized services efficiently given the scarcity of space and power at the network edge. It also touches on the need for the cloud providers to extend their cloud networks closer to the edge to benefit from the low latency provided by being physically close to end users.
Because edge sites are – and will be – compact, with a limited physical and power footprint, operators will need to deliver maximum levels of computing, networking, and security in a small space. That is where FPGA-based SmartNICs are the ideal solution: they are far more space- and power-efficient than adding servers, and they are optimized for networking and security functions, freeing CPUs to handle the control functions and user applications for which they were intended.
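To make the space-and-power argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every capacity, power, and rack-space figure in it is a hypothetical placeholder, not a vendor measurement; the point is simply how the math changes when a site has little room or power left for additional servers.

```python
# Illustrative sketch (all numbers are hypothetical assumptions, not measured data):
# two ways to add packet-processing capacity at a space- and power-limited edge site,
# racking more CPU-only servers vs. adding FPGA SmartNICs to servers already in place.

SPARE_POWER_W = 2_000      # assumed power headroom left at the edge site
SPARE_RACK_U = 4           # assumed rack units still free
FREE_PCIE_SLOTS = 6        # assumed existing servers with an open SmartNIC slot

SERVER_W, SERVER_U, SERVER_GBPS = 450, 2, 40   # assumed CPU-only server
NIC_W, NIC_GBPS = 75, 100                      # assumed FPGA SmartNIC (uses no extra rack space)

# Extra servers are limited by both power and rack space;
# SmartNICs are limited by power and by free slots in existing servers.
extra_servers = min(SPARE_POWER_W // SERVER_W, SPARE_RACK_U // SERVER_U)
extra_smartnics = min(SPARE_POWER_W // NIC_W, FREE_PCIE_SLOTS)

print(f"More servers:     {extra_servers} x {SERVER_GBPS} Gbps = "
      f"{extra_servers * SERVER_GBPS} Gbps added")
print(f"SmartNIC offload: {extra_smartnics} x {NIC_GBPS} Gbps = "
      f"{extra_smartnics * NIC_GBPS} Gbps added, with host CPUs left free "
      f"for control functions and user applications")
```

Under those assumed numbers, the SmartNIC path adds several times the throughput of the extra-server path within the same power and space budgets, while leaving the existing CPUs available for the control-plane and application work they were intended to run.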