At MWC 2018, I had the honor of participating in a panel discussion titled “Do 5G Business Cases Depend More on Core or Edge Upgrades?”. The panel featured an impressive group of representatives from across the global telecom industry, split into teams to advocate one side over the other. I was a proud member of “Team Core” and focused on the importance of a 5G core for its ability to deliver tailored “slices” of the network to enterprise customers, offering significant growth prospects for network operators providing connected services such as IoT and M2M. Although it was a lively and sometimes heated debate, it did address the significant changes that 5G will bring to networks as we know them.
Among other things, 5G will provide dramatically faster speeds, and thus greater overall bandwidth, which sounds great for wireless devices. But mobile networks don’t exist by themselves. The new 5G network and the devices that use it will need a network that supports them on the back end, so that the data they need and the computing services they require can be available with as little latency as possible. That low-latency requirement will be more insistent than ever as services like self-driving vehicles will need to transfer data almost instantly in order to do their jobs.
Latency can be thought of as a network delay, but it’s really caused by several factors, the most basic of which is the speed of light in glass fiber. The longer the distance a data packet has to travel on a network, the longer it’ll take to get to its destination. While it’s still measured in tiny fractions of a second, those fractions add up as other factors join in. So does the time it takes a server, and whatever application or database it’s running, to find the information you need and send it back to you. As the network gets busier, and the network infrastructure becomes less able to cope with the traffic, latency increases. This is especially true with servers as they become overloaded.
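To make that distance factor concrete, here is a minimal back-of-the-envelope sketch. It assumes light travels at roughly 200,000 km/s in glass fiber (about two-thirds of its speed in a vacuum, because of the fiber's refractive index of roughly 1.5); the distances chosen are illustrative, and real-world latency adds routing, queuing, and server processing time on top of this floor.

```python
# Propagation delay over optical fiber: the physical floor on latency.
# Light in glass fiber travels at roughly 200,000 km/s.
SPEED_IN_FIBER_KM_PER_S = 200_000  # approximate

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay, in milliseconds, over fiber."""
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

# Those "tiny fractions of a second" add up with distance:
for km in (100, 1_000, 4_000):
    print(f"{km:>5} km: {propagation_delay_ms(km):.2f} ms one-way")
```

Even before any processing happens, a packet crossing 4,000 km of fiber spends about 20 ms in transit each way, which is already a large share of the single-digit-millisecond budgets discussed for 5G use cases.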
Because communicating with a centralized computing and data repository takes time, the only way to save time (i.e., decrease latency) is to avoid using that centralized repository—which means moving big chunks of your network’s computing power to the edge of the network. The result is something called “edge computing,” with architectures referred to as “edge cloud computing,” which in turn use things called “cloudlets,” an approach sometimes called “fog computing.” A key driver is mobile computing, which necessarily uses data at the edge.
The edge of the network is the part that’s closest to the ultimate user. By moving the data to the edge of the network, you cut down on delays in two ways. First, you reduce the distance between the user of the data and the place where it’s stored (the repository), which reduces the time it takes data to move back and forth. Second, by keeping just the required data near the user, you also reduce the amount of data that the server has to handle, which speeds things up further.
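The distance effect can be sketched with the same fiber-speed approximation as before. The two distances below are hypothetical assumptions (a far-away central data center versus a metro-area edge site), and they capture only propagation time, not the reduced server load that the second point describes.

```python
# Round-trip propagation time: distant central cloud vs. nearby edge node.
# Assumes ~200,000 km/s signal speed in fiber; distances are illustrative.
SPEED_IN_FIBER_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Round-trip (there and back) propagation delay in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

CENTRAL_DC_KM = 2_000  # hypothetical distant central data center
EDGE_SITE_KM = 20      # hypothetical metro edge site near the user

print(f"central: {round_trip_ms(CENTRAL_DC_KM):.2f} ms round-trip")
print(f"edge:    {round_trip_ms(EDGE_SITE_KM):.2f} ms round-trip")
```

Under these assumptions, the edge round trip is a hundredth of the central one (0.2 ms versus 20 ms), which is the basic arithmetic behind moving data and compute closer to the user.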
While it’s common to assume that cloud and edge computing are competing approaches, that is a fundamental misunderstanding of the concepts. Edge computing describes a computing topology that places content, computing, and processing closer to the user or “things”—the edge of the network. Cloud is a system where technology services are delivered using internet technologies, but it does not dictate centralized or decentralized service delivery. When implemented together, the cloud provides the service-oriented model, and edge computing offers a delivery style that allows disconnected aspects of a cloud service to execute close to the user.
[Keywords] Core, Edge, 5G, computing at the edge