Hyperscale And Edge Computing: The What, Where And How

Networks are adapting to meet the upcoming explosion of latency-sensitive data.


We hear a lot about “edge computing” these days. We are approaching an era in which unfathomable amounts of data are created that need to be transmitted, stored, processed and made sense of. As we witness never-before-seen scaling in all of those domains, the term “hyperscale computing” was coined. But what about the edge? As it turns out, the definition seems to have changed over time! What is the edge? Where is it? How is it defined?

This is a follow-on to my last post, “The Four Pillars Of Hyperscale Computing.” As emphasized by Facebook’s Vijay Rao, director, Technology and Strategy, during a CadenceLIVE keynote, the pillars are compute, storage, memory and networking. When going beyond the data center, it is probably fair to define hyperscale computing as the cycle of sensing and creating data, transmitting it through networks, and processing and storing it to eventually make sense of that data to create actionable results. Let’s look at the four pillars from above in that broader context, as illustrated here:


Figure: Hyperscale computing, from endpoints through networks to the data center.

How do we quantify what to expect in this scenario? Let’s look at the predictions.

For data volume, traffic through networks is a key indicator, as the networks throttle how much data can be transmitted for processing outside of the sensors themselves. The latest Ericsson mobility report, “Mobile Network Evolution” from June 2020, provides a treasure trove of insights into the underlying drivers generating data and the expected speed of the transition from 4G to 5G. Video already accounted for 63% of 2019’s traffic of 33 exabytes per month and is predicted to grow to 76% of an estimated 164 exabytes per month in 2025. By then, 5G adoption could reach 2.8 billion subscriptions, and 5G population coverage is forecast to reach 55%, plus another 10% served via existing 4G networks.
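To put those report figures in perspective, here is a minimal back-of-the-envelope sketch in Python; the totals and video shares are from the report, while the arithmetic and variable names are my own:

```python
# Back-of-the-envelope math on the Ericsson mobility report figures.
# Totals are exabytes (EB) per month; shares are the video fraction of traffic.
traffic = {
    2019: {"total_eb": 33, "video_share": 0.63},
    2025: {"total_eb": 164, "video_share": 0.76},  # forecast
}

for year, t in traffic.items():
    video_eb = t["total_eb"] * t["video_share"]
    print(f"{year}: {video_eb:.1f} EB/month of video "
          f"({t['video_share']:.0%} of {t['total_eb']} EB/month)")

# Video traffic alone grows roughly 6x: ~20.8 EB/month -> ~124.6 EB/month.
```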

In addition, as network topologies change, by the end of 2025, 25% of the world’s mobile network data traffic is forecast to be fixed wireless access. Personally, I am already down to only two communication expenses—my cable home internet and my mobile phone. I already replaced cable TV, wired phone, and my alarm system with the “pure internet” option, saving quite a bit. The next step will be the full consolidation, getting down to only one vendor.

As for storage, IDC predicts that the “global datasphere” will grow to 175 zettabytes by 2025. The Seagate Technology report, “Data Age 2025 – The Digitization of the World,” gives some key insights into how much data there is estimated to be, and where and how it is stored. For instance, the share of data stored at the endpoint is predicted to “plummet” to just above 20% by 2025, with about 10% at the edge and the rest in core data center storage.
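Taking IDC’s 175-zettabyte forecast together with that predicted distribution, a quick sketch shows where the bytes would live; the shares are approximations of the report’s “just above 20%” and “about 10%” figures:

```python
# Rough split of the forecast 175 ZB global datasphere in 2025.
# Shares approximate the report's figures: ~20% endpoint, ~10% edge,
# with the remainder in core data center storage.
DATASPHERE_ZB = 175
shares = {"endpoint": 0.20, "edge": 0.10}
shares["core"] = 1.0 - sum(shares.values())

for tier, share in shares.items():
    print(f"{tier:8s}: {DATASPHERE_ZB * share:6.1f} ZB ({share:.0%})")
# endpoint:   35.0 ZB (20%)
# edge    :   17.5 ZB (10%)
# core    :  122.5 ZB (70%)
```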

By the way, I admit that I had to look it up to get a feel for the scale. It goes tera, peta, exa, zetta, in multiples of 1,000. Most of us have several terabytes on our desk as external storage. A terabyte is as big as all the X-rays in a large hospital, and my data plan allows the download of 1.2TB per month. A petabyte is half the contents of all US academic research libraries. An exabyte holds about one-fifth of the words people have ever spoken. The information in a zettabyte (175 of them in 2025!) is as much as all the grains of sand on all the world’s beaches. Next up is the yottabyte, as much information as there are atoms in 7,000 human bodies. “Ginormous” is the term my daughter used for these amounts when she was about four.
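For the scale-challenged (myself included), here is that ladder in code, plus what a single zettabyte means measured against the 1.2TB/month data plan mentioned above:

```python
# The ladder of decimal storage units, each step a factor of 1,000.
units = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
for i, u in enumerate(units, start=1):
    print(f"1 {u}byte = 10^{3 * i} bytes")

# How long to download one zettabyte on a 1.2 TB/month data plan?
zettabyte_tb = 10**9  # 1 ZB = 1,000,000,000 TB
months = zettabyte_tb / 1.2
print(f"~{months / 12:.1e} years at 1.2 TB/month")  # ~6.9e+07 years
```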

Back to the edge. Karim Arabi, in his DAC 2014 keynote, “Mobile Computing Opportunities, Challenges and Technology Drivers,” defined edge computing broadly as “all computing outside the cloud happening at the edge of the network.” Cloud computing would operate on big data, while edge computing operates on “instant data,” i.e., real-time data generated by sensors or users. In the tutorial “Next-Generation Verification for the Era of AI/ML and 5G” that I organized at DVCon 2020, I referenced industry data that timed “devices/things” at <5ms, “edge computing nodes” at <5ms, “network hubs and regional data centers” at 10-40ms, the “core network” at <60ms and the “cloud data center” at ~100ms. More recently, I heard analysts define edge computing as everything within 20ms latency, as indicated in the figure above.
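Those latency tiers can be read as a placement decision: given an application’s latency budget, how far from the sensor can its compute live? Here is an illustrative sketch using the tier numbers cited above; the budgets and the helper function are my own assumptions, not part of any of the sources:

```python
# Illustrative placement of compute by latency budget, using the tier
# latencies cited above (milliseconds, treated as assumed upper bounds).
TIERS = [
    ("device/thing", 5),
    ("edge computing node", 5),
    ("network hub / regional data center", 40),
    ("core network", 60),
    ("cloud data center", 100),
]

def feasible_tiers(budget_ms: float) -> list[str]:
    """Return the tiers whose latency fits within the given budget."""
    return [name for name, latency in TIERS if latency <= budget_ms]

# A 20ms budget (the analysts' "edge" boundary) keeps compute close:
print(feasible_tiers(20))   # ['device/thing', 'edge computing node']
print(feasible_tiers(100))  # all five tiers qualify
```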

So, what’s the bottom line?

Just five years from now, by 2025, sensors will create exabytes of data per day, to be transmitted through next-generation networks at the lowest latencies possible and stored as zettabytes in the global datasphere. Combined with consumer expectations of instantaneous responses to all their needs, networks, storage and compute must “hyper-scale” to speeds and capacities that are hard to comprehend, hence the term hyperscale computing.

There are definitely different types of edges in play, some closer to the network, like an “inner edge,” and “outer edges” closer to the sensors in the real world. Topics like low power, compute latencies, security, network coverage and speed play key roles that translate into measurable effects for the consumer. For instance, think of the time it takes to compute an athlete’s performance during a workout, or the time it takes Siri, Alexa and others to respond to an audio-captured question. This time drives consumer decisions and is directly impacted by network latency, availability of data at the edge, and compute performance, whether at the inner or outer edge or in the data center.
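As a final illustration, the consumer-visible response time is roughly the sum of the hops involved. A minimal sketch, with every number an illustrative assumption rather than a measurement:

```python
# End-to-end response time for a voice-assistant query, summed over
# the stages involved. All numbers are illustrative assumptions.
stages_ms = {
    "audio capture at device": 10,
    "uplink to edge/cloud": 30,
    "inference compute": 50,
    "downlink of response": 30,
}

total = sum(stages_ms.values())
print(f"total response time: {total} ms")  # 120 ms
# Shaving network latency (e.g., serving inference at an inner edge)
# is often the cheapest way to get under a responsiveness target.
```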

EDA, as part of the technical software markets, is at the center of enabling all this: from advanced mixed-signal technology enabling sensors, through radio frequency (RF) capabilities enabling 5G networks and beyond, emulation and prototyping to verify the most complex AI/ML and high-performance computing (HPC) designs, key design and processor IP enabling high-speed interfaces and AI, audio and vision processing, and advanced-node implementation approaching 3nm silicon technology, all the way to advanced 3D-IC packaging technologies implementing increasingly heterogeneous components from raw chiplets, plus system analysis of multi-physics effects, including electromechanical, power and thermal. Computational software—the complex algorithms and sophisticated numerical analysis spanning numerous industries, including semiconductors, systems, weather prediction, scientific software, and financial, medical and business analytics—takes center stage as large portions of chips and systems are simulated as virtual integrations prior to production of the actual products.

What a fascinating, fast-transforming area of computing. Exciting times are ahead!


