Edge computing: after digital transformation, it's the latest marketing craze. But where is the edge? Is it any different from a remote datacenter? Is this marketing hype, or does it deserve our attention? The goal of this post is to answer those questions and give you a starting point for busting through the hype.

Defining Terms 

Let’s start by revisiting edge computing terms, which we originally defined in a previous blog post about edge computing. When people say “the edge,” most often they are really talking about edge computing. Here’s a technical definition of edge computing from Wikipedia:

Edge computing is a distributed computing paradigm which brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.

It almost sounds like edge computing is an attempt to solve the data latency issues caused by data gravity. As a reminder, here is the definition of data gravity:

The quantity of services, applications, and even customers attracted to data, and the speed at which they are attracted, increase as the mass of the data increases.

Edge Computing is one of the pillars of the new distributed computing age. Here’s the Wikipedia definition for distributed computing:

A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another.

These distributed components could be virtual machines or containers.

Edge Computing at the Center of a Perfect Storm

We are truly in an era of digital transformation. Here are some of the things that are setting up this storm:

  • We have TONS of digital data to be processed
    • The amount of digital data created is now measured in zettabytes, and some experts say we’ll reach a global volume of 175 zettabytes in the next five years.
    • The data is not just being created by humans or server log files. There is data being created by all sorts of IoT devices, by the digital trails left by customers, and all sorts of new technologies.
    • This data, even if it is massive, can be turned into information with machine and deep learning (the building blocks of AI). Historical data can be called up and combined with newer data to create this information.
  • Hardware has evolved to be able to handle this amount of data
    • We have systems that are able to store this data now. There are disks available that can store 100 TB of data.
    • There are new hardware technologies that aid in this, such as GPUs and FPGAs.
    • Architectures using technologies such as object storage are making it possible to create applications that can perform compute functions on these valuable stacks of data.
  • Developers prefer a public cloud experience
    • Developers are more likely to create new applications using languages and services found in the public cloud.
    • Developers really don’t care about the underlying architecture; they just want it to be available and reliable.
  • On-premises data challenges have not changed.
    • Old and new data exist in pockets on premises, creating latency, privacy, and compliance issues for developers who want to use public cloud services (see the back-of-the-envelope sketch after this list).
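
To make the response-time and bandwidth argument concrete, here is a minimal back-of-the-envelope sketch in Python. The distances, the sensor fleet, the reading size, and the assumption that edge pre-processing forwards only about 1% of the raw data are all illustrative assumptions, not measurements of any real deployment.

```python
# Rough comparison: processing IoT data in a distant cloud region versus
# at a nearby edge site. All numbers below are illustrative assumptions.

SPEED_IN_FIBER_KM_PER_S = 200_000  # roughly 2/3 the speed of light

def round_trip_ms(distance_km: float) -> float:
    """Best-case network round trip from propagation delay alone."""
    return (2 * distance_km / SPEED_IN_FIBER_KM_PER_S) * 1000

def daily_gb(sensors: int, bytes_per_reading: int, readings_per_sec: int) -> float:
    """Raw data volume a fleet of sensors produces in one day."""
    return sensors * bytes_per_reading * readings_per_sec * 86_400 / 1e9

cloud_km, edge_km = 2_000, 20  # assumed distance to a cloud region vs. an edge site
raw = daily_gb(sensors=10_000, bytes_per_reading=200, readings_per_sec=1)
filtered = raw / 100  # assume edge pre-processing forwards only ~1% of the data

print(f"Round trip to cloud region: {round_trip_ms(cloud_km):.1f} ms (propagation only)")
print(f"Round trip to edge site:    {round_trip_ms(edge_km):.2f} ms (propagation only)")
print(f"Raw data shipped upstream:  {raw:.0f} GB/day")
print(f"Shipped after edge filter:  {filtered:.1f} GB/day")
```

Even in this crude model, putting the compute near the data cuts the best-case round trip by two orders of magnitude and shrinks what has to cross the WAN from roughly 170 GB a day to under 2 GB.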

The digital transformation everyone is talking about is this fundamental shift in how we develop, architect, deploy, and support applications. We are exiting the client/server era and transitioning to the distributed era. It is time to start thinking about how to support the applications that will be developed with distributed computing.

So Where Is the Edge?

Edge computing is a shift from client/server, three-tier architecture to distributed computing. This Data Center Frontier article describes four potential edge computing architectures:

  • Data centers in regional markets and smaller cities
  • Micro data centers at telecom towers
  • On-site IT enclosures and appliances to support IoT workloads (often referred to as the “fog” layer)
  • End-user devices, including everything from smart speakers to drones and autonomous cars.

Let’s think back to the definition of edge computing (emphasis mine):

Edge computing is a distributed computing paradigm which brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.

The edge is where you need it to be. Maybe you have users who live in areas far from your central datacenter, whether that is an on-premises or public cloud location; the edge could be a data center in that far-flung region to boost their performance. Maybe it’s a micro data center at a telecom tower, in a corn field, or anyplace on-site compute is needed. Maybe the edge is a set of specialized appliances, in your central datacenter or at a remote site, that perform machine learning on real-time data from IoT devices.
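
To make that last scenario a bit more tangible, here is a minimal sketch of what an edge appliance loop might look like: score readings locally and forward only the interesting results upstream. read_sensor(), score(), and forward_to_cloud() are hypothetical placeholders standing in for your device I/O, a locally deployed model, and your upstream API; none of them refer to a real product.

```python
# Minimal sketch of edge-side inference on IoT data; everything here is
# a placeholder/assumption, not a real device or cloud API.
import random
import time

def read_sensor() -> float:
    """Placeholder for pulling one reading from a local device."""
    return random.gauss(25.0, 2.0)  # e.g., a temperature in degrees C

def score(reading: float) -> float:
    """Placeholder for a locally deployed model; here, distance from nominal."""
    return abs(reading - 25.0)

def forward_to_cloud(payload: dict) -> None:
    """Placeholder for the (rare) call back to a central service."""
    print("forwarding:", payload)

ANOMALY_THRESHOLD = 5.0  # assumed threshold

for _ in range(100):
    reading = read_sensor()
    anomaly_score = score(reading)         # inference runs at the edge
    if anomaly_score > ANOMALY_THRESHOLD:  # only anomalies leave the site
        forward_to_cloud({"reading": reading, "score": anomaly_score})
    time.sleep(0.01)
```

The point is the placement, not the model: the scoring happens where the data is produced, and only a trickle of results has to travel back to the center.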

Where Will You Put the Edge?

The trick with edge computing will be determining the best place to bring computation and data storage so they work best with the data your organization wants to turn into information. The elements should be designed to work together fluidly.

Are you building distributed computing systems by bringing compute to the data? What does that look like in your organization? Let us know in the comments.