@Edge
Network Edge and What it Means

It’s lonely at the Edge

Website Performance

The ongoing debate about where “the edge” is … what it all really means

Today there is a renewed focus on “The Edge”, as more companies begin to realize the benefits of decreasing the latency between their users (mobile devices, IoT, or web browsers) and their software (websites, AI platforms, video streams, etc.). But where exactly is the edge?

First the multi-datacenter deployment was The Edge. Then the Cloud was the Edge. Then the Content Delivery Network was the Edge. Now it’s the mobile towers? The end user device? Where is the EDGE!?

The Edge… there is no honest way to explain it because the only people who really know where it is are the ones who have gone over.

- Hunter S. Thompson, Hell’s Angels (1966)

The Edge … basics

So, what is edge and what is edge computing? The word edge in this context means literal geographic distribution. Edge computing is computing that’s done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn’t mean the cloud will disappear. It means the cloud is coming to you.

Edge 1.0: The Edge … of the Cloud

Edge 1.0: The Cloud

In the beginning there was The Cloud™, an infinitely scalable multi-datacenter deployment at the click of a button. In the old days, applications were deployed to one or maybe two datacenters (the second for disaster recovery). This meant your users had to travel whatever ‘internet distance’ was required to reach your server.

Then, with the advent of cloud computing, deployments could spread across multiple datacenters. At the time of writing, the three biggest contenders run 18 datacenter regions (AWS), 50 datacenter regions (Azure) and 17 datacenter regions (Google Cloud) respectively, each with its own subdivision into isolated networks and availability zones. These datacenters sit on the major backbone networks, the main connection points of the internet (called IXs, or Internet Exchanges).

With this newfound multi-region deployment model, your users are now perhaps 3–10 ‘internet hops’ away from your server. For example, from my house here in Los Angeles, CA to AWS.AMAZON.COM I get the following:

  1. LA (internal network) -> LA (Time Warner Network)
  2. LA (Time Warner Network) -> LA (Time Warner Upstream)
  3. LA (Time Warner Upstream) -> Dallas (Time Warner Transit backbone)
  4. Dallas (Time Warner Transit backbone) -> Unknown Transit Provider
  5. Unknown Transit Provider -> Ashburn, VA (Amazon)

[~100 milliseconds of latency]

Ping from LA to AWS Ashburn
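If you want to reproduce a measurement like this yourself without raw ICMP (ping often requires elevated privileges), timing a TCP connect is a reasonable proxy, since the TCP handshake costs one round trip. Below is a minimal Python sketch of that idea; the hostname in the comment is just the example used above, and real numbers will vary with your network.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 3) -> float:
    """Best-of-N TCP connect time to host:port, in milliseconds.

    A TCP handshake costs one round trip, so connect time is a rough
    stand-in for ping when ICMP isn't available.
    """
    best = float("inf")
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection performs the full TCP handshake, then we close.
        with socket.create_connection((host, port), timeout=5):
            best = min(best, (time.perf_counter() - start) * 1000.0)
    return best

# Example (requires network access):
#   tcp_rtt_ms("aws.amazon.com")  # from LA this lands in the ~100 ms range above
```

Taking the best of several samples filters out one-off scheduling and queuing noise, which is the same reason ping reports a minimum alongside the average.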

Edge 2.0: The Edge … of the Content Delivery Network

Edge 2.0: CDNs et al.

At the dawn of the Web Age (the 1990s) it was realized that the web was going to be a global phenomenon. With this knowledge, Akamai developed a system for ‘caching’, or keeping copies of, web content spread across a global network of servers. They called this model a Content Delivery Network, or CDN. Over the last 20 years, CDNs have increased both in distribution (Points of Presence, or datacenters) and interconnectivity. Akamai, the original CDN company, connects to more than 1700 networks and operates more than 130 Points of Presence. Cloudflare, by contrast, operates approximately 150 PoPs, each with its own peering and deep network connectivity. Here we are likely just 1–5 hops from most users.

Looking at an Akamai example, I can see that when I visit Pinterest.com they are pulling some of their images from an Akamai cache at IP address 23.57.41.248.

Looking at HTTP headers to get server IP

And mapping this via traceroute, we can see we are indeed closer to the Edge (me), at around 30 ms of latency (roughly 3x faster than the AWS path).
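The reason the same hostname lands on a nearby cache is that CDNs answer DNS queries with the address of an edge node close to the resolver asking. You can observe this yourself by resolving the asset hostname and looking at the addresses returned. A small sketch, assuming a CDN-fronted hostname (the one in the comment is illustrative, not taken from the Pinterest headers above):

```python
import socket

def resolve_edge(hostname: str) -> list:
    """Return the unique IPv4 addresses DNS hands back for hostname.

    CDNs answer DNS with a nearby cache node, so the result varies
    with where (geographically) you ask from.
    """
    infos = socket.getaddrinfo(
        hostname, 443,
        family=socket.AF_INET,
        proto=socket.IPPROTO_TCP,
    )
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr is (ip, port) for IPv4.
    return sorted({info[4][0] for info in infos})

# Example (requires network access):
#   resolve_edge("www.akamai.com")  # a nearby edge address, not one fixed IP
```

Running the same lookup from two different cities will typically return different addresses, which is exactly the "closer to the Edge" effect the traceroute shows.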

Edge 3.0: The Edge … of the network

The last edge is the actual last-mile network. Until now this has been limited by the sheer economics of the problem: how does one deploy not hundreds of micro PoPs, but thousands?

Here we have only a few players. Mobile operators (like Deutsche Telekom’s MobiledgeX group, which Edgemesh runs on) operate inside the last-mile network, meaning they are 0 network hops away. The core idea of Edge 3.0 is to never leave the last-mile network (Time Warner, or your home network). For 5G this will become critical, as 5G bandwidth is nearly 20x that of 4G and the strains on the backbone will begin to crack.

With Edgemesh, the cached content is running on the edge device itself (the browser, or a network-local Supernode)… so -1 or 0 network hops away :).

Edgemesh allows content to be moved onto the device either from Origin (CDN or Cloud) or a compatible Edgemesh Server (MobiledgeX etc). When the user requests the content, the Edgemesh client skips the network altogether and serves the asset from cache!

Below is an example. We can see the browser requests an asset (images), but the Edgemesh client was able to serve it from its local cache. The resulting request path looks like this:

My Browser (Laptop) -> cache [~1–4 milliseconds of latency]
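The lookup order described above can be sketched in a few lines. This is not Edgemesh’s actual client (which runs in the browser); it is a minimal cache-first model where the class name, counters, and the injected `fetch_origin` callable are all hypothetical, used only to show the hit/miss flow:

```python
from typing import Callable, Dict

class CacheFirstClient:
    """Cache-first fetch: try the local cache (0 network hops) first,
    and fall back to Origin (CDN or Cloud) only on a miss."""

    def __init__(self, fetch_origin: Callable[[str], bytes]):
        self.cache: Dict[str, bytes] = {}   # url -> asset bytes
        self.fetch_origin = fetch_origin    # hypothetical origin fetcher
        self.hits = 0
        self.misses = 0

    def get(self, url: str) -> bytes:
        if url in self.cache:
            self.hits += 1                  # served locally: ~1-4 ms
            return self.cache[url]
        self.misses += 1                    # over the network: ~30-100 ms
        body = self.fetch_origin(url)
        self.cache[url] = body              # prefilled for the next request
        return body
```

Prefilling the cache ahead of the request (rather than only on a miss) is what lets the first user-visible fetch already skip the network.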