Wikipedia defines Edge Computing as “pushing the frontier of computing applications, data, and services away from centralized nodes to the logical extremes of a network. It enables analytics and data gathering to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network such as laptops, smartphones, tablets and sensors.”
Basically, it refers to the computing that you’re going to do with your data before it gets to the cloud. Computing on the edge of the cloud.
If you didn’t already know, the cloud isn’t a magical place up in the sky where your data lives; it’s actually a set of servers somewhere. So when you send stuff to the cloud, you’re really just sending it to someone else’s computer.
The reason edge computing has caught on is that it’s faster if some of the computing is done on the device before the data heads to the cloud.
Let’s take the ever-so-popular smart speaker: when you ask it a question, it has to send that question to a server, the server figures out the response, and sends it back to you.
Edge computing allows your smart speaker to handle more of that task itself, so that less has to be sent to and from the cloud.
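To make that concrete, here’s a minimal sketch of how a speaker might split work between the device and the cloud. The intent names and the cloud endpoint are made up for illustration; real assistants have their own, far more involved pipelines.

```typescript
// Hypothetical sketch: a smart speaker that answers simple requests locally
// and only falls back to the cloud for anything it can't handle on-device.
// The intent names and the cloud endpoint are illustrative, not a real API.

type Intent = { name: string; utterance: string };

// Requests the device is assumed to be able to answer without the network.
const LOCAL_INTENTS = new Set(["set_timer", "stop_alarm", "volume_up"]);

async function handleRequest(intent: Intent): Promise<string> {
  if (LOCAL_INTENTS.has(intent.name)) {
    // Edge path: no round trip, so the response is nearly instant.
    return `Handled "${intent.utterance}" on the device.`;
  }
  // Cloud path: only the harder requests leave the local network.
  const response = await fetch("https://assistant.example.com/query", {
    method: "POST",
    body: JSON.stringify(intent),
  });
  return response.text();
}

handleRequest({ name: "set_timer", utterance: "set a timer for 10 minutes" })
  .then(console.log);
```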
This is a pretty simple example, but the real potential of edge computing is to create a bridge between the digital and physical worlds, and to serve as the foundation of the industrial internet.
How did we get to Edge Computing?
We are firmly in the era of cloud computing. Most people use a centralized, cloud-based service like Dropbox, Gmail, or Slack. You also have devices that are powered by content and intelligence living in the cloud, things like your Apple TV, Google Chromecast, or Amazon Echo.
What’s even more impressive is that a massive number of companies rely on the infrastructure, hosting, machine learning, and compute power of just a handful of companies like Amazon, Microsoft, Google, and IBM.
Private clouds took 47% of the market last year; that’s companies like Apple, Facebook, or Dropbox.
So, What is the Edge?
Edge in this context refers to geography: the action is happening away from centralized servers, out at the edge of the network. Edge computing is computing that is done at or near the source of the data, rather than in the cloud, i.e., in data centers located miles away.
Why do we want Edge Computing?
Voice assistants can be the most frustrating: you ask a question and then have to wait for a response. Your Echo has to process your speech and send a compressed version to the cloud; the cloud decompresses it and processes it, and depending on what you’ve asked it might have to hit an API to figure out the weather; then it has to compress the answer and send it back to you, just to tell you that you might need an umbrella today.
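As a rough illustration of why that round trip hurts, here’s a toy latency budget. Every number below is invented for the sake of the example, not a measurement of any real device.

```typescript
// Back-of-the-envelope sketch of where the time goes in a voice request.
// The millisecond figures are made-up, illustrative numbers, not measurements.

const cloudPipelineMs = {
  recordAndCompressOnDevice: 50,
  uploadToCloud: 100,
  decompressAndRecognizeSpeech: 150,
  callWeatherApi: 120,
  compressAndSendResponse: 100,
};

const edgePipelineMs = {
  recognizeSpeechOnDevice: 200, // slower chip, but no network hop for speech
  callWeatherApi: 120,          // still needs the cloud for live data
};

const total = (stages: Record<string, number>) =>
  Object.values(stages).reduce((sum, ms) => sum + ms, 0);

console.log(`cloud round trip: ~${total(cloudPipelineMs)} ms`);
console.log(`mostly on-device: ~${total(edgePipelineMs)} ms`);
```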
This is why companies are working on AI chips, so that your devices rely less on the cloud. For a company like Amazon, it would save server costs if its data centers were less busy processing requests or doing your kids’ math homework.
The other advantage is that if enough of the work is done locally, you could end up with more privacy… at least if the company giving you the service thinks that’s a good idea.
We have edge computing in our lives already; the industry has just run out of ways to make the cloud sound new, which is why Edge Computing has become the new “it” term.
Looking at security, our phones have been providing edge compute for years: when you make a payment on your phone, you’re asked to verify your biometric information right there on the device. There are lots of security concerns with centralizing that kind of data.
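Here’s a sketch of that pattern, under loud assumptions: the biometric check is a stand-in function and the payment endpoint is hypothetical, not any platform’s real API. The point is only that the sensitive match stays on the device and just an approval leaves it.

```typescript
// Sketch of the payment example: the fingerprint/face match happens on the
// device, and only a yes/no approval leaves it. Function names and the
// payment endpoint are hypothetical, not any platform's real API.

async function verifyBiometricsOnDevice(): Promise<boolean> {
  // Stand-in for the secure-enclave check a real phone performs locally.
  return true;
}

async function authorizePayment(amountCents: number): Promise<void> {
  const ok = await verifyBiometricsOnDevice();
  if (!ok) throw new Error("biometric check failed on the device");
  // The raw biometric data is never centralized; only the approval is sent.
  await fetch("https://pay.example.com/charge", {
    method: "POST",
    body: JSON.stringify({ amountCents, deviceApproved: true }),
  });
}
```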
Security isn’t the only way that edge computing will help solve the problems IoT introduced. The other hot example I see mentioned a lot by edge proponents is the bandwidth savings enabled by edge computing.
For instance, if you buy one security camera, you can probably stream all of its footage to the cloud. If you buy a dozen security cameras, you have a bandwidth problem. But if the cameras are smart enough to only save the “important” footage and discard the rest, your internet pipes are saved.
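Here’s what that filtering could look like on the camera itself; the motionScore function and the storage endpoint are placeholders, since a real camera would run a proper detection model on a dedicated chip.

```typescript
// Sketch of the camera example above: an edge device scores each frame
// locally and only uploads the "important" ones.

type Frame = { id: number; pixels: Uint8Array };

// Placeholder for on-device analysis (motion detection, person detection, ...).
function motionScore(frame: Frame): number {
  return frame.pixels.reduce((sum, p) => sum + p, 0) / frame.pixels.length;
}

async function processFrame(frame: Frame, threshold = 128): Promise<void> {
  if (motionScore(frame) < threshold) {
    return; // Discarded at the edge: never touches your internet connection.
  }
  // Only the interesting frames cost bandwidth.
  await fetch("https://storage.example.com/clips", {
    method: "POST",
    body: frame.pixels,
  });
}
```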
Almost any technology that’s applicable to the latency problem is applicable to the bandwidth problem. Running AI on a user’s device instead of entirely in the cloud seems to be a huge focus for Apple and Google right now.
We’re also seeing progressive web apps embrace the edge with offline-first functionality. This means you can open a website on your phone, do some work, save your changes locally even when the connection drops, and sync with the cloud when it’s convenient for you.
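A bare-bones version of that offline-first pattern might look like this in the browser; the /sync endpoint is a placeholder, and a real app would likely use IndexedDB or a service worker rather than localStorage.

```typescript
// Rough sketch of the offline-first pattern: write every change to local
// storage immediately, and push the backlog to the cloud whenever a
// connection is available. The /sync endpoint is a placeholder.

type Change = { key: string; value: string; savedAt: number };

function saveLocally(change: Change): void {
  const pending: Change[] = JSON.parse(localStorage.getItem("pending") ?? "[]");
  pending.push(change);
  localStorage.setItem("pending", JSON.stringify(pending));
}

async function syncWhenOnline(): Promise<void> {
  if (!navigator.onLine) return; // Stay local until a connection shows up.
  const pending = localStorage.getItem("pending") ?? "[]";
  await fetch("https://app.example.com/sync", { method: "POST", body: pending });
  localStorage.setItem("pending", "[]"); // The cloud is caught up.
}

saveLocally({ key: "note-1", value: "works offline", savedAt: Date.now() });
window.addEventListener("online", syncWhenOnline);
```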
Self-Driving Cars and the Edge
Self-driving cars are the ultimate example of edge computing. Due to latency, privacy, and bandwidth concerns, you can’t feed the data from a self-driving car’s numerous sensors up to the cloud and wait for a response. Your trip can’t survive that kind of latency, and even if it could, the cellular network is too inconsistent to rely on for this kind of work.
But cars also represent a full shift away from user responsibility for the software they run on their devices. A self-driving car almost has to be managed centrally. It needs to get updates from the manufacturer automatically, it needs to send processed data back to the cloud to improve the algorithm, and the nightmare scenario of a self-driving car botnet makes the toaster and dishwasher botnet we’ve been worried about look like a Disney movie.
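As a closing sketch, here’s roughly how that split could look: the car reacts to its sensors locally and only ships a small, processed summary back to the manufacturer. The types and the telemetry endpoint are illustrative assumptions, not how any actual car works.

```typescript
// Sketch of the split described above: safety-critical decisions stay on the
// car, and only a compact processed summary goes back to the cloud.

type SensorReading = { obstacleDistanceM: number; speedKph: number };

function decideLocally(reading: SensorReading): "brake" | "cruise" {
  // The safety-critical decision never waits on the network.
  return reading.obstacleDistanceM < 20 ? "brake" : "cruise";
}

async function reportTelemetry(readings: SensorReading[]): Promise<void> {
  // A small summary, not the raw sensor stream, goes to the manufacturer.
  const summary = {
    samples: readings.length,
    hardBrakes: readings.filter((r) => decideLocally(r) === "brake").length,
  };
  await fetch("https://telemetry.example.com/fleet", {
    method: "POST",
    body: JSON.stringify(summary),
  });
}
```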
Edge computing is just beginning to gain mainstream recognition. What do you think? Is this next buzzword justified? And do you feel like you’ve got a solid grasp of the basics from this article?