The End of Embedded Computing as We Know It…(Part 2) – The Cloud holds the keys…

In this series of blog posts, we will discuss the end of embedded computing as we know it. The growth of IoT will profoundly change society and how humans experience the world. But for that growth to be realized and that impact to be felt, embedded computing as we know it must disappear. "The End of Embedded Computing as We Know It…" will examine how the current embedded-computing mindset, applied to IoT even in its infancy, has already led to near-catastrophic cybersecurity issues; what technology and processes will replace traditional embedded computing; and finally how the contrast between traditional embedded computing and its future completely changes the dynamics of the world of IoT, using a simple thing we all see every now and then – an automotive "software update" notice.

Cloud Native Edge: Cloud principles + embedded computing = Disruption

That's the conclusion the founders and founding team of ZEDi reached. We all have experience with very advanced systems in the embedded world, as well as with the open-source systems that run the Cloud. As edge computing became critical to running high-powered apps that compress the definition of "real-time" from milliseconds to microseconds, and as forecasts grew for how many IoT devices would be generating data to be processed (essentially the digitization of the real world), the answer became clear – the Cloud holds the keys.

We've discussed before the scope of the security problems that legacy embedded techniques create, as well as the scale of the problem. The scale is important, because scale is what makes the Cloud Native approach so key to disrupting the embedded computing world. In the linked white paper, we walk through a simple thought experiment of Amazon dealing with the "Meltdown" bug. Amazon probably runs on the order of 2-3 million servers, but they all sit in cloud-native datacenters, so Amazon could clean up any mess in hours or days to proactively protect its customers. Edge computing will deal with many more devices – Ford sold ~6 million cars in 2016, each with multiple computers on board. So if Ford had to deal with a "Meltdown"-style bug, it could face 8-10X the number of devices needing low-level upgrades. How does a company whose core competency is building cars handle security updates for 8-10X the number of devices that Amazon manages? Because of embedded computing, Ford would have to issue a recall, ask customers to bring their cars in, and do a "flash update" at the dealership so that a tech can be on hand in case the car "bricks" – i.e., the software update fails and the system needs a "hard reboot". That's not how Amazon would upgrade the servers in its datacenters…

Figure 1 – Full stack integration silo of embedded computing

Why does embedded computing require this sort of attention? How can these devices "brick", and why does a tech need to be on hand? Simple: it goes back to the "tightly integrated, single function" design that embedded computing was optimized for. Looking at the stack in Figure 1, a low-level security flaw requires creating a fix and then erasing the entire stack and replacing it with "version 2". Because the app is so tied to the OS, networking, and all the sub-systems, they all need updating together. If you've ever upgraded an embedded system (your home router, or other IT equipment at work) or experienced embedded computing (ever wonder why you can't just upgrade your car's navigation system?), then you know the experience of dealing with embedded software – upload the file, shut down the whole system (the run-time environment), turn it back on, and hope it reboots. If it doesn't, go get the console cable and start over.
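The all-or-nothing nature of that monolithic update can be sketched in a few lines of Python. This is a toy model, not any particular vendor's flashing process – `FakeDevice` and the image names are illustrative only:

```python
import hashlib

class FakeDevice:
    """Stand-in for a flashable embedded device (illustrative only)."""
    def __init__(self):
        self.flash = b"old-firmware-v1"
        self.bricked = False

    def erase_all(self):
        self.flash = b""                      # the old stack is gone

    def write(self, image, corrupt=False):
        # `corrupt=True` simulates a power drop or bad write mid-flash
        self.flash = image[:-1] if corrupt else image

def flash_monolithic_image(device, image, expected_sha256, corrupt=False):
    """Classic embedded update: erase everything, write everything.

    There is no fallback partition -- once the erase happens, a failed
    write leaves nothing bootable, i.e. a "bricked" device.
    """
    if hashlib.sha256(image).hexdigest() != expected_sha256:
        raise ValueError("corrupt download -- abort before erasing")
    device.erase_all()                        # point of no return
    device.write(image, corrupt=corrupt)      # app + OS + drivers in one blob
    if device.flash != image:
        device.bricked = True                 # needs a tech and a console cable
        raise RuntimeError("flash verify failed -- device bricked")
```

Note where the checksum is verified: a corrupt download can be caught safely before the erase, but any failure *after* `erase_all()` leaves the device with no working software at all – which is exactly why a tech has to be on hand.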

What the cloud and virtualized datacenters do differently is conceptually simple (though the implementation details are complex). The hardware is abstracted out of the equation via "hypervisors" and other virtualization technology (in the case of Cloud Native edge, via embedded virtualization technology), creating an abstraction layer between the hardware and the OS, apps, etc. This allows the OS, drivers, apps, and so on to be upgraded without impact to the hardware itself. And since processing, storage, networking, etc. are all virtual resources, a single system with enough horsepower can become "multi-tenant" and host multiple apps and OSes that run in complete isolation from each other, even though they are technically on the same hardware. Going further, a well-formed edge orchestration system could disaggregate the workload, distributing tasks to the appropriate systems and optimizing performance and security, all transparently to the hosted app.
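As a rough sketch of the idea – the class and tenant names below are our own invention, not any specific hypervisor's API – an abstraction layer hands out virtual slices of one physical box, and swapping a tenant's OS image never touches the hardware or the other tenants:

```python
class EdgeNode:
    """Toy model of a hypervisor-style abstraction layer on one box."""

    def __init__(self, cpus, mem_mb):
        self.free_cpus, self.free_mem = cpus, mem_mb
        self.tenants = {}                # name -> isolated virtual slice

    def launch(self, name, cpus, mem_mb, os_image):
        """Carve a virtual slice out of the physical resources."""
        if cpus > self.free_cpus or mem_mb > self.free_mem:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_mem -= mem_mb
        # Each tenant sees only its own virtual CPUs, memory, and OS image;
        # the shared physical hardware is invisible to it.
        self.tenants[name] = {"cpus": cpus, "mem_mb": mem_mb, "os": os_image}

    def upgrade_os(self, name, new_image):
        # Upgrading one tenant's OS image touches nothing physical and
        # leaves every other tenant running, untouched.
        self.tenants[name]["os"] = new_image
```

For example, one box could host a vision app on Linux next to a real-time gateway on an RTOS, and upgrading the Linux image would never interrupt the gateway.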

Figure 2 – Cloud Native Edge abstracts hardware from application facilitating security, scalability, and simple software lifecycle management

Applying these cloud principles to the edge breaks the old embedded way of thinking. You still update code with security patches and the like, but the changes can be made without shutting down the entire system. If the app and/or OS fail their upgrade, the system remains accessible (because it's only the "virtual" instance that is "bricked"), so it can easily be rolled back to the last known "working" version – backed off – without any need for remote hands. An added bonus: since the app and OS are tied to "virtual" resources, the orchestration layer can place apps on the best hardware for the workload. At the edge this becomes very important, as real-time data often requires an app to move close to the source of the data to improve performance and timing. Figure 2 gives a picture of what the stack in Figure 1 would look like in a Cloud-Native edge.
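The rollback behavior can be sketched the same way – again a toy model under our own naming, not a real orchestrator API. Keep the last known-good image, swap in the new one, and fall back automatically if a health check fails:

```python
def rolling_upgrade(tenant, new_image, health_check):
    """A/B-style upgrade of one virtual instance with automatic rollback.

    `tenant` is a dict holding the current OS/app image; `health_check`
    is any callable that returns True when the upgraded instance comes
    up cleanly.  The physical hardware stays up the whole time.
    """
    last_known_good = tenant["os"]        # snapshot before touching anything
    tenant["os"] = new_image              # swap only the virtual image
    if not health_check(tenant):
        tenant["os"] = last_known_good    # roll back -- no remote hands needed
        return False
    return True
```

A failed upgrade here is an inconvenience, not a bricked device: the worst case is that the instance runs the old image again, which is exactly the contrast with the dealership "flash update" scenario above.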

This approach also has revolutionary implications for the security of edge systems, which we'll discuss in Part 3…