There’s a bigger picture to go along with the talk around the Internet of Things and artificial intelligence. As technology advances to the point where autonomous vehicles, sentient software and robotics help “intelligence everywhere” become reality, the structure of it all must change. Pushing computing horsepower to the edge is now necessary, and as it rolls out, old hands may just get a sense of déjà vu.
Computing horsepower at the edge is becoming big business. In an October 2018 report, McKinsey & Company identified 107 distinct edge use cases, estimating the potential value of edge computing at $175B–$215B by 2025—and that’s just the value for hardware companies. A 2018 IDC study sponsored by Seagate forecast that, by 2025, 30% of the world’s data will need processing at the edge. Yet another report, titled ‘Edge Internet Economy: The Multi-Trillion Dollar Ecosystem Opportunity’, puts the value of the edge at $4.1 trillion by 2030.
The big question is ‘why?’ Why is the edge emerging as a new and rapidly growing frontier in computing, in an environment where the cloud is all but the de facto choice for so many use cases?
It’s perhaps worth taking a quick and massively simplified walk down memory lane. Industry veterans will recognise IT 1.0 as mainframes and terminals; not unlike cloud computing, except distances to terminals were a lot shorter. IT 2.0 was driven by the rise of PCs and distributed computing, with (relatively) powerful endpoints in client-server models, and content consumption. As components shrank, IT 3.0 emerged on the back of mobile devices and the rise of the cloud.
Right now, IT 4.0 is well underway, characterised by the talk, and in many cases reality, of artificial intelligence, robotics, industrial IoT, autonomous vehicles and large-scale industry transformation.
At each point along the way, data generation grew exponentially. Hard drive capacities grew from single-digit megabytes to 16 terabytes – 16 million megabytes. The amount of data generated in 2016 alone is estimated at 16 zettabytes. One zettabyte is a million petabytes; by 2025, IDC estimates the world will produce 175 zettabytes.
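For readers who want to check the unit arithmetic, here is a minimal sanity-check of the figures above, using decimal (SI) prefixes; the variable names are illustrative, not from any report:

```python
# Decimal (SI) storage units, as used in the article.
MB = 10**6   # megabyte
TB = 10**12  # terabyte
PB = 10**15  # petabyte
ZB = 10**21  # zettabyte

# One zettabyte is a million petabytes:
print(ZB // PB)        # 1000000

# A 16 TB drive holds 16 million megabytes:
print(16 * TB // MB)   # 16000000

# IDC's 175 ZB forecast for 2025, expressed in petabytes:
print(175 * ZB // PB)  # 175000000
```

Note that drive manufacturers quote capacities in these decimal units, which is why the megabyte-to-terabyte jump works out to a clean factor of a million.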
It’s not for nothing, then, that market watcher IDC calls this ‘the data age’1.
Sensors…and making sense of data
A proliferation of devices is behind the creation of all that data. Much of it comes from sensors – but, as you’ll appreciate, simple sensors don’t generate much data at all. Temperature, wind speed, location – that’s the relatively small stuff.
Cameras, on the other hand, generate enormous files of ‘unstructured’ data and organisations will increasingly depend on AI to make sense of it. The depth of that data also represents enormous value, going well beyond the metrics pumped out by a simple sensor – allowing for, for example, the operation of an autonomous car.
In fact, the autonomous car demonstrates most readily why powerful computing at the edge is a necessary characteristic of IT 4.0. On the road, milliseconds matter, and even small amounts of added latency can have disastrous effects.
That means the vast amounts of data an autonomous vehicle generates must be analysed, contextualised, understood and acted on quickly. Some of that happens in the vehicle, to be sure. Some of it happens elsewhere – but that elsewhere can’t be the cloud. Not only can the data not travel there and back again fast enough, but the pipes just aren’t big enough.
Multiply that single vehicle example by a hundred, a thousand, a million high-volume data use cases and the scale of the problem is apparent.
Endpoint, edge and core
This simple example shows why IT 4.0 must have three distinct architectural structures: the endpoint (in this case, the car – but it could be a drone, a phone or an industrial IoT device); the edge (a cellular tower, an office building or other server-equipped facility); and the core (the cloud and traditional datacenters).
It’s a tiered approach; when convenient and when capacity allows, the data feeds back to the core for further analysis, where it ultimately enters a standard data management process. The tiered approach also puts the lie to any contention that ‘the edge will eat the cloud’, as provocatively suggested by Gartner VP and analyst Thomas J. Bittman. Co-existing, the cloud and the edge complement each other, each enhancing the other’s value in a classic ‘the whole is greater than the sum of its parts’ way. As always in the IT industry, it comes down to use cases and appropriate solutions.
It’s a point put forward by the author of the Edge Internet Economy report, Chetan Sharma: “It comes down to understanding which apps and services will benefit from edge architecture. For example, if you are in manufacturing or construction industry and you manually assemble machinery, you can benefit from automation and deploy the edge to speed things up. Whatever your business, you need to ensure the infrastructure is ready.”
What this means in practice is that with the edge/cloud structure, intelligent applications like the autonomous car can function effectively in the field (everywhere), with crucial data processing taking place on the edge, at the last mile.
The rest happens in the cloud, when convenient.
1 Data Age 2025 – The Digitization of the World, IDC