Exploring Artificial Intelligence at the Edge
As the adoption of artificial intelligence (AI), deep learning, and big data analytics continues to grow, edge computing systems increasingly need to process large data sets in a timely, efficient manner. The basic compute, storage, and networking capabilities are all present at the edge today, and as speeds and capacities continue to increase, advancements like NVMe (Non-Volatile Memory Express) will offer significant performance advantages and boost AI adoption at the edge.
Edge-based AI: Are We There Yet?
It is possible, and becoming easier, to run AI, machine learning, and analytics at the edge today, depending on the size and scale of the edge site and the particular system being used.
While edge computing systems are much smaller than those found in central data centers, they have matured, and thanks to immense growth in the processing power of today's commodity x86 servers, they now run a surprising number of workloads successfully.
For example, many large retailers use edge computing today because it is cost-prohibitive to send data to the cloud for processing, and the cloud cannot keep up with retailers' real-time demands. They run local analytics applications as well as AI algorithms at these edge sites.
While the basic compute, storage, and networking capabilities are “there” today, we anticipate they will continue to improve, allowing more workloads to run successfully at the edge. Processing speeds and storage capacities will keep growing at a torrid pace.
For instance, one advancement making its way to the edge is NVMe. This newer protocol offers significant performance advantages for solid-state drives (SSDs) because they communicate directly over the PCIe bus. Legacy spinning disk drives primarily use the SATA interface, which is much slower: it was designed around the performance characteristics of spinning disks, not the flash memory used within SSDs.
As NVMe adoption continues to rise, SSD-based edge sites using the NVMe protocol will be able to scale to meet the needs of AI processing, providing the performance that artificial intelligence, machine learning, and big data analytics demand.
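For readers wondering whether a given server is already NVMe-capable, here is a minimal sketch, assuming a Linux host with sysfs, that enumerates the NVMe controllers the kernel has detected and prints each one's model and firmware revision:

```python
# Minimal sketch, assuming a Linux host: enumerate NVMe controllers
# exposed by the kernel under /sys/class/nvme.
from pathlib import Path

def list_nvme_controllers(sysfs_root: str = "/sys/class/nvme"):
    """Yield (name, model, firmware revision) for each detected controller."""
    root = Path(sysfs_root)
    if not root.exists():
        return  # no NVMe devices, or not a Linux host
    for ctrl in sorted(root.iterdir()):
        model = (ctrl / "model").read_text().strip()
        firmware = (ctrl / "firmware_rev").read_text().strip()
        yield ctrl.name, model, firmware

for name, model, firmware in list_nvme_controllers():
    print(f"{name}: {model} (firmware {firmware})")
```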
Overcoming Cost Barriers for AI at the Edge
As AI adoption moves forward and more data is created outside the primary data center, the key challenge will be cost. It's easy to design an edge computing system to support AI and machine learning applications; it's much harder to do so affordably. Cost is a paramount concern for edge deployments because there are likely many sites to provision: multiply the cost of one edge site by 1,000 or 2,000 sites and the total escalates quickly.
To keep edge computing costs down while supporting AI, machine learning, and big data analytics, IT generalists should seek to:
- Deploy software-based virtual storage area network (SAN) technology, instead of physical equipment. The software-defined storage offerings available today eliminate the need for expensive external storage systems, and instead leverage the storage inside the servers. Again, this is especially important for edge environments with dozens, hundreds, or even thousands of sites.
- Find simple solutions that require as few servers as possible. Many edge computing systems today still require three or more servers in order to build a highly available system. Look for solutions that only require two servers to control costs, but still maintain availability.
- Be able to manage many locations centrally. Onsite management is a huge problem because there is typically no IT staff at each site; edge computing systems must be deployed and managed from a single remote location (see the monitoring sketch after this list).
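As an illustration of that last point, here is a hypothetical sketch of one central location polling a health endpoint on every edge site. The site hostnames and the /health path are invented for illustration; they are not any specific product's API.

```python
# Hypothetical central monitoring loop: one management host polls a
# health endpoint on each edge site. The hostnames and /health path
# are placeholders, not a real product's API.
import json
from urllib.request import urlopen

SITES = [f"https://edge-{n:04d}.example.com" for n in range(1, 4)]  # placeholder fleet

def poll(site: str) -> dict:
    """Fetch and decode one site's health report."""
    with urlopen(f"{site}/health", timeout=5) as resp:
        return json.load(resp)

for site in SITES:
    try:
        print(site, poll(site).get("state", "unknown"))
    except OSError as err:  # DNS failure, timeout, connection refused...
        print(site, "unreachable:", err)
```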
Technical Requirements for AI at the Edge
Data encryption is becoming increasingly important at the edge, and the technology is maturing to make it effective from both cost and performance perspectives.
One processor feature that is becoming more important is hardware encryption offload. The most common example is AES-NI (Advanced Encryption Standard New Instructions), a set of CPU instructions supported by both Intel and AMD that executes AES encryption in dedicated hardware, minimizing the impact on the CPU cycles available to the main application.
While processor brand and model matter less than they once did, to support AI, machine learning, and big data analytics workloads, an organization would typically want a processor running at 2.1GHz to 2.4GHz or faster, preferably with 10 to 14 cores.
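A quick way to check a candidate server against these guidelines is to read what the kernel reports. Here is a minimal sketch, assuming an x86 Linux host: the "aes" flag in /proc/cpuinfo indicates AES-NI support, and Python's os.cpu_count() gives the logical core count.

```python
# Minimal hardware check, assuming an x86 Linux host: confirm AES-NI
# support and report core count against the sizing guidance above.
import os

def cpu_flags():
    """Return the CPU feature flags from the first 'flags' line of /proc/cpuinfo."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

print("AES-NI available:", "aes" in cpu_flags())  # the 'aes' flag signals AES-NI
print("Logical cores:", os.cpu_count())
```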
Tiered storage/caching is also required so that data can automatically move between storage tiers (spinning disk drives, SSDs, and in some cases system memory) as its importance changes. For instance, when the edge computing system is running a big data project, the relevant data moves to the fastest SSD tier; when that data is no longer in use, it moves back to less expensive spinning disks.
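To make that promote/demote behavior concrete, here is an illustrative toy model, not any vendor's implementation: recently used blocks live on the SSD tier, and the least recently used blocks are demoted to spinning disk when the SSD tier fills up.

```python
# Toy two-tier storage model for illustration only: hot blocks stay on
# the SSD tier; the least recently used block is demoted to HDD when
# the SSD tier is full.
from collections import OrderedDict

class TwoTierStore:
    def __init__(self, ssd_capacity: int):
        self.ssd = OrderedDict()   # block_id -> data, ordered by recency of use
        self.hdd = {}              # overflow (cold) tier
        self.ssd_capacity = ssd_capacity

    def read(self, block_id):
        if block_id in self.ssd:               # hot path: already on SSD
            self.ssd.move_to_end(block_id)
            return self.ssd[block_id]
        data = self.hdd.pop(block_id)          # cold path: promote to SSD
        self._place_on_ssd(block_id, data)     # (assumes the block exists)
        return data

    def write(self, block_id, data):
        self.hdd.pop(block_id, None)           # new writes always land on SSD
        self._place_on_ssd(block_id, data)

    def _place_on_ssd(self, block_id, data):
        self.ssd[block_id] = data
        self.ssd.move_to_end(block_id)
        while len(self.ssd) > self.ssd_capacity:
            cold_id, cold_data = self.ssd.popitem(last=False)  # demote LRU block
            self.hdd[cold_id] = cold_data
```

A production tiering engine tracks access patterns and moves data asynchronously in the background, but the core promote-on-access, demote-when-full behavior is the same idea.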
To run multiple applications on these small yet powerful edge computing systems, a hypervisor is required so that each server's processing power can be shared. The most popular hypervisors are VMware vSphere, Microsoft's Hyper-V, and the open-source KVM for Linux-based systems.
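For example, on a KVM host, a few lines against the libvirt API (assuming the libvirt-python bindings are installed) are enough to list the virtual machines sharing one edge server:

```python
# Minimal sketch, assuming a local KVM hypervisor managed by libvirt:
# connect and list each VM on this edge server with its run state.
import libvirt

conn = libvirt.open("qemu:///system")  # local system-level connection
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"{dom.name()}: {running}")
finally:
    conn.close()
```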
All of these technologies are available today and will help propel the adoption of AI on edge computing devices.
Why AI at the Edge?
Organizations will continue to address AI data management challenges by architecting powerful and highly available edge computing systems, which will lower customer costs. New technologies that were previously cost-prohibitive will become more viable over time, and find uses in new markets. Take the following use cases as examples:
- Self-driving cars are a great example, as each car can be considered its own edge computing site and must make real-time decisions on the data it collects. There simply isn't enough time to send that data to a cloud somewhere for processing.
- Airplane monitoring is also increasingly common, as modern aircraft deploy thousands of sensors that generate massive amounts of data. In some cases, 300,000 sensors could generate over 1 petabyte of data per flight; spread over a 10-hour flight, that would average roughly 28GB of new data per second. This data needs immediate processing to make flight corrections and to ensure passenger safety.
- Smart cities are another booming AI use case, as many municipalities deploy an abundance of traffic sensors, video surveillance cameras, and other monitoring devices throughout the city. This data is collected in many locations and must be analyzed in real time to keep traffic moving and residents safe from crime.
Previously, powerful AI applications required large, expensive data center-class systems to operate. But edge computing devices can reside anywhere, as the above use cases demonstrate. AI at the edge offers endless opportunities to help society in ways never before imagined.
About the author: Bruce Kornfeld is general manager of the Americas for StorMagic, where he is responsible for the Americas region sales and go-to-market strategy, as well as global strategic alliances and marketing. Prior to joining StorMagic, Bruce held marketing leadership positions at Compellent, Dell and NCR.