Edge computing has risen to the forefront of information management in a very short time. Edge computing houses data processing capability at or near the “edge” of a network. The efficiency of this data management is well-documented: it is hard to dispute that analyzing data in real time is superior to retrieving it later. Latency for mission-critical data is virtually eliminated, and the reduced bandwidth use and elimination of data bottlenecks to the main data center or cloud enhance productivity and cost savings.
As worthy as these benefits may be, IT will face new challenges and tasks in edge computing implementation. These include security concerns, new data classifications, and machine learning.
Edge computing, by definition, exposes hardware to an environment that may be challenging—in footprint, ambient temperature, contaminants, particulates, vibration, or accessibility. Solutions abound for each of these concerns: micro data centers, NEMA-rated enclosures, thermal management and filtration systems, and shock-absorbing designs. Once a location has been selected for the edge system, your distributor will assist in directing you toward a practical setup. Both IT and OT (operational technology) will cooperate in the site selection, placement, and population of the edge system to maximize security and accessibility.
Ensuring Physical Security
Proactive security and data integrity are paramount issues for edge computing.
Security begins with the physical protection of the data housing. This is the time for worst-case-scenario thinking. Every possible physical access point should be reinforced against intruders—two-legged and four-legged. The security level implemented should correspond to the value of the data enclosed. Elemental threats (wind, hail, rain, tornado, etc.) should be evaluated in the selection of the enclosure unit.
Cloud computing and the IoT have security concerns. Edge applications add another component to the security challenge, and IT must be proactive.
Sensitive data, relating to company policies, workflows, and customer information, now exists in both on-site equipment and sensors, along with being stored in the cloud. Passwords may now be required for more components, so security policies must be in place for maximum protection. Establish a process for creating strong passwords and a strict schedule for revising them; passwords should be changed as often as required to maintain security. (Nuclear launch codes for the United States are changed daily.) Passwords should be long and without any discernible pattern. Never leave default passwords on any device. Every employee should be briefed on the importance of keeping data secure.
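As a minimal sketch of the “long and without any discernible pattern” guidance above, a password can be drawn from a cryptographically secure random source. The function name and default length here are illustrative choices, not a prescribed standard:

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Build a long, patternless password from a mixed character pool
    using the OS's cryptographically secure random generator."""
    pool = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(pool) for _ in range(length))
```

Using `secrets` rather than `random` matters here: `random` is predictable and unsuitable for credentials, while `secrets` is designed for exactly this purpose.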
When a setup uses third-party cloud software to store data, make sure the service has robust security protocols. Read contracts carefully and be ready to ask the provider tough questions about its security setup.
Data architecture has traditionally been classified in a three-tier system: user interface, business logic, and databases. Reliable, appropriately classified data is vital to management. To better describe the data produced and its function, edge computing introduces new classification terms that industries should be familiar with.
- Source data comes from the sensors, transducers, machinery, or anything else that produces data in a factory or other industrial setting. This is data that contains details on operations and processes.
- Intelligence data informs machine learning. It may be sent to the cloud to run tests and create algorithms to implement back on the original machinery. It is often used to increase efficiency.
- Operational and actionable insights data is used to make business decisions. These decisions are made both by people and by automated systems; for example, a system that senses a product running low may automatically order more.
Additionally, edge computing distinguishes “hot” and “cold” data. “Hot” data is real-time and used immediately by processes and machines; “cold” data serves longer-term applications such as analytics and trend analysis.
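The taxonomy above can be made concrete with a small data model. This is a hypothetical sketch—the class and field names are illustrative, not an industry schema:

```python
from dataclasses import dataclass, field
from enum import Enum
import time

class DataClass(Enum):
    SOURCE = "source"                # raw sensor/machine output
    INTELLIGENCE = "intelligence"    # feeds model building in the cloud
    INSIGHT = "actionable_insight"   # drives business decisions

class Temperature(Enum):
    HOT = "hot"    # consumed in real time at the edge
    COLD = "cold"  # retained for analytics and trend analysis

@dataclass
class Reading:
    value: float
    data_class: DataClass
    temperature: Temperature
    timestamp: float = field(default_factory=time.time)

# A raw vibration reading consumed immediately by an edge process:
reading = Reading(value=0.72,
                  data_class=DataClass.SOURCE,
                  temperature=Temperature.HOT)
```

Tagging each reading this way lets routing logic decide at ingest time whether a record stays at the edge (hot) or is shipped to the cloud (cold).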
Intelligence data is the dataset that enables machine learning in edge computing. Machines gather data at the source level and push it to the cloud or data center, where variables are tested to create an algorithm that maximizes output or meets another goal.

The cloud is used for this testing because it has far greater computing power than the edge devices, giving them access to far more sophisticated modeling and testing than they could perform on their own.

When the results are returned to the edge, the program is updated so the machines learn to run faster, more efficiently, and possibly with fewer resources and less energy.
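The edge-to-cloud loop described above can be sketched in a few lines. Everything here is a stand-in: the function names are illustrative, and the “training” step is reduced to a simple threshold so the round trip is visible without a real messaging layer or ML framework:

```python
def collect_at_edge(sensor_samples):
    """Edge side: gather source data locally, dropping missing readings."""
    return [s for s in sensor_samples if s is not None]

def train_in_cloud(batch):
    """Cloud side: stand-in for model fitting. Here the 'model' is just
    a threshold derived from the batch average."""
    return sum(batch) / len(batch)

def apply_at_edge(threshold, new_sample):
    """Edge side: use the returned model to make a real-time decision."""
    return "adjust" if new_sample > threshold else "ok"

batch = collect_at_edge([0.4, 0.6, None, 0.5])  # gathered at the source
model = train_in_cloud(batch)                   # pushed to the cloud
action = apply_at_edge(model, 0.9)              # model returned to the edge
```

The point of the sketch is the division of labor: collection and decisions stay local and fast, while the computationally heavy step runs where the computing power lives.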
This setup makes edge computing installations large, powerful, and abstracted, yet also niche, intimate, and directly applicable to a company’s market sector.
Edge computing does present a series of new considerations and challenges for IT. As with any idea or invention which challenges the status quo, workarounds are created as issues arise. For a data-centric operation, the innovation of edge computing is still in its infancy, but the multi-faceted benefits are being reaped in both industrial and information businesses.
Learn more in our Edge Computing Guide.