Factories of the Future

Today’s data economy depends on data generated by people and, increasingly, by the devices, appliances, and machines that make up the IoT. Because data enables granular targeting of people’s affinities, consumers have come to expect more personalized products. Apps are already smart, but emerging robots and virtual assistants will take personalization to the next level, dramatically changing the consumer mindset. To meet the needs of this new consumer, manufacturers will need to design products that are intelligent, personalized, customized, connected, and still affordable.

Tomorrow’s factories will need to produce customized products with variable requirements. This pressure will drive them to become smart factories that leverage emerging technologies such as AI and the IoT, or what is being called the Industrial IoT (IIoT). In the data economy, industrial companies will need to act fast: static production scheduling cannot cope with changing order information. They will also need to distribute workloads among robots and machines dynamically while accounting for each piece of equipment’s energy consumption, something most traditional factories are simply not equipped to do.

The Necessary Edge

In this context, personalizing services through dynamic task distribution requires technology that achieves ultra-low latency at the level of the local machines distributing the production tasks. Sending this data to a cloud manufacturing platform for analysis is too slow and costly, so edge computing is the recommended approach, as it can support real-time production cycles. In a smart factory handling high personalization and a wide variety of machine tasks, predefined rules will not suffice; instead, intelligent edge nodes must use their own self-organized coordination abilities to negotiate with other nodes and solve large-scale problems in parallel. Factory machines and robots are now starting to be equipped with edge nodes that measure energy consumption. These nodes also embed analytics, such as swarm-intelligence optimization algorithms that can prioritize tasks based on each machine’s individual energy consumption.
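
To make this concrete, the sketch below shows energy-aware task assignment in miniature. A simple greedy heuristic stands in for the swarm-intelligence optimizers mentioned above, and the machine names, per-unit energy figures, and capacity limit are illustrative assumptions rather than details of any particular platform.

```python
# Minimal sketch: energy-aware task assignment at the edge.
# A greedy heuristic stands in for the swarm-intelligence optimizers
# mentioned above; machines, tasks, and energy figures are illustrative.

# Energy cost (kWh) each machine incurs per unit of work, as measured
# by its edge node.
MACHINES = {"press_a": 1.8, "press_b": 2.4, "robot_arm": 1.1}

def assign_tasks(tasks, machines, capacity=3):
    """Assign each task to the machine with the lowest projected
    energy cost that still has spare capacity."""
    load = {m: 0 for m in machines}
    plan = {}
    for task in tasks:
        candidates = [m for m in machines if load[m] < capacity]
        best = min(candidates, key=lambda m: machines[m] * task["work"])
        plan[task["id"]] = best
        load[best] += 1
    return plan

orders = [{"id": f"order_{i}", "work": w} for i, w in enumerate([2, 5, 1, 3])]
print(assign_tasks(orders, MACHINES))
```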

In the scenario above, each edge node has intelligence and autonomy, as each is embedded with reasoning, planning, and machine-to-machine coordination abilities. These edge nodes play a different role from that of a smart meter, which can only estimate the relationship between one piece of equipment’s workload and its energy consumption. Edge nodes, in contrast, serve as data storage, analytics, and intelligent information-interaction points.
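
Here is a rough sketch of what that machine-to-machine negotiation might look like, modeled loosely on contract-net-style bidding: a node announces a task, each peer bids its projected energy cost (with a penalty for an already-busy queue), and the lowest bidder wins. The node names, cost model, and Bid/EdgeNode types are hypothetical.

```python
# Hypothetical sketch of machine-to-machine negotiation between edge
# nodes, modeled on contract-net-style bidding. All names and cost
# figures are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Bid:
    node: str
    cost: float  # projected kWh for this task

class EdgeNode:
    def __init__(self, name, energy_per_unit, queue_len=0):
        self.name = name
        self.energy_per_unit = energy_per_unit
        self.queue_len = queue_len

    def bid(self, work_units):
        # Penalize busy nodes so work spreads dynamically.
        cost = self.energy_per_unit * work_units * (1 + 0.2 * self.queue_len)
        return Bid(self.name, cost)

def negotiate(task_work, nodes):
    """Collect bids from all peers and award the task to the cheapest."""
    bids = [n.bid(task_work) for n in nodes]
    return min(bids, key=lambda b: b.cost)

nodes = [EdgeNode("cnc_mill", 2.0, queue_len=4),
         EdgeNode("laser_cutter", 2.6),
         EdgeNode("robot_arm", 1.4, queue_len=1)]
print(negotiate(task_work=3, nodes=nodes))
```

The queue-length penalty is one simple way to let workloads spread dynamically instead of piling onto the single cheapest machine.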

What Does This Edge Platform Look Like?

This use case, among many others, including predictive maintenance through real-time alerting, is not possible without a proper multi-access edge computing (MEC) platform. A MEC platform working with 5G could make ultra-low-latency manufacturing applications possible. Because of the large volumes of data that the machines and robots in a smart factory will generate, such a platform will be required not only to handle the data volumes but also to deliver the needed processing capability in a local context, with location awareness. A MEC platform consists of different tiers that collect data from IIoT machines and perform different tasks at each tier, such as reducing redundancy and errors; decoding and compressing the data, as well as performing other raw-data pre-processing operations; and storing the data. To be optimally effective, a MEC platform must also support processing-intensive, low-latency deep-learning features such as real-time anomaly alerts, meeting stakeholders’ needs for enhanced decision-making insights.
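
As an illustration of those tiers, here is a minimal, in-memory sketch of the flow described above: ingest compressed frames, decode and de-duplicate them, and run a low-latency anomaly check at the edge. The tier boundaries, field names, and vibration threshold are illustrative assumptions, not features of any specific MEC product.

```python
# Minimal sketch of tiered MEC-style processing, assuming a simple
# in-memory pipeline; tier names and thresholds are illustrative.

import json, zlib

def collect(raw_frames):
    """Tier 1: ingest compressed frames from IIoT machines."""
    return [zlib.decompress(f) for f in raw_frames]

def clean(frames):
    """Tier 2: decode JSON, drop duplicates and malformed readings."""
    seen, out = set(), []
    for f in frames:
        try:
            reading = json.loads(f)
        except json.JSONDecodeError:
            continue  # discard errored frames
        key = (reading["machine"], reading["ts"])
        if key not in seen:
            seen.add(key)
            out.append(reading)
    return out

def alert(readings, vib_limit=8.0):
    """Tier 3: low-latency anomaly check before anything leaves the edge."""
    return [r for r in readings if r["vibration"] > vib_limit]

frames = [zlib.compress(json.dumps(
    {"machine": "press_a", "ts": i, "vibration": v}).encode())
    for i, v in enumerate([3.2, 9.1, 3.2])]
readings = clean(collect(frames))
print(alert(readings))  # -> the 9.1 vibration reading
```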

Why Does the MEC Platform Need Data Virtualization?

To be optimally effective, a MEC platform requires access to all data sources across all of its tiers. However, MEC platforms that rely on traditional data integration methods like ETL processes will struggle to connect to these disparate data sources. This is why a MEC platform needs an agile data integration technology like data virtualization. Data virtualization can be deployed at any intermediary layer between the edge and the data center or cloud, including organization- and region-based layers. It works by abstracting the data sources and creating a unified view of all the information, e.g., maintenance, inventory, parts, and dealer data, to optimize processes and enable predictive analysis.
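
The sketch below illustrates the idea with a toy unified view: queries are answered by fetching from each source on demand, with no ETL copy step. The source names and schemas are hypothetical stand-ins for systems like a maintenance database or an inventory API.

```python
# Minimal sketch of the data-virtualization idea: a unified view that
# fetches from each source on demand, with no ETL copy step. Source
# names and schemas are illustrative assumptions.

class VirtualView:
    def __init__(self, sources):
        # sources: mapping of name -> zero-argument fetch function
        self.sources = sources

    def query(self, machine_id):
        """Join live rows about one machine across all sources."""
        result = {}
        for name, fetch in self.sources.items():
            rows = [r for r in fetch() if r.get("machine") == machine_id]
            result[name] = rows
        return result

# Stand-ins for a maintenance database, an inventory API, etc.
view = VirtualView({
    "maintenance": lambda: [{"machine": "press_a", "last_service": "2024-01-10"}],
    "inventory":   lambda: [{"machine": "press_a", "spare_parts": 12}],
})
print(view.query("press_a"))
```

The point of the abstraction is that adding a new source is just another fetch function; consumers keep querying the same unified view.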

The world’s largest independent biotechnology firm was able to increase production yield and plant efficiency by implementing data virtualization. It needed to align its manufacturing data in real time to identify “weak signal” trends, which could signify a lapse in production that could ultimately cost millions. To define its ideal yield, the company needed to integrate and analyze its current and historical data. Leveraging data virtualization, the firm combined data from 46 source systems and created unified views that made it easier to take corrective steps to ensure ideal yield.

Maximize Yield, Minimize Downtime

To keep up with supply and demand, manufacturers can settle for nothing less than highly optimized systems that maximize yield and minimize business downtime. They need advanced technologies that work together. Data virtualization is a vital component in the world of edge computing and in the evolution of the smart factories of the future.

Ali Rebaie