Life in a factory is hard. Depending on the materials and processes required in a plant, the floor can be filled with the whir of forklifts, fumes of corrosive vapor, or lung-clogging dust. To maximize utilization, a manufacturing line might run around the clock. Because this type of environment can be hazardous to the health of the employees running the facility, great care is taken to ensure their safety.
However, these same hazards also threaten the computer systems that assist those employees, and until recently there have been only two choices: expensive hardened systems or a disruptive rip-and-replace policy. Fortunately, two alternative architectures, thin-client and client virtualization, have emerged that reduce the risk of hardware failure, cumulative maintenance, and expensive downtime.
The typical factory automation system consists of three layers of computing power:
- The control layer is the core set of sensors, valves, motors, and controllers that automatically orchestrate each part of the process. Hardened embedded controllers typically address this layer.
- The supervisory layer is the system that allows workers to monitor, audit, and adjust macro parts of the process; workers interact with it through terminals. Specialty automation controllers, industrial PCs, or even business-class servers are used in this layer.
- The IT layer is the typical IT infrastructure with an enterprise resource planning (ERP) system that feeds forecasts and genealogy data to the supervisory layer and pulls quality and batch information from the supervisory layer. Typical tier-1 IT hardware dominates this layer.
The choices for hardware are clear for the control layer and the IT layer. The control layer typically requires hardened controllers because of the critical functions they perform and their proximity to the factory line, while the IT layer simply uses standard IT hardware. The supervisory layer, however, can vary a great deal depending on vendor preference, the desired cost of the system, or initial environmental expectations.
As equipment vendors move into the emerging markets, they find that cost pressure and the need for flexibility have driven more deployments toward globally available tier-1 workstations and servers. These systems, however, are usually designed for far calmer environments than a manufacturing plant. How should a system designer balance the desire for tier-1 hardware with these additional requirements?
The first approach to consider is moving the computing equipment away from the hazards. Rather than placing workstations around the factory floor, manufacturers should consider consolidating their supervisory hardware into a safe place or protected enclosure.
Over the past three years, IT systems have made enormous strides with console redirection, making it possible to replace each exposed workstation with a PC-over-IP (PCoIP) thin-client device, such as the Dell FX100 Zero Client Access Device, that has no moving parts and can be instantly replaced in the event of damaged hardware.
The workstation hardware (sans monitor/mouse/USB connections) can be consolidated to an enclosed rack workstation, like the Dell Precision R5500, in a ventilated container, greatly decreasing the risk of unexpected failure without sacrificing console responsiveness. The downside of this approach is that moving 50 workstations into a single enclosure can require a large amount of space.
The second approach is an extension of the first: move the equipment to a safe place and consolidate many systems into a few. It is standard practice in IT to consolidate 50 physical workstations into a small network operations center (NOC) of two servers, three switches, and one consolidated storage device hosting 50 virtualized clients.
Thin clients minimize the hardware exposed to local hazards across the plant, while the consolidated servers provide the redundancy standard in server systems (power supplies, hard disks, fans). Minimizing the hardware requirement also significantly decreases hardware costs. Restoring a failed server using a replacement is easy, as hard disks can be pulled from the failed system and inserted into a new one.
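The consolidation math above can be sketched as a simple capacity calculation. This is a hypothetical sizing helper, not a vendor tool; the VM-per-server density is an assumed figure that would depend on the actual supervisory workload.

```python
# Hypothetical sizing sketch: how many servers are needed to host a number of
# virtualized supervisory clients, keeping spare capacity for failover (N+1).
import math

def servers_needed(num_clients: int, vms_per_server: int, spare: int = 1) -> int:
    """Return the server count for num_clients VMs, plus spare servers."""
    if num_clients <= 0:
        return spare
    return math.ceil(num_clients / vms_per_server) + spare

# The article's example: 50 workstations consolidated into a small NOC.
# Assuming a (hypothetical) density of 50 light supervisory VMs per server,
# one active server plus one spare yields the two-server NOC described above.
print(servers_needed(50, 50))  # -> 2
```

Halving the assumed density to 25 VMs per server would grow the NOC to three servers, so the density assumption is the number worth validating first.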
In the hands of a system architect, virtualization provides myriad redundancy and failover options that can meet almost any criteria. For example, running virtual machines can be migrated between servers in a matter of seconds in response to an early warning of impending hardware failure.
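The preemptive-migration idea can be illustrated with a toy evacuation policy. This is an illustrative sketch, not a real hypervisor API: the `Host` class, the health flag, and the VM tuples are all invented here to show the decision logic a management layer might apply when a server reports a predictive alert.

```python
# Toy sketch of preemptive VM evacuation: when a host raises a predictive
# hardware alert, reassign its VMs to healthy hosts with enough free memory.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    free_mem_gb: int
    healthy: bool = True
    vms: list = field(default_factory=list)  # (vm_name, mem_gb) tuples

def evacuate(failing: Host, pool: list) -> list:
    """Move each VM off the failing host; return (vm, target) pairs."""
    moves = []
    for vm_name, mem in list(failing.vms):
        # Candidate targets: healthy hosts with enough free memory.
        candidates = [h for h in pool
                      if h is not failing and h.healthy and h.free_mem_gb >= mem]
        if not candidates:
            raise RuntimeError(f"no capacity for {vm_name}")
        # Prefer the host with the most headroom.
        target = max(candidates, key=lambda h: h.free_mem_gb)
        target.vms.append((vm_name, mem))
        target.free_mem_gb -= mem
        failing.vms.remove((vm_name, mem))
        moves.append((vm_name, target.name))
    return moves

a = Host("srv-a", free_mem_gb=4, vms=[("hmi-01", 2), ("hmi-02", 2)])
b = Host("srv-b", free_mem_gb=16)
a.healthy = False  # e.g., a failing-fan alert from the baseboard controller
print(evacuate(a, [a, b]))  # -> [('hmi-01', 'srv-b'), ('hmi-02', 'srv-b')]
```

In a real deployment this decision would be delegated to the hypervisor's management stack and its live-migration facility; the point here is only that the policy itself is simple once the supervisory workstations are virtual.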
By utilizing thin-client architecture and including a workstation virtualization strategy, a system designer can create a supervisory layer that benefits from the quality, cost, availability, and support of a tier-1 computer manufacturer. Additionally, the chances and consequences of hardware failure can be greatly reduced, meaning less downtime for customers and less hassle with field replacements should they be required.
Thoughts? Let me know!