Control Systems, HMI Change Management, Security

Change management and security concerns have increased as industrial visualization tools such as the human-machine interface (HMI) have become more open. HMIs can expose process data and process metadata, so security precautions must provide selective access to control functions. Standards such as OPC UA and other newer technologies allow off-the-shelf products to be flexible and reusable while withstanding the most demanding requirements.

Modern control systems place a high value on graphical user interfaces. In automation, HMIs extend far beyond the adage that a picture is worth a thousand words.

A lightweight local user interface of a machine is sometimes referred to as the HMI; in this context it is an embedded part of the machine. SCADA (supervisory control and data acquisition), on the other hand, is an all-in-one software package consisting of tightly coupled components that implement the functionality to operate a system as a whole. Because the complexity of designing, developing, and managing software grows rapidly with the variety of functionality provided, SCADA systems more recently tend to be layered and offered as loosely coupled, dedicated components, such as a historian, communication server, user interface, and other functions.

The user interface serves as the intersection between someone responsible for making a decision and something responsible for the decision’s execution.

Today’s software allows the same interface to start drilling on a CNC machine, for example, or, alternatively, to remotely shift 200 MW of load from one power plant to another. In both cases the operation can be initiated by pressing a virtual “ACCEPT” button on a touch screen. In both cases, the HMI can decrease development and deployment costs.

Human-interface interactions

Efficient human-machine interaction requires providing:

  • A representation of the process behavior and its current state—output interface
  • Input devices that allow the operator to enter decisions—input interface.

HMI vendors can employ 3D graphics, touch screens, voice recognition, motion tracking, and many other technologies. Effective communication, however, requires considerations beyond the surface.

Automated processes are dynamic and stateful, so the interface has to provide an informative context for decision making. To reach this goal, the process behavior must be tracked by processing variables so the screen content can be adjusted to expose the most important elements at any instant. As automation systems add process variables, one has to choose how to organize the structure of the control system and its mappings for visualization.

Each variable can be treated as a set of attributes: value, quality, time stamp, and meaning. The first three can be expressed as simple, complex, or structured numbers and bound to graphics on the screen in a generic way. The meaning attribute usually does not change over time, so interface behavior and appearance are designed (hard-coded) accordingly. For example, a certain part of a screen can allow an operator to communicate with a chromatograph analyzer in a pharmaceutical automation process.
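As a rough sketch of this attribute model (plain Python; the tag value, quality strings, and description are illustrative, not taken from any particular product), a process variable might be represented like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ProcessVariable:
    """One process variable: value, quality, time stamp, and meaning."""
    value: float          # the real-time reading (could also be complex or structured)
    quality: str          # e.g. "good", "uncertain", "bad"
    timestamp: datetime   # when the value was sampled
    meaning: str          # static metadata, fixed at design time

pv = ProcessVariable(value=72.4,
                     quality="good",
                     timestamp=datetime.now(timezone.utc),
                     meaning="chromatograph column temperature, degC")

# The first three attributes change at run time and can bind generically to
# graphics; only "meaning" is hard-coded into the screen design.
assert pv.quality == "good"
```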

Unfortunately, this design-time approach is often too rigid to adapt seamlessly when a replacement HMI device is installed, especially one from another vendor. Furthermore, a hard-coded approach is useless when dealing with multifunction devices that use pluggable components and a variety of accessories. To avoid this unnecessary design cost and sidestep proprietary solutions, the next generation of interfaces needs to use a “semantic HMI” approach. A semantic HMI discovers the meaning of process variables using the metadata provided by plant floor measurement and control devices, such as an analyzer, PLC, DCS, and so on. Metadata provides context for the real-time process data and must be processed simultaneously by a sufficiently smart semantic HMI.
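The semantic idea can be illustrated minimally in plain Python: instead of hard-coding a widget per tag, the HMI selects how to render a tag from its metadata. The device metadata dictionary and tag names below are invented for illustration, not part of any real discovery protocol:

```python
# Hypothetical metadata, as a semantic HMI might discover it from a device.
device_metadata = {
    "AI_4_20mA_01": {"meaning": "pressure", "units": "bar", "range": (0.0, 16.0)},
    "DI_01":        {"meaning": "pump_running", "units": None, "range": (0, 1)},
}

def pick_widget(tag: str) -> str:
    """Choose a display widget from metadata instead of hard-coding it per tag."""
    meta = device_metadata[tag]
    if meta["range"] == (0, 1):
        return "indicator-lamp"   # boolean states get a lamp
    return "trend-gauge"          # continuous values get a gauge

assert pick_widget("DI_01") == "indicator-lamp"
assert pick_widget("AI_4_20mA_01") == "trend-gauge"
```

Swapping in a device with different metadata changes the screen without redesigning it, which is the point of the semantic approach.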

Process-interface interaction

To make two devices interoperable, both must use the same (vendor-specific or standard-compliant) protocol and be connected by an underlying communication infrastructure. Relying on proprietary vendor-specific solutions limits future system expandability, so it generally is not recommended. While vendors usually offer a standard protocol for plant floor devices, unfortunately, there are hundreds of “open standards” defined in the automation marketplace.

To overcome this disadvantage, OPC specifications from the OPC Foundation were designed to bridge applications based on general-purpose operating systems, process control hardware, and software applications. OPC can replace direct communication between plant floor devices (process) and process data users (HMI) with indirect communication. (See graphic.) Hundreds of communication standards are commonly used by the process control industry; the OPC specification has distinguishing features that help with integration and implementation of the “process observer” concept.

The OPC specifications suite is not a new protocol competing to be the best. It is a data access technology: a set of interfaces representing precisely defined services dedicated to managing process data access. It assumes Microsoft DCOM is used as the system platform to access these services. Because DCOM is integrated into the Microsoft Windows operating system family, it provides a strong, reusable platform for addressing communication and security issues. The main disadvantage of this standard is that process metadata cannot be adequately exposed.

To overcome this disadvantage and migrate the widely accepted DCOM de facto standard to new emerging technologies, OPC Foundation developed the OPC Unified Architecture (OPC UA) specifications suite. This service-oriented architecture (SOA) is deployed using Web services defined by the World Wide Web Consortium (W3C).

OPC UA meets requirements of modern control systems, because it:

  • Is Internet-based technology
  • Is a platform neutral standard allowing implementation on any (including embedded) system
  • Supports complex types for access to process variables and an object model to expose process metadata
  • Achieves high-speed data transfers using efficient protocols
  • Is scalable from embedded applications up to enterprise-level process automation, and
  • Has broad industry support and is being used in support of other industry standards, such as PAT, OpenPLC, ISA95, ISA88, EDDL, MIMOSA, OAGiS, and others.
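The object-model idea behind OPC UA can be sketched with a toy, in-memory address space (plain Python, not the OPC UA SDK; the node IDs, browse names, and values below are invented for illustration). Nodes carry both real-time data and metadata, and a client browses the hierarchy rather than hard-coding tag addresses:

```python
# A toy model of an OPC UA-style address space: nodes expose both data and
# metadata, and clients discover them by browsing.
class Node:
    def __init__(self, node_id, browse_name, value=None, metadata=None):
        self.node_id = node_id          # e.g. "ns=2;i=1001" (illustrative)
        self.browse_name = browse_name
        self.value = value
        self.metadata = metadata or {}
        self.children = []

root = Node("ns=0;i=84", "Root")
boiler = Node("ns=2;i=1000", "Boiler1")
temp = Node("ns=2;i=1001", "OutletTemperature", value=182.5,
            metadata={"units": "degC", "description": "Boiler 1 outlet"})
boiler.children.append(temp)
root.children.append(boiler)

def browse(node):
    """Walk the address space, yielding each node's name, value, and metadata."""
    yield node.browse_name, node.value, node.metadata
    for child in node.children:
        yield from browse(child)

names = [name for name, _, _ in browse(root)]
assert names == ["Root", "Boiler1", "OutletTemperature"]
```

A real OPC UA server does far more (type system, subscriptions, security), but the browse-then-read pattern over a metadata-rich object model is the essential difference from flat tag access.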

Security issues

Connecting the HMI (the decision entry device) and the process control device (the decision execution device) may involve many technologies, from an RS-232 serial bus inside the cabinet to the Internet, wireless connections, and the like. Communication vulnerability is only one measure of security severity. Robust security also depends on authentication of transferred data, data sources, and users.

Even in the completely shielded control rooms of nuclear power plants, at the end of the day we must know who is responsible for pressing the virtual “ACCEPT” button if any problems occur. On the other hand, it would be unacceptable to see a message on the screen saying, “You must log in to continue” during a critical situation.

Appropriate communication layer support is required for secure HMI designs. Fortunately, the new OPC UA standard offers strong and effective cybersecurity technologies out of the box.

OPC UA is important for further development of open process control systems, according to Maciej Zbrzezny, a software architect at CAS, an OPC Foundation member. CAS offers software that deploys OPC UA in three simple steps: design the information model (design phase), bind it with the process variables without programming (deployment phase), and expose data and metadata (run-time phase) as the out-of-the-box solution.

8,000-node process control system

More modern HMI solutions are being developed with advanced graphics, high resolution, touch screens, high IP (ingress protection) ratings for front panels, faster CPUs, integration with modern operating systems, and so on. However, they must offer much more than that to be used as a decision entry device.

A process control system for a municipal-wide heat distribution network in the city of Lodz, Poland, (750,000 citizens) handles three plants with total thermal output power of 2,560 MW producing hot water distributed using approximately 800 km of pipes interconnected by about 8,000 nodes. The most important features are openness, interoperability, visualization flexibility to expose process data in the context of process metadata, and appropriate security precautions to provide selective availability to control functions.

New standards, such as OPC UA used with reusable, off-the-shelf products, can withstand the most demanding requirements.

Krzysztof Pietrusewicz, PhD, is currently an assistant professor at the Control Engineering and Robotics chair, Faculty of Electrical Engineering, at the West Pomeranian University of Technology, Szczecin, Poland (formerly Szczecin University of Technology). His current research is in control engineering, computer controlled systems, hybrid control systems, real-time systems, artificial intelligence, and mechatronics. As a co-researcher, he has introduced simplified engineering design methods of fuzzy-logic PI/PD, PID, and PIDD controllers. He is also coauthor of two books: Two-Degrees of Freedom Robust PID Control in Practice, Polish, in 2006, and Programmable Automation Controllers PAC, in Polish, in 2007. He teaches courses in embedded control systems, hybrid control systems, programmable automation controllers as well as PLCs, and digital control of intelligent servodrives (digital motion control).

Mariusz Postol, PhD, is currently an assistant professor at the Technical University of Lodz, Poland. His current research is in architecture and communication in large-scale, highly distributed process control systems. He has introduced the process observer concept as a systematic approach to architecture design of the process control systems. He is also coauthor of the book OPC from Data Access to Unified Architecture, VDE Verlag Gmbh, in 2010, in English and German, and about 45 papers. At present, he teaches courses in distributed operating systems and security of computer systems.





Web-based HMI: An emerging trend?

Realization of Web-based Control
Once considered impractical for applications requiring responsive animation and real-time control, a new breed of web-based HMI system is starting to appear on plant floors and in manufacturing enterprises. “Java (web) based systems can now deliver sub-second response, rich animation and natural integration with other parts of the corporate information infrastructure,” says Nathan Boeger of Inductive Automation. Unlike traditional systems, these web-based systems can economically be extended to every aspect of a business, such as QC, maintenance, logistics, plant management, and so forth. Now every participant in the manufacturing cycle can have unprecedented access to vital plant production information.
It’s easy to see why web-based systems are gaining popularity. Web-based systems install and run client applications from any web browser, and when users log in they always get the most recent version of an application. There are no client licenses to manage, no tedious software installations, no application files to copy over, and no communication configurations to set up. IT departments are willing to embrace technology they understand. All this is in sharp contrast to traditional systems, and the economic advantages of using web-based systems are compelling. The bottom line is that web-based HMI systems fit well with the rest of the enterprise and facilitate the smooth flow of information throughout an organization without unnecessary difficulty and expense.
Security Issues
When potential users first consider using web-based technology, they usually ask about security. Just how secure are web-based systems? The question is especially valid now that post-9/11 committees have deemed HMI and SCADA security “one of the most serious risks to our national security.” Traditional vendors rely heavily on “security by obfuscation,” which has never been considered a safe practice. Web-based systems, on the other hand, are already positioned to leverage standard and proven web security techniques as administered by IT departments.
It’s only a matter of time before legislation mandating minimum HMI and SCADA security requirements surfaces. Traditional providers will likely have to overhaul their products to come into compliance. They will welcome this day, since they will sell lots of mandated security upgrades.
Seeing What’s Next
Functionally speaking, HMIs haven’t changed much over the past five years. “HMIs that just do operator interface tasks are a commodity, and you can buy them dirt cheap off the Internet…The real action is in HMIs that provide web access, interface to higher-level enterprise software, perform MES functions,” says Rich Merritt, senior technical editor of Control Global, in his article “HMI Software is Disappearing.”
The book Seeing What’s Next, by Christensen, Anthony, and Roth, introduces theories to predict major industry changes, supported with interesting historical examples. Applying these predictive theories to this industry suggests incumbent HMI vendors will continue to service their large existing market without much change; they are unlikely to compete with their own business model. On the other hand, web-based vendors will find success selling where traditional vendors have failed: to companies that refuse to spend big bucks on systems perceived as unnecessarily complex, cumbersome, and overshooting their needs. This is likely to lead to explosive growth for web-based systems in market segments that traditional systems have left unfulfilled.
Anyone familiar with manufacturing knows that the majority of factories barely implement information technology at the plant floor level. There are exceptions, but when you see clipboards being used to record schedules, downtime, and production, and you envision how things should be done, you come to realize this is a vast untapped market.
There is an accelerating pace of web-based systems being installed in what was essentially a non-consuming market. Users are finally getting what they want: the functionality of an HMI with the economics of a web browser. The real question is not whether web-based control systems are an emerging trend (they cannot be stopped), but rather which vendors are poised to jump on the bandwagon and deliver the technology.



Powerful PLCs and PACs have built-in data acquisition, data logging, and data analysis functionality

Fast Forward

  • Data acquisition used to be performed by dedicated SCADA systems.
  • Standalone DCS and PC-based data acquisition evolved along separate paths.
  • Powerful PLC and PAC-based data acquisition systems provide the best features of older technologies more simply and at lower cost.
By Jeff Payne

Data acquisition, data logging, and data analysis are required functions for most modern industrial control systems. The simplest and lowest-cost way to provide these functions is often by using the same platform that is providing real-time control—namely the programmable logic controller (PLC) or the programmable automation controller (PAC).

A few years ago this was not possible, as PLCs lacked the required processing power and data-storage capability. But more powerful PLCs and the evolution of PACs have made it possible to perform many data acquisition tasks within the controller, saving money and providing a simpler overall system.

PLCs have been controlling machines and processes for more than 30 years. Demand for data and the need for real-time process information has increased during this time and continues to grow rapidly in nearly every industry.

End users need intelligence about their processes, machines, and manufacturing operations. They need to know about system alarms and events, about process variables, and about production amounts. They need this information to make better manufacturing and business decisions in real time.

Historically, acquisition of and access to this information was available only via applications with high-priced standalone supervisory control and data acquisition (SCADA) systems, third-party software, or expensive PC-based systems. But many modern PLCs and PACs now have built-in data acquisition, data storage, and networking capabilities—so accessing this critical information can be as simple as connecting a PLC’s Ethernet port to a network, or pulling data from a removable USB mass storage device such as a USB pen drive.

Data acquisition roots

Originally, SCADA systems performed monitoring and control of geographically dispersed systems, such as power transmission lines, oil and gas pipelines, water/wastewater systems, and the like. Older SCADA systems used remote telemetry unit (RTU) equipment, none of which had much intelligence by today’s standards. The term SCADA still conjures images of remote automation equipment installations, but when the term data acquisition is extracted from the acronym, it can take on a different meaning.

Data acquisition systems acquire representations of real-world physical conditions through inputs. Typically, sensors provide measurements that indicate how an object or process behaves under specific conditions. Measured physical parameters can be temperature, pressure, flow, pH, speed, force, sound level, and other process variables. Other parameters that can become data acquisition system inputs are electrical signal values such as voltage, current, resistance, power, and other related variables.

Because so many types of real-world parameters can be measured and analyzed with data acquisition systems, input data almost always requires signal conditioning. Sensors convert physical parameters such as temperature, pressure, and flow to electrical signals, but most of the time their values are not consistent or compatible with the circuits that must measure them.

Signal conditioning circuits accept sensor signals, apply the appropriate filtering and scaling, and further process signals if math functions or linearization are required. Many data acquisition input signals are analog, but some are digital, such as pulses that represent item counts or totalizer inputs. Analog signals, however, must be converted to digital using analog-to-digital (A/D) converters. A/D converters convert analog voltage or current signals to digital numbers proportional to the magnitude of the voltage or current.
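A minimal sketch of the A/D and scaling steps described above (the 12-bit converter, 10 V reference, and engineering range are illustrative assumptions, not vendor specifications):

```python
def adc_counts(voltage: float, v_ref: float = 10.0, bits: int = 12) -> int:
    """Convert an analog voltage to a digital number proportional to its magnitude."""
    full_scale = (1 << bits) - 1              # 4095 for a 12-bit converter
    v = min(max(voltage, 0.0), v_ref)         # clamp to the converter's input range
    return round(v / v_ref * full_scale)

def scale_to_engineering(counts: int, lo: float, hi: float, bits: int = 12) -> float:
    """Linear scaling from raw counts to engineering units (e.g., degC)."""
    full_scale = (1 << bits) - 1
    return lo + counts / full_scale * (hi - lo)

# A 5 V sensor signal on a 0-150 degC range lands near mid-scale.
raw = adc_counts(5.0)
assert adc_counts(10.0) == 4095
assert abs(scale_to_engineering(raw, 0.0, 150.0) - 75.0) < 0.1
```

Real signal chains add filtering, linearization, and calibration in front of these two steps, but the counts-then-scale pattern is the core of every data acquisition input.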

Analysis is where data acquisition starts to get tricky. Application requirements define the type and method of data acquisition, as well as how collected data are analyzed. For example, a production test engineer may use a portable data logger to run a temperature conformity test on a heat treating furnace to ensure it meets specifications prior to shipment. A quality engineer may extract archived statistical process control data from a historian to ensure chemical plant products are within spec.

Another basic function of data acquisition is presentation. Presentation can take the form of a temperature readout on a panel meter; a graphical display on a human-machine interface (HMI); or a report generated at the end of a run, shift, week, month, or quarter. Presentation is often left to an external system by sending acquired data to a historian or an enterprise resource planning system.

Transferring acquired data to analysis and presentation systems can be complex, as can performing those functions within a stand-alone data acquisition system.

Stand-alone data acquisition

Data logging is one type of data acquisition typically associated with dedicated stand-alone systems. In many cases, data logging is also associated with temporary configurations such as lab or field testing, where test configurations and locations change frequently.

Product or system design engineers have used data logging in laboratory environments for decades. Field engineers use data logging for many types of temporary data gathering, such as system error logging, power quality monitoring, system troubleshooting, commissioning, and equipment or process verification.

Microprocessor-based data loggers began to appear around 1975. Instead of measuring and recording parameters individually, these systems could take multiple readings in a relatively short period of time by scanning multiple inputs, with each scanned signal applied to a measurement circuit. Multiple-input scanning was accomplished using either relays or field-effect transistor (FET) switching, with scan and measurement times controlled by the data logging system.

Relay and FET methods have advantages and drawbacks. Relay scanners have low contact resistance, are suitable for high-voltage applications, have excellent channel-to-channel isolation and voltage stand-off, and share one simple protection circuit—but they are slow. FET scanners have high resistance when switched, are limited to low voltage applications, and have poor channel-to-channel isolation and voltage stand-off—but they are fast.
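The speed trade-off can be caricatured in a few lines of plain Python (the settle times below are invented for illustration, not vendor data):

```python
# Toy scan cycle: many inputs share one measurement circuit, and the switch
# type trades speed for isolation and voltage capability.
SETTLE_MS = {"relay": 10.0, "fet": 0.05}   # relays are slow, FETs are fast

def scan(channels, read, switch="relay"):
    """Apply each channel to the measurement circuit in turn and record it."""
    settle = SETTLE_MS[switch]
    readings, elapsed_ms = [], 0.0
    for ch in channels:
        elapsed_ms += settle               # wait for the switch to settle
        readings.append((ch, read(ch)))
    return readings, elapsed_ms

readings, t_relay = scan(range(8), lambda ch: ch * 0.5, "relay")
_, t_fet = scan(range(8), lambda ch: ch * 0.5, "fet")
assert t_relay > t_fet                     # same data, very different scan time
assert readings[3] == (3, 1.5)
```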

Typical data acquisition applications

  • Store, set, change, and manage recipes
  • Manage inventory
  • Log and analyze production data
  • Track, maintain, and archive Key Performance Indicators
  • Enable overall equipment effectiveness programs
  • Log and archive test data
  • Enhance quality control and audit tasks

Stand-alone data logging systems with switched measurement helped pave the way for a specialized field of data acquisition called automatic test equipment (ATE). Because of the usefulness of ATE to manufacturing, especially semiconductors and electronics, it developed independently from other forms of data acquisition. Regardless, ATE still shares many similarities and characteristics with other types of data acquisition systems.

Stand-alone systems still have their place as they can provide power and functionality not available elsewhere. For temporary data logging applications, in particular, stand-alone systems are often the best solution. But in many cases, acquiring data with an existing control system is a better solution.

Data acquisition and distributed control

A distributed control system (DCS) typically controls batch and/or continuous processes. Within the context of a DCS, data acquisition can take on several forms. As DCSs began to replace dedicated control computers in continuous processes such as refineries and chemical plants, the meanings of some of these terms began to evolve.

Whereas the purpose of the DCS is to maintain control of the process, the purpose of data acquisition is to measure, analyze, and present. Sensing, signal conditioning, and measuring must be done so the DCS can control the process. All that remains to accomplish data acquisition is to determine which parameters are meaningful to collect and then route them to the appropriate channels.

This would seem to make a DCS an ideal platform for data acquisition, and in many applications, this is the case. If a process is controlled by an existing DCS, many data acquisition functions can be performed within that platform.

For data acquisition functions that cannot be performed within the DCS, connectivity options exist to link the DCS to other more specialized and powerful data analysis, reporting, and presentation systems. But for processes not controlled by a DCS, it does not make sense to use an expensive, high-powered, complex real-time control system just to perform data acquisition.

PC-based data acquisition

Shortly after the personal computer made its debut in the early 1980s, companies began introducing add-on products such as multi-functional data acquisition cards for the PC, disk-based storage units, and PC expansion chassis. PC-based measurement and data acquisition products were originally developed for lab, testing, and product development environments. Although PC-based data acquisition has expanded beyond these applications, they remain a primary market.

PC-based data acquisition is typically accomplished with one or more circuit boards that plug directly into a PCI bus card slot within a PC. Advantages of PC plug-in boards include low cost and high speed. Disadvantages include the need for external signal conditioning, difficulty connecting to sensors, low isolation and protection, poor noise rejection, and poor expandability.

The measurement chassis also falls within the PC-based data acquisition category, and it solves the expandability problem. Also typically used for lab and testing scenarios, measurement chassis have good noise rejection and isolation, but they usually cost significantly more than plug-in boards.

Early PC-based data acquisition hardware required extensive programming to interface to data acquisition software. Now, most PC-based data acquisition applications include driver software to communicate with the hardware. The driver software simplifies data acquisition programming because it eliminates the need to use complex commands or register-level programming.

Data acquisition software must necessarily meet end-use criteria. Complex or advanced applications may require custom programming in a development environment such as C++ or Visual Basic. However, most applications can be handled by configuring off-the-shelf software programs.

The drawbacks to PC-based data acquisition include the need for a PC, the cost of purchasing the application software, and the annual licensing fees. For many applications, these shortcomings can be addressed by using PLC-based data acquisition.

PLC and PAC-based data acquisition benefits

  • Performs control and data acquisition with a single platform
  • Low cost
  • Small footprint
  • No separate software package to buy and learn
  • No software licensing fees to pay
  • No database programming expertise required

PLCs and PACs

Powerful PLCs and PACs are designed to satisfy complex requirements brought on by today’s automation application demands. These controllers have evolved into full-featured systems that combine much of the functionality of traditional technologies such as SCADA systems, DCSs, and PCs.

Many applications that were strictly DCS-based can now be accomplished using the flexibility and functionality of PACs and powerful PLCs. As with DCSs, the sensing, signal conditioning, measuring, and analyzing are done within the controller and its I/O, which also control the process or machine. This double-duty approach ensures the lowest overall cost, smallest footprint, and simplest data acquisition system.

PLCs and PACs perform basic data acquisition as part and parcel of real-time control tasks. Extra I/O can be added to acquire data from areas that do not require control, only monitoring. Data logging can be triggered by an event within the process or scheduled to occur at regular intervals.

Once the data is collected, it can be stored locally at the controller, or it can be transferred to other systems. Data stored locally is usually saved to a USB pen or flash drive.

Data can be logged in a comma-delimited (comma-separated value) text file or a tab-delimited file. Users can include a date-and-time stamp and an alias for each data item sent. Data log files can be compressed into zip files and archived for easy data management.
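A minimal sketch of this kind of log record in plain Python (the tag aliases and values are invented; in practice the controller itself produces the file, but the format is the same):

```python
import csv
import io
import zipfile
from datetime import datetime, timezone

def log_row(writer, aliases, values):
    """Write one comma-delimited record: time stamp, then aliased values."""
    stamp = datetime.now(timezone.utc).isoformat()
    writer.writerow([stamp] + [f"{a}={v}" for a, v in zip(aliases, values)])

buf = io.StringIO()
w = csv.writer(buf)
w.writerow(["timestamp", "tag1", "tag2"])           # header row
log_row(w, ["ZoneTemp", "LinePressure"], [72.4, 3.1])

# Compress the finished log into a zip archive, as described above.
archive = io.BytesIO()
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("datalog.csv", buf.getvalue())

assert "ZoneTemp=72.4" in buf.getvalue()
```

Because the result is plain text, any spreadsheet or text editor can open it, which is exactly why this format dominates controller-side logging.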

Users can view this data using Microsoft Notepad, Microsoft Excel, or other applications. Excel remains the most widely used data analysis tool in manufacturing because of its familiarity and because almost every user already has Excel installed on his or her PC.

Data transfers from the controller to other systems are typically done over an Ethernet port, which is built in to almost every PLC and PAC nowadays. Popular protocols are supported, which negates the need to write complex drivers to transfer data from the controller to external systems.

Some PLCs and PACs allow users to collect data via connection to networked database servers. Users can collect real-time data from processes or machines on the plant floor and store it in standard Microsoft Access-, SQL Server-, or ODBC-compatible databases. Report-by-exception data logging allows direct communication between the controller and the database. Giving control of the data logging and storage functions to the controller means data can be sent only when needed, greatly reducing network traffic.
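Report-by-exception can be sketched as a deadband check in front of the database insert. In this plain-Python illustration, an in-memory SQLite table stands in for the plant database, and the tag name and deadband value are illustrative assumptions:

```python
import sqlite3

# The "plant database" for this sketch: one in-memory table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE log (tag TEXT, value REAL)")

last_sent = {}   # last value actually written, per tag

def report_by_exception(tag, value, deadband=0.5):
    """Insert only if the value moved more than the deadband since the last send."""
    if tag in last_sent and abs(value - last_sent[tag]) <= deadband:
        return False                                  # suppressed: no meaningful change
    db.execute("INSERT INTO log VALUES (?, ?)", (tag, value))
    last_sent[tag] = value
    return True

# Four samples arrive, but only two are different enough to transmit.
sent = [report_by_exception("Flow", v) for v in (10.0, 10.2, 10.1, 11.0)]
assert sent == [True, False, False, True]
assert db.execute("SELECT COUNT(*) FROM log").fetchone()[0] == 2
```

Half the samples never cross the network, which is the traffic reduction the article describes; a real controller applies the same idea in firmware rather than Python.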

PLC data improves quality

Associating a timeline to process variables can improve the quality of a product or reveal a quality problem. For example, one company chose a simple data-logging method to extract information from the PLC controlling its plastic injection molding process. The company simply connected a removable USB pen drive to the dedicated USB data-out port of the PLC’s CPU.

Using the data logging utility within the PLC’s software, the company has the capability to capture up to 64 process variable tags from the injection molding application each time they conduct their manual quality control inspections. By using an event bit within the PLC, they can control the exact time the data points are logged onto the USB drive, which allows them to precisely coordinate the data collection with the quality control inspection.

The user combines the process data and the quality control data into a single table in a spreadsheet application. From this table, they can correlate specific conditions and process variables to differentiate between good and bad product quality. After they collect sufficient historical data, they can identify the critical variables that represent the key indicators in their quality management program.

Using this procedure allows the company to monitor its plastic injection molding process and make real-time adjustments when they notice conditions that would lead to poor quality products. Now, instead of using complicated high-priced systems and software, this company is improving its overall product quality, reducing waste, and saving money—all for the price of a USB pen drive and the ease of selecting a few tag names in a table.

These types of applications demonstrate the power of PLCs and PACs in data acquisition applications. As these controllers increase their processing power, data manipulation features, and data storage capabilities, look for increased use in data acquisition applications in manufacturing and process facilities worldwide.



PC-based control – it is time for change

In many process plants, the PLC simply plays the role of an ‘interlock’ or PID code repository, with most of the system intelligence based in the external SCADA, MES, or analysis software application zones.

Volumes of I/O and tag variables are scaled and mapped to higher-level systems, essentially just providing the process image to the intelligence layers. In this context, the computational power of the PLC is less important than its memory capacity and its communication throughput to the control platforms. It makes complete sense to use a PLC more as an I/O interface than as a process controller here, since the intricacies of large scale process plants often fall outside the scope of the traditional PLC programmer, or the PLC for that matter. The ability to adapt to complex control cycles easily through parameter exchange rather than PLC program development is the final justification for these types of control systems.
Parameter exchange vs machine control
The automotive industry is an example of more intense PLC utilisation. In this and similar industries, the PLC often acts as a ‘hub’ connecting separate intelligent systems. This integration between systems and machines demands a higher degree of communication flexibility, as well as rigorous integration with safety logic. In production, it is common to see PLC connections to vision or inspection systems, conveyors, barcode and RFID readers, bolting and press-fit machines, as well as marking and printing facilities. In every case there are numerous systems with their own intelligence, while the exchange of typical Start/Stop/Done parameters still happens through hard-wired I/O or fieldbuses. Compared with typical process plant PLC implementations, the variety of distributed control and intelligence is higher. It is also more common to see production variations implemented at PLC level. Producing several different components on one machine or production line is common, but involves many mechanical and electrical adaptations that are more easily accomplished through PLC code integration than through the parameter adjustments common in process plants.
OEMs demanding more
A unique level of PLC utilisation can be found in OEM machine building. Whether you are manufacturing specialised welding, cutting, injection moulding or test and verification beds, your PLC requirements are even more demanding than those just described. The combination of fast motion control and high accuracy of movement requires multi-faceted controllers. Even though many servo drives are capable of standalone operation, full-scale integration and rapid response make combined PLC/NC controllers a viable option. OEMs need to keep costs and component count down, leaving little room for large-scale use of ‘black-box’ solutions. All sensors and actuators need direct termination to PLC I/O as efficiently as possible, and the same standard controllers must be usable across different machines. This simplifies inventory: the choice of memory card alone determines which machine function a PLC will perform. Visualisation also needs to be local and simple. Combined with data storage and remote software serviceability, these are all excellent arguments for PC-based control.
New PLC capabilities mean new markets
Technologies such as combined PLC and HMI elements, TCP/IP and wireless communication have led to new opportunities for PLCs. Industries where test and measurement tasks once relied on dedicated equipment with high integration costs are now easily within reach of many modern medium-level devices. The technology drive of the IT industry keeps pushing semiconductor prices down and performance levels up, to the point where full-scale adoption by the PLC market must become inevitable. Chipsets with built-in serial transmission and Ethernet connectivity mean reduced size and implementation costs in PLC design. Modern IT chipsets such as ARM and Intel Atom designs allow for high-performance control while still maintaining temperature and environmental tolerances well within the levels required of a PLC. For many manufacturers, I believe the intellectual property held in their communication and multitasking functionality will soon exceed the importance of pure memory and number-crunching abilities. Older PLC CPUs with limited numbers of timer/counter/memory blocks cannot compete with modern integrated solutions for high-end applications. The flexibility of modern controllers has opened up applications such as building automation, wind energy generation, robotics, medical engineering and many others.
Actual machine benefits
Let us consider the functions a PLC must fulfil in what seems like a simple application. A bagging machine needs to weigh product as it travels, close a flap at the correct weight, and then seal the top of the bag. But the machine builder wants to offer his customers more without adding much complexity to the machine. His wish list includes more accurate filling; historic logging; recipes for different-sized bags; simple visualisation; remote diagnostics; easy parameter adjustments; notifications; and plant integration functionality. The goals are clear: make a machine that is easy to use and understand, pushes information to the relevant personnel, and can adapt to the needs of different customers with minimum re-engineering. Modern PLCs easily cater for all these needs.
With history logging, machines can now record predicted versus achieved fill weights. Simple changes to the flap closure rate then allow the machine to self-tune for different bag variations. The historic data is valuable not only for self-tuning; it also provides accurate data storage to satisfy FDA and similar regulations. The addition of an RFID or barcode scanner to the system allows direct variant identification and exact product traceability.
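As a rough illustration of the self-tuning idea, the sketch below shifts the flap-closure trigger point by a fraction of each bag's overshoot. The class, gain value and weights are assumptions invented for illustration; on a real machine this logic would live in the IEC 61131-3 program, not in Python.

```python
# Hypothetical sketch of fill-weight self-tuning: product keeps falling
# while the flap closes, so bags overshoot the target; nudge the trigger
# point down by a fraction of the measured error each cycle.
class FlapTuner:
    def __init__(self, close_at_g, gain=0.5):
        self.close_at_g = close_at_g  # weight at which the flap starts closing
        self.gain = gain              # fraction of the error corrected per bag

    def record_fill(self, target_g, achieved_g):
        overshoot = achieved_g - target_g
        self.close_at_g -= self.gain * overshoot
        return self.close_at_g

tuner = FlapTuner(close_at_g=1000.0)
for achieved in (1012.0, 1006.0, 1003.0):  # overshoot shrinking as we tune
    trigger = tuner.record_fill(target_g=1000.0, achieved_g=achieved)
print(f"flap now closes at {trigger:.1f} g")
```

The same predicted-versus-achieved log that drives the tuning is what satisfies the traceability and FDA-style record-keeping mentioned above.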
Recipe control
This could easily be handled by the PLC through XML data files. A normal database could of course handle this as well, but many manufacturers prefer independent XML files because of the ease of distributing complex recipes to different machines. PLCs now have the ability to read (and write) XML files as well as CSV, XLS and other text-formatted files. This allows modification from user inputs on the HMI or across the network from a PC. New recipe files can even be stored on an FTP site, allowing the PLC to retrieve updated recipe files at set intervals.
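A minimal sketch of XML-based recipe handling, written in Python for readability. The recipe schema, element names and parameter keys below are invented for illustration and do not reflect any particular PLC vendor's format.

```python
# Hypothetical recipe file layout; element and attribute names are
# assumptions for illustration, not a standard PLC recipe schema.
import xml.etree.ElementTree as ET

RECIPE_XML = """\
<recipes>
  <recipe name="bag_5kg">
    <param key="target_weight_g" value="5000"/>
    <param key="flap_close_at_g" value="4960"/>
  </recipe>
  <recipe name="bag_10kg">
    <param key="target_weight_g" value="10000"/>
    <param key="flap_close_at_g" value="9950"/>
  </recipe>
</recipes>
"""

def load_recipe(xml_text, name):
    """Return the named recipe's parameters as a {key: float} dict."""
    root = ET.fromstring(xml_text)
    node = root.find(f"recipe[@name='{name}']")
    if node is None:
        raise KeyError(name)
    return {p.get("key"): float(p.get("value")) for p in node.findall("param")}

params = load_recipe(RECIPE_XML, "bag_10kg")
print(params["target_weight_g"])  # 10000.0
```

Because the file is plain text, the same recipe can be edited on the HMI, copied across the network from a PC, or fetched from an FTP site, exactly the distribution paths the article describes.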
Machine control solutions seldom require 3D graphics or vector-scaled images. A simple diagram outlining the machine's components, operating state, alarms, I/O checklist and recipe configuration is usually all that is needed. Modern PLCs allow for onboard visualisations, either through DVI/USB connections to touch panels or through built-in displays. Another convenience for the machine builder is the ability to stream the HMI across the network or Internet for authorised use by personnel in remote locations. The bagging machine's live view, I/O list, alarm table, function view, historical data, auto-tuned closing performance and recipe management can all be implemented in the same software environment in which the IEC 61131-3 programming takes place.
Standardised software development makes it all possible
To implement all these added features and benefits in the modern PLC, we need an open, easy-to-use IEC 61131-3 PLC programming environment. The fact that approved function blocks already exist for most of these advanced functions makes implementation quick and easy. Open integration into the .NET framework for expansion and external code integration future-proofs your software and keeps one integrated platform for different modules. Module-based automation in software means real-time performance, motion control, safety, hardware management, data logging, remote connections and all forms of extensions realised in one easy-to-manage, open PLC software development tool.
Because the interfaces for many applications are already predefined in a modular structure, it is now possible to instantiate the different controllers on the machine's central control hardware. These can also be created independently of one another, and in different programming languages, allowing the large selection of existing (or self-developed) basic modules to form an automation kit from which new applications can be created easily.
This is the key to extended PC based control functionality.


Filed under Automation News

National Instruments Technical Seminars



National Instruments events offer you the chance to find out more about the company's products, services and vision, and to gain new technical knowledge through seminars and workshops organized all over the world.

In 2011, NI also plans to organize many such events in the instrumentation and data acquisition field. For example, from 17.01.2010 to 23.01.2010, NI scheduled multiple workshops in different locations in Romania.


Seminar Highlights

As technology advances, so does instrumentation capability. National Instruments regularly holds seminars around the world. The seminars are designed to keep you abreast of the latest technologies and trends in measurement and automation. The material is presented either lecture-style or in a hands-on format, with equipment supplied by leading instrumentation and computer manufacturers.


Filed under Field Events