AUTOMATION TECHNOLOGY: AUTOMATIC CONTROL SYSTEMS
Control is the fundamental engineering and managerial function whose major purpose is to measure, evaluate, and adjust the operation of a process, a machine, or a system under dynamic conditions so that it achieves desired objectives within its planned specifications and subject to cost and safety considerations. A well-planned system can perform effectively without any control only as long as no variations are encountered in its own operation and its environment. In reality, however, many changes occur over time. Machine breakdown, human error, variable material properties, and faulty information are a few examples of why a system must be controlled.
When a system is more complex and contains more potential sources of dynamic variation, more sophisticated control is required. Particularly in automatic systems, where human operators are replaced by machines and computers, a thorough design of control responsibilities and procedures is necessary. Control activities include automatic control of individual machines, material-handling equipment, manufacturing processes, and production systems, as well as control of operations, inventory, quality, labor performance, and cost. Careful design of correct and adequate controls that continually identify and trace variations and disturbances, evaluate alternative responses, and result in timely and appropriate actions is therefore vital to the successful operation of a system.
Fundamentals of Control
Automatic control, as the term is commonly used, is "self-correcting," or feedback, control; that is, a control instrument continuously monitors certain output variables of a controlled process and compares each measured output with a preestablished desired value. Any error obtained from this comparison is used to compute the required correction to the control setting of the equipment being controlled. As a result, the value of the output variable is adjusted to its desired level and maintained there. This type of control is known as a servomechanism.
The design and use of a servomechanism control system requires a knowledge of every element of the control loop. For example, in Figure 1 the engineer must know the dynamic response, or complete operating characteristics, of each pictured device:
1. The indicator or sampler, which senses and measures the actual output
2. The controller, including both the error detector and the correction computer, which contains the decision-making logic
3. The control valve and the transmission characteristics of the connecting lines, which communicate and activate the necessary adjustment
4. The operating characteristics of the plant, which is the process or system being controlled
Dynamic response, or operating characteristics, refers to a mathematical expression, for example, a set of differential equations, for the transient behavior of the process, that is, its actions during periods of change in operating conditions. From it one can develop the transfer function of the process or prepare an experimental or empirical representation of the same effects.
Because of time lags due to the long communication line (typically pneumatic or hydraulic) from sensor to controller and other delays in the process, some time will elapse before knowledge of changes in an output process variable reaches the controller. When the controller notes a change, it must compare it with the variable value it desires, compute how much and in what direction the control valve must be repositioned, and then activate this correction in the valve opening. Some time is required, of course, to make these decisions and correct the valve position.
Some time will also elapse before the effect of the valve correction on the output variable value can reach the output itself and thus be sensed. Only then will the controller be able to know whether its first correction was too small or too large. At that time it makes a further correction, which will, after a time, cause another output change. The results of this second correction will be observed, a third correction will be made, and so on.
This series of measuring, comparing, computing, and correcting actions goes around and around through the controller and through the process in a closed chain of actions until the actual process variable is finally balanced again at the desired level. Because there are disturbances and modifications in the desired level of the output from time to time, the series of control actions never ceases. This type of control is aptly termed feedback control. Figure 1 shows the direction and path of this closed series of control actions. The closed-loop concept is fundamental to a full understanding of automatic control.
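The closed chain of actions just described can be sketched in a few lines of code. The first-order "plant," the gain, and the numeric values below are illustrative assumptions, not taken from the text:

```python
# Minimal sketch of the closed-loop cycle: measure, compare, compute, correct.
# The hypothetical plant is a first-order process whose output drifts toward
# the valve position each time step.

def run_closed_loop(set_point=50.0, steps=200, gain=0.5):
    output = 0.0      # sensed process variable
    valve = 0.0       # final control element position
    for _ in range(steps):
        error = set_point - output          # compare actual vs. desired value
        valve += gain * error               # correct the valve setting
        output += 0.1 * (valve - output)    # process slowly follows the valve
    return output

final_value = run_closed_loop()
```

Each pass through the loop corresponds to one round of the measure-compare-correct cycle; the corrections shrink as the process variable settles at the set point.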
Although the preceding example illustrates the basic principles involved, the actual attainment of automatic control of almost any industrial process or other complicated device will usually be much more difficult because of the speed of response, multivariable interaction, nonlinearity, response limitations, or other difficulties that may be present, as well as the much higher accuracy or degree of control that is usually desired beyond that required for the simple process just mentioned.
As defined here, automatic process control always implies the use of feedback. This means that the control instrument is continuously monitoring certain output variables of the controlled process,
such as a temperature, a pressure, or a composition, and is also comparing this output with some preestablished desired value, which is considered a reference, or a set point, of the controlled variable. An error that is indicated by the comparison is used by the instrument to compute a correction to the setting of the process control valve or other final control element in order to adjust the value of the output variable to its desired level and maintain it there.
If the set point is altered, the response of the control system to bring the process to the new operating level is termed that of a servomechanism or self-correcting device. The action of holding the process at a previously established level of operation in the face of external disturbances operating on the process is termed that of a regulator.
Instrumentation of an Automatic Control System
The large number of variables of a typical industrial plant constitute a wide variety of flows, levels, temperatures, compositions, positions, and other parameters to be measured by the sensor elements of the control system. Such devices sense some physical, electrical, or other informational property of the variable under consideration and use it to develop an electrical, mechanical, or pneumatic signal representative of the magnitude of the variable in question. The signal is then acted upon by a transducer to convert it to one of the standard signal levels used in industrial plants (3–15 psi for pneumatic systems and 1–4, 4–20, or 10–50 mA or 0–5 V for electrical systems). Signals may also be digitized at this point if the control system is to be digital.
The signals that are developed by many types of sensors are continuous representations of the sensed variables and as such are called analog signals. When analog signals have been operated upon by an analog-to-digital converter, they become a series of bits, or on–off signals, and are then called digital signals. Several bits must always be considered together in order to represent properly the converted analog signal (typically, 10–12 bits).
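As a sketch of this conversion, the following maps an analog signal onto a 10-bit code and back; the 4-20 mA span and the function names are illustrative assumptions:

```python
# Sketch of analog-to-digital conversion: an analog signal spanning a known
# range is mapped onto a 10-bit integer code (2**10 = 1024 discrete levels).

def to_digital(value, lo=4.0, hi=20.0, bits=10):
    levels = 2 ** bits - 1                 # highest code word (1023 for 10 bits)
    clamped = min(max(value, lo), hi)      # keep the signal inside the input span
    return round((clamped - lo) / (hi - lo) * levels)

def to_analog(code, lo=4.0, hi=20.0, bits=10):
    levels = 2 ** bits - 1
    return lo + code / levels * (hi - lo)

code = to_digital(12.0)   # mid-scale current in a 4-20 mA loop
```

Converting the code back to an analog value recovers the original signal to within one quantization step, which is why 10-12 bits suffice for most process signals.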
As stated previously, the resulting sensed variable signal is compared at the controller to a desired level, or set point, for that variable. The set point is established by the plant operator or by an upper-level control system. Any error (difference) between these values is used by the controller to compute a correction to the controller output, which is transmitted to the valve or other actuator that adjusts the system's parameters.
A typical algorithm by which the controller computes its correction is as follows (Morriss 1995). Suppose a system includes components that convert inputs to outputs according to relationships, called gains, of three types: proportional, integral, and derivative gains. Then the controller output is

output(t) = KP e(t) + KI ∫ e(t) dt + KD de(t)/dt

where e(t) is the error between the set point and the sensed variable and KP, KI, and KD are the proportional, integral, and derivative gains, respectively.
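A discrete-time sketch of this correction algorithm follows; the class name, gains, and time step are illustrative assumptions rather than a specific implementation from Morriss (1995):

```python
# Discrete-time sketch of the controller correction: the output combines
# proportional, integral, and derivative terms of the error
# e(t) = set_point - measured.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, set_point, measured):
        error = set_point - measured
        self.integral += error * self.dt                  # approximates the integral term
        derivative = (error - self.prev_error) / self.dt  # approximates de(t)/dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative use: drive a hypothetical first-order plant to a set point of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.05)
x = 0.0
for _ in range(2000):
    u = pid.update(1.0, x)
    x += 0.05 * (u - x)   # plant responds to the controller output
```

Each call to update() plays the role of one pass through the control loop: it compares the sensed value with the set point and returns the corrected controller output.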
Basic Control Models
Control Modeling
Five types of modeling methodologies have been employed to represent physical components and relationships in the study of control systems:
1. Mathematical equations, in particular, differential equations, which are the basis of classical control theory (transfer functions are a common form of these equations)
2. Mathematical equations applied to the state variables of multivariable systems and associated with modern control theory
3. Block diagrams
4. Signal flow graphs
5. Functional analysis representations (data flow diagram and entity relationships)
Mathematical models are employed when detailed relationships are necessary. To simplify the analysis of mathematical equations, we usually approximate them by linear, ordinary differential equations. For instance, a characteristic differential equation of a control loop model may have the form

a d²x(t)/dt² + β dx(t)/dt + x(t) = f(t),   x(0) = X0,   dx(0)/dt = V0

where x(t) is a time function of the controlled output variable, its first and second derivatives over time specify the temporal nature of the system, a and β are parameters of the system properties, f(t) specifies the input function, and X0 and V0 are specified constants (the initial conditions).
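A second-order model of this kind can also be integrated numerically; the sketch below uses a simple Euler method, and the parameter values, step input, and step size are illustrative assumptions:

```python
# Numerical sketch of a second-order control-loop model
#   a*x'' + b*x' + x = f(t),  x(0) = x0,  x'(0) = v0,
# integrated with a semi-implicit Euler scheme.

def simulate(a=1.0, b=2.0, f=lambda t: 1.0, x0=0.0, v0=0.0, dt=0.001, t_end=20.0):
    x, v, t = x0, v0, 0.0
    while t < t_end:
        acc = (f(t) - b * v - x) / a   # solve the equation for x''
        v += acc * dt                  # advance the velocity (first derivative)
        x += v * dt                    # advance the output variable
        t += dt
    return x

final_output = simulate()   # step input: output settles at the input level
```

With a = 1 and b = 2 the system is critically damped, so the output approaches the unit step input without oscillation.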
Mathematical equations such as this example are developed to describe the performance of a given system. Usually an equation or a transfer function is determined for each system component.
Then a model is formulated by appropriately combining the individual components. This process is often simplified by applying Laplace and Fourier transforms. A graph representation by block diagrams (see Figure 2) is usually applied to define the connections between components.
Once a mathematical model is formulated, the control system characteristics can be analytically or empirically determined. The basic characteristics that are the object of the control system design are:
1. Response time
2. Relative stability
3. Control accuracy
They can be expressed either as functions of frequency, called frequency domain specifications, or as functions of time, called time domain specifications. To develop the specifications the mathematical equations have to be solved. Modern computer software, such as MATLAB (e.g., Kuo 1995), has provided convenient tools for solving the equations.
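Time domain specifications such as overshoot and settling time can be read off a step response. The sketch below generates the closed-form unit step response of a standard second-order system and measures both quantities; the damping ratio and natural frequency are illustrative assumptions:

```python
# Sketch of extracting time-domain specifications from a step response.
import math

def step_response(zeta=0.3, wn=2.0, dt=0.01, t_end=10.0):
    """Closed-form unit step response of a standard second-order system."""
    wd = wn * math.sqrt(1 - zeta ** 2)                       # damped frequency
    phase = math.atan2(math.sqrt(1 - zeta ** 2), zeta)
    resp, t = [], 0.0
    while t <= t_end:
        decay = math.exp(-zeta * wn * t)
        y = 1 - decay / math.sqrt(1 - zeta ** 2) * math.sin(wd * t + phase)
        resp.append((t, y))
        t += dt
    return resp

def overshoot(resp, final=1.0):
    """Peak excursion beyond the final value."""
    return max(y for _, y in resp) - final

def settling_time(resp, final=1.0, band=0.02):
    """Last time the response is outside a 2% band around the final value."""
    last_out = 0.0
    for t, y in resp:
        if abs(y - final) > band * final:
            last_out = t
    return last_out
```

For a damping ratio of 0.3 the overshoot is roughly 37% and the 2% settling time is several time constants, which is why relative stability and response time must be traded off in the design.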
Control Models
Unlike open-loop control, which simply translates input signals into actuator commands, a feedback control system receives feedback signals from sensors and compares them with the set point. The controller can then drive the plant toward the desired set point according to the feedback signal. There are five basic feedback control models (Morriss 1995):
1. On/off control: In on/off control, the controller output has only two states: when the error e(t) lies on one side of zero, the controller activates the plant; otherwise it leaves the plant off. Most household temperature thermostats follow this model.
2. Proportional (PE) control: In PE control, the output is proportional to the error, i.e., output(t) = KP e(t). In PE, the plant responds as soon as the error signal is nonzero, but the output will not stop exactly at the set point. As the output approaches the set point, e(t) becomes smaller, and eventually the controller output is too small to overcome opposing forces (e.g., friction). Attempts to reduce this small residual e(t), called the steady-state error, by increasing KP only cause more overshoot error.
3. Proportional-integral (PI) control: PI control addresses the problem of steady-state error. In PI, output(t) = KP e(t) + KI ∫ e(t) dt. As long as a steady-state error exists, the integral of the error signal continues to grow, so the controller output keeps increasing and the plant is driven until the steady-state error is eliminated.
4. Proportional-derivative (PD) control: PD control modifies the rate of response of the feedback control system in order to prevent overshoot. In PD, output(t) = KP e(t) + KD de(t)/dt. When e(t) gets smaller, its derivative is negative, which reduces the controller output and thereby prevents overshoot.
5. Proportional-integral-derivative (PID) control: PID control combines the advantages of PE, PI, and PD control by choosing gains (KP, KI, and KD) that balance proportional response, steady-state reset ability, and rate of response, so that the plant is well controlled.
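The contrast between models 2 and 3 can be demonstrated with a short simulation. The first-order plant, the constant disturbance standing in for an opposing force, and the gain values below are illustrative assumptions:

```python
# Sketch contrasting proportional (PE) and proportional-integral (PI) control
# on a hypothetical first-order plant with a constant opposing disturbance.
# PE settles with a residual steady-state error; PI eliminates it.

def simulate(kp, ki, disturbance=-0.5, set_point=1.0, dt=0.01, steps=20000):
    x = 0.0           # plant output
    integral = 0.0    # accumulated error for the integral term
    for _ in range(steps):
        error = set_point - x
        integral += error * dt
        u = kp * error + ki * integral
        x += dt * (u + disturbance - x)   # first-order plant plus disturbance
    return x

p_only = simulate(kp=4.0, ki=0.0)   # settles below the set point
with_pi = simulate(kp=4.0, ki=2.0)  # integral term removes the residual error
```

With proportional control alone the plant balances where the shrinking proportional term just cancels the disturbance, leaving a steady-state error; adding the integral term keeps pushing until the error is gone.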
Advanced Control Models
Based on the control models introduced in Section 3.3, researchers have developed various advanced control models for special needs. Table 1 shows the application domains and examples of these models rather than their complicated theoretical control diagrams. Interested readers can refer to Morriss (1995).