Mathematical Equations or Processing Routines?
A review of the reasons why there is a need to change from mathematical modelling to cognitive modelling of human operators doing slow control tasks. Basically, mathematical equations do not contain the components needed to describe many aspects of human cognitive behaviour.
In quoting from Kok & van Wijk's work, I had no intention of singling out their work for criticism. On the contrary, I quoted from them because they had the clearest exposition of control theory models I had found.
Topics
1. Introduction
2. The goals of process control
2.1. Future targets
2.2. Planning future sequences of behaviour
2.3. Problem solving
2.4. On-line behaviour
2.5. Evaluation
3. Input processes
3.1. Human response lag
3.2. Accuracy of display readings
3.3. Intersample interval and process knowledge
3.4. Reliability of input measures
3.5. Multi-variable sampling and working storage
4. Control action choice
4.1. The form of the operator's process knowledge
4.2. Acquisition of the internal model
4.3. Comparing alternative actions
4.4. Predicting action effects in new situations
5. General conclusions
Mathematical Equations or Processing Routines?
Lisanne Bainbridge
Department of Psychology, University of Reading
1981
in Rasmussen, J. and Rouse, W.B. (eds.) Human Detection and Diagnosis of System Failures. NATO Conference Series III : Human Factors, Vol. 15. Plenum Press, New York, pp. 259-286.
1. Introduction
Within the context of this book, we want to know the factors which affect human ability to detect and diagnose process failures, as a basis for console and job design. Understandably, human factors engineers want fully specified models for the human operator's behaviour. It is also understandable that these engineers should make use of modelling procedures which are available from control engineering. These techniques are attractive for two reasons. They are sophisticated and well understood. They have also been very successful at giving first-order descriptions of human compensatory tracking performance in fast control tasks such as flying. In that context they are sufficiently useful that the criticism that they are inadequate as a psychological theory of this behaviour is, for many purposes, irrelevant. Engineers have therefore been encouraged to extend the same concepts to other types of control task. In this paper we will be considering particularly the control of complex slowly changing industrial processes, such as steel making, petrochemicals and power generation. We will find that while control engineering methods may be adequate to give a first-order description of human control behaviour in very simple tasks, they do not contain mechanisms with sufficient potential to account for all aspects of behaviour in more complex situations. I will try to discuss the types of mathematical model which appear to fit the data from these tasks best. I will also describe some of the other types of behaviour for which a mathematical representation seems insufficient or inappropriate.
In order to make these comparisons I will take some of the basic concepts in some typical engineering approaches and compare them with data from human control studies. As many engineering models distinguish between input processes and control decisions, these are discussed separately, in Sections 3 and 4. Unfortunately, as studies of human process control behaviour show that the distinction between obtaining information, making decisions and setting control targets is often not clear, this way of organising the paper gives a rather distorted representation; it also entirely omits some central aspects. The paper will therefore start, in Section 2, with a discussion of the overall nature of process control tasks, and of some of the richer aspects of behaviour for which mathematical equations do not provide the types of concept needed.
I would certainly accept that most current psychological models cannot be used effectively in solving engineering problems, and that, as they are not fully specified, they have little intrinsic value for engineers. I would like however to argue that the most effective response to this is not to make more sophisticated control theory models, but to put more work into developing more powerful models of different types. It must also be admitted that it is only possible at the moment to give a very superficial indication of the wealth of problems which need to be dealt with, and of the mechanisms which might be useful in more fruitful models of human process control.
2. The goals of process control
A simple expression of a process operator's goal would be that they should maintain the process output within tolerance around the target level. This description may be appropriate to the control of single variables, as in Veldhuyzen and Stassen's (1977) model of ship steering. In many processes however the operator's task is more like that of a ship's pilot deciding which direction the ship should go in, and dynamic control is a relatively minor aspect of the task. Umbers (1979), in a study of gas grid control, found an average of 0.7 control actions per hour. Beishon (1969), interviewing a bakery oven controller while he was doing his job, found that only 11% of his skilled decision making was concerned with control and observation. These investigators both found that goal- or target-setting was a major part of the operator's activity, and that it influences task behaviour in such an integral way that it cannot be dealt with by models which are distinct from models of control decisions.
In general, studies suggest that an operator is concerned with goals at several levels. They convert goals which are not fully specified into an operationalisable form. They predict targets which are affected by varying product demand or process behaviour. They plan sequences of future behaviour, by the process and themselves. Such sequences may be evaluated against multiple goals, and generation and evaluation of these sequences may be difficult to distinguish. They may also be able to generate entirely new behaviour in unfamiliar situations. We therefore need to understand and be able to model all these types of behaviour.
2.1. Future Targets
Frequently the general goals specified by management are inadequate as guides for controlling the plant, and the operator has to convert them to sub-goals. For example, Beishon (1969) found that the time/temperature combinations at which cakes could be baked were not fully specified in the job description, and had been learned by the operator. Such knowledge could be modelled, at a first-order level, by table-look-up, and the learning by parameter trimming.
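As a minimal sketch (in present-day notation) of what table-look-up with parameter trimming might look like, where the cake types, times and temperatures are invented purely for illustration:

# A minimal sketch, not Beishon's model: table-look-up for baking
# conditions, with parameter trimming of the stored values. Cake
# types, times (minutes) and temperatures (degrees C) are assumed.

baking_table = {
    'sponge': {'time': 25, 'temp': 180},
    'fruit':  {'time': 90, 'temp': 150},
}

def look_up(cake):
    return baking_table[cake]

def trim(cake, temp_error, rate=0.1):
    # adjust the stored temperature in proportion to the observed error
    baking_table[cake]['temp'] -= rate * temp_error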
Many process tasks are more complex than this, as the control target varies on-line as a function of product demand or stages in production. In some industries, management makes the decisions at this level. In others, predicting targets may be part of the operator's task. Umbers (1979) studied gas grid controllers, who ensure that sufficient gas is available to meet demand over the day. It is inefficient to control the gas supply by feedback from demand levels, as 2-4 hours' notice is needed to increase production using cheap methods, and the pressure of stored gas provides a 'free' force for distribution, so a major part of the task is to predict how much gas will be used that day and have it available beforehand. Umbers concluded that observational techniques provided inadequate information on the controller's methods, so he recorded verbal protocols, asking the controllers to 'think aloud' while doing the task. Verbal protocols are often analysed by producing flow-diagrams which would generate the same sequence of activity as in the verbal report. Therefore, to accept many of the following examples in which verbal reports are used as data one has to accept that an operator can give useful and valid reports (see Bainbridge, 1979), and that some of the operator's behaviour can be described validly in terms of program-like routines. Umbers found that 28% of the verbal reports were concerned with predictions of gas demand. A support computer made demand predictions based on the same day of the week in the 4 previous weeks. The operator can revise and override this estimate, by finding records of a previous day which match the present day better than the ones used by the computer, allowing for the weather, national holidays, etc. Such judgements and comparisons would be most easily modelled by conditional statements. The knowledge of factors influencing demand can presumably be modelled in the same way as other types of knowledge of dynamic relations, as discussed further in 4.1.
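A hedged sketch of this revision process, in which the record fields, the mismatch measure and its threshold are all assumptions made for illustration, might use conditional statements like these:

# Sketch only: revising the computer's estimate by finding a past day
# which matches today better than the weekday average does. All field
# names, weights and thresholds are invented.

def computer_estimate(records, weekday):
    same_days = [r['demand'] for r in records if r['weekday'] == weekday]
    return sum(same_days[-4:]) / len(same_days[-4:])

def mismatch(record, today):
    return (abs(record['temperature'] - today['temperature'])
            + (0 if record['holiday'] == today['holiday'] else 10))

def operator_estimate(records, today):
    best = min(records, key=lambda r: mismatch(r, today))
    if mismatch(best, today) < 2:          # a clearly better-matching day
        return best['demand']
    return computer_estimate(records, today['weekday'])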
The future target may be a function not only of product demand but also of process behaviour. The writer (Bainbridge, 1974) studied allocation of power to 5 electric-arc steel-making furnaces, with a limit on the amount of power which could be used in a half-hour period. Steel-making furnaces go through a sequence of stages which require different amounts of power. Therefore, for example, if too much power is being used now, but shortly one of the furnaces will change to a stage using less power, then this may compensate for the present over-usage, i.e. the present target for power usage can be higher than average yet there is no need for action by the operator. Goal setting and control strategy are here combined. Protocols from these controllers show that they make a list of future furnace changes, and keep in mind particularly those which will happen in the current half-hour. Generating the time of the next stage change in each furnace is done by arithmetic, using displayed and reference data. Ordering these events in time requires a simple sort procedure. Simple arithmetic could be used to model the process of checking whether a future event will compensate for a present power usage which is unacceptable by average criteria. However this would not be a good model for all controllers. The protocols suggest that some operators assess the effect of an event in relative rather than absolute terms, i.e. they say that an event 'lowers' power usage without calculating specifically by how much. Also an arithmetic account does not model the operator's understanding of why this anticipation is a good strategy.
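The arithmetic, sorting and checking components described here could be sketched as follows; the furnace data, field names and power figures are assumptions, and the relative, non-numerical judgements some controllers make are deliberately not captured:

# Sketch: generate and order future stage-change events, then check
# whether a change within the half-hour compensates for over-usage.

def future_events(furnaces):
    events = []
    for f in furnaces:
        change_time = f['stage_start'] + f['stage_duration']  # arithmetic
        events.append((change_time, f['name'], f['power_drop']))
    return sorted(events)                                     # simple sort

def action_needed(events, usage, target, end_of_half_hour):
    surplus = usage - target
    for time, name, drop in events:
        if time <= end_of_half_hour:
            surplus -= drop    # an imminent change offsets present usage
    return surplus > 0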
Another important aspect of this behaviour is that the operator does not make this list of future events at a time when power usage is unacceptable. The operator studied in detail made the list during a general review of the process state made when the control state was acceptable. The list is then immediately available for reference when power usage is unacceptable. The process of building up and referring to this background information (or 'mental picture') about the present process state plays an important part in control behaviour, as discussed further in 3.5.
These examples suggest that control activity is generally oriented to the future. An operator can only make such predictions if they have adequate knowledge of the process dynamics. Further examples of prediction are given in 3.3, and Section 4 discusses the nature and acquisition of process knowledge.
2.2. Planning Future Sequences of Behaviour
Allowing for the future can be still more complex. In Beishon's (1969) bakery study, much of the operator's thinking was concerned with planning future activity. For example, he had to choose the best sequence in which to bake the waiting cakes. To minimise time and power usage, the cakes should be baked in a sequence of increasing or decreasing temperatures. Again there is no clear distinction between goal-setting and control. It would seem that a simple sorting routine for arranging cakes in order of required conditions would give a simple description of what the operator does, though not necessarily of how he does it. The actual situation is more complex however, as he was controlling 3 ovens, so assignment of cakes to each will depend on present conditions and capacity, requiring a multidimensional evaluation.
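The sorting component alone is trivial to express; as a sketch with the cakes and temperatures assumed (the three-oven assignment, being a multidimensional evaluation, is deliberately left out):

# Sketch of the sorting component only; values are invented.
waiting = [('madeira', 170), ('rich fruit', 140), ('sponge', 180)]
bake_order = sorted(waiting, key=lambda cake: cake[1])  # rising temperature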
As well as planning the best sequence of behaviour for the process, an operator also has to plan their own sequence of actions. In simple tasks with one display and one control, the operator's control actions are all on the same variable. The sequence of actions to obtain a step change : 'turn control up to start controlled variable moving towards target, turn it down to stop the movement at the target, turn it to a level which will maintain output on target', has been observed in many tasks, e.g. Crossman and Cooke (1962). Passage from one type of activity to another can be modelled by dividing the phase-plane into different areas, as by Veldhuyzen and Stassen (1977). An operator however also has to time their display and product sampling. With 3 multi-dimensional ovens, or any task where the operator has several responsibilities, interleaving the potential different types of activity again requires a complex multidimensional evaluation. This will be affected by personal as well as production goals; for example, an operator may do one activity after another because the two are physically close together on the process [a long oven], so this reduces their effort, rather than because this is the optimum timing of the second action from the point of view of process operation. Beishon comments that planning gives rise to activities the reasons for which are not obvious from the current context, and which are therefore not identifiable unless one has been discussing with the operator what they are doing and thinking about.
All these types of planning involve the operation of multiple goals. Some preliminary points about this will be made in the next sections.
2.3. Problem Solving
One can see from the previous examples that the operator is evaluating possible behaviours against several goals which operate simultaneously. There are parallel goals, e.g. baking cakes, using minimum time and fuel, and an acceptable level of operator effort. Some of these may be implemented by sub-goals, e.g. cakes are baked with minimum time and fuel if they are put in the oven which is nearest to the required temperature. Thus within one goal there may be multiple levels of goal organisation which cannot necessarily be described as a hierarchy (cp. Bainbridge, 1975). Therefore, to understand their effects on behaviour, we need to understand the nature of different types of goal structure, and this is something about which we know very little.
This complex structure of goals is involved in the generation of new types of behaviour in unfamiliar situations. This occurs both in problem solving, when the process is not reacting as expected (as in system failure), and in the related situation of learning, which will be discussed further in 4.2.
These goals are also involved in on-line choice between known behaviours in a specific known situation, see 2.4. Problem solving involves generating new behaviours which will meet the task goals. (If the task goals have not been fully specified in the job description then the ways of meeting them are also unlikely to have been.) To be brief it is easiest to give a non-process-control example. The goal is to get a nail in; a simple response to this goal is to find a hammer and hit the nail in. However, an important characteristic of human behaviour, which allows flexibility, is that we do not only have simple and direct goal-response links. If we cannot find a hammer then we look for another object which can be used with the same effect. To be able to do this, each goal must have associated with it a specification of the characteristics required of a 'response' which will implement it, and instructions about how to check for them. Each response must have associated with it information about the type of results it can provide. Winograd (1972) has described an interesting attempt at a program which works in this way. The above points therefore suggest that for a process operator to generate new behaviours their process knowledge must be specified in a goal-oriented way, which can also be searched in a goal-directed way to find new sequences of process behaviour which will lead to the control goals. A model of such behaviour would be necessary for understanding how an operator reacts to system failure, but this is again something about which we know very little (see further in 4.4).
2.4. On-line Behaviour
It is perhaps easier to model how multiple goals affect the choice between already available strategies from moment to moment, to suit particular on-line conditions.
The writer (Bainbridge, 1974), in the study of power allocation to 5 steel furnaces, found that the operator had one routine for finding the target power usage for the rest of the half-hour, and other routines for choosing the best action to make. These routines choose which of the 5 furnaces to reduce the power supply to, the total level of power usage being the primary control goal. The first routine finds the furnace which started melting last. Steel quality is not affected if power is cut during the melting stage, while it is affected if power is cut during some other stages; and the furnace which started melting last has heated up least, so least fuel will be wasted by allowing it to cool. The second routine finds which melting furnace is least loaded with metal, for the same reasons, and so on. Each routine therefore considers actions which are increasingly refined from the point of view of meeting the secondary control goals of maximising steel throughput and quality. Suppose the operator studied in detail has judged that the level of power usage indicates an action will be needed soon. He uses the first action-choice routine, which indicates a good, though not necessarily the best, furnace to be changed. He then samples the control state again. If action is urgent he can make the action chosen on this simple basis; if it is less urgent he uses the next action-choice routine, which refines the action choice, and so on.
There are therefore two ways, in this example, in which the urgency of the control state affects the choice of next best behaviour. When action is urgent, the primary goal of power control overrides the secondary goals and action is chosen on the simplest basis, while when under less pressure the operator considers a larger number of factors which define the best action in the circumstances. In addition, he does not always return to check the control state at the end of each of the control choice routines. One might model this by returning, at the end of each routine, to some 'executive' which decides what would be the most effective way for the operator to spend the next few moments. In the unacceptable-power-usage context there is a choice between refining the action choice, sampling the power usage, or making a control action, and the choice between these is a function of the urgency of the control state at the last sample. (This example is referred to again in 3.3, when discussing the factors affecting inter-sample interval). The effect of utility 'calculations' on the operation of multiple goals in behaviour choice has therefore not disappeared, as it is presumably used in these 'executive' decisions, but it plays only a partial role within a more complex structure.
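A minimal sketch of such an 'executive', with the urgency thresholds and the form of the routine cascade assumed purely for illustration, might be:

# Sketch: choice between acting, refining the action choice with the
# next routine, and sampling again, as a function of urgency.
# Thresholds (0.8, 0.4) are invented.

def next_behaviour(urgency, routines_left):
    if urgency > 0.8:
        return 'make the action chosen so far'   # primary goal overrides
    if routines_left and urgency > 0.4:
        return 'run the next action-choice routine'
    return 'sample the control state again'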
Umbers' (1979b) study of gas-grid control, mentioned in 2.1, is another example of the effect of multiple goals on behaviour choice. He found 40% of the verbal reports from operators were taken up with decisions about whether action was necessary, compared with 3.5% concerned with selecting a suitable action. Again the controller's task was to find a balance between the various costs of changing the levels of gas generation, considering various goals which are secondary to the main one of meeting gas demand. The verbal report analysis suggested that the different costs were combined together in some absolute judgement (categorising) process which involves a subjective scale which may differ between operators.
2.5. Evaluation
Frequently the operator's choice between alternatives is modelled by an Expected Value calculation, as in decision theory. The example from Umbers raises the question whether the operator's decisions are in this specific, numerical form. The EV calculation does require the assumption that numerical utilities are known to and used by the operator. Operators however do not necessarily have full cost information. Paternotte (1978) found, from interviews with operators and management, that operators controlling 8 distillation columns had very little information about costs. This was deliberate policy. It was argued that the costs were complicated, as the different columns had different relative costs and these varied with market conditions, and that giving these costs to an operator would impose too high a mental load. However, if one assumes that an operator must use some costs in making decisions, then if they are not given information about costs they must assign their own, which may lead to idiosyncratic actions. It is necessary to distinguish between the questions : has the operator got accurate information about costs? do they use cost information in making their decisions? if so, do they do so in a way which can be validly represented by EV calculations? Hopefully, the answers to such questions would allow one to give correct cost information to an operator without adding to their difficulties.
Verbal report evidence, as in the example from Umbers op cit, suggests that the operator usually works by making relative comparisons, e.g. a is better than b, where a and b are categorical values. This implies that decisions should be modelled by a mechanism which can operate on an ordinal scale, rather than the ratio scale required by the calculation of EV functions. This 'linguistic' handling of utilities might be better represented by 'fuzzy' methods, cp. preliminary discussion by Efstathiou and Rajkovic (1979).
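The contrast between the two mechanisms can be made concrete in a sketch; the probabilities, utilities and category order below are all invented:

# Ratio-scale EV calculation versus ordinal comparison of categories.

def expected_value(outcomes):        # outcomes: (probability, utility)
    return sum(p * u for p, u in outcomes)

CATEGORIES = ['much worse', 'worse', 'alright', 'better']  # ordinal only

def preferred(a, b):
    # needs only the order of the categories, not numerical utilities
    return a if CATEGORIES.index(a) >= CATEGORIES.index(b) else b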
The discussion so far has implied, along with many others, that the generation of alternatives and their evaluation are in two separate stages. When generating complex sequences one quickly comes against the problem, which has been found for example in chess-playing programs, that generating all the present alternative behaviours using an algorithm is an unrealistically large task. Heuristics therefore have to be used to suggest the most useful alternatives to consider first. Heuristics of course are a form of evaluation, so it becomes difficult to separate generation from evaluation.
2.6. Conclusion
In general this section suggests that some of the main problems in understanding and modelling a process operator's behaviour lie in representing their orientation to the future, and their generation of the future sequence of sub-goals by which the main goals in a complex multi-variable task will be achieved. In the main studies described, Beishon, Umbers and the writer have found it appropriate to describe parts of the operators' behaviour using information processing flow diagrams. These are described only at a first-order level of accuracy in the papers quoted, but on the whole it would not be difficult to represent them using a conventional numerical computer programming language, though list-processing facilities would be useful to model some of the features. It should therefore not be too large a problem to give a fully specified description of the operators' activities. The main problem for the general researcher might be that, at this level there is less scope for the analysis of general properties of behaviour, as one is more concerned with representing the decision sequences in specific tasks. More general research, which would not be valid however unless it was also based on studies of real tasks, would be concerned with goal structures and their effect on behaviour choice, and with the operator's 'mental picture' and 'mental model', which will be discussed further in 3.5 and 4.1.
One should of course remember that this discussion has by no means captured the complexity of the goals in any real situation. An operator acts on the process to control it, they may also act on it to maintain their skill, prevent boredom, have some fun, or to impress or confuse the onlookers. Perhaps we do not need to have a model which accounts for all of these in order to have a model which is useful for practical purposes. However, it should be evident that goals are important components of complex process control activity and require a major research effort.
3. Input Processes
Most models of human input processes in control tasks have been concerned on one hand with human input limitations, with the accuracy of readings taken by an operator and the lag before they make their response, and on the other hand with the interval between samples, and the problems of sampling several variables. The purpose of taking a sample is usually assumed, i.e. to compare it with a target value. The discussion in the previous section has suggested that the notion of a control target is not necessarily simple, so presumably making a comparison with it is not simple either. For example, Carbonell et al. (1968) needed to allow for complex target trajectories of aircraft behaviour when analysing sampling performance in simulator flying tests in which pilots followed a familiar realistic flight plan. Also, comparisons are often not made by arithmetic but by pattern recognition, particularly but by no means only in check reading, and this can have important implications for optimum instrument design. However, these aspects have not received much accurate analysis, although there are many sensible design recommendations, so the discussion here will concentrate on concepts which do appear in several models, and on comparing these concepts with actual operator behaviour, to suggest aspects which need to be included in a fuller understanding of how the process operator takes in information about the process.
3.1. Human Response Lag
The lag (reaction time) between taking in information and reaching a decision has an important influence on the quality of human control of quickly reacting 'processes' such as aircraft, but becomes of progressively less direct importance when the process response lags are much longer than the human lags. However there are some points which need to be kept in mind.
Kok and van Wijk (1978, p.6) state that :
'the human information processing time [can be modelled] by a single pure time delay t which is equal for all observed outputs independent of its modality (visual, auditory, etc.)'.
I have been reading this sentence to colleagues who are expert on the determinants of human reaction time, and enjoying the response. [Measuring changes in reaction times is a major method used by psychologists for studying mental processes, and in human factors/ergonomics to find good interface designs.]
This assumption may be an acceptable approximation in models of fast tracking tasks, in which one well-designed display is used, and neuromuscular control is the major limiting factor in performance. It can be positively misleading in slow process control for several related reasons. Decision making time is usually much longer than the 150-200 msec assumed in fast tracking models, as more complex distinctions are involved. As we have already seen, coming to a decision may involve lengthy information-considering procedures for which a simple stimulus-response representation could be inappropriate. The variability in reaction time as a function of display and task is not a small fraction of the average RT; reaction times in some situations can be several times longer than in others (e.g. Teichner and Krebs, 1974).
If one does not consider all the factors which can affect reaction time, such as quality of display, compatibility, and console layout, one can easily forget the major influence which interface design can have on process control performance. Sophisticated mathematics may be intrinsically attractive, and ad hoc design recommendations comparatively prosaic, but if one is concerned to improve the general quality of process operators' performance then practical points may be more important than mathematical elegance.
3.2. Accuracy of Display Readings
Many modellers assume that the human operator is an inaccurate reader of displays, for example Kok and van Wijk (1978, p.6) say that :
'the human uncertainties due to scanning or sampling can be modelled by a normal white observation noise on each of the observed variables.'
Indeed the human sensory threshold can be considered as adding 'noise' to the reading obtained (as in any other sensing device), and identifying a stimulus does involve a decision. The sensory threshold is not constant but is a function of the location of the stimulus, e.g. on the retina or skin, and of display design, as mentioned by Baron and Kleinman (1969). It is also (to a first-order level of description) a fixed ratio of the sensory adaptation level, and is affected by the utilities of misses, false alarms, etc. However, it seems that introducing noise into models of the human operator has had the main effect that a Kalman filter is then needed to remove the noise (whose characteristics have been carefully chosen to be those a Kalman filter can deal with), as the model control equations acting on the sensory measures cannot handle stochastic inputs. One can argue both that the human operator may use 'inaccurate', i.e. categorised, display readings for good reason, and also that sometimes when noise has to be added into control engineering models of the human operator to make them match human control behaviour, this may be due to limitations in the model rather than to limitations in the operator. One can also mention that Paternotte's (1978) operators complained that their control task was difficult because the instruments and controls were not accurate. (This is discussed further in 3.4.)
When the human process operator makes categorised display readings these will be inaccurate by engineering standards, but the display may be read in this way because the reading is being used, not for feeding into an action-size calculation, but as a determinant of decision making for which a categorised value is more appropriate. In the writer's steel-making study (Bainbridge, 1971) the control state (power usage level) was evaluated, as described in the verbal reports, into 5 categories of three types : alright, above/below, increase/decrease action needed. These three types of category have different implications for the next best behaviour, as discussed in 2.4. To describe this simply as a noisy display reading would be to misrepresent what the operator is doing with the information.
In a different way, "noise" may be introduced into modelling to account for a mismatch between model and human performance, i.e., the 'remnant'. In some cases the need for this might be taken as a measure of the inadequacy of the model, with a larger remnant indicating that less of the human operator's performance has been accounted for by the model. It may of course be necessary to include noise in a human operator model, particularly when an operator’s knowledge of the process is only adequate to predict its behaviour to within statistical or categorised limits (see 4.1).
However, I consider that noise should be used in models with great care, as it has a more general danger. Many models of a human operator represent a person as a rather simple control device with noise. While the originators of such models may hold that such a model has only the status of a first-order description of the human behaviour, it is very easy, especially for people hearing about rather than doing the research, to fall into the trap of thinking that such a model is a valid representation of actual human control mechanisms. One can then easily infer that the human operator is a rather poor quality simple control device best replaced by something which has been properly designed. As this type of model does not contain any components by which one could begin to represent the things which the human operator is particularly good at, such as pattern recognition, planning and problem solving, it could be easy to forget, when using such models, that human operators do have this sort of potential.
3.3. Intersample Interval and Process Knowledge
Most operator models are concerned with the task of sampling at a rate such that the behaviour of the variable being sampled can be reconstructed, though it is now accepted that there are several other reasons why the operator might sample, such as failure detection, which would lead to different optimum strategies (see e.g. Kleinman and Curry, 1977). We need to discuss what does determine the human operator's inter-sample intervals. We can also ask whether this type of notion is sufficient to account for all human sampling behaviour, particularly in multi-variable process control tasks, and how much the operator can remember about a complex process state, which will be discussed in 3.5.
The simplest sampling decision is whether to sample or not. Sheridan (1976) describes a model of supervisory control in which a sample measure gives an estimate of the system state, this indicates an appropriate action which has an expected value. The model selects the measure (including not sampling) which maximises this expected value. The Expected Value calculation requires knowledge of the distribution of process states likely to occur, and of the precision of the various measures available. While it is unlikely that assessments of the optimum time to sample could be made without these types of knowledge, it would be optimistic to assume that they are perfect. (Knowledge about measurement precision will be discussed in the next section.)
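Sheridan's selection rule can be gestured at with a sketch like the following, in which the candidate measures and all their numbers are invented; the knowledge of state distributions and measure precision is collapsed, for brevity, into the assumed expected-value figures:

# Sketch: choose the measure (including 'no sample') whose indicated
# action has the highest expected value net of the cost of sampling.

options = [
    {'name': 'no sample',       'expected_value': 5.0, 'cost': 0.0},
    {'name': 'coarse reading',  'expected_value': 6.0, 'cost': 0.5},
    {'name': 'precise reading', 'expected_value': 6.5, 'cost': 1.5},
]

best = max(options, key=lambda o: o['expected_value'] - o['cost'])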
We can note that the input track to be followed in tracking tasks (such as flying or car-driving, though the analogy is closer to tracking without preview), and the process output to be controlled, are equivalent in the sense that both are behaviour of the external world over time about which information is needed as a basis for action choice. Presumably one can therefore use related models for the two types of knowledge : knowledge which gives the ability to predict a future track, and knowledge which gives the ability to predict rather than sample future process behaviour. However, on the whole, different types of model have been used to represent knowledge in these two tasks. When following an input track (without preview) it is usually assumed that a human operator's sampling follows sampling theory or its later developments, i.e. that they know the statistical properties of the track. (Preview allows the human operator to use more complex types of input processing such as pattern recognition, e.g. Sheridan, 1966.) In sampling process outputs, the signals are more redundant when an operator also has knowledge about the process dynamics and their own behaviour. Baron and Kleinman (1969) allow for sampling as a function of system dynamics and controller gains. Carbonell (1966) explicitly allows for the effects of control actions, while he and Kleinman and Curry (1977) mention the effect of correlations between instruments on sampling. In discussing monitoring behaviour, Sheridan (1976) suggests that knowledge of process states which are likely to occur is obtained by extrapolation from trends in the present observation. Several types of control model assume that the human operator has knowledge of the process dynamics in the form of a deterministic internal model, which is used to predict the effect of an action on process behaviour as the basis for choosing the best control action. Presumably there is no reason why this model could not also be used to predict process behaviour as a result of the present control settings, as a basis for determining the best next sampling time.
One might then suggest that either type of model, statistical or deterministic, or any possible intermediate combination, could be the basis for determining sampling intervals for both tracks and process outputs. For example, human sampling of input tracks has usually been tested with random inputs, so it is hardly surprising that a random model is sufficient to fit the behaviour. Other studies however show that an operator can also learn deterministic properties of the track, e.g. they can follow pure sine waves (see e.g. Pew, 1974), which will influence both their need to sample and their strategy for following the track. Presumably an operator can learn about the statistical properties of the external world's behaviour, and develop their own model of them, by some ability to extract the correlations and conditional probabilities in that behaviour. They might learn about its determinacies by noting that conditional probabilities are close to 1, or by some different type of mechanism such as pattern recognition.
On the other hand, the operator's internal model of the process is usually not perfect but may be partly statistical (see further discussions of the internal model in 3.4 and 4.1). For example, the process behaviour may fluctuate as a function of factors not under the operator's control, such as changes in the environment or in the quality of input materials. If an operator has not been able to learn these dependencies (due for example to lack of information about input quality) they would react as if these are random fluctuations in process output, about which some statistical properties are known.
We therefore need much more research on how human operators learn and use their knowledge of non-random inputs before we can predict how their sampling of such inputs will differ from that predicted by statistical models.
Knowledge of process dynamics can be important even when the human operator is monitoring the operation of automatic controllers for failure. Kleinman and Curry (1977) discuss monitoring of such a system driven by white noise. When the automatics are working correctly, process behaviour should be random within tolerance around the target. However, this is only simple to monitor in steady-state control, when the target is stationary. If the automatics/computer are controlling trends or step changes, then the operator needs to know the trajectory of target performance against which to compare actual behaviour. We can ask how a human operator knows this trajectory, and whether they need to control the process themselves in order to know it. Brigham and Laios (1975) found that operators did not learn how to control a process by watching automatics do the task.
Failure detection requires that the operator's sampling should be a function of failure probabilities rather than the probabilities of normal events. The operator should therefore sample displays with low signal probabilities. It is well known (e.g., Mackworth, 1970) that human beings cannot attend effectively in directions in which nothing is happening, for periods longer than half an hour. This implies that monitoring automated control is like other watch-keeping situations, and requires rapidly rotating shifts for optimal behaviour.
The discussion so far has implied acceptance of the notion that decisions about when to sample are distinct from other types of task decision, though some models do allow in a simple way for the effects of the operator's actions. The use of the same process knowledge in action choice and behaviour prediction would suggest that these two decisions could be more closely interrelated. Actual studies of process behaviour suggest that sampling decisions may also be more complex, and may be affected by different types of mechanisms (in some cases at least). Section 2.4 described the writer's study (Bainbridge, 1974) in which deciding to sample was part of a more general 'executive' decision about the operator's best next behaviour. The operator alternated between a routine concerned with sampling the control state and ones for increasing refinement of action choice. The sampling model that this is most like is Carbonell's (1966) queuing model, but this is more complex as all the operator's activities, not just their sampling, are queuing for consideration. The effect of such a mechanism is that the time between samples is determined by the length of the other routines. However the sampling interval is also a function of control state urgency, as this affects how many other routines are allowed to intervene before the operator returns to checking the control state. Therefore, in such a mechanism, the operator is not necessarily deciding on an explicit sampling interval, nor to sample at a particular time. This will be the case in any except the very simplest control tasks, whenever the operator has to choose between many possible activities and sampling is just one component.
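On this account the inter-sample interval is a by-product rather than a decision; as a sketch, with the routine durations and the urgency rule assumed:

# Sketch: time to the next sample emerges from the durations of the
# routines allowed to intervene, not from an explicit sampling choice.
# Thresholds and the counts of intervening routines are invented.

def time_to_next_sample(urgency, routine_durations):
    allowed = 0 if urgency > 0.8 else (1 if urgency > 0.4 else 3)
    return sum(routine_durations[:allowed])   # e.g. minutes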
3.4. Reliability of Input Measures
We can ask whether an operator's knowledge of the reliability of the measures available to them is independent of the knowledge they use in predicting future process behaviour. If, as suggested above, an operator acquires their knowledge of the process by learning about correlations and conditional probabilities in its behaviour, then there is no way in which they could distinguish noise in the instrument measures from noise in the process behaviour, unless independent evidence is available about one or the other. Noticing unreliability in the instruments is not different from noticing unusual behaviour in the process as a whole. Both can only be done if independent evidence, or a model in some general sense of knowledge of what should happen, is available for comparison with what is actually happening. An effective model of the process is necessary before the operator can diagnose that one part of the process is not behaving as it should do given the readings of other related factors. Sheridan (1976) contrasts his model of human supervisory control, in which sensory measures and control parameters are trimmed and the process model left constant, with a 'conventional Kalman-Bucy estimator control system' in which the process model is trimmed and the sensory measures are left constant. It may be more realistic to assume that knowledge of these two aspects develop together.
With both the instruments and the process, given sufficient experience, the operator should be able to learn about this unreliability. This knowledge too would presumably be in the form of statistical properties or determinacies (e.g. levels of uncertainty about process behaviour, and knowledge of the types of things which can go wrong) which could be incorporated with other aspects of the operator's internal model of the process. Normally such knowledge is effective and useful. There are at least two ways however in which it can be misled. Once an instrument has been identified as unreliable, the operator may diagnose peculiar changes on it as due to instrument failure rather than as something wrong with the process, and so fail to diagnose a process failure. This of course does not indicate that the operator is no good at assessing likelihoods, but underlines the importance of instrument maintenance. On the other hand, judgements that the process is behaving correctly may change relatively quickly given contrary evidence, which can be a problem if this contrary evidence is unreliable. Bartlett (1943) found that fatigued pilots blame system malfunction when things go wrong. Bartlett inferred that the tired pilot was not implementing the size and timing of actions that he should have done, though he thought that he was. Consequently the aircraft behaved in an unexpected way, and the pilot attributed this to failure in the aircraft rather than to himself. This again would support the notion that it is difficult for the operator to distinguish different parts of the process behaviour without independent evidence.
3.5. Multi-variable Sampling and Working Storage
The usual approach to modelling human sampling of multi-variable processes is to describe this behaviour using developments of sampling theory. Senders (1964), studying sampling of randomly varying displays, found that attention was allocated according to the probability of an event on a display, although Hamilton (1969) has found that this occurs only when signal rates are relatively high. One can suggest that statistical or deterministic knowledge of dependencies between process variables would increase the redundancy of signals and so reduce the sampling rate necessary. This is still to argue within the same framework of notions about sampling, however, while the writer's (Bainbridge, 1974) study described in 2.4 suggests that something rather different may be going on. The more complex determinants of sampling primary control variables were mentioned in 3.3. The same study suggests that different mechanisms may determine the sampling of other variables. In the power control task the operator sampled many variables which were not relevant to the primary goal of controlling power usage but were relevant to the choice of an action which would best meet secondary control goals (see 2.4). These secondary variables were, in this task at least, not sampled at a rate such that the pattern of changes in their levels could be reconstructed. In fact such a notion would be inappropriate here as changes in these variables were step-changes occurring at fairly predictable times. Also there was a much larger number of variables than could be remembered perfectly between samples (though the form in which some variables are remembered is an important part of efficient decision making, see below).
It seems that these secondary variables are sampled in two contexts. Their levels are checked during the action-choice routines, so sampling of these inputs is not independent of control choice. They are also sampled during the general process reviewing which an operator does when they are not under action pressure (see also 2.1). This is one example of the general checking which operators have been seen to do during quiet periods in many studies. The writer's analyses suggest that the operators, during such periods of wandering around, are not simply or only checking control loops which they do not normally have time to get round to. The items about the process which are remembered after making this review are not, on the whole, the variable values as originally sampled (see Bainbridge 1974, 1975). The sampled values may be stored in a pattern, different from the one in which they are displayed, which is more convenient for future reference, e.g. furnace stages are displayed by furnace, but are remembered as lists of furnaces in the same stage. The more important items remembered are the results of previous thinking, for example the time of the next power demand change, and the best action to make then. Here the operator is not storing raw data about the process state, and the present 'process state' as seen by the operator is not only its behaviour at this particular moment but also includes its, and their own, potential behaviour. This suggests that an operator is building up a background 'mental picture' of the process state which will enable them, when they do need to make an action, to make a wise decision within the overall context of immediately accessible and relevantly organised information about plant behaviour, rather than simply in response to a particular variable which has been found to be out of tolerance. The on-line development of this background memory, and its operation in decision making, have strong implications for manual takeover from automated systems. Modelling the development and operation of this memory requires devices which are not available in most programming languages but which could be mimicked by them.
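A sketch of this re-coding, with the furnace names, stages and derived items all invented, might be:

# Sketch: sampled values re-stored in a pattern convenient for later
# decisions, alongside the results of previous thinking.

displayed = {'F1': 'melt', 'F2': 'refine', 'F3': 'melt'}  # by furnace

mental_picture = {'by_stage': {}, 'derived': {}}
for furnace, stage in displayed.items():
    mental_picture['by_stage'].setdefault(stage, []).append(furnace)
# results of prior thinking, not raw data (time and action assumed):
mental_picture['derived']['next_power_change'] = ('10:30', 'reduce F3')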
3.6. Summary
Section 2 concentrated on important aspects of complex manual process control behaviour which are missing in simple models. In this section we have again argued that simple representations of a human operator's information processing limits give a misleading impression of their potential contribution to control tasks. The need for an operator to sample may depend on their statistical or deterministic knowledge of the process, and knowledge about process behaviour may be difficult to distinguish from behaviour of the instruments or the person's own muscle dynamics. In more complex tasks the operator may not sample at predetermined times or intervals, but at the end of a sequence of other activities, when sampling is the next behaviour with the highest expected value. They may also review the task variables in a way which is structured by the task decisions rather than by the process dynamics, to build up a working store of the whole state of the process, for use in later decision making. Such analyses again support the need for information processing representations of at least some aspects of manual control behaviour, and suggest that much further research is needed to understand the nature of the operator's temporary 'mental picture' of a particular process state and long-term 'mental model' of the overall process dynamics, and their relation to the person’s sampling behaviour. As these are both aspects of the operator's knowledge, their sampling may vary with experience.
This is another topic about which we know very little, though Umbers (1975, p.224) found that trainees sampled and then decided what to do, while experienced operators sampled with a purpose.
4. Control Action Choice
Many engineering models of control action choice are concerned with the size and timing of actions. However, many of the examples given here have already implied that action choice is often more complex than simply matching a control change to an error. For example, in Umbers' (1979b) study mentioned in 2.4, the multiple goals to be met are being considered within a decision about whether an action is necessary. As choice between different types of action, as a function of several goals, has already been mentioned in 2.4 and 3.5, this section will be primarily concerned with the operator's knowledge of process dynamics. Many engineering models of control of single variables assume that the controller has perfect knowledge of the process behaviour, and is an ideal feedback controller. We have already seen that control by an experienced operator is oriented to the future. It might also be more realistic to say that they exercise great ingenuity in controlling given the information at their disposal. In models which assume that the operator has perfect knowledge, this knowledge is used to predict the effects of the actions available, as a basis for choosing the best. We can ask whether these two assumptions, which will be discussed separately in the next sections, are wrong in detail, or wrong in kind, as a way of accounting for all types of manual process control behaviour. Within the discussion of the operator's knowledge we will consider what form it may take, and what may affect its acquisition.
4.1. The form of the operator's process knowledge
Kok and van Wijk (1978, p.6) start their operator modelling from several assumptions, including :
'The human operator has a perfect understanding of the system to be controlled, the statistics of the system disturbances, the interrelation between the observed output and the system state (the display parameters), the task to be performed, and his own inherent limitations.'
This assumption greatly simplifies the task of modelling, as one can include the known process dynamics in the model without asking any questions about the form in which they should be represented (and avoiding the interesting point that operators are able to control processes for which the dynamics cannot be fully specified). Of course, not all engineering models of process controllers do assume that they have perfect knowledge, some interesting examples are fuzzy-set controllers, e.g. Mamdani and Assilian (1975). However, as several models do make this assumption we need to discuss the extent to which this is a valid notion, and the distortions of understanding which it might lead to. In many cases it is essential to be able to represent the operator's non-perfect knowledge as a basis for valid predictions of their behaviour. Detailed evidence on control choices suggests that it is possible to control without full knowledge of process dynamics, that correct open-loop actions can be made by experienced operators with only a very primitive type of process knowledge, and that process knowledge may sometimes differ in form from a simple description of input/output relations.
Control behaviour in several studies suggests that a very primitive level of knowledge is sufficient for control. Beishon (1967) and Paternotte (1978) both found that operators controlled by moving the process output in the required direction in small steps. Both investigators suggest that this occurs because the operator has poor knowledge of the process dynamics. This method of control, which is possible but not efficient, requires knowledge only of the direction and (approximate) gain of control movement, and the time to wait before checking for an action's effect. Paternotte's operators were controlling 8 distillation columns. In interviews the operators said that it was impossible, with the existing controllers, to make actions as accurately as desired, that accurate control was useless because of inaccurate instrument readings, that the precise effects of control actions on quality values are unknown, and that lack of information concerning quality (which was sampled at 40 minute and 2 hour intervals) forced careful strategies. Evidently these operators are trying to control within a high level of uncertainty about the process behaviour measures given by the instrumentation. One may infer that the reason for inefficient control is not that the human operator is basically incompetent, as they can control much more efficiently in other tasks, but that the nature of the console design, process or task (see next section) makes it difficult to acquire the higher levels of knowledge about process behaviour which are necessary for more sophisticated control strategies.
When the operator does predict process behaviour this may also, in some tasks, be a simple statement about direction of change rather than a numerical specification of what will happen. In Cooke's (1965) study only 10-15% of statements about the present state were in relative rather than numerical form, but predictions were not numerical. In protocols collected from operators starting up turbines for electric power generation (Rasmussen and Goodstein, personal communication) the predictions were mainly simple, e.g., 'it'll soon be up'. However, Umbers (1976) found that predictions were numerical. It is rash to generalise to the reasons for this difference from so few examples, but one might point out that the operators making trend predictions were working from analogue displays and predicting the process behaviour, while Umbers' operators were working with digital information and predicting control targets.
Studies of experienced operators who do exert efficient control (e.g. Crossman and Cooke, 1962; Cooke, 1965) suggest that they may choose their actions without considering alternatives, by a process which is not available to verbal description. Acquisition of knowledge about actions appropriate to a given control context does not require predictions from a perfect internal model, but can come from experience of correlations between action and effect. It might be misleading to assume that this knowledge is in the form of a very simple input-output look-up table. However, Crossman and Cooke op cit found, by measuring correlations between control actions and various dimensions of control state, that it was only valid to describe an inexperienced controller as working by feedback; the correlations [indicating use of feedback] decreased with practice. From other data they concluded that the experienced operator used mainly open-loop control. This correlational learning, which enables the experienced operator to control without trial-and-error, gives a primitive form of process knowledge, without a separate specification of the nature of and reasons for process dynamics which can be used and discussed independently of doing the task.
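The primitive correlational store implied here might be sketched as follows, with the (situation, action) pairs invented; note that nothing in it can be interrogated for the reasons why an action works:

# Sketch: open-loop action choice from remembered (situation, action)
# pairs, with no separate model of the process dynamics.

experience = [(60.0, 2.0), (70.0, 1.4), (80.0, 0.8)]  # assumed pairs

def open_loop_action(state):
    # pick the action remembered for the most similar past situation
    situation, action = min(experience, key=lambda p: abs(p[0] - state))
    return action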
These types of example suggest that the operator acquires these simple forms of knowledge by parameter trimming, though this may not be represented in the operator in specific numerical terms. However, analyses such as Cooke's (1965) suggest that such parameter trimming can be only a component of the learning, rather than the whole or even a major determinant of its development. Cooke's verbal protocols, and other data collected from university students controlling a simple one-dimensional process (the temperature of a water bath) suggest that control strategies are based on hypotheses about how the process works. Some relatively simple propositions about the process behaviour are combined together to make predictions. An alternative way of expressing this would be to suggest that the operator must start with some hypotheses about the 'equations' which it is appropriate to trim, for example realising that it is necessary to take lag into account, e.g. Cooke op cit, p.365 :
'I think I'll take it off a bit when it gets up to about 75 because I don't want it to overshoot the mark [85] and I imagine it will still have some effect on the water inside the tube some time after the heating has stopped.'
[Cooke’s test subjects were Oxford university students, so they might be more analytic than most !]
Some of the other propositions which Cooke's students mentioned were that heating was faster than cooling, that sampling continuously was not necessary, and that various control strategies were available. The students mentioned these points as they realised them from their experience of trying to control the process. (It was possible for a student to mention one of these points but for it not to lead to a revision of his control strategy, and vice versa.) Such propositions are not automatically assumed by a beginning operator, but are acquired by training and experience. Development of a sufficient set of these propositions is necessary for adequate control. Some people may not be able to acquire such propositions from unaided experience, or may not be able to implement the more complex control which they imply. For example, Crossman and Cooke (1962) found that low-intelligence people attempting the water-bath task understood the notion of proportional control but did not take lag into account.
Such analyses suggest that human learning of process behaviour does not start with a complete equation of the appropriate order, in which the parameters are then trimmed by learning, but that learning also involves acquiring the appropriate components of the 'equation' before they can be trimmed. At a more complex level of modelling one would also have to account for the way in which increasingly sophisticated knowledge of process dynamics leads to the generation of new (to the operator) and more effective control strategies (see 2.3 and 4.4). One assumes that such 'propositional' knowledge of the process is at a 'higher level' than the simple correlational learning described above, precisely because it may have the potential to be used for working out what to do in unfamiliar situations, which would not be possible given knowledge only of an 'if this, do that' form.
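The distinction might be sketched as follows, using a purely hypothetical water-bath example: parameter trimming can only adjust numbers within a fixed 'equation', whereas realising that lag matters adds a new component which only then becomes available for trimming :

```python
def predict_no_lag(gain, heat, temp):
    """Fixed structure: learning by parameter trimming alone can
    only adjust 'gain' within this equation."""
    return temp + gain * heat

def predict_with_lag(gain, lag_weight, heat, recent_heat, temp):
    """Structurally richer hypothesis: past heating still has an
    effect. The lag term must first be acquired as a component
    before its parameter 'lag_weight' can be trimmed."""
    return temp + gain * heat + lag_weight * sum(recent_heat)
```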
Most of the above points about the nature of the operator's internal model have been inferred from their task performance, and sometimes from their verbal reports. The problems of studying the form of this model directly are very large. Presumably it can only be studied by the classic technique of making models for the operator's internal processes and testing their performance against the operator’s. An interesting example of this is given by Jagacinski and Miller (1978), who consider that parameter estimation, to fit a model to behaviour in the usual tracking task, is too multidimensional to be successful, so they use a simpler step-change task. They fit an equation to the operator's performance, but admit that they have no information about how the operator has actually represented the system dynamics internally. This is probably an ultimate limit to any modelling of mental processes. However one might still be able to do useful work by testing, via models, a richer range of ideas about the nature of the operator's internal processes.
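As a toy illustration of this style of model fitting (the data, model form and parameter values below are all invented, and this is not Jagacinski and Miller's procedure), one might fit a first-order lag to an operator's response to a step change, while remaining agnostic about internal representation :

```python
import numpy as np

# Hypothetical operator output following a unit step input at t = 0.
t = np.arange(0, 10.0, 0.5)
observed = 1 - np.exp(-t / 2.3) + np.random.normal(0, 0.02, t.size)

# Fit a first-order-lag model by grid search over the time constant.
# The fitted equation describes the performance, but says nothing
# about how the operator represents the dynamics internally.
taus = np.linspace(0.5, 5.0, 200)
errors = [np.sum((observed - (1 - np.exp(-t / tau))) ** 2) for tau in taus]
best_tau = taus[int(np.argmin(errors))]
print(f"best-fitting time constant: {best_tau:.2f}")
```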
4.2. Acquisition of the internal model
The last section suggested that the operator's process knowledge can be at different levels of sophistication. If their potential ability to control is a function of knowledge, then the quality of control is affected by anything which affects this knowledge. Studies suggest that the main influences, which all interact in their effect, are interface design, experience or training (see Crossman and Cooke, 1962, in 4.1), and the more general working conditions. Some aspects of these will be mentioned briefly.
The interface, and an operator's interactions with it, can affect the extent to which they can notice correlations between variables and learn the properties of the process behaviour over time. One example comes from a study by Wickens and Kessel (1979). They assume that the effectiveness of a previously learned internal model can be measured by the speed and accuracy of detecting that the system dynamics have changed. They found that this detection is done better by people who have controlled manually than by those who have learned about the process by monitoring automatic controllers. The inference is that direct interaction with the process in manual control allows better learning of the process dynamics. (This has strong implications for manual takeover from automated systems.)
If one accepts that primitive knowledge of process dynamics is mainly in the form of knowledge about directions and sizes of changes, then this emphasises the importance of compatibility between directions of change in displays, controls and process.
Task load, the amount of work to be done, can affect moment-to-moment use of strategies of greater or less complexity (see 2.4). For example, Paternotte and Verhagen (1979) studied control of a simulated distillation column which was run at several speeds. The operators commented that they changed from feedback to feedforward control at higher task speeds. Task load may also affect opportunities for longer-term learning about process behaviour. For example, Paternotte (1978), in his study of operators controlling 8 distillation columns (mentioned in 4.1), argued that the operators used a 'small changes with feedback' strategy because they had to divide their attention, so it was easiest to use a simple strategy with standard actions. One could also argue that 8 columns is more than the number of processes which an operator could keep track of separately. They would therefore not be able to learn the individual characteristics of the 8 columns, which were not the same, so they would not have the knowledge from which they could generate control choices specific to particular columns.
Another more general aspect of working conditions would be the division of decision-making responsibility between operators, foremen and management, or between operators in a team, which would affect a particular operator's opportunities to experience parts of the process dynamics.
4.3. Comparing alternative actions
Control models often assume that the operator's process knowledge is used to predict the effects of alternative actions, as a basis for choosing between them. This is an attractive idea, but the actual data on manual control suggest, as usual, that the human operator works in ways which are sometimes simpler and sometimes more complex than this. We can discuss the two aspects separately : does an operator compare alternative actions? and do they predict the effects of actions in choosing their behaviour? Observational data on manual control can only show what the operator finally decided to do, not what other possibilities they considered while making the decision. Some information on this is available from verbal reports, although it should be remembered that there are occasions when several possibilities slip through one's mind much too quickly to be reported on. With this proviso on interpreting verbal protocols, when one looks for evidence of comparing actions one finds that this happens in two contexts : when comparing the effectiveness of past actions, and when predicting possible sequences of action in complex tasks with many degrees of freedom.
Past exemplars are used as patterns for effective behaviour now (see the discussion of Umbers' 1979b gas-grid controllers in 2.1), or a previous lack of success is used to suggest the way to revise behaviour to try this time, e.g. Cooke, 1965 :
'It seems that this time I got up to the required range rather quickly but at the expense of the fine control keeping it there. It first of all went up too far, then it went below as in the first trial but not nearly so bad. The second trial I seemed to take more time getting up there but once I got there I stayed there better than this time at any rate'.
A complete model for this type of behaviour would need to contain a memory which could retain 'episodes' rather than simply numbers, plus comparison processes which could also suggest new strategies to try. Again (cf. 4.1) this example suggests that some operators revise their control strategy by more complex cognitive processes than would be represented by 'parameter trimming'.
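A minimal sketch of what such an episodic memory might look like, with invented fields and a deliberately crude comparison process, is :

```python
from dataclasses import dataclass

@dataclass
class Episode:
    """One past control attempt, retained as a summary of what
    happened rather than as raw numbers."""
    trial: int
    time_to_target: float   # how quickly the target band was reached
    overshoot: float        # worst excursion past the target
    notes: str = ""

episodes = [
    Episode(1, time_to_target=9.0, overshoot=0.5, notes="slow but steady"),
    Episode(2, time_to_target=4.0, overshoot=3.0, notes="fast, poor fine control"),
]

def suggest_revision(episodes):
    """Crude comparison process: contrast the latest attempt with
    the best earlier one and propose a direction of revision."""
    last = episodes[-1]
    best = min(episodes[:-1], key=lambda e: e.overshoot)
    if last.overshoot > best.overshoot:
        return f"revert towards trial {best.trial}: ease off earlier to cut overshoot"
    return "keep the current strategy"

print(suggest_revision(episodes))   # -> 'revert towards trial 1: ...'
```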
There are two examples of situations in which experienced operators do make predictions about alternative behaviours. Umbers' (1976) operators predicted the ways in which gas demand might develop, and therefore the need for actions later in the shift e.g. :
(p.326) 'We'll be looking at it hourly and then we'll decide later whether it's necessary to increase during this shift or whether to make provision for it between 6 and 8 o'clock this morning'.
The clearest example of comparing several predicted actions does not come from a control task. Smith and Crabtree (1975) collected verbal protocols from well-practised people doing a laboratory task in which items had to be scheduled through a sequence of machines with different capacities. The task was to optimise the routes used. This is a task with a large number of degrees of freedom of action, and the people predicted and compared up to 3-4 alternatives.
4.4. Predicting action effects in new situations
These examples lead one to ask whether the notion that control choice is made by predicting and comparing the effects of alternatives is inappropriate. Certainly the notion that control is oriented to the future has been mentioned frequently already. In particular, predictions about the future have been discussed in relation to future control targets (2.1), possible sequences of behaviour (2.2), and process sampling (3.3). Umbers (1979a) lists studies which find evidence for open-loop or feed-forward control. The data on predicting and comparing the effects of actions suggest, however, that this is not done by experienced operators in standard situations.
Predictions may appear as a reason for behaviour, e.g. (from Cooke, 1965) :
'I'm turning it down to 90, which will make it go down because it's below boiling point'.
Presumably knowledge about the effects of different actions is also used in the original development of a good control strategy. For example, in the furnace power control task, the knowledge that cutting power to a furnace which is at a particular stage of making a quality steel will disrupt the quality of its output leads to the strategy of cutting power to other furnaces instead. This information is no longer mentioned when the operator uses the strategy, e.g. :
'I shall have to cut [furnace] E a bit, it was the last to come on, what is it making by the way? ... E make stainless, oh that's a bit dicey ... I shall not have to interfere with E then',
but the information may be available when the operator is asked to explain his behaviour when he is not actually doing the job, e.g.:
'if a furnace is making stainless, it's in the reducing period, obviously it's silly, when the metal temperature and the furnace itself is at peak temperature, it's silly to cut that furnace off'.
(These extracts come from the same operator in the same session).
This suggests that predicting and comparing actions may be done primarily during the development of new behaviour in unfamiliar situations, which would occur particularly during learning, or when something has gone wrong with the process.
Here is an extract from the protocol of some operators having difficulty with starting up a turbine (Rasmussen and Goodstein, personal communication) :
Kn : I don't think anything will happen if we run it all the way up
OLH : Yeah, we're repeating ourselves, right....as soon as we come up past a given point there....then we can't
Kn : That's the question
OLH : Yeah, but that was the one which alarmed....wait and see....when it comes over 15, what will it do ?
Kn : We won't get any 'reverse program' will we ?
OLH : No, no
Kn : So we can continue
Alarm : There it goes again
These predictions seem to be made using conditional statements which include fairly simple propositions about process behaviour. The types of conditional statement which occur can be analysed, as a basis for beginning to understand this type of behaviour. They are basically of two types : statements about conditions which affect variable values (including the effects of actions), and statements about conditions on the use of actions. This use of process knowledge can be compared with Rasmussen and Jensen's (1974) study of electronic maintenance technicians, in which they found that the technicians used a simple general strategy appropriate to many instruments rather than functional knowledge specific to a particular one. Laboratory studies which test diagnosis given minimal information about random links may be appropriate for investigating that type of diagnosis.
The above anecdotal examples from process operators do however suggest that they may use functional knowledge. This would be more appropriate for process operators, as they are concerned with one process with meaningful relations between its parts, of which they have a great deal of experience, and they need to find out how to control the process despite there being something wrong with it, as well as finding out what is wrong. [And note the comparison between repair technicians and process controllers suggests there is not only one method of fault diagnosis; the best method depends on the task context.]
This type of protocol evidence therefore suggests that process operators do their trouble-shooting by thinking through conditional statements which include simple dynamic knowledge about the process, mainly in the form of directions of change. This is related to the points made in 4.1 about Cooke's (1965) finding that adequate strategies are based on a sufficient set of simple, mainly cause-and-effect propositions about process behaviour. It is also related to the points made on problem solving in the section on control goals (2.3). This would imply that control strategies are the result of goal (i.e. required behaviour) oriented search through conditional propositions about potential process behaviour. This would suggest that the adequacy of problem solving/trouble-shooting depends on the adequacy of these propositions, and of the procedures used in searching through them. This is something that we need to understand much more fully if we want to aid operators in their trouble-shooting. Rigney and Towne (1969) present a simple model of this type for maintenance technicians' activity sequences.
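As a hedged sketch of such goal-oriented search (the rules below are invented stand-ins loosely based on the furnace examples above, not an analysis of real protocols), conditional propositions of the two types noted earlier can be searched for an action whose predicted effect matches the required behaviour :

```python
# Conditional propositions of two types: (a) conditions which affect
# variable values, here the predicted effect of each action, and
# (b) conditions on the use of actions. All rules are illustrative.
effect_rules = {
    # action -> (predicted effect, condition on use, or None)
    "increase_heating": ("temp_rises", None),
    "cut_power_to_E":   ("load_falls", "E_not_making_stainless"),
    "cut_power_to_D":   ("load_falls", None),
}

def find_actions(goal, facts):
    """Goal-oriented search: collect actions whose predicted effect
    matches the required behaviour and whose conditions of use are
    satisfied by the current beliefs about the process."""
    return [action
            for action, (effect, condition) in effect_rules.items()
            if effect == goal and (condition is None or condition in facts)]

# Furnace E is making stainless, so cutting it is ruled out.
print(find_actions("load_falls", facts={"D_idle"}))  # -> ['cut_power_to_D']
```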
4.5. Summary
The operator's knowledge of process dynamics may not be in the form of control equations, but may be the result of simple correlational learning, which could lead to, or be related to, conditional propositions about general aspects of process behaviour. The effectiveness of the operator's 'internal model' will depend on their opportunities for interaction with the plant. This emphasises the importance of interface design and training, and has implications for manual take-over. In choosing their control actions an operator may recall previous control attempts. They may predict the effects of an action as a justification for that action, or in trying to work out what to do in unfamiliar situations. Their ability to do this will depend on the form and content of their process knowledge, and modelling this type of behaviour may require sophisticated models of cognitive activity.
5. General Conclusions
This paper has attempted to review the usefulness of control theoretic models, developed for fast tracking tasks, in describing manual control of slowly changing industrial processes. In many cases it seems that slowly changing tasks allow different types of cognitive processes to be used, or the task complexity requires different responses. The paper has not described the full complexity of process control behaviour as would be evident, for example, from reading Edwards and Lees (1974). However, there are still several major themes which have required much cross-referencing in a discussion divided into sections on goals, inputs and output decisions, particularly as these three aspects of behaviour are not necessarily clearly distinguished. An operator's mental or internal model depends on their interactions with the task and is basic to their potential performance. Task decisions are also a function of their 'mental picture' or knowledge of the present process state. Complex cognitive activity may be involved in deciding which of the available behaviours is most appropriate in a given multidimensional situation, or in generating new behaviours in unfamiliar situations.
The question remains, however, whether there are alternative models which can be developed to a level of rigour which would be equally attractive to engineers. Section 2.6 suggested that much of the cognitive activity could be modelled by existing computer programming languages. Such programs could be used to predict qualitative aspects of behaviour, i.e. the results of decisions. They would not automatically produce quantitative predictions about time and accuracy. This would require parallel calculations, for which we do not yet really have adequate data. For some of the more sophisticated notions which have been mentioned briefly, neither the concepts nor the performance data are yet available. Perhaps some suitable concepts are emerging from Artificial Intelligence, though often these represent logically possible ways of doing complex tasks, rather than ones which have been tested for parallels with human performance. Essentially we need a great deal more sophisticated analysis of performance in complex tasks, from which the human operator's underlying mechanisms can be inferred.
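As a minimal, purely illustrative sketch of the kind of program meant (the rules and state here are hypothetical, not drawn from Section 2.6): a production-system cycle whose output is a qualitative prediction, namely which decision is taken, with no timing or accuracy figures :

```python
# Each rule pairs a condition on the perceived process state with a
# decision; firing the first matching rule yields a qualitative
# prediction of behaviour, but no prediction of time or accuracy.
rules = [
    (lambda s: s["temp"] < s["target"] - 5, "increase heating"),
    (lambda s: s["temp"] > s["target"] + 5, "reduce heating"),
    (lambda s: True,                        "wait and observe"),
]

def decide(state):
    """Fire the first rule whose condition matches the state."""
    for condition, decision in rules:
        if condition(state):
            return decision

print(decide({"temp": 70, "target": 85}))   # -> 'increase heating'
```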
- - -
©2021 Lisanne Bainbridge