The Change in Concepts needed to Account for Human Behaviour in Complex Dynamic Tasks

Operators in complex dynamic tasks need to be adaptable to changing circumstances, and their understanding, planning, and organisation of behaviour are crucial to their effectiveness. 

The first main part of this paper summarises the key features of complex behaviour which need to be included in a model.

The second part is about a cyclic model which builds up a task overview (situation awareness), which is used as the context for later behaviour.

The third part illustrates how messy it gets to try to include an overview and top-down processing in a linear model.


Studies of people doing complex dynamic tasks show that often :

- they do not react passively to changes in the environment ('bottom up' processing), but actively look for the information needed in their thinking ('top down' processing).

- some information needed about the current situation may not be available so must be inferred.

- they do not simply remember exact specifics about the current environment, but the result of thinking about it (e.g. in air traffic control : not the specific locations of 2 aircraft, but how far apart they are and their relative speed).

- they do not react only to current events (feedback), but predict future events, and what to do about them (feed forward).

- they build up an overview of what is happening and what will happen in the future, which provides the context for task-related decisions.

- there may be several simultaneous tasks, so the person must allocate their attention between them.

- they may evaluate alternative actions : situations are complex and often there is not only one way of dealing with them, so a simple input-output 'if this, do that' model is inadequate.

- many dynamic processes are too complex for it to be possible for the designer to predict what will happen in all circumstances, so the operator must be flexible and able to problem solve when the unexpected happens. 

- many of these thinking tasks are best done only after development of considerable expertise.


These findings suggest that simple linear sequential models of the human operator do not include important reminders about what these people are doing and how they do it. Of course there are many types of human task and cognitive behaviour which can be modelled as a linear sequence, but this is not sufficient to describe more complex tasks. This paper outlines the need for a contextual model of cognitive processing, and ends with brief comments on the implications for design of the interface and training.


Jens Rasmussen’s cognitive models are much referred to when discussing complex tasks.  He based his cognitive models on his interesting and detailed studies of maintenance technicians.  It just happens to be the case that maintenance technicians :

- mainly use context-free working methods, as each piece of equipment they work on is different.  

- do not usually repair equipment while it is being used, so have no need to build up a dynamic mental model of its state.  

- are not usually trying to control the item's behaviour, so have no need to anticipate what it might do and act to prevent the worst from happening.  

So studying and modelling their behaviour does not capture the need for or nature of context-oriented working methods in dynamic environments.


This paper includes many generalisations about the nature of human cognitive processes.  There was not space in this paper to give much of the evidence supporting these generalisations, but there is much evidence in other papers on this site. See :

Analysis of verbal protocols from a process control task.

Working memory in air-traffic control.

Complex tasks.

Diagnostic skill in process operation.

Topics :

I. Introduction


II. Some evidence for these features of behaviour.

A. Inference and understanding.

B. Organising the sequence of behaviour.

C. Unfamiliarity, expertise, workload


III. The processes and types of model needed.

A. Overview structured by cognitive goals, a simple contextual model.

B. Sequential and contextual models : examples of what needs to be added to a sequential model to describe complex behaviour.


IV.  Brief notes on practical implications : for design of interface and training.


References


(I don't know why this paper was published several years after it was written.)




The Change in Concepts needed to Account for Human Behaviour in Complex Dynamic Tasks


Lisanne Bainbridge

Department of Psychology, University College London

London WC1E 6BT, England


IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 1997, 27(3), 351-359.




I. Introduction


This paper will argue that the models of human cognitive processes typically used by human engineers do not include any account of some important features of human behaviour in complex tasks.  This means these aspects of behaviour tend to be forgotten, with important consequences for the design of systems to support human behaviour.  The aim of this paper is to indicate the mechanisms needed and to suggest an alternative type of model.


Over a decade ago, human engineers began to recognise that mathematical models of the human operator are inadequate for representing human cognitive behaviour, although they are useful for representing human behaviour in tasks in which the critical human limitations are neuromuscular.  The cognitive models used by human engineers still tend to follow engineering concepts.  For example, the model of Wickens [1] and Rasmussen's ladder model (e.g.[2]) are in a family of models developed by Broadbent [3] at a time when concepts in experimental psychology derived from information theory and control theory.  These are 'processing stages' models, which are some variant of an 'attention - input processing - output choice - output execution' sequence.  Their chief characteristics are [4] (and see Figure 3.b):

- processing reacts to inputs,

- each processing stage responds to, and only to, the output of the previous stage,

- the output of a stage is (usually) a simple mapping of its input,

- there is a one-directional sequence of processing, from stimulus reception to response execution, 

- there is a set sequence of processing, though some of the stages may be omitted,

- other knowledge is referred to, if at all, only at later stages in the processing.  (A minimal code sketch of such a pipeline follows this list.)
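
The sketch below is purely illustrative (the function names and toy logic are invented, and it is not taken from any of the cited models): each stage reacts only to the output of the previous one, in a set, one-directional order.

```python
# Illustrative sketch only: a 'processing stages' model reduced to a fixed,
# one-directional pipeline. Stage names follow the 'attention - input
# processing - output choice - output execution' sequence; the bodies are
# invented placeholders.

def attend(environment):
    # Reacts to whatever input arrives; there is no active search for information.
    return environment.get("signal")

def process_input(signal):
    # Each stage responds only to the output of the previous stage.
    return {"perceived_state": signal}

def choose_output(perceived):
    # The output is a simple mapping of the stage's input.
    return "corrective_action" if perceived["perceived_state"] == "error" else "no_action"

def execute(action):
    return f"executed {action}"

def sequential_stages_model(environment):
    # A set sequence from stimulus reception to response execution;
    # knowledge, prediction and planning have no place to enter.
    signal = attend(environment)
    perceived = process_input(signal)
    action = choose_output(perceived)
    return execute(action)

print(sequential_stages_model({"signal": "error"}))  # -> executed corrective_action
```

Nothing in such a pipeline can look ahead, search for extra information, or consult knowledge before the later stages, which is exactly the limitation discussed next.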


Even in simple control tasks, such concepts may be inadequate to describe human behaviour.  Bainbridge [5] points out that in human behaviour :

- the distinction between obtaining information, making decisions, and setting control targets is often not clear,

- goal or target setting may be an important part of the operator's activity,

- expert control behaviour is often oriented to ensuring future states will be acceptable, not just to correcting present unacceptable situations.  Experts do this by making predictions, using some 'mental model' of the process, and by planning.  Prediction, goal setting and planning, which are 'central' processes, must come before input processing to identify the mismatch between actual and required states which needs to be corrected.  So the sequence of processing cannot be as described in sequential stages models.

- action choice may involve evaluating alternative actions.  Extra information about the situation or the action may be needed to make this evaluation.  So action choice may involve active attention to, and processing of, inputs.  Again the sequence cannot be as described in the typical sequential stages model.


The present paper focuses on cognitive processes in very complex dynamic tasks such as industrial process operation, naval mine sweeping, flying, air-traffic control, car driving, emergency services management (e.g. fire or battle fighting), and various medical professions.  (In this paper, most of the examples come from process operation.)  These tasks are cognitively demanding [6] : a person doing this kind of task has to co-ordinate a larger number of different types of cognitive processing than are used in other tasks.  This is because these tasks have one or more of at least four groups of key features.


1.  Information may not be available about the state of some parts of the system. Information which is available about the state, and the effects of actions on it, is often ambiguous.  The person needs to build up a structure of inference to describe the current, and anticipated future, situation as they understand it.  They actively search for the information they need (rather than just reacting to inputs), and combine this information with their knowledge of the possibilities.  They may have to make decisions under uncertainty, so bias can enter.


2.  The person is expected to keep under control one or more independent dynamic entities.  These entities will continue to behave in their own way over time even if no action is taken on them.  Because of this evolution over time, actions need to have the right size and timing to have the required effect.  

Alternatively, there may not be time to think about alternative actions before making a response.  Predictions, and anticipatory actions to avoid an error developing, may be more efficient than responding to an error after it has developed.  (A small illustrative sketch follows this list.)


3.  The dynamic entities may have several variables to be controlled, or there may be inter-dependencies, or a hierarchy of sub-tasks, so that the operator has to allocate processing resources between several simultaneous task responsibilities, each with different payoffs.  Because of the combination of evolution over time with multi-tasking, it may not be possible to complete one part of the task before starting on another, so sub-tasks have to be interleaved, which puts an emphasis on the organisation of behaviour, and on planning.  


4.  Most of these tasks are sufficiently complex for it not to be possible (at least in practice) to anticipate beforehand all the possible situations which might arise, and to pre-specify how to deal with them.  So people doing this type of task need to be flexible, to adapt their behaviour to changing details of the situation, to use general strategies, and to work out for themselves how to deal with unfamiliar situations.
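
The sketch promised under the second group of features is given below.  It is a purely illustrative comparison with invented dynamics and numbers: a controller that reacts only after an error has developed is compared with one that acts on a simple prediction, and with these made-up values the anticipatory version keeps the peak error much smaller, which is the point of feed forward.

```python
# Purely illustrative (invented dynamics): a process variable drifts away from
# its target. A reactive controller corrects only after the error has passed a
# tolerance; an anticipatory controller uses a crude prediction and acts earlier.

def worst_excursion(anticipate, steps=20, drift=1.0, target=0.0,
                    tolerance=3.0, horizon=3):
    state, worst = target, 0.0
    for _ in range(steps):
        state += drift                                    # the process evolves by itself
        worst = max(worst, abs(state - target))           # peak error actually reached
        predicted = state + drift * horizon               # crude 'mental model' prediction
        if anticipate and abs(predicted - target) > tolerance:
            state = target                                # act before the error develops
        elif not anticipate and abs(state - target) > tolerance:
            state = target                                # correct after it has developed
    return worst

print("peak error, reacting only :", worst_excursion(anticipate=False))  # 4.0
print("peak error, anticipating  :", worst_excursion(anticipate=True))   # 1.0
```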


Two aspects are fundamental to all these features of complex behaviour :

a. Inferences, task organisation, and adaptability are handled most effectively by building up an overview of the total task situation. (In teams, a team builds up a group overview, team understanding and plan of action [7], but team work will not be discussed in this paper.)


b.  All these types of behaviour are only possible after the development of considerable expertise : in recognition-primed decisions, working methods, appropriate knowledge, and in how to build up an overview relative to the task. 


These features of complex dynamic tasks have led to a change in models of how complex tasks are done, from sequential to contextual.  Contextual models focus on the overview, the temporary structure of inference built up in working storage to describe the task situation, and how this provides the context for later processing and for the effective organisation of behaviour.  


The remainder of this paper is in three sections, on:

- examples of evidence for the more complex features of behaviour and cognitive processes claimed above,

- the nature of the general cognitive processes, and the contextual models, implied by these aspects of behaviour,

- some brief notes on practical issues raised by these more complex aspects of behaviour.


II. Some evidence for these features of behaviour


This paper suggests the crucial features of complex dynamic tasks are that they involve building up an overview, and adapting the sequence of behaviour to the context, and that they can only be done effectively by someone with considerable expertise.  This section gives a brief summary of the evidence.  There are fuller accounts in [8], [9] and [10]. 


A.  Inference and Understanding

When doing a complex dynamic task, people may not react solely to the external stimuli.  The information directly available is often inadequate for understanding, or choosing an effective action.  For example, it is not possible to see a leak in the primary circuit of a pressurised water nuclear power plant, because the circuit is enclosed by a thick concrete wall.  Instead, the operator infers there is a leak when there is a mismatch between the displayed flows into and out of part of the circuit.  It is often necessary to make inferences, to interpret the situation, rather than to assume that the situation is only and exactly what can currently be sensed in the environment.  Someone doing a complex dynamic task builds up, in working storage, a temporary structure of inference which represents their understanding of the present and future situation, and their plans for what to do about it.  


B. Organising the Sequence of Behaviour

In complex dynamic tasks, the task variability is such that it is often not possible to make an automated or standardised response to a situation.  The optimum action may be one which takes into account expected future events, perhaps because they will give a natural solution to a present problem (for example, a change shortly due on a furnace will correct the present over-use of electric power by it), or in order to prevent an undesirable situation from developing.  Planning may be necessary to work out how to meet task goals.  In complex dynamic tasks there are usually several simultaneous goals to be met, and choosing a sequence of activity which is satisfactory by as many criteria as possible needs some predicting and comparing.  


In continuous control tasks, the operators have to integrate their actions with the evolution of changes in the process.  For example, in a steel-works power control task [12], the operators cycled between checking the acceptability of the process state and doing a section of task thinking.  If, when they checked the plant state, there was a clear need for action, then they made an action, usually one which they had thought out beforehand.  If the process state was unacceptable but action was not urgent, they chose, or refined the choice of, their future actions.  If the process state was acceptable, they reviewed what would happen in the future, and the implications for action, so that they were prepared for future events and what to do about them.  The details of their thinking depended on the results of previous thinking about this and other topics, and where they had got to previously in the sequence of possible thinking about this topic, as well as on the current state of the process.  And how long they went on thinking about a topic before they stopped and went back to checking the acceptability of the process state, depended on how acceptable the process state had been when they last checked it.
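
This cycle can be paraphrased, very schematically, in code.  The sketch below is an invented simplification (the readings, thresholds and wording are not the operators' actual procedure); it only shows the three-way branching between acting now, refining a future plan, and reviewing future events, with the results of earlier thinking kept as context.

```python
# Schematic paraphrase of the observed cycle: check the plant, then act,
# refine plans, or look ahead, depending on how acceptable and how urgent
# the current state is. All names and numbers here are invented.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Overview:
    """Working storage: the result of previous thinking, reused as context."""
    planned_action: Optional[str] = None
    notes: List[str] = field(default_factory=list)

def control_cycle(readings, target=100.0, tolerance=5.0, urgent_band=15.0):
    overview = Overview()
    log = []
    for value in readings:                      # each entry = one check of the plant state
        error = value - target
        if abs(error) > urgent_band:            # clearly unacceptable and urgent
            action = overview.planned_action or f"correct by {-error:+.0f}"
            log.append(f"act now: {action}")    # usually an action thought out beforehand
            overview.planned_action = None
        elif abs(error) > tolerance:            # unacceptable but action not urgent
            overview.planned_action = f"correct by {-error:+.0f}"
            log.append("refine plan for a future action")
        else:                                   # acceptable: look ahead instead
            overview.notes.append(f"state {value} acceptable, review coming events")
            log.append("review future events and implications")
    return log

print(control_cycle([101, 108, 118, 102]))
```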


Beishon [13], Reinartz [11] and Amalberti [14] studied operators doing tasks in which the process has to pass through a sequence of states.  For example, Reinartz studied a team of operators dealing with a major process fault, while Amalberti studied pilots during the phases of flight.  Operators may have to work simultaneously on several task goals, such as controlling the dynamics of the aircraft while following air-traffic control instructions about safe position and heading.  Reinartz' team of nuclear operators were working concurrently on nine to ten goals.  Operators may have a prior plan which they have to integrate with the expected evolution of the plant or aircraft behaviour, and with events which may be unpredictable in various ways, such as unexpected aspects of a plant fault, or messages from air-traffic control, as well as with the exact nature (such as the size and timing) of expected events.


All these factors mean it is unlikely that someone can complete the working method for one part of their whole task before starting on another part of the task.  People doing these tasks have to be flexible and adaptive to details of the present situation.  They have to decide on a satisfactory sequence in which to do parts of the task.  They have to choose how to switch between different task goals, so that adequate action is taken, at the appropriate time, to keep the external situation acceptable.  This allocation of effort between sub-tasks and goals is a key aspect of the effectiveness with which a complex dynamic task is done.


C.  Unfamiliarity, Expertise and Workload

Unfamiliarity, or its inverse, expertise, is closely linked to mental workload  [10], [12], [13].  Unfortunately it is difficult to put a number on human processing capacity limits in real tasks, for at least three reasons, to do with equipment design, the working methods available, and other effects of expertise.  


Operators use 

- part of their mental effort to do the main task, to operate the process or aircraft, for example, and 

- part of their mental effort to understand and operate the interface and job-aids they have been given to do the task with.  The same task can require more or less mental (or physical) effort, depending on the equipment used to do it.  This is one of the fundamental motivations for human factors.


If there are alternative working methods or strategies for doing a task, each of which requires a different amount of mental work to meet the same task demands, then there may not be a monotonic relation between task demands and mental workload [15], [16], [17].  While in physical tasks the effort expended and amount of work done are the same, this is not necessarily true in mental tasks.  And the person needs a way of choosing between the alternative strategies.


Expertise increases mental working capacity.  The efficiency with which parts of the task are done increases.  (This is the general meaning of the word 'skill', as used in psychology and in British industry.  In this sense, any type of behaviour may be more or less skilled.)  People who do complex dynamic tasks, such as nuclear power station operators, air-traffic controllers, or pilots, are often expected to have several years of experience before they are considered fully qualified to do the work.   There are various changes in processing with extended practice [10] :  

- Perceptual-motor skills improve and may become 'automatic', i.e. not using any of the limited cognitive processing capacity.  

- People doing cognitive tasks develop readily available working methods, including appropriate predispositions, recognition-primed decisions [18], and reference knowledge.  

- In addition, expertise affects how large an overview of the current situation can be built up. The 'magical number 7 plus or minus 2' limit to short-term memory capacity applies in situations in which people without relevant experience have to remember arbitrary material.  In contrast, Bisseret [19] found that experienced air-traffic controllers remembered on average 33 items about the situation they were controlling.  With experience, people learn the redundancies in the items to be remembered, and their overview, working methods and reference knowledge become interrelated and mutually reinforcing [20].  

With skill, the amount of mental work needed to achieve some given task demand is reduced.  Experts may therefore have more spare time, which they may use to plan ahead, which in turn makes them more efficient.


In many complex tasks, particularly highly automated processes, the operator may be expected to deal with situations which have not been anticipated.  In most complex dynamic tasks there is the possibility that combinations of circumstances will arise which have not been met before, so the person cannot use a standard working method and/or reference knowledge, either for understanding what is happening or for choosing an effective action.  They have to develop a working method and related knowledge, that is, to problem solve.  The strategies a person uses to solve a problem may include a mixture of approaches [10], such as to : 

- ask someone with appropriate expertise; 

- think of a previous similar situation on the process, and adapt the working method used then ('case-based' reasoning);

- think of an analogous situation, then solve the problem on the analogy and check if this solution applies to the real problem; 

- reason from basic principles, perhaps to think of the required goal state, imagine a trajectory of states connecting the actual state to the required state, and then think of actions which will carry out each of these state transforms.


Whichever approach is used, the person may mentally check the proposed method to assess its effectiveness.  If they have previous experience with part or all of the proposed working method they should know something about it, such as how long it takes, how accurate a result it gives, how much mental effort it involves, how much risk, etc.  So they can check this 'meta-knowledge' to test whether it fits the requirements of the present situation [16]. This check may include personal as well as task goals : is the proposed activity interesting, amusing, helpful, exciting, easy, etc.?  If the person does not know much about the properties of a proposed working method, then they may imagine (mentally simulate) carrying it out, to check its imagined effects against the task and personal criteria.  
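
A hedged sketch of this kind of meta-knowledge check is given below.  The strategies, properties and numbers are all invented; the point is only that a candidate working method is screened against the time available, the accuracy needed and the acceptable risk, and that problem solving is the fallback when nothing ready-made fits.

```python
# Illustrative sketch (all strategies and numbers invented): choosing between
# alternative working methods by checking their 'meta-knowledge' - roughly how
# long each takes, how accurate it is, how much effort and risk it involves -
# against what the present situation requires.

STRATEGIES = {
    "ask an expert":                 {"time": 10.0, "accuracy": 0.95, "effort": 0.2, "risk": 0.1},
    "adapt a previous case":         {"time":  3.0, "accuracy": 0.80, "effort": 0.4, "risk": 0.3},
    "reason from an analogy":        {"time":  5.0, "accuracy": 0.70, "effort": 0.6, "risk": 0.4},
    "reason from first principles":  {"time": 15.0, "accuracy": 0.90, "effort": 0.9, "risk": 0.2},
}

def choose_strategy(time_available, accuracy_needed, max_risk):
    # Keep only the methods whose known properties fit the current requirements...
    feasible = {name: meta for name, meta in STRATEGIES.items()
                if meta["time"] <= time_available
                and meta["accuracy"] >= accuracy_needed
                and meta["risk"] <= max_risk}
    if not feasible:
        return "problem solve: no ready-made method fits, devise a new one"
    # ...then, of those, prefer the one needing least mental effort.
    return min(feasible, key=lambda name: feasible[name]["effort"])

print(choose_strategy(time_available=6.0, accuracy_needed=0.75, max_risk=0.5))
# with these invented numbers -> 'adapt a previous case'
```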


Problem solving in practice may involve many factors which are often not included in theories of human problem solving based on laboratory studies.  Observation suggests that :

- reasoning from basic principles is not necessarily the first strategy used; 

- multi-dimensional probabilities, costs and values may affect the choice of solution (not necessarily consciously); 

- much use is made of two groups of human cognitive processes: 

- - firstly categories, similarities, analogies and case-based reasoning, and 

- - secondly meta-cognition using knowledge of the properties of potential activities; 

- neither the starting point nor the 'solution', the final state required, is necessarily pre-specified in detail, for example, if the aim is to get a faulty high-risk plant 'into a safe state'; 

- the nature of the 'problem space', the structure of the environment and the facilities available, are not always fully specifiable in advance.  For example in fire-fighting management the fire distribution, available appliances and work force are not exactly repeated, so it is only possible to develop prior working methods to deal with general categories of situation [18],[21].


An unfamiliar situation is one which the person does not have a previously practised working method and/or relevant reference knowledge for dealing with.  By definition, if expertise consists of having a readily available working method, etc. as suggested above, then expertise will be lower in unfamiliar situations.  However, experts are less likely to meet situations which they have not met before, and are more likely to have appropriate components of behaviour and knowledge which they can fit together to deal with the new situation.  But an unfamiliar situation will be more demanding even for them, in the sense of needing problem solving, which is the most capacity-demanding type of cognitive processing.


III. The processes and types of model needed


The key aspects of behaviour in complex dynamic tasks, to be accounted for by models [of this behaviour], fall into three main groups :

- the overview built up of the operator's understanding of the situation and what to do about it,

- the adaptive sequencing of behaviour, planning and multi-tasking,

- dealing with unfamiliarity, problem solving as the basis for developing expertise, and the interaction with mental workload.  

Problem solving, the development of expertise, and the effects on workload, are important processes, but they will not be discussed further here, see [16], [18], [20], [22], [23]. 


The flexible sequencing of behaviour in multi-tasking, the choice of strategy, and the place of meta-knowledge in these choices, have been modelled by Amalberti et al [14] (see also [15], [16]).  These choices are affected by the contextual overview.  Some issues about the nature of the overview will be expanded here, in sections on :

- inference via cognitive goals,

- the need to change from 'sequential stages' to 'contextual' models.


A.  Overview Structured by Cognitive Goals


As mentioned above, when doing a complex dynamic task, people do not react directly to the external stimuli.  They build up, in working storage, a temporary structure of inference which represents their understanding of the present and future situation, and their plans for what to do about it.  These inferences are built up from : information in the environment, processed for its relevance to the task; knowledge from their knowledge bases; and items already in working storage as a result of previous thinking.  (The word 'inference' may be somewhat misleading, as there are many processes by which this temporary inference structure is built up.)  For a review of evidence for these processes see [8].


The process of building up the inferences is mainly structured by cognitive working methods related to cognitive goals.  Cognitive goals are an intermediary to meeting task goals.  For example, the task goal : 'keep temperature at 300º', involves the cognitive goals : 'find current temperature', 'evaluate actual against required temperature', 'choose corrective action' (these may not be consciously explicit or distinct to the person doing the task).  Cognitive goals are not just trivial intermediaries, but are also a major aspect of how the person doing a complex task structures what they are doing [8], [24], [25]. 
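
As a minimal illustration (a coding of the example above, not a formalism from the paper), the task goal of keeping a temperature at 300 degrees can be written as a chain of such cognitive goals; the sensor reading, dead-band and correction wording are invented.

```python
# Minimal sketch of the example in the text: the task goal 'keep temperature
# at 300 degrees' met through intermediate cognitive goals. The plant reading
# and the correction rule are invented placeholders.

def find_current_temperature(plant):
    return plant["temperature"]              # cognitive goal: find current temperature

def evaluate_against_required(actual, required):
    return required - actual                 # cognitive goal: evaluate actual vs required

def choose_corrective_action(mismatch, deadband=2.0):
    if abs(mismatch) <= deadband:
        return "no action needed"
    return f"{'raise' if mismatch > 0 else 'lower'} heating to close a {abs(mismatch):.0f} degree gap"

def keep_temperature_at(plant, required=300.0):
    # The task goal is met by chaining the cognitive goals; in practice the
    # person need not be consciously aware of them as separate steps.
    actual = find_current_temperature(plant)
    mismatch = evaluate_against_required(actual, required)
    return choose_corrective_action(mismatch)

print(keep_temperature_at({"temperature": 291.0}))  # -> raise heating to close a 9 degree gap
```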


In industrial process operation, typically the main cognitive goals are to : infer the present process state, review future events and states, review (future) product demands and plant constraints, evaluate states against demands, review actions available and their effect, choose appropriate actions, prepare an activity plan [8].  However, it is not necessary to describe all responses to task demands as involving cognitive goals : for example if the person’s response is automated (perceptual-motor skill) or proceduralised, or if a person rejects a goal they have been given.


Cognitive goals are also involved in the way information is obtained.  People acquire information from the environment in two ways, actively and reactively.  

In 'active' information acquisition, people search for the information they need for what they are currently thinking about, that is, to meet their cognitive goals.  As part of this, people keep up to date with changes in the environment, depending on their confidence in the ongoing validity of the information they already have in working storage.  Knowledge from the knowledge base can suggest the information which it is appropriate to search for, and the assumed likelihood and importance of events in the environment which need to be checked on.  

'Reactive' information acquisition occurs when something in the environment overrides the attention processes controlled by the person's current train of thought.  Strong signals are usually used to cause this attention over-ride, for example a warning buzzer, but this process can also happen by serendipity, when the person happens to notice an item of information that is relevant to a sub-task which they are not currently thinking about [13]. Reactive information acquisition is much affected by the salience of the information.  
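
The two modes can be sketched as follows.  This is an invented illustration (the salience threshold, confidence values and signal names are made up): active acquisition samples only what the current cognitive goal needs, and only when confidence in the stored value has decayed, while a sufficiently salient signal can override the current train of thought.

```python
# Illustrative sketch of the two ways information is acquired.
# Thresholds and signal names are invented.

SALIENCE_OVERRIDE = 0.8   # e.g. a warning buzzer
CONFIDENCE_FLOOR = 0.5    # below this, the stored value is no longer trusted

def acquire_information(current_goal, working_storage, environment):
    # Reactive: a sufficiently salient signal interrupts the current thinking.
    for item, (value, salience) in environment.items():
        if salience >= SALIENCE_OVERRIDE:
            working_storage[item] = {"value": value, "confidence": 1.0}
            return f"attention captured by {item}"
    # Active: look up only what the current cognitive goal needs, and only if
    # confidence in the remembered value has decayed too far.
    for item in current_goal["needs"]:
        stored = working_storage.get(item)
        if stored is None or stored["confidence"] < CONFIDENCE_FLOOR:
            value, _ = environment[item]
            working_storage[item] = {"value": value, "confidence": 1.0}
    return f"searched for information needed by '{current_goal['name']}'"

environment = {"temperature": (291.0, 0.2), "feed_rate": (40.0, 0.1)}
working_storage = {"temperature": {"value": 295.0, "confidence": 0.3}}
goal = {"name": "evaluate temperature", "needs": ["temperature"]}
print(acquire_information(goal, working_storage, environment))
```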


The two main groups of cognitive goals are concerned with :

- the person's understanding of the situation, and 

- what to do about it.  

Except at the beginning of work, these are interdependent.  At first, at the beginning of a shift or after a major unexpected event such as a fault, understanding must be built up, and it may be incomplete due to time pressure.  After this, throughout the rest of the working period, understanding, planning and acting are interdependent.


Operators sequence their behaviour by choosing what to do next (not necessarily consciously) on the basis of the context.  This context is provided by the operator's overview : their understanding of the present state of the task, future events and plans, the probability, importance and constraints on various events and activities, and the point the person had reached in their previous thinking.  The results of any one aspect of thinking are kept in working storage, and provide the context for other thinking.  Understanding determines what to do next, and that determines new understanding.  





Figure 1 : Cycle of contextual processing, a basic contextual model.

Figure 1 shows a simple representation of the main features of this cycle.





Figure 2 : Cycle of contextual processing, with links to knowledge and environment added (see also notes about overview in Figure 1).

Figure 2 adds the knowledge base [referred to during task thinking], and the active and reactive relations with the environment.
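
Since the figures themselves are not reproduced here, the following sketch indicates, in code, the kind of cycle they describe.  It is an invented placeholder, not the paper's model: the overview held in working storage provides the context for choosing which cognitive goal to work on next, and the result of that thinking is added back to the overview, where it shapes later choices.

```python
# Invented sketch of the kind of cycle Figures 1 and 2 describe: the overview
# in working storage provides the context for choosing what to think about
# next, and is itself updated by that thinking. The selection rule, goal names
# and 'thinking' function are placeholders.

def choose_next_cognitive_goal(overview):
    # Whatever is least up to date in the overview is attended to next.
    for goal in ("infer present state", "review future events",
                 "evaluate states against demands", "choose actions"):
        if goal not in overview:
            return goal
    return "review future events"

def do_thinking(goal, overview, knowledge_base, environment):
    # Placeholder: real thinking would combine knowledge, information searched
    # for actively in the environment, and earlier inferences in the overview.
    return f"result of '{goal}' given {len(overview)} earlier inferences"

def contextual_cycle(overview, knowledge_base, environment, steps=4):
    for _ in range(steps):
        goal = choose_next_cognitive_goal(overview)           # context-driven, not input-driven
        result = do_thinking(goal, overview, knowledge_base, environment)
        overview[goal] = result                               # result becomes context for later thinking
        print(goal, "->", result)
    return overview

contextual_cycle(overview={}, knowledge_base={}, environment={})
```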


B.  Sequential and Contextual Models 


Different types of cognitive processing model may be appropriate for describing cognitive behaviour in different types of task.  In human engineering, the most frequently used models of cognitive processes consist of a set sequence of processing stages, starting with reacting to information, and ending with action execution.  Rasmussen's 'ladder' model (e.g.[2]) is an open-loop example in which some of the stages may be omitted.  This model represents the cognitive processes found in a study of diagnosis by electronic maintenance technicians [26].  This is a non-dynamic task which did not involve control of events in time or multi-tasking : every thought was followed by action, and the technicians minimised their mental effort by using a context-free strategy, in which they did not need to know about the specific device which they were repairing or to build up an overview of its functional state.  So this task did not contain many of the key features of complex dynamic tasks identified at the beginning of this paper.  


'Sequence of processing stages' models of human behaviour can be derived conveniently from engineering models.  





Figure 3.a. Sequential model : Box diagram showing place of human operator in control system.

Figure 3.a shows a conventional box diagram for feedback control, with the human operator shown by a single box. 

The human operator box might be developed into a fuller model, in three phases. 

It could first be expanded into a sequence of boxes which specify more detailed cognitive operations, as in Figure 3b.  






Figure 3.b. Sequential model : Description of human operator expanded to show some of the possible intermediate cognitive processes. 

Note that this representation now distinguishes between the overall task to be done - producing an appropriate action on the environment - and the many different cognitive processes which may be required in order to choose this action.  


Once the cognitive processes have been distinguished, then the intermediate states [in working memory] between the transforms might be added, as in Figure 3c.  





Figure 3.c.  Sequential model : Intermediate states added.

As these cognitive operations are not all needed in all tasks, some short cuts might be added, as in Figure 3.d.





Figure 3.d. Sequential model : Short cuts added, to represent tasks in which some of the processes are not used.

What is missing from this as a representation of human behaviour in complex dynamic tasks?  

First, knowledge is used in all operations.  Even in activation, knowledge is used as the basis for lowering the threshold for noticing items which are expected.  So knowledge bases need to be added to all the operations.  

All the operations may also involve 'observing'.  Whatever someone is thinking about, they may look for additional information that they need in their thinking.  

When executing the task, a person may find the state of the environment is such that they cannot do the task as planned, so they have to backtrack and change the task objectives and the procedure chosen for meeting them.  So further links need to be added to the human operator model, at least as in Figure 4.  





Figure 4. Some of the additional links needed to account for the sequence of cognitive behaviour in complex dynamic tasks.  (KB = knowledge base)  [Compare with contextual model in Figure 2.]

In fact, the evidence on human behaviour suggests that any of the cognitive operations can follow any of the other operations [9].  One of the reasons different operations in human cognitive processing can be distinguished, such as identifying the system state or formulating a procedure, is because they form relatively independent modules of thinking, which can be done before or after any of the other modules.


Sequential models have difficulty with describing cognitive behaviour in complex dynamic environments, because this behaviour does not occur in a simple perception-decision-action sequence.  Essential features of complex cognitive behaviour are the multiple interdependent cognitive operations, and the flexibility of the sequence in which these task topics are considered.  A competent person in this type of task does whatever aspect of thinking is appropriate to the current situation, which can be very varied.  This behaviour cannot be represented by a model which specifies the sequence of cognitive processing as one-directional (e.g. arrows only from left to right).  The flexibility of behaviour sequences can be better modelled in terms of an overview which describes the present task situation and the proposed plan of action, and also provides the context for choosing the optimum sequence of behaviour, as suggested in Figures 1 and 2.


Any representation, even more complex ones than the one-page 'models' in Figures 2 and 3d, necessarily only shows a sub-set of what could be represented.  So it is important to choose a sub-set of items to represent which make explicit the points which are important for a particular purpose.  The way these items are shown should provide a simple mnemonic for the important points, provide reminders to other relevant points which will be remembered by an expert, and provide a framework for thinking.


Typically, sequential models do not provide reminders about the following aspects of cognitive processing in complex dynamic tasks :

- the interplay of top-down (knowledge based) and bottom-up (input driven) processes in all cognitive operations.

- active search for information needed.

- the integrative and continuing contextual overview.

- the flexible sequencing of processes, and multi-tasking.


Some of these cannot be added to a sequential type of model without a fundamental change to the input-to-output organisation of behaviour, as suggested in Figure 4.  Others, like the overview, require the addition of new mechanisms [in a model].


Any simple representation of contextual processes will also be incomplete.  There are several important aspects of cognitive processing which need to be added to Figure 2 to give a complete account of cognition in complex dynamic tasks, such as :

- the use of non-rigid working methods, and particularly the ability to devise new working methods,

- the goal-oriented nature of behaviour,

- the effects of experience on the modes of cognitive processing used,

- risky decision making.


Perhaps the greatest gap in this contextual representation, which will be felt by those who study tasks which are mainly sequential, rather than cyclic control tasks such as industrial process operation or flying, is that it does not show the sequences of activity which can occur.  I would argue that a minimum of two forms of representation are needed for sequential  tasks.  A sequential representation would show the sequence of overt task behaviour people (should) carry out.  A cyclic contextual representation would show the flexible way they do the thinking which underlies this overt behaviour.


The need for this change from sequential to contextual models of cognitive processing is discussed in [4], [27], [28].  The terms 'sequential' and 'contextual' emphasise the comparison between a set sequence of processing stages and the choice of processing type as a function of context.  For the purposes in this paper it is only necessary to recognise the need for the general features of a contextual model, as in this section, without details of how they might be implemented.  References [9] and [24] contain more detail about suggested versions of such a model.  This distinction between sequential and contextual models is not the same as the difference between serial and parallel processing.  Existing context models in ergonomics do not constrain which parts of the processing are done in parallel.


IV. Notes on practical implications


This difference in models is not trivial.  Because the models most frequently used by human engineers do not contain any reminder about the effects of experience or the existence of context effects, top down processing, the flexibility of behaviour sequencing, problem solving, etc., these aspects tend to be forgotten when designing to support people doing these complex tasks, which can have undesirable effects.  However, approaches to the optimisation of design to support human beings in complex tasks, which are suggested by the context approach, have previously been discussed elsewhere (see references below).  There is only space to give some reminder comments here.


People need much hands-on experience, in order to develop and maintain their physical and cognitive skills.  The importance of expertise leads to an emphasis on training.  In cognitive skill :

- working methods are easy to use, 

- a temporary inference structure is built up effectively, 

- and reference knowledge is structured relative to task thinking so that it is easy to access, 

only if the cognitive activity is regularly practised.  Learning knowledge separately from using it in the task is less successful.  

This implies that a generous amount of training is needed, both initial and refresher, not just perfunctory demonstrations.  Different training methods are suited to different types of cognitive process, e.g. [20], [22].  Systems which involve control automation or cognitive automation (decision support systems) need to be designed so that operators can develop and maintain the perceptual-motor and cognitive skills they may need during manual take-over (for a discussion of some of the issues, see [29], [30]). 


As building up and maintaining a temporary inference structure (representing the current state of the person's understanding and planning) is an essential part of doing a complex task, this needs to be supported by equipment and job-aids design, and by training.  As the ability to build up an effective overview develops with expertise, it would be interesting to study the design of training methods which are oriented to improving these cognitive processes.  Also equipment interfaces need to be designed to support maintaining an overview of the state of the task, and so that this overview is easy to build up again if it is interrupted. 

Modern computer based interfaces, such as are used in industrial process operation and flying, only display part of all the potentially available information at any one time.  It is important for such interface systems to be designed so that users can keep an overview of what is happening in parts of the task which are not currently displayed in detail (for a discussion of some of the issues, see [31]).  Because the importance of overview and context effects tends to be forgotten, there is a current emphasis in display system design on sub-task-specific displays.  These displays may be optimum for given sub-tasks but may cause problems in the whole task, because the difficulties the user may have in combining such sub-task displays together, to reach an overview of what is happening, have not been considered in the design.


The temporary inference structure which is built up is not an unprocessed representation of the environment, but is the result of thinking about the task situation in relation to the task demands.  So it takes time to develop.  This needs to be considered in the design of, and allocation of function in, automated systems in which the operator is expected to take over operation (see discussion in [29], [30]).


Bainbridge [6] illustrates the importance of the overview in understanding the nature of human error in complex dynamic tasks.  Hollnagel [28] reviews the evidence that cognitive error processes are contextual, and gives an interesting discussion of the implications of this view for the nature of ergonomic task analysis, and the processes of making a Human Reliability Analysis.


All these approaches to human factors design and training need to be based on an appropriate task analysis, to identify the knowledge, and task and cognitive goals, which need to be supported.  Hierarchical Task Analysis (e.g. [32]) describes the hierarchy of task goals, which gives each activity its larger context.  Reference [33] reviews methods of cognitive task analysis, and describes a simulation which builds and revises a situation assessment, equivalent to the overview described in this paper.  However, a major gap in current human factors tools is an effective method for cognitive task analysis which focuses on cognitive goals.


References


[1] C.D.Wickens, Engineering Psychology and Human Performance, Columbus: Merrill, 1984.

[2]  J.Rasmussen, Information Processing and Human-Machine Interaction, an Approach to Cognitive Engineering, New York: North-Holland, 1986.

[3]  D.Broadbent, Perception and Communication, London: Pergamon Press, 1958.

[4]  L.Bainbridge, "Types of hierarchy imply types of model," Ergonomics, 1993, vol.36, pp. 1399-1412.

[5]  L.Bainbridge, "Mathematical equations or processing routines ?" in Human Detection and Diagnosis of System Failures, J.Rasmussen and W.B.Rouse, Eds. New York: Plenum Press, 1981, pp.259-286.

[6]  L.Bainbridge, "Difficulties and errors in complex dynamic tasks," Ergonomics, in press.

[7]  S.J.Reinartz and G.Reinartz, "Verbal communication in collective control of simulated nuclear power plant incidents," in Proc. 2nd European Meeting on Cognitive Science Approaches to Process Control, 24-27 October, Siena, Italy, 1989, pp.195-203.

[8]  L.Bainbridge, "Mental models in cognitive skill : the case of industrial process operation," in Models in the Mind, Y.Rogers, A.Rutherford and P.Bibby, Eds. London: Academic Press, 1992, pp.119-143. (The editors did not ask authors to proof read, and much of Table 4 is missing.)

[9] L.Bainbridge, Building up behavioural complexity from a cognitive processing element,  Department of Psychology, University College London, November 1993, 95 pp.

[10] L.Bainbridge, "Processes underlying human performance," in Aviation Human Factors, D.J.Garland, J.A.Wise and V.D.Hopkin, Eds. Hillsdale NJ: Lawrence Erlbaum Associates, in press.

[11]  S.J.Reinartz, "Analysis of team behaviour during simulated nuclear power plant incidents," in Contemporary Ergonomics 1989, E.D.Megaw, Ed. London: Taylor & Francis, 1989, pp.188-193.

[12]  L.Bainbridge, "Analysis of verbal protocols from a process control task," in The Human Operator in Process Control, E.Edwards and F.P.Lees, Eds. London: Taylor & Francis, 1974, pp.146-158.

[13] R.J.Beishon, "An analysis and simulation of an operator's behaviour in controlling continuous baking ovens," in The Human Operator in Process Control,  E.Edwards and F.P.Lees, Eds. London: Taylor & Francis, 1974, pp.79-90.

[14] R.Amalberti, Modèles d'activité en conduite de processus rapides : implications pour l'assistance à la conduite, PhD Thesis, University of Paris 8, 1992.

[15] L.Bainbridge, "Problems in the assessment of mental load," Le Travail Humain, 1974, vol.37, pp.279-302.

[16] L.Bainbridge, "Forgotten alternatives in skill and workload," Ergonomics, 1978, vol.21, pp.169-185.

[17] J-C. Sperandio, "Charge de travail et regulation des processus operatoires," Le Travail Humain, 1972, vol.35, pp.85-98.  English summary in Ergonomics, 1971, vol. 14, pp.571-577.

[18] G.A.Klein, "Recognition-primed decisions," in Advances in Man-Machine System Research Volume 5, W.B.Rouse, Ed. Greenwich CT: JAI Press, 1989, pp.47-92.

[19]  A.Bisseret, "Memoire operationelle et structure du travail," Bulletin de Psychologie, 1970, vol. XXIV, pp.280-294.  English summary in Ergonomics, 1971, vol. 14, pp.565-570.

[20] L.Bainbridge, "Development of skill, reduction of workload," in Developing Skills with Information Technology, L.Bainbridge and S.A.Ruiz Quintanilla, Eds. Chichester: Wiley, 1989, pp.87-116.

[21] R.Samurçay and J.Rogalski, "Analysis of operator's cognitive activities in learning and using a method for decision making in public safety," in Training, Human Decision Making and Control, J.Patrick and K.D.Duncan, Eds. Amsterdam: North Holland, 1988.

[22] R.M.Gagné, Conditions of Learning and the Theory of Instruction, 4th Ed., New York: CBS College Publishing, 1985.

[23]  L.Bainbridge, "Planning the training of a complex skill," Le Travail Humain, 1993, vol. 56, pp. 211-232.

[24] G.A.Sundström and A.C.Salvador, "Principles for support of joint human-machine reasoning : supervision and control of future multi-service networks," in Proc. 3rd European Conf. on Cognitive Sci. Approaches to Process Control, 2-6 September, University of Wales College of Cardiff, School of Psychology, 1991, pp.269-280.

[25] J.Duncan, "Goal weighting and the choice of behaviour in a complex world," Ergonomics, 1990, vol. 33, pp.1265-1279.

[26] J.Rasmussen and Aa.Jensen, "Mental procedures in real life tasks : a case-study of electronic trouble-shooting," Ergonomics, 1974, vol. 17, pp. 293-307.

[27] E.Hollnagel, "Coping, coupling and control : the modelling of muddling through", in Proc. 2nd Interdisciplinary Workshop on Mental Models, 23-25 March, Cambridge, Robinson College, pp.61-73, 1992.

[28] E.Hollnagel, Reliability of Cognition : Foundations of Human Reliability Analysis, London: Academic Press, 1993.

[29] L.Bainbridge, "Ironies of Automation," Automatica, 1983, vol. 19, pp. 775-779.

[30] L.Bainbridge, "Will expert systems solve the operators' problems?" in Proceedings  of the Workshop on Technological Change Process and its Impact on Work, R.A.Roe, M.Antalovitz and E.Dienes, Eds.  Siofok, Hungary, September 9-13, 1990, pp.197-218. 

[31] L.Bainbridge, "Multiplexed VDT display systems," in Human-Computer Interaction and Complex Systems, G.R.S.Weir and J.L.Alty, Eds. London: Academic Press, 1991, pp.189-210.

[32] A.Shepherd, "Hierarchical task analysis and training decisions," Programmed Learning and Educational Technology, 1985, vol. 22, pp. 162-176.

[33] E.M.Roth, D.D.Woods and H.E.Pople, "Cognitive simulation as a tool for cognitive task analysis," Ergonomics, 1992, vol.35, pp. 1163-1198.






© 2021 Lisanne Bainbridge
