Multiple representations or 'good' models

I became interested in these issues because I found it a problem that Jens Rasmussen’s cognitive process models are so popular, despite the fact that they do not represent many aspects of human cognitive processes in complex dynamic tasks.

One issue with Rasmussen's models is that they derive from his detailed studies of the activities of maintenance technicians. The technicians do not need to maintain :
- easy access to device-specific knowledge and all types of skill (in the general sense of expertise), or
- working storage about the current and anticipated states of their device,

so these, although essential for process operators, are not included in his models.

So why are his models so successful ? Because they communicate to engineers about cognitive processes in a language that engineers understand.

This has advantages, but it also has the serious disadvantage that some important issues about human cognitive processes are not communicated, e.g. Bainbridge 1997.

Topics :

1.  Introduction.

2. What can be a model ?

3. Models for different purposes.

3.1. Design.

3.1.1.  Understanding underlying processes.

3.1.2.  Data on performance using different interface-task combinations.

3.2. Prediction.

4. Cross reference between models for different purposes.

4.1. Examples of different models for different purposes.

4.2. Difficulties with cross-referencing.

This paper was not on the original site, but the diagrams make a small point clearly.


Lisanne Bainbridge

Department of Psychology, University College London

in Patrick, J., Duncan, K.D. (eds.) Training, Human Decision Making and Control. Elsevier/ North Holland Amsterdam. 1988, pp.1-11. 


There can be many different models in the same general field, each oriented to a particular practical question and making different information explicit. Many of the implications of a model are implicit; understanding a model is left to professional expertise.  This makes it difficult both to cross-reference between models, and to communicate between disciplines.  This paper outlines the different purposes for which different models are used, and the problems raised by using multiple models.

1.  Introduction.

Models of human beings have to represent a subset of all the knowledge we have, or they would be too complex. The best practical models are problem oriented, focussing on information which aids thinking about a particular question.  Human factors/ ergonomists (HF/E) have several purposes for representation :

1.  describing human behaviour.

2.  representing the task (Task Analysis).

3.  designing displays and interfaces which represent information about equipment and task.

We will briefly review three topics :

1.  What can be a ‘model’ or representation?

2.  What are the different purposes models can be used for?

3.  If there are good reasons for having many different types of model, does this in itself raise problems?

We could ask these questions about each of the representation areas, but this paper will concentrate on models of human behaviour which are used as a basis for equipment design and performance prediction. (For some discussion of these aspects of interface design, see Rasmussen (1), Bainbridge (2)).

2.  What can be a model ?


Any description (verbal, pictorial, mathematical), which may or may not be fully specified but which is simpler and more compact than a complete listing of the behaviour it describes, can be called a ‘model’. A model is a statement or diagram which encapsulates some (useful) knowledge. A relational sentence, such as ‘For pointing tasks, a mouse is an easier control to use than a keyboard’, is a ‘model’ in this general sense. We can use the sentence to make predictions about behaviour, and the equipment designs which improve behaviour. Most of the information used in ergonomic design is of this sort, although it is not usually dignified by the title of ‘model’.

The constructions that we are more familiar with as ‘models’ are frameworks for representing a wider range of behaviour, such as box diagrams representing person-equipment systems or stages in cognitive processing.

In formal models, the formalism constrains the information which can be expressed completely and rigorously. The formalism provides a set of symbols, cues with special meanings. For example :

a.  In a box diagram, the boxes represent transforms and the links between them show variables. This is not usually stated on the diagram.

b. Within a box, the nature of its transform may be represented by an equation, in calculus or Laplace notation. The components of this equation will also act as cues to an expert, indicating, for example, the type of transient response made by the transform.

c.  An actual transient response could be represented explicitly by another type of formalism, a graph of the output variable against time. A graph as a formalism represents other information by cues which can be interpreted by an expert. For example a particular shape indicates an effect with an asymptote.

The formalism provides the symbols (in the ergonomic rather than semiotic sense) which are reminder cues to other relevant knowledge. It provides a language for expressing explicitly some characteristics of a particular situation, for example the actual time constants involved, or the actual connections between the boxes.

3.  Models for different purposes.


As models represent only a sub-set of the possible information, there can be many different models for the same general behaviour, each of which represents a particular aspect of it. The models may be different in detail, or in what they attempt to represent. We can make some preliminary comments about what representations are useful for what purposes.

The primary purpose of HF/E models is to describe and account for human capacities and their variability, as a basis for prediction and design. We will discuss these two responsibilities separately. In both cases, dealing with the enormous range in potential human performance, from supreme creativity to blind panic, is a representational problem which HF/E can have tools for [2022 ! ? ]. We can describe and predict this variability using statistics.

We also need an attitude to modelling which limits the variability considered. The most useful philosophy is  ‘support the best, predict the worst’. To design good interfaces, we need to know what is the best that people can do, and how to encourage it. When predicting performance, we need a conservative estimate, and use models which assume a human operator with minimum task-specific expertise.

3.1. Design

To support design decisions, we need two types of model, for understanding the mechanisms underlying human behaviour, and for describing how well people can do different tasks using different interface components. These two types of model can be based on different types of evidence, from case studies and experiments.

3.1.1.  Understanding underlying processes.

Understanding the processes underlying human behaviour is an important basis for identifying the types of interface support that will be needed. It is especially important in designing for complex tasks using advanced technology, because many of the thinking processes cannot be observed directly. For example, if we know that short-term memory can easily be disrupted, this could have important implications for designing to minimise the need for it or to support it when it is unavoidable. Evidence about such processes comes both from psychological laboratory experiments, and from case-studies of complex tasks.

Psychological experiments are deliberately set up to test between alternative explanations for behaviour. However, as they are concerned with few and independent variables, not many of them tell us much about complex behaviour.  Complex behaviour raises the problem of how the small units of behaviour, which can be tested in the laboratory, are selected and interrelated when dealing with a large, interacting, and multiple-goal environment. For these task situations, we get our information about underlying mechanisms from case-studies. These may tell us about, for example :

a.  Types of knowledge used in a particular task  : e.g. process dynamics, operating routines.

b.  Ways in which the knowledge-base is accessed : e.g. maintenance technicians and process operators may diagnose faults in different ways.

c.  Task strategy : e.g. opportunistic sampling, or planning future activities.

d.  Attitudes : e.g. to introduction of new technology.

All of these give us clues on how to design complex interfaces, and how to introduce them to their users.

There can be at least three different types of model for underlying processes.

1.  Educational models provide a framework in which inexperienced people can begin to understand a new area of knowledge. These models often use familiar analogies as a route to unfamiliar ideas. For example, Rasmussen’s (3) model has been successful and popular in introducing human cognitive processes to engineers. The model is very good for this purpose, though it may not be adequate as a basis for human factors design.

2.  Scientific models give logically consistent, parsimonious, and preferably elegant, mechanisms which account for rich data, and aim for completeness. It should be possible to generate hypotheses by which the model can be tested. When this type of model is fully specified, it may be sufficiently rich and interesting to be called a ‘theory’.

3.  Applied models have a more modest aim, to represent the information needed for a particular design situation. It is interesting that applied models can be former scientific models. For example, Newtonian mechanics are still useful in engineering, even if not in physics, while the distinction between short-term and long-term memory is a useful one in designing memory support systems, even though it is no longer sufficiently rich as a model for the actual cognitive processes involved in remembering and using complex information.

3.1.2.  Data on performance using different interface-task combinations.

These data are usually collected from experiments in which the standard laboratory methods are used, to ensure that the results will be valid. But the experiment is designed not to test between hypotheses but to measure performance in a range of test situations. At a minimum, the results give ordinal data, or inequalities, such as :

Reading time (passive sentences) > Reading time (active sentences). 

These inequalities give us much of our basis for making design recommendations to improve performance, e.g. sentences in operating procedures should be active not passive.

3.2.  Prediction

Predicting human performance raises a different range of issues, and has different criteria for success. It is relatively easy for ergonomists to make ordinal statements about trends in human behaviour. These indicate the way to change an interface to improve human performance. What is very much more difficult is to get ratio scale data (numbers which can validly be added and divided) about human behaviour, which can be used to make predictions, for example to calculate whether a specific investment in re-designing an interface will give an acceptable cost-benefit return in improved performance level. It would be particularly useful if the relation between task and performance could be described by an equation. We have only a few of these equations, for several reasons related to both attitudes and techniques.
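One of the few such equations is Fitts' law for pointing movements, which does relate task parameters to performance time on a ratio scale. As a sketch (the coefficients a and b below are illustrative, not fitted data) :

```python
import math

def fitts_movement_time(distance_mm, width_mm, a=0.1, b=0.15):
    """Fitts' law: predicted movement time (s) for a pointing task.
    a and b are device-specific constants fitted from data; the
    values here are illustrative, not measured."""
    index_of_difficulty = math.log2(2 * distance_mm / width_mm)
    return a + b * index_of_difficulty

# Ratio-scale prediction: time saved per pointing action
# by doubling the target width.
before = fitts_movement_time(200, 10)
after = fitts_movement_time(200, 20)
print(f"predicted saving per pointing action: {before - after:.3f} s")
```

Multiplied by the expected number of pointing actions per shift, such a prediction can be set against re-design costs, which is exactly the cost-benefit use of ratio scale data described above.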

Aeronautics engineers are not concerned that their equations do not describe the behaviour of a leaf blowing down the street. Where behaviour equations are concerned, we are all 'leaves', and people who are not used to the nature of human behaviour modelling are very good at spotting that a behaviour model does not represent their own idiosyncrasies, and therefore rejecting it as no good. Perhaps a more important aspect of attitude is that most ergonomists are trained within an analytic framework, for relating human behaviour to the principles of the underlying scientific disciplines of psychology, physiology, etc. In contrast, what is needed for practical predictions is simply an equation specifying the relation between task and behaviour. This need not contain any representation of processes by which this correlation is produced. The components of such equations just need to describe the relation. This is not an approach to modelling which is much taught in the ergonomic disciplines.

Early control theory models of flying are a good example. They include only mechanisms for responding by feedback to random sine-waves. They include no account of how human tracking performance can improve if the pilot can anticipate or preview the track. They model only a modest part of the control task, flying straight and level, so they do not suggest or justify interface designs for the whole flying task. They give a worst-case conservative prediction of performance. They do however predict, to a first order of accuracy, whether a proposed air/spacecraft of known dynamics can be flown by a human controller, and can therefore save millions in development costs. And they make this prediction in language which can be used to describe all the relevant parts of the system. The fact that they describe only straight and level flight is not a limitation for this purpose, as if an aircraft cannot be flown straight and level it is unlikely to be able to land safely either.
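The spirit of these feedback-only models can be sketched in a few lines. This is a toy simulation, not any published pilot model : the 'pilot' is a pure gain acting on a delayed error signal, and the 'aircraft' a simple integrator; all parameter values are assumptions for illustration.

```python
import math

def simulate_tracking(gain=2.0, delay_steps=3, dt=0.02, duration=10.0):
    """Feedback-only tracking of a sine-wave target: the 'pilot'
    applies a control proportional to the delayed error, and the
    'aircraft' is a simple integrator. Illustrative parameters only."""
    n = int(duration / dt)
    output = 0.0
    error_buffer = [0.0] * delay_steps   # models human reaction delay
    sq_err = 0.0
    for i in range(n):
        target = math.sin(2 * math.pi * 0.2 * i * dt)  # 0.2 Hz track
        error = target - output
        error_buffer.append(error)
        control = gain * error_buffer.pop(0)  # act on the old error
        output += control * dt                # integrator plant
        sq_err += error ** 2
    return math.sqrt(sq_err / n)              # RMS tracking error

# No preview of the track is used, so this is a conservative
# (worst-case) estimate of tracking accuracy.
rms_error = simulate_tracking()
```

Because the model contains no anticipation, its RMS error is an upper bound of the kind needed for 'predict the worst' design decisions.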

This type of modelling also needs a different approach to model evaluation. Most models of cognitive mechanism are evaluated [in experiments] by using statistical tests which measure how likely it is that experimental results happened by chance. What is needed in developing a practical technique is to identify the minimum number of important factors which must be taken into account. We have to find which variables account for most of the variance in the data, and which of them do not give any additional information. This is a sort of statistical testing which is much less frequently used, but which is necessary in developing pragmatic techniques which can be used with minimum effort.
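This variance-accounted-for style of testing can be sketched with synthetic data. The variables, coefficients, and noise level below are invented for illustration, not taken from any study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic performance data: two task variables matter, one does not.
# (Illustrative data, not from any real study.)
n = 200
demands = rng.uniform(0, 1, n)      # task demand level
practice = rng.uniform(0, 1, n)     # amount of practice
shoe_size = rng.uniform(0, 1, n)    # irrelevant variable
performance = 2.0 * demands - 1.5 * practice + rng.normal(0, 0.1, n)

def r_squared(predictors):
    """Proportion of variance accounted for by a least-squares fit."""
    X = np.column_stack(predictors + [np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
    residual = performance - X @ coef
    return 1 - residual.var() / performance.var()

r2_full = r_squared([demands, practice, shoe_size])
r2_two = r_squared([demands, practice])
r2_one = r_squared([demands])
# Dropping the irrelevant variable barely changes the variance
# explained, so a pragmatic model can leave it out.
```

The comparison between `r2_full` and `r2_two` is the pragmatic test : a variable which adds almost nothing to the variance accounted for can be dropped from a minimum-effort technique.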

It is much more difficult to ‘predict the worst’ in this way for cognitive tasks, in which people work out for themselves what to do, i.e. problem solving, than for flying. Arguing by analogy with control theory models of flying, to make a conservative performance prediction we need to :

1.  assume that the human operator has minimum special knowledge about non-randomness of the environment, and so has maximum difficulty with the task. This would mean that we have to account for the ability to solve problems, which is needed when specific knowledge is not available.

2.  choose a part-task which tests the limits of performance, but simplifies the ergonomist’s task from one of accounting for performance in all task situations. An example of a crucial part-task in process control might be fault diagnosis, which is hardly a trivial problem to model.

3. describe the behaviour in language also used for describing other parts of the system. Programming language descriptions are more appropriate for cognitive processes, but they are difficult to combine with ratio scale measures and statistical testing.

Such descriptions might be tested in some type of Monte-Carlo simulation, to estimate the probability of different behaviour types and performance levels. At present this is not practical, because it is very demanding on computer time. We do not know much about the possible simplifications of cognitive models which would reduce simulation time, without significantly reducing the validity of the predictions.
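A minimal sketch of such a Monte-Carlo estimate, using an invented serial-checking model of fault diagnosis (the number of candidate causes, the step times, and the deadline are all assumptions, not data) :

```python
import random

random.seed(1)

def diagnosis_time():
    """One simulated fault diagnosis: the operator checks candidate
    causes in turn until the fault is found. The number of candidates
    and the time per check are illustrative assumptions."""
    candidates = random.randint(2, 8)            # hypotheses to check
    fault_position = random.randint(1, candidates)
    time = 0.0
    for _ in range(fault_position):
        time += random.gauss(20.0, 5.0)          # seconds per check
    return time

# Monte-Carlo estimate of the probability that diagnosis
# exceeds a (hypothetical) 2-minute deadline.
trials = 10_000
late = sum(diagnosis_time() > 120.0 for _ in range(trials))
p_late = late / trials
```

Even this toy version shows the output such a simulation provides : a probability distribution over performance levels, rather than a single predicted value.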


4.  Cross reference between models for different purposes.

To illustrate both that different models are appropriate for different purposes, and the difficulties of cross-reference between them, we will look at some models for human response to workload.

4.1.  Different models for different purposes.

Mental workload is usually modelled as a monotonic relation between task demands and mental work : as the task demands increase, mental work increases up to its capacity limit, and additional task demands cannot be handled. In fact, if people doing a task have several alternative strategies, each of which uses a different amount of mental work, then as the workload increases they can change to an easier strategy, which reduces their mental workload below capacity, and so they can keep going. This means that there is not necessarily a monotonic relation between task demands and mental workload. This is made clear by Figure 1. 

Figure 1.  Effect of changing strategy on the relation between task demands and mental work (adapted from Sperandio (4)).

Figure 1 represents the non-monotonic nature of the relation explicitly, which is important for making the point when arguing against monotonic theories. However it also illustrates the way that models can mislead. By representing the behaviour in a way which emphasises the point about changing strategies, the graph implies that people change strategy when mental load is approaching a maximum. Actually this change happens as a result of many constraints. More experienced workers change before they reach the limit.

[The graph also implies there is a step change between use of strategies, and so big changes in mental effort, while actually the controllers use a mix of strategies as workload increases.]
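The strategy-change effect behind Figure 1 can be sketched as choosing, at each demand level, the least effortful strategy that is adequate. The thresholds and effort functions below are made up for illustration, not fitted to the air traffic control data.

```python
def mental_work(demand):
    """Toy model of Figure 1: each strategy's mental work grows with
    task demands, but a more economical (more selective) strategy only
    becomes adequate above some demand level. The operator uses the
    cheapest adequate strategy. All numbers are illustrative."""
    # (minimum demand at which strategy is adequate, base effort, slope)
    strategies = [(0.0, 1.0, 2.0),   # thorough: always usable, costly
                  (3.0, 2.0, 0.8),   # selective: adequate from demand 3
                  (6.0, 3.0, 0.3)]   # minimal: adequate from demand 6
    usable = [base + slope * demand
              for threshold, base, slope in strategies
              if demand >= threshold]
    return min(usable)

# Mental work rises within a strategy, then drops at each strategy
# change: the demand-work relation is non-monotonic.
work_curve = [mental_work(d) for d in range(9)]
```

Replacing `min` with a weighted mix of the adequate strategies would give the smoother transitions noted in the bracketed comment above.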

Figure 2.  The major factors relating mental task demands and task achievement, from Bainbridge (5).  A possible mechanism underlying the results shown in Figure 1.

Figure 1 represents the function relating task demands and mental work, but does not show any mechanism by which this might happen.  Figure 2 represents a possible mechanism. It is educational, using a familiar feedback loop representation in which the symbols have known meanings, and can be used to make trend predictions. As a representation it focusses on the response to task and mental workload requirements, and the choice of behaviour as a balance between the two. It shows that task demands and mental workload are indirectly, rather than directly, related.

Figure 2 illustrates another aspect of representations, that points which are not explicitly represented may be indicated by cues, reminders, or aids for working out the implications. The task demands - mental capacity relation is represented as a causal link, but there is no explicit information about how this occurs. The link acts as a reminder to take this into account. Knowing how to do so needs professional expertise, or reference elsewhere.

Figure 2 does not represent the actual mechanism of strategy choice. Figure 3 is a representation which focusses on the choice between alternative behaviours. It is part of a more general 'scientific’ model accounting for the complexity of human behaviour. The effects illustrated in Figures 1 and 2 could result from this mechanism, but this would not be obvious to the casual observer. Figure 2 is an easier form of representation to use to predict that the relation between task demands and mental load is as shown in Figure 1.

Figure 3.  Behaviour Choice.  A method is chosen whose parameters best meet all sub-goals, both task and personal, from Bainbridge (6).  A possible mechanism for the choice of working method in Figure 2.

4.2.  Difficulties with cross referencing

If there are many different representations of the same thing, it may be difficult to cross reference between them, to see how they map onto each other. We can illustrate the difficulty of mapping between different models by following task demands and mental work through the three diagrams. In Figure 1 task demand is the independent variable on the graph, and mental work the dependent variable. In Figure 2 task demand is the input to one feedback loop, and mental work is the output of the other. In Figure 3, task demands appear in terms of the task context, while mental work is represented in terms of the working storage and mental operations used in a particular strategy. These anticipated mental workload effects are taken into account in the choice of behaviour. 

There is a parallel problem of cross-reference in modern display design. Computer based VDT displays make it possible to provide an industrial process operator with many alternative displays, each of which could be optimum for a particular sub-task (by making the task decisions ‘obvious') or knowledge type (physical components, cause-effect relations, behaviour over time, etc.). But these displays pose similar problems. Specialised displays may omit information which is important in other tasks, and it takes time to find it on another display. Cross referencing between the two displays, finding the same item on each, may involve detailed search. This means that, from the point of view of optimising the system and task as a whole, rather than a display for each sub-task considered independently, it may be better to have a more general display which can be used for many purposes.

The obviousness of the mappings between representations, as well as of the meaning of cues, depends on previous training. For example, much of training in control theory consists of learning what behaviour is implied by particular equations (e.g. first or second order responses - and this is itself a cued reminder), and the effect of different parameters on behaviour, e.g. the damping constant. Control engineers also learn to map between time and frequency domains, and polar plot and Bode plot representations of the behaviour. Different disciplines, which have different primary interests or practical problems, use models which focus on different aspects of the data (e.g. psychological or physiological models of movement). This, in combination with the different training in what is taken for granted from a given representation by someone trained in a particular discipline, causes major problems in communication between disciplines.

In conclusion, a model may be a better representation of the same problem than another model, but it more often will be an alternative representation for a different problem. It is therefore important to make clear what purpose is served by a particular model, and how it maps onto existing models in the same general area.


(1) Rasmussen, J., On the structure of knowledge - a morphology of mental models in a man-machine system context. Riso-M-2192. Riso National Laboratory, Roskilde, Denmark, 1979.

(2)  Bainbridge, L., VDT/VDU interfaces for process operation, in Proceedings of the Second International Symposium on Occupational Ergonomics, Zadar, Jugoslavia, April 1987.

(3)  Rasmussen, J., IEEE Transactions on Systems, Man and Cybernetics, SMC-13 (1983) 257.

(4)  Sperandio, J.C., Le Travail Humain 35 (1972) 85.

(5) Bainbridge, L., Le Travail Humain 37 (1974) 279.

(6) Bainbridge, L., Ergonomics 21 (1978) 169.

(7)  Bainbridge, L., What should a ‘good’ model of the NPP operator contain?, in Proceedings of the International Topical Meeting on Advances in Human Factors in Nuclear Power Systems, American Nuclear Society, Knoxville, Tennessee, April, 1986. pp.3-11.


©  2022 Lisanne Bainbridge

