Is your simulation valid? helpful? for what purpose?

Despite the title, this is not about ‘simulation’ in the sense of computer models of the operator (or of the mechanism they are controlling). Instead this is about ‘simulation’ in the general sense of a simplified representation of part of the world, which can be used to make predictions.

As I became more in demand in the late 1980s, I no longer had the extended and totally uninterrupted periods of several months in the long vacations which I found essential for developing anything new. This meant that many of my later papers reiterated points which had been made before. As a result, there are many papers in my publications list which I didn’t feel the need to keep long-term.

This paper is an example of re-using some material - though there is also new content. I happen to have kept this because it was for an uncompleted project.

Some sections are much like the 'Multiple representations or good models' paper. In other ways it is a supplement to that paper, suggesting many more ways in which extrapolating a representation from one situation to another may be misleading or, conversely, may be acceptable and used even though it is invalid.

Topics :

-  the different forms of representation.

-  what is made explicit or distorted by different representations.

- the validity of the representations.

- how purpose affects the optimum choice of representation. This section is in four main subsections, on:

- - design,

- - measurement,

- - communication,

- - satisfaction.

Is your simulation valid? helpful? for what purpose?

Lisanne Bainbridge

Department of Psychology, University College London

(1992) prepared for proposed book : Cacciabue, C., Pavard, B. and Hoc, J-M (eds.) Simulation in Dynamic Environments.

Of course some sort of representation of the human operator, properly based on evidence about their behaviour, is essential as a basis for designing to support such people. But the basic question of this paper is: what is the optimum form that this representation should take? This question arises because making a representation involves selecting a sub-set from all the possible aspects which could be represented. And any such selection will make some features clear and explicit, and will omit, obscure or distort others, making it more difficult to see their importance or contribution. This paper will focus on three aspects of representations: technique, validity and purpose.

The obvious constraint on the choice of representation is to link it to the purpose of the representation. A representation of operator behaviour or the mechanisms underlying it may be made for several purposes. The main groups of ergonomic aims are to:

- suggest a design strategy or carry out a design.

- make a numerical performance prediction as a basis for making decisions about the allocation of (re)design resources.

There is also a group of reasons which are not directly ergonomic, but which are influential in the choice of representations. These are:

- educational purposes, conveying the basic concepts and aims of an area,

- personal aims, such as obtaining grants, impressing colleagues, or getting a feeling of excitement or safety from using known or exploring new concepts or techniques. 

The impersonal purpose of the representation should constrain its content and the necessary level of detail, and these should then constrain the technique used for the representation. Unfortunately what often happens is the other way round. An investigator wants to use a particular technique. This constrains the content and detail of the representation, and the purpose may be distorted to fit what is given by the technique.

There are several possible technologies for making a representation: computer programmes, mathematical equations, diagrams/graphics, and verbal description. And each of these can take many forms (e.g. Bainbridge, 1988). Different authors use the terms ‘simulation’ and ‘model’ to indicate from one to all of these techniques for representation. As these terms are not used consistently I plan to avoid them, and to use more specific labels throughout. (This paper is also exclusively concerned with representing human operators, not with representing the task with which they are faced).

The possible representation methods differ in their validity. There are two aspects to this validity: whether the representation is valid as a representation of human behaviour, and whether the behaviour being modelled is valid as behaviour representative of the task under consideration. As I am a psychologist/ergonomist, I consider the most important criterion of validity is that the representation maps onto actual human behaviour and mechanisms, and that representations which do not do so are distortions. However there are reasons why the use of techniques which are distortions of the ‘truth’ may be either acceptable or understandable.

The sections of this paper discuss more fully the following topics:

-  the different forms of representation.

-  what is made explicit or distorted by different representations.

- the validity of the representations.

- how purpose affects the optimum choice of representation. This section is in three main subsections, on:

- - design,

- - measurement,

- - communication, satisfaction.

Forms of representation

Any description (verbal, pictorial, mathematical, program), which may or may not be fully specified but which is simpler and more compact than a complete listing of the behaviour it describes, can be used as a representation which encapsulates some useful knowledge.

A relational sentence, such as ‘For pointing tasks, mice are easier to use than keyboards’, is a representation in this general sense. We can use this representation to make ordinal predictions about behaviour and the equipment designs which improve it. Most of the information used in ergonomic design is of this sort, although it is not usually dignified by the title of a ‘model’.

The constructions we are more familiar with as ‘models’ are frameworks for representing a wider range of behaviour. A representational technique is a formalism which provides the symbols (in the ergonomic rather than semiotic sense) which are reminder cues to other relevant knowledge. It provides a language for expressing explicitly some characteristics. Diagrams and mathematical/ logical equations usually provide symbolic representations with more or less rigour. The term ‘simulation’ is used by some (but not all) to mean a computer programme which, when run, mimics some aspect of behaviour.

As a basis for considering which of these tools is best for different purposes, we need to discuss the general issues of what different techniques can represent (it is not possible to cover all the possibilities), and how valid they are.

Explicit and distorted representations

Some representations may be ambiguous. They may also not be rigorous, consistent or complete. For example, verbal descriptions cannot be rigorous or fully specified, and may be misunderstood. Diagrams also may not be drawn with a consistent use of a formalism, and there is no inherent check on their completeness.

There are also limits to what can be expressed using different tools, because of the facilities and cues they can provide. For example, in mathematical equations it is difficult to express such aspects as conditionals and semantics. Using traditional computer programming techniques it is not easy to express Gestalt pattern processes or fuzzy concepts, and tools such as SOAR or ACT* do not represent concepts or meta-knowledge (Kjaer-Hansen, this volume).

In some diagrams, and in mathematical and computer representations, the formalism constrains the information which can be expressed completely and rigorously. The formalism provides a set of symbols, cues with special meanings. For example:

- in a box diagram the boxes represent transforms, and the links between them represent variables. This coding is not usually stated on the diagram.

- within a box, the nature of its transform may be represented by an equation, in calculus or Laplace notation. The components of this equation act as cues to an expert, indicating the type of transient response made by the transform.

- an actual transient response could be represented explicitly by another type of formalism, a graph of the output variable against time. A cartesian graph as a formalism also represents information by cues which can be interpreted by an expert. For example, a particular shape on a graph indicates an effect with an asymptote.

The obviousness of the meanings of cues, as well as of the mappings between different representations, depends on previous training. For example, much of training in control theory consists of learning what device behaviour is implied by particular equations.
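As a hedged illustration of the point about equations as cues: the transient response implied by a simple Laplace-notation transform, say K/(tau·s + 1), can be made explicit by integrating the equivalent differential equation. The gain, time constant and step input below are invented numbers chosen only to show the shape an expert would read directly off the equation.

```python
# Sketch: the transient response cued by a first-order lag K/(tau*s + 1).
# We integrate the equivalent differential equation tau*dy/dt = K*u - y
# for a unit step input u = 1. K, tau, dt and t_end are illustrative.

def first_order_step_response(K=2.0, tau=5.0, dt=0.01, t_end=30.0):
    """Euler integration of a first-order lag driven by a unit step."""
    y, t, trace = 0.0, 0.0, []
    while t < t_end:
        y += dt * (K * 1.0 - y) / tau   # tau*dy/dt = K*u - y, with u = 1
        t += dt
        trace.append(y)
    return trace

trace = first_order_step_response()
# The output rises monotonically towards the asymptote y = K: the
# exponential approach to a limit which the equation cues to an expert.
```

Plotting `trace` against time would reproduce the graph-with-asymptote shape mentioned above for the cartesian-graph formalism.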

Different disciplines, which have different primary interests or practical problems, use models which focus on representing different aspects of the data. This, in combination with the different training in what is taken for granted from a given representation, causes major problems in communication between disciplines.

One representation may be a more convenient representation of a given problem than another, but more often it will be an alternative representation for a different problem. It is therefore important to make clear what purpose is served by a particular representation, and how it maps onto existing representations in the same general area. Making a choice between representations is similar to the way in which we have to decide, when we design displays for the process operator, whether to:

- make some information explicit, 

- provide a reminder cue to the operators, 

- expect them to remember it fully or to work it out for themselves.

An example

We will use three diagrams to illustrate the way in which different representations make explicit, and ignore or obscure, different aspects of the concepts referred to, and make cross-reference between representations difficult.  They also illustrate the problem that there is unlikely to be any practical limit to the number of different types of diagram (or other types of representation) which there might be. These three diagrams all represent aspects of the relation between task demands and mental workload, and the mechanisms underlying this relation.

Mental workload is often represented as a monotonic relation between task demands and mental work: as task demands increase so mental work increases up to a mental capacity limit, and additional task demands above this limit cannot be handled. However, if people doing a task can use several alternative strategies, each of which uses a different amount of mental work, then as the workload increases they can change to an easier strategy and so keep going (Sperandio, 1972).

This means that there is not necessarily a monotonic relation between task demands and mental workload. This is made clear by Figure 1, which represents the non-monotonic nature of the relation explicitly; this is important for making the point when arguing against monotonic theories. However this figure also illustrates the way in which representations can mislead. By representing the behaviour in a way which emphasises the point about changing strategies, the graph implies that people change strategy when mental load approaches maximum, and that at any one workload one strategy is used exclusively. Actually, more experienced workers change strategy before they reach the limit, and they use a mixture of strategies: an easier strategy more often, and a more difficult strategy less often, as workload increases. Figure 1 emphasises the effect of using different strategies; it does not show 'the truth'. And many other factors may influence the choice of strategy (see below). Figure 1 also represents the function relating task demands and mental work, but does not show any mechanism by which this might happen.

Figure 1. A. Sperandio 1972 
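The strategy-mixing argument can also be sketched numerically. The sketch below is a toy model with invented strategies, costs and capacity (none of them from Sperandio's data); it shows only how shifting the mixture of strategies can make total mental work a non-monotonic function of task demand.

```python
# Toy model of strategy mixing, with invented numbers.
# Two hypothetical strategies: a 'thorough' one costing 10 units of mental
# work per task item, and an 'easier' one costing 4 units. Capacity is 100.
# As demand rises the worker shifts the mixture towards the easier strategy,
# starting well before the capacity limit is reached.

CAPACITY = 100.0
COST_THOROUGH = 10.0
COST_EASY = 4.0

def mental_work(demand):
    """Total mental work at a given demand (number of task items)."""
    # Fraction handled by the easier strategy grows with demand.
    easy_fraction = min(1.0, max(0.0, (demand - 5) / 10.0))
    per_item = easy_fraction * COST_EASY + (1 - easy_fraction) * COST_THOROUGH
    return min(CAPACITY, demand * per_item)

loads = [mental_work(d) for d in range(0, 31)]
# The curve rises, dips as the mixture shifts towards the easier strategy,
# then rises again: mental work is not a monotonic function of task demand.
```

With these invented numbers the load peaks near demand 11, falls while the mixture shifts, and only later climbs towards the capacity limit, which is the qualitative shape of the Figure 1 argument.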

Figure 2 represents a possible mechanism. The feedback loop representation has been used to communicate easily with engineers, and the box-and-arrows representation allows predictions to be made by following round the loops. As a representation, it focusses on the response to task and mental requirements, and the choice of behaviour by making a balance between the two. It shows that task demands and mental workload are indirectly, rather than directly, related.

  Figure 2. Bainbridge 1974

Figure 2 also illustrates another aspect of a formalism, that the symbols used can cue to points which are not explicitly represented. For example, the task demands - mental capacity relation is represented as a causal link, but there is no explicit information about how this occurs. The link acts as a reminder to take this into account, but knowing how to do so needs professional expertise, or reference elsewhere. 

Figure 2 does not represent the actual mechanism of strategy choice. Figure 3 is a representation which focusses on the choice between alternative behaviours, as a function of ‘meta-knowledge’ about the outcomes of using each of the behaviours. The effects illustrated in Figures 1 and 2 could result from the operation of the mechanism in Figure 3, but this would not be obvious to a casual observer of Figure 3. Figure 2 is an easier representation of the underlying mechanisms to use to predict that the relation between task demands and mental load is as shown in Figure 1.

Figure 3. Bainbridge, e.g.1975

With each figure, the viewer can only get the message intended if they already know how to interpret the 'visual code' used:

Figure 1 - cartesian graph

Figure 2 - feedback loop

Figure 3 - read table (relate rows and columns), with the link between goal and means via meta-knowledge.

These three figures also illustrate another point. If there are many different representations of the same thing, it may be difficult to see how they map onto each other. We can illustrate the difficulty of mapping between different models by following task demands and mental work through the three diagrams. In Figure 1.A, task demand is the independent variable on the graph, and mental work the dependent variable. In Figure 2 task demands is the input to one feedback loop, and mental work is the output of the other. In Figure 3, task demands appear in terms of the task context, while mental work is represented in terms of the working storage and mental operations used in a particular strategy.

Similar points could be made using examples drawn from mathematical equations or computer programs. The technology of the representation imposes constraints, and easily makes explicit, or does not allow, certain features to be represented. For another example of a diagram analysed in terms of its effectiveness as a representation, see Bainbridge (1989). For an example of alternative verbal representations see Bainbridge (1979).


The validity of the representations

There are a number of factors affecting whether a representation of human behaviour is a valid one:

Are data about actual human behaviour the primary driving factor in deriving the model?

Does the representation claim to represent:

- the cognitive processes underlying observed behaviour?

- the observed behaviour (without claiming to represent the underlying processes)?

Are there factors which are needed to get the representation to work/ for completeness or rigour, but for which there is no behavioural evidence?

Is the reference data used relevant to the task of present interest? For example, should one use data from everyday habitual behaviour, or from laboratory experiments, as the basis for a model of behaviour in real complex dynamic environments? (A clear example of the problems which can be caused is using the same model for 'diagnosis' by both maintenance technicians and process operators, see Bainbridge, 1984.)

The process of extrapolation, of taking the known and using it to infer the unknown, is an essential feature of using representations to make predictions. (As in extrapolating from the mental processes used for diagnosis by maintenance technicians into a model for cognitive processes in all complex tasks.) We assume that what has been found out in one situation which has been studied also applies to another which has not. So a major question is: what is it that makes two situations ‘the same’, so it is valid to extrapolate from one to the other? Identifying such task categories will be a huge enterprise, particularly as extrapolation between tasks may be appropriate at one level of detail but not at another, or may be valid for some dimensions of a task, such as cognitive workload or cognitive style, but not for others. Even so, identifying the set of possible tasks involves only nominal scaling (identifying qualitative differences between tasks), and in this way it is a simpler enterprise than trying to generalise specific numerical results (ratio scaling) from one situation to another, as discussed by Chapanis (1988) (see also below).

How does the purpose of a representation affect its optimum content?

Depending on the ergonomist’s aims, different items may be in primary focus when representing the equipment user, and different levels of detail may be appropriate. There can be a large number of different models, each of which is optimum for some purposes and unusable for others.

For example, there are two main categories of ergonomic aims : to design working conditions, and to predict working performance. 

In ergonomic design, a representation of the user needs to represent the cognitive processes which should be supported in the user’s task, focussing particularly on cognitive processes which are limited in capacity so should be avoided and on cognitive processes which are high in capacity so should be exploited. 

On the other hand, in performance prediction, the primary aim may be to produce a numerical prediction, correct to a first-order of magnitude, which can be combined with rigorous representations of non-human parts of the system. So a mathematical equation representation may be most appropriate. 

Thus the primary focus of ergonomists’ tools may be very different in the two cases.

This section will discuss some of the factors influencing the choice of a representational approach to meet different aims. The main groupings of aims are:

- design,

- measurement of performance,

- communication with others, or personal satisfaction.


Suggesting a design approach

In a representation which is used as the basis for design, whether of an interface, a support system, a training scheme, a job, or whatever, the validity of the user representation is primary. There is no point in optimising a design according to criteria which are not truly representative of human characteristics.

However, the rigour and level of detail of such a representation are not necessarily a high priority. For example, in Bainbridge (1991) quite strong claims are made about the optimum design of multi-plexed VDT display systems, based on a small number of simple statements about the nature of human information processing, such as:

- human beings take time and make errors in translating from one representation to another,

- ‘short-term memory’ is limited in capacity,

- ‘short-term memory’ is seriously disrupted by thinking about something else. 

These statements are all about human cognition, and refer to limitations, but they are far from sophisticated, make ordinal not numerical predictions, and are not embedded in a complete cognitive model.

Indeed there is a serious irony with a rigorous approach to representations. The human operator is in the operating system to do the things that the automated devices cannot. Yet typically in rigorous representations the methods used to describe the contribution of the human operator to the system are methods which can only describe activities which can be automated. If the cognitive mechanisms which make a person a useful part of a system (such as Gestalt pattern handling, flexible goals-means links, problem solving, or nonverbal communication) have not been modelled, then the representation is unlikely to make a valid prediction of the human contribution to the system, or a valid assessment of an optimum design to support this contribution. The representation should at least give reminder cues about what the human operator does best, and most needs help with. Some of these aspects are discussed in Bainbridge (1989).

Also, representations used for practical purposes can be simpler than the best current scientific representation of the same phenomenon. 

Scientific representations give logically consistent, parsimonious, and preferably elegant mechanisms which account for rich data, and aim for completeness. It should be possible to generate hypotheses by which the representation can be tested. When this type of representation is fully specified, it may be sufficiently interesting to be called a ‘theory’.

Applied representations have a more modest aim, to represent the information needed for a particular design situation. It is interesting that applied representations may be former scientific representations. For example, Newtonian mechanics are still useful in engineering, even if not in physics. Or: the distinction between short-term and long-term memory is a useful one in designing memory support systems, even though it is no longer sufficiently rich as a representation of actual human cognitive processes.

Like scientific representations, applied representations may include mechanisms, not just a description of behaviour. But these representations emphasise the features which are most important for particular purposes. This makes it easier to deal with problems but has its dangers. The valid application domain of the representations must be made clear, so that they are not extended to domains in which they do not provide sufficient or necessary concepts (Bainbridge, 1981). 

As an illustration, the 'scientific' representation of human cognitive processes currently represents memory and information processing as distributed, rather than as isolated within special processors. However, it is easier to think about the general properties of memory, such as capacity, decay time or code type, or the general properties of information processing such as speed and accuracy, when these processors are isolated in a stages-and-boxes model. But a model of isolated single purpose processors can be misleading by representing the operator as a serial processor. 

Some aspects of information processing, particularly conscious problem solving within one level of information processing, may be limited to serial processing, but in skilled tasks people deal simultaneously with inputs from one part of a task and outputs from another, and they are also meeting several parallel responsibilities, each organised as a hierarchy of goals and sub-goals, within which behaviour may be automated at any level. The actual mechanisms of resource allocation must be complex.

More specifically, there are two aspects of ergonomic design, interface and cognitive design, which need to be handled differently. There are all the facts about the equipment designs which are optimally compatible with human receptors and effectors (classical ergonomics). And then there is our knowledge about how people understand, work towards goals, organise their behaviour, etc., which determines what needs to be on the interface or support equipment, and how it should be laid out.

Detailed design of interface components may be based on empirical results which are not strongly linked to any theory of underlying mechanisms - such as the optimum sizes for display scale markings or push buttons.  In this case the criterion for validity is that the numerical data should be correct. These data are usually collected from experiments in which standard laboratory methods are used, to ensure that the results will be valid. At a minimum, the results give ordinal data, or inequalities, such as:

Reading time (passive sentences) > Reading time (active sentences).

These inequalities give us much of our basis for making design recommendations to improve performance, e.g. sentences in operating procedures should be active not passive.

Deeper aspects of design need to be based on some notion of the cognitive processes used in doing the task, which indicates optimum ways of supporting these processes in working situations. For validity here the concepts should be correct and sufficient. For example, if a model represents the human operator as working by feedback, while an expert operator actually works by anticipation, then the models may suggest job supports which are wrong in principle.

Evidence about such processes comes both from psychological laboratory experiments and from case-studies of complex tasks. Psychological experiments may be deliberately set up to test between alternative explanations for behaviour. However, as they are concerned with few and independent variables, not many of them tell us much about complex behaviour, in which the main problem is how small units of behaviour (which can be tested in isolation in the laboratory) are selected and interrelated in dealing with large, interacting, and multiple-goal environments. For these task situations, we get our information about underlying mechanisms from case-studies. It is then important to select the case-studies from appropriate tasks.

In making this selection, basing it on general words for task goals such as ‘control’ (keeping something within required limits) is not sufficiently specific to identify the cognitive processes involved. As an example, my own ‘model’ (e.g. Bainbridge, 1974) describes the cognitive processes in ‘industrial process operation’. However, the process controlled responded quickly to control action, and the operators only had one task responsibility. So this model does not include the cognitive processes involved in allowing for slow changes to develop, or in planning activities so as to meet several task goals within the same time period. Anyone investigating an ‘industrial process operation’ task in which either process timing or multitasking were the primary sources of difficulty for the operator would find that this model does not give them what they need. ‘Diagnosis’ (fault finding) is another general word which is used to describe several tasks which actually involve different cognitive mechanisms (see Diagnosis paper, Bainbridge, 1984).

There are many peculiarities in the user simulation literature because investigators have applied models taken from one task to another task which is superficially similar, but in which the cognitive processes are not the same. For more discussion of this see Bainbridge (1990).


Numerical Performance Prediction

Predicting human performance raises a different range of issues, and has different criteria for success. The chief criteria for a useful technique are to minimise effort (e.g. taking days rather than months to make a conservative prediction), and to describe all parts of the system in ways which are complementary.

Early control theory models of flying make a good example. They only represent human response by feedback to random sine-waves. They include no account of how human control performance can improve if the person can anticipate or preview the track, so they give a conservative prediction of potential performance. They model only a modest part of the control task, flying straight and level, so they do not suggest or justify interface designs for the whole flying task. They do however predict, to a first-order of accuracy, whether a proposed air/space craft of known dynamics can be flown by a human controller, and they can therefore save millions in development costs. And they make this prediction in language which can be used to describe the relevant nonhuman parts of the system. The fact that they describe only straight and level flight is not a limitation for this purpose, as if an aircraft cannot be flown straight and level it is unlikely to be able to land safely either.
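As a hedged illustration of the kind of first-order prediction such models give, the sketch below uses the well-known 'crossover model' of manual control, in which human plus vehicle together approximate an integrator with gain crossover frequency wc and an effective human time delay tau. The specific numbers are invented; the calculation only shows how a stability margin can be checked before any hardware exists.

```python
# Sketch: a crossover-model stability check. The combined open loop is
# approximated as L(s) = wc * exp(-tau*s) / s, where wc is the crossover
# frequency (rad/s) demanded by the vehicle dynamics and tau is the
# human's effective time delay (s). The loop is stable only while the
# phase margin at crossover is positive. Numbers below are illustrative.

import math

def phase_margin_deg(wc, tau):
    """Phase margin (degrees) of L(s) = wc*exp(-tau*s)/s at crossover w = wc."""
    # The integrator contributes -90 degrees at every frequency; the time
    # delay contributes a further -wc*tau radians at the crossover frequency.
    return 90.0 - math.degrees(wc * tau)

# A modest bandwidth demand with a 0.2 s effective delay leaves a healthy
# stability margin, while demanding a much higher crossover does not:
relaxed = phase_margin_deg(wc=3.0, tau=0.2)     # positive: controllable
demanding = phase_margin_deg(wc=10.0, tau=0.2)  # negative: not controllable
```

A conservative answer of this kind, obtained from a one-line model, is exactly the sort of result that can save development costs before a prototype is built.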

As in other domains of representation, specific techniques may be useful for some purposes but not others. For example, time-line analysis is used in workload prediction. This analysis assumes that the operator correctly carries out a prescribed sequence of actions. It divides the task into elements and, using data on human processing limits, predicts the time taken and interface requirements for each element (e.g. Card et al, 1983). If we have data on how interface designs affect performance capacity, we can use this method to predict whether design changes will have worthwhile effects on behaviour time. This time-line approach is however inadequate for risk assessment, as it does not include the processes by which an operator chooses between alternative methods of reaching goals, and so cannot simulate errors.
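A minimal sketch of the time-line arithmetic described above, with invented element names, times and time window (none taken from Card et al): each task element is assigned a standard time from human-performance data, and the totals for two hypothetical interface designs are compared against the time available.

```python
# Sketch of a time-line analysis. The task is decomposed into elements,
# each with a standard time; the prediction is simply the sum, compared
# against the window in which the procedure must be completed.
# All element names and times below are invented for illustration.

AVAILABLE_S = 10.0  # hypothetical time window for the procedure (seconds)

def total_time(elements):
    """Predicted completion time: the sum of the element times (seconds)."""
    return sum(t for _, t in elements)

current_design = [("locate alarm page", 3.2), ("read value", 1.1),
                  ("decide action", 1.4), ("select control", 2.3),
                  ("enter setpoint", 2.8)]

redesign = [("locate alarm page", 1.0), ("read value", 1.1),
            ("decide action", 1.4), ("select control", 1.2),
            ("enter setpoint", 2.8)]

# The current design exceeds the window; the redesign brings the predicted
# time inside it, the kind of numerical evidence used to justify the change.
```

Note that, as the surrounding text says, this arithmetic assumes the operator correctly carries out the prescribed sequence, so it says nothing about errors or choices between methods.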

There are several problems in developing a good practical technique. It is relatively easy for ergonomists to make ordinal statements about trends in human behaviour. These indicate in which direction to change an interface design to improve human performance. What is much more difficult is to get ratio-scale data (numbers which can validly be added and divided) about human behaviour, which can be used to make numerical predictions, for example to calculate whether a specific investment in redesigning an interface will give an acceptable cost-benefit return in improved performance level.

These numerical evaluations need to be made to a first-order of accuracy, with minimum effort. For pragmatic reasons the models used are therefore usually simple. And there are good reasons for using the same method of representation for both the human and the engineered parts of the system. The models therefore tend to use mathematical equations to represent control tasks, and computer program or production system representations for cognitive tasks. The result is that numerical techniques are not used with any claim to accuracy, but simply when it is necessary to have a numerical answer. Pragmatism takes precedence over fundamental validity in evaluating these techniques. The predictive technique is accepted, not because it is based on first principles, but because it has been assessed by expert judgement as giving the same results as would a human expert making the same predictions (e.g. Siegel & Wolf, 1961).

What is also needed in developing a practical technique is to identify the minimum number of important factors which must be taken into account. We have to find which variables account for most of the variance in the data, and which do not give any additional information. This is a type of statistical testing which is not so often used in ergonomics, but is necessary in developing pragmatic techniques which can be used with minimum effort.

As an aid to simplification such methods only need to make conservative predictions of poor performance. However, it is more difficult to do this for cognitive tasks, as in difficult cases people have to work out for themselves what to do. Arguing by analogy with control theory models of flying, to make a conservative performance prediction for cognitive tasks we need to:

- assume that the operator has minimum special knowledge about the non-randomness of the environment, and so has maximum difficulty with the task. This would mean that we have to account for ability to solve problems, as this is needed when specific (context-sensitive) knowledge is not available.

- choose a part-task which tests the limits of performance, but reduces the ergonomist’s task from one of accounting for performance in all task situations.  An example of a crucial part-task in process control might be fault management, which is hardly a trivial problem to model.

- describe the behaviour in language also used for describing other parts of the system. Programming language descriptions may be more appropriate for representing cognitive processes, but they are more difficult to combine with ratio scale measures and statistical testing, to give numerical results.

There are practical situations in which it is very useful to describe both person and machine using the same modelling tools, so that complete system performance can be predicted. For other purposes this approach can be misleading, because it may not represent the special contribution of each part of the system. For example, a device which is much better than a human being at compensatory tracking can be described by a one-line equation. But this sort of mathematics is an inappropriate tool for describing the potential human contribution to a system, as it cannot represent abilities such as pursuit tracking, route planning or fault management. Representations are needed which can describe the complementary contributions of person and machine to a system. After all, people are retained in a complex system to do the things machines cannot do, so it is ironic to describe these people using representational tools which can only describe what machines can do. This even suggests there may be a fundamental limit to making a fully specified account of the human contribution to a system.
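The 'one-line equation' point can be made concrete with a minimal sketch (the gain and step count are arbitrary illustrative values): a discrete-time automatic compensatory tracker whose entire description is a single line of error feedback, and which plainly cannot express pursuit tracking, route planning or fault management.

```python
# Minimal sketch: an automatic compensatory tracker captured by a one-line
# error-feedback law (gain k and step count chosen arbitrarily here).

def track(target, steps=50, k=0.5):
    """Discrete-time compensatory tracking of a constant target."""
    y = 0.0
    for _ in range(steps):
        y += k * (target - y)  # the entire description of the device: one line
    return y

print(abs(track(1.0) - 1.0) < 1e-9)  # True: the tracker converges on the target
```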


There are two further influences which can distort the type of representation used, distinct from ergonomic problem solving: using representations in education or to communicate with people in other disciplines, and using representations for personal satisfaction or convenience.

Communication

Conveying concepts which are new to a given audience can be done by using concepts which are already familiar to them. The danger is that these familiar concepts may distort what should be said out of all recognition, so one may end up not conveying the important distinguishing features of the new concepts. So this method should only be used as a first stage.

Many people enjoy working in cognitive science because it involves cross-disciplinary concepts. But this does raise questions of communication between different disciplines which all claim to be working on the same thing (human or cognitive representations) but which actually have different working concepts and primary concerns. Unless this is recognised, it can lead to heated debates about the relative usefulness of different approaches, in which everyone is arguing for ‘the’ ideal model without realising that each person has different requirements of that model.

For example, engineers and psychologists are familiar with different concepts and representations, and have different notions of the nature of a model. For engineers and cognitive scientists, a model is a simplified representation which can be manipulated easily prior to building a device, to test the properties of the device. The device has a known purpose, known properties and known components. Engineers are therefore used to accepting models on ‘face validity’: models are judged to represent the device and to be useful without more formal test. People in these disciplines have little experience with the notion that a model may not actually represent what it claims to represent.

Unfortunately, many people also use this way of assessing the validity of models of the human operator. But the human operator has incompletely specified purposes, mostly unknown components, and is hugely complex, and it is well known that face validity is invalid as a test for a model. (For example, most of us think that we know the factors which influence our choice of behaviour, but research shows that our awareness of this is actually very inaccurate; see Nisbett & Wilson, 1977.) Much of a psychologist’s training consists of learning how to test the validity of what look superficially like convincing explanations of human behaviour, and of learning how to study human behaviour in such a way that the results are due to the question under test rather than to fatigue, learning, social pressure, etc. But how many engineering or cognitive science based papers are there in the literature with the title ‘A model of human x behaviour’ which contain no reference to papers reporting ‘x’ behaviour, no informal comparison of the model with behavioural data, let alone a properly controlled formal comparison? Of what use can such a model be? It is of course a device which produces behaviour x, but it has not been shown to be a model of human x behaviour. So what responsible person would use it in human performance prediction or design decisions?

There may also be problems of communication within one discipline, for example between ergonomists. In complex situations, such as fault management in industrial process operation, people do not simply react to input information. They have a complex structure of knowledge about the actual and potential states of the situation, which they use in deciding what fuller information they need, inferring what this information implies, predicting what will happen next, and considering actions to optimise the future. People who are learning to use advanced technology devices bring with them prior assumptions about cause-effect and goal-action relations, which they use in trying to understand the instructions and to devise a plan of what to do, and then in trying to understand why the device did not do what they expected. The types of model for human behaviour commonly used by ergonomists, which derive from control theory, engineering and early experimental psychology, conceive of behaviour as data-driven, so they do not contain concepts for representing the types of behaviour just outlined, which are ‘top-down’: driven by knowledge about the situation, not always by changes in the environment. And there is no simple way of expanding them to do so.

Assumptions about the basic concepts of human behaviour are closely linked to methods of data collection. If one thinks that behaviour is data-driven, then the traditional psychological experiment with a 1:1 mapping between stimulus and response is the way to investigate it. If one thinks that people respond to a piece of information by fitting it into an ongoing train of thought, by drawing rich inferences from it, and by thinking through and rejecting many possible actions, then none of this mental behaviour is directly observable. Some research questions can be answered using the sophisticated experimental techniques devised by cognitive psychologists; others need techniques such as interviews, verbal protocols, or video analysis of case studies.

Satisfaction and familiarity

The area of personal satisfaction affects representation when someone uses a representation, not because it is the optimum one for the purpose, but because it is familiar or interesting or approved of by powerful people.

People, understandably, tend to employ representation techniques which they are used to. Control engineers represent the human operator using mathematical equations from control theory. People who have learned number-crunching computer programming techniques use conventional programmes to describe cognitive processes, while people from cognitive science use predicates and production rules. Problems may arise when they extrapolate their familiar technique to a task for which this sort of representation is not appropriate.

To repeat a previous example, equations from control theory describe the human controller in terms of linear negative-feedback compensatory control, to meet given goals. However, real human controllers may set their own goals, anticipate or preview what they need to do next, and learn open-loop control. These abilities mean that human controllers can perform better than the control equations predict. So a representation based on control equations has only limited application. It is a powerful technique for making decisions, to a first order of accuracy, about whether a proposed air- or spacecraft can be flown manually, but it contains no reminders about the factors which help human controllers to perform better, so it should not be used as a basis for designing interfaces or training schemes.
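As background for readers who have not met such equations, one widely used example of this style of description is the ‘crossover model’ of manual control (named here only as an illustration, not taken from this paper), which represents the combined human-plus-controlled-element dynamics near the crossover frequency as:

```latex
Y_p(j\omega)\, Y_c(j\omega) \;\approx\; \frac{\omega_c \, e^{-j\omega \tau_e}}{j\omega}
```

where Y_p is the human controller, Y_c the controlled element, omega_c the crossover frequency, and tau_e an effective time delay. Nothing in this form can express goal-setting, preview, or learned open-loop control, which is exactly the limitation just described.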

As an inverse example, cognitive science and expert system representations of cognitive processes tend to predict that human performance will be better than it actually is. These representations do not draw attention to the errors people make when they have incomplete information or are overloaded. So again such representations do not remind an ergonomist to help human users to employ pattern handling or automated skills, nor do they indicate ways to design interfaces, training schemes and workload to minimise human error (for a fuller discussion see Bainbridge, 1989).

There are many people, in industry as well as universities, who use techniques because they are fascinating rather than because they are appropriate.

For obtaining grants it may be a good idea, given the limited funding available, to propose to use techniques which are fashionable and which are either the techniques used by, or at least do not threaten the techniques and concepts used by, the people who make the decisions for the grant-giving bodies. Research proposals which develop existing ideas to the limit are more likely to be successful in getting a grant than ones which explore new ideas: they are less risky, and they will be better understood by the members of the grant-giving committee. By this criterion, the optimum representation is the one which gets the most money. This is not a joke, but a serious consideration if one’s primary purpose is ensuring the continuation of jobs for one’s research assistants. But if people can get large sums of money to support their representational approach, this may reinforce the view that they are taking the correct approach to representation, as well as the best strategy for getting grants.


Although the title of this paper focusses on simulation, the content has focussed on representation. This is because, when using simulation (in the sense of developing a large computer programme) as a technique, it is important to ask whether a representation with this level of rigour and completeness is needed, and whether the data, concepts and programming techniques used in developing the simulation are actually the most valid and appropriate. It is also important to ask what justifies the parts of the simulation which have to be included to make it work, but for which there is no evidence about the nature of the actual mechanisms.

There are many situations in which it is valid to use invalid representations but, in my opinion, the people using them need to be aware of the limitations.  Otherwise there can be many false claims and unnecessary arguments.


References

Bainbridge, L. (1974) Problems in the assessment of mental load. Le Travail Humain, 37, 279-302.

Bainbridge, L. (1974) Analysis of verbal protocols from a process control task. in E. Edwards and F.P. Lees (eds.) The Human Operator in Process Control. Taylor & Francis, Ltd., pp.146-158.

Bainbridge, L. (1978) Forgotten alternatives in skill and workload. Ergonomics, 21, 169-185.

Bainbridge, L. (1979) Verbal reports as evidence of the process operator’s knowledge.  International Journal of Man-Machine Studies, 11, 411-436.

Bainbridge, L. (1981) Mathematical equations or processing routines. in J.Rasmussen and W.B.Rouse (eds.) Human Detection and Diagnosis of System Failures. Plenum Press.

Bainbridge, L. (1984)  Diagnostic skill in process operation.  Proceedings of the 1984 International Conference on Occupational Ergonomics, Volume 2 : Reviews. May 7-9, Toronto, Canada, pp. 1-10.

Bainbridge, L. (1988) Types of Representation. in L.P.Goodstein, H.B.Anderson and S.E.Olsen (eds.) Tasks, Errors and Mental Models. Taylor & Francis Ltd, London, pp.70-91.

Bainbridge, L. (1988b) Multiple representations or 'good' models. in Patrick, J., Duncan, K.D. (eds.) Training, Human Decision Making and Control. Elsevier/ North Holland Amsterdam. pp.1-11. 

Bainbridge, L. (1989) Cognitive science approaches to process operation: present gaps and future requirements. in Proceedings of the Second European Meeting on Cognitive Science Approaches to Process Control. Siena, Italy, October 24-27, pp. 1-9.

Bainbridge, L. (1990) Extrapolating from one task to another. in M.A.Life, C.S.Narborough-Hall and W.I.Hamilton (eds.) Simulation and the User Interface. Taylor & Francis, pp.11-30.

Bainbridge, L. (1991) Multi-plexed VDT display systems.  in G.R.S.Weir and J.L.Alty (eds) HCI and Complex Systems.  Academic Press, pp.189-210.

Card, S.K., Moran, T.P. and Newell, A. (1983) The Psychology of Human-Computer Interaction. Lawrence Erlbaum

Chapanis, A. (1988) Some generalizations about generalization.  Human Factors, 30, 233-245.

Nisbett, R.E. and Wilson, T.D. (1977) Telling more than we can know: verbal reports on mental processes. Psychological Review, 84, 231-259.

Siegel, A.L. and Wolf, J.J. (1961) A technique for evaluating man-machine designs. Human Factors, 3, 18-28.

Sperandio, J.C. (1972) Charge de travail et régulation des processus opératoires. Le Travail Humain, 35, 85-98.


©  2022 Lisanne Bainbridge

