Journal of the Operational Research Society (1995) 46, 562-578
Soft Systems Models for Knowledge Elicitation and Representation
The paper contends that the conceptual models used in Soft Systems Methodology have an unusual logical status. This enables them to be rendered in modal logic and used as a framework for knowledge elicitation and for the design of knowledge-based systems with learning capability.
Key words: conceptual models, Soft Systems Methodology, logico-linguistic models, non-monotonic logic, modal logic, knowledge based systems, Prolog
An earlier paper introduced the idea of expressing Soft Systems Methodology (SSM) conceptual models in symbolic logic. This was shown to be relevant to cause-and-effect modelling, to an account of efficiency and to information system design. In the present paper, the connection between conceptual models, logic and information system design is explored further.
A more sophisticated logic, namely modal predicate logic, is used. This is combined with the interpretation of the SSM conceptual model building process as a Wittgensteinian language game. This shows how, with some modification of the modelling technique, a logically precise artificial language can be created by the stake-holders in the organization concerned. This language provides an essential framework for knowledge elicitation and, following this, can provide a formal specification for a knowledge-based system that is capable of learning in accordance with the classic account of scientific method, as propounded by Karl Popper.
The paper attempts to put forward a practical method consisting of six stages, concerned with systems analysis, language creation, knowledge elicitation, knowledge representation, codification and verification. This paper concentrates on systems analysis, language creation and knowledge elicitation. The appendix deals with the technical details of knowledge representation in modal logic, codification in terms of a Prolog program and how the system can be verified by the non-monotonic logic implicit in the program.
The present paper begins with a review of SSM as a method of systems analysis for information system design. Criticisms of SSM conceptual models from the perspective of philosophical logic are answered and the true logical status of the SSM model is explained. The paper continues by showing how the models produced by SSM systems analysis can be developed into logico-linguistic models that represent, with logical precision, a language created by the stake-holders. The logico-linguistic model then provides a sound framework to which empirical knowledge can be added. The formal expression of the model in modal predicate logic and how this operates in a Prolog program is briefly explained.
In the penultimate section, traditional information system design solutions, such as structured methods, data modelling and object orientation, are discussed. The final section compares logico-linguistic models with Sowa's conceptual graphs.
SOFT SYSTEMS METHODOLOGY
Provided 'systems analysis' is understood in a suitably broad sense, SSM is primarily a methodology for systems analysis. It claims to be relevant to any problem situation involving human activity. The early stages of the method are more concerned with the identification of who is involved in the problem and what the problem is than with the solution of the problem. SSM is not an information system design method as such, but a general problem structuring method that may be used in the production of an information system design as one of many possible solutions.
SSM, therefore, has a versatility not found in mainstream information system design methodologies. The use of an information system design method should be based on some form of system analysis, which indicates that an information system will be a solution to the problem. In the case where the system analysis is a front end part of an information system design methodology, the work of systems analysis will tend to be wasted if an information system is not required. In the case where the method of system analysis is distinct from the information system design method, both methods will tend to use different tools and have different perspectives; as a result, little of the information gained in system analysis will be used in the design process.
This paper offers another possibility - one in which there is a continuity between the general systems analysis of SSM and the design process. The idea of preserving this continuity is not new. Methods have been put forward by Wilson, and in Avison and Wood-Harper's Multiview. A quite different approach will be taken here. While Wilson and Multiview add to the stake-holder constructed conceptual models in order to create information systems, the method described below seeks to increase the logical power and content of these models to the point where an information system design can be derived by formal methods.
Figure 1 gives an SSM conceptual model of a human activity system. It is taken from Wilson but has been simplified to exclude the monitor and control systems that accompany all SSM models. There are three factors that make these SSM models quite different from models to be found in other forms of system analysis. The first is that the models are actually constructed, rather than merely approved, by the stake-holders in the organization concerned. The second factor is that the models are notional. Checkland and Scholes described them as 'holons': intellectual constructs rather than systems in the world. The third factor is that the arrows in the models are intended to represent logical contingency (Reference 6, p 36). They are not, as many people assume, intended to represent causal connections. The difference between purely logical connections and causal connections will be brought out below.
The results of Checkland's case work can generally be described as a change of thinking on the part of the people in the organization. This is quite compatible with the notional status of the models. However, theoretical problems arise when we consider Wilson’s work, which attempts to use these models as a basis for the design of information systems, such as stock control systems, which deal with real-world objects and events. It needs to be explained how models that are only inventions of the mind can tell us anything about the physical world.
One solution is to take the models to be models of future possibilities rather than models of existing states of affairs. This might be right if 'possibilities' is understood as 'logical possibilities' or as the possible worlds of Kripke semantics for modal logic. However, if 'possibilities' is taken to mean possibilities in the actual world, as some SSM enthusiasts think, then the problem remains unresolved. It still needs to be explained how a model that is only an invention of the mind can tell us what might be possible in the physical world. To answer this requires an understanding of the logical status of the SSM models.
Recently, some authors have been looking at the connection between SSM models and formal logic. Merali has shown a way in which formal logic can be instrumental in converting an SSM model into a data flow diagram; she has also shown how logic can be used to describe the process of accommodation that takes place in the construction of a consensus SSM model. Probert argued that SSM conceptual models are not logical in any sense of the word and that Checkland's contention that the arrows represent logical contingency is wrong.
Probert's argument has a number of strands. One is that SSM models are expressed as commands and that a logic of commands is unsatisfactory. It has been argued previously that the use of commands is not essential and that they are a rhetorical device for which declarative statements can be substituted without loss of meaning. Thus, the command 'discharge patient' in Figure 1 can be replaced, in Figure 2, with the declarative statement 'the patient has been, is being, or will be discharged' or more concisely 'patient is discharged'.
Given a declarative form, part of Figure 1 can be interpreted as 'the patient has been discharged is logically contingent upon the patient has been treated'. This is the same thing as 'the patient being discharged logically entails that the patient has been treated'. A second strand of Probert's argument states, correctly, that this simply is not true. An earlier paper has given an answer to this. If we add a universal statement to the two statements given above, the problem is resolved.
Major Premise. All cases where a patient has been discharged are cases where the patient has been treated.
Minor Premise. The patient has been discharged.
Conclusion. The patient has been treated.
Here we have the classic syllogism in which the major premise is a universal, the minor premise is a particular, and the particular conclusion follows by Modus Ponens. The universal here can be taken as a hidden premise in the model. Once this hidden premise is stated we can, truly, say ‘the patient being discharged logically entails that the patient has been treated’.
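The syllogism can be rendered directly as a rule and a fact (a minimal sketch in standard Prolog; the patient name 'jack' is a hypothetical example):

```prolog
% Major premise: all cases where a patient has been discharged
% are cases where the patient has been treated.
treated(X) :- discharged(X).

% Minor premise: a particular patient has been discharged.
discharged(jack).

% Conclusion: the query  ?- treated(jack).  succeeds by Modus Ponens.
```

The hidden universal premise becomes the Prolog rule; the particular minor premise becomes a Prolog fact.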
A third strand of Probert's argument reflects the area of concern raised at the end of the previous section. SSM models cannot describe real-world causal sequences because they are notional; if this is the case, how can they be useful in dealing with real-world events? The answer to this question is that the SSM models are not about real-world events at all, and they do not need to be about real-world events in order to be useful. Consider the following statements:
(1) All panthers are black
(2) All crows are black
(3) Some panthers are not black
(4) Some crows are not black
The first two statements, (1) and (2), have the same grammatical form. However, they do not have the same logical form. In standard English ‘panther’ is defined as a black leopard. The term ‘logically true’ can be used to describe statements that are true by definitions and what follows, deductively, from definitions. Therefore, (1) is logically true. Statement (2) by contrast is not logically true because crows are defined independently of their colour. Statements such as (2) can be called ‘factually true’. Given that (1) is logically true it follows that (3) is self-contradictory. Given that all panthers are, by definition, black then the statement that some panthers are not black does not make sense. (No matter how long you search you will never be able to find a panther that is not black. This is because before you identify a thing as a panther you must first determine that that thing is black, and if you have determined that the thing is black then it cannot be not black.) Given that (2) is factually true it follows that (4) is factually false but it does not follow that (4) is self-contradictory. Because crows are defined independently of their colour there is always the logical possibility that a crow that is not black exists; and from this it follows that there is always the logical possibility that (2) is false.
Although they appear to be factual, like (2), the implicit statements in SSM models, such as ‘All cases where a patient has been discharged are cases where the patient has been treated’, have the same status as (1). They are logically true rather than factually true. The SSM models can be read as giving defining criteria. Thus, having treatment applied is a defining criterion for being discharged - nobody should be counted as a discharged patient unless that person has had treatment.
A distinction needs to be made between a definition in terms of qualities and a definition in terms of process. What often appears to be a description of a causal sequence is actually a process definition. Whiskey is, by definition, a distillate of fermented grain; which means that anything that has been distilled from fermented grain is a whiskey no matter what its qualities (taste, colour, alcoholic strength etc) are like. A graduate is, by definition, a person who has been through a process of examination and award no matter what their qualities in terms of intelligence or wisdom are like.
It remains to explain how SSM conceptual models, understood as sets of defining criteria, can be useful in dealing with the real world. Although definitions and logically true statements do not describe the real world their presence is a necessary condition of any description of the real world. They are an essential cement that binds factual statements together. The argument needed to prove this, which is too lengthy to repeat here, is tied to Wittgenstein's later ideas about the rule-based nature of language and it is a recurrent theme throughout the Investigations. On a more accessible level, the usefulness of logically true statements will be demonstrated in the remainder of this paper.
In the design of an information system there are good reasons for giving precedence to logical truth. With panthers and crows we need to understand their defining criteria before we can make factual statements about them. If we do not know what the defining criteria for X are, we will not know whether 'X is black' is a factual statement or not. Unless we know how panthers and crows are defined we cannot say that 'panthers are black' is logically true and 'crows are black' is factually true.
KNOWLEDGE ELICITATION AND REPRESENTATION
SSM prides itself upon being able to deal with unstructured problem situations. One of the reasons that unstructured problems exist is because the people involved in the problem lack a common vocabulary. The stake-holders' debate in which the SSM model is constructed is a process not just of recording existing definitions but of creating new ones. They are stipulative definitions, an agreement amongst the stake-holders to use words in a certain way. It has been argued previously that, from the perspective of the philosophy of language, the stake-holders' debate is a Wittgensteinian language game in which the finalised conceptual model is a definition of a desirable state of affairs.
If this is accepted, it opens the way for the SSM models to play an important role in information system design. However, if they are to fulfil their full potential in this role it will be necessary to increase the logical power of the stake-holder driven modelling process. This will require an increase in the number of logical connectives. The model will need to have logical connectives capable of expressing causal sequences. There are two reasons. One is that these connectives will be needed later when we come to build the empirical model. The other is that they are needed immediately to give full definitions rather than just giving some defining criteria.
Altogether, six types of logical connective and two modal operators will be used. The logico-linguistic model shown in Figure 2 has three types of logical connective. These are 'AND', 'ANDOR' and the double-headed arrow. The 'AND' connective stands for conjunction and the double-headed arrow for the biconditional, otherwise known as mutual implication. The left-hand side of the figure can be read as '((s and a and b) implies t) and (t implies (s and a and b))'. In English this could be expressed as 'if a patient is treated, alive and signed out then that patient is a discharged patient, and if a patient is a discharged patient then that patient is treated, alive and signed out'.
'ANDOR' stands for inclusive disjunction. The formula 'p ANDOR q' means 'p or q, or both'. The top part of Figure 2 can be read as 's implies one or more of u, y, w; and one or more of u, y, w implies s'. In English this could be 'if a patient is a treated patient then that patient has had surgery, medicine or therapy and if a patient has had surgery, medicine or therapy then the patient is a treated patient'.
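Writing s, a, b, t, u, y and w for the statements in Figure 2, these two readings can be set out compactly (a propositional shorthand of the predicate formulae given later):

```latex
% Left-hand side of Figure 2: the double-headed arrow as mutual implication
((s \land a \land b) \rightarrow t) \land (t \rightarrow (s \land a \land b)),
\quad\text{i.e.}\quad t \leftrightarrow (s \land a \land b)

% Top part of Figure 2: 'ANDOR' as inclusive disjunction
(s \rightarrow (u \lor y \lor w)) \land ((u \lor y \lor w) \rightarrow s),
\quad\text{i.e.}\quad s \leftrightarrow (u \lor y \lor w)
```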
It should be noted that standard logics are temporally neutral. We can say ‘p implies q’ without specifying which is an earlier or later event, or even if p and q are events. However, when the logic is rendered into English, grammar requires that a temporal reference be included. The grammatical form of English can, therefore, be extremely misleading as to the correct logical form of a statement. Some oriental languages, such as Thai, do not have this feature. Temporal references are not built into Thai grammar any more than spatial references are built into English grammar. Thai is, therefore, much more logical than English. One of the great advantages of getting stake-holders to construct models such as Figure 2 is that they have a precision that cannot be achieved in English; this will pay dividends when it comes to expressing the model in a computer language.
Logico-linguistic models, such as Figure 2, give an easily understood graphic form of logic that will enable stake-holders to construct full definitions. Figure 2 is a hypothetical model; it shows how a group of stake-holders might have chosen to give a full definition for the terms 'discharged' and 'treated'. The model does not, of course, give full definition for all the terms. The terms 'alive', 'signed out', 'surgery', 'medicine' and 'therapy' are not given full definition. However, defining criteria for each of them exist. If it is true that a patient is discharged it will be true by definition that the patient is signed out. Being discharged implies being signed out, but it is not necessarily the only thing that implies being signed out, just as being discharged is not the only thing that implies being alive. The model does not give a full definition of the term 'patient'. However, any object that can qualify as a patient must be an object that can be signed out, be alive and have surgery, medicine or therapy; this eliminates lumps of coal and pieces of fruit but not dogs and cats. How many terms will require full definition will depend upon the size and scope of the information system required.
Another logical move is present in Figure 2. This is the addition of ‘L’ symbols alongside the arrows. These are modal operators. As ‘((s and a and b) implies t) and (t implies (s and a and b))’ standing alone could be either a process definition or a causal description we need a means of indicating which it is. The ‘L’ modal operator indicates that the relation is either a definition or a deduction from a definition. It is paired with the ‘M’ operator which indicates that the relation is factual, logically contingent and not definitional.
The logico-linguistic model provides a framework that will enable stake-holders to build an empirical model without ambiguity. In the empirical model, putative facts about the real world will be added to the logico-linguistic model.
Two types of definition can be distinguished on the basis of intension, sometimes known as 'connotation', and extension, sometimes known as 'denotation'. An intensive definition will give a criterion or criteria for class inclusion. An extensive definition will specify the members of the class. Thus, in Figure 2 'patient is discharged' is given an intensive definition. What it says is that anything that fulfils the criterion of being a treated patient and a living patient and a signed out patient is a member of the class of discharged patients, and vice versa. 'Patient is treated' is given an extensive definition. The figure states that this class has three member classes (u, y and w) and only three member classes. Therefore, anything that is a member of one or more member classes will be a member of the class of patients treated, and anything that is a member of the class of patients treated must be a member of at least one of the member classes. An intension will be specified by 'AND'. An extension by 'ANDOR' or 'OR' ('OR' is explained below).
It is contended that any term that is given a useful intensive definition will have an empirical extension, and any term that is given a useful extensive definition will have an empirical intension. Knowledge elicitation, therefore, will consist simply of giving the intension for an extensive definition and the extension for an intensive definition.
The intension of ‘Patient is treated’ might be that every patient has been attended to by a doctor or nurse who has taken some action that is believed to improve the patient’s health. Empirical intensions are not always particularly useful. Far more important are the empirical extensions of intensive definitions. In Figure 3 we will imagine that the stake-holders take the extension of ‘Patient is discharged’ to be the class that comprises the class of patients who return home and the class of patients who are transferred to other institutions. These classes are mutually exclusive in that a member of one cannot be a member of another; a new logical connective needs to be introduced to express this.
‘OR’ stands for exclusive disjunction. The formula ‘p OR q’ means p or q but not both. The bottom part of Figure 3 can be read as ‘t implies c or t implies d; and c implies t and d implies t; and it is not the case that c and d’. In English this might be ‘if a patient is discharged the patient will return home or transfer to another institution and if a patient has returned home or transferred to another institution then that patient will have been discharged but a patient cannot return home and be transferred to another institution'.
This extension is putatively true as a matter of empirical fact not as a matter of definition. It is, therefore, marked with the 'M' modal operator. These empirical counterparts of definitions will be factual universals and, where they are formulated as the result of anything other than pure guess work, they will be 'inductive hypotheses'.
The fact that bubbles c and d are linked to bubble t by a double-headed arrow indicates that the stake-holders think that the formula 'c OR d' forms the full extension of t. In this case we have full knowledge of t. In practice, it has been found that knowledge of the stake-holders is often insufficient to give the full empirical extension for every intensive definition in the system. In this case there are three possible courses of action. One is to conduct empirical research in order to find the full extension. A second is to build a system with incomplete knowledge; if this is done the system will be logically incomplete and there will be statements that are undecidable - that is, the system will not be able to determine whether they are true or false. A third possibility is for the stake-holders to make an educated guess and hope that the system will detect any errors. A system which makes this third possibility viable is introduced below.
It is pertinent to raise a question about Wilson's method at this point. Wilson starts with a notional conceptual model for a human activity system that is significantly different from the existing system. He then proceeds to design an information system that will support the real-world activities of the new system. However, in Wilson's method there is no empirical research into how the new system could work. Therefore, it can only work if the stake-holders already know all the activities that are needed to implement the new system and know what type of information will be necessary to support them. In green field situations there are usually unforeseen circumstances, and Wilson provides no means of identifying or dealing with them.
The structured approach to knowledge elicitation by logico-linguistic modelling is much more powerful in this respect. It clearly brings out what the stake-holders know and do not know and, more importantly, what they need to find out in order to implement the new system and build an information system to support it.
Figures such as Figure 3 are graphic systems of logic and can be expressed in a more conventional form. The following three formulae in modal predicate logic are the equivalent of what is shown in Figure 3:
L (∀x)(Tx → (Sx & Ax & Bx))
L (∀x)(Sx ↔ (Ux ∨ Yx ∨ Wx))
M (∀x)(Tx ↔ ((Cx & ¬Dx) ∨ (¬Cx & Dx)))
The expression of the model in symbolic logic enables inferences to be made in a formal calculus. From these three formulae we can infer that '(c OR d) implies (u ANDOR y ANDOR w) and that (u ANDOR y ANDOR w) is implied by (c OR d)'. These inferences are represented in Figure 4 by the single-headed broken arrow and the single-headed solid arrow. This could be expressed in English as 'if a patient has returned home or been transferred to another institution then that patient has had surgery, medicine or therapy'. These arrows are flagged with the 'M' modal operator in order to indicate that they are factually true rather than logically true. The formal derivation is given in the appendix.
The single-headed broken arrows and single-headed solid arrows are the last logical connectives that will be needed in the models. They do, in fact, express the same logical relation so one or other is logically redundant. However, when facilitating the stake-holders' construction of the models it has been found convenient to use the language of causation rather than the language of formal logic to express the connectives. The solid single-headed arrow can be described as a necessary condition (as in the original SSM models), the broken single-headed arrow can be described as a sufficient condition and the double-headed arrow as a necessary and sufficient condition. The relation between logical connectives and causal statements was dealt with extensively in an earlier paper. But it needs to be stressed that logical relations, those flagged with the 'L' modal operator, do not actually denote causal relations but definitions or deductions from definitions.
Expressing the model in predicate logic can be useful in three ways. First, inferences can be formally proven and the model can, if this is required, be shown to be formally consistent. Second, the logic enables complex inferences to be made that could not easily be made using intuitive logic. Third, the logic translates easily into some artificial intelligence languages.
Models such as Figure 4 can be expressed in Prolog with a comparatively small loss of logical power. The modal distinction between logically true and factually true statements can be preserved in Prolog. The method will be detailed in the appendix but essentially two types of Prolog rule are written in the program. Incorrigible rules correspond to the logically true universals. Factual universals, inductive hypotheses, are written as rules that are open to falsification by factual particulars (Prolog facts). This can be described as a form of non-monotonic logic (a logic in which the truth value of the axioms or premises can change).
Suppose we instantiate 'patient' in Figure 4 with 'Jack' and enter the facts that Jack has returned home, has not been transferred to another institution and has not had surgery, medicine or therapy. A suitably configured Prolog program will detect that there is something wrong with the original model. It will show that these facts about Jack falsify the inductive hypothesis that everyone who has returned home must have had surgery, medicine or therapy. This constitutes a basic form of machine learning. It would prompt a manual revision of the model which might consist of the formulation of new inductive hypotheses or the reconstruction of the logico-linguistic model. Although logico-linguistic models, such as Figure 2, can never themselves be falsified it can be found that some do not connect to any factual statements whatsoever and these, therefore, cannot play any part in descriptions of the real world.
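A minimal sketch of this falsification mechanism in standard Prolog (predicate names are illustrative; the full treatment is given in the appendix):

```prolog
% Incorrigible rules (flagged 'L'): by definition, a discharged
% patient is treated, alive and signed out.
treated(X)    :- discharged(X).
alive(X)      :- discharged(X).
signed_out(X) :- discharged(X).

% Inductive hypothesis (flagged 'M'): any patient who has returned
% home or been transferred has been discharged.
discharged(X) :- returned_home(X).
discharged(X) :- transferred(X).

% Particular facts about Jack, including the recorded observation
% that he has had no surgery, medicine or therapy.
returned_home(jack).
observed_untreated(jack).

% The hypothesis is falsified when a derived conclusion contradicts
% a recorded observation: the query  ?- falsified(X).  yields X = jack.
falsified(X) :- treated(X), observed_untreated(X).
```

The incorrigible rules are never revised; when `falsified/1` succeeds, it is the inductive hypothesis that is discarded or reformulated, in line with the non-monotonic reading described above.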
Because inductive hypotheses can be falsified by particular events, the Prolog model conforms closely to Karl Popper's ideas on the philosophy of science. The reason this scientific program can be produced is because the logically true elements of the SSM model, given in Figure 1, and the logico-linguistic model, given in Figure 2, have been distinguished from the factual elements introduced in Figure 3. It is somewhat ironic that SSM, which has been regarded as of doubtful scientific validity, and which some people have tried to justify using new wave philosophy, can play a vital role in the design of an artificial intelligence program that works on a classic account of scientific method.
It is part of the legacy of David Hume that we now know that a logical truth cannot be derived from a factual truth or any set of factual truths. But it appears that this is what structured methods, such as the Structured Systems Analysis and Design Method (SSADM), try to do. They start empirically with a manual systems flow chart and then produce a data flow diagram that is encoded in a set of rules that are incorrigible within the system. A system produced by structured methods, therefore, occupies a logical nether world of incorrigible inductive hypotheses. It is this fundamental inadequacy that causes structured methods to fail in many, if not most, green field situations. This is the root of the logical problems facing SSADM that have been identified by other authors.
In an earlier attempt to use a logical analysis of SSM for information system design, Gregory and Merali tried to preserve the logically true status of SSM models by converting them into logically true data flow diagrams. However, the method did not provide for the introduction of factual universals, which are necessary for descriptions of the real world. This attempt at a solution is, as it currently stands, just as inadequate as SSADM.
Data Modelling, like structured methods, fails to produce a structure that is capable of distinguishing between logical and factual truth. A relational database can be derived from a logico-linguistic model but, in that case, it says nothing about the real world. Alternatively, a relational database can be based on empirical models such as Figure 4, but in these cases it treats factual truths as though they were logically true and, therefore, inhabits the same logical nether world as SSADM.
Frame or object orientated approaches are more promising candidates. A non-monotonic logic was developed by McDermott and Doyle for use in a frame-based system, but this had consistency problems and has now been abandoned. McDermott and Doyle's non-monotonic logic and most of the others that go by that name were created in order to represent default reasoning. This makes them quite different, both in their form and their objectives, from the non-monotonic logic that is advocated here. A modal non-monotonic logic for scientific reasoning need not have the problems associated with default reasoning and it might function well in a frame-based system.
It is contended that the foregoing is fundamentally different from other approaches to knowledge representation and elicitation. Considerations of space prevent a comparison with all of them; an exercise which would, in any case, be tedious. Instead, Sowa's Conceptual Graphs, which are superficially the most similar, will be compared with logico-linguistic modelling. Like logico-linguistic models, Sowa's conceptual graphs have a bubble diagram style, are concerned with concepts and can be expressed in the predicate calculus.
The logical and epistemological status of Sowa's graphs, like many other semantic net style graphs, is not perfectly clear. He recently described them as 'a graphic system of logic ... equivalent to predicate logic', yet earlier he described them as 'a method of representing mental models'.
As a graphic system of logic, Sowa's graphs differ from logico-linguistic models, firstly, in that they do not include modality, which is one of the principal features of logico-linguistic models. The second difference is that Sowa's graphs contain the plethora of detail needed to capture the vagaries of English syntax. For example, the verb 'to run' is represented by ten bubbles and ten arrows in a conceptual graph (see Figure 5). Such detail is not essential for the construction of a knowledge-based system, nor is it practical as a stake-holder driven modelling device. It is clear that conceptual graphs are a tool for an analyst intending to represent discourse in a natural language. Logico-linguistic models, by contrast, are not intended to represent a natural language but are intended to be an artificial language.
Logico-linguistic models are not, therefore, dependent on lexicographical science, nor are they prone to the paradoxes of self-reference, which are a feature of natural languages. It is pertinent to point out here that it may, as both Frege and Tarski believed, be impossible to formulate a theory of truth for natural languages .
If Sowa's conceptual graphs are intended to be mental models, then it seems that they are very different from logico-linguistic models. Logico-linguistic models are based on the Wittgensteinian theory of language , which contends that mental models, if they are anything other than publicly observable neurological states or elements of a public language, simply do not exist. Sowa's discussion of 'percepts'  sounds very similar to the sense datum theories that were discredited by Wittgenstein's private language argument. Logico-linguistic models are not intended to be representations of mental models; nor are they representations of anything at all, they are just records of an agreement to use words in a certain way. It seems that the theory of meaning which forms the basis of Sowa's graphs is fundamentally different from the one that is assumed here. This could explain the fact that Sowa's graphs lack the modal operators that form vital components of empirical models such as Figure 4.
It has been argued that the notional conceptual models of SSM can be developed into a logically precise language belonging to the stake-holders in the client organization. This provides an unambiguous framework to which beliefs about the real world can be added. The use of SSM in knowledge-based system design is thereby given a philosophically sound basis. It represents another viable outcome for the use of Soft Systems Methodology. It also provides knowledge-based system designers with new and powerful tools for system analysis and knowledge elicitation.
In the appendix, technical details will be given to show how the resulting empirical model can be represented in a computer program that enables particular real-world facts to be used to detect false general beliefs.
The first section of the appendix shows how Figure 3 can be represented in modal predicate logic. Formal inferences made using the modal logic can be used to develop the model shown in Figure 4.
The second section is concerned with codification. As an example, a Prolog program, written using the modal logic as a specification, is given. This program is capable of distinguishing between definitions, i.e. rules taken from the artificial language, and inductive hypotheses, i.e. empirical rules taken from the knowledge elicitation process.
The third section concerns verification. Verification aims to substantiate or falsify not the program but the model upon which the program is based. The program is able to accommodate data entry in the form of particular facts that conflict with its empirical rules and also to recognise when empirical rules have been falsified by particular facts. The system is therefore capable of learning in accordance with the classic account of scientific method propounded by Karl Popper .
The formal rules of inference and replacement for the predicate inferences used here are fairly standard and conform closely to those found in Copi ; Newton-Smith  gives a similar set. The notation is as follows: '|-', the syntactic turnstile, means '. . . is provable from . . .'; 'L', a modal operator, means 'it is logically true that . . .'; 'M', a modal operator, means 'it is contingently true that . . .'; '∀', the universal quantifier, means 'for every . . .'; '∃', the existential quantifier, means 'for some . . .'; '↔', the biconditional, means 'p if and only if q'; '→', the conditional, means 'if p then q'; '&', conjunction, means 'p and q'; 'v', disjunction, means 'p or q or both'; '-', negation, means 'not p'.
To determine the modal operators four meta-rules are used that follow from the axioms of the modal system 'S5'. These meta-rules are:
Meta-rule one: if A |- B then L(A) |- L(B)
Meta-rule two: if A |- B then M(A) |- M(B)
Meta-rule three: if A, B |- C then L(A), L(B) |- L(C)
Meta-rule four: if A, B |- C then M(A), L(B) |- M(C)
Figure 3 can be formally expressed in modal predicate logic as follows:
Domain: people who go to hospital
Sx: x is a patient who is treated
Ax: x is a patient who is alive
Bx: x is a patient who is signed out
Tx: x is a patient who is discharged
Ux: x is a patient who has surgery
Yx: x is a patient who has medicine
Wx: x is a patient who has therapy
Cx: x is a patient who returns home
Dx: x is a patient who is transferred to another institution
Prem. (1) L(∀x)(Tx ↔ (Sx & Ax & Bx))
Prem. (2) L(∀x)(Sx ↔ (Ux v Yx v Wx))
Prem. (3) M(∀x)(Tx ↔ ((Cx & -Dx) v (-Cx & Dx)))
From this the following inferences can be made:
(4) L(∀x)((Tx → (Sx & Ax & Bx)) & ((Sx & Ax & Bx) → Tx))
From (1) by Material Equivalence and Meta-rule one.
(5) L(∀x)((Sx → (Ux v Yx v Wx)) & ((Ux v Yx v Wx) → Sx))
From (2) by Material Equivalence and Meta-rule one.
(6) M(∀x)((Tx → ((Cx & -Dx) v (-Cx & Dx))) & (((Cx & -Dx) v (-Cx & Dx)) → Tx))
From (3) by Material Equivalence and Meta-rule two.
(7) L(∀x)(Tx → (Sx & Ax & Bx))
From (4) by Simplification and Meta-rule one.
(8) L(∀x)(-Tx v (Sx & Ax & Bx))
From (7) by Material Implication and Meta-rule one.
(9) L(∀x)((-Tx v Sx) & (-Tx v Ax) & (-Tx v Bx))
From (8) by Distribution and Meta-rule one.
(10) L(∀x)(-Tx v Sx)
From (9) by Simplification and Meta-rule one.
(11) L(∀x)(Tx → Sx)
From (10) by Material Implication and Meta-rule one.
(12) L(∀x)(Sx → (Ux v Yx v Wx))
From (5) by Simplification and Meta-rule one.
(13) M(∀x)(((Cx & -Dx) v (-Cx & Dx)) → Tx)
From (6) by Simplification and Meta-rule two.
(14) L(∀x)(Tx → (Ux v Yx v Wx))
From (11) and (12) by Hypothetical Syllogism and Meta-rule three.
(15) M(∀x)(((Cx & -Dx) v (-Cx & Dx)) → (Ux v Yx v Wx))
From (13) and (14) by Hypothetical Syllogism and Meta-rule four.
Premise (1) can be expressed in English as 'For all x, x is a patient who is discharged if and only if x is a patient who is treated and alive and signed out'. Formula (15) can be deduced from the three premises; it is shown in Figure 4 by the dotted arrow and the solid single-headed arrow. The figure and the logical formulae are universals; they are only about object variables, in this case 'x'. They say nothing about the real world, not even that anything exists. The real-world connection is made when particulars and existential statements are added to the system. We shall not introduce particulars into this system of modal predicate logic: instead, we shall move on to Prolog, where the universals will become 'rules' and particulars will become Prolog 'facts'.
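The non-modal core of this derivation can be checked mechanically. Stripping the L and M operators, premises (1)-(3) materially entail formula (15) for any individual x. The following brute-force sketch (written in Python purely for illustration; the paper's own programs are in Prolog) confirms that no truth assignment satisfies the three premises while falsifying (15):

```python
from itertools import product

def entails():
    """Check, over all 2^9 truth assignments for one individual x, that the
    material parts of premises (1)-(3) entail formula (15)."""
    for s, a, b, t, u, y, w, c, d in product([False, True], repeat=9):
        prem1 = t == (s and a and b)                    # (1) Tx <-> (Sx & Ax & Bx)
        prem2 = s == (u or y or w)                      # (2) Sx <-> (Ux v Yx v Wx)
        prem3 = t == ((c and not d) or (not c and d))   # (3) Tx <-> ((Cx & -Dx) v (-Cx & Dx))
        # (15): ((Cx & -Dx) v (-Cx & Dx)) -> (Ux v Yx v Wx)
        concl = (not ((c and not d) or (not c and d))) or (u or y or w)
        if prem1 and prem2 and prem3 and not concl:
            return False  # a counterexample would refute the entailment
    return True

print(entails())  # True: the premises entail (15)
```

This checks only the propositional entailment; the distribution of the L and M operators is, of course, the work done by the four meta-rules.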
The Horn clauses that form the logical format of all Prolog rules could be derived from the predicate logic given above. As a practical method this would not be very useful, as the formal derivations are long and tedious. It is easier to look at Figure 4 when writing the Prolog. The logic is useful in so far as it shows that a formal specification of the Prolog program is possible and that there would be no epistemological barrier to automating the procedure. The logic could also be useful in that it could provide a syntactic proof of the consistency of the Prolog rules. The formal logic also serves to highlight some of the logical shortcomings of Prolog.
The program given here is written in Turbo Prolog. There are two serious difficulties in converting the logic, or a logical model like Figure 4, into Prolog: one is with negation, the other is with the biconditional. Turbo Prolog is unable to express Horn clauses with a negative consequent, and some formulae in predicate logic cannot be expressed in Horn clauses with positive consequents. The biconditionals express mutual implication, as is indicated by the double-headed arrows in Figure 4. As it works on the chaining principle, Prolog is unable to run a program that contains mutual implication, or any substitute for it, without going into an infinite loop. For example, if we want to express the biconditional between t and b & a & s from Figure 4, we would expect to be able to express it in a logically equivalent form such as ((b & a & s) → t) & (t → b) & (t → a) & (t → s), which can be expressed in Prolog rules. This is done, along with the instantiations, in Program 1.
discharge (X) if sign_out (X) and alive (X) and treated (X).
sign_out (X) if discharge (X).
alive (X) if discharge (X).
treated (X) if discharge (X).
treated (isabel).
Although Program 1 will compile, it will not run. Prolog looks for the value of discharge (X) and sees that it will have the same value as sign_out (X) and alive (X) and treated (X); it then looks for a value for sign_out (X); on the third line it sees that sign_out (X) has the same value as discharge (X); returning to the first line, it tries to repeat the process infinitely.
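The looping behaviour can be made explicit with a toy backward-chainer. The sketch below (Python rather than Prolog, and with a visited set that real Prolog lacks) reproduces the rule structure of Program 1 and reports the cycle instead of recursing forever:

```python
# Rule base of Program 1: head -> list of alternative bodies.
RULES = {
    "discharge": [["sign_out", "alive", "treated"]],
    "sign_out": [["discharge"]],
    "alive": [["discharge"]],
    "treated": [["discharge"]],
}
FACTS = {"treated"}  # treated (isabel), propositionalised for one individual

def prove(goal, visited=frozenset()):
    """Naive backward chaining. Prolog has no visited check, so where this
    returns 'cycle' Prolog instead loops infinitely."""
    if goal in FACTS:
        return True
    if goal in visited:
        return "cycle"
    for body in RULES.get(goal, []):
        results = [prove(g, visited | {goal}) for g in body]
        if all(r is True for r in results):
            return True
        if "cycle" in results:
            return "cycle"
    return False

print(prove("discharge"))  # 'cycle': the mutual implication recurses forever
```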
There are a number of ways this problem can be avoided. One is by means of the cut '!'. Another is to use a new predicate, say 'bicond', to express the relation. A third solution is to replace straightforward negation, which is troublesome anyway, with a substitute predicate in the Prolog programs. This can be achieved in the same way in which subtraction is eliminated from commercial accounts by a system of double entry book-keeping. We shall use artificial predicates prefixed by 'not_' to express negation. Corresponding to these will be artificial objects also prefixed by 'not_'. A positive predicate will always be paired with a negative predicate and a positive object paired with a negative one. Thus, if we wish to say that Isabel is alive we will also say that not_isabel is not alive:

alive (isabel).
not_alive (not_isabel).
The two negatives can be understood as cancelling each other out. We can also use this method to specify events that have not happened. For example, if Icabod has not had surgery we can say:

not_surgery (icabod).
surgery (not_icabod).
Program 2 can be put together on the basis of the same particular facts as Program 1; no additional data are required.
not_discharge (X) if not_sign_out (X).
not_discharge (X) if not_alive (X).
not_discharge (X) if not_treated (X).
sign_out (X) if discharge (X).
alive (X) if discharge (X).
treated (X) if discharge (X).
not_sign_out (not_isabel).
In Program 2 one half of the biconditional is expressed in 'not_' predicates and the other half in normal predicates. This solves the infinite loop problem and the program will run. The program will return the same information as would Program 1, but sometimes twice the number of queries are required. For example, the query 'Goal: discharge (X)' returns only 'icabod'. We can find out that Isabel has also been discharged by 'Goal: not_discharge (X)', which returns 'not_isabel' and 'not_icabod'. This can be read as 'it is not the case that isabel has not been discharged' or simply as 'isabel has been discharged'. Other queries function as normal: 'Goal: treated (X)' returns 'icabod' and 'isabel'. Prolog can work out from its rules that Icabod has been discharged even though this has not been specifically stated. The program, therefore, confers all the advantages of a Prolog style program over an SQL database system.
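The behaviour just described can be simulated with a simple forward-chainer. The Python sketch below (an illustration, not the author's Prolog; the fact set is a hypothetical fragment consistent with the queries quoted above) saturates a fact base with Program 2's rules, showing that 'not_' conclusions about 'not_' objects read as positive facts:

```python
# Program 2 rules: one direction of each biconditional in 'not_' predicates,
# the other in plain predicates, so chaining never loops.
RULES = [
    ("not_discharge", ["not_sign_out"]),
    ("not_discharge", ["not_alive"]),
    ("not_discharge", ["not_treated"]),
    ("sign_out", ["discharge"]),
    ("alive", ["discharge"]),
    ("treated", ["discharge"]),
]
# Hypothetical double-entry facts (assumed, for illustration only).
FACTS = {("discharge", "icabod"), ("treated", "isabel"),
         ("not_sign_out", "not_isabel")}

def saturate(facts):
    """Apply the rules to every known individual until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        individuals = {x for _, x in facts}
        for head, body in RULES:
            for x in individuals:
                if all((b, x) in facts for b in body) and (head, x) not in facts:
                    facts.add((head, x))
                    changed = True
    return facts

closure = saturate(FACTS)
print(("treated", "icabod") in closure)            # True: derived by rule
print(("not_discharge", "not_isabel") in closure)  # True: 'isabel is discharged'
```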
Some Prolog programmers might consider that biconditionals should be eliminated prior to writing the program. This would certainly save space but increases the amount of logical work that the programmer needs to do. Ultimately even more space could be saved by not writing the program at all; this is because there is nothing that Prolog can work out that could not be worked out manually using the predicate calculus. A logic program that does not do much logic is not much of a logic program.
These double entry procedures will enable us to express the three biconditionals from Figure 4 in Prolog. However, they do not produce concise programs. Indeed, Program 3 represents only one line of predicate logic, this being formula (15). The final rules of the program are concerned with verification. Systems of the Program 2 type will be useful provided that the rules are correct. However, the modal logic provides the power to create programs, such as Program 3, in which false rules can be detected.
surgery (X) if not_medicine (X) and not_therapy (X) and returns_home (X).
surgery (X) if not_medicine (X) and not_therapy (X) and another_institution (X).
medicine (X) if not_surgery (X) and not_therapy (X) and returns_home (X).
medicine (X) if not_surgery (X) and not_therapy (X) and another_institution (X).
therapy (X) if not_surgery (X) and not_medicine (X) and returns_home (X).
therapy (X) if not_surgery (X) and not_medicine (X) and another_institution (X).
not_another_institution (not_jack).
incorrect_hypothesis (surgery_if_not_medicine_not_therapy_and_returns_home)
    if not_surgery (X) and not_medicine (X) and not_therapy (X) and returns_home (X).
incorrect_hypothesis (surgery_if_not_medicine_not_therapy_and_another_institution)
    if not_surgery (X) and not_medicine (X) and not_therapy (X) and another_institution (X).
incorrect_hypothesis (medicine_if_not_surgery_not_therapy_and_returns_home)
    if not_medicine (X) and not_surgery (X) and not_therapy (X) and returns_home (X).
incorrect_hypothesis (medicine_if_not_surgery_not_therapy_and_another_institution)
    if not_medicine (X) and not_surgery (X) and not_therapy (X) and another_institution (X).
incorrect_hypothesis (therapy_if_not_surgery_not_medicine_and_returns_home)
    if not_therapy (X) and not_surgery (X) and not_medicine (X) and returns_home (X).
incorrect_hypothesis (therapy_if_not_surgery_not_medicine_and_another_institution)
    if not_therapy (X) and not_surgery (X) and not_medicine (X) and another_institution (X).
Validation of the program is not a theoretical problem in this system because the rules can be formally derived from the formulae in predicate logic. Any error will be the result of either mistakes made during the construction of the empirical model or mistakes made in entering particular facts into the program. Errors in both respects can be picked up by the double entry system. Program 3 has been deliberately constructed to include an error. This is brought out by the following query:
Goal: surgery (X)
X = not_jack
X = jill
X = jack
This says that Jack has and has not had surgery. This could have been the result of a mistake at the data entry level, but in this case it is not. The mistake is in the empirical model. The incorrect_hypothesis rules of the program are designed to detect these errors: the incorrect_hypothesis predicate picks up inductive hypotheses that have been falsified by particular facts:
Goal: incorrect_hypothesis (X)
X = surgery_if_not_medicine_not_therapy_and_another_institution
X = medicine_if_not_surgery_not_therapy_and_another_institution
X = therapy_if_not_surgery_not_medicine_and_another_institution
Jack has not had surgery, medicine or therapy; he has not returned home but he has been transferred to another institution. The formula:
(15) M(∀x)(((Cx & -Dx) v (-Cx & Dx)) → (Ux v Yx v Wx))
which represents the dotted arrow and the single-headed arrow in Figure 4, is therefore incorrect. It follows from this that one of the three premises (1), (2) or (3) must be incorrect. As premises (1) and (2) are logically true, it must be premise (3), the one with the 'M' modal operator, that is false. In simple terms, the hypothesis that all patients who return home or are transferred to another institution are discharged patients has been falsified by a particular event. This event is Jack being transferred to another institution without having had surgery, medicine or therapy. The Prolog program has been configured in such a way that the entry of data about Jack has enabled us to detect this. This is a form of non-monotonic logic; the program has learned that one of its premises is false.
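The detection step can be sketched as follows. The Python below (an illustration of the idea, not the original Turbo Prolog; Jack's facts are taken from the text, while Jill's are omitted) checks each M-status hypothesis against the recorded 'not_' facts and reports those falsified:

```python
# Each inductive (M-status) hypothesis: name -> (body predicates, conclusion).
HYPOTHESES = {
    "surgery_if_not_medicine_not_therapy_and_another_institution":
        (["not_medicine", "not_therapy", "another_institution"], "surgery"),
    "medicine_if_not_surgery_not_therapy_and_another_institution":
        (["not_surgery", "not_therapy", "another_institution"], "medicine"),
    "therapy_if_not_surgery_not_medicine_and_another_institution":
        (["not_surgery", "not_medicine", "another_institution"], "therapy"),
}
# Jack's facts as stated in the text: no surgery, medicine or therapy;
# not returned home; transferred to another institution.
FACTS = {("not_surgery", "jack"), ("not_medicine", "jack"),
         ("not_therapy", "jack"), ("not_returns_home", "jack"),
         ("another_institution", "jack")}

def incorrect_hypotheses(facts):
    """A hypothesis is falsified when its body holds of some individual
    while its conclusion is contradicted by a recorded 'not_' fact."""
    falsified = []
    individuals = {x for _, x in facts}
    for name, (body, head) in HYPOTHESES.items():
        for x in individuals:
            body_holds = all((b, x) in facts for b in body)
            contradicted = ("not_" + head, x) in facts
            if body_holds and contradicted:
                falsified.append(name)
    return falsified

print(incorrect_hypotheses(FACTS))  # all three hypotheses falsified by jack
```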
The benefits of the earlier SSM work can now be seen. The modal distinctions were made using SSM and, without the modal distinctions, we would not be able to determine which of the three biconditionals in Figure 4 is false. Without the modal distinctions, all three biconditionals would have the same status. If they all had the status of inductive hypotheses then the fact that Jack has been transferred to another institution without having surgery, medicine or therapy could be equally well explained by 'all discharged patients are treated, alive and signed out' being false or by 'all treated patients have surgery, medicine or therapy' being false. If they all had the status of logical truth the situation would be even more unsatisfactory.
Consider what would happen if the three biconditionals had the status of logical truths. If this were the case, the system would only accept those empirical particulars that are consistent with its in-built logical configuration. All other particulars would be rejected. Consider the biconditional between the t bubble and the bubble containing 'c or d', which is expressed as (∀x)(Tx ↔ ((Cx & -Dx) v (-Cx & Dx))) in predicate logic. If this were a logical truth, then before we could establish that Jack has been transferred to another institution we would have to establish that he had not returned home and that he has been discharged. To establish that he has been discharged we would have to establish that he has had treatment and, to do this, we would have to establish that he has had surgery, medicine or therapy. In other words, to establish that Jack has been transferred we must first establish that Jack has had surgery, medicine or therapy. We need to do this because having surgery, medicine or therapy is part of the extended definition of a patient who has been transferred. However, in this case the model does not enable us to infer anything new about Jack at all. All that it says is that if Jack fulfils all the defining criteria then each defining criterion will be true of Jack.
If (" x) (Tx ®((Cx & -Dx) v (-Cx & Dx)) is contingent, as it is in the figure, then we can establish that Jack has been transferred by a defining criterion that is independent system shown in Figure 4. In this case the system can be genuinely informative and some real-world facts about Jack.
Verification, unlike validation, is only possible if part of the system is open to falsification. The hypotheses in the system will be verified with the addition of each particular fact that does not falsify them. Systems that cannot be verified, even if they can be validated, cannot in themselves refer to real-world objects and events. Real-world events are contingent; therefore, any statement about the real world must also be contingent. Systems that do not contain contingent elements will map onto real-world events only by chance and, as such, do not really map onto the real world at all . Inductive hypotheses form an indispensable buffer between definitions and real-world particular facts.
Finally, it needs to be pointed out that the task has not been to show how to write an economical Prolog program to monitor the progress of patients in a hospital. Nor has it been to develop a powerful learning algorithm. Rather, it has been concerned with problems in the area of philosophical logic, a subject concerned with how abstract systems can relate to real-world events.
Acknowledgements -The findings in this paper were the result of research funded by the Science and Engineering Research Council (SERC).
1. F. H. GREGORY (1993) Cause, effect, efficiency and soft systems models. J. Opl Res. Soc. 44, 333-344.
2. F. H. GREGORY (1993) Soft systems methodology to information systems: a Wittgensteinian approach. J. Information Syst. 3, 149-168.
3. K. R. POPPER (1963) Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge & Kegan Paul, London.
4. B. WILSON (1984) Systems: Concepts, Methodologies and Applications. Wiley, Chichester.
5. D. E. AVISON and A. T. WOOD-HARPER (1990) Multiview: An Exploration in Information Systems Development. Blackwell, Oxford.
6. P. B. CHECKLAND and J. SCHOLES (1990) Soft Systems Methodology In Action. Wiley, Chichester.
7. S. A. KRIPKE (1980) Naming and Necessity. Blackwell, Oxford.
8. Y. MERALI (1992) Analytic data flow diagrams: an alternative to physicalism. Systemist 14 (3), 190-198.
9. Y. MERALI (1993) Retaining logical consistency in information systems development. Proceedings of the Conference on the Theory, Use and Integrative Aspects of IS Methodologies, British Computer Society Information System Methodologies Special Interest Group, pp. 337-350.
10. S. K. PROBERT (1991) A critical study of the National Computing Centre's systems analysis and design methodology, and soft systems methodology. M.Phil Thesis, Newcastle Upon Tyne Polytechnic.
11. S. K. PROBERT (1993) Logic and conceptual modelling in soft systems methodologies. Proceedings of the Conference on the Theory, Use and Integrative Aspects of IS Methodologies, British Computer Society Information System Methodologies Special Interest Group, pp. 233-246.
12. F. H. GREGORY (1993) Logic and meaning in conceptual models: implications for information system design. Systemist 15 (1), 28-43.
13. L. WITTGENSTEIN (1953) Philosophical Investigations. Blackwell, Oxford.
14. R. WINDER and P. WERNICK (1993) The inductive nature of software engineering and its consequences. Proceedings of the Conference on the Theory, Use and Integrative Aspects of IS Methodologies, British Computer Society Information System Methodologies Special Interest Group, pp. 431-444.
15. F. H. GREGORY (1992) SSM to information systems: A logical account. Systemist 14 (3), 180-189.
16. D. McDERMOTT and J. DOYLE (1980) Non-monotonic logic I. Artificial Intelligence, 13, 41-72.
17. D. McDERMOTT (1987) A critique of pure reason. Computational Intelligence 3, 151-160.
18. J. F. SOWA (1992) Logical structures in the lexicon. Knowledge Based Systems 5 (3), 173-182.
19. J. F. SOWA (1984) Conceptual Structures: Information Processing in Mind and Machine. Addison-Wesley, Reading, Massachusetts.
20. J. F. NOGIER and M. ZOCK (1992) Lexical choice as pattern matching. Knowledge Based Systems 5 (3), 200-212.
21. A. C. GRAYLING (1990) An Introduction to Philosophical Logic, p. 248. Duckworth, London.
22. I. M. COPI (1978) Introduction to Logic, 5th edition. Macmillan, New York.
23. W. H. NEWTON-SMITH (1985) Logic, An Introductory Course. Routledge & Kegan Paul, London.
24. F. H. GREGORY (1993) Mapping conceptual models onto the real world. In Systems Science Addressing Global Issues. (F. A. STOWELL, D. WEST and J. G. HOWELL, Eds) pp. 117-122. Plenum Press, New York.
25. F. H. GREGORY (1995) Mapping information systems onto the real world. Working Paper No. WP 95/01, Department of Information Systems, City University of Hong Kong.
Received September 1993; accepted September 1994 after two revisions