Erik A Colban and Klaus Gaarder
A public network operator offering Intelligent Network (IN) services to its customers must be able to describe the services not only to their users, but also to the subscribers of the services and to the service and equipment providers. A subscriber can customise a service to her specific needs by setting values to different parameters or selecting features. This implies that the subscriber needs a more detailed and deeper understanding of the service and its components than the user. The differing needs of the user and subscriber must be reflected in the descriptions aimed at each of them. A description aimed at the service and equipment provider will have yet another focus. In this case, the role of the public network operator is to specify a product. Implementational details are not always critical. However, the specification must be precise enough to allow for a quality evaluation of the product.
In order to operate services on an international scale across different networks and have services adapt smoothly to technological changes, services must be described independently from any specific network. This should be the case whether the descriptions are aimed at the users, the subscribers or the providers.
The requirements above are specific to IN service description. In addition there are the usual requirements that every description or specification should meet, such as precision, coherence and verifiability. In the following we shall discuss how the latter can be met in the context of IN.
Services cannot be described entirely in a natural language. In a final section, criteria for appropriate choices of description languages are discussed.
Any description or specification should be precise. To ensure a common understanding of "precision" we quote the following interpretation from a standard dictionary:
"precision exactness and clarity; quality of being precise (stated clearly and accurately)."
Note that precision embodies not only exactness but also clarity. Clarity and exactness may be perceived as conflicting requirements, since a higher degree of exactness often means a more involved description, which may reduce clarity. This is an important issue for us and we must take care to cater for this as we develop our methods. We may talk about differing levels of precision being right for different uses and thus justify varying descriptions to obtain maximum clarity on each level.
A description is coherent if its parts fit naturally and logically together. If a term is not used with the same meaning in all parts of a description, it may become difficult to see how the parts relate to each other.
In order to assure coherence in the description of IN services, a set of concepts that are particular to the subject need to be defined. An attempt to define these concepts is found in the CCITT Q.1200 Recommendation Series, see (9), which draws a picture of the IN concept and some of the services that it supports. In our opinion, these definitions are too rough and may cause divergent understandings of the concepts. Since several persons usually collaborate in describing the services, coherence is at stake.
A service can be given different descriptions, each focusing on different aspects while abstracting others away. This does not necessarily mean that there is incoherence as long as the different descriptions are related to each other in such a way that they complete each other and give a consistent view of the service.
A specification constitutes a reference against which a product is evaluated.
Definition 1 (Quality) A product has the right quality if and only if it is according to specification.
When strictly defined, quality is a binary term taking the values "right" or "wrong", not "good" or "bad" or something in between. This may seem a strange way of judging quality, but it is the only verifiable or quantifiable way; quality concepts which are not amenable to verification or testing are useless in technical environments. A specification constitutes a protection for producer and consumer alike. Ideally, the consumer knows what to expect from the product and the producer knows what she must supply. It is the consumer who supplies the producer with a specification of the desired product. If the producer then delivers a product which is according to the specification, she has done her job. The consumer cannot expect more, and the producer need not supply more, than stated in the specification. Verification consists of checking that a product satisfies a given specification. If the product is e.g. a computer program, it can never be "correct" in and of itself. Correctness in this context is only defined relative to a specification. Section 5 discusses this matter in further detail.
Specifications are written to be used as input to a production process where the result depends explicitly on the specifications entering the process. In our case this is the IN service creation process. The term service creation is used to designate the process from specification up to and including deployment of the finished service.
A small difference in perception of meaning in a description may imply very large differences in the characteristics of the end product, which is undesirable. If the process towards the finished product is long and involves many links in a chain, small imprecisions in the initial phases are likely to grow larger as they pass through the chain.
In an object oriented environment, object interfaces must be according to specification in order for objects to be able to cooperate. In the specification of classes this results in a specification that is "open ended". That is, it does not specify every detail, only what is necessary at the interface. A specification may thus allow various implementations which are all correct, since the specification leaves some decisions open. Precision is nevertheless required as before in the interface specifications and the further specification of the interior of classes.
Our goal is to make IN service creation
We believe that service description and specification are two main issues in achieving these goals, and that successful solutions to these issues will be of crucial importance.
In the intelligent network we aim for high precision and formality in specifications which will be created by people with varied training and at many levels of sophistication. This is the novel challenge posed to us by the intelligent network concept. Below we shall further concentrate on three key characteristics of specifications:
We shall discuss each of these in turn.
The readability of a description is obviously crucial for how successful it is in bringing across the intended meaning. Whether it is considered readable depends on the profession of the reader. Law texts are readable to lawyers and mathematical texts to mathematicians, but not necessarily vice versa. This poses no problem, since lawyers mostly stick to law and mathematicians to mathematics as professionals. The challenge of IN service creation is that specifications must be readable, yet precise, to a quite wide audience and across many levels of abstraction and sophistication.
Attaining readability can be done in many ways, and possibly by different means for different readers. The structure of a specification is probably one of the most important parameters deciding readability. A poorly structured document is more difficult to read than one with a good structure. It is not obvious that the same structure is good for all kinds of documents, so we will have to consider the various kinds separately. The structures will have to serve different purposes at various levels of detail and formality.
The structure of a formal specification is of special importance. By its nature a formal specification has to have a strict structure from which we cannot deviate, and this structure is a part of the formality of the method. When we choose a formal method we also choose its way of structuring specifications. The technique must have available structures which achieve the wanted readability. For examples of different structures we refer the reader to e.g. (3, 5, 8, 1).
Even if clarity may be seen as an aspect of readability, it has its own particular needs. We shall look into two which we see as especially important: the use of definitions and the use of symbols.
Definitions are crucial in any description which aims to be precise and consistent. Many descriptions become cluttered and unclear because the central concepts lack a good definition. Worse still, a concept may be ill-defined such that its extension is vacuous or trivial. Definitions are the means to help us fix and delineate our concepts. For any discussion of concepts or description of concepts, the definitions form the basis from which all further reasoning proceeds.
Consider as an example the service concept of the IN. We talk freely about service creation, service features, service interactions, etc.; the service concept permeates the entire discussion of IN. It therefore becomes necessary to consider a definition of the service concept. Note that this does not mean narrowing the extension of the service concept; our intention in the intelligent network is rather a wider extension, but always with a precise understanding of exactly what it includes.
Following the object oriented line of thought we could start by defining so-called "service-constituents" from which we aim to build other services, and then define what it means to put these together. One possible definition of a service is then "whatever is put together by the (correct) use of service-constituents". We have then moved the problem of defining complete services to the problem of defining its constituents and the way these are put together. Hopefully, this will make the problem more tractable.
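As a purely illustrative sketch, assuming two hypothetical classes SERVICE_CONSTITUENT and COMPOSITE_SERVICE that are not taken from any recommendation, the idea could be rendered in Eiffel roughly as follows:

    deferred class
        SERVICE_CONSTITUENT
    feature
        invoke
                -- Carry out this constituent's contribution to the service.
            deferred
            end
    end

    class
        COMPOSITE_SERVICE
    create
        make
    feature
        constituents: LINKED_LIST [SERVICE_CONSTITUENT]
                -- The constituents the service is put together from.

        make
                -- Start with an empty list of constituents.
            do
                create constituents.make
            end

        extend_with (c: SERVICE_CONSTITUENT)
                -- Add constituent `c' to the service.
            require
                constituent_exists: c /= Void
            do
                constituents.extend (c)
            ensure
                added: constituents.has (c)
            end

        run
                -- Execute the service by invoking every constituent in order.
            do
                from constituents.start until constituents.after loop
                    constituents.item.invoke
                    constituents.forth
                end
            end
    end

The require and ensure clauses are one way of making the "(correct) use" of a constituent precise; the real difficulty lies, of course, in choosing the constituents themselves.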
The use of symbols should be restricted to obtain clarity without throwing readability out of the window. A document packed with symbols tends to appear impenetrable or unduly compact, especially if each symbol carries a lot of meaning, which they tend to do. Used with care, symbols are of great value when writing specifications. They let us use a more compact language and thus reduce the chances of misunderstandings and errors following from a verbose presentation. When good definitions have been made of concepts, symbols can be introduced to denote instances of objects in the extension of these concepts. In formal specifications, symbols are key ingredients. The success or failure of a formal specification language may rest with its use of symbols. This is particularly true when the language is to be used by people without heavy mathematical training.
Ambiguities in specifications invariably lead to faults and products which are not exactly as the specifier intended. Note that a product may well be said to be conforming to an ambiguous specification, if it is conforming to one of the possible interpretations of the specification. People seem to have a strong capacity to arrive at what they think is a reasonable interpretation when faced with an ambiguous specification.
Lives may or may not be lost if the specification of an intelligent network service is misinterpreted, but time and money will surely be at stake. Ambiguity is not an easy thing to control and its source is often in the initial phases of specification where ideas are transformed into specifications. We may not even be aware of the ambiguities in our own ideas. This initial fuzziness can never be completely removed, and its presence is a part of the creative process. However, once a decision is made to go forward and turn the idea into a product the unwanted ambiguities must be removed.
According to a standard Oxford dictionary ambiguity means
"presence of more than one meaning".
Thus, to remove ambiguity we need to ensure that concepts have only one meaning wherever we want them to have only one meaning. Natural language draws some of its power from the fact that it has an abundance of words which have different meanings in different contexts and usages. This is often not perceived as ambiguity since it is a natural part of our language, but it makes unrestricted natural language unsuitable for many description and specification tasks. Ironically, this feature is perceived as enriching the expressive power of natural language, while it can be devastating in specifications. How can we cope with ambiguity without losing expressive power?
One way of removing most of the ambiguity is by increasing formality. In a formal language we have the power to decide exactly what each expression is to mean, and may thus ensure that each expression has only one well-defined meaning. One of the benefits is that even complex expressions have a meaning which is defined in terms of the meanings of their components. A simple example is an expression like A ∧ B, whose meaning is decided by the meanings of A, B and the connective ∧. Precision is really all about semantics; the syntax is merely more or less helpful in the textual layout. By attempting to formally define the semantics of a language we may uncover sources of ambiguity in the language itself.
In an incoherent description it is problematic to see how different parts relate to each other and the description is difficult to understand. Attaining coherence in the description of IN services is particularly challenging, because services necessitate several descriptions aimed at different people and serving different purposes that need to be related to each other in a logical way.
Coherence requires precision. Vague terms may be understood differently and used with different meanings in different parts of a description, making it difficult to relate the parts to each other.
Terms must also be used in a manner that is consistent with their definition throughout the description. If a term is already used in an inconsistent manner, any attempt to make it precise will result in a disagreement between definition and use.
Consider the term call used by CCITT in describing a considerable set of services. It is defined (in Rec. E.600) as:
"A generic term related to the establishment, utilization and release of a connection. Normally, a qualifier is required to make clear the aspect being considered, e.g. call attempt."
This careful formulation seems to intentionally keep the definition vague. The reason, presumably, is that it is difficult to give a definition that covers all uses of the term call.
Missing concepts often result in gaps in a description, which are another source of incoherence.
The semantics of a service is provided by its functional description, stating what the service does rather than how it does it. In the IN model described in (9), Q.1202, the "Global Functional Plane" is intended to provide (the concepts necessary for providing) functional descriptions of services. How a service is executed is taken care of on another "plane", the so-called "Distributed Functional Plane". Here the execution of a service is described as a set of processes running concurrently on different "functional entities". However, as the Global Functional Plane is described today, services are not amenable to any precise functional description without resorting to the Distributed Functional Plane. This means that in order to get a complete picture of a service's functionality, one has to analyse it as a set of different processes running on different functional entities. The notions necessary to give complete functional descriptions of services are missing in the Global Functional Plane (contrary to what seems to be the intention). What remains after distribution aspects have been stripped from the service descriptions is so little that it becomes difficult to find coherence.
The reason that it is not possible to give a precise functional description of services that are entirely contained in the Global Functional Plane is that a description of the network is missing. On the Global Functional Plane the "IN structured network is viewed as a single entity", see (9), Q.1203. However, no description of such a network is provided. Since it is not possible to give a description of a service without involving the network, and since (9) only gives a description of the network on the Distributed Functional Plane, the functional descriptions have to resort to this plane.
A "functional network", generic enough to encompass all networks that the IN concept is intended to be applicable to, such as mobile or fixed, line switched or packet switched, connectionless or connection-oriented, etc., has yet to be defined. Such a network should capture the universal character of a network. A common characteristic of networks is that they allow for information transport. Connections and their end points are also common to all networks. Technology dependent aspects such as transmission speed should only be reflected in the functional network insofar as they are needed in describing the semantics of a service. The transmission speed could, e.g., be reflected by a parameter with no specific value assigned to it.
In order to provide the semantics of IN services in a coherent manner, a set of concepts that we shall hereafter refer to collectively as the "functional network" must be defined. Genericity constraints imply that the concepts are abstract in the sense that they do not refer to any specific network. Abstraction does not imply vagueness, however. We view the relation between the functional network and the possible realisations of it as similar to the relation between a mathematical concept, described by some theory, and the specific models satisfying the theory. Figure 1 explains this analogy. The relation between the functional network and the possible realisations of it should be compared to the relation between the notion of a group and the different models that satisfy the group axioms, such as (Z, +), (Q, +), ... By adopting mathematical rigor in the description of the functional network, we believe we can satisfactorily handle abstract concepts.
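For reference, the group axioms alluded to can be stated as follows (using o for the group operation, as in Figure 1; see e.g. (2)):

    A group (G, o) is a set G with an operation o such that
        (x o y) o z = x o (y o z)                          for all x, y, z in G
        there is an e in G with e o x = x o e = x          for all x in G
        every x in G has an x' with x o x' = x' o x = e

The theory itself is abstract but precise; (Z, +) and (Q, +) are two of the many models that satisfy it, just as concrete networks are intended to be "models" of the functional network.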
Throughout this article we advocate an object oriented approach to service descriptions. From a coherence point of view, there are two major reasons for adopting object orientation. First, there is the reusability aspect. If two descriptions use the same classes at least some coherence is achieved. Object orientation supports reusability particularly well due to its encapsulation and inheritance properties (for a presentation of the virtues of object oriented programming, see e.g. (7)). Second, object orientation allows different "viewpoints" to be related to each other in a well-defined manner. Consider a distributed database. From a functional point of view a database can be seen as one single object to which queries are presented. A distributed view will show several interrelated objects which are capable of handling concurrent accesses, maintaining consistency, etc.
These two views, functional and distributed, are reconciled by encapsulating distribution aspects within the object representing the functional view. Using deferred classes, i.e. partially defined classes, one could even describe the functional view postponing distribution matters to the description of some subclass of it (i.e. a class that inherits the deferred class). The reader is invited to compare this example with the one presented in Figure 1.
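A minimal Eiffel sketch of the database example may make this concrete; the class names and features are invented for the illustration, and the coordination of sites is deliberately left out.

    deferred class
        DATABASE
    feature
        answer (query: STRING): STRING
                -- The functional view: a single object that answers queries.
            deferred
            end
    end

    class
        DISTRIBUTED_DATABASE
    inherit
        DATABASE
    feature
        answer (query: STRING): STRING
                -- The distributed view: cooperate with the other sites,
                -- handle concurrent accesses and maintain consistency.
            do
                -- ... select a site, forward the query, reconcile updates ...
                Result := ""
            end
    end

Clients written against DATABASE need not change when the deferred class is effected by DISTRIBUTED_DATABASE.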
Time consuming work on specifications is of little use unless the specifications obtained provide some real benefits. Once we arrive at a product, it remains to see whether it is according to the specification, i.e. whether it has the right quality. The actual process of checking this is called verification.
In different contexts it is of varying importance to be absolutely assured that a product is according to its specification. This will always depend on how sensitive we are to an eventual failure. If the product is your hat, you have probably never thought of a "hat failure", but if it is a piece of software controlling the engine of an aeroplane, we can vividly imagine what the results of a failure could be! These are two extremes, and we have a broad spectrum in between. The need for verification becomes stronger as we move towards the "engine failure" end of the spectrum. The intelligent network services will be distributed over this spectrum, with a "medium" to "high" placement as typical. The reason is that the failure of telecommunications services is undesirable in most cases and can be catastrophic in some. We should therefore strive to keep the chances of this happening to a minimum. This is partly achieved through good specification and verification work. In fact, it is often stated that the most efficient use of resources for quality control is in the specification and verification phase.
An interesting aspect of verification appears if we consider two distinct aspects of the specification, namely the
By non-functional properties are meant properties which do not relate to the presence of functionality but to properties of the functionality itself when present. Examples of non-functional properties are
In the case of security we can see that a functionality may be present and be secure or insecure relative to a chosen security policy. If the specification says that "operation f(x,y,z) should be there" with pre- and post-conditions in order and nothing more, a verification could proceed to give an "OK" even if the operation was totally unacceptable security-wise.
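A small, invented contract illustrates the point (the feature and the auxiliary queries is_subscriber and password_of are hypothetical):

    change_password (subscriber_id: INTEGER; old_password, new_password: STRING)
            -- Replace the password of a subscriber.
        require
            known_subscriber: is_subscriber (subscriber_id)
            non_empty: not new_password.is_empty
        do
            -- ... update the subscriber record ...
        ensure
            updated: password_of (subscriber_id).is_equal (new_password)
        end

A verification against this contract alone would report "OK" even though nothing requires the caller to know old_password or to be authorised at all; the security policy has to be specified separately.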
Different methods of verification may be appropriate for different purposes and properties. We can name four broad classes of such methods; these are
Simulation may be appropriate when the final system or product is so large, complex or expensive to manufacture that a verification of the final system is impractical. Simulation will then give a picture of the most important behavioural features of the system, provided the simulations are themselves correctly executed. This may seem merely to move the verification problem onto the simulation itself; in any case, simulation alone can never give a completely satisfying verification and has to be used in conjunction with other techniques.
Laboratory tests are also used in addition to other verification techniques. Unlike simulation (which may also take place in a laboratory) laboratory testing is typically done on a version of the actual product, often a prototype. Apart from this, simulation and laboratory testing may look very similar and serve many of the same purposes.
Field testing is a way of getting to know how the final product or a prototype behaves in the environment where it is meant to function. This can typically be one of the last stages in a verification process incorporating also simulation and laboratory testing.
Formal proof is perhaps the most controversial of these methods. It consists of actually proving, by formal mathematical techniques, that a proposed product is according to specification. This has as a prerequisite that the specification is also a formal specification. Formal verification must be said to be in its infancy with respect to industrial applications, but there is a tendency towards more use of formal techniques in the computer-related industries, and in the telecommunications industry in particular. The reasons include the need for reliable and safe systems, such as the intelligent network. IN services can be subjected to all of these types of verification.
Formal verification derives its name from the fact that the verification consists of deriving a formal proof of statements of formal logic (see the box on formal proof, Figure 2). It is inseparably connected to formal specification and cannot be done with respect to informal specifications.
Hitherto formal verification has been closely connected only to computer programs. The reason is partly that computer programs are quite easily considered as mathematical or formal objects amenable to formal reasoning. This was first realised by Turing in 1936, and has since been of vast importance to the development of computer science and programming. In 1969 C.A.R. Hoare in (6) formulated a mathematical theory of programming and program correctness which has since been used extensively as a basis for formal verification techniques.
Simulation, testing and field trials can at best give us an increased probability that our program conforms to the specification. We can never by use of these methods get an unequivocal "Yes" or "No". The ambition of formal verification in combination with formal specification is to provide a means of obtaining such answers. We can outline a typical verification technique as follows. We consider a piece of program, S, which is to be verified against a specification. This specification will often be of a form saying essentially what is assumed to be the case before S is executed, the "precondition" P, and what is guaranteed to hold after the execution of S, the "postcondition" Q. If S is written in a suitable formal notation, the idea is that a formal proof of Q from P and S constitutes a verification of the fact that as long as the precondition holds, S will guarantee that the postcondition is fulfilled. If the logical system used is consistent, this will be a proof of a true statement and hence a formal mathematical proof of the fact that the program is doing what the specification says it should do.
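Such a specification is often written as the triple {P} S {Q}. A minimal, invented example:

    { x = 1 }   x := x + 1   { x = 2 }

Here the precondition P is x = 1, the program S is the assignment x := x + 1, and the postcondition Q is x = 2. The assignment axiom of (6) gives { x + 1 = 2 } x := x + 1 { x = 2 }, and since x = 1 implies x + 1 = 2, the rule of consequence yields the triple above.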
A problem with formal verification is that as programs and specifications become larger the complexity of doing a formal proof explodes1). For this reason formal verification tends to be restricted to programs with extreme requirements imposed on them. This can be the case for e.g. security and safety critical software in the intelligent network.
It is interesting to investigate the possible benefits of object orientation on formal specification and verification. The most obvious idea can be sketched as follows: Since object orientation promotes encapsulation of operations related to the same concept into objects, we could benefit from specifying and verifying objects and object aggregates separately. The idea is that building larger structures from proven substructures ought to result in a proven structure as long as the construction process is designed such as to maintain the correctness at all times. This is an interesting research challenge to be pursued further.
To conclude, we can say that verification of intelligent network services is important for many reasons, but the most important is perhaps that we should aim for verifiable services, that is, services specified and designed in such a way as to make verification possible if requested. This should include both formal and informal verification techniques. If we are to be able to do any sort of formal verification, we need formal specifications.
Every description needs a language. Up to now we have argued for an object oriented approach to the description of services. Object orientation alone, however, is not sufficient. We need to state what the objects do, i.e. provide their semantics, and for that a set of concepts must be defined in a precise "mathematical" way.
Mathematical texts are mostly written in a mixture of English (or some other natural language) and some formal notation. The descriptions rely on concepts with a precise mathematical meaning, such as sets, functions, mappings, etc. Proofs are not formal (see Figure 2), but are explications that attempt to convey an understanding of the problems, constructions, solutions, etc., involved. Often there is a consensus that if someone cared to undertake the tedious task of translating them into formal proofs, this could be done.
Formal proofs, on the other hand, do not rely on any understanding of concepts, but consist of a series of string manipulations in conformance with given patterns (called rules or schemata). They are therefore particularly well suited to computers. A problem with mechanical theorem provers is that the search for a proof often leads to a "combinatorial explosion", i.e. the number of cases that needs to be inspected grows to such proportions that the computer has to give up. The search for a proof must therefore be guided by heuristics or human interaction.
The advantages of formal proofs are as follows: First, they allow us to disregard human subjectivity. Second, one can look at mechanical theorem provers as tools that help humans in finding and verifying proofs and making the translation from informal to formal proofs less tedious. The third advantage is probably the most important for us. Programming languages are formal languages in the sense that they are "understood" by computers. When we write class definitions, we are thus using a formal language. Programs are best related to "mathematical concepts" if the concepts are described in a formal way too; this is even a prerequisite for formal program verification (see Section 5).
In order to provide formal descriptions, a formal language must be chosen. Expressive power is an important criterion. An outstanding class of languages is that of the first order languages used in first order logic (FOL). A large class of mathematical structures is definable in FOL, e.g. groups, rings and fields; see Figure 1. FOL is, however, limited in expressive power. For example, it is impossible to define the natural numbers N or the real numbers R. In order to single out N, one has to move to higher order logic (HOL).
Efficient theorem proving is another important criterion. Theorems are important for several reasons. First, theorems supply us with explicating facts, thus improving our understanding of the defined concepts. Second, if we are able to prove theorems, we are also able to discover inconsistencies in our descriptions by proving a contradiction. Third, theorems may help in finding and improving efficient implementations. The way graph theory has already been used to configure cost-effective telecommunication networks clearly illustrates this fact.
The general rule is that the greater the expressive power of a language is, the less efficient theorem proving becomes. First order languages incorporate, in a way, a maximum of expressive power combined with completeness. Depending on the field of application, first order languages may be too weak in expressiveness, but also too strong. They may be too weak to define certain structures, as pointed out above, but too strong to allow for efficient theorem proving. The reason is that, although complete, first order logic is not decidable, i.e. proofs may be extremely long, and in general there is no way of telling whether a given statement is provable. (This fact is tightly related to the halting problem of Turing machines: there is no general way of telling whether a Turing machine with a given input tape will halt.) In order to obtain decidability one has to give up some of the expressive power of FOL. At the present time we (the authors of this article) have not gained enough experience to tell whether we should base our descriptions on another logic than FOL.
Developed methodologies, so-called formal methods or formal description techniques, make formal descriptions easier to write and understand. They can be based on FOL, HOL or other logics, or on some specific theory like set theory. Their convenience in use is, in addition to expressive power, an important criterion for an appropriate choice. Although based on some specific logic, they can present to the user a richer language, with e.g. possibilities for modularity, which is translatable (compilable) into a language of the logic they are based on, thus allowing for lucid and comprehensible descriptions. They can also be associated with computer applications that support automatic syntax checking, theorem proving, program verification, etc.
Object orientation is a design and description paradigm, and does not necessarily imply the use of an object oriented language. For a discussion of the principles of object orientation the reader is referred to a text book on the subject, e.g. (7). Some languages, however, encourage and facilitate working within this paradigm better than others. (Figure 1 illustrates object orientation by a small example written in Eiffel.) In addition to the typical characteristics of object oriented languages, the following facilities are relevant for the description of services.
Facilities to handle concurrency and distribution: At some level one has to deal with the problems that arise from the fact that services involve the participation of several processes running in parallel and interconnected by a network. A language that allows a smooth transition from higher level descriptions, where distributional and concurrency aspects are transparent, to a lower level, where these aspects can be handled, is preferred.
Facilities to handle real-time requirements: The behaviour of certain services depends on elapsed time. "Call Forwarding on No Answer", e.g., is triggered only when a call is not answered for, say, 20 seconds. Such time constraints must be expressible. One also needs to express time constraints in order to schedule processes running in a distributed environment.
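As a purely illustrative sketch, with invented predicates and in no particular formalism, such a requirement could be written as a timed implication:

    alerted(call, t)  and  not answered(call, t') for all t' in [t, t + 20 s]
        implies  forwarded(call, t + 20 s)

A description language for IN services should allow constraints of this kind to be stated directly, rather than leaving them buried in implementation detail.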
Somehow we need to relate the objects to the concepts they rely on. Common practice is to describe this relation as comments in the code. For example, a procedure used to compute the factorial function will contain a comment stating this (usually presupposing that natural numbers and the factorial function are well-known, overlooking the fact that these too need to be defined). In Figure 1 the class GROUP is related to the mathematical concept group by the assertions, i.e. the comments as well as the requires and ensures clauses. The language used in the assertions contains elements belonging both to Eiffel (the object oriented language) and to the first order1) language used to define a group. For example, the ensures clause in the description of the add feature is an equation whose left hand side belongs to Eiffel and right hand side belongs to the language used to define a group (G, o). This mixed interface language is necessary to express relations between entities in the object world and entities in the mathematical world. The first order descriptions together with the interface language can be viewed as a "two tiered" specification of the object oriented descriptions, an approach that is developed by the Larch Project, see (4).
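Figure 1 is not reproduced in full here, but the flavour of such a two tiered specification can be suggested by a fragment (the identifiers are ours, and the mixed assertion language is, by design, not pure Eiffel):

    add (a, b: T): T
            -- The group operation as a feature of the class GROUP;
            -- the implementation is kept open by deferring it.
        deferred
        ensure
            -- Left hand side in Eiffel, right hand side in the first order
            -- language of the group (G, o): "o" is not an Eiffel operator.
            agrees_with_group: Result = a o b
        end

The two tiers are then the first order definition of (G, o) on one side and the Eiffel class text on the other, with the assertions as the interface between them.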
In order to make precise descriptions of services without going into all the implementational details, a set of concepts must be developed. By adopting an object oriented approach and supplementing the object oriented descriptions with mathematically defined concepts, it is possible to describe services at different levels of detail. In that way the requirements that users, subscribers and providers place on descriptions are met.
1 Ehrig, H, Mahr, B. Fundamentals of Algebraic Specification 2. Berlin, Springer-Verlag, 1990. (EATCS Monographs on Theoretical Computer Science; volume 21).
2 Fraleigh, J B. A First Course in Abstract Algebra. Reading, Mass., Addison Wesley, 1976.
3 Guttag, J V, Horning, J J, Modet, A. Report on the Larch shared language: Version 2.3. Digital Equipment Corporation, Systems Research Center, 1990. (Technical Report TR 58.)
4 Guttag, J V, Horning, J J, Wing, J M. Larch in five easy pieces. Digital Equipment Corporation, Systems Research Center, 1985. (Technical Report TR 5.) Superseded by DEC SRC TR 58.
5 Hayes, I. Specification Case Studies. Englewood Cliffs, N.J., Prentice Hall, 1987.
6 Hoare, C A R. An axiomatic basis for computer programming. Communications of the ACM, 12, 576-580, 1969.
7 Meyer, B. Object-Oriented Software Construction. Englewood Cliffs, N.J., Prentice-Hall, 1988.
8 van Leeuwen, J (editor). Handbook of Theoretical Computer Science, volume B. Amsterdam, Elsevier, 1990.
9 CCITT Q.1200-Series of Recommendations. Geneva, 1992.