Usenet comments on "Put it in the contract: The lessons of Ariane"

Robert Dewar said:

Well all sorts of things would have avoided the crash. One can also say that systematic proof of correctness, or systematic code review, or in fact almost any steps to be a bit more careful in this particular area, would have avoided the crash. As usual, a disaster of this type is a chain of occurrences, and fixing any one of them would have avoided the problem.

Certainly the notion of clear interfaces and interface requirements (I find the use of the capitalized "Design by Contract" a little pretentious, since I see no new concept there that justifies a new technical term) is one which we can all agree on. The argument that the use of Eiffel would have prevented the problem is entirely unconvincing, and has the unfortunate effect of encouraging people to think that the entire argument is just empty language advocacy.

When there is a major screwup in software (some examples: the Long Lines failure at AT&T, the Denver baggage-handling system, the overnight funds-transfer disaster in a NY bank, etc.), there is a natural tendency for over-enthusiastic pushers of a particular language to try to argue that their particular pet language would have guaranteed freedom from the problem.

Such arguments are never convincing, because even if it *were* the case that the particular problem at hand might *possibly* have had a better chance of being avoided writing in language X, extending this to the allegation that the use of language X would in general have been beneficial stretches credibility. In particular, in a large complex system of this type, the language and all other components have to meet all kinds of complex requirements, and people who know nothing at all about the whole application have no idea whatsoever if their pet language would in fact meet the criteria.

If Bertrand is saying:

(a) The Ariane problem (at least a little piece of it) is a good example of why designing interfaces that are complete with respect to establishing a requirements contract for clients is a good idea.

(b) One of the attractive features of Eiffel is that it supports this concept directly and encourages programmers to use this approach.

Then the argument is entirely reasonable, and one that I cannot see anyone objecting to. But the way the argument is stated, it reads as though he is saying:

If Eiffel had been used for the Ariane project, it would not have crashed. And I find that claim entirely unjustified, and it detracts from the important underlying point. No language can force people to do things right, and to a large extent no language prevents people from doing things right. What a language can do is to encourage a style of programming and thinking that helps people find their way to doing things right a little more easily.

This is indeed the principle behind the Ada design, and a lot of the reason that Ada is successful in practice is precisely because it encourages "good thinking".

But when people said "if only the Denver baggage system had been written in Ada, I would be flying to the new airport today", this crossed the line into non-credible language advocacy.

Similarly, a lot of the thinking behind the Eiffel design is also to encourage "good thinking", and in that sense Ada and Eiffel are kindred spirits in their approach to language design, though they make different choices about what is important to encourage.

However, when Eiffel advocacy goes so far as to claim that a large complex system, if written in Eiffel, would have succeeded where it otherwise failed, it too crosses the line into non-credible language advocacy.

I am, as everyone knows, a strong advocate of the use of Ada. However, I take a lot of care NOT to make exaggerated claims, since I don't think this helps at all. What Ada (or any other language) can do, is to push programmers up a little bit on the quality and productivity scales.

A little bit may not sound like much, but these are logarithmic scales on which the productivity of programmers and the quality of the code they write vary over orders of magnitude, so a "little bit" can add up to very substantial savings in costs, and very substantial improvements in quality in practice.

Ken Garlington has spent a lot of time criticizing this paper on Usenet.

He has made his arguments available here.

I'll let the reader form his or her own opinion on this debate. I (Jean-Marc Jézéquel) just want to make a meta-comment about our intentions with this paper, which seem to have been perceived differently by Ken Garlington and a few others:

No. In the authors' own view, the paper was much less ambitious than that. It merely attempted, in less than 2 pages, to raise the reader's awareness of and interest in the notion of Design by Contract (DBC) for reusing software components.

The paper just does not claim that. Strictly speaking, it is very clear that DBC by itself is neither sufficient (it needs to be integrated into a proper system and software engineering process) nor necessary (even a compiler is not necessary: just take a genius team and have it program the system in binary code). What is necessary is to check the specifications of a component before reusing it in a new context.

In his critique, Ken seems to overlook the fact that whereas assertions related to DBC are physically located inside the code, they logically belong to the specification of components. Tools are available to extract this specification. When you want to reuse a component (let's say the perfectly valid Ariane 4 SRI) in a new system (let's say Ariane 5), an engineer working in a DBC context would check (with all possible means) whether the new system is happy with the component's contract. This was not done in the case of Ariane 5, and this was the primary cause of the failure (there were subsequently many other factors that turned this mishandling into the 501 crash).
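The missed contract check can be sketched concretely. The actual Ariane code was in Ada; the Python below is only a hypothetical illustration (the name `to_int16` and the structure are invented), showing the implicit Ariane 4 range assumption made explicit as a precondition:

```python
# Hypothetical sketch of the Ariane SRI conversion, expressed as a contract.
# The real routine converted a 64-bit float (horizontal bias) to a 16-bit
# signed integer; the Ariane 4 trajectory guaranteed the range held,
# the Ariane 5 trajectory did not.

INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(value: float) -> int:
    # Precondition: the implicit Ariane 4 assumption, made explicit.
    assert INT16_MIN <= value <= INT16_MAX, "horizontal bias out of 16-bit range"
    result = int(value)  # truncate toward zero
    # Postcondition: the result stays within the representable range.
    assert INT16_MIN <= result <= INT16_MAX
    return result
```

Reusing such a routine on Ariane 5 would mean re-checking the precondition against the new trajectory data; with monitoring enabled, a violation surfaces at test time rather than in flight.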

We think there is broad agreement that Ariane 5 is a striking example of working with assumptions that were true at one point in time (Ariane 4) and no longer held later on (Ariane 5). The divergence of opinion lies in whether Eiffel-style assertions would have been useful and practicable for expressing these hidden assumptions. Many people familiar with DBC in the context of Eiffel think that they would; others, like Ken Garlington, think that they wouldn't.

Nick Leaton answered Ken's criticisms:

    3.1.6 Adverse effects of documenting assertions

    There is a line of reasoning that says, "Even if DBC/assertions would have been of minimal use, why not include them anyway just in case they do some good?" Such a statement assumes there are no adverse impacts to including assertions in code for documentation purposes. However, there are always adverse effects to including additional code: As with any other project, there are management pressures to meet cost and schedule drivers. Additional effort, therefore, is discouraged unless justified. More importantly, all projects have finite time and other resources available. Time spent on writing and analyzing assertions is time not spent elsewhere. If the work that is not performed as a result would have been more valuable (in terms of its effect on the safety and integrity of the system) than the time spent with assertions, then the resulting system may be less safe.

    There is a growing consensus in the safety-critical software field that simpler software tends to be safer software [21]. With this philosophy, additional code such as assertions should only be added where there is a clear benefit to the overall safety of the particular system being developed.

  1. Additional effort. There is plenty of evidence that the cost of finding and fixing a bug early in the development process is much cheaper than fixing it later. Now *IF* DBC as a methodology produces earlier detection of faults, then this 'additional effort' is justified, as it is effort early in the development process. I believe this to be the case from practical experience. Developing code last week, I detected errors in my code very early, even though the fault would only have shown itself in an exception. I had an invariant that made sure I had an error handler in a class. It broke as soon as I tried to construct the class; it didn't wait until I tried to generate an error, all because I hadn't set the handler up. That, combined with permanent testing of assertions, has the effect of requiring less effort overall.
  2. Time spent elsewhere. Is this the case? Some of it may be, but I believe that if you cannot be rigorous about what your software is trying to do, by writing assertions, then you are unlikely to produce a quality system. The effort of writing assertions overlaps with the design process. It is not wasted time; it just comes under a different heading. If your design process listed the assertions in the specification, would implementing them be a waste of effort?
  3. Simple software. You bet. The simpler the better. Occam's Razor rules. Now here there is a split between DBC a la Eiffel and DBC in, say, C++. In Eiffel it is simple. In C++ it is hard, particularly with inheritance of assertions. One common theme from Eiffel practitioners is their support for DBC. Why? Assertions are simple to write. Prior to using Eiffel I had read about the language. I was extremely sceptical about assertions because, in my experience with C++ and C++ programmers, no one writes them, mainly because it is a hassle. Take away the hassle and people will write them because of the benefits.
    3.2.2 Adverse effects of testing with assertions

    Assume for a moment that the proper testing environment and data had been available. Putting aside for the moment the question as to whether assertions would have been necessary to detect the fault (see section 4.2), are there any disadvantages to using assertions during testing, then disabling them for the operational system? In the author's experience, there are some concerns about using this approach for safety-critical systems: The addition of code at the object level obviously affects the time it takes for an object to complete execution. Particularly for real-time systems (such as the Ariane IRS), differences in timing between the system being tested and the operational system may cause timing faults, such as race conditions or deadlock, to go undetected. Such timing faults are serious failures in real-time systems, and a test which is hindered from detecting them loses some of its effectiveness.

    In addition, the differences in object code between the tested and operational systems raise the issue of errors in the object code for the operational system. Such errors are most likely to occur due to an error in the compilation environment, although it is possible that other factors, such as human error (e.g. specifying the wrong version of a file when the code is recompiled), can be involved. For example, the author has documented cases where Ada compilers generate the correct code when exceptions are not suppressed, but generate incorrect code (beyond the language's definition of "erroneous") when they are suppressed. This is not entirely unexpected; given the number of user-selectable options present in most compilation environments, it is difficult if not impossible to perform adequate toolset testing over all combinations of options. Nothing in the Eiffel paper indicates that Eiffel compilers are any better (or worse) than other compilers in this area.

    Although this is a fault of the implementation, not the language or methodology, it is nonetheless a practical limitation for safety-critical systems, where one object code error can have devastating results. One possible approach to resolving this issue is to completely test the system twice: once with assertions on and another time with assertions suppressed. However, the adverse factors described in section 3.1.6 then come into play: By adding to the test time in this manner, other useful test techniques (which might have greater value) are not performed. Generally, it is difficult to completely test such systems once, never mind twice! This effect is worse for safety-critical systems that perform object-code branch coverage testing, since this testing is completely invalidated when the object code changes [25]. Overall, there are significant limitations to using this technique for safety-critical systems, and in particular real-time systems such as the Ariane 5 IRS.

      Assertions affect timing in safety-critical systems. Firstly, it depends on the implementation. It is easy to envisage a system where the assertions are redundantly executed. But you would only want to do that if you were running with faulty software ?!*^£%

      I also find it worrying that systems are being used for safety-critical apps where there is the possibility of a race or deadlock occurring.

      Compilation problems. These can occur in any system, as you are aware. From discussions with some of the people involved in writing Eiffel compilers, the enabling and disabling of assertions has a very trivial implementation, which is very unlikely to go wrong. It has also been extensively tested in the field by end users. Do you trust your compiler? If not, you shouldn't be writing safety-critical software with it. Period.

Jim Cochrane said:

Regardless of what Dr. Meyer's motives are, I think there are some basic assumptions underlying this discussion that some people are implicitly questioning. The basic assumptions, as I see them, are:

1. Documenting precise specifications in the form of pre- and post-conditions and class invariants for all component interfaces and making this documentation easily available to anyone using the components will help in producing systems that fulfill their requirements correctly.

2. Coding as many of these specifications as possible helps in ensuring their precision and allows them to be checked while testing, further increasing the chances that the systems developed in this manner will fulfill their requirements correctly.

3. Using a language and environment that directly supports both the documentation and the coding of these specifications will make the specification process easier, more complete, and less error prone, thus further increasing the chances that the systems developed in this manner will fulfill their requirements correctly.
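The three assumptions above can be made concrete with a small sketch. This is a minimal, hypothetical Python example (the `BoundedCounter` class and its names are invented for illustration), in which the contract is both documented in the interface and executable as checks:

```python
class BoundedCounter:
    """Counter that never exceeds its limit.

    Documented contract (assumption 1):
      invariant: 0 <= count <= limit
    """

    def __init__(self, limit: int):
        # Precondition: a counter needs a positive limit.
        assert limit > 0, "precondition: limit > 0"
        self.count = 0
        self.limit = limit
        self._check_invariant()

    def increment(self) -> None:
        # Precondition: the counter must not already be full.
        assert self.count < self.limit, "precondition: counter not full"
        old = self.count
        self.count += 1
        # Postcondition: count advanced by exactly one (assumption 2:
        # the coded check keeps the documented spec honest).
        assert self.count == old + 1, "postcondition: count = old count + 1"
        self._check_invariant()

    def _check_invariant(self) -> None:
        # Class invariant, re-checked after each public operation.
        assert 0 <= self.count <= self.limit, "invariant violated"
```

Assumption 3 is the observation that a language which generates and checks such clauses automatically (as Eiffel does) removes the manual bookkeeping this sketch requires.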

One point of the discussion, I believe, is that the method implied in the above assumptions is a tool to use along with other valuable tools in producing high-quality software; it does not replace other useful tools such as a good test method and plan - rather, it complements them.

Now perhaps I am biased and am not aware of valid counterarguments, but I am not able to see how the method that would result from making use of these assumptions, called design-by-contract in this discussion, would not significantly increase the quality of a system of substantial size, in the sense of producing a system that correctly fulfills its requirements.

Assuming that there are no valid counterarguments (if there are, I'm sure people will post them), the discussion comes down to how practical it is to implement and use an environment that supports design by contract.

As far as assumption 1 is concerned - applying precise specifications to component interfaces - the usefulness and practicality of this technique seem obvious. Just visualizing the consequences of not doing it (for example, increased time spent figuring out what a component does, developers' time wasted informing other developers what the interface specification of a component is, errors in coding due to misunderstanding an undocumented specification) should be enough to convince anyone that it would be foolish not to apply this technique on any significant project.

The practicality of applying assumptions 2 and 3 is perhaps not as obvious as the first assumption, but strong reasons do exist to support their practicality. First, coding interface specifications combined with a good test plan will increase the chance of uncovering defects and decrease the cost of fixing them (since it is well known that the later a defect is found, the more it will cost to fix). Also, using an environment that provides rich support for the documenting and coding of interface specifications will make it easier and less time consuming and thus more practical for developers to use this technique. Finally, using such an environment would provide an infrastructure that would allow development organizations to make the technique of design by contract a formal part of the development process and thus make the components of this technique - precise, complete, and testable specifications - a part of the culture of the development organization, so that the techniques used to produce high quality software become habitual (in a positive sense), rather than being something foreign that is forced upon the developers.

As I see it, Eiffel is simply an evolutionary step in the maturing fields of computer science and software engineering. By directly supporting design by contract it increases the chances that a team of capable developers will produce a quality product. My guess is that other languages (besides Eiffel and Sather, the only ones I'm aware of that directly support design by contract) will appear that also support this technique and that chances are that at least one of them will become mainstream (in the sense of being perhaps as well-known as Java is today, hopefully without the hype). Some of these languages may provide even more complete and sophisticated support for design by contract than Eiffel and Sather. It will be interesting to see what languages and techniques are being used 20 or 30 years from now (for those of us who are not retired or dead, that is :-) ).

Ted Velkoff said:

Ken Garlington responded to an earlier post of mine:

In my experience, there is a significant difference. I will not presume to make a scientific claim; rather I will offer a personal, anecdotal example.

About two years ago, I built a modest enhancement to a system which amounted to 4K lines each of Ada and C++. In the package specs and module headers I included preconditions, postconditions and invariants as comments. In the Ada bodies, I wrote if-then-else clauses to test preconditions (violations led to a chain of exceptions raised out to the main program). In the C++ implementations, I included conditionally compiled calls to the assert macro (which does a core dump and spits out a file and line number) for preconditions and trivial postconditions.

This approach worked really well for me because I was very motivated and willing to do certain things. Those things included: Step 1) write the assertions as comments in the specs; Step 2) update comments in the specs when testing revealed missing assertions (it's against the rules to introduce a precondition and not tell anyone about it - a contract with secret codicils, so to speak); Step 3) write the code to test the conditions, raise exceptions and generate a trace (in the Ada code).

This approach was not without flaws: Flaw 1) In the Ada code, for instance, there was no conditional compilation; removing the assertion monitoring would have been a tedious exercise; Flaw 2) I didn't even attempt to monitor postconditions - this is harder, even for simple ones (e.g. count = old count + 1); Flaw 3) as implied above, information had to be kept in synch in two places: the specs and bodies.
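The two styles described above - explicit precondition tests that raise exceptions (the Ada bodies) and conditionally compiled assert calls (the C++ implementations) - can be approximated in a short Python sketch. These functions are hypothetical illustrations, not the author's actual code:

```python
class PreconditionError(Exception):
    """Raised when a caller violates a documented precondition."""

def pop_ada_style(stack: list):
    # Ada-style: an explicit if-test that raises an exception out to the
    # caller. Always executed -- removing such checks later is the
    # "tedious exercise" noted above (Flaw 1).
    if not stack:
        raise PreconditionError("pop requires a non-empty stack")
    return stack.pop()

def pop_cpp_style(stack: list):
    # C++-style: an assert that can be stripped from the build. Python
    # removes `assert` statements under the -O flag, much as defining
    # NDEBUG disables the C/C++ assert macro.
    assert stack, "pre: stack not empty"
    return stack.pop()
```

Either way, the contract also has to be repeated as a comment in the interface, which is exactly the spec/body synchronization burden of Flaw 3.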

In my opinion, this won't scale up to a large team. 20% might see the benefit and be willing to do the many tedious manual tasks I described; 20% will see no benefit, and even if they did, wouldn't do it; the other 60% could be persuaded that it would help but wouldn't tolerate the extra upfront burden and wouldn't adopt it (in particular, steps 2 and 3). I should say I haven't tried it out on a team; this is pure conjecture. (If anything, this appraisal is optimistic.)

Meanwhile at home I write software in Eiffel, where assertions are built into the language. In Eiffel, all I do is Step 1: write the pre/post-conditions and invariants. I don't need to do Step 2 (keep spec and body in synch) since in Eiffel, the spec (properly speaking, the short form) is automatically generated by tools. Step 3, (coding the assertion tests, handling failures, providing useful diagnostics) is provided automatically by the environment. Furthermore, the three flaws are eliminated: 1) with a compile-time switch I can turn off monitoring (I don't have to go through and "comment out" reams of code); 2) it's easy to monitor post-conditions, since this is provided by the tools (caveat - this is useful for testing and integration; postcondition monitoring necessarily incurs a performance penalty); 3) mentioned above, there is no spec/body redundancy to be managed by the programmer.
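The Eiffel behaviour described here - contracts written once, with monitoring switchable without editing the code bodies - can be roughly approximated in other languages. Below is a hypothetical Python sketch using a decorator and a global switch; it is an illustration of the idea, not how Eiffel implements it:

```python
MONITOR = True  # analogous to Eiffel's assertion-monitoring switch

def contract(pre=None, post=None):
    """Attach a pre/postcondition to a function.

    Checks run only while MONITOR is on, so they can be disabled
    globally without touching any function body.
    """
    def wrap(f):
        def wrapped(*args, **kwargs):
            if MONITOR and pre is not None:
                assert pre(*args, **kwargs), f"precondition of {f.__name__} violated"
            result = f(*args, **kwargs)
            if MONITOR and post is not None:
                assert post(result, *args, **kwargs), f"postcondition of {f.__name__} violated"
            return result
        return wrapped
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: abs(r * r - x) < 1e-6 * max(x, 1.0))
def sqrt_approx(x: float) -> float:
    # The body carries no checking code of its own; the contract
    # lives entirely in the decorator above it.
    return x ** 0.5
```

The contract sits next to the signature, so a documentation tool could extract it - a crude analogue of Eiffel's automatically generated short form.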

Returning to my view of scalability, I think the same 20% of programmers would love to work in a programming environment with these capabilities, 20% would still dislike it; but the other 60% - I think they would like it. I say that because the cost/benefit is easier to see. It doesn't take much time to write assertions, so when the rest comes for free (documentation and testing), programmers will want to write them.

Finally, I'll contrast Eiffel's assertions with comments (in any language). I conjecture that because they can be monitored easily, Eiffel's assertions affect programmer psychology. As a supplier, if I write postconditions or invariants as comments, I might be merely making promises. If I know those assertions are easily monitored, I will be careful about what I promise: the auditor might show up at any minute.

I can imagine that Eiffel may not be suitable for certain real-time applications mentioned in this thread. Nevertheless, I am convinced that a large number of software projects would benefit from employing "Design By Contract", and that Eiffel is the only language I know that makes its application practical and cost-effective.

Wes Groleau (Hughes Defense Communications, Fort Wayne, IN USA Senior Software Engineer - AFATDS)

Although they didn't demand that it be "in the code", the inquiry board did note

which to me means

1. given the requirements (ariane 4)

2. design and implement the solution (the code that failed)

3. document any restrictions which, though not requirements, are consequences of the chosen design.

While the results of step three are nearly guaranteed to be incomplete, for reasons already beaten to death in this discussion, Bertrand Meyer came close to saying (correctly) that the effort of doing this _might_ have prevented the failure. Where he goes too far is on two points (now I'm repeating old news):

1. He says "probably would have" instead of "might have"

2. If you're not doing this in Eiffel syntax, you're not really doing it.

Now the last sentence will undoubtedly draw "he never said that" flames, so let me admit that (2) is an oversimplification of his claims that only Eiffel _really_ has "design by contract".

Bertrand Meyer answered:

To repeat once again the basic point made in the paper by Jean-Marc Jezequel and myself: it is dangerous and unjustifiable, especially in a mission-critical setting, to reuse a software element without a specification.

From the beginning to the end of the software lifecycle, Design by Contract encourages associating such specifications with everything that you write. The method applies on at least four levels: getting the stuff right in the first place, by writing the software elements (the description of *how* things are done) together with their contracts (*what* they are supposed to achieve); documenting them, through automatic extraction of contracts as the basic information on a class; using the contracts as one of the principal guides to reuse; and applying them to debugging, which becomes less of a blind chase and more of a focused analysis of possible discrepancies between intents (the contracts) and reality (the implementations).

None of this automatically guarantees perfection, but it sure helps, as reported in these threads by every poster who had actually used Eiffel, where the ideas of Design by Contract are most directly realized. It is true that to someone who has not really tried it some of the benefits may appear exaggerated, but they're real. Just one case in which an incorrect call to a routine produces a violated precondition (caught right away at testing time, whereas it could in another approach have gone undetected for ages, and caused painful bugs) justifies the investment. In Eiffel development, this happens all the time.

The Ariane case provides a textbook example of the dangers of not using contracts. Of course it is easy to dismiss this example through below-the-belt arguments, such as

All this is rhetoric, and cannot obscure the basic claim that systematic use of Design by Contract would probably have avoided the crash. Of course, like any reconstruction of the past, this conjecture cannot be proved, but there is enough evidence in the official report to support it. One can also object that other techniques would have achieved the same goal, such as heavy-artillery a posteriori quality assurance, but they seem far more difficult and costly than integrating Design by Contract, a simple and easy-to-apply idea, into the design, implementation, documentation, reuse and validation process.

Some of the negative reactions seem to recall arguments that were used against the proponents of structured programming in the 70s, and against those of object technology in the 80s. They are probably inevitable (and in part healthy, as they force the methodologists to clarify their ideas, refine them, and avoid overselling). But when a useful methodological principle surfaces, all the rhetoric in the world will not prevent its diffusion.

Robert S. White (an embedded systems software engineer) answered Bertrand Meyer, saying:

We "aerospace" people agree.

The problem with this is that you are preaching to the choir. Long, long ago, before Eiffel, the "aerospace" industry practiced the general concept of capturing system requirements in a spec document, flowing them down to a software requirements document, conducting detailed design reviews with the requirement documents as resources/authorities (or getting them updated), and finally performing software qualification tests to prove performance against the software requirements spec. The "crash" resulted from a failure to fully follow this existing practice. Your papers about Eiffel and Design by Contract are just, IMO, another way to implement the concept of developing software that must comply with requirements.

August 22, 1997
Copyright ©1997 Irisa