Silver Bullet Short Notes
I believe the hard part of building software to be the specification, design, and testing of this
conceptual construct, not the labor of representing it and testing the fidelity of the
representation. We still make syntax errors, to be sure; but they are fuzz compared with the
conceptual errors in most systems.
If this is true, building software will always be hard. There is inherently no silver bullet.
Let us consider the inherent properties of this irreducible essence of modern software systems:
complexity, conformity, changeability, and invisibility.
1. Complexity.
Physics deals with terribly complex objects even at the "fundamental" particle level. The
physicist labors on, however, in a firm faith that there are unifying principles to be found,
whether in quarks or in unified field theories.
2. Conformity.
In many cases, the software must conform because it is the most recent arrival on the scene.
In others, it must conform because it is perceived as the most conformable. But in all cases,
much complexity comes from conformation to other interfaces; this complexity cannot be
simplified out by any redesign of the software alone.
3. Changeability.
Manufactured things are infrequently changed after manufacture; they are superseded by
later models, or essential changes are incorporated into later-serial-number copies of the
same basic design. Call-backs of automobiles are really quite infrequent; field changes of
computers somewhat less so. Both are much less frequent than modifications to fielded
software. In part, this is so because the software of a system embodies its function, and the
function is the part that most feels the pressures of change. In part it is because software can
be changed more easily--it is pure thought-stuff, infinitely malleable.
All successful software gets changed. Two processes are at work. First, as a software product
is found to be useful, people try it in new cases at the edge of or beyond the original domain.
The pressures for extended function come chiefly from users who like the basic function and
invent new uses for it.
Second, successful software survives beyond the normal life of the machine vehicle for which
it is first written. If not new computers, then at least new disks, new displays, new printers
come along; and the software must be conformed to its new vehicles of opportunity.
In short, the software product is embedded in a cultural matrix of applications, users, laws,
and machine vehicles. These all change continually, and their changes inexorably force
change upon the software product.
4. Invisibility.
1. High-level languages.
Surely the most powerful stroke for software productivity, reliability, and simplicity has been
the progressive use of high-level languages for programming.
An abstract program consists of conceptual constructs: operations, data types,
sequences, and communication. The concrete machine program is concerned with bits,
registers, conditions, branches, channels, disks, and such. To the extent that the high-level
language embodies the constructs one wants in the abstract program and avoids all lower
ones, it eliminates a whole level of complexity that was never inherent in the program at all.
Moreover, at some point the elaboration of a high-level language creates a tool-mastery
burden that increases, not reduces, the intellectual task of the user who rarely uses the esoteric
constructs.
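The point about eliminating a whole level of accidental complexity can be sketched with a small, hypothetical Python example: the same abstract operation (sum a collection) written once in terms of the conceptual construct and once in terms of the counters, bounds checks, and branches a lower level forces on the programmer.

```python
# Low-level form: explicit index, accumulator, bound, and branch --
# accidental complexity that was never part of the abstract program.
def total_low_level(xs):
    i = 0
    acc = 0
    n = len(xs)
    while i < n:
        acc = acc + xs[i]
        i = i + 1
    return acc

# High-level form: the conceptual construct, stated directly.
def total_high_level(xs):
    return sum(xs)

assert total_low_level([3, 1, 4]) == total_high_level([3, 1, 4]) == 8
```

Both compute the same function; only the second says no more than the abstract program meant.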
2. Time-sharing.
Time-sharing attacks a quite different difficulty. Time-sharing preserves immediacy, and
hence enables one to maintain an overview of complexity. The slow turnaround of batch
programming means that one inevitably forgets the minutiae, if not the very thrust, of what
one was thinking when he stopped programming and called for compilation and execution.
Slow turnaround, like machine-language complexities, is an accidental rather than an
essential difficulty of the software process.
3. Object-oriented programming.
One must be careful to distinguish two separate ideas that go under that name: abstract data
types and hierarchical types. The two concepts are orthogonal--one may have hierarchies
without hiding and hiding without hierarchies. Both concepts represent real advances in the
art of building software. Such advances can do no more than to remove all the accidental
difficulties from the expression of the design.
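The orthogonality of the two ideas can be illustrated with a minimal, hypothetical Python sketch: one type that hides its representation without any hierarchy, and one small hierarchy that hides nothing.

```python
# Hiding without hierarchy: an abstract data type whose representation
# is private (by Python convention); clients use only the operations.
class Counter:
    def __init__(self):
        self._count = 0          # hidden representation

    def increment(self):
        self._count += 1

    def value(self):
        return self._count


# Hierarchy without hiding: subtyping adds structure, but every field
# is public -- nothing is encapsulated.
class Shape:
    def __init__(self, x, y):
        self.x, self.y = x, y    # exposed representation


class Circle(Shape):
    def __init__(self, x, y, r):
        super().__init__(x, y)
        self.r = r               # still exposed


c = Counter()
c.increment()
assert c.value() == 1
assert isinstance(Circle(0, 0, 2), Shape)
```

Either idea can be adopted without the other, which is why conflating them under one name obscures what each actually buys.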
4. Artificial intelligence. Many people expect advances in artificial intelligence to provide the
revolutionary breakthrough that will give order-of-magnitude gains in software productivity
and quality. [3] I do not. To see why, we must dissect what is meant by "artificial
intelligence."
5. Expert systems.
An expert system is a program that contains a generalized inference engine and a rule base,
takes input data and assumptions, explores the inferences derivable from the rule base, yields
conclusions and advice, and offers to explain its results by retracing its reasoning for the user.
The inference engines typically can deal with fuzzy or probabilistic data and rules, in addition
to purely deterministic logic.
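The architecture just described can be sketched in a few lines of Python: a generalized, application-independent inference engine, plus a rule base encoded separately and uniformly. The rules below are hypothetical examples invented for illustration, not drawn from any real expert system.

```python
def infer(facts, rules):
    """Forward-chain: repeatedly apply rules until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


# The rule base: (premises, conclusion) pairs, uniform in shape and
# easy to develop, change, test, and document independently of the engine.
rules = [
    (["module changed", "no regression tests"], "high risk"),
    (["high risk"], "recommend review"),
]

derived = infer(["module changed", "no regression tests"], rules)
assert "recommend review" in derived
```

The engine never mentions the application; all the application-peculiar material lives in the rule base, which is exactly what regularizes its complexity.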
Such systems offer some clear advantages over programmed algorithms designed for arriving
at the same solutions to the same problems:
• Inference-engine technology is developed in an application-independent way, and then
applied to many uses. One can justify much effort on the inference engines. Indeed,
that technology is well advanced.
• The changeable parts of the application-peculiar materials are encoded in the rule base
in a uniform fashion, and tools are provided for developing, changing, testing, and
documenting the rule base. This regularizes much of the complexity of the application
itself.
How can this technology be applied to the software-engineering task? In many ways: Such
systems can suggest interface rules, advise on testing strategies, remember bug-type
frequencies, and offer optimization hints.
The most powerful contribution by expert systems will surely be to put at the service of the
inexperienced programmer the experience and accumulated wisdom of the best programmers.
This is no small contribution.
6. "Automatic" programming.
For almost 40 years, people have been anticipating and writing about "automatic
programming," or the generation of a program for solving a problem from a statement of the
problem specifications. Some today write as if they expect this technology to provide the next
breakthrough. In practice, it has worked only for narrow, well-understood problem classes
where:
• There are many known methods of solution to provide a library of alternatives.
• Extensive analysis has led to explicit rules for selecting solution techniques, given
problem parameters.
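The two conditions above can be made concrete with a small, hypothetical Python sketch: a library of known solution methods and an explicit rule that selects among them from a problem parameter. The methods and the threshold are illustrative assumptions, not a real generator.

```python
def insertion_sort(xs):
    """One method in the library: good for small inputs."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out


def merge_sort(xs):
    """Another method in the library: good for large inputs."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]


def select_method(n):
    """Explicit selection rule, keyed on the problem parameter n.
    The threshold 32 is an arbitrary illustrative choice."""
    return insertion_sort if n < 32 else merge_sort


def auto_sort(xs):
    return select_method(len(xs))(xs)


assert auto_sort([3, 1, 2]) == [1, 2, 3]
```

This works precisely because sorting is characterized by few parameters and has well-analyzed alternatives; the hard question is how such a scheme generalizes beyond domains with those neat properties.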
7. Graphical programming.
A favorite subject for PhD dissertations in software engineering is graphical, or visual,
programming--the application of computer graphics to software design. Yet the flowchart is a
very poor abstraction of software structure. Indeed, it is best viewed as Burks, von Neumann,
and Goldstine's attempt to provide a desperately needed high-level control language for their
proposed computer.
8. Program verification.
Much of the effort in modern programming goes into testing and the repair of bugs. Is there
perhaps a silver bullet to be found by eliminating the errors at the source, in the
system-design phase? Can both productivity and product reliability be radically enhanced by
following the profoundly different strategy of proving designs correct before the immense
effort is poured into implementing and testing them? Program verification is a very powerful
concept, and it will be very important for such things as secure operating-system kernels. The
technology does not promise, however, to save labor. Even perfect program verification can
only establish that a program meets its specification.
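That last limit can be made vivid with a small, hypothetical Python sketch. Suppose a specification for a "sort" routine requires only that every adjacent pair of the result be ordered, forgetting to require that the output be a permutation of the input. A program can provably meet that specification and still be useless.

```python
def spec_ordered(result):
    """The (incomplete) specification: adjacent pairs are in order."""
    return all(a <= b for a, b in zip(result, result[1:]))


def broken_sort(xs):
    # Returns nothing at all -- yet provably satisfies spec_ordered,
    # because an empty sequence has no out-of-order adjacent pair.
    return []


def real_sort(xs):
    return sorted(xs)


assert spec_ordered(broken_sort([3, 1, 2]))  # "verified", yet useless
assert spec_ordered(real_sort([3, 1, 2]))
```

Verification shifts the burden onto getting the specification right, and writing a correct, complete specification is itself part of the essential, not the accidental, task.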