Silver Bullet Short Notes



I believe the hard part of building software to be the specification, design, and testing of this
conceptual construct, not the labor of representing it and testing the fidelity of the
representation. We still make syntax errors, to be sure; but they are fuzz compared with the
conceptual errors in most systems.
If this is true, building software will always be hard. There is inherently no silver bullet.
Let us consider the inherent properties of this irreducible essence of modern software systems:
complexity, conformity, changeability, and invisibility.

1. Complexity.

From the complexity comes the difficulty of communication among team members, which leads to product flaws, cost overruns, and schedule delays. From the complexity comes the difficulty of enumerating, much less understanding, all the possible states of the program, and from that comes the unreliability. From complexity of function comes the difficulty of invoking function, which makes programs hard to use.

2. Conformity.
Physics deals with terribly complex objects even at the "fundamental" particle level. The physicist labors on, however, in a firm faith that there are unifying principles to be found, whether in quarks or in unified field theories. No such faith comforts the software engineer: much of the complexity he must master is arbitrary complexity, forced by the many human institutions and systems to which his interfaces must conform. In many cases, the software must conform because it is the most recent arrival on the scene. In others, it must conform because it is perceived as the most conformable. But in all cases, much complexity comes from conformation to other interfaces; this complexity cannot be simplified out by any redesign of the software alone.

3. Changeability.
Manufactured things are infrequently changed after manufacture; they are superseded by later models, or essential changes are incorporated into later-serial-number copies of the same basic design. Call-backs of automobiles are really quite infrequent; field changes of computers somewhat less so. Both are much less frequent than modifications to fielded software. In part, this is so because the software of a system embodies its function, and the function is the part that most feels the pressures of change. In part it is because software can be changed more easily--it is pure thought-stuff, infinitely malleable.
All successful software gets changed. Two processes are at work. First, as a software product
is found to be useful, people try it in new cases at the edge of or beyond the original domain.
The pressures for extended function come chiefly from users who like the basic function and
invent new uses for it.
Second, successful software survives beyond the normal life of the machine vehicle for which
it is first written. If not new computers, then at least new disks, new displays, new printers
come along; and the software must be conformed to its new vehicles of opportunity.
In short, the software product is embedded in a cultural matrix of applications, users, laws,
and machine vehicles. These all change continually, and their changes inexorably force
change upon the software product.

4. Invisibility.

Software is invisible and unvisualizable. Geometric abstractions are powerful tools. The floor plan of a building helps both architect and client evaluate spaces, traffic flows, views. Contradictions and omissions become obvious.
The reality of software is not inherently embedded in space. Hence, it has no ready geometric
representation in the way that land has maps, silicon chips have diagrams, computers have
connectivity schematics. As soon as we attempt to diagram software structure, we find it to
constitute not one, but several, general directed graphs superimposed one upon another. The
several graphs may represent the flow of control, the flow of data, patterns of dependency,
time sequence, name-space relationships. These graphs are usually not even planar, much less
hierarchical. Indeed, one of the ways of establishing conceptual control over such structure is
to enforce link cutting until one or more of the graphs becomes hierarchical.
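
To make that last idea concrete, here is a small Python sketch, with the modules and their links invented for the example: two of the graphs Brooks mentions are overlaid on the same set of modules, and back edges are cut from each until that graph becomes hierarchical (acyclic, so it can be layered).

    # Invented example: software structure as several directed graphs over the
    # same modules, and "link cutting" applied until each graph is hierarchical.
    call_graph = {                        # flow of control
        "ui": ["logic"],
        "logic": ["storage", "ui"],       # logic -> ui closes a cycle
        "storage": [],
    }
    data_flow = {                         # flow of data over the same modules
        "ui": ["logic"],
        "logic": ["storage"],
        "storage": ["ui"],
    }

    def cut_links_until_hierarchical(graph):
        """Remove back edges found by depth-first search until the graph is acyclic."""
        graph = {node: list(succs) for node, succs in graph.items()}
        cut = []

        def visit(node, path, done):
            for succ in list(graph[node]):
                if succ in path:                      # back edge: part of a cycle
                    graph[node].remove(succ)
                    cut.append((node, succ))
                elif succ not in done:
                    visit(succ, path | {succ}, done)
            done.add(node)

        seen = set()
        for start in graph:
            if start not in seen:
                visit(start, {start}, seen)
        return graph, cut

    for name, overlay in (("control flow", call_graph), ("data flow", data_flow)):
        hierarchy, removed = cut_links_until_hierarchical(overlay)
        print(name, "cut links:", removed)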

Past Breakthroughs Solved Accidental Difficulties

1. High-level languages.
The most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming.
An abstract program consists of conceptual constructs: operations, data types,
sequences, and communication. The concrete machine program is concerned with bits,
registers, conditions, branches, channels, disks, and such. To the extent that the high-level
language embodies the constructs one wants in the abstract program and avoids all lower
ones, it eliminates a whole level of complexity that was never inherent in the program at all.
Moreover, at some point the elaboration of a high-level language creates a tool-mastery
burden that increases, not reduces, the intellectual task of the user who rarely uses the esoteric
constructs.
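
As a small, invented illustration of the point, the Python fragment below expresses one computation twice: once in terms of the conceptual construct the programmer actually cares about, and once with the counters and explicit stepping that lower-level notations force one to manage by hand.

    # Invented example: the "abstract program" is simply "the average grade".
    grades = [82, 91, 75, 68, 94]
    average = sum(grades) / len(grades)   # the conceptual construct, directly

    # The same computation with the accidental bookkeeping made explicit --
    # an index, a running total, a loop condition -- detail that was never
    # inherent in the program at all.
    total = 0
    index = 0
    while index < len(grades):
        total += grades[index]
        index += 1
    average_by_hand = total / len(grades)

    assert average == average_by_hand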

2. Time-sharing.
Time-sharing attacks a quite different difficulty. Time-sharing preserves immediacy, and
hence enables one to maintain an overview of complexity. The slow turnaround of batch
programming means that one inevitably forgets the minutiae, if not the very thrust, of what
one was thinking when he stopped programming and called for compilation and execution. Slow turnaround, like machine-language complexities, is an accidental rather than an essential difficulty of the software process.

3. Unified programming environments.


Unix and Interlisp, the first integrated programming
environments to come into widespread use, seem to have improved productivity by integral
factors. Why?
They attack the accidental difficulties that result from using individual programs together, by
providing integrated libraries, unified file formats, and pipes and filters. As a result,
conceptual structures that in principle could always call, feed, and use one another can indeed
easily do so in practice.
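
A rough sketch of the pipes-and-filters idea, written here in Python rather than a shell, with the filters and the "unified format" of one record per line invented for the example: because every stage reads and writes the same simple format, independently written pieces can feed one another.

    # Invented example: three independently written "filters" that all agree on
    # one simple format (a stream of text lines), so they compose like a pipeline.
    def read_lines(text):
        for line in text.splitlines():
            yield line

    def grep(lines, word):
        for line in lines:
            if word in line:
                yield line

    def to_upper(lines):
        for line in lines:
            yield line.upper()

    log = "boot ok\ndisk error\nnetwork ok\ndisk error\n"

    # The pipeline: read | grep "error" | to_upper
    for line in to_upper(grep(read_lines(log), "error")):
        print(line)                       # prints "DISK ERROR" twice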
This breakthrough in turn stimulated the development of whole toolbenches, since each new
tool could be applied to any programs that used the standard formats.
Because of these successes, environments are the subject of much of today's software-
engineering research. We look at their promise and limitations in the next section.

Hopes for the Silver


1. Ada and other high-level language advances.
Ada is a general-purpose high-level language of the 1980's. It not only reflects
evolutionary improvements in language concepts, but indeed embodies features to encourage
modern design and modularization.
Ada will not prove to be the silver bullet that slays the software productivity
monster. It is, after all, just another high-level language, and the biggest payoff from such
languages came from the first transition -- the transition up from the accidental complexities
of the machine into the more abstract statement of step-by-step solutions.
Ada's greatest contribution will be that switching to it
occasioned training programmers in modern software-design techniques.

2. Object-oriented programming.
One must be careful to distinguish two separate ideas that go under that name: abstract data types and hierarchical types. The two concepts are orthogonal--one may have hierarchies without hiding and hiding without hierarchies. Both concepts represent real advances in the art of building software. Nevertheless, such advances can do no more than to remove all the accidental difficulties from the expression of the design.
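
A small, hypothetical Python sketch of that orthogonality (the classes are invented for the example): the first type hides its representation but sits in no hierarchy; the second pair forms a hierarchy but hides nothing.

    # Hiding without hierarchy: an abstract data type whose storage is kept
    # behind a small set of proper operations, with no inheritance involved.
    class Stack:
        def __init__(self):
            self._items = []              # representation kept private by convention

        def push(self, value):
            self._items.append(value)

        def pop(self):
            return self._items.pop()

    # Hierarchy without hiding: a subordinate type refines a general one,
    # but every field is public and nothing about the representation is hidden.
    class Shape:
        def __init__(self, x, y):
            self.x = x
            self.y = y

    class Circle(Shape):
        def __init__(self, x, y, radius):
            super().__init__(x, y)
            self.radius = radius

    s = Stack()
    s.push(3)
    print(s.pop())                        # 3
    print(Circle(0, 0, 2).radius)         # 2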

3. Artificial intelligence. Many people expect advances in artificial intelligence to provide the
revolutionary breakthrough that will give order-of-magnitude gains in software productivity
and quality. [3] I do not. To see why, we must dissect what is meant by "artificial
intelligence."

D.L. Parnas has clarified the terminological chaos: [4]


Two quite different definitions of AI are in common use today. AI-1: The use
of computers to solve problems that previously could only be solved by
applying human intelligence. AI-2: The use of a specific set of programming
techniques known as heuristic or rule-based programming. In this approach
human experts are studied to determine what heuristics or rules of thumb they
use in solving problems.... The program is designed to solve a problem the way
that humans seem to solve it.

4. Expert systems.
An expert system is a program that contains a generalized inference engine and a rule base,
takes input data and assumptions, explores the inferences derivable from the rule base, yields
conclusions and advice, and offers to explain its results by retracing its reasoning for the user.
The inference engines typically can deal with fuzzy or probabilistic data and rules, in addition
to purely deterministic logic.
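
To make the description concrete, here is a minimal, invented Python sketch of a forward-chaining engine: a generic loop over an application-specific rule base, with a trace kept so the program can "explain" its conclusions by retracing its reasoning. Real expert-system shells are far richer (fuzzy and probabilistic reasoning, for instance), but the division of labor is the same.

    # Invented sketch: a generic inference engine plus an application-specific
    # rule base. Each rule is (name, premises, conclusion).
    RULES = [
        ("r1", {"fever", "cough"}, "flu_suspected"),
        ("r2", {"flu_suspected", "short_of_breath"}, "see_doctor"),
    ]

    def infer(facts, rules):
        """Forward-chain over the rule base, recording which rule added each fact."""
        facts = set(facts)
        trace = []
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append((name, sorted(premises), conclusion))
                    changed = True
        return facts, trace

    facts, trace = infer({"fever", "cough", "short_of_breath"}, RULES)
    print("conclusions:", facts)
    for name, premises, conclusion in trace:   # the "explain by retracing" part
        print(name, ":", premises, "->", conclusion)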
Such systems offer some clear advantages over programmed algorithms designed for arriving
at the same solutions to the same problems:
• Inference-engine technology is developed in an application-independent way, and then
applied to many uses. One can justify much effort on the inference engines. Indeed,
that technology is well advanced.
• The changeable parts of the application-peculiar materials are encoded in the rule base
in a uniform fashion, and tools are provided for developing, changing, testing, and
documenting the rule base. This regularizes much of the complexity of the application
itself.
How can this technology be applied to the software-engineering task? In many ways: Such
systems can suggest interface rules, advise on testing strategies, remember bug-type
frequencies, and offer optimization hints. The most powerful contribution by expert systems will surely be to put at the service of the inexperienced programmer the experience and accumulated wisdom of the best programmers. This is no small contribution.

5. "Automatic" programming.
For almost 40 years, people have been anticipating and writing
about "automatic programming," or the generation of a program for solving a problem from a
statement of the problem specifications. Some today write as if they expect this technology to
provide the next breakthrough.
In practice this has succeeded only in narrow domains, such as sorting routines and integrators of differential equations, where the problems are readily characterized by relatively few parameters and where:
• There are many known methods of solution to provide a library of alternatives.
• Extensive analysis has led to explicit rules for selecting solution techniques, given problem parameters (sketched below).
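
A toy Python sketch of what such explicit selection rules look like in a narrow domain (the parameters and thresholds are invented): the "automatic programming" reduces to looking up a known method from a library, driven by a few characteristics of the problem.

    # Invented example: choosing a sorting method from a small library of known
    # solutions, driven by a few problem parameters -- exactly the situation in
    # which automatic selection of a technique works well.
    def choose_sort(n, nearly_sorted, stable_required):
        if n < 32:
            return "insertion sort"       # tiny inputs: low overhead wins
        if nearly_sorted:
            return "natural merge sort"   # exploits existing sorted runs
        if stable_required:
            return "merge sort"           # stability guaranteed
        return "quicksort"                # good general-purpose default

    print(choose_sort(n=10, nearly_sorted=False, stable_required=False))    # insertion sort
    print(choose_sort(n=100000, nearly_sorted=True, stable_required=True))  # natural merge sort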

6. Graphical programming.
A favorite subject for PhD dissertations in software engineering is
graphical, or visual, programming--the application of computer graphics to software design.
Yet the flowchart is a very poor abstraction of software structure. Indeed, it is best viewed as Burks, von Neumann, and Goldstine's attempt to provide a desperately needed high-level control language for their proposed computer.

7. Program verification.
Much of the effort in modern programming goes into testing and the
repair of bugs. Is there perhaps a silver bullet to be found by eliminating the errors at the
source, in the system-design phase? Can both productivity and product reliability be radically
enhanced by following the profoundly different strategy of proving designs correct before the
immense effort is poured into implementing and testing them? Program verification is a very powerful concept, and it will be very important for such things as secure operating-system kernels. The technology does not promise, however, to save labor. Even perfect program verification can only establish that a program meets its specification.
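
A tiny, invented Python illustration of that last caveat: the function below satisfies the stated specification (checked here with assertions rather than a proof), yet the specification itself is the wrong one if the user actually wanted the largest value.

    # Invented specification: "return a value no smaller than the first element".
    # The function meets this spec -- a verifier could prove it -- but the spec
    # may not be what was really wanted (say, the maximum of the whole list).
    def first_or_bigger(values):
        assert len(values) > 0            # precondition
        result = values[0]
        assert result >= values[0]        # postcondition from the specification
        return result

    print(first_or_bigger([3, 9, 5]))     # 3: correct w.r.t. the spec, not the intent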

8. Environments and tools.


Perhaps the biggest gain yet to be realized from programming environments is the use of
integrated database systems to keep track of the myriad details that must be recalled
accurately by the individual programmer and kept current for a group of collaborators on a
single system. Surely this work is worthwhile, and surely it will bear some fruit in both productivity and
reliability. But by its very nature, the return from now on must be marginal.
9. Workstations.
What gains are to be expected for the software art from the certain and rapid
increase in the power and memory capacity of the individual workstation? Well, how many
MIPS can one use fruitfully? The composition and editing of programs and documents is fully
supported by today's speeds. Compiling could stand a boost, but a factor of 10 in machine
speed would surely leave think-time the dominant activity in the programmer's day. Indeed, it
appears to be so now.
More powerful workstations we surely welcome. Magical enhancements from them we
cannot expect.
