current logic programming language GHC. Given the
middle-out strategy of ICOT, the choice of the kernel programming language was crucial to all other developments,
downward to the hardware architecture and upward to the
operating system, programming environment, and
applications. GHC, defined in 1984, has proved to be a
sound basis for KL1, the kernel language used for the
development of PIMOS, the PIM operating system, and
the PIM applications. Ueda describes the intellectual
history of GHC and its role in the Fifth Generation project.
Ken Kahn and his group at Xerox PARC have
pioneered several research directions in concurrent logic
programming. First, they offered a method for object-oriented programming within a high-level language on top
of a concurrent logic language. Second, they have
investigated the use of concurrent logic programming for
programming distributed open systems. Third, Vijay
Saraswat, a member of the group, has integrated the
approaches of concurrent logic programming and constraint logic programming in the cc framework for concurrent constraint programming. Fourth, Kahn and Saraswat
have invented a method for the complete visualization of
concurrent logic programs and their computations, which
might signal a breakthrough in visual programming. Kahn
describes the research carried out by his group and their
interaction with colleagues at ICOT and at the Weizmann
Institute.
Takashi Chikayama is responsible, more than any other
person, for ICOT's success in realizing innovative computer systems that were actually working and usable.
Chikayama was chief architect and chief programmer of
all the software systems developed at ICOT, including ESP
(ICOT's object-oriented version of Prolog), SIMPOS (the
operating system and programming environment of
ICOT's Prolog workstation, their main workhorse during
most of the project), KL1 (the kernel programming
language for PIM), and PIMOS (the PIM operating
system). Chikayama shares with us his experience in
designing, implementing, and using these software
systems.
Evan Tick's Ph.D. dissertation was on the design and
performance evaluation of Prolog architectures. Tick was
the first U.S. postdoctoral participant at ICOT, visiting
under the ICOT-NSF exchange agreement. Tick
investigated shared-memory implementations of both Prolog and FGHC while at ICOT. Following his postdoctoral
work at ICOT, Tick spent another year in Japan as a
faculty member of Tokyo University, still maintaining close
contacts with ICOT. Among the "outsiders," Tick is
perhaps the most intimately familiar with the inner workings of the Fifth Generation project, which he describes in
his contribution to this special section.
We conclude this special section with a short epilogue
which briefly assesses the material presented in the articles
and offers some general assessments of the Fifth Generation Computer Systems project. •
Ehud Shapiro, Department of Applied Mathematics and Computer Science,
The Weizmann Institute of Science, Rehovot 76100, Israel
David H.D. Warren, Department of Computer Science,
Bristol University, Bristol BS8 1TR, United Kingdom
LAUNCHING
THE NEW ERA
Kazuhiro Fuchi
ICOT RESEARCH CENTER
As you know, we have been
conducting a 10-year research
project on fifth-generation
computer systems. Today is the
tenth anniversary of the founding of our research center,
making it exactly 10 years since
our project actually started.
The first objective of this
international conference is to
show what we have accomplished in our research during
these 10 years.
Another objective of this conference is to offer an opportunity for researchers to present the results of advanced
research related to fifth-generation computer systems and
to exchange ideas. A variety of innovative studies, in addition to our own, are in progress in many parts of the world,
addressing the future of computers and information processing technologies.
I constantly use the phrase "Parallel Inference" as the key words to describe simply and precisely the technological goal of this project. Our hypothesis is that parallel
inference technology will provide the core for those new
technologies in the future--technologies that will be able
to go beyond the framework of conventional computer
technologies.
During these 10 years I have tried to explain this idea
whenever I have had the chance. One obvious reason I
have repeated the same thing so many times is that I
wish its importance to be recognized by the public. However, I have another, less obvious, reason.
When this project started, an exaggerated image o f
the project was engendered, which seems to persist even
now. For example, some people believed we were trying,
in this project, to solve in a mere 10 years some of the
most difficult problems in the field of artificial intelligence (AI), or to create a machine translation system
equipped with the same capabilities as humans.
In those days, we had to face criticism, based on that
false image, that it was a reckless project trying to tackle
impossible goals. Now we see criticism, from inside and
outside the country, that the project has failed because it
has been unable to realize those grand goals.
The reason such an image was born appears to have
something to do with FGCS'81--a conference we held
one year before the project began. At that conference we
discussed many different dreams and concepts. The
substance of those discussions was reported as sensational news all over the world.
A vision with such ambitious goals, however, can
never be materialized as a real project in its original
form. Even if a project is started in accordance with the
original form, it cannot be managed and operated within the framework of an effective research scheme. Actually,
our plans had become much more modest by the time
the project was launched.
For example, the development of application systems, such as a machine translation system, was removed from the list of goals. It is impossible to complete a highly intelligent system in 10 years. A preliminary stage is required to enhance basic studies and to reform computer technology itself. We decided to focus our efforts on these foundational tasks. Another reason is that, at that time in Japan, some private companies had already begun to develop pragmatic, low-level machine-translation systems independently and in competition with one
another.
Most of the research topics related to pattern recognition were also eliminated, because a national project called Pattern Information Processing had already been conducted by the Ministry of International Trade and Industry for 10 years. We also found that the stage of the research did not match our own.

We thus deliberately eliminated most research topics covered by Pattern Information Processing from the scope of our FGCS project. However, those topics themselves are very important and thus remain major topics for research. They may become a main theme of another national project of Japan in the future.
Does all this mean that FGCS'81 was deceptive? I do not think so. First, in those days a pessimistic outlook predominated concerning the future development of technological research. For example, there was a general feeling that research into artificial intelligence would be of no practical use. In that sort of situation, there was considerable value in maintaining a positive attitude toward the future of technological research, whether this meant 10 years or 50. I believe this was the very reason we received remarkable reactions, both positive and negative, from the public.
The second reason is that the key concept of Parallel Inference was presented in a clear-cut form at FGCS'81. Let me show you a diagram (Figure 1). This diagram is the one I used for my speech at FGCS'81, and is now a sort of "ancient document." Its draft was completed in 1980, but I had come up with the basic idea four years earlier. After discussing the concept with my colleagues for four years, I finally completed this diagram.
Here, you can clearly see our concept that our goal should be a Parallel Inference Machine. We wanted to create an inference machine, starting with a study of a variety of parallel architectures. For this purpose, research into a new language was necessary. We wanted to develop a fifth-generation kernel language, what we now call KL1. The diagram includes these hopes of ours.

Figure 1. Conceptual development diagram. (The figure itself is not reproduced here; it sketched a 10-year timeline running from a Prolog machine, LISP, Smalltalk, functional and logic programming, networks, chip-level machine modules comparable to the large-scale machines then in use, intelligent programming environments, and designing and prototype-building environments, together with software engineering, knowledge engineering, and AI research on higher-level symbol manipulation, planning, programming, problem solving, theorem proving, games, question answering, knowledge bases, and consultations, toward a fifth-generation core language and an intelligent parallel supermachine.)
The upper part of the diagram shows the research infrastructure. A personal inference machine or workstation for research purposes should be created, as well as a chip for the machine. We expected that the chip would be useful for our goal. The computer network should be consolidated to support the infrastructure. The software aspects are shown in the bottom part of the diagram. Starting with the study of software engineering and AI, we wanted to build a framework for high-level symbol processing, which should be used to achieve our goal. This is the concept I presented at the FGCS'81 conference.
I would appreciate it if you would compare this diagram with our plan and the results of the final stage of this project, when deputy director Kurozumi shows them to you later. I would like you to compare the original structure conceived 12 years ago with the present results of the project so you can appreciate what has been accomplished and criticize what is lacking or what was immature in the original idea.
Some people tend to make more of the conclusions drawn by a committee than the concepts and beliefs of an individual. It may sound a little beside the point, but I have heard there is an anecdote in the West that goes: "The horse designed by a committee will turn out to be a camel."
The preparatory committee for this project had a series of enthusiastic discussions for three years before the project's launching. I thought they were doing an exceptional job as a committee. Although the committee's work was great, however, I must say the plan became a camel. It seems that their enthusiasm created some extra humps as well. Let me say in passing that some people seem to adhere to those humps. I am surprised that there is still such a so-called bureaucratic view even among academic people and journalists.
This is not the first time I have expressed this opinion
about the goal of the project. I have, at least in Japanese,
been declaring it in public for the past 10 years. I think I
could have been discharged at any time had my opinion
been inappropriate.
As the person in charge of this project, I have pushed forward with the lines of Parallel Inference based on my own beliefs. Although I have been criticized as still being too ambitious, I have always been prepared to take responsibility for that.
Since the project is a national project, it goes without saying that it should not be controlled by one person. I have had many discussions with a variety of people for more than 10 years. Fortunately, the idea of the project has not remained just my belief but has become a common belief shared by the many researchers and research leaders involved in the project.
Assuming that this project has proved to be successful, as I believe it has, this shared belief is probably the biggest reason for its success. For a research project to be successful, it needs to be favored by good external conditions. But the most important thing is that the research group involved has a common belief and a common will to reach its goals. I have been very fortunate in realizing and experiencing this over the past 10 years.
So much for introductory remarks. I wish to outline, in terms of Parallel Inference, the results of our work conducted over these 10 years. I believe the remarkable feature of this project is that it focused on one language and, based on that language, experimented with the development of hardware and software on a large scale.
From the beginning, we envisaged taking logic programming and giving it a role as a link that connects highly parallel machine architecture and the problems concerning applications and software. Our mission was to find a programming language for Parallel Inference.
A research group led by deputy director Furukawa was responsible for this work. As a result of their efforts, Ueda came up with a language model, GHC, at the beginning of the intermediate stage of the project. The two main precursors of this language were Parlog and Concurrent Prolog. He enhanced and simplified them to make this model. Based on GHC, Chikayama designed a programming language called KL1.
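To give a flavor of the guarded-clause style that GHC introduced, here is a minimal sketch (the standard stream-merge example from the GHC literature, not a program taken from this article): each clause separates a guard from a body with the commitment operator "|", and bindings made in the body communicate results to other goals running concurrently.

    % Nondeterministic merge of two streams in GHC guarded clauses.
    % Head matching and the guard (before '|') decide commitment;
    % the body (after '|') binds the output stream incrementally.
    merge([X|Xs], Ys, Zs) :- true | Zs = [X|Zs1], merge(Xs, Ys, Zs1).
    merge(Xs, [Y|Ys], Zs) :- true | Zs = [Y|Zs1], merge(Xs, Ys, Zs1).
    merge([], Ys, Zs) :- true | Zs = Ys.
    merge(Xs, [], Zs) :- true | Zs = Xs.

A goal such as merge(In1, In2, Out) can run alongside the processes producing In1 and In2, committing to whichever clause has data available; this is the programming style on which KL1 builds.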
KL1, a language derived from the logic programming concept, provided a basis for the latter half of our project. Thus, all of our research plans in the final stage were integrated under a single language, KL1.
For example, we developed a hardware system, the Multi-PSI, at the end of the intermediate stage, and demonstrated it at FGCS'88. After the conference we made copies and have used them as the infrastructure for software research.
In the final stage we made a few prototypes of PIM, the Parallel Inference Machine that has been one of our final research goals on the hardware side. These prototypes are being demonstrated at this conference. Each prototype has a different architecture in its interconnection network and so forth, and the architecture itself is a subject of research. Viewed from the outside, however, all of them are KL1 machines.
Division chief Uchida and laboratory chief Taki will show you details on PIM later. What I want to emphasize here is that all of these prototypes are designed, down to the level of internal chips, with the assumption that KL1, a language that could be categorized as a very high-level language, is a machine language.
On the software side as well, our research topics were integrated under the KL1 language. All the application software, as well as the basic software such as operating systems, were to be written in KL1.
We demonstrated an operating system called PIMOS at FGCS'88, which was the first operating system software written in KL1. It was immature at that time, but has been improved since then. The full-fledged version of PIMOS now securely backs the demonstrations being shown at this conference.
Details will later be given by laboratory chief Chikayama, but I wish to emphasize that not only have we succeeded in writing software as complicated and huge as an operating system entirely in KL1, but we have also proved through our own experience that KL1 is much more appropriate than conventional languages
for writing system software such as operating systems.
One of the major challenges in the final stage was to demonstrate that KL1 is effective not only for basic software, such as operating systems and language implementations, but also for a variety of applications. As laboratory chief Nitta will report later, we have been able to demonstrate the effectiveness of KL1 for various applications including LSI-CAD, genetic analysis, and legal reasoning. These application systems address issues in the real world and have a virtually practical scale. But, again, what I wish to emphasize here is that the objective of those developments has been to demonstrate the effectiveness of Parallel Inference.
In fact, it was in the initial stage of our project that we first tried the approach of developing a project around one particular language. The technology was at the level of sequential processing, and we adopted ESP, an expanded version of Prolog, as a basis. Assuming that ESP could play the role of KL0, our kernel language for sequential processing, a Personal Sequential Inference machine, called PSI, was designed as hardware. We decided to use the PSI machine as a workstation for our research. Some 500 PSIs, including modified versions, have so far been produced and used in the project.
SIMPOS, the operating system designed for PSI, is written solely in ESP. In those days, this was one of the largest programs written in a logic programming language. Up to the intermediate stage of the project, we used PSI and SIMPOS as the infrastructure to conduct research on expert systems and natural language processing.
This kind of approach is indeed the dream of researchers, but some of you may be skeptical about it. Our project, though conducted on a large scale, is still considered basic research. Accordingly, it is supposed to be conducted in a free, unrestrained atmosphere to bring about innovative results. You may wonder whether the policy of centering around one particular language restrains the freedom and diversity of research.
But this policy is also based on my, or our, philosophy. I believe research is a process of assuming and verifying hypotheses. If this is true, the hypotheses must be as pure and clear as possible. If not, you cannot be sure of what you are trying to verify.
A practical system itself could include compromise or,
to put it differently, flexibility to accommodate various
needs. However, in a research project, the hypotheses
must be clear and verifiable. Compromises and the like
could be considered after basic research results have
been obtained. This has been my policy from the very
beginning, and that is the reason I took a rather controversial or provocative approach.
We had a strong belief that our hypothesis of focusing on Parallel Inference and KL1 had sufficient scope for a world of rich and free research. Even if the hypothesis acted as a constraint, we believed it would act as a creative constraint. I would be a liar if I were to say there was no resistance among our researchers when we decided on the above policy. KL1 and parallel processing was a completely new world to everyone. It required a lot of
courage to plunge headlong into this new world. But
once the psychological barrier was overcome, the researchers set out to create new parallel programming
techniques one after another.
People may not feel like using new programming languages such as KL1. Using established languages and systems only, or a kind of conservatism, seems to be the major trend today. In order to make a breakthrough into the future, however, we need a challenging and adventurous spirit. I think we have carried out our experiment with such a spirit throughout the 10-year project.
Among the many other results we obtained in the final stage was a fast theorem-proving system, or prover. Details will be given in laboratory chief Hasegawa's report, but I think this research will lead to the resurrection of theorem-proving research. Conventionally, research into theorem proving by computers has been criticized by many mathematicians who insisted that only toy examples could be dealt with. However, very recently, we were able to solve a problem labeled by mathematicians as an "open problem" using our prover, as a result of collaborative research with the Australian National University. The applications of our prover are not limited to mathematical theorem proving; it is also being used as the inference engine of our legal reasoning system. Thus, our prover is being used in the mathematics world on one hand, and the legal world on the other.
The research on programming languages has not ended with KL1. For example, a constraint logic programming language called GDCC has been developed as a higher-level language than KL1. We also have a language called Quixote.
From the beginning of this project, I have advocated the idea of integrating three types of languages (logic, functional, and object-oriented) and of integrating the worlds of programming and of databases. This idea has been materialized in the Quixote language; it can be called a deductive object-oriented database language.
Another language, CIL, was developed by Mukai in the study of natural language processing. CIL is a semantic representation language designed to be able to deal with situation theory. Quixote incorporates CIL in a natural form and therefore has the characteristics of a semantic representation language. As a whole, it shows one possible future form of knowledge representation languages. More details on Quixote, along with the development of a distributed parallel database management system, Kappa-P, will be given by laboratory chief Yokota.
Thus far I have outlined, albeit briefly, the final results of our 10-year project. Recalling what I envisaged 10 years ago and what I have dreamed and hoped would materialize for 15 years, I believe we have achieved as much as or more than I expected, and I am quite satisfied.
Naturally, a national project is not performed for mere self-satisfaction. The original goal of this project was to create the core of next-generation computer technologies. Various elemental technologies are needed for future computers and information processing. Although it is impossible for this project alone to provide all of those technologies, we are proud to be able to say we have created the core part, or at least provided an instance of it.
The results of this project, however, cannot be commercialized as soon as the project is finished, which is exactly why it was conducted as a national project. I estimate that it will take us another five years, which could be called a period for the maturation of the technologies, for our results to actually take root in society. I had this prospect in mind when this project started 10 years ago, and have been declaring it in public right up through today. Now the project is nearing its end, but my idea is still the same.
There is often a gap of 10 or 20 years between the basic research stage of a technology and the day it appears in the business world. Good examples are Unix, C, and RISC, which has become popular in the current trend toward downsizing. These technologies appear to be up-to-date in the business world, but research has been conducted on them for many years. The frank opinion of the researchers involved will be that industry has finally caught up with their research.
There is thus a substantial time lag between basic research and commercialization. Our project, from its very outset, set its eye on technologies for the far distant future. Today, the movement toward parallel computers is gaining momentum worldwide as a technology leading into the future. However, skepticism was dominant 10 years ago. The situation was not very different even five years ago. When we tried to shift our focus to parallel processing after the initial stage of the project, there was a strong opinion that a parallel computer was not possible and that we should give it up and be satisfied with the successful results obtained in the initial stage.
In spite of the remaining skepticism about parallel computers, the trend seems to be changing drastically. Thanks to constant progress in semiconductor technology, it is now becoming easier to connect five hundred, a thousand, or even more processor chips, as far as hardware technology is concerned.
Currently, the parallel computers that most people
are interested in are supercomputers for scientific computation. The ideas there still tend to be vague regarding the software aspects. Nevertheless, a new age is
dawning.
The software problem might not be too serious as long as scientific computation deals only with simple, scaled-up matrix calculation, but it will certainly become serious in the future. Now suppose this problem has been solved and we can nicely deal with all the aspects of large-scale problems with complicated overall structures. Then we would have something similar to a general-purpose capability that is not limited to scientific computation. We might then be able to replace the mainframe computers we are using now.
The preceding scenario is one possibility leading to a new type of mainframe computer in the future. One could start by connecting a number of processor chips and face enormous difficulties with parallel software. However, one could alternatively start by considering what technologies will be required in the future, and I suspect that the answer should be the Parallel Inference technology we have been pursuing.
I am not going to press the preceding view. However, I anticipate that if anybody starts research without knowing our ideas, or under a philosophy that he or she believes is quite different from ours, after many twists and turns that person will reach more or less the same concept as ours, possibly with small differences such as different terminology. In other words, my opinion is that there are not so many different essential technologies.
It may be valuable for researchers to struggle through a process of research independent from what has already been done, finally to find they have followed the same course as somebody else. But a more efficient approach would be to build on what has been done in this FGCS project and devote energy to moving forward from that point. I believe the results of this project will provide important insights for researchers who want to pursue general-purpose parallel computers.
This project will be finished at the end of this year. As for maturation of the Parallel Inference technology, I think we will need a new form of research activities. There is a concept called "distributed cooperative computing" in the field of computation models. I expect that, in a similar spirit, the seeds generated in this project will spread both inside and outside the country and sprout in many different parts of the world.
For this to be realized, the results of this project must be freely accessible and available worldwide. In the software area, for example, this means it is essential to disclose all our accomplishments, including the source code, and to make them "international common public assets."
MITI minister Watanabe and the director general of the Bureau announced the policy that the results of our project could be utilized throughout the world. Enormous effort must have been made to formulate such a policy. I find it very impressive.
We have tried to encourage international collaboration for 10 years in this project. Consequently, we have enjoyed opportunities to exchange ideas with many researchers
involved in advanced studies in various parts of the world. They have given us much support and cooperation, without which this project could not have been completed.
In that regard, and also considering that this is a Japanese national project that aims to make a contribution, though it may only be small, toward the future of the human race, we believe we are responsible for leaving our research accomplishments as a legacy to future generations and to the international community in a most suitable form. This is now realized, and I believe it is an important springboard for the future.
Although this project is about to end, the end is just another starting point. The advancement of computers and information processing technologies is closely related to the future of human society. Social thought, ideologies, and social systems that fail to recognize its significance will perish, as we have seen in recent world history. We must advance into a new age now. To launch a new age, I fervently hope that the circle of those who share our passion for a bright future will continue to expand. •
CR Categories and Subject Descriptors: K.2 [Computing
Milieux]: History of Computing
General Terms: Design, Experimentation
Additional Key Words and Phrases: Fifth Generation Computer Systems project
About the Authors:
KAZUHIRO FUCHI is the director of the Research Center at
the Institute for New Generation Computer Technology
(ICOT). Author's Present Address: Institute for New Generation Computer Technology, 4-28, Mita 1-chome, Minato-ku,
Tokyo 108, Japan.
Robert Kowalski
IMPERIAL COLLEGE
The initial announcement of the FGCS project caused a great deal of confusion and controversy throughout the world. Critics of the project were uncertain about its scope, which ranged from AI applications to parallel computer architectures; and they were critical of its methods, which used logic programming (LP) to bridge the gap between applications and machines. They also criticized the lack of attention given to mainstream computing and software engineering matters.
Invitations for international collaboration were regarded with suspicion, because it was believed that such collaboration would unequally benefit Japan. To a large extent, MCC in the U.S., Alvey in Great Britain, and ESPRIT in Europe were set up to compete with Japan. These research programs promoted both FGCS and mainstream computing technologies and paid relatively little attention to LP compared with the FGCS project. The European Computer Research Centre (ECRC) in Munich and the Swedish Institute for Computer Science in Stockholm (SICS), on the other hand, pursued the LP approach, but on a much more modest scale.
Announcement of FGCS and the British
Response
I began to receive draft outlines of the FGCS project in mid-1981. Even at this stage it was clear that LP was destined to play an important role. Having advocated LP as a unifying foundation for computing, I was delighted with the LP focus of the FGCS project.
Like many others, however, I was worried that the
project might be too ambitious and rely too heavily on
research breakthroughs that could not be foreseen in
advance. The field of AI, in particular, was notorious for raising great expectations and producing disappointing results. Having recently supervised a Ph.D. dissertation by George Pollard on the topic of parallel computer architectures for LP, I was enthusiastic about the long-term prospects of such architectures, but apprehensive about realizing those prospects within the 10-year time scale of the FGCS project. On balance, however, the FGCS strategy of setting ambitious goals seemed preferable to the more conservative strategy of aiming at safe targets.
Although I was invited to the October 1981 FGCS Conference, which presented the project plans in detail, I was unable to attend, because I was already committed to participate in the New Hampshire Functional Programming (FP) Conference being held at the same time. My colleagues, Keith Clark and Steve Gregory in the LP group at Imperial College (IC), also attended the FP Conference, where they presented their paper on the relational language. By coincidence, their work on the relational language eventually led to the concurrent LP language, GHC, which was later developed at ICOT, and which served as the software foundation of the FGCS project.
Following the FGCS conference, the British delegation, sent to Tokyo to discuss the possibility of collaborating with the FGCS project, met with other parties in the U.K. to prepare a draft response. A report was presented to a general meeting in London, which I was invited to attend. The prominent role planned for LP in FGCS was noted with skepticism.
The result of those meetings was that a committee, chaired by John Alvey, was created to formulate a U.K. program of research in information technology (IT). The committee consulted widely, and research groups throughout the country lobbied the committee to promote support for their work. There was widespread concern, especially among academic groups, that the Alvey program might follow the FGCS lead and promote AI and LP to the detriment of other research areas.
At that time the LP group at IC, although small, was probably the largest and most active LP group in the world. As head of the group, I had a responsibility to argue the case for LP. To begin with, my arguments seemed to have little positive effect. When the Alvey program finally started, LP received hardly a mention in the plan of work. More generally, declarative languages (LP and FP) and their associated parallel computer architectures were also largely overlooked.
To remedy this latter oversight, John Darlington, also at IC, and I were invited by the Alvey directorate to edit a document on behalf of the U.K. LP and FP research communities to put the case for declarative languages and their parallel computer architectures. The case was accepted, and a declarative-systems architecture initiative was added to the Alvey program. However, the initiative became dominated by FP, and the planned LP/FP collaboration never materialized. Equally frustrating was the exclusion of LP from the formal methods activities within Alvey, especially since so much of the work in our group at IC was concerned with the development of formal methods for verifying and transforming logic programs.
Although LP was not singled out for special support, there was enough general funding available to keep me and my colleagues busy with negotiating grant applications (and to distract me from doing research). I also continued to argue the case for LP, and eventually in 1984 the Alvey directorate launched an LP initiative. By November 1985, the initiative had awarded a total of £2.2 million for 10 projects involving eight industrial research organizations and eight universities.
Together with research grants which were awarded before the LP initiative, the Alvey program supported 13 research grants for our group, involving a total expenditure of £1.5 million. At its peak in 1987 the LP group at IC contained approximately 50 researchers, including Ph.D. students. Those grants funded LP-oriented work in such diverse areas as deductive databases, legal reasoning, human-computer interaction, intelligent front ends, logic-programming environments, and implementations and applications of the concurrent LP language, Parlog.
Thus the LP group at IC was for a time relatively well supported. But, because its work was divided into so many separate projects, mostly of three years' duration, and many with other collaborators, the work was fragmented and unfocused. Moreover, the group remained isolated within the Alvey program as a whole.
ESPRIT
Most of the funding under the Alvey program came to an end around 1988. Researchers in the UK, including those in our group at IC, increasingly looked to the ESPRIT program in Europe to continue support for their work. For me, ESPRIT had the advantage over Alvey that work on LP was held in higher regard. But it had the disadvantage that it involved larger collaborations that were difficult to organize and difficult to manage.
My involvement with ESPRIT was primarily with the basic research program, first as coordinator of the computational logic (Compulog) Action, which started in 1989, and then as the initial coordinator of the Compulog Network of Excellence. Both of these were concerned with developing extensions of LP using enhancements from the fields of computer algebra, database systems, artificial intelligence, and mathematical logic. In 1991 Gigina Aiello in Rome took over my responsibilities as coordinator of the Network, and in 1992 Krzysztof Apt in Amsterdam took over as coordinator of the Action. By 1992 the Network contained over 60 member nodes and associated nodes throughout Europe.
Contacts with Japan
My frustrations with the Alvey program were exacerbated by my early contacts with the FGCS project and by my resulting awareness of the importance of LP to FGCS. These contacts came about both as the result of visits made by participants in the FGCS project to our group at IC and as a result of my visits to Japan.
I made my first visit to Japan in November 1982, on the initiative of the British Council in Tokyo, and my second visit in June 1984, as part of a small SERC delegation. These visits gave me an insight into the FGCS work beginning at ICOT, ETL, the universities, and some of the major Japanese computer manufacturers.
As a consequence of these early contacts, several Japanese researchers came to work in our group: Yuji Matsumoto and Taisuke Sato from ETL, supported by the British Council; Takeshi Chusho and Hirohide Haga from Hitachi; and Ken Satoh from Fujitsu. These visitors came for various periods ranging from one month to one year. Many visitors also came for shorter periods.
Partly because of my heavy commitments, first to Alvey and later to ESPRIT, I had relatively little contact with Japan during the period 1985 to 1990. During the same period, however, members of the Parlog group made a number of visits to ICOT. Keith Clark and Steve Gregory, in particular, both visited for three weeks in 1983. Keith made several other visits and participated in the FGCS conferences held in 1984 and 1988. Jim Crammond, Andrew Davison, and Ian Foster also visited ICOT. In addition, the Parlog group had a small grant from Fujitsu, and the LP group as a whole had a similar grant from Hitachi.
My contacts with Japan increased significantly during the 1990-92 period. In the summer of 1990, I was invited by ICOT to give a talk at the Japanese LP Conference and to stay for a one-week visit. In addition to the talks I gave about legal reasoning, temporal reasoning, metalevel reasoning, and abduction, I interacted with the groups working on knowledge representation. I was interested in the work on theorem proving (TP), but was sceptical about the need for full first-order logic and general TP problem-solving methods. My own work, partly motivated by legal-reasoning applications, concentrated instead on developing extensions of LP. It was
a challenge, therefore, to consider whether the ICOT applications of more general-purpose TP could be reformulated naturally in such extended LP form.
This challenge helped me later to discover a duality
between knowledge representation in LP form, using
backward reasoning with if-halves of definitions, and
knowledge representation in disjunctive form, using forward reasoning with only-if halves of definitions [6]. Interestingly, the model generation theorem prover
(MGTP) developed at ICOT, if given the disjunctive
form, would simulate execution of the LP form. I am
currently investigating whether other TP strategies for
reasoning with the disjunctive form can simulate generalized constraint propagation [7] as a method of executing constraint LP.
I was also interested in the genetic-analysis and legal
reasoning applications being developed at ICOT. It
seemed to me that the genetic analysis applications were
of great scientific interest and social importance. Moreover, ICOT's logic-based technology, combining the
functionality of relational databases, rule bases, recursive data structures, and parallelism, seemed ideally
suited for such applications.
At that time, ICOT's work on legal reasoning focused
primarily on case-based reasoning, and much of the
emphasis was on speeding up execution by means of
parallelism. Two years later, the work, presented at
FGCS'92, had progressed significantly, integrating rule-based and case-based reasoning and employing a sophisticated representation for event-based temporal reasoning.
ICOT's work on legal reasoning was undertaken in
collaboration with the Japanese legal expert systems association (LESA) headed by Hajime Yoshino. I attended
a meeting of LESA during my visit, and since then my
colleague, Marek Sergot, and I have continued to interact with LESA on a small international project concerned with formalizing the United Nations' Convention
on International Contracts.
This same visit to ICOT coincided with the conclusion of discussions with Fujitsu Labs about a five-year project for work on abductive LP, which started in October 1990. The following year, in November 1991, Tony
Kakas and I visited Fujitsu Labs to report on the results
of our first year of work. We also made a short visit to
ICOT, where we learned more about the MGTP, about
Katsumi Inoue's use of MGTP to implement default reasoning (via the generation of stable models for negation
as failure), and about the application of these techniques
to legal reasoning.
In 1991, the program committee of FGCS'92 invited
me to chair the final session of the conference, a panel
with the title: "Will the Fifth Generation Technologies be
a Springboard for Computing in the 21st Century?". I
was pleased to accept the invitation, but I was also apprehensive about undertaking such a responsibility.
The panelists were Hervé Gallaire, Ross Overbeek, Peter Wegner, Koichi Furukawa, and Shunichi Uchida. Peter Wegner, an outspoken proponent of object-oriented programming, was chosen to be the main critic.
In fact, all of the panelists and I were aware that the
FGCS technologies had made comparatively little impact
on the world of computing during the course of the
FGCS project. During the months preceding the conference, I thought about what, if anything, had gone
wrong, and whether the problems that had been encountered were inherently unsolvable or only short-term
obstacles along the way.
What Went Wrong?
I shall consider the problems that have arisen in the
three main areas of FGCS, namely AI applications, LP
software, and parallel-computer architectures, in turn.
Perhaps it is the area of AI application which has been
the most visible part of the FGCS project. Not only were
the original FGCS targets for AI exceedingly ambitious,
but they were considerably exaggerated by some commentators, most notably perhaps by Feigenbaum and
McCorduck in their book The Fifth Generation [3].
By comparison with the expectations that were raised,
worldwide progress in AI has been disappointingly slow.
Expert systems and natural language interfaces, in particular, have failed to capture a significant share of the
IT market. Moreover, many of the AI applications
which have been successful have ultimately been implemented in C and rely significantly on integration with
non-AI software written in C and other imperative languages.
The FGCS project has suffered from the resulting downturn of interest in AI. In later sections of this article, concerned with legal reasoning and default reasoning, I will argue both that progress in AI has been
downturn of interest in AI. In later sections of this article, concerned with legal reasoning and default reasoning, I will argue both that progress in AI has been
greater than generally appreciated and that there are
good reasons to expect steady progress in the future.
Despite the disappointments with AI, it is probably
ICOT's choice of LP as the basis for the FGCS software
that is regarded by many critics as ICOT's biggest mistake. There are perhaps four main reasons held for this
belief:
• LP is an AI language paradigm of limited applicability
• LP is too inefficient
• Concurrent LP is too remote from the forms of LP
needed for user-level programming
• LP cannot compete with the world-wide trend to standardize on programming in C
LP is an AI Language Paradigm. LP has suffered twofold from its popular image as a language suited primarily for AI applications. It has suffered both because AI itself has experienced a decline of interest and because, as a consequence of its associations with AI, LP is not normally viewed as being suitable for non-AI applications.
The contrary view is that LP is a language paradigm
of wide applicability. Indeed, one simple characterization of LP is that it can be regarded as a generalization of
both FP and relational databases, neither one of which is
normally regarded as being restricted to AI applications.
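A small illustration of that characterization (a sketch in standard Prolog, not an example from this article): the same relation can be queried like a database table, run forwards, or run in reverse.

    % ancestor/2 defined over a parent/2 relation.
    parent(tom, bob).
    parent(bob, ann).
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
    % Database-style query:   ?- ancestor(tom, Who).   % Who = bob ; Who = ann
    % Inverse query:          ?- ancestor(Who, ann).   % Who = bob ; Who = tom

The fact base and the query-by-pattern style are exactly those of a relational database, while the recursive second clause goes beyond what a relational query language expresses directly.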
The AI image of LP is probably more a reflection of
sociological considerations than of technical substance. It
certainly reflects my own experience with the Alvey program, where the LP research community was isolated
from both the FP and software engineering research
communities.
The titles of the technical sessions of the First International Applications of Prolog Conference held in London in 1992 give a more objective indication of the range of applications of the most popular LP language, Prolog:

• CAD and Electronic Diagnosis
• Planning
• Virtual Languages
• Natural Languages and Databases
• Diagnostic and Expert Systems
• Advisory Systems
• Constraint Systems
• Analysis
• Planning in Manufacturing
• Information Systems

Many of these applications combine AI with more conventional computing techniques.
In addition to Prolog, several other commercial variants of LP have begun to become available. These include constraint LP languages such as CHIP and Prolog
III, and concurrent LP languages such as Strand and
PCN. Deductive database systems based on LP are also
beginning to emerge.
ICOT itself has focused more on developing the underlying enabling technologies for applications than on constructing the applications themselves. In the course of developing this technology it has employed its LP-based software primarily for systems-programming purposes. In particular, its use of the concurrent LP languages GHC and KL1 to implement PIMOS, the operating system for the parallel inference machine, PIM, has been a major achievement.
LP is Too Inefficient. This seemingly straightforward statement is ambiguous. Does it mean that conventional algorithms written in LP languages run inefficiently in time or space? Or does it mean that program specifications run orders of magnitude more inefficiently than well-designed algorithms?

The first problem is partly a nonproblem. For some applications LP implementations are actually more efficient than implementations written in other languages. For other applications, such as scheduling, for example, which need to run only occasionally, efficiency is not the main consideration. What matters is the increased programmer productivity that LP can provide.
In any case, this kind of 'low-level' inefficiency can be, and is being, dealt with. The Aquarius compiler [9] and ICOT's PIM are among the best current examples of what can be achieved on sequential and parallel implementations, respectively.
The second problem is not only more difficult, but has received correspondingly less attention. The possibility of writing high-level program specifications, without concern for low-level details, is a major reason many people are first attracted to Prolog. However, many of these same people become disillusioned when those specifications loop, even on the simplest examples, or when they run with spectacular inefficiency. Few enthusiasts persist to achieve the level of expertise, exemplified in the book by Richard O'Keefe [8], required to write programs that are both high level and efficient. When they do, they generally find that Prolog is a superior language for many applications.
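The point can be made concrete with a tiny sketch in standard Prolog (not an example from this article; permutation/2 is assumed from a standard list library). The first definition is a near-literal specification of sorting and runs in factorial time; the second is the kind of program an experienced Prolog programmer writes instead.

    % 'Specification': S is a sorted permutation of L.
    % Declaratively correct, but generate-and-test is hopeless beyond short lists.
    spec_sort(L, S) :- permutation(L, S), sorted(S).
    sorted([]).
    sorted([_]).
    sorted([X, Y | R]) :- X =< Y, sorted([Y | R]).

    % A practical alternative: insertion sort.
    ins_sort([], []).
    ins_sort([X | Xs], S) :- ins_sort(Xs, S1), insert(X, S1, S).
    insert(X, [], [X]).
    insert(X, [Y | Ys], [X, Y | Ys]) :- X =< Y.
    insert(X, [Y | Ys], [Y | Zs]) :- X > Y, insert(X, Ys, Zs).

Both definitions have clear declarative readings; the craft O'Keefe describes lies in writing programs of the second kind without giving up the clarity of the first.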
Some critics believe that this second efficiency problem results from the LP community not paying sufficient attention to software engineering issues. In reality, however, many LP researchers have worked on the provision of tools and methodologies for developing efficient programs from inefficient programs and specifications. Indeed, this has been a major research topic in our group at IC, in the Compulog project, and in Japan. Perhaps the main reason this work has had little practical impact so far is that it applies almost entirely to pure logic programs and not to Prolog programs that make use of impure, nonlogical features. Either the theoretical work needs to be extended to the more practical Prolog case, or a much higher priority needs to be given to developing purer LP languages and purer styles of programming in Prolog. I believe it is the latter alternative that is the more promising direction for future work.
Concurrent LP is Too Remote from Other Forms of LP. This is possibly the biggest criticism of the ICOT approach, coming from members of the LP community itself. It is a criticism borne out by the gap which has emerged in our own work at IC between the concurrent form of LP used for systems programming in the Parlog group and the forms of LP used for AI, databases, and other applications in the rest of the LP group. Moreover, when the Parlog group has concerned itself with high-level knowledge representation, it has concentrated on providing object-oriented features and on practical matters of combining Parlog with Prolog. Thus the gap that developed between the different forms of LP investigated in our group at IC seemed to mirror a similar gap that also occurred at ICOT.
Indeed, it can be argued that the logical basis of concurrent LP is closer to that of mainstream process models of concurrency, such as CCS and CSP, than it is to standard LP. From this point of view, the historical basis of concurrent LP in standard LP might be regarded as only a historical accident.
There are counterarguments, however, that seek to reconcile concurrent LP with standard LP and standard logic. Currently, the most popular of these is the proposal to use linear logic as an alternative foundation for concurrent LP. ICOT has also made a number of promising proposals. The most original of these is to implement MGTP in KL1 and to implement higher-level forms of logic and LP in MGTP. Two other promising approaches are the Andorra computational model of David H.D. Warren and Seif Haridi and the concurrent constraint LP model of Michael Maher and Vijay Saraswat.
It is too early to foresee the outcome of these investigations. However, no matter what the result, it seems
reasonable to expect that the affinity of concurrent LP both to standard LP and to mainstream models of concurrency will prove to be an advantage rather than a
disadvantage.
LP Cannot Compete with C. The FGCS focus on LP has had the misfortune to conflict with the growing worldwide trend to standardize on Unix as an operating system and C (and extensions such as C++) as a programming language. C has many virtues, but perhaps its most important one is simply that more and more programmers are using it. Like the QWERTY keyboard and the VHS
video system, C is not necessarily the best technology
available for its purpose, but it has virtually become the
standard.
Thus LP, in order to succeed, needs to integrate as
smoothly as possible with other systems written in other
languages. For this purpose, the Prolog company, Quintus, for example, has developed its own macrolanguage
with a C-like syntax that compiles into Prolog. In a similar spirit, Chandy and Kesselman [1] have developed a
language, CC++, that is partly inspired by the conceptual model of concurrent LP but is an extension of C.
These and other adaptations of the LP ideal might
offend the purist, but they may be necessary if LP is to
integrate successfully into the real world. Moreover, they
may only prove that the procedural interpretation of
logic, which is the foundation of LP, has greater applicability than is normally supposed. Not only can logical
syntax be interpreted and executed as procedures, as is
usual in most implementations of LP, but suitably well-structured procedural syntax can also be interpreted as
declarative statements of logic.
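As a reminder of what the procedural interpretation means in practice, here is a sketch using the familiar append relation (standard Prolog, not an example from this article; the predicate is renamed app/3 to avoid clashing with the library version): the same two clauses support a declarative reading as a statement about lists and several procedural readings as different procedures.

    % Declarative reading: [] appended to Ys is Ys; and [X|Xs] appended
    % to Ys is [X|Zs] whenever Xs appended to Ys is Zs.
    app([], Ys, Ys).
    app([X | Xs], Ys, [X | Zs]) :- app(Xs, Ys, Zs).
    % Procedural readings of the same clauses:
    %   ?- app([1,2], [3], Zs).    % concatenation: Zs = [1,2,3]
    %   ?- app(Xs, Ys, [1,2,3]).   % splitting: all partitions, on backtracking

It is the reverse direction, reading well-structured procedural code as logic, that the adaptations mentioned above rely on.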
Problems with Parallel Inference Machines. In addition to these criticisms of LP, the ICOT development of specialized hardware, first the personal sequential inference machine and then the parallel inference machine (PIM), has also been judged to be a mistake. Not only do specialized machines to support LP go against the trend to standardize on Unix and C, but they may not even bring about an improvement of efficiency. The failure of LISP machines is often cited as an analogous example, in which the gain in efficiency obtained by specialized hardware has been offset by the more rapid progress of implementations on increasingly efficient general-purpose machines.
Undoubtedly, ICOT's decision to base its software on
specialized hardware has restricted the accessibility of
ICOT's results. It is also a major reason why the project
has been extended for a further two years, to reimplement the software on standard machines, so that it can
be made more widely available.
But the view that the FGCS machines are special purpose is mistaken. ICOT has discovered, as have other groups, such as the FP group at IC, that the techniques needed to implement declarative languages are very similar to those needed to support general-purpose computation. As a result, ICOT has been able to claim that PIM is actually general purpose. Moreover, the concurrent LP machine language of PIM can also be viewed as supporting both a mainstream model of concurrency and a mainstream approach to object-oriented programming. Viewed in this way, PIM and its operating system
PIMOS are the first large-scale implementations of such
general-purpose mainstream approaches to the use of
concurrency to harness the potential of parallel computation. As a consequence, it is quite possible that the
FGCS project has attained a worldwide lead in this area.
The Longer-Term Outlook
The FGCS project has taken place during a time of
growing disillusionment with innovation and of increasing emphasis on applications, interfaces, and the streamlining and consolidation of existing technologies. The move to standardize on Unix and C and the rapid growth of graphics and networking exemplify these
trends.
It is difficult to predict how the growing influence of
C will affect the development of higher-level languages
in the longer term. Perhaps concurrent LP-inspired languages on parallel machines will one day displace C on
sequential machines. Or perhaps it will be adequate simply to standardize on the interfaces between programs
written in higher-level (and logic-based) languages and
programs written in C and other imperative languages.
But no matter what the future of present-day systems-programming languages, computer users must ultimately be allowed to communicate with computers in
high-level, human-oriented terms. My own investigations of the formalization of legislation [5] have convinced me that LP provides the best basis for developing
such human-centered, computer-intelligible languages.
More than any other form of communication in natural language, legislation aims to regulate virtually every
aspect of human behavior. As a result, laws can be regarded as wide-spectrum programs formulated in a stylized form of natural language to be executed by people.
For this purpose, the language of law needs to be highly
structured and as precise and unambiguous as possible,
so that it can be understood the way it was intended, and
so that, in a given environment, execution by one person
gives the same results as execution by another.
Such precision needs to be combined judiciously with
"underspecification," so that law can be flexible and can
adapt to changing environments. These seemingly conflicting requirements are reconciled in law by defining
higher-level concepts in terms of lower-level concepts,
which are then either defined by still lower-level concepts or left undefined. The undefined concepts either
have generally agreed common-sense meanings or else
are deliberately vague, so that their meanings can be
clarified after the law has been enacted. This structure
of legal language can be reflected in more formal languages by combining precise definitions with undefined
terms.
Although legal language is normally very complex, its
resemblance to various computing language paradigms
can readily be ascertained, and its affinity to logic and
especially to LP is particularly apparent. This affinity
includes not only the obvious representation of rules by
means of conclusions and conditions, but even the representation of exceptions by means of LP's negation as
failure. Nonetheless, it is also clear that to be more like
legal language LP needs to be extended, for example, to
include some form of explicit negation in addition to
negation as failure, to amalgamate metalanguage with
object language, and to incorporate integrity constraints.
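For illustration only, with hypothetical predicates rather than examples drawn from [5], a legal rule and one of its exceptions might be sketched as a logic program, the exception being expressed through negation as failure:

   entitled_to_benefit(X) :-
       resident(X),
       contributed(X),
       \+ disqualified(X).      % negation as failure: the rule applies
                                % unless disqualification can be derived
   disqualified(X) :-
       absent_from_country(X).

The general rule stands by default; adding a new ground for disqualification changes its effect without rewriting the rule itself, much as an amending provision does in legislation.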
The example of law does not suggest that programs
expressed in such an extended LP language will be easy
to write, but only that they will be easier to read. Very
few users of natural language, for example, acquire the
command of language needed to draft the Acts of Parliament. Perhaps the future of application-oriented
computer languages will be similar, with only a few
highly skilled program writers but many readers.
There are other important lessons for computing to
be learned from legal language and legal reasoning:
about the relationship between programs (legislation)
and specifications (policies), about the organization and
reuse of software, and about the relationship between
rule-based and case-based reasoning. ICOT has already
begun to explore some of these issues in its own work on
legal reasoning. I believe that such work will become increasingly important in the future.
Default Reasoning
Perhaps the most important extension that has been
developed for LP is negation as failure (NAF) and its use
for default reasoning. In my opinion this development
has great significance not only for knowledge representation in AI but also for the application of logic in everyday life outside computing.
Until the advent of logics for default reasoning, formal logic was largely restricted to the formalization of
statements, such as those of mathematics, that hold universally and without exception. This has greatly inhibited the application of logic in ordinary human affairs.
To overcome these restrictions and to reason with general rules such as "all birds fly," which are subject to endless exceptions, AI researchers have made numerous proposals for logics of default reasoning. Although a great deal of progress has been made in this work, these proposals are often difficult to understand, computationally intractable, and counterintuitive. The notorious
"Yale shooting problem" [4], in particular, has given rise
to a large body of literature devoted to the problem of
overcoming some of the counterintuitive consequences
of these logics.
Meanwhile, LP researchers have developed NAF as a
simple and computationally effective technique, whose
uses range from implementing conditionals to representing defaults. These uses of NAF were justified as
long ago as 1978 when Clark [2] showed that NAF can be
interpreted as classical negation, where logic programs
are "completed" by putting them into "if-and-only-if"
form.
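As a small, textbook illustration (not an example taken from [2]), the "birds fly" default and an exception can be written with NAF, here using the \+ operator:

   flies(X)    :- bird(X), \+ abnormal(X).   % flies unless shown to be abnormal
   abnormal(X) :- penguin(X).
   bird(X)     :- penguin(X).
   bird(tweety).
   penguin(polly).

Clark's completion reads this program as an if-and-only-if definition: flies(x) holds exactly when bird(x) holds and abnormal(x) does not, so the query flies(tweety) succeeds while flies(polly) fails.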
The development of logics of default reasoning in AI
and the related development of NAF in LP have taken
place largely independently of one another. Recently,
however, a number of important and useful relationships have been established between these two areas.
One of the most striking examples of this is the demonstration that NAF solves the Yale shooting problem (see
[6] for a brief discussion). Other examples, such as the
development of stable model semantics and the abductive interpretation of NAF, show how NAF can usefully
be extended by applying to LP techniques first developed for logics of default reasoning in AI. ICOT has
been a significant participant in these developments.
The example of default reasoning shows that much
can be gained by overcoming the sociological barriers
between different research communities. In this case
cooperation between the AI and LP communities has
resulted in more powerful and more effective methods
for default reasoning, which have wide applicability both
inside and outside computing. In my opinion, this is a
development of great importance, whose significance
has not yet been widely understood.
Conclusions
I believe that progress throughout the world in all areas
of FGCS technologies during the last 10 years compares
well with progress in related areas in previous decades.
In all areas, ICOT's results have equaled those obtained elsewhere and have excelled in the area of parallel-computer architectures and their associated software.
ICOT's work compares well with that of other national
and international projects such as Alvey and ESPRIT. If
there has been any major mistake, it has been to believe
that progress would be much more rapid.
Although the technical aspects of FGCS have been
fascinating to observe and exciting to participate in, the
sociological and political aspects have been mixed. On
the one hand, it has been a great pleasure to learn how
similarly people coming from geographically different
cultures can think. On the other hand, it has been disappointing to discover how difficult it is to make paradigm
shifts across different technological cultures.
I believe the technical achievements of the last 10 years justify a continuing belief in the importance of LP for the future of computing. I also believe that the relevance of LP for other areas outside computing, such as law, linguistics, and philosophy, has been demonstrated. Perhaps, paradoxically, it is this wider relevance that, while it makes LP more attractive to some, makes it disturbing and less attractive to others. •
References
1. Chandy, K.M. and Kesselman, C. The derivation of compositional programs. In Proceedings of the Joint International Conference and Symposium on Logic Programming (Washington, D.C., Nov. 1992). MIT Press, Cambridge, Mass., pp. 3-17.
2. Clark, K.L. Negation as failure. In Logic and Data Bases, H. Gallaire and J. Minker, Eds. Plenum Press, New York, 1978,
pp. 293-322.
3. Feigenbaum, E.A. and McCorduck, P. The Fifth Generation.
Addison-Wesley, Reading, Mass., 1983.
4. Hanks, S. and McDermott, D. Default reasoning, nonmonotonic logics, and the frame problem. In AAAI-86. Morgan Kaufmann, San Mateo, Calif., 1986, pp. 328-333.
5. Kowalski, R.A. Legislation as logic programs. In Logic Programming in Action, G. Comyn, N.E. Fuchs, and M.J. Ratcliffe,
Eds. Springer-Verlag, New York, 1992, pp. 201-230.
6. Kowalski, R.A. Logic programming in artificial intelligence.
In IJCAI-91 (Sydney, Australia, Aug. 1991), pp. 596-603.
7. Le Provost, T. and Wallace, M. Domain independent propagation. In Proceedings of the International Conference on Fifth Generation Computer Systems (Tokyo, June 1992), pp. 1004-1011.
8. O'Keefe, R. The Craft of Prolog. MIT Press, Cambridge,
Mass., 1990.
9. Van Roy, P. and Despain, A.M. High-performance logic programming with the Aquarius Prolog compiler. IEEE
Comput. (Jan. 1992), 54-68.
CR Categories and Subject Descriptors: C.1.2 [Processor Architectures]: Multiple Data Stream Architectures (Multiprocessors); D.1.3 [Programming Techniques]: Concurrent Programming; D.1.6 [Software]: Logic Programming; D.3.2 [Programming Languages]: Language Classifications--Concurrent, distributed, and parallel languages, Data-flow languages, Nondeterministic languages, Nonprocedural languages; K.2 [Computing Milieux]: History of Computing
General Terms: Design, Experimentation
Additional Key Words and Phrases: Concurrent logic programming, Fifth Generation Computer Systems project,
Guarded Horn Clauses, Prolog
About the Author:
ROBERT KOWALSKI is a professor of computational logic at
Imperial College. Current research interests include logic programming, artificial intelligence, and legal reasoning. Author's
Present Address: Imperial College Department of Computing,
180 Queen's Gate, London, England, SW7 2BZ; email:
[email protected]
Koichi Furukawa
KEIO UNIVERSITY
Three years prior to the start of FGCS, a committee investigated new information-processing technologies for the 1990s. More than a hundred researchers were involved in the discussions conducted during that time. Two main proposals emerged from the discussions. One was an architecture-oriented proposal focusing on more flexible and adaptive systems that can generate computers tuned to given specifications. The other was a software-oriented proposal aimed at redesigning programming languages and building a new software culture based on those new languages. After thorough feasibility studies were completed, we selected the latter approach because of its potential richness and its expected impact.
For the selection of the kernel programming language, one of the most important points was the adequacy of the language for knowledge information processing. From this point of view, there were two
candidates: LISP and Prolog.
At that time, we had much programming experience in LISP. On the other hand, we were relatively new to Prolog. Fuchi (Keio University), a few other researchers from ETL, and I began a joint research project on logic programming in late 1976. Fuchi was very interested in the Prolog that I brought from SRI in 1976. It was a Prolog interpreter by Colmerauer, written in Fortran and running on a DEC-10 computer. We were able to make it run on our computer and began a very preliminary study using the system. I wrote several programs on database indexing. After that, we obtained DEC-10 Prolog from David H.D. Warren. I wrote a program to solve the Rubik's cube problem in DEC-10 Prolog [9]. It ran efficiently and solved the problem in a relatively short time (around 20 seconds). It is a kind of expert system in which the inference engine is a Production System realized efficiently by a tail-recursive Prolog program. From this experience, I became convinced that Prolog was the language for knowledge information processing.
What Caused the Shift from Prolog to GHC?
From the beginning of the project, we planned to introduce a series of (fifth-generation) kernel languages: the preliminary version, the first version, and the second version of FGKL [9]. The preliminary version was a version of Prolog with some extensions for modularization, meta-structures, and a relational database interface.
The direction of the first and final versions of FGKL was given in [9] at FGCS'81:
The first version of the FGKL will be designed, and its
software simulator will be implemented during the first
stage of the project. One of the goals of this version is a
support mechanism for parallel execution.
A new parallel execution mechanism based on breadth-first search of and/or graphs has to be developed
as well as exception handling of forced sequential execution.
The features extended to Prolog in the preliminary
version should also be refined in this stage.
Another important extension to be introduced in this
stage is concurrent programming capability; [12] developed a very high-level simulation language which incorporates a concurrency control mechanism into a Prolog-like language.
The design and implementation of the second and
final version of FGKL will be completed by the end of
the intermediate stage.
The main feature of the final version is the ability to
deal with distributed knowledge bases. Since each
knowledge base has its own problem-solving capability,
cooperative problem solving will become very important.
One of the basic mechanisms to realize the function is
message passing [14]. Since concurrent programming
needs interprocess communication, the primitive message-passing mechanism will have been developed by the
end of the initial stage. The result of the research on
knowledge representation languages and metainference
systems will be utilized in this stage to specify those functions which FGKL is to provide, and to work out means
to realize them.
In preparing the preceding material, I did not have a
chance to read the paper by Clark and Gregory [5], proposing a concurrent logic programming language called
Relational Language. As soon as I had read the paper, I believed that their proposal was the direction we should
follow because it satisfied most of the functional specifications of the first and second versions of FGKL, as
cited.
I met Ehud Shapiro in Sweden and in France in 1982,
and we found that we had common interests in concurrent logic programming. He proposed a new concurrent
logic programming language called Concurrent Prolog
[20]. This was more expressive than Relational Language in the sense that it was easy to write a Prolog interpreter in the language. Just after the conference, we invited him to continue our discussion. At ICOT, the KL1
design group tried to find the best approach to a concurrent logic programming language. We collaborated with Shapiro, and with Clark and Gregory, who designed Relational Language and, later, PARLOG [4]. We also collaborated with Ken Kahn, who was also very interested in concurrent logic programming. In 1983, we invited those researchers to ICOT at the same time and did extensive work on concurrent logic programming, mainly from two aspects: expressiveness and efficiency.
Concurrent Prolog is not efficient in realizing the atomic unification that is the origin of its expressiveness. It is particularly difficult to implement on the distributed-memory architectures essential for scalable parallel processing.
On the other hand, PARLOG is less expressive, and it is impossible to write a simple Prolog metainterpreter in the language. Furthermore, mode declarations are needed to specify input/output modes. This restriction, however, makes efficient implementation possible even on scalable distributed-memory architectures.
At that time, since it was very difficult to judge which
of the two languages was better, we planned to pursue
both Concurrent Prolog and PARLOG at the same time.
Then, Ueda investigated the details of these two languages. Finally, he proposed Guarded Horn Clauses
(GHC) [22, 24]. It was about one year after the intensive
collaboration. GHC is essentially very similar to Relational Language/PARLOG. One apparent difference is
that GHC needs no mode declaration.
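To give a flavor of the difference, here is the familiar stream-merge program written as GHC guarded clauses of the form "Head :- Guard | Body"; this is a standard textbook example rather than code taken from [22, 24]:

   merge([X|Xs], Ys,     Zs) :- true | Zs = [X|Zs1], merge(Xs, Ys, Zs1).
   merge(Xs,     [Y|Ys], Zs) :- true | Zs = [Y|Zs1], merge(Xs, Ys, Zs1).
   merge([],     Ys,     Zs) :- true | Zs = Ys.
   merge(Xs,     [],     Zs) :- true | Zs = Xs.

A call such as merge(In1, In2, Out) suspends until one of its input arguments is instantiated, because a clause may commit only when its head matches without binding the caller's variables; in PARLOG the same information would instead be stated in a mode declaration (roughly, mode merge(?, ?, ^)).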
After careful investigation of this proposal, we decided to adopt GHC as the core of FGKL version 1.
The change of language from Prolog to concurrent logic programming had, therefore, been planned by us from the beginning, and it was no surprise to me. However, it came as a considerable shock to the
researchers and managers in the private companies that
were collaborating with us. We needed to persuade them
by showing them the new language's superiority over
Prolog.
One serious defect of concurrent logic programming
was the lack of automatic search capability, a shortcoming that is directly related to the notion of completeness
of logic programming. In other words, we obtained the
ability of concurrent programming in logic programming at the cost of automatic search capability.
The research topic of regaining this capability became one of our most important issues after the choice of GHC. We at last succeeded in solving this problem by devising several programming methodologies for all-solution search, after several years of research effort [6, 19, 23].
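The flavor of these methodologies, very roughly, is to enumerate solutions explicitly as data rather than by backtracking. The following sketch is only illustrative (the predicate names are invented and it does not reproduce the techniques of [6, 19, 23]): it collects, on a single list, every way of selecting one element from a list, a task a Prolog program would express nondeterministically.

   % select_all(Xs, Pre, Ss): Ss lists sol(X, Rest) for every choice of an
   % element X from Xs; Pre accumulates the elements already passed over.
   select_all([],     _,   Ss) :- true | Ss = [].
   select_all([X|Xs], Pre, Ss) :- true |
       Ss = [sol(X, Rest)|Ss1],
       append(Pre, Xs, Rest),
       append(Pre, [X], Pre1),
       select_all(Xs, Pre1, Ss1).

   append([],     Ys, Zs) :- true | Zs = Ys.
   append([X|Xs], Ys, Zs) :- true | Zs = [X|Zs1], append(Xs, Ys, Zs1).

A call select_all([a,b,c], [], Ss) binds Ss to [sol(a,[b,c]), sol(b,[a,c]), sol(c,[a,b])], so all solutions become available to other concurrent processes as an ordinary stream.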
Evaluation of the Current Results in Light of
Early Expectations
My very early expectations are best summed up in [9]
and [10]. I will evaluate the current research results by
referring to the descriptions in these papers.
The former paper describes what we were striving for
in mechanisms/functions of problem solving and inference as follows:
The mechanisms/functions of problem solving and
inference that we have considered range from rather
basic concepts such as list processing, pattern matching,
chronological backtracking, and simple Horn clause
deduction, to higher-level ones such as knowledge acquisition, inference by analogy, common sense reasoning,
inductive inference, hypothetical reasoning, and
metaknowledge deduction. Furthermore, high-speed knowledge information processing based on parallel-computation mechanisms and specialized hardware have
been extensively investigated to enhance efficiency and
make the whole system feasible.
The research project for problem solving and inference mechanisms is outlined as follows:
1. The design of the kernel language of the FGC (called
FG-kernel language or simply FGKL): According to the
current program, there will be three evolution stages for
the development of FGKL to follow; the preliminary
version for the first three years, the first version for the
following four years and the second and final version for
the last three years.
2. The development of basic software systems on the
FGKL, including an intelligent programming system, a
knowledge representation language, a metainference
system, and an intelligent human-machine interface system.
3. (Ultimate goal of this project): The construction of
knowledge information processing system (KIPS) by
means of the basic software systems.
According to [9], four major components were proposed to realize the problem-solving and inference
mechanisms: the FG-kernel language, an intelligent programming system, a knowledge representation language, and a metainference system. I will take up these
four items to evaluate the current research results in the
light of my early expectations.
Also, there are several research results that we did not
expect before, or even at the beginning of, the project.
The two most important results are constraint logic programming and parallel theorem provers.
On the FG-Kernel Language
The most important component of this project is the FG-kernel language. In my very early expectations, I wanted
to extend Prolog to incorporate concurrency as the first
version of FGKL. However, concurrent logic programming is not a superset of Prolog. It does not have the
automatic searching capability, and therefore it lacks the
completeness of Prolog. Also, it is not upwardly compatible with Prolog. Fortunately, we largely succeeded in
regaining the searching capability by devising programming techniques.
At the end of the initial stage and the beginning of the
intermediate stage, we tried to design a second version
of FGKL, named KL2, based on GHC/KL1. We designed a knowledge-programming language called Mandala [11], but we failed to build an efficient language
processor for Mandala. The second candidate for KL2
was the concurrent constraint-programming language
called Guarded Definite Clauses with Constraints
(GDCC) [13]. We are working on developing an efficient
language processor in GHC/KL1, but we have not succeeded so far.
We developed a parallel theorem prover called Model
Generation Theorem Prover (MGTP) [8] in GHC/KL1.
We adopted an efficient programming methodology for
the all-solution search [6] to implement MGTP, and we
succeeded in developing a very efficient bottom-up theorem prover for full first-order logic. Although MGTP
demands range restrictedness in the problem description, it is widely applicable to many realistic applications.
This system provides the possibility for full first-order logic to become one of the higher-level programming languages on our parallel inference machine.
On an Intelligent Programming System
Regarding the second component, we emphasized the
importance of intelligent programming to resolve the
software crisis. We described it as follows:
One of the main targets of the FGCS project is to resolve the software crisis. In order to achieve the goal, it is
essential to establish a well-founded software production
methodology with which large-scale reliable programs
can be developed with ease. The key technology will
likely be modular programming, that is, a way to build
programs by combining component modules. It has already been noted that the kernel language will provide constructs for modularization mechanisms, but to enhance the advantage, a programming system should provide suitable support. The ultimate goal is that the support system will maintain a database with modules and
knowledge on them to perform intelligent support for
modular programming.
Concerning modularization, we developed ESP, Extended Self-contained Prolog [2], which features module
construction in an object-oriented flavor. We succeeded
in developing our operating system, SIMPOS [2], entirely in ESP. The result shows the effectiveness of the module system in ESP. However, ESP is not a pure logic programming language; it is something like a mixture of Prolog and Smalltalk. The module system originated in
Smalltalk. In a sense, we adopted a practical approach to
obtaining modularity for developing the operating system by ourselves.
Later, we developed an operating system for PIM,
called PIMOS, in KL1 [3]. We extended GHC by adding
a meta-control mechanism called Shoen (a Japanese
word meaning an ancient regional government) for creating users' programming environments. By utilizing the
Shoen mechanism, we succeeded in developing an operating system that runs on PIM.
For the ultimate goal of an intelligent programming
system, we tried to develop component technologies
such as partial evaluation, program transformation, program verification, and algorithmic debugging. We even
tried to integrate them in a single system called Argus.
However, we could not attain the goal for real applications.
On a Knowledge Representation Language
Since the issue of knowledge representation was discussed in much greater detail in [10], I will use that description as a reference:
The goal of this work is to develop cooperative knowledge-based systems in which problems are solved by the
cooperation of intelligent agents with distributed knowledge sources. An intelligent agent works as if it were an
excellent librarian, knowing where the relevant knowledge sources exist, how to use them to get necessary information, and even how to solve the problem.
With the future progress of knowledge-engineering
technology, it can be expected that the size of knowledge
bases in practical use will become bigger and bigger. We
think the approach aiming at the cooperative knowledge-based systems is a solution of the problem: how to
manage the growing scale of knowledge base in real
problems. As the knowledge sources distribute over the
knowledgeable agents, inference and problem solving
should be executed cooperatively over those distributed
knowledge sources.
The research project for knowledge base mechanisms
is outlined as follows:
1. The development of a knowledge representation system: the design of a knowledge representation language
(called Fifth Generation Knowledge Representation
Language, FGKRL in short) and support systems for the
knowledge base building are planned at the initial stage
of the project.
2. Basic research on knowledge acquisition: this system
is the key to the cooperative knowledge-based system.
3. Basic research on distributed problem solving.
4. Design of the external knowledge bases: the intelligent interface between a central problem solver and external knowledge bases is the key issue of this work. Relational algebra may be an interface language at least in
the earlier stages of the project.
In our research into knowledge bases, we focused on
developing a knowledge representation language, rather
than pursuing its architectural aspect (including a cooperative knowledge base). In this respect, very little research was done in the project for pursuing knowledge
base architecture.
From the viewpoint of an inference mechanism associated with a knowledge base, we expected a mixture of a
production system and a frame-oriented system as described in [9]:
The image of the knowledge representation language we have in mind can be more or less regarded as a mixture of a production system and a frame-oriented system. Our aim is to implement such a sophisticated language on FGKL....
However, it is difficult to efficiently implement a
frame-oriented system on Prolog because of its lack of
structuring concepts. The proposed extension for structuring mechanisms such as modality and metacontrol is
expected to solve the problem.
If we paraphrase this as a mixture of a declarative
knowledge representation scheme having a uniform inference engine and a knowledge representation scheme
for structural information, we can claim we attained the
expectation. We developed a powerful knowledge representation language called Quixote [ 18, 25], based on the
idea of representation of structural information and
constraint satisfaction as its uniform inference engine.
The result is not yet truly satisfactory, in the sense that it cannot, so far, be executed efficiently in concurrent/parallel environments.
On a Metainference System
In the original plan, a metainference system had an important role in realizing intelligent human-machine and
machine-machine interfaces, as described here:
A metainference system serves as a semantic interpreter between a person and a machine and also between two different machines. The interpreter must understand
natural language and human mental states, as well as
machine language and machine status.
We intend to solve several apparently different problems in a single framework of the metainference system.
The problems we consider include: (1) knowledge acquisition, (2) problem-solving control, (3) belief revision,
(4) conversation control, (5) program modification,
(6) accessing external databases, (7) reorganizing external databases.
A metainference system makes use of knowledge
about (a) inference rules, (b) representation of objects,
(c) representation of functions/predicates, and (d) reasoning strategies to solve the problems we have listed.
The most relevant work we noticed after the project started was the amalgamation of language and metalanguage in logic programming by Bowen and Kowalski [1]. Combining this approach with well-known metaprogramming techniques to build a Prolog interpreter, we built an experimental knowledge assimilation
system in Prolog [15, 17]. We treated the notion of integrity constraint as a kind of metaknowledge in the system
and succeeded in developing an elegant knowledge assimilation system using metaprogramming techniques.
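The metaprogramming techniques in question build on the familiar "vanilla" Prolog metainterpreter; the three-clause sketch below is the textbook version (handling only pure, user-defined predicates), not the actual assimilation system of [15, 17]:

   solve(true).
   solve((A, B)) :- solve(A), solve(B).
   solve(A)      :- clause(A, Body), solve(Body).

Because the object program is manipulated as data through clause/2, such an interpreter can be extended with extra arguments or extra clauses, for example to check integrity constraints while new knowledge is assimilated; the price is an extra layer of interpretation.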
One serious problem was the inefficiency of
metaprograms caused by the interpretive execution
steps. Takeuchi [21] applied partial evaluation to
metaprograms and succeeded in removing interpretive
steps from their execution steps. This was one of the
most exciting results we obtained during the preliminary
stage of the project. It is interesting to note that this result was observed around the same time (the end of
1984) as another important result, the invention of GHC
by Ueda.
We also pursued advanced AI functions within the
Prolog framework. They include nonmonotonic reasoning, hypothetical reasoning, analogical reasoning, and
inductive inference.
However, most research output, including the knowledge assimilation system, was not integrated into the concurrent logic programming framework. Therefore, very little was produced for the final fifth-generation computer systems based on PIM.
An exception is a parallel bottom-up theorem prover
called MGTP, and application systems running on it. An
example of such applications is hypothetical reasoning. I
expect this approach will attain my original goal of
"high-speed knowledge information processing based
on parallel computation mechanisms and specialized
hardware" in the near future.
How ICOT Worked and Managed the Project
Research activity is like a chemical reaction. We need to
prepare the research environment carefully to satisfy the
reaction conditions. If the preparation is good, the reaction will start automatically. In the case of research activity, the reaction conditions are (1) an appropriate research subject (knowledge information processing and
parallel processing), (2) an appropriate research discipline (logic programming), (3) sufficiently good researchers, (4) several key persons, and (5) appropriate
research management.
The importance of the research subject is considerable, and the success of our project is due mainly to the selection of the research topics, that is, knowledge information processing and parallel processing, and the selection of logic programming as the research discipline.
At the establishment of the ICOT research center, the director of ICOT, Kazuhiro Fuchi, posed one restriction on the selection of researchers: the researchers must be under 35 years old when they join ICOT. We made another effort to select good researchers from private companies associated with the project by designating researchers whose talent was known to us from their
earlier work.
The overall activity of ICOT covered both basic research and systems development. An important aspect related to this point is a good balance of these two basically different activities. The number of researchers involved in these two activities was about the same at the beginning. The latter gradually increased to around twice as many as the former. The technology transfer of basic research results into the systems development group was performed very smoothly by reorganizing the structure of ICOT and moving researchers from the basic research group into the systems development group. A typical example is the development of GHC in the basic research group, and the later development of KL1, Multi-PSI, and PIM in the systems development group.
In the basic research group, we intentionally avoided research management except for serious discussions on research subjects. I personally played the role of critic (and catalyst), and tried to provide positive suggestions on basic research subjects.
We were very fortunate, since we received unexpected feedback from abroad at the beginning of the project. National and international collaboration greatly helped us to create a highly stimulating research environment. ICOT became a world center of logic-programming research. Almost all distinguished researchers in the logic-programming field visited us and exchanged ideas. Also, we received strong support from domestic researchers through working groups.
Concluding Remarks
Logic programming is a rich enough topic to support a 10-year project. During this decade, it produced new research subjects such as concurrent LP, constraint LP, deductive databases, program analysis, program transformation, nonmonotonic reasoning in LP frameworks, abduction, and inductive logic programming.
Concurrent logic programming was the right choice for balancing efficient parallel processing with expressiveness.
We succeeded in boosting the amount of logic-programming research all over the world, which further benefited our own project.
Logic programming now covers almost all aspects of
computer science, including both software and hardware. It provides one of the best approaches to general
concurrent/parallel processing.
Further research will focus on new applications. Complex inversion problems, including abductive inference and the solving of nonlinear equations by Gröbner bases, are good candidates that will require heavy symbolic computation and parallel processing. •
References
1. Bowen, K.A. and Kowalski, R.A. Amalgamating language
and metalanguage in logic programming. In Logic Programming. Academic Press, New York, 1982, pp. 153-172.
2. Chikayama, T. Programming in ESP--Experiences with
SIMPOS. In Programming of Future Generation Computers.
North-Holland, Amsterdam, 1988.
3. Chikayama, T., Sato, H. and Miyazaki, T. Overview of the
Parallel Inference Machine Operating System (PIMOS). In
Proceedings of the International Conference on Fifth Generation
Computing Systems 1988 (Tokyo). 1988.
4. Clark, K.L. and Gregory, S. PARLOG: Parallel programming in logic. ACM Trans. Program. Lang. Syst. 8, 1 (1986).
5. Clark, K.L. and Gregory, S. A relational language for parallel programming. In Proceedings of the ACM Conference on
Functional Programming Languages and Computer Architecture.
ACM, New York, 1981.
6. Fuchi, K. An impression of KL1 programming--from my experience with writing parallel provers. In Proceedings of the KL1 Programming Workshop '90. ICOT, Tokyo, 1990. In
Japanese.
7. Fuchi, K. and Furukawa, K. The role of logic programming in the Fifth Generation Computer Project. New Generation Computing 5, 1 (1987). Ohmsha-Springer, Tokyo.
8. Fujita, H. and Hasegawa, R. A model generation theorem prover in KL1 using a ramified-stack algorithm. In Proceedings of the Eighth International Conference on Logic Programming (Paris, 1991).
9. Furukawa, K., Nakajima, R., Yonezawa, A., Goto, S. and
Aoyama, A. Problem solving and inference mechanisms. In
Proceedings of the International Conference on the Fifth Generation Computer Systems 1981 (Tokyo, 1981).
10. Furukawa, K., Nakajima, R., Yonezawa, A., Goto, S. and
Aoyama, A. Knowledge base mechanisms. In Proceedings of the International Conference on Fifth Generation Computer Systems 1981 (Tokyo, 1981).
11. Furukawa, K., Takeuchi, A., Kunifuji, S., Yasukawa, H.,
Ohki, M. and Ueda, K. Mandala: A logic based knowledge
programming system. In Proceedings of the International Conference on the Fifth Generation Computer Systems (Tokyo, 1984).
Ohmsha-North-Holland, Tokyo, 1984, pp. 613-622.
12. Futo, I. and Szeredi, P. T-Prolog: A Very High Level Simulation System--General Information Manual. SZ. K. I. 1011 Budapest I. Iskola Utca 8, 1981.
13. Hawley, D. and Aiba, A. Guarded definite clauses with constraints--Preliminary report. Tech. Rep. TR-713, ICOT,
Tokyo, 1991.
14. Hewitt, C. Viewing control structure as patterns of passing
messages. Artif. Intell. 8, 3 (1977).
15. Kitakami, H., Kunifuji, S., Miyachi, T. and Furukawa, K. A
methodology for implementation of a knowledge acquisition system. In Proceedings of the IEEE 1984 International
Symposium on Logic Programming (1984). IEEE Computer
Society Press, Los Alamitos, Calif.
16. Manthey, R. and Bry, F. SATCHMO: A theorem prover
implemented in Prolog. In Proceedings of CADE-88 (Argonne, Ill., 1988).
17. Miyachi, T., Kunifuji, S., Kitakami, H., Furukawa, K.,
Takeuchi, A. and Yokota, H. A knowledge assimilation
method for logic databases. In Proceedings of the IEEE 1984
International Symposium on Logic Programming. IEEE Computer Society Press, Los Alamitos, Calif., 1984, pp. 118-125.
18. Morita, Y., Haniuda, H. and Yokota, K. Object identity in
Quixote. Tech. Rep. TR-601, ICOT, Tokyo, 1990.
19. Okumura, A. and Matsumoto, Y. Parallel programming
with layered streams. In Proceedings of the Fourth Symposium
on Logic Programming (San Francisco, 1987), pp. 343-350.
20. Shapiro, E.Y. A subset of concurrent Prolog and its interpreter. Tech. Rep. TR-003, Institute for New Generation
Computer Technology, Tokyo, 1983.
21. Takeuchi, A. and Furukawa, K. Partial evaluation of Prolog
programs and its application to meta programming. In Proceedings of IFIP '86 (1986). North-Holland, Amsterdam.
22. Ueda, K. Guarded Horn Clauses. In Logic Programming '85,
E. Wada, Ed. Lecture Notes in Computer Science, vol. 221,
Springer-Verlag, New York, 1986.
23. Ueda, K. Making exhaustive search programs deterministic. In Proceedings of the Third International Conference on Logic
Programming (1986). Springer-Verlag, New York.
24. Ueda, K. and Chikayama, T. Design of the Kernel Language for the Parallel Inference Machine. Comput. J. 33, 6
(1990), 494-500.
25. Yasukawa, H. and Yokota, K. Labeled graphs as semantics
of objects. Tech. Rep. TR-600, ICOT, Tokyo, 1990.
CR Categories and Subject Descriptors: C.1.2 [Processor
Architectures]: Multiple Data Stream Architectures (Multiprocessors); D.1.3 [Programming Techniques]: Concurrent
Programming; D.1.6 [Software]: Logic Programming; D.3.2
[Programming Languages]:
Language Classifications--
Concurrent, distributed, and parallel languages, Data-flow languages,
Nondeterministic languages, Nonprocedural languages; K.2 [Computing Milieux]: History of Computing
General Terms: Design, Experimentation
Additional Key Words and Phrases: Concurrent logic programming, Fifth Generation Computer Systems project,
Guarded Horn Clauses, Prolog
About the Author:
KOICHI FURUKAWA is a professor in the Faculty of Environmental Information at Keio University. Current research interests include artificial intelligence, logic programming, and machine learning. Author's Present Address: Keio University,
Faculty of Environmental Information, 5322 Endo, Fujisawa-shi, Kanagawa 252, Japan; email:
[email protected]
Kazunori Ueda
NEC CORPORATION
An outstanding feature of the Fifth Generation Computer Systems (FGCS) project is its middle-out approach. Logic programming was chosen as the central notion with which to link highly parallel hardware and application software, and three versions of so-called kernel language were planned, all of which were assumed to be based on logic programming. The three versions corresponded to the three-stage structure of the project: initial, intermediate, and
final stages.
The first kernel language, KL0, was based on Prolog and designed in 1982 as the machine language of the Sequential Inference Machine. Initial study of the second kernel language, KL1, for the Parallel Inference Machine started in 1982 as well. The main purpose of KL1 was to support parallel computation. The third kernel language, KL2, was planned to address high-level knowledge information processing. Although the Institute for New Generation Computer Technology (ICOT) conducted research on languages for knowledge information processing throughout the project and finally proposed the language Quixote [11], it was not called a "kernel" language (which meant a language in which to write everything). This article will focus on the design and the evolution of KL1, with which I have been involved since 1983.
What are the implications of the middle-out approach to language design? In a bottom-up or top-down approach, language design could be justified by external
criteria, such as amenability to efficient implementation
on parallel hardware and expressive power for knowledge information processing. In the middle-out approach, however, language design must have strong justifications in its own right.
The design of KL0 could be based on Prolog, which was quite stable when the FGCS project started. In contrast, the design of KL1 had to start with finding a counterpart of Prolog, namely a stable parallel programming language based on logic programming. Such a language was supposed to provide a common platform for people working on parallel computer architecture, parallel programming and applications, foundations, and so on.
It is well known that ICOT chose logic programming as its central principle, but it is less well known that the shift to concurrent logic programming started very early in the research and development of KL1. Many discussions took place during the shift, and many criticisms and frustrations arose even inside ICOT. In these struggles, I proposed Guarded Horn Clauses (GHC) as the basis of KL1 in December 1984. GHC was recognized as
a stable platform with a number of justifications, and the
basic design of KL1 started to converge. Thus it should
be meaningful to describe how the research and development of KL1 was conducted and what happened inside I C O T before KL1 became stable in 1987. This article also presents my own principles behind the language
design and perspectives on the future of GHC and KL1.
Joining the FGCS Project
When ICOT started in 1982, I was a graduate student at
the University of Tokyo. My general interest at that time
was in programming languages and text processing, and
I was spending most of my time on the thorough examination of the Ada reference manual as a member of the
Tokyo Study Group of Ada. (Takashi Chikayama, author of an article included in this issue, was also a member of the Tokyo Study Group.) A colleague, Hideyuki
Nakashima, one of the earliest proponents of Prolog in
Japan, was designing Prolog/KR [13]. We and another
colleague, Satoru Tomura, started to write a joint paper
on input and output (without side effects) and string
manipulation facilities in sequential Prolog, with a view
to using Prolog as a text processing language instead of
languages such as SNOBOL.
The work was partly motivated by our concern about declarative languages: we had been troubled by the gap between the clean, "pure" version of a language for theoretical study and its "impure" version
for practical use. I was wondering if we could design a
clean, practical and efficient declarative language.
I had been disappointed with language constructs for
concurrent programming because of their complexity.
However, Hoare's enlightening paper on CSP (Communicating Sequential Processes) [10] convinced me that
concurrent programming could be much simpler than I
had thought.
I joined the NEC Corporation in April 1983 and soon
started to work on the FGCS project. I was very interested in joining the project because it was going to design
new programming languages, called kernel languages,
for knowledge information processing (KIP). The kernel languages were assumed to be based on logic programming. It was not clear whether logic programming
could be a base of the kernel language that could support the mapping of KIP to parallel computer architecture. However, it seemed worthwhile and challenging to
explore the potential of logic programming in that direction.
KL1 Design Task Group
Prehistory
The study of KL1 had already been started at the time I joined the project. ICOT research in the early days was conducted according to the "scenario" of the FGCS project established before the commencement of ICOT. Basic research on the Parallel Inference Machine (PIM) and its kernel language, KL1, started in 1982 in parallel with
the development of the Sequential Inference Machine
and its kernel language, KL0. Koichi Furukawa's laboratory was responsible for the research into KL1.
Although KL1 was supposed to be a machine lan-
guage for PIM, the research into KL1 was initially concerned with higher-level issues, namely expressive
power for the description of parallel knowledge information processing (e.g., knowledge representation,
knowledge-base management and cooperative problem
solving). The key requirements for KL1 included the description of a parallel operating system as well, but this
again had to be considered from higher-level features
(such as concurrency) downward, because ad hoc extension of OR-parallel Prolog with low-level primitives was
clearly inappropriate. The project was also concerned with how to reconcile logic programming and object-oriented programming, which was rapidly gaining popularity in Japan.
Research into PIM, at this stage, focused on parallel
execution of Prolog. Concurrent logic programming was
not yet a viable alternative to Prolog, though, as an initial
study, Akikazu Takeuchi was implementing Relational
Language in Maclisp in 1982. It was the first language to
exclusively use guarded clauses, namely clauses with
guards in the sense of Dijkstra's guarded commands.
Ehud Shapiro proposed Concurrent Prolog that year, a more flexible alternative to Relational Language that featured read-only unification. He visited ICOT from October to November 1982 and worked on
the language and programming aspects of Concurrent
Prolog mainly with Takeuchi. They jointly wrote a paper
on object-oriented programming in Concurrent Prolog
[16]. The visit clearly influenced Furukawa's commitment to Concurrent Prolog as the basis of KL1.
The Task Group
After joining the project in April 1983, I learned it was
targeted toward more general-purpose computing than
I had expected. Furukawa often said what we were going
to build was a "truly" general-purpose computer for the
1990s. He meant the emphasis must be on symbolic
(rather than numeric) computation, knowledge (rather
than data) processing, and parallel (rather than sequential) architecture.
As an ICOT activity, the KL1 Design Task Group started in May 1983.* Members included Koichi Furukawa, Susumu Kunifuji, Akikazu Takeuchi, and me. The deadline of the initial proposal was set for August
1983 and intensive discussions began.
By the time the Task Group started, Furukawa and
Takeuchi were quite confident of the following guidelines:
• (Concurrent Prolog) The core part of KL1 should be
based on Concurrent Prolog, but should support search
problems and metaprogramming as well.
• (Set/stream interface) KL1 should have a set of language constructs that allows a Concurrent Prolog program to handle sets of solutions from a Prolog engine
and/or a database engine and to convert them to
streams.
• (Metaprogramming) KL1 should have metaprogramming features that support the creation and the (controlled) execution of program codes.
*Fortunately, I found a number of old files of the Task Group in storage at ICOT, which enabled me to present the precise record of the design process in this article.
Apparently, the set/stream interface was inspired by
Clark et al.'s work on IC-PROLOG [5], and metaprogramming was inspired by Bowen and Kowalski's work
on metaprogramming [5]. The idea of sets as first-class
objects may possibly have been inspired by the functional language KRC [18].
I knew little about Relational Language and Concurrent Prolog prior to joining the project. I was rather surprised by the decision to abandon Prolog's facilities for searching for solutions, but I soon accepted the decision and liked the language design because of its simplicity.
Various issues related to the preceding three guidelines were discussed in nine meetings and a three-day
workshop, until we finally agreed on those guidelines
and finished the initial proposal. We assumed that KL1 predicates (or relations) would be divided into two categories, namely AND relations for stream-AND-parallel execution of concurrent processes based on don't-care nondeterminism, and OR relations for OR-parallel search based on don't-know nondeterminism. The clear separation of AND and OR relations reflected the assumption that the OR relations would be supported by a separate OR-parallel Prolog machine and/or a knowledge-base machine. (Years later, however, we decided not to create machines other than PIM; we became confident that search and database applications could be supported by software with reasonable performance.) The set/stream interface was to connect these two worlds of computation. We discussed
various possible operations on sets as first-class objects.
Metaprogramming was being considered as a framework for
• the observation and control of stream-AND-parallel
computation by stream-AND-parallel computation, and
• the observation and control of OR-parallel computation by stream-AND-parallel computation.
The former aspect was closely related to operating
systems and the latter aspect was closely related to the
set/stream interface. Metaprogramming was supposed to
provide a protection mechanism also. The management
of program codes and databases was another important
concern. Starting with the "demo" predicate of Bowen
and Kowalski, we were considering various execution
strategies and the representation of programs to be provided to "demo."
Other aspects of KL1 considered in the Task Group
included data types and object-oriented programming.
It was argued that program codes and character strings
must be supported as built-in data types.
The initial report, "Conceptual Specification of the
Fifth Generation Kernel Language Version 1 (KL1),"
which was published as an ICOT Technical Memorandum in September 1983, comprised six sections:
1. Introduction
2. Parallelism
3. Set Abstraction
4. Meta Inference
5. String Manipulation
6. Module Structures
In retrospect, the report presented many good ideas
and very effectively covered the features that were realized in some form by the end of the project, though, of course, the considerations were immature. The report did not yet consider how to integrate those features in a coherent setting, nor did it clearly distinguish between features requiring hardware support and those realizable by software.
ICOT invited Ehud Shapiro, Keith Clark, and Steve
Gregory in October 1983 to discuss and improve our
proposal. Clark and Gregory had proposed the successor of the Relational Language, PARLOG [3]. Many
meetings were held and many ICOT people outside the
Task Group attended as well.
In the discussions, Shapiro criticized the report as introducing too many good features, and insisted that the
kernel language should be as simple as possible. He tried
to show how a small number of Concurrent Prolog primitives could express a variety of useful notions, including
metaprogramming. While Shapiro was exploring a
metainterpreter approach to metaprogramming, Clark
and Gregory were pursuing a more practical approach
in PARLOG, which used the built-in "metacall" primitive
with various features.
From the implementation point of view, most of us
thought the guard mechanism and the synchronization
primitive of PARLOG were easier to implement than
those of Concurrent Prolog. However, the KL1 Design
Task Group stuck to Concurrent Prolog for the basis of
KL1; PARLOG as of 1983 had many more features than
Concurrent Prolog and seemed less appropriate as a starting point. Some people were afraid that PARLOG
imposed dataflow concepts that were too static, making
programming less flexible.
The discussions with the three invited researchers
were enlightening. The most important feedback, I believe, was that they reminded us of the scope of KL1 as
the kernel language and led us to establish the following
principles:
• Amenability to efficient implementation
• Minimal number of primitive constructs (cf. Occam's
razor)
• Logical interpretation of program execution
Meanwhile, Furukawa started to design a user-level
language on top of KL1. The language was first called
Popper (for Parallel Object-oriented Prolog Programming EnviRonment), and then Mandala. On the other
hand, the implementation aspect of KL1 was left behind
for a long time, until Shapiro started discussions of sequential, but serious, implementation of Concurrent
Prolog. The only implementation of Concurrent Prolog available was an interpreter on top of Prolog, which was not fast--a few hundred reductions per second (RPS) on a DECsystem-20.
After the three invited researchers left, the Task
Group had many discussions on the language specification of KL1 and the sequential implementation of Concurrent Prolog. Although we started to notice that the
informal specification of Concurrent Prolog left some
aspects (including the semantics of read-only unifica-
tion) not fully specified, we became convinced that Concurrent Prolog was basically the right choice, and in January 1984 started to convince the ICOT members and
the members of relevant Working Groups.
Three large meetings on Concurrent Prolog were
held in February 1984, which many people working on
the FGCS project attended. The Task Group argued for
Concurrent Prolog (or concurrent logic programming in
general) as the basis of KL1 on the following grounds:
1. It is a general-purpose language in which concurrent
algorithms can be described.
2. It has added only two syntactic constructs to the logic
programming framework.
3. It retains the soundness property of the logic programming framework.
4. Elegant approaches to programming environments
taken in logic programming could be adapted to concurrent logic programs.
People gave implicit consent to the choice of the Task
Group in that nobody proposed an alternative basis of
KL1 in response to our solicitation. However, as a matter
of fact, people were quite uneasy about adopting Concurrent Prolog as the basis of KL1. The arguments being
made by the Task Group seemed based on belief rather
than evidence. Many people, particularly those working
on PIM, were rather upset (and possibly offended) that
don't-know nondeterminism of Prolog was going to be
excluded from the core part of KLI and moved to a
back-end Prolog engine. Unlike logic programming, the
direction of computation was more or less fixed, which
was considered inflexible and unduly restrictive. However, Furukawa maintained that parallel symbolic processing was a more important aspect of KL1 than
exhaustive search.
Implementation people had another concern:
whether reasonable performance could be obtained.
Some of them even expressed the thought that it could
be too dangerous to have parallel processing as the main
objective of the FGCS project.
Nevertheless, through the series of meetings, people
agreed that a user language must be higher level than
Concurrent Prolog and that various knowledge representation languages should be developed on top of the
user language. We also agreed that programming environments for Concurrent Prolog (and a KL1 prototype)
must be developed quickly in order to accumulate experience with concurrent logic programming. We decided to develop a Concurrent Prolog implementation in a general-purpose language (C was considered first; Maclisp was finally chosen) to study implementation
techniques.
In March 1984, the Task Group finalized the report
on the Conceptual Specification of KL1 and published it
as an I C O T Technical Report [7]. The report now concentrated on the primitive features to be supported directly in KL1 for flexible and efficient KIP.
Implementing Concurrent Prolog
A good way to understand and examine a language definition is by trying to implement it; this forces us to consider every detail of the language. In April 1984, the
Task Group acquired some new members, including
Toshihiko Miyazaki, Nobuyuki Ichiyoshi and Jiro Tanaka, and started a project on the sequential implementation of Concurrent Prolog under Takeuchi's coordination. We decided to build three interpreters in Maclisp,
which differed in the multiple environment mechanisms
necessary to evaluate the guard parts of program
clauses. The principal member, Miyazaki, was quick in
designing and Lisp programming. We also started to
build a Mandala implementation in Concurrent Prolog.
As an independent project, Chikayama started to
improve Shapiro's Concurrent Prolog interpreter on top
of Prolog. By compiling program clauses to some extent,
he improved the performance to 4kRPS, a much better
number for practical use. I further improved the performance by compiling clauses to a greater degree, and obtained 1 lkRPS by November 1984, a number better than
most interpretive Prolog systems of that time.
We had a general question on the implementation of
concurrent logic languages as well, which had been mentioned repeatedly in our discussions on systems programming. The question was whether basic operations
such as many-to-one interprocess communication and
array updates could be implemented as efficiently as
could be done in procedural languages in terms of time
complexity. For systems programming without side effects to be practical, it seemed essential to show that the
complexity of these operations is not greater than that of
procedural languages. I devised an implementation
technique for these operations with Chikayama in the
beginning of 1984, and presented it at the FGCS'84 conference. These two pieces of work on implementation
convinced me of the viability of concurrent logic programming languages as the basis of KL1.
Meanwhile, Clark visited us again in spring 1984, and
introduced a revised version of PARLOG [4]. The language had been greatly simplified. Although we were
too committed to Concurrent Prolog at that time, the
new version influenced the research on KL1 later in various ways.
The three Concurrent Prolog interpreters were almost complete by August 1984, and an interim report
comparing the three methods was written. Two successor projects started in September, one on an abstract
KL1 machine architecture, and the other on a KL1 compiler. I started to design an abstract machine instruction
set with Miyazaki, but was not very excited about it. One
reason was that we had found several unclarified points in
the definition of Concurrent Prolog, most of which were
related to read-only unification and the execution of the
guards of program clauses. I started to feel that we
should reexamine the language specification of Concurrent Prolog in detail before we went any further. The other reason was that full implementation of the guard
construct seemed to be too complicated to be included in
a KL1 implementation. The idea of Flat Concurrent
Prolog (FCP), which avoided the nesting of guards by
allowing only simple test predicates in guards, was conveyed to us by Shapiro in June 1984, but few of us, including me, were interested.
In retrospect, it is rather curious that we stuck to the
full version of Concurrent Prolog, which was hard to
implement. However, we were not confident of moving to any subset. The guard construct, if successfully implemented, was supposed to be used for OR-parallel problem solving and for the protected execution of user programs in an operating system.
People working on PIM, who were supposed to implement KL1 in the future, were getting impatient in mid-1984. As architects, they needed a concrete specification of KL1 as early as possible and wanted to know what kinds of operations should be particularly optimized, but the design of KL1 had not reached such a phase. On the other hand, members of the KL1 Design Task Group were unhappy that they received few constructive comments from outside. A kind of mutual distrust was exposed in three meetings of the PIM Working Group held from June to August, in which the Task Group conferred with the PIM people.
Proposal of GHC, a New Basis of KL1
After the FGCS'84 conference in November 1984, I
started to reexamine the language specification of Concurrent Prolog in detail, the main concerns being the
atomicity (or granularity) of various operations, including read-only unification and commitment, and the semantics of the multiple environment mechanism [19]. Many subtle points and difficulties were found and discussed. I had to conclude that although the language rules could be made rigorous and coherent, the resultant set of rules would be more complex and require more
sequentiality than we had expected.
The result of that work was not very encouraging, but
I continued to seek a coherent set of language rules. In
mid-December, I came up with an alternative to Concurrent Prolog, which solved the problems with read-only
unification and the problems with the multiple environment mechanism simultaneously. The idea was to suspend the computation of the guard of a clause if it would
require a multiple environment mechanism, that is, if
the computation would instantiate variables in the caller
of the clause. The semantics of the guard now served as the synchronization mechanism as well, making read-only unification unnecessary.
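To make the rule concrete, here is a minimal two-clause sketch in the guarded-clause notation later adopted for GHC (the predicate name and constants are illustrative only). A clause may commit only when the caller's arguments already match its head and guard without being instantiated by the callee; until then the call simply waits.

    % Head matching is the synchronization condition; no read-only
    % annotation is needed. A call respond(X, R) with X unbound suspends;
    % once the caller binds X to ping, the first clause commits and its
    % body publishes the answer through R.
    respond(ping, R) :- true | R = pong.
    respond(stop, R) :- true | R = done.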
On December 17, I proposed the new language to the KL1 Design Task Group as KL0.7. The name KL0.7 meant the core part of KL1, leaving aside:
• the decision on whether to include pure Prolog to support exhaustive search directly
• machine-dependent constructs, and
• the set of predefined predicates
The handout (in Japanese) included the following
claims:
1. Read-only annotation is dispensed with because it
does not fit well in fine-grained parallel computation
models.
2. Multiple environments are unnecessary. It is not yet clear whether multiple environments must be implemented, while they certainly add to implementation cost. Multiple environments make the visibility rule (of the values of variables) and the commitment rule less clear.
Figure 1. Conceptual configuration of KL1 (1984) [7]
Figure 2. Structure of KL1 (1985)
3. Mode declaration is dispensed with; it can be excluded from the kernel language level.
4. One kind of unification is enough at the kernel language level, though a set of specialized unification primitives could exist at a lower level.
5. Implementation will be as easy as PARLOG.
6. Implementation will be as efficient as PARLOG.
7. A metainterpreter of itself can be written.
8. Sequentiality between primitive operations is minimized, which will lead to high intrinsic parallelism and
clearer semantics.
Interestingly, the resultant language turned out to be semantically closer to PARLOG than to Concurrent Prolog and FCP in the sense that it was a single-environment language. Unlike PARLOG, however, it did not assume static analysis or compilation; PARLOG assumed compilation into Kernel PARLOG, a language with lower-level primitives. The handout also claimed that pure Prolog
need not be included in KL1 if we made sure that exhaustive search could be done efficiently in KL1 without
special support.
The only new aspect to be considered in the implementation of KL0.7 was the management of nested guards. I found it could be done anyway and expected that static analysis would help in many cases. It was not clear whether complex nested guards were really necessary, but they were free of semantic problems and thus
could be retained for the time being. In addition, the
new language was undoubtedly more amenable to compilation than Concurrent Prolog.
I quickly finished two internal reports, "Concurrent
Prolog Re-Examined" and "A Draft Proposal o f CPII"
and b r o u g h t them to the Task G r o u p meeting on December 20. T h e name CPII (for Concurrent Prolog II)
was selected tentatively and was used for a while. T h e
Task G r o u p seemed to welcome my proposal and appreciated the simplification.
In January 1985, the Task Group continued designing KL1 with a new basis. Takeuchi proposed that KL1 be CPII with the metacall construct à la PARLOG and primitives for the allocation and scheduling of goals. The proposal was well reflected in the final structure of the core part of KL1. The set/stream interface and modularization (as a user-level feature) were still considered to be part of KL1, but were put aside for the moment. By January 1985, the Task Group reached an agreement to base KL1 on CPII. The agreement was quick and without too much discussion, because we had agreed to base KL1 on some concurrent logic language, and CPII seemed to have solved most of the problems we had experienced with Concurrent Prolog. CPII did exclude some of the programming techniques allowed in Concurrent Prolog, as Shapiro's group at the Weizmann Institute pointed out later. However, we preferred a language that was simpler and easier to implement.
People outside the Task Group also welcomed the proposal of CPII, though most of them were not yet convinced of the approach based on concurrent logic programming in general. It was not very clear, even to us in
the Task Group, how expressive this conceptual language was in a practical sense, much less how to build
large parallel software in it. However, there seemed to
be no alternative to CPII as long as we were to go with
concurrent logic programming, since the language
seemed to address "something essential."
In early January 1985, Masahiro Hirata at Tsukuba University, who was independently working on the formal operational semantics of Concurrent Prolog, notified me that the functional language Qute, designed by Masahiko Sato and Takafumi Sakurai [15], had incorporated essentially the same synchronization mechanism. The news made me wonder if the essence of CPII was simply the rediscovery of a known idea. After learning that Qute introduced the mechanism to retain the Church-Rosser property in the evaluation of expressions, however, I found it very interesting that the same mechanism had been introduced independently in different languages from different motivations. This suggested that the mechanism introduced in these languages was more universal and stable than we had thought at first. Apparently, Hirata was independently considering an alternative to the synchronization mechanism of Concurrent Prolog, and later proposed the language Oc [9], which was essentially CPII without any guard goals.
By January 21, I modified my Concurrent Prolog compiler on top of Prolog and obtained a CPII compiler. The modification took less than two days, and demonstrated the suitability of Prolog for rapid prototyping. Miyazaki also made a GHC compiler with more features by modifying Chikayama's Concurrent Prolog compiler on top of Prolog.
In the meantime, I considered the name of the language by putting down a number of keywords in my notebook. The name was changed to Guarded Horn Clauses (GHC) by February 1985.
In March 1985, the project on Multi-SIM (later renamed Multi-PSI) started under Kazuo Taki's coordination. Its purpose was to provide an environment for the
development of parallel software. Thus, by the end of the initial stage, we could barely establish a starting point of research in the intermediate stage.
From GHC to KL1
In June 1985, the intermediate stage of the FGCS project started, and I joined ICOT while maintaining a position at NEC.
Shortly before that, the KL1 Design Task Group (the members being Furukawa, Takeuchi, Miyazaki, Ueda and Tanaka at that time) prepared a revised internal report on KL1. The two main aspects of the revision were (i) the adoption of GHC in view of parallel execution and (ii) the reallocation of proposed features to three sublanguages, KL1-C (core), KL1-P (pragma), and KL1-U (user). KL1-C, the core part of KL1, was supposed to be GHC augmented with some metacall construct to support metainference and modularization. KL1-P was supposed to deal with the mapping between KL1-C programs and the physical resources of the underlying implementation. The proposed components of KL1-P were an abstract machine model, a mechanism for allocating goals to processors, and a mechanism for scheduling goals allocated to the same processor. KL1-U was considered a set of higher-level language constructs to be compiled into KL1-C and KL1-P, which included the support of pure Prolog (with a set/stream interface) and a module construct.
Another sublanguage, KL1-B, was added to KL1 after a while. Although KL1-C and KL1-P were supposed to be the lowest-level sublanguages for programmers, they were still too high-level to be executed directly by hardware. We decided to have a layer corresponding to the Warren Abstract Machine for Prolog. Initial study of the operating system for PIM, called PIMOS, started as well in June 1985.
We had assumed that KL1-C had all the features of GHC, including nested guards, until Miyazaki and I visited Shapiro's group at the Weizmann Institute for two weeks from July to August 1985. During the visit, we had intensive discussions on the differences between GHC and Concurrent Prolog/FCP. We also discussed the subsetting of GHC to Flat GHC, an analogue of FCP obtained from GHC.
Colleagues at the Weizmann Institute (Stephen Taylor in particular, who later codesigned Strand and PCN) were greatly interested in Flat GHC as an alternative to FCP. However, they were concerned that the smaller atomic operations of Flat GHC made the language less robust for describing their Logix operating system. In Concurrent Prolog and FCP, a goal publishes binding information to the outside upon its reduction to other goals, while in GHC, the publication is done after reduction, using an independent unification goal in a clause body. The separation made implementation much easier, but caused a problem in their metainterpreter approach to operating systems: the failure of a unification body goal might lead to the failure of the whole system.
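The difference can be seen in one clause of a stream merger, sketched below in the two notations (schematically; the read-only annotations and guard conventions follow the published descriptions of the languages rather than any particular implementation).

    % Concurrent Prolog/FCP: the head publishes [A|Zs] into the caller's
    % third argument atomically when the clause is selected.
    merge([A|Xs], Ys, [A|Zs]) :- merge(Xs?, Ys, Zs).

    % GHC: the head only matches; the binding is published after commitment
    % by the independent body unification goal Zs0 = [A|Zs].
    merge([A|Xs], Ys, Zs0) :- true | Zs0 = [A|Zs], merge(Xs, Ys, Zs).

The second form is what turns a failing publication into the failure of an ordinary body goal rather than of clause selection, which is exactly the property the Logix group was worried about.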
Our visit provoked many discussions in the FCP group, but they finally decided not to move to Flat GHC on the ground that Flat GHC was too fragile for the
metainterpreter approach [17]. On the other hand, we
chose the metacall approach because we thought the
metainterpreter approach would require very careful
initial design in order to get everything to work well,
which could be too time-consuming for us. The metacall
approach was less systematic, but this meant it would be
easier to make extensions if required in the development
of the PIMOS operating system.
Back in ICOT, a meeting was held to discuss whether
we should move from GHC to Flat GHC. Since Flat
GHC was clearly preferable from an implementation
point of view, the question was whether the OR-parallel
execution of different nested guards was really necessary, or if it could be efficiently compiled into the AND-parallel execution of different body goals. We did not
have a definite answer, but decided to start with Flat
GHC since nobody claimed the necessity of nested
guards. A week later, Miyazaki submitted the detailed
design of a Flat GHC implementation for Multi-SIM,
and Taki submitted the design of interconnection hardware for Multi-SIM. Miyazaki also submitted a draft
specification of KL1-C as a starting point for discussions. The points to be discussed included the detailed execution rule of guards, the distinction between failure and suspension, the details of metacall predicates, the treatment of extralogical predicates, requirements for systems programming, and the handling of various abnormal situations (i.e., exception handling).
However, the details of KL1-C were left unfinalized
until summer 1987. We had a number of things to do
before that. From an implementation point of view, we
first had to develop basic technologies for the parallel
implementation of Flat GHC, such as memory management and distributed unification. From a programming
point of view, we had to accumulate experiences with
Flat GHC programming. Although the focus of the R&D
of parallel implementation had converged on (Flat) GHC
by the end of 1985, it was still very important to accumulate evidence, particularly from the programming point
of view, that the concurrent logic programming approach was really feasible. One of the greatest obstacles
to be cleared in that respect was to establish how to program search problems in Flat GHC.
I started to work on compilation from pure Prolog to
Flat GHC in the spring of 1985. Since Hideki Hirakawa
had developed a pure Prolog interpreter in Concurrent
Prolog [8], the initial idea was to build its compiler version. However, the interpreter used an extralogical feature, the copying of nonground terms, which turned
out not to make any sense in the semantics of GHC.
After some trial and error, in September 1985, I found a
new method of compiling a subset of pure Prolog into
Flat GHC programs that enumerated the solutions of
the original programs. While the technique was not as
general as people wanted it to be in that it required the
mode analysis of the original programs, the avoidance of
the extralogical feature led to higher performance as
well as clearer semantics.
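A hedged illustration of the flavor of the result (hand-written, not actual compiler output, with illustrative predicate names): a don't-know nondeterministic Prolog goal such as member(X, [a,b,c]) is mirrored by a Flat GHC predicate that returns the whole solution set through an extra output argument. For member/2 this degenerates into copying the list; the compilation scheme generalized the idea to deeper search predicates, which is why mode information about the source program was required.

    % Prolog: member(X, [a,b,c]) succeeds once per solution by backtracking.
    % Flat GHC analogue: enumerate every solution into the output list Ss.
    solutions_of_member([],     Ss) :- true | Ss = [].
    solutions_of_member([X|Xs], Ss) :- true | Ss = [X|Ss1],
                                              solutions_of_member(Xs, Ss1).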
Although the technique itself was not widely used
later, people started to agree that an appropriate compilation technique could generate efficient concurrent
logic programs for search problems. An important outcome along this line was the Model Generation Theorem Prover (MGTP) for a class of non-Horn clause sets [6].
My main work in 1985 and 1986 was to examine and
justify the language design in various respects and
thus to make the language more robust. I had a number
of opportunities to give presentations and have discussions on GHC. These were very useful for improving the
way in which the language was explained. The paper on GHC was first presented at the Japanese Logic Programming Conference in June 1985 [20]. A two-day tutorial on
GHC programming, with a textbook and programming
assignments, was held in May 1986 and 110 people attended. All these activities were quite important, because
people had not had sufficient exposure to actual GHC
programs and had little idea about how to express things
in GHC.
At first, I introduced GHC to people by comparing it with the framework of logic programming. However, I started to feel it was better to introduce GHC as a model of concurrent computation. GHC could also be viewed as a concurrent assembly language, featuring process spawning, message sending/receiving, and dynamic memory management. I revised my presentation transparencies describing the syntax and the semantics of GHC several times. The current version uses only one
transparency, where I refer to the syntactic constructs of
logic programming only for conciseness.
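The following sketch shows the reading that made this view natural (illustrative names; KL1's := notation for arithmetic is assumed): a recursive goal is a process, its state is an argument, the elements of a stream are its messages, and spawning amounts to creating new body goals.

    % A counter process with a command stream as its mailbox.
    counter(_, [])             :- true | true.                        % terminate
    counter(N, [up|Cs])        :- true | N1 := N + 1, counter(N1, Cs).
    counter(N, [show(X)|Cs])   :- true | X = N, counter(N, Cs).       % send reply
    counter(N, [fork(Cs1)|Cs]) :- true | counter(N, Cs1),             % spawn a copy
                                         counter(N, Cs).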
KL1-related R&D activities in the intermediate stage
started as collaboration between the first research laboratory for basic research (to which the KL1 Design Task
Group belonged) and the fourth research laboratory for
implementation. As the Multi-SIM project progressed,
however, the interaction between the two laboratories
decreased. The fourth research laboratory had to design
implementation details, while the first research laboratory was concerned with various topics related to concurrent and parallel programming. In November 1986, all
the development efforts related to KL1, including the
design of KL1, were gathered in the fourth research laboratory.
The details of KL1 had to be determined with many
practical considerations in implementation. GHC was to
concurrent logic programming what pure Prolog was to
logic programming; there was still a gap between GHC
and a kernel language for real parallel hardware. I was
of course interested in the design of KL1, but thought
there was no choice but to leave it to the implementation team. During 1986 and 1987 I spent much of my time giving tutorials on GHC and writing
tutorial articles. I did have another implementation
project with Masao Morita, but it was rather a research
project with the purpose of studying the relationship
between language specification and sophisticated optimization techniques.
In the summer of 1987 Chikayama and his team finally fixed the design of KL1-C and KL1-P. The design
of KL1-C reflected many discussions we had since
Miyazaki's draft specification, and took Chikayama's
memory management scheme based on 1-bit reference
counting (MRB scheme) [2] into account. KL1-C turned
out not to be a genuine extension of Flat GHC but had
several ad hoc restrictions which were mainly for implementation reasons. I did not like the discrepancy between pure and practical versions of a language, but I
felt that if some discrepancy was unavoidable, the two
versions should be designed by different people. In our
project, both GHC and KL1 are important in their own right and have different, rather orthogonal design rationales, which were not to be confused. Fortunately, the discrepancy is far smaller than the discrepancy between pure Prolog and Prolog, and can be neglected when discussing the fundamental differences between GHC and KL1 (see the following subsection "GHC and KL1").
Research in the Final Stage
Since 1987, the activities related to the kernel language
in the first research laboratory were focused on basic
research on Flat GHC and GHC programming. The
additional features of KL1 (by KL1 we mean KL1-C and
KL1-P henceforth, ignoring the upper and lower layers)
were too practical for theoretical study, and Flat GHC
itself still had many aspects to be explored, the most
important of which were formal semantics and program
analysis.
I had long thought that in order to maintain its own
"healthiness," in addition to reconciling parallel architecture and knowledge information processing, a kernel
language must reconcile theory and practice. A programming language, particularly a "declarative" one,
can easily split into a version for theoretical study and
another version for practice, between which no substan-
tial relationship remains. I wanted to avoid such a situation. Unfortunately, the interests of most ICOT theoreticians were not in concurrent logic programming (with
a few exceptions, including Masaki Murakami, who
worked on the semantics of Flat GHC, and Kenji
Horiuchi, who worked on abstract interpretation). Since
January 1988, I thought about how the set of unfold/
fold transformation rules for Flat GHC, initially proposed by Furukawa, should be justified. I finally developed what could be viewed as an asynchronous version
of theoretical CSP, in which each event was a unit transaction between an observee process and its observer, and
presented this idea at the FGCS'88 conference.
At the FGCS'88 conference, I was invited to the final
panel discussion on theory and practice of concurrent
systems chaired by Shapiro, and presented my position
on the role and the future direction of kernel languages
[24]. The panel was exceptionally well organized and
received favorable responses, which was unusual for
panel discussions.
I suggested two research directions for KL1 in the panel. The first was the reconstruction of metalevel features in KL1. By metalevel features I meant the operations that referred to and/or modified the "current" status of computation. Jiro Tanaka had been interested in the concept of reflection since 1986 and was designing reflective features for Flat GHC with his colleagues. I liked
the approach, but felt that a lot of work was necessary
before we could build a full-fledged concurrent system
with reflective operations.
The second research direction was the simplification
of KL1 and the development of sophisticated optimization techniques, the motivation being to promote KL1
programming with many small concurrent processes.
The ultimate goal was to implement (a certain class of) processes and streams as efficiently as records and pointers in procedural languages. I became interested in optimization techniques for processes that were almost always suspending, and began studying them with Masao Morita in September 1988. The work was intended to complement the R&D of Multi-PSI and PIM and to explore the
future specification of KL1 to be used beyond the FGCS
project.
We soon devised the basic idea of what we later called
the message-oriented implementation technique [25],
though it took a long time to generalize it. We found it
interesting that Flat GHC programs allowed an implementation technique totally different from the one
adopted by all the other implementations.
Sophisticated optimization clearly involved sophisticated compile-time analysis of programs, particularly the
global analysis of information flow (mode analysis). Concurrent logic languages employed unification as the
basic means of communication. Although mathematically elegant, the bidirectionality of unification made its
distributed implementation rather complicated. From
the language point of view, the bidirectionality might
cause unification failure, the failure of unification body
goals. Unification failure was considered an exceptional
phenomenon very similar to division-by-zero in procedural languages (not as a superficial analogy, as ex-
plained in [21]), and hence it was much more desirable
to have a syntactic means to avoid it than to have it processed by an exception handler.
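A two-line sketch of the kind of failure at issue (the predicate name is illustrative): if more than one body goal is allowed to act as a writer of the same variable, their published bindings may conflict, and the conflict surfaces only at run time as unification failure. A syntactic single-writer discipline, which is what the mode system described next enforces, rules out such programs before execution.

    % Both body unifications write the caller's variable X; any call p(X)
    % makes the two publications conflict, causing unification failure.
    p(X) :- true | X = a, X = b.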
On the other hand, people working on development were skeptical about global program analysis, suspecting that it was not practical for very large programs. The skepticism, however, led me to develop a mode analysis technique that was efficient and amenable to separate analysis of (very) large programs [25]. The technique was based on a mode system which turned Flat GHC into a strongly moded subset called Moded Flat GHC. I presented the technique at ICOT's 1990 new-year plenary meeting. Very interestingly, two other talks at the meeting argued against general unification in KL1 as well. The group implementing distributed unification complained of its complexity. The group working on
natural languages and knowledge representation
pointed out that unification in KL1 did not help in implementing unification over richer structures such as
feature graphs. These arguments convinced me that
general unification was not necessary or useful at the kernel language level, though the progress made with KL1
implementations on PIM had been too great for us to
stop implementing general distributed unification. KL1
implementations on PIM would have been considerably
simpler if the mode analysis technique had been proposed earlier.
Reflections and Future Prospects
GHC and KL1
How close is the current status of KL1 to my vision? In many senses, KL1 was designed from very practical considerations, while the main concern of GHC was the basic framework of concurrent computation. As a positive aspect of KL1's design policy, its performance is no worse than that of procedural languages in terms of computational complexity, and its absolute performance is also
good for a novel symbolic processing language.
On the other hand, the constructs for metaprogramming have stayed rather conservative. I expected that practical metaprogramming constructs with some theoretical background could eventually be designed, but it turned out to be very difficult. Also, the precise semantics of guards seems to have somewhat ad hoc aspects. For instance, the otherwise construct for specifying "default"
clauses could have been introduced in a more controlled
way that allowed better formal interpretation.
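As a sketch of what the construct does (illustrative predicate; writing otherwise between the clauses of a predicate follows common KL1 practice), the clauses placed after otherwise become candidates only when every clause before it is known to be inapplicable:

    lookup(Key, [entry(Key, V)|_], R) :- true | R = found(V).
    lookup(_,   [],                R) :- true | R = not_found.
    otherwise.
    lookup(Key, [_|Rest],          R) :- true | lookup(Key, Rest, R).

The default clause needs no explicit inequality test, but its applicability now depends on the textual order of the clauses, which is the kind of ad hoc aspect referred to above.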
From a methodological point o f view, the separation
of the two languages, GHC and KL1, turned out to be
successful [25]. In designing these two languages, it
turned out that we were trying to separate two different,
though closely related, notions: concurrency and parallelism. Concurrency has to do with correctness, while
parallelism has to do with efficiency. GHC is a concurrent language, but its semantics is completely independent of the underlying model of implementation.
Before GHC was designed, Shunichi Uchida, who led
the implementation team, maintained that the basic
computational model of KL1 should not assume any
particular granularity of underlying parallel hardware.
To make effective use of parallel computers, we should be able to specify how a program should most desirably be executed on them, at least when we wish. However, the specification tends to be implementation-dependent and is best given separately. This is an important role of KL1, or more precisely, of KL1-P. The clear separation of concurrency and parallelism made it easier to tune programs without affecting their meaning.
On GHC, the main point of success is that it simplified the semantics of guards by unifying two previously distinct notions: synchronization and the management of binding environments. When Gérard Huet visited ICOT in 1988, he finished a CAML implementation of Flat GHC in a few days. I was impressed with the quick, constructive way of understanding a programming language that he presented, but this was possible because GHC
was so small.
Another point of success is that GHC turned out to be very stable, now for eight years. I always emphasized the design principles and basic concepts of GHC whenever I introduced it, and stubbornly kept the language unchanged. This may have caused frustration to GHC/KL1 programmers. Indeed, the design of GHC has not been considered deeply from a software engineering point of view. However, the essence of GHC is in its semantics; the syntax could be redesigned as long as a program in a new syntax can be translated to a program in the current syntax in a well-defined manner. I found the design of user languages much more difficult to justify, though they should be useful for the development of large software. Many candidates for KL1-U were considered in ICOT, but the current one turned out to be a rather conservative set of additional syntactic conveniences.
Although I have kept GHC unchanged, I have continued to study the language. This added much to the stability of the language and improved the way the language was explained. Many ideas that were implicit when GHC was designed materialized later from research inside and outside ICOT, and contributed to the justification of the language design. Important theoretical results from outside ICOT include the logical account of the communication mechanism by Maher [12] and Saraswat's work on concurrent constraint programming [14], which subsumes concurrent logic programming and Flat GHC in particular. On the personal side, I have always been interested in clarifying the relationship between concurrent logic programming and other formalisms of computation, including (ordinary) logic programming and models of concurrency. I have also been interested in subsetting and have come up with a strongly moded subset called Moded Flat GHC.
Many people in the project worked on the implementation of KL1 and on KL1 programming, and produced innovative outcomes [11]. They were all important in demonstrating the viability of the concurrent logic programming approach and provided useful information for future language design and implementation. I believe our R&D went quite well. A new paradigm of parallel symbolic programming based on a new programming language has gone in a promising direction, though of
course, much remains to be done.
Did logic programming have anything to do with the
design of KL1? The objective of concurrent logic programming is quite different from the objective of logic programming [22]; nevertheless, logic programming
played an important role in the design of GHC by giving
it strong guidelines. Without such strong guidelines, we
might have relied too much on existing concurrency
constructs and designed a clumsier language. It is not
easy to incorporate many good ideas coherently in a single language.
Consequently, GHC programs still allow a nonvacuous logical reading. Instead of featuring don't-know
nondeterminism, GHC and other concurrent logic languages tried to give better alternatives to operations that
had to be done using side effects in Prolog. Logic programming provided a nice framework for reasoning and
search and, at the same time, a nice framework for computing with partial information. Concurrent logic programming exploited and extended the latter aspect of
logic programming to build a versatile framework of
concurrent computation.
Of course, the current status of concurrent logic programming is not without problems. First of all, the term "concurrent logic programming" itself and the fact that it was born from logic programming were, ironically enough, a source of confusion. Many people at first considered GHC an unduly restrictive logic programming language rather than a flexible concurrent language. I tried to avoid unfruitful controversy on whether concurrent logic programming languages are "logic" programming languages. Also, largely due to the confusion, the interaction of the concurrent logic programming community with the community of concurrency
theory and the community of object-oriented concurrent
programming has been surprisingly small. We should
have paid more attention to concurrency theory much
earlier, and should have talked much more with people
working on object-oriented concurrent programming.
The only basic difference between object-oriented concurrent programming and concurrent logic programming seems to be whether sequences of messages are
hidden or exposed as first-class objects.
ICOT as a Research Environment
ICOT provided an excellent research environment. I could continue to work on language issues for years, discussing them with many people inside and outside Japan, which would have been much more difficult elsewhere. Electronic mail communication to and from overseas was not available until 1985. Of the three stages of the project, the initial stage (fiscal 1982 to 1984) was rather different in the sense that it gave those of us who worked
on KL1 much freedom as well as much responsibility for
the R&D of subsequent stages.
I have never felt that ICOT's adherence to logic programming acted as an obstacle to kernel language design; the details were largely up to us researchers, and it
was really interesting to try to build a system of new concepts based on logic programming.
The project's commitment to logic programming was
liable to be considered extremely political and may have
come as an obstacle to some of the researchers who had
their own fields of interest outside (concurrent) logic
programming. However, in retrospect, ICOT's basic research activities, particularly those not directly related to
concurrency and parallelism, could focus more on connecting logic programming and their primary fields of
interest.
Parallelism, too, was not a primary concern for most
people working on applications. Parallel programming
in KL1 was probably not an easy and pleasant task for
them. However, it was clear that somebody had to do
that pioneering work and contribute to the accumulation
of good programming methodologies.
Position and Beliefs
Fortunately, I have been able to maintain a consistent position regarding my research subject, at least since 1984 when I became acquainted with the project. I was consistently interested in clarifying the relationship and interplay among different concepts rather than amalgamating them. This position is, for instance, reflected in the research on search problems in concurrent logic languages. Although the Andorra principle was proposed later as a nice amalgamation of logic programming and concurrent logic programming, our research on search problems, including the MGTP project, focused on the compilation approach throughout. An interesting finding obtained independently in my work on exhaustive search and in the MGTP work is that a class of logic programs, which specialists call range-restricted, is
fundamentally easier to handle than others. Thus the
compilation approach led us to recognize the importance of this concept.
The separation of a concurrent language, GHC, and a parallel language, KL1, is another example. The panel discussion at the FGCS'88 Conference included a heated debate on whether to expose parallelism to programmers or to hide it. My position was to expose parallelism,
debate on whether to expose parallelism to programmers or to hide it. My position was to expose parallelism,
but in a tractable form. This was exactly what KL1 tried
to address by separating concurrency and parallelism. It
is often claimed that GHC is a language suitable for systems programming, but the fact is that GHC itself lacks
some important features for systems programming,
which are included in KL1.
In language design, there has been a long controversy
within the concurrent logic language community on
whether reduction (of a goal) and unification (for the
publication of information) should be done atomically or
separately. Here again, we continued to use the separation approach.
One reason I stuck to the separation of concepts is
that the gap between parallel hardware and applications
software seemed to be widening and was unlikely to be
bridged by a single universal paradigm. Logic programming was a good initial approximation to the paradigm,
but it turned out that we had to devise a system of good
concepts and notations. The system of concepts and notations was supposed to form a new methodology, which
the FGCS project was going to establish as its principal
objective. GHC and KL1 were to form the substratum of
the system. (This is why the performance of KL1 imple-
mentations is very important.) Later on, languages such
as GDCC [11] and Quixote provided higher-level concepts and notations. First-order logic itself can be regarded as one such higher-level construct, in the sense that MGTP compiles it to KL1 programs. These languages will play the role of Mandala and KL2 that we once
planned.
I was always interested in the interaction between theory and practice and tried to put myself in between. Now
I am quite confident that a language designer should try
to pay attention to various aspects of a language, including its definition, implementation, programming, and foundations, simultaneously. Language design requires the reconciliation of constraints from all these aspects. (In this sense, our approach to the project was basically, but not strictly,
middle-out.)
Mode analysis and the message-oriented implementation technique are recent examples of this simultaneity working well. It would have been very difficult to come up with these ideas if we had pursued theory and practice separately. In the combination of high-level languages and recent computer architectures, sophisticated program analysis plays an important role. It is highly desirable that such analysis can be done systematically rather than in an ad hoc manner, and further that the theory behind the systematic approach is expressed naturally in the form of a language construct. By stipulating the theory as a language construct, we make it a concept sharable among a wider range of people.
Language designers need feedback from specialists in related fields. In semantics research, for instance, one position would be to give precise meanings to given programming languages. However, it would be much more productive if the mathematical formulation gave constructive feedback to language design.
The Future
What will the future of GHC/KL1 and of concurrent logic programming in general be? Let us look back at the past to predict the future.
The history of the kernel language design was a history of simplification. We moved from Concurrent Prolog to GHC, and from GHC to Flat GHC. Most people seemed to believe at first that we should implement distributed unification for Flat GHC. My present inclination,
however, is not to do so. The simplification needed a lot of discussion and experience, but the performance requirement has always been a strong thrust in this direction. It is not yet clear whether we can completely move to a kernel language based on Moded Flat GHC in the near future, but if the move succeeds, I expect the performance to be approximately half that of comparable programs written in procedural languages. The challenge is to achieve this performance in a non-ad hoc manner:
For applications in which efficiency is the primary issue but
little flexibility is needed, we could design a restricted version of
GHC which allows only a subclass of GHC and/or introduces
declarations which help optimization. Such a variant should
have the properties that additional constructs such as declarations are used only for efficiency purposes and that a program
in that variant is readable as a GHC program once the additional constructs are removed from the source text. [20, Section
5.3]
We hope the simplicity of GHC will make it suitable for a
parallel computation model as well as a programming language.
The flexibility of GHC makes its efficient implementation difficult compared with CSP-like languages. However, a flexible
language could be appropriately restricted in order to make
simple programs run efficiently. On the other hand, it would be
very difficult to extend a fast but inflexible language naturally.
[20, Section 9]
Review of the design of KL1 and its implementation is now very important. The designs of the different PIM models may not be optimal as KL1 machines, because they had to be designed when we did not have enough knowledge about KL1 implementation and KL1 programming. Also, as experimental machines, they included
various ideas we wanted to try. Now the machines have
been built and almost a million lines of KL1 programs
have been written. Based on the experience, we should
try to simplify the language and the implementation
with minimum loss of compatibility and expressive
power.
Another problem facing KL1 is the huge economic and social inertia in the choice of programming languages. Fortunately, the fact that KL1 and other concurrent logic languages address the field of parallel computing works in their favor. For example, PCN [1], a descendant of concurrent logic languages, addresses an important issue: the parallelization of proce-
dural programs. I am pleased to see that a new application area of concurrent logic programming is being developed this way, but at the same time, I feel we should study whether parallel applications can be made to run very efficiently without interfacing to procedural code.
Formal techniques, such as verification, are an area in which the progress of our research has been very slow so far. However, we believe that GHC/KL1 is quite amenable to formal techniques compared with other concurrent languages. The accumulation of technologies and experience should be done steadily, as the history of
Petri nets has shown.
In his invited lecture on the final day of the FGCS'92 conference, C. A. R. Hoare concluded his talk, "Programs Are Predicates" [11], with comments on the similarities between his and our approaches to programming languages and formalisms, listing a number of keywords: simplicity, efficiency, abstraction, predicates, algebra, concurrency, and nondeterminism.
Acknowledgments
The author is indebted to Akikazu Takeuchi for his
comments on the early design activities of KL1. •
References
1. Chandy, M. and Taylor, S. An Introduction to Parallel Programming. Jones and Bartlett Inc., Boston, 1992.
2. Chikayama, T. and Kimura, T. Multiple reference management in Flat GHC. In Proceedings of the Fourth International
Conference on Logic Programming, MIT Press, 1987, pp. 276-293.
3. Clark, K.L. and Gregory, S. PARLOG: A parallel logic programming language. Res. Rep. DOC 83/5, Dept. of Computing, Imperial College of Science and Technology, London, 1983.
4. Clark, K.L. and Gregory, S. PARLOG: Parallel programming in logic. Res. Rep. DOC 84/4, Dept. of Computing,
Imperial College of Science and Technology, London,
1984. Also in ACM Trans. Prog. Lang. Syst. 8, 1 (1986), 1-49.
5. Clark, K. and Tärnlund, S.-Å., Eds. Logic Programming. Academic Press, London, 1982, 153-172.
6. Fujita, H. and Hasegawa, R. A model generation theorem prover in KL1 using a ramified-stack algorithm. In Proceedings of the Eighth International Conference on Logic Programming, MIT Press, 1991, pp. 535-548.
7. Furukawa, K., Kunifuji, S., Takeuchi, A. and Ueda, K. The conceptual specification of the kernel language version 1. ICOT Tech. Rep. TR-054, ICOT, Tokyo, 1984.
8. Hirakawa, H., Chikayama, T. and Furukawa, K. Eager and lazy enumerations in Concurrent Prolog. In Proceedings of the Second International Logic Programming Conference (Uppsala Univ., Sweden, 1984), pp. 89-100.
9. Hirata, M. Letter to the editor. SIGPLAN Not. 21, 5 (1986), 16-17.
10. Hoare, C.A.R. Communicating sequential processes. Commun. ACM 21, 8 (1978), 666-677.
11. ICOT, Ed. Proceedings of the Fifth Generation Computer Systems (Ohm-sha, Tokyo, 1992).
12. Maher, M.J. Logic semantics for a class of committed-choice programs. In Proceedings of the Fourth International Conference on Logic Programming, MIT Press, Cambridge, Mass., 1987, pp. 858-876.
13. Nakashima, H. Knowledge representation in Prolog/KR. In
Proceedings of the 1984 Symposium on Logic Programming,
IEEE Computer Society, 1984, pp. 126-130.
14. Saraswat, V.A. and Rinard, M. Concurrent constraint programming (Extended Abstract). In Conference Record of the
Seventeenth Annual ACM Symposium on Principles of Programming Languages, ACM, New York, N.Y., 1990, pp. 232-245.
15. Sato, M. and Sakurai, T. Qute: A functional language based
on unification. In Proceedings of the International Conference
on Fifth Generation Computer Systems 1984, ICOT (Tokyo,
1984), pp. 157-165.
16. Shapiro, E. and Takeuchi, A. Object oriented programming in Concurrent Prolog. New Generation Computing 1, 1
(1983), 25-48.
17. Shapiro, E.Y. Concurrent Prolog: A progress report. Computer 19, 8 (1986), 44-58.
18. Turner, D.A. The semantic elegance of applicative languages. In Proceedings of the 1981 Conference on Functional
Programming Languages and Computer Architecture, ACM,
New York, N.Y., 1981, pp. 85-92.
19. Ueda, K. Concurrent Prolog re-examined. ICOT Tech.
Rep. TR-102, ICOT, Tokyo, 1985.
20. Ueda, K. Guarded Horn Clauses. ICOT Tech. Rep. TR-103, ICOT, Tokyo, 1985. Also in Logic Programming '85,
Wada, E., Ed., Lecture Notes in Computer Science 221,
Springer-Verlag, Berlin Heidelberg, 1986, 168-179.
21. Ueda, K. Designing a concurrent programming language.
In Proceedings of the InfoJapan'90, Information Processing Society of Japan, Tokyo, 1990, pp. 87-94.
22. Ueda, K. Parallelism in logic programming. In Inf. Process.
89, G.X. Ritter, Ed., North-Holland, 1989, pp. 957-964.
23. Ueda, K. and Chikayama, T. Design of the kernel language
for the Parallel Inference Machine. Comput. J. 33, 6 (Dec.
1990), 494-500.
24. Ueda, K. and Furukawa, K. Transformation rules for GHC
programs. In Proceedings of the International Conference on
Fifth Generation Computer Systems 1988, ICOT (Tokyo, 1988),
pp. 582-591.
25. Ueda, K. and Morita, M. A new implementation technique
for Flat GHC. In Proceedings of the Seventh International Conference on Logic Programming, MIT Press, 1990, pp. 3-17. A
revised, extended version submitted to New Generation Computing.
CR Categories and Subject Descriptors: C.1.2 [Processor
Architectures]: Multiple Data Stream Architectures (Multiprocessors); D.1.3 [Programming Techniques]: Concurrent
Programming; D.1.6 [Software]: Logic Programming; D.3.2
[Programming Languages]: Language Classifications--Concurrent, distributed, and parallel languages, Data-flow languages, Nondeterministic languages, Nonprocedural languages; K.2 [Computing Milieux]: History of Computing
General Terms: Design, Experimentation
Additional Key Words and Phrases: Concurrent logic programming, Fifth Generation Computer Systems project,
Guarded Horn Clauses, Prolog
About the Author:
KAZUNORI UEDA is assistant manager of the Computer System Research Laboratory at NEC C&C Systems Research Laboratories. Current research interests include design and implementation of programming languages, logic programming,
concurrency and parallelism, and knowledge information processing. Author's Present Address: NEC C&C Systems Research Laboratories, Computer Systems Research Laboratory,
1-1 Miyazaki 4-chome, Miyamae-ku, Kawasaki 216, Japan;
email:
[email protected]
Ken Kahn
XEROX PARC
A Braid of Research Threads from ICOT, Xerox PARC, and the Weizmann Institute
When ICOT was formed in 1982, I was a faculty member of the University of Uppsala, Sweden, doing research at the Uppsala Programming Methodology and Artificial Intelligence Laboratory (UPMAIL). The creation of ICOT caused great excitement in the laboratory because we shared with the Fifth Generation project a basic research stance: that logic programming could offer much to AI and, in general, to symbolic computing. Koichi Furukawa (at that time an ICOT lab manager, now deputy director of ICOT) and some of his colleagues visited UPMAIL that year to present the plan for the Fifth Generation project and to explore possible collaborations.
About a year later I was invited to be a guest researcher at ICOT for a month. My research at that time was on LM-Prolog, an extended Prolog well integrated with Lisp and implemented on MIT-style Lisp Machines (LMI and Symbolics) [1]. One of the driving motivations behind this work was that there were lots of good things in Prolog, but Prolog could be much better if many of the ideas from the Lisp and object-oriented programming communities could be imported into the framework. I was also working on a partial evaluator for Lisp written in LM-Prolog [11]. This program was capable of automatically specializing Lisp programs. One goal of this effort was to generate specializations of the LM-Prolog interpreter, each of which could interpret only a single LM-Prolog program. The performance of these specialized interpreters was comparable to that of the compiled versions of those programs.
Researchers at ICOT were working on similar things. There was good work going on in partial evaluation of Prolog programs. There was work on ESP, a Prolog extended with objects and macros [2]. Efforts had begun on a system called Mandala, which combined ideas of metainterpretation and object-oriented programming in a logic programming framework [5].
While my demonstrations and seminars about LM-Prolog and partial evaluation went well and my discussions with ICOT researchers were productive, the most important event during my visit was my introduction to Concurrent Prolog. Ehud Shapiro, from the Weizmann Institute of Science in Israel, was visiting then, working closely with Akikazu Takeuchi of ICOT. Concurrent Prolog was conceived as an extension¹ of Prolog to introduce programmer-controlled concurrency [20]. It was based on the concept of a read-only variable, which I had found very confusing when I had read about it before my ICOT visit. Part of the problem was simple nomenclature: a variable does not become read-only; what happens is that there are occurrences of a variable which have only a read capability, instead of the usual situation where all occurrences have read/write privileges.
Shapiro and Takeuchi [21] had written a paper about how Concurrent Prolog could be used as an actor or concurrent object language. I was very interested in this, since I had worked on various actor languages as a doctoral student at MIT. Again, my difficulty in grasping read-only variables interfered with a good grasp of the central ideas in this article. I understood it only after Shapiro carefully explained the ideas to me. After understanding the paper, I felt that some very powerful ideas about concurrent objects or actors were hidden under a very verbose and clumsy way of expressing them in Concurrent Prolog. The idea of incomplete messages, in which the receiver (or receivers) fills in missing portions of messages, was particularly attractive. Typically, there are processes suspended, waiting for those parts to be filled in. It seemed clear to me that this technique was a good alternative to the continuation passing of actors and Scheme.
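A small sketch of the idea, written here in flat guarded-clause (FGHC/KL1-style) notation rather than Concurrent Prolog's read-only notation (the predicate names and the := and >= builtins are assumptions for illustration): the client sends read(X) with X left unbound, the server fills in the hole, and any goal that needs the value suspends until the reply arrives.

    % Server: a memory cell holding V, consuming a stream of requests.
    cell(V, [read(X)|Rs])  :- true | X = V, cell(V, Rs).   % fill in the hole
    cell(_, [write(W)|Rs]) :- true | cell(W, Rs).
    cell(_, [])            :- true | true.

    % Client: sends an incomplete message and passes the hole onward;
    % double/2 suspends in its guard until the server has bound X
    % (the sketch assumes the cell holds a non-negative number).
    client(ToCell, Out) :- true | ToCell = [read(X)], double(X, Out).
    double(X, Out)      :- X >= 0 | Out := X + X.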
At this time the Fifth Generation project was designing parallel hardware and its accompanying kernel language. A distributed-memory machine seemed to make the most sense since it could scale well, while shared-memory machines seemed uninteresting because they were limited to a small number of processing elements.
Shapiro was working on a parallel machine architecture called the Bagel [19]. I collaborated with him on a notation for mapping processes to processors based on the ideas of the Logo turtle. A process had a heading and could spawn new processes forward (or backward) along its heading and could change its heading.
At this time it seemed that single-language machines were a good idea. There was lots of excitement about Lisp machines, which benefited from a tight integration of components and powerful features. During my visit to ICOT it seemed clear to most people that building a Prolog or Concurrent Prolog machine was the way to go. And unlike the Lisp Machines, these new machines would be designed with parallelism in mind.²
As I recall, there was some debate at that time about whether the kernel language of parallel inference ma-
¹There never was an implementation of Concurrent Prolog that retained Prolog as a sublanguage. Eventually, Concurrent Prolog was redefined as a different language which provided concurrency and sacrificed the ability of Prolog programs to do implicit search.
²With the advantage of hindsight, this was a mistake because it cut off FGCS research from the tools and platforms of other researchers. This approach was too closed, and only now is ICOT doing serious work on porting their software to standard platforms.
generating animations of program executions [13]. One
discovery was that object-oriented programs did not
come out so clumsy and verbose when they were drawn
instead of typed.
I collaborated with Shapiro on a preprocessor for
logic programs designed to support object-oriented programming. My thinking had changed from believing
that concurrent logic programs were too low level, to
believing they just needed some simple syntactic support. I came to realize that the Vulcan language, by trying to be an actor language, had sacrificed some very
important expressive power of the underlying language.
In 1989 I presented an invited paper at the European
Conference on Object-Oriented Programming on this
topic [10]. The essence of the paper is that concurrent
logic programming, by virtue of its first-class communication channels, is an important generalization of actors
or concurrent objects. Multiple input channels are very
important, as is the ability to communicate input channels. During this period I interacted with Kaoru Yoshida
of I C O T during her development of A'UM, an objectoriented system on top of FGHC which retains the
power of multiple, explicit, input channels [24]. Andrew
Davidson at the Imperial College also made this design
decision early in his work on Pool and Polka.
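As a rough illustration of that point (my own sketch, not an example from [10] or from A'UM; the ":=" arithmetic is again an assumption), a process in FGHC-style syntax can serve several explicit input channels at once, with no merge process in between:

    % An account "object" with two input channels, one for customers and
    % one for an auditor.  Committed choice selects whichever channel has
    % a message available.
    account([deposit(X) | Cust], Audit, Bal) :- true |
        Bal1 := Bal + X, account(Cust, Audit, Bal1).
    account([balance(B) | Cust], Audit, Bal) :- true |
        B = Bal, account(Cust, Audit, Bal).
    account(Cust, [report(R) | Audit], Bal) :- true |
        R = Bal, account(Cust, Audit, Bal).
    account([], [], _) :- true | true.

Because streams are ordinary terms, a whole channel can itself be sent inside a message, which is what makes channels first-class in this style.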
We hosted an extended visit by Kazunori Ueda and
Masaki Murakami from ICOT. Ueda, the designer of
Flat Guarded Horn Clauses (FGHC), slowly won us over
to the view that his language, while weaker than
Herbrand, was simpler and that there were programming techniques that compensate for its weaknesses.
Essentially, we moved from the view of unification as an
atomic operation to viewing it as an eventual publication
of information. I began to program in the FGHC subset
of the Weizmann Institute implementation of FCP. I
would have seriously considered using an ICOT implementation had one been available for Unix workstations.
(ICOT today is working on porting its work to commercially available multiprocessors and Unix workstations and has made its software freely available.)
AI Limited in the United Kingdom then announced a
commercial concurrent logic programming language
called Strand88. We became a beta test site and received
visits by one of the language designers (Steve Taylor)
and later by officials of the company. We were very
eager to collaborate because the existence of a commercial product gave these languages a certain realness and
respectability within Xerox. 5
Our first reaction was that they had simplified the language too much: they had replaced unification by single
assignment and simple pattern matching. What we once
believed was the essence of concurrent logic programming was gone. As was the case with FGHC, we were
won over to the language by being shown how its deficiencies were easily compensated for by certain programming techniques and that there were significant
implementation and/or semantic advantages that followed. I stopped using the FGHC subset of FCP and
became a Strand programmer. I even offered a well-attended in-house tutorial on Strand at PARC.
5 Xerox also had a long history of business relations with AI Limited on other products.
Saraswat quickly became disenchanted with Strand
because the way it provided assignment kept it from fitting into his concurrent constraint framework, and thereby from being given a good declarative semantics. Strand
assignment is single assignment, so it avoids the problems associated with assignment in a concurrent setting.
But Strand signals an error if an attempt is made to assign the same variable twice. A symptom of these problems is that in Strand X := Y where X and Y are unbound is operationally very different from Y := X. We
wanted := to mean the imposition (or "telling") of equality constraints between its arguments.
Saraswat then discovered a two-year-old paper by
Masahiro Hirata from the University of Tsukuba in
Japan on a language called DOC [9]. The critical idea in
the paper was that if every variable had only a single
writer then no inconsistencies or failures could occur.
Saraswat, Levy, and I picked up on this idea and designed a concurrent constraint language called Janus
[18]. We introduced a syntax to distinguish between an
asker and a teller of a variable. We designed syntactic restrictions (checkable at compile time) which guarantee
that a variable cannot receive more than one value. We
discovered that these restrictions also enable some very
significant compiler optimizations, including compile-time garbage collection.
Because we lacked the resources to build a real implementation of Janus, we started collaborative efforts with
various external research groups (the University of Arizona, McGill University, Saskatchewan University).
Jacob Levy left and started a group at the Technion University in Israel. 6
Today
Today work continues on Janus implementations. David
Gudeman and others at the University of Arizona have
produced an excellent high-performance serial implementation [6]. ICOT research on moded FGHC attempts to achieve the goals of Janus by sophisticated
program analysis rather than syntactic restrictions [22].
An interesting aspect of this work is how it manages,
when appropriate, to borrow implementation techniques from object-oriented programming. Also, work
on directed logic variables at the Weizmann Institute was
inspired by Janus [15].
Saraswat has had a major impact on the research community with his framework of concurrent constraint
programming. He is active in a large joint Esprit/NSF
project called ACCLAIM based on his work. At ICOT
there is a system called GDCC which directly builds on
his work. His work has also had significant impact on the
theoretical computer science community interested in
concurrency. Saraswat and a student (Clifford Tse) are
working on the design and parallel implementation of a
programming language called Linear Janus, which is
based on concurrent constraint programming and addresses the same goals as Janus but is based on linear
logic.
6 He now works at Sun Microsystems and is implementing Janus in his spare time.
The work of the Vulcan project on exploring the feasibility and advantages of using concurrent logic programming as the foundation for building distributed
applications has strongly influenced Shapiro's group at
the Weizmann Institute. In recent years they have moved
away from a focus on parallel computing to a focus on
distributed-computing foundations and applications.
After leaving Xerox, Tribble and Miller focused primarily on large-scale hypertext systems. However, they
have been designing a programming language for distributed computing called Joule, which combines many
of the insights from concurrent logic-programming language design, higher-order programming, actors, and
capability-based operating systems (in particular the
KeyKOS system [7]).
Until very recently, I concentrated my efforts on
building an environment for Pictorial Janus, a visual syntax for Janus. The system accepts program drawings in
PostScript, parses them, and produces animations of
concurrent executions. A postdoc (Markus Fromherz) is
using Pictorial Janus to model the behavior of paper
paths in copiers. I see this work as making the ideas behind concurrent logic programming more accessible.
Programs and their behaviors are much easier to understand when presented in a manner that exploits our very
capable perceptual system.
In September 1992, I left Xerox to start my own small
company to develop ToonTalk, an animated programming language for children based on the concurrent
logic programming research at ICOT, Weizmann, Xerox
PARC, and elsewhere. My belief is that the underlying
concepts of concurrency, communication, synchronization, and object orientation are not inherently difficult to
grasp. What is difficult is learning how to read and write
encodings of behaviors of concurrent agents. My research on Pictorial Janus convinced me that encoding
the desired behaviors as static diagrams was a good step
in the right direction, but not a large enough one.
I believe the next step is to make heavy use of animation, not just to see how a program executes but also to
construct and view source programs. In the process of
doing my dissertation work on the creation of computer
animation from story descriptions, I took several animation courses and made a few films. An important lesson I
learned was how effectively well designed animation can
communicate complex dynamic behaviors. I believe the
designers of programming language syntax and programming environments should be studying Disney cartoons and home entertainment such as Super Mario
Brothers.
When I was a member of the Logo project at MIT, I
recall Seymour Papert describing the Logo language as
an attempt to take the best ideas in programming language research and make them accessible to children. In
the late 1960s Lisp was the best source of these ideas.
More than 20 years later, I see myself as making another
attempt, taking what I see as the best in programming
language research--concurrent logic programming,
constraint programming, and actors--and, with the help
of interactive animation, making these powerful ideas
usable by children. If successful, concurrent logic programming will soon be child's play.
A Personal View of the Project
So what is my view of the Fifth Generation project after
10 years of interactions? Personally, I am very glad that
it happened. There were many fruitful direct interactions and I am sure several times as many indirect positive influences. Without the Fifth Generation project
there might not have been a Vulcan project, or good
collaborations with the Weizmann Institute, or the
Strand and Janus languages. More globally, the whole
computer science research community has benefited a
good deal from the project. As Hideaki Kumano, director general, Machinery and Information Industries Bureau, Ministry of International Trade and Industry
(MITI) said during his keynote speech at the 1992 FGCS
conference:
Around the world, a number of projects received
their initial impetus from our project: these include
the Strategic Computing Initiative in the U.S., the
EC's ESPRIT project, and the Alvey Project in the
U.K. These projects were initially launched to compete with the Fifth Generation Computer Systems
Project. Now, however, I strongly believe that since
our ideal of international contributions has come to be
understood around the globe, together with the realization that technology cannot and should not be divided by borders, each project is providing the stimulus for the others, and all are making major
contributions to the advancement of information processing technologies.
I think the benefits to the Japanese computer science
community were very large. Comparing the visits I made to
Japanese computer science laboratories in 1979, 1983,
1988, and 1992, I saw tremendous progress.
When the project started there were few world-class researchers in Japan on programming language design
and implementation, on AI, on parallel processing, and
so forth. Today the gap has completely disappeared; the
quality and breadth of research I saw in 1992 is equal to
that of America or Europe. I think the Fifth Generation
project deserves much credit for this. By taking on very
ambitious and exciting goals, they got much further than
if they had taken on more realistic goals. I do not believe
the Fifth Generation project is a failure because they
failed to meet many of their ambitious goals; I think it is
a great success because it helped move computer science
research in Japan to world-class status and nudged computer science research throughout the world in a good
exciting direction. •
References
1. Carlsson, M. and Kahn, K. LM-Prolog user manual. Tech.
Rep. 24, UPMAIL, Uppsala University, 1983.
2. Chikayama, T. Unique features of ESP. In Proceedings of the
International Conference on FGCS, 1984.
3. Clark, K.L. and Gregory, S. A relational language for parallel programming. Tech. Rep. DOC 81/16, Imperial College, London, 1981.
4. Foster, I. and Taylor, S. Flat Parlog: A basis for comparison. Int. J. Parall. Programm. 16, 2 (1988).
5. Furukawa, K., Takeuchi, A., Kunifuji, S., Yasukawa, H.,
Ohki, M. and Ueda, K. Mandala: A logic based knowledge
programming system. In Proceedings of the International Conference on Fifth Generation Computer Systems. 1984.
6. Gudeman, D., De Bosschere, K. and Debray, S.K., JC: An
efficient and portable sequential implementation of Janus.
In the 1992 Joint International Conference and Symposium on
Logic Programming (Nov. 1992).
7. Hardy, N. KeyKOS architecture. Oper. Syst. Rev. (Sept. 1985).
8. Haridi, S. and Janson, S. Kernel Andorra Prolog and its
computation model. In Proceedings of the Seventh International Conference on Logic Programming (June 1990).
9. Hirata, M. Programming language DOC and its self-description, or, x = x considered harmful. In the Third
Conference Proceedings of the Japan Society for Software Science
and Technology (1986), pp. 69-72.
10. Kahn, K. Objects--A fresh look. In Proceedings of the Third
European Conference on Object-Oriented Programming. Cambridge University Press, Cambridge, Mass., 1989, pp. 207-224.
11. Kahn, K. The compilation of Prolog programs without the
use of a Prolog compiler. In Proceedings of the Fifth Generation Computer Systems Conference. 1984.
12. Kahn, K. and Kornfeld, W. Money as a concurrent logic
program. In Proceedings of the North American Conference on
Logic Programming. The MIT Press, Cambridge, Mass.,
1989.
13. Kahn, K. and Saraswat, V. Complete visualizations of concurrent programs and their executions. In Proceedings of the
IEEE Visual Language Workshop. IEEE, New York, (Oct.
1990).
14. Kahn, K., Tribble, E., Miller, M. and Bobrow, D. Vulcan:
Logical concurrent objects. In Research Directions in Object-Oriented Programming. The MIT Press, Cambridge, Mass.,
1987. Also in Concurrent Prolog, MIT Press, Ehud Shapiro,
Ed.
15. Levy, Y. Concurrent logic programming languages: Implementation and comparison. Ph.D. dissertation, Weizmann
Institute of Science, Israel, 1989.
16. Miller, M.S. and Drexler, K.E. Markets and computation:
Agoric open systems. In The Ecology of Computation. Elsevier
Science Publishers/North-Holland, Amsterdam, 1988.
17. Saraswat, V. Concurrent constraint programming languages. Ph.D. dissertation, Carnegie-Mellon University,
Pittsburgh, Pa., 1989.
18. Saraswat, V.A., Kahn, K. and Levy, J. Janus--A step towards distributed constraint programming. In Proceedings
of the North American Logic Programming Conference. MIT
Press, Cambridge, Mass., 1990.
19. Shapiro, E.Y. Systolic programming: A paradigm of parallel processing. In Proceedings of the Fifth Generation Computer
Systems Conference. (1984).
20. Shapiro, E. A subset of Concurrent Prolog and its interpreter. Tech. Rep. CS83-06, Weizmann Institute, Israel,
1983.
21. Shapiro, E. and Takeuchi, A. Object oriented programming
in Concurrent Prolog. New Gener. Comput. 1, (1983), 25-48.
22. Ueda, K. A new implementation technique for Flat GHC.
In Proceedings of the Seventh International Conference on Logic
Programming (June 1990).
23. Ueda, K. Guarded Horn Clauses. Tech. Rep. TR-103,
ICOT, 1985.
24. Yoshida, K. and Chikayama, T. A'um--A stream-based
concurrent object-oriented language. In Proceedings of the
International Conference on Fifth Generation Computer Systems,
1988, pp. 638-649.
CR Categories and Subject Descriptors: C.1.2 [Processor
Architectures]: Multiple Data Stream Architectures (Multiprocessors); D.1.3 [Programming Techniques]: Concurrent
Programming; D.1.6 [Software]: Logic Programming; D.3.2
[Programming Languages]:
Language Classifications--
Concurrent, distributed, and parallel languages, Data-flow languages,
Nondeterministic languages, Nonprocedural languages; K.2 [Computing Milieux]: History of Computing
General Terms: Design, Experimentation
Additional Key Words and Phrases: Concurrent logic programming, Fifth Generation Computer Systems project,
Guarded Horn Clauses, Prolog
About the Author:
KEN KAHN is founder and CEO of Animated Programs, Inc.
His interests range from concurrent programming languages to
visual and animated programming to novice/end-user programming tools. Author's Present Address: 44 El Rey Road, Portola
Valley, CA 94028,
[email protected]
Takashi Chikayama
ICOT RESEARCH CENTER
I joined the Institute for New
Generation Computer Technology (ICOT) in June 1982,
almost immediately after its
creation in April of the same
year. All the researchers then
gathered at ICOT were recruited from computer manufacturers and governmental
research institutes (I was
among these, recruited from Fujitsu). Almost all
these researchers, with very few exceptions, of which I was
one, left ICOT after three to five years, either returning to
their original places of employment or starting new careers,
mostly in universities. Having been at ICOT for the entire
project period of 10 years plus one additional year, I have
gradually been making more of the critical decisions about
research directions.
I started my research career at ICOT designing a sequential logic programming language à la Prolog and its
implementation, and then designing an operating system for logic programming workstations. As the main
interest of the project shifted to concurrent logic programming, my research topics naturally changed to the
design and implementation of concurrent languages and
an operating system for parallel systems. The project has
included various other research topics with which I have
not been involved and on which I cannot comment fully here.
Before Things Really Started
The First Encounter
When the preliminary investigation of the FGCS project
plan began in 1979, I was a graduate student at the University of Tokyo. Tohru Motooka, a professor at the university, was playing an important role in forming the
project plan. I was invited to participate in one of many
small groups discussing various aspects of the project
plan. That was my first chance to hear about this seemingly absurd idea of "fifth-generation" computer systems.
The project plan at that time was just too vague to
interest me. The idea of building novel technologies for
future computer systems seemed adequate, but it was
not at all clear what such technologies should be. Our
group was supposed to discuss how an appropriate software development environment for such a system
should be designed, but the discussion was not much
more than writing a science fiction story. Both the needs
and seeds of such a system were beyond our speculation,
if not our imagination.
A few years later, the project plan became more concrete, committed to parallel processing and logic programming. My main research topic at the university was
the design and implementation of a Lisp dialect. Hideyuki
Nakashima, one of my colleagues there, was an enthusiastic advocate of logic programming, and was strongly
influenced by Koichi Furukawa, who was one of the key
designers of the FGCS project plan. Nakashima was
implementing his Prolog dialect on a Lisp system I had
just implemented, and I assisted in this process. Although the Prolog language seemed interesting, I could
not imagine how such a language could be implemented
with reasonable efficiency for practical use.
When I finished my doctoral dissertation in March
1982 and was looking for a job opportunity, Motooka
kindly recommended that I work at ICOT. Without any
particular expectations about research topics, hoping to do
something interesting without too much restriction, I
accepted his proposal.
Joining the Project
The FGCS project was organized so that one central research institute, ICOT, could decide almost all aspects of the project, except for the size of its budget. The
Japanese government (the Ministry of International
Trade and Industry [MITI], to be more specific)
funded the project, but MITI officers never forced
ICOT to change its research direction in order to promote Japanese industry more directly. This has been
true throughout the 11 years of the project period. Although the grand plan of the project was already there,
it was still vague enough to leave considerable freedom to the
ICOT researchers.
One of the consequences of this situation was that
when the research center was founded with some 30 researchers in June 1982, nobody had concrete research
plans. The core members of the project, including
Kazuhiro Fuchi, Toshio Yokoi, Koichi Furukawa and
Shunichi Uchida, who had participated in the project's
grand plan, held meetings almost daily to develop a
more detailed plan. Rank-and-file researchers such as I had
no task for about a month other than to read through a heap
of research papers on many related areas. Voluntary
groups were formed to study those papers. Also, we
tried out Prolog programming with implementations on
PDP-11 and Apple-II systems, which were the only systems available to us at that time.
My greatest surprise in the course of this study was
that the researchers gathered there had, with only a small
number of exceptions, almost no experience of symbolic
processing. Only a few had experienced the design and
implementation of any language system or operating
system either. It was not that ICOT's selection of researchers was inappropriate; there were simply very few people in
Japan with experience in these areas. The level of
computer software research in Japan was far behind the
United States and Western Europe at that time, especially in the basic software area.
In early July, a more concrete research plan was finished and several research groups were formed to tackle
specific topics. I joined the group to design the first version of the "kernel language."
The idea of the "kernel language" has been characteristic of the project. The research and development were
to be started with the design of a programming language, followed by both hardware and software research
toward its efficient implementation and effective utilization. There have been two versions of the kernel language, KL0 and KL1, and this process was repeated in the
project. The design I started in 1982 was that of KL0,
which was a sequential logic programming language.
Sequential Inference Systems
One of the first subprojects planned was to build so-called "personal sequential inference machines." The
development effort was an attempt to provide a comfortable software research environment in logic programming languages as soon as possible.
It actually took longer than expected, as is always the
case; the first system was ready (with a barely useful software development environment for application users) at
the end of 1984, two-and-a-half years after the project
began. The development environment gradually matured to a reasonable level as its operating system went
through several revisions. Two major revisions were subsequently made to the hardware, and the execution
speed was improved by more than 30 times. The system,
which was used as the principal tool for software
research until the first experimental parallel inference
system was ready in 1988, is still heavily used as a personal workstation that efficiently simulates the parallel
system.
Sequential Inference Machine: PSI
Without any doubt, the decision to develop such a "machine" had the same motivation as the Lisp machines
developed at MIT and Xerox PARC. A DEC-2060 system was introduced in the fall of 1982, allowing us to use
Edinburgh Prolog [2]. Its compiler was by far the most
efficient system available at the time. However, when we
started to solve larger problems, we soon found that the
amount of computation needed exceeded the capacity of
time-shared execution. Personal workstations specially
devised for a specific language and with plenty of memory seemed to be the solution. Indeed they were, I think,
for logic programming languages in 1982 when more
sophisticated compilation techniques were not available.
Two models of personal sequential inference machines, called "PSI," were developed in parallel [23].
They had the same microprogram-level specification
designed at ICOT, but with slightly different architectures. Two computer companies, Mitsubishi and Oki,
manufactured different models. Such competition occurred several times during the project on different
R&D topics. T h e word "competition" might not be quite
accurate here. Although the top management of the
companies might have considered them as competition,
the researchers who actually participated in the projects
gradually recognized that they were meant to be in collaboration. They exchanged various ideas freely in frequent meetings at ICOT.
Both models of sequential inference machines had
microcoded interpreters for graph-encoded programs.
An alternative research direction which put more effort
on static analysis and optimized compilation was not considered seriously. Running such a research project in
parallel with the development of the hardware systems
might have yielded less costly solutions. However, given
a short period of time and few human resources with
compiler-writing skills, we had to commit ourselves to
pursuing a single method.
The First Kernel Language: KL0
My first serious job in the project was designing the first
version of the kernel language, "KL0." This language
was, in short, an extended Prolog. Some nonstandard
control primitives were introduced, such as mechanisms
for delaying execution until variable binding or for
cleaning up side effects on backtracking, but they were
only minor additions that did not affect the basic implementation scheme of Prolog.
We decided to write the software, including the operating system, in this language. This was partly to clear up the common misunderstanding that logic programming languages could not be practical for
real-world software development. We thought, on the
contrary, that using a symbolic processing language was
the easiest way to build a decent software environment
for the new machine.
For writing an entire operating system, extensions to
control low-level hardware, such as the I/O bus or page-map hardware, were also made. The memory space was
divided into areas private to each process, for Prolog-like
stacks, and areas common to all the processes, where side
effects were allowed. The side-effect mechanisms were much
richer than in Prolog. Interprocess communication was effected through such side effects.
The resultant language had high descriptive power,
but was somewhat of a medley of features of various languages. I did not mind it because, although the language
had all the features of Prolog, it was supposed to be a
low-level machine language, rather than a language for
application software developers.
An Object-Oriented Language: ESP
A group headed by Toshio Yokoi was designing the operating system for the sequential inference machines.
Their rough design of the system was based on an object-oriented abstraction of system features. After finishing
the design of KL0, I was asked to join a team to
design a higher-level language with object-oriented features.
Through several meetings discussing the language
features, a draft specification of the language named
"ESP" was compiled [3]. I wrote an emulator of its subset
on Edinburgh Prolog in the summer of 1983, in about
one week when the DEC-2060 was lightly loaded because
most of the other users were away on summer vacation.
This emulator was used extensively later in early development phases of the operating system.
The language model was simple. Each object corresponds to a separate axiom database of Prolog. The
same query might be answered differently with different
axiom sets, just as different objects respond differently to the
same method in other object-oriented languages. This
allowed programming in the small to be done in the logic
programming paradigm and programming in the large in
the object-oriented paradigm.
SIMPOS
More detailed design of the operating system followed.
Actual coding and debugging of the system began in the
fall of 1983 using the implementation on Edinburgh
Prolog, by a team of some 30 programmers gathered
from several software companies. The PSI hardware
arrived at ICOT on Christmas Day, and the development
of the microcoded interpreter, which had also been done
on emulators, continued on the physical hardware.
In July 1984, the operating system, named SIMPOS,
first began working on the machine. In the course of the
development, I had gradually become the de facto leader
of the development team.
Even in its first version, SIMPOS had the full repertoire
of a personal operating system: multitasking, files, windows (which were not so common at that time), networking, and so forth. The first version, however, was awfully
slow. It took several seconds to display a character in a
window after a key was pressed.
Following our analysis of this sluggishness, a thorough revision of the system was carried out. The microcoded language implementation and the compiler, especially the
object-oriented features, were considerably improved,
making the same program run about three times faster.
The operating system also went through a complete revision in the kernel, the I/O device handlers, the window
system, and so forth. Algorithms and data structures
were changed everywhere. Task allotment to processes
was also changed. This considerable amount of change
made the system run almost two orders of magnitude
faster. The revision took less than three months and was
ready to exhibit at the FGCS'84 conference at the beginning of November [4].
The system before the revision already consisted of several
tens of thousands of lines of ESP. The high-level features of ESP helped considerably in carrying out such a
major revision in such a short period of time. The object-oriented features, especially their flexible modularization, allowed major changes to be made without excessive attention
to detail. As in other symbolic processing
languages, explicit manipulation of memory addresses is
not allowed in ESP (except in the very kernel of the
system), and ranges of indexes into arrays are always
checked. This made bugs in rewritten programs much
easier to find.
A very important byproduct of the development of
SIMPOS was the training of logic programming
programmers. For most of the programmers participating in the development of SIMPOS, it was their first experience writing in a logic or object-oriented programming language. Many, probably nearly half of
them, had not experienced any large-scale software development before. For some, ESP was the first language
they had ever programmed in. Those programmers who acquired programming skills during this development effort played
important roles in the development of a variety of software
later in the project.
Software Systems on PSI
The original PSI machine ran at about the same speed as
Edinburgh Prolog on a DEC 2060. The large main
memory (80MB max.) allowed much larger programs to
run. Because it was a personal machine, users were not bothered
by other time-sharing users. The limitation of computational
resources, one of the largest obstacles in advanced software research, was greatly reduced.
From 1985 on, the PSI machine, and its successors
PSI-II and -III, have been used heavily in software research. The largest piece of software on PSI was its operating
system SIMPOS. It went through many revisions and
added more and more functionality, including debugging and performance-tuning facilities, in response to ever-increasing user demands. It now has more than 300,000 lines
of ESP code.
Not only the operating system but also other basic
software systems were built up on PSI and SIMPOS. A
database system, Kappa, based on a nested relational
model, was probably the largest such system. Higher-level programming language systems were also built, such as
CIL, a language based on situation semantics, and CAL,
a constraint-based language.
Numerous experimental application systems were also
built on PSI. A natural language processing system,
DUALS, played an important role in demonstrating to
people outside the community what a logic programming system can do. Many expert systems and expert
system shells were developed, based on a variety of reasoning technologies. At its maximum, probably more
than 200 people were conducting their research using
PSI or its successors within the project [14].
PSI-II and -III
Near the end of 1985, we decided to develop a new
model of PSI based on a more sophisticated compilation
scheme proposed by David H.D. Warren [22]. Its experimental implementation on PSI by Hiroshi Nakashima
ran more than twice as fast as the original implementa-
tion. A new machine called PSI-II was designed and became operational near the end of 1986. SIMPOS was
ported to the machine relatively easily. This model later went
through minor revisions for faster clock speeds, and
its final version attained more than 400 KLIPS, about 10
times faster than the original PSI. As the machine clock
was as low as 6.67MHz, this meant that one inference
step needed 16 microprogram steps.
Another major revision was made during 1989 and
1990, which resulted in the third generation of the system, PSI-III. At this time, Unix was already recognized
as the common basis of virtually all research systems.
Thus, the PSI-III system was built as a back-end processor rather than a standalone system. The operating system, however, was ported to the new hardware almost
without modification, replacing the I/O device drivers with a
communication server for the front end. The system
attained 1.4 MLIPS at a clock rate of 15 MHz. One inference needed only 11 steps.
Parallel Inference Systems
From the very beginning of the project, the second version of the kernel language was planned to combine parallel computation and logic programming. Parallel hardware research was going on simultaneously. These two
groups, however, did not interact well in the early years
of the project, resulting in several parallel Prolog machines and a language design that did not fit on them.
Later, the language and hardware design activities became much better orchestrated under the baton of KL1.
Early Parallel Hardware Systems
Some argued that much parallelism could be easily exploited from logic programming languages because both
AND and OR branches can be executed in parallel. With
some experience in cumbersome interprocess synchronization, I was quite skeptical about such an optimistic
and simplistic claim. Yes, a high degree of parallelism
was possibly there, but exploiting that parallelism could
be counterproductive; making everything parallel
means making everything slow, probably spoiling the
benefits of parallelism.
The parallel hardware research began, however, despite the skepticism. As far as pure Prolog is concerned,
the easiest parallelism to exploit was the OR parallelism
because no sharing of data is required between branches
once the whole environment is copied. Some of the systems successfully attained reasonable speedup, although
the physical parallelism was still small.
The next thing to do was to implement a fuller version
of Prolog, since the descriptive power of pure Prolog was
quite limited. The implementation was a difficult task.
To do that efficiently actually required a considerable
amount of effort later in the Aurora OR-parallel Prolog
project [17]. Our language processing technology was
not yet at that level. OR-parallel hardware research
ceased around 1985 and was displaced by committed-choice AND-parallel research.
Pre-GHC Days
The first concurrent logic programming language I
learned was the Relational Language by Keith Clark and
Steve Gregory [8]. When I read the paper in 1982, I
liked it because the language did not try to blindly exploit all the available parallelism, but confined itself to
the dataflow parallelism. The idea seemed quite revolutionary. I thought the language implementation would
be much easier than naive parallelization of Prolog, and
that parallel algorithms could be easily expressed in the language. But should a language for describing algorithms
be called a logic programming language?
The most loudly trumpeted advantage of logic programming was that the programmers have only to describe what problem to solve, not how. At that time, in the
summer of 1982, I was still a beginner in Prolog programming. I did not yet recognize fully that, even in
Prolog, I had to describe algorithms. Anyway, I was too
busy designing the sequential system and soon stopped
thinking about it.
Near the end of 1982, Ehud Shapiro visited ICOT
with his idea of Concurrent Prolog (CP). During his stay,
he refined the idea and even built a subset implementation of it on Edinburgh Prolog [18], which worked very
slowly but allowed us to try out the language. The language design considerably generalized the idea of the
Relational Language by allowing partially defined data
structures to be passed between processes. The object-oriented programming style in CP proposed later by
Shapiro and Akikazu Takeuchi [19] showed that the
enhanced descriptive power would actually be useful in
practical programming.
The language feature that attracted people at ICOT
most may have been its syntactic similarity to Prolog, which the
Relational Language did not have. This resemblance
was later inherited by PARLOG and then by GHC. This
may have been the main cause of the widespread misunderstanding that concurrent logic programming languages are parallel versions of Prolog.
In 1983 Clark and Gregory proposed their new language, PARLOG [9]. Its design seemed to have been
greatly influenced by CP. A crucial difference was that
the argument mode declaration allowed more static program analysis, making it much easier to implement
nested guard structures.
When ICOT invited Shapiro, Clark, Gregory, and
Ken Kahn, who was also interested in the area, we discussed various aspects of those languages. These discussions contributed significantly to deciding later research
directions. I was an outsider at that time, but enjoyed the
discussions. Basic ideas for some of the features later incorporated in KL1 implementations occurred to me
during the discussions, such as automatic deadlock detection by the garbage collector [16].
Through my experience of Prolog programming, I had already become convinced that we cannot avoid describing algorithms even in logic programming languages. When the basic design and the development
timetable of SIMPOS were more or less established, I
could find some time to participate in the design
of a CP implementation.
After FGCS'84, Kazunori Ueda, then at NEC, started
examining the features of CP, especially its synchroniza-
tion mechanism based on read-only variables and atomic unification, in detail. His conclusion was that, to make the semantics clean enough, the language implementation
would have to become much more complicated than expected. That led him, at the very end of the year, to a
new language with much simpler and cleaner semantics,
later named Guarded Horn Clauses (GHC) [20].
Guarded Horn Clauses
When Ueda proposed GHC, the group designing KL1
almost immediately adopted it as the basis of KL1, in
place of CP. Although I cannot deny the possibility of
the "not invented here" rule slightly affecting the decision in a national project, the surprisingly simpler and
cleaner semantics of GHC was the primary reason.
GHC was welcomed much more warmly than CP by language
implementers. Those who had not found any reasonable
implementation scheme for CP felt much more relaxed.
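For readers who have not seen the language, its flavor can be conveyed by the classic stream merger, shown here as an illustrative sketch rather than as code from the project: a clause may commit only when its head and guard hold for the call as given, so a call whose input streams are still unbound simply suspends.

    % Nondeterministic merge of two streams in (Flat) GHC style.
    merge([A | As], Bs, Cs) :- true | Cs = [A | Cs1], merge(As, Bs, Cs1).
    merge(As, [B | Bs], Cs) :- true | Cs = [B | Cs1], merge(As, Bs, Cs1).
    merge([], Bs, Cs)       :- true | Cs = Bs.
    merge(As, [], Cs)       :- true | Cs = As.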
Only a few months later, Shunichi Uchida and Kazuo
Taki initiated a group to plan an experimental parallel
system, connecting several PSI machines, to make an
experimental implementation of GHC, which was called
Multi-PSI [13].
I still felt uneasy about its ability to express
nested guards. Arbitrarily nested environments were
required to implement them correctly, in which variables
of the innermost environment and of outer environments
must somehow be distinguished.
In the fall of 1985, partly under the influence of the
Flat version of CP adopted as the basis of the Logix system developed at the Weizmann Institute [12], the KL1
group decided not to include nested guards in the language, which made it Flat GHC. This decision allowed
me to start considering further details of the implementation with Toshihiko Miyazaki and others, although my
principal mission was still the sequential system.
The last few months of the year might have been the
most difficult period for those who had been engaged in
the parallel Prolog hardware development. After examining a rough sketch of a Flat GHC implementation, the
leaders of the group, Shunichi Uchida and Atsuhiro
Goto, decided that this language was simple enough for
efficient implementation and descriptive enough for a
wide range of applications. The development of parallel
Prolog machines was stopped, and a new project was started to build
parallel hardware supporting a language based on Flat GHC.
MRB and My Full Participation
Based on experiments with the first version of
Multi-PSI, a more powerful and stable parallel
machine, called Multi-PSI V2, was planned. For the
processing elements, the second generation of PSI, PSI-II, was chosen. From this stage (i.e., from 1986), I was
more fully involved in the parallel systems research, as
SIMPOS was approaching its maintenance phase. My
real motivation was that I thought I had solved the last
remaining difficulty of efficient implementation.
Logic programming languages are pure languages in
that once a value holder (a variable or an element of a
data structure) gets some value, it remains constant.
It is impossible to update an element of an array. What
one can do is make a copy of the array with one element
differing from the original. A straightforward implementation that actually copied the whole array was, of
course, not acceptable. Representing arrays with trees,
allowing logarithmic-time accesses, would not be satisfactory either. Without constant-time array element accesses, the computational complexity of existing algorithms
would become larger; massively parallel programs written in such a language would be beaten by sequential
programs at large enough problem sizes. Henry Baker's shallow binding method [1] and similar methods proposed for Prolog matched the basic requirements, but
the constant overhead associated with those methods
seemed unbearable for the most basic primitives.
In early 1986, I heard that a constant-time update
scheme had been designed by a group at a company cooperating with ICOT. I talked with them and found a crucial
oversight in their algorithm, but the basic idea was excellent: if there were no references to an array other than
the one used as the original of the updated array, destructive update would not disturb the semantics. While the
idea was simple, the algorithm for keeping the single-reference information, where I found the bug, was rather
complicated, since we had to cope with shared logical
variables.
After several days of considering how to fix the bug, I
reached a solution, later named the multiple reference
bit (MRB) scheme [6]. MRB needed only one bit of information in pointers, rather than in data objects,
which was especially beneficial for shared-memory implementations, since no memory accesses were needed
for reference counting. It was also well suited to hardware
support.
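The programming-level view of this can be sketched as follows; this is my own illustration, and the primitive name and argument order follow my recollection of the KL1 builtin, so they may not be exact.

    % Updating one element of a vector in KL1 style.  Logically NewVect is
    % a fresh vector differing from Vect in one element; when the MRB in
    % the pointer says Vect has no other references, the implementation may
    % reuse its storage and perform the update destructively.
    increment(Vect, Index, NewVect) :- true |
        set_vector_element(Vect, Index, Old, New, NewVect),
        New := Old + 1.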
In later years, as static analysis of logic programs prospered, static reference-count analyses were also studied,
yielding reasonable results. But this dynamic analysis by
MRB was well suited to the human resources we had at
ICOT. The lack of compiler experts had always been a
problem with the project. If we had tried static analysis
methods at that time, the language implementation
would not have been completed in that short period of
time.
KL1 and Multi-PSI V2
Near the end of 1986, a group was formed to investigate
details of the language implementation on Multi-PSI.
Weekly meetings of this group continued for about two
years, and the discussion there was the hottest I know of
at ICOT. The final design of KL1 [21] was decided there,
and most of the proposed ideas were actually implemented on Multi-PSI [14].
Most of the implementation issues concerned optimization schemes, many based on MRB. The principle
was to make single-reference cases as efficient as possible
and to handle multiple-reference cases correctly but
less efficiently. This decision proved reasonable through
later programming experience, since the single-reference
programming style was found to be not only efficient but
also more readable. Some similar languages designed
more recently even require data structure references to
be single.
The discussion in the group was not confined to implementation issues. Some aspects of the specification of
KL1, especially its metalevel features, were also investigated. Although the KL1 language design group had
already proposed that KL1 should have a metacall feature similar to the one in PARLOG [10], that feature provided only qualitative execution control mechanisms, while more quantitative mechanisms such as priority and resource control
were needed in the parallel kernel language. It was reasonable, I think, to settle the details of such metalevel features in the implementation group, since they could not
be clearly separated from implementation issues.
Load distribution was made explicit by enabling the
program to specify the processor on which goals execute. This
decision seems to have been appropriate: we are
still struggling to design good load distribution policies,
and it would have been a disaster if the language
implementation had tried to distribute the load automatically
within large-scale multiprocessor systems.
Data placement, on the other hand, was made automatic. Thanks to the side-effect-free nature of the language, data structures can be moved to any processor
and arbitrarily many copies can be made. This simplified
the design considerably.
Features for building up a reasonable software development environment, such as primitives for program
tracing and executable code management, were also
added. These additions were designed so that the basic
principles of the language, such as the side-effect-free
semantics, were not disturbed. Otherwise, the implementation would have been much more complicated,
disabling various optimization schemes.
As a whole, the design of the language and its implementation was rather conservative. We chose a design we
could be sure to implement without many problems and
gave up our ambition of being more innovative. We had
to provide a reasonable development environment to
allow enough time for parallel software research within
the project period.
The hardware development at Mitsubishi went on in parallel with
the language implementation meetings, and
the hardware arrived at ICOT at the end of 1987. It had
64 processors, each with 80MB of main memory, connected by a mesh network. The development of the KL1
implementation on the hardware continued.
PIMOS
When the design of the basic metalevel primitives was
completed, a team to develop the operating system for
parallel inference systems, PIMOS, was formed in 1987.
Given the well-considered language features, the design
of PIMOS was not very difficult.
As the real parallel hardware was expected to be ready
much later, we needed some platform for operating system development. Although an implementation of
GHC on Prolog by Ueda was available, its performance
was too low for large-scale program development, and
many newly added features of KL1 for system programming were lacking. A team led by Miyazaki built a fuller
pseudoparallel implementation in C, called the PIMOS Development Support System (PDSS), to fill this need.
Coding and debugging of PIMOS were done by a
team of about 10 participants using PDSS, until the language system on Multi-PSI V2 became barely operational at the end of the summer of 1988.
As we expected, but nevertheless to our surprise, the
operating system developed on the pseudoparallel PDSS
could be ported immediately onto the real parallel hardware. With multiple processors, the execution order of
processes on Multi-PSI was completely different from that on
PDSS. In theory, the dataflow synchronization feature of
GHC was expected to avoid any synchronization problems. But based on my own experience in developing the
multitask operating system SIMPOS, I was ready to encounter annoying synchronization bugs. On physically
parallel hardware, on which scheduling-dependent bugs
are hard to reproduce, debugging would be much
more difficult than on single-processor multitask systems. In the end, all the bugs we found were in the language
implementation, except for a few problems of very high-level design.
This experience clearly showed us the merit of using a
concurrent logic programming language. In a parallel
processing environment, not only a limited number of
system programmers, but also application programmers
have to solve synchronization problems. The dataflow
synchronization mechanism can remove this burden almost entirely. Implementing the language, with its
additional communication and synchronization features, may be
much more difficult than implementing a sequential language,
but the results of the effort can be shared by all the software developers using the language.
After about two months, the language implementation and the operating system on Multi-PSI V2 became
stable. We could exhibit the system at FGCS'88 held in
the beginning of December with several preliminary
experimental application software systems [7].
Application Software Development
With its 64 processors, Multi-PSI V2 ran at more than 10
million goal reductions per second at its peak. This figure was not outstanding, being only about 10 times
faster than Prolog on mainframe machines, but it was good
enough to attract some application researchers to
parallel processing. Several copies of Multi-PSI V2 were
manufactured in the following years and used heavily in
parallel software research in various areas [15].
For about a year or two, we heard many complaints
from those who were accustomed to Prolog and ESP.
The lack of automatic backtracking made the language
completely different from Prolog, while their syntactic
similarity prevented some from easily recognizing the
difference. Many tried to write their programs in
Prolog-like style, recognizing after the debugging struggle that the language did not provide automatic search
features and they had to write their own. T h e n they reluctantly started writing search algorithms. This often
resulted in much more sophisticated searches than Prolog's blind exhaustive search. They also had to consider
how these searches could be parallelized in an efficient
way. T h e language lured Prolog programmers to the
strange world of parallel processing with its syntactic
decoy.
Another typical difficulty seems to have been finding a good programming style in a language with so
much freedom. The object-oriented style [19] later became
recognized as the standard. Designing programs in KL1 became synonymous with designing process structures.
Load distribution with decent communication locality
is the key to efficient parallel computation. Load distribution strategies that were successful for some particular
problems were compiled into libraries and distributed
with the operating system [11], accelerating the development of many application systems.
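As an illustration of what such explicit distribution looks like at the source level (a sketch of mine; the pragma spellings below follow my recollection of KL1 and are approximate, and the constant node number is only there to show the notation), a goal can be thrown to another processing element simply by annotating it:

    % Hedged sketch of explicit load distribution with KL1-style pragmas.
    fib(N, F) :- N <  2 | F = N.
    fib(N, F) :- N >= 2 |
        N1 := N - 1, N2 := N - 2,
        fib(N1, F1)@node(1),     % ship this goal to processing element 1
        fib(N2, F2),             % keep this goal on the local processor
        F := F1 + F2.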
Although some application systems attained almost
linear speedup with 64 processors rather easily, others
needed more effort to benefit from parallel execution.
Some needed a fundamental revision at the algorithm
level; some could run only a few times faster than on a
single processor; and some seemed to have attained reasonable speedup, but when certain changes in the algorithm
successfully improved the overall performance, the
speedup figure went down considerably. Research into
parallel algorithms under hardware assumptions much more realistic than the PRAM is one of the most important areas of study for the future.
Parallel Inference Machines
In parallel with the development of the experimental
Multi-PSI V2, the design of more powerful parallel inference machines, the PIMs, was being carried out by a team headed by
Goto and later by Keiji Hirata. I participated in this
hardware project only to a limited extent, but I may have
influenced the grand design of the language implementations on PIMs considerably.
Five different models were planned, corresponding to
five different computer manufacturers. This decision
had a more political than purely scientific basis. Five different models not only required more funding but also
incurred considerable research management overhead.
On the other hand, the decision may have been quite effective in
diffusing the technology to Japanese industry.
Since it was still difficult to find many language implementation experts, we decided to use basically the same
implementation, Virtual PIM (VPIM), for four out of the
five models to minimize the effort. The one remaining
model inherited the design from the implementation on
Multi-PSI V2. VPIM was written in a language called the
PIM System Language (PSL), which is a small subset of C
with extensions to control low-level hardware features.
The idea was that the same implementation could be
ported to all the models merely by writing PSL compilers.
VPIM was developed at ICOT using a PSL compiler
built on a Sequent Symmetry. The responsibility for porting it to each model rested with each manufacturer.
The first model of PIM to become available, in
mid-1991, was PIM/m, the successor of Multi-PSI V2 built by
Mitsubishi. It had up to 256 processors, with a peak
speed of more than 100 million reductions per second
(i.e., about 10 times faster than Multi-PSI V2). PIMOS
and many software applications developed on Multi-PSI
were ported to PIM/m without much effort, since the
programming language was identical. Almost all of the
software that showed near-linear speedup on Multi-PSI
V2 also did so on PIM/m.
Early in 1992, another model, PIM/p, manufactured
by Fujitsu, became ready for software installation. This system had up to 64 clusters, each with 8 processors sharing
main memory through coherent caches.
Load distribution within a cluster was handled automatically by the
language implementation, while distribution among clusters was still programmed
explicitly. This made the scheduling quite different from that of Multi-PSI or PIM/m, but PIMOS and the application software were ported without much of a problem,
apart from hardware and language implementation
problems that had to be solved in time for exhibition at FGCS'92
[5]. Again we thanked the language for its dataflow synchronization.
Conclusion
There have been arguments both for and against the outcome of the
project.
One might argue that the project was a failure because it
could not meet the goals described in the project's grand
plan, such as a computer that can speak like a human.
The grand plan did not actually say that such a computer could
be realized within 10 years. The goal of the project was
to establish the basic technologies needed for making
such a dream come true. I consider such dreams a much
better excuse than Star Wars for obtaining funding for basic
research.
The FGCS project was the first scientific Japanese national project conducted by MITI. All the projects carried out before FGCS started, and many that followed,
aimed primarily at promoting industry. That FGCS was different may
have been largely due to the uncompromising character of
the project leader, Kazuhiro Fuchi. The results of the
project may not be immediately commercialized. But we
have not been aiming at such a short-term goal. Our
goals are much longer-term: technologies that will be
indispensable when even personal computers can have
many millions of processors.
One of the weak points of our project was, as mentioned earlier, the shortage of human resources in basic
software technology. If we had had three times as many researchers who could design a programming language
and write optimizing compilers, the design of
the parallel inference machines might have been much
different. At least, more ambitious systems (several of
them) could have been designed. The language system
was not our ultimate goal; research in parallel software
architecture was a much more important one. To reliably provide a development environment for parallel software research
with the limited human resources available, we chose one safe route, which, I believe, was the best
choice.
As a whole, I think the project was quite successful. It made a considerable contribution to parallel processing technologies, especially in programming language and software environment design. Research in parallel software for knowledge processing has only begun, but without the project, there would have been nothing. Further refinements of the design of the kernel language and of its implementation, both in compilation scheme and in hardware, are needed, but they are relatively minor issues. The most important research topic of the future, I believe, is in the design of parallel algorithms. The largest achievement of the project was showing a way to build a platform for such research activities.
Acknowledgments
The author would like to thank all those who participated in the project and research in the related areas.
References
1. Baker, H. Shallow binding in LISP 1.5. Commun. ACM 21,
7, 1978.
2. Bowen, D.L., Byrd, L., Pereira, F.C.N., Pereira, L.M. and
Warren, D.H.D. DECsystem-10 Prolog User's Manual, Nov.
1983.
3. Chikayama, T. Unique features of ESP. In Proceedings of
FGCS'84, pp. 292-298, 1984.
4. Chikayama, T. Programming in ESP--Experiences with
SIMPOS. In Programming of Future Generation Computers,
Kazuhiro Fuchi and Maurice Nivat, Eds. North-Holland,
New York, N.Y., 1988, pp. 75-86.
5. Chikayama, T. Operating system PIMOS and kernel language KL1. In Proceedings of FGCS'92, Tokyo, Japan, 1992,
pp. 73-88.
6. Chikayama, T. and Kimura, Y. Multiple reference management in flat GHC. In Proceedings of the Fourth International
Conference on Logic Programming, 1987.
7. Chikayama, T., Sato, H. and Miyazaki, T. Overview of the
parallel inference machine operating system (PIMOS). In
Proceedings of FGCS'88, Tokyo, Japan, 1988, pp. 230-251.
8. Clark, K.L. and Gregory, S. A relational language for parallel programming. In Proceedings of ACM Conference on
Functional Languages and Computer Architecture, 1981,
pp. 171-178.
9. Clark, K.L. and Gregory, S. Parlog: A parallel logic programming language. Res. Rep. TR-83-5, Imperial College, 1983.
10. Clark, K. and Gregory, S. Notes on systems programming in PARLOG. In Proceedings of FGCS'84, 1984, pp. 299-306.
11. Furuichi, M., Taki, K. and Ichiyoshi, N. A multi-level load
balancing scheme for or-parallel exhaustive search programs on the multi-PSI. In Proceedings of the Second ACM
SIGPLAN Symposium on Principles and Practice of Parallel Programming, March 1990, pp. 50-59.
12. Hirsch, M., Silverman, W. and Shapiro, E. Computation control and protection in the Logix system. In Concurrent Prolog: Collected Papers, vol. 2, Ehud Shapiro, Ed. The MIT
Press, Cambridge, Mass., 1987, pp. 28-45.
13. Ichiyoshi, N., Miyazaki, T. and Taki, K. A distributed implementation of flat GHC on the multi-PSI. In Proceedings
of the Fourth International Conference on Logic Programming,
MIT Press, Cambridge, Mass., 1987.
14. ICOT. Proceedings of FGCS'88. ICOT, Tokyo, Japan, 1988.
15. ICOT. Proceedings of FGCS'92. ICOT, Tokyo, Japan, 1992.
16. Inamura, Y. and Onishi, S. A detection algorithm of perpetual suspension in KL1. In Proceedings of the Seventh International Conference on Logic Programming, The MIT Press,
1990, pp. 18-30.
17. Lusk, E., Warren, D.H.D., Haridi, S. et al. The Aurora or-parallel system. New Generation Comput. 7 (1990), 243-271.
18. Shapiro, E. A subset of Concurrent Prolog and its interpreter. ICOT Tech. Rep. TR-003, ICOT, 1983.
19. Shapiro, E. and Takeuchi, A. Object oriented programming in Concurrent Prolog. ICOT Tech. Rep. TR-004,
ICOT, 1983. Also in New Generation Computing, Springer-Verlag, vol. 1, no. 1, 1983.
20. Ueda, K. Guarded Horn Clauses: A parallel logic programming language with the concept of a guard. ICOT Tech.
Rep. TR-208, ICOT, 1986.
21. Ueda, K. and Chikayama, T. Design of the kernel language
for the parallel inference machine. Comput. J. (Dec. 1990).
22. Warren, D.H.D. An abstract Prolog instruction set. Tech.
Note 309, SRI International, 1983.
23. Yokota, M., Yamamoto, A., Taki, K., Nishikawa, H. and
Uchida, S. The design and implementation of a personal
sequential inference machine: PSI. ICOT Tech. Rep. TR-045, ICOT, 1984. Also in New Generation Computing, vol. 1, no. 2, 1984.
CR Categories and Subject Descriptors: C.1.2 [Processor Architectures]: Multiple Data Stream Architectures (Multiprocessors); D.1.3 [Programming Techniques]: Concurrent Programming; D.1.6 [Software]: Logic Programming; D.3.2 [Programming Languages]: Language Classifications--Concurrent, distributed, and parallel languages, Data-flow languages, Nondeterministic languages, Nonprocedural languages; K.2 [Computing Milieux]: History of Computing
General Terms: Design, Experimentation
Additional Key Words and Phrases: Concurrent logic programming, Fifth Generation Computer Systems project, Guarded Horn Clauses, Prolog
About the Author:
TAKASHI CHIKAYAMA is chief of the First Research Laboratory at the Institute for New Generation Computer Technology. Current research interests include design and implementation of logic programming languages, object-oriented programming languages, and software development environments. Author's Present Address: Institute for New Generation Computer Technology, ICOT Research Center, 1-4-28 Mita, Minato-ku, Tokyo, Japan; email: chikayama@icot.or.jp
Evan Tick
UNIVERSITY OF OREGON
It is not very often that Westerners get to see the Japanese just as they are. The difficulty we have when we look at Japan--the layers-of-the-onion problem--can be so frustrating that we tend to raise our own screen of assumptions and expectations, or we content ourselves with images of the Japanese as they would like to be seen. If you live in Japan, you learn to value moments of clarity--times when you feel as if you'd walked into a room where someone is talking to himself and doesn't know you're there.
LETTER FROM JAPAN
E. Smith
The New Yorker, April 13, 1992
Summaries of the FGCS project successes and failures by most foreign researchers tend to categorize the abstract vision (of knowledge engineering and a focus on logic programming) as a great success and the lack of commercially competitive hardware and software as the main failure. I would like to place these generalizations in the specific context of my personal involvement with the project: my own corner of things, and my assessment of parallel inference machine (PIM) research as a whole. Furthermore, I would like to comment on more subtle successes and failures that fewer observers had a chance to evaluate. These results involve the training of a generation of computer scientists.
My participation in the FGCS project is somewhat unusual because I was both an ICOT visitor in February 1987 and then a recipient of the first NSF-ICOT Visitors Program grant, from September 1987 to September 1988. For that year I conducted basic research in the laboratory responsible for developing parallel-inference multiprocessors. In 1988 I joined the University of Tokyo, in the Research Center for Advanced Science and Technology (RCAST), with a visiting chair in information science donated by the CSK Corporation. Thus over the period 1987 to 1989 I had an "insider's view" of the FGCS project. Furthermore, for the past three years, at the University of Oregon, I have continued to collaborate with ICOT researchers in the PIM groups.
Stranger in a Strange Land
I had applied for the NSF-ICOT Visitor's Program grant with the goal of both extending my thesis research (evaluating the memory-referencing characteristics of high-performance implementations of Prolog) and living in Tokyo. Since both of these goals were equally important, the NSF-ICOT grant was ideal. Prior to graduating, I had studied Japanese at Stanford for two years, in addition to making two short visits to Tokyo. Strangely, rather than being interested in that vision of rural Japan professed by travel posters and paperbacks, my interest was limited almost exclusively to Tokyo, the farthest one can be from the "real" Japan and still be on Japanese soil. Yet to me, this was the "real" Japan: the vitality of a sensory overload of sound trucks, a myriad of ever-changing consumer products, ubiquitous vending machines, electronics supermarkets, and a haywire tangle of subway and rail systems.
It seemed only appropriate that ICOT was located in the midst of all this motion: downtown Mita in the Minato-ku ward of Tokyo. Located inside the Yamanote-sen, the railway line encircling the city, Mita has immediate access to all parts of cosmopolitan Tokyo by rail, subway, taxi, bicycle, or foot. ICOT members were all "on loan" from parent companies and institutions, most of which were on the outer belts of the city, so they had long commutes. I was not so constrained and found an apartment in Hiroo, a convenient 30-minute walk to ICOT. Furthermore, Hiroo was only 30 minutes by foot to Roppongi (the nightclub district) and by bicycle to Ooi Futo, a circuit where local racers practiced on Sundays.
It was in this environment, not unlike Queens, New York, where I grew up, that I started working in September 1987. My first day, I arrived at Narita airport at about 8:00 A.M. and took the liberty of grabbing the first available bus to Mita and then a taxi to ICOT. I had brought a Macintosh, and I figured I would first drop it off at work and say hello to everyone. Unfortunately, no one at ICOT was expecting my arrival there. To the contrary, they had informed an NHK film crew that I would be arriving at my hotel. The mixup was diplomatically solved by calling the film crew over to ICOT, having me carry my backpack and Mac back down to the lobby, and staging an "official" arrival for Japanese television. We then proceeded up the elevator, cameras glaring, to my desk, so they could record my unpacking of the Mac. The gist of it was "strange gaijin (in tennis shoes) brings own computer to Fifth Generation Project . . ." It was fairly amusing, although it is always disconcerting how naive the media are. Almost as funny was the next week, when a newspaper requested my photo, which I had taken at a nearby film shop. The next day's paper displayed that photo, but with a necktie drawn in. Interesting cultural differences, but moreover an indication of a time when national limelight on the project was peaking.
Expectations and Goals
My expectations were to continue my research in the direction of parallel logic programming languages implementation and performance evaluation. At the point
when I finished my thesis, I had only begun to explore
parallel systems, primarily collaborating with M. Hermenegildo, l In 1987 Hermenegildo was developing the
prototype of what later evolved into &-Prolog, a transparent AND-parallel Prolog 2 system for shared-memory
multiprocessors [7]. This work introduced me to the
world of parallel processing during a period of great
excitement in the logic programming community. One
[Hermenegildo was at the Microelectronics and Computer Technology
Corporation (MCC) at the time, now at the Technical University of Madrid (UPM).
of the primary triggers of this excitement was the new shared-memory multiprocessors from Sequent, Encore, and BBN in the mid-1980s. Many research groups were developing schemes to exploit parallelism. I think funding for the FGCS project motivated many international researchers to continue in this area. Two particularly promising systems at the time were Aurora OR-parallel Prolog3 [12] and the family of committed-choice languages [16]. Aurora was being developed at the University of Manchester (later at the University of Bristol), Argonne National Laboratories (ANL), and the Swedish Institute of Computer Science (SICS). Committed-choice languages were being developed primarily at the Weizmann Institute, ICOT, and Imperial College.
Both &-Prolog and Aurora were meant to exploit parallelism within Prolog programs transparently. &-Prolog was based on the idea of "restricted AND-parallelism" [4] wherein Prolog goals could be statically analyzed to determine that they shared no data dependencies, and could thus be executed in parallel. Aurora exploited OR-parallelism, wherein alternative clauses defining a procedure could be executed in parallel, spawning an execution tree of multiple solutions. The committed-choice languages represented a radical departure from Prolog: backtracking was removed in favor of stream-AND parallelism, similar to that in Hoare's Communicating Sequential Processes. These approaches were promising because they potentially offered low-overhead essential operations: variable binding, task invocation, and task switching.
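To make the committed-choice style concrete, here is a minimal sketch (my own illustration, not taken from any ICOT or Aurora system) of a nondeterminate stream merger written in flat GHC. Each clause waits until its head pattern is satisfied by bound input, commits to exactly one clause at the "|" bar, and then produces its output bindings in the body; there is no backtracking.

    % merge(In1, In2, Out): interleave two streams into one.
    % A clause can be selected only after the corresponding input cell is bound.
    merge([X|Xs], Ys, Zs) :- true | Zs = [X|Zs1], merge(Xs, Ys, Zs1).
    merge(Xs, [Y|Ys], Zs) :- true | Zs = [Y|Zs1], merge(Xs, Ys, Zs1).
    merge([], Ys, Zs) :- true | Zs = Ys.
    merge(Xs, [], Zs) :- true | Zs = Xs.

A call such as merge(In1, In2, Out) simply suspends while both input streams are unbound; this dataflow synchronization is what makes stream-AND parallelism feel like a network of communicating processes.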
It was in this whirlwind of activity that I mapped out
my own project for ICOT. Such planning was a good
idea in retrospect. Some other visiting researchers had
failed to plan ahead, and floundered, unable to connect
with a group to work with. Although I should have
learned from these experiences, I myself fell into the
same trap at the University of Tokyo a year later. Both
institutions had trouble integrating young visiting researchers into the fray because of language differences
and an unjustified assumption (in my case) that "visitors"
were omniscient experts who did not need mentoring.
My proposed research was to evaluate alternative parallel logic programming languages executing on real (namely Sequent and Encore, at the time) multiprocessors. Specifically, M. Sato of Oki was at that time building Panda, a shared-memory implementation of Flat Guarded Horn Clauses (FGHC)4 [16] for the Sequent Balance. An early (Delta) version of Aurora was also obtained from ANL. My intent was to get a set of comparative benchmarks running in both FGHC and Prolog, for judging the merits of the approaches. It would have
2Logic programs are composed of procedures defined by Horn clauses of the form H :- B1, B2, ..., Bk for k ≥ 0. The head H contains the formal parameters of the procedure corresponding to the clause. The body goals Bi contain actual parameters for procedure invocations made by the parent procedure. AND parallelism exploits the parallel execution of multiple body goals.
3OR parallelism exploits the parallel execution of alternative clauses defining the same procedure.
4KL1 is the superset of FGHC that is supported by the PIM architectures.
been a coup to include &-Prolog in the comparisons, but the enabling compiler technology was still under development [14].
The measurements I wanted were of the type previously collected for Prolog and &-Prolog: low-level memory behavior, such as frequency of data-type access, and cache performance. Work progressed on several fronts simultaneously: as Panda and Aurora were stabilizing, a collaborative effort with A. Matsumoto to build a parallel cache simulator was underway, as was benchmark development. The Panda and Aurora systems were then adopted and harnessed for the simulator. The simulator was interesting in its own right: to enable long runs, it was not trace driven, but rather ran concurrently with language emulators [6].
The overall goals of this research were to conduct some of the first detailed empirical evaluations of concurrent and parallel logic programming systems. Little work had been done in performance evaluation: most logic programmers were furiously creating systems and simply not analyzing them, or at best measuring the inference rate of list concatenation. The final word on performance characteristics, of the kind later documented for imperative languages by J.L. Hennessy and D.A. Patterson, was impossible for logic programming languages because of the lack of an established body of application programs. Although Prolog had received widespread use and recognition, leading to industrial-strength applications (e.g., [22]), concurrent languages and parallel variants had no such history. As a result, my goals were practically oriented: to make an incremental improvement over the standardized UC-Berkeley and Edinburgh benchmarks, and to continue to extend the range of our analysis into multiprocessor cache performance. This work was later described in a book [19], written primarily at the University of Tokyo.
Successes, Failures, and Frustrations
During the summer of 1988, A. Ciepielewski5 visited ICOT and helped a great deal with instrumenting Aurora. This required rewriting the lock macros, the Manchester scheduler, and other nasty business. By summer's end, we had produced running systems evaluating a large benchmark suite, which had been refined over the months for performance considerations. In August I started writing a paper on our results [20], finishing off my ICOT visit and moving to Todai. The technical results of the empirical study were summarized in the Lisbon ICLP paper [20]:
The most important result of this study was a confirmation that indeed (independent) OR-parallel architectures have better memory performance than (dependent) AND-parallel architectures. The reasons are that OR-parallel architectures can exploit an efficient stack-based storage model, whereas dependent AND-parallel architectures must resort to a less efficient heap-based model. For all-solutions search problems, a further result is that noncommitted-choice architectures have better memory performance than committed-choice architectures. This is because
5Ciepielewski was at the Swedish Institute of Computer Science (SICS) at the time, now at Carlstedt Elektronik AB.
backtracking architectures can efficiently reclaim storage during all-solutions search, thereby reducing working-set size. Committed-choice architectures, like functional language architectures, do consume memory at a rapid rate. Incremental GC can alleviate some of the penalty for this memory appetite, but incremental GC also incurs its own overheads. Third, for single-solution problems, OR-parallel architectures cannot exploit parallelism as efficiently as dependent AND-parallel architectures can. Although OR-parallel goals may exist, they are often too fine-grained to justify the excessive overheads necessary to execute them in parallel. In this respect, dependent AND-parallel architectures can execute fine-grain parallelism more efficiently than can OR-parallel architectures.
There were some objections to these conclusions among those implementing the systems. The basic criticism was simply that the application domains of the languages differed, and therefore comparison was inappropriate. A more far-reaching question, which still has not been resolved by the FGCS project, is the utility of an all-solutions search method as realized by backtracking. ICOT made a radical decision in 1982 to use logic programming as its base and then another radical decision to switch to committed-choice languages a few years later. Although many techniques have been explored to recapture logical completeness, none of them have been entirely successful. This gap has led other research groups to develop more powerful language families, such as concurrent constraint languages (CCLs) and languages based on the "Andorra Principle" [1]. First attempts at both of these families at ICOT are GDCC and ANDOR-II, respectively.
I felt my greatest success at ICOT was collaborating with several engineers, contributing to both my project and others. These included the members of the PIM laboratory, as well as ICOT visitors, such as J. Crammond of Imperial College. The congeniality at ICOT was unsurpassed, not just toward foreign visitors, but among the members from different companies. Sometimes I thought that it was a bit too congenial, and that the lack of (externally directed) aggressive competitiveness was detrimental to the FGCS project overall. As in most Japanese institutions, foreign collaboration was somewhat carefree because the visitors were not treated as true members of the team. The main reason for this was the lack of reading skills needed to fully participate in group meetings and prepare working papers. A related problem was a lackadaisical attitude toward citing the influences of foreign researchers. I attribute this somewhat to language differences, but primarily to lenient academic standards inherited from a corporate culture.
An implicit success, I hope, was that my technical analysis helped uncover systems problems that were later fixed. During the project, development progressed on the Aurora schedulers, improved Prolog and FGHC compilers, and alternative systems such as JAM Parlog and MUSE OR-parallel Prolog. In a sense, these developments were frustrating because they made my small experiment somewhat obsolete. The half-life of empirical data of this type is very short: results rarely make it to journal form before the underlying systems have been "versioned-up." Furthermore, I hope the conclusions derived from comparing OR-parallel Prolog to stream AND-parallel FGHC had some influence on the Andorra model [1], developed by D.H.D. Warren to combine the two.
A limited success was my influence on ICOT researchers in the PIM groups. I think the empirical slant influenced a number of researchers to devote more care to evaluating their designs. Still, I do not think enough emphasis was placed on producing quantitative performance analysis. Considering the massive efforts that went into building the Multi-PSIs and PIMs, few publications analyzing the key performance factors were generated (see [15]). I blame the three-period FGCS schedule for this. In 1988, the groups were struggling to complete the Multi-PSI V2 demonstration. Yet the design of the PIM machines was largely under way, and little if any Multi-PSI experience collected after 1988 affected PIM. Still, the sophistication of PIM instrumentation and experimentation (e.g., Toshiba's PIM/k cache monitors and Fujitsu's performance visualization tools) has improved over the final period, even if the newer application benchmarks [15] have not yet been exercised on the PIMs.
One impediment to my research was the lack of compiler technology. This was prevalent throughout the systems: compile-time analysis was not yet on par with that of imperative languages, thus lessening the importance of what we were measuring. Recent work in logic program compilation (e.g., [23]) indicates that significant speedups can be attained with advanced optimization methods. A related frustration during my stay in Japan was something as simple as the lack of floating-point arithmetic in parallel logic language implementations. Even SICStus Prolog had such an inefficient implementation of floating-point numbers as to make it unusable. I accidentally discovered this when attempting to implement a new quadrature algorithm for the N-Body problem, developed by K. Makino at the University of Tokyo. I had just joined the school after leaving ICOT and was eager to find that "killer application" which would link number crunching with the irregular computation that logic programs are so good at. I had read a paper by Makino [13] describing a method for successively refining space into oct-trees and exploiting this to approximate long-distance force interactions. I walked over to his office, surprising him one day, got the particulars of how to generate test galaxies, and hacked up the program in Prolog. For a long period afterwards I could not get the program to perform more than a few iterations before the heap overflowed. Tracing the problem, it became apparent that all unique floating-point results were interned.
This was disappointing primarily because it was a lost opportunity for cross-disciplinary research within the University, which I sorely wanted. Furthermore, it indicated the early-prototype state of parallel logic programming at the time, since none of the systems could tackle applications driving the U.S. markets [2].6
20/20 Hindsight
My belief going into the ICOT visit was that transparently parallelizing sequential languages was best (e.g., in some gross sense, let us exploit parallelism in Prolog as in Fortran). I still believe this approach to be very useful, especially for users who cannot be bothered with concurrency. However, during my stay I discovered the additional value of expressing concurrent algorithms directly and became an advocate of concurrent languages. My own intellectual development included, above all, learning how to write concurrent programs (and I am still learning how to make them execute in parallel). I continue to believe that concurrent languages are great, but I realized, after all the effort developing those benchmarks, that although concurrent languages often facilitate elegant means to implement algorithms [16, 19], the languages per se were rarely used to model concurrent systems. The mainstream concurrent languages do not support temporal constraints, thereby making certain types of modeling no easier than in nonlogical languages. Furthermore, it was still quite messy to express collections and connections of nondeterminate streams in these languages (recent languages, e.g., A'UM, Janus, and PCN, have been designed to assuage this problem).
My greatest technical criticism of my own research was that it was limited in scope, both in systems (Aurora vs. Panda) and in benchmark programs. If the languages had been standardized years earlier, especially within the committed-choice logic programming language community, we could have collected much more significant benchmarks. Alternatively, if I had had the products of ICOT applications development conducted during the third period of the FGCS project [15], I would also have been in a stronger position. Advanced compilers are still, to this day, not available.
On the personal side, collaboration was not always smooth, but that made our successes more of an accomplishment. There were more than a few arguments over the years, as to alternative methods of implementing one feature or another, or over bureaucratic culture shock. An amusing example of the latter was that, to bypass MITI software release restrictions, we would publish the source listing of the software in a technical report. My aggressiveness was not immediately understood for what it was (I like to think it is healthy competitiveness) until people got to know me better over the years. As in any organization, the Japanese were no different in that software developers become possessive and protective of their systems. When I made a branch modification of those systems, and bugs appeared, the first question was: did I cause this bug, or did I inherit it? Software systems were not as successfully partitioned among engineers as was hardware, usually resulting in a single guru associated with each system. This necessitated some rewriting of similar systems over the years.
ICOT researchers worked well together, especially considering the number of companies from which they hailed. There were some communication difficulties
6It should be stressed that there are no exceptional technical problems preventing logic-programming systems from having first-class floating point; e.g., Quintus Prolog implements the IEEE standard. Recently, SICStus Prolog floating point has been repaired, benefiting Aurora and &-Prolog, both based on it. The PIM systems also support floating point.
among the research laboratories. My own stated problems with compilers and applications resulted in part from differing laboratory agendas: the hardware group needed help in these areas, whereas the languages group was working in other areas (e.g., or-parallel search, constraints, metaprogramming, partial evaluation, and artificial intelligence). Again, this was a conflict of advanced technology vs. basic research that depleted both.
I do not think that ICOT researchers were on average more efficient than those in other institutions. Recall that the end of 1987 brought an avalanche upon U.S.-Japan relations: the October stock market crash, yen appreciation, and "Super 301" (U.S. trade legislation) left people in a surly mood. Japan bashing began, with foreign media attention on long Japanese working hours, among other things. We did not work any longer at ICOT than, say, at Quintus or IBM Yorktown Heights (hey, the last trains left around midnight). Long hours do not necessarily translate into insights or efficiency, anywhere. If anything, Tokyo was isolated in some sense, leading to a feeling of remoteness among the research community. For example, in the U.S., universities and industry tend to interact well informally, for instance, through university-sponsored seminars. Tokyo shares a great diversity of computer industry and universities, but none of that informal get-togetherness. On the other hand, ICOT did have an advantage in commanding member manufacturers' resources (namely, large groups of engineers) to construct large software (e.g., PIMOS) and hardware (PIM) systems. Such cooperation was quite astounding.
In retrospect, with regard to my own participation, I would have done a few things differently. Primarily, I would have memorized my kanji every day. It was and is simply no fun at all, but Martin Nilsson of the University of Tokyo, now at SICS, demonstrated that it can be done. This would have allowed me to participate more fully in weekly ICOT meetings, and later at Todai.
Technology and the FGCS Project
Hirata et al. [8] and Taki [17] wrote excellent articles
summarizing the design criteria and implementation
decisions made at the software and firmware levels of
the PIM. My own research was most closely tied to this
laboratory at ICOT, so I will limit my comments to this
research. Since all PIMs are organized as loosely connected shared-memory clusters (except PIM/m), each
design problem requires both local and distributed solutions. Furthermore, static analysis (by the compiler) must
be balanced with run-time analysis. The main components of their design include:
• memory management: concurrent logic programs have a high memory bandwidth requirement because of their single-assignment property and lack of backtracking. ICOT has developed an integrated solution for garbage collection at three levels within the PIMs. Locally, incremental collection is performed with approximate reference counting. Specifically, T. Chikayama's Multiple Reference Bit is incorporated in each data word. Distributed data is reclaimed across clusters via their export
tables. Finally, if local memory is exhausted, a parallel stop-and-copy garbage collector is invoked in the cluster [9]. The effort in designing and evaluating alternative garbage collection methods is one of the most extensive of all ICOT projects, primarily because the problem was recognized several years ago. However, compilation techniques, such as statically determining instances of local reuse, were not explored. The trade-offs between static analysis time and runtime overhead are still open questions for such techniques.
• scheduling: concurrent logic languages have inherently fine-grained process structures. The advantage is exploitable parallelism, but the disadvantage is the potential for a thrashing scheduler. The PIMs rely on explicit intercluster scheduling using goal pragmas (attributes telling where a goal should be executed; see the sketch following this list) and implicit (automatic) load balancing within a cluster. Furthermore, goals can be assigned priorities, which steer load balancing in a nonstrict manner. Although functional mechanisms have been completed at ICOT, extensive evaluation has not yet been conducted. Higher-level programming paradigms, such as motifs in PCN, have not been designed to alleviate the complexity of user development and modification of pragmas.
• metacontrol: pure concurrent-logic languages have semantics that allow program failure as a possible execution result. This has long been recognized as a problem because user process failure could migrate up to the operating system in a naive implementation. ICOT developed protected tasks called shoen, similar to previous work by I. Foster. Functional mechanisms for shoen management have been implemented at ICOT over the past four years, although empirical measurements of the operating system running application programs have not yet been analyzed. Intracluster mechanisms include foster parent (a shoen's local proxy) termination detection and deadlock detection; intercluster mechanisms include weighted throw counts for shoen termination detection and intelligent resource management [8]. Without further empirical data, it is difficult to judge the effectiveness of the mechanisms for reducing run-time overheads. Furthermore, it is not clear how this research should be viewed: as fine-grained control over concurrent logic programs or as a full-blown operating system. The latter view would certainly run into commercial problems.
• unification: unification is peculiar to logic programs, and its utility is somewhat controversial in the general computing community. Concurrent logic programs reduce general two-way unification into ask and tell unifications, which correspond more directly to importation and exportation of bindings in imperative concurrent languages. Still, logical variables cause two serious problems:
(1) Variables are overloaded to perform synchronization. This is both the beauty and horror of concurrent logic languages. The programmer's model is simplified by implicit synchronization on variables (i.e., if a required input variable arrives with no binding, the task suspends). Furthermore, if at any time that variable receives a binding (anywhere in the machine), the suspended task is resumed. Implementing this adds significant overhead to the binding time, primarily because of the mutual exclusion required during binding and the management of suspension and resumption.
(2) In a distributed environment, optimizing data locality over a set of unifications of arbitrary data structures is an impossibly difficult problem. Message-passing mechanisms defining import/export tables and protocols were developed, but little empirical analysis has been published.
Compilation techniques to determine run-time characteristics of logical variables, such as "hookedness" and modes [21], and to exploit them to speed up bindings, minimize suspensions, and minimize memory consumption, have not yet been implemented in the current PIM compilers.
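To illustrate how these mechanisms surface to the KL1 programmer, here is a small divide-and-conquer sketch of my own devising (illustrative only: the predicate names and literal node numbers are hypothetical, and it assumes KL1's wait/1 guard and := arithmetic built-ins). Work is thrown to other clusters with the @node() goal pragma, while add/3 simply suspends on its shared logical variables until both partial results have been told.

    % psum(List, Sum): sum a list by splitting it and summing the halves
    % on different clusters. @node(N) is the KL1 pragma for explicit
    % intercluster distribution; scheduling within a cluster is automatic.
    psum([],  S) :- true | S = 0.
    psum([X], S) :- true | S = X.
    psum([X,Y|Zs], S) :- true |
        split([X,Y|Zs], Left, Right),
        psum(Left,  SL)@node(1),
        psum(Right, SR)@node(2),
        add(SL, SR, S).

    % add/3 waits (ask) until both operands are bound, then tells the sum.
    add(A, B, C) :- wait(A), wait(B) | C := A + B.

    % split/3 deals list elements alternately into two sublists.
    split([],       L, R) :- true | L = [], R = [].
    split([X],      L, R) :- true | L = [X], R = [].
    split([X,Y|Zs], L, R) :- true | L = [X|L1], R = [Y|R1], split(Zs, L1, R1).

Priorities would be attached in the same style (e.g., with a priority pragma on a body goal), steering rather than dictating the load balancer, as described above.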
The ICOT research schedule began with the development of the personal inference machines (PSI-I, II, III), followed by mockup PIMs (Multi-PSI V1/V2, built of PSI-IIs), and finally the various PIMs: PIM/p (Fujitsu), PIM/m (Mitsubishi), PIM/i (Oki), PIM/c (Hitachi), and PIM/k (Toshiba). A great deal of credit must go to ICOT's central management of these efforts, based on a virtual-machine instruction set called PSL used to describe a virtual PIM (VPIM) running on a Sequent Symmetry [17]. VPIM was shared (with slight modifications) by most of the member organizations, making design verification feasible. These designs are summarized in Taki [17]. Highly parallel execution of dynamic and nonuniform (but explicitly not data-parallel) applications is cited as the target of the project. The major design decisions were made for MIMD execution, of a fine-grained concurrent logic programming base language, on a scalable distributed-memory multiprocessor. The PIMs were also designed around a cluster organization. I recall visiting MCC (where research on virtual shared-memory multiprocessors was being conducted) in spring 1987, giving a talk describing the organization, and fielding questions about why the design was not more innovative. My reply was that if ICOT could get the organization to work efficiently, that would be sufficiently innovative. I still think the design is plausible; a vote of confidence came later from the Stanford DASH project, with a similar organization (but different consistency protocols). But the two-level hierarchy presents an irregular model to mapping and load-balancing algorithms that has not yet been conclusively solved by ICOT.
Interestingly, one of MCC's prime directives was to design long-term future systems that were far beyond the planning window of its member companies. Thus, for instance, advanced human interfaces (including virtual reality) and D. Lenat's common-sense knowledge base CYC [10] were tackled. The FGCS project had a fixed duration, whereas MCC did not, and as ICOT wound down, through the second and third periods, the goals became shorter term, in an effort to demonstrate working systems to MITI to continue funding. Similarly, the more futuristic projects within MCC were terminated at the first shortage of funds (e.g., the entire parallel processing program, including the virtual shared-memory efforts, which now look so promising). M. Hermenegildo states that, in hindsight, this has proved to be one of MCC's primary weaknesses: that it is privately funded and thus has slowly drifted to very short-term research.
ICOT succeeded in technology transfer to its member companies, even if the final systems were not ideal. In some overlapping areas of interest, MCC may have produced concepts and systems that were on a par with or superior to those of ICOT, but MCC had a more difficult time transferring the technology to the shareholders. ICOT's success was due to the dedication of many engineers, the investment in a single, integrated computational paradigm, and careful management from conception to execution to transfer. Individuals were transferred as well, ensuring that the technology would not be dead on arrival. The companies contributed resources, toward hardware and software construction, that are quite large by U.S. standards. The formula of top-down concept management cooperating with strong bottom-up construction is discussed further in the next section.
In my own view, the future of this research area lies in the design and compilation of compositional languages, such as PCN, that attempt to bring logic programming more in line with mainstream practices. Specific problems that need to be solved (many of these issues are currently active research topics):
• How to reuse memory automatically at low cost? How to retain data locality within distributed implementations?
• How to finesse, with static analysis, many overheads of symbolic parallel processing, such as dereferencing, synchronization, and communication?
• How to efficiently schedule tasks with a combination of static analysis, profiling information, and motifs?
• Programming paradigms and compilation techniques for bilingual or compositional programming languages. Efficiently partitioning the control and data structures of a problem into two languages can be difficult.
• Find "killer applications" that combine the power of symbolic manipulation and supercomputer number crunching, to demonstrate the utility of logic programming languages.
• Gain experience with large benchmark suites, which requires standardizing some of these languages.
Commercial Success and Competitiveness
In this section I will address the validity of commercializing the processing technologies developed by ICOT, specifically the idea of building a special-purpose multiprocessor to execute a fine-grained concurrent language. This seems to be the main concern of the media and perhaps the key point on which ICOT is being evaluated. One could criticize ICOT for attempting to naively leapfrog "fourth-generation" RISC-based microprocessor technologies, which continue yearly to grow in
performance. Ten years ago, Japanese companies did not have experience developing microprocessor architectures, much less second-generation (superscalar) RISC designs, nor MIMD multiprocessor designs. Building the various PIM machines gave some of the hardware manufacturers limited experience in microprocessor design, although presumably this experience could have been had with a more conventional target.
It should be emphasized that the key goal of the FGCS project was to develop computers for "dynamic and nonuniform large problems" [17]. This is distinctly different from the goal of supercomputing research in the U.S., developing computers for large data-parallel (regular) problems [2]. The result is different design decisions: for example, massively parallel SIMD computers cannot be effectively used to execute nonuniform applications. Neither Japan nor the U.S. misevaluated future goals, but each saw only a part of the picture, neglecting the other part.
I do not, however, disagree with the criticism that a
more conventional target could have produced successful commercial technologies and influenced human infrastructure (next section). The selection of the FGCS
goals was certainly influenced by the individuals in
charge, primarily K. Fuchi and K. Furukawa, who had a
grand vision of logic programming integrating all aspects of high-performance symbolic problem solving.
Human infrastructure development was influenced by
these same architects. In some sense the two goals are
connected: engineers were given less freedom to choose
the direction of overall research, the top-down technologies being managed from above. This is in sharp contrast
to U.S. research groups, which are closer to creative anarchy.
In any case, I believe some unique experience was attained in the FGCS project: that of fabricating tagged,
symbolic, parallel architectures. This endeavor covers
the same ground as the more bottom-up approach to
massively parallel computation, taken by conventional
multiprocessor vendors. The overall problem of exploiting massively parallel symbolic computation can be seen
from two vantage points. ICOT took the high road, first laying out a foundation of a family of symbolic, concurrent languages, and then attempting to implement them,
laying out a foundation of a family of symbolic, concurrent languages, and then attempting to implement them,
and building layers of applications around them. U.S.
vendors took the low road, first building massively parallel hardware, and then attempting to implement imperative languages, either through parallel library interfaces,
through sophisticated compilation, or by language modification. In no instance, to my knowledge, have vendors
developed massively parallel software for solving symbolic problems, as has been emphasized throughout the
FGCS project. My strongest criticism of the FGCS project, however, was a lack of sufficient concern for compiler technology, something adamantly stressed in the
U.S. for the past decade.
It is not surprising that the operating systems community is now developing lightweight threads, what I consider a bottom-up effort. Furthermore, languages such as object-oriented Smalltalk and tuple-based Linda form the cores of recent distributed-processing efforts, these kernels developing top down, similar to the ICOT approach with logic programming. A performance gap currently remains between these top-down and bottom-up approaches. To bridge this gap, significant work needs to be done in compilation, and further hardware design refinement is needed. I think the latter requirement is easier for the Japanese manufacturers than the former. I have always been impressed by the responsiveness of hardware development both in the universities and in industry. It may be the case that compiler technology has made little progress in the FGCS project because the emphasis was placed elsewhere, on languages and applications. A more subtle reason is the inherent complexity of managing high-level language compilation and hardware development simultaneously. These languages have large semantic gaps that need to be covered, with concurrency adding to the analysis problem.
Let us consider what the situation will be if/when the
performance gap between these approaches can be
bridged. The key question is then who will be in the better
position? The top-down approach (taken by ICOT) has
the advantage of programming and application experience in concurrent and symbolic, high-level languages.
The bottom-up approach (predominantly taken by U.S.
vendors) has the advantage of using imperative languages that evolved slowly, thus retaining market share.
There is no clear answer to this question, but for the sake
of illustration, let me rephrase it in terms of two specific
technologies: worm-hole-routed distributed networks
[3] and concurrent constraint languages.
I believe both these technologies required significant intellectual efforts to conceptualize, design, implement, and apply in real systems. The former represents a bottom-up technology and the latter a top-down technology. Bottom-up technologies are easier to introduce into designs, e.g., PIM/m incorporates worm-hole routing (and can execute GDCC, a constraint language [18]), whereas the Intel machines [11] (and other experimental supercomputers at this level, such as the CM-5 and iWARP) do not yet have implementations of constraint languages, or in fact any vendor-offered languages other than C or Fortran. Perhaps GDCC can be ported to general-purpose multiprocessors, but that is not the issue. Where GDCC came from, and where it is going, can only be determined from the foundation of the research expertise gained in its development. This is of course true about routing technologies, but again, bottom-up technologies are more easily imported and
adapted. They are also more easily sold because they translate more directly to peak FLOPS, although this can be a grave misstatement of application performance, especially in the domain of nonscientific codes.
In 1991, the U.S. Congress passed the High Performance Computing and Communications (HPCC) initiative [2] stating several "grand challenges," such as climate modeling. These goals were indicative of the
research interests and activities already under way in
U.S. universities and national laboratories. Furthermore, these challenges echoed current government
funding in supercomputer research, such as DARPA's
28% stake in Touchstone development costs [11]. Conspicuously, none of the challenges involved symbolic
computation. Equally conspicuous was the lack of numerical computation in the FGCS project goals (the overlapping supercomputing project7 was more attuned to this goal). Yet again, if and when the bottom-up and top-down approaches coincide, we can imagine that these "traditional" number-crunching applications will also run efficiently on the resulting architectures, or that the symbolic and numeric engines will become so inexpensive that they will coexist in hybrid systems. It is perhaps more revealing that software applications to solve both climate modeling and genome mapping are being led by language paradigms with logic programming roots, namely PCN [5] and Lucy [24].
7"High-Speed Computing Systems for Scientific and Technological Uses," 1981-1989.
A New Generation
The following is a compendium of ICOT's major influences in developing its human capital. During the
FGCS'92 conference in Tokyo, I had the opportunity to conduct extensive interviews with hardware engineers participating in the FGCS project, who filled me in on all that had transpired since I had left. All those interviewed were active throughout the length of the project,
and thus had a complete perspective. Because they
worked primarily in the area of computer architecture,
their combined views form one in-depth analysis of
ICOT, rather than broad-based analyses.
Increased Communication
ICOT infrastructure was unique for Japanese research
organizations in the early 1980s in that it supplied researchers with various communication channels that
normally did not exist in the corporate culture. In the
following I will summarize the forms of communication.
Company-to-company interaction was engendered by
the cooperative efforts of engineers centrally headquartered at ICOT. All previous national projects were distributed among manufacturers. Perhaps it was the Japanese culture of consensus making that made the central
location successful.
The introduction of electronic mail increased international as well as local information flow. This trend was
generally occurring throughout Japanese organizations,
coinciding with, not engendered by, the FGCS project.
The creation of electronic networks falls under the auspices of the Ministry of Communication. Because this
delegation is separate from education and industry, network infrastructure has been slow to develop in Japan.
Even the most advanced universities only developed
high-bandwidth networks within the past five years.
Company-to-university interaction was engendered
by the Working Groups (WGs) associated with the FGCS
project. The WGs were started from the inception of
ICOT, with the intent of fostering university and company communication. Initially there were 1 to 5 groups
in the first period of the project, growing to 15 groups in
the second period and about 10 in the third period. Participating universities included Tokyo, Kyushu, Kobe,
Kyoto, Keio, and the Tokyo Institute of Technology. As
an example, the PIM WG, one of the oldest, meets
monthly. My experience, from presenting talks at the
PIM WG in 1988 and 1992, was an overly structured
format and limited participation. Most participation is
from universities in Tokyo (for economic reasons), but
student participation is extremely limited (e.g., one or
two Todai students might attend). In this respect, I
doubt that the WGs were efficient in strengthening ties
between industry and universities, for instance, compared to the weekly Computer Systems Laboratory seminars held at Stanford University. Yet others disagree
with this conclusion, pointing out that cooperation
among U.S. universities is limited and among U.S. companies, almost nonexistent.
Interaction between research communities in Japan
and abroad was engendered by the high value placed on
the publication and presentation of research results.
I C O T researchers and international researchers exchanged visits frequently. The exposure gained by
young researchers was exceptional, even for Western
organizations.
Postgraduate Education
ICOT served as a substitute for OJT ("on-the-job training"), and in doing so, graduated a generation of engineers/managers educated in advanced areas of computer science and better able to manage their own
groups in the future. The latter point applies to both the
engineering management as well as political management, learned by a close relationship with MITI.
An argument can be made that separation of industry
and higher education is beneficial to Japan. For instance,
it delivers engineers to industry ready to be trained in
specific company technologies. My personal experience
in Japan indicated that this argument is weak. The lack
of popularity of graduate studies weakens Japan's ability
to do computer science and, therefore, to produce advanced long-term technologies. The issue is not so much
of where advanced academic skills are learned, but that
the infrastructure needed to successfully learn includes
high communication bandwidth of all forms and an
open forum to discuss the latest research ideas. An indirect result of I C O T was to teach university graduates
(mainly with B.S. degrees) how to properly conduct research and construct hardware and software systems.
Furthermore, experience of presenting papers at conferences gave the individuals much needed practice at
social interaction with the international community.
Few ICOT researchers entered service with Ph.D.'s. Over the life of the project, about 10 Ph.D.'s were granted for FGCS-related research. This side effect was unusual for national projects, indicating ICOT's emphasis on basic research, as well as more practical considerations of personal advancement: a large percentage of those completing Ph.D.'s became university professors.
My stay at ICOT and the University of Tokyo, as well as various visits to industry, indicated that ICOT's infrastructure was carefully planned to bring about these results. Attention was paid to basic things, such as the ICOT library, which was quite extensive. Communication between engineers, unprotected by offices or cubicles, was more extensive at ICOT than at any other workplace I have experienced. Finding an expert for consultation was as simple as crossing the room.
Company Cultures
I believe that ICOT coincided with greater forces within Japan causing a movement away from the culture of lifetime employment. However, the revolution was certainly felt within the FGCS project. A. Goto of NTT estimates that over 5% of all ICOT participants changed their affiliations after their tenure ended. Examples include moves from industry and the national laboratories to academia (both as professors and as researchers) and moves between companies. The former constituted the major group.
In general the most highly productive researchers made the moves. ICOT may have implicitly influenced the individuals by empowering them to conduct world-class research programs. Once successful, they reevaluated their opportunity costs, which were not being adequately met by their employers. These costs involved both salary and intellectual freedom. The explosion came as a surprise to the companies, though it should not have, given the highly technical nature of computer science. Certain companies took direct action to deal with it, such as SONY forming the Computer Science Laboratory (CSL), a small Western-style research lab in Tokyo. NEC took indirect action by forming a research laboratory in New Jersey.
In addition, the universities gained a significant number of professors "generated" at ICOT. K. Nakajima of Mitsubishi estimates this at about five directly from ICOT and six from the ICOT-related groups within industry. Perhaps this was an accidental side effect of the decade, but it certainly was not seen in the previous national projects.
An opposite effect to the previous "explosion" was the cross-fertilization of company cultures. ICOT played the role of matchmaker to manufacturers, resulting in technology transfers, however indirect or inadvertent, over the 10 years. Here I review two main transfers: engineering management techniques and multiprocessor technologies.
engineering m a n a g e m e n t techniques and multiprocessor technologies.
Large systems development, such as the PIM development efforts, required scheduling. Nakajima pointed
out that I C O T would pool the production m a n a g e m e n t
techniques from the m e m b e r companies without bias.
This resulted in more efficient scheduling and project completion. Even if the companies themselves did not adopt the hybrid methodologies, the individuals involved certainly learned.
ICOT was a mixture of manufacturers and their engineers, and the experience of introducing these groups was beneficial to all. The engineers had a chance to experience how things are done in other companies. K. Kumon of Fujitsu stressed that PIM/p and PIM/m could not both have been built by either Fujitsu or Mitsubishi alone--each manufacturer had its own technology expertise, and thus the designs evolved. Designers from both companies learned, firsthand, alternatives that were not (yet) feasible in their own environments.
Conclusions
Considering technology, I conclude that the top-down, vertically integrated approach of the FGCS project failed to achieve a revolution, but was a precursor to evolutionary advances in the marketplace. The Japanese supercomputing project involved vector processor technology that was well understood compared to symbolic computation, so the two projects cannot be compared on that basis. Furthermore, comparisons to U.S. research efforts, which are driven by strong national laboratories and universities in the direction of "traditional" scientific computation, are also inappropriate. Perhaps a further comparison of the FGCS and HPCC projects would be appropriate, but that is a subject for another article. U.S. research and development concerning symbolic processing in Lisp and Smalltalk might be the most valid benchmark, if comparisons are desired. The rise and fall of the Lisp machine market over this decade does not place the U.S. in a more successful light. Refinement, rather than abandonment, of the concepts developed in the FGCS project may well serve the Japanese manufacturers in the upcoming decade.
Considering human capital, I think all the influences cited in this article are natural results of "market forces." The action of these influences on young ICOT researchers was by and large positive. Increased communication among engineers, managers, professors, students, and government bureaucrats leads to more rapid progress in developing basic research ideas into successful commercial products.
The question remains as to whether a national project of this magnitude is necessary to create these human networks each generation, or whether this first network will propagate itself without help from another project. An optimistic view has the networks weakening with age, but remaining in place; thus in the future it may not require such a grand-scale project to strengthen ties. For example, current ICOT graduates, understanding the importance of free and flexible discussion of results at national conferences, will increase the participation of the researchers in their care, thus enabling the next generation to form their own friendships and working relationships.
However, few ICOT people believe this scenario. Some believe that most ICOT researchers understand the importance of ICOT's contributions in this area only implicitly, not explicitly. Without explicit self-awareness, this metaknowledge may be lost unless another national project, or an equivalent, reinforces the lessons. The current generation of engineers, lacking an experience similar to ICOT, will be at a disadvantage to the ICOT generation. Communication will be strictly limited to technical conferences, where information flow is restricted. In this sense, ICOT did not create a revolution because it did not fundamentally change the manufacturers. The human networks will not be self-generating from the bottom up, by the few seedling managers trained at ICOT. Although a manager's own bias may be consistent with ICOT's flexible style of research and management, the higher one rises in the company hierarchy, the less managers tend to share this sentiment.
Either another project, or a radical restructuring of the diametrically opposed cultures of education and industry, will be required to propagate the advances made in the FGCS project. The Japanese, certainly amenable to hedging their bets, have already started on both avenues. A "sixth-generation" project involving massive parallelism, neural networks, and optical technologies is already under way. However, the research is distributed among many institutions, potentially lessening its impact. Furthermore, the Japanese Ministry of Education is currently making plans to approximately double funding for basic research in the universities (personal communication, K. Wada, Tsukuba University, June 1992).
On a more personal note, I highly respect the contribution made by the FGCS project to the academic development of the field of symbolic processing, notably implementation and theory in logic programming, constraint and concurrent languages, and deductive and object-oriented databases. In my specific area of parallel logic programming languages, architectures, and implementations, ICOT made major contributions, although perhaps the mixed schedule of advanced technology transfer and basic research was ill advised.
This basic research also led to a strong set of successful applications, in fields as diverse as theorem proving and biological computation. In a wider scope, the project was a success in terms of the research it engendered in similar international projects, such as ALVEY, ECRC, ESPRIT, INRIA, and MCC. These organizations learned from one another, and their academic competitiveness in basic research pushed them to achieve a broader range of successes. In this sense, the computer science community is very much indebted to the "fifth-generation" effort.
One Sunday morning during my stay in Tokyo, I was invited by some U.S. congressmen to a breakfast meeting at a plush Roppongi hotel. Since it was so early, I had to attend directly from Saturday night socializing, leaving me somewhat weakened. However, I was clear-headed enough to listen to the voices around the table stating what was wrong with the electronics/computer trade imbalance. I was perhaps the only attendee who was not a salesperson or a politician, and certainly the only one who was not quite "grokking" that "Big American Breakfast" sitting in front of me. When it was my turn to speak, I could not think of much to say: the issues were as large as billions in chip dumping and unfair markets, not collaborative research efforts. Well, collaboration is of long-term importance, I thought, the same as the basic research itself.
Acknowledgments
The author is now supported by an NSF Presidential Young Investigator award, with matching funds from Sequent Computer Systems Inc. My stay at ICOT was generously supported by Y.T. Chien and A. DeAngelis of the National Science Foundation. I would like to thank the numerous people who graciously helped me in writing this article.
For space considerations, the citations in this article have been restricted. Contact the author for the complete citations. •
References
1. Brand, P., Haridi, S. and Warren, D.H.D. Andorra Prolog-The language and application in distributed simulation.
New Gen. Comput. 7, 2-3 (1989), 109-125.
2. Committee on Physical, Mathematical, and Engineering
Sciences. Grand Challenges: High Performance Computing and
Communication. NSF, Washington D.C., 1991.
3. Dally, W.J. and Seitz, C. Deadlock-free message routing in
multiprocessor interconnection networks. IEEE Trans. Comput. C-36, 5 (May 1987), 547-553.
4. DeGroot, D. Restricted AND-Parallelism. In International
Conference on Fifth Generation Computer Systems (Tokyo, Nov.
1984). ICOT, Tokyo, pp. 471-478.
5. Foster, I., Olson, R. and Tuecke, S. Productive parallel programming: The PCN approach. Sci. Program. 1, 1 (1992).
6. Goto, A., Matsumoto, A. and Tick, E. Design and performance of a coherent cache for parallel logic programming
architectures. In International Symposium on Computer Architecture (Jerusalem, May 1989). IEEE Computer Society, Los
Alamitos, Calif., pp. 25-33.
7. Hermenegildo, M.V. An abstract machine for restricted
AND-parallel execution of logic programs. In International
Conference on Logic Programming. In Lecture Notes in Computer Science, vol. 225. Springer-Verlag, New York, 1986,
pp. 25-40.
8. Hirata, K., Yamamoto, R., Imai, A., Kawai, H., Hirano, K.,
Takagi, T., Taki, K., Nakase, A. and Rokusawa, K. Parallel
and distributed implementation of concurrent logic programming language KL 1. In International Conference on Fifth
Generation Computer Systems (Tokyo, June, 1992). ICOT,
Tokyo, pp. 436-459.
9. Imai, A. and Tick, E. Evaluation of parallel copying garbage collection on a shared-memory multiprocessor. IEEE
Trans. Parall. Distrib. Comput. To be published.
10. Lenat, D.B., Prakash, M. and Shepherd, M. CYC: Using
common sense knowledge to overcome brittleness and
knowledge acquisition bottlenecks. AI Mag. (Winter 1985).
11. Lillevik, S.L. The Touchstone 30 gigaflop DELTA prototype. In International Conference on Supercomputing. IEEE
Computer Society, Los Alamitos, Calif., 1991, pp. 671-677.
12. Lusk, E., Butler, R., Disz, T., Olson, R., Overbeek, R., Stevens, R., Warren, D.H.D., Calderwood, A., Szeredi, P.,
Haridi, S., Brand, P., Carlsson, M., Ciepielewski, A. and
Hausman, B. The Aurora Or-Parallel Prolog System. In
International Conference on Fifth Generation Computer Systems
(Tokyo, Nov. 1988). ICOT, Tokyo, pp. 819-830.
13. Makino, J. On an O(N log N) algorithm for the gravitational N-body simulation and its vectorization. In Proceedings of the 1st Appi Workshop on Supercomputing (Tokyo, 1987). Institute of Supercomputing Research, ISR Tech. Rep. 87-03, pp. 153-168.
14. Muthukumar, K. and Hermenegildo, M. Determination of
variable dependence information through abstract interpretation. In North American Conference on Logic Programming (Cleveland, Oct. 1989). MIT Press, Cambridge, Mass.,
pp. 166-168.
15. Nitta, K., Taki, K. and Ichiyoshi, N. Experimental parallel
inference software. In International Conference on Fifth Generation Computer Systems (Tokyo, June 1992). ICOT, Tokyo,
pp. 166-190.
16. Shapiro, E.Y., Ed. Concurrent Prolog: Collected Papers, vol.
1,2. MIT Press, Cambridge, Mass., 1987.
17. Taki, K. Parallel Inference Machine PIM. In International
Conference on Fifth Generation Computer Systems (Tokyo, June
1992). ICOT, Tokyo, pp. 50-72.
18. Terasaki, S., Hawley, D.J., Sawada, H., Satoh, K., Menju, S.,
Kawagishi, T., Iwayama, N. and Aiba, A. Parallel constraint
logic programming language GDCC and its parallel constraint solvers. In International Conference on Fifth Generation
Computer Systems (Tokyo, June 1992). ICOT, Tokyo,
pp. 330-346.
19. Tick, E. Parallel Logic Programming. MIT Press, Cambridge,
Mass., 1991.
20. Tick, E. A performance comparison of AND- and OR-parallel logic programming architectures. In International
Conference on Logic Programming (Lisbon, June 1989). MIT
Press, Cambridge, Mass., pp. 452-470.
21. Ueda, K. and Morita, M. A new implementation technique
for flat GHC. In International Conference on Logic Programming (Jerusalem, June 1990). MIT Press, Cambridge,
Mass., pp. 3-17.
22. Van Caneghem, M. and Warren, D.H.D., Eds. Logic Programming and Its Applications. Ablex, 1986.
23. Van Roy, P.L. and Despain, A.M. High-performance logic
programming with the Aquarius Prolog compiler. IEEE
Comput. Mag. (Jan. 1992), 54-68.
24. Yoshida, K., Smith, C., Kazic, T., Michaels, G., Taylor, R.,
Zawada, D., Hagstrom, R. and Overbeek, R. Toward a
human genome encyclopedia. In International Conference on
Fifth Generation Computer Systems (Tokyo, June 1992). ICOT,
Tokyo, pp. 307-320.
CR Categories and Subject Descriptors: C.1.2 [Processor
Architectures]: Multiple Data Stream Architectures (Multiprocessors); D.1.3 [Programming Techniques]: Concurrent
Programming; D.1.6 [Software]: Logic Programming; D.3.2
[Programming Languages]: Language Classifications-Concurrent, distributed, and parallel languages, Data-flow languages,
Nondeterministic languages, Nonprocedural languages; K.2 [Computing Milieux]: History of Computing
General Terms: Design, Experimentation
Additional Key Words and Phrases: Concurrent logic programming, Fifth Generation Computer Systems project,
Guarded Horn Clauses, Prolog
About the Author:
EVAN TICK is assistant professor at the University of Oregon.
Current research interests include parallel processing, compilation of concurrent languages, and computer architecture. Author's Present Address: Department of Computer Science and
Information Science, University of Oregon, Eugene, OR
97403; email:
[email protected]
In our introduction to this special section we stated that although the results of the Fifth Generation project do not measure up to the expectations it generated, nevertheless those involved with the project have a sense of achievement. What is the source of this discrepancy between the generally negative perception of the project and the generally positive feeling of the people who actually participated in it?
Perhaps the essence of this contradiction lies in the difference between the way the project was initially presented to the public and what the project really was about. The promoters of the project in Japan popularized it by promising to make the dream of artificial intelligence (AI) come true. This view was further amplified by scientists throughout the world, who capitalized on the fear of Japanese technological supremacy in order to scare their own governments into funding research. However, what the project was really about was evident very early to anyone who cared to find out. Ten years ago, one of us stated in this publication:
The smoke cleared when ICOT was formed, with Fuchi as its director. With the excuse of budget constraints, all ballasts were dropped, and a clear, coherent research project emerged: to build parallel computers, whose machine language was based on Horn-clause predicate logic, and to interface them to database machines, whose data-description and query language was based on Horn-clause logic.
The fancy artificial intelligence applications of the original proposal remain, serving as the pillar of fire that gives the true justification for building faster and better computers; but no one at ICOT deludes himself that in 10 years they will solve all the basic problems of artificial intelligence... Commun. ACM 26, 9 (Sept. 1983), 637-641.