
LOG/MATE ESP ASSISTANT A KNOWLEDGE-BASED SYSTEM FOR LOG ANALYSIS

A PROGRESS REPORT
E. R. (Ross) Crain, P.Eng.
D&S Petroleum Consulting Group / Alberta Research Council Joint Venture
Calgary, Alberta
403 845 2527 [email protected]
Distributed internally to D&S/ARC Sep 1986. This electronic version created Jan 2005.
Author's Note: The LOG/MATE ESP ASSISTANT joint venture research project with the Alberta
Research Council was suspended shortly after this was written and my involvement ceased. The
project was resumed later in 1987 and a deliverable product was completed in 1988, based on the
foundation described here. The earlier papers on LOG/MATE ESP and expert systems research
leading up to this work are available from the Publications section of this website.
This paper was never intended for external publication. It contained proprietary trade secrets and
details of management, planning, and financing problems that were not for public consumption.
However, 20 years have passed and the experiences recounted may be instructive today. ERC Jan
2005.

ABSTRACT
This paper reviews the plans and progress achieved to date on research, design, and implementation
of a prototype knowledge-based (expert) system for log analysis, being developed by the author and
his colleagues, under the terms of a joint venture agreement between D&S Petroleum Consulting
Group Ltd. and the Alberta Research Council in Calgary, Alberta, Canada.

THE PRIMARY GOAL - A Commercially Viable Knowledge-Based System For Log Analysis

The primary goal of the LOG/MATE ESP ASSISTANT project was to develop a knowledge-based
system capability for the existing LOG/MATE ESP log analysis package. Implicit in this goal is the
requirement that the final program be commercialized; that is, it must be finished and tested in a
delivery environment during the term of the joint venture contract, be saleable immediately thereafter
in the current oil and gas marketplace, and offer a significant competitive advantage to D&S
Petroleum Consulting Group Ltd. and its customers.
Such programs are often called expert systems, but most do not achieve the capabilities of a true
domain expert, so the term knowledge-based or rule-based system is more appropriate and more
descriptive.
Our definition of a knowledge-based system, paraphrased from "Rule-Based Expert Systems" by
Bruce G. Buchanan and Edward H. Shortliffe, Addison-Wesley, 1984, is the following:
1. The program should be useful, and should meet a specific need for which expert assistance is
normally required. (Log analysis is certainly one of these applications.)
2. The program should be usable, even by novices. (This is one of the major reasons the rule bases
are needed for log analysis.)
3. The program should be educational where appropriate, so that non-experts can learn by its use.
4. The program should be able to explain its advice, so that a user can decide whether to accept the
advice.
5. The program should be able to respond to simple questions, such as why information is needed.

6. The program should be able to learn new knowledge, by observation and by asking questions.
(Most systems do not remember this new knowledge for use on the next problem, and it must be
specifically entered into the data or rule bases for later use.)
7. The program's knowledge should be easily modifiable, to account for new knowledge or methods.
Our decisions about the tools and methods to use were predicated on these seven principles, plus
the engineer's more pragmatic KISS principle (keep it simple and small). Obviously, there are many
ways in which to create such a system, and still honour these rules. We have spent considerable time
and effort evaluating these factors and have made a number of arbitrary but pragmatic decisions
which naturally flavour the result.

THE PRIMARY GOAL - Design Criteria


Functionally, one could think of LOG/MATE ESP as a super-spreadsheet, as the functions, use, and
flexibility are similar in many ways to modern, sophisticated spreadsheet packages, such as
Symphony and Framework. The existing program has the following major components:
1. data base
2. algorithm processor
3. interactive graphics (depth and crossplots)
4. report generator
5. data communications
The knowledge-based program will be imbedded in the algorithm processor, which also will be
extended to offer run stream information to graphics and report generator modules so that automatic
output of results is achieved.
For LOG/MATE ESP ASSISTANT, the knowledge-based component involved creating a program which
would control the parameter selection, computation, and output functions, and act as an assistant or
advisor to the user. It is an intelligent interface between the user and the existing complex program.
The following components have been designed and, to date, the first two have been tested:
1. a rule base for selection of the appropriate log analysis algorithms, based on available data and
borehole environment,
2. a rule base for selection of the necessary analysis parameters, based on the chosen algorithms,
as well as known and derivable rock and fluid properties,
3. result analysis, or iteration rules, to handle re-runs when results do not match ground truth, such
as sample description, core, and DST data,
4. a heuristic search process to create the appropriate zonation of the well,
5. system usage rules to guide the user through the options in a logical sequence, prompting for
missing data, and checking data validity, based on an initial survey of the available well data,
formation and fluid characteristics of the zone, and the user's expertise.
Integration of the first two rule bases with the existing program is just now being undertaken. The
remaining three are fairly well defined and will be tested and implemented during the current year.
These rule-based functions lead to automatic first-pass analysis, which could be improved by
successive passes using iteration rules and the immense power of the very flexible, interactive
processing capabilities of LOG/MATE ESP. In fact, it is this very flexibility which necessitates the
initial screening and option selection rule base, as we have found that only expert users can benefit
from the system's total capabilities. Less experienced or infrequent users cannot retain enough
operating knowledge to take full advantage of this powerful package.
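To make the intended control flow concrete, the fragment below is a minimal sketch in C of how the rule bases could drive a bounded first-pass-and-iterate loop. All function names and values are illustrative assumptions, not the actual LOG/MATE ESP ASSISTANT routines.

```c
/* Hypothetical sketch of the rule-driven first-pass loop; all functions are
 * illustrative stubs, not the actual LOG/MATE ESP ASSISTANT implementation. */
#include <stdio.h>

typedef struct { double porosity; double water_saturation; } Results;

/* Rule base 1: pick analysis methods from available data and hole condition (stub). */
static void select_algorithms(void) { }
/* Rule base 2: pick parameters, e.g. from the historical data base (stub). */
static void select_parameters(void) { }
/* Algorithm processor: returns dummy results for the sketch. */
static Results run_analysis(void) { Results r = { 0.12, 0.45 }; return r; }
/* Ground-truth check against core, DST, and sample data (stubbed). */
static int matches_ground_truth(Results r) { return r.porosity > 0.0 && r.water_saturation < 1.0; }
/* Rule base 3: adjust parameters for the next pass (stub). */
static void apply_iteration_rules(void) { }

int main(void)
{
    Results r = { 0.0, 0.0 };
    int pass;

    select_algorithms();
    select_parameters();

    for (pass = 1; pass <= 3; pass++) {   /* a bounded number of re-runs */
        r = run_analysis();
        if (matches_ground_truth(r))
            break;
        apply_iteration_rules();
    }
    printf("porosity %.2f, Sw %.2f after pass %d\n",
           r.porosity, r.water_saturation, pass > 3 ? 3 : pass);
    return 0;
}
```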

THE SECOND GOAL - A Historical Database


The program initially runs at the assistant level, and when properly tuned to an area, may run at the
advisor or expert level. This is accomplished by providing a historical data base of known facts, such
as the physical properties of rocks and fluids, and log analysis parameters previously used in the
area.
Our second goal was thus to provide the textbook, parameter, and procedural data for Western
Canada, and at least textbook data for other areas. The historical data base will be augmented by the
user at each installed site, so that empty areas grow, or learn, from use of the system. With support
from users, the parameter data for a wide area of the world could be shared.
A historical data base for Western Canada has been prepared. This data represents the log data,
results, methods, and parameters used by analysts to solve the standard log analysis algorithms in
this area, sorted by formation name and locality. It includes approximately 900 model zones, drawn
from our previous analyses. These wells all have good ground truth data, so that analysis parameters
and methods can be validated.
This database will learn from an expert's use of the system, and could be called a teachable database.
It will not learn everything, but only those things we wish it to learn. The learning function will be
provided by an application program which will update the historical data base upon user command.
This update facility will add parameter values used successfully by the analyst since the last update,
provide a mapping facility for data evaluation, and an editing feature to remove or correct
inconsistent data. Coding of this function has not yet begun, but is planned for this year.
It would thus be possible for experts to share local knowledge among many users, and to provide
less experienced users with a good starting point for their analyses. It also serves as the perfect
memory for both advanced and novice users.
Systems sold locally could contain a considerable amount of data since it would be readily available
from our own files. Those sold internationally would likely be delivered with an empty database,
except for universally accepted rock and fluid properties and rational default values for all other
parameters. These would be updated by the software as analyses are run, preferably under the
control of knowledgeable analysts.
An integral part of this enhancement will be a parameter picking feature, so that parameter values can
be extracted from the historical data base, as well as from depth plots and crossplots of current data,
for use in analysing the current well.
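As a hedged illustration of what parameter picking from the historical data base might involve, the C fragment below looks up a previously used Rw by formation name and falls back to a rational default. The record layout, formation names, and values are assumptions for the example, not entries from the actual data base (which is held in RDS - Informix).

```c
/* Hypothetical parameter-picking sketch; field names, formations, and values
 * are illustrative, not taken from the actual historical data base. */
#include <stdio.h>
#include <string.h>

typedef struct {
    char   formation[32];   /* formation name */
    char   locality[32];    /* area identifier */
    double rw;              /* formation water resistivity used previously (ohm-m) */
    double matrix_density;  /* matrix density used previously (kg/m3) */
} HistoricalRecord;

static const HistoricalRecord history[] = {
    { "Viking",  "Central Alberta", 0.10, 2650.0 },
    { "Cardium", "Pembina",         0.08, 2680.0 },
};

/* Return a previously used Rw for the formation, or a textbook default. */
static double pick_rw(const char *formation)
{
    size_t i;
    for (i = 0; i < sizeof(history) / sizeof(history[0]); i++)
        if (strcmp(history[i].formation, formation) == 0)
            return history[i].rw;
    return 0.05;   /* rational default when no local history exists */
}

int main(void)
{
    printf("Rw for Viking:  %.2f ohm-m\n", pick_rw("Viking"));
    printf("Rw for Unknown: %.2f ohm-m\n", pick_rw("Unknown"));
    return 0;
}
```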

THE THIRD GOAL - A Portable Vehicle To Carry The System


The current system, LOG/MATE ESP, runs on HP 200 and 300 series desktop computers (Motorola
68000 16-bit and 32-bit processors) in HP Extended Basic, which includes its own operating system.
Alpha and graphics screen handlers, and the algorithm processor, are written in assembler or
compiled Pascal for speed. This system is not portable to any other hardware or software
environment.
Our target delivery vehicles for LOG/MATE ASSISTANT are IBM/PC-AT and RT equivalents,
microVAX II, HP-300 and equivalents (Sun and Apollo), and other similar high powered desktop
workstations. Therefore our third goal was to convert the system to run under a standard subset of C
language, which is relatively portable, has a wealth of supporting software, and seems to have a
relatively stable future. We intend to support installations that run C language object code, such as
Unix, Xenix, or VMS operating systems. We cannot justify down-grading LOG/MATE ESP performance
to MS-DOS, with its 640 K memory limitation.
Without this conversion there would be no commercial vehicle to carry the results of our knowledge-based
system research, at least not within the time and budget constraints of the current joint
venture. We have thus weighed this approach against others, and found the alternatives lacking due
either to high cost, lack of time, talent, and manpower, or to poor chance of success in the marketplace.
To reduce coding, conversion, and maintenance efforts, we have used available fourth generation
languages as much as possible. In particular, a decision has been made to use the RDS - Informix
relational data base rather than rewrite our own DB.

Use of the commercially available relational database was found to be the most efficient mechanism
to aid our conversion and portability, and satisfy client demand for a database which could be read by
other programs they may already own. RDS is available on a large variety of target delivery vehicles,
is maintained and updated by others, and has a very broad user base and acceptance level. Use of
such a tool has reduced our programming effort considerably, although we elected to create our own
internal data representation structure in C language. This was done so as not to be tied directly to the
RDS human interface, which requires more user knowledge than we would wish upon our casual
users.
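The actual structure is proprietary and is not reproduced here; the fragment below is only a rough guess at the kind of C record that could hold a log curve independently of the RDS human interface. All field names are assumptions for illustration.

```c
/* Illustrative guess at an internal curve record in C; the real LOG/MATE ESP
 * structure is proprietary and certainly differs in detail. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char    mnemonic[8];   /* curve name, e.g. "GR" or "RHOB" */
    char    units[8];      /* engineering units */
    double  top_depth;     /* depth of the first sample */
    double  step;          /* sample interval */
    long    n_samples;     /* number of samples */
    double *samples;       /* sample values loaded from the RDS data base */
} Curve;

static Curve *curve_alloc(long n_samples)
{
    Curve *c = calloc(1, sizeof(Curve));
    if (c != NULL) {
        c->n_samples = n_samples;
        c->samples = calloc((size_t)n_samples, sizeof(double));
        if (c->samples == NULL) { free(c); c = NULL; }
    }
    return c;
}

static void curve_free(Curve *c)
{
    if (c != NULL) { free(c->samples); free(c); }
}

int main(void)
{
    Curve *gr = curve_alloc(1000);
    if (gr != NULL) {
        printf("allocated a curve with %ld samples\n", gr->n_samples);
        curve_free(gr);
    }
    return 0;
}
```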
We have decided to utilize the GKS graphics kernel to rewrite our interactive graphics modules. A
commercially available package (Visual:GKS) provides GKS primitives and a library of device-dependent
drivers. The device driver library is itself an expert system, making a total of six that will be
imbedded in our system. This reduces our effort in attempting to be machine independent. As well,
this package uses the RDS data base and is available on a large variety of our target delivery vehicles.
We have decided also to use Rulemaster, a decision table approach to expert system building, to
create and run the rule bases in our prototype system. However, we are continuing research (as time
and resources permit) on data representation and rule processing on the Symbolics 3600 computer
using the more powerful KnowledgeCraft expert system environment. Future releases of the
LOG/MATE ESP ASSISTANT may use a delivery version of this kind of expert system shell.
Multiple active windows, using commercially available window management packages, have yet to be
added to the screen handler of LOG/MATE ESP to improve the human interface. However, we are now
able to move the contents of a data element in one screen to another, which reduces the need for this
feature in the near term.
The C language conversion is approximately 70% complete and all modules converted to date have
been thoroughly tested. Approximately 50 lines of assembly language code are needed for rapid
screen handling for each new system configuration.
Integration of the Rulemaster, RDS database, and GKS kernel will begin in October, 1986.

THE FOURTH GOAL - Expanded Data Communications


Data communication is a catchall term for all functions relating to moving data in and out of
LOG/MATE ESP. We have recognized, for over ten years, that most customers are interested in, and
demand, some form of data transfer between their log analysis workstations and their mainframe data
bases. As well, they want professional level security (backups) and small local area networks with
shared central data bases; functions similar to those usually associated with mainframe operations.
This concept has been imbedded in LOG/MATE products since their inception in 1976, and these
capabilities continue to be expanded.
An extensive copy/backup/archive/restore facility exists and input of log data from Schlumberger LIS
tape is available either by reading tapes at the workstation or from central site data files. Reading of
Dresser BIT tapes will be available in the near future.
The system can now be placed in a small local area network for multi-user operation using IEEE-488
(HP-IB) and special shareable disc drives (Bering). It can now operate on HP's Shared Resource
Manager, a uniquely HP version of a local area network.
Data communication to remote computers is now via RS-232, and IEEE-802 (Ethernet) will be added.
Ethernet is supported by HP, DEC, IBM, and some other vendors only on UNIX-based workstations,
so again a UNIX operating system is required. The networking capability of the system thus will
depend entirely on the Unix/Xenix environment, and not on code that we write ourselves. We do not
plan to implement a mainframe-terminal environment immediately, but will do so upon customer
demand.
Additional features to make datacomm more friendly are being added, such as an inverse report
generator to transfer data from a foreign data file into LOG/MATE files. The existing report generator
is used to create ASCII files for transmission to a remote computer. All existing and new data comm
code will be written in C. An IBM 3270 emulator will be acquired or written to facilitate high speed
transmission to IBM mainframes. Transmission to DEC and other mainframes can be adequately
handled by the RS-232 and Ethernet protocols.
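As an illustration of the kind of work the planned inverse report generator must do, the sketch below reads whitespace-separated depth/value pairs from a foreign ASCII file and counts the samples transferred. The file name, layout, and handling are assumptions for the example only, not the actual LOG/MATE file format.

```c
/* Hypothetical sketch of reading a foreign ASCII file of depth/value pairs;
 * the file layout is assumed for illustration only. */
#include <stdio.h>

int main(void)
{
    FILE  *in = fopen("foreign_data.txt", "r");
    double depth, value;
    long   count = 0;

    if (in == NULL) {
        fprintf(stderr, "cannot open foreign_data.txt\n");
        return 1;
    }
    while (fscanf(in, "%lf %lf", &depth, &value) == 2) {
        /* here each pair would be written into a LOG/MATE curve record */
        count++;
    }
    fclose(in);
    printf("transferred %ld samples\n", count);
    return 0;
}
```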
Since the data base will be in RDS format it could be accessed directly from foreign programs without
the necessity of going through LOG/MATE ESP.
These data communications features are widely used on our present systems, and will be integrated
into the knowledge-based system as they are completed. The data comm work is not part of the D&S
joint venture with ARC, but is being undertaken by D&S to provide greater marketability of the new
product.

FUTURE RESEARCH AND DEVELOPMENT - The Budget/Goals Dilemma


Many choices have been made so that we could create a viable program within the budget, manpower
capabilities, and time frame available to us. Compromises are always required in this situation
and decisions must always be made in the face of incomplete and conflicting information. We have
attempted to address the seven basic principles elucidated in our primary goal, and believe that our
present plan and objectives will satisfy them.
We will be able to finish building a program which will carry the five expert system components in a
viable, portable program for log analysis, geological, and engineering applications on the current
breed of workstations already on the desktops of many exploration and development personnel. This
is not a bad result.
Time and budget constraints dictated, to a large degree, how sophisticated our software research and
planning could be. Our market research also indicates, with the current slump in oil prices, that few
sophisticated, relatively expensive, systems could be placed in the next three to five years, especially
when so many IBM/PC's and clones already exist in our target market.
It should be noted that many changes were made in the original plan, which of course was based on
only a hazy definition of the problems to be solved and limited knowledge of the available tools,
potential personnel, and even the subject domain. The initial six months were mainly involved in
defining the problem, acquiring suitable personnel, training them to the domain and the environment,
testing software tools, and defining tentative knowledge-based solutions to the log analysis problem.
The second six months were devoted to testing two major tools and to designing a portable program
environment to carry the results of this work. The remaining year of the project will complete the
tasks of program conversion and integration of rule bases and relational data bases. This scenario
was not at all similar to the original plan.
The most obvious fault in our initial plan was lack of a "learning curve" for all participants. It was
assumed that we would merely begin designing and implementing a knowledge-based system using
classical textbook techniques without first learning the domain and testing the tools. None of our
advisors could demonstrate at an early stage a clear-cut, unequivocal route to a successful
commercial product, because there are very few, or maybe no such, products to emulate.
Although many expert systems, or knowledge-based systems, have been described in the literature,
the vast majority are used only in-house, by well-trained users, on very small problem domains, and
are not sold for commercial gain to out-of-house customers. In addition, only Schlumberger's
Dipmeter Advisor seemed to address a problem domain as large as ours, and the tool building
involved in that project was far beyond our capabilities. The only hybrid rule-based system analogous
to our current plan was Amoco's ELAN program, which was neither commercial nor accepted by
Amoco's branch offices as a viable alternative to conventional methods.
This knowledge, gained by a painfully slow literature search and tool exploration process, led to the
current problem definition and solution path.
The second major fault in the original plan was the under-estimation of the work involved in creating a
pilot program, or test bed, for the results of our knowledge-based research. Since this was clearly
part of our mandate, it could not be ignored. Therefore we added resources to bring this work on
stream sooner than planned, and it will absorb a larger fraction of the total project funding than
planned. Also, we intend to have a larger involvement of D&S marketing efforts sooner than expected
to raise the chances of commercial success.
We have been criticized for not using more sophisticated tools and for our emphasis on commercial
aspects at the apparent expense of more elaborate research. Our rationale is based on the fact that
the success or failure of the project will be judged primarily on its performance in the marketplace,
and not on the elegance, cost, or ultimate capabilities of the inference engine.
No one should be ashamed of the calibre of the original plan, for the problems stated above are quite
realistic results when engaged in leading-edge research with a commercial goal, a strict budget, and a
deadline to meet. However, these obvious facts should be a warning to others not to expect instant
results, clearly defined paths, or even unanimous agreement as to methodology or goals in a project
of this type. The same warning should be heeded as we prepare plans and budget for the coming
year.
We would certainly like to do more research and development on the KnowledgeCraft system. Indeed,
we would be foolhardy in not planning further work on several fronts. Lynn Sutherland, one of the
project members, concludes her report with the following: "Traditional environments cannot easily
support the process of reasoning, flexible programs, or friendly human interfaces. Perhaps the
established commercial and professional world is not yet ready to accept the ease of use provided by
non-traditional environments such as Macintosh and Lisp machines, but professional graduates now
are computer literate and will demand flexible, integrated, and compatible software. We have the
people and the equipment in place to research a sophisticated expert system problem. We may feel at
this time that we are ahead of commercial acceptance of such a development. We probably are, but
just a little. Eventually commercial competition will force the decision to accept more powerful, non-traditional tools."
Frankly, the current joint venture does not have sufficient resources to bring about the goal
suggested by Lynn. The joint venture participants should therefore seriously consider a further two
year research and development program to pilot test a KnowledgeCraft version of LOG/MATE ESP
ASSISTANT for the high-end user. The timing would be right, as the oil industry should rebound from
the current slump over this period, and our efforts to date on the KnowledgeCraft version show considerable
promise.
However, the full ramifications of trying to mount the existing and planned capabilities of LOG/MATE
ESP in the KnowledgeCraft environment have not yet been investigated thoroughly enough to write a plan for the complete program.
We need to know a great deal more about our results to date on KnowledgeCraft. Deeper investigation
of a relational data base interface to the Lisp workspace, conversion of bit-mapped pixel location to
scaled data values, and the massive list of ESP features to be re-designed and re-coded is required.
This work could be done under the terms of the present joint venture, but should only be undertaken
if the two year extension is considered highly probable.

FUTURE APPLICATIONS - The Long Term Business Plan


Anyone who has followed the fate of the oil industry lately knows that it is a fickle marketplace. For
this reason, much of the LOG/MATE ESP system has been designed with a great deal of generality.
For example, the algorithm processor, crossplots, usage rules, and parameter selection rules would
be equally at home with cash flow, pressure analysis, civil engineering, process engineering, and a
host of other engineering and scientific applications that require both super-spreadsheet functions
and knowledge-based control or advice. Therefore we propose to use the log analysis environment to
demonstrate other scientific domains, with the view to marketing systems in those areas in the future.
We believe that LOG/MATE ESP ASSISTANT, with suitable enhancements, will be an excellent
knowledge-based shell or tool, for use in a much broader field than well log analysis or even the oil
industry.
To demonstrate the enormous potential of such a software package, we have prepared a 10-year
software development, marketing, and business plan with many innovative management and
communications concepts. This far-reaching plan could revolutionize both the way this kind of work
is carried out and the actual product the customer uses.

CONCLUSIONS
We are confident that the last twelve months' effort by our team of eight full-time staff and two
part-time advisors has generated a workable, achievable design. We have tested many of the components
and are satisfied with most of the results to date. We have set a target date of December 1986 for our
first commercially releasable HP-300 based version and January 1987 for the first IBM/PC version.
They will lead log analysis and related geological and engineering data processing into new realms of
capability, and still be friendly and easy to use. At the same time, we are continuing to investigate
more powerful tools to represent our knowledge and to process it in new ways which may enhance
productivity even more on future releases.

APPENDIX ONE - Summary of Research Completed to Date


A major research effort involved extracting analysis rules and methodology from an expert in log
analysis. Much of this work was brought to the project in usable form by D&S due to the two-year
buildup before the actual joint venture contract was signed.
Log analysis rules are of five distinct kinds:
1. algorithm usage rules
2. parameter selection rules
3. iterative or re-analysis rules
4. zonation rules
5. system usage rules
These rules, or heuristics, are being coded into rule bases which can be used to guide analysts to the
correct procedure for a particular problem. Many of the rules have already been codified in the
author's textbook, "The Log Analysis Handbook", and in the LOG/MATE ESP User Documentation
manual. Rules can be generic or location-specific, but this fact must be identified within the rule.
Unstated rules have been elicited by interaction between the expert and the knowledge engineer.
More than one expert will eventually be involved in this work, as the log analysis expert may not be an
expert system user or geologist, and vice versa.
Algorithm usage rules are based on the availability of log data and constraints concerning hole
condition, borehole and formation fluid type, rock type, and tool or algorithm resolution. They are
intended to provide the best initial set of algorithms to use. For example, any method which uses the
density log is not used when hole condition is bad.
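That bad-hole rule can be stated as a simple predicate. The sketch below is an illustrative encoding in C, not the Rulemaster decision-table form actually used in the project; the flag names are assumptions.

```c
/* Illustrative encoding of one algorithm usage rule: methods that depend on
 * the density log are rejected when hole condition is bad. Not the actual
 * Rulemaster rule base. */
#include <stdio.h>

typedef struct {
    int has_density_log;  /* density curve is present */
    int bad_hole;         /* caliper shows washouts or severe rugosity */
} Environment;

/* Rule: allow a density-based method only when the log exists and the hole is good. */
static int density_method_allowed(const Environment *e)
{
    return e->has_density_log && !e->bad_hole;
}

int main(void)
{
    Environment e = { 1, 1 };   /* density log present, but bad hole */
    printf("density porosity method allowed: %s\n",
           density_method_allowed(&e) ? "yes" : "no");
    return 0;
}
```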
Parameter picking rules are also fairly well defined and tie directly to the historical database, as well
as to existing LOG/MATE ESP features such as depth plot and crossplot interactions. These rules are
described in various chapters of the textbook, and again are intended to produce the best initial or
default values for any job.
Iterative rules are based on result analysis and numerous heuristics about algorithm usage,
parameter selection, data editing, and comparison to ground truth. This is where the real expertise of
the experienced log analyst lies. These are the most difficult rules to codify, and we have only a small
set of these at the moment. They are based on a stepwise evaluation of how well the shale volume,
porosity, water saturation, lithology, and productivity calculated for the zone match the available core,
DST, rock sample, and fluid production data from the same well and those around it.
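As a hedged example of the simplest possible iteration check, the fragment below compares computed porosity to core porosity over a zone and flags a re-run when the difference exceeds a tolerance. The two-porosity-unit tolerance is an assumption for illustration, not a project standard.

```c
/* Illustrative iteration rule: compare computed porosity to core porosity
 * and signal a re-run when they disagree by more than a tolerance.
 * The 0.02 (two porosity unit) tolerance is an assumption. */
#include <math.h>
#include <stdio.h>

static int needs_rerun(double computed_phi, double core_phi)
{
    const double tolerance = 0.02;                  /* two porosity units */
    return fabs(computed_phi - core_phi) > tolerance;
}

int main(void)
{
    double computed = 0.14, core = 0.18;
    if (needs_rerun(computed, core))
        printf("porosity mismatch: re-run with adjusted shale or matrix parameters\n");
    else
        printf("porosity matches core within tolerance\n");
    return 0;
}
```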
Rules for formation zonation are described adequately in the literature. Three methods are available,
and after suitable testing, we will invoke the one which fits our system the best.
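Purely as a generic illustration of what a zonation pass must produce, and explicitly not one of the three literature methods referred to above, the sketch below breaks a gamma ray curve into zones wherever consecutive samples jump by more than an assumed threshold. The data and threshold are invented for the example.

```c
/* Generic illustration only: split a log curve into zones at large jumps
 * between consecutive samples. This is NOT one of the literature methods
 * referred to above; threshold and data are invented for the example. */
#include <stdio.h>

int main(void)
{
    const double gr[] = { 20, 22, 21, 85, 88, 90, 30, 28 };  /* gamma ray samples */
    const int    n = sizeof(gr) / sizeof(gr[0]);
    const double threshold = 40.0;                            /* API units, assumed */
    int i, zone = 1;

    printf("sample 0 -> zone %d\n", zone);
    for (i = 1; i < n; i++) {
        if (gr[i] - gr[i - 1] > threshold || gr[i - 1] - gr[i] > threshold)
            zone++;                     /* start a new zone at a large jump */
        printf("sample %d -> zone %d\n", i, zone);
    }
    return 0;
}
```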
The system usage rules are necessary to lead all classes of users through the system, to solicit facts
and data known to the user but not yet known to the system, to provide the best educated guess at
the first pass parameters, and to run the first pass through to finished plots and reports. From this,
the iteration rules could take over to guide novice users to refined final results.
The system will have to be flexible enough to allow experienced users to add or change rules in all
five categories, because many rules vary between analysts and between localities. Therefore, some
investigation of appropriate rule-managing tools, or inference engines, such as Rulemaster, KEE,
KnowledgeCraft, and hard-coded LISP, has been undertaken.
A prototype inference engine was developed operating on the Symbolics 3600 using the
KnowledgeCraft shell, both specially acquired for the task. This work has been temporarily
suspended, pending a detailed review of what we have learned to date. A second prototype, using
Rulemaster as an inference engine, has been tested with a more extensive rule set than that used in
KnowledgeCraft.
We have successfully implemented usage and parameter selection rules in both Rulemaster and
KnowledgeCraft. Rulemaster will be used in the initial prototypes because it is available on the
lower-cost workstations we plan to market, and the rule description interface is easy to teach to
high-level users who may wish to modify the actual rules for their own areas. KnowledgeCraft has a much richer
data representation methodology, but the human interface requires considerably more work by our
development staff before it could be used by potential customers. It is also 60 times more expensive,
and is not yet available on our target machines, although both availability and a lower
delivery-version price have been promised.
We have experimented with iteration rules in KnowledgeCraft, and have determined that if a rule can
be defined by an expert, it can be represented and invoked. Similar rules can be phrased in
Rulemaster, but at the time of writing, they have been tested only on paper.
Conflict, completeness, and consistency issues are still being resolved manually, as no available
tools cover these problems adequately. We will have to trust the initial expert and subsequent users
to behave rationally, or to be smart enough to find their errors and correct them. This is similar in
many ways to debugging problems in conventional programming.
We have also looked at ART and KEE, two other high end expert system tools, which promise
execution versions on at least HP and VAX hardware as well as LISP machines. We have also tested
material from Xerox using Interlisp-D and Notecards as a data base. We have modified the proprietary
software of one of our team for experimental purposes. Because of this extensive investigation, we
feel confident in our choice of Rulemaster for our early prototypes, but have left our options open for
future enhancements with more powerful tools. Since we have limited research resources, we must
produce working software in evolutionary steps, and cannot wait for the perfect solution.
Numerous other research activities were carried out and are covered in separate reports provided by
the individual researchers. These reports are appended to this progress report. Some of the topics
include details on database and graphics on the Symbolics, relational data bases on LOG/MATE and
Rulemaster, IBM-PC environment, Rulemaster and RADIAL environment, KnowledgeCraft
environment, UNIX environment, and integration issues. In addition a large library of technical papers
on expert systems and many related topics has been gathered and is available for review by
interested parties.

APPENDIX TWO - Research and Development to be Completed


A number of components of the knowledge-based version of LOG/MATE ESP are essentially
complete, although finishing touches and integration are currently underway. These components are:
1. historical data base of model analyses on LOG/MATE ESP
2. textbook algorithm usage and parameter selection rules on Rulemaster
3. textbook algorithm usage and parameter selection rules on KnowledgeCraft
4. data representation structure on KnowledgeCraft
5. working algorithm parser on KnowledgeCraft
6. human interface on KnowledgeCraft
7. data base on RDS - Informix
8. data structure in C on HP-300
9. alpha and graphics screen handlers in C and assembler on HP-300
10. Rulemaster to RDS link tested
11. Rulemaster to invoke processes or algorithms tested
12. UNIX environment on HP tested and debugged
13. report generator module converted to C
14. user documentation

The work yet to be completed is:


1. create historical data base summary file from model zones, create update (learning) feature for
summary file
2. create textbook parameter file of rock and fluid properties
3. create module for parameter extraction from historical data base summary, textbook parameter
file, depth plots, and crossplots
4. define and create mapping and contouring module for historical parameters and/or log analysis
results
5. amplify parameter selection rules for real data
6. amplify iteration rules for real situations
7. amplify algorithm selection rules for real algorithms
8. amplify system usage rules for actual system configuration
9. create zonation module to reduce zone definition problem
10. acquire Visual:GKS software, convert graphics modules to C/GKS
11. convert enter/edit and compute modules to C
12. integrate Rulemaster, RDS, and C code
13. define HP delivery configuration, test and debug on HP-300
14. define and acquire appropriate IBM hardware and operating system, test and debug on IBM/AT
with and without accelerator board
15. set up marketing person, his/her training and indoctrination
These items are essential for a successful commercial product. Marketing could begin the day these
functions are tested together, along with the components already essentially completed. However
marketing training and indoctrination must begin immediately so as to carry the momentum of the
project forward into the commercialization phase.
Continued research and development in the following areas should also be undertaken to provide a
springboard for future releases of the product.
1. define better window management package
2. acquire networking hardware and software for HP and IBM configurations
3. enhance readability of Rulemaster explanations (brevity)
4. complete data communications and networking features
5. complete KnowledgeCraft version if time and budget permit and if marketing considerations
suggest that it is a good idea
6. enhance prototype with results of KnowledgeCraft work
7. add other reservoir engineering functions to ESP algorithm data base, modify plot, crossplot,
and compute if necessary
8. add seismic functions to ESP algorithm data base

ABOUT THE AUTHOR


Mr Crain is a Professional Engineer with over 35
years of experience in reservoir description,
petrophysical analysis, and management. He has
been a specialist in the integration of well log
analysis and petrophysics with geophysical,
geological, engineering, and simulation phases of oil
and gas exploration and exploitation, with
widespread Canadian and Overseas experience. He
has an Engineering degree from McGill University in
Montreal and is a registered engineer in Alberta. He
wrote The Log Analysis Handbook, published by
Pennwell, and offers seminars, mentoring, or
petrophysical consulting to oil companies,
government agencies, and consulting service
companies around the world.
Ross is credited with the invention of the first
desktop log analysis system (LOG/MATE) in 1976, 5
years before IBM invented the PC. He continues to
advise and train people on software design,
implementation, and training. For his consulting
practice, he uses his own proprietary software (META/LOG), and is familiar with most commercial
systems.
He has won Best Paper Awards from CWLS and CSEG and has authored more than 30 technical
papers. He is currently building an Interactive Learning Center for petrophysics on the World Wide
Web. Mr Crain was installed as an Honourary Member of the Canadian Well Logging Society for his
contributions to the science of well log analysis.
