Systems Engineering Guidebook - ISBN 9780692091807
Limit of Liability/Disclaimer of Warranty: While the author has used his best efforts in preparing
this book, he makes no representation or warranty with respect to the accuracy or completeness
of the contents of this book and specifically disclaims any implied warranties of merchantability
or fitness for a particular purpose. The advice and strategies contained herein may not be
suitable for your specific situation. You should consult with a professional where appropriate.
The author shall not be liable for any loss or other damage including but not limited to special,
incidental, consequential or other damages.
Projects like this demand a lot of time and extensive experience since they go outside of the
"normal" Systems Engineering set of processes into the enabling activities. My sincere gratitude
and appreciation to colleagues who contributed to this book. My special thanks to Rebecca
Falcon and Charlie Spillar who worked with me throughout the entire project and really aided in
its completion.
Comments
Comments for update/revision are welcome from any interested party. Suggestions for change in
the document should be in the form of a proposed change of text, together with appropriate
supporting rationale. Please use the feedback form that is provided at the end of this document.
Comments and requests for interpretations should be addressed to the author:
David C. Hall
Toney, AL 35773
[email protected]
[1] www.incose.org/AboutSE/WhatIsSEC
1.0 Introduction
This Systems Engineering Guidebook is a template describing “What” to do to successfully
employ the Systems Engineering Process to implement and apply appropriate (effective and
efficient) Systems Engineering processes and activities. Using this guide to implement Systems
Engineering capability also allows use of the Systems Engineering Maturity Model described in
Document HA2017-02. The Appendices (Section II) contain a description of the members of a
SEIPT, a suggested metrics spreadsheet, and a list of reference documents that anyone can obtain
and review if necessary. Note that once you develop an integrated timeline, you can implement in
stages, one process or activity at a time, based on that integrated implementation timeline.
Implementation can be accomplished this way, but it may be more efficient (depending on
available resources and necessary culture changes) to implement multiple processes and
activities at the same time. Standards and handbooks address life cycle models and SE processes
and activities [2] that may or may not fully apply to a given organization and/or project. The
objective of this Guide is to ensure that the Systems Engineering process and activity set meet
the needs of the Enterprise/Organization/Program/Project while being scaled to the level of rigor
that allows the system life cycle activities to be performed with an acceptable level of risk and to
be measured. All SE processes and activities must be tailored so that the rigor applied matches
the need. While all SE processes and activities apply to all life cycle stages, tailoring
determines the level of each process/activity that applies to each stage, and that level is never
zero: there is always some effort in each process and activity in each stage.
At the enterprise or organizational level, the tailoring process adapts external or internal
standards in the context of the enterprise or organizational processes to meet the needs of the
[2] Note that the most useful overall references are the INCOSE Systems Engineering Handbook and the Systems Engineering Body of Knowledge. See the reference list for additional documents.
enterprise/organization. At the program level, the tailoring process should adapt enterprise or
organizational SE processes and activities to the unique needs of the program [3].
The objective of this Guide is to use the Systems Engineering Process on your System
Engineering Implementation, Use and Update Project to implement, use, measure and improve
your Systems Engineering capabilities. It also forces you to pull together the interdisciplinary
processes and activities used in most enterprises and organizations. It covers the fundamental
elements and lessons learned in systems engineering and used in developing, implementing,
using, measuring and analyzing an overall Systems Engineering process/activity set tailored to
the needs of an enterprise, an organization or a specific program/project. The Guide requires
proper integration of disciplines and specialties – whichever ones are necessary and appropriate
for a particular product (in the broadest definition of “product”). It is recommended that such
implementation be done at the enterprise/organizational level and modified as necessary for
specific programs rather than being done separately for each program. Many organizations
currently document a program’s systems engineering process in the “Systems Engineering
Management Plan.” Details for a SEMP (SEP) are described in the INCOSE Handbook, the SEBoK,
IEEE 1220, the DoD (EIA 632) Systems Engineering Standards, and many other documents (Section
III - Reference List). However, historical data and lessons learned have made it apparent that it
is necessary to develop a Systems Engineering Process Implementation Plan (following the
guidelines in this Guide) to cover all the necessary steps in accomplishing an appropriate
tailoring of the SE processes and activities, implementing these tailored processes/activities,
executing them, measuring the outcomes and providing continuous improvement. It has also
been found that if an enterprise or organization develops (and uses) a Systems Engineering
Integrated Process Team (SEIPT) to work an SE implementation, use and update project, the
likelihood of success increases significantly (See Appendix A for the required expertise of
personnel on a SEIPT).
Systems Engineering is not the sole responsibility of systems engineers: all
engineers and developers must practice systems engineering. This Guide is intended to be used
as a road map for integrating modern systems engineering disciplines and modern product
realization processes and activities into programs. The Project implementation process outlined
in this Guide is based on the overall INCOSE Systems Engineering process. Therefore, the steps
described are the steps required for a typical development program/project. These steps are
equally valid for development of new products, modifications of existing products, and
replacement of components or subsystems within existing products. Additional systems
development activities such as project and program management are also included. Note that
Systems Engineering and Project Management overlap considerably (see figure 2). A good
systems engineering process or activity contains both technical and management functions. It is
the responsibility of the Systems Engineer and the Program Manager to coordinate these
activities and eliminate duplications.
It is extremely important that you understand where your product is in its life cycle. You can
develop and implement systems engineering for your specific life cycle phase but it is more
efficient to tailor and implement systems engineering capable of working throughout the entire
product life cycle. The system life cycle has seven general phases: (1) discovering system
requirements, (2) creating and evaluating concepts, (3) design and development, (4) system
verification, (5) system production, (6) operation, maintenance and modification, and (7)
retirement, disposal, recycle, and replacement. The exact definitions and descriptions of the
system life cycle can be different for different industries, products and customers but all
variations address the above seven phases.
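For teams that track life-cycle status in planning tools, the seven phases above can be captured directly in code. The following is a minimal Python sketch; the enum and its use are illustrative, not part of any cited standard:

    from enum import Enum

    class LifeCyclePhase(Enum):
        """The seven general system life cycle phases described above."""
        REQUIREMENTS_DISCOVERY = 1
        CONCEPT_CREATION_AND_EVALUATION = 2
        DESIGN_AND_DEVELOPMENT = 3
        SYSTEM_VERIFICATION = 4
        SYSTEM_PRODUCTION = 5
        OPERATION_MAINTENANCE_MODIFICATION = 6
        RETIREMENT_DISPOSAL_RECYCLE_REPLACEMENT = 7

    # Tailoring applies in every phase; the effort level is never zero.
    current = LifeCyclePhase.DESIGN_AND_DEVELOPMENT
    print(f"Product is in phase {current.value}: {current.name}")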
This Guide describes the activities recommended for the DISMI System Engineering Process
Implementation Project (see figure 1). One might conclude from this Guide, though incorrectly,
that the steps must occur in a linear sequence. In general, it is true that step n will usually begin
before step n+1 begins. (In the terminology of program management, this is a start-to-start
constraint.) With real-world programs, many of the steps will proceed in parallel. Some groups
of steps will actually form iterative loops. Occasionally other constraints may change the actual
sequence of the steps. However, the steps in the order described here are a good guide. The
definitions of the stages in this document are consistent with the definitions of the phases or
stages used in industry, academia, Department of Defense, and Department of Energy literature.

[4] In addition to the INCOSE definition, one of the more comprehensive SE definitions comes from the US Air Force: Systems Engineering is the discipline encompassing the entire set of scientific, technical and managerial processes needed to conceive, evolve, verify, deploy and support an integrated Systems of Systems capability to meet user needs across the life cycle.
The areas of concern (risks to properly developing, implementing, using, measuring and
analyzing Systems Engineering processes and activities) within your enterprise or organization
(or for individual programs/projects) are:
1. People: Who are your systems engineers? Is systems engineering a job title, or does it
describe anyone who wants to think about the larger system that a product fits into, or
only people with “Systems Engineering” degrees, or something certifiable by INCOSE?
Note that excellence comes from people, not processes.
2. Culture: What is your current management and work culture and how resistant is it to
change? How much change is going to be required? References on developing a Culture
Change Management Plan are provided.
3. Value: What is the value to your organization or company of performing optimized and
effective systems engineering? What are the benefits of systems engineering you are
expecting (what are your goals/objectives?).
4. Training: How should your system engineers and other personnel be educated? What
classroom and on-the-job training is important?
5. Tools: What tools do your systems engineers use? What tools can provide necessary
support for everything systems engineering does in an integrated manner?
6. Standards: Who should use systems engineering standards (or domain best practices),
and how should they use them? Do the various standards apply differently to different
implementations of systems engineering? How do systems engineering standards apply to
a small company making piece parts, consumer goods or services?
[Figure: Systems Engineering implementation flow. Define goals/objectives and requirements; Implement; Measure; Improve; and Operate/Sustain. Implementation is supported by standards/best practices, culture change, skills, past performance, training, tools, management champions, and enforcement, and is measured through validation/verification metrics: each requirement achieved, rework still ongoing, areas needing more efficiency or effectiveness, sufficiency of metrics collected, and additional enforcement or training required.]
[5] Note that your objectives, if written succinctly, can be incorporated into your Program Schedule as milestones.

[6] Not necessarily all SE processes/activities, but only those needed for this specific enterprise/organization/program. If you decide to leave out a process/activity, you should record a rationale for each decision, as you may need to add these processes/activities in the future.
Remember that if you cannot measure it, you cannot control it and if you cannot control it, you
cannot manage it. Each process and activity requirement should be based on the following (see
Requirements Engineering and Management References in Section 2):
1. Standards/Contract/Best Practices/Other
2. Product(s) and life cycle stage
3. Validation/Verification requirements versus capabilities
4. Current enterprise/organizational or company culture and expected changes required
5. Required tailoring based on “product”, contract, etc.
Figure 3: Develop the Appropriate Requirements (based on the ANSI/EIA 632 Egg
Diagram)
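To keep the five bases above auditable, some teams record them alongside each process/activity requirement. Here is a minimal Python sketch; the field names and example values are illustrative, not prescribed by this Guide:

    from dataclasses import dataclass, field

    @dataclass
    class ProcessRequirement:
        """One SE process/activity requirement and the bases it was built on."""
        name: str
        source: str                # 1. standard / contract / best practice / other
        product_stage: str         # 2. product(s) and life cycle stage
        vv_approach: str           # 3. validation/verification vs. capabilities
        culture_notes: str         # 4. current culture and expected changes
        tailoring_rationale: str   # 5. tailoring applied, with rationale
        metrics: list[str] = field(default_factory=list)

    req = ProcessRequirement(
        name="Configuration Management",
        source="ANSI/EIA-649-1998",
        product_stage="Design and development",
        vv_approach="Collect and analyze the defined CM metrics",
        culture_notes="Staff new to formal baselining; training needed",
        tailoring_rationale="Full standard applied; nothing tailored out",
        metrics=["baseline_change_history", "change_request_aging"],
    )
    print(req.name, "-", req.source)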
Tailoring Activities
It is normally necessary to tailor each requirement to fit your specific domain and
enterprise/organization/program portfolio. The following are areas you need to address to
successfully tailor your requirements.
[7] Requirements engineering commonly runs into critical problems: lack of stakeholder involvement in the requirements process, lack of requirements management skills, and unclear responsibilities and communication among stakeholders can each lead to bad requirements engineering.

[8] An Organizational Memory or Knowledge Repository is a computer system that continuously captures and analyzes the knowledge assets of an organization. It is a collaborative system where people can query and browse both structured and unstructured information in order to retrieve and preserve organizational knowledge assets and facilitate collaborative working.
Example Requirement: The standard to be used for this project is ANSI/EIA-649-1998 National
Consensus Standard for Configuration Management. All aspects of this standard shall be
implemented unless specifically tailored out. If any are tailored out, rationale for the tailoring
must be provided.
Example Validation and Verification: Verification and Validation of this requirement shall be
by collection and analysis of the 12 metrics defined as Configuration Management Metrics in
Appendix B – Metrics Guide.
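A verification check like the one above can be mechanized by comparing the metrics actually being collected against the set defined in the Metrics Guide. A minimal Python sketch follows; the metric names are placeholders, not the actual 12-metric list from Appendix B:

    REQUIRED_CM_METRICS = {
        "baseline_change_history",
        "change_request_status",
        "change_request_aging",
        # ...plus the remaining metrics from the Metrics Guide
    }

    currently_collected = {"baseline_change_history", "change_request_aging"}

    # The requirement is verified only when nothing required is missing.
    missing = REQUIRED_CM_METRICS - currently_collected
    if missing:
        print("Requirement NOT verified; metrics missing:", sorted(missing))
    else:
        print("Requirement verified: all CM metrics are being collected.")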
Risk management is critical to program success for any program. The purpose of addressing risk
on programs is to help ensure program cost, schedule, and performance objectives are achieved
at every stage in the life cycle and to communicate to all stakeholders the process for uncovering,
determining the scope of, and managing program uncertainties. Since risk can be associated with
all aspects of a program, it is important to recognize that risk identification [9] is part of the job of
everyone and not just the program manager or systems engineer. That includes the test manager,
financial manager, contracting officer, logistician, and every other team member. If required, an
organization can add an Opportunity Management process mirroring the Risk Management
process.
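A program risk baseline of the kind described here is straightforward to represent in tooling. The sketch below uses a simple likelihood-times-consequence exposure ranking, a common heuristic that no standard mandates, and mirrors opportunities with a flag; all entries are invented examples:

    from dataclasses import dataclass

    @dataclass
    class RiskItem:
        """One entry in the program risk (or opportunity) baseline."""
        title: str
        area: str             # technical, management, operational, external, ...
        owner: str            # any team member can identify and own a risk
        likelihood: float     # 0.0 to 1.0
        consequence: int      # e.g., 1 (negligible) to 5 (severe)
        is_opportunity: bool = False   # mirrors an Opportunity Management process

        @property
        def exposure(self) -> float:
            # Likelihood x consequence: a common ranking heuristic, not a
            # mandate of any risk standard.
            return self.likelihood * self.consequence

    baseline = [
        RiskItem("Supplier part obsolescence", "external", "logistician", 0.4, 4),
        RiskItem("Test range availability", "operational", "test manager", 0.6, 3),
    ]
    for risk in sorted(baseline, key=lambda r: r.exposure, reverse=True):
        print(f"{risk.title}: exposure {risk.exposure:.1f}")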
[9] No current risk management standard or guide requires that a risk baseline including all risk areas be established and managed. However, it is essential that such a risk baseline (covering all areas of program risk: technical, management, operational, external, enterprise, and organizational) be established at the start of any risk management process.
Example Requirement: The standard to be used for this project is ISO 31000:2009, Risk
Management. All aspects of this standard shall be implemented unless specifically tailored out.
If any are tailored out, rationale for the tailoring must be provided.
Example Validation and Verification: Verification and Validation of this requirement shall be
by collection and analysis of the 9 metrics defined as Risk Management Metrics in Appendix B –
Metrics Guide.
Other potential plan contents include responsibilities and commitments, work breakdown
structures, resource and schedule estimates, risks, progress measures, relationships, and
traceability to other plans. The most usable plans have a particular focus. Planning a complex
task often requires a set of interrelated plans that might have these relationships:
1. Temporal relationships: Some plans might cover a time period that precedes or follows
that of other plans.
2. Hierarchical relationships: Some plans contain subordinate details.
3. Relationships involving critical dependencies: Some plans depend on the execution of
other plans.
4. Relationships based on a supporting infrastructure: Some plans depend on the existence
of an organizational function, for example, a quality assurance or process group.
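Temporal and critical-dependency relationships between plans form a directed graph, so a valid planning order (and any circular dependency) can be detected mechanically. A minimal Python sketch, with hypothetical plan names:

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Keys are plans; values are the plans they depend on (critical
    # dependencies plus "must precede" temporal relationships).
    plan_dependencies = {
        "SEMP": set(),
        "Risk Management Plan": {"SEMP"},
        "Training Plan": {"SEMP"},
        "Test and Evaluation Plan": {"SEMP", "Risk Management Plan"},
    }

    # static_order() yields a valid execution order and raises CycleError
    # if plans depend on each other circularly.
    print(list(TopologicalSorter(plan_dependencies).static_order()))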
There are no specific standards for the SE Technical Planning work area but there are numerous
guidelines and templates for various plans. For example, look at the templates in the Defense
Acquisition Guidebook, Chapter 4: Systems Engineering, Section 4.3.2. Technical Planning
Process or in NASA NPR 7123.1B, Appendix C. Practices for Common Technical Processes
(see figure 4) for defining the scope of the technical effort required to develop, field, and sustain
a system, as well as providing critical quantitative inputs to program planning and life-cycle cost
estimates. Develop a Plan Template(s) that applies to all types of Plans your program must
develop.
Program managers and systems engineers use various measures and metrics, including Technical
Performance Measures (TPMs) and leading indicators, to gauge technical progress against planned
goals, objectives, and requirements.
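As one concrete illustration, a weight TPM can be tracked against its planned profile and not-to-exceed threshold; the values below are invented for the sketch:

    # A minimal sketch of tracking one TPM (vehicle weight) against its
    # planned profile and not-to-exceed threshold. All values are invented.
    planned_weight_kg = {1: 120.0, 2: 118.0, 3: 115.0}   # month -> plan
    actual_weight_kg = {1: 121.5, 2: 119.0, 3: 118.6}    # month -> measured
    threshold_kg = 118.0                                 # not-to-exceed value

    for month in sorted(planned_weight_kg):
        variance = actual_weight_kg[month] - planned_weight_kg[month]
        print(f"Month {month}: plan {planned_weight_kg[month]:.1f} kg, "
              f"actual {actual_weight_kg[month]:.1f} kg, "
              f"variance {variance:+.1f} kg")

    # Leading indicator: flag a threshold breach at the next program review.
    latest = max(actual_weight_kg)
    if actual_weight_kg[latest] > threshold_kg:
        print("TPM breach: weight exceeds the not-to-exceed threshold.")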
Example Requirement: The Best Practices to be used for this project are shown in NASA NPR
7123.1B, Appendix C. Practices for Common Technical Processes. All aspects of this Best
Practice shall be implemented unless specifically tailored out. If any are tailored out, rationale
for the tailoring must be provided.
Example Validation and Verification: Verification and Validation of this requirement shall be
by collection and analysis of the metrics defined as Technical Effort Assessment metrics in
Appendix B – Metrics Guide.
System design is intended to be the link between the system architecture (at whatever point this
milestone is defined in the specific application of the systems engineering process) and the
implementation of technological system elements that compose the physical architecture model
of the system. Design definition is driven by specified requirements, the system architecture,
and more detailed analysis of performance and feasibility. It addresses the implementation
technologies and their assimilation. Design provides the “how” or “implement to” level of the
definition. Design concerns every system element composed of implementation technologies
(for example mechanics, electronics, software, chemistry, human operations and services) for
which specific engineering processes are needed. System design provides feedback to the parent
system architecture to consolidate or confirm the allocation and partitioning of architectural
characteristics and design properties to system elements. The purpose of the System Design is to
supplement the system architecture providing information and data useful and necessary for
implementation of the system elements. Design definition is the process of developing,
expressing, documenting, and communicating the realization of the architecture of the system
through a complete set of design characteristics described in a form suitable for implementation.
System design includes activities to conceive a set of system elements that answers a specific,
intended purpose, using principles and concepts; it includes assessments and decisions to select
system elements that compose the system, fit the architecture of the system, and comply with
traded-off system requirements. It is the complete set of detailed models, properties, and/or
characteristics described in a form suitable for implementation.
Example Requirement Development process –
The following Best Practices have been chosen for this requirement:
1. Choose, and get approval for, a development lifecycle process appropriate to the project at
hand. All other activities are to be derived from the chosen lifecycle process. For an
example software development project, a spiral-based methodology might be chosen.
2. Gather and agree on requirements for the project.
3. Choose the appropriate architecture for your application. Apply well-known industry
architecture best practices.
4. Keep the design as simple as possible.
5. Conduct periodic peer reviews of all artifacts from the development process
(including plans, requirements, architecture, design, code, and test cases).
6. Conduct testing per the documented plans and specifications (unit, string, and integration
testing per the documented and approved system design specifications; and system testing
per the documented and approved functional requirements).
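To make item 6 concrete, a requirements-based system test can be expressed directly in a test framework. The sketch below uses pytest conventions; the requirement SYS-014 and the function under test are hypothetical, not taken from this Guide:

    def compute_range_km(fuel_kg: float, burn_rate_kg_per_km: float) -> float:
        """System element under test: predicted vehicle range."""
        return fuel_kg / burn_rate_kg_per_km

    def test_requirement_sys_014_minimum_range():
        # SYS-014 (hypothetical): "The vehicle shall achieve a range of at
        # least 400 km with 200 kg of fuel."
        assert compute_range_km(fuel_kg=200.0, burn_rate_kg_per_km=0.45) >= 400.0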
Validation
Validation is establishing documented evidence which provides a high degree of assurance that a
specific process/activity will consistently produce a product meeting its predetermined
specifications and quality attributes. It is establishing confidence that process equipment and
sub-systems are capable of consistently operating within established limits and tolerances. Note
that before accomplishing validation on a product, be sure that it has passed qualification [10].
Verification
Product (System) Verification is a set of actions used to check the correctness of any element,
such as a product element, a product, a document, a service, a task, a requirement, etc. These
types of actions are planned and carried out throughout the life cycle of the product. Verification
is a generic term that needs to be instantiated within the context it occurs. As a process,
verification is a transverse activity to every life cycle stage of the product. In particular, during
the development cycle of the product, the verification process is performed in parallel with the
product definition and product realization processes and applies to any activity and any product
resulting from the activity. The activities of every life cycle process and those of the verification
process can work together. The four fundamental methods of verification are Inspection,
Demonstration, Test, and Analysis. The four methods are somewhat hierarchical in nature, as
each verifies requirements of a product or system with increasing rigor. Each enterprise and
organization should clearly define each of the four primary verification methods: Test,
Demonstration, Inspection, and Analysis.

[10] Adding to the confusion caused by these terms with similar and overlapping meanings, different organizations mix the terms and definitions. Some organizations refer to verification as validation. Some define verification as dynamic testing and validation as static testing (i.e., peer review). Others refer to testing as verification or qualification. And others refer to qualification as validation. What’s important is not that we agree on terms, but that we understand all the activities associated with the validation of systems and ensure that they are performed.
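Once the four methods are defined, each requirement can be assigned one and coverage reported. A minimal Python sketch; the requirement IDs are hypothetical:

    VERIFICATION_METHODS = ("Inspection", "Demonstration", "Test", "Analysis")

    verification_matrix = {
        "SYS-001": "Test",
        "SYS-002": "Analysis",
        "SYS-003": "Demonstration",
        "SYS-004": None,  # method not yet assigned
    }

    assigned = {r: m for r, m in verification_matrix.items() if m is not None}
    invalid = {r: m for r, m in assigned.items()
               if m not in VERIFICATION_METHODS}
    print(f"{len(assigned)} of {len(verification_matrix)} requirements "
          f"have a verification method assigned.")
    print("Still unassigned:",
          [r for r, m in verification_matrix.items() if m is None])
    print("Invalid method names:", invalid)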
[11] If the enterprise or organization separates Qualification and Quality Assurance, the following should be added as a Quality Assurance requirement: develop and implement a Quality Assurance Plan, perform quality audits, report quality audit results, and define and track quality corrective actions.
2.2.9 Training
Systems engineering is NOT a rulebook. It is a set of principles (processes and activities)
supported by methods designed to deliver maximum benefit to stakeholders at minimum cost.
A Systems Engineering training course set should be designed for personnel who currently
perform, manage, control or specify the life cycle of products [13]. All courses and seminars can be
delivered using a mixture of formal presentation, informal discussion, and extensive workshops
that exercise key aspects of systems engineering on a single product or multiple products
through the life cycle. The desired result is a high degree of continuing learning on each Systems
Engineering process and activity.
Competencies are the combination of knowledge, skills and abilities that contribute to individual
and organizational performance. Any Systems Engineering developmental framework should be
based on a rigorous set of competencies that personnel should have in order to perform their
jobs. These competencies define the breadth and scope of the discipline and facilitate personnel
development and assessment of individual knowledge and capabilities. Competencies developed
form the foundation of any training program and should be under configuration control and
reviewed and updated as appropriate.

[12] Tailoring of Software Validation and Verification should be risk based, keyed to integrity levels.

[13] As noted before, product is used in its broadest sense.
A key step for managerial, engineering and technical personnel is to understand the requirements
of their roles and the related competencies. Performance-level descriptions for each competency
should be created to guide the overall development of individuals within the program and
domain engineering disciplines.
Example Requirement
Conduct a Systems Engineering Training Needs and Skill Gap assessment based on the
requirements set developed and an evaluation of existing Systems Engineering and other
personnel skills/knowledge/experience. Develop and implement Systems Engineering Training
Plans using the skills shown to be required for the team and each individual, develop and give
training courses on enterprise/organizational Systems Engineering processes, activities and
tools as required and assess results.
Example Validation and Verification
Validation and verification shall be by collection and analysis of the metrics defined by Systems
Engineering Training metrics in Appendix B – Metrics Guide.
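The skill gap assessment in the example requirement reduces to set arithmetic over competencies. A minimal Python sketch, with invented competency and staff names:

    required = {"requirements engineering", "risk management",
                "verification planning", "configuration management"}

    staff_skills = {
        "Alice": {"requirements engineering", "risk management"},
        "Bob": {"configuration management"},
    }

    # Individual gaps drive each person's training plan.
    for person, skills in staff_skills.items():
        gap = sorted(required - skills)
        print(f"{person}: train on {gap if gap else 'nothing (current)'}")

    # Team-level gap: competencies held by no one, driving course development.
    team_gap = required - set().union(*staff_skills.values())
    print("Team-wide gap:", sorted(team_gap))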
The benefit of citing "specialty engineering" in planning is the notice to all team levels that
special management and science factors may need to be accounted for and may influence the
project. Specialty engineering may also be cited by commercial entities and others to specify
their unique abilities.
3. Organizational and individual talent necessary for success and performance measures and
incentives aligned to objectives
4. Superior execution of programmatic work processes and effective and efficient support
processes and systems
5. High-performance values and behaviors and the capacity to change
Once you understand your current culture and how the enterprise/organization/program reacts to
change, then the necessary Culture Change Management to successfully implement SE
capabilities involves the selection of strategies to facilitate the transition of individuals, teams, or
entire enterprises/organizations/programs from a current state of operation to the new, desired
state. More specifically, you must develop a process and set of techniques to manage the
feelings, perceptions, and reactions of the people affected by the changes being introduced. This
includes senior management as they hold the resources necessary for any change. The impetus of
any change initiative is to improve some aspect of operations or longer term outcomes. Change
projects result in new policies, processes, protocols, or systems to which staff must become
accustomed and change management must be used to facilitate the transition. For successful
culture change, attention must be given to both the “process” and “human” sides of change. The
“process” side involves the specific project management related activities required for moving
from the current to desired state (e.g., develop plans, build the infrastructure, change processes or
systems, redefine job roles). The “human” side of change involves strategies to help employees
impacted by the change understand and adopt it as a part of their jobs (e.g., alleviate staff
resistance, meet training needs and secure buy-in).
Both aspects of change should be integrated and occur simultaneously for successful change,
however, the change leader(s) may need to think of the “process” and “human” changes
distinctly when assessing and addressing roadblocks. For example, an organization may have full
employee buy-in for a particular change initiative but adequate resources and planning efforts
have not been put in place to support the change. Alternatively, appropriate structures and
processes may be in place but employees remain resistant to the initiative.
3. Practices and policies that govern how you do business, how you interact with
suppliers, and how you serve customers – all help to shape your organizational
culture.
The flow of information in your organization also strongly affects its culture. Consider the
following questions regarding your information flows:
1. What kind of information is distributed in your company? Is it easy for your
people to stay up-to-date about important information? This could include key
metrics on company performance, both before and after a change.
2. How about information regarding your people? Do you highlight employee
achievements, accolades and hobbies?
3. Where does the information flow in your organization? It can flow vertically from
one level to another. It also flows horizontally, among co-workers. The informal
organization created by information flow is as important as the formal
organization flow.
4. What methods and mediums do you use to communicate with your people? Do
you use email or an internal web portal to keep people up to date? How do you
use face-to-face meetings to communicate with your people?
6. Baseline of Risks to Implementing the Plan (since almost all the risks/problems will
be caused by people)
3.0 Implementation
Once you have the above information, you can develop the overall Implementation Plan and
Timeline and execute the processes and activities based on your documentation and policies.
The Systems Engineering Capability Implementation Plan is a management tool designed to
illustrate, in detail, the critical steps in implementing, using, maintaining and improving your
Systems Engineering capabilities. It is a guide or map that helps staff be proactive rather than
reactive in accomplishing and using Systems Engineering and identifying any challenges along
the way. It allows any person working in systems engineering (and all other concerned
personnel), regardless of his or her level of involvement, to fully understand the goals and
objectives outlined above and how they are to be accomplished. It ensures that everyone working
on the program is on the same page and any discrepancies are resolved before they become
costly to the program or population served. If you are accomplishing an enterprise or
organizational Implementation Plan, this ensures that enterprise/organizational policies mandate
consistency in how programs address and use Systems Engineering.
[Figure: Implementation flow: Develop Requirements; Develop and Implement Culture Change Actions; Implement Requirements; Conduct Processes/Activities; Review Status; Do CI Analysis.]
[14] It is recommended that each process/activity be implemented individually according to an approved timeline. This type of implementation will minimize the changes your personnel are required to accept at any one time and reduce confusion.

[15] This is defined as work required to fix errors. It does not include work required by a change in requirements.

[16] During these reviews, you should also determine each requirement's validation/verification accuracy.
5.0 Measurement
To accomplish the above, each company/organization/program must establish and use a
measurement process (established metrics) that delivers relevant information to managers who
use it for decision-making. Measurement information helps the manager(s) to:
• Monitor the progress and performance of the Systems Engineering process as well as the
individual processes and activities
• Communicate effectively throughout the organization or company
• Identify and correct problems early (Continuous improvement, See Paragraph 6.0)
• Make key tradeoffs that affect how the Systems Engineering process is being used
• Track specific project objectives (the SE Goals and Objectives)
• Defend and justify decisions
For each of the above, measurement quantifies the relevant individual Systems Engineering
processes or work products as well as the overall Systems Engineering Process with respect to
the needs and objectives of the program, company or organization. Common systems engineering
metrics (see Section 2 Metrics List for suggested metrics) include timeliness, efficiency and
effectiveness, performance requirements, quality attributes, conformance to standards and
resource use. These measurements should also provide critical insight needed for continuous
process improvement to achieve cost and schedule/cycle time reduction and quality and technical
performance improvement.
Note that these metrics must be related to the validation and verification of the requirements
established early in this process. They must allow each requirement to be verified and validated
throughout the life cycle of the product(s).
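One way to enforce that linkage is to keep an explicit metric-to-requirement map, which surfaces both orphan metrics and requirements with no supporting metric. A minimal Python sketch with invented names:

    metric_to_requirements = {
        "change_request_aging": ["CM-001"],
        "rework_hours": ["PM-003"],
        "customer_survey_score": [],   # orphan: tied to no requirement
    }
    all_requirements = {"CM-001", "PM-003", "RISK-002"}

    covered = {r for reqs in metric_to_requirements.values() for r in reqs}
    orphans = [m for m, reqs in metric_to_requirements.items() if not reqs]
    print("Orphan metrics:", orphans)
    print("Requirements with no supporting metric:",
          sorted(all_requirements - covered))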
The following steps should be accomplished [17] and documented for each requirement regardless
of the CI process chosen:
1. Review established goals and objectives to determine if any changes are required
2. Review each requirement and associated metrics
[17] Each requirement should be reviewed at least once a year; quarterly is recommended.
7.0 Summary
Systems Engineering, when done optimally for your product(s), can significantly enhance your
capabilities, enable your programs/projects to be completed on time and on budget with all
required functions. But we note that this is not happening. In 2004, the Director for Systems
Engineering in the Office of the Undersecretary for Defense for Acquisition, Technology and
Logistics (OUSD [AT&L]) came to the National Defense Industrial Association (NDIA) and
voiced concerns that DOD acquisition programs were not capitalizing on the value of systems
engineering. He knew the value of SE and knew that it could help DOD programs, but he also
knew that not all DOD program managers shared his convictions. Consequently program
managers were taking shortcuts and eliminating SE capabilities from their programs.
Subsequently, others have recognized this same problem. A recent Government Accountability
Office (GAO) report indicates that acquisition program costs are typically 26 percent over
budget and development costs are typically 40 percent more than initial estimates. These
programs routinely fail to deliver the capabilities when promised, experiencing, on average, a 21
month delay. The report finds that "optimistic assumptions about system requirements,
technology, and design maturity play a large part in these failures, and that these optimistic
assumptions are largely the result of a lack of disciplined SE analysis early in the program." [18]
This conundrum has continued to the present. The combined cost overrun for Major Defense
Acquisition Program (MDAP) portfolio programs in 2015 was $468 billion, up from $295 billion
in 2008. The total cost of the US Department of Defense's 2015 MDAP portfolio grew at 48.3
percent, with an average schedule delay of 29.5 months [19]. For the latest statistics, do an
Internet search on project management statistics.

[18] From "The Value of Systems Engineering," posted May 20, 2013 by Joseph Elm, https://insights.sei.cmu.edu/sei_blog/2013/05/the-value-of-systems-engineering.html
The reason for this is that a Systems Engineering capability is normally defined and shaped by
the context or environment in which it is embedded. But very seldom (or never) is the Systems
Engineering Process used to define the optimum set of SE processes and activities necessary to
provide maximum return on your investment in your environment. As discussed at the
beginning of this book, companies and enterprises embed individual SE processes ad hoc and fail
to determine the necessary interactions or the effectiveness of each process.
objectives for the overall SE capabilities are not considered. So while some ROI is achieved, the
maximum benefit of using appropriate, effective and efficient SE is not. Accomplishing
implementation and use of Systems Engineering as described in this book can significantly
increase the likelihood of successful program/project accomplishment and allow maximum
return on investment.
[19] From the Deloitte Aerospace and Defense Report, 2017.
Section II:
Systems Engineering Guidebook
Appendices
Systems Thinking
The approach of Systems Thinking is fundamentally different from traditional analysis.
Traditional analysis focuses on separating the individual pieces of what is being
studied/developed. Systems Thinking, in contrast, focuses on how the thing (component/part/
interface/etc.) being studied/developed interacts with the other constituents of the overall system
– a set of elements that interact to produce behavior – of which it is a part. Systems Thinking
also focuses on how the system being studied/developed interacts with the other systems as a
part of a System of Systems. This means that instead of isolating smaller and smaller parts of the
system being studied/developed, systems thinking expands its view to take into account larger
and larger numbers of interactions around the issue at hand. This sometimes results in
strikingly different conclusions than those generated by traditional analysis, especially when
what is being studied/developed is dynamically complex or has a great deal of feedback from
other internal or external sources.
Breadth of Knowledge
1. Experienced in or capable of Systems Thinking (See below)
2. Knowledge across technical disciplines and engineering functions
3. Proven ability to ensure rigorous technical processes are applied
4. Experience in applying engineering capabilities, tools and techniques to anticipate issues
with requirements, acquisition, test and sustainment of product capabilities.
2. Capability to understand the Technical View [20], System View [21], Operational View [22] and
Disposal View [23] of the product.
3. Experience in scope/range of requirements development, science and technology,
product/system life cycle phases.
4. Experience in or knowledge of operational safety, suitability and effectiveness (OSS&E)
characteristics and design.
The equivalent breadth and depth of knowledge as well as the overall vision should be required
of all non-engineering SEIPT members.
[20] Technical View success criteria: systems/subsystem components function properly; designs reflect "plug and play" open interfaces and domain industry standards.

[21] System View success criteria: robust product and all subsystems function properly; product can operate and deliver required capability in its intended operational environment.

[22] Operational View: all required products can interoperate and potential operational errors are minimized.

[23] Disposal View: all aspects of the product should include an anticipated phase-out period and take disposal into account in the design and life cycle cost assessment.
Appendix B: Metrics

Each metric below is listed as: # | Metric | Collection Frequency | Where Reported.

13 | Estimate vs. Actual Cost per Subsystem | Monthly | Systems Engineering Reviews
14 | Original Work vs. Rework Hours | Monthly | Program Reviews
Staffing
1 | Planned vs. Actual Staffing | Monthly | Program Reviews
2 | Planned vs. Actual Staffing Mix (Salary, Hourly, Contractual) | Monthly | Program Reviews
3 | Planned vs. Actual Staffing Profile per Labor Category | Monthly | Program Reviews
4 | Unplanned Staff Losses per Labor Category | Monthly | Program Reviews
5 | Unplanned Staff Gains per Labor Category | Monthly | Program Reviews
Schedule
1 | Estimated vs. Actual IMS Schedule Milestones | Monthly | Program Reviews
2 | Risk Level at Each Schedule Milestone | Monthly | Program Reviews
3 | Estimated vs. Actual Deliverables (Total, Complete, Remaining, Late) | Monthly | Program Reviews
4 | Deliverables (Aging) | Monthly | Program Reviews
5 | Critical Path Status (Drag and Drag Cost) | Monthly | Program Reviews
Quality
1 | On Time vs. Late Deliverables | Monthly | Program Reviews
2 | Accepted vs. Rejected Deliverables | Monthly | Program Reviews
3 | Estimated vs. Actual Document Delivery Schedule Milestones | Monthly | Program Reviews
4 | Deliverables (Aging) | Monthly | Program Reviews
Engineering Change Proposals
1 | Number of Customer Submitted ECPs by Month | Monthly | Program Reviews
2 | Estimated vs. Actual Dates (Proposed, Open, Approved, Incorporated) for Customer Submitted ECPs | Monthly | Program Reviews
3 | Number of Subsystems Affected by Customer Submitted ECPs | Monthly | Program Reviews
4 | Number of Computer Software Units (Programs) Affected by Customer Submitted ECPs | Monthly | Program Reviews
5 | Number of CSU Lines of Code Affected by Customer Submitted ECPs | Monthly | Program Reviews
6 | Number of Documents/Drawings Affected by Customer Submitted ECPs | Monthly | Program Reviews
7 | Number of Contractor Submitted ECPs by Month | Monthly | Program Reviews
8 | Estimated vs. Actual Dates (Proposed, Open, Approved, Incorporated) for Contractor Submitted ECPs | Monthly | Program Reviews
9 | Number of Subsystems Affected by Contractor Submitted ECPs | Monthly | Program Reviews
Failure Board Items
1 | Number of Failure/Incident Reports by Month | Weekly | Systems Engineering Review
2 | Number of Failure/Incident Reports Opened vs. Closed | Weekly | Systems Engineering Review
3 | Status of All Open Failure/Incident Reports | Weekly | Systems Engineering Review
4 | Assigned Risk Level for Each Failure/Incident Report | Weekly | Systems Engineering Review
5 | Estimated vs. Actual Cost Impact of Each Failure/Incident Report | Weekly | Systems Engineering Review
6 | Categorization of Failure/Incident Reports (Weight, Cost, Reliability, Process, etc.) | Weekly | Systems Engineering Review
7 | Failure/Incident Reports Aging | Weekly | Systems Engineering Review
Issues
1 | Number of Issues Established by Week | Weekly | Systems Engineering Review
2 | Categorization of Issues Established | Weekly | Systems Engineering Review
3 | Assigned Issue Risk | Weekly | Systems Engineering Review
4 | Number of Issues Open vs. Closed | Weekly | Systems Engineering Review
5 | Estimated vs. Actual Cost Impact of Each Issue | Weekly | Systems Engineering Review
6 | Estimated vs. Actual Schedule Impact of Each Issue | Weekly | Systems Engineering Review
7 | Issues Aging | Weekly | Systems Engineering Review
Waivers/Deviations
1 | Number of Waivers/Deviations to Procedures/Processes/Parts by Week | Weekly | Systems Engineering Review
2 | Categorization of Waivers/Deviations to Procedures/Processes/Parts | Weekly | Systems Engineering Review
3 | Assigned Waivers/Deviations to Procedures/Processes/Parts Risk | Weekly | Systems Engineering Review
4 | Number of Waivers/Deviations to Procedures/Processes/Parts Open vs. Closed | Weekly | Systems Engineering Review
5 | Estimated vs. Actual Cost Impact of Each Waiver/Deviation to Procedures/Processes/Parts | Weekly | Systems Engineering Review
Action Items
1 | Number of Action Items Established by Week | Weekly | Systems Engineering Review
2 | Categorization of Action Items | Weekly | Systems Engineering Review
3 | Assigned Action Item Risk | Weekly | Systems Engineering Review
4 | Number of Action Items Open vs. Closed | Weekly | Systems Engineering Review
5 | Estimated vs. Actual Cost Impact of Each Action Item | Weekly | Systems Engineering Review
6 | Estimated vs. Actual Schedule Impact of Each Action Item | Weekly | Systems Engineering Review
7 | Action Item Aging | Weekly | Systems Engineering Review
Risk
1 | History of Baseline Changes (Including Rationale for Changes)
Process Management
1 | History of Baseline Changes (Including Rationale for Changes)
2 | Number of Systems Engineering Processes Documented/Updated | Monthly | Program Reviews
3 | Planned vs. Actual Systems Engineering Processes Started | Monthly | Program Reviews
4 | Planned vs. Actual Systems Engineering Processes Operational | Monthly | Program Reviews
5 | Effectiveness of Process (Trend) | Monthly | Program Reviews
6 | Process Compliance (Number) | Monthly | Program Reviews
Configuration Management
1 | Number of Configuration Items (HWCIs and CSCIs) | Monthly | Program Reviews
2 | History of Baseline Changes (Including Rationale for Changes) | Monthly | Program Reviews
3 | Change Requests (Number and Type) | Monthly | Program Reviews
4 | Status of Change Requests (Submitted, Pending, Approved, Implemented) | Monthly | Program Reviews
5 | Cost Impact of Change Requests (Estimated vs. Actual) | Monthly | Program Reviews
6 | Schedule Impact of Change Requests (Estimated vs. Actual) | Monthly | Program Reviews
7 | Overall Cost and Schedule Impact of Change Requests (History of System) | Monthly | Program Reviews
8 | Change Request Risk Level | Monthly | Program Reviews
9 | Change Request Aging | Monthly | Program Reviews
10 | Number and Type of Deficiencies Identified by Configuration Audits | Monthly | Program Reviews
11 | Number of Document Configuration Items | Monthly | Program Reviews
12 | Status of Document Configuration Items (Open, ...) | Monthly | Program Reviews
Environmental, Safety and Occupational Health (ESOH)
1 | Number of ESOH Requirements | Monthly | Program Reviews
2 | Number and Type of ESOH Functions and Constraints | Monthly | Program Reviews
3 | ESOH Performance Attributes | Monthly | Program Reviews
4 | Number of ESOH Hazards Identified and Acceptance Status | Monthly | Program Reviews
5 | Status of ESOH Hazard Action Plans | Monthly | Program Reviews
6 | Cost Impact of ESOH Plan (Estimated vs. Actual) | Monthly | Program Reviews
7 | Schedule Impact of ESOH Plan (Estimated vs. Actual) | Monthly | Program Reviews
8 | Overall Cost and Schedule Impact of ESOH Work Efforts (History of System) | Monthly | Program Reviews
9 | ESOH Plan Risk Level | Monthly | Program Reviews
10 | ESOH Hazards Aging | Monthly | Program Reviews
Systems Engineering Training
1 | Personnel Experience Utilization (Trained and Qualified) by Category | Monthly | Program Reviews
Manufacturing and Producibility
1 | Number and Type of Parts Manufactured/Modified | Monthly | Program Reviews
2 | Number and Type of Parts Purchased | Monthly | Program Reviews
3 | Parts Risk Status | Monthly | Program Reviews
4 | Estimated vs. Actual Production Time per Unit | Monthly | Program Reviews
5 | Total Labor Hours per Unit (Production, Inspection, Shipping, Installation, Maintenance, Removal) | Monthly | Program Reviews
6 | Number of Defects or Errors per Unit | Monthly | Program Reviews
7 | Number and Type of Waivers/Deviations Requested/Approved per Unit | Monthly | Program Reviews
8 | Estimated vs. Actual Cost and Schedule Impact of Waivers/Deviations | Monthly | Program Reviews
Technical Performance Measures
1 | Estimated vs. Actual Reliability (System and Subsystems) | Monthly | Program Reviews
2 | Estimated vs. Actual Operational Availability | Monthly | Program Reviews
3 | Estimated vs. Actual Maintainability | Monthly | Program Reviews
4 | Estimated vs. Actual Weight (System and Subsystems) | Monthly | Program Reviews
5 | Estimated vs. Actual Transportability | Monthly | Program Reviews
6 | Estimated vs. Actual Range | Monthly | Program Reviews
7 | Estimated vs. Actual Specific Fuel Consumption | Monthly | Program Reviews
8 | Others as Required by Product | Monthly | Program Reviews
Customer Satisfaction
1 | Customer Survey/Questionnaire Results | Monthly | Program Reviews
2 | Number and Type of Customer Problem Reports | Monthly | Program Reviews
3 | Customer Reporting | Monthly | Program Reviews
Lessons Learned
1 | Number of Lessons Learned Submitted | Monthly | Program Reviews
2 | Number of Lessons Learned Approved | Monthly | Program Reviews
3 | Number of Personnel Submitting | Monthly | Program Reviews
Software
1 | Estimate vs. Actual Number of Unique Programs | Monthly | Program Reviews
2 | Estimate vs. Actual Labor Hours per Program | Monthly | Program Reviews
3 | Estimate vs. Actual Program Development Schedule | Monthly | Program Reviews
4 | Number and Type of User Complaints/Trouble Reports (System and Programs) | Monthly | Program Reviews
5 | Time Required to Solve Complaints/Trouble Reports | Monthly | Program Reviews
6 | Number and Type of User Issue Reports (System and Programs) | Monthly | Program Reviews
7 | Time Required to Solve User Issues | Monthly | Program Reviews
8 | Estimated vs. Actual Mean Time Between System Failures | Monthly | Program Reviews
9 | Number and Type of Programming Errors Found | Monthly | Program Reviews
10 | Time Required to Correct Programming Errors | Monthly | Program Reviews
Parts, Materials and Processes
1 | Number and Type of DMSMS Issues | Monthly | Program Reviews
2 | Status of DMSMS Issues | Monthly | Program Reviews
3 | Risk of DMSMS Issues | Monthly | Program Reviews
4 | Estimated vs. Actual Cost Impact of Each DMSMS Issue | Monthly | Program Reviews
5 | Estimated vs. Actual Schedule Impact of Each DMSMS Issue | Monthly | Program Reviews
6 | Estimated vs. Actual Time Required to Successfully Address DMSMS Issues | Monthly | Program Reviews
7 | Number and Type of Counterfeit Parts Issues | Monthly | Program Reviews
8 | Status of Counterfeit Parts Issues | Monthly | Program Reviews
9 | Risk of Counterfeit Parts Issues | Monthly | Program Reviews
10 | Estimated vs. Actual Cost Impact of Each Counterfeit Parts Issue | Monthly | Program Reviews
11 | Estimated vs. Actual Schedule Impact of Each Counterfeit Parts Issue | Monthly | Program Reviews
12 | Estimated vs. Actual Time Required to Successfully Address Counterfeit Parts Issues | Monthly | Program Reviews
13 | Number of Parts Reviewed | Monthly | Program Reviews
Hardware
1 | Estimate vs. Actual Number of Unique Components | Monthly | Program Reviews
2 | Estimate vs. Actual Labor Hours per Component | Monthly | Program Reviews
3 | Estimate vs. Actual Component ... | Monthly | Program Reviews
Requirements
1 | Total Number of Requirements | Monthly | Program Reviews
2 | Total Number of Requirements by Tier (If Appropriate) | Monthly | Program Reviews
3 | Number of Non-Compliances and Reason | Monthly | Program Reviews
4 | Risk by Requirement and Reason | Monthly | Program Reviews
5 | Requirements Verified vs. Requirements Not Verified | Monthly | Program Reviews
6 | Requirements Validated vs. Requirements Not Validated | Monthly | Program Reviews
Technical Planning
1 | Title/Type of Technical Plans Required and Budgeted vs. Completed and Approved | Monthly | Program Reviews
2 | Status of Technical Plan Development, Estimated vs. Actual by Plan (EVM possible if planning is via a WBS level) | Monthly | Program Reviews
3 | Periodic Review Date and Amount of Updating/Revising (by Plan) | Monthly | Program Reviews
4 | Timeline and Status of Milestones for Temporal, Hierarchical, Critical Dependency, and Supporting Infrastructure Relationships by Plan | Monthly | Program Reviews
Technical Assessment Effort Metrics
1 | Technical Performance Measures (TPM) Derived from Key Performance Parameters (KPPs) | Monthly | Program Reviews
Validation and Verification

Each row is listed as: # | Method of Verification/Validation | Pass-Fail Criteria Based On | When Collected | Where Briefed.

1 | Peer Reviews | Incorporation of markups of deficiencies from all approvers | Monthly | Program Reviews
2 | Peer Reviews (Software) | Formal peer reviews, informal contacts, and a separate testing group | Monthly | Program Reviews
3 | Peer Reviews (Drawings) | Incorporation of markups of deficiencies from all approvers; signatures | Monthly | Program Reviews
4 | Peer Reviews (Technical Publications) | Incorporation of markups of deficiencies from editorial review | Monthly | Program Reviews
5 | Peer Reviews (Kits, Drawings) | Incorporation of markups of deficiencies from all approvers | Monthly | Program Reviews
6 | Peer Reviews (Packaging Data) | Incorporation of all editors' markups of deficiencies | Monthly | Program Reviews
7 | Customer Validation | Incorporation of markups from hands-on validation (with or without customer) | Monthly | Program Reviews
8 | Customer Verification | Incorporation of markups from hands-on verification (with customer) | Monthly | Program Reviews
9 | Testing, Packaging | Defined test procedure(s) IAW ASTM D4169 and MIL-STD-2073 Appendix F; any test procedure(s) provided by the customer | Monthly | Program Reviews
10 | Testing, Parts | Defined in test procedure | Monthly | Program Reviews
11 | Testing, Vehicles | Defined in test procedure | Monthly | Program Reviews
12 | Testing, Software | Defined in test procedure | Monthly | Program Reviews
13 | Inspection (In Process) | Components have passed when no open issues are indicated on deficiency sheet(s) | Monthly | Program Reviews
14 | Inspection (Receiving) | Components have passed when no open issues are indicated on the Inspection Test Report (ITR) | Monthly | Program Reviews
15 | Safety Inspection | Components have passed when no open issues are indicated on the deficiency sheet(s) | Monthly | Program Reviews
16 | Safety Inspection, Software | Defined in software safety test procedure | Monthly | Program Reviews
17 | Final Inspection | Components have passed when no open issues are indicated on the deficiency sheet(s); defined test procedure(s) IAW ASTM D4169 and MIL-STD-2073 Appendix F | Monthly | Program Reviews
Software Metrics
1 | Balanced Scorecard | Monthly | Program Reviews
2 | Bugs per Line of Code | Monthly | Program Reviews
3 | Code Coverage | Monthly | Program Reviews
4 | Cohesion | Monthly | Program Reviews
5 | Comment Density | Monthly | Program Reviews
6 | Connascent Software Components | Monthly | Program Reviews
7 | Coupling | Monthly | Program Reviews
8 | Cyclomatic Complexity (McCabe's Complexity) | Monthly | Program Reviews
9 | DSQI (Design Structure Quality Index) | Monthly | Program Reviews
10 | Function Points and Automated Function Points | Monthly | Program Reviews
11 | Halstead Complexity | Monthly | Program Reviews
12 | Instruction Path Length | Monthly | Program Reviews
13 | Maintainability Index | Monthly | Program Reviews
14 | Number of Classes and Interfaces | Monthly | Program Reviews
15 | Number of Lines of Code | Monthly | Program Reviews
16 | Number of Lines of Customer Requirements | Monthly | Program Reviews
17 | Program Execution Time | Monthly | Program Reviews
18 | Program Load Time | Monthly | Program Reviews
19 | Program Size (Binary) | Monthly | Program Reviews
20 | Weighted Micro Function Points | Monthly | Program Reviews
21 | CISQ Automated Quality Characteristics Measures | Monthly | Program Reviews
22 | Defect Arrival and Fix Rates (Used to Help Measure the Maturity of the Code) | Monthly | Program Reviews
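For metric 22, code maturity shows up as defect arrivals trending down while the open backlog drains. A minimal Python sketch with invented weekly counts:

    arrivals_per_week = [14, 11, 9, 6, 4]
    fixes_per_week = [8, 10, 10, 9, 7]

    open_defects = 0
    for week, (arrived, fixed) in enumerate(
            zip(arrivals_per_week, fixes_per_week), start=1):
        open_defects += arrived - fixed
        print(f"Week {week}: +{arrived} arrived, -{fixed} fixed, "
              f"{open_defects} still open")

    # Maturing code shows arrivals trending down while the backlog drains.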
Systems Integration
1 | Technical Integration Strategy Document Status | Monthly | Program Reviews
2 | Integration Plans Status | Monthly | Program Reviews
3 | Integration Test Scripts Status | Monthly | Program Reviews
4 | Integration Test Scenarios Status (Both Development and Implementation) | Monthly | Program Reviews
5 | Integration Tests Status and Results (Including Any Retests and Reason for Retest) | Monthly | Program Reviews
Appendix C: References
These are a few of the references I thought might be useful when researching how to accomplish
each step. Many more can be found by searching on the key phrase for each area. I
recommend that each step, if never accomplished before, be researched thoroughly to enable you
to determine the most effective way to accomplish it.
Overall
1. INCOSE Systems Engineering Handbook, A Guide for Systems Engineering Processes and
Activities, Fourth Edition, INCOSE-TP-002-04, 2015
2. Guide to the Systems Engineering Body of Knowledge (SEBoK), v1.6, March 2016,
http://sebokwiki.org/wiki/Guide_to_the_Systems_Engineering_Body_of_Knowledge_(SEBoK)
3. NASA Systems Engineering Handbook, NASA/SP-2007-6105 Rev 1
4. MITRE Systems Engineering Guide, 2014,
http://www.mitre.org/sites/default/files/publications/se-guide-book-interactive.pdf
5. Space and Missile Center Systems Engineering Primer and Handbook: Concepts, Processes,
and Techniques, Space & Missile Systems Center, U.S. Air Force, 3rd Edition 29 April 2005
6. National Airspace System (NAS) System Engineering Manual (SEM), Federal Aviation
Administration (FAA), Ver. 3.1, 11 Oct 2006 [superseded by FAA SEM 1.0.1] (NAS Systems
Engineering Portal (SEP), https://sep.faa.gov/)
Architecture/Design References
1. Guide to Running Software Development Projects, IBM, 2003,
http://www.ibm.com/developerworks/websphere/library/techarticles/0306_perks/perks.html
2. System Design & Technical Architecture Template,
https://www2.gov.bc.ca/assets/gov/british-columbians-our-governments/services-policies-for-government/information-technology/standards/economy-sector/system_design__technical_architecture_template.docx
Baseline Control
1. https://vca.berkeley.edu/sites/default/files/change_control_process_aa.pdf
2. MIL-HDBK-61, page 3-4, "Configuration baseline (baseline)"
3. https://www.energy.gov/projectmanagement/downloads/evms-training-snippet-46-
baseline-control-methods
Measurement
1. INCOSE Systems Engineering Measurement Primer v2.0, INCOSE-TP-2010-005-02,
5 November 2010
2. ISO/IEC 15939:2007, Systems and software engineering — Measurement process
3. Software Engineering Institute (SEI) CMMI®-DEV and CMMI®-ACQ, Measurement and
Analysis process area
4. Practical Software and Systems Measurement (PSM)
5. ISO/IEC 15288:2008, Systems and software engineering — System life cycle processes
6. INCOSE-TP-2005-003-02, Systems Engineering Leading Indicators Guide, Version 2.0,
29 January 2010
7. INCOSE-TP-2003-020-01, Technical Measurement: A Collaborative Project of PSM,
INCOSE, and Industry
Systems Thinking
1. Simple_Complexity: A Management Book for the Rest Of Us – A Guide to Systems
Thinking, William Donaldson, 2017, Morgan James Publishing
2. http://www.systemicleadershipinstitute.org/systemic-leadership/theories/basic-
principles-of-systems-thinking-as-applied-to-management-and-leadership-2/
3. Systems Thinking, Systems Tools and Chaos Theory,
http://managementhelp.org/systems/index.htm
4. Human Factors in Simple and Complex Systems, R. W. Proctor and T. Van Zandt, CRC
Press, 2008
Systems Integration
4. Australia
• AS/NZS 3907:1996, Quality Management—Guidelines for Configuration Management
10. Code of Federal Regulations (CFR) - United States and other Governments
• Clinger-Cohen Act (IT)
• Sarbanes-Oxley Act ("define and establish controls..." is at the heart of the Act)
• Title 21 CFR Part 820, Quality System Regulation, Medical Devices (FDA)
• Title 10 CFR Part 830, Quality Assurance Criteria (DOE)
• Title 14 CFR Chapter I (FAA), Part 21, Certification Procedures for Products and Parts
• Title 48 CFR, Federal Acquisition Regulations
• Title 48 CFR 2210, Specifications, Standards and Other Purchase Descriptions
• Title 48 CFR 1, Part 46, Quality Assurance
• DoD Cataloging Handbook H6, Federal Item Identification Guides for Supply Cataloging
• DoD Cataloging Handbook H7, Manufacturers Part & Drawing Numbering Systems for
Use in the Federal Cataloging System
• DoDISS, Department of Defense Index of Specifications & Standards
• MIL-HDBK-59, Computer-Aided Acquisition & Logistics Support (CALS) Program
Implementation Guide (CALS is now known as Continuous Acquisition & Life Cycle
Support)
• MIL-HDBK-61, Configuration Management
• MIL-STD-109, Quality Assurance Terms & Definitions
• MIL-STD-1168, Lot Numbering of Ammunition
• MIL-STD-130, Identification Marking of US Military Property
• MIL-HDBK-245, Preparation of Statement of Work
• MIL-STD-280, Definition of Item Levels, Item Exchangeability, Models & Related Terms
• MIL-HDBK-454, Standard General Requirements for Electronic Equipment
• MIL-STD-881, Work Breakdown Structure for Defense Material Items
• MIL-STD-961, Military Specifications & Associated Documents, Preparation of
• MIL-STD-962D, Defense Standards Format and Content
• MIL-STD-963C, Data Item Descriptions (DIDs)
• MIL-STD-973, CM Notice 3 (canceled 9/30/2000)
• MIL-STD-974, CITIS (Contractor Integrated Technical Information Service; being
transitioned to a non-government standard)
• MIL-STD-1309, Definitions of Terms for Test, Measurement & Diagnostic Equipment
• MIL-STD-1465, CM of Armaments, Munitions & Chemical Production Modernization
• MIL-STD-1520, Corrective Action & Disposition System for Nonconforming Material
• DOD-STD-1700, Data Management Program (not superseded but generally replaced by
SAE-GEIA-859, Data Management, 24 Nov 2014)
• MIL-STD-1767, Procedures for Quality assurance & Configuration Control of ICBM
Weapon System Technical Publications & Data
• MIL-STD-1840, Automated Interchange of Technical Information
• MIL-STD-2084, General Requirements for Maintainability of Avionics & Electronic
Systems & Equipment
• DOD-STD-2168, Defense System Software Quality Program
• MIL-I-8500, Interchangeability & Replaceability of Component Parts for Aerospace
Vehicles
• MIL-S-83490, Specification, Types & Forms
• SMC-S-002, Configuration Management (Space and Missile Command)
• USAFAI 33-114, Managing Software Configuration and Controlling Data in the Cadet
Administrative Management Information System (CAMIS)
28. France
• NF EN 13290-5 January 2002, Management of Space Projects—General Requirements—
Part 5: Configuration Management
• AFNOR NF EN 300291-1, Telecommunications Management Network (TMN)—
Functional Specification of Customer Administration (CA) on the Operations System/
Network Element (OS/NE) Interface—Part 1: Single Line Configurations (V1.2.1)
• AFNOR NF ETS 300617, Digital Cellular Telecommunications System (Phase 2)—GSM
Network Configuration Management
Institute of Electrical and Electronics Engineers (IEEE)
• IEEE Std 803.1-1992, Recommended Practice for Unique Identification in Power Plants
& Related Facilities - Component Function Identifiers
• IEEE Std 804-1983, Recommended Practice for Implementation of Unique Identification
System in Power Plants & Related Facilities
• IEEE Std 805-1984, Recommended Practice for System Identification in Nuclear Power
Plants & Related Facilities
• IEEE Std 806-1986, Recommended Practice for System Identification in Fossil-Fueled
Power Plants & Related Facilities
• IEEE/EIA 12207.0, Industry Implementation of ISO/IEC 12207 (Standard for Information
Technology)
• IEEE/EIA 12207.1, Guide for Information Technology - Software Life Cycle Processes -
Life Cycle Data
• IEEE/EIA 12207.2, Guide for Information Technology - Software Life Cycle Processes -
Implementation Considerations
• J-STD-016 (EIA/IEEE Interim Standard), Standard for Information Technology: Software
Life Cycle Processes, Software Development, Acquirer/Supplier Agreement
• IEEE Std 828-2012, IEEE Standard for Configuration Management in Systems and Software
Engineering, Institute of Electrical and Electronics Engineers, 16 March 2012, 71 pages
38. Norway
Norwegian Defense Acquisition Regulation (ARF)
• ISO 10007 with a contractual adaptation (replaced in 2014 by NATO ACMP 2100);
suppliers' quality management systems shall comply with Allied Quality Assurance
Requirements (AQAPs)
43. NUREG (U.S. Nuclear Regulatory Commission)
• NUREG/BR-0167, Software Quality Assurance Programs & Guidelines
• NUREG 1000, Generic Implications of ATWS Events at the Salem Nuclear Power Plant
• NUREG/CR 1397, An Assessment of Design Control Practices & Design Reconstitution
Programs in the Nuclear Power Industry
• NUREG/CR 4640, Handbook of Software Quality Assurance Techniques Applicable to
the Nuclear Industry
• NUREG/CR-5147, Fundamental Attributes of a Practical CM Program for Nuclear Plant
Design Control
• NUMARC (Nuclear Management & Resources Council)
• NUMARC 90-12, Design Basis Program Guidelines, October 1990
44. Occupational Safety & Health Administration (OSHA) (U.S. Department of Labor)
• OSHA 1910.119, Process Safety Management of Highly Hazardous Chemicals
• OSHA Standards for the Construction Industry (29 CFR Part 1926)
• OSHA Standards for General Industry (29 CFR Part 1910)
48. Spain
• AENOR UNE-EN 13290-5, Space Project Management—General Requirements—Part
5: Configuration Management
• AENOR UNE 135460-1-1, Road Equipment. Traffic Control Centers. Part 1-1: Remote
Stations, Services Management. Communications and Configuration Services
• AENOR UNE 73101, Configuration Management in Nuclear Power Plants
50. TechAmerica
• ANSI/EIA-649B, Configuration Management
• GEIA-HB-649, Configuration Management Handbook
• GEIA-859A, Data Management
• TECHAMERICA CMB 4-1A, Configuration Management Definitions for Digital
Computer Programs (withdrawn)
• TECHAMERICA CMB 5-A, Configuration Management Requirements for
Subcontractors/ Vendors
• TECHAMERICA CMB 6-10, Education in Configuration and Data Management
• TECHAMERICA CMB 6-1C, Configuration and Data Management References
• TECHAMERICA CMB 6-2, Configuration and Data Management In-House Training
Plan
• TECHAMERICA CMB 6-9, Configuration and Data Management Training Course
• TECHAMERICA CMB 7-1, Electronic Interchange of Configuration Management Data
• TECHAMERICA CMB 7-2, Guideline for Transitioning Configuration Management to
an Automated Environment
• TECHAMERICA CMB 7-3, CALS Configuration Management SOW and CDRL
Guidance
• TECHAMERICA GEIA-TB-0002, System Configuration Management Implementation
Template (Oriented for a U.S. Military Contract Environment)
FEEDBACK FORM
Title of Document:
Systems Engineering Guidebook
Author:
David C. Hall, ESEP/CISSP, Hall Associates LLC
Send To:
[email protected]