Systems Engineering
Systems engineering is an interdisciplinary field of engineering that focuses on how complex engineering projects should be designed and managed. Issues such as logistics, the coordination of different teams, and automatic control of machinery become more difficult when dealing with large, complex projects. Systems engineering deals with the work processes and tools needed to handle such projects, and it overlaps with both technical and human-centered disciplines such as control engineering, mechatronics engineering, industrial engineering, organizational studies, and project management. As an approach, systems engineering is holistic and interdisciplinary in flavour.
Agreement processes
The first step in the systems engineering process is to establish an agreement with the customer to build a new system. This agreement is made when the outcome of the feasibility study is positive: there is a need for a new system, and either no existing system can be used or it is more cost effective to create a new one. When the outcome is negative, the project ends here. The inputs to the feasibility study are an acquisition phase and a problem definition. A well-known way to acquire knowledge is the structured or unstructured interview; this and several other acquisition techniques can be used to define the needs of the customer, the users and other stakeholders.

Project processes
After an agreement is made, planning of the project begins and results in a project plan, which can be modified during the technical processes. The project processes are not sequential; they run in parallel with the whole project, because at each moment a project needs planning, assessment and control. The design loop indicates that during the technical processes, after each step, project management assesses whether changes have to be made in, for example, the schedule. To ensure that a project will be successful, project management sees that objectives are met within time, cost and a certain performance level; a work breakdown structure is made to support this. Configuration management records the changes made to the design or requirements, which enables stakeholders to comment on a given change proposal. Risk control is also vital to project success: identifying risks and thinking about solutions at an early stage saves a great deal of work and money later.

Technical processes
The technical processes cover the design, development and implementation phases of a system life cycle. In the earlier agreement phase, top-level (or customer) requirements were established. This set of top-level requirements is translated into software requirements, which define the functionality of the software product. These software requirements can lead to several alternative designs for the product. Each requirement is periodically examined for validity, consistency, desirability and attainability (see evaluation processes). With these examinations, or evaluations, a decision can be made on the design. With the chosen design, a requirements analysis is performed and a functional design can be made. This functional design is a description of the product in the form of a model, the functional architecture, which describes what the product does and which items it consists of (allocation and synthesis). Thereafter, the product can actually be developed, integrated and implemented in the user environment.

Evaluation processes
Another loop in the model is the evaluation loop. During and after the creation of a software product, the following questions have to be answered: Does the product do what it is intended to do? Are the requirements met? And, as mentioned in the previous paragraph: Are the requirements valid and consistent, and how are they prioritized? This requires that the requirements be testable. For example, a usability requirement such as "the software product must be easy to use" can be tested through a heuristic evaluation. During the lifetime of a software product, it is continuously revised, updated and re-evaluated until it is no longer used and is disposed of.
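To make the periodic requirement examination concrete, here is a toy sketch in Python. Everything in it (the field names, the four boolean criteria, the example requirements) is invented for illustration; it is not a standard model or API, just one way to encode the validity/consistency/desirability/attainability checklist described above.

from dataclasses import dataclass

@dataclass
class Requirement:
    text: str
    valid: bool        # still reflects a real stakeholder need?
    consistent: bool   # free of conflicts with other requirements?
    desirable: bool    # worth its cost?
    attainable: bool   # achievable with available technology and budget?

    def passes_evaluation(self) -> bool:
        # A requirement survives the evaluation loop only if it meets
        # all four criteria named in the text above.
        return self.valid and self.consistent and self.desirable and self.attainable

# Invented example requirements.
requirements = [
    Requirement("The system shall log every transaction.", True, True, True, True),
    Requirement("The UI shall respond within 1 ms.", True, True, True, False),
]

# Requirements failing any criterion are flagged for rework or removal.
for r in requirements:
    status = "keep" if r.passes_evaluation() else "re-examine"
    print(f"{status}: {r.text}")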
1. Life Cycle Analysis
A life cycle analysis (LCA) is a method to categorize man-made products according to how much energy is involved in their production, as well as any other potential environmental impacts. The LCA is made up of five parts:
1. Raw material acquisition - includes the total amount of energy to extract the material from its virgin state.
2. Materials manufacture - refers to the phase of the product's creation where its materials are created.
3. Production - refers to the assembly of the materials into the form of the product.
4. Use/reuse/maintenance - includes all of the energy and potential impacts of the product's use, reuse and maintenance.
5. Waste management - what happens to the material that is discarded during the creation of the product, as well as to the product itself at the end of its use.
This is often called a "cradle to grave" approach, since a product is evaluated from the time its material is removed from the earth until it is returned.
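As a minimal sketch of the bookkeeping this implies, a cradle-to-grave energy total is simply the sum over the five phases. The phase names follow the list above; the energy figures and the megajoule unit are invented placeholders, not real inventory data.

# Minimal cradle-to-grave sketch: total the energy attributed to each
# of the five LCA phases. All numeric values are invented placeholders.
phase_energy_mj = {
    "raw material acquisition": 120.0,
    "materials manufacture": 75.0,
    "production": 40.0,
    "use/reuse/maintenance": 310.0,
    "waste management": 15.0,
}
total = sum(phase_energy_mj.values())
print(f"Cradle-to-grave energy: {total:.1f} MJ")
for phase, energy in phase_energy_mj.items():
    print(f"  {phase}: {100 * energy / total:.0f}% of total")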
2. Life Cycle Cost
Life cycle cost (LCC) is the total cost of ownership of machinery and equipment, including its cost of acquisition, operation, maintenance, conversion, and/or decommissioning (SAE 1999). LCCs are summations of cost estimates from inception to disposal, for both equipment and projects, as determined by an analytical study and estimate of the total costs experienced in annual time increments during the project life, with consideration for the time value of money. The objective of LCC analysis is to choose the most cost-effective approach from a series of alternatives (note that alternatives is plural) so as to achieve the lowest long-term cost of ownership. LCC is an economic model over the project life span. Usually the costs of operation, maintenance and disposal exceed all other first costs many times over; supporting costs are often 2 to 20 times greater than the initial procurement costs. The best balance among cost elements is achieved when the total LCC is minimized (Landers 1996). As with most engineering tools, LCC provides the best results when engineering art and science are merged with good judgment to build a sound business case for action. Businesses must summarize LCC results in net present value (NPV) format, considering depreciation, taxes and the time value of money. Government organizations need not include depreciation or taxes in LCC decisions, but they must consider the time value of money.
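A minimal sketch of the NPV-style summation described above, assuming a constant discount rate and annual time increments. The cost figures and the 7% rate are invented for illustration; a real study would use estimated costs per year and an organization's own discount rate.

def lcc_npv(annual_costs, discount_rate):
    """Discount each year's cost to present value and sum.

    annual_costs[0] is the acquisition-year (year 0) cost; later entries
    cover operation and maintenance and, in the final year, disposal.
    """
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(annual_costs))

# Invented example: $1,000 acquisition, $300/yr support for 9 years,
# $150 disposal in year 10, discounted at 7%.
costs = [1000] + [300] * 9 + [150]
print(f"Life cycle cost (NPV @ 7%): ${lcc_npv(costs, 0.07):,.0f}")

Note how the undiscounted support and disposal costs ($2,850) dwarf the $1,000 acquisition cost, which is exactly the pattern the text describes.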
3. Supportability
Supportability engineering deals with all aspects of the maintenance, repair and support of systems and products, to ensure continued operation or functioning of the system or product(s). Supportability engineering is executed during the development and design of new systems to ensure that they are supportable in a cost-effective manner. It may also be performed during the operational life of a system, to optimize the availability of the system at an acceptable cost. Supportability engineering is related to, and developed from, Integrated Logistics Support.

4. Reliability
Reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly. For example, if a test is designed to measure a trait (such as introversion), then each time the test is administered to a subject the results should be approximately the same. Unfortunately, it is impossible to calculate reliability exactly, but there are several ways to estimate it.
Test-Retest Reliability
To gauge test-retest reliability, the test is administered twice, at two different points in time. This kind of reliability is used to assess the consistency of a test across time, and it assumes that there is no change in the quality or construct being measured. Test-retest reliability is best used for things that are stable over time, such as intelligence. Generally, reliability will be higher when little time has passed between tests.

Inter-rater Reliability
This type of reliability is assessed by having two or more independent judges score the test. The scores are then compared to determine the consistency of the raters' estimates. One way to test inter-rater reliability is to have each rater assign each test item a score; for example, each rater might score items on a scale from 1 to 10. Next, you would calculate the correlation between the two sets of ratings to determine the level of inter-rater reliability. Another approach is to have the raters determine which category each observation falls into and then calculate the percentage of agreement between them. So, if the raters agree 8 out of 10 times, the test has an 80% inter-rater reliability rate (a small sketch of both computations follows these paragraphs).

Parallel-Forms Reliability
Parallel-forms reliability is gauged by comparing two different tests that were created using the same content. This is accomplished by creating a large pool of test items that measure the same quality and then randomly dividing the items into two separate tests. The two tests should then be administered to the same subjects at the same time.

Internal Consistency Reliability
This form of reliability is used to judge the consistency of results across items on the same test. Essentially, you are comparing test items that measure the same construct to determine the test's internal consistency. When you see a question that seems very similar to another test question, it may indicate that the two questions are being used to gauge reliability. Because the two questions are similar and designed to measure the same thing, the test taker should answer both the same way, which would indicate that the test has internal consistency.
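Here is a minimal sketch of the two inter-rater computations described earlier: percent agreement on categorical judgments, and a Pearson correlation between two raters' 1-to-10 scores. All of the rating data is invented; the first dataset is constructed to reproduce the 8-out-of-10 (80%) example from the text.

def percent_agreement(cats_a, cats_b):
    # Fraction of observations both raters placed in the same category.
    matches = sum(a == b for a, b in zip(cats_a, cats_b))
    return matches / len(cats_a)

def pearson_r(xs, ys):
    # Pearson correlation between two raters' numeric scores.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: raters agree on 8 of 10 categorical judgments -> 80%.
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]
print(f"Agreement: {percent_agreement(a, b):.0%}")

# Invented 1-to-10 scores from the same two raters.
scores_a = [7, 9, 4, 6, 8, 5, 7, 3, 9, 6]
scores_b = [6, 9, 5, 6, 7, 5, 8, 4, 9, 5]
print(f"Pearson r: {pearson_r(scores_a, scores_b):.2f}")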
Reliability may be defined in several ways:
o The idea that something is fit for a purpose with respect to time;
o The capacity of a device or system to perform as designed;
o The resistance to failure of a device or system;
o The ability of a device or system to perform a required function under stated conditions for a specified period of time;
o The probability that a functional unit will perform its required function for a specified interval under stated conditions.
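The last definition is the quantitative one. Under the exponential model with a constant failure rate, a common modeling assumption (made here for illustration, not stated in the text above), it reduces to R(t) = exp(-t / MTBF):

import math

def reliability(t_hours, mtbf_hours):
    # Exponential model: R(t) = exp(-t / MTBF), i.e. a constant failure
    # rate lambda = 1 / MTBF. This is a modeling assumption, not a
    # universal law; other lifetime distributions are also used.
    return math.exp(-t_hours / mtbf_hours)

# Invented example: probability that a unit with a 5,000-hour MTBF
# survives a 1,000-hour mission without failure.
print(f"R(1000 h) = {reliability(1000, 5000):.1%}")  # about 81.9%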
5. Maintainability
Maintainability is defined as the probability of performing a successful repair action within a given time. In other words, maintainability measures the ease and speed with which a system can be restored to operational status after a failure occurs. For example, if a particular component is said to have a 90% maintainability in one hour, there is a 90% probability that the component will be repaired within an hour. In maintainability the random variable is time-to-repair, in the same manner as time-to-failure is the random variable in reliability (a small numeric sketch follows the definitions below). In engineering, maintainability is the ease with which a product can be maintained in order to:
o correct defects;
o meet new requirements;
o make future maintenance easier; or
o cope with a changed environment.
In telecommunication and several other engineering fields, the term maintainability has the following meanings:
1. A characteristic of design and installation, expressed as the probability that an item will be retained in or restored to a specified condition within a given period of time, when the maintenance is performed in accordance with prescribed procedures and resources.
2. The ease with which maintenance of a functional unit can be performed in accordance with prescribed requirements.
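A minimal sketch of the probabilistic definition, assuming (as is common, but it is an assumption here) an exponential repair-time distribution. It reproduces the "90% maintainability in one hour" example from above by solving for the mean time to repair (MTTR):

import math

def maintainability(t_hours, mttr_hours):
    # Exponential repair model: M(t) = 1 - exp(-t / MTTR), where
    # time-to-repair is the random variable. A modeling assumption.
    return 1.0 - math.exp(-t_hours / mttr_hours)

# For 90% maintainability in one hour, MTTR must satisfy
# 1 - exp(-1 / MTTR) = 0.9  =>  MTTR = -1 / ln(0.1), about 0.434 h.
mttr = -1.0 / math.log(0.1)
print(f"MTTR = {mttr:.3f} h, M(1 h) = {maintainability(1.0, mttr):.0%}")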
6. Human Factors Engineering
Human Factors Engineering (HFE) is the discipline of applying what is known about human capabilities and limitations to the design of products, processes, systems, and work environments. It can be applied to the design of all systems having a human interface, including hardware and software. Its application to system design improves ease of use, system performance and reliability, and user satisfaction, while reducing operational errors, operator stress, training requirements, user fatigue, and product liability. HFE is distinctive in being the only discipline that relates humans to technology. Human factors engineering focuses on how people interact with tasks, machines (or computers), and the environment, with the consideration that humans have both limitations and capabilities. Human factors engineers evaluate "human to human," "human to group," "human to organizational," and "human to machine (computer)" interactions to better understand them and to develop a framework for evaluation. Human factors engineering activities include:
1. Usability assurance
2. Determination of desired user profiles
3. Development of user documentation
4. Development of training programs.

7. Safety Engineering
Safety is freedom from hazards and accidents. It is a relative matter of freedom from risks and dangers: we can never achieve absolute safety. What we can do is maintain a certain level of relative safety, which varies with the situation.
Safety engineering itself is the field that applies scientific knowledge within systems engineering to prevent accidents in the systems engineering workplace.
8. Electromagnetic Compatibility
All electric devices or installations influence each other when interconnected or close to each other. Sometimes you observe interference between your TV set, your GSM handset, your radio and a nearby washing machine or electrical power lines. The purpose of electromagnetic compatibility (EMC) is to keep all those side effects under reasonable control. EMC designates all the existing and future techniques and technologies for reducing disturbance and enhancing immunity. The main objective of Directive 2004/108/EC of the European Parliament and of the Council of 15 December 2004, on the approximation of the laws of Member States relating to electromagnetic compatibility (EMC), is thus to regulate the compatibility of equipment with regard to EMC:
o equipment (apparatus and fixed installations) needs to comply with EMC requirements when it is placed on the market and/or taken into service;
o the application of good engineering practice is required for fixed installations, with the possibility for the competent authorities of Member States to impose measures if non-compliance is established.
The EMC Directive first limits electromagnetic emissions of equipment in order to ensure that, when used as intended, such equipment does not disturb radio and telecommunication as well as other equipment. The Directive also governs the immunity of such equipment to interference and seeks to ensure that this equipment is not disturbed by radio emissions when used as intended.
Types of interference
Electromagnetic interference divides into several categories according to the source and signal characteristics. The origin of the noise can be man-made or natural.

Continuous interference
Continuous, or continuous wave (CW), interference arises where the source regularly emits a given range of frequencies. This type is naturally divided into sub-categories according to frequency range, and as a whole is sometimes referred to as "DC to daylight".
Audio frequency, from very low frequencies up to around 20 kHz (frequencies up to 100 kHz may sometimes be classified as audio). Sources include:
o Mains hum from power supply units, nearby power supply wiring, transmission lines and substations.
o Audio processing equipment, such as audio power amplifiers and loudspeakers.
o Demodulation of a high-frequency carrier wave, such as an FM radio transmission.
Radio frequency interference (RFI), from 20 kHz to a limit which constantly increases as technology pushes it higher. Sources include:
o Wireless and radio frequency transmissions
o Television and radio receivers
o Industrial, scientific and medical equipment
o Digital processing circuitry (for example, microcontrollers)
Broadband noise may be spread across parts of either or both frequency ranges, with no particular frequency accentuated. Sources include:
o Solar activity
o Continuously operating spark gaps, such as arc welders
o CDMA mobile telephony
Pulse or transient interference
An electromagnetic pulse (EMP), also sometimes called a transient disturbance, arises where the source emits a short-duration pulse of energy. The energy is usually broadband by nature, although it often excites a relatively narrow-band damped sine wave response in the victim. Sources divide broadly into isolated and repetitive events.

Sources of isolated EMP events include:
o Switching action of electrical circuitry, including inductive loads such as relays, solenoids, or electric motors.
o Electrostatic discharge (ESD), as a result of two charged objects coming into close proximity or even contact.
o Nuclear electromagnetic pulse (NEMP), as a result of a nuclear explosion.
o Non-nuclear electromagnetic pulse (NNEMP) weapons.
o Power line surges/pulses.

Sources of repetitive EMP events include:
o Electric motors.
o Gasoline engine ignition systems.
o Continual switching actions of digital electronic circuitry.
9. Testability
Design for Testability covers the aspects of the product design process whose goal is to ensure that the testability of the end product is competently and sufficiently developed. It is one of two independent yet related disciplines, both falling under the testability rubric, that emerged from the realization that good system diagnostics are the result not only of well-developed test and diagnostic procedures, but also of decisions made during the product design process. Today, Design for Testability most frequently refers to a set of practices that provide a means of ensuring fit or function of low-level circuits within electronic circuit boards or chips (including fitness for software test). In practice, Design for Testability is closely related to Design for Test (the two even share the acronym DFT). Design for Test (aka "Design for Testability" or "DFT") is a name for design techniques that add certain testability features to a microelectronic hardware product design. The premise of the added features is that they make it easier to develop and apply manufacturing tests for the designed hardware. The purpose of manufacturing tests is to validate that the product hardware contains no defects that could otherwise adversely affect the product's correct functioning. Tests are applied at several steps in the hardware manufacturing flow and, for certain products, may also be used for hardware maintenance in the customer's environment. The tests are generally driven by test programs that execute in Automatic Test Equipment (ATE) or, in the case of system maintenance, inside the assembled system itself. In addition to finding and indicating the presence of defects (i.e., the test fails), tests may be able to log diagnostic information about the nature of the encountered failures, and this diagnostic information can be used to locate the source of the failure.

10. Manufacturability
Design for manufacturability (DFM) is the general engineering art of designing products in such a way that they are easy to manufacture. The basic idea exists in almost all engineering disciplines, but the details differ widely depending on the manufacturing technology. Ideally, DFM guidelines take into account the processes and capabilities of the manufacturing industry; DFM is therefore constantly evolving. As manufacturing companies automate more and more stages of their processes, those processes tend to become cheaper, and DFM is used to capture such cost reductions. For example, if a process can be done automatically by machines (e.g., SMT component placement and soldering), it is likely to be cheaper than doing the same work by hand.
11. Value Analysis
Value analysis (and its design partner, value engineering) is used to increase the value of products or services to all concerned, by considering the function of individual items and the benefit of that function, and balancing this against the cost incurred in delivering it. The task then becomes to increase the value or decrease the cost.
1. Manufacturing: systematic analysis that identifies and selects the best-value alternatives for designs, materials, processes, and systems. It proceeds by repeatedly asking, "Can the cost of this item or step be reduced or eliminated without diminishing effectiveness, required quality, or customer satisfaction?" Also called value engineering, its objectives are (1) to distinguish between the incurred costs (actual use of resources) and the costs inherent (locked in) in a particular design, which determine the costs that will be incurred, and (2) to minimize the locked-in costs.
2. Purchasing: examination of each procurement item to ascertain its total cost of acquisition, maintenance, and usage over its useful life and, wherever feasible, to replace it with a more cost-effective substitute. Also called value-in-use analysis.

12. Design to Cost
Design to cost (DTC) (Michaels & Wood, 1989) is a methodology which allows designers to achieve cost targets decided at the early product-definition phase. This creates a situation where the total target cost of the system is a requirement in the same way as, for example, the total reliability of the system is a requirement. Much of the initial cost of a system is a function of the design, development and production costs. Consequently, in DTC, cost is elevated to the same level of concern as performance and schedule, and is evaluated continuously during the design and manufacturing processes. DTC manages and controls cost considerations in all the development processes, based on the following elements (a short illustrative sketch follows the list):
1. Allocation of target cost to the cost factors of the project. Target costs are derived from the price that the customer is willing to pay and from the market or business conditions. The project manager initiates allocation of target cost values to subsystems, assemblies and parts, down to the level of the individual designer.
2. Design to meet target cost and provision of data and cost estimation tools for designers. Achieving the allocated target cost is the responsibility of each designer and each design team. They are supported by a special team which includes manufacturing engineers, purchasing staff, technologists and others. The team considers all the aspects that affect the cost, such as manufacturing process, testing, assembly, operation and maintenance. The designers re-evaluate their design until the product achieves its allocated target cost.
3. Cost control using cost estimation for each cost factor and Design to Cost Reviews (DTCR): continuous evaluation by managers of the design for conformance to target cost. The evaluation process can be incorporated into the regular design and development reviews, or into special reviews initiated by various management levels.
4. Corrective actions as required to reduce costs. If the target costs are exceeded at any point in the development process, immediate corrective measures are applied. These measures include updates of intermediate target costs, design updates, and value engineering reviews and workshops.
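Here is a minimal sketch of elements 1 and 4 from the list above: allocate a top-level target cost across subsystems, then flag any current estimate that exceeds its allocation so corrective action can be triggered. The subsystem names, allocation weights and dollar figures are all invented for illustration.

def allocate_target_cost(total_target, weights):
    # Element 1: split the top-level target cost across subsystems in
    # proportion to (invented) allocation weights.
    total_w = sum(weights.values())
    return {name: total_target * w / total_w for name, w in weights.items()}

# Invented example: a $2.0M system target split over three subsystems.
targets = allocate_target_cost(
    2_000_000, {"airframe": 5, "avionics": 3, "software": 2}
)
estimates = {"airframe": 950_000, "avionics": 640_000, "software": 380_000}

# Element 4: flag overruns so corrective measures (design updates,
# value engineering reviews) can be applied to that subsystem.
for name, target in targets.items():
    gap = estimates[name] - target
    flag = "OVER TARGET" if gap > 0 else "ok"
    print(f"{name}: target ${target:,.0f}, estimate ${estimates[name]:,.0f} ({flag})")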
Online References:
http://en.wikipedia.org/wiki/Systems_engineering
http://www.barringer1.com/pdf/LifeCycleCostSummary.pdf
http://en.wikipedia.org/wiki/Support_engineering
http://en.wikipedia.org/wiki/Maintainability
http://www.weibull.com/SystemRelWeb/maintainability.htm
http://en.wikipedia.org/wiki/Human_factors#Human_Factors_Engineering
http://ec.europa.eu/enterprise/sectors/electrical/emc/
http://en.wikipedia.org/wiki/Electromagnetic_compatibility
http://www.designfortestability.com/
http://en.wikipedia.org/wiki/Design_For_Test
http://ezinearticles.com/?Design-For-Manufacturability
http://www.businessdictionary.com/definition/value-analysis.html
http://ae-www.technion.ac.il/events/system_workshop/Conceptual%20Design%20to%20Cost_Hari%20Amihud.pdf