Data Processing Techniques


1. Real Time Data Processing:


Real time data processing involves a continual input, processing and output of data. Data must be processed within a small time period (in or near real time). Radar systems, customer services and bank ATMs are examples. Real time processing is the processing of data that takes place as the information is being entered. Real time data processing and analytics gives an organization the ability to take immediate action when acting within seconds or minutes is significant. The goal is to obtain the insight required to act prudently at the right time, which increasingly means immediately.

Operational Intelligence (OI) uses real time data processing to gain insight into operations by running query analysis against live feeds and event data. Operational Intelligence is near real time analytics over operational data and provides visibility across many data sources. The goal is to obtain near real time insight using continuous analytics so that the organization can take immediate action.

Real time data processing is used by Point of Sale (POS) systems to update inventory, provide inventory history and track sales of a particular item, allowing an organization to run payments in real time. Assembly lines use real time processing to reduce time, cost and errors: when a certain process is completed, the item moves on to the next step, and if errors are detected in a previous process they are easier to pinpoint. Some organizations that use real-time processing are banks, whose ATMs give an individual up-to-the-minute visibility of their bank account, and radar systems such as those run by NOAA, which process weather data in real time to provide up-to-date conditions such as storms, hurricanes and floods.
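
The short sketch below is a minimal, hypothetical illustration of the POS example above: each sale is validated, charged and reflected in inventory the moment it is entered, rather than being queued for later. The names (process_sale, inventory, charge_payment) and figures are invented for illustration only.

    # Hypothetical sketch: a point-of-sale handler that processes each sale
    # as soon as it is entered, so payment and stock levels update immediately.
    inventory = {"SKU-001": 25, "SKU-002": 8}

    def process_sale(sku, quantity, charge_payment):
        # The error is caught at the moment of entry, a key advantage of real-time processing.
        if inventory.get(sku, 0) < quantity:
            raise ValueError("Insufficient stock for " + sku)
        charge_payment(sku, quantity)      # payment runs straight away
        inventory[sku] -= quantity         # stock level is up to date at once
        return inventory[sku]

    remaining = process_sale("SKU-001", 2,
                             lambda sku, qty: print("Charged for", qty, "x", sku))
    print("Stock remaining:", remaining)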

Advantages of real-time processing: Errors in the data are caught at the time the data is entered and can be fixed immediately. Real-time processing increases employee productivity, provides better control over inventory and inventory turnover, and reduces the delay in billing the customer, which promotes customer satisfaction. It also reduces the amount of paper being used.

Disadvantages of real-time processing: Real-time processing is more complex and expensive than batch systems. A real-time processing system is harder to audit. Backups are necessary to retain the data being processed in real time.

2. Batch Data Processing:


Batch data processing is an efficient way of processing high volumes of data in which a group of transactions is collected over a period of time. Data is collected, entered and processed, and then the batch results are produced (Hadoop is focused on batch data processing). Batch processing requires separate programs for input, processing and output; payroll and billing systems are examples. Source documents are grouped into batches and control totals are calculated. Periodically (say after some minutes, days, weeks or months), the batches are entered into a computer system, edited, sorted and stored in a temporary file. The temporary file is run against the master file to update the master file. Output is displayed or printed along with error and transaction reports.

A batch is a group of similar transactions that are accumulated to be processed together. Under batch processing, the data is collected before any processing takes place; it is then entered into the system in one go (a batch of data) and processed. There are times when it makes more sense for an organization to use batch processing rather than real-time processing, for example when it is looking for a more economical way of processing a large volume of transactions. It also makes sense for a company to process its payroll in a batch: the hours are calculated over a period of time and processed according to the pay period, which saves time and money because an employee is not constantly updating the payroll system. Batch processing can also benefit a small business, allowing it to compete with larger manufacturing companies and enhance its operations at lower cost. Batch processing, therefore, is when the computer does not carry out any processing or produce any output until all the inputs have been collected.

Some organizations that use batch processing are credit card companies, where the monthly credit card bill is generated by processing all data as a batch at the end of the monthly billing cycle, and utility companies, which gather information over the month, batch process it, and then calculate, print and send the bills to customers.
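
As a minimal, hypothetical sketch of the idea, the code below only records transactions during the period; nothing is processed until the batch run applies everything to the master file in one go. The names (record_transaction, run_batch, master_file) and figures are invented for illustration.

    # Hypothetical batch-processing sketch: capture now, process later in one run.
    from collections import defaultdict

    master_file = defaultdict(float)   # e.g. customer id -> balance owed
    batch = []                         # transactions accumulate here during the period

    def record_transaction(customer_id, amount):
        batch.append((customer_id, amount))    # captured only, not processed yet

    def run_batch():
        # Run periodically (end of day or month): check the control total,
        # sort the batch and apply it to the master file in one go.
        control_total = sum(amount for _, amount in batch)
        for customer_id, amount in sorted(batch):
            master_file[customer_id] += amount
        batch.clear()
        return control_total

    record_transaction("C-101", 40.0)
    record_transaction("C-102", 12.5)
    print("Control total:", run_batch())
    print("Master file:", dict(master_file))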

Advantages of batch processing: Once the data process is started, the computer can be left running without supervision. Batch processing allows an organization to increase efficiency because a large amount of transactions can be combined into a batch rather than processing them each individually. Batch processing is cost efficient for an organization. Batch processing is only carried out when needed and uses less computer processing time. Batch processing allows for a good audit trail.

Disadvantages of batch processing: With batch processing there is a time delay before the work is processed and returned. In batch processing, the master file is not always kept up to date. Batch processing can be slow.

3. Online Data Processing:


Data is processed by the system as it is entered and the master file is immediately updated. Real time processing is a subset of online processing; under real time processing, response time and processing speed are enhanced so that output is produced before the next input is taken. In online data processing, processing takes place as the data is input, but the system does not need to be instant: a delay of a few seconds is not critical. Most examples involve updating some form of database and often involve multiple users over a LAN or a WAN such as the Internet. It is more expensive to set up than a batch processing system because the hardware has to cope with peaks of demand and there must be a reliable backup system.

Some systems that use online processing are: an airline ticket booking system used by a travel agent or accessed directly by customers over the Internet, where each booking updates a central database almost immediately to avoid double booking seats; and a reservation system for booking theatre tickets, which could be accessed by booking staff at the theatre or directly by customers over the Internet.
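
The fragment below is an illustrative sketch, not any real booking system: the central record (represented here by a simple set named booked_seats) is updated as soon as a request is entered, so a second attempt to book the same seat is rejected straight away.

    # Hypothetical online-processing sketch: the master data changes as each request arrives.
    booked_seats = set()   # stands in for the central bookings database

    def book_seat(seat):
        if seat in booked_seats:
            return "Seat " + seat + " already taken"   # double booking avoided
        booked_seats.add(seat)                         # master data updated immediately
        return "Seat " + seat + " confirmed"

    print(book_seat("12A"))
    print(book_seat("12A"))   # the second attempt is rejected straight away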

Advantages of online processing: It enables the user to input data and get the results of processing that data immediately. The master file is updated immediately, which enables the provision of timely and relevant information.

Fact Finding Techniques


Fact finding is used by analysts to investigate requirements and to identify what the new system should be able to do. It provides a specification of what the system should do according to the users' requirements, covering both what the existing system does and what the new one is expected to do. This is done by a system or business analyst. It is a very important set of techniques because the environment of organizations is rapidly changing. There are five main fact finding techniques used by analysts to investigate requirements. Here we describe each of them in the order in which they might be applied in a system development project, and for each one explain the kind of information the analyst would expect to gain from its use, its advantages and disadvantages, and the situations in which it is appropriate to use it. The techniques are: background reading, interviewing, observation, document sampling and questionnaires.

Requirements are classified into three categories: functional requirements, non-functional requirements and usability requirements.

1. Functional requirements: Describe what a system is expected to do (its functionality) and the processes it will carry out, including details of the inputs into the system from paper forms, documents and other systems, and details of the output expected from the system as on-screen displays and as printouts on paper. 2. Non-functional requirements: Describe the quality parameters of the processing of the functional requirements. Performance criteria include the desired response time for updating or retrieving data in or from the system and the ability of the system to cope with multiple users at multiple levels. Security parameters include resistance to and detection of attacks. 3. Usability requirements: Describe the usability factors that govern the match between the system and its users. They ensure a good fit between the system and the users performing tasks on it, and efficient human-computer interaction.

1. Background reading
If an analyst is employed within the organization that is the subject of the fact gathering exercise, then it is likely that he or she will already have a good understanding of the organization and its business objectives. If, however, he or she is going in as an outside consultant, then one of the first tasks is to try to gain an understanding of the organization. Background reading or research is part of that process. The kinds of documents that are suitable sources of information include the following: company reports, organization charts, policy manuals, job descriptions, and reports and documentation of existing systems. Although reading company reports may provide the analyst with information about the organization's mission, and so possibly some indication of future requirements, this technique mainly provides information about the current system. Advantages and disadvantages: Background reading helps the analyst to get an understanding of the organization before meeting the people who work there. It also allows the analyst to prepare for other types of fact finding, for example, by being aware of the business objectives of the organization. Documentation on the existing system may provide formally defined information requirements for the current system. Written documents often do not match up to reality; they may be out of date or they may reflect the official policy on matters that are dealt with differently in practice.

2. Interviewing
Interviewing is probably the most widely used fact finding technique; it is also the one that requires the most skill and sensitivity. A systems analysis interview is a structured meeting between the analyst and an interviewee who is usually a member of staff of the organization being investigated. The interview may be one of a series of interviews that range across different areas of the interviewee's work or that probe in progressively greater depth about the tasks undertaken by the interviewee. The degree of structure may vary: some interviews are planned with a fixed set of questions that the interviewer works through, while others are designed to cover certain topics but will be open-ended enough to allow the interviewer to pursue interesting facts as they emerge. The ability to respond flexibly to the interviewee's responses is one of the reasons why interviews are so widely used. Interviews can be used to gather information from management about their objectives for the organization and for the new information system, from staff about their existing jobs and their information needs, and from customers and members of the public as possible users of systems. While conducting an interview, the analyst can also use the opportunity to gather documents which the interviewee uses in his or her work.

Advantages and disadvantages: Personal contact allows the analyst to be responsive and adapt to what the user says. Because of this, interviews produce high quality information. The analyst can probe in greater depth about the person's work than can be achieved with other methods. If the interviewee has nothing to say, the interview can be terminated. Interviews are time-consuming and can be the most costly form of fact gathering. Interview results require the analyst to work on them after the interview: the transcription of tape recordings or writing up of notes. Interviews can be subject to bias if the interviewer has a closed mind about the problem. If different interviewees provide conflicting information, it can be difficult to resolve later.

3. Observation
Watching people carrying out their work in a natural setting can provide the analyst with a better understanding of the job than interviews. Observation also allows the analyst to see what information people use to carry out their job. This can tell you about the documents they refer to, whether they have to get up from their desks to get information, and how well the existing system handles their needs. People are not good at estimating quantitative data, such as how long they take to deal with certain tasks, and observation with a stopwatch can give the analyst plentiful quantitative data, not just about typical times to perform a task but also about the statistical distribution of those times. In some cases where information or items are moving through a system and being dealt with by many people along the way, observation can allow the analyst to follow the entire process through from start to finish. This type of observation might be used in an organization where orders are taken over the telephone, passed to a warehouse for picking, packed and despatched to the customer. The analyst may want to follow a series of transactions through the system to obtain an overview of the processes involved. Advantages and disadvantages: Observation of people at work provides firsthand experience of the way that the current system operates. Data are collected in real time and can have a high level of validity if care is taken in how the technique is used. Observation can be used to verify information from other sources or to look for exceptions to the standard procedure. Baseline data about the performance of the existing system and of users can be collected. Most people do not like being observed and are likely to behave differently from the way in which they would normally behave. This can distort findings and affect their validity. Observation requires a trained and skilled observer for it to be most effective.

There may be logistical problems for the analyst, for example, if the staff to be observed work shifts or travel long distances in order to do their job. There may also be ethical problems if the person being observed deals with sensitive private or personal data or directly with members of the public, for example in a doctor's surgery.

4. Document sampling
Document sampling can be used in two different ways. Firstly, the analyst will collect copies of blank and completed documents during the course of interviews and observation sessions. These will be used to determine the information that is used by people in their work, and the inputs to and outputs from processes which they carry out, either manually or using an existing computer system. Ideally, where there is an existing system, screen shots should also be collected in order to understand the inputs and outputs of the existing system. Secondly, the analyst may carry out a statistical analysis of documents in order to find out about patterns of data. For example, many documents such as order forms contain a header section and a number of lines of detail. The analyst may want to know the distribution of the number of lines in an order. This will help later in estimating volumes of data to be held in the system and in deciding how many lines should be displayed on screen at one time. While this kind of statistical sampling can give a picture of data volumes, the analyst should be alert to seasonal patterns of activity which may mean that there are peaks and troughs in the amount of data being processed. Advantages and disadvantages: Can be used to gather quantitative data, such as the average number of lines on an invoice. Can be used to find out about error rates in paper documents. If the system is going to change dramatically, existing documents may not reflect how it will be in future.
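
The fragment below is a small, hypothetical sketch of the statistical side of document sampling: given the number of detail lines counted on each sampled order form (the figures are invented), it summarises the distribution that would help in sizing screens and estimating data volumes.

    # Hypothetical sketch: summarise the distribution of detail lines per sampled order.
    from collections import Counter

    lines_per_order = [3, 5, 2, 8, 5, 4, 3, 5, 6, 2]   # counts from sampled documents

    distribution = Counter(lines_per_order)
    average = sum(lines_per_order) / len(lines_per_order)

    print("Lines per order -> number of orders:", dict(sorted(distribution.items())))
    print("Average:", round(average, 1), "Maximum observed:", max(lines_per_order))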

5. Questionnaires
Questionnaires are a research instrument that can be applied to fact finding in system development projects. They consist of a series of written questions. The questionnaire designer usually limits the range of replies that the respondent can make by giving them a choice of options. YES/NO questions only give the respondent two options. If there are more options, the multiple choice type of question is often used when the answer is factual, whereas scaled questions are used if the answer involves an element of subjectivity. Some questions do not have a fixed number of responses, and must be left open-ended for the respondent to enter what they like. Where the respondent has a limited number of choices, these are usually coded with a number which speeds up data entry if the responses are to be analysed by computer software.
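
As a small, hypothetical illustration of why coded responses speed up analysis, the sketch below tallies the numeric codes for one scaled question; the coding scheme and figures are invented for illustration only.

    # Hypothetical sketch: coded answers to one scaled question,
    # 1 = strongly disagree ... 5 = strongly agree.
    from collections import Counter

    responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]

    counts = Counter(responses)
    mean_score = sum(responses) / len(responses)

    print("Responses per option:", dict(sorted(counts.items())))
    print("Mean score:", round(mean_score, 2))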

Advantages and disadvantages: An economical way of gathering data from a large number of people. If the questionnaire is well designed, then the results can be analysed easily, possibly by computer. Good questionnaires are difficult to construct. There is no automatic mechanism for follow up or probing more deeply, although it is possible to follow up with an interview by telephone or in person if necessary. Postal questionnaires suffer from low response rates.

Software testing
Software testing identifies defects, flaws or errors in the application code that must be fixed. We can also define software testing as a process of assessing the functionality and correctness of software through analysis. The main purposes of testing are quality assurance, reliability estimation, validation and verification. Software testing is a fundamental component of software quality assurance and represents a review of specification, design and coding. The main objective of software testing is to affirm the quality of the software system by systematically testing the software in carefully controlled circumstances; another objective is to confirm the completeness and correctness of the software; finally, it uncovers previously undiscovered errors. The three most important techniques used for finding errors are: 1) Black Box Testing 2) White Box Testing 3) Grey Box Testing

1. Black Box Testing


It is a technique of testing without any knowledge of the internal workings of the application. It examines only the fundamental aspects of the system and has little or no regard for the internal logical structure of the system. Black box testing treats the software as a black box: while performing a black box test, the tester knows the system architecture but does not have access to the source code. This testing methodology looks at what inputs are available for an application and what outputs should result from each input. It is not concerned with the inner workings of the application, the process the application undertakes to achieve a particular output, or any other internal aspect of the application involved in the transformation of an input into an output. Most black-box testing tools employ either coordinate-based interaction with the application's graphical user interface (GUI) or image recognition. An example of a black-box system is a search engine: you enter the text you want to search for in the search bar, press Search and results are returned to you. You do not know or see the specific process being employed to obtain your search results; you simply see that you provide an input (a search term) and receive an output (your search results).
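
A minimal sketch of the idea follows. The search function and its test are invented for illustration; the point is that the test exercises only inputs and expected outputs and never inspects how the result is produced.

    # Hypothetical black-box test: only the public behaviour is exercised.
    def search(query, documents):
        # The implementation is irrelevant to the black-box tester.
        return [d for d in documents if query.lower() in d.lower()]

    def test_search_black_box():
        docs = ["Batch processing notes", "Real-time systems", "Payroll guide"]
        assert search("payroll", docs) == ["Payroll guide"]   # valid input
        assert search("missing", docs) == []                  # input with no matches
        assert search("", docs) == docs                       # boundary input

    test_search_black_box()
    print("Black-box checks passed")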

Advantages to black-box testing: 1. Ease of use: Because testers do not have to concern themselves with the inner workings of an application, it is easier to create test cases by simply working through the application, as would an end user.

2. Quicker test case development: Because testers only concern themselves with the GUI, they do not need to spend time identifying all of the internal paths that may be involved in a specific process; they need only concern themselves with the various paths through the GUI that a user may take. 3. Simplicity: Where large, highly complex applications or systems exist, black-box testing offers a means of simplifying the testing process by focusing on valid and invalid inputs and ensuring the correct outputs are received.

Disadvantages to black-box testing: 1. Script maintenance: While an image-based approach to testing is useful, if the user interface is constantly changing, the input may also be changing. This makes script maintenance very difficult because black-box tools are reliant on the method of input being known. 2. Fragility: Interacting with the GUI can also make test scripts fragile. This is because the GUI may not be rendered consistently from time to time on different platforms or machines. Unless the tool is capable of dealing with differences in GUI rendering, it is likely that test scripts will fail to execute properly on a consistent basis. 3. Lack of introspection: Ironically, one of the greatest criticisms of black-box testing is that it isn't more like white-box testing; that it doesn't know how to look inside an application and therefore can never fully test an application or system. The reasons cited for needing this capability are often to overcome the first two issues mentioned. The reality is quite different.

2. White Box Testing


It is the detailed investigation of the internal logic and structure of the code. In white box testing it is necessary for the tester to have full knowledge of the source code. White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. White box testing can uncover implementation errors, such as poor key management, by analyzing the internal workings and structure of a piece of software. It is applicable at the integration, unit and system levels of the software testing process. In white box testing the tester needs to look inside the source code and find out which unit of code is behaving inappropriately. This testing methodology enables you to see what is happening inside the application. White box testing provides a degree of sophistication that is not available with black-box testing, as the tester is able to refer to and interact with the objects that comprise an application rather than only having access to the user interface. An example of a white-box approach is an auto mechanic who looks at the inner workings of a car to ensure that all of the individual parts are working correctly so that the car drives properly. The main difference between black-box and white-box testing is the area on which each chooses to focus. In simplest terms, black-box testing is focused on results: if an action is taken and it produces the desired result, then the process that was actually used to achieve that outcome is irrelevant. White-box testing, on the other hand, is concerned with the details. It focuses on the internal workings of a system, and only when all avenues have been tested and the sum of an application's parts can be shown to be contributing to the whole is testing complete.
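
The sketch below is an invented example of deriving test cases from the control structure of the code: the shipping_fee function and its figures are hypothetical, and each test case is chosen so that every branch in the function is exercised.

    # Hypothetical white-box test: cases derived from the code's control structure.
    def shipping_fee(weight_kg, express):
        if weight_kg <= 0:
            raise ValueError("weight must be positive")   # branch 1: error path
        fee = 5.0 if weight_kg < 2 else 9.0               # branches 2 and 3
        if express:
            fee *= 2                                      # branch 4: express surcharge
        return fee

    def test_shipping_fee_white_box():
        # One test case per branch identified in the code above.
        assert shipping_fee(1, express=False) == 5.0
        assert shipping_fee(3, express=False) == 9.0
        assert shipping_fee(1, express=True) == 10.0
        try:
            shipping_fee(0, express=False)
            assert False, "expected a ValueError"
        except ValueError:
            pass

    test_shipping_fee_white_box()
    print("All branches exercised")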

Advantages to white-box testing: 1. Introspection: Introspection, or the ability to look inside the application, means that testers can identify objects programmatically. This is helpful when the GUI is changing frequently or is as yet unknown, as it allows testing to proceed. It can also, in some situations, decrease the fragility of test scripts, provided the name of an object does not change. 2. Stability: In reality a by-product of introspection, white-box testing can deliver greater stability and reusability of test cases if the objects that comprise an application never change. 3. Thoroughness: In situations where it is essential to know that every path has been thoroughly tested, that every possible internal interaction has been examined, white-box testing is the only viable method. As such, white-box testing offers testers the ability to be more thorough in terms of how much of an application they can test.

Disadvantages to white-box testing: 1. Complexity: Being able to see every constituent part of an application means that a tester must have detailed programmatic knowledge of the application in order to work with it properly. This high degree of complexity requires a much more highly skilled individual to develop test cases. 2. Fragility: While introspection is supposed to overcome the issue of application changes breaking test scripts, the reality is that the names of objects often change during product development, or new paths through the application are added. The fact that white-box testing requires test scripts to be tightly tied to the underlying code of an application means that changes to the code will often cause white-box test scripts to break. This, then, introduces a high degree of script maintenance into the testing process. 3. Integration: For white-box testing to achieve the degree of introspection required, it must be tightly integrated with the application being tested. This creates a few problems. To be tightly integrated with the code, you must install the white-box tool on the system on which the application is running. This is acceptable, but where one wishes to rule out the possibility that the testing tool itself is causing a performance or operational problem, it becomes impossible to do so. Another issue that arises is platform support. Due to the highly integrated nature of white-box testing tools, many do not provide support for more than one platform, usually Windows. Where companies have applications that run on other platforms, they either need to use a different tool or resort to manual testing.

3. Grey Box Testing


White box + Black box = Grey box: it is a technique for testing the application with limited knowledge of the internal workings of an application combined with knowledge of the fundamental aspects of the system. The grey box testing technique increases test coverage by allowing the tester to focus on all the layers of a complex system through the combination of white box and black box testing. In grey box testing the tester must have knowledge of internal data structures and algorithms for the purpose of designing test cases. Examples of grey box testing techniques are: architectural models, the Unified Modelling Language (UML) and state models (finite state machines).
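
As a hypothetical sketch of the state-model example, the fragment below assumes the tester knows the internal state model of a small login session (the TRANSITIONS table is invented) and uses it to choose test cases, but drives the system only through its exposed step interface.

    # Hypothetical grey-box test: designed from a known state model, run via the interface.
    TRANSITIONS = {
        ("logged_out", "login"): "logged_in",
        ("logged_in", "logout"): "logged_out",
        ("logged_in", "lock"): "locked",
        ("locked", "unlock"): "logged_in",
    }

    def step(state, event):
        return TRANSITIONS.get((state, event), state)   # unknown events leave the state unchanged

    def test_session_state_model():
        state = "logged_out"
        # Test cases are chosen from the state model, but exercised only via step().
        for event, expected in [("login", "logged_in"), ("lock", "locked"),
                                ("unlock", "logged_in"), ("logout", "logged_out")]:
            state = step(state, event)
            assert state == expected

    test_session_state_model()
    print("State-model checks passed")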

In grey box testing the code of two modules is studied (the white box method) to design test cases, and the actual tests are performed on the interfaces exposed (the black box method). Advantages of Grey Box Testing: Grey box testing provides the combined benefits of white box and black box testing techniques. The tester relies on interface definitions and functional specifications rather than source code, and can design excellent test scenarios. The test is done from the user's point of view rather than the designer's point of view. It enables intelligent test authoring and unbiased testing.

Disadvantages of Grey Box Testing: Test coverage is limited because access to the source code is not available. Defect identification is difficult in distributed applications. Many program paths remain untested. If the software designer has already run a test case, the tests can be redundant.

Differences between Black Box, White Box and Grey Box Testing

1. Knowledge of internal working: Black box analyses fundamental aspects only, with no knowledge of the internal working; white box requires full knowledge of the internal working; grey box requires partial knowledge of the internal working.
2. Granularity: low for black box, high for white box, medium for grey box.
3. Who performs it: Black box testing is performed by end users and also by testers and developers (user acceptance testing); white box testing is performed by developers and testers; grey box testing is performed by end users and also by testers and developers (user acceptance testing).
4. Basis of test design: Black box testing is based on external expectations, and the internal behaviour of the program is ignored; in white box testing the internals are fully known; grey box test design is based on high-level database diagrams, data flow diagrams, internal states, and knowledge of the algorithm and architecture.
5. Exhaustiveness: Black box testing is the least exhaustive and time consuming; white box testing is potentially the most exhaustive and time consuming; grey box testing is somewhere in between.
6. Data domains and boundaries: Black box testing can test only by trial and error; white box testing tests data domains and internal boundaries better; in grey box testing, data domains and internal boundaries can be tested, and overflow, if known.
7. Algorithm testing: Black box testing is not suited for algorithm testing; white box testing is suited for algorithm testing (suited for all); grey box testing is not suited for algorithm testing.

Feasibility Study
All projects are feasible given unlimited resources and infinite time! Unfortunately, the development of a computer-based system or product is more likely to be plagued by a scarcity of resources and difficult (if not downright unrealistic) delivery dates. It is both necessary and prudent to evaluate the feasibility of a project at the earliest possible time. Months or years of effort, thousands or millions of dollars, and untold professional embarrassment can be averted if an ill-conceived system is recognized early in the definition phase. Feasibility and risk analysis are related in many ways: if project risk is great, the feasibility of producing quality software is reduced.

During product engineering, however, we concentrate our attention on four primary areas of interest. Economic feasibility: an evaluation of development cost weighed against the ultimate income or benefit derived from the developed system or product. Technical feasibility: a study of function, performance and constraints that may affect the ability to achieve an acceptable system. Legal feasibility: a determination of any infringement, violation or liability that could result from development of the system. Alternatives: an evaluation of alternative approaches to the development of the system or product.

A feasibility study is not warranted for systems in which economic justification is obvious, technical risk is low, few legal problems are expected and no reasonable alternative exists. However, if any of these conditions fails, a study of that area should be conducted. Economic justification is generally the bottom-line consideration for most systems (notable exceptions sometimes include national defense systems, systems mandated by law, and high-technology applications such as the space program). Economic justification includes a broad range of concerns, including cost-benefit analysis, long-term corporate income strategies, impact on other profit centers or products, cost of resources needed for development, and potential market growth.

Technical feasibility is frequently the most difficult area to assess at this stage of the product engineering process. Because objectives, functions and performance are somewhat hazy, anything seems possible if the right assumptions are made. It is essential that the process of analysis and definition be conducted in parallel with an assessment of technical feasibility; in this way concrete specifications may be judged as they are determined. The considerations normally associated with technical feasibility include: Development risk: can the system element be designed so that the necessary function and performance are achieved within the constraints uncovered during analysis? Resource availability: is skilled staff available to develop the system element in question? Are other necessary resources (hardware and software) available to build the system? Technology: has the relevant technology progressed to a state that will support the system?

Developers of computer-based systems are optimists by nature. However, during an evaluation of technical feasibility a cynical, if not pessimistic, attitude should prevail: misjudgment at this stage can be disastrous. Legal feasibility encompasses a broad range of concerns that include contracts, liability, infringement and myriad other traps frequently unknown to technical staff. A discussion of legal issues and software is beyond the scope of this book; the interested reader should see [SCO89]. The degree to which alternatives are considered is often limited by cost and time constraints; however, a legitimate but unsponsored variation should not be buried. The feasibility study may be documented as a separate report to upper management and included as an appendix to the system specification. Although the format of a feasibility study may vary, the outline provided in Figure 14.1 covers most important topics. The feasibility study is reviewed first by project management (to assess content and reliability) and then by upper management (to assess project status). The study should result in a go/no-go decision. It should be noted that other go/no-go decisions will be made during the planning, specification, and development steps of both hardware and software engineering.
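
As a simple illustration of the economic justification discussed above, the sketch below weighs an estimated development cost against expected yearly benefits to find a payback period. All figures and names are invented; a real study would use cost-benefit analysis across many more factors.

    # Hypothetical cost-benefit sketch for economic feasibility (invented figures).
    development_cost = 120_000.0
    annual_benefit = 45_000.0        # expected yearly income or savings from the system
    annual_running_cost = 10_000.0   # expected yearly cost of operating the system

    net_annual_benefit = annual_benefit - annual_running_cost
    payback_years = development_cost / net_annual_benefit

    print("Payback period:", round(payback_years, 1), "years")
    # A payback period longer than the system's expected life would argue against feasibility.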

Requirements Engineering

Requirements engineering helps software engineers understand the problem they are to solve. It involves activities that lead to understanding the business context, what the customer wants, how end-users will interact with the software, and what the business impact will be. Requirements engineering starts with the problem definition: the customer statement of work (also known as the customer statement of requirements). This is an informal description of what the customers think they need a software system to do for them. The problem could be identified by management personnel, through market research, by ingenious observation, or by some other means. The statement of work captures the perceived needs and, because it is opinion-based, it usually evolves over time, with changing market conditions or a better understanding of the problem. Defining the requirements for the system-to-be includes both fact-finding about how the problem is solved in current practice and envisioning how the planned system might work. The final outcome of requirements engineering is a requirements specification document. The key task of requirements engineering is formulating a well-defined problem to solve. A well-defined problem includes: a set of criteria (requirements) according to which proposed solutions either definitely solve the problem or fail to solve it, and a description of the resources and components at disposal to solve the problem.

Requirements engineering involves different stakeholders in defining the problem and specifying the solution. A stakeholder is an individual, team, or organization with interests in, or concerns related to, the system-to-be. Generally, the system-to-be has several types of stakeholders: customers, end users, business analysts, systems architects and developers, testing and quality assurance engineers, project managers, the future maintenance organization, owners of other systems that will interact with the system-to-be, etc. The stakeholders all have a stake, but the stakes may differ. End users will be interested in the requested functionality. Architects and developers will be interested in how to effectively implement this functionality. Customers will be interested in costs and timelines. Often compromises and tradeoffs need to be made to satisfy different stakeholders.

Although different methodologies provide different techniques for requirements engineering, all of them follow the same requirements process: requirements gathering, requirements analysis and requirements specification.

The requirements gathering process starts with the customer's requirements or with surveying the potential market, and ends with a specification document that details how the system-to-be will behave. This is simply a logical ordering of requirements engineering activities, regardless of the methodology used; of course, the logical order does not imply that each step must be perfectly completed before the next is taken. Requirements gathering (also known as requirements elicitation) helps the developer understand the business context. The customer needs to define what is required: what is to be accomplished, how the system will fit into the needs of the business, and how the system will be used on a day-to-day basis. This turns out to be very hard to achieve, as discussed in Section 2.2.2: the statement of work is rarely precise and complete enough for the development team to start working on the software product. Requirements analysis involves refining and reasoning about the requirements received from the customer during requirements gathering. Analysis is driven by the creation and elaboration of user scenarios that describe how the end-user will interact with the system. Negotiation with the customer will be needed to determine the priorities, what is essential, and what is realistic. A popular tool is use cases (Section 2.4). It is important to ensure that the developers' understanding of the problem coincides with the customer's understanding of the problem. Requirements specification represents the problem statement in a semiformal or formal manner to ensure clarity, consistency, and completeness. It describes the function and quality of the software-to-be and the constraints that will govern its development. A specification can be a written document, a set of graphical models, a formal mathematical model, a collection of usage scenarios (or use cases), a prototype, or any combination of these. The developers could use UML or another symbolic language for this purpose.

The user interface


User interfaces should be designed to match the skills, experience and expectations of their anticipated users. System users often judge a system by its interface rather than its functionality. A poorly designed interface can cause a user to make catastrophic errors. Poor user interface design is the reason why so many software systems are never used.

Graphical user interfaces


Most users of business systems interact with these systems through graphical interfaces although, in some cases, legacy text-based interfaces are still used.

GUI characteristics
1. Windows: Multiple windows allow different information to be displayed simultaneously on the user's screen. 2. Icons: Icons represent different types of information. On some systems, icons represent files; on others, they represent processes. 3. Menus: Commands are selected from a menu rather than typed in a command language. 4. Pointing: A pointing device such as a mouse is used for selecting choices from a menu or indicating items of interest in a window. 5. Graphics: Graphical elements can be mixed with text on the same display.

GUI advantages
1. They are easy to learn and use. Users without experience can learn to use the system quickly.

2. The user may switch quickly from one task to another and can interact with several different applications. Information remains visible in its own window when attention is switched.

3. Fast, full-screen interaction is possible, with immediate access to anywhere on the screen.

User interface design process

UI design principles
UI design must take account of the needs, experience and capabilities of the system users. Designers should be aware of people's physical and mental limitations (e.g. limited short-term memory) and should recognise that people make mistakes. UI design principles underlie interface designs, although not all principles are applicable to all designs.

User familiarity: The interface should use terms and concepts which are drawn from the experience of the people who will make most use of the system. The interface should be based on user-oriented terms and concepts rather than computer concepts. For example, an office system should use concepts such as letters, documents, folders etc. rather than directories, file identifiers, etc.

Consistency: The interface should be consistent in that, wherever possible, comparable operations should be activated in the same way. The system should display an appropriate level of consistency. Commands and menus should have the same format, command punctuation should be similar, etc.

Minimal surprise: Users should never be surprised by the behaviour of a system. If a command operates in a known way, the user should be able to predict the operation of comparable commands.

Recoverability: The interface should include mechanisms to allow users to recover from errors. The system should provide some resilience to user errors and allow the user to recover from errors. This might include an undo facility, confirmation of destructive actions, 'soft' deletes, etc.
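
A minimal, hypothetical sketch of one such recoverability mechanism is shown below: an undo stack that snapshots the state before each change so the user can reverse the most recent action. The names (apply_action, undo, history) are invented for illustration.

    # Hypothetical undo-stack sketch supporting recoverability.
    history = []   # snapshots of the document before each change

    def apply_action(state, field, value):
        history.append(dict(state))    # remember the state so it can be restored
        state[field] = value
        return state

    def undo(state):
        return history.pop() if history else state

    doc = {}
    doc = apply_action(doc, "title", "Draft report")
    doc = apply_action(doc, "title", "Final report")
    doc = undo(doc)    # the destructive change is reversed
    print(doc)         # back to {'title': 'Draft report'}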

User guidance: The interface should provide meaningful feedback when errors occur and provide context-sensitive user help facilities. Some user guidance such as help systems, on-line manuals, etc. should be supplied.

User diversity: The interface should provide appropriate interaction facilities for different types of system user. Interaction facilities for different types of user should be supported. For example, some users have seeing difficulties and so larger text should be available.

Menu systems
Users make a selection from a list of possibilities presented to them by the system. The selection may be made by pointing and clicking with a mouse, using cursor keys or by typing the name of the selection. It may make use of simple-to-use terminals such as touch screens.
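
The fragment below is an illustrative sketch of such a menu, not a real interface toolkit: valid commands are listed and numbered, the user's choice is validated, and invalid selections are trapped. The names and the fixed choice are invented to keep the example self-contained.

    # Hypothetical menu-system sketch: select a command from a numbered list.
    def run_menu(options, read_choice):
        # Present the valid commands; the user never has to remember or type a name.
        for number, label in enumerate(options, start=1):
            print(number, "-", label)
        choice = read_choice()
        if choice not in range(1, len(options) + 1):
            return "Invalid selection"          # user error trapped by the interface
        return options[choice - 1]

    # read_choice would normally call input(); a fixed value keeps the sketch self-contained.
    selected = run_menu(["New booking", "Cancel booking", "Exit"], lambda: 2)
    print("Selected:", selected)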

Advantages of menu systems


Users need not remember command names as they are always presented with a list of valid commands. Typing effort is minimal. User errors are trapped by the interface. Context-dependent help can be provided. The user's context is indicated by the current menu selection.

Problems with menu systems


Actions which involve logical conjunction (and) or disjunction (or) are awkward to represent. Menu systems are best suited to presenting a small number of choices; if there are many choices, some menu structuring facility must be used. Experienced users find menus slower than command languages.
