Online Shopping Project Synopsis


Your College Name

Project Synopsis
On

SUBMITTED FOR THE PARTIAL FULFILMENT OF THE REQUIREMENT FOR THE DEGREE
OF

Type Course Name Here

SUPERVISED BY
Faculty Name Here
(Faculty)

Submitted by:

Your Name Here


Online Shoppe

Chapter 1
Abstract
R-World is a supermarket chain having retail outlets in important residential colonies in metro locations. R-World stocks items ranging from provisions, toiletries, cosmetics, and beverages to wines & spirits, frozen foods, and vegetables. Other categories of items will be added in future. The chain is relatively new compared to other, more established companies. R-World hopes to make a significant difference by providing value-added services such as door delivery, volume discounting, and monthly credit schemes. Its business philosophy is to treat each individual customer as an account and to provide all facilities by which the account would repeat-purchase at R-World locations. Regular customers are provided with the R-World card, which designates the customer as an account and thereby makes all value-added services available to such customers.

Operational Structure

 R-World will start its operations in Mangalore city. It will commission 8 retail
outlets in the important residential localities of the city.

 2 stock depots, East and West, will service R-World stock distribution. Each stock depot will service 4 locations. Inter-depot stock transfers will also happen to facilitate a balanced stock position.

 Within each outlet, two kinds of stores are maintained: main stores and shelf inventory. Main stores are the outlet's stores from where items are moved to the shelf. Shelf inventory reflects items in stock on the various shelves in the outlet. Therefore, the total stock of an item is the main-stores stock plus the shelf-inventory stock.

 Minimum total stock (MTS) positions are defined for each item. Re-order levels (ROL) are also defined for each item. Whenever stock falls to the re-order level, stock indents are placed with the nearest stock depot. Stock depots dispatch items twice every day: morning and afternoon.

 Minimum shelf quantities (MSQ) are maintained for each item within the outlet. Whenever the shelf stock level of a specific item falls to the MSQ, the outlet manager instructs replenishment of stock to the shelf.

 The Dispatch Manager compiles door-delivery orders three times during the day. The cut-off time for packing for the 1.00 pm delivery is 10.30 am. At 10.30 am, a customer-wise packing list is generated, together with an item-wise total-quantity list. Packers assemble items into bags, tick items off the list, and present the list and the consignment to the Dispatch Clerk for billing. Two copies of the bill and the consignment list are stapled to each consignment box. Boxes are loaded into the delivery trucks after the driver verifies the consignment list against the delivery instructions. The credit eligibility of each customer is indicated in the bill. Customers may pay by cash, cheque, or card, or opt for credit.

 Delivery trucks are loaded with standard-sized consignment boxes, which come in three sizes: small, medium, and big. The total space of a delivery truck is fixed, and therefore it should be possible to calculate space availability in a truck based on pending door-delivery order volumes. Unless specified, customer orders are loaded on a first-come-first-served basis on the delivery truck. Currently there is only one truck; this will be increased as order volumes increase.

 After completion of deliveries, the deliveryman settles collections with the Dispatch Manager.
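The stock rules above (MTS, ROL, and MSQ) amount to two simple threshold checks. The sketch below is illustrative only; the function names and example thresholds are assumptions, not part of the actual system.

```python
# Sketch of the stock-level rules described above (MTS, ROL, MSQ).
# Names and thresholds are illustrative, not from the actual system.

def needs_depot_indent(main_stock: int, shelf_stock: int, rol: int) -> bool:
    """An indent is placed with the nearest depot when total stock
    (main stores + shelf inventory) falls to the re-order level."""
    return main_stock + shelf_stock <= rol

def needs_shelf_refill(shelf_stock: int, msq: int) -> bool:
    """The outlet manager replenishes the shelf when shelf stock
    falls to the minimum shelf quantity."""
    return shelf_stock <= msq

# Example: an item with ROL 50 and MSQ 10
print(needs_depot_indent(main_stock=30, shelf_stock=15, rol=50))  # True
print(needs_shelf_refill(shelf_stock=12, msq=10))                 # False
```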

Customers

 Customers are divided into two categories: walk-in customers and account customers. Account customers are provided with the R-World card, and they may use that card while billing at the counter or, alternatively, quote the card number while ordering stock over the Net.

 Account customers can order stock over the Web. The minimum value for door-delivery orders is Rs. 250; this amount may be revised later. Customers need to be given a choice of item categories to choose from, item lists with prices, and the status of item availability in stock.

 Based on the delivery schedule and the status of space availability within each delivery, the earliest delivery date/time will be intimated to the customer. Account customers may also walk in to the store, buy items, and ask for door delivery.
 Customers may return items to the store, and based on the situation, the store may accept items back. Either money will be returned, or customers may buy another item and settle the difference (either way).

 Volume discounts are defined for each item. Discount rates are defined for quantity slabs. There may be any number of quantity slabs for an individual item: some may have two, and others may have up to 5 quantity slabs. This can change in the future.
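The per-item slab scheme above can be sketched as a lookup over an ordered list of (minimum quantity, rate) pairs. The slab boundaries and rates below are invented for the example; nothing here is from the actual system.

```python
# Illustrative sketch of per-item volume-discount slabs.
# Slab boundaries and rates are made up for the example.

def discount_rate(slabs, quantity):
    """slabs: list of (min_quantity, rate) pairs sorted ascending.
    Returns the rate of the highest slab the quantity reaches,
    or 0.0 if the quantity reaches no slab."""
    rate = 0.0
    for min_qty, slab_rate in slabs:
        if quantity >= min_qty:
            rate = slab_rate
    return rate

rice_slabs = [(10, 0.02), (25, 0.05), (50, 0.08)]  # an item with 3 slabs
print(discount_rate(rice_slabs, 30))  # 0.05
```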

The project has been planned with a distributed architecture and centralized storage of the database. The data-storage application has been planned using the constructs of MS SQL Server 2000, and all the user interfaces have been designed using ASP.NET technologies. Database connectivity is planned using the "SQL Connection" methodology. The standards of security and data protection have been given due importance. The application takes care of the different modules and their associated reports, which are produced as per the applicable strategies and standards put forward by the administrative staff.
Chapter 2
Project Synopsis
The entire project has been developed keeping distributed client/server computing technology in mind. The specifications have been normalized up to 3NF to eliminate all the anomalies that may arise due to the database transactions executed by the general users and the organizational administration. The user interfaces are browser-based to give distributed accessibility to the overall system. MS SQL Server 2000 has been selected as the internal database. The basic constructs of tablespaces, clusters, and indexes have been exploited to provide higher consistency and reliability for the data storage. MS SQL Server 2000 was the choice as it provides constructs of high-level reliability and security. The front end was developed using ASP.NET technologies. At all appropriate levels, care was taken to check that the system maintains data consistency with proper business rules and validations. Database connectivity was planned using the "SQL Connection" technology provided by Microsoft Corporation. Authentication and authorization were cross-checked at all the relevant stages. User-level accessibility has been restricted to two zones, namely the administrative zone and the normal user zone.

About the Organization

R-World is a supermarket chain having retail outlets in important residential colonies in metro locations. R-World stocks items ranging from provisions, toiletries, cosmetics, and beverages to wines & spirits, frozen foods, and vegetables. Other categories of items will be added in future. The chain is relatively new compared to other, more established companies. R-World hopes to make a significant difference by providing value-added services such as door delivery, volume discounting, and monthly credit schemes. Its business philosophy is to treat each individual customer as an account and to provide all facilities by which the account would repeat-purchase at R-World locations. Regular customers are provided with the R-World card, which designates the customer as an account and thereby makes all value-added services available to such customers.
Manual Process

Stages at a glance:

 The customer physically visits the retail outlet and selects the required products of his choice.

 The salesperson procures the products as per demand.

 The bill clerk raises the bill.

 The customer cross-verifies the items against the bill.

 The customer leaves without reference.

Why the New System

 The system at any point of time can provide the information related to all the existing retail outlets and their operations.

 The system at any point of time can provide the list of items and their stock availability.

 The system at any point of time can help the customers in raising their orders.

 The system can specifically report the status of the delivery process of the products.
Chapter 3
Feasibility Report
Technical Descriptions
Databases: The database has been identified as comprising 21 entities. The major part of the database is categorized into administrative components and general user components. The administrative components are useful in managing the actual master data that is necessary to maintain consistency across the system. The administrative databases are used purely for internal organizational needs and necessities, only at the upper and middle management levels.

The user components are designed to handle the transactional states that arise in the system whenever a general employee within the organization visits the user interface to enquire for required data. The normal user interfaces are associated with the environment mostly for the sake of report standardization. The user components are designed to accept parameterized information from the users as per the system's necessity.

GUIs

For flexibility of use, the interface has been developed with a graphical concept in mind, accessed through a browser interface. The GUIs at the top level have been categorized as:

1. Administrative user interface

2. Operational or generic user interface

The administrative user interface concentrates on the consistent information that is practically part of the organizational activities and which needs proper authentication for data collection. These interfaces help the administrators with all the transactional states like data insertion, data deletion, and data updation, along with extensive data search capabilities.

The operational or generic user interface helps the users of the system in transactions through the existing data and required services. The operational user interface also helps ordinary users in managing their own information in a customized manner as per the assisted flexibilities.

Number of Modules

After careful analysis, the system has been identified as comprising the following modules:

1. Retail outlet operations module: This module maintains the information related to the existing retail outlets and their subjective standards of operation. The module not only maintains the outlet information, but is also associated with the products available at these outlets and the staff in charge who are designated at these outlets.

2. Outlets inventory module: The outlets inventory module manages the entire information related to the stock of items maintained at the outlets and the warehouses. The module is dynamic in nature, integrating itself with the sales and purchases that take place in the system.

3. Orders and delivery information module: This module takes care of the information related to the orders raised by the customers and the door delivery of products demanded by the customers as per their standard of requirement.
Chapter 4
Analysis Report
SRS Document:
Intended Audience And Reading Suggestions

The document is prepared keeping in view the academic constructs of my Bachelor's/Master's degree from the university, as partial fulfilment of my academic purpose. The document specifies the general procedure that has been followed by me while the system was studied and developed. The general document was provided by the industry as a reference guide to understand my responsibilities in developing the system, with respect to the requirements that have been pinpointed to get the exact structure of the system as stated by the actual client.

As stated by my project leader, the actual standards of the specification were derived by conducting a series of interviews and questionnaires. The collected information was organized to form the specification document and then modelled to suit the standards of the system as intended.

Document Conventions:

The overall documentation for this project uses recognized modeling standards at the software-industry level.

 ER modeling, to concentrate on the relational states existing in the system with respect to cardinality.

 Physical design, which states the overall data search for the relational keys when transactions are implemented on the relevant entities.

 Unified Modeling Language concepts, to give a generalized blueprint for the overall system.

 Flow charts at the required stages, where the functionality of the operations needs more attention.
Microsoft SQL Server 7.0 Storage Engine

Introduction

SQL Server™ 7.0 is a scalable, reliable, and easy-to-use product that will provide a solid foundation for application design for the next 20 years.

Storage Engine Design Goals

Database applications can now be deployed widely due to intelligent, automated storage engine operations. Sophisticated yet simplified architecture improves performance, reliability, and scalability.

Feature Description and Benefits

Reliability: Concurrency, scalability, and reliability are improved with simplified data structures and algorithms. Run-time checks of critical data structures make the database much more robust, minimizing the need for consistency checks.

Scalability: The new disk format and storage subsystem provide storage that is scalable from very small to very large databases. Specific changes include:

 Simplified mapping of database objects to files eases management and enables tuning flexibility. Database objects can be mapped to specific disks for load balancing.

 More efficient space management, including increasing page size from 2 KB to 8 KB, 64 KB I/O, variable-length character fields up to 8 KB, and the ability to delete columns from existing tables without an unload/reload of the data.

 Redesigned utilities support terabyte-sized databases efficiently.

Ease of Use: DBA intervention is eliminated for standard operations, enabling branch office automation and desktop and mobile database applications. Many complex server operations are automated.
Storage Engine Features

Feature Description and Benefits

Data Type Sizes: Maximum size of character and binary data types is dramatically increased.

Databases and Files: Database creation is simplified, now residing on operating system files instead of logical devices.

Dynamic Memory: Improves performance by optimizing memory allocation and usage. Simplified design minimizes contention with other resource managers.

Dynamic Row-Level Locking: Full row-level locking is implemented for both data rows and index entries. Dynamic locking automatically chooses the optimal level of lock (row, page, multiple page, table) for all database operations. This feature provides improved concurrency with no tuning. The database also supports the use of "hints" to force a particular level of locking.

Dynamic Space Management: A database can automatically grow and shrink within configurable limits, minimizing the need for DBA intervention. It is no longer necessary to preallocate space and manage data structures.

Evolution: The new architecture is designed for extensibility, with a foundation for object-relational features.

Large Memory Support: SQL Server 7.0 Enterprise Edition will support memory addressing greater than 4 GB, in conjunction with Windows NT Server 5.0, Alpha processor-based systems, and other techniques.

Unicode: Native Unicode, with ODBC and OLE DB Unicode APIs, improves multilingual support.
Storage Engine Architectural Overview

Overview
The original code was inherited from Sybase and designed for eight-megabyte Unix systems in 1983. The new on-disk formats improve manageability and scalability and allow the server to easily scale from low-end to high-end systems, improving performance and manageability.

Benefits

There are many benefits of the new on-disk layout, including:

 Improved scalability and integration with Windows NT Server

 Better performance with larger I/Os

 Stable record locators allow more indexes

 More indexes speed decision support queries

 Simpler data structures provide better quality

 Greater extensibility, so that subsequent releases will have a cleaner development process and new features are faster to implement

Storage Engine Subsystems

Most relational database products are divided into relational engine and storage engine components. This document focuses on the storage engine, which has a variety of subsystems:

 Mechanisms that store data in files and find pages, files, and extents.

 Record management for accessing the records on pages.

 Access methods using b-trees that are used to quickly find records using record identifiers.

 Concurrency control for locking, used to implement the physical lock manager and locking protocols for page- or record-level locking.

 I/O buffer management.

 Logging and recovery.

 Utilities for backup and restore, consistency checking, and bulk data loading.

Databases, Files, and File groups

Overview

SQL Server 7.0 is much more integrated with Windows NT Server than any of its predecessors. Databases are now stored directly in Windows NT Server files. SQL Server is being stretched towards both the high end and the low end.

Files
SQL Server 7.0 creates a database using a set of operating system files, with each file used by only one database. Multiple databases can no longer share the same file. There are several important benefits to this simplification. Files can now grow and shrink, and space management is greatly simplified. All data and objects in the database, such as tables, stored procedures, triggers, and views, are stored only within these operating system files:

File Type Description

Primary data file: This file is the starting point of the database. Every database has only one primary data file, and all system tables are always stored in the primary data file.

Secondary data files: These files are optional and can hold all data and objects that are not in the primary data file. Some databases may not have any secondary data files, while others have multiple secondary data files.

Log files: These files hold all of the transaction log information used to recover the database. Every database has at least one log file.

When a database is created, all the files that comprise the database are zeroed out (filled with zeros) to overwrite any existing data left on the disk by previously deleted files. This improves the performance of day-to-day operations.
File groups

A database now consists of one or more data files and one or more
log files. The data files can be grouped together into user-defined
filegroups. Tables and indexes can then be mapped to different
filegroups to control data placement on physical disks. Filegroups
are a convenient unit of administration, greatly improving
flexibility. SQL Server 7.0 will allow you to back up a different
portion of the database each night on a rotating schedule by
choosing which filegroups to back up. Filegroups work well for
sophisticated users who know where they want to place indexes
and tables. SQL Server 7.0 can work quite effectively without
filegroups.

Log files are never a part of a file group. Log space is managed
separately from data space.

Using Files and File groups

Using files and filegroups improves database performance by allowing a database to be created across multiple disks, multiple disk controllers, or redundant array of inexpensive disks (RAID) systems. For example, if your computer has four disks, you can create a database that comprises three data files and one log file, with one file on each disk. As data is accessed, four read/write heads can simultaneously access the data in parallel, which speeds up database operations. Additionally, files and filegroups allow better data placement, because a table can be created in a specific filegroup. This improves performance, because all I/O for a specific table can be directed at a specific disk. For example, a heavily used table can be placed on one file in one filegroup, located on one disk, and the other, less heavily accessed tables in the database can be placed on other files in another filegroup, located on a second disk.

Space Management

There are many improvements in the allocation of space and the management of space within files. The data structures that keep track of page-to-object relationships were redesigned. Instead of linked lists of pages, bitmaps are used, because they are cleaner and simpler and facilitate parallel scans. Each file is now more autonomous; it has more data about itself, within itself. This works well for copying or mailing database files.
SQL Server now has a much more efficient system for tracking
table space. The changes enable

 Growing and shrinking files

 Better support for large I/O

 Row space management within a table

 Less expensive extent allocations

SQL Server is very effective at quickly allocating pages to objects and reusing space freed by deleted rows. These operations are internal to the system and use data structures not visible to users, yet are occasionally referenced in SQL Server messages.

File Shrink

The server periodically checks the space usage in each database. If a database is found to have a lot of empty space, the size of the files in the database will be reduced. Both data and log files can be shrunk. This activity occurs in the background and does not affect any user activity within the database. You can also use SQL Server Enterprise Manager to shrink files individually or as a group, or use the DBCC commands SHRINKDATABASE and SHRINKFILE.

SQL Server shrinks files by moving rows from pages at the end of
the file to pages allocated earlier in the file. In an index, nodes
are moved from the end of the file to pages at the beginning of the
file. In both cases pages are freed at the end of files and then
returned to the file system. Databases can only be shrunk to the
point that no free space is remaining; there is no data
compression.
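The shrink pass described above can be pictured as a repacking step: rows on pages at the end of a file move into free slots on earlier pages, and the emptied tail pages are released. The sketch below is a deliberately simplified model (pages as Python lists, a fixed slot count), not SQL Server's actual algorithm.

```python
# Conceptual sketch of the shrink pass: rows from pages at the end of a
# file are moved into free slots on earlier pages, and the emptied tail
# pages are returned to the file system. Pages are simplified to lists.

def shrink_file(pages, slots_per_page):
    """Repack all surviving rows from the front; tail pages are freed."""
    rows = [row for page in pages for row in page]   # rows in page order
    packed = []
    for i in range(0, len(rows), slots_per_page):
        packed.append(rows[i:i + slots_per_page])
    return packed

pages = [["r1", "r2"], [], ["r3"]]  # the last page's row moves forward
print(shrink_file(pages, slots_per_page=2))  # [['r1', 'r2'], ['r3']]
```

Note that, as the text says, this only reclaims free space; no data is compressed.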

File Grow

Automated file growth greatly reduces the need for database management and eliminates many problems that occur when logs or databases run out of space. When creating a database, an initial size for the file must be given. SQL Server creates the data files based on the size provided by the database creator, and as data is added to the database, these files fill. By default, data files are allowed to grow as much as necessary until disk space is exhausted. Alternatively, data files can be configured to grow automatically only to a predefined maximum size. This prevents disk drives from running out of space.
Allowing files to grow automatically can cause fragmentation of
those files if a large number of files share the same disk.
Therefore, it is recommended that files or filegroups be created on
as many different local physical disks as available. Place objects
that compete heavily for space in different filegroups.

Physical Database Architecture

Microsoft SQL Server version 7.0 introduces significant improvements in the way data is stored physically. These changes are largely transparent to general users, but do affect the setup and administration of SQL Server databases.

Pages and Extents


The fundamental unit of data storage in SQL Server is the page. In
SQL Server version 7.0, the size of a page is 8 KB, increased from
2 KB. The start of each page is a 96-byte header used to store
system information, such as the type of page, the amount of free
space on the page, and the object ID of the object owning the
page.

There are seven types of pages in the data files of a SQL Server 7.0 database.

Page Type Contains

Data: Data rows with all data except text, ntext, and image.

Index: Index entries.

Log: Log records recording data changes for use in recovery.

Text/Image: text, ntext, and image data.

Global Allocation Map: Information about allocated extents.

Page Free Space: Information about free space available on pages.

Index Allocation Map: Information about extents used by a table or index.
Torn Page Detection

Torn page detection helps ensure database consistency. In SQL Server 7.0, pages are 8 KB, while Windows NT does I/O in 512-byte segments. This discrepancy makes it possible for a page to be partially written. This could happen if there is a power failure or other problem between the time the first 512-byte segment is written and the completion of the 8 KB of I/O.

There are several ways to deal with this. One way is to use
battery-backed cached I/O devices that guarantee all-or-nothing
I/O. If you have one of these systems, torn page detection is
unnecessary.

In SQL Server 7.0, you can enable torn page detection for a
particular database by turning on a database option.
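The detection idea can be sketched as follows: stamp a small token into every 512-byte sector of a page at write time, and treat mismatched tokens at read time as evidence of a partial write. The token scheme below is purely illustrative and is not SQL Server's actual on-disk format.

```python
# Sketch of torn-page detection: a token is written into every 512-byte
# sector of an 8 KB page; if a later read finds sectors carrying
# different tokens, the page was only partially written. The token
# scheme is illustrative, not SQL Server's on-disk format.

PAGE_SIZE, SECTOR_SIZE = 8192, 512
SECTORS = PAGE_SIZE // SECTOR_SIZE   # 16 sectors per page

def stamp(page_tokens, write_token, sectors_written=SECTORS):
    """Simulate a (possibly interrupted) page write."""
    return [write_token] * sectors_written + page_tokens[sectors_written:]

def is_torn(page_tokens):
    """A consistent page carries one token in all of its sectors."""
    return len(set(page_tokens)) > 1

old = [7] * SECTORS
complete = stamp(old, 8)                        # all 16 sectors rewritten
interrupted = stamp(old, 8, sectors_written=5)  # power failure mid-write
print(is_torn(complete), is_torn(interrupted))  # False True
```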

Locking Enhancements

Row-Level Locking

SQL Server 6.5 introduced a limited version of row locking on inserts. SQL Server 7.0 now supports full row-level locking for both data rows and index entries. Transactions can update individual records without locking entire pages. Many OLTP applications can experience increased concurrency, especially when applications append rows to tables and indexes.

Dynamic Locking

SQL Server 7.0 has a superior locking mechanism that is unique in the database industry. At run time, the storage engine dynamically cooperates with the query processor to choose the lowest-cost locking strategy, based on the characteristics of the schema and query.

Dynamic locking has the following advantages:

 Simplified database administration, because database administrators no longer need to be concerned with adjusting lock escalation thresholds.

 Increased performance, because SQL Server minimizes system overhead by using locks appropriate to the task.

 Application developers can concentrate on development, because SQL Server adjusts locking automatically.

Multigranular locking allows different types of resources to be locked by a transaction. To minimize the cost of locking, SQL Server automatically locks resources at a level appropriate to the task. Locking at a smaller granularity, such as rows, increases concurrency but has a higher overhead, because more locks must be held if many rows are locked. Locking at a larger granularity, such as tables, is expensive in terms of concurrency. However, locking a larger unit of data has a lower overhead, because fewer locks are being maintained.
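The granularity trade-off above can be illustrated with a toy cost model: many row locks cost more to maintain than one table lock, but a table lock blocks all concurrent access. The threshold and decision rule below are invented for illustration; SQL Server's actual cost model is internal and far more sophisticated.

```python
# Toy model of the lock-granularity trade-off described above.
# The escalation threshold and the three-way rule are invented.

def choose_lock_level(estimated_rows, table_rows, escalation_threshold=5000):
    """Pick the cheapest adequate granularity for a statement."""
    if estimated_rows >= table_rows:
        return "table"   # touching everything: one lock is cheapest
    if estimated_rows >= escalation_threshold:
        return "page"    # coarser lock, fewer lock structures held
    return "row"         # finest granularity, best concurrency

print(choose_lock_level(10, 1_000_000))         # row
print(choose_lock_level(20_000, 1_000_000))     # page
print(choose_lock_level(1_000_000, 1_000_000))  # table
```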

Lock Modes

SQL Server locks resources using different lock modes that determine how the resources can be accessed by concurrent transactions.

SQL Server uses several resource lock modes:

Lock mode Description

Shared: Used for operations that do not change or update data (read-only operations), such as a SELECT statement.

Update: Used on resources that can be updated. Prevents a common form of deadlock that occurs when multiple sessions are reading, locking, and then potentially updating resources later.

Exclusive: Used for data-modification operations, such as UPDATE, INSERT, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.

Intent: Used to establish a lock hierarchy.

Schema: Used when an operation dependent on the schema of a table is executing. There are two types of schema locks: schema stability and schema modification.

Table and Index Architecture

Overview

Fundamental changes were made in table organization. The new organization allows the query processor to make use of more nonclustered indexes, greatly improving performance for decision support applications. The query optimizer has a wide set of execution strategies, and many of the optimization limitations of earlier versions of SQL Server have been removed. In particular, SQL Server 7.0 is less sensitive to index-selection issues, resulting in less tuning work.

Table Organization

The data for each table is now stored in a collection of 8-KB data
pages. Each data page has a 96-byte header containing system
information such as the ID of the table that owns the page and
pointers to the next and previous pages for pages linked in a list.
A row-offset table is at the end of the page. Data rows fill the rest
of the page.
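The page layout above allows a back-of-the-envelope capacity estimate: 8,192 bytes minus the 96-byte header, divided among rows plus their row-offset entries. The 2-byte size assumed for each offset entry is an assumption for the sketch, not a figure stated in this document.

```python
# Back-of-the-envelope capacity of an 8 KB data page as described above:
# a 96-byte header, then data rows, with a row-offset table at the end.
# The 2 bytes per offset entry is an assumption for this sketch.

PAGE_SIZE, HEADER = 8192, 96
OFFSET_ENTRY = 2  # assumed bytes per row in the row-offset table

def rows_per_page(row_size: int) -> int:
    usable = PAGE_SIZE - HEADER
    return usable // (row_size + OFFSET_ENTRY)

print(rows_per_page(100))  # roughly 79 rows of 100 bytes per page
```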

SQL Server 7.0 tables use one of two methods to organize their
data pages:

 Clustered tables are tables that have a clustered index. The data rows are stored in order based on the clustered index key. The data pages are linked in a doubly linked list. The index is implemented as a b-tree index structure that supports fast retrieval of the rows based on their clustered index key values.

 Heaps are tables that have no clustered index. There is no particular order to the sequence of the data pages, and the data pages are not linked in a linked list.

Table Indexes

A SQL Server index is a structure associated with a table that speeds retrieval of the rows in the table. An index contains keys built from one or more columns in the table. These keys are stored in a structure that allows SQL Server to quickly and efficiently find the row or rows associated with the key values. This structure is called a b-tree. The two types of SQL Server indexes are clustered and nonclustered indexes.

Clustered Indexes

A clustered index is one in which the order of the values in the index is the same as the order of the data stored in the table.

The clustered index contains a hierarchical tree. When searching for data based on a clustered index value, SQL Server quickly isolates the page with the specified value and then searches the page for the record or records with the specified value. The lowest level, or leaf node, of the index tree is the page that contains the data.

Nonclustered Indexes

A nonclustered index is analogous to an index in a textbook. The data is stored in one place; the index is stored in another, with pointers to the storage location of the indexed items in the data. The lowest level, or leaf node, of a nonclustered index holds the Row Identifier of the index entry, which gives SQL Server the location of the actual data row. The Row Identifier can have one of two forms. If the table has a clustered index, the identifier of the row is the clustered index key. If the table is a heap, the Row Identifier is the actual location of the data row, indicated with a page number and an offset on the page. Therefore, a nonclustered index, in comparison with a clustered index, has an extra level between the index structure and the data itself.

When SQL Server searches for data based on a nonclustered index, it searches the index for the specified value to obtain the location of the rows of data and then retrieves the data from their storage locations. This makes nonclustered indexes the optimal choice for exact-match queries.

Some books contain multiple indexes. Since nonclustered indexes frequently store clustered index keys as their pointers to data rows, it is important to keep clustered index keys as small as possible.

SQL Server supports up to 249 nonclustered indexes on each table. The nonclustered indexes have a b-tree index structure similar to the one in clustered indexes. The difference is that nonclustered indexes have no effect on the order of the data rows. The collection of data pages for a heap is not affected if nonclustered indexes are defined for the table.
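The two Row Identifier forms described above can be sketched with plain dictionaries standing in for B-trees: on a heap the nonclustered leaf holds a physical (page, slot) location, while on a clustered table it holds the clustered key and requires a second lookup. All names and data here are invented for illustration.

```python
# Sketch of the two Row Identifier forms: on a heap, the nonclustered
# leaf points at a physical (page, slot); on a clustered table, it holds
# the clustered key, adding one extra hop. Dicts stand in for B-trees.

heap_pages = {(1, 0): {"id": 10, "name": "ann"},
              (1, 1): {"id": 20, "name": "bob"}}
nc_on_heap = {"bob": (1, 1)}                  # leaf -> physical (page, slot)

clustered = {10: {"id": 10, "name": "ann"},   # data ordered by clustered key
             20: {"id": 20, "name": "bob"}}
nc_on_clustered = {"bob": 20}                 # leaf -> clustered index key

print(heap_pages[nc_on_heap["bob"]])          # direct fetch by RID
print(clustered[nc_on_clustered["bob"]])      # extra hop via clustered key
```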

Data Type Changes

Unicode Data

SQL Server now supports Unicode data types, which makes it
easier to store data in multiple languages within one database by
eliminating the problem of converting characters and installing
multiple code pages. Unicode stores character data using two bytes
for each character rather than one byte. There are 65,536 different
bit patterns in two bytes, so Unicode can use one standard set of
bit patterns to encode each character in all languages, including
languages such as Chinese that have large numbers of characters.
Many programming languages also support Unicode data types.

The new data types that support Unicode are ntext, nchar, and
nvarchar. They are the same as text, char, and varchar, except for
the wider range of characters supported and the increased storage
space used.
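The two-bytes-per-character point can be checked directly. A minimal Python sketch (the sample strings are hypothetical): basic-plane characters each occupy two bytes in UTF-16, the encoding family behind the n-prefixed types, versus one byte each in a single-byte encoding.

```python
# Per-character storage cost: 1 byte in ASCII vs. 2 bytes in UTF-16.
ascii_text = "hello"
chinese_text = "你好"  # two characters

ascii_size = len(ascii_text.encode("ascii"))        # 1 byte per character
utf16_size = len(ascii_text.encode("utf-16-le"))    # 2 bytes per character
chinese_size = len(chinese_text.encode("utf-16-le"))

print(ascii_size, utf16_size, chinese_size)   # 5 10 4
```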

Improved Data Storage

Data storage flexibility is greatly improved with the expansion of
the maximum limits for char, varchar, binary, and varbinary data
types to 8,000 bytes, increased from 255 bytes. It is no longer
necessary to use text and image data types for data storage for
anything but very large data values. The Transact-SQL string
functions also support these very long char and varchar values,
and the SUBSTRING function can be used to process text and image
columns. The handling of Nulls and empty strings has been
improved. A new uniqueidentifier data type is provided for storing
a globally unique identifier (GUID).
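As a sketch of what such a GUID value looks like (generated here with Python's uuid module, not SQL Server): a 128-bit value, conventionally written as 32 hexadecimal digits in 8-4-4-4-12 groups.

```python
import uuid

guid = uuid.uuid4()   # a random 128-bit globally unique identifier
text = str(guid)      # canonical 8-4-4-4-12 hexadecimal form

print(text)
print(len(guid.bytes), "bytes,", len(text), "characters")
```

Because the value space is 2^128, independently generated GUIDs are effectively guaranteed not to collide, which is what makes them suitable as globally unique keys.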

Normalization

Normalization is the process of analyzing the inherent, or
"normal," relationships between the various elements of a
database. Data is normalized into successive normal forms.

First normal form: data is in first normal form when it is divided
into separate tables so that the data in each table is of a similar
type and every column holds a single, atomic value, and each table
is given a primary key, a unique label or identifier for each row.
This eliminates repeating groups of data.

Second normal form: involves taking out data that is dependent on
only part of a composite key.

Third normal form: involves removing transitive dependencies,
that is, getting rid of anything in the tables that does not depend
solely on the primary key. Thus, through normalization, effective
data storage can be achieved, eliminating redundancies and
repeating groups.
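A minimal sketch of the payoff (using SQLite; the customers/orders tables and values are hypothetical): after normalization, each customer fact is stored exactly once, and a join reassembles the combined view without redundancy.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Normalized design: customer details live in exactly one row.
    CREATE TABLE customers (cust_id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        cust_id  INTEGER REFERENCES customers(cust_id),
        item     TEXT
    );
    INSERT INTO customers VALUES (1, 'Asha', 'Mangalore');
    INSERT INTO orders (cust_id, item) VALUES (1, 'rice'), (1, 'tea');
""")

# A join reconstructs the denormalized view on demand; the customer's
# city is not repeated on every order row.
rows = conn.execute("""
    SELECT o.item, c.city
    FROM orders o JOIN customers c USING (cust_id)
    ORDER BY o.order_id
""").fetchall()
print(rows)   # [('rice', 'Mangalore'), ('tea', 'Mangalore')]
```

If the city ever changes, one UPDATE on the single customers row corrects every order's view of it, which is exactly the update anomaly that normalization removes.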

SQL

The Structured Query Language (SQL) is used to manipulate data in
the Oracle database. It evolved from an earlier language called
SEQUEL.

SQL*Plus: the user-friendly interface

SQL*Plus is a superset of standard SQL. It conforms to the
standards of an SQL-compliant language and adds some
Oracle-specific extensions, which is the source of the name: SQL,
plus extras. SQL*Plus was originally called UFI (User-Friendly
Interface). The Oracle server understands only statements worded
in SQL, so other front-end tools also interact with the Oracle
database using SQL statements. Oracle's implementation of SQL
through SQL*Plus complies with the ANSI (American National
Standards Institute) and ISO (International Organization for
Standardization) standards. Almost all Oracle tools support
identical SQL syntax.

Data can be manipulated using the Data Manipulation Language
(DML). The DML statements provided by SQL are SELECT, INSERT,
UPDATE, and DELETE. SQL*Plus 3.3 can be accessed only by giving a
valid username and password; this is one of the security features
imposed by Oracle to restrict unauthorized data access. SQL also
provides commands for creating new users, granting privileges,
and so on.

All these features make SQL*Plus a powerful data access tool,
especially for Oracle products.
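The four DML statements can be sketched as follows (using SQLite in place of Oracle; the accounts table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT, balance REAL)")

# INSERT adds a row; UPDATE modifies it; SELECT reads it; DELETE removes it.
conn.execute("INSERT INTO accounts (owner, balance) VALUES ('account-101', 500.0)")
conn.execute("UPDATE accounts SET balance = balance - 120.0 WHERE id = 1")
balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
conn.execute("DELETE FROM accounts WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]

print(balance, remaining)   # 380.0 0
```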
Client Server Technologies
MS.NET

Overview of the .NET Framework

The .NET Framework is a new computing platform that simplifies
application development in the highly distributed environment of
the Internet. The .NET Framework is designed to fulfill the
following objectives:

 To provide a consistent object-oriented programming
environment whether object code is stored and executed
locally, executed locally but Internet-distributed, or
executed remotely.

 To provide a code-execution environment that minimizes
software deployment and versioning conflicts.

 To provide a code-execution environment that guarantees
safe execution of code, including code created by an
unknown or semi-trusted third party.

 To provide a code-execution environment that eliminates
the performance problems of scripted or interpreted
environments.

 To make the developer experience consistent across widely
varying types of applications, such as Windows-based
applications and Web-based applications.

 To build all communication on industry standards to ensure
that code based on the .NET Framework can integrate with
any other code.

The .NET Framework has two main components: the common
language runtime and the .NET Framework class library. The
common language runtime is the foundation of the .NET
Framework. You can think of the runtime as an agent that manages
code at execution time, providing core services such as memory
management, thread management, and remoting, while also
enforcing strict type safety and other forms of code accuracy that
ensure security and robustness. In fact, the concept of code
management is a fundamental principle of the runtime. Code that
targets the runtime is known as managed code, while code that
does not target the runtime is known as unmanaged code. The
class library, the other main component of the .NET Framework, is
a comprehensive, object-oriented collection of reusable types that
you can use to develop applications ranging from traditional
command-line or graphical user interface (GUI) applications to
applications based on the latest innovations provided by ASP.NET,
such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components
that load the common language runtime into their processes and
initiate the execution of managed code, thereby creating a
software environment that can exploit both managed and
unmanaged features. The .NET Framework not only provides
several runtime hosts, but also supports the development of
third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable,
server-side environment for managed code. ASP.NET works directly
with the runtime to enable Web Forms applications and XML Web
services, both of which are discussed later in this topic.

Internet Explorer is an example of an unmanaged application that
hosts the runtime (in the form of a MIME type extension). Using
Internet Explorer to host the runtime enables you to embed
managed components or Windows Forms controls in HTML
documents. Hosting the runtime in this way makes managed mobile
code (similar to Microsoft® ActiveX® controls) possible, but with
significant improvements that only managed code can offer, such
as semi-trusted execution and secure isolated file storage.

Features of the Common Language Runtime

The common language runtime manages memory, thread
execution, code execution, code safety verification, compilation,
and other system services. These features are intrinsic to the
managed code that runs on the common language runtime.

With regard to security, managed components are awarded
varying degrees of trust, depending on a number of factors that
include their origin (such as the Internet, enterprise network, or
local computer). This means that a managed component might or
might not be able to perform file-access operations,
registry-access operations, or other sensitive functions, even if
it is being used in the same active application.

The runtime enforces code access security. For example, users
can trust that an executable embedded in a Web page can play an
animation on screen or sing a song, but cannot access their
personal data, file system, or network. The security features of
the runtime thus enable legitimate Internet-deployed software to
be exceptionally feature rich.

The runtime also enforces code robustness by implementing a
strict type- and code-verification infrastructure called the
common type system (CTS). The CTS ensures that all managed
code is self-describing. The various Microsoft and third-party
language compilers generate managed code that conforms to the
CTS. This means that managed code can consume other managed
types and instances, while strictly enforcing type fidelity and
type safety.

In addition, the managed environment of the runtime eliminates
many common software issues. For example, the runtime
automatically handles object layout and manages references to
objects, releasing them when they are no longer being used. This
automatic memory management resolves the two most common
application errors, memory leaks and invalid memory references.

The runtime also accelerates developer productivity. For
example, programmers can write applications in their development
language of choice, yet take full advantage of the runtime, the
class library, and components written in other languages by other
developers. Any compiler vendor who chooses to target the
runtime can do so. Language compilers that target the .NET
Framework make the features of the .NET Framework available to
existing code written in that language, greatly easing the
migration process for existing applications.

While the runtime is designed for the software of the future, it
also supports software of today and yesterday. Interoperability
between managed and unmanaged code enables developers to
continue to use necessary COM components and DLLs.

The runtime is designed to enhance performance. Although the
common language runtime provides many standard runtime
services, managed code is never interpreted. A feature called
just-in-time (JIT) compiling enables all managed code to run in
the native machine language of the system on which it is
executing. Meanwhile, the memory manager removes the
possibilities of fragmented memory and increases memory
locality-of-reference to further increase performance.

Finally, the runtime can be hosted by high-performance,
server-side applications, such as Microsoft® SQL Server™ and
Internet Information Services (IIS). This infrastructure enables
you to use managed code to write your business logic, while still
enjoying the superior performance of the industry's best
enterprise servers that support runtime hosting.

Common Type System

The common type system defines how types are declared, used,
and managed in the runtime, and is also an important part of
the runtime's support for cross-language integration. The
common type system performs the following functions:

Establishes a framework that enables cross-language integration,
type safety, and high-performance code execution.

Provides an object-oriented model that supports the complete
implementation of many programming languages.

Defines rules that languages must follow, which helps ensure
that objects written in different languages can interact with
each other.

In This Section

Common Type System Overview

Describes concepts and defines terms relating to the common
type system.

Type Definitions

Describes user-defined types.

Type Members

Describes events, fields, nested types, methods, and properties,
and concepts such as member overloading, overriding, and
inheritance.

Value Types

Describes built-in and user-defined value types.

Classes

Describes the characteristics of common language runtime classes.

Delegates

Describes the delegate object, which is the managed alternative to
unmanaged function pointers.

Arrays

Describes common language runtime array types.

Interfaces

Describes characteristics of interfaces and the restrictions on
interfaces imposed by the common language runtime.

Pointers

Describes managed pointers, unmanaged pointers, and unmanaged
function pointers.
Related Sections

.NET Framework Class Library

Provides a reference to the classes, interfaces, and value types
included in the Microsoft .NET Framework SDK.

Common Language Runtime

Describes the run-time environment that manages the execution of
code and provides application development services.

Cross-Language Interoperability

The common language runtime provides built-in support for
language interoperability. However, this support does not
guarantee that developers using another programming language
can use code you write. To ensure that you can develop managed
code that can be fully used by developers using any programming
language, a set of language features, and rules for using them,
called the Common Language Specification (CLS) has been defined.
Components that follow these rules and expose only CLS features
are considered CLS-compliant.

This section describes the common language runtime's built-in
support for language interoperability and explains the role that the
CLS plays in enabling guaranteed cross-language interoperability.
CLS features and rules are identified and CLS compliance is
discussed.

In This Section

Language Interoperability

Describes built-in support for cross-language interoperability and
introduces the Common Language Specification.

What is the Common Language Specification?

Explains the need for a set of features common to all languages
and identifies CLS rules and features.

Writing CLS-Compliant Code

Discusses the meaning of CLS compliance for components and
identifies levels of CLS compliance for tools.

Common Type System

Describes how types are declared, used, and managed by the
common language runtime.

Metadata and Self-Describing Components

Explains the common language runtime's mechanism for describing
a type and storing that information with the type itself.

.NET Framework Class Library

The .NET Framework class library is a collection of reusable types
that tightly integrate with the common language runtime. The class
library is object oriented, providing types from which your own
managed code can derive functionality. This not only makes the
.NET Framework types easy to use, but also reduces the time
associated with learning new features of the .NET Framework. In
addition, third-party components can integrate seamlessly with
classes in the .NET Framework.

For example, the .NET Framework collection classes implement a
set of interfaces that you can use to develop your own collection
classes. Your collection classes will blend seamlessly with the
classes in the .NET Framework.

As you would expect from an object-oriented class library, the .NET
Framework types enable you to accomplish a range of common
programming tasks, including tasks such as string management,
data collection, database connectivity, and file access. In addition
to these common tasks, the class library includes types that
support a variety of specialized development scenarios. For
example, you can use the .NET Framework to develop the following
types of applications and services:

 Console applications.

 Scripted or hosted applications.

 Windows GUI applications (Windows Forms).

 ASP.NET applications.

 XML Web services.

 Windows services.

For example, the Windows Forms classes are a comprehensive set
of reusable types that vastly simplify Windows GUI development. If
you write an ASP.NET Web Form application, you can use the Web
Forms classes.

Client Application Development

Client applications are the closest to a traditional style of
application in Windows-based programming. These are the types of
applications that display windows or forms on the desktop,
enabling a user to perform a task. Client applications include
applications such as word processors and spreadsheets, as well as
custom business applications such as data-entry tools, reporting
tools, and so on. Client applications usually employ windows,
menus, buttons, and other GUI elements, and they likely access
local resources such as the file system and peripherals such as
printers.

Another kind of client application is the traditional ActiveX control
(now replaced by the managed Windows Forms control) deployed
over the Internet as a Web page. This application is much like
other client applications: it is executed natively, has access to
local resources, and includes graphical elements.

In the past, developers created such applications using C/C++ in
conjunction with the Microsoft Foundation Classes (MFC) or with a
rapid application development (RAD) environment such as
Microsoft® Visual Basic®. The .NET Framework incorporates
aspects of these existing products into a single, consistent
development environment that drastically simplifies the
development of client applications.

The Windows Forms classes contained in the .NET Framework are
designed to be used for GUI development. You can easily create
command windows, buttons, menus, toolbars, and other screen
elements with the flexibility necessary to accommodate shifting
business needs.

For example, the .NET Framework provides simple properties to
adjust visual attributes associated with forms. In some cases the
underlying operating system does not support changing these
attributes directly, and in these cases the .NET Framework
automatically recreates the forms. This is one of many ways in
which the .NET Framework integrates the developer interface,
making coding simpler and more consistent.

Unlike ActiveX controls, Windows Forms controls have semi-trusted
access to a user's computer. This means that binary or natively
executing code can access some of the resources on the user's
system (such as GUI elements and limited file access) without
being able to access or compromise other resources. Because of
code access security, many applications that once needed to be
installed on a user's system can now be safely deployed through
the Web. Your applications can implement the features of a local
application while being deployed like a Web page.

Managed Execution Process

The managed execution process includes the following steps:

Choosing a compiler

To obtain the benefits provided by the common language runtime,
you must use one or more language compilers that target the
runtime.

Compiling your code to Microsoft Intermediate Language (MSIL)

Compiling translates your source code into MSIL and generates the
required metadata.

Compiling MSIL to native code

At execution time, a just-in-time (JIT) compiler translates the MSIL
into native code. During this compilation, code must pass a
verification process that examines the MSIL and metadata to find
out whether the code can be determined to be type safe.

Executing your code

The common language runtime provides the infrastructure that
enables execution to take place as well as a variety of services
that can be used during execution.

Assemblies Overview

Assemblies are a fundamental part of programming with the .NET
Framework. An assembly performs the following functions:

It contains code that the common language runtime executes.
Microsoft intermediate language (MSIL) code in a portable
executable (PE) file will not be executed if it does not have an
associated assembly manifest. Note that each assembly can have
only one entry point (that is, DllMain, WinMain, or Main).

It forms a security boundary. An assembly is the unit at which
permissions are requested and granted. For more information about
security boundaries as they apply to assemblies, see Assembly
Security Considerations.

It forms a type boundary. Every type's identity includes the name
of the assembly in which it resides. A type called MyType loaded in
the scope of one assembly is not the same as a type called MyType
loaded in the scope of another assembly.

It forms a reference scope boundary. The assembly's manifest
contains assembly metadata that is used for resolving types and
satisfying resource requests. It specifies the types and resources
that are exposed outside the assembly. The manifest also
enumerates other assemblies on which it depends.

It forms a version boundary. The assembly is the smallest
versionable unit in the common language runtime; all types and
resources in the same assembly are versioned as a unit. The
assembly's manifest describes the version dependencies you
specify for any dependent assemblies. For more information about
versioning, see Assembly Versioning.

It forms a deployment unit. When an application starts, only the
assemblies that the application initially calls must be present.
Other assemblies, such as localization resources or assemblies
containing utility classes, can be retrieved on demand. This allows
applications to be kept simple and thin when first downloaded. For
more information about deploying assemblies, see Deploying
Applications.

It is the unit at which side-by-side execution is supported. For
more information about running multiple versions of the same
assembly, see Side-by-Side Execution.

Assemblies can be static or dynamic. Static assemblies can
include .NET Framework types (interfaces and classes), as well as
resources for the assembly (bitmaps, JPEG files, resource files, and
so on). Static assemblies are stored on disk in PE files. You can
also use the .NET Framework to create dynamic assemblies, which
are run directly from memory and are not saved to disk before
execution. You can save dynamic assemblies to disk after they
have executed.

There are several ways to create assemblies. You can use
development tools, such as Visual Studio .NET, that you have used
in the past to create .dll or .exe files. You can use tools provided
in the .NET Framework SDK to create assemblies with modules
created in other development environments. You can also use
common language runtime APIs, such as Reflection.Emit, to create
dynamic assemblies.

Server Application Development

Server-side applications in the managed world are implemented
through runtime hosts. Unmanaged applications host the common
language runtime, which allows your custom managed code to
control the behavior of the server. This model provides you with all
the features of the common language runtime and class library
while gaining the performance and scalability of the host server.

The following illustration shows a basic network schema with
managed code running in different server environments. Servers
such as IIS and SQL Server can perform standard operations while
your application logic executes through the managed code.

Server-side managed code

ASP.NET is the hosting environment that enables developers to use
the .NET Framework to target Web-based applications. However,
ASP.NET is more than just a runtime host; it is a complete
architecture for developing Web sites and Internet-distributed
objects using managed code. Both Web Forms and XML Web
services use IIS and ASP.NET as the publishing mechanism for
applications, and both have a collection of supporting classes in
the .NET Framework.

XML Web services, an important evolution in Web-based
technology, are distributed, server-side application components
similar to common Web sites. However, unlike Web-based
applications, XML Web services components have no UI and are not
targeted for browsers such as Internet Explorer and Netscape
Navigator. Instead, XML Web services consist of reusable software
components designed to be consumed by other applications, such
as traditional client applications, Web-based applications, or even
other XML Web services. As a result, XML Web services technology
is rapidly moving application development and deployment into the
highly distributed environment of the Internet.

If you have used earlier versions of ASP technology, you will
immediately notice the improvements that ASP.NET and Web Forms
offer. For example, you can develop Web Forms pages in any
language that supports the .NET Framework. In addition, your code
no longer needs to share the same file with your HTTP text
(although it can continue to do so if you prefer). Web Forms pages
execute in native machine language because, like any other
managed application, they take full advantage of the runtime. In
contrast, unmanaged ASP pages are always scripted and
interpreted. ASP.NET pages are faster, more functional, and easier
to develop than unmanaged ASP pages because they interact with
the runtime like any managed application.

The .NET Framework also provides a collection of classes and tools
to aid in development and consumption of XML Web services
applications. XML Web services are built on standards such as
SOAP (a remote procedure-call protocol), XML (an extensible data
format), and WSDL (the Web Services Description Language).
The .NET Framework is built on these standards to promote
interoperability with non-Microsoft solutions.

For example, the Web Services Description Language tool included
with the .NET Framework SDK can query an XML Web service
published on the Web, parse its WSDL description, and produce C#
or Visual Basic source code that your application can use to
become a client of the XML Web service. The source code can
create classes derived from classes in the class library that handle
all the underlying communication using SOAP and XML parsing.
Although you can use the class library to consume XML Web
services directly, the Web Services Description Language tool and
the other tools contained in the SDK facilitate your development
efforts with the .NET Framework.

If you develop and publish your own XML Web service, the .NET
Framework provides a set of classes that conform to all the
underlying communication standards, such as SOAP, WSDL, and
XML. Using those classes enables you to focus on the logic of your
service, without concerning yourself with the communications
infrastructure required by distributed software development.

Finally, like Web Forms pages in the managed environment, your
XML Web service will run with the speed of native machine
language using the scalable communication of IIS.

Programming with the .NET Framework

This section describes the programming essentials you need to
build .NET applications, from creating assemblies from your code to
securing your application. Many of the fundamentals covered in
this section are used to create any application using the .NET
Framework. This section provides conceptual information about key
programming concepts, as well as code samples and detailed
explanations.

Accessing Data with ADO.NET

Describes the ADO.NET architecture and how to use the ADO.NET
classes to manage application data and interact with data sources
including Microsoft SQL Server, OLE DB data sources, and XML.

Accessing Objects in Other Application Domains Using .NET Remoting

Describes the various communications methods available in the
.NET Framework for remote communications.

Accessing the Internet

Shows how to use Internet access classes to implement both
Web- and Internet-based applications.

Creating Active Directory Components

Discusses using the Active Directory Services Interfaces.

Creating Scheduled Server Tasks

Discusses how to create events that are raised on recurring
intervals.

Developing Components

Provides an overview of component programming and explains
how those concepts work with the .NET Framework.

Developing World-Ready Applications

Explains the extensive support the .NET Framework provides for
developing international applications.

Discovering Type Information at Runtime

Explains how to get access to type information at run time by
using reflection.

Drawing and Editing Images

Discusses using GDI+ with the .NET Framework.

Emitting Dynamic Assemblies

Describes the set of managed types in the
System.Reflection.Emit namespace.

Employing XML in the .NET Framework

Provides an overview of a comprehensive and integrated set of
classes that work with XML documents and data in the .NET
Framework.

Extending Metadata Using Attributes

Describes how you can use attributes to customize metadata.

Generating and Compiling Source Code Dynamically in Multiple
Languages

Explains the .NET Framework SDK mechanism called the Code
Document Object Model (CodeDOM) that enables the output of
source code in multiple programming languages.

Grouping Data in Collections

Discusses the various collection types available in the .NET
Framework, including stacks, queues, lists, arrays, and structs.

Handling and Raising Events

Provides an overview of the event model in the .NET Framework.

Handling and Throwing Exceptions

Describes error handling provided by the .NET Framework and
the fundamentals of handling exceptions.

Hosting the Common Language Runtime

Explains the concept of a runtime host, which loads the runtime
into a process, creates the application domain within the
process, and loads and executes user code.

Including Asynchronous Calls

Discusses asynchronous programming features in the .NET
Framework.

Interoperating with Unmanaged Code

Describes interoperability services provided by the common
language runtime.

Managing Applications Using WMI

Explains how to create applications using Windows Management
Instrumentation (WMI), which provides a rich set of system
management services built in to the Microsoft® Windows®
operating systems.

Creating Messaging Components

Discusses how to build complex messaging into your
applications.

Processing Transactions

Discusses the .NET Framework support for transactions.

Programming Essentials for Garbage Collection

Discusses how the garbage collector manages memory and how
you can program to use memory more efficiently.

Programming with Application Domains and Assemblies

Describes how to create and work with assemblies and
application domains.

Securing Applications

Describes .NET Framework code access security, role-based
security, security policy, and security tools.

Serializing Objects

Discusses XML serialization.

Creating System Monitoring Components

Discusses how to use performance counters and event logs with
your application.

Threading

Explains the runtime support for threading and how to program
using various synchronization techniques.

Working With Base Types

Discusses formatting and parsing base data types and using
regular expressions to process text.

Working with I/O

Explains how you can perform synchronous and asynchronous
file and data stream access and how to use isolated storage.

Writing Serviced Components

Describes how to configure and register serviced components to
access COM+ services.

Creating ASP.NET Web Applications

Discusses how to create and optimize ASP.NET Web applications.


Creating Windows Forms Applications

Describes how to create Windows Forms and Windows controls
applications.

Building Console Applications

Discusses how to create console-based .NET applications.

Introduction to ASP.NET

ASP.NET is more than the next version of Active Server Pages
(ASP); it is a unified Web development platform that provides
the services necessary for developers to build enterprise-class
Web applications. While ASP.NET is largely syntax compatible
with ASP, it also provides a new programming model and
infrastructure for more secure, scalable, and stable applications.
You can freely augment your existing ASP applications by
incrementally adding ASP.NET functionality to them.

ASP.NET is a compiled, .NET-based environment; you can author
applications in any .NET-compatible language, including Visual
Basic .NET, C#, and JScript .NET. Additionally, the entire .NET
Framework is available to any ASP.NET application. Developers
can easily access the benefits of these technologies, which
include the managed common language runtime environment,
type safety, inheritance, and so on.

ASP.NET has been designed to work seamlessly with WYSIWYG
HTML editors and other programming tools, including Microsoft
Visual Studio .NET. Not only does this make Web development
easier, but it also provides all the benefits that these tools have
to offer, including a GUI that developers can use to drop server
controls onto a Web page and fully integrated debugging
support.

When creating an ASP.NET application, developers can choose
between two features, Web Forms and Web services, or combine
them in any way they see fit. Each is supported by the same
infrastructure that allows you to use authentication schemes,
cache frequently used data, or customize your application's
configuration, to name only a few possibilities.

Web Forms allows you to build powerful forms-based Web pages.


When building these pages, you can use ASP.NET server controls
to create common UI elements, and program them for common
tasks. These controls allow you to rapidly build a Web Form out
of reusable built-in or custom components, simplifying the code
of a page. For more information, see Web Forms Pages. For
information on how to develop ASP.NET server controls, see
Developing ASP.NET Server Controls
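As an illustrative sketch only (the control IDs and the event handler name are assumptions, not part of this project), a minimal Web Forms page built from ASP.NET server controls might look like this:

```aspx
<%@ Page Language="C#" %>
<script runat="server">
    // Hypothetical handler, wired to the button via its OnClick attribute.
    void SubmitButton_Click(object sender, EventArgs e)
    {
        Greeting.Text = "Hello, " + NameBox.Text;
    }
</script>
<html>
<body>
    <form runat="server">
        <asp:TextBox id="NameBox" runat="server" />
        <asp:Button id="SubmitButton" Text="Submit"
            OnClick="SubmitButton_Click" runat="server" />
        <asp:Label id="Greeting" runat="server" />
    </form>
</body>
</html>
```

The server controls preserve their state across postbacks, so the page logic stays in one place instead of being split between client and server scripts.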

An XML Web service provides the means to access server


functionality remotely. Using Web services, businesses can
expose programmatic interfaces to their data or business logic,
which in turn can be obtained and manipulated by client and
server applications. XML Web services enable the exchange of
data in client-server or server-server scenarios, using standards
like HTTP and XML messaging to move data across firewalls. XML
Web services are not tied to a particular component technology
or object-calling convention. As a result, programs written in
any language, using any component model, and running on any
operating system can access XML Web services. For more
information, see XML Web Services and XML Web Service Clients
Created Using ASP.NET
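As a minimal sketch of such a service (the class name, method, and return value are hypothetical placeholders), an .asmx file exposes a method over HTTP and XML like this:

```aspx
<%@ WebService Language="C#" Class="StockService" %>
using System.Web.Services;

// A hypothetical service exposing one method; a real implementation
// would query the inventory database instead of returning a constant.
public class StockService : WebService
{
    [WebMethod]
    public int GetStockOnHand(int itemId)
    {
        return 0;   // placeholder value
    }
}
```

Any client that can send HTTP requests and parse XML can call the method, regardless of its platform or language.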

Each of these models can take full advantage of all ASP.NET


features, as well as the power of the .NET Framework and .NET
Framework common language runtime. These features and how
you can use them are outlined as follows:

If you have ASP development skills, the new ASP.NET


programming model will seem very familiar to you. However, the
ASP.NET object model has changed significantly from ASP,
making it more structured and object-oriented. Unfortunately,
this means that ASP.NET is not fully backward compatible;
almost all existing ASP pages will have to be modified to some
extent in order to run under ASP.NET. In addition, major
changes to Visual Basic .NET mean that existing ASP pages
written with Visual Basic Scripting Edition typically will not port
directly to ASP.NET. In most cases, though, the necessary
changes will involve only a few lines of code. For more
information, see Migrating from ASP to ASP.NET

Accessing databases from ASP.NET applications is an often-used


technique for displaying data to Web site visitors. ASP.NET
makes it easier than ever to access databases for this purpose.
It also allows you to manage the database from your code. For
more information, see Accessing Data with ASP.NET
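A hedged sketch of such data access using ADO.NET follows; the connection string and the table and column names are assumptions for illustration only:

```csharp
using System.Data.SqlClient;

// Illustrative connection string; a real one would come from configuration.
string connStr = "server=(local);database=RWorld;Integrated Security=SSPI";
using (SqlConnection conn = new SqlConnection(connStr))
{
    SqlCommand cmd = new SqlCommand(
        "SELECT Item_Name, Item_Unit_Price FROM Item_Master", conn);
    conn.Open();
    SqlDataReader reader = cmd.ExecuteReader();
    while (reader.Read())
    {
        // Render each row; in practice the data would be bound to a control.
        Response.Write(reader["Item_Name"] + "<br>");
    }
    reader.Close();
}
```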

ASP.NET provides a simple model that enables Web developers


to write logic that runs at the application level. Developers can
write this code in the global.asax text file or in a compiled class
deployed as an assembly. This logic can include application-level
events, but developers can easily extend this model to suit the
needs of their Web application. For more information, see
ASP.NET Applications
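For example, a global.asax file might handle application-level events as follows; the visit counter is an illustrative assumption, while the event method names themselves are fixed by ASP.NET:

```aspx
<script language="C#" runat="server">
    void Application_Start(object sender, EventArgs e)
    {
        // Runs once, when the application starts.
        Application["VisitCount"] = 0;
    }

    void Session_Start(object sender, EventArgs e)
    {
        // Runs for each new user session; Lock serializes concurrent writes.
        Application.Lock();
        Application["VisitCount"] = (int)Application["VisitCount"] + 1;
        Application.UnLock();
    }
</script>
```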

ASP.NET provides easy-to-use application and session-state


facilities that are familiar to ASP developers and are readily
compatible with all other .NET Framework APIs. For more
information, see ASP.NET State Management
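A brief sketch of these facilities (the key names are illustrative assumptions):

```csharp
// Session state: per-user values that survive across requests.
Session["CustomerAccountNo"] = "RW-1024";    // hypothetical key and value

// Application state: values shared by all users of the application.
Application.Lock();                          // serialize concurrent writes
Application["VisitCount"] = 1;
Application.UnLock();
```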

For advanced developers who want to use APIs as powerful as


the ISAPI programming interfaces that were included with
previous versions of ASP, ASP.NET offers the IHttpHandler and
IHttpModule interfaces. Implementing the IHttpHandler interface
gives you a means of interacting with the low-level request and
response services of the IIS Web server and provides
functionality much like ISAPI extensions, but with a simpler
programming model. Implementing the IHttpModule interface
allows you to include custom events that participate in every
request made to your application. For more information, see
HTTP Runtime Support
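A minimal IHttpHandler sketch is shown below; the class name is an assumption, and a real handler must also be registered in web.config under the httpHandlers section before it receives requests:

```csharp
using System.Web;

// A custom handler that intercepts requests at the HTTP-runtime level,
// much like an ISAPI extension but with a simpler programming model.
public class HelloHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Handled at the HTTP-runtime level");
    }

    public bool IsReusable
    {
        get { return true; }   // one instance may serve multiple requests
    }
}
```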

ASP.NET takes advantage of performance enhancements found in


the .NET Framework and common language runtime.
Additionally, it has been designed to offer significant
performance improvements over ASP and other Web development
platforms. All ASP.NET code is compiled, rather than
interpreted, which allows early binding, strong typing, and just-
in-time (JIT) compilation to native code, to name only a few of
its benefits. ASP.NET is also easily factorable, meaning that
developers can remove modules (a session module, for instance)
that are not relevant to the application they are developing.
ASP.NET also provides extensive caching services (both built-in
services and caching APIs). ASP.NET also ships with performance
counters that developers and system administrators can monitor
to test new applications and gather metrics on existing
applications. For more information, see ASP.NET Caching
Features and ASP.NET Optimization
Writing custom debug statements to your Web page can help
immensely in troubleshooting your application's code. However,
it can cause embarrassment if it is not removed. The problem is
that removing the debug statements from your pages when your
application is ready to be ported to a production server can
require significant effort. ASP.NET offers the TraceContext
class, which allows you to write custom debug statements to
your pages as you develop them. They appear only when you
have enabled tracing for a page or entire application. Enabling
tracing also appends details about a request to the page, or, if
you so specify, to a custom trace viewer that is stored in the
root directory of your application. For more information, see
ASP.NET Trace
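For example, a page might emit trace output as follows; the category and message strings are illustrative:

```aspx
<%@ Page Language="C#" Trace="true" %>
<script runat="server">
    void Page_Load(object sender, EventArgs e)
    {
        // Emitted only while tracing is enabled; silent in production.
        Trace.Write("Page_Load", "Loading item list...");
        Trace.Warn("Page_Load", "Stock below reorder level");
    }
</script>
```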

The .NET Framework and ASP.NET provide default authorization


and authentication schemes for Web applications. You can easily
remove, add to, or replace these schemes, depending upon the
needs of your application. For more information, see ASP.NET
Web Application Security
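A web.config sketch enabling Forms authentication follows; the login page and cookie name are assumptions for illustration:

```xml
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms loginUrl="Login.aspx" name="RWorldAuth" />
    </authentication>
    <authorization>
      <!-- "?" denotes anonymous users; they are redirected to the login page. -->
      <deny users="?" />
    </authorization>
  </system.web>
</configuration>
```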

ASP.NET configuration settings are stored in XML-based files,


which are human readable and writable. Each of your
applications can have a distinct configuration file and you can
extend the configuration scheme to suit your requirements. For
more information, see ASP.NET Configuration
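For example, an application-specific setting might be stored in web.config like this (the key and value are illustrative):

```xml
<configuration>
  <appSettings>
    <add key="ConnectionString"
         value="server=(local);database=RWorld;Integrated Security=SSPI" />
  </appSettings>
</configuration>
```

In code, the value can then be read with ConfigurationSettings.AppSettings["ConnectionString"].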

Building Applications

The .NET Framework enables powerful new Web-based


applications and services, including ASP.NET applications,
Windows Forms applications, and Windows services. This section
contains instructive overviews and detailed, step-by-step
procedures for creating applications.
This section also includes information on using the .NET
Framework design-time architecture to support visual design
environments for authoring custom components and controls.

Creating ASP.NET Web Applications

Provides the information you need to develop enterprise-class


Web applications with ASP.NET.

Creating Windows Forms Applications

Introduces Windows Forms, the new object-oriented framework


for developing Windows-based applications.

Windows Service Applications

Describes creating, installing, starting, and stopping Windows


system services.

Building Console Applications

Describes writing applications that use the system console for


input and output.

Enhancing Design-Time Support

Describes the .NET Framework's rich design-time architecture


and support for visual design environments.

Debugging and Profiling Applications

Explains how to test and profile .NET Framework applications.

Deploying Applications
Shows how to use the .NET Framework and the common
language runtime to create self-described, self-contained
applications.

Configuring Applications

Explains how developers and administrators can apply settings


to various types of configuration files.

Debugging and Profiling Applications

To debug a .NET Framework application, the compiler and


runtime environment must be configured to enable a debugger to
attach to the application and to produce both symbols and line
maps, if possible, for the application and its corresponding
Microsoft Intermediate Language (MSIL). Once a managed
application is debugged, it can be profiled to boost performance.
Profiling evaluates and describes the lines of source code that
generate the most frequently executed code, and how much time
it takes to execute them.

The .NET Framework applications are easily debugged using


Visual Studio .NET, which handles many of the configuration
details. If Visual Studio .NET is not installed, you can examine
and improve the performance of .NET Framework applications in
several alternative ways using the following:

System.Diagnostics classes.

Runtime Debugger (Cordbg.exe), which is a command-line


debugger.

Microsoft common language runtime Debugger (DbgCLR.exe),


which is a Windows debugger.
The .NET Framework namespace System.Diagnostics includes
the Trace and Debug classes for tracing execution flow, and the
Process, EventLog, and PerformanceCounter classes for
profiling code. The Cordbg.exe command-line debugger can be
used to debug managed code from the command-line interpreter.
DbgCLR.exe is a debugger with the familiar Windows interface
for debugging managed code. It is located in the
Microsoft.NET/FrameworkSDK/GuiDebug folder.

Enabling JIT-attach Debugging

Shows how to configure the registry to JIT-attach a debug


engine to a .NET Framework application.
Making an Image Easier to Debug

Shows how to turn JIT tracking on and optimization off to make


an assembly easier to debug.

Enabling Profiling

Shows how to set environment variables to tie a .NET Framework


application to a profiler.

Introduction to ASP.NET Server Controls

When you create Web Forms pages, you can use these types of
controls:

HTML server controls   HTML elements exposed to the server so


you can program them. HTML server controls expose an object
model that maps very closely to the HTML elements that they
render.

Web server controls    Controls with more built-in features than


HTML server controls. Web server controls include not only form-
type controls such as buttons and text boxes, but also special-
purpose controls such as a calendar. Web server controls are
more abstract than HTML server controls in that their object
model does not necessarily reflect HTML syntax.

Validation controls   Controls that incorporate logic to allow you


to test a user's input. You attach a validation control to an input
control to test what the user enters for that input control.
Validation controls are provided to allow you to check for a
required field, to test against a specific value or pattern of
characters, to verify that a value lies within a range, and so on.
User controls   Controls that you create as Web Forms pages.
You can embed Web Forms user controls in other Web Forms
pages, which is an easy way to create menus, toolbars, and
other reusable elements.
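As a sketch, a quantity field might be validated like this; the control IDs and the range limits are assumptions:

```aspx
<asp:TextBox id="QtyBox" runat="server" />
<asp:RequiredFieldValidator runat="server"
    ControlToValidate="QtyBox"
    ErrorMessage="Quantity is required" />
<asp:RangeValidator runat="server"
    ControlToValidate="QtyBox" Type="Integer"
    MinimumValue="1" MaximumValue="100"
    ErrorMessage="Quantity must be between 1 and 100" />
```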

You can use all types of controls on the same page. The
following sections provide more detail about ASP.NET server
controls. For more information about validation controls, see
Web Forms Validation; for information about user controls, see
Introduction to Web User Controls.

HTML Server Controls

HTML server controls are HTML elements containing attributes


that make them visible to — and programmable on — the server.
By default, HTML elements on a Web Forms page are not
available to the server; they are treated as opaque text that is
passed through to the browser. However, by converting HTML
elements to HTML server controls, you expose them as elements
you can program on the server.

The object model for HTML server controls maps closely to that
of the corresponding elements. For example, HTML attributes are
exposed in HTML server controls as properties.

Any HTML element on a page can be converted to an HTML


server control. Conversion is a simple process involving just a
few attributes. As a minimum, an HTML element is converted to
a control by the addition of the attribute RUNAT="SERVER". This
alerts the ASP.NET page framework during parsing that it should
create an instance of the control to use during server-side page
processing. If you want to reference the control as a member
within your code, you should also assign an ID attribute to the
control.
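For example, a plain HTML anchor becomes programmable once the RUNAT and ID attributes are added; the link target below is illustrative:

```aspx
<!-- A plain HTML anchor, converted to a server control by runat and id. -->
<a id="HomeLink" runat="server">Home</a>

<script runat="server">
    void Page_Load(object sender, EventArgs e)
    {
        // The element is now programmable on the server;
        // the HRef property maps to the HTML href attribute.
        HomeLink.HRef = "default.aspx";
    }
</script>
```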
The page framework provides predefined HTML server controls
for the HTML elements most commonly used dynamically on a
page: forms, the HTML <INPUT> elements (text box, check box,
Submit button, and so on), list box (<SELECT>), table, image,
and so on. These predefined HTML server controls share the
basic properties of the generic control, and in addition, each
control typically provides its own set of properties and its own
event.

HTML server controls offer the following features:

An object model that you can program against on the server


using the familiar object-oriented techniques. Each server
control exposes properties that allow you to manipulate the
control's HTML attributes programmatically in server code.

A set of events for which you can write event handlers in much
the same way you would in a client-based form, except that the
event is handled in server code.

The ability to handle events in client script.

Automatic maintenance of the control's state. If the form makes


a round trip to the server, the values that the user entered into
HTML server controls are automatically maintained when the
page is sent back to the browser.

Interaction with validation controls, so that you can easily
verify that a user has entered appropriate information into a
control.

Data binding to one or more properties of the control.

Support for HTML 4.0 styles if the Web Forms page is displayed
in a browser that supports cascading style sheets.

Pass-through of custom attributes. You can add any attributes
you need to an HTML server control and the page framework will
read them and render them without any change in functionality.
This allows you to add browser-specific attributes to your
controls. For details about how to convert an HTML element to
an HTML server control, see Adding HTML Server Controls to a
Web Forms Page

Web Server Controls

Web server controls are a second set of controls designed with a


different emphasis. They do not map one-to-one to HTML server
controls. Instead, they are defined as abstract controls in which
the actual HTML rendered by the control can be quite different
from the model that you program against. For example, a
RadioButtonList Web server control might be rendered in a table
or as inline text with other HTML.

Web server controls include traditional form controls such as


buttons and text boxes as well as complex controls such as
tables. They also include controls that provide commonly used
form functionality such as displaying data in a grid, choosing
dates, and so on.

Web server controls offer all of the features described above for
HTML server controls (except one-to-one mapping to HTML
elements) and these additional features:

A rich object model that provides type-safe programming


capabilities.

Automatic browser detection. The controls can detect browser


capabilities and create appropriate output for both basic and
rich (HTML 4.0) browsers.
For some controls, the ability to define your own look for the
control using templates

For some controls, the ability to specify whether a control's


event causes immediate posting to the server or is instead
cached and raised when the form is submitted.

Ability to pass events from a nested control (such as a button in


a table) to the container control.

At design time in HTML view, the controls appear in your page in


a format such as:

<asp:button attributes runat="server" />

The attributes in this case are not those of HTML elements.


Instead, they are properties of the Web control.

When the Web Forms page runs, the Web server control is
rendered on the page using appropriate HTML, which often
depends not only on the browser type but also on settings that
you have made for the control. For example, a Textbox control
might render as an <INPUT> tag or a <TEXTAREA> tag,
depending on its properties.
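For example (the control IDs are illustrative):

```aspx
<!-- The same control renders different HTML depending on its properties. -->
<asp:TextBox id="Remarks" runat="server" />
<asp:TextBox id="Comments" runat="server"
    TextMode="MultiLine" Rows="4" />
```

The first control renders as an &lt;input type="text"&gt; element, while the second renders as a &lt;textarea&gt;.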
Chapter 5
Design Document

 The entire system is projected with a physical diagram, which
specifies the actual storage parameters that are physically
necessary for any database to be stored on the disk. The
overall structure of the system is derived from this diagram.

 The relationships within the system are structured through a
conceptual ER diagram, which specifies not only the entities
but also the relationships through which the system exists and
the cardinalities that are necessary for the system state to
continue.

 The context-level DFD is provided to give an idea of the
functional inputs and outputs that are achieved through the
system. It depicts the input and output standards at the
highest level of the system's existence.

Data Flow Diagrams

 This diagram serves two purposes:

 Provides an indication of how data is transformed as it
moves through the system.

 Depicts the functions and sub-functions that transform
the data flow.

 The data flow diagram provides additional information that is
used during the analysis of the information domain, and serves
as a basis for the modeling of functions.

 The description of each function presented in the DFD is
contained in a process specification called a PSPEC.
ER-Diagrams

 The Entity Relationship Diagram (ERD) depicts the relationships
between the data objects. The ERD is the notation that is used
to conduct the data modeling activity; the attributes of each
data object noted in the ERD can be described using a data
object description.

 The set of primary components that are identified by the ERD
are:

 Data objects
 Relationships
 Attributes
 Various types of indicators

 The primary purpose of the ERD is to represent data objects


and their relationships.

Unified Modeling Language Diagrams

 The Unified Modeling Language allows the software engineer to
express an analysis model using a modeling notation that is
governed by a set of syntactic, semantic, and pragmatic rules.

 A UML system is represented using five different views that
describe the system from distinctly different perspectives. Each
view is defined by a set of diagrams, as follows.

 User Model View

i. This view represents the system from the user's
perspective.

ii. The analysis representation describes a usage scenario
from the end-user's perspective.
Structural model view
 In this model the data and functionality are viewed from
inside the system.

 This model view models the static structures.


Behavioral Model View
 It represents the dynamic or behavioral aspects of the
system, depicting the interactions or collaborations between
the various structural elements described in the user model
and structural model views.

Implementation Model View

 In this view the structural and behavioral aspects of the
system are represented as they are to be built.

Environmental Model View

In this view the structural and behavioral aspects of the environment in which the
system is to be implemented are represented.

UML is specifically constructed through two different domains; they are:

 UML Analysis modeling, which focuses on the user


model and structural model views of the system.
 UML design modeling, which focuses on the behavioral
modeling, implementation modeling and environmental
model views.
Use Case Diagrams


The actors who have been identified in this system are


 Customers
 Sales staff
 Internal Administrator
Customers: These are the actors who register themselves on the system
to carry out purchases. They log onto the system and have the
privilege of placing orders and checking the status of inventory.

[High-level use case diagram for Customer: login information,
customer registration, query for existing items, raising an order.]
Sales staff
These are the internal actors within the system; they execute the sales
process with respect to the orders that are raised by the customers.

[High-level use case diagram for Sales Staff: login information,
query for customer orders, query for item inventory, bill generation.]
Internal Administrators: These are the actors who have the overall
control over the data maintenance of the system. The administrator is in
charge of any consistent data transactions that execute upon the
system.

[High-level use case diagram for Internal Administrator: login
information, register outlets, register stock depots, register items,
maintain stores inventory.]


Elaborated diagram for customer

[Elaborated use case diagram for Customer: login (authenticate login
name and password, accessibility associated through privilege); raise
request for customer registration (generate customer account number,
validate data fields, store); raise query for item (enter customer
details, select the item ID, display item information); request for
raising order (enter the retail outlet ID, generate order number,
validate data fields, display, store).]
Elaborated diagram for Sales Staff

[Elaborated use case diagram for Sales Staff: login (authenticate
login name and password, privileged activities); query for customer
orders (enter the order number, validate the fields, display); query
for inventory items (enter the item number, validate the fields,
display); bill generation (select the customer order number, check
all the ordered items, generate bill number, generate the bill).]
Elaborated diagram for Internal Administrator

[Elaborated use case diagram for Internal Administrator: login
(authenticate login name and password, accessibility associated
through privilege); request for outlet registration (enter required
data, generate outlet ID, validate fields); request for stock depot
registration (enter required data, generate stock depot ID, validate
fields); request for item registration (enter the required data,
generate item ID, validate fields, store); stores inventory (enter
the inventory ID, validate the field, display).]
Class Collaboration For:
Retail outlet, major stores inventory, shelf inventory and customer
orders collaboration

Retail outlet master
  Retail-outlet-ID: number
  Retail-outlet-name: varchar2
  Outlet-address: varchar2
  Outlet-phone-no: number
  Outlet-fax-no: varchar2
  Outlet-incharge-ID: number
  Insert (), Delete (), Update (), Search ()

Item master
  Item-ID: number
  Item-name: varchar2
  Item-desc: varchar2
  Item-stock-on-hand: varchar2
  Item-reorder-level: varchar2
  Item-unit-price: varchar2
  Item-category-ID: number
  Item-packing-type-ID: number
  Item-min-stock: varchar2
  Insert (), Delete (), Update (), Search ()
  Validate-category-id (), Validate-packing-type-id ()

Item wise main stores inventory
  Inventory-ID: number
  Item-ID: number
  Retail-outlet-ID: number
  Item-stock: varchar2
  Min-stock-qty: varchar2
  Insert (), Delete (), Update (), Search ()
  Validate-item-id (), Validate-retail-outlet-id ()

Customer item order
  Customer-order-no: number
  Customer-order-date: date
  Customer-delivery-date: date
  Customer-delivery-time: varchar2
  Customer-retail-outlet-ID: number
  Insert (), Delete (), Update (), Search ()
  Validate-retail-outlet-id ()

Item wise shelf inventory master
  Shelf-inventory-ID: number
  Item-ID: number
  Retail-outlet-ID: number
  Shelf-item-stock: varchar2
  Min-stock-quantity: varchar2
  Insert (), Delete (), Update (), Search ()
  Validate-item-id (), Validate-retail-outlet-id ()

Packing type master
  Item-packing-type-ID: number
  Item-packing-type-desc: varchar2
  Any-other-details: varchar2
  Insert (), Delete (), Update (), Search ()

Category master
  Item-category-ID: number
  Item-category-name: varchar2
  Item-category-description: varchar2
  Insert (), Delete (), Update (), Search ()
Customer Bill Generation collaboration

Customer Bill master
  Bill-No: number
  Bill-date: date
  Customer-order-no: number
  Sales-person-ID: number
  Insert (), Delete (), Update (), Search ()
  Validate-cust-order-no (), Validate-sales-person-id (),
  Validate-discount-id ()

Employee master
  Employee-number: number
  Employee-name: varchar2
  Employee-address: varchar2
  Employee-DOB: date
  Employee-DOJ: date
  Insert (), Delete (), Update (), Search ()

Customer item order master
  Customer-order-no: number
  Customer-item-no: number
  Customer-item-qty: varchar2
  Insert (), Delete (), Update (), Search ()
  Validate-retail-outlet-id ()

Retail outlet master
  Retail-outlet-ID: number
  Retail-outlet-name: varchar2
  Outlet-address: varchar2
  Outlet-phone-no: number
  Outlet-fax-no: varchar2
  Outlet-incharge-ID: number
  Insert (), Delete (), Update (), Search ()
Sequence Diagram for Login

[Sequence diagram for login: the user enters the login name and
password on the login screen; the login master validates the name
and password, and the result is displayed.]
Customer Account registration

[Sequence diagram for customer account registration: a registration
request is raised on the customer account registration form; the
customer account master generates the customer account number,
accepts and validates the data fields, and commits the record.]
Customer Item order sequence

[Sequence diagram for customer item order: an order request is
raised on the item order screen; the system generates the item
order number, validates the retail outlet ID against the retail
outlet master, accepts and validates the data fields, and commits
the record.]

Customer Bill Generation sequence

[Sequence diagram for customer bill generation: a billing request is
raised on the customer bill screen; the bill master generates the
bill number, validates the customer order number, sales person ID,
and discount ID against their respective masters, accepts the data
fields, and commits the record.]
Chapter 6
Coding
Program Design Language

 The program design language is also called structured
English or pseudocode. PDL is a generic reference for a
design language; PDL looks like a modern programming language.
The difference between PDL and a real programming language lies
in the narrative text embedded directly within PDL
statements.

The characteristics required by a design language are:

 A fixed system of keywords that provides for all structured
constructs, data declarations, and modularity characteristics.

 A free syntax of natural language that describes processing


features.

 Data declaration facilities that should include both simple and
complex data structures.

 Subprogram definition and calling techniques that support
various modes of interface description.

PDL syntax should include constructs for subprogram definition,
interface description, data declaration, techniques for block
structuring, condition constructs, repetition constructs, and
I/O constructs.

PDL can be extended to include keywords for multitasking and/or
concurrent processing, interrupt handling, and interprocess
synchronization. The application design for which PDL is to be used
should dictate the final form of the design language.
Chapter 7
Testing & Debugging Strategies
Testing
Testing is the process of detecting errors. Testing performs a very critical role for
quality assurance and for ensuring the reliability of software. The results of testing
are used later on during maintenance also.

Psychology of Testing
The aim of testing is often to demonstrate that a program works by showing that it
has no errors. The basic purpose of testing phase is to detect the errors that may
be present in the program. Hence one should not start testing with the intent of
showing that a program works, but the intent should be to show that a program
doesn’t work. Testing is the process of executing a program with the intent of
finding errors.

Testing Objectives
The main objective of testing is to uncover a host of errors, systematically

and with minimum effort and time. Stating formally, we can say,

 Testing is a process of executing a program with the intent of

finding an error.

 A successful test is one that uncovers an as yet undiscovered error.

 A good test case is one that has a high probability of finding error,

if it exists.

 The tests are inadequate to detect possibly present errors.

 The software more or less conforms to the quality and reliability
standards.
Levels of Testing
In order to uncover the errors present in different phases we have the

concept of levels of testing. The basic levels of testing are as shown below…

Client Needs     -  Acceptance Testing
Requirements     -  System Testing
Design           -  Integration Testing
Code             -  Unit Testing
System Testing
The philosophy behind testing is to find errors. Test cases are devised with this in
mind. A strategy employed for system testing is code testing.

Code Testing:
This strategy examines the logic of the program. To follow this method we
developed some test data that resulted in executing every instruction in the
program and module, i.e. every path is tested. Systems are not designed as entire
systems, nor are they tested as single systems. To ensure that the coding is
correct, two types of testing are performed on all systems.
Types Of Testing

 Unit Testing
 Link Testing

Unit Testing
Unit testing focuses verification effort on the smallest unit of software, i.e. the
module. Using the detailed design and the process specifications, testing is done to
uncover errors within the boundary of the module. All modules must be successful in
the unit test before integration testing begins.

In this project each service can be thought of as a module. There are several
modules, such as Login, HWAdmin, MasterAdmin, Normal User, and PManager. Each
module has been tested by giving different sets of inputs, both while developing the
module and after finishing development, so that each module works without any
error. The inputs are validated when accepted from the user.

In this application the developer tests the programs as they are built up into a
system. Software units in a system are the modules and routines that are assembled
and integrated to form a specific function. Unit testing is first done on modules,
independent of one another, to locate errors. This enables errors to be detected
early, so that errors resulting from interaction between modules are initially
avoided.

Link Testing
Link testing does not test the software itself but rather the integration of each
module in the system. The primary concern is the compatibility of each module. The
programmer tests where modules are designed with different parameters, lengths,
types, etc.

Integration Testing
After the unit testing we have to perform integration testing. The goal here is to
see if the modules can be integrated properly, the emphasis being on testing
interfaces between modules. This testing activity can be considered as testing the
design, and hence the emphasis is on testing module interactions.
In this project integrating all the modules forms the main system. When integrating
all the modules I have checked whether the integration effects working of any of
the services by giving different combinations of inputs with which the two services
run perfectly before Integration.

System Testing
Here the entire software system is tested. The reference document for this process
is the requirements document, and the goal is to see if the software meets its
requirements.

Here the entire system has been tested against the requirements of the project and
it is checked whether all requirements of the project have been satisfied or not.

Acceptance Testing
Acceptance Test is performed with realistic data of the client to demonstrate that
the software is working satisfactorily. Testing here is focused on external behavior
of the system; the internal logic of program is not emphasized.

In this project I have collected some data and tested whether the project is
working correctly or not.

Test cases should be selected so that the largest number of attributes of an


equivalence class is exercised at once. The testing phase is an important part of
software development. It is the process of finding errors and missing operations
and also a complete verification to determine whether the objectives are met and
the user requirements are satisfied.

White Box Testing


This is a unit testing method where a unit is taken at a time and tested thoroughly
at the statement level to find the maximum possible errors. I tested stepwise every
piece of code, taking care that every statement in the code is executed at least
once. White box testing is also called glass box testing.

I have generated a list of test cases with sample data, which is used to check all
possible combinations of execution paths through the code at every module level.
Black Box Testing
This testing method considers a module as a single unit and checks the unit at the
interface and communication with other modules rather than getting into details at
the statement level. Here the module is treated as a black box that takes some
input and generates output. Output for a given set of input combinations is
forwarded to other modules.

Criteria Satisfied by Test Cases

1) Test cases that reduce, by a count that is greater than one,
the number of additional test cases that must be designed to
achieve reasonable testing.

2) Test cases that tell us something about the presence or
absence of classes of errors, rather than an error associated
only with the specific test at hand.
