Automatic Time Table Generator
Abstract
The manual system of timetable preparation in colleges is monotonous and time-consuming, and it
often results in the same teacher being assigned more than one class at a time or several classes
clashing in the same classroom. Because the process is not automated, resources are not used
effectively. To deal with these problems, a computer-aided timetable generator can be built. The
system takes inputs such as the number of subjects, the teachers, the maximum number of lectures a
teacher can conduct, the priority of each subject, and the topics to be covered in a week or in a
lecture. Considering these constraints, it creates feasible timetables for the working days of the
week, making the best possible use of all resources. A suitable timetable is then chosen from the
optimal solutions generated. The Automatic College Timetable Generator is a web-based application.
Maintaining many faculty members and allocating subjects to them manually at the same time is very
difficult, and the proposed system overcomes this disadvantage: timetables can be generated for any
number of courses and semesters. The system serves dynamic pages and can be implemented with tools
that are widely used and free.
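
As a rough illustration of the clash constraint described above, the following Java sketch checks that a candidate lecture assignment does not give a teacher two classes at the same time or put two classes in the same room. The class and field names are illustrative assumptions, not the project's actual code.

import java.util.List;

/* Illustrative sketch only: one lecture slot in the generated timetable. */
class Assignment {
    final int day;          // 0..4 for the working days of the week
    final int period;       // lecture number within the day
    final String teacher;
    final String room;
    final String subject;

    Assignment(int day, int period, String teacher, String room, String subject) {
        this.day = day;
        this.period = period;
        this.teacher = teacher;
        this.room = room;
        this.subject = subject;
    }
}

class ClashChecker {
    /* Returns true if adding 'candidate' would double-book a teacher or a room
       in the same day and period of the partially built timetable. */
    static boolean clashes(List<Assignment> timetable, Assignment candidate) {
        for (Assignment a : timetable) {
            if (a.day == candidate.day && a.period == candidate.period) {
                if (a.teacher.equals(candidate.teacher)) return true; // teacher clash
                if (a.room.equals(candidate.room)) return true;       // room clash
            }
        }
        return false;
    }
}

A generator can call ClashChecker.clashes before placing each lecture and keep only assignments that return false, which is what "feasible" means in the description above.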
Existing System
Every institution/organization has its own timetable, and managing and maintaining it manually is
difficult.
When the timetable is generated, it must take into account the maximum and minimum workload in the
college.
With a single person or a small group scheduling it manually, the task takes a lot of effort and
time.
Proposed System
The Automatic Timetable Manager is a Java-based web application used to generate the timetable
automatically.
It manages all the periods automatically and is helpful to the faculty, who can manage the
timetable online.
It is also useful when a faculty member is absent; the admin maintains all access privileges of
faculty and students.
If a faculty member is busy, he or she can decline the proposed class period timings.
The proposed system generates the timetable automatically and saves time.
Faculty need not worry about their period details or maximum workload.
Using this application, faculty can apply for leave by providing the required leave date and the
reason.
The admin can view the requests sent by faculty and can also view student details.
It is a comprehensive timetable management solution for colleges which helps to overcome the
challenges in the current system.
Advantages
A consistent and accurate timetable is guaranteed, so the system is well organized and reliable.
Software Requirements:
Technology : Java
Server : Tomcat
Database : MySQL
Hardware Requirements:
Modules:
Admin:
Admin will login into the application.
Admin will add all the faculty and students.
Add course details.
Admin will create time table.
If any information needs to be updated, it is done by the admin, who can also view feedback.
Faculty:
In order to view the time table the faculty must login into the application.
Faculty can view the time table.
The faculty can send feedback.
Student:
Student can login into the application.
Time table can be viewed by students.
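
The admin, faculty, and student roles above work with a few core records: faculty with a workload limit, subjects with weekly lectures and a priority, and students enrolled in a course and semester. A minimal Java sketch of those records is shown below; the field names are assumptions for illustration, not the system's real schema.

/* Illustrative domain records only; field names are assumed, not the actual schema. */
class Faculty {
    String id;
    String name;
    int maxLecturesPerWeek;   // maximum workload the admin allows for this teacher
}

class Subject {
    String code;
    String title;
    int lecturesPerWeek;      // how many lectures must be covered in a week
    int priority;             // higher priority subjects are scheduled first
}

class Student {
    String rollNumber;
    String name;
    String course;
    int semester;
}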
2. SYSTEM STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth
with a very general plan for the project and some cost estimates. During system analysis, the
feasibility study of the proposed system is carried out. This is to ensure that the proposed
system is not a burden to the company. For feasibility analysis, some understanding of the major
requirements of the system is essential.
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact the system will have on the organization.
The amount of funds the company can pour into research and development of the system is limited,
and the expenditures must be justified. The developed system is well within budget, which was
achieved because most of the technologies used are freely available; only the customized products
had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements
of the system. Any system developed must not place a high demand on the available technical
resources, as this would in turn place high demands on the client. The developed system therefore
has modest requirements, and only minimal or no changes are needed to implement it.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. It includes the
process of training the user to use the system efficiently. The user must not feel threatened by
the system but must accept it as a necessity. The level of acceptance by the users depends on the
methods employed to educate them about the system and to make them familiar with it. Their
confidence must be raised so that they can also offer constructive criticism, which is welcome, as
they are the final users of the system.
Client Server
Overview:
Among the varied topics in the field of computers, Client Server is one that has generated more
heat than light, and more hype than reality. The technology has acquired a critical mass of
attention, with dedicated conferences and magazines. Major computer vendors such as IBM and DEC
have declared that client server is their main future market. A survey by DBMS magazine revealed
that 76% of its readers were actively looking at client server solutions. The client server
development tools market grew from $200 million in 1992 to more than $1.2 billion in 1996.
Client server implementations are complex, but the underlying concept is simple and powerful. A
client is an application running with local resources that is able to request database and related
services from a separate remote server. The software mediating this client server interaction is
often referred to as MIDDLEWARE.
The key client server idea is that the client, as a user, is essentially insulated from the
physical location and format of the data needed by the application. With the proper middleware, a
client input form or report can transparently access and manipulate both local databases on the
client machine and remote databases on one or more servers. An added bonus is that client server
opens the door to multi-vendor database access, including heterogeneous table joins.
Two prominent systems in existence are client server and file server systems, and it is essential
to distinguish between them. Both provide shared network access to data, but the comparison ends
there. The file server simply provides a remote disk drive that can be accessed by LAN applications
on a file-by-file basis. The client server offers full relational database services such as SQL
access, record modification, insert and delete with full relational integrity, backup/restore, and
performance for high volumes of transactions. The client server middleware provides a flexible
interface between client and server: who does what, when, and to whom.
Why Client Server
Client server has evolved to solve a problem that has been around since the earliest days of
computing: how best to distribute computing, data generation, and data storage resources in order
to obtain efficient, cost-effective departmental and enterprise-wide data processing. During the
mainframe era, the choices were quite limited. A central machine housed both the CPU and the data
(cards, tapes, drums and later disks). Access to these resources was initially confined to batch
runs that produced departmental reports at appropriate intervals. A strong central information
services department ruled the corporation; the role of the rest of the corporation was limited to
requesting new or more frequent reports and providing hand-written forms from which the central
data banks were created and updated. The earliest client server solutions could therefore best be
characterized as "SLAVE-MASTER".
Time-sharing changed the picture. Remote terminals could view and even change the central data,
subject to access permissions. And, as the central data banks evolved into sophisticated relational
databases with non-programmer query languages, online users could formulate ad hoc queries and
produce local reports without adding to the MIS applications software backlog. However, remote
access was through dumb terminals, and the client remained subordinate to the master.
Front end or User Interface Design
The browser-specific components are designed using HTML standards, and the dynamic behaviour of
the pages is provided by the constructs of Java Server Pages.
The standards of three-tier architecture are followed, with major attention given to high cohesion
and limited coupling for effective operation.
About Java
Initially the language was called "Oak", but it was renamed "Java" in 1995. The primary motivation
for the language was the need for a platform-independent (i.e., architecture-neutral) language that
could be used to create software to be embedded in various consumer electronic devices.
Java is a programmer’s language.
Java has had a profound effect on the Internet because it expands the universe of objects that can
move about freely in cyberspace. In a network, two categories of objects are transmitted between
the server and the personal computer: passive information and dynamic, active programs. Dynamic,
self-executing programs cause serious problems in the areas of security and portability, but Java
addresses those concerns and, by doing so, has opened the door to an exciting new form of program
called the applet.
Java can be used to create two types of programs: applications and applets.
Security
Every time you download a "normal" program, you risk a viral infection. Prior to Java, most users
did not download executable programs frequently, and those who did scanned them for viruses prior
to execution; even so, most users still worried about the possibility of infecting their systems.
In addition, another type of malicious program must be guarded against, one that can gather
private information such as credit card numbers, bank account balances, and passwords. Java
answers both of these concerns by providing a "firewall" between a network application and your
computer. When you use a Java-compatible Web browser, you can safely download Java applets without
fear of virus infection or malicious intent.
Portability
The key that allows Java to solve the security and portability problems is that the output of the
Java compiler is byte code. Byte code is a highly optimized set of instructions designed to be
executed by the Java run-time system, which is called the Java Virtual Machine (JVM). That is, in
its standard form, the JVM is an interpreter for byte code. Translating a Java program into byte
code makes it much easier to run the program in a wide variety of environments: once the run-time
package exists for a given system, any Java program can run on it.
Although Java was designed for interpretation, there is technically nothing about Java that
prevents on-the-fly compilation of byte code into native code. Sun supplies a Just In Time (JIT)
compiler for byte code. When the JIT compiler is part of the JVM, it compiles byte code into
executable code in real time, on a piece-by-piece, demand basis. It is not possible to compile an
entire Java program into executable code all at once, because Java performs various checks that
can be done only at run time; instead, the JIT compiles code as it is needed, during execution.
Beyond the language, there is the Java Virtual Machine, an important element of the Java
technology. The virtual machine can be embedded within a web browser or an operating system. Once
a piece of Java code is loaded onto a machine, it is verified: as part of the loading process, a
class loader is invoked, and byte code verification makes sure that the code generated by the
compiler will not corrupt the machine it is loaded on. Byte code verification takes place after
compilation, when the class is loaded, to make sure that everything is accurate and correct. Byte
code verification is therefore integral to the compiling and executing of Java code.
Overall Description
[Figure: Java source (.java) -> Java compiler (javac) -> byte code (.class) -> Java Virtual Machine]
Java produces byte codes and executes them as follows. The Java source code is placed in a .java
file, which is processed by the Java compiler, javac. The compiler produces a .class file, which
contains the byte code. The .class file is then loaded, across the network or locally on your
machine, into the execution environment, the Java Virtual Machine, which interprets and executes
the byte code.
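
For example, a minimal program follows exactly this path: javac turns the .java source into a .class file of byte code, and the java launcher hands that byte code to the JVM.

// HelloWorld.java
// Compile:  javac HelloWorld.java   (produces HelloWorld.class, the byte code)
// Run:      java HelloWorld         (the JVM loads and interprets the byte code)
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World");
    }
}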
Java Architecture
Compilation of code
When you compile the code, the Java compiler creates machine code (called byte code) for a
hypothetical machine called the Java Virtual Machine (JVM), which then executes the byte code. The
JVM was created to overcome the issue of portability: the code is written and compiled once and
interpreted on all machines.
Compiling and interpreting Java source code
[Figure: the same Java source code is compiled to platform-independent byte code once, and the
byte code is then run by platform-specific Java interpreters on a PC, a Macintosh, and a SPARC.]
At run time, the Java interpreter makes the byte code file behave as if it were running on a real
Java Virtual Machine. In reality this could be an Intel Pentium running Windows 95, a Sun
SPARCstation running Solaris, or an Apple Macintosh, and all of them could receive code from any
computer through the Internet and run the applets.
Simple
Java was designed to be easy for the professional programmer to learn and use effectively. If you
are an experienced C++ programmer, learning Java is even easier, because Java inherits the C/C++
syntax and many of the object-oriented features of C++. Most of the confusing concepts from C++
are either left out of Java or implemented in a cleaner, more approachable manner. In Java there
is a small number of clearly defined ways to accomplish a given task.
Object-Oriented
Java was not designed to be source-code compatible with any other language. This allowed the Java
team the freedom to design with a blank slate. One outcome of this was a clean, usable, pragmatic
approach to objects. The object model in Java is simple and easy to extend, while simple types,
such as integers, are kept as high-performance non-objects.
Robust
Java is robust: automatic memory management through garbage collection and built-in exception
handling remove many of the run-time errors that are common in C and C++ programs.
JavaScript
Even though JavaScript supports both client-side and server-side Web programming, we prefer
JavaScript for client-side programming, since most browsers support it. JavaScript is almost as
easy to learn as HTML, and JavaScript statements can be included in HTML documents by enclosing
the statements between a pair of scripting tags:
<SCRIPT>
JavaScript statements
</SCRIPT>
JavaScript Vs Java
JavaScript and Java are entirely different languages. A few of the most glaring differences are:
Java is compiled to byte code and run by the Java Virtual Machine, while JavaScript is interpreted
directly by the browser.
Java is strongly typed, with variable types fixed at compile time, while JavaScript is loosely
typed.
Java objects are class-based, while JavaScript objects are prototype-based.
Advantages of HTML
HTML can be used to display any type of document on the host computer, which can be geographically
at a different location. It is a versatile language and can be used on any platform or desktop.
HTML provides tags (special codes) to make the document look attractive. Using graphics, fonts,
different sizes, colours, etc. can enhance the presentation of the document. Anything that is not
a tag is part of the document itself.
An HTML document is small and hence easy to send over the net; it is small because it does not
include formatting information.
HTML is platform independent.
HTML tags are not case-sensitive.
What Is JDBC?
JDBC is a Java API for executing SQL statements. (As a point of interest, JDBC is a trademarked
name and is not an acronym; nevertheless, JDBC is often thought of as standing for Java Database
Connectivity.) It consists of a set of classes and interfaces written in the Java programming
language. JDBC provides a standard API for tool/database developers and makes it possible to write
database applications using a pure Java API.
Using JDBC, it is easy to send SQL statements to virtually any relational database. One can write
a single program using the JDBC API, and the program will be able to send SQL statements to the
appropriate database. The combination of Java and JDBC lets a programmer write an application once
and run it anywhere.
So why not just use ODBC from Java? The answer is that you can use ODBC from
Java, but this is best done with the help of JDBC in the form of the JDBC-ODBC
Bridge, which we will cover shortly. The question now becomes "Why do you
need JDBC?" There are several answers to this question:
1. ODBC is not appropriate for direct use from Java because it uses a C
interface. Calls from Java to native C code have a number of drawbacks in
the security, implementation, robustness, and automatic portability of
applications.
2. A literal translation of the ODBC C API into a Java API would not be
desirable. For example, Java has no pointers, and ODBC makes copious use
of them, including the notoriously error-prone generic pointer "void *". You
can think of JDBC as ODBC translated into an object-oriented interface that
is natural for Java programmers.
3. ODBC is hard to learn. It mixes simple and advanced features together, and
it has complex options even for simple queries. JDBC, on the other hand,
was designed to keep simple things simple while allowing more advanced
capabilities where required.
4. A Java API like JDBC is needed in order to enable a "pure Java" solution.
When ODBC is used, the ODBC driver manager and drivers must be
manually installed on every client machine. When the JDBC driver is
written completely in Java, however, JDBC code is automatically installable,
portable, and secure on all Java platforms from network computers to
mainframes.
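
A minimal JDBC sketch is shown below. It assumes a MySQL database named timetable with a faculty table and the MySQL Connector/J driver on the classpath; the URL, credentials, table, and column names are illustrative assumptions, not the project's actual schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class FacultyQuery {
    public static void main(String[] args) {
        // Assumed URL and credentials; older driver versions may also need
        // Class.forName("com.mysql.jdbc.Driver") before getConnection.
        String url = "jdbc:mysql://localhost:3306/timetable";
        String sql = "SELECT name FROM faculty WHERE max_lectures_per_week >= ?";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, 10);                       // bind the workload threshold
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        } catch (SQLException e) {
            e.printStackTrace();                    // report any database error
        }
    }
}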
The JDBC API supports both two-tier and three-tier models for database access.
In the two-tier model, a Java applet or application talks directly to the database.
This requires a JDBC driver that can communicate with the particular database
management system being accessed. A user's SQL statements are delivered to the
database, and the results of those statements are sent back to the user. The database
may be located on another machine to which the user is connected via a network.
This is referred to as a client/server configuration, with the user's machine as the
client, and the machine housing the database as the server. The network can be an
Intranet, which, for example, connects employees within a corporation, or it can be the Internet.
[Figure: two-tier model, in which a Java application on the client machine talks directly to the
DBMS on the database server.]
In the three-tier model, commands are sent to a "middle tier" of services, which
then send SQL statements to the database. The database processes the SQL
statements and sends the results back to the middle tier, which then sends them to the user.
[Figure: three-tier model, in which an HTML browser or Java applet on the client machine talks to
an application server (the middle tier containing the business logic), which in turn uses JDBC to
reach the DBMS on the database server.]
MIS directors find the three-tier model very attractive because the middle
tier makes it possible to maintain control over access and the kinds of updates that
can be made to corporate data. Another advantage is that when there is a middle
tier, the user can employ an easy-to-use higher-level API which is translated by the
middle tier into the appropriate low-level calls. Finally, in many cases the three-tier
architecture can provide performance advantages.
Until now the middle tier has typically been written in languages such as C or
C++, which offer fast performance. However, with the introduction of
optimizing compilers that translate Java byte code into efficient machine-
specific code, it is becoming practical to implement the middle tier in Java. This
is a big plus, making it possible to take advantage of Java's robustness,
multithreading, and security features. JDBC is important to allow database
access from a Java middle tier.
JDBC Driver Types
The JDBC drivers that we are aware of at this time fit into one of four
categories:
JDBC-ODBC bridge plus ODBC driver
Native-API partly-Java driver
JDBC-Net pure Java driver
Native-protocol pure Java driver
JDBC-ODBC Bridge
If possible, use a Pure Java JDBC driver instead of the Bridge and an ODBC
driver. This completely eliminates the client configuration required by ODBC.
It also eliminates the potential that the Java VM could be corrupted by an error
in the native code brought in by the Bridge (that is, the Bridge native library,
the ODBC driver manager library, the ODBC driver library, and the database
client library).
The Bridge is implemented as the sun.jdbc.odbc Java package and contains a native library used to
access ODBC. The Bridge is a joint development of Intersolv and JavaSoft.
Java Server Pages (JSP)
Java Server Pages is a simple yet powerful technology for creating and maintaining dynamic-content
web pages. Based on the Java programming language, Java Server Pages offers proven portability,
open standards, and a mature re-usable component model. The Java Server Pages architecture enables
the separation of content generation from content presentation. This separation not only eases
maintenance headaches, it also allows web team members to focus on their areas of expertise: web
page designers can concentrate on layout and web application designers on programming, with
minimal concern about impacting each other's work.
Features of JSP
Portability:
Java Server Pages files can be run on any web server or web-enabled application server that
provides support for them. Dubbed the JSP engine, this support involves recognition, translation,
and management of the Java Server Pages lifecycle and its interaction with components.
Components
It was mentioned earlier that the Java Server Pages architecture can include reusable Java
components. The architecture also allows for the embedding of a scripting language directly into
the Java Server Pages file. The components currently supported include JavaBeans and Servlets.
Processing
A Java Server Pages file is essentially an HTML document with JSP scripting or tags. The file has
a .jsp extension, which identifies it to the server as a Java Server Pages file. Before the page
is served, the Java Server Pages syntax is parsed and processed into a Servlet on the server side.
The generated Servlet outputs real content in straight HTML when responding to the client.
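
As a rough sketch (not the code Tomcat actually generates), a trivial JSP page behaves like a servlet of the following shape, writing straight HTML back to the client:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/* Roughly what a simple JSP page is translated into on the server side;
   the real generated servlet is more elaborate. */
public class HelloPageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<h1>Time Table</h1>");
        out.println("<p>Generated at " + new java.util.Date() + "</p>"); // the dynamic part
        out.println("</body></html>");
    }
}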
Access Models:
A Java Server Pages file may be accessed in at least two different ways. In the first, a client's
request comes directly into a Java Server Page. In this scenario, suppose the page accesses
reusable JavaBean components that perform particular well-defined computations, like accessing a
database. The results of the Bean's computations, called result sets, are stored within the Bean
as properties, and the page uses such Beans to generate dynamic content and present it back to the
client. In the second, the client's request is first handled by a servlet that performs the
processing and then forwards the results, typically in Beans, to a Java Server Page for
presentation.
In both of the above cases, the page could also contain any valid Java code. The Java Server Pages
architecture encourages separation of content from presentation.
1. The client sends a request to the web server for a JSP file by giving the name of the JSP file
within the form tag of an HTML page.
2. The web server passes the request to the JSP engine, which translates the JSP file into a
servlet (the first time it is requested) and compiles it.
3. The servlet executes, and the resulting HTML is sent back to the client.
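
A bean of the kind described in the access models above might look like the following sketch; the class and property names are illustrative only. A servlet or the page itself fills the bean from a database query, and the JSP reads the property to present the timetable.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

/* Illustrative JavaBean: holds query results as a property the JSP can read. */
public class TimetableBean implements Serializable {
    private List<String> slots = new ArrayList<String>();

    public void setSlots(List<String> slots) {  // filled after querying the database
        this.slots = slots;
    }

    public List<String> getSlots() {            // read by the JSP to render the page
        return slots;
    }
}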
Tomcat Web Server
Tomcat is an open-source web server developed by the Apache Group. Apache Tomcat is the servlet
container used in the official Reference Implementation for the Java Servlet and Java Server Pages
technologies. The Java Servlet and Java Server Pages specifications are developed by Sun under the
Java Community Process. Web servers like Apache Tomcat support only web components, while an
application server supports web components as well as business components (BEA's WebLogic is one
popular application server). To develop a web application with JSP/Servlets, install a web server
such as JRun or Tomcat to run the application.
UML
UML was created by the Object Management Group (OMG) and UML 1.0
specification draft was proposed to the OMG in January 1997.
UML is not a programming language but tools can be used to generate code in
various languages using UML diagrams. UML has a direct relation with object
oriented analysis and design. After some standardization, UML has become an
OMG standard.
Goals of UML
A picture is worth a thousand words; this idiom absolutely fits UML. Object-oriented concepts were
introduced much earlier than UML. At that point of time, there were no standard methodologies to
organize and consolidate object-oriented development. It was then that UML came into the picture.
There are a number of goals for developing UML, but the most important is to define a
general-purpose modeling language that all modelers can use, one that is also simple to understand
and use.
UML diagrams are made not only for developers but also for business users, common people, and
anybody interested in understanding the system, which can be a software or a non-software system.
Thus it must be clear that UML is not a development method; rather, it accompanies processes to
make the system successful.
An object contains both data and methods that control the data. The data
represents the state of the object. A class describes an object and they also form a
hierarchy to model the real-world system. The hierarchy is represented as
inheritance and the classes can also be associated in different ways as per the
requirement.
Objects are the real-world entities that exist around us and the basic concepts such
as abstraction, encapsulation, inheritance, and polymorphism all can be
represented using UML.
UML is powerful enough to represent all the concepts that exist in object-oriented analysis and
design. UML diagrams are representations of object-oriented concepts only. Thus, before learning
UML, it is important to understand OO concepts in detail.
Thus, it is important to understand the OO analysis and design concepts. The most
important purpose of OO analysis is to identify objects of a system to be designed.
This analysis is also done for an existing system. Now an efficient analysis is only
possible when we are able to start thinking in a way where objects can be
identified. After identifying the objects, their relationships are identified and
finally the design is produced.
There are three basic steps where the OO concepts are applied and implemented: OO analysis, OO
design, and OO implementation using OO languages.
UML Diagrams
UML diagrams are the ultimate output of the entire discussion. All the elements,
relationships are used to make a complete UML diagram and the diagram
represents a system.
The visual effect of the UML diagram is the most important part of the entire
process. All the other elements are used to make it complete.
UML defines nine diagram types in all; the ones most relevant here are listed below and described
in the following sections.
Class diagram
Use case diagram
Sequence diagram
Activity diagram
Class Diagram
Class diagrams are the most common diagrams used in UML. Class diagram
consists of classes, interfaces, associations, and collaboration. Class diagrams
basically represent the object-oriented view of a system, which is static in nature.
An active class is used in a class diagram to represent the concurrency of the system.
Activity Diagram
Activities are nothing but the functions of a system. A number of activity diagrams are prepared
to capture the entire flow in a system. Activity diagrams are used to visualize the flow of
control in a system and to get an idea of how the system will work when it is executed.
6. SYSTEM TESTING
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application; it is done after the completion of an individual unit and before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process performs
accurately to the documented specifications and contains clearly defined inputs and expected
results.
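
For instance, a unit test for the clash-check logic sketched in the abstract section could look like the following. JUnit 4 is assumed as the test framework (the document does not name one), and the Assignment and ClashChecker classes from the earlier sketch are assumed to be on the classpath.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class ClashCheckerTest {

    @Test
    public void sameTeacherSamePeriodIsReportedAsClash() {
        List<Assignment> timetable = new ArrayList<Assignment>();
        timetable.add(new Assignment(0, 1, "T1", "R101", "Maths"));

        // Same teacher, same day and period, different room: must clash.
        Assignment candidate = new Assignment(0, 1, "T1", "R202", "Physics");
        assertTrue(ClashChecker.clashes(timetable, candidate));
    }

    @Test
    public void differentPeriodIsNotAClash() {
        List<Assignment> timetable = new ArrayList<Assignment>();
        timetable.add(new Assignment(0, 1, "T1", "R101", "Maths"));

        // Same teacher and room but a different period: no clash expected.
        Assignment candidate = new Assignment(0, 2, "T1", "R101", "Maths");
        assertFalse(ClashChecker.clashes(timetable, candidate));
    }
}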
Integration testing
Functional test
System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.
Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.
Field testing will be performed manually and functional tests will be written
in detail.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Features to be tested
Verify that the entries are of the correct format
No duplicate entries should be allowed
All links should take the user to the correct page.
Test Results: All the test cases mentioned above passed successfully. No defects were encountered.
SYSTEM TESTING
TESTING METHODOLOGIES
The following are the Testing Methodologies:
o Unit Testing.
o Integration Testing.
o User Acceptance Testing.
o Output Testing.
o Validation Testing.
Unit Testing
Integration Testing
Integration testing addresses the issues associated with the dual problems of verification and
program construction. After the software has been integrated, a set of high-order tests is
conducted. The main objective in this testing process is to take unit-tested modules and build a
program structure that has been dictated by design.
The following are the types of integration testing:
1. Top-down integration
This method begins construction and testing with the main control module and integrates modules
downward through the control hierarchy, using stubs for modules that are not yet available.
2. Bottom-up integration
This method begins the construction and testing with the modules at the lowest level in the
program structure. Since the modules are integrated from the bottom up, processing required for
modules subordinate to a given level is always available and the need for stubs is eliminated. The
bottom-up integration strategy may be implemented with the following steps:
Low-level modules are combined into clusters that perform a specific software sub-function.
A driver (the control program for testing) is written to coordinate test case input and output.
The cluster is tested.
Drivers are removed and clusters are combined, moving upward in the program structure.
The bottom-up approach tests each module individually; each module is then integrated with a main
module and tested for functionality.
User Acceptance Testing
User acceptance of a system is the key factor for the success of any system. The system under
consideration was tested for user acceptance by constantly keeping in touch with the prospective
system users at the time of developing it and making changes wherever required. The system
developed provides a friendly user interface that can easily be understood even by a person who is
new to the system.
Output Testing
After performing validation testing, the next step is output testing of the proposed system, since
no system can be useful if it does not produce the required output in the specified format. The
outputs generated or displayed by the system under consideration are tested by asking the users
about the format they require. The output format is considered in two ways: on screen and in
printed format.
Validation Checking
Text Field:
The text field can contain only a number of characters less than or equal to its size. The text
fields are alphanumeric in some tables and alphabetic in others. An incorrect entry always flashes
an error message.
Numeric Field:
The numeric field can contain only the digits 0 to 9; entry of any other character flashes an
error message. The individual modules are checked for accuracy and for what they have to perform.
Each module is subjected to a test run with sample data, and the individually tested modules are
then integrated into a single system. Testing involves executing the program with real data; the
existence of any program defect is inferred from the output. The testing is planned so that all
the requirements are individually tested.
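
A small Java sketch of such field checks is shown below; the length limit and the choice of allowed characters are assumptions for illustration.

public class FieldValidator {

    /* Text field: at most maxLength characters, alphabetic only or alphanumeric
       depending on the table the field belongs to. */
    static boolean isValidText(String value, int maxLength, boolean alphabeticOnly) {
        if (value == null || value.isEmpty() || value.length() > maxLength) {
            return false;
        }
        return alphabeticOnly ? value.matches("[A-Za-z ]+")
                              : value.matches("[A-Za-z0-9 ]+");
    }

    /* Numeric field: digits 0 to 9 only; any other character is rejected. */
    static boolean isValidNumeric(String value) {
        return value != null && value.matches("[0-9]+");
    }
}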
A successful test is one that brings out the defects for inappropriate data and produces output
revealing the errors in the system.
The above testing is done by taking various kinds of test data. Preparation of test data plays a
vital role in system testing. After preparing the test data, the system under study is tested
using it. While testing the system with this data, errors are uncovered and corrected using the
testing steps above, and the corrections are noted for future use.
Using Live Test Data:
Live test data are those that are actually extracted from organization files.
After a system is partially constructed, programmers or analysts often ask users to
key in a set of data from their normal activities. Then, the systems person uses this
data as a way to partially test the system. In other instances, programmers or
analysts extract a set of live data from the files and have them entered themselves.
Using Artificial Test Data:
Artificial test data are created solely for test purposes, since they can be generated to test all
combinations of formats and values. In other words, the artificial data, which can quickly be
prepared by a data-generating utility program in the information systems department, make possible
the testing of all logic and control paths through the program.
The most effective test programs use artificial test data generated by persons other than those
who wrote the programs. Often, an independent team of testers formulates a testing plan, using the
system specifications.
The package "Automatic Time Table Generator" has satisfied all the requirements specified in the
software requirement specification and was accepted.
USER TRAINING
MAINTENANCE
This covers a wide range of activities, including correcting code and design errors. To reduce the
need for maintenance in the long run, we defined the user's requirements more accurately during
the process of system development. Depending on the requirements, this system has been developed
to satisfy the needs to the largest possible extent. With developments in technology, it may be
possible to add many more features based on future requirements. The coding and design are simple
and easy to understand, which will make maintenance easier.
TESTING STRATEGY:
A strategy for system testing integrates system test cases and design techniques into a
well-planned series of steps that results in the successful construction of software. The testing
strategy must incorporate test planning, test case design, test execution, and the resultant data
collection and evaluation. A strategy for software testing must accommodate low-level tests that
are necessary to verify that a small source code segment has been correctly implemented, as well
as high-level tests that validate major system functions against user requirements.
SYSTEM TESTING:
Software, once validated, must be combined with other system elements (e.g. hardware, people,
databases). System testing verifies that all the elements mesh properly and that overall system
function and performance are achieved. It also tests for discrepancies between the system and its
original objective, current specifications, and system documentation.
UNIT TESTING:
In unit testing, different modules are tested against the specifications produced during the
design of the modules. Unit testing is essential for verification of the code produced during the
coding phase; hence the goal is to test the internal logic of the modules. Using the detailed
design description as a guide, important control paths are tested to uncover errors within the
boundary of the modules. This testing is carried out during the programming stage itself. In this
testing step, each module was found to be working satisfactorily with regard to the expected
output from the module.
Test cases: Admin

S. No | Test Case                       | Input                           | Expected Output                         | Actual Output         | Pass/Fail
1     | Check admin login functionality | Admin username and password     | Login must be successful                | Login success         | Pass
2     | Add faculty                     | Details of faculty              | Must be added successfully              | Add success           | Pass
3     | Add student                     | Details of student              | Must be added successfully              | Add success           | Pass
4     | Generate time table             | Details of faculty and subject  | Must be generated successfully          | Generation success    | Pass
5     | Accept/reject leaves            | Leave request from faculty      | Must be accepted/rejected successfully  | Accept/reject success | Pass

Test cases: Faculty

S. No | Test Case                         | Input                          | Expected Output                         | Actual Output  | Pass/Fail
1     | Check faculty login functionality | Faculty username and password  | Login must be successful                | Login success  | Pass
2     | View time table                   | Request for time table         | Time table must be viewed successfully  | View success   | Pass
3     | Ask leave                         | Details regarding leave        | Must be uploaded successfully           | Upload success | Pass

Test cases: Student

S. No | Test Case                         | Input                          | Expected Output                         | Actual Output | Pass/Fail
1     | Check student login functionality | Student username and password  | Login must be successful                | Login success | Pass
2     | View time table                   | Request for time table         | Time table must be viewed successfully  | View success  | Pass
CONCLUSION
Manual preparation of the college timetable is monotonous and time-consuming, and it often results
in the same teacher being assigned more than one class at a time or several classes clashing in
the same classroom. The Automatic Time Table Generator removes this burden: given the subjects,
faculty, workload limits, and priorities, it generates clash-free timetables for the working days
of the week while respecting the maximum and minimum workload of the college. The system makes
better use of the available resources, supports any number of courses and semesters, and saves the
considerable effort and time spent on manual scheduling.
References
References for the project development were taken from the following books and
websites.
Java Technologies
HTML
JDBC
www.tutorialspoint.com
www.stackoverflow.com
www.javapoint.com