Objectives and Scope
The objectives of this performance testing effort are:
• Validate that the core Sakai framework and certain tools meet the minimum performance standards established for this project. The following tools will be measured for performance:
The performance testing effort outlined in this document will not cover the following:
This effort will follow the existing Wiley performance test process. This test plan will serve as the basis for Testware to create Silk Performer Test Scripts. These scripts will be run by Leo Begelman using the Silk Performer software. Unicon, Inc. will monitor and measure the CPU utilization of the web and database servers used during testing. Unicon, Inc. will then analyze and present the performance test results to Wiley at the conclusion of the performance test cycle.
• Capacity Test – Determines the maximum number of concurrent users that the system can support while continuing to meet the performance goals outlined in section 7.
• Consistent Load Test – A long-running stress test that drives a continuous load on the application server for an extended period of time (at least 6 hours). The main purpose of this type of test is to ensure the application can sustain acceptable levels of performance over time.
2. Use the results from the first execution to estimate how many users the system might support. One possibility would be to run 1,000 different users through the system for one hour, with approximately 240 concurrent users at a time.
3. If the second execution continues to meet the performance goals outlined in section 7, continue to run new tests with increasing numbers of concurrent users until the performance goals are no longer met (this search is sketched in code after these steps). The goal is for a single server to support up to 500 concurrent users.
4. Once the maximum capacity has been determined, a consistent load test will be run at that load. Where the performance goals are not met, changes can be made to the system under test environment (see section 14) and the test run again.
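
The search in steps 2 through 4 can be summarized as a simple loop. The sketch below (Python) is illustrative only: run_load_test and goals_met are hypothetical placeholders for launching a Silk Performer run at a given load and checking the section 7 criteria (a goals_met sketch appears with the success criteria later in this plan).

    # Illustrative capacity-search loop. run_load_test() is a hypothetical
    # placeholder for a Silk Performer run at the given load; goals_met()
    # is sketched alongside the success criteria below.

    def find_max_capacity(start_users=240, step=60, ceiling=500):
        """Raise the concurrent-user count until the performance goals fail."""
        last_passing = None
        users = start_users
        while users <= ceiling:
            results = run_load_test(concurrent_users=users)  # placeholder
            if not goals_met(results):
                break
            last_passing = users
            users += step
        return last_passing  # highest load that still met all goals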
Database Server:
• CPU Utilization – Maximum, average, and 95th percentile. This data will be collected using the sar system utility (a collection sketch follows this list).
• SQL query execution time – The time required to execute the top ten SQL queries involved in the test scenarios.
• Caching mode – In non-cached mode, every request is a fresh request to the server. In cached mode, images and pages are cached on the client, with only requests for non-cached content going over the wire.
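
To make the sar collection concrete, the following Python sketch samples CPU utilization with sar -u (from the sysstat package) and reduces the samples to the maximum, average, and 95th-percentile figures named above. The column layout of sar output varies slightly between sysstat versions, so the parsing heuristic is an assumption to verify locally.

    import subprocess
    import statistics

    def cpu_utilization_summary(interval=5, count=12):
        """Sample CPU utilization via sar and summarize max/avg/95th pct."""
        out = subprocess.run(
            ["sar", "-u", str(interval), str(count)],
            capture_output=True, text=True, check=True,
        ).stdout
        busy = []
        for line in out.splitlines():
            fields = line.split()
            # Data rows end with the %idle column; skip banner/header lines
            # and the trailing "Average:" row. Utilization = 100 - %idle.
            if not fields or line.startswith("Average"):
                continue
            try:
                busy.append(100.0 - float(fields[-1]))
            except ValueError:
                continue  # header or banner line
        return {
            "max": max(busy),
            "avg": statistics.mean(busy),
            "p95": statistics.quantiles(busy, n=20)[-1],  # 95th percentile
        }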
Client:
• Time to last byte (TTLB) – This is what will be measured in the stress tests, as opposed to user-perceived response time. Time to last byte measures the time between the request leaving the client machine and the last byte of the response arriving from the server. This time does not take into account the scripting engine that must run in the browser, page rendering, and other functions that can cause a user to experience poor performance. If the client-side script is very complex, this number and the user-perceived response time can be wildly different: a user will not care how fast the response reaches their machine if they cannot interact with the page for an extended amount of time. This data will be collected using Silk Performer; a rough client-side approximation is sketched below.
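
Silk Performer reports TTLB directly, but the metric is easy to approximate for spot checks outside the tool. A minimal sketch, assuming only a reachable URL (the host below is hypothetical):

    import time
    import urllib.request

    def time_to_last_byte(url, chunk_size=8192):
        """Time from issuing the request until the last response byte arrives."""
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            while resp.read(chunk_size):  # drain the body to the last byte
                pass
        return time.perf_counter() - start

    # Example (hypothetical host):
    # print(time_to_last_byte("http://sakai-test.example.com/portal"))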
Network:
• Network Traffic – Network traffic analysis is one of the most important functions in performance testing. It can help identify unnecessary transmissions, transmissions that are larger than expected, and transmissions that can otherwise be improved. We need to watch network traffic to identify the bytes transmitted over the wire, the response times, and the number of concurrent connections allowed. This data will be collected using the sar system utility (a collection sketch follows).
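
As with the CPU figures, the network counters can be pulled from sar. The sketch below assumes sysstat's sar -n DEV report; the rxkB/s and txkB/s column positions are an assumption to verify against the installed sysstat version.

    import subprocess

    def network_throughput(iface="eth0", interval=5, count=12):
        """Average per-interface throughput (kB/s) sampled via sar -n DEV."""
        out = subprocess.run(
            ["sar", "-n", "DEV", str(interval), str(count)],
            capture_output=True, text=True, check=True,
        ).stdout
        rx, tx = [], []
        for line in out.splitlines():
            fields = line.split()
            if iface in fields and not line.startswith("Average"):
                i = fields.index(iface)
                # rxkB/s and txkB/s are assumed to follow the packet-rate
                # columns (IFACE, rxpck/s, txpck/s, rxkB/s, txkB/s, ...).
                rx.append(float(fields[i + 3]))
                tx.append(float(fields[i + 4]))
        return {"avg_rx_kBps": sum(rx) / len(rx),
                "avg_tx_kBps": sum(tx) / len(tx)}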
The following are performance requirements (success criteria) for the performance tests:
1. The average response time (measured by the time to last byte metric) is less than 2.5 seconds.
2. The worst response time (measured by the time to last byte metric) is less than 30 seconds.
3. The average CPU utilization of the database server is less than 75%.
4. The average CPU utilization of the application server is less than 75%.
6. The number of server errors (non HTTP-200 status codes observed on the client) does not exceed the agreed maximum (these criteria are combined into a single check in the sketch below).
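
Taken together, the criteria amount to a simple pass/fail gate over the collected metrics. A minimal sketch, with hypothetical result-field names and a placeholder error threshold (the agreed maximum is not fixed in this plan):

    def goals_met(r, max_error_count=0):
        """Pass/fail gate over collected metrics; field names are hypothetical."""
        return (
            r["avg_ttlb_seconds"] < 2.5        # criterion 1
            and r["max_ttlb_seconds"] < 30     # criterion 2
            and r["db_avg_cpu_percent"] < 75   # criterion 3
            and r["app_avg_cpu_percent"] < 75  # criterion 4
            and r["server_error_count"] <= max_error_count  # criterion 6
        )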
9. Load Descriptions
Each test outlined in section 5 will run with a ratio of 59 students to 1 instructor. There is no expected difference between users logging in for the first time and subsequent logins, given how the data (outlined in section 10) will be created. The data set these tests start with will therefore appear to the system as though every user has logged in before.
There will be no ramp-up time for any of the Single Function Stress Tests. The ramp-up time for all other tests should be set to 1 user every 3 seconds; 120 users should therefore be running within 6 minutes. The wait time between requests is contained in the test scenarios in Appendices 1 and 2.
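
A quick check of the ramp-up arithmetic, using a hypothetical helper: at one new virtual user every 3 seconds, the last of 120 users starts at 357 seconds, inside the 6-minute window.

    def ramp_schedule(total_users=120, seconds_per_user=3):
        """Start time in seconds for each virtual user under the ramp-up rule."""
        return [user * seconds_per_user for user in range(total_users)]

    print(ramp_schedule()[-1])  # 357 -> all users running before 360 s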
In order to place as much stress on the system as possible with a small number of users, all wait times will be removed from the Single Function Stress Tests.
The Sakai system will need to be preloaded with data before performance testing begins. This data will be created using the Sakai Shell and Sakai Web Services (a scripting sketch follows the table). Once the data is created, it will be extracted from the database with a database dump. The following table identifies several types of data that will need to be preloaded into the Sakai environment.
Data Type                             Quantity
Users                                 93,567
Students                              92,000
Instructors                           1,557
Administrators                        10
Large Worksites                       5
Medium Worksites                      50
Columns (Assignments) in Gradebook    13
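
As a hedged illustration of scripting the preload, the sketch below drives the SOAP web services that ship with Sakai 2.x. The endpoint paths, the addNewUser argument order, the credentials, and the use of the third-party zeep client are all assumptions to verify against the actual installation and its WSDL.

    from zeep import Client  # third-party SOAP client (pip install zeep)

    BASE = "http://sakai-test.example.com"  # hypothetical host

    # Log in as an administrator to obtain a session id (endpoint assumed).
    login = Client(f"{BASE}/sakai-axis/SakaiLogin.jws?wsdl")
    session = login.service.login("admin", "admin-password")

    # Create one student account; the argument order is assumed from the
    # SakaiScript.jws service and should be checked against its WSDL.
    script = Client(f"{BASE}/sakai-axis/SakaiScript.jws?wsdl")
    script.service.addNewUser(
        session, "student00001", "Test", "Student",
        "student00001@example.com", "registered", "password",
    )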
Testware will create new Silk Performer Test Scripts based on the two scenarios outlined in
Appendices 1 and 2. Appendix 1 represents the Student scenario, while Appendix 2 represents
the Instructor scenario. The Test Suite should be set up to accommodate a ratio of 59 students
to 1 instructor. Wait time should be included between each page request, so that the total time
for an activity is equal to the number of page requests * 2.5 seconds + the total wait time.
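
A worked check of that timing formula, with hypothetical scenario numbers:

    def activity_duration(page_requests, total_wait_seconds,
                          seconds_per_request=2.5):
        """Total activity time = page requests x 2.5 s + total wait time."""
        return page_requests * seconds_per_request + total_wait_seconds

    # e.g., a 20-page scenario with 90 s of scripted wait time:
    print(activity_duration(20, 90))  # 140.0 seconds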
This section details the load testing process that will be followed for all performance tests.
5. Start the application and run a quick sanity test to make sure each application server can successfully return login screen markup and can successfully process a login request (a scripted version of this check is sketched after these steps).
7. Once the Silk Performer scripts have completed, stop the data collection scripts.
9. Collate the data for all the metrics specified in section 6 into one report.
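
The step 5 sanity check can itself be scripted. A minimal sketch using the third-party requests library; the portal paths, form field names (eid, pw), and credentials are assumptions to verify against the deployed Sakai version.

    import requests  # third-party HTTP client (pip install requests)

    def sanity_check(base_url, user="student00001", password="password"):
        """Confirm a server returns login markup and accepts a login POST."""
        s = requests.Session()
        page = s.get(f"{base_url}/portal")
        assert page.ok and "login" in page.text.lower(), "no login markup"
        # Login path and field names are assumed; verify per deployment.
        resp = s.post(f"{base_url}/portal/relogin",
                      data={"eid": user, "pw": password})
        assert resp.ok, f"login POST failed: {resp.status_code}"

    # sanity_check("http://sakai-test.example.com")  # hypothetical host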
Testware will need to be trained on the use of the Sakai environment. The scenarios outlined in Appendix 1 and Appendix 2 give some idea about which parts of Sakai will be used; however, hands-on training in the environment itself will still be required.
Specifying mixes of system hardware, software, memory, network protocol, bandwidth, etc. involves the following variables:
• Network access variables: For example, 56K modem, 128K Cable modem, T1, etc.
• ISP infrastructure variables: For example, first tier, second tier, etc.
• Computer variables
• Browser variables
• How many other users are using the same resources on the system under test (SUT)?
• Are you testing the SUT in its complete, real-world environment (with load balancers, etc.)?
The following test deliverables are expected as part of this performance testing effort.
• Test Scripts – Silk Performer Test Scripts to implement the scenarios outlined in Appendices 1 and 2
• Test Results Data – The data resulting from the performance tests run
• Test Results Final Report – The final report that documents and analyzes the results of the performance tests that were conducted according to this test plan
Business
Risk: Core Sakai may not meet the performance goals established for this project.
Mitigation: Conduct performance testing of core Sakai at the beginning of the project. If Sakai does not meet the goals established above, project management will have the most time possible to address the shortfall.
IT
Risk: Limit on the number of virtual users available with Silk Performer
Mitigation: Test only one blade server per 500 virtual users available with Silk Performer
Risk: All Sakai tools needed for testing at this stage may not be available
Mitigation: Tests will be conducted against the core tools that are in the Sakai 2.5.0 release.
Where a tool that is needed is not yet available, a placeholder tool has been specified in the test scenarios in Appendices 1 and 2 (e.g., Calendar will be used in place of Student Gateway for this testing).