BlazeMeter Execution


Hello everyone, welcome to Wipro's Performance Engineering Upskill program. In today's video we will be learning how to do execution using BlazeMeter.

In the previous videos, we learned about BlazeMeter basics, how it can be used for performance testing and engineering, and how to record JMeter scripts using BlazeMeter.

Once the script is recorded and the script enhancements are done, the script becomes ready for execution.

To create scenarios in BlazeMeter, we need to log in to the BlazeMeter SaaS UI and click Create Test. We can select whichever kind of test is required; in this case I would go with a Performance test.

We can then upload the script. BlazeMeter runs JMeter tests via Taurus, which supports many types of testing tools, including JMeter, Selenium, and Gatling. Once the script is uploaded, BlazeMeter can auto-detect the type of script, or you can provide your own Taurus YAML configuration file.
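For illustration, a minimal Taurus YAML for an uploaded JMeter script could look roughly like the sketch below; the scenario and file names are assumptions, not values from this walkthrough.

```yaml
execution:
- executor: jmeter
  scenario: recorded_flow        # hypothetical scenario name

scenarios:
  recorded_flow:
    script: recorded_test.jmx    # the JMX file you uploaded (file name assumed)
```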

BlazeMeter is designed to speed the iterative process of testing. Frequently changed parameters can be controlled (overridden) from within the BlazeMeter UI or API, eliminating the need to edit hard-coded values in your scripts and re-upload them between tests.

In the Load Configuration section, you may override the number of users, test
duration and ramp up. You can also choose to turn off any of these overrides, allowing
the script values to retain control. In order to do so, turn off the toggle switch in front
of any parameter.

Enter numbers into the fields or use the drop-down to choose popular configurations.
Select the number of users you want deployed at the peak of your test. BlazeMeter
will divide this user population across the number of test engines deployed.

Note: If your script uses multiple thread groups, the following will apply:

 If each thread group in the script is configured for 1 user (the default setting) in the "Number of Threads (users)" field, then BlazeMeter will divide the total users evenly across the thread groups, rounding up and down as needed. For example, if you specify 10 users for 2 thread groups, each thread group will run 5 users. If you specify 6 users for 3 thread groups, each will run 2. If you run 11 users for 3 thread groups, BlazeMeter will round up (from 3.66...) and each will run 4. If you run 7 users for 3 thread groups, BlazeMeter will round down (from 2.33...) and each will run 2.
 If you specified different user numbers for each of your multiple thread groups,
then BlazeMeter will maintain the ratio of threads between the thread groups in
the JMX to achieve the total users you specify here. For example, if your JMX
has three thread groups and the "Number of Threads (users)" field in each of
them is set to 5, 3 and 2 respectively, then a test with 1000 users specified in
Load Configuration will run 500, 300 and 200 threads through those thread
groups.

You can configure your test to either run for a specified duration or for a specified
number of iterations by toggling between these two options:

1. Set the duration for the entire test, in minutes. The test will run infinite
iterations until the duration is met.
2. Toggle to Iterations and set the number of iterations instead. The test will run
however long is required to complete all iterations.
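For reference, these two options map roughly onto the following Taurus execution settings (a sketch; the scenario name and values are assumptions):

```yaml
execution:
- executor: jmeter
  scenario: recorded_flow   # hypothetical scenario name
  hold-for: 30m             # option 1: run for a fixed duration
  # iterations: 100         # option 2: run a fixed number of iterations instead
```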

Ramp Up Time
Select how fast you want the test to ramp-up. This is the elapsed time in minutes from
test start until all users are running.

Ramp Up Steps
Select the number of steps for the ramp-up of your test.

 The default value is 0, which delivers a linear ramp-up from test start until the end of the Ramp Up Time.

Limit RPS
This setting allows you to impose a maximum number of requests per second (RPS). When you
use this setting, you will see a "Change RPS" button on your live test reports and can
make changes mid-test.
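In Taurus YAML terms, the ramp-up time, ramp-up steps, and RPS limit correspond roughly to the settings sketched below (the scenario name and values are assumptions):

```yaml
execution:
- executor: jmeter
  scenario: recorded_flow   # hypothetical scenario name
  concurrency: 100          # peak number of virtual users
  ramp-up: 5m               # elapsed time until all users are running
  steps: 5                  # number of ramp-up steps (omit for a linear ramp-up)
  throughput: 50            # cap the load at roughly 50 requests per second
```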
Load Distribution can be used to run the test from different locations. You can also change the default ratio of users to engines, or set the engine count manually in cases where you are not using the Total Users setting. Adding multiple locations requires a licensed version of BlazeMeter.
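As a rough sketch, load distribution can also be expressed in a Taurus YAML for a cloud test; the location identifiers and values below are placeholders, and the valid identifiers come from your BlazeMeter account:

```yaml
provisioning: cloud          # run the engines in the BlazeMeter cloud

execution:
- executor: jmeter
  scenario: recorded_flow    # hypothetical scenario name
  concurrency: 500
  locations:                 # placeholder location ids; values set the share of engines per location
    us-east-1: 2
    eu-west-1: 1
```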

Failure Criteria can be used to define the SLA and to set the different criteria by which the test is counted as failed. One can select the KPIs and set a threshold for each.
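In Taurus terms, failure criteria can be expressed with the passfail reporting module; the thresholds below are assumptions, shown only as a sketch:

```yaml
reporting:
- module: passfail
  criteria:
  - avg-rt>800ms for 60s, stop as failed      # stop and fail if average response time degrades
  - failures>5% for 30s, continue as failed   # keep running, but mark the test as failed
```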

One can enable the real user experience (UX) monitoring feature in order to view the application's behaviour under load as it is seen by a real user.

This feature executes a Selenium test in the background, via Taurus, while your load test is running. The Selenium test generates a Waterfall Report that shows what a user would see in their web browser at different points during the load test. This can be especially helpful when trying to debug why a certain page failed to load properly from a user's point of view at a certain point in the load test.

When a test is executed with the "End User Experience Monitoring" feature enabled,
BlazeMeter will wrap the label + the URL specified with a YAML configuration file.
Alternatively, you can supply a YAML configuration of your own. Then, BlazeMeter
will execute the script via Taurus and Selenium. The script, containing only the URLs
specified, will run for the full duration of the load test.
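If you supply your own YAML for this feature, a Selenium scenario in Taurus can be sketched roughly as below; the label and URL are placeholders, not values from this walkthrough:

```yaml
execution:
- executor: selenium
  scenario: ux_monitor        # hypothetical scenario name

scenarios:
  ux_monitor:
    requests:
    - label: Home Page        # placeholder label
      url: https://www.example.com/
```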

Now for APM tools integration

BlazeMeter makes it easy to leverage the power of performance testing and Application Performance Monitoring (APM) combined.

APM tools currently supported in New Test Create include:

 AppDynamics
 AWS CloudWatch
 CA APM
 New Relic APM
 New Relic Infrastructure
 Dynatrace
Now coming to JMeter Properties.

JMeter Properties can be used to parameterize your test. One version of your script can
then be used in several different test configurations. JMeter properties also allow you
to make mid-test changes to script behavior.

Directly above the JMETER PROPERTIES heading are a couple of drop-down menus which can be used to specify the desired JMeter and/or Java version to run the test with.

These settings are optional. If left at their default selections, BlazeMeter will attempt to auto-detect the version of JMeter the uploaded test script was created in.

On toggling this on, one can add different JMeter properties, such as think times, base URLs, etc.
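As a sketch, JMeter properties can also be supplied through a Taurus YAML under the jmeter module and read inside the script with the __P() function; the property names and values below are assumptions:

```yaml
modules:
  jmeter:
    properties:
      base_url: https://staging.example.com   # read in the script as ${__P(base_url)}
      think_time: 3000                        # read as ${__P(think_time)}, in milliseconds
```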

Override DNS places entries in /etc/hosts of each test engine so a hostname in your
script is resolved to a different IP address during your test. This allows you to "point"
your test at an alternate server without editing the script.

Network emulation allows you to impair the connection between BlazeMeter's test
engine and the system(s) you are testing in order to observe the impact on your key
performance indicators (KPIs).

To add Network Emulation to your test:

1. Toggle the feature "on"
2. Choose the desired network type

One can modify the bandwidth, latency, and packet loss percentage of the network.

Once the scenario setup is done, from the test configuration view you can run it by either clicking the "Run Test" button to run a full Performance Test or the "Debug Test" button to run a low-scale Debug Test with enhanced logging. These buttons appear to the left of the test name.

You'll then be asked if you're ready to launch the servers. Review the configuration,
then click the "Launch Servers" button to start the test.

You can optionally check the "Run test in the background" box if you do not wish to see the startup status view, in which case you'll be immediately returned to your previous view while the test runs in the background.

If you enabled running in the background, you can check on the running test using the
"Reports" drop-down menu, beside which will be a count of currently running tests.

If you didn't check the run in background option, you'll next see a progress bar telling you how much longer you need to wait. Launching servers in the cloud usually takes 2-4 minutes. Tests involving a larger number of servers may take around 10-12 minutes.

The system log can be seen by clicking on the boxes at the bottom right-hand side of the screen.

There are three checkboxes at the top of the system log: System, Autoscroll, and New
Messages Alert.

 The System option, when checked, simply ensures system messages are
included in the log.
 The Autoscroll option, when checked, will ensure the window automatically
scrolls as new lines fill it up.
 The New Messages Alert, when checked, will provide you with notifications for
every new action.

You can complete other tasks while the test is running in the background, such as
viewing reports or configuring and running other tests.

You can wait a few minutes until the test ends or you can stop the test yourself. If you
need to manually stop the test for any reason, refer to our guide on Stopping a Test.

Scheduling a test via GUI


To schedule a test from the GUI, follow these steps:

1. Navigate to the test configuration/history page.
2. In the "Schedule" section in the left panel, press the "Add" button.
3. Choose the required frequency:
o Weekly - The test will run each week on the days selected.
If the checkbox "Mon-Fri" is marked, the test will run daily Monday
through Friday.
Multiple checkboxes can be selected.

The run time of the test can be configured by pressing the time box as
detailed further below.
o Monthly - The test will run on a specific day of the month according to
selection.
Multiple days can be selected

The run time of the test can be configured by pressing the time box as
detailed below.
o For both Weekly and Monthly options, the run time of the test can be configured by pressing the time box to the right of the frequency bar.
Note: BlazeMeter's default time zone is UTC+0. When using the On Time selector, the defined time should be converted from the local time zone to UTC+0.
o Advanced - enter your own cron expression (for example, the standard cron expression 0 14 * * 1-5 would run the test at 14:00 UTC every Monday through Friday).

Note: BlazeMeter's default time zone is UTC+0. When using CRON as the frequency type, the defined time should be converted from the local time zone to UTC+0.

Modify JMeter Properties

Add one or more properties to the JMeter Properties section of your test configuration.

Start your test.

When the test report appears, click the Run Time Control button at the top-right of the screen (this
button is only available while the test is running, and will disappear after the test completes).

Click the Remote Control button in the drop-down.


The Remote Control Live window will appear, listing all JMeter properties available for updating. By default, this includes all scenarios in all locations (see Advanced Features later in this article for more details).

 Remote Control works for both single tests and multi-tests. If a test has
multiple scenarios, the default option for the Remote Control Live window is
to show all properties for all scenarios. This can be especially handy for a multi-
test in which various single tests within each have different properties to adjust.
 For tests / multi-tests with multiple scenarios, you can filter the Remote Control Live window to show only the JMeter properties that pertain to a specific scenario and/or location. To do so, use the Scenario and/or Location filters at the top-right of the test report before clicking the Remote Control button.

 You are not just limited to modifying existing properties; you can add new ones! Doing so requires specifying which scenario (in a multi-scenario test) to add the new property to. At the top of the Remote Control Live window, in the New Key row, click Select Scenario, then select the scenario you wish to add the new property to from the drop-down.

If you'd like to modify your Requests Per Second (RPS) on the fly instead, then
check out the Changing RPS Limits 'On The Fly' section of the Load
Configuration guide.

 When executing a multi-test, you can add users dynamically so as to adjust the
load while the test is in progress. Check out Adding Users Dynamically for a
full guide.

You can stop your test while it's running by clicking the stop button.

 You can instead stop a test while it's still in the startup phase by clicking the "Abort Test" button. However, if you stop the test while it's still in this starting phase, then you will only have the option to terminate servers.

When you stop a test that is already running, you will be offered two options:

 Graceful Shutdown - This sends a signal to close the test, archive test and log
files, then generate an artifacts.zip archive.

 Terminate Servers - Terminates all servers immediately. This will result in the
loss of all test and log files (except for the ones you originally
uploaded). No artifacts.zip archive will be generated. This should be a last
resort, since without any log files, it will be impossible to identify what may
have caused a problem.

You can click the "x" icon in the upper-right corner of the window to cancel and
continue the test without interruption.

Note: If you manually terminated a test because it hung indefinitely, then check out
our knowledge base article on tests that fail to start.

The Summary Report is the main dashboard view of your test while it is running and
after it has finished.

Report Link
The Summary Report will appear as soon as your test starts to collect data.
Click the reports icon on the upper navigation bar to access the reports list. Most
recent reports are shown on top.

You'll see the summary panel at the top of the report. This panel showcases key
statistics of the test, including:

 Max Users - Maximum number of concurrent users generated at any given point
in the test run. (Note: this does NOT refer to the total users, only the total users
who ran simultaneously at any given moment. As a result, Max Users may not
match your total users, which may be significantly higher.)
 Average Throughput (Hits/s) - The average number of HTTP/s requests per
second that are generated by the test.
A note for JMeter tests: BlazeMeter counts unique requests that appear in the
JTL file generated during the test. This means that if only high level requests
are present in the JTL file, the Hits/s figure relates only to the high level
requests. If while configuring the test, you select to include sub-samples in your
runs, then HITS/s represents all high level requests and sub-samples (e.g.
images, CSSs, JSs etc).
 Errors Rate - The ratio of bad responses out of all responses received.
 Average Response Time - The average amount of time from first bit sent to the
network card to the last byte received by the client.
 90th Percentile of Response Time - The value below which 90% of all samples fall; only 10% of the samples are higher than this value.
 Average Bandwidth (MB/s) - The average bandwidth consumption in
MegaBytes per second generated by the test.
This section shows key configurations alongside the main categories of response codes received during the test run. This helps you grasp the test's general purpose and overall performance at a glance. These include:

 The Test Duration (HH:MM:SS)
 The Test's Start & End Times.
 The Test Type - JMeter Test, Multi-Test, URL/API Test, Webdriver Test.
 Locations: The geo-locations the load has originated from.
 Response Codes: A breakdown of the HTTP response status codes received
during the test run.
 Internal notes about the report.

There are two graphs which indicate the key performance metrics and their
trends throughout the duration of the test:

 Load Graph - This shows the maximum number of users vs. hits/s vs. error rate. For example, you might see that while the load increases gradually until it reaches its maximum, the hits/s increase rapidly and remain relatively high for the duration of the test while the error rate stays at 0%.
 Response Time Graph - This shows the maximum number of users vs
response times, revealing how the size of the load affects the response times.

The Timeline report can also be viewed by clicking the 'Reports' button on the upper navigation bar, and then 'Show all Reports'.
On the left side of the screen, you'll notice the KPI selection panel.
The great advantage of this report is that it enables you to view many different types
of KPIs within one graph, and by doing so easily visualize certain events that might
have occurred throughout the test.
Some of the KPIs include:

 USERS shows how many virtual users are currently active.
 HITS/S (Hits per second) is the number of HTTP/s requests per second that are generated by the test.
 RESPONSE TIME is the amount of time from the first byte sent to the server to
last byte received at the client side.
 LATENCY is the time from sending the request, processing it on the server side,
to the time the client received the first byte.
 BYTES/s is the average bandwidth consumption that’s generated by the test
per second.
 CONNECT TIME is the measurement of how long it takes the user to connect to
the server, and the server to respond, including SSL handshake.

KPIs By Labels
To view KPIs by labels, click on the arrow next to each KPI and choose the required label from the different options that open up, e.g. the Hits of a 'Login' label.

You will now notice that the KPIs are available for every label in your test.

KPIs from APM Integrations


KPIs from APM Integration profiles you have included in your test configuration will appear at the bottom of the list, after the built-in KPIs.

Request Statistics Report Tabular View


The Request Statistics Report tabular view displays a first row with the label "ALL", which shows values for all requests made during the test, and an individual row for each named request in your test. If you have used your own JMeter script, it displays the labels you used in that script.

Note: All times are in milliseconds.

 Element Label - The name of the HTTP Request from the JMeter script.
 #Samples - The total number of samples executed.
 Average Latency - The average Latency for the request(s) executed.
 Average Response Time - The average response time for the request(s)
executed. While the test is running, it will display the average of the requests
already executed, and the final value once test execution is finished.
 Geo Mean RT - The geometric mean of the response times. This type of calculation is less sensitive to extreme values (e.g. spikes of high or low values that can affect the regular "arithmetic" average).

Calculation: the nth root of the product of the n sampled response times (equivalently, the exponential of the mean of their logarithms).
 Standard Deviation - The standard deviation(a measure of variation) of the
sampled elapsed time.
 90% Line - 90th Percentile. 90% of the samples were smaller than or equal to
this time.
 95% Line - 95th Percentile. 95% of the samples were smaller than or equal to
this time.
 99% Line - 99th Percentile. 99% of the samples were smaller than or equal to
this time.
 Minimum Response Time - The shortest time for the samples with the same
label.
 Maximum Response Time - The longest time for the samples with the same
label.
 Median Response Time - 50th Percentile. 50% or half of the samples are smaller
than the median, and half are larger.
 Average Bandwidth (bytes/s) - The size of traffic made per second in Bytes.
 Hits/s - The number of requests made per second. When the throughput is saved
to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is
saved as 0.5. Once the test is done, the throughput is the actual throughput for
the duration of the entire test.
 Error % - The error rate per label. While the test is running, it displays a value based on the samples already completed, and a final value after the test execution completes.
 Duration - The sum of the duration for all the samples in that label.

NOTE: This section currently shows only up to the first 100 labels executed from your script. If there are more than 100 labels, all the labels that are not shown in the tab are aggregated together and shown as "AGGREGATED LABELS".

The Engine Health Report displays performance indicators received from the test
engines (i.e. the infrastructure delivering the test traffic, not your system under test).
The Engine Health indicates whether the test infrastructure itself could be the cause of
bottlenecks or errors which are appearing in other reports.

The Engine Health is also a great resource when deciding how many virtual users
(VUs) each engine can support. The ideal ratio depends on the complexity and
memory footprint of your script(s)

 CPU: Represents the percentage of CPU usage on the instance
 Memory: Represents the percentage of virtual memory usage on the instance
 Network I/O: Represents the amount of data transferred in I/O operations (KB/s)
 Connections: Represents the number of persistent connections established for each transaction throughout the test

The Errors tab, as its name suggests, contains the errors that were returned by the web server under test in response to HTTP requests. We can see all errors received during the test run, categorized by:

1. Labels (pages)
2. Response codes
3. Assertions

These reports show errors like:

1. Top requests
2. Assertions
3. Failed embedded resources
For each error we will display the:

1. Response code
2. Response message
3. Number of failed requests

In your BlazeMeter report, you can view and monitor the log of each server used during the test run. Log availability and server monitoring provide full test transparency.
artifacts.zip - This ZIP file includes the JMX file that you've uploaded (as modified by BlazeMeter while running), the kpi.jtl file which contains the results of the test run, CSVs, and any additional files you might have used for that test run. A click on the file name will download the artifacts.zip automatically.

The Original Test Configuration tab shows details about how your test was configured
at the time the test covered by the report ran. This tab is especially useful if you have
updated the test configuration since the report was run, as the details here will reflect
the configuration prior to those updates.
This section provides details on how the test was configured to run (as opposed to the
other report tabs, which detail how the test actually ran). It provides the following
details:

 Scenarios: How many test scenarios were configured for the test. For example,
if the test was a Taurus test (executed via a YAML file), then this counts how
many scenarios are specified under the "scenarios:" section of the script. For a
JMeter test (executed with a JMX, without a YAML), there will be one scenario. For a multi-test, there will be one scenario per test. An End User Experience Monitoring test will appear as its own scenario as well.
 Duration H:M:S: This refers to the duration originally set for the test, and is
expressed in an hour:minute:second format (for example, 01:02:30 would read
one hour, two minutes, and thirty seconds).
 Total Users: This is how many total users were originally configured for the
entire test (all scenarios combined).
 Locations: This details how many locations were selected for the entire test
(all scenarios combined) and the name of each location chosen.
