STRSW ILT PERFCDOT Exercise Guide
ATTENTION
The information contained in this course is intended only for training. This course contains information and activities that,
while beneficial for the purposes of training in a closed, nonproduction environment, can result in downtime or other
severe consequences in a production environment. This course material is not a technical reference and should not,
under any circumstances, be used in production environments. To obtain reference materials, refer to the NetApp product
documentation that is located at http://now.netapp.com/.
COPYRIGHT
2013 NetApp, Inc. All rights reserved. Printed in the U.S.A. Specifications subject to change without notice.
No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or
mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written
permission of NetApp, Inc.
TRADEMARK INFORMATION
NetApp, the NetApp logo, Go further, faster, AdminNODE, Akorri, ApplianceWatch, ASUP, AutoSupport, BalancePoint,
BalancePoint Predictor, Bycast, Campaign Express, ChronoSpan, ComplianceClock, ControlNODE, Cryptainer, Data
ONTAP, DataFabric, DataFort, Decru, Decru DataFort, DenseStak, Engenio, E-Stack, FAServer, FastStak, FilerView,
FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GatewayNODE, gFiler, Imagine Virtually
Anything, Infinivol, Lifetime Key Management, LockVault, Manage ONTAP, MetroCluster, MultiStore, NearStore, NetApp
Select, NetCache, NOW (NetApp on the Web), OnCommand, ONTAPI, PerformanceStak, RAID-DP,
SANscreen, SANshare, SANtricity, SecureAdmin, SecureShare, Securitis, Service Builder, Simplicity, Simulate ONTAP,
SnapCopy, SnapDirector, SnapDrive, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore,
Snapshot, SnapValidator, SnapVault, StorageGRID, StorageNODE, StoreVault, SyncMirror, Tech OnTap, VelocityStak,
vFiler, VFM, Virtual File Manager, WAFL, and XBB are trademarks or registered trademarks of NetApp, Inc. in the United
States and/or other countries.
All other brands or products are either trademarks or registered trademarks of their respective holders and should be
treated as such.
TABLE OF CONTENTS
WELCOME..................................................................................................................................................... E-1
MODULE 1: HOW A NETAPP STORAGE SYSTEM WORKS................................................................... E1-1
MODULE 2: PERFORMANCE OVERVIEW ................................................................................................ E2-1
MODULE 3: CLUSTERED STORAGE SYSTEM WORKLOADS AND BOTTLENECKS ......................... E3-1
MODULE 4: CLUSTER PERFORMANCE MONITORING AND ANALYSIS ............................................. E4-1
MODULE 5: ONCOMMAND MANAGEMENT TOOLS ............................................................................... E5-1
MODULE 6: STORAGE QOS...................................................................................................................... E6-1
MODULE 7: SUMMARY .............................................................................................................................. E7-1
APPENDIX A: ANSWERS............................................................................................................................. A-1
Performance Analysis on Clustered Data ONTAP: How a NetApp Storage System Works
In this exercise, you identify your exercise equipment, log in to the exercise environment, and verify the
equipment.
OBJECTIVES
In this task, you log in to your assigned exercise environment. You perform all subsequent exercises from this
assigned machine.
STEP ACTION
1.
With the assistance of your instructor, identify your main Windows server. This machine might
be a virtual machine.
Windows Server
IP address: _______________________________________________
Domain: _________________________________________________
Domain administrator password: Netapp123
2.
With the assistance of your instructor, identify your clustered NetApp Data ONTAP operating
system nodes.
Clustered Data ONTAP
Node 1 Management LIF IP address: 192.168.0.91
Node 2 Management LIF IP address: 192.168.0.92
Cluster Management LIF IP address: 192.168.0.101
Cluster Administrator (admin) password: Netapp123
3.
With the assistance of your instructor, identify your Linux machine. This machine might be a
virtual machine.
Linux Server
IP address: 192.168.0.10
Root password: Netapp123
4.
With the assistance of your instructor, identify your secondary Windows server. This machine
might be a virtual machine.
Windows Server
IP address: 192.168.0.6
Root password: Netapp123
In this task, you use Remote Desktop Connection to log in to your assigned exercise environment. You
perform all subsequent tasks from this assigned machine.
STEP ACTION
1.
On your local Windows machine desktop, click the Remote Desktop Connection link to log in
to the remote Windows machine through the Remote Desktop Connection tool.
If this link is not available, ask your instructor where to find the tool.
2.
Enter the IP address of your remote Windows server, and then click Connect.
3.
4.
If you are asked for authentication, enter the user name and password that your instructor gave
you.
In this task, you add your cluster management port to the local hosts file, launch System Manager, and add
your assigned cluster.
NOTE: For more information about configuring a storage system with System Manager, see the Clustered
Data ONTAP Administration course.
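NOTE: The result of the steps that follow is a single line in the hosts file that maps the cluster management LIF to a host name. Assuming the addresses listed earlier and a cluster named cluster1, the entry looks similar to the following:
192.168.0.101    cluster1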
STEP ACTION
1.
2.
Navigate to C:\Windows\System32\Drivers\etc.
3.
Double-click hosts.
4.
5.
6.
7.
8.
Exit Notepad.
9.
10.
On your Windows Server desktop, double-click the NetApp OnCommand System Manager
icon.
11.
12.
In the upper left, click the Login button, and then enter the following:
User name: admin
Password: Netapp123
Save my credentials: selected
13.
14.
In this task, you configure the SNMP public community name so that System Manager and OnCommand
Balance can properly discover your cluster.
NOTE: In some highly secure environments, this might not be appropriate.
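NOTE: The steps below make this change through System Manager. If you prefer the clustershell, the same read-only public community can usually be added with a command of roughly this form (syntax assumed; confirm it against your Data ONTAP version before using it):
system snmp community add -type ro -community-name public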
STEP ACTION
1.
Navigate to Cluster > cluster1 > Configuration > System Tools > SNMP.
2.
Click Edit.
3.
Click Add.
4.
5.
Click OK.
6.
7.
In this task, you use either System Manager or the clustered Data ONTAP CLI to identify key clustered Data
ONTAP components and revert any LIFs that are not on their home ports.
STEP ACTION
1.
Analyze and identify the following list of clustered Data ONTAP components:
Aggregates ______________________________________________________________
Storage virtual machines ___________________________________________________
Volumes ________________________________________________________________
LUNs __________________________________________________________________
Licenses (which features are installed?) ________________________________________
CIFS Shares ______________________________________________________________
LIFs (are all LIFs home?)____________________________________________________
2.
TASK 6: SET THE CLUSTERED DATA ONTAP COMMAND-LINE SYSTEM TIMEOUT VALUE (OPTIONAL)
In this optional task, you set the clustered Data ONTAP command-line system timeout value.
STEP ACTION
1.
2.
3.
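NOTE: The individual steps for this task were delivered as screenshots and are not reproduced here. From the clustershell, the timeout is typically viewed and changed with commands of this form (the 60-minute value is only an example, not a value from this exercise):
system timeout show
system timeout modify -timeout 60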
END OF EXERCISE
In this exercise, you examine the different variations of the statistics command, and identify storage
system workloads and potential bottlenecks.
OBJECTIVES
In this task, you issue the three statistics catalog commands and exercise the supporting parameters,
using different privilege levels.
STEP ACTION
1.
2.
3.
4.
What command syntax would you use to display only the advanced level statistics objects
without their descriptions?
5.
What command syntax would you use to display statistics objects that are associated with
storage virtual machines (SVMs)?
NOTE: You should still be in the advanced privilege level.
How many statistics objects are there in the list?
6.
Change to the admin privilege level and examine the instance names that are available for the
statistics object volume.
statistics catalog instance show -object volume
7.
What command syntax would you use to display the instance names that are available for the
statistics object that represents SVMs that have LIFs associated with them?
How many instance names are there in the list?
8.
What command syntax would you use to display the instance names that are available for the
statistics object that represents SVMs that have volumes associated with them?
How many instance names are there in the list?
9.
What command syntax would you use to display the instance names that are available for the
statistics object that represents the disk that is associated with the second node in the cluster
(cluster1-02)?
10.
Examine the counters that are available for the statistics object disk and show the detailed
information.
statistics catalog counter show -object disk -describe
11.
12.
Examine the counters that are available for the statistics object aggregate and show the
detailed information.
statistics catalog counter show -object aggregate -describe
13.
Examine the counters that are available for the statistics object volume and show the detailed
information.
statistics catalog counter show -object volume -describe
In this task, you issue the statistics start and statistics show commands and exercise the
supporting parameters, using different privilege levels.
STEP ACTION
1.
Using the admin privilege level, start statistics data collection on the statistics object nfsv3.
statistics start -object nfsv3 -sample-id sample_nfsv3_adm
2.
Using the advanced privilege level, start statistics data collection on the statistics object nfsv3.
set advanced
statistics start -object nfsv3 -sample-id sample_nfsv3_adv
3.
Display the counters that are associated with the statistics object nfsv3 instance vs2 for both
samples.
statistics show -object nfsv3 -instance vs2 -counter * -sample-id
sample_nfsv3_adm
statistics show -object nfsv3 -instance vs2 -counter * -sample-id
sample_nfsv3_adv
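NOTE: Every sample in this guide follows the same lifecycle: start a named sample, query it with statistics show, and stop it when you are finished with it. As a minimal sketch, using an arbitrary sample name:
statistics start -object nfsv3 -sample-id my_sample
statistics show -object nfsv3 -instance vs2 -counter * -sample-id my_sample
statistics stop -sample-id my_sample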
4.
Using the advanced privilege level, start statistics data collection on the statistics object disk.
statistics start -object disk -sample-id sample_disk
5.
What command syntax would you use to display all of the latency related counters for disk
v4.20?
How many counters are there in the list?
6.
What command syntax would you use to display all of the user_read_latency counters for all
disks in the cluster?
How many counters are there in the list?
7.
Display only the disks with the user_read_latency counter between 1000us and 10000us.
statistics show -object disk -instance * -counter user_read_latency
-sample-id sample_disk -value 1000..10000
8.
What command syntax would you use to display all of the user_read_latency counters that are
greater than 10000us for all disks in the cluster, limiting the display to only the counter and
value fields?
9.
Using the advanced privilege level, start statistics data collection on the statistics object
aggregate.
statistics start -object aggregate -sample-id sample_aggr
10.
11.
Using the advanced privilege level, start statistics data collection on the statistics object
volume.
statistics start -object volume -sample-id sample_volume
12.
Display all of the volume activity by displaying all of the data counters for each volume.
statistics show -object volume -instance * -counter *data -sample-id
sample_volume
13.
What command syntax would you use to display all of the latency counters for all volumes in
the cluster? How about all nonzero latency counters?
14.
Using the advanced privilege level, start statistics data collection on the statistics object
processor.
statistics start -object processor -sample-id sample_processor
15.
What command syntax would you use to display all of the counters for all processors in the cluster?
16.
Using the advanced privilege level, start statistics data collection on the statistics object
workload.
statistics start -object workload -sample-id sample_workload
17.
Display all of the latency counters for each workload and limit the display to the counter and
value fields.
statistics show -object workload -instance * -counter *latency -sample-id sample_workload -fields counter,value
18.
In this task, you gather statistical data and evaluate the data to determine the workload characteristics.
STEP ACTION
1.
2.
3.
4.
5.
Normally you would start data collection for all of the protocols that are being served by the
cluster; however, for this exercise you use only CIFS.
Using the diagnostic privilege level, start statistics data collection on the objects cifs,
volume, and readahead.
NOTE: Diagnostic privilege level commands are required to capture
cifs_read_size_histo, cifs_write_size_histo, rand_read_req, and
seq_read_req.
set diagnostic
statistics start -object cifs|volume|readahead -sample-id sample_cifs1
6.
7.
8.
Press the Spacebar, type the following list of parameters, and then press Enter:
0 0 32k 300m 30 1 z:\300mfile
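NOTE: The SIO parameters used here and throughout this guide are positional. Based on the workload descriptions that accompany each run, they map as follows (treat this as an informal summary inferred from this guide, not as SIO reference documentation):
<read%> <random%> <block size> <file size> <run time in seconds (0 = run until stopped)> <threads> <target file>
For example, 0 0 32k 300m 30 1 z:\300mfile requests a 100% write, fully sequential workload of 32-KB operations against a 300-MB file for 30 seconds using one thread.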
9.
After the command ends and the command prompt returns, go to the PuTTY session and turn off
data collection.
statistics stop -sample-id sample_cifs1
10.
Start statistics data collection again on the same objects, but this time change the sample name to
sample_cifs2.
statistics start -object cifs|volume|readahead -sample-id sample_cifs2
11.
In the Command Prompt window, press the Up arrow, change the SIO parameters to the
following, and then press Enter:
100 0 32k 300m 30 4 z:\300mfile
12.
After the command ends and the command prompt returns, go to the PuTTY session and turn off
data collection.
statistics stop -sample-id sample_cifs2
13.
Run through the same sequence (Steps 10-12) one more time for sample_cifs3, using the
following SIO parameters:
sample_cifs3
50 100 4k 300m 30 32 z:\300mfile
NOTE: After the command ends and the command prompt returns, remember to go to the
PuTTY session and turn off data collection.
14.
After all three samples are collected, analyze the counters to determine the workload
characteristics of each sample.
The following commands will help to define the CIFS workload in each sample (sample_cifs1
through sample_cifs3).
NOTE: Change the X in sample_cifsX to the sample being analyzed (1 through 3).
statistics show -object cifs -instance * -counter cifs_read_ops -sample-id sample_cifsX
statistics show -object cifs -instance * -counter cifs_write_ops -sample-id sample_cifsX
statistics show -object volume -instance * -counter cifs_read_latency -sample-id sample_cifsX
statistics show -object volume -instance * -counter cifs_write_latency -sample-id sample_cifsX
statistics show -object cifs -instance * -counter cifs_read_size_histo -sample-id sample_cifsX
statistics show -object cifs -instance * -counter cifs_write_size_histo -sample-id sample_cifsX
The following statistics commands are specific to read workloads. If you have already
determined that the workload is a write workload, you can skip these commands.
statistics show -object readahead -instance * -counter rand_read_reqs
-sample-id sample_cifsX
statistics show -object readahead -instance * -counter seq_read_reqs
-sample-id sample_cifsX
15.
Record the observed values for sample_cifs1 in the following table:
OBSERVED VALUE          READ            WRITE
Throughput              _________       _________
Latency                 _________       _________
Operation Size          _________       _________
Concurrency             _________       _________
Randomness              _________       _________
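NOTE: Concurrency in this table is the product of throughput and latency, with latency converted to seconds. For example, 99 operations per second at 6.32 ms of latency gives 99 x 0.00632, or about 0.63 concurrent operations.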
16.
What did the command-line output look like for throughput of sample_cifs1?
17.
What did the command-line output look like for latency of sample_cifs1?
18.
What did the command-line output look like for read operation size of sample_cifs1?
19.
What did the command-line output look like for write operation size of sample_cifs1?
20.
What did the calculation (throughput * latency) look like for concurrency of sample_cifs1?
21.
What did the command-line output look like for randomness (rand_read_reqs) of
sample_cifs1?
22.
What did the command-line output look like for randomness (seq_read_reqs) of
sample_cifs1?
23.
What workload did you conclude from your data collection of sample_cifs1?
24.
Record the observed values for sample_cifs2 in the following table:
OBSERVED VALUE          READ            WRITE
Throughput              _________       _________
Latency                 _________       _________
Operation Size          _________       _________
Concurrency             _________       _________
Randomness              _________       _________
25.
What did the command-line output look like for throughput of sample_cifs2?
26.
What did the command-line output look like for latency of sample_cifs2?
27.
What did the command-line output look like for read operation size of sample_cifs2?
28.
What did the command-line output look like for write operation size of sample_cifs2?
29.
What did the calculation (throughput * latency) look like for concurrency of sample_cifs2?
30.
What did the command-line output look like for randomness (rand_read_reqs) of
sample_cifs2?
31.
What did the command-line output look like for randomness (seq_read_reqs) of
sample_cifs2?
32.
What workload did you conclude from your data collection of sample_cifs2?
33.
Record the observed values for sample_cifs3 in the following table:
OBSERVED VALUE          READ            WRITE
Throughput              _________       _________
Latency                 _________       _________
Operation Size          _________       _________
Concurrency             _________       _________
Randomness              _________       _________
34.
What did the command-line output look like for throughput of sample_cifs3?
35.
What did the command-line output look like for latency of sample_cifs3?
36.
What did the command-line output look like for read operation size of sample_cifs3?
37.
What did the command-line output look like for write operation size of sample_cifs3?
38.
What did the calculation (throughput * latency) look like for concurrency of sample_cifs3?
39.
What did the command-line output look like for randomness (rand_read_reqs) of
sample_cifs3?
40.
What did the command-line output look like for randomness (seq_read_reqs) of
sample_cifs3?
41.
What workload did you conclude from your data collection of sample_cifs3?
42.
The previous examples used CIFS as the basis for the workloads. What statistics objects and
counters would be needed for NFSv3?
END OF EXERCISE
In this exercise, you identify storage system performance baselines, analyze and isolate bottlenecks, and work
with the Perfstat utility.
OBJECTIVES
In this task, you query the cluster to assess the initial health status.
STEP ACTION
1.
2.
3.
Check to see if the replication rings have the same (or consistent) masters.
NOTE: Remember to set advanced privilege before you execute this command.
cluster ring show
4.
Using the advanced privilege level, check to see if the cluster connectivity is healthy.
cluster ping-cluster -node local
5.
6.
If any of the SVMs return a nonzero status, add the all parameter for additional information.
dashboard health vserver show-all
7.
8.
9.
10.
Normally at this point in the health check, you would check the performance status by using the
commands that follow; however, because these commands will also be run as part of the next
task, they do not have to be run at this time.
To check the performance status (which is optional at this time), enter these commands:
a. Check the current level of activity on each node of the cluster.
statistics show-periodic -node cluster1-01 -instance node -iterations 4
statistics show-periodic -node cluster1-02 -instance node -iterations 4
b. Check the current level of activity on the entire cluster and notice the overall latency column.
dashboard performance show
In this task, you use cluster shell commands to monitor cluster performance.
STEP ACTION
1.
2.
3.
Use the -instance parameter to show everything and notice the overall latency by protocol fields.
dashboard performance show -instance
4.
5.
6.
7.
Normally at this point in the baseline check, you would check the ifnet status by using the
command that follows; however, because the lab kits do not have any ifnets defined, the
command does not have to be run at this time.
If you want to do so at this time, check the throughput by interface.
statistics show-periodic -object ifnet -instance a0a -node cluster1-02
-iterations 4
9.
10.
Using the advanced privilege level, start statistics data collection on the objects volume,
aggregate, disk, ext_cache_obj, port, and lif.
set advanced
statistics start -object volume|aggregate|disk|port|lif -sample-id
sample_baseline1
11.
12.
13.
14.
15.
16.
17.
18.
After completing the task, turn off the statistics data collection.
statistics stop -sample-id sample_baseline1
19.
In this task, you use cluster shell commands to monitor cluster performance.
STEP ACTION
1.
3.
Mount the vs2 export via NFS, using the following IP:
mount -t nfs 192.168.0.122:/vs2_vol1 /mnt/path01
4.
Change directory to /usr/tmp, make a new directory sio, and change directory into it.
cd /usr/tmp
mkdir sio
cd sio
5.
6.
In your cluster1 PuTTY session, verify that your LIFs that are associated with SVM vs2 are
home and, if they are not, send them home.
network interface show -vserver vs2
network interface revert -vserver vs2 -lif *
7.
Normally you would start data collection for all of the protocols being served by the cluster;
however, for this exercise you use only NFSv3.
In your cluster1 PuTTY session, using the diagnostic privilege level, start statistics data
collection on the objects nfsv3, volume, aggregate, disk, port, lif, and readahead.
NOTE: Diagnostic privilege level commands are required to capture rand_read_req and
seq_read_req.
set diagnostic
statistics start -object nfsv3|volume|aggregate|disk|port|lif|readahead
-sample-id sample_nfs1
8.
On the Linux server, start SIO with a 0% read workload, 0% random, 32-KB block size, 300-MB file size, run for one hundred seconds, one thread, and point it at the file on the NFS mount.
/usr/tmp/sio/sio_ntap_linux 0 0 32k 300m 100 1 /mnt/path01/300mfile
9.
Using the analysis commands that you learned in the previous task, analyze the data and record
or save the results. It is recommended that you increase the Lines of scrollback in your
cluster1 PuTTY session to at least 2000.
This information will be used to complete questions later in this task.
HINT: Use the baseline analysis commands.
10.
After the SIO command ends and the command prompt returns, go to the PuTTY session and
turn off data collection.
statistics stop -sample-id sample_nfs1
11.
In your cluster1 PuTTY session, using the diagnostic privilege level, start statistics data
collection on the objects nfsv3, volume, aggregate, disk, port, lif, and readahead.
statistics start -object nfsv3|volume|aggregate|disk|port|lif|readahead
-sample-id sample_nfs2
12.
On the Linux server, start SIO with a 0% read workload, 0% random, 32-KB block size, 300-MB file size, run for one hundred seconds, four threads, and point it at the file on the NFS
mount.
/usr/tmp/sio/sio_ntap_linux 0 0 32k 300m 100 4 /mnt/path01/300mfile
13.
Using the analysis commands that you learned in the previous task, analyze the data and record or save the results.
14.
After the SIO command ends and the command prompt returns, go to the PuTTY session and
turn off data collection.
statistics stop -sample-id sample_nfs2
15.
In your cluster1 PuTTY session, using the diagnostic privilege level, start statistics data
collection on the objects nfsv3, volume, aggregate, disk, port, lif, and readahead.
statistics start -object nfsv3|volume|aggregate|disk|port|lif|readahead
-sample-id sample_nfs3
16.
On the Linux server, start SIO with a 100% read workload, 0% random, 32-KB block size, 300-MB file size, run for one hundred seconds, one thread, and point it at the file on the NFS mount.
/usr/tmp/sio/sio_ntap_linux 100 0 32k 300m 100 1 /mnt/path01/300mfile
17.
Using the analysis commands that you learned in the previous task, analyze the data and record
or save the results.
18.
After the SIO command ends and the command prompt returns, go to the PuTTY session and
turn off data collection.
statistics stop -sample-id sample_nfs3
19.
In your cluster1 PuTTY session, using the diagnostic privilege level, start statistics data
collection on the objects nfsv3, volume, aggregate, disk, port, lif, and readahead.
statistics start -object nfsv3|volume|aggregate|disk|port|lif|readahead
-sample-id sample_nfs4
20.
On the Linux server, start SIO with a 100% read workload, 0% random, 32-KB block size, 300-MB file size, run for one hundred seconds, four threads, and point it at the file on the NFS
mount.
/usr/tmp/sio/sio_ntap_linux 100 0 32k 300m 100 4 /mnt/path01/300mfile
21.
Using the analysis commands that you learned in the previous task, analyze the data and record
or save the results.
22.
After the SIO command ends and the command prompt returns, go to the PuTTY session and
turn off data collection.
statistics stop -sample-id sample_nfs4
23.
In your cluster1 PuTTY session, using the diagnostic privilege level, start statistics data
collection on the objects nfsv3, volume, aggregate, disk, port, lif, and readahead.
statistics start -object nfsv3|volume|aggregate|disk|port|lif|readahead
-sample-id sample_nfs5
24.
On the Linux server, start SIO with a 100% read workload, 0% random, 32-KB block size, 20-MB file size, run for one hundred seconds, four threads, and point it at the file on the NFS
mount. The 20-MB file will create a cached workload.
/usr/tmp/sio/sio_ntap_linux 100 0 32k 20m 100 4 /mnt/path01/300mfile
25.
Using the analysis commands that you learned in the previous task, analyze the data and record or save the results.
26.
After the SIO command ends and the command prompt returns, go to the PuTTY session and
turn off data collection.
statistics stop -sample-id sample_nfs5
27.
In your cluster1 PuTTY session, using the diagnostic privilege level, start statistics data
collection on the objects nfsv3, volume, aggregate, disk, port, lif, and readahead.
statistics start -object nfsv3|volume|aggregate|disk|port|lif|readahead
-sample-id sample_nfs6
28.
On the Linux server, start SIO with a 50% read and 50% write workload, 100% random, 4-KB
block size, 300-MB file size, run for one hundred seconds, 32 threads, and point it at the file on
the NFS mount.
/usr/tmp/sio/sio_ntap_linux 50 100 4k 300m 100 32 /mnt/path01/300mfile
29.
Using the analysis commands that you learned in the previous task, analyze the data and record
or save the results.
30.
After the SIO command ends and the command prompt returns, go to the PuTTY session and
turn off data collection.
statistics stop -sample-id sample_nfs6
31.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
did the throughput and I/Os increase when the number of threads increased?
32.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
which workload had the highest throughput in terms of KBps?
33.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
which workload had the highest throughput in terms of IOPS?
34.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
which workload showed the lowest latencies on the storage system?
35.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
which workload showed the highest disk utilization on the storage system?
36.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
which workload showed the highest CPU utilization on the storage system?
37.
In your cluster1 PuTTY session, using the diagnostic privilege level, start statistics data
collection on the objects nfsv3, volume, aggregate, disk, port, lif, and readahead.
statistics start -object nfsv3|volume|aggregate|disk|port|lif|readahead
-sample-id sample_nfs7
38.
On the Linux server, reissue the same SIO workload by starting SIO with a 50% read and 50%
write workload, 100% random, 4-KB block size, 300-MB file size, run for one hundred seconds,
32 threads, and point it at the file on the NFS mount.
/usr/tmp/sio/sio_ntap_linux 50 100 4k 300m 100 32 /mnt/path01/300mfile
39.
While the SIO is running, in your cluster1 PuTTY session, migrate your NAS LIF that is
associated with /mnt/path01.
network interface migrate -vserver vs2 -lif vs2_cifs_nfs_lif2 -dest-node cluster1-02
40.
Using the analysis commands that you have learned, analyze the data and record or save the
results.
41.
Compare your results with the results from sample_nfs6 which uses the same workload
without a migrating LIF.
42.
After the SIO command ends and the command prompt returns, go to the PuTTY session and
turn off data collection.
statistics stop -sample-id sample_nfs7
43.
In your cluster1 PuTTY session, using the diagnostic privilege level, start statistics data
collection on the objects nfsv3, volume, aggregate, disk, port, lif, and readahead.
statistics start -object nfsv3|volume|aggregate|disk|port|lif|readahead
-sample-id sample_nfs8
44.
On the Linux server, start SIO with a 50% read and 50% write workload, 100% random, 4-KB
block size, 300-MB file size, run for one hundred seconds, 32 threads, and point it at the file on
the NFS mount.
/usr/tmp/sio/sio_ntap_linux 50 100 4k 300m 100 32 /mnt/path01/300mfile
45.
46.
Using the analysis commands that you have learned, analyze the data and record or save the
results.
HINT: Analyze the cluster interconnect statistics.
NOTE: You can view the status of the volume move with the following command:
volume move show -vserver vs2 -volume vs2_vol01
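NOTE: The volume move itself is started in an earlier step whose screenshot is not reproduced here. From the clustershell, it would look roughly like the following; the destination aggregate shown is only an example taken from this exercise kit's aggregate list:
volume move start -vserver vs2 -volume vs2_vol01 -destination-aggregate n02_aggr1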
47.
After the volume move command ends and the command prompt returns, go to the PuTTY
session and turn off data collection.
statistics stop -sample-id sample_nfs8
48.
In this task, you unlock the diag userid for use with Perfstat.
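NOTE: The individual steps for this task were delivered as screenshots. From the clustershell, giving the diag account a password and unlocking it typically involves commands of this form (treat this as a sketch and confirm the prompts on your system):
security login password -username diag
security login unlock -username diag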
STEP ACTION
1.
2.
3.
4.
In this task, you use Perfstat to collect detailed profile and troubleshooting data samples from a cluster.
STEP ACTION
1.
NOTE: Normally, you would open a browser and navigate to My Support > Downloads >
Utility Toolchest on the Support site and download the Perfstat Converged compressed file for
either Windows or Linux. This process has already been completed on your Windows Server.
2.
In the CourseFiles directory, create a directory called Perfstat, and then copy the perfstat8.exe
file from the zip file into the Perfstat directory.
3.
In the Perfstat directory, create two directories called workload1 and workload2.
4.
On the Windows Server desktop, double-click the Command Prompt icon and navigate to
C:\Users\Administrator\Desktop\CourseFiles\Perfstat\workload1.
5.
On the Linux server, start SIO with a 0% read workload, 0% random, 32-KB block size, 300-MB file size, run indefinitely, 32 threads, and point it at the file on the NFS mount.
/usr/tmp/sio/sio_ntap_linux 0 0 32k 300m 0 32 /mnt/path01/300mfile
6.
In the first Command Prompt window, start Perfstat with the following command parameters:
192.168.0.101 (the cluster management LIF of cluster1)
--mode=cluster (to signify that this is a clustered Data ONTAP storage
environment)
--time 1 (to signify one-minute iterations)
..\perfstat8 192.168.0.101 --mode="cluster" --time 1
7.
When prompted for the SSH passphrase, press Enter, and then press Enter again for the same
passphrase.
8.
When prompted for the filer password, type Netapp123, and when prompted for the system shell
password (this may take a few seconds), type Netapp123 again.
After a few seconds, you will see a "." where the cursor was positioned. Perfstat runs one
iteration for each controller in the cluster and one for the host machine (in this case, Windows).
9.
After the Perfstat operation is complete, issue Ctrl-C in the Linux server session to stop the SIO
run.
10.
Look for the protocol latencies, CPU utilization, network utilization, and average per disk
utilization in the Perfstat data.
11.
12.
On the Linux server, start SIO with a 100% read workload, 100% random, 4-KB block size,
300-MB file size, run indefinitely, 32 threads, and point it at the file on the NFS mount.
/usr/tmp/sio/sio_ntap_linux 100 100 4k 300m 0 32 /mnt/path01/300mfile
13.
In the first Command Prompt window, start Perfstat with the same parameters as the first run.
14.
After the Perfstat operation is complete, issue Ctrl-C in the Linux server session to stop the SIO
run.
15.
Find the protocol latencies, CPU utilization, network utilization, and average per disk utilization
in the Perfstat data.
END OF EXERCISE
In this exercise, you work with the storage quality of service (QoS) feature to prevent problem workloads and
tenants from impacting other workloads and tenants.
OBJECTIVES
In this task, you use Iometer to generate a 50% write and 50% read workload on a LUN and set up a storage
QoS limit to throttle the workload that is associated with the LUN.
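NOTE: The steps below apply the limit through System Manager. As a rough CLI sketch of the same idea (the policy-group name and the 100-IOPS limit are examples only, not values from this exercise):
qos policy-group create -policy-group pg_lun_limit -vserver vsISCSI1 -max-throughput 100iops
lun modify -vserver vsISCSI1 -path /vol/lun_vsISCSI1_1_vol/lun_vsISCSI1_1 -qos-policy-group pg_lun_limit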
STEP ACTION
1.
On the taskbar, click the Iometer icon and, if prompted, approve the Iometer EULA.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
Click the Start Tests button (a green flag).
12.
Open Windows Explorer to the D: drive (lun_vsISCSI1_1) and notice a new file, iobw.tst.
This is the test file that is growing until it is 40,000 KB (80,000 x 512 B).
13.
Notice that, when the file reaches its maximum size, the ramp-up time begins to count down for
four seconds.
The ramp-up time ensures that the storage is stable before the tests begin.
14.
When the 50% read and 50% write test results begin to appear, notice that the total I/Os per
second (IOPS) are recorded on the top row of the display output.
NOTE: On the test system, the total IOPS will probably fall in a range below that of a clustered
system running on nonvirtualized equipment. Remember that this is a Windows machine and a
cluster running in a shared virtualized environment.
15.
16.
17.
Using the statistics command, identify the volume or LUN that is doing the most IOPS.
18.
19.
20.
21.
22.
23.
24.
Wait a few moments and watch Iometer's total IOPS drop.
NOTE: You can continue to run the previous command to observe the current storage readings.
It will take a few minutes for the IOPS to drop.
25.
When your testing is complete, stop the Iometer test by clicking the Stop button on the toolbar.
26.
27.
Close Iometer.
In this task, you set up a storage QoS limit for a volume and use Iometer to generate a 50% write and 50%
read workload on the volume.
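NOTE: As with the previous task, the limit is applied through System Manager in the steps below. A rough CLI sketch of the equivalent (the policy-group name and the 500-IOPS limit are examples only):
qos policy-group create -policy-group pg_vol_limit -vserver vs2 -max-throughput 500iops
volume modify -vserver vs2 -volume vs2_vol01 -qos-policy-group pg_vol_limit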
STEP ACTION
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
Click the Start Tests button (a green flag).
17.
Open Windows Explorer to the Z drive and notice a new file, iobw.tst.
This is the test file that is growing until it is 80,000 KB (20,000 x 4 KB).
18.
Notice that, when the file reaches its maximum size, the ramp-up time begins to count down for
four seconds.
The ramp-up time ensures that the storage is stable before the tests begin.
19.
When the 50% read and 50% write test results begin to appear, notice that the total IOPS are
recorded on the top row of the display output.
NOTE: Remember that this is a Windows machine and a cluster running in a shared virtualized
environment.
20.
21.
22.
In the PuTTY window, verify the current throughput from the storage.
qos statistics performance show -iterations 4
23.
When your testing is complete, stop the Iometer test by clicking the Stop button on the toolbar.
24.
25.
Close Iometer.
In this task, you set up a storage QoS tenant limit at the SVM level and use Iometer to generate a 50% write
and 50% read workload on one of the tenants.
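NOTE: The steps below use System Manager. A rough CLI sketch of an SVM-level limit (the policy-group name and the 1000-IOPS limit are examples only):
qos policy-group create -policy-group pg_svm_limit -vserver vs2 -max-throughput 1000iops
vserver modify -vserver vs2 -qos-policy-group pg_svm_limit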
STEP ACTION
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
Click the Start Tests button (a green flag).
17.
Open File Explorer to the Z drive and notice a new file, iobw.tst.
This is the test file that is growing until it is 80,000 KB (20,000 x 4 KB).
18.
Notice that, when the file reaches its maximum size, the ramp-up time begins to count down for
four seconds.
The ramp-up time ensures that the storage is stable before the tests begin.
19.
When the 50% read and 50% write test results begin to appear, notice that the total IOPS are
recorded on the top row of the display output.
NOTE: Remember that this is a Windows machine and a cluster running in a shared virtualized
environment.
20.
21.
In the PuTTY window, verify the current throughput from the storage.
qos statistics performance show -iterations 4
22.
23.
When your testing is complete, stop the Iometer test by clicking the Stop button on the toolbar.
24.
Close Iometer.
END OF EXERCISE
MODULE 7: SUMMARY
There is no exercise associated with Module 7.
APPENDIX A: ANSWERS
In this task, you use either System Manager or the clustered Data ONTAP CLI to identify key clustered Data
ONTAP components and revert any LIFs that are not on their home ports.
STEP ACTION
1.
Analyze and identify the following list of clustered Data ONTAP components:
Aggregates: aggr0_n1, aggr0_n2, n01_aggr1, n02_aggr1
Storage virtual machines: cluster1, cluster1-01, cluster1-02, vs1, vs2, vsISCSI1
Volumes: vol0, vol0, vs1_root, vs2_root, vs2_vol01, lun_vsISCSI1_1_vol, vsISCSI1_root
LUNs: /vol/lun_vsISCSI1_1_vol/lun_vsISCSI1_1
Licenses (which features are installed?): Base, Insight Balance, CIFS, NFS, iSCSI
CIFS Shares: admin$, ipc$, rootdir, vol1, ~%w
LIFs (are all LIFs home?): No
In this task, you issue the three statistics catalog commands and exercise the supporting parameters,
using different privilege levels.
STEP ACTION
4.
What command syntax would you use to display only the advanced level statistics objects
without their descriptions?
statistics catalog object show -privilege advanced -fields object
5.
What command syntax would you use to display statistics objects that are associated with
storage virtual machines (SVMs)?
NOTE: You should still be in the advanced privilege level.
statistics catalog object show -object *vserver*
How many statistics objects are there in the list? 11 (in Clustered Data ONTAP 8.2)
7.
What command syntax would you use to display the instance names that are available for the
statistics object that represents SVMs that have LIFs associated with them?
statistics catalog instance show -object lif:vserver
8.
What command syntax would you use to display the instance names that are available for the
statistics object that represents SVMs that have volumes associated with them?
statistics catalog instance show -object volume:vserver
9.
What command syntax would you use to display the instance names that are available for the
statistics object that represents the disk that is associated with the second node in the cluster
(cluster1-02)?
statistics catalog instance show -object disk -node cluster1-02
In this task, you issue the statistics start and statistics show commands and exercise the
supporting parameters, using different privilege levels.
STEP ACTION
3.
Display the counters that are associated with the statistics object nfsv3 instance vs2 for both
samples.
statistics show -object nfsv3 -instance vs2 -counter * -sample-id
sample_nfsv3_adm
statistics show -object nfsv3 -instance vs2 -counter * -sample-id
sample_nfsv3_adv
5.
What command syntax would you use to display all of the latency related counters for disk
v4.20?
statistics show -object disk -instance v4.20 -counter *latency*
-sample-id sample_disk
6.
What command syntax would you use to display all of the user_read_latency counters for all
disks in the cluster?
statistics show -object disk -instance * -counter user_read_latency
-sample-id sample_disk
8.
What command syntax would you use to display all of the user_read_latency counters that are
greater than 10000us for all disks in the cluster, limiting the display to only the counter and
value fields?
statistics show -object disk -instance * -counter user_read_latency
-sample-id sample_disk -fields counter,value -value >10000
13.
What command syntax would you use to display all of the latency counters for all volumes in
the cluster? How about all nonzero latency counters?
statistics show -object volume -instance * -counter *latency
-sample-id sample_volume
statistics show -object volume -instance * -counter *latency
-sample-id sample_volume -value >0
15.
What command syntax would you use to display all of the counters for all processors in the
cluster?
statistics show -object processor -instance * -sample-id
sample_processor
In this task, you gather statistical data and evaluate the data to determine the workload characteristics.
STEP ACTION
15.
OBSERVED VALUE          READ            WRITE
Throughput              0/sec           99/sec
Latency                                 6.32ms
Operation Size                          32K
Concurrency                             0.63
Randomness
16.
What did the command-line output look like for throughput of sample_cifs1?
cluster1::*> statistics show -object cifs -instance * -counter cifs_read_ops -sample-id
sample_cifs1
Object: cifs
Instance: vs2
Start-time: 9/22/2013 21:56:21
End-time: 9/22/2013 21:58:18
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_read_ops
cifs_read_ops
Object: cifs
Instance: vs2
Start-time: 9/22/2013 21:56:21
End-time: 9/22/2013 21:58:18
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_ops                                                  99
cifs_write_ops
17.
What did the command-line output look like for latency of sample_cifs1?
cluster1::*> statistics show -object volume -instance * -counter cifs_read_latency -sample-id sample_cifs1
...
Object: volume
Instance: vs2_vol01
Start-time: 9/22/2013 21:56:21
End-time: 9/22/2013 21:58:18
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_read_latency
...
7 entries were displayed.
cluster1::*> statistics show -object volume -instance * -counter cifs_write_latency -sample-id sample_cifs1
...
Object: volume
Instance: vs2_vol01
Start-time: 9/22/2013 21:56:21
End-time: 9/22/2013 21:58:18
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_latency                                          6324us
Object: volume
Instance: vsISCSI1_root
Start-time: 9/22/2013 21:56:21
End-time: 9/22/2013 21:58:18
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_latency
18.
What did the command-line output look like for read operation size of sample_cifs1?
cluster1::*> statistics show -object cifs -instance * -counter cifs_read_size_histo -sample-id sample_cifs1
Object: cifs
Instance: vs2
Start-time: 9/22/2013 21:56:21
End-time: 9/22/2013 21:58:18
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_read_size_histo
0 bytes
<= 1 KB
<= 2 KB
<= 4 KB
<= 8 KB
<= 16 KB
<= 32 KB
<= 64 KB
> 64 KB
0 bytes
<= 1 KB
<= 2 KB
<= 4 KB
<= 8 KB
<= 16 KB
cifs_read_size_histo
<= 32 KB
<= 64 KB
> 64 KB
19.
What did the command-line output look like for write operation size of sample_cifs1?
cluster1::*> statistics show -object cifs -instance * -counter cifs_write_size_histo -sample-id sample_cifs1
Object: cifs
Instance: vs2
Start-time: 9/22/2013 21:56:21
End-time: 9/22/2013 21:58:18
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_size_histo
0 bytes
<= 1 KB
<= 2 KB
<= 4 KB
<= 8 KB
<= 16 KB
<= 32 KB                                                     23284
<= 64 KB
> 64 KB
0 bytes
<= 1 KB
<= 2 KB
<= 4 KB
<= 8 KB
<= 16 KB
<= 32 KB
<= 64 KB
> 64 KB
cifs_write_size_histo
20.
What did the calculation (throughput * latency) look like for concurrency of sample_cifs1?
99/sec * 0.00632 (6.32ms) = 0.63
21.
What did the command-line output look like for randomness (rand_read_reqs) of
sample_cifs1?
100% write workload, no reason to run this command
22.
What did the command-line output look like for randomness (seq_read_reqs) of
sample_cifs1?
100% write workload, no reason to run this command
23.
What workload did you conclude from your data collection of sample_cifs1?
All writes, large operations, virtually no concurrency
24.
OBSERVED VALUE          READ            WRITE
Throughput              715/sec         0/sec
Latency                 3.20ms
Operation Size          32K
Concurrency             2.29
Randomness              Sequential
25.
What did the command-line output look like for throughput of sample_cifs2?
cluster1::*> statistics show -object cifs -instance * -counter cifs_read_ops -sample-id
sample_cifs2
Object: cifs
Instance: vs2
Start-time: 9/22/2013 22:21:55
End-time: 9/22/2013 22:23:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_read_ops                                                  715
cifs_read_ops
Object: cifs
Instance: vs2
Start-time: 9/22/2013 22:21:55
End-time: 9/22/2013 22:23:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_ops
cifs_write_ops
26.
What did the command-line output look like for latency of sample_cifs2?
cluster1::*> statistics show -object volume -instance * -counter cifs_read_latency -sample-id sample_cifs2
...
Object: volume
Instance: vs2_vol01
Start-time: 9/22/2013 22:21:55
End-time: 9/22/2013 22:23:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_read_latency                                           3196us
...
7 entries were displayed.
cluster1::*> statistics show -object volume -instance * -counter cifs_write_latency -sample-id sample_cifs2
...
Object: volume
Instance: vs2_vol01
Start-time: 9/22/2013 22:21:55
End-time: 9/22/2013 22:23:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_latency
Object: volume
Instance: vsISCSI1_root
Start-time: 9/22/2013 22:21:55
End-time: 9/22/2013 22:23:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_latency
27.
What did the command-line output look like for read operation size of sample_cifs2?
cluster1::*> statistics show -object cifs -instance * -counter cifs_read_size_histo -sample-id sample_cifs2
Object: cifs
Instance: vs2
Start-time: 9/22/2013 22:21:55
End-time: 9/22/2013 22:23:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_read_size_histo
0 bytes
<= 1 KB
<= 2 KB
<= 4 KB
<= 8 KB
<= 16 KB
<= 32 KB                                                    137368
<= 64 KB
> 64 KB
0 bytes
<= 1 KB
<= 2 KB
<= 4 KB
<= 8 KB
<= 16 KB
<= 32 KB
<= 64 KB
> 64 KB
cifs_read_size_histo
28.
What did the command-line output look like for write operation size of sample_cifs2?
cluster1::*> statistics show -object cifs -instance * -counter cifs_write_size_histo -sample-id sample_cifs2
Object: cifs
Instance: vs2
Start-time: 9/22/2013 22:21:55
End-time: 9/22/2013 22:23:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_size_histo
0 bytes
<= 1 KB
<= 2 KB
<= 4 KB
<= 8 KB
<= 16 KB
<= 32 KB
<= 64 KB
> 64 KB
0 bytes
<= 1 KB
<= 2 KB
<= 4 KB
<= 8 KB
<= 16 KB
<= 32 KB
<= 64 KB
> 64 KB
cifs_write_size_histo
29.
What did the calculation (throughput * latency) look like for concurrency of sample_cifs2?
715/sec * 0.00320 (3.20ms) = 2.29
30.
What did the command-line output look like for randomness (rand_read_reqs) of
sample_cifs2?
cluster1::*> statistics show -object readahead -instance * -counter rand_read_reqs -sample-id sample_cifs2
Object: readahead
Instance: readahead
Start-time: 9/22/2013 22:21:55
End-time: 9/22/2013 22:23:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
rand_read_reqs
UNUSED
4K
100
8K
98
12K
100
16K
100
20K
100
24K
28K
50
32K
40K
48K
56K
MAX
...
rand_read_reqs
UNUSED
4K
100
8K
100
12K
100
16K
20K
100
24K
100
28K
100
32K
40K
48K
56K
MAX
...
31.
What did the command-line output look like for randomness (seq_read_reqs) of
sample_cifs2?
cluster1::*> statistics show -object readahead -instance * -counter seq_read_reqs -sample-id sample_cifs2
Object: readahead
Instance: readahead
Start-time: 9/22/2013 22:21:55
End-time: 9/22/2013 22:23:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
seq_read_reqs
    UNUSED
    4K
    8K
    12K
    16K
    20K
    24K
    28K                                                         50
    32K                                                         99
    40K
    48K
    56K
    MAX
...
seq_read_reqs
    UNUSED
    4K
    8K
    12K
    16K
    20K
    24K
    28K
    32K
    40K
    48K
    56K
    MAX
...
32.
What workload did you conclude from your data collection of sample_cifs2?
All reads, large operations, some concurrency, mostly random
33.
OBSERVED VALUE        Read          Write
Throughput            159/sec       159/sec
Latency               168.37ms      0.14ms
Operation Size        4K            4K
Concurrency           26.77         0.02
Randomness            Random
34.
What did the command-line output look like for throughput of sample_cifs3?
cluster1::*> statistics show -object cifs -instance * -counter cifs_read_ops -sample-id sample_cifs3
Object: cifs
Instance: vs2
Start-time: 9/22/2013 22:25:56
End-time: 9/22/2013 22:27:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_read_ops                                                  159
Object: cifs
Instance: vs2
Start-time: 9/22/2013 22:25:56
End-time: 9/22/2013 22:27:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_ops                                                 159
35.
What did the command-line output look like for latency of sample_cifs3?
cluster1::*> statistics show -object volume -instance * -counter cifs_read_latency -sample-id sample_cifs3
...
Object: volume
Instance: vs2_vol01
Start-time: 9/22/2013 22:25:56
End-time: 9/22/2013 22:27:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_read_latency                                         168371us
...
7 entries were displayed.
cluster1::*> statistics show -object volume -instance * -counter cifs_write_latency -sample-id sample_cifs3
...
Object: volume
Instance: vs2_vol01
Start-time: 9/22/2013 22:25:56
End-time: 9/22/2013 22:27:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_latency                                           143us
Object: volume
Instance: vsISCSI1_root
Start-time: 9/22/2013 22:25:56
End-time: 9/22/2013 22:27:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_latency
36.
What did the command-line output look like for read operation size of sample_cifs3?
cluster1::*> statistics show -object cifs -instance * -counter cifs_read_size_histo -sample-id sample_cifs3
Object: cifs
Instance: vs2
Start-time: 9/22/2013 22:25:56
End-time: 9/22/2013 22:27:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_read_size_histo
    0 bytes
    <= 1 KB
    <= 2 KB
    <= 4 KB                                                  30356
    <= 8 KB
    <= 16 KB
    <= 32 KB
    <= 64 KB
    > 64 KB
cifs_read_size_histo
    0 bytes
    <= 1 KB
    <= 2 KB
    <= 4 KB
    <= 8 KB
    <= 16 KB
    <= 32 KB
    <= 64 KB
    > 64 KB
37.
What did the command-line output look like for write operation size of sample_cifs3?
cluster1::*> statistics show -object cifs -instance * -counter cifs_write_size_histo -sample-id sample_cifs3
Object: cifs
Instance: vs2
Start-time: 9/22/2013 22:25:56
End-time: 9/22/2013 22:27:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
cifs_write_size_histo
    0 bytes
    <= 1 KB
    <= 2 KB
    <= 4 KB                                                  30288
    <= 8 KB
    <= 16 KB
    <= 32 KB
    <= 64 KB
    > 64 KB
cifs_write_size_histo
    0 bytes
    <= 1 KB
    <= 2 KB
    <= 4 KB
    <= 8 KB
    <= 16 KB
    <= 32 KB
    <= 64 KB
    > 64 KB
38.
What did the calculation (throughput * latency) look like for concurrency of sample_cifs3?
Read: 159/sec * 0.16837 (168.37ms) = 26.77
Write: 159/sec * 0.00014 (0.14ms) = 0.02
39.
What did the command-line output look like for randomness (rand_read_reqs) of
sample_cifs3?
cluster1::*> statistics show -object readahead -instance * -counter rand_read_reqs -sample-id sample_cifs3
Object: readahead
Instance: readahead
Start-time: 9/22/2013 22:25:56
End-time: 9/22/2013 22:27:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
rand_read_reqs
    UNUSED
    4K                                                          82
    8K                                                          50
    12K                                                         97
    16K
    20K                                                         83
    24K                                                        100
    28K
    32K                                                         13
    40K
    48K
    56K
    MAX
...
rand_read_reqs
    UNUSED
    4K                                                         100
    8K
    12K                                                        100
    16K                                                        100
    20K                                                        100
    24K                                                        100
    28K                                                        100
    32K                                                        100
    40K
    48K
    56K
    MAX
...
40.
What did the command-line output look like for randomness (seq_read_reqs) of
sample_cifs3?
cluster1::*> statistics show -object readahead -instance * -counter seq_read_reqs -sample-id sample_cifs3
Object: readahead
Instance: readahead
Start-time: 9/22/2013 22:25:56
End-time: 9/22/2013 22:27:31
Cluster: cluster1
Counter                                                      Value
-------------------------------- --------------------------------
seq_read_reqs
    UNUSED
    4K                                                          17
    8K                                                          50
    12K
    16K                                                        100
    20K                                                         16
    24K
    28K                                                        100
    32K                                                         86
    40K
    48K
    56K
    MAX
...
seq_read_reqs
    UNUSED
    4K
    8K
    12K
    16K
    20K
    24K
    28K
    32K
    40K
    48K
    56K
    MAX
...
41.
What workload did you conclude from your data collection of sample_cifs3?
50% reads and 50% writes, small operations, some read concurrency, minimal write
concurrency
42.
The previous examples used CIFS as the basis for the workloads. What statistics objects and
counters would be needed for NFSv3?
set diagnostic
statistics start -object nfsv3|volume|readahead -sample-id sample_nfsv3
statistics ... -sample-id ...
statistics ... -sample-id ...
statistics ... -sample-id ...
statistics ... -sample-id ...
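The same collection-and-query sequence can be parameterized by protocol, so the commands used for the CIFS samples map onto their NFSv3 equivalents. The Python sketch below only builds command strings in the shape shown in this guide; the volume counter name passed in the example is a placeholder, and you must substitute real counters for the nfsv3 and volume objects in your Data ONTAP release.

# Sketch: build statistics commands for a given protocol object.
def build_commands(protocol_object, sample_id, counters):
    cmds = ["set diagnostic",
            f"statistics start -object {protocol_object}|volume|readahead "
            f"-sample-id {sample_id}"]
    for obj, counter in counters:
        cmds.append(f"statistics show -object {obj} -instance * "
                    f"-counter {counter} -sample-id {sample_id}")
    cmds.append(f"statistics stop -sample-id {sample_id}")
    return cmds

# Example usage: readahead counters are the same ones used for the CIFS
# samples; the volume counter below is a placeholder name.
nfs_counters = [("volume", "nfs_read_latency"),     # placeholder counter name
                ("readahead", "rand_read_reqs"),
                ("readahead", "seq_read_reqs")]

for cmd in build_commands("nfsv3", "sample_nfsv3", nfs_counters):
    print(cmd)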
In this task, you query the cluster to assess the initial health status.
STEP ACTION
2.
Node                  Health  Eligibility
--------------------- ------- -------------
cluster1-01           true    true
cluster1-02           true    true
3.
Check to see if the replication rings have the same (or consistent) masters.
NOTE: Remember to set advanced privilege before you execute this command.
cluster ring show
cluster1::> set advanced
Warning: These advanced commands are potentially dangerous; use them
only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
cluster1::*> cluster ring show
Node        UnitName Epoch    DB Epoch DB Trnxs Master      Online
----------- -------- -------- -------- -------- ----------- ---------
cluster1-01 mgmt     3        3        3910     cluster1-01 master
cluster1-01 vldb     3        3        11       cluster1-01 master
cluster1-01 vifmgr   3        3        20       cluster1-01 master
cluster1-01 bcomd    3        3        22       cluster1-01 master
cluster1-02 mgmt     3        3        3910     cluster1-01 secondary
cluster1-02 vldb     3        3        11       cluster1-01 secondary
cluster1-02 vifmgr   3        3        20       cluster1-01 secondary
cluster1-02 bcomd    3        3        22       cluster1-01 secondary
8 entries were displayed.
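If you capture this output to a file, a short script can confirm that every replication unit reports the same master and that Epoch matches DB Epoch on every node. A minimal Python sketch, assuming the table above was saved as ring.txt in the column layout shown (file name and layout are assumptions):

# Sketch: verify consistent masters and epochs in saved `cluster ring show` output.
from collections import defaultdict

masters = defaultdict(set)   # unit name -> set of reported masters

with open("ring.txt") as f:
    for line in f:
        parts = line.split()
        # Data rows look like: node unit epoch db_epoch db_trnxs master online
        if len(parts) == 7 and parts[2].isdigit():
            node, unit, epoch, db_epoch, _, master, _ = parts
            masters[unit].add(master)
            if epoch != db_epoch:
                print(f"Epoch mismatch for {unit} on {node}")

for unit, who in masters.items():
    status = "consistent" if len(who) == 1 else "INCONSISTENT"
    print(f"{unit}: master(s) {sorted(who)} -> {status}")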
4.
Using the advanced privilege level, check to see if the cluster connectivity is healthy.
cluster ping-cluster -node local
cluster1::*> cluster ping-cluster -node local
Host is cluster1-02
Getting addresses from network interface table...
Local = 169.254.173.221 169.254.230.66
Remote = 169.254.224.211 169.254.185.5
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 1500 byte MTU on 4 path(s):
Local 169.254.173.221 to Remote 169.254.185.5
Local 169.254.173.221 to Remote 169.254.224.211
Local 169.254.230.66 to Remote 169.254.185.5
Local 169.254.230.66 to Remote 169.254.224.211
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)
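The important lines in this output are the path counts: any "fails on" or "paths down" value other than 0 indicates a cluster-network problem. A minimal Python sketch that scans saved ping-cluster output for nonzero failure counts (the file name is an assumption):

# Sketch: flag problems in saved `cluster ping-cluster` output.
import re

def check_ping_cluster(path="ping_cluster.txt"):
    problems = []
    with open(path) as f:
        for line in f:
            m = re.search(r"fails on (\d+) path", line)
            if m and int(m.group(1)) != 0:
                problems.append(line.strip())
            m = re.search(r"(\d+) paths down", line)
            if m and int(m.group(1)) != 0:
                problems.append(line.strip())
    return problems

if __name__ == "__main__":
    issues = check_ping_cluster()
    print("Cluster connectivity OK" if not issues else "\n".join(issues))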
5.
vsISCSI1    offline ok    0    0    0
            Issues: The filesystem protocols are not configured.
3 entries were displayed.
6.
If any of the SVMs return a nonzero status, use the show-all version of the command for additional information.
dashboard health vserver show-all
cluster1::> dashboard health vserver show-all
There are no Vserver health issues reported.
7.
8.
9.
In this task, you use cluster shell commands to monitor cluster performance.
STEP ACTION
1.
b. Check the current level of activity on the entire cluster and notice the overall latency
column.
dashboard performance show
cluster1::> dashboard performance show
                          Average
                    Total Latency CPU  ---Data-Network--- -Cluster-Network-  ----Storage----
Node                Ops/s in usec Busy Busy   Recv   Sent Busy   Recv   Sent   Read   Write
                                       Util   MB/s   MB/s Util   MB/s   MB/s   MB/s    MB/s
------------------- ----- ------- ---- ---- ------ ------ ---- ------ ------ ------  ------
cluster1-01             0       0   3%   0%      0      0   0%
cluster1-02             0       0   3%   0%      0      0   0%
cluster:summary         0       0   3%   0%      0      0   0%
3 entries were displayed.
2.
3.
Use the instance parameter to show everything and notice the overall latency by protocol
fields.
dashboard performance show -instance
cluster1::> dashboard performance show -instance
Node:                                 cluster1-01
Average Latency (usec):               0us
CPU Busy:                             2%
Total Ops/s:                          0
NFS Ops/s:                            0
CIFS Ops/s:                           0
Data Network Utilization:             0%
Data Network Received (per sec):      0
Data Network Sent (per sec):          0
Cluster Network Utilization:          0%
Cluster Network Received (per sec):   0
Cluster Network Sent (per sec):       0
Storage Read (per sec):               0
Storage Write (per sec):              0
CIFS Average Latency:                 0us
NFS Average Latency:                  0us

Node:                                 cluster1-02
Average Latency (usec):               0us
CPU Busy:                             2%
Total Ops/s:                          0
NFS Ops/s:                            0
CIFS Ops/s:                           0
Data Network Utilization:             0%
Data Network Received (per sec):      0
Data Network Sent (per sec):          0
Cluster Network Utilization:          0%
Cluster Network Received (per sec):   0
Cluster Network Sent (per sec):       0
Storage Read (per sec):               0
Storage Write (per sec):              0
CIFS Average Latency:                 0us
NFS Average Latency:                  0us

Node:                                 cluster:summary
Average Latency (usec):               0us
CPU Busy:                             2%
Total Ops/s:                          0
NFS Ops/s:                            0
CIFS Ops/s:                           0
Data Network Utilization:             0%
Data Network Received (per sec):      0
Data Network Sent (per sec):          0
Cluster Network Utilization:          0%
Cluster Network Received (per sec):   0
Cluster Network Sent (per sec):       0
Storage Read (per sec):               0
Storage Write (per sec):              0
CIFS Average Latency:                 0us
NFS Average Latency:                  0us
3 entries were displayed.
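Because the -instance form prints one "Field: value" pair per line, it is easy to turn into a per-node dictionary for comparison or trending. A minimal Python sketch, assuming the output above was saved to dashboard.txt (file name is an assumption):

# Sketch: parse saved `dashboard performance show -instance` output into
# one dictionary of fields per node.
def parse_dashboard(path="dashboard.txt"):
    nodes, current = {}, None
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            field, _, value = line.partition(":")
            field, value = field.strip(), value.strip()
            if field == "Node":
                current = value
                nodes[current] = {}
            elif current is not None:
                nodes[current][field] = value
    return nodes

if __name__ == "__main__":
    for node, fields in parse_dashboard().items():
        print(node, "->", fields.get("Average Latency (usec)"))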
4.
5.
(Partial command output; only scattered values were captured: 0, 764KB, 0%, 412B, 458B, 0%, 30.8KB, 3.41KB.)
6.
(Partial output; only the cluster-network columns were captured)
 cluster  cluster  cluster
    busy     recv     sent
          89.5KB   89.1KB
      0%  8.33KB   8.22KB
      0%  8.45KB   8.34KB
      0%  11.1KB   10.9KB
 cluster  cluster  cluster
    busy     recv     sent
      0%  8.33KB   8.22KB
      0%  29.3KB   29.1KB
      0%  89.5KB   89.1KB
7.
How can you make the output more readable? Specify only the fields that you want to see
with the -fields parameter.
9.
10.
Using the advanced privilege level, start statistics data collection on the objects volume,
aggregate, disk, ext_cache_obj, port, and lif.
set advanced
statistics start -object volume|aggregate|disk|port|lif -sample-id
sample_baseline1
cluster1::> set advanced
Warning: These advanced commands are potentially dangerous; use them
only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
cluster1::*> statistics start -object volume|aggregate|disk|port|lif -sample-id sample_baseline1
Statistics collection is being started for Sample-id: sample_baseline1
11.
12.
13.
14.
15.
16.
17.
18.
After completing the task, turn off the statistics data collection.
statistics stop -sample-id sample_baseline1
cluster1::*> statistics stop -sample-id sample_baseline1
Statistics collection is being stopped for Sample-id: sample_baseline1
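Baseline collection is simply a start, a wait while the workload runs, and a stop followed by statistics show queries. A minimal Python sketch of that sequence driven over SSH; the host name, key-based login, wait time, -confirmations option, and output file are all assumptions for illustration, not part of this exercise.

# Sketch: drive a baseline statistics sample over SSH and save the results.
import subprocess, time

HOST = "admin@cluster1"           # assumed cluster management LIF and user
SAMPLE = "sample_baseline1"
OBJECTS = "volume|aggregate|disk|port|lif"

def ontap(cmd):
    """Run one cluster shell command string over ssh and return its output."""
    return subprocess.run(["ssh", HOST, cmd],
                          capture_output=True, text=True).stdout

# -confirmations off (assumed available) avoids the advanced-privilege y/n prompt.
ontap(f"set -privilege advanced -confirmations off; "
      f"statistics start -object {OBJECTS} -sample-id {SAMPLE}")
time.sleep(300)                   # let the workload run for five minutes
report = ontap(f"set -privilege advanced -confirmations off; "
               f"statistics show -sample-id {SAMPLE}")
ontap(f"set -privilege advanced -confirmations off; "
      f"statistics stop -sample-id {SAMPLE}")

with open("baseline1.txt", "w") as f:
    f.write(report)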
19.
In this task, you use cluster shell commands to monitor cluster performance.
STEP ACTION
5.
7.
Normally you would start data collection for all of the protocols being served by the cluster;
however, for this exercise you use only NFSv3.
In your cluster1 PuTTY session, using the diagnostic privilege level, start statistics data
collection on the objects nfsv3, volume, aggregate, disk, port, lif, and readahead.
NOTE: Diagnostic privilege level commands are required to capture rand_read_req and
seq_read_req.
set diagnostic
statistics start -object nfsv3|volume|aggregate|disk|port|lif|readahead
-sample-id sample_nfs1
cluster1::*> set diagnostic
Warning: These diagnostic commands are for use by NetApp personnel
only.
Do you want to continue? {y|n}: y
cluster1::*> statistics start -object nfsv3|volume|readahead -sample-id
sample_nfs1
Statistics collection is being started for Sample-id: sample_nfs1
9.
Using the analysis commands that you learned in the previous task, analyze the data and record
or save the results. It is recommended that you increase the Lines of scrollback in your
cluster1 PuTTY session to at least 2000.
This information will be used to complete questions later in this task.
HINT: Use the baseline analysis commands.
statistics show-periodic -node cluster1-01 -instance node -iterations 4
statistics show-periodic -node cluster1-02 -instance node -iterations 4
dashboard performance show
dashboard performance show -operations
dashboard performance show -instance
statistics show-periodic -instance latency -iterations 4
statistics show-periodic -node cluster1-01 -instance latency -iterations 4
statistics show-periodic -node cluster1-02 -instance latency -iterations 4
statistics show-periodic -iterations 4
statistics show-periodic -object volume -instance vs2_vol01 -iterations 4
statistics show-periodic -object lif -instance vs2:vs2_cifs_nfs_lif1 -iterations 4
statistics show -object volume -counter *latency -sample-id sample_nfs1
statistics show-periodic -object volume -instance vs2_vol01 -iterations 4
statistics show -object volume -counter *data -sample-id sample_nfs1
statistics show -object aggregate -counter user_reads|user_writes -sample-id
sample_nfs1
statistics show -object disk -counter *latency -sample-id sample_nfs1
statistics show -object port -counter *data -sample-id sample_nfs1
statistics show -object lif -counter *data -sample-id sample_nfs1
The following can be run from the cluster shell as a single command:
row 0; set diagnostic; statistics start -object
nfsv3|volume|aggregate|disk|port|lif|readahead -sample-id sample_nfs1;
statistics show-periodic -node cluster1-01 -instance node -iterations 4;
statistics show-periodic -node cluster1-02 -instance node -iterations 4;
dashboard performance show; dashboard performance show -operations;
dashboard performance show -instance;
statistics show-periodic -instance latency -iterations 4;
statistics show-periodic -node cluster1-01 -instance latency -iterations 4;
statistics show-periodic -node cluster1-02 -instance latency -iterations 4;
statistics show-periodic -iterations 4;
statistics show-periodic -object volume -instance vs2_vol01 -iterations 4;
statistics show-periodic -object lif -instance vs2:vs2_cifs_nfs_lif1 -iterations 4;
statistics show -object volume -counter *latency -sample-id sample_nfs1;
statistics show-periodic -object volume -instance vs2_vol01 -iterations 4;
statistics show -object volume -counter *data -sample-id sample_nfs1;
statistics show -object aggregate -counter user_reads|user_writes -sample-id sample_nfs1;
statistics show -object disk -counter *latency -sample-id sample_nfs1;
statistics show -object port -counter *data -sample-id sample_nfs1;
statistics show -object lif -counter *data -sample-id sample_nfs1;
statistics stop -sample-id sample_nfs1
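If you prefer not to depend on PuTTY scrollback, the same command string can be run non-interactively and captured to a file. A minimal Python sketch; the host, user, abbreviated command string, and output path are assumptions:

# Sketch: run the analysis command string over SSH and save the output.
import subprocess

HOST = "admin@cluster1"   # assumed cluster management LIF and user

# Semicolon-separated command string from this step (abbreviated here).
analysis = ("row 0; set diagnostic; "
            "statistics show -object volume -counter *latency -sample-id sample_nfs1; "
            "statistics show -object disk -counter *latency -sample-id sample_nfs1; "
            "statistics stop -sample-id sample_nfs1")

result = subprocess.run(["ssh", HOST, analysis], capture_output=True, text=True)

with open("sample_nfs1_analysis.txt", "w") as f:
    f.write(result.stdout)
print("Saved", len(result.stdout.splitlines()), "lines of output")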
31.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
did the throughput and I/Os increase when the number of threads increased? Yes, when all of
the other parameters stayed the same.
32.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
which workload had the highest throughput in terms of KBps? 100% read workload, 0%
random, 32-KB block size, 20-MB file size, and four threads.
33.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
which workload had the highest throughput in terms of IOPS? 50% read and 50% write
workload, 100% random, 4-KB block size, 300-MB file size, 32 threads.
34.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
which workload showed the lowest latencies on the storage system? 100% read workload, 0%
random, 32-KB block size, 20-MB file size, and four threads.
35.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
which workload showed the highest disk utilization on the storage system? 50% read and 50%
write workload, 100% random, 4-KB block size, 300-MB file size, 32 threads.
36.
Using the data collected in sample_nfs1 through sample_nfs6 and the analysis done on this data,
which workload showed the highest CPU utilization on the storage system? 0% read workload,
0% random, 32-KB block size, 300-MB file size, run for one hundred seconds, four
threads.