IBM Tivoli Storage Manager
for Windows
Version 6.3.4
Administrator's Guide
SC23-9773-05
Note:
Before using this information and the product it supports, read the information in “Notices” on page 1173.
This edition applies to Version 6.3.4 of IBM Tivoli Storage Manager (product numbers 5608-E01, 5608-E02,
5608-E03), and to all subsequent releases and modifications until otherwise indicated in new editions or technical
newsletters. This edition replaces SC23-9773-04.
© Copyright IBM Corporation 1993, 2013.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Preface  xv
Who should read this guide  xv
Publications  xv
Tivoli Storage Manager publications  xvi
Tivoli Storage FlashCopy Manager publications  xviii
Related hardware publications  xviii
Support information  xviii
Getting technical training  xix
Searching knowledge bases  xix
Contacting IBM Software Support  xxi
Conventions used in this guide  xxiii
New for IBM Tivoli Storage Manager Version 6.3  xxv
Server updates  xxv
New for the server in Version 6.3.4  xxv
New for the server in Version 6.3.3  xxvi
New for the server in Version 6.3.1  xxviii
New for the server in Version 6.3.0  xxviii
Part 1. Tivoli Storage Manager basics  1
Chapter 1. Tivoli Storage Manager overview  3
How client data is stored  5
Data-protection options  8
Data movement to server storage  14
Consolidation of backed-up client data  14
How the server manages storage  15
Device support  15
Data migration through the storage hierarchy  16
Removal of expired data  16
Chapter 2. Tivoli Storage Manager concepts  19
Interfaces to Tivoli Storage Manager  19
Server options  20
Storage configuration and management  20
Disk devices  21
Removable media devices  22
Migrating data from disk to tape  22
Storage pools and volumes  23
Windows cluster environments  24
Management of client operations  25
Managing client nodes  25
Managing client data with policies  28
Schedules for client operations  28
Server maintenance  29
Server-operation management  30
Server script automation  30
Database and recovery-log management  31
Sources of information about the server  31
Tivoli Storage Manager server networks  32
Exporting and importing data  33
Protecting Tivoli Storage Manager and client data  33
Protecting the server  33
Chapter 3. Configuring the server  35
Initial configuration overview  35
Standard configuration  36
Minimal configuration  36
Stopping the initial configuration  37
Performing the initial configuration  37
Initial Configuration wizard and tasks  37
Server Initialization wizard  38
Device Configuration wizard  39
Client Node Configuration wizard  40
Media Labeling wizard  43
Default configuration results  44
Data management policy objects  45
Storage device and media policy objects  45
Objects for Tivoli Storage Manager clients  46
Verifying the initial configuration  47
Performing pre-backup tasks for remote clients  47
Backing up a client  48
Restoring client files or directories  48
Archiving and retrieving files  49
Getting started with administrative tasks  50
Managing Tivoli Storage Manager servers  51
Installing and configuring backup-archive clients  53
Working with schedules on network clients  55
Setting client and server communications options  56
Registering additional administrators  57
Changing Tivoli Storage Manager administrator passwords  58
Part 2. Configuring and managing storage devices  59
Chapter 4. Storage device concepts  61
Road map for key device-related task information  61
Tivoli Storage Manager storage devices  62
Tivoli Storage Manager storage objects  62
Libraries  62
Drives  65
Device class  65
Library, drive, and device-class objects  68
Storage pools and storage-pool volumes  69
Data movers  70
Paths  71
Server objects  71
Tivoli Storage Manager volumes  71
Volume inventory for an automated library  72
Device configurations  73
Devices on local area networks  73
Devices on storage area networks  73
International characters for NetApp file servers  262
File level restore from a directory-level backup image  263
Directory-level backup and restore  263
Directory-level backup and restore for NDMP operations  264
Backing up and restoring with snapshots  264
Backup and restore using NetApp SnapMirror to Tape feature  265
NDMP backup operations using Celerra file server integrated checkpoints  266
Replicating NAS nodes with NDMP backup data  266
Chapter 11. Managing storage pools and volumes  267
Storage pools  268
Primary storage pools  268
Copy storage pools  269
Active-data pools  269
Example: Setting up server storage  271
Defining storage pools  273
Task tips for storage pools  279
Storage pool volumes  280
Random-access storage pool volumes  280
Sequential-access storage pool volumes  281
Preparing volumes for random-access storage pools  282
Preparing volumes for sequential-access storage pools  283
Updating storage pool volumes  285
Access modes for storage pool volumes  286
Storage pool hierarchies  288
Setting up a storage pool hierarchy  288
How the server groups files before storing  290
Where the server stores files  291
Example: How the server determines where to store files in a hierarchy  291
Backing up the data in a storage hierarchy  293
Staging client data from disk to tape  298
Migrating files in a storage pool hierarchy  299
Migrating disk storage pools  300
Migrating sequential-access storage pools  305
The effect of migration on copy storage pools and active-data pools  310
Caching in disk storage pools  310
How the server removes cached files  311
Effect of caching on storage pool statistics  311
Deduplicating data  311
Data deduplication overview  312
Data deduplication limitations  315
Planning guidelines for data deduplication  317
Detecting possible security attacks on the server during client-side deduplication  329
Evaluating data deduplication in a test environment  330
Managing deduplication-enabled storage pools  332
Controlling data deduplication  336
Displaying statistics about server-side data deduplication  343
Displaying statistics about client-side data deduplication  344
Querying about data deduplication in file spaces  347
Scenarios for data deduplication  348
Data deduplication and data compatibility  353
Data deduplication and disaster recovery management  354
Writing data simultaneously to primary, copy, and active-data pools  355
Guidelines for using the simultaneous-write function  356
Limitations that apply to simultaneous-write operations  357
Controlling the simultaneous-write function  359
Simultaneous-write operations: Examples  362
Planning simultaneous-write operations  376
Simultaneous-write function as part of a backup strategy: Example  380
Keeping client files together using collocation  381
The effects of collocation on operations  382
How the server selects volumes with collocation enabled  384
How the server selects volumes with collocation disabled  386
Collocation on or off settings  386
Collocation of copy storage pools and active-data pools  387
Planning for and enabling collocation  388
Reclaiming space in sequential-access storage pools  390
How Tivoli Storage Manager reclamation works  390
Reclamation thresholds  392
Reclaiming volumes with the most reclaimable space  392
Starting reclamation manually or in a schedule  393
Optimizing drive usage using multiple concurrent reclamation processes  393
Reclaiming volumes in a storage pool with one drive  394
Reducing the time to reclaim tape volumes with high capacity  395
Reclamation of write-once, read-many (WORM) media  395
Controlling reclamation of virtual volumes  396
Reclaiming copy storage pools and active-data pools  396
How collocation affects reclamation  400
Estimating space needs for storage pools  401
Estimating space requirements in random-access storage pools  401
Estimating space needs in sequential-access storage pools  403
Monitoring storage-pool and volume usage  403
Monitoring space available in a storage pool  403
Monitoring the use of storage pool volumes  406
Monitoring migration processes  414
Monitoring the use of cache space on disk storage  416
Obtaining information about the use of storage space  417
Moving data from one volume to another volume  421
Data movement within the same storage pool  422
Data movement to a different storage pool  422
Logical volume backup  516
Archive  517
Automatic migration from a client node  517
How client migration works with backup and archive  518
Creating your own policies  518
Example: sample policy objects  519
Defining and updating a policy domain  520
Defining and updating a policy set  522
Defining and updating a management class  523
Defining and updating a backup copy group  524
Defining and updating an archive copy group  530
Assigning a default management class  532
Validating and activating a policy set  532
Assigning client nodes to a policy domain  534
Running expiration processing to delete expired files  535
Running expiration processing automatically  535
Using commands to control expiration processing  536
Additional expiration processing with disaster recovery manager  536
Protection and expiration of archive data  537
Data retention protection  537
Deletion hold  538
Protecting data using the NetApp SnapLock licensed feature  539
Reclamation and the SnapLock feature  540
Set up SnapLock volumes as Tivoli Storage Manager WORM FILE volumes  544
Policy configuration scenarios  545
Configuring policy for direct-to-tape backups  545
Configuring policy for Tivoli Storage Manager application clients  546
Policy for logical volume backups  546
Configuring policy for NDMP operations  548
Configuring policy for LAN-free data movement  549
Policy for Tivoli Storage Manager servers as clients  551
Setting policy to enable point-in-time restore for clients  551
Distributing policy using enterprise configuration  552
Querying policy  552
Querying copy groups  553
Querying management classes  553
Querying policy sets  554
Querying policy domains  554
Deleting policy  555
Deleting copy groups  555
Deleting management classes  556
Deleting policy sets  556
Deleting policy domains  556
Chapter 15. Managing data for client nodes  559
Validating a node's data  559
Performance considerations for data validation  560
Validating a node's data during a client session  560
Encrypting data on tape  560
Choosing an encryption method  561
Changing your encryption method and hardware configuration  562
Securing sensitive client data  563
Setting up shredding  564
Ensuring that shredding is enforced  565
Creating and using client backup sets  566
Generating client backup sets on the server  568
Restoring backup sets from a backup-archive client  572
Moving backup sets to other servers  572
Managing client backup sets  573
Enabling clients to use subfile backup  576
Setting up clients to use subfile backup  577
Managing subfile backups  577
Optimizing restore operations for clients  578
Environment considerations  579
Restoring entire file systems  580
Restoring parts of file systems  581
Restoring databases for applications  582
Restoring files to a point-in-time  582
Concepts for client restore operations  582
Archiving data  585
Archive operations overview  585
Managing storage usage for archives  586
Chapter 16. Scheduling operations for client nodes  589
Prerequisites to scheduling operations  589
Scheduling a client operation  590
Creating Tivoli Storage Manager schedules  590
Associating client nodes with schedules  591
Starting the scheduler on the clients  591
Displaying schedule information  592
Checking the status of scheduled operations  592
Creating schedules for running command files  593
Updating the client options file to automatically generate a new password  594
Configuring the scheduler to run under the site-server account  594
Overview of the Tivoli Storage Manager scheduler running as a Windows service  594
Chapter 17. Managing schedules for client nodes  597
Managing IBM Tivoli Storage Manager schedules  597
Adding new schedules  597
Copying existing schedules  598
Modifying schedules  598
Deleting schedules  599
Displaying information about schedules  599
Managing node associations with schedules  600
Adding new nodes to existing schedules  600
Moving nodes from one schedule to another  600
Displaying nodes associated with schedules  600
Removing nodes from schedules  601
Managing event records  601
Displaying information about scheduled events  601
Managing event records in the server database  603
Managing the throughput of scheduled operations  603
Modifying the default scheduling mode  604
| Chapter 18. Managing servers with the Operations Center  615
| Opening the Operations Center  615
| Getting started with your tasks  616
| Viewing the Operations Center on a mobile device  617
| Administrator IDs and passwords  617
| Hub and spoke servers  618
| Adding spoke servers  619
| Restarting the initial configuration wizard  620
| Stopping and starting the web server  621
Chapter 19. Managing servers with the Administration Center  623
Using the Administration Center  623
Starting and stopping the Administration Center  626
Functions in the Administration Center supported only by command line  626
Protecting the Administration Center  629
Backing up the Administration Center  629
Restoring the Administration Center  629
Chapter 20. Managing server operations  631
Licensing IBM Tivoli Storage Manager  631
Registering licensed features  632
Monitoring licenses  633
Role of processor value units in assessing licensing requirements  634
Estimating processor value units  637
Collecting processor value unit information in a VMware host environment  640
Working with the IBM Tivoli Storage Manager Server and Active Directory  640
Configuring the Active Directory schema  641
Starting the Tivoli Storage Manager server  643
Starting the server on Windows  644
Stand-alone mode for server startup  644
Starting the Tivoli Storage Manager server as a service  645
Starting the IBM Tivoli Storage Manager Server Console  647
Halting the server  647
Moving the Tivoli Storage Manager server to another system  648
Date and time on the server  649
Stopping the Tivoli Storage Manager device driver  649
Managing server processes  650
Requesting information about server processes  651
Chapter 21. Automating server operations  659
Automating a basic administrative command schedule  660
Defining the schedule  660
Verifying the schedule  661
Tailoring schedules  661
Using classic and enhanced command schedules  663
Copying schedules  664
Deleting schedules  664
Managing scheduled event records  664
Querying events  665
Removing event records from the database  665
Tivoli Storage Manager server scripts  666
Defining a server script  666
Managing server scripts  672
Running a server script  675
Using macros  676
Writing commands in a macro  677
Writing comments in a macro  677
Using continuation characters  678
Using substitution variables in a macro  678
Running a macro  679
Command processing in a macro  679
Chapter 22. Managing the database and recovery log  681
Database and recovery log overview  681
Database: Overview  682
Connecting the server to the database with TCP/IP  684
Recovery log  684
Setting the user data limit for the database  688
Disk space requirements for the server database and recovery log  688
Capacity planning  689
Estimating database space requirements  689
Estimating recovery log space requirements  693
Monitoring space utilization for the database and recovery logs  706
Monitoring the database and recovery log  708
Increasing the size of the database  709
Reducing the size of the database  710
Scheduling table and index reorganization  710
Restrictions to table and index reorganization  711
| Scheduling table or index reorganization  712
Increasing the size of the active log  712
Reducing the size of the active log  713
Moving the database and recovery log on a server  713
Moving both the database and recovery log  713
Moving only the database  714
| Moving only the active log, archive log, or archive failover log  715
Specifying alternative locations for database log files  716
Specifying an alternative location with the ARCHFAILOVERLOGDIRECTORY server option or parameter  716
Specifying an alternative location with the ARCHLOGDIRECTORY server option or parameter  717
Specifying the location of RstDbLog using the RECOVERYDIR parameter  717
Adding optional logs after server initialization  718
Transaction processing  718
Files moved as a group between client and server  719
Chapter 23. Managing a network of Tivoli Storage Manager servers  721
Concepts for managing server networks  721
Enterprise configuration  722
Command routing  723
Central monitoring for the Tivoli Storage Manager server  723
Data storage on another server  724
Examples: management of multiple Tivoli Storage Manager servers  724
Enterprise-administration planning  726
Setting up communications among servers  726
Setting up communications for enterprise configuration and enterprise event logging  726
Setting up communications for command routing  730
Updating and deleting servers  734
Setting up enterprise configurations  735
Enterprise configuration scenario  736
Creating the default profile on a configuration manager  740
Creating and changing configuration profiles  740
Getting information about profiles  748
Subscribing to a profile  750
Refreshing configuration information  754
Managing problems with configuration refresh  754
Returning managed objects to local control  755
Setting up administrators for the servers  755
Managing problems with synchronization of profiles  756
Switching a managed server to a different configuration manager  756
Deleting subscribers from a configuration manager  757
Renaming a managed server  757
Completing tasks on multiple servers  757
Working with multiple servers by using a web interface  758
Routing commands  758
Setting up server groups  761
Querying server availability  763
Using virtual volumes to store data on another server  763
Setting up source and target servers for virtual volumes  765
Performance limitations for virtual volume operations  766
Performing operations at the source server  767
Reconciling virtual volumes and archive files  769
Chapter 24. Exporting and importing data  771
Reviewing data that can be exported and imported  771
Exporting restrictions  772
Deciding what information to export  772
Deciding when to export  773
Exporting data directly to another server  774
Options to consider before exporting  774
Preparing to export to another server for immediate import  778
Monitoring the server-to-server export process  780
Exporting administrator information to another server  780
Exporting client node information to another server  781
Exporting policy information to another server  782
Exporting server data to another server  782
Exporting and importing data using sequential media volumes  782
Using preview before exporting or importing data  782
Planning for sequential media used to export data  783
Exporting tasks  784
Importing data from sequential media volumes  787
Monitoring export and import processes  798
Exporting and importing data from virtual volumes  801
Part 5. Monitoring operations  803
Chapter 25. Daily monitoring tasks  805
Monitoring operations using the command line  806
Monitoring your server processes daily  806
Monitoring your database daily  807
Monitoring disk storage pools daily  810
Monitoring sequential access storage pools daily  811
Monitoring scheduled operations daily  814
Monitoring operations daily with Tivoli Monitoring for Tivoli Storage Manager  815
| Monitoring operations daily using the Operations Center  817
Chapter 26. Basic monitoring methods  819
Using IBM Tivoli Storage Manager queries to display information  819
Requesting information about IBM Tivoli Storage Manager definitions  819
Requesting information about client sessions  820
Requesting information about server processes  821
Requesting information about server settings  822
Querying server options  822
Chapter 30. Reporting and monitoring with Tivoli Monitoring for Tivoli Storage Manager  839
Types of information to monitor with Tivoli Enterprise Portal workspaces  841
Monitoring Tivoli Storage Manager real-time data  844
Viewing historical data and running reports  845
Cognos Business Intelligence  846
Cognos status and trend reports  846
Opening the Cognos Report Studio portal  852
Creating a custom Cognos report  853
Opening or modifying an existing Cognos report  854
Running a Cognos report  854
Scheduling Cognos reports to be emailed  855
| Sharing Cognos Reports  856
BIRT Client reports  861
BIRT Server reports  863
Modifying the IBM Tivoli Monitoring environment file to customize agent data collection  865
IBM Tivoli Monitoring environment file reporting queries  866
Backing up and restoring Tivoli Monitoring for Tivoli Storage Manager  868
Backing up Tivoli Monitoring for Tivoli Storage Manager  869
Restoring Tivoli Monitoring for Tivoli Storage Manager  878
Chapter 31. Monitoring client backup and restore operations  883
Configuring the client performance monitor  883
Starting and stopping the client performance monitor  884
Chapter 32. Logging IBM Tivoli Storage Manager events to receivers  885
Enabling and disabling events  886
Part 6. Protecting the server  905
Chapter 33. Managing Tivoli Storage Manager security  907
Securing communications  907
Setting up TLS  908
Securing the server console  918
Administrative authority and privilege classes  918
Managing Tivoli Storage Manager administrator IDs  920
Managing access to the server and clients  925
Managing passwords and logon procedures  926
Configuring a directory server for password authentication  928
Setting the policy for an LDAP-authenticated password  929
Configuring the Tivoli Storage Manager server to authenticate passwords with an LDAP directory server  930
Registering nodes and administrator IDs to authenticate passwords with an LDAP directory server  931
Updating nodes and administrator IDs to authenticate passwords with a directory server  932
Determining which nodes and administrator IDs are configured to authenticate with an LDAP server  933
Modifying the default password expiration period for passwords that are managed by the Tivoli Storage Manager server  933
| Scenarios for authenticating passwords  934
Setting a limit for invalid password attempts  936
Setting a minimum length for a password  937
Disabling the default password authentication  937
Enabling unified logon with backup-archive clients  938
Chapter 34. Protecting and recovering the server infrastructure and client data  941
Protecting the database and infrastructure setup files  942
Backing up the server database  942
Protecting infrastructure setup files  948
Protecting client data  953
Protecting the data that is in primary storage pools  953
Auditing storage pool volumes  958
Fixing damaged files  967
Scenario: Protecting the database and storage pools  968
Recovering the database and client data  970
Restoring the database  970
Restoring storage pools and storage pool volumes  976
Restoring to a point-in-time in a shared library environment  983
Restoring to a point-in-time a library manager server  983
Restoring to a point-in-time a library client server  983
Example: Recovering to a point-in-time  984
Chapter 35. Replicating client node data  987
Source and target node-replication servers  988
Replication server configurations  988
Policy management for node replication  989
Node replication processing  990
Replication rules  990
Replication state  994
Replication mode  997
Replication of deduplicated data  998
Client node attributes that are updated during replication  999
Node replication restrictions  1000
Task tips for node replication  1002
Change replication rules  1002
Add and remove client nodes for replication  1002
Manage replication servers  1003
Validate a configuration and preview results  1003
Manage replication processing  1004
Monitor replication processing and verify results  1005
Planning for node replication  1005
Determining server database requirements for node replication  1007
Estimating the total amount of data to be replicated  1007
Estimating network bandwidth required for replication  1008
Calculating the time that is required for replication  1008
Selecting a method for the initial replication  1009
Scheduling incremental replication after the initial replication  1011
Setting up the default replication configuration  1012
Step 1: Setting up server-to-server communications  1014
Step 2: Specifying a target replication server  1016
Step 3: Configuring client nodes for replication  1016
Customizing a node replication configuration  1018
Changing replication rules  1018
Scenario: Converting to node replication from import and export operations  1026
Adding and removing client nodes for replication  1027
Managing source and target replication servers  1030
Verifying a node replication setup before processing  1032
Validating a replication configuration  1032
Previewing node replication results  1033
Managing data replication  1033
Replicating data by command  1034
Controlling throughput for node replication  1038
Disabling and enabling node replication  1040
Purging replicated data in a file space  1044
Replicating client node data after a database restore  1045
Canceling replication processes  1046
Monitoring node replication processing and verifying results  1046
Displaying information about node replication settings  1046
Displaying information about node replication processes  1047
Measuring the effectiveness of a replication configuration  1048
Measuring the effects of data deduplication on node replication processing  1049
Retaining replication records  1049
Recovering and storing client data after a disaster  1050
Restoring, retrieving, and recalling data from a target replication server  1050
Converting client nodes for store operations on a target replication server  1050
Removing a node replication configuration  1051
Chapter 36. Disaster recovery manager  1053
Querying defaults for the disaster recovery plan file  1054
Specifying defaults for the disaster recovery plan file  1054
Specifying defaults for offsite recovery media management  1057
Specifying recovery instructions for your site  1059
Specifying information about your server and client node machines  1061
Specifying recovery media for client machines  1065
Creating and storing the disaster recovery plan  1066
Storing the disaster recovery plan locally  1067
Storing the disaster recovery plan on a target server  1067
Disaster recovery plan environmental considerations  1068
Managing disaster recovery plan files stored on target servers  1070
Preface
IBM® Tivoli® Storage Manager is a client/server program that provides storage
management solutions to customers in a multi-vendor computer environment. IBM
Tivoli Storage Manager provides an automated, centrally scheduled,
policy-managed backup, archive, and space-management facility for file servers
and workstations.
You should be familiar with the operating system on which the server resides and
the communication protocols required for the client/server environment. You also
need to understand the storage management practices of your organization, such
as how you are currently backing up workstation files and how you are using
storage devices.
Publications
Publications for the IBM Tivoli Storage Manager family of products are available
online. The Tivoli Storage Manager product family includes IBM Tivoli Storage
FlashCopy® Manager, IBM Tivoli Storage Manager for Space Management, IBM
Tivoli Storage Manager for Databases, and several other storage management
products from IBM Tivoli.
To search across all publications, use the search function of the appropriate
Tivoli Storage Manager information center:
v Version 6.3 information center: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3
v Version 6.4 information center: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4
You can download PDF versions of publications from the Tivoli Storage Manager
information center or from the IBM Publications Center at http://www.ibm.com/
shop/publications/order/.
You can also order some related publications from the IBM Publications Center
website at http://www.ibm.com/shop/publications/order/. The website provides
information about ordering publications from countries other than the United
States. In the United States, you can order publications by calling 1-800-879-2755.
Table 5. IBM Tivoli Storage Manager troubleshooting and tuning publications (continued)
Publication title                                                                          Order number
IBM Tivoli Storage Manager for Enterprise Resource Planning: Data Protection for SAP Messages    SC27-4016
Note: You can find information about IBM System Storage® Archive Manager at
the Tivoli Storage Manager v6.3.0 information center.
For additional information on hardware, see the resource library for tape products
at http://www.ibm.com/systems/storage/tape/library.html.
Support information
You can find support information for IBM products from various sources.
Go to the following websites to sign up for training, ask questions, and interact
with others who use IBM storage products.
Tivoli software training and certification
Choose from instructor-led, online classroom training, self-paced Web
classes, Tivoli certification preparation, and other training options at
http://www.ibm.com/software/tivoli/education/
Tivoli Support Technical Exchange
Technical experts share their knowledge and answer your questions in
webcasts at http://www.ibm.com/software/sysmgmt/products/support/
supp_tech_exch.html.
Storage Management community
Interact with others who use IBM storage management products at
http://www.ibm.com/developerworks/servicemanagement/sm/
index.html
Global Tivoli User Community
Share information and learn from other Tivoli users throughout the world
at http://www.tivoli-ug.org/.
IBM Education Assistant
View short "how to" recordings designed to help you use IBM software
products more effectively at http://publib.boulder.ibm.com/infocenter/
ieduasst/tivv1r0/index.jsp
You can search for information without signing in. Sign in using your IBM ID and
password if you want to customize the site based on your product usage and
information needs. If you do not already have an IBM ID and password, click Sign
in at the top of the page and follow the instructions to register.
From the support website, you can search various resources including:
v IBM technotes.
v IBM downloads.
v IBM Redbooks® publications.
v IBM Authorized Program Analysis Reports (APARs). Select the product and click
Downloads to search the APAR list.
If you still cannot find a solution to the problem, you can search forums and
newsgroups on the Internet for the latest information that might help you find
problem resolution.
An independent user discussion list, ADSM-L, is hosted by Marist College. You can
subscribe by sending an email to [email protected]. The body of the message
must contain the following text: SUBSCRIBE ADSM-L your_first_name
your_family_name.
To share your experiences and learn from others in the Tivoli Storage Manager and
Tivoli Storage FlashCopy Manager user communities, go to Service Management
Connect (http://www.ibm.com/developerworks/servicemanagement/sm/
index.html). From there you can find links to product wikis and user communities.
To learn about which products are supported, go to the IBM Support Assistant
download web page at http://www.ibm.com/software/support/isa/
download.html.
IBM Support Assistant helps you gather support information when you must open
a problem management record (PMR), which you can then use to track the
problem. The product-specific plug-in modules provide you with the following
resources:
v Support links
v Education links
v Ability to submit problem management reports
You can find more information at the IBM Support Assistant website:
http://www.ibm.com/software/support/isa/
You can also install the stand-alone IBM Support Assistant application on any
workstation. You can then enhance the application by installing product-specific
plug-in modules for the IBM products that you use. Find add-ons for specific
products at http://www.ibm.com/support/docview.wss?uid=swg27012689.
You can determine what fixes are available by checking the IBM software support
website at http://www.ibm.com/support/entry/portal/.
v If you previously customized the site based on your product usage:
1. Click the link for your product, or a component for which you want to find a
fix.
2. Click Downloads, and then click Fixes by version.
v If you have not customized the site based on your product usage, click
Downloads and search for your product.
To obtain help from IBM Software Support, complete the following steps:
1. Ensure that you have completed the following prerequisites:
a. Set up a subscription and support contract.
b. Determine the business impact of your problem.
c. Describe your problem and gather background information.
2. Follow the instructions in “Submitting the problem to IBM Software Support”
on page xxii.
For IBM distributed software products (including, but not limited to, IBM Tivoli,
Lotus®, and Rational® products, as well as IBM DB2® and IBM WebSphere®
products that run on Microsoft Windows or on operating systems such as AIX or
Linux), enroll in IBM Passport Advantage® in one of the following ways:
v Online: Go to the Passport Advantage website at http://www.ibm.com/
software/lotus/passportadvantage/, click How to enroll, and follow the
instructions.
v By telephone: You can call 1-800-IBMSERV (1-800-426-7378) in the United States.
For the telephone number to call in your country, go to the IBM Software
Support Handbook web page at http://www14.software.ibm.com/webapp/
set2/sas/f/handbook/home.html and click Contacts.
Determining the business impact
When you report a problem to IBM, you are asked to supply a severity level.
Therefore, you must understand and assess the business impact of the problem
you are reporting.
Severity 1   Critical business impact: You are unable to use the program, resulting in a critical impact on operations. This condition requires an immediate solution.
Severity 2   Significant business impact: The program is usable but is severely limited.
Severity 3   Some business impact: The program is usable with less significant features (not critical to operations) unavailable.
Severity 4   Minimal business impact: The problem causes little impact on operations, or a reasonable circumvention to the problem has been implemented.
In the usage and descriptions for administrative commands, the term characters
corresponds to the number of bytes available to store an item. For languages in
which it takes a single byte to represent a displayable character, the character to
byte ratio is 1 to 1. However, for DBCS and other multi-byte languages, the
reference to characters refers only to the number of bytes available for the item and
may represent fewer actual characters.
New for IBM Tivoli Storage Manager Version 6.3
Many features in the Tivoli Storage Manager Version 6.3 server are new for
previous Tivoli Storage Manager users.
Server updates
New features and other changes are available in the IBM Tivoli Storage Manager
V6.3 server. Technical updates since the previous edition are marked with a vertical
bar ( | ) in the left margin.
The server that is included with the Tivoli Storage Manager and IBM Tivoli Storage
Manager Extended Edition V6.4 products is at the V6.3.4 level. The V6.3.4 server is
also available for download separately, as a fix pack for current users of V6.3.
| The V6.4.1 Operations Center includes an Overview page that shows the
| interaction of Tivoli Storage Manager servers and clients. You can use the
| Operations Center to identify potential issues at a glance, manage alerts, and
| access the Tivoli Storage Manager command line. The Administration Center
| interface is also available, but the Operations Center is the preferred monitoring
| interface.
| Related tasks:
| Chapter 18, “Managing servers with the Operations Center,” on page 615
| The Agent Log workspace is enhanced to display whether the monitored servers
| are up and running.
| Pruning values are now automatically configured during new installations. If you
| upgraded the application, you must manually configure the pruning settings to
| periodically remove data from the WAREHOUS database.
The server that is included with the Tivoli Storage Manager and IBM Tivoli Storage
Manager Extended Edition V6.4 products is at the V6.3.3 level. The V6.3.3 server is
also available for download separately, as a fix pack for current users of V6.3.
LDAP-authenticated passwords
IBM Tivoli Storage Manager server V6.3.3 can use an LDAP directory server to
authenticate passwords. LDAP-authenticated passwords give you an extra level of
security: they are case-sensitive, they support advanced password rule
enforcement, and they are authenticated on a centralized server.
The two methods of authentication are LDAP and LOCAL. LOCAL means that the
password is authenticated with the Tivoli Storage Manager server.
Passwords that are authenticated with the Tivoli Storage Manager server are not
case-sensitive. All passwords can be composed of characters from the following
list:
a b c d e f g h i j k l m n o p q r s t u v w x y z
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
0 1 2 3 4 5 6 7 8 9
~ ! @ # $ % ^ & * _ - + = ` | ( ) { } [ ] : ; < > , . ? /
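For example, a node or an administrator ID might be switched to LDAP authentication with commands such as the following sketch; the node name, administrator name, and password are hypothetical, and the AUTHENTICATION parameter is described in the security chapter:
register node acctspay Xy7#pass authentication=ldap
update admin susan authentication=ldap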
You can use logical block protection only with the following types of drives and
media:
v IBM LTO5 and later
v IBM 3592 Generation 3 drives, and later, with 3592 Generation 2 media, and later
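As a hedged illustration, logical block protection is requested at the device-class level; the following sketch assumes the LBPROTECT parameter and an LTO-5 configuration, with hypothetical object names:
define devclass lto5class devtype=lto library=lib1 format=ultrium5c lbprotect=readwrite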
Node replication
Node replication is the process of incrementally copying or replicating client node
data from one server of Tivoli Storage Manager to another server of Tivoli Storage
Manager for the purpose of disaster recovery.
The server from which client node data is replicated is called a source replication
server. The server to which client node data is replicated is called a target replication
server.
Node replication avoids the logistics and security exposure of physically moving
tape media to a remote location. If a disaster occurs and the source replication
server is unavailable, clients can recover their data from the target replication
server.
If you use the export and import functions of Tivoli Storage Manager to store client
node data on a disaster-recovery server, you can convert the nodes to replicating
nodes. When replicating data, you can also use data deduplication to reduce
bandwidth and storage requirements.
Tivoli Storage Manager V6.3 servers can be used for node replication. However,
you can replicate data for client nodes that are at V6.3 or earlier. You can also
replicate data that was stored on a Tivoli Storage Manager V6.2 or earlier server
before you upgraded it to V6.3.
You cannot replicate nodes from a Tivoli Storage Manager V6.3.3 server to a server
that is running on an earlier level of Tivoli Storage Manager.
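For example, on the source replication server, a minimal setup might look like the following sketch, assuming that the target server is already defined with the DEFINE SERVER command; the server and node names are hypothetical:
set replserver drtarget
update node client1 replstate=enabled
replicate node client1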
Related tasks:
Chapter 35, “Replicating client node data,” on page 987
You can deploy backup-archive clients on operating systems other than Windows
from all releases at V5.5 or later. The backup-archive clients can be updated to
any later version, release, modification, or fix level. You can coordinate the
updates to each backup-archive client from the Administration Center.
During restore operations, the Tivoli Storage Manager server attempts to use the
same number of data streams that you specified for the backup operation. For
example, suppose that you specify four data streams for a database backup
operation. During a restore operation, the server attempts to use four drives. If one
drive is offline and unavailable, the server uses three drives for the restore
operation.
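For example, assuming a device class named DBBACK, a multistream database backup might be requested with the NUMSTREAMS parameter; this is an illustrative sketch, not a complete procedure:
backup db devclass=dbback type=full numstreams=4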
Updates to Tivoli Monitoring for Tivoli Storage Manager include the following
items:
v Cognos Business Intelligence V8 is an integrated business intelligence suite that
is provided as part of Tivoli Common Reporting. Tivoli Common Reporting is
included in the Administration Center installation when you select the Tivoli
Common Reporting component. See Customizing reports with Cognos Business
Intelligence, in the Monitoring operations section of the Administrator's Guide for
details. All of the information regarding client and server reports can also be
found in that section.
v The installation process has been improved to include a prerequisite checker,
and now performs all installation configuration tasks automatically.
v A customizable dashboard workspace has been added to display many
commonly viewed items in a single view. With the default setting, the dashboard
displays data about the storage space used by node; unsuccessful client and
server schedules; and details about storage pools, drives, and activity log error
messages.
v You can include multiple servers in a single report. Reports have been enhanced
to refine the accuracy of the data being displayed.
v New Tivoli Enterprise Portal workspaces are: activity log, agent log, updates to
client node status, drives, libraries, occupancy, PVU details, and replication
status and details.
v New client reports are available: storage pool media details, storage summary
details, replication details, replication growth, and replication summary.
v New server reports are available: activity log details, server throughput, and an
updated server throughput report for data collected by agents earlier than
version 6.3.
By using the new QUERY PVUESTIMATE command, you can generate reports that
estimate the number of server devices and client devices managed by the Tivoli
Storage Manager server. You can also view PVU information on a per-node basis.
These reports are not legally binding, but provide a starting point for determining
license requirements. Alternatively, you can view PVU information in the
Administration Center. The Administration Center provides summaries of client
devices, server devices, and estimated PVUs, and more detailed information.
For a detailed report, issue the SQL SELECT * FROM PVUESTIMATE_DETAILS command.
This command extracts information at the node level. This data can be exported to
a spreadsheet and modified to more accurately represent the system environment.
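For example, both reports can be generated from the administrative command line:
query pvuestimate
select * from pvuestimate_details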
For more information about PVU calculations and their use for licensing purposes,
see the topic describing the role of PVUs in the Administrator's Guide.
Prerequisite checker
Tivoli Storage Manager Version 6.3 includes a prerequisite checker, a tool that can
be run before starting the Tivoli Storage Manager installation.
The prerequisite checker verifies requirements for the Tivoli Storage Manager
server, the Administration Center, and Tivoli Monitoring for Tivoli Storage
Manager. The prerequisite checker verifies the operating system, the amount of free
disk space, the required memory for the server, and other prerequisites. The tool
presents a summary of results, informs you about changes that are required in
your environment before installation, and creates required directories. In this way,
the prerequisite checker can help simplify the installation process.
For more information, see the section about running the prerequisite checker in the
Installation Guide.
With enhancements available in Version 6.3, you can define a library as a virtual
tape library (VTL) to Tivoli Storage Manager.
VTLs primarily use disk subsystems to internally store data. Because they do not
use tape media, you can exceed the capabilities of a physical tape library when
using VTL storage. Using a VTL, you can define many volumes and drives, which
provides greater flexibility in the storage environment and increases
productivity by allowing more simultaneous mounts and tape I/O.
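For example, a library can be defined as a VTL rather than as a SCSI library; the library name in this sketch is hypothetical:
define library vtl1 libtype=vtl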
The Tivoli Storage Manager V6.3 server accesses client data by using a storage
device attached to z/OS. The storage device is made available by IBM Tivoli
Storage Manager for z/OS Media.
In addition, Tivoli Storage Manager for z/OS Media facilitates access to Virtual
Storage Access Method (VSAM) linear data sets on z/OS by using an enhanced
sequential FILE storage method.
The CHECKTAPEPOS server option allows the Tivoli Storage Manager server to check
the validity and consistency of data block positions on tape.
Enhancements to this option enable a drive to check for data overwrite problems
before each WRITE operation and allow Tivoli Storage Manager to reposition tapes
to the correct location and continue to write data. Use the CHECKTAPEPOS option
with IBM LTO Generation 5 drives.
Note: You can enable append-only mode for IBM LTO Generation 5 and later
drives, and for any drives that support this feature.
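For example, the option is set in the server options file (dsmserv.opt); the value shown here simply enables checking and is only an illustration:
CHECKTAPEPOS Yes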
In Tivoli Storage Manager Version 6.3, persistent reserve is enabled for drives and
driver levels that support the feature.
The Tivoli Storage Manager Administration Center uses Tivoli Integrated Portal for
its graphical user interface (GUI). With Tivoli Integrated Portal V2.1, you can now
access the Administration Center with Internet Explorer 8 and Mozilla Firefox
3.5. All browsers that you used with Tivoli Integrated Portal V1.1.1 and later can
be used with this latest version.
When you install Tivoli Integrated Portal V2.1, installing Tivoli Common Reporting,
the embedded security service, or the time scheduling service is optional. These
features can be added and registered with Tivoli Integrated Portal V2.1 at a later
time.
Related concepts:
Chapter 19, “Managing servers with the Administration Center,” on page 623
With enhancements to the Administration Center, you can now specify server
event-based archive settings using the Policy Domain and Management Class
wizards.
If you set an archive retention period for an object through the server, you can
update these settings using the Administration Center Management Class
notebook.
Setting an archive retention period ensures that objects are not deleted from the
Tivoli Storage Manager server until policy-based retention requirements for that
object are satisfied.
With the new client performance monitor function, you have the capability to
gather and analyze performance data about backup and restore operations for an
IBM Tivoli Storage Manager client.
The client performance monitor function is accessed from the Tivoli Storage
Manager Administration Center and uses data that is collected by the API. You can
view performance information about processor, disk, and network utilization, and
performance data that relates to data transfer rates and data compression. You can
analyze data throughput rates at any time during a backup or restore operation.
Also, you can use the performance information to analyze processor, disk, or
network performance bottlenecks.
This feature is useful, for example, if you have a planned network outage that
might affect communication between a source and a target replication server. To
prevent replication failures, you can disable outbound sessions from the source
replication server before the outage. After communications have been reestablished,
you can resume replication by enabling outbound sessions.
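A possible sketch of this approach, assuming that the DISABLE SESSIONS and ENABLE SESSIONS commands accept a server name and a DIRECTION parameter at this server level; the target server name is hypothetical:
disable sessions server drtarget direction=outbound
enable sessions server drtarget direction=outbound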
To display help for the DEFINE DEVCLASS command for 3570 device classes, type:
help 3.13.10.1
As in previous releases, you can use this method to display help for commands
that have unique names, such as REGISTER NODE:
To display help for the REGISTER NODE command, you can type:
help 3.46.1
You can also type help commandName, where commandName is the name of the
server command for which you want information:
help register node
For Tivoli Storage Manager V6.3 and later, to use SSL with self-signed certificates,
use the SSLTLS12 option after you distribute new self-signed certificates to all V6.3
backup-archive clients. You can use certificates from previous server versions, but
you then cannot use TLS 1.2.
For Tivoli Storage Manager V6.3.3 server, TLS/SSL is available for LAN-free and
server-to-server functions.
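As one hedged example, a server that accepts SSL or TLS sessions might include options like the following in dsmserv.opt; the port number is only an illustration:
SSLTCPPORT 1543
SSLTLS12 YES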
For information about supported operating systems for clients, see the IBM Tivoli
Storage Manager website at http://www.ibm.com/support/entry/portal/
Overview/Software/Tivoli/Tivoli_Storage_Manager.
Client programs such as the backup-archive client and the HSM client (space
manager) are installed on systems that are connected through a LAN and are
registered as client nodes. From these client nodes, users can back up, archive, or
migrate files to the server.
The following sections present key concepts and information about IBM Tivoli
Storage Manager. The sections describe how Tivoli Storage Manager manages client
files based on information provided in administrator-defined policies, and manages
devices and media based on information provided in administrator-defined Tivoli
Storage Manager storage objects.
The final section gives an overview of tasks for the administrator of the server,
including options for configuring the server and how to maintain the server.
You can have multiple policies and assign the different policies as needed to
specific clients, or even to specific files. Policy assigns a location in server storage
where data is initially stored. Server storage is divided into storage pools that are
groups of storage volumes.
When you install Tivoli Storage Manager, you have a default policy that you can
use. For details about this default policy, see “Reviewing the standard policy” on
page 499. You can modify this policy and define additional policies.
Clients use Tivoli Storage Manager to store data for any of the following purposes:
Backup and restore
The backup process copies data from client workstations to server storage
to ensure against loss of data that is regularly changed. The server retains
versions of a file according to policy, and replaces older versions of the file
with newer versions.
Figure 1 on page 7 shows how policy is part of the Tivoli Storage Manager process
for storing client data.
Figure 1. How IBM Tivoli Storage Manager Controls Backup, Archive, and Migration
Processes
Files remain in server storage until they expire and expiration processing occurs, or
until they are deleted from server storage. A file expires because of criteria that are
set in policy. For example, the criteria include the number of versions allowed for a
file and the number of days that have elapsed since a file was deleted from the
client's file system. If data retention protection is activated, an archive object cannot
be inadvertently deleted.
For information on managing the database, see Chapter 22, “Managing the
database and recovery log,” on page 681.
For information about storage pools and storage pool volumes, see Chapter 11,
“Managing storage pools and volumes,” on page 267.
For information about event-based policy, deletion hold, and data retention
protection, see Chapter 14, “Implementing policies for client data,” on page 497.
Data-protection options
Tivoli Storage Manager provides a variety of backup and archive operations,
allowing you to select the right protection for the situation.
Schedule the backups of client data to help enforce the data management policy
that you establish. If you schedule the backups, rather than rely on the clients to
perform the backups, the policy that you establish is followed more consistently.
See Chapter 16, “Scheduling operations for client nodes,” on page 589.
The standard backup method that Tivoli Storage Manager uses is called progressive
incremental backup. It is a unique and efficient method for backup. See “Progressive
incremental backups” on page 13.
Table 8 summarizes the client operations that are available. In all cases, the server
tracks the location of the backup data in its database. Policy that you set
determines how the backup data is managed.
Table 8. Summary of client operations
Progressive incremental backup
Description: The standard method of backup used by Tivoli Storage Manager. After the first, full backup of a client system, incremental backups are done. Incremental backup by date is also available. No additional full backups of a client are required after the first backup.
Usage: Helps ensure complete, effective, policy-based backup of data. Eliminates the need to retransmit backup data that has not been changed during successive backup operations.
Restore options: The user can restore just the version of the file that is needed. Tivoli Storage Manager does not need to restore a base file followed by incremental backups. This means reduced time and fewer tape mounts, as well as less data transmitted over the network.
For more information: See “Incremental backup” on page 514 and the Backup-Archive Clients Installation and User's Guide.

Selective backup
Description: Backup of files that are selected by the user, regardless of whether the files have changed since the last backup.
Usage: Allows users to protect a subset of their data independent of the normal incremental backup process. Applicable to clients on Windows systems.
Restore options: The user can restore just the version of the file that is needed. Tivoli Storage Manager does not need to restore a base file followed by incremental backups. This means reduced time and fewer tape mounts, as well as less data transmitted over the network.
For more information: See “Selective backup” on page 516 and the Backup-Archive Clients Installation and User's Guide.

Journal-based backup
Description: Aids all types of backups (progressive incremental backup, selective backup, adaptive subfile backup) by basing the backups on a list of changed files. The list is maintained on the client by the journal engine service of IBM Tivoli Storage Manager.
Usage: Reduces the amount of time required for backup. The files eligible for backup are known before the backup operation begins. Applicable to clients on AIX and Windows systems, except Windows 2003 64-bit IA64.
Restore options: Journal-based backup has no effect on how files are restored; this depends on the type of backup performed.
For more information: See the Backup-Archive Clients Installation and User's Guide.

Image backup
Description: Full volume backup. Nondisruptive, on-line backup is possible for Windows clients by using the Tivoli Storage Manager snapshot function.
Usage: Allows backup of an entire file system or raw volume as a single object. Can be selected by backup-archive clients on UNIX, Linux, and Windows systems.
Restore options: The entire image is restored.
For more information: See “Policy for logical volume backups” on page 546 and the Backup-Archive Clients Installation and User's Guide.

Image backup with differential backups
Description: Full volume backup, which can be followed by subsequent differential backups.
Usage: Used only for the image backups of NAS file servers, performed by the server using NDMP operations.
Restore options: The full image backup plus a maximum of one differential backup are restored.
For more information: See Chapter 10, “Using NDMP for operations with NAS file servers,” on page 233.

Backup using hardware snapshot capabilities
Description: A method of backup that exploits the capabilities of IBM Enterprise Storage Server FlashCopy and EMC TimeFinder to make copies of volumes used by database servers. The Tivoli Storage FlashCopy Manager product then uses the volume copies to back up the database volumes.
Usage: Implements high-efficiency backup and recovery of business-critical applications while virtually eliminating backup-related downtime or user disruption on the database server.
Restore options: Details depend on the hardware.
For more information: See the documentation for Tivoli Storage FlashCopy Manager.
Tivoli Storage Manager takes incremental backup one step further. After the initial
full backup of a client, no additional full backups are necessary because the server,
using its database, keeps track of whether files need to be backed up. Only files
that change are backed up, and then entire files are backed up, so that the server
does not need to reference base versions of the files. This means savings in
resources, including the network and storage.
If you choose, you can force full backup by using the selective backup function of
a client in addition to the incremental backup function. You can also choose to use
adaptive subfile backup, in which the server stores the base file (the complete
initial backup of the file) and subsequent subfiles (the changed parts) that depend
on the base file.
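For example, from the backup-archive client command line, you can run a
progressive incremental backup or force a selective backup of particular files. The
following commands are a sketch; the path is a placeholder, and adaptive subfile
backup additionally requires the subfilebackup client option to be enabled:
dsmc incremental
dsmc selective c:\projects\* -subdir=yes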
You can back up client backup, archive, and space-managed data in primary
storage pools to copy storage pools. You can also copy active versions of client
backup data from primary storage pools to active-data pools. The server can
automatically access copy storage pools and active-data pools to retrieve data. See
“Protecting client data” on page 953.
You can also back up the server's database. The database is key to the server's
ability to track client data in server storage. See “Protecting the database and
infrastructure setup files” on page 942.
These backups can become part of a disaster recovery plan, created automatically
by the disaster recovery manager. See Chapter 36, “Disaster recovery manager,” on
page 1053.
In many configurations, the Tivoli Storage Manager client sends its data to the
server over the LAN. The server then transfers the data to a device that is attached
to the server. You can also use storage agents that are installed on client nodes to
send data over a SAN. This minimizes use of the LAN and the use of the
computing resources of both the client and the server. For details, see “LAN-free
data movement” on page 75.
For network-attached storage, use NDMP operations to avoid data movement over
the LAN. For details, see “NDMP backup operations” on page 77.
Device support
With Tivoli Storage Manager, you can use a variety of devices for server storage.
See the current list on the Tivoli Storage Manager website at http://
www.ibm.com/support/entry/portal/Overview/Software/Tivoli/
Tivoli_Storage_Manager.
Tivoli Storage Manager represents physical storage devices and media with the
following administrator-defined objects:
Library
A library is one or more drives (and possibly robotic devices) with similar
media mounting requirements.
Drive
Each drive represents a drive mechanism in a tape or optical device.
For details about device concepts, see Chapter 4, “Storage device concepts,” on
page 61.
For example, you have a backup policy that specifies that three versions of a file be
kept. File A is created on the client, and backed up. Over time, the user changes
file A, and three versions of the file are backed up to the server. Then the user
changes file A again. When the next incremental backup occurs, a fourth version of
file A is stored, and the oldest of the four versions is eligible for expiration.
To remove data that is eligible for expiration, a server expiration process marks
data as expired and deletes metadata for the expired data from the database. The
space occupied by the expired data is then available for new data.
You control the frequency of the expiration process by using a server option, or
you can start the expiration processing by command or scheduled command.
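For example, the following server option, set in the server options file, runs
expiration automatically every 24 hours, and the administrative command starts
expiration processing on demand. The interval shown is only an example:
expinterval 24
expire inventory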
Your changing storage needs and client requirements can mean on-going
configuration changes and monitoring. The server's capabilities are described in the
following topics.
Server options
Server options let you customize the server and its operations.
Server options are in the server options file. Some options can be changed and
made active immediately by using the command, SETOPT. Most server options are
changed by editing the server options file and then halting and restarting the
server to make the changes active. See the Administrator's Reference for details
about the server options file and reference information for all server options.
You can also change the options through the IBM Tivoli Storage Manager Console.
See the Installation Guide for information about the IBM Tivoli Storage Manager
Console.
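For example, if an option is one that the SETOPT command supports, you can
change it without restarting the server. The option and value shown here are only
an example:
setopt maxsessions 40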
The server uses its storage for the data it manages for clients. The storage can be a
combination of devices.
v Disk
v Tape drives that are either manually operated or automated
v Optical drives
v Other drives that use removable media
Disk devices
Disk devices can be used with Tivoli Storage Manager for storing the database and
recovery log or client data that is backed up, archived, or migrated from client
nodes.
The server can store data on disk by using random-access volumes (device type of
DISK) or sequential-access volumes (device type of FILE).
The Tivoli Storage Manager product allows you to exploit disk storage in ways
that other products do not. You can have multiple client nodes back up to the
same disk storage pool at the same time, and still keep the data for the different
client nodes separate. Other products also allow you to back up different systems
at the same time, but only by interleaving the data for the systems, leading to
slower restore processes.
Data can remain on disk permanently or temporarily, depending on the amount of
disk storage space that you have. Restore performance from disk can be very fast
compared to tape.
You can have the server later move the data from disk to tape; this is called
migration through the storage hierarchy. Other advantages to this later move to
tape include:
v Ability to collocate data for clients as the data is moved to tape
v Streaming operation of tape drives, leading to better tape drive performance
v More efficient use of tape drives by spreading out the times when the drives are
in use
For information about storage hierarchy and setting up storage pools on disk
devices, see:
Chapter 5, “Magnetic disk devices,” on page 89 and “Storage pool hierarchies”
on page 288
The following topics provide an overview of how to use removable media devices
with Tivoli Storage Manager.
Device classes
A device class represents a set of storage devices with similar availability,
performance, and storage characteristics.
You must define device classes for the drives available to the Tivoli Storage
Manager server. You specify a device class when you define a storage pool so that
the storage pool is associated with drives.
For more information about defining device classes, see “Defining device classes”
on page 209.
Migration requires tape mounts. The mount messages are directed to the console
message queue and to any administrative client that has been started with either
the mount mode or console mode option. To have the server migrate data from
BACKUPPOOL to AUTOPOOL and from ARCHIVEPOOL to TAPEPOOL, do the
following:
update stgpool backuppool nextstgpool=autopool
update stgpool archivepool nextstgpool=tapepool
The server can perform migration as needed, based on migration thresholds that
you set for the storage pools. Because migration from a disk to a tape storage pool
uses resources such as drives and operators, you might want to control when
migration occurs. To do so, you can use the MIGRATE STGPOOL command:
migrate stgpool backuppool
To migrate from a disk storage pool to a tape storage pool, devices must be
allocated and tapes must be mounted. For these reasons, you may want to ensure
that migration occurs at a time that is best for your situation. You can control
when migration occurs by using migration thresholds.
See “Migrating disk storage pools” on page 300 and the Administrator's Reference
for more information.
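For example, the following command starts migration from BACKUPPOOL when
the pool reaches 70% of capacity and continues migration until the pool drops to
30%. The threshold values are only examples:
update stgpool backuppool highmig=70 lowmig=30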
The following are other examples of what you can control for a storage pool; sample commands for several of these controls follow the list:
Collocation
The server can keep each client's files on a minimal number of volumes
within a storage pool. Because client files are consolidated, restoring
collocated files requires fewer media mounts. However, backing up files
from different clients requires more mounts.
Reclamation
Files on sequential access volumes might expire, move, or be deleted. The
reclamation process consolidates the active, unexpired data on many
volumes onto fewer volumes. The original volumes can then be reused for
new data, making more efficient use of media.
Storage pool backup
Client backup, archive, and space-managed data in primary storage pools
can be backed up to copy storage pools for disaster recovery purposes. As
client data is written to the primary storage pools, it can also be
simultaneously written to copy storage pools.
Copy active data
The active versions of client backup data can be copied to active-data
pools. Active-data pools provide a number of benefits. For example, if the
device type associated with an active-data pool is sequential-access disk
(FILE), you can eliminate the need for disk staging pools. Restoring client
data is faster because FILE volumes are not physically mounted, and the
server does not have to position past inactive files that do not have to be
restored.
An active-data pool that uses removable media, such as tape or optical,
reduces the number of volumes for onsite and offsite storage. (Like
volumes in copy storage pools, volumes in active-data pools can be moved
offsite for protection in case of disaster.) If you vault data electronically to
a remote location, a SERVER-type active-data pool saves bandwidth by
copying and restoring only active data.
As backup client data is written to primary storage pools, the active
versions can be simultaneously written to active-data pools.
Cache When the server migrates files from disk storage pools, duplicate copies of
the files can remain in cache (disk storage) for faster retrieval. Cached files
are deleted only when space is needed. However, client backup operations
that use the disk storage pool can have poorer performance.
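For example, the following commands illustrate these controls. This is only a
sketch: the pool names COPYPOOL and ACTIVEPOOL are placeholders for pools in
your own environment, and an active-data pool must already be defined with
POOLTYPE=ACTIVEDATA before you can copy active data to it:
update stgpool tapepool collocate=node
update stgpool tapepool reclaim=60
backup stgpool backuppool copypool
copy activedata backuppool activepool
update stgpool diskpool cache=yes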
You manage storage volumes by defining, updating, and deleting volumes, and by
monitoring the use of server storage. You can also move files within and across
storage pools to optimize the use of server storage.
For more information about storage pools and volumes and taking advantage of
storage pool features, see Chapter 11, “Managing storage pools and volumes,” on
page 267.
Tip: To define disk volumes, you can also use the Server Initialization wizard. This
wizard is displayed during the server configuration process.
Tivoli Storage Manager can support tape failover for a cluster environment using a
Fibre or SCSI connection. Although Microsoft Failover Clusters do not support the
failover of tape devices, the failover configuration can be monitored through the
Microsoft Cluster Administrator interface after it is set up through Tivoli Storage
Manager.
After you have created schedules, you manage and coordinate those schedules.
Your tasks include the following:
v Verify that the schedules ran successfully.
v Determine how long Tivoli Storage Manager retains information about schedule
results (event records) in the database.
v Balance the workload on the server so that all scheduled operations complete.
For more information about client operations, see the following sections:
v For setting up an include-exclude list for clients, see “Getting users started” on
page 500.
v For automating client operations, see Chapter 16, “Scheduling operations for
client nodes,” on page 589.
v For running the scheduler on a client system, see the user's guide for the client.
v For setting up policy domains and management classes, see Chapter 14,
“Implementing policies for client data,” on page 497.
For more information about these tasks, see Chapter 17, “Managing schedules for
client nodes,” on page 597
The Tivoli Storage Manager server supports a variety of client nodes. You can
register the following types of clients and servers as client nodes:
v Tivoli Storage Manager backup-archive client
v Application clients that provide data protection through one of the following
products: Tivoli Storage Manager for Application Servers, Tivoli Storage
Manager for Databases, Tivoli Storage Manager for Enterprise Resource
Planning, or Tivoli Storage Manager for Mail.
v Tivoli Storage Manager for Space Management client (called space manager
client or HSM client)
v A NAS file server for which the Tivoli Storage Manager server uses NDMP for
backup and restore operations
v Tivoli Storage Manager source server (registered as a node on a target server)
When you register clients, you have choices to make about the following:
v Whether the client should compress files before sending them to the server for
backup
For more information on managing client nodes, see the Backup-Archive Clients
Installation and User's Guide.
Registration for clients can be closed or open. With closed registration, a user with
administrator authority must register all clients. With open registration, clients can
register themselves at first contact with the server. See “Registering nodes with the
server” on page 440.
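For example, the following commands switch to closed registration and register a
node yourself. The node name, password, and policy domain are placeholders:
set registration closed
register node mercury mercurypw domain=standard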
You can ensure that only authorized administrators and client nodes are
communicating with the server by requiring passwords. Passwords can
authenticate with an LDAP directory server or the Tivoli Storage Manager server.
Most password-related commands work for both kinds of servers. The PASSEXP and
RESET PASSEXP commands do not work for passwords that authenticate with an
LDAP directory server. You can use the LDAP directory server to give more
options to your passwords, independent of the Tivoli Storage Manager server.
Whether you store your passwords on an LDAP directory server, or on the Tivoli
Storage Manager server, you can set the following requirements for passwords:
v Minimum number of characters in a password.
v Expiration time.
v A limit on the number of consecutive, invalid password attempts. When the
client exceeds the limit, Tivoli Storage Manager stops the client node from
accessing the server. The limit can be set on the Tivoli Storage Manager server,
and on the LDAP directory server.
Important: The invalid password limit is for passwords that authenticate with the
Tivoli Storage Manager server and any LDAP directory servers. Invalid password
attempts can be configured on an LDAP directory server, outside of the Tivoli
Storage Manager server. But the consequence of setting the number of invalid
attempts on the LDAP directory server might pose some problems. For example,
when the REGISTER NODE command is issued, the default behavior is to name the
node administrator the same name as the node. The LDAP server does not
recognize the difference between the node “NODE_Q” and the administrator
“NODE_Q”. The node and the administrator can authenticate to the LDAP server
if they have the same password. If the node and administrator have different
passwords, the authentication fails for either the node or administrator. If the node
or the administrator repeatedly fails to log on, the ID is locked. You can avoid
this situation by issuing the REGISTER NODE command with USERID=userid or
USERID=NONE.
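For example, the following commands set password requirements on the Tivoli
Storage Manager server and register a node without creating a matching
administrator ID. The values and the node name are only examples:
set minpwlength 8
set passexp 90
set invalidpwlimit 4
register node node_q nodeqpw userid=none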
You can control the authority of administrators. An organization can name a single
administrator or distribute the workload among a number of administrators and
grant them different levels of authority. For details, see “Managing Tivoli Storage
Manager administrator IDs” on page 920.
For better security when clients connect across a firewall, you can control whether
clients can initiate contact with the server for scheduled operations. See “Managing
client nodes across a firewall” on page 452 for details.
For additional ways to manage security, see Chapter 33, “Managing Tivoli Storage
Manager security,” on page 907.
In Tivoli Storage Manager, you define policies by defining policy domains, policy
sets, management classes, and backup and archive copy groups. When you install
Tivoli Storage Manager, you have a default policy that consists of a single policy
domain named STANDARD.
The default policy provides basic backup protection for end-user workstations. To
provide different levels of service for different clients, you can add to the default
policy or create new policy. For example, because of business needs, file servers are
likely to require a policy different from policy for users' workstations. Protecting
data for applications such as Lotus Domino also may require a unique policy.
For more information about the default policy and establishing and managing new
policies, see Chapter 14, “Implementing policies for client data,” on page 497.
Scheduling also can mean better utilization of resources such as the network.
Client backups that are scheduled at times of lower usage can minimize the impact
on user operations on a network.
You can automate operations for clients by using schedules. Tivoli Storage
Manager provides a central scheduling facility. You can also use operating system
utilities or other scheduling tools to schedule Tivoli Storage Manager operations.
With Tivoli Storage Manager schedules, you can perform the operations for a client
immediately or schedule the operations to occur at regular intervals.
For a schedule to work on a particular client, the client machine must be turned
on. The client either must be running the client scheduler or must allow the client
acceptor daemon to start the scheduler when needed.
Server maintenance
If you manage more than one server, you can ensure that the multiple servers are
consistently managed by using the enterprise management functions of Tivoli
Storage Manager.
You can set up one server as the configuration manager and have other servers
obtain configuration information from it.
To keep the server running well, you can perform these tasks:
v Managing server operations, such as controlling client access to the server
v Automating repetitive administrative tasks
v Monitoring and adjusting space for the database and the recovery log
v Monitoring the status of the server, server storage, and clients
Some of the more common tasks that you can perform to manage your server
operations are shown in the following list:
v Start and stop the server.
v Allow and suspend client sessions with the server.
v Query, cancel, and preempt server processes such as backing up the server
database.
v Customize server options.
See “Licensing IBM Tivoli Storage Manager” on page 631. For suggestions about
the day-to-day tasks required to administer the server, see Chapter 20, “Managing
server operations,” on page 631.
You can define schedules for the automatic processing of most administrative
commands. For example, a schedule can run the command to back up the server's
database every day.
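For example, the following administrative schedule runs a full database backup
every night at 9:00 p.m. The schedule name and the device class are placeholders
for objects in your own environment:
define schedule nightly_dbbackup type=administrative cmd="backup db devclass=dbback_dev type=full" active=yes starttime=21:00 period=1 perunits=days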
For more information about automating Tivoli Storage Manager operations, see
Chapter 21, “Automating server operations,” on page 659.
If you have a predefined maintenance script, you can add or remove commands
by using the maintenance script wizard. You can add, remove, or reposition
commands if you have a custom maintenance script. Both methods can be accessed
through the same process. If you want to convert your predefined maintenance
script to a custom maintenance script, select a server with the predefined script,
click Select Action > Convert to Custom Maintenance Script.
The information about the client data, also called metadata, includes the file name,
file size, file owner, management class, copy group, and location of the file in
server storage. The server records changes made to the database (database
transactions) in its recovery log. The recovery log is used to maintain the database
in a transactionally consistent state, and to maintain consistency across server
startup operations.
For more information about the Tivoli Storage Manager database and recovery log
and about the tasks associated with them, see Chapter 22, “Managing the database
and recovery log,” on page 681.
The Administration Center includes a health monitor, which presents a view of the
overall status of multiple servers and their storage devices. From the health
monitor, you can link to details for a server, including a summary of the results of
client schedules and a summary of the availability of storage devices. See
Chapter 19, “Managing servers with the Administration Center,” on page 623.
Tivoli Monitoring for Tivoli Storage Manager can also be used to monitor client
and server operations. It brings together multiple components to provide historical
reporting and real-time monitoring. Tivoli Monitoring for Tivoli Storage Manager
can help you determine if there are any issues that require attention. You can
monitor server status, database size, agent status, client node status, scheduled
events, server IDs, and so on, using the workspaces within the Tivoli Enterprise
Portal. See Chapter 30, “Reporting and monitoring with Tivoli Monitoring for
Tivoli Storage Manager,” on page 839.
You can use Tivoli Storage Manager queries and SQL queries to get information
about the server. You can also set up automatic logging of information about Tivoli
Storage Manager clients and server events. Daily checks of some indicators are
suggested.
See the following sections for more information about these tasks:
v Part 5, “Monitoring operations,” on page 803
When you have a network of Tivoli Storage Manager servers, you can simplify
configuration and management of the servers by using enterprise administration
functions. You can do the following:
v Designate one server as a configuration manager that distributes configuration
information such as policy to other servers. See “Setting up enterprise
configurations” on page 735.
v Route commands to multiple servers while logged on to one server. See
“Routing commands” on page 758.
v Log events such as error messages to one server. This allows you to monitor
many servers and clients from a single server. See “Enterprise event logging:
logging events to another server” on page 897.
v Store data for one Tivoli Storage Manager server in the storage of another Tivoli
Storage Manager server. The storage is called server-to-server virtual volumes.
See “Using virtual volumes to store data on another server” on page 763 for
details.
v Share an automated library among Tivoli Storage Manager servers. See “Devices
on storage area networks” on page 73.
v Store a recovery plan file for one server on another server, when using disaster
recovery manager. You can also back up the server database and storage pools to
another server. See Chapter 36, “Disaster recovery manager,” on page 1053 for
details.
v Back up the server database and storage pools to another server. See “Using
virtual volumes to store data on another server” on page 763 for details.
v To simplify password management, have client nodes and administrators
authenticate their passwords on multiple servers using an LDAP directory
server. See “Managing passwords and logon procedures” on page 926.
For example, you may need to balance workload among servers by moving client
nodes from one server to another. The following methods are available:
v You can export part or all of a server's data to sequential media, such as tape or
a file on hard disk. You can then take the media to another server and import
the data to that server.
v You can export part or all of a server's data and import the data directly to
another server, if server-to-server communications are set up.
For more information about moving data between servers, see Chapter 24,
“Exporting and importing data,” on page 771.
Attention: If the database is unusable, the entire Tivoli Storage Manager server is
unavailable. If a database is lost and cannot be recovered, it might be difficult or
impossible to recover data that is managed by that server. Therefore, it is critically
important to back up the database. However, even without the database, fragments
of data or complete files might easily be read from storage pool volumes that are
not encrypted. Even if data is not completely recovered, security can be
compromised. For this reason, always encrypt sensitive data by using the Tivoli
Storage Manager client or the storage device, unless the storage media is physically
secured. See Part 6, “Protecting the server,” on page 905 for steps that you can take
to protect your database.
IBM Tivoli Storage Manager provides a number of ways to protect your data,
including backing up your storage pools and database. For example, you can
define schedules so that the following operations occur:
v After the initial full backup of your storage pools, incremental storage pool
backups are done nightly.
v Full database backups are done weekly.
v Incremental database backups are done nightly.
You can also create a maintenance script to perform database and storage pool
backups through the Server Maintenance work item in the Administration Center.
See Chapter 19, “Managing servers with the Administration Center,” on page 623
for details.
In addition to taking these actions, you can prepare a disaster recovery plan to
guide you through the recovery process by using the disaster recovery manager,
which is available with Tivoli Storage Manager Extended Edition. The disaster
recovery manager (DRM) assists you in the automatic preparation of a disaster
recovery plan. You can use the disaster recovery plan as a guide for disaster
recovery as well as for audit purposes to certify the recoverability of the Tivoli
Storage Manager server.
The disaster recovery methods of DRM are based on taking the following
measures:
v Sending server backup volumes offsite or to another Tivoli Storage Manager
server
v Creating the disaster recovery plan file for the Tivoli Storage Manager server
v Storing client machine information
v Defining and tracking client recovery media
For more information about protecting your server and for details about recovering
from a disaster, see Chapter 34, “Protecting and recovering the server infrastructure
and client data,” on page 941.
While all Tivoli Storage Manager configuration and management tasks can also be
performed using the command-line interface, the wizards are the preferred method
for initial configuration. You can return to individual wizards after the initial
configuration to update settings and perform management tasks. Refer to the
Installation Guide for more information on configuration and management wizards.
Although the wizards simplify the configuration process by hiding some of the
detail, a certain amount of IBM Tivoli Storage Manager knowledge is still required
to create and maintain a typically complex storage management environment. If
you are not familiar with Tivoli Storage Manager functions and concepts, you
should refer to Chapter 1, “Tivoli Storage Manager overview,” on page 3 before
you begin.
The initial configuration process configures a single server. If you plan to configure
a network of servers, you must perform additional tasks. For details, see
Chapter 23, “Managing a network of Tivoli Storage Manager servers,” on page 721.
Additional configuration wizards can help you perform the following optional
tasks:
v Configure Tivoli Storage Manager for use in a Microsoft Cluster Server (MSCS)
environment (Refer to “Configuring a Windows clustered environment” on page
1134.)
v Configure Tivoli Storage Manager for use in a Windows Active Directory
environment (Refer to Appendix D, “Windows Active Directory,” on page 1163
for more information.)
v Create a remote Tivoli Storage Manager for Windows client configuration
package (Refer to “Installing clients using shared resources” on page 53.)
The standard initial configuration process does not include all IBM Tivoli Storage
Manager features, but it does produce a functional Tivoli Storage Manager system
that can be further customized and tuned. The default settings suggested by the
wizards are appropriate for use in many cases.
Minimal configuration
During the minimal configuration process, a wizard helps you initialize a Tivoli
Storage Manager server instance. Open client registration is enabled, so Tivoli
Storage Manager client nodes can automatically register themselves with the
server.
You can click Yes to continue to the next wizard, or No to exit the initial
configuration process. However, cancelling during initial configuration can produce
unexpected results. The preferred method is to complete the entire wizard
sequence, and then restart an individual wizard to make any configuration
changes.
After you have installed IBM Tivoli Storage Manager, complete these steps:
1. Double-click the Tivoli Storage Manager Management Console icon on the
desktop to open the Tivoli Storage Manager Console window.
2. Expand the IBM Tivoli Storage Manager tree in the left pane until the local
system name is displayed.
3. Right-click the local system name and select Add a New Tivoli Storage
Manager Server.
4. The Initial Configuration Task List is displayed. Select Standard configuration
or Minimal configuration and click Start. For more information about
configuration options, see “Initial configuration overview” on page 35.
v If you selected Standard configuration, see “Initial Configuration wizard and
tasks” for instructions.
v If you selected Minimal configuration, see “Server Initialization wizard” on
page 38 for instructions.
Note: If a Tivoli Storage Manager server instance exists on the local system, you
are prompted to confirm that you want to create and configure a new server
instance. Be careful to create only the server instances that you require. In most
cases, only one server instance is necessary.
The information that you provide in this wizard is used to customize subsequent
wizards and reflect your preferences and storage environment.
This wizard consists of a Welcome page and a series of input pages that help you
perform the following tasks:
First Input Page
Choose whether configuration tips are automatically displayed during the
initial configuration process. This additional information can be helpful for
new Tivoli Storage Manager users.
This wizard consists of a Welcome page and a series of input pages that help you
perform the following tasks:
First Input Page
Choose a directory to store files that are unique to the Tivoli Storage
Manager server instance you are currently configuring. Enter the location
of the initial-disk storage pool volume.
Second Input Page
Enter the locations of the directories to be used by the database. Each
location must be on a separate line, and the directories must be empty.
Third Input Page
Enter the directories to be used by the logs.
Fourth Input Page
Choose a name and password for the Tivoli Storage Manager server. Some
Tivoli Storage Manager features require a server password.
The database and log directory names are limited to the following characters:
A-Z Any letter, A through Z
0–9 Any number, 0 through 9
_ Underscore
. Period
- Hyphen
+ Plus
& Ampersand
Note: The minimal configuration process does not support cluster configuration.
When you complete the Server Initialization wizard, Tivoli Storage Manager does
the following:
v Initializes the server database and logs.
v Creates two default schedules: DAILY_INCR and WEEKLY_INCR. You can use
the Schedule Configuration wizard to work with these schedules or create
others.
v Registers an administrative ID with the server. This ID is used to provide access
to the Administration Center and server command-line interface. The ID is
named admin, and the default password is admin. To ensure system security,
change this password.
Initialization results are recorded in the initserv.log file in the server directory. If
you have problems starting the server after initialization, check this log file for
error statements. If you contact technical support for help, you might be asked to
provide this file.
If you are performing a minimal configuration, see the Installation Guide for
instructions about how to test backup and archive function.
The Device Configuration wizard consists of a Welcome page and input pages that
help you complete the following tasks:
v Select and define the storage devices that you want to use with Tivoli Storage
Manager.
v Manually associate drives with libraries, if required.
v Specify SCSI element number order for manually associated drives.
v Configure device sharing, if required.
v Manually add virtual or undetected devices.
The wizard displays a tree-view of devices that are connected to the Tivoli Storage
Manager server system. Tivoli Storage Manager device names are used to identify
devices. Libraries and drives can be detected only if your hardware supports this
function. Basic and detailed information about a device that is selected in the
tree-view is also displayed. If the device is a type that can be shared, the Sharing
tab displays any Tivoli Storage Manager components that share the device.
You can complete the following tasks with the device configuration wizard:
Manually associating drives
Any drive that is listed as Unknown must be manually associated with a library.
Note: If you manually associate more than one drive with the same library,
you must order the drives according to element number. If you do not
arrange the drives correctly, Tivoli Storage Manager does not work as
expected. To determine the element number for a drive, select the drive
and click the Detailed tab. Use the element number lookup tool to
determine the correct position of the drive. If your drive is not listed, refer
to the manufacturer's documentation.
Setting up device sharing
To set up device sharing, click the Sharing tab and then click the
Components button. The Device Sharing dialog is displayed. Follow the
directions in this dialog.
Adding virtual or undetected devices
Click the New button to add file type devices and drives or libraries that
are accessed through an NDMP file server.
To define a device, select its check box. Any device with an open check box can be
defined to the Tivoli Storage Manager server. A library check box that is partially
filled indicates that some of the drives that are associated with that library are not
selected for use with Tivoli Storage Manager.
Note: A solid green check box indicates that the device was previously defined to
Tivoli Storage Manager. Previously defined devices cannot be manipulated or
removed using the wizard. You can use the Administration Center or server
command line to complete this task.
After you define libraries and drives to Tivoli Storage Manager, they are available
to store data.
The Client Node Configuration wizard consists of a Welcome page and several
input pages that help you perform the following tasks:
v Register client nodes with the Tivoli Storage Manager server. You can add nodes
individually, or detect and register multiple clients at one time.
v Associate registered nodes with storage pools by adding the clients to a new or
existing policy domain.
v Arrange the storage pool hierarchy to meet your storage needs.
The wizard also allows you to specify how the backup data for these clients is
stored, by associating client nodes with storage pools. If you used the Device
Configuration wizard to define any storage devices to Tivoli Storage Manager,
storage pools associated with those devices were automatically generated, and are
also displayed.
Consider using this wizard to register any remote client nodes now, even if you
have not yet installed Tivoli Storage Manager client code on those systems. After
you complete the initial server configuration, you can install the client code
remotely and configure the client nodes to transfer data to this server. See
“Installing clients using shared resources” on page 53 for more information.
Client nodes you have registered can be configured to back up data to this Tivoli
Storage Manager server instance. The backup data is managed according to the way
you set up the associated storage pool hierarchy for the client.
Tivoli Storage Manager provides a default storage pool named DISKPOOL, which
represents random-access storage space on the hard drive of the Tivoli Storage
Manager server machine. During server initialization, Tivoli Storage Manager
created one volume (representing a discrete amount of allocated space) in this
storage pool. By default, this volume was configured to grow dynamically. You can
add more volumes to expand this storage pool as required.
Tivoli Storage Manager also provides three other default storage pools, which are
all set up to point to DISKPOOL. These three storage pools correspond to the three
ways Tivoli Storage Manager manages client data: backup, archive, and
space-management. The Client Node Configuration Wizard allows you to work
with the backup storage pool, BACKUPPOOL.
By default, data for any client nodes you associate with BACKUPPOOL will be
immediately transferred to DISKPOOL. You can store the data in DISKPOOL
indefinitely, or just use DISKPOOL as a temporary cache and then migrate the data
to any other storage devices represented in the storage pool hierarchy.
To detect and register multiple client nodes at one time, return to the main wizard
panel and click the Advanced button. Follow the instructions in the Properties
dialog. You can add clients from a text file, or choose from computers detected in
your Windows domain. The Tivoli Storage Manager console directory contains a
file named sample_import_nodes.txt, which defines the format required to import
client nodes.
To modify Tivoli Storage Manager client node information, select a client node
name from the right wizard pane and click the Edit button. To delete a client node
that you just added, select the client node name and click the Delete button.
Note: You cannot use the wizard to delete a client that was previously defined to
the server. You can use the Administration Center or server command line to
perform this task.
A storage pool can migrate data to one other storage pool. Multiple storage pools
can be set up to migrate data to the same storage pool. To see which clients are
associated with a storage pool, select a storage pool in the left wizard pane. Any
client nodes associated with that pool are displayed in the right pane.
Media labels are written at the start of each volume to uniquely identify that
volume to Tivoli Storage Manager. The Media Labeling wizard is available only
when attached storage devices have been defined to Tivoli Storage Manager.
Slightly different versions of the wizard are displayed for automated and manual
storage devices. This section describes the media labeling and check-in process for
automated library devices. The Media Labeling wizard consists of a Welcome page
and a series of input pages that help you perform the following tasks:
First Input Page
Select the devices that contain the media you want to label.
Second Input Page
Select and label specific media.
Third Input Page
Check in labeled media to Tivoli Storage Manager.
The wizard lists any devices and drives that are recognized by Tivoli Storage
Manager. You can display information about any device or drive that is selected.
To select a device and any associated drives, select the check box next to the device
or drive name.
When the check-in process is complete, media is available for use by Tivoli Storage
Manager. By default, media volumes are checked in with scratch status. For more
information, see Chapter 8, “Managing removable media operations,” on page 157.
After you have labeled media, use the Check-In Now button to check it in to
Tivoli Storage Manager. Media volumes from all of the storage devices that you
selected in the first media labeling dialog are eligible for check-in. All labeled
media not previously checked in to this server is automatically checked in at this
time. A dialog describing the check-in process is displayed. Checking in media
runs as a background process, and media is not available for use until the process
completes. Depending on your storage hardware, and the number of media being
checked in, this process can take some time.
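The same labeling and check-in can also be done from the server command line.
For example, for an automated library with a bar-code reader, the following
command labels all unlabeled volumes and checks them in as scratch volumes.
The library name LIB1 is a placeholder:
label libvolume lib1 search=yes labelsource=barcode checkin=scratch
Media that is already labeled can be checked in with the CHECKIN LIBVOLUME
command instead.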
To monitor the check-in process, finish the initial configuration and then complete
the following steps:
1. From the Tivoli Storage Manager Console, expand the tree for the Tivoli
Storage Manager server that you are configuring.
2. Expand Reports and click Monitor.
3. Click the Start button to monitor server processes in real time.
If you have installed a local backup-archive client, click Yes to immediately start
the client. Click No if you have not installed the client code locally, or if you plan
to verify your configuration by backing up remotely installed clients.
Note: Click the Tivoli Storage Manager Backup Client icon on your desktop
to start the local backup-archive client at any time.
You can use the Tivoli Storage Manager Console to perform a variety of
administrative tasks, including issuing commands and monitoring server processes.
You can also access the individual wizards that you used during the initial
configuration process from this interface. Additional wizards are also available.
The Tivoli Storage Manager configuration wizards simplify the setup process by
hiding some of the detail. For the ongoing management of your Tivoli Storage
Manager system, it can be helpful to understand the default configuration that has
been created for you.
Your environment might differ somewhat from the one described in this section,
depending on the choices you made during the initial configuration process. All of
these default settings can be modified, and new policy objects can be created.
Table 10 lists them. For more information, refer to Chapter 14, “Implementing
policies for client data,” on page 497.
Table 10. Default data management policy objects
Policy Domain: STANDARD
By default, any clients or schedules you created were added to this domain. The
domain contains one policy set.

Policy Set: STANDARD
This policy set is ACTIVE. It contains one management class.

Management Class: STANDARD
This management class contains a backup copy group and an archive copy group.

Copy Group (Backup): STANDARD
This copy group stores one active and one inactive version of existing files. The
inactive version will be kept for 30 days. Points to BACKUPPOOL.

Copy Group (Archive): STANDARD
This copy group keeps archive copies for 365 days. Points to ARCHIVEPOOL.
Table 11 lists them. For more information, refer to Chapter 11, “Managing storage
pools and volumes,” on page 267.
Table 11. Default storage device and media policy objects
Storage Pool (Backup): BACKUPPOOL
This storage pool points to DISKPOOL. No volumes are defined, so data will
migrate immediately.
Tivoli Storage Manager library, drive, storage pool, and path objects will have been
created for any storage libraries or drives you defined using the Device
Configuration Wizard. Tivoli Storage Manager volumes will have been created for
any media you labeled using the Media Labeling Wizard. If you used the Client
Node Configuration Wizard to associate a Tivoli Storage Manager client with
SAN-attached disk, a Tivoli Storage Manager disk object was also created.
For more information, refer to Chapter 16, “Scheduling operations for client
nodes,” on page 589 and Chapter 17, “Managing schedules for client nodes,” on
page 597.
For more information, see the appropriate Using the Backup-Archive Clients User's
Guide.
Note: The first backup of a file is always a full backup, regardless of what
you specify.
7. Click Backup. The Backup Report window displays the backup processing
status.
To exclude certain files from both incremental and selective backup processing,
create an include-exclude list in the client options file. IBM Tivoli Storage Manager
backs up any file that is not explicitly excluded from backup. You can also include
specific files that are in a directory that you have excluded. For more information,
see the appropriate Using the Backup-Archive Clients User's Guide.
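For example, the following statements in the client options file exclude files in a
temporary directory but still include files in one of its subdirectories. The paths
are placeholders, and statements that appear later in the list take precedence over
earlier ones:
exclude c:\temp\...\*
include c:\temp\reports\...\*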
For details and advanced procedures, see the appropriate Backup-Archive Clients
Installation and User's Guide publication.
IBM Tivoli Storage Manager can keep multiple versions of files, and you can
choose which version to restore. Tivoli Storage Manager marks the most recent
version as active and all other versions as inactive. When you back up a file, Tivoli
Storage Manager marks the new backup version active, and marks the previous
active version as inactive. When the maximum number of inactive versions is
reached, Tivoli Storage Manager deletes the oldest inactive version.
If you try to restore both an active and inactive version of a file at the same time,
only the active version is restored.
v To restore an active backup version, click Display active files only from the
View drop-down list.
v To restore an inactive backup version, click Display active/inactive files from
the View drop-down list.
For more information, see the appropriate Using the Backup-Archive Clients User's
Guide.
For more information, see the appropriate Using the Backup-Archive Clients manual.
a. Click the Find icon on the tool bar. The Find Files window opens.
b. Enter your search information in the Find Files window.
c. Click Search. The Matching Files window opens.
3. Click on the selection boxes next to the objects that you want to retrieve.
4. Click Retrieve. The Retrieve Destination window opens.
5. Enter the information in the Retrieve Destination window.
6. Click Retrieve. The Retrieve Report window displays the processing results.
You can also use the Administration Center to manage servers and clients. See
Chapter 19, “Managing servers with the Administration Center,” on page 623.
Tip: If the Tivoli Storage Manager server service is configured to run under the
Local System account, the Local System account must be explicitly granted access
to the Tivoli Storage Manager database. For more information, see “Starting the
Tivoli Storage Manager server as a service” on page 645.
1. From the Tivoli Storage Manager Console, expand the tree for the Tivoli
Storage Manager server and expand Reports.
2. Click Service Information.
3. If the server status displays Stopped, right-click the service line, and select
Start.
For most tasks, your server must be running. This procedure is explained here only
if an unusual situation requires that you stop the server. To stop the server, do one
of the following:
v Stop a server that is running as a Service:
1. Expand the tree for the Tivoli Storage Manager server you are stopping and
expand Reports
Note: This shuts down the server immediately. The shutdown also cancels all
Tivoli Storage Manager sessions.
v Stop a server from the Administration Center:
1. In the navigation tree, click Manage Servers.
2. Select a server from the servers table.
3. Click Select Action > Halt.
Note: This procedure stops the server immediately and cancels all client
sessions.
v Stop a server from the administrative command line:
1. Expand the tree for the Tivoli Storage Manager server you are stopping and
expand Reports
2. Click Command Line.
The Command Line view appears in the right pane.
3. Click Command Line Prompt in the right pane.
4. Issue the HALT command to shut down the server.
Note: This shuts down the server immediately. The shutdown also cancels all
client sessions.
Attention: If the database is unusable, the entire Tivoli Storage Manager server is
unavailable. If a database is lost and cannot be recovered, it might be difficult or
impossible to recover data managed by that server. Therefore, it is critically
important to back up the database. However, even without the database, fragments
of data or complete files might easily be read from storage pool volumes that are
not encrypted. Even if data is not completely recovered, security can be
compromised. For this reason, sensitive data should always be encrypted by the
Tivoli Storage Manager client or the storage device, unless the storage media is
physically secured. See Part 6, “Protecting the server,” on page 905 for steps that
you can take to protect your database.
In the example shown in Figure 2 on page 54, IBM Tivoli Storage Manager is
installed on a server named EARTH, which shares its D drive with all the
Windows client systems.
Each client system is configured so that when it boots up, it maps the EARTH D
drive as its Z drive. For example, at start-up each client issues this command:
NET USE Z: \\EARTH\D$
The administrator used the Network Client Options File wizard to create a client
configuration package named earthtcp that was stored on EARTH in the d:\tsmshar
directory. The administrator then registered each client node (“Client Node
Configuration wizard” on page 40).
The following scenario describes how to install the remote client and configure it
from a shared directory:
1. On EARTH:
For 32-bit clients
Copy the contents of the \tsmcli\x32\client\Disk1 directory from the
IBM Tivoli Storage Manager client CD to the d:\tsmshar directory.
Ensure that you include any client subdirectories. You can use
Windows Explorer or the xcopy command with the /s option to
perform the copy.
For 64-bit clients
Copy the contents of the \tsmcli\x64\client\Disk1 directory from the
IBM Tivoli Storage Manager client CD to the d:\tsmshar directory.
Ensure that you include any client subdirectories. You can use
Windows Explorer or the xcopy command with the /s option to
perform the copy.
2. Provide the users of the Windows clients with the following instructions for
installing the client from the shared directory:
a. Open a command prompt and change directories to the shared directory
on EARTH. For example:
chdir /d x:\tsmshar
b. Start the client installation and follow the instructions in the setup routine.
setup
c. Run the configuration package batch file to configure the client to
communicate with the server (that is, create the client options file) by
issuing:
earthtcp.bat
Note: Using Windows Explorer, you can run the batch file if the drive is
shared and if you start the file from the shared directory. However, you
cannot run the batch file if you go to the directory using Explorer's network
You can edit or create client options files in several ways, depending on the client
platform and configuration of your system:
v Any Client
Edit the dsm.opt client options file with a text editor at a client workstation. This
is the most direct method, but it may not be best if you have many clients.
v Windows Clients
Generate the dsm.opt client options file from the server with the Network Client
Options File Wizard. This is easy and direct, and the wizard detects the network
address of the Tivoli Storage Manager server. To run the wizard, do the
following:
1. From the Tivoli Storage Manager Console, expand the tree for the Tivoli
Storage Manager server on which you want to create the file and click
Wizards.
The Wizards list is displayed in the right pane.
2. Double-click Client Options File from the Wizards list to start the wizard.
3. Follow the instructions in the wizard.
v Networked Windows Clients with a Shared Directory on a File Server
Use the Remote Client Configuration Wizard to create a package that allows
remote users to create client options files. The administrator uses the wizard to
generate a client configuration file and stores the file in a shared directory.
Clients access the shared directory and run the configuration file to create the
client options file. This method is suitable for sites with many clients.
The client scheduler can be installed by using a wizard that is provided by the Tivoli
Storage Manager client graphical interface. To start the scheduler service
automatically as required, either manually start the scheduler service on each client
node or update the managedservices option in the client options file. Refer to the
Backup-Archive Clients Installation and User's Guide for more information.
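For example, a client options file (dsm.opt) entry such as the following (a sketch;
verify the values that apply to your client level) directs the client acceptor to manage
the scheduler:
managedservices schedule webclient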
Note: The include-exclude list (an include-exclude file on UNIX clients) on each
client also affects which files are backed up or archived. For example, if a file is
excluded from backup with an EXCLUDE statement, the file is not backed up when
the schedule runs.
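For example, an include-exclude entry like the following (a sketch; the path and file
pattern are assumptions for illustration) prevents temporary files in a directory tree
from being backed up:
exclude c:\workfiles\...\*.tmp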
To view and specify server communications options, use the Server Options utility
available from the Tivoli Storage Manager Console. This utility is available from
the Service Information view in the server tree. By default, the server uses the
TCP/IP, Named Pipes, and HTTP communication methods. If you start the server
console and see warning messages that a protocol could not be used by the server,
either the protocol is not installed or the settings do not match the Windows
protocol settings.
For a client to use a protocol that is enabled on the server, the client options file
must contain corresponding values for communication options. From the Server
Options utility, you can view the values for each protocol.
Tip: This section describes setting server options before you start the server. When
you start the server, the new options go into effect. If you modify any server
options after starting the server, you must stop and restart the server to activate
the updated options.
For additional data protection you can use Secure Sockets Layer (SSL) and SSL
options SSLTCPADMINPORT and SSLTCPPORT. SSL is the standard technology
for creating encrypted links between servers and clients. SSL provides a secure
channel for servers and clients to communicate over open communications paths.
With SSL the identities of the parties are verified through the use of digital
certificates.
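For example, to enable SSL sessions on dedicated ports, you might add entries such
as the following to the server options file (a sketch; the port numbers are
assumptions and must match the values that are specified in the client and
administrative client options):
ssltcpport 1543
ssltcpadminport 1543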
For more information about server options, see the Administrator's Reference or the
Tivoli Storage Manager Console online help.
TCP/IP options
The Tivoli® Storage Manager server provides a range of TCP/IP options to
configure your system.
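For example, the following server options file entries (a sketch; the values shown are
common defaults, not recommendations for every environment) enable TCP/IP
communication on the default port:
commmethod tcpip
tcpport 1500
tcpwindowsize 63
tcpnodelay yes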
Related tasks:
“Setting up TLS” on page 908
For details about configuring SNMP for use with Tivoli Storage Manager, see the
Administrator's Guide.
The subagent communicates with the snmpd daemon, which in turn communicates
with a management application. The snmpd daemon must support the DPI
protocol. Agents are available on AIX. The subagent process is separate from the
Tivoli Storage Manager server process, but the subagent gets its information from a
server options file. When the SNMP management application is enabled, it can get
information and messages from servers.
If you do not specify an administrator authority level, the new administrator can
only request command-line help and issue query commands.
Remember: Passwords that authenticate with an LDAP directory server are
case-sensitive; “password” is distinguished from “PaSSword.” Passwords that
authenticate with the IBM Tivoli Storage Manager server are not case-sensitive;
the server cannot distinguish between “password” and “PaSSword.”
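For example, the following commands (a sketch; the administrator ID and password
are assumptions for illustration) register a new administrator and then grant system
authority:
register admin jsmith n3wpassw0rd
grant authority jsmith classes=system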
The examples in topics show how to perform tasks using the Tivoli Storage
Manager command-line interface. For information about the commands, see the
Administrator's Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
Use the following table to identify key tasks and the topics that describe how to
perform those tasks.
Task Topic
Configure and manage magnetic disk Chapter 5, “Magnetic disk devices,” on page
devices, which Tivoli Storage Manager uses 89
to store client data, the database, database
backups, recovery log, and export data.
Physically attach storage devices to your Chapter 6, “Attaching devices for the
system. Install and configure the required server,” on page 99
device drivers.
Configure devices to use with Tivoli Storage Chapter 7, “Configuring storage devices,” on
Manager, using detailed scenarios of page 111
representative device configurations.
Plan, configure, and manage an environment Chapter 10, “Using NDMP for operations
for NDMP operations with NAS file servers,” on page 233
Perform routine operations such as labeling Chapter 8, “Managing removable media
volumes, checking volumes into automated operations,” on page 157
libraries, and maintaining storage volumes
and devices.
Define and manage device classes. “Defining device classes” on page 209
For a summary of supported devices, see Table 13 on page 84. For details and
updates, see the Tivoli Storage Manager device support Web site:
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
Libraries
A physical library is a collection of one or more drives that share similar
media-mounting requirements. That is, volumes can be mounted in the drives either
by an operator or by an automated mounting mechanism.
A library object definition specifies the library type, for example, SCSI or 349X, and
other characteristics that are associated with the library type, for example, the
category numbers that an IBM TotalStorage 3494 Tape Library uses for private
volumes, scratch volumes, and scratch write-once, read-many (WORM) volumes.
Restriction: To use ACSLS functions, the StorageTek Library Attach software must
be installed. For more information, see “ACSLS-managed libraries” on page 136.
Manual libraries
In manual libraries, operators mount the volumes in response to mount-request
messages issued by the server.
The server sends these messages to the server console and to administrative clients
that were started by using the special MOUNTMODE or CONSOLEMODE parameter.
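For example, to start an administrative client that receives mount messages (a
sketch; substitute your own administrator ID and password), you might issue:
dsmadmc -mountmode -id=admin -password=secret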
You can also use manual libraries as logical entities for sharing sequential-access
disk (FILE) volumes with other servers.
You cannot combine drives of different types or formats, such as Digital Linear
Tape (DLT) and 8MM, in a single manual library. Instead, you must create a
separate manual library for each device type.
The drives in a SCSI library can be of different types. A SCSI library can contain
drives of mixed technologies, for example LTO Ultrium and DLT drives. Some
examples of this library type are:
v The Oracle StorageTek L700 library
v The IBM 3590 tape device, with its Automatic Cartridge Facility (ACF)
Remember: Although it has a SCSI interface, the IBM 3494 Tape Library
Dataserver is defined as a 349X library type.
Using a VTL, you can create variable numbers of drives and volumes because they
are only logical entities within the VTL. The ability to create more drives and
volumes increases the capability for parallelism, giving you more simultaneous
mounts and tape I/O.
VTLs use SCSI and Fibre Channel interfaces to interact with applications. Because
VTLs emulate tape drives, libraries, and volumes, an application such as Tivoli
Storage Manager cannot distinguish a VTL from real tape hardware unless the
library is identified as a VTL.
For information about configuring a VTL library, see “Managing virtual tape
libraries” on page 143.
349X libraries
A 349X library is a collection of drives in an IBM 3494. Volume mounts and
demounts are handled automatically by the library. A 349X library has one or more
library management control points (LMCP) that the server uses to mount and
dismount volumes in a drive. Each LMCP provides an independent interface to the
robot mechanism in the library.
The external media manager selects the appropriate drive for media-access
operations. You do not define the drives, check in media, or label the volumes in
an external library.
An external library allows flexibility in grouping drives into libraries and storage
pools. The library can have one drive, a collection of drives, or even a part of an
automated library.
For a definition of the interface that Tivoli Storage Manager provides to the
external media management system, see Appendix B, “External media management
interface description,” on page 1145.
Drives
A drive object represents a drive mechanism within a library that uses removable
media. For devices with multiple drives, including automated libraries, you must
define each drive separately and associate it with a library.
Drive definitions can include such information as the element address for drives in
SCSI or virtual tape libraries (VTLs), how often a tape drive is cleaned, and
whether the drive is online.
Tivoli Storage Manager drives include tape and optical drives that can stand alone
or that can be part of an automated library. Supported removable media drives
also include removable file devices such as rewritable CDs.
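For example, the following commands (a sketch; the server, library, and drive names
and the device alias are assumptions for illustration) define a SCSI library, define a
drive in that library, and define the path from the server to the drive:
define library autolib libtype=scsi
define drive autolib drive01
define path server1 drive01 srctype=server desttype=drive library=autolib device=mt0.0.0.1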
Device class
Each device that is defined to Tivoli Storage Manager is associated with one device
class, which specifies the device type and media management information, such as
recording format, estimated capacity, and labeling prefixes.
A device class for a tape or optical drive must also specify a library.
Removable media
Tivoli Storage Manager provides a set of specified removable-media device types,
such as 8MM for 8 mm tape devices, or REMOVABLEFILE for Jaz or DVD-RAM
drives.
The GENERICTAPE device type is provided to support certain devices that are not
supported by the Tivoli Storage Manager server.
For more information about supported removable media device types, see
“Defining device classes” on page 209 and the Administrator's Reference.
FILE volumes are a convenient way to use sequential-access disk storage for the
following reasons:
v You do not need to explicitly define scratch volumes. The server can
automatically acquire and define scratch FILE volumes as needed.
v You can create and format FILE volumes using a single command. The
advantage of private FILE volumes is that they can reduce disk fragmentation
and maintenance overhead.
v Using a single device class definition that specifies two or more directories, you
can create large, FILE-type storage pools. Volumes are created in the directories
you specify in the device class definition. For optimal performance, volumes
should be associated with file systems.
v When predetermined space-utilization thresholds have been exceeded, space
trigger functionality can automatically allocate space for private volumes in
FILE-type storage pools.
v The Tivoli Storage Manager server allows concurrent read-access and
write-access to a volume in a storage pool associated with the FILE device type.
Concurrent access improves restore performance by allowing two or more clients
to access the same volume at the same time. Multiple client sessions (archive,
retrieve, backup, and restore) or server processes (for example, storage pool
backup) can read the volume concurrently. In addition, one client session or one
server process can write to the volume while it is being read.
The following server processes are allowed shared read access to FILE volumes:
– BACKUP DB
– BACKUP STGPOOL
– COPY ACTIVEDATA
– EXPORT/IMPORT NODE
– EXPORT/IMPORT SERVER
Unless sharing with storage agents is specified, the FILE device type does not
require you to define library or drive objects. The only required object is a device
class.
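For example, a device class definition like the following (a sketch; the class name,
directory, and sizes are assumptions for illustration) is all that is required to use
sequential-access disk storage:
define devclass fileclass devtype=file mountlimit=20 maxcapacity=10g directory=d:\tsmfile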
The Centera storage device can also be configured with the Tivoli Storage Manager
server to form a specialized storage system that protects you from inadvertent
deletion of mission-critical data such as e-mails, trade settlements, legal documents,
and so on.
The CENTERA device class creates logical sequential volumes for use with Centera
storage pools. These volumes share many of the same characteristics as FILE type
volumes. With the CENTERA device type, you are not required to define library or
drive objects. CENTERA volumes are created as needed and end in the suffix
"CNT."
Multiple client retrieve sessions, restore sessions, or server processes can read a
volume concurrently in a storage pool that is associated with the CENTERA device
type. In addition, one client session or one server process can write to the volume
while it is being read. Concurrent access improves restore and retrieve performance
because two or more clients can have access to the same volume at the same time.
The following server processes can share read access to Centera volumes:
v EXPORT NODE
v EXPORT SERVER
v GENERATE BACKUPSET
The following server processes cannot share read access to Centera volumes:
v AUDIT VOLUME
v DELETE VOLUME
For more information about the Centera device class, see “Defining device classes
for CENTERA devices” on page 227. For details about Centera-related commands,
refer to the Administrator's Reference.
Sequential volumes on another Tivoli Storage Manager server
(SERVER)
The SERVER device type lets you create volumes for one Tivoli Storage Manager
server that exist as archived files in the storage hierarchy of another server. These
virtual volumes have the characteristics of sequential-access volumes such as tape.
No library or drive definition is required.
Figure 3. Removable media devices are represented by a library, drive, and device class
You can control the characteristics of storage pools, such as whether scratch
volumes are used.
Figure 4 shows storage pool volumes grouped into a storage pool. Each storage
pool represents only one type of media. For example, a storage pool for 8-mm
devices represents collections of only 8-mm tapes.
For DISK device classes, you must define volumes. For other device classes, such
as tape and FILE, you can allow the server to dynamically acquire scratch volumes
and define those volumes as needed. For details, see:
“Preparing volumes for random-access storage pools” on page 282
“Preparing volumes for sequential-access storage pools” on page 283
One or more device classes are associated with one library, which can contain
multiple drives. When you define a storage pool, you associate the pool with a
device class. Volumes are associated with pools. Figure 5 on page 70 shows these
relationships.
For information about defining storage pool and volume objects, see Chapter 11,
“Managing storage pools and volumes,” on page 267.
Data movers
Data movers are devices that accept requests from Tivoli Storage Manager to
transfer data on behalf of the server. Data movers transfer data between storage
devices without using significant server, client, or network resources.
For NDMP operations, data movers are NAS file servers. The definition for a NAS
data mover contains the network address, authorization, and data formats required
for NDMP operations. A data mover enables communication and ensures authority
for NDMP operations between the Tivoli Storage Manager server and the NAS file
server.
Server objects
Server objects are defined to use a library that is on a SAN and that is managed by
another Tivoli Storage Manager server, to use LAN-free data movement, or to store
data in virtual volumes on a remote server.
Among other characteristics, you must specify the server TCP/IP address.
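For example, a server definition for storing data in virtual volumes on a remote
server might look like the following (a sketch; the server name, password, and
addresses are assumptions for illustration):
define server srv_b serverpassword=secret hladdress=192.0.2.25 lladdress=1500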
For each storage pool, you must decide whether to use scratch volumes. If you do
not use scratch volumes, you must define private volumes, or you can use
space-triggers if the volume is assigned to a storage pool with a FILE device type.
Tivoli Storage Manager keeps an inventory of volumes in each automated library it
manages and tracks whether the volumes are in scratch or private status. When a
volume mount is requested, Tivoli Storage Manager selects a scratch volume only
if scratch volumes are allowed in the storage pool. The server can choose any
scratch volume that has been checked into the library.
You do not need to allocate volumes to different storage pools associated with the
same automated library. Each storage pool associated with the library can
dynamically acquire volumes from the library's inventory of scratch volumes. Even
if only one storage pool is associated with a library, you do not need to explicitly
define all the volumes for the storage pool. The server automatically adds volumes
to and deletes volumes from the storage pool.
This inventory is not necessarily identical to the list of volumes in the storage
pools associated with the library. For example:
v A volume can be checked into the library but not be in a storage pool (a scratch
volume, a database backup volume, or a backup set volume).
v A volume can be defined to a storage pool associated with the library (a private
volume), but not checked into the library.
For information about supported devices and Fibre Channel hardware and
configurations, see http://www.ibm.com/support/entry/portal/Overview/
Software/Tivoli/Tivoli_Storage_Manager
In a SAN you can share tape drives, optical drives, and libraries that are supported
by the Tivoli Storage Manager server, including most SCSI devices.
This does not include devices that use the GENERICTAPE device type.
Figure 6. Library sharing in a storage area network (SAN) configuration. The servers
communicate over the LAN. The library manager controls the library over the SAN. The
library client stores data to the library devices over the SAN.
When Tivoli Storage Manager servers share a library, one server, the library
manager, controls device operations. These operations include mount, dismount,
volume ownership, and library inventory. Other Tivoli Storage Manager servers,
library clients, use server-to-server communications to contact the library manager
and request device service. Data moves over the SAN between each server and the
storage device.
Tivoli Storage Manager servers use the following features when sharing an
automated library:
Partitioning of the Volume Inventory
The inventory of media volumes in the shared library is partitioned among
servers. Either one server owns a particular volume, or the volume is in
the global scratch pool. No server owns the scratch pool at any given time.
Serialized Drive Access
Only one server accesses each tape drive at a time. Drive access is
serialized and controlled so that servers do not dismount other servers'
volumes or write to drives where other servers mount their volumes.
Serialized Mount Access
The library autochanger performs a single mount or dismount operation at
a time. A single server (library manager) performs all mount operations to
provide this serialization.
Figure 7. LAN-Free data movement. Client and server communicate over the LAN. The
server controls the device on the SAN. Client data moves over the SAN to the device.
LAN-free data movement requires the installation of a storage agent on the client
machine. The server maintains the database and recovery log, and acts as the
library manager to control device operations. The storage agent on the client
handles the data transfer to the device on the SAN. This implementation frees up
bandwidth on the LAN that would otherwise be used for client data movement.
The following outlines a typical backup scenario for a client that uses LAN-free
data movement:
1. The client begins a backup operation. The client and the server exchange policy
information over the LAN to determine the destination of the backed up data.
For a client using LAN-free data movement, the destination is a storage pool
that uses a device on the SAN.
2. Because the destination is on the SAN, the client contacts the storage agent,
which will handle the data transfer. The storage agent sends a request for a
volume mount to the server.
3. The server contacts the storage device and, in the case of a tape library, mounts
the appropriate media.
4. The server notifies the client of the location of the mounted media.
5. The client, through the storage agent, writes the backup data directly to the
device over the SAN.
6. The storage agent sends file attribute information to the server, and the server
stores the information in its database.
Remember:
v Centera storage devices and optical devices cannot be targets for LAN-free
operations.
v For the latest information about clients that support the feature, see the IBM
Tivoli Storage Manager support page at http://www.ibm.com/support/entry/
portal/Overview/Software/Tivoli/Tivoli_Storage_Manager.
Network-attached storage
Network-attached storage (NAS) file servers are dedicated storage machines whose
operating systems are optimized for file-serving functions. NAS file servers
typically do not run software acquired from another vendor. Instead, they interact
with programs like Tivoli Storage Manager through industry-standard network
protocols, such as network data management protocol (NDMP).
Tivoli Storage Manager provides two basic types of configurations that use NDMP
for backing up and managing NAS file servers. In one type of configuration, Tivoli
Storage Manager uses NDMP to back up a NAS file server to a library device
directly attached to the NAS file server. (See Figure 8.) The NAS file server, which
can be distant from the Tivoli Storage Manager server, transfers backup data
directly to a drive in a SCSI-attached tape library. Data is stored in special,
NDMP-formatted storage pools, which can be backed up to storage media that can
be moved offsite for protection in case of an on-site disaster.
Note:
v A Centera storage device cannot be a target for NDMP operations.
v Support for filer-to-server data transfer is only available for NAS devices that
support NDMP version 4.
v For a comparison of NAS backup methods, including using a backup-archive
client to back up a NAS file server, see “Determining the location of NAS
backup” on page 242.
The image backups are different from traditional Tivoli Storage Manager backups
because the NAS file server transfers the data to the drives in the library or
directly to the Tivoli Storage Manager server. NAS file system image backups can
be either full or differential image backups. The first backup of a file system on a
NAS file server is always a full image backup. By default, subsequent backups are
differential image backups containing only data that has changed in the file system
since the last full image backup. If a full image backup does not already exist, a
full image backup is performed.
Using the Web backup-archive client, users can then browse the TOC and select
the files that they want to restore. If you do not create a TOC, users must be able
to specify the name of the backup image that contains the file to be restored and
the fully qualified name of the file.
By defining virtual file spaces, a file system backup can be partitioned among
several NDMP backup operations and multiple tape drives. You can also use
different backup schedules to back up sub-trees of a file system.
The virtual file space name cannot be identical to any file system on the NAS
node. If a file system is created on the NAS device with the same name as a virtual
file space, a name conflict occurs on the Tivoli Storage Manager server when
the new file space is backed up. See the Administrator's Reference for more
information about virtual file space mapping commands.
Remember: Virtual file space mappings are only supported for NAS nodes.
Libraries with this capability are those models supplied from the manufacturer
already containing mixed drives, or capable of supporting the addition of mixed
drives. Check with the manufacturer, and also check the Tivoli Storage Manager
Web site for specific libraries that have been tested on Tivoli Storage Manager with
mixed device types.
For example, you can have Quantum SuperDLT drives, LTO Ultrium drives, and
StorageTek 9940 drives in a single library defined to the Tivoli Storage Manager
server. For examples of how to set this up, see:
“Defining Tivoli Storage Manager storage objects with commands” on page 120
“Configuring a 3494 library with multiple drive device types” on page 126
If the new drive technology cannot write to media formatted by older generation
drives, the older media must be marked read-only to avoid problems for server
operations. Also, the older drives must be removed from the library. Some
examples of combinations that the Tivoli Storage Manager server does not support
in a single library are:
v SDLT 220 drives with SDLT 320 drives
v DLT 7000 drives with DLT 8000 drives
v StorageTek 9940A drives with 9940B drives
v UDO1 drives with UDO2 drives
There are exceptions to the rule against mixing generations of LTO Ultrium drives
and media. The Tivoli Storage Manager server does support mixtures of the
following types:
v LTO Ultrium Generation 1 (LTO1) and LTO Ultrium Generation 2 (LTO2)
v LTO Ultrium Generation 2 (LTO2) with LTO Ultrium Generation 3 (LTO3)
v LTO Ultrium Generation 3 (LTO3) with LTO Ultrium Generation 4 (LTO4)
v LTO Ultrium Generation 4 (LTO4) with LTO Ultrium Generation 5 (LTO5)
| v LTO Ultrium Generation 5 (LTO5) with LTO Ultrium Generation 6 (LTO6)
The server supports these mixtures because the different drives can read and write
to the different media. If you plan to upgrade all drives to Generation 2 (or
Generation 3, Generation 4, or Generation 5), first delete all existing Ultrium drive
definitions and the paths associated with them. Then you can define the new
Generation 2 (or Generation 3, Generation 4, or Generation 5) drives and paths.
Note:
1. LTO Ultrium Generation 3 drives can only read Generation 1 media. If you are
mixing Ultrium Generation 1 with Ultrium Generation 3 drives and media in a
single library, you must mark the Generation 1 media as read-only, and all
Generation 1 scratch volumes must be checked out.
2. LTO Ultrium Generation 4 drives can only read Generation 2 media. If you are
mixing Ultrium Generation 2 with Ultrium Generation 4 drives and media in a
single library, you must mark the Generation 2 media as read-only, and all
Generation 2 scratch volumes must be checked out.
3. LTO Ultrium Generation 5 drives can only read Generation 3 media. If you are
mixing Ultrium Generation 3 with Ultrium Generation 5 drives and media in a
single library, you must mark the Generation 3 media as read-only, and all
Generation 3 scratch volumes must be checked out.
| 4. LTO Ultrium Generation 6 drives can only read Generation 4 media. If you are
| mixing Ultrium Generation 4 with Ultrium Generation 6 drives and media in a
| single library, you must mark the Generation 4 media as read-only, and all
| Generation 4 scratch volumes must be checked out.
If you plan to encrypt volumes in a library, do not mix media generations in the
library.
| This includes LTO formats. Multiple storage pools and their device classes of
| different types can point to the same library that can support them as explained in
| “Different media generations in a library” on page 79.
You can migrate to a new generation of a media type within the same storage pool
by following these steps:
1. Replace all of the older drives with the newer generation drives within the
library (the generations cannot be mixed).
| 2. Mark the existing volumes with the older formats as read-only (R/O) if the new
| drives cannot append to those tapes in the old format. If the new drives can
| write to the existing media in their old format, this step is not necessary, but
| step 1 is still required. If it is necessary to keep different drive generations that
| are read compatible but not write compatible within the same library, you must
| use separate storage pools for each generation.
Library sharing
Library sharing, or tape resource sharing, allows multiple Tivoli Storage Manager
servers to use the same tape library and drives on a storage area network (SAN),
improving backup and recovery performance and tape hardware asset utilization.
When Tivoli Storage Manager servers share a library, one server is set up as the
library manager and controls library operations such as mount and dismount. The
library manager also controls volume ownership and the library inventory. Other
servers are set up as library clients and use server-to-server communications to
contact the library manager and request resources.
Library clients must be at the same or a lower version than the library manager
server. A library manager cannot support library clients that are at a higher
version. For example, a version 6.2 library manager can support a version 6.1
library client but cannot support a version 6.3 library client.
When data is to be stored in or retrieved from a storage pool, the server does the
following:
1. The server selects a volume from the storage pool. The selection is based on the
type of operation:
Tivoli Storage Manager manages the data on the media, but you manage the media
itself, or you can use a removable media manager. Regardless of the method used,
managing media involves creating a policy to expire data after a certain period of
time or under certain conditions, moving valid data onto new media, and reusing
the empty media.
(Figure: ongoing tape processing — tapes are labeled and checked in to the tape
inventory, a tape is selected for client data, data expires or moves, and empty tapes
are reclaimed.)
1. You label and check in the media. Checking media into a manual library
simply means storing them (for example, on shelves). Checking media into an
automated library involves adding them to the library volume inventory.
See
v “Labeling media with automated tape libraries” on page 159 or “Labeling
media for manual libraries” on page 172
2. If you plan to define volumes to a storage pool associated with a device, you
should check in the volume with its status specified as private. Use of scratch
volumes is more convenient in most cases.
3. A client sends data to the server for backup, archive, or space management.
The server stores the client data on the volume. Which volume the server
selects depends on:
v The policy domain to which the client is assigned.
v The management class for the data (either the default management class for
the policy set, or the class specified by the client in the client's
include/exclude list or file).
v The storage pool specified as the destination in either the management class
(for space-managed data) or copy group (for backup or archive data). The
storage pool is associated with a device class, which determines which
device and which type of media is used.
Table 13 summarizes the definitions that are required for different device types.
Table 13. Required definitions for storage devices

Device                          Device types                        Library          Drive   Path   Device class
Magnetic disk                   DISK                                —                —       —      Yes (see note)
                                FILE                                — (see note)     —       —      Yes
                                CENTERA                             —                —       —      Yes
Tape                            3590, 3592, 4MM, 8MM, DLT, LTO,     Yes              Yes     Yes    Yes
                                NAS, QIC, VOLSAFE, 3570, DTF,
                                GENERICTAPE, CARTRIDGE (see note),
                                ECARTRIDGE (see note)
Optical                         OPTICAL, WORM                       Yes              Yes     Yes    Yes
Removable media (file system)   REMOVABLEFILE                       Yes              Yes     Yes    Yes
Notes:
v The DISK device class exists at installation and cannot be changed.
v FILE libraries, drives, and paths are required for sharing with storage agents.
v Support for the CARTRIDGE device type:
– IBM 3480, 3490, and 3490E tape drives
v The ECARTRIDGE device type is for StorageTek's cartridge tape drives such as
– SD-3, 9480, 9890, and 9940 drives
To map storage devices to device classes, use the information shown in Table 14.
Table 14. Mapping storage devices to device classes

Device class    Description
DISK            Storage volumes that reside on the internal disk drive
You must define any device classes that you need for your removable media
devices such as tape drives. See “Defining device classes” on page 209 for
information on defining device classes to support your physical storage
environment.
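For example, the following command (a sketch; the class and library names are
assumptions that match the sample environment shown in Table 15) defines a
device class for 8-mm tape drives in an automated library:
define devclass 8mm_class devtype=8mm library=auto_8mm format=drive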
For example, you determine that users in the business department have three
requirements:
v Immediate access to certain backed-up files, such as accounts receivable and
payroll accounts.
To match user requirements to storage devices, you define storage pools, device
classes, and, for device types that require them, libraries and drives. For example,
to set up the storage hierarchy so that data migrates from the BACKUPPOOL to 8
mm tapes, you specify BACKTAPE1 as the next storage pool for BACKUPPOOL.
See Table 15.
Table 15. Mapping storage pools to device classes, libraries, and drives

BACKUPPOOL
   Device class: DISK
   Library (hardware): —
   Drives: —
   Volume type: Storage volumes on the internal disk drive
   Storage destination: For a backup copy group for files requiring immediate access

BACKTAPE1
   Device class: 8MM_CLASS
   Library (hardware): AUTO_8MM (Exabyte EXB-210)
   Drives: DRIVE01, DRIVE02
   Volume type: 8-mm tapes
   Storage destination: For overflow from the BACKUPPOOL and for archived data that is
   periodically accessed

BACKTAPE2
   Device class: DLT_CLASS
   Library (hardware): MANUAL_LIB (manually mounted)
   Drives: DRIVE03
   Volume type: DLT tapes
   Storage destination: For backup copy groups for files that are occasionally accessed
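For example, commands like the following (a sketch that uses the sample names
from Table 15; adjust the scratch volume limit for your environment) define the
tape storage pool and make it the next pool in the hierarchy for BACKUPPOOL:
define stgpool backtape1 8mm_class maxscratch=50
update stgpool backuppool nextstgpool=backtape1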
Note: Tivoli Storage Manager has the following default disk storage pools:
v BACKUPPOOL
v ARCHIVEPOOL
v SPACEMGPOOL
v DISKPOOL
For more information, see
“Configuring random access volumes on disk devices” on page 95
Tip: For sequential access devices, you can categorize the type of removable
media based on their capacity.
For example, standard length cartridge tapes and longer length cartridge tapes
require different device classes.
8. Determine how the mounting of volumes is accomplished for the devices:
v Devices that require operators to load volumes must be part of a defined
MANUAL library.
| v Devices that are automatically loaded must be part of a defined SCSI, 349X,
| or VTL library. Each automated library device is a separate library.
v Devices that are controlled by Oracle StorageTek Automated Cartridge
System Library Software (ACSLS) must be part of a defined ACSLS library.
v Devices that are managed by an external media management system must
be part of a defined EXTERNAL library.
9. If you are considering storing data for one Tivoli Storage Manager server by
using the storage of another Tivoli Storage Manager server, consider network
bandwidth and network traffic. If your network resources constrain your
environment, you might have problems with using the SERVER device type
efficiently.
Also, consider the storage resources available on the target server. Ensure that
the target server has enough storage space and drives to handle the load from
the source server.
10. Determine the storage pools to set up, based on the devices you have and on
user requirements. Gather users' requirements for data availability. Determine
which data needs quick access and which does not.
11. Be prepared to label removable media. You might want to create a new
labeling convention for media so that you can distinguish them from media
that are used for other purposes.
Tivoli Storage Manager stores data on magnetic disks in random access volumes,
as data is normally stored on disk, and in files on the disk that are treated as
sequential access volumes.
You can store the following types of data on magnetic disk devices:
v The database and recovery log
v Backups of the database
v Export and import data
v Client data that is backed up, archived, or migrated from client nodes. The client
data is stored in storage pools.
Tasks:
“Configuring random access volumes on disk devices” on page 95
“Configuring FILE sequential volumes on disk devices” on page 96
“Varying disk volumes online or offline” on page 97
“Cache copies for files stored on disk” on page 97
“Freeing space on disk” on page 97
“Scratch FILE volumes” on page 98
“Volume history file and volume reuse” on page 98
Review the following Tivoli Storage Manager requirements for disk devices and
compare them with information from your disk system vendor. A list of supported
disk storage devices is not available. Contact the vendor for your disk system if
you have questions or concerns about whether Tivoli Storage Manager
requirements are supported. The vendor should be able to provide the
configuration settings to meet these requirements.
I/O operation results must be reported synchronously and accurately. For the
database and the active and archive logs, unreported or asynchronously reported
write errors that result in data not being permanently committed to the storage
system can cause data integrity problems or server failures.
Data in Tivoli Storage Manager storage pools, database volumes, and log volumes
is interdependent. Tivoli Storage Manager requires that the data written to
these entities can be retrieved exactly as it was written. Also, the data in these
entities must be consistent with one another. There cannot be timing windows in which
data that is being retrieved varies depending on the way that an I/O system
manages the writing of data. Generally, this means that replicated Tivoli Storage
Manager environments must use features such as maintenance of write-order
between the source and replication targets. It also requires that the database, log,
and disk storage pool volumes be part of a consistency group in which any I/O to
the members of the target consistency group are written in the same order as the
source and maintain the same volatility characteristics. Requirements for I/O to
disk storage systems at the remote site must also be met.
Database write operations must be nonvolatile for the active and archive logs and
for DISK device class storage pool volumes. Data must be permanently committed
to storage that is known to Tivoli Storage Manager. Tivoli Storage Manager has
many of the attributes of a database system, and the data relationships that are
maintained require that data written as a group be permanently resident as a group
or not resident as a group. Intermediate states produce data integrity issues. Data
must be permanently resident after each operating-system write API invocation.
For FILE device type storage pool volumes, data must be permanently resident
following an operating system flush API invocation. This API is used at key
processing points in the Tivoli Storage Manager application. The API is used when
data is to be permanently committed to storage and synchronized with database
and log records that have already been permanently committed to disk storage.
For systems that use caches of various types, the data must be permanently
committed by the write APIs for the database, the active and archive logs, and
DISK device class storage pool volumes and by the flush API (for FILE device class
storage pool volumes). Tivoli Storage Manager uses write-through flags internally
when using storage for the database, the active and archive logs, and DISK device
class storage pool volumes. If nonvolatile, battery-protected cache is used to
safeguard I/O writes to a device, data for the I/O operation can still be lost if there
is a power loss and power is not restored before the battery is exhausted. This
would be the same as having uncommitted storage, resulting in data integrity
issues.
To write properly to the Tivoli Storage Manager database, to active and archive
logs, and to DISK device class storage pool volumes, the operating system API
write invocation must synchronously and accurately report the operation results.
Similarly, the operating system API flush invocation for FILE device type storage
pool volumes must also synchronously and accurately report the operation results.
A successful result from the API for either write or flush must guarantee that the
data is permanently committed to the storage system.
These requirements extend to replicated environments such that the remote site
must maintain consistency with the source site in terms of the order of writes; I/O
must be committed to storage at the remote site in the same order that it was
written at the source site. The ordering applies to the set of files that Tivoli Storage
Manager is writing, whether the files belong to the database, the recovery log, or
storage pool volumes.
To avoid having the Tivoli Storage Manager servers at the local and remote sites
lose synchronization, the server at the remote site should not be started except in
a fail-over situation.
locations can lose synchronization, there must be a mechanism to recognize this
situation. If synchronization is lost, the Tivoli Storage Manager server at the remote
location must be restored by conventional means by using Tivoli Storage Manager
database and storage pool restores.
Tivoli Storage Manager supports the use of remote file systems or drives for
reading and writing storage pool data, database backups, and other data
operations. Remote file systems in particular might report successful writes, even
after being configured for synchronous operations. This mode of operation causes
data integrity issues if the file system can fail after reporting a successful write.
Check with the vendor of your file system to ensure that flushes are performed to
nonvolatile storage in a synchronous manner.
Note: Define storage pool volumes on disk drives that reside on the server
system, not on remotely mounted file systems. Network attached drives can
compromise the integrity of the data that you are writing.
Related concepts:
“Disk devices” on page 66
Related tasks:
“Defining storage pool volumes” on page 284
The Device Configuration wizard automatically creates a storage pool when the
FILE volume is configured. Administrators must then complete one of the
following actions:
v Use Tivoli Storage Manager policy to specify the new storage pool as the
destination for client data. See Chapter 14, “Implementing policies for client
data,” on page 497.
v Place the new storage pool in the storage pool migration hierarchy by updating
an already defined storage pool. See “Example: Updating storage pools” on page
278.
Related tasks:
“Defining sequential-access disk (FILE) device classes” on page 218
“Step 1: Defining device classes for database backups” on page 943
“Planning for sequential media used to export data” on page 783
“Defining storage pool volumes” on page 284
“Preparing volumes for sequential-access storage pools” on page 283
For example, to vary the disk volume named STGVOL.POOL001 offline, enter:
vary offline stgvol.pool001
You can make the disk volume available to the server again by varying the volume
online. For example:
vary online stgvol.pool001
Using cache can improve how fast a frequently accessed file is retrieved. Faster
retrieval can be important for clients that are storing space-managed files. If the file
needs to be accessed, the copy in cache can be used rather than the copy on tape.
However, using cache can degrade the performance of client backup operations
and increase the space needed for the database.
Related tasks:
“Caching in disk storage pools” on page 310
Expiration processing deletes information from the database about any client files
that are no longer valid according to the policies you have set. For example,
suppose that four backup versions of a file exist in server storage, and only three
versions are allowed in the backup policy (the management class) for the file.
Expiration processing deletes information about the oldest of the four versions of
the file. The space that the file occupied in the storage pool becomes available for
reuse.
You can run expiration processing by using one or both of the following methods:
v Use the EXPIRE INVENTORY command.
v Set the EXPINTERVAL server option and specify the interval so that expiration
processing runs periodically.
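For example, to run expiration processing immediately, issue the following command
from an administrative client:
expire inventory
To run expiration processing automatically, you might set a server option such as the
following (a sketch; the 12-hour interval is an assumption, not a recommendation):
expinterval 12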
Shredding occurs only after a data deletion commits, but it is not necessarily
completed immediately after the deletion. The space occupied by the data to be
shredded remains occupied while the shredding takes place, and is not available as
free space until the shredding is complete.
You can specify a maximum number of scratch volumes for a storage pool that has
a FILE device type.
When scratch volumes used in storage pools become empty, the files are deleted.
Scratch volumes can be located in multiple directories on multiple file systems.
To reuse volumes that were previously used for database backup or export, use the
DELETE VOLHISTORY command.
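For example, the following command (a sketch; adjust the date range to your
retention requirements) deletes volume history information for database backup
volumes that are more than seven days old:
delete volhistory type=dbbackup todate=today-7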
| Note: With Tivoli Storage Manager Extended Edition, the disaster recovery
| manager (DRM) function automatically deletes volume information during
| processing of the MOVE DRMEDIA command.
Related tasks:
“Protecting the volume history file” on page 949
Chapter 36, “Disaster recovery manager,” on page 1053
Attached devices should be on their own host bus adapter (HBA) and should not
share with other devices types (disk, CDROM, and so on). IBM tape drives have
some special requirements for HBAs and associated drivers.
Tasks:
“Device alias names” on page 101
“Selecting a device driver” on page 104
Note: Each device that is connected in a chain to a single SCSI bus must be set
to a unique SCSI ID. If each device does not have a unique SCSI ID, you may
have serious system problems.
4. Follow the manufacturer's instructions to attach the device to your server
system hardware.
Attention:
a. Power off your system before attaching a device to prevent damage to the
hardware.
b. Attach a terminator to the last device in the chain of devices connected on
one SCSI adapter card.
5. Install the appropriate device drivers. See “Selecting a device driver” on page
104.
6. Determine the name for the device and record the name. This information can
help you when you need to perform operations such as adding volumes. Keep
the records for future reference.
Note: In some automated libraries, the drives and the autochanger share a
single SCSI ID, but have different LUNs. For these libraries, only a single SCSI
ID is required. Check the documentation for your device.
3. Follow the manufacturer's instructions to set the SCSI ID for the drives and
library controller to the unused SCSI IDs that you found. Usually this means
setting switches on the back of the device.
Note: Each device that is connected in a chain to a single SCSI bus must be set
to a unique SCSI ID. If each device does not have a unique SCSI ID, you may
have serious system problems.
4. Follow the manufacturer's instructions to attach the device to your server
system hardware.
Attention:
a. Power off your system before attaching a device to prevent damage to the
hardware.
b. Attach a terminator to the last device in the chain of devices connected on
one SCSI adapter card. Detailed instructions should be in the
documentation that came with your hardware.
5. Install the appropriate device drivers. See “Selecting a device driver” on page
104.
6. Determine the name for each drive and for the library, and record the names.
This information can help you when you need to perform operations such as
adding volumes to an autochanger. Keep the records for future reference.
7. For the IBM Tivoli Storage Manager server to access a SCSI library, set the
device for the appropriate mode. This is usually called random mode; however,
terminology may vary from one device to another. Refer to the documentation
for your device to determine how to set it to the appropriate mode.
Note:
a. Some libraries have front panel menus and displays that can be used for
explicit operator requests. However, if you set the device to respond to such
requests, it typically will not respond to IBM Tivoli Storage Manager
requests.
b. Some libraries can be placed in sequential mode, in which volumes are
automatically mounted in drives by using a sequential approach. This mode
conflicts with how IBM Tivoli Storage Manager accesses the device.
Device names for the IBM Tivoli Storage Manager device driver differ from device
names for the Windows device driver. For example, an automated library device
might be known as lb0.0.0.1 to the IBM Tivoli Storage Manager device driver and
as changerx (where x is a number 0–9), to the Windows device driver.
| When you configure devices by using IBM Tivoli Storage Manager commands, you
| must provide the device names as parameters to the DEFINE PATH command. The
| names can be either:
v Drive letters, for devices that are attached as local, removable file systems
v Alias names, for devices that are controlled by either the IBM Tivoli Storage
Manager device driver or the Windows device drivers
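For example, to define a path to an automated library by using its alias name (a
sketch; the server name, library name, and alias are assumptions for illustration), you
might issue:
define path server1 autolib srctype=server desttype=library device=lb0.0.0.1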
“Obtaining device alias names” on page 102 describes the procedure for using the
IBM Tivoli Storage Manager Console to obtain device names.
Alias names replace the real device names in IBM Tivoli Storage Manager
commands and screens. The IBM Tivoli Storage Manager device driver
communicates with devices by using the alias names. See “Obtaining device alias
names” on page 102.
If you use the IBM Tivoli Storage Manager Device Configuration Wizard to initially
configure devices, this step is unnecessary because the wizard gathers information
about the SCSI Target IDs, logical unit numbers, bus numbers, and SCSI port
numbers required for the alias names. However, if you add devices using IBM
Tivoli Storage Manager commands, you must provide the information in the
DEFINE PATH command. To determine the SCSI properties for a device:
1. From the Tivoli Storage Manager Console, expand the tree to Tivoli Storage
Manager Device Driver for the machine that you are configuring.
2. Expand Tivoli Storage Manager Device Driver and Reports.
3. Click Device Information. The Device Information view appears. The view
lists all devices connected to the server and lists their SCSI attributes in the
form of the alias names.
4. You can also obtain device alias names from the TSM Name column.
See “Device alias names” on page 101 for an overview of IBM Tivoli Storage
Manager device names.
After devices are configured, you can run the tsmdlst utility to display device
information. The utility is in the devices bin directory, which is \Program
Files\Tivoli\TSM\console by default.
Options
/computer=computer_name
Specifies the name of the computer for which devices are listed. The default is
the local system.
/detail
Displays details on devices in the list. By default, a summary is shown.
/all
Displays information about all types of devices. By default, only tape drives
and tape libraries are included in the results.
/nogenerictapecheck
Skips the step for opening detected drives to see if they are supported for the
Tivoli Storage Manager GENERICTAPE device type.
/nohbacheck
Skips the step for HBA API detection, which might speed up processing. This
option can be useful when debugging is needed.
/trace
Used for diagnostic purposes. Stores trace output in the tsmdlst_trace.txt file.
/? Displays usage information about tsmdlst and its parameters.
/xinquiry
Provides an alternate way to obtain serial number and worldwide name
information.
Display information about tape devices and tape libraries for a local system,
ATLAS, by issuing the tsmdlst utility:
tsmdlst
Note: The device name displayed is the alias name that can be used in the DEFINE
PATH command and the UPDATE PATH command. The alias name is not the actual
device name.
Computer Name: ATLAS
TSM Device Driver: TSMScsi - Running
TSM Name ID LUN Bus Port SSN WWN TSM Type Driver Device Identifier
-----------------------------------------------------------------------------------------
mt0.0.0.3 0 0 0 3 HU1206LY0B ..638D LTO NATIVE HP Ultrium 5-SCSI Y50S
mt1.0.0.3 1 0 0 3 HU1206LY9N ..6390 LTO NATIVE HP Ultrium 5-SCSI Y5AS
mt2.4.0.3 2 4 0 3 8395261003 ..C358 LTO IBM IBM ULT3580-TD3 5AT0
lb3.0.0.3 3 0 0 3 1333508999 ..7A14 LIBRARY TSM ATL P3000 0100
mt3.1.0.3 3 1 0 3 1333508000 ..7A14 DLT TSM QUANTUM DLT7000 0100
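To include all device types and more detail in the output, you can add the options
that are described above, for example:
tsmdlst /detail /all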
Windows device drivers are recommended for all devices. IBM device drivers are
available for most IBM labeled devices. If a Windows device driver is not available
for your device, and the device is supported by the Tivoli Storage Manager device
driver, you can use the Tivoli Storage Manager device driver for your device.
Starting with Tivoli Storage Manager Version 6.2, support for native drivers
through SCSI passthru is available. You can choose to use a Windows Hardware
Qualification Lab certified native device driver instead of the Tivoli Storage
Manager device driver to control devices. Devices that are already controlled by
the Tivoli Storage Manager device driver can be switched to a native driver
without updating drive or device class definitions.
The IBM device driver should be installed for the following IBM devices:
IBM 3494 library
IBM Ultrium 3580, TS2230, TS2340 tape drives
IBM 3581, 3582, 3583, 3584 tape libraries
IBM 3590, 3590E, and 3590H tape drives
IBM 3592 and TS1120 tape drives
IBM TS3100, TS3200, TS3310, TS3400, and TS3500 tape libraries
IBM device drivers are available at the Fix Central support website:
1. Go to the Fix Central website: http://www.ibm.com/support/fixcentral/.
2. Select Storage Systems for the Product Group.
3. Select Tape Systems for the Product Family.
4. Select Tape device drivers and software for the Product Type.
5. Select Tape device drivers for the Product.
6. Select your operating system for the Platform.
It is recommended that you install the most current driver available.
Tivoli Storage Manager supports all devices that are supported by IBM device
drivers. However, Tivoli Storage Manager does not support all the
operating-system levels that are supported by IBM device drivers. For the most
up-to-date list of devices and operating-system levels supported by IBM device
drivers, see the Tivoli Storage Manager Supported Devices website at
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html.
Tivoli Storage Manager Support for Multipath I/O with IBM Tape
Devices
Multipath I/O is the use of different paths to get to the same physical device (for
example, through multiple host bus adapters, switches, and so on). Multipathing
helps ensure that there is no single point of failure.
The IBM tape device driver provides multipathing support so that if a path fails,
the Tivoli Storage Manager server can use a different path to access data on a
storage device. The failure and transition to a different path are undetected by the
server. The IBM tape device driver also uses multipath I/O to provide dynamic
load balancing for enhanced I/O performance.
A computer has a unique SCSI address and Tivoli Storage Manager device name
for each path to a changer or tape device, even though some paths may be
redundant. For each set of redundant paths, you must define only one path to
Tivoli Storage Manager using one of the corresponding Tivoli Storage Manager
device names.
For an overview of path failover and load balancing, as well as information about
how to enable, disable, or query the status of path failover for each device, see the
IBM Tape Device Drivers Installation and User's Guide.
If you are not using RSM to manage your SCSI tape library devices, disable it so
that it does not conflict with Tivoli Storage Manager's use of these devices.
You can also allow RSM to run, but selectively disable each SCSI device that it tries
to manage:
1. From your desktop, right-click My Computer.
2. Select Manage.
3. Select Storage.
4. Select Removable Storage.
5. Select Physical Locations.
6. Under Physical Locations you will see a list of tape libraries, under which are
listed the library's drives.
7. Right-click each library and drive to be disabled from RSM and select its
properties.
8. Uncheck the Enable Library or Enable Drive box.
9. Click OK.
10. Close the Computer Management Console.
When the operating system is started, the Windows device driver tries to acquire
the devices it supports before the IBM Tivoli Storage Manager device driver can
acquire devices. Read the following sections to determine how to select the device
driver you want.
The Tivoli Storage Manager device driver is installed with the server. For details
on device driver installation directories, see Installation directories. The Tivoli
Storage Manager device driver uses persistent reservation for some tape drives. See
Technote 1470319 at http://www.ibm.com/support/docview.wss?uid=swg21470319
for details.
For devices not currently supported by the Tivoli Storage Manager device driver,
the Windows driver might be suitable.
v Optical and WORM devices are controlled by the disk driver that is supplied
with Windows.
v Removable media devices (attached as local file systems) require the Windows
device driver.
v Unsupported tape drives require the Windows device driver and must be used
with the GENERICTAPE device class. For more information, see the DEFINE
DEVCLASS - GENERICTAPE command in the Administrator's Reference.
To install the device driver for an IBM 3494 Tape Library Dataserver, refer to the
IBM TotalStorage Tape Device Drivers Installation and User's Guide.
When you define a path for the library, use the symbolic name that is set in the
C:\winnt\ibmatl.conf file as the device name. For example, if the symbolic name for
the library in the C:\winnt\ibmatl.conf file is 3494a, then 3494a is the device name
to specify. Drives in the library are set up separately.
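For example, a minimal sketch of the library and path definitions, assuming the
symbolic name 3494a from the ibmatl.conf file and a server named SERVER01 (the
library name is illustrative):
define library my3494 libtype=349x
define path server01 my3494 srctype=server desttype=library device=3494a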
Before installing a new version of the Tivoli Storage Manager device driver,
uninstall the previous version. Then complete this procedure during installation of
the device driver package.
1. When the Device Driver Installation Wizard welcome panel displays, select
Next and proceed through the panels to install the device drivers.
Note: During installation, the system might display a Windows Security dialog
box that asks whether you want to install the device software. Select the Always
trust software from "IBM Corporation" check box and click Install.
2. After your device drivers have been installed, the final panel in the wizard is
displayed. Select Finish to complete the installation.
After a successful installation, use the Device Manager to configure devices with
the Tivoli Storage Manager device driver.
Complete this procedure to uninstall the Tivoli Storage Manager device driver.
1. From your Windows Control Panel, navigate to Programs and Features.
2. Remove or uninstall the IBM Tivoli Storage Manager Device Driver entry.
3. Do not manually remove the Windows Driver Package entries for tsmscsi.
These packages are removed automatically after the IBM Tivoli Storage Manager
Device Driver program is removed in Step 2. These entries, however, might still
appear in the Add or Remove Programs or Programs and Features windows until
the window is refreshed.
Because some tape drives do not have all of the functions that the Tivoli Storage
Manager server requires, not all tape drives can be used with the GENERICTAPE
device class. To determine if you can use the Windows device driver with a
specific tape drive, see “Creating a file to list devices and their attributes” on page
109. You can find the setup procedure for these devices at “Configuring devices
not supported by the Tivoli Storage Manager server” on page 116.
When using Windows device drivers with the GENERICTAPE device class, be
aware of the following:
v Tivoli Storage Manager does not recognize the device type.
If you add devices and intend to use the GENERICTAPE device class, you
should understand that the server does not know device types and recording
formats. For example, if you use a Windows device driver for a 4MM drive
using the DDS2 recording format, Tivoli Storage Manager knows only that the
device is a tape drive. The default recording format is used.
The server cannot prevent errors when it does not know the device type. For
example, if one GENERICTAPE device class points to a manual library device
containing a 4MM drive and an 8MM drive, the server might make an
impossible request: mount a 4MM cartridge into an 8MM drive.
v Device problems might be more difficult to solve.
The server cannot report I/O errors with as much detail. The server can obtain
only minimal information for display in the server console log.
It is recommended that you use the GENERICTAPE device class only with
unsupported tape devices.
A file listing devices and their attributes can be created by completing the
following procedure.
1. Click Start > Programs > Command Prompt on the Windows Start button. The
Command Prompt dialog appears.
2. Change directories to the directory in which the Tivoli Storage Manager
Console has been installed. For default installations, the path resembles the
following:
c:\program files\tivoli\tsm\console
3. To create the file, type in the following command:
tsmdlst > devlist.txt
4. To view the file, type in the following command:
notepad devlist.txt
Tape drives may be automatically controlled by the Tivoli Storage Manager device
driver if the Windows device drivers are not available. If the devices are not
automatically configured and controlled by the Tivoli Storage Manager device
driver, you must manually update the controlling driver for each device that you
want controlled by the tsmscsi device driver.
Perform the following steps when setting up the Tivoli Storage Manager server to
access Centera.
1. Install the Tivoli Storage Manager server.
2. If you are upgrading from a previous level of Tivoli Storage Manager, delete
the following Centera SDK library files from the directory where the server was
installed:
FPLibrary.dll
FPParser.dll
fpos.dll
PAImodule.dll
3. Contact your EMC representative to obtain the installation packages and
instructions to install the Centera SDK Version 3.2 or later.
4. Install the Centera SDK. During the installation, take note of the directory
where the Centera SDK is installed.
a. Unzip and untar the package in a working directory.
b. Copy the files in the lib32 directory to the directory with the server
executable (dsmserv.exe).
5. Start the Tivoli Storage Manager server and set up the policy, device class, and
storage pools for Centera.
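For step 5, the following is a hedged sketch of a Centera device class definition;
the device class name, IP address, and PEA file path are illustrative and must be
replaced with the values for your Centera cluster:
define devclass centera_class devtype=centera hladdress=192.168.1.50?c:\centera\user1.pea
A storage pool that uses this device class, and the policy that points to it, are then
defined in the usual way.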
For the most up-to-date list of supported devices and operating-system levels, see
the Supported Devices website at http://www.ibm.com/software/sysmgmt/
products/support/IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html.
Concepts:
“Device configuration overview” on page 112
“Mixed device types in libraries” on page 78
“Server options that affect storage operations” on page 88
“Impact of device changes on the SAN” on page 153
“Defining devices and paths” on page 205
Tasks:
“Configuring manual devices” on page 113
“Configuring automated library devices” on page 113
“Configuring optical devices” on page 114
“Configuring devices not supported by the Tivoli Storage Manager server” on page 116
“Configuring removable media devices” on page 116
“Configuring devices using Tivoli Storage Manager commands” on page 119
“Configuring Tivoli Storage Manager servers to share SAN-connected devices” on page 146
“Configuring Tivoli Storage Manager for LAN-free data movement” on page 150
“Validating your LAN-free configuration” on page 151
“Configuring Tivoli Storage Manager for NDMP operations” on page 151
“Configuring IBM 3494 libraries” on page 123
“ACSLS-managed libraries” on page 136
“Troubleshooting device configuration” on page 152
You can perform configuration tasks by using the Administration Center and the
command-line interface. For information about Tivoli Storage Manager commands,
see the Administrator's Reference or issue the HELP command from the command line
of a Tivoli Storage Manager administrative client.
For more information about using the Administration Center, see the Installation
Guide.
The tasks require an understanding of Tivoli Storage Manager storage objects. For
an introduction to these storage objects, see “Tivoli Storage Manager storage
objects” on page 62.
Important: In most cases, the server expects to have exclusive use of devices
that are defined to the server. Attempting to use a Tivoli Storage Manager
device with another application might cause the server or the other application
to fail. This restriction does not apply to 3494 library devices, or when you use
a storage area network (SAN) to share library devices.
5. Determine the media type and device type for client data.
You can link clients to devices by directing client data to a type of media. For
example, accounting department data might be directed to LTO Ultrium tapes,
and as a result the server would select LTO Ultrium devices.
You can direct data to a specific media type through Tivoli Storage Manager
policy. When you register client nodes, you specify the associated policy.
For configuring devices by using Tivoli Storage Manager commands, you must
also define or update the Tivoli Storage Manager policy objects that link clients
to the pool of storage volumes and to the device (see the sketch after this list).
6. Register clients to the policy domain defined or updated in the preceding step.
This step links clients and their data with storage volumes and devices.
7. Prepare media for the device.
Label tapes and optical disks before they can be used. For automated library
devices, you must also add the media to the device's volume inventory by
checking media into the library device.
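The following is a hedged sketch of the policy and registration commands that
steps 5 and 6 describe, using the direct-to-tape domain and storage pool names
that appear elsewhere in this chapter; the management class and node password
are illustrative:
copy domain standard dir2tape
update copygroup dir2tape standard standard standard type=backup destination=tape8mm_pool
activate policyset dir2tape standard
register node astro cadet domain=dir2tape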
The device configuration wizard does not support IBM devices, some tape
libraries, and some tape devices with features such as WORM, encryption, and
logical block protection. To configure these devices, use the Administration Center
or the Tivoli Storage Manager command-line interface.
Devices that are not supported cannot be configured by using the device
configuration wizard. To use a tape drive with the GENERICTAPE device class,
you must add the device by using the Tivoli Storage Manager command line.
For devices not yet supported by the Tivoli Storage Manager device driver, you
can use the Windows device driver. Perform the following steps to configure
manually-operated, stand-alone tape and optical devices:
1. Attach the device to the system.
Follow the manufacturer's instructions to attach the device to the system.
2. Set up the appropriate device driver for the device.
3. Configure the device.
a. From the Tivoli Storage Manager Console, expand the tree for the server
instance that you are configuring.
b. Click Wizards, then double-click Device Configuration in the right pane.
The Device Configuration Wizard appears.
c. Follow the instructions in the wizard.
4. Determine your backup strategy.
Determine which device the server backs up client data to, and whether client
data is backed up to disk, and then migrated to tape, or if it is backed up
directly to tape.
5. Update the Tivoli Storage Manager policy.
Define the Tivoli Storage Manager policy that links client data with media for
the device.
6. Label volumes.
To see whether a device has been automatically configured with the Tivoli Storage
Manager device driver, open Device Manager. Right-click the device and select
Properties. Select the Driver tab and then Driver File Details to see the device
driver that is currently controlling the device.
You can also run tsmdlst.exe in the Tivoli Storage Manager console directory to see
if devices have been configured with the Tivoli Storage Manager device driver. If
the devices have not been configured with the Tivoli Storage Manager device
driver, the TSM Type will show GENERICTAPE.
To manually configure devices for the Tivoli Storage Manager device driver,
tsmscsi.sys, complete this procedure.
1. Locate the device in the Device Manager console (devmgmt.msc) and select it.
Tape drives are listed under Tape drives, medium changers are under Medium
Changers, and optical drives are under Disk drives.
2. Configure the device for use by tsmscsi.sys.
a. Select Update Driver... either from Action -> Update Driver... or by
right-clicking on the device and selecting Update Driver Software...
b. Select Browse my computer for driver software.
3. Select Let me pick from a list of device drivers on my computer.
4. Click Next.
5. Select one of these options, depending on the device that you are
configuring:
v For a tape drive, select IBM Tivoli Storage Manager for Tape Drives.
v For a medium changer, select IBM Tivoli Storage Manager for Medium
Changers.
v For an optical drive, select IBM Tivoli Storage Manager for Optical Drives.
6. Click Next.
7. Click Close.
8. Verify that the device has been configured correctly for tsmscsi.
a. Right-click on the device and select Properties.
b. Select the Driver tab and Driver Details.
c. The Driver Details panel shows the device driver that is controlling the
device. This should be tsmscsi.sys for 32-bit Windows Server 2008, or
tsmscsi64.sys for 64-bit Windows Server 2008.
For Windows Server 2008 Server Core, devices cannot be configured through
Device Manager. If the devices are not automatically configured, you must use the
Tivoli Storage Manager CHANGE DEVDRIVER command to configure the devices.
See Technote 1320150 for more information.
Devices that are not supported by the Tivoli Storage Manager server can be added
by using the Tivoli Storage Manager command line.
1. Attach the device to the system.
Follow the manufacturer's instructions to attach the device to the system.
2. Set up the appropriate Windows device driver for the device.
3. Configure the device, following these guidelines (a sketch of the corresponding
commands follows this procedure):
v Define the drive path with GENERICTAPE=Yes.
v The device class must have a device type of GENERICTAPE.
v Define a different device class and a different manual library device for every
unique device type that will be controlled by the Windows device driver. For
example, to use a 4 mm drive and an 8 mm drive, define two manual
libraries, and two device classes (both with device type GENERICTAPE).
4. Determine your backup strategy.
Determine which device the server backs up client data to, and whether client
data is backed up to disk, and then migrated to tape, or if it is backed up
directly to tape.
5. Update the Tivoli Storage Manager policy.
Define the Tivoli Storage Manager policy that links client data with media for
the device.
6. Label volumes.
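The following is a hedged sketch of the commands behind the guidelines in step 3,
for a single 4 mm drive that is controlled by the Windows device driver; the object
names and the device alias are illustrative:
define library manual4mm libtype=manual
define devclass gen4mm_class devtype=generictape library=manual4mm
define drive manual4mm drive01
define path server01 drive01 srctype=server desttype=drive
 library=manual4mm device=mt0.0.0.1 generictape=yes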
If a removable media device can be formatted with a file system, Tivoli Storage
Manager may be able to use the device. The server recognizes the device as a
device with type REMOVABLEFILE. To use device type REMOVABLEFILE for a
device, the device:
v Must not be supported by a device type that is available for a Tivoli Storage
Manager device class.
Tip: If a data cartridge that is associated with a REMOVABLEFILE device class has
two sides, the server treats each side as a separate Tivoli Storage Manager volume.
The Tivoli Storage Manager REMOVABLEFILE device class supports only
single-sided media.
You can use the CD or DVD media as input media on a target Tivoli Storage
Manager server by using the REMOVABLEFILE device class for input. Using the
REMOVABLEFILE device class allows the server to distinguish media volumes by
a “volume label,” to prompt for additional media, and to dismount media.
With CD support for Windows, you can also use CD media as an output device
class. Using CD media as output requires additional software that provides a file
system on top of the CD media, so that data can be written to the CD by using a
drive letter and file names. The media can be either CD-R (read) or CD-RW
(read/write).
With DVD support for Windows, you can also use DVD media as an output device
class. Using DVD media as output requires additional software that provides a file
system on top of the DVD media. DVDFORM software is a common tool that comes
with some DVD-RAM device drivers. The DVDFORM software, for example, allows
you to label the media, which must be DVD-RAM, by using uppercase letters and
numbers. After the media is formatted, you can use the LABEL system command
to change the label.
You can use software for writing CDs to create a CD with volume label
CDR03 that contains the file named CDR03.
v Server B:
– Insert the CD in a drive on the Windows system, for example, E:
– Issue the following Tivoli Storage Manager commands to import the node
data on the CD volume CDR03:
define library manual
define devclass cdrom devtype=removablefile library=manual
define drive manual cddrive
define path server01 cddrive srctype=server desttype=drive
library=manual directory=e:\ device=e:
import node user1 filedata=all devclass=cdrom vol=CDR03
The steps (similar to CD support) are used to move data from one server to
another.
The following example shows how DVD-RAM drives work inside a SCSI library:
v Server A:
– Configure the device.
– For the library, follow the normal tape library configuration method.
– To configure the DVD-RAM drives, use the following procedure:
1. From your desktop, right-click My Computer.
2. Select Device Manager.
3. Select the correct SCSI CD-ROM device, right-click it, and select Properties.
4. Select Drivers.
This scenario adds a manual tape device, automated library devices,
and a removable file system device such as an Iomega Jaz drive.
Automated library devices can have more than one type of device. The scenario
shows the case of a library with one type of device (a DLT 8000 drive) and a
library with two types of devices (a DLT 8000 drive and an LTO Ultrium drive).
Defining libraries
All devices must be defined as libraries. Manual devices require a manual type
library, and most automated devices require the SCSI type library. Automated
libraries also require a path defined to them using the DEFINE PATH command.
You define libraries with the DEFINE LIBRARY command. See the following
examples of the different ways to define a library:
Manual device
define library manual8mm libtype=manual
Automated library device with one device type
define library autodltlib libtype=scsi
Note: If you have a SCSI library with a barcode reader and you would like
to automatically label tapes before they are checked in, you can specify the
following:
define library autodltlib libtype=scsi autolabel=yes
define path server01 autodltlib srctype=server desttype=library
device=lb3.0.0.0
Automated library device with two device types
define library automixlib libtype=scsi
define path server01 automixlib srctype=server desttype=library
device=lb3.0.0.0
Removable file system device (Iomega Jaz drive)
define library manualjaz libtype=manual
For more information about defining Tivoli Storage Manager libraries, see
“Defining devices and paths” on page 205.
For drives in SCSI libraries with more than one drive, the server requires
the element address for each drive. The element address indicates the
physical location of a drive within an automated library. The server
attempts to obtain the element address directly from the drive. If the drive
is not capable of supplying the information, you must supply the element
address in the drive definition.
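For example, a hedged sketch that supplies an element address explicitly; the
library name, drive name, and element address are illustrative:
define drive autodltlib dlt_mt1 element=500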
Automated library device with two device types
define drive automixlib dlt_mt4
define drive automixlib lto_mt5
define path server01 dlt_mt4 srctype=server desttype=drive
library=automixlib device=mt4.0.0.0
define path server01 lto_mt5 srctype=server desttype=drive
library=automixlib device=mt5.0.0.0
Removable file system device (Iomega Jaz drive)
define drive manualjaz drive01
define path server01 drive01 srctype=server desttype=drive
library=manualJAZ directory=e:\ device=e:
See the following examples of defining device classes that group together similar
devices:
Manual device
define devclass tape8mm_class devtype=8mm format=8500 library=manual8mm
Automated library device with one device type
define devclass autodlt_class devtype=dlt format=drive library=autodltlib
Automated library device with two device types
define devclass autodlt_class devtype=dlt format=dlt40 library=automixlib
define devclass autolto_class devtype=lto format=ultriumc library=automixlib
For detailed information about defining Tivoli Storage Manager device classes, see
“Defining device classes” on page 209.
Storage pools are organized around groupings of specific types of media, for
example a storage pool named TAPE8MM_POOL for the device class
TAPE8MM_CLASS, and AUTODLT_POOL for the device class AUTODLT_CLASS.
See the following examples of how to create a storage pool for the added device:
Manual device
define stgpool tape8mm_pool tape8mm_class maxscratch=20
Automated library device with one device type
define stgpool autodlt_pool autodlt_class maxscratch=20
Automated library device with two device types
define stgpool autodlt_pool autodlt_class maxscratch=20
define stgpool autolto_pool autolto_class maxscratch=20
Removable file system device (Iomega Jaz drive)
define stgpool manualjaz_pool jazdisk_class
For detailed information about defining storage pools, see Chapter 11, “Managing
storage pools and volumes,” on page 267.
For backups directly to tape, you must create a new policy by copying the default
policy and modifying it for the desired results.
See the following examples for how to determine the media and device type for
client backups:
Manual device
To assign client node astro to the direct-to-tape policy named dir2tape,
with the password cadet, enter:
register node astro cadet dir2tape
Automated library devices
To assign client node astro to a direct-to-tape policy domain named
dsk2tape, with the password cadet, enter:
register node astro cadet dsk2tape
Removable file system device (Iomega Jaz drive)
To assign client node astro to a removable media device policy domain
named rmdev, with the password cadet, enter:
register node astro cadet rmdev
For detailed and current library support information, see the Supported Devices
Web site at: http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
Attention: If other systems or other Tivoli Storage Manager servers connect to the
same 3494 library, each must use a unique set of category numbers. Otherwise, two
or more systems may try to use the same volume, and cause corruption or loss of
data.
Typically, a software application that uses a 3494 library uses volumes in one or
more categories that are reserved for that application. To avoid loss of data, each
application sharing the library must have unique categories. When you define a
3494 library to the server, you can use the PRIVATECATEGORY and
SCRATCHCATEGORY parameters to specify the category numbers for private and
scratch volumes.
When a volume is first inserted into the library, either manually or automatically at
the convenience I/O station, the volume is assigned to the insert category
(X'FF00'). A software application such as Tivoli Storage Manager can contact the
library manager to change a volume's category number. For Tivoli Storage
Manager, you use the CHECKIN LIBVOLUME command (see “Checking media
into automated library devices” on page 161).
The Tivoli Storage Manager server only supports 3590 and 3592 tape drives in an
IBM 3494 library. The server reserves two different categories for each 3494 library
object. The categories are private and scratch.
When you define a 3494 library, you can specify the category numbers for volumes
that the server owns in that library by using the PRIVATECATEGORY,
SCRATCHCATEGORY, and if the volumes are IBM 3592 WORM volumes, the
WORMSCRATCHCATEGORY parameters. For example:
define library my3494 libtype=349x privatecategory=400 scratchcategory=401
wormscratchcategory=402
For this example, the server uses the following categories in the new my3494
library:
v 400 (X'190') Private volumes
v 401 (X'191') Scratch volumes
v 402 (X'192') WORM scratch volumes
Note: The default values for the categories may be acceptable in most cases.
However, if you connect other systems or Tivoli Storage Manager servers to a
single 3494 library, ensure that each uses unique category numbers. Otherwise, two
or more systems may try to use the same volume, and cause a corruption or loss
of data.
For a discussion regarding the interaction between library clients and the library
manager in processing Tivoli Storage Manager operations, see “Shared libraries” on
page 170.
You must first set up the IBM 3494 library on the server system. This involves the
following tasks:
1. Set the symbolic name for the library in the configuration file for the library
device driver (c:\winnt\ibmatl.conf). This procedure is described in IBM Tape
Device Drivers Installation and User’s Guide.
2. Physically attach the devices to the server hardware or the SAN.
3. Install and configure the appropriate device drivers for the devices on the
server that will use the library and drives.
4. Determine the device names that are needed to define the devices to Tivoli
Storage Manager.
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see “Keeping client files together using
collocation” on page 381 and “How collocation affects reclamation” on page
400.
For more information, see “Defining storage pools” on page 273.
Note: Specify scratch and private categories explicitly. If you accept the
category defaults for both library definitions, different types of media will be
assigned to the same categories.
2. Define a path from the server to each library:
define path server1 3590elib srctype=server desttype=library device=library1
define path server1 3590hlib srctype=server desttype=library device=library1
The DEVICE parameter specifies the symbolic name for the library, as defined
in the configuration file for the library device driver (c:\winnt\ibmatl.conf).
For more information about paths, see “Defining paths” on page 208.
3. Define the drives, ensuring that they are associated with the appropriate
libraries.
Note: Tivoli Storage Manager does not prevent you from associating a drive
with the wrong library.
See “Defining drives” on page 206.
4. Define a path from the server to each drive. Ensure that you specify the correct
library.
v For the 3590E drives:
define path server1 3590e_drive1 srctype=server desttype=drive
library=3590elib device=mt1.0.0.0
define path server1 3590e_drive2 srctype=server desttype=drive
library=3590elib device=mt2.0.0.0
v For the 3590H drives:
define path server1 3590h_drive3 srctype=server desttype=drive
library=3590hlib device=mt3.0.0.0
define path server1 3590h_drive4 srctype=server desttype=drive
library=3590hlib device=mt4.0.0.0
The DEVICE parameter gives the device alias name for the drive. For more
about device names, see “Device alias names” on page 101.
For more information about paths, see “Defining paths” on page 208.
5. Classify the drives according to type by defining Tivoli Storage Manager device
classes, which specify the recording formats of the drives. Because there are
separate libraries, you can enter a specific recording format, for example 3590H,
or you can enter DRIVE.
define devclass 3590e_class library=3590elib devtype=3590 format=3590e
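A parallel definition for the 3590H drives might look like the following; FORMAT=DRIVE
is shown here as the generic choice, although a specific 3590H recording format can be
entered instead:
define devclass 3590h_class library=3590hlib devtype=3590 format=drive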
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries.
The procedures for volume check-in and labeling are the same whether the library
contains drives of a single device type, or drives of multiple device types.
Note: If your library has drives of multiple device types, you defined two libraries
to the Tivoli Storage Manager server in the procedure in “Configuring a 3494
library with multiple drive device types” on page 126. The two Tivoli Storage
Manager libraries represent the one physical library. The check-in process finds all
available volumes that are not already checked in. You must check in media
separately to each defined library. Ensure that you check in volumes to the correct
Tivoli Storage Manager library.
Do the following:
1. Check in the library inventory. The following shows two examples.
v Check in volumes that are already labeled:
checkin libvolume 3494lib search=yes status=scratch checklabel=no
v Label and check in volumes:
label libvolume 3494lib search=yes checkin=scratch
2. Depending on whether you use scratch volumes or private volumes, do one of
the following:
v If you use only scratch volumes, ensure that enough scratch volumes are
available. For example, you may need to label more volumes. As volumes are
used, you may also need to increase the number of scratch volumes allowed
in the storage pool that you defined for this library.
v If you want to use private volumes in addition to or instead of scratch
volumes in the library, define volumes to the storage pool you defined. The
volumes you define must have been already labeled and checked in. See
“Defining storage pool volumes” on page 284.
The following tasks are required for Tivoli Storage Manager servers to share library
devices over a SAN:
1. Ensure the server that will be defined as the library manager is at the same or
higher version as the server or servers that will be defined as library clients.
2. Set up server-to-server communications.
3. Set up the device on the server systems.
4. Set up the library on the library manager server. In the following example, the
library manager server is named MANAGER.
5. Set up the library on the library client server. In the following example, the
library client server is named CLIENT.
See “Categories in an IBM 3494 library” on page 123 for additional information
about configuring 3494 libraries.
Note: You can also configure a 3494 library so that it contains drives of multiple
device types or different generations of drives of the same device type. The
procedure for working with multiple drive device types is similar to the one
described for a LAN in “Configuring a 3494 library with multiple drive device
types” on page 126.
For details about mixing generations of drives, see “Defining 3592 device classes”
on page 215 and “Defining LTO device classes” on page 222.
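On the library manager server (MANAGER in this example), a hedged sketch of the
shared 3494 library definition might look like the following; the symbolic device name
library1 is an assumption based on the ibmatl.conf examples earlier in this chapter:
define library 3494san libtype=349x shared=yes
define path manager 3494san srctype=server desttype=library device=library1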
Note: Ensure that the library name agrees with the library name on the library
manager.
define library 3494san libtype=shared primarylibmanager=manager
3. Perform this step from the library manager. Define a path from the library client
server to each drive that the library client server will be allowed to access. The
device name should reflect the way the library client system sees the device.
There must be a path defined from the library manager to each drive in order
for the library client to use the drive. The following is an example of how to
define a path:
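(The drive name and device alias in this sketch are illustrative.)
define path client 3494san_drive1 srctype=server desttype=drive
 library=3494san device=mt1.0.0.0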
To help ensure a smoother migration and to ensure that all tape volumes that are
being used by the servers get associated with the correct servers, perform the
following migration procedure.
1. Do the following on each server that is sharing the 3494 library:
a. Update the storage pools using the UPDATE STGPOOL command. Set the
value for the HIGHMIG and LOWMIG parameters to 100%.
b. Stop the server by issuing the HALT command.
c. Edit the dsmserv.opt file and make the following changes:
1) Comment out the 3494SHARED YES option line
2) Activate the DISABLESCHEDS YES option line if it is not active
3) Activate the EXPINTERVAL X option line if it is not active and change
its value to 0, as follows:
EXPINTERVAL 0
d. Start the server.
e. Enter the following Tivoli Storage Manager command:
Note: You can use the saved volume history files from the library clients
as a guide.
b. Check in any remaining volumes as scratch volumes. Use the CHECKIN
LIBVOLUME command with STATUS=SCRATCH.
5. Halt all the servers.
6. Edit the dsmserv.opt file and comment out the following lines in the file:
DISABLESCHEDS YES
EXPINTERVAL 0
7. Start the servers.
Tivoli Storage Manager uses the capability of the 3494 library manager, which
allows you to partition a library between multiple Tivoli Storage Manager servers.
Library partitioning differs from library sharing on a SAN in that with
partitioning, there are no Tivoli Storage Manager library managers or library
clients.
When you partition a library on a LAN, each server has its own access to the same
library. For each server, you define a library with tape volume categories unique to
that server. Each drive that resides in the library is defined to only one server. Each
server can then access only those drives it has been assigned. As a result, library
partitioning does not allow dynamic sharing of drives or tape volumes because
they are pre-assigned to different servers using different names and category
codes.
In the following example, an IBM 3494 library containing four drives is attached to
a Tivoli Storage Manager server named ASTRO and to another Tivoli Storage
Manager server named JUDY.
Note: Tivoli Storage Manager can also share the drives in a 3494 library with other
servers by enabling the 3494SHARED server option. When this option is enabled,
you can define all of the drives in a 3494 library to multiple servers, if there are
SCSI connections from all drives to the systems on which the servers are running.
Key choices:
a. Scratch volumes are empty volumes that are labeled and available for use.
If you allow scratch volumes for the storage pool by specifying a value for
the maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see “Keeping client files together using
collocation” on page 381 and “How collocation affects reclamation” on page
400.
For more information, see “Defining storage pools” on page 273.
The ACSLS client application communicates with the ACSLS library server to
access tape cartridges in an automated library. Tivoli Storage Manager is one of the
applications that gains access to tape cartridges by interacting with ACSLS through
its client, which is known as the control path. The Tivoli Storage Manager server
reads and writes data on tape cartridges by interacting directly with tape drives
through the data path. The control path and the data path are two different paths.
The ACSLS client daemon must be initialized before starting the server using
StorageTek Library Attach. For detailed installation, configuration, and system
administration of ACSLS, refer to the appropriate StorageTek documentation.
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see “Keeping client files together using
collocation” on page 381 and “How collocation affects reclamation” on page
400.
For more information, see “Defining storage pools” on page 273.
Note: Tivoli Storage Manager does not prevent you from associating a drive
with the wrong library.
v Define the 9840 drives to 9840LIB.
define drive 9840lib 9840_drive1 acsdrvid=1,2,3,1
define drive 9840lib 9840_drive2 acsdrvid=1,2,3,2
v Define the 9940 drives to 9940LIB.
define drive 9940lib 9940_drive3 acsdrvid=1,2,3,3
define drive 9940lib 9940_drive4 acsdrvid=1,2,3,4
The ACSDRVID parameter specifies the ID of the drive that is being accessed.
The drive ID is a set of numbers that indicate the physical location of a drive
within an ACSLS library. This drive ID must be specified as a, l, p, d, where a is
the ACSID, l is the LSM (library storage module), p is the panel number, and d
is the drive ID. The server needs the drive ID to connect the physical location
of the drive to the drive's SCSI address. See the StorageTek documentation for
details.
See “Defining drives” on page 206.
3. Define a path from the server to each drive. Ensure that you specify the correct
library.
v For the 9840 drives:
define path server1 9840_drive1 srctype=server desttype=drive
library=9840lib device=mt1.0.0.0
Key choices:
a. Scratch volumes are labeled, empty volumes that are available for use. If
you allow scratch volumes for the storage pool by specifying a value for the
maximum number of scratch volumes, the server can choose from the
scratch volumes available in the library, without further action on your part.
If you do not allow scratch volumes, you must perform the extra step of
explicitly defining each volume to be used in the storage pool.
b. The default setting for primary storage pools is collocation by group. The
default for copy storage pools and active-data pools is disablement of
collocation. Collocation is a process by which the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client
file space on a minimal number of volumes. If collocation is disabled for a
storage pool and clients begin storing data, you cannot easily change the
data in the pool so that it is collocated. To understand the advantages and
disadvantages of collocation, see “Keeping client files together using
collocation” on page 381 and “How collocation affects reclamation” on page
400.
For more information, see “Defining storage pools” on page 273.
When upgrading multiple servers participating in library sharing, upgrade all the
servers at the same time, or upgrade the library manager servers first and then the
library client servers. Library manager servers are compatible with downlevel
library clients.
However, library clients are not compatible with downlevel library manager
servers.
You must define the library manager server before setting up the library client
server.
1. Verify that the server that is the library client is running, and start it if it is not:
a. From the Tivoli Storage Manager Console, expand the tree for the server
instance you are configuring.
b. Expand Reports.
Set the parameters for the device class the same on the library client as on the
library manager. Making the device class names the same on both servers is
also a good practice, but is not required.
The device class parameters specified on the library manager server override
those specified for the library client. This is true whether or not the device class
names are the same on both servers. If the device class names are different, the
library manager uses the parameters specified in a device class that matches the
device type specified for the library client.
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries.
Attention: If your library has drives of multiple device types, you defined two
libraries to the Tivoli Storage Manager server in the procedure in “Configuring an
ACSLS library with multiple drive device type” on page 138. The two Tivoli
Storage Manager libraries represent the one physical library. The check-in process
finds all available volumes that are not already checked in. You must check in
media separately to each defined library. Ensure that you check in volumes to the
correct Tivoli Storage Manager library.
1. Check in the library inventory. The following shows examples for libraries with
a single drive device type and with multiple drive device types.
v Check in volumes that are already labeled:
Defining a VTL to the Tivoli Storage Manager server can help improve
performance because the server handles mount point processing for VTLs
differently than real tape libraries. The physical limitations for real tape hardware
are not applicable to a VTL, affording options for better scalability.
You can use a VTL for any virtual tape library when the following conditions are
true:
v There is no mixed media involved in the VTL. Only one type and generation of
drive and media is emulated in the library.
v Every server and storage agent with access to the VTL has paths that are defined
for all drives in the library.
If either of these conditions are not met, any mount performance advantage from
defining a VTL library to the Tivoli Storage Manager server can be reduced or
negated.
VTLs are compatible with earlier versions of both library clients and storage
agents. The library client or storage agent is not affected by the type of library that
is used for storage. If the mixed-media and path conditions described above are met
for a SCSI library, it can be defined or updated as LIBTYPE=VTL.
The concept of storage capacity in a virtual tape library is different from capacity
in physical tape hardware. In a physical tape library, each volume has a defined
capacity, and the library's capacity is defined in terms of the total number of
volumes in the library. The capacity of a VTL, alternatively, is defined in terms of
total available disk space. You can increase or decrease the number and size of
volumes on disk.
This variability affects what it means to run out of space in a VTL. For example, a
volume in a VTL can run out of space before reaching its assigned capacity if the
total underlying disk runs out of space. In this situation, the server can receive an
end-of-volume message without any warning, resulting in backup failures.
When out-of-space errors and backup failures occur, disk space is usually still
available in the VTL. It is hidden in volumes that are not in use. For example,
volumes that are logically deleted or returned to scratch status in the Tivoli Storage
Manager server are only deleted in the server database. The VTL is not notified,
and the VTL maintains the full size of the volume as allocated in its capacity
considerations.
To help prevent out-of-space errors, ensure that any SCSI library that you update
to LIBTYPE=VTL is updated with the RELABELSCRATCH parameter set to YES. The
RELABELSCRATCH option enables the server to overwrite the label for any volume
that is deleted and to return the volume to scratch status in the library. The
RELABELSCRATCH parameter defaults to YES for any library defined as a VTL.
Most VTL environments use as many drives as possible to maximize the number
of concurrent tape operations. A single tape mount in a VTL environment is
typically faster than a physical tape mount. However, using many drives increases
the amount of time that the Tivoli Storage Manager server requires when a mount
is requested. The selection process takes longer as the number of drives that are
defined in a single library object in the server increases. Virtual tape mounts can
take as long or longer than physical tape mounts depending on the number of
drives in the VTL.
For best results, create VTLs with 300-500 drives each. If more drives are required,
you can logically partition the VTL into multiple libraries and assign drives to each
library. Operating system and SAN hardware configurations could impose
limitations on the number of devices that can be utilized within the VTL library.
VTLs are identified by using the DEFINE LIBRARY command and specifying
LIBTYPE=VTL. Because a VTL library functionally interacts with the server in the
same way that a SCSI library does, it is possible to use the UPDATE LIBRARY
command to change the library type of a SCSI library that is already defined. You
do not have to redefine the library.
The following examples show how to add a VTL library to your environment.
If you have a new VTL library and want to use the VTL enhancements that are
available in Tivoli Storage Manager Version 6.3, define the library as a VTL to the
server:
define library chester libtype=vtl
This sets up the new VTL library and enables the RELABELSCRATCH option to
relabel volumes that have been deleted and returned to scratch status.
If you have a SCSI library and you want to change it to a VTL, use the UPDATE
LIBRARY command to change the library type:
update library calzone libtype=vtl
You can only issue this command when the library being updated is defined with
LIBTYPE=SCSI.
If you define a SCSI tape library as a VTL and want to change it back to the SCSI
library type, update the library by issuing the UPDATE LIBRARY command:
update library chester libtype=scsi
If you are setting up or modifying your hardware environment and must create or
change large numbers of drive definitions, the PERFORM LIBACTION command can
make this task much simpler. You can define a new library and then define all
drives and paths to the drives. Or, if you have an existing library that you want to
delete, you can delete all existing drives and their paths in one step.
The PREVIEW parameter allows you to view the output of commands before they
are processed to verify the action that you want to perform. If you are defining a
library, a path to the library must already be defined if you want to specify the
PREVIEW parameter. You cannot use the PREVIEW and DEVICE parameters
together.
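For example, a hedged sketch that defines a new SCSI library and then defines all
of its drives and their paths in one step; the library name, device alias, and
drive-name prefix are illustrative:
define library odin libtype=scsi
perform libaction odin action=define device=lb0.0.0.2 prefix=dr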
The following tasks are required to share tape library devices over a SAN. First,
define the library manager server. Then use the following procedure as an
example of how to set up a Tivoli Storage Manager server named JUDY as a
library client.
1. Verify that the server that is the library client is running. Start the server if it is
not running:
a. From the Tivoli Storage Manager Console, expand the tree for the server
instance you are configuring.
b. Expand Reports.
Set the parameters for the device class the same on the library client as on the
library manager. Making the device class names the same on both servers is
also a good practice, but is not required.
The device class parameters specified on the library manager server override
those specified for the library client. This is true whether or not the device class
names are the same on both servers. If the device class names are different, the
library manager uses the parameters specified in a device class that matches the
device type specified for the library client.
If a library client requires a setting that is different from what is specified in the
library manager's device class (for example, a different mount limit), perform
the following steps:
As part of the configuration, a storage agent is installed on the client system. Tivoli
Storage Manager supports both tape libraries and FILE libraries. This feature
supports SCSI, 349X, and ACSLS tape libraries.
The configuration procedure you follow will depend on the type of environment
you implement; however in all cases you must perform the following steps:
1. Install and configure the client.
2. Install and configure the storage agent.
3. Configure the libraries for LAN-free data movement.
4. Define the libraries and associated paths.
5. Define associated devices and their paths.
6. Configure Tivoli Storage Manager policy for LAN-free data movement for the
client. If you are using shared FILE storage, install and configure IBM
TotalStorage SAN File System, Tivoli SANergy, or IBM General Parallel File
System.
For more information on configuring Tivoli Storage Manager for LAN-free data
movement see the Storage Agent User's Guide.
To help you tune the use of your LAN and SAN resources, you can control the
path that data transfers take for clients with the capability of LAN-free data
movement. For each client you can select whether data read and write operations
use:
v The LAN path only
v The LAN-free path only
v Any path
See the REGISTER NODE and UPDATE NODE commands in the Administrator's
Reference.
To determine if there is a problem with the client node FRED using the storage
agent FRED_STA, issue the following:
validate lanfree fred fred_sta
The output will allow you to see which management class destinations for a given
operation type are not LAN-free capable. It will also report the total number of
LAN-free destinations.
See the VALIDATE LANFREE command in the Administrator's Reference for more
information.
To configure Tivoli Storage Manager for NDMP operations, perform the following
steps:
1. Define the libraries and their associated paths.
Important: An NDMP device class can only use a Tivoli Storage Manager
library in which all of the drives can read and write all of the media in the
library.
2. Define a device class for NDMP operations.
3. Define the storage pool for backups performed by using NDMP operations.
4. Optional: Select or define a storage pool for storing tables of contents for the
backups.
5. Configure Tivoli Storage Manager policy for NDMP operations.
6. Register the NAS nodes with the server.
7. Define a data mover for the NAS file server.
8. Define the drives and their associated paths.
The information provided by this utility is from the Windows registry. Some of the
information is put into the registry by the Tivoli Storage Manager device driver. To
receive accurate information, ensure that the device driver is running. If the device
driver is not running, the information may be incorrect if device attachments have
changed since the last time the device driver was running.
The server might know a device as id=1, based on the original path specification to
the server and the original configuration of the SAN. However, some event in the SAN
(a new device added, a cabling change) causes the device to be assigned id=2. When
the server tries to access the device with id=1, it will either get a failure or the
wrong target device. The server assists in recovering from changes to devices on
the SAN by using serial numbers to confirm the identity of devices it contacts.
When you define a device (drive or library) you have the option of specifying the
serial number for that device. If you do not specify the serial number when you
define the device, the server obtains the serial number when you define the path
for the device. In either case, the server then has the serial number in its database.
From then on, the server uses the serial number to confirm the identity of a device
for operations.
If the serial numbers do not match, the server begins the process of discovery on
the SAN to attempt to find the device with the matching serial number. If the
server finds the device with the matching serial number, it corrects the definition
of the path in the server's database by updating the device name in that path. The
server issues a message with information about the change made to the device.
Then the server proceeds to use the device.
You can monitor the activity log for messages if you want to know when device
changes on the SAN have affected Tivoli Storage Manager. The following are the
number ranges for messages related to serial numbers:
v ANR8952 through ANR8958
v ANR8961 through ANR8967
Restriction: Some devices do not have the capability of reporting their serial
numbers to applications such as the Tivoli Storage Manager server. If the server
cannot obtain the serial number from a device, it cannot assist you with changes to
that device's location on the SAN.
Actual results will depend upon your system environment. The utility does not
affect the generation of backup sets.
The utility increases the maximum transfer length for some Host Bus Adapters
(HBAs) and, consequently, the block size used by the Tivoli Storage Manager
server for writing data to and getting data from the following types of tape drives:
v 3570
v 3590
v 3592
v DLT
v DTF
v ECARTRIDGE
v LTO
The maximum supported block size with this utility is 256 KB. When you run
DSMMAXSG, it modifies one registry key for every HBA driver on your system.
The name of the key is MaximumSGList.
The examples in topics show how to perform tasks using the Tivoli Storage
Manager command-line interface. For information about the commands, see the
Administrator's Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
Defining volumes
For each storage pool, decide whether to use scratch volumes or private volumes.
Private volumes require more human intervention than scratch volumes.
When you add devices with the Device Configuration Wizard, the wizard
automatically creates a storage pool for each device it configures and allows a
maximum of 500 scratch volumes for the storage pool. When you use commands
to add devices, you specify the maximum number of scratch volumes with the
MAXSCRATCH parameter of the DEFINE STGPOOL or UPDATE STGPOOL
command. If the MAXSCRATCH parameter is 0, all the volumes in the storage
pool are private volumes that you must define.
For example, to create a storage pool named STORE1 that uses the device class
TAPE8MM_CLASS and can use up to 500 scratch volumes, issue the following command:
define stgpool store1 tape8mm_class maxscratch=500
Use private volumes to regulate the volumes used by individual storage pools, and
to manually control the volumes. Define each private volume with the DEFINE
VOLUME command. For database backups, dumps, or loads, or for server import
or export operations, you must list the private volumes.
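For example, to define a private volume named VOL001 in the STORE1 storage
pool (the volume name is illustrative), a command of the following form might be
used:
define volume store1 vol001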
Managing volumes
When Tivoli Storage Manager needs a new volume, it chooses a volume from the
storage pool available for client backups. If you set up private volumes, it selects a
specific volume. If you set up scratch volumes, it selects any scratch volume in the
library.
IBM 3494 Tape Library Dataservers use category numbers to identify volumes that
are used for the same purpose or application. For details, see “Category numbers
for IBM 3494 libraries” on page 171. For special considerations regarding
write-once, read-many (WORM) volumes, see “Write-once, read-many tape media”
on page 164.
Remember: Each volume used by a server for any purpose must have a unique
name. This requirement applies to all volumes, whether the volumes are used for
storage pools, or used for operations such as database backup or export. The
requirement also applies to volumes that reside in different libraries but that are
used by the same server.
Partially-written volumes
Partially-written volumes are always private volumes, even if their status was
scratch before Tivoli Storage Manager selected them to be mounted. Tivoli Storage
Manager tracks the original status of scratch volumes, so it can return them to
scratch status when they become empty.
For information about changing the status of a volume in an automated library, see
“Changing the status of automated library volumes” on page 167.
The volume inventory is created when you check media volumes into the library.
Tivoli Storage Manager tracks the status of volumes in the inventory as either
scratch or private.
A list of volumes in the library volume inventory will not necessarily be identical
to a list of volumes in the storage pool inventory for the device. For example,
scratch volumes may be checked in to the library but not defined to a storage pool
because they have not yet been selected for backup; private volumes may be
defined to a storage pool, but not checked into the device's volume inventory.
For details about the volume history file, see Chapter 34, “Protecting and
recovering the server infrastructure and client data,” on page 941.
Labeling media
All media require labels. Labeling media with an automated library requires you to
check media into the library. Checkin processing can be done at the same time that
the volume is labeled.
To label volumes with the LABEL LIBVOLUME command, specify the CHECKIN
parameter.
A label cannot include embedded blanks or periods and must be valid when used
as a file name on the media.
Insert the media into storage slots or entry/exit ports and invoke the Labeling
Wizard.
Tip: The Labeling Wizard does not support labeling of optical media. To label
optical media, you must issue the LABEL LIBVOLUME command.
By default, the label command does not overwrite an existing label on a volume.
However, if you want to overwrite existing volume labels, you can specify
OVERWRITE=YES when you issue the LABEL LIBVOLUME command. See
“Labeling volumes using commands” on page 179.
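For example, to label a volume named VOL001 in a library named TAPELIB,
check it in as a scratch volume, and overwrite any existing label (the names are
illustrative), a command of the following form might be used:
label libvolume tapelib vol001 checkin=scratch overwrite=yes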
If you are labeling media with the labeling wizard, check the bar code check box in
the wizard. If you are labeling media with commands, issue the LABEL
LIBVOLUME command, specifying SEARCH=YES and
LABELSOURCE=BARCODE. Tivoli Storage Manager reads the bar code and the
media are moved from the entry/exit port to a drive where the information on the
bar code label is written as the internal label on the media. After the tape is
labeled, it is moved back to the entry/exit port or to a storage slot if the
CHECKIN option is specified.
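For example, to label and check in all volumes found in a library named TAPELIB
by using their bar code labels, a command of the following form might be used:
label libvolume tapelib search=yes labelsource=barcode checkin=scratch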
Because bar code scanning can take a long time for unlabeled volumes, do not mix
volumes with bar code labels and volumes without bar code labels in a library.
Bar code support is available for libraries controlled by Tivoli Storage Manager
using the Tivoli Storage Manager device driver or the RMSS LTO Ultrium device
driver. Bar code support is unavailable for devices using the native Windows
device driver or devices whose media are managed by Removable Storage
Manager (RSM). See “Using removable media managers” on page 179.
The CHECKIN LIBVOLUME command involves device access, and may take a
long time to complete. For this reason, the command always executes as a
background process. Wait for the CHECKIN LIBVOLUME process to complete
before defining volumes, or the volume definitions will fail. You can save time by
checking in volumes as part of the labeling operation. For details, see “Labeling
media” on page 159.
You can specify that Tivoli Storage Manager read media labels for the volumes you
are checking in. When label-checking is enabled, Tivoli Storage Manager mounts
each volume and reads the internal label before checking in the volume. Tivoli
Storage Manager checks in only volumes that are properly labeled. Checking labels
can prevent errors later, when Tivoli Storage Manager selects and mounts volumes,
but it also increases check in processing time.
Tivoli Storage Manager issues a mount request identifying a storage slot with an
element address. The media can be loaded directly into a single storage slot or into
one of the device's entry/exit ports, if it is equipped with them. For example,
check a scratch volume named VOL001 into a library named TAPELIB by entering
the following command:
checkin libvolume tapelib vol001 search=no status=scratch
Tivoli Storage Manager finds that the first empty slot is at element address 5, and
issues the following message:
ANR8306I 001: Insert 8MM volume VOL001 R/W in slot with element
address 5 of library TAPELIB within 60 minutes; issue 'REPLY' along
with the request ID when ready.
If the library is equipped with entry/exit ports, the administrator can load the
volume into a port without knowing the element addresses of the device's storage
slots. After inserting the volume into an entry/exit port or storage slot, the
administrator responds to the preceding message at a Tivoli Storage Manager
command line by issuing the REPLY command with the request number (the
number at the beginning of the mount request):
reply 1
Tip: A REPLY command is not required if you specify a wait time of zero using
the optional WAITTIME parameter on the CHECKIN LIBVOLUME command. The
default wait time is 60 minutes.
Tivoli Storage Manager reads the bar code labels and uses the information on the
labels to write the internal media labels. For volumes missing bar code labels,
Tivoli Storage Manager mounts the volumes in a drive and attempts to read the
internal, recorded label.
For example, to use a bar code reader to search a library named TAPELIB and
check in a scratch tape, enter:
checkin libvolume tapelib search=yes status=scratch
checklabel=barcode
To have Tivoli Storage Manager load a cartridge in a drive and read the label, you
must specify the CHECKLABEL=YES option. The CHECKLABEL=NO option is
invalid with the SEARCH=BULK option. After reading the label, Tivoli Storage
Manager moves the tape from the drive to a storage slot. When bar code reading is
enabled with the CHECKLABEL=BARCODE parameter, Tivoli Storage Manager
reads the label and moves the tape from the entry/exit port to a storage slot.
Partially-written volumes are always private volumes. Volumes begin with a status
of either scratch or private, but once Tivoli Storage Manager stores data on them,
their status becomes private. See “Returning partially-written volumes to
automated libraries” on page 168.
If the library is full when you check in a volume, you can specify SWAP=YES on
the CHECKIN LIBVOLUME command so that the server swaps out an existing
volume to make room. Tivoli Storage Manager selects the volume to eject by
checking first for any available scratch volumes, then for the least frequently
mounted volumes. Without tape swapping, the checkin fails. See “Setting up
volume overflow locations for automated libraries” on page 169.
When a volume is first inserted into an IBM 3494 library, either manually or
automatically at the convenience I/O station, the volume is assigned to the insert
category (X'FF00'). You can then change the category number when issuing the
CHECKIN LIBVOLUME command.
If you load tapes into storage slots, you must reply to mount requests that identify
storage slots with element addresses, unless you specify a wait time of zero when
issuing the CHECKIN LIBVOLUME or LABEL LIBVOLUME commands. (If the
wait time is zero, no reply is required.) An element address is a number that
indicates the physical location of a storage slot or drive within an automated
library.
Tips:
v External and manual libraries use separate logical libraries to segregate their
media. Ensuring that the correct media are loaded is the responsibility of the
operator and the library manager software.
v A storage pool can consist of either WORM or RW media, but not both.
v Do not use WORM tapes for database backup or export operations. Doing so
wastes tape following a restore or import operation.
For information about defining device classes for WORM tape media, see
“Defining device classes for StorageTek VolSafe devices” on page 226 and
“Defining tape and optical device classes” on page 212.
For information about selecting device drivers for IBM and devices from other
vendors, see:
“Selecting a device driver” on page 104.
WORM-capable drives
To use WORM media in a library, all the drives in the library must be
WORM-capable. A mount fails if a WORM cartridge is mounted in a read/write
(RW) drive.
Library changers cannot identify the difference between standard read-write (RW)
tape media and the following types of WORM tape media:
v VolSafe
v Sony AIT
v LTO
v SDLT
v DLT
If they provide support for WORM media, IBM 3592 library changers can detect
whether a volume is WORM media without loading the volume into a drive.
Specifying CHECKLABEL=YES is not required. Verify with your hardware vendors
that your 3592 drives and libraries provide the required support.
Issue the LABEL LIBVOLUME command only once for VolSafe volumes. You can
guard against overwriting the label by using the OVERWRITE=NO option on the
LABEL LIBVOLUME command.
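For example, assuming a VolSafe volume named VOL002 in a library named
VOLSAFELIB (both names are illustrative), the volume might be labeled once and
checked in with:
label libvolume volsafelib vol002 overwrite=no checkin=scratch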
If you have SDLT-600, DLT-V4, or DLT-S4 drives and you want to enable them for
WORM media, upgrade the drives using V30 or later firmware available from
Quantum. You can also use DLTIce software to convert unformatted read-write
(RW) volumes or blank volumes to WORM volumes.
In manual libraries, you can use the server to format empty volumes to WORM.
Tivoli Storage Manager tracks the media in the library volume inventory, which it
maintains for each automated library. The library volume inventory is separate
from the storage pool inventory for the device. To add volumes to the volume
inventory for a device, check volumes into the device. For details on the checkin
procedure, see “Checking media into automated library devices” on page 161. To
add volumes to a storage pool for a library, see “Adding scratch volumes to
automated library devices” on page 169.
You can extend the media management function of Tivoli Storage Manager by
using Windows Removable Storage Manager (RSM) to manage media. The
capabilities of these programs go beyond the media management function offered
by Tivoli Storage Manager and they allow different applications to share the same
device. See “Using removable media managers” on page 179.
Tivoli Storage Manager mounts each volume and verifies its internal label before
checking it out of the volume inventory. After a volume has been checked out,
Tivoli Storage Manager moves the media to the entry/exit port of the device if it
has one, or Tivoli Storage Manager requests that the operator remove the volume
from a drive within the device.
For automated libraries with multiple entry/exit ports, you can issue the
CHECKOUT LIBVOLUME command with the SEARCH=BULK parameter. Tivoli
Storage Manager ejects the volume to the next available entry/exit port.
Partially-written volumes that are removed from the device will need to be
checked in again if Tivoli Storage Manager attempts to access them. See
“Partially-written volumes” on page 158.
These messages indicate a hardware error, and not a Tivoli Storage Manager
application error.
To audit the volume inventories of automated libraries, issue the AUDIT LIBRARY
command. Tivoli Storage Manager deletes missing volumes and updates the
locations of volumes that have moved since the last audit. Tivoli Storage Manager
cannot add new volumes during an audit.
Unless devices are equipped with bar code readers, the server mounts each volume
during the audit process to verify the label. After the label has been verified, the
volume remains in a wait state until the mount retention interval times out. You
can save time by issuing the DISMOUNT VOLUME command to force idle
volumes to be dismounted.
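For example, to dismount an idle volume named VOL001 (an illustrative name),
you might issue:
dismount volume vol001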
If a volume has a bar code label with six characters or less, Tivoli Storage Manager
reads the volume name from the bar code label during the audit. The volume is
not mounted to verify that the external bar code name matches the internal,
recorded volume name.
If a volume has no bar code label or the bar code label does not meet Tivoli
Storage Manager label requirements, Tivoli Storage Manager mounts the volume in
a drive and attempts to read the internal label. See “Labeling media” on page 159.
For example, to audit the TAPELIB library using its bar code reader, issue the
following command:
audit library tapelib checklabel=barcode
The initial maximum number of scratch volumes for a library is determined when
the library storage pool is created. See “Defining volumes” on page 157.
Tivoli Storage Manager tracks the volumes moved to the overflow area thus
allowing you to make storage slots available for new volumes. To set up and
manage an overflow location:
1. Create a volume overflow location. Define or update the storage pool
associated with the automated library by issuing the DEFINE STGPOOL or
UPDATE STGPOOL command with the OVFLOCATION parameter. For example,
to create an overflow location named ROOM2948 for a storage pool named
ARCHIVEPOOL, issue the following:
update stgpool archivepool ovflocation=Room2948
2. Move media to the overflow location as required. Issue the MOVE MEDIA
command to move media from the library to the overflow location. For
example, to move all full volumes in the specified storage pool out of the
library:
move media * stgpool=archivepool
All full volumes are checked out of the library, and Tivoli Storage Manager
records the location of the volumes as Room2948.
Use the DAYS parameter to specify the number of days that must elapse before
the volume is eligible for processing by the MOVE MEDIA command.
3. Check in new scratch volumes (if required). See “Checking media into
automated library devices” on page 161. If a volume has an entry in volume
history, you cannot check it in as a scratch volume.
4. Identify the empty scratch tapes in the overflow location. For example, enter
this command:
query media * stg=* whereovflocation=Room2948 wherestatus=empty
To generate a file of commands that check the empty volumes back into the
library, you can issue the MOVE MEDIA command with the CMD and
CMDFILENAME parameters. For example:
move media * stg=* wherestate=mountablenotinlib wherestatus=empty
cmd="checkin libvol autolib &vol status=scratch"
cmdfilename=\storage\move\media\checkin.vols
5. Check in volumes from the overflow area when Tivoli Storage Manager
requests them. Operators must check volumes in from the overflow area when
Tivoli Storage Manager needs them. Tivoli Storage Manager issues mount
requests that include the location of the volumes.
To change the access mode of a volume, issue the UPDATE VOLUME command,
specifying ACCESS=UNAVAILABLE.
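For example, to make a volume named VOL001 unavailable (the volume name is
illustrative), you might issue:
update volume vol001 access=unavailable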
If you want to make volumes unavailable in order to send the data they contain
offsite for safekeeping, consider using copy storage pools or active-data pools
instead. You can back up primary storage pools to a copy storage pool and then
send the copy storage pool volumes offsite. You can also copy active versions of
client backup data to active-data pools, and then send the volumes offsite. You can
track copy storage pool volumes and active-data pool volumes by changing their
access mode to offsite, and updating the volume history to identify their location.
For more information, see “Backing up primary storage pools” on page 954.
Shared libraries
Shared libraries are logical libraries that are represented physically by SCSI, 349X,
or ACSLS libraries. The Tivoli Storage Manager server is configured as a library
manager and controls the physical library. Tivoli Storage Manager servers using
the SHARED library type are library clients to the library manager server.
The library client contacts the library manager when the library manager starts
and the storage device initializes, or after a library manager is defined to a library
client. The library client confirms that the contacted server is the library manager
for the named library device. The library client also compares drive definitions
with the library manager for consistency. The library client contacts the library
manager for each of the following operations:
Volume Mount
A library client sends a request to the library manager for access to a
particular volume in the shared library device. For a scratch volume, the
library client does not specify a volume name. If the library manager
cannot access the requested volume, or if scratch volumes are not available,
the library manager denies the mount request. If the mount is successful,
the library manager returns the name of the drive where the volume is
mounted.
Table 18 shows the interaction between library clients and the library manager in
processing Tivoli Storage Manager operations.
Table 18. How SAN-enabled servers process Tivoli Storage Manager operations

Query library volumes (QUERY LIBVOLUME)
    Library manager: Displays the volumes that are checked into the library. For
    private volumes, the owner server is also displayed.
    Library client: Not applicable.

Check in and check out library volumes (CHECKIN LIBVOLUME, CHECKOUT LIBVOLUME)
    Library manager: Performs the commands to the library device.
    Library client: Not applicable. When a checkin operation must be performed
    because of a client restore, a request is sent to the library manager server.

Audit library inventory (AUDIT LIBRARY)
    Library manager: Performs the inventory synchronization with the library device.
    Library client: Performs the inventory synchronization with the library manager
    server.

Label a library volume (LABEL LIBVOLUME)
    Library manager: Performs the labeling and checkin of media.
    Library client: Not applicable.

Dismount a volume (DISMOUNT VOLUME)
    Library manager: Sends the request to the library device.
    Library client: Requests that the library manager server perform the operation.

Query a volume (QUERY VOLUME)
    Library manager: Checks whether the volume is owned by the requesting library
    client server and checks whether the volume is in the library device.
    Library client: Requests that the library manager server perform the operation.
A 3494 library has an intelligent control unit that tracks the category number of
each volume in the volume inventory. The category numbers are useful when
multiple systems share the resources of a single library. Typically, a software
application that uses a 3494 uses only volumes in categories that are reserved for
that application.
You can set up expiration processing and reclamation processing and tune the
media rotation to achieve the desired results.
v Setting up expiration processing
Expiration processing is the same, regardless of the type of device and media on
which backups are stored. See “Running expiration processing to delete expired
files” on page 535.
v Setting up reclamation processing
For a storage pool associated with a library that has more than one drive, the
reclaimed data is moved to other volumes in the same storage pool. See
“Reclaiming space in sequential-access storage pools” on page 390.
v Returning reclaimed media to the storage pool
Most media can be returned to a storage pool after they have been reclaimed, but
media containing database backups and database export data require you to
perform an additional step. For these volumes, you must issue the DELETE
VOLHISTORY command or the UPDATE LIBVOLUME command to change the
status of the volume.
When Tivoli Storage Manager backs up the database or exports server
information, Tivoli Storage Manager records information about the volumes used
for these operations in the volume history file. Volumes that are tracked in the
volume history file require the administrator to delete the volume information
from the volume history file. The volume history file is a key component of
server recovery and is discussed in detail in Chapter 34, “Protecting and
recovering the server infrastructure and client data,” on page 941.
Tip: If your server uses the disaster recovery manager function, the volume
information is automatically deleted during MOVE DRMEDIA command
processing. For additional information about DRM, see Chapter 36, “Disaster
recovery manager,” on page 1053.
v Ensuring media are available
See “Tape rotation” on page 177.
Note: You must label CD-ROM, Zip, or Jaz volumes with the device
manufacturer's or Windows utilities because Tivoli Storage Manager does not
provide utilities to format or label these media. The operating system utilities
include the Disk Administrator program (a graphical user interface) and the label
command. See “Labeling media” on page 159.
You can manage media with Windows Removable Storage Manager (RSM).
However, unless device sharing across storage management applications is
required, using a media manager for stand-alone devices could introduce
unjustifiable administrative overhead.
For manual libraries, Tivoli Storage Manager detects when there is a cartridge
loaded in a drive, so no operator reply is necessary. For automated libraries, the
CHECKIN LIBVOLUME and LABEL LIBVOLUME commands involve inserting
cartridges into slots and, depending on the value of the WAITTIME parameter,
issuing a reply message. (If the value of the parameter is zero, no reply is
required.) The CHECKOUT LIBVOLUME command involves inserting cartridges
into slots and, in all cases, issuing a reply message.
To start a server console monitor from an operating system command line, enter
this command:
> dsmadmc -consolemode
If a wait time greater than zero was specified, the server waits the specified
number of minutes before resuming processing.
The first parameter for the REPLY command is the three-digit request ID number
that indicates which of the pending mount requests has been completed. For
example, an operator can issue the following command to respond to request 001
in the previous code sample.
reply 001
The CANCEL REQUEST command must include the request identification number.
This number is included in the request message, or it can be obtained by issuing a
QUERY REQUEST command, as described in “Displaying information about
mount requests that are pending.”
To ensure that the server does not try to mount the requested volume again,
specify the PERMANENT parameter to mark the volume as unavailable.
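For example, to cancel pending request 001 and mark the requested volume as
unavailable, a command of the following form might be used:
cancel request 001 permanent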
For most of the requests associated with automated libraries, the Tivoli Storage
Manager CANCEL REQUEST command is not accepted by the server. An operator
must perform a hardware or system action to cancel the requested mount.
Using mount retention can reduce the access time if volumes are used repeatedly.
For information about setting mount retention times, see “Controlling the amount
of time that a volume remains mounted” on page 214.
Tivoli Storage Manager checks the drive every seven seconds to see if the medium
has been ejected. A volume dismount is not considered complete until Tivoli
Storage Manager detects that the medium has been ejected from the drive or that a
different medium has been inserted into the drive.
A log page is created and can be retrieved at any given time or at a specific time
such as when a drive is dismounted.
Tape alert messages are turned off by default. To set tape alert messages to ON,
issue the SET TAPEALERTMSG command. To query tape alert messages, issue the
QUERY TAPEALERTMSG command.
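For example, to turn on tape alert messages and then confirm the setting, you
might issue:
set tapealertmsg on
query tapealertmsg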
Tape rotation
Over time, media ages, and certain backup data might no longer be needed. You
can reclaim useful data on media and then reclaim and reuse the media
themselves.
Tivoli Storage Manager policy determines how many backup versions are retained
and how long they are retained. See “Basic policy planning” on page 498.
Deleting data - expiration processing
Expiration processing deletes data that is no longer valid either because it
exceeds the retention specifications in policy or because users or
administrators have deleted the active versions of the data. See “File
expiration and expiration processing” on page 501 and “Running
expiration processing to delete expired files” on page 535.
Reusing media - reclamation processing
Data on tapes may expire, move, or be deleted. Reclamation processing
consolidates any unexpired data by moving it from multiple volumes onto
fewer volumes. The media can then be returned to the storage pool and
reused.
You can set a reclamation threshold that allows Tivoli Storage Manager to
reclaim volumes whose valid data drops below a threshold. The threshold
is a percentage of unused space on the volume and is set for each storage
pool. The amount of data on the volume and the reclamation threshold for
the storage pool affects when the volume is reclaimed. See “Reclaiming
space in sequential-access storage pools” on page 390.
To automatically label tape volumes in SCSI-type libraries, you can use the
AUTOLABEL parameter on the DEFINE LIBRARY and UPDATE LIBRARY
commands. Using this parameter eliminates the need to pre-label a set of tapes. It
is also more efficient than using the LABEL LIBVOLUME command, which
requires you to mount volumes separately. If you use the AUTOLABEL parameter,
you must check in tapes by specifying CHECKLABEL=BARCODE on the
CHECKIN LIBVOLUME command.
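For example, assuming an existing SCSI library named AUTOLIB (an illustrative
name), automatic labeling might be enabled and new volumes checked in as
follows:
update library autolib autolabel=yes
checkin libvolume autolib search=yes status=scratch checklabel=barcode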
The principal value of using these media managers with Tivoli Storage Manager is
the improved capability to share multiple devices with other applications. RSM
requires some additional administrative overhead, which may be justified by the
savings from sharing expensive hardware like automated libraries.
Tivoli Storage Manager also provides a programming interface that allows you to
use a variety of external programs to control Tivoli Storage Manager media. See
Appendix B, “External media management interface description,” on page 1145 for
more information.
Note: For specific information about installing and configuring RSM, see
the Windows online help.
External Media Management Interface
The External Media Management Interface uses the EXTERNAL library
type. The EXTERNAL library type does not map to a device or media type,
but instead specifies the installed path of the external media manager. See
“Using external media managers to control media” on page 183.
This procedure creates the following Tivoli Storage Manager storage objects:
v An RSM library
v An associated device class with a device type of GENERICTAPE
v An associated storage pool
Media pools:
When you create and configure an RSM library, typically with the Tivoli Storage
Manager Device Configuration Wizard, Tivoli Storage Manager directs RSM to
create:
v A top-level media pool called IBM Tivoli Storage Manager
v A second-level Tivoli Storage Manager server instance pool
Under the IBM Tivoli Storage Manager media pool, Tivoli Storage Manager creates
two storage pools that are media-type specific. The first pool is associated with the
automated library and the second pool with an import media pool.
You can use the library door to insert and remove media. After media are injected
and the library door is closed, RSM automatically inventories the device. If the
new media matches the media type for a defined RSM library, RSM labels the
media and adds it to one of the following media pools in that library:
Free Pool for RSM
This pool is used to track previously unlabeled media. Free pool media are
assumed to be empty or to contain invalid data. Media in free pools are
available for use by any application. You must provide an adequate supply
of media in the free or scratch pool to satisfy mount requests. When Tivoli
Storage Manager needs media, RSM obtains it from the scratch pool. RSM
manages the media from that point.
Import Pool
This pool is used to track previously labeled media that is recognized by a
particular application in the RSM-controlled storage management system.
Media in import pools can be allocated by any application, including the
application that originally labeled it. To safeguard data, it is recommended
that you move these volumes to the application-specific import pool.
Unrecognized Pool
This pool is used to track previously labeled media that are not recognized
by any application in the RSM-controlled storage management system.
Unrecognized pool volumes cannot be allocated by any application, and
require administrator intervention to correct labeling or program errors.
Normally, volumes in the Unrecognized Pool would be moved to the Free
Pool for later application use.
Note: You can use the Properties dialog to view the attributes of any volume in
the Free, Import, or Unrecognized pools.
The following example defines an RSM library for an 8-mm autochanger device
containing two drives:
1. Define a library for the RSM-managed device. For example:
define library astro libtype=rsm mediatype="8mm AME"
Tip:
v Specify the library type as libtype=rsm for RSM.
v Use the RSM documentation to determine the value to use for the media
type.
v Enclose the media type within quotation marks if it contains embedded
blanks.
2. Define a device class for the RSM library with a device type of GENERICTAPE.
For example:
define devclass 8MMCLASS1 devtype=generictape library=astro
format=drive mountretention=5 mountwait=10 mountlimit=2
Tip: For storage environments in which devices are shared across applications,
MOUNTRETENTION and MOUNTWAIT settings must be carefully considered.
These parameters determine how long an idle volume remains in a drive and
the timeout value for mount requests. Because RSM will not dismount an
allocated drive to satisfy pending requests, you must tune these parameters to
satisfy competing mount requests while maintaining optimal system
performance.
3. Define a storage pool for the device class. For example:
define stgpool 8MMPOOL1 8MMCLASS1 maxscratch=500
For details about the interface, see Appendix B, “External media management
interface description,” on page 1145.
To remove RSM-managed devices from media manager control, modify the device
configuration to allow the ADSMSCSI device driver to claim the devices before
RSM. For more information, see “Selecting a device driver” on page 104. For
information about removing devices from other external media managers, refer to
the specific management product's documentation set.
The most likely symptom of this problem is that the volumes in the media
manager's database are not known to Tivoli Storage Manager, and thus not
available for use. Verify the Tivoli Storage Manager volume list and any disaster
recovery media. If volumes not identified to Tivoli Storage Manager are found, use
the media manager interface to deallocate and delete the volumes.
To obtain information about libraries, use the QUERY LIBRARY command. The
default is a standard report. For example, to display information about all libraries
in a standard report, issue the following command:
query library
If your system or device is reconfigured, and the device name changes, you may
need to update the device name. The examples below show how you can issue the
UPDATE LIBRARY and UPDATE PATH commands for the following library types:
v SCSI
v 349X
v ACSLS
v External
Examples:
v SCSI Library
Update the path from SERVER1 to a SCSI library named SCSILIB:
update path server1 scsilib srctype=server desttype=library device=lb4.0.0.0
Update the definition of a SCSI library named SCSILIB defined to a library client
so that a new library manager is specified:
update library scsilib primarylibmanager=server2
v 349X Library
Update the path from SERVER1 to an IBM 3494 library named 3494LIB with
new device names.
Deleting libraries
Before you delete a library with the DELETE LIBRARY command, you must delete
all of the drives and drive paths that have been defined as part of the library and
delete the path to the library.
For information about deleting drives, see “Deleting drives” on page 202.
For example, suppose that you want to delete a library named 8MMLIB1. After
deleting all of the drives defined as part of this library and the path to the library,
issue the following command to delete the library itself:
delete library 8mmlib1
Managing drives
You can query, update, and delete drives.
The QUERY DRIVE command accepts wildcard characters for both a library name
and a drive name. See the Administrator's Reference for information about using
wildcard characters.
For example, to query all drives associated with your server, issue the following
command:
query drive
Updating drives
You can change the attributes of a drive by issuing the UPDATE DRIVE command.
You can change the following attributes:
v The element address, if the drive is in a SCSI or virtual tape library (VTL)
v The ID of a drive in an automated cartridge system library software (ACSLS)
library
v The cleaning frequency
v Whether the drive is online or offline
For example, to change the element address of a drive named DRIVE3 to 119, issue
the following command:
update drive auto drive3 element=119
Note: You cannot change the element number if a drive is in use. If a drive has a
volume mounted, but the volume is idle, it can be explicitly dismounted as
described in “Dismounting idle volumes” on page 176.
If you are reconfiguring your system, you can change the device name of a drive
by issuing the UPDATE PATH command. For example, to change the device name of a
drive named DRIVE3, issue the following command:
update path server1 drive3 srctype=server desttype=drive library=scsilib
device=mt3.0.0.0
You can change a drive to offline status while the drive is in use. Tivoli Storage
Manager finishes with the current tape in the drive, and then does not use the
drive anymore. By changing a drive to offline, you can drain work off a drive.
However, if the tape that was in use was part of a series of tapes for a single
transaction, the drive is not available to complete the series. If no other drives are
available, the transaction might fail. If all drives in a library are made offline, any
attempt by Tivoli Storage Manager to write to the storage pool associated with the
library fails.
The ONLINE parameter specifies the value of the drive's online state, even if the
drive is in use. ONLINE=YES indicates that the drive is available for use (online).
ONLINE=NO indicates that the drive is not available for use (offline). This
parameter is optional. Do not specify other optional parameters along with
ONLINE=YES or ONLINE=NO. If you do, the drive is not updated, and the
command fails when the drive is in use. This command can be issued when the
drive is involved in an active process or session, but this action is not
recommended.
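For example, to take the drive DRIVE3 in the library AUTO offline, you might
issue:
update drive auto drive3 online=no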
Drives must be able to recognize the correct format. With Tivoli Storage Manager,
you can use the following encryption methods:
Table 19. Encryption methods supported

Drive                         Application method   Library method          System method
3592 generation 2 and later   Yes                  Yes                     Yes
IBM LTO-4                     Yes                  Yes, but only if your   Yes
                                                   system hardware (for
                                                   example, 3584)
                                                   supports it
HP LTO-4                      Yes                  No                      No
Oracle StorageTek T10000B     Yes                  No                      No
Oracle StorageTek T10000C     Yes                  No                      No
To enable drive encryption with IBM LTO-4, you must have the IBM RMSS
Ultrium device driver installed. SCSI drives do not support IBM LTO-4 encryption.
To enable encryption with HP LTO-4, you must have the Tivoli Storage Manager
device driver installed.
A library can contain a mixture of drives, some of which support encryption and
some of which do not. (For example, a library might contain two LTO-2 drives,
two LTO-3 drives, and two LTO-4 drives.)
With logical block protection, you can identify errors that occur while data is being
written to tape and while data is transferred from the tape drive to Tivoli Storage
Manager through the storage area network. Drives that support logical block
protection validate data during read and write operations. The Tivoli Storage
Manager server validates data during read operations.
If validation by the drive fails during write operations, it can indicate that data
was corrupted while being transferred to tape. The Tivoli Storage Manager server
fails the write operation. You must restart the operation to continue. If validation
by the drive fails during read operations, it can indicate that the tape media is
corrupted. If validation by the Tivoli Storage Manager server fails during read
operations, it can indicate that data was corrupted while being transferred from the
tape drive and the server tries the operation again. If validation fails consistently,
the Tivoli Storage Manager server issues an error message that indicates hardware
or connection problems.
If logical block protection is disabled on a tape drive, or the drive does not support
logical block protection, the Tivoli Storage Manager server can read protected data.
However, the data is not validated.
Logical block protection is superior to the CRC validation that you can specify
when you define or update a storage pool definition. When you specify CRC
validation for a storage pool, data is validated only during volume auditing
operations. Errors are identified after data is written to tape.
The following table shows the media and the formats that you can use with drives
that support logical block protection.
Tip: If you have a 3592, LTO, or Oracle StorageTek drive that is not capable of
logical block protection, you can upgrade the drive with firmware that provides
logical block protection.
Logical block protection is only available for drives that are in MANUAL, SCSI,
349x, and ACSLS libraries. Logical block protection is not available for drives that
are in external libraries. For the most current information about support for logical
block protection, see http://www.ibm.com/support/
docview.wss?uid=swg21568108.
To use logical block protection for write operations, all the drives in a library must
support logical block protection. If a drive is not capable of logical block
protection, volumes that have read/write access are not mounted. However, the
server can use the drive to mount volumes that have read-only access. The
protected data is read and validated by the Tivoli Storage Manager server if logical
block protection is enabled for read/write operations.
To enable logical block protection, specify the LBPROTECT parameter on the DEFINE
DEVCLASS or the UPDATE DEVCLASS command for the 3592, LTO, and ECARTRIDGE
device types:
v To enable logical block protection, specify a value of READWRITE or
WRITEONLY for the LBPROTECT parameter.
For example, to specify logical block protection during read/write operations for
a 3592 device class named 3592_lbprotect, issue the following command:
define devclass 3592_lbprotect devtype=3592 library=3594 lbprotect=readwrite
Tips:
– If you update the value of the LBPROTECT parameter from NO to READWRITE
or WRITEONLY and the server selects a filling volume without logical block
protection for write operations, the server issues a message each time the
volume is mounted. The message indicates that data will be written to the
volume without logical block protection. To prevent this message from
displaying or to have Tivoli Storage Manager only write data with logical
block protection, update the access of filling volumes without logical block
protection to read-only.
– To reduce the performance effects, do not specify the CRCDATA parameter on
the DEFINE STGPOOL or UPDATE STGPOOL command.
– When data is validated during read operations by both the drive and by the
Tivoli Storage Manager server, it can slow server performance during restore
and retrieval operations. If the time that is required for restore and retrieval
operations is critical, you can change the setting of the LBPROTECT parameter
from READWRITE to WRITEONLY to increase the restore or retrieval speed.
After data is restored or retrieved, you can reset the LBPROTECT parameter to
READWRITE.
v To disable logical block protection, specify a value of NO for the LBPROTECT
parameter.
Restriction: If logical block protection is disabled, the server does not write to
an empty tape with logical block protection. However, if a filling volume with
logical block protection is selected, the server continues to write to the volume
with logical block protection. To prevent the server from writing to tapes with
logical block protection, change access of filling volumes with logical block
protection to read-only. When data is read, the CRC on each block is not
checked by either the drive or the server.
If a disaster occurs and the disaster recovery site does not have drives that
support logical block protection, you must set the LBPROTECT parameter to NO. If
the tape drives are used for write operations, you must change the volume
access for volumes with protected data to read-only to prevent the server from
using the volumes.
If the server attempts to enable logical block protection on a drive that does not
support it, the server issues an error message that indicates that the drive does not
support logical block protection.
To determine whether a volume has logical block protection, issue the QUERY
VOLUME command and verify the value in the field Logical Block Protection.
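For example, to display the detailed report for a volume named VOL001 (an
illustrative name), which includes the Logical Block Protection field, you might
issue:
query volume vol001 format=detailed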
If you use the UPDATE DEVCLASS command to change the setting for logical block
protection, the change applies only to empty volumes. Filling and full volumes
maintain their status of logical block protection until they are empty and ready to
be refilled.
For example, suppose that you change the value of the LBPROTECT parameter from
READWRITE to NO. If the server selects a volume that is associated with the
device class and that has logical block protection, the server continues writing
protected data to the volume.
Remember:
v The Tivoli Storage Manager server does not verify whether a volume has logical
block protection before the volume is selected.
v If a drive does not support logical block protection, the mounts of volumes with
logical block protection for write operations fail. To prevent the server from
mounting the protected volumes for write operations, change the volume access
to read-only. Also, disable logical block protection to prevent the server from
enabling the feature on the tape drive.
v If a drive does not support logical block protection, and logical block protection
is disabled, the server reads data from protected volumes. However, the data is
not validated by either the server or the tape drive.
To determine whether a volume has logical block protection, issue the QUERY
VOLUME command and verify the value in the field Logical Block Protection.
Tip: Consider updating the access of filling volumes to read-only if you update the
value of the LBPROTECT parameter in one of the following ways:
v READWRITE or WRITEONLY to NO
v NO to READWRITE or WRITEONLY
For example, suppose that you change the setting of the LBPROTECT parameter from
NO to READWRITE. If the server selects a filling volume without logical block
protection for write operations, the server issues a message each time the volume
is mounted. The message indicates that data will be written to the volume without
logical block protection. To prevent this message from being displayed or to have
Tivoli Storage Manager only write data with logical block protection, update the
access of filling volumes without logical block protection to read-only.
Suppose, for example, that you have a 3584 library that has LTO-5 drives and that
you want to use for protected and unprotected data. To define the required device
classes and storage pools, you can issue the following commands.
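The commands themselves are not reproduced in this excerpt; the following is a
minimal sketch, assuming the library is defined as 3584LIB and that all device
class and storage pool names are illustrative:
define devclass lto5_lbp library=3584lib devtype=lto lbprotect=readwrite
define devclass lto5_nolbp library=3584lib devtype=lto lbprotect=no
define stgpool protected_pool lto5_lbp maxscratch=50
define stgpool normal_pool lto5_nolbp maxscratch=50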
Replacing drive and path definitions is required even if you are exchanging one
drive for another of the same type, using the same logical address, physical
address, SCSI ID, and port number. Device alias names can change when you
change your drive connections.
If the new drive is an upgrade that supports a new media format, you might also
need to define a new logical library, device class, and storage pool. Procedures for
setting up policy for a new drive in a multiple-drive library will vary, depending
on the types of drives and media in the library.
By default, existing volumes with a status of FILLING will remain in that state
after a drive upgrade. In some cases, you might want to continue using an older
drive to fill these volumes. This will preserve read/write capability for the existing
volumes until they have been reclaimed. If you choose to upgrade all of the drives
in a library, pay attention to the media formats supported by the new hardware.
Unless you are planning to use only the latest media with your new drive, you
will need to be aware of any compatibility issues. For migration instructions, see
“Migrating to upgraded drives” on page 197.
To use a new drive with media it can read but not write to, issue the UPDATE
VOLUME command to set the access for those volumes to read-only. This will
prevent errors caused by read/write incompatibility. For example, a new drive
may eject media written in a density format it does not support as soon as the
media is loaded into the drive. Or a new drive may fail the first write command to
media partially written in a format it does not support.
When data on the read-only media expires and the volume is reclaimed, replace it
with media that is fully compatible with the new drive. Errors can be generated if
a new drive is unable to correctly calibrate a volume written using an older
format. To avoid this problem, ensure that the original drive is in good working
order and at current microcode levels.
To remove and replace a drive:
1. Stop the IBM Tivoli Storage Manager server and shut down the operating
system.
2. Remove the old drive and follow the manufacturer's instructions to install the
new drive.
3. Restart the operating system and the IBM Tivoli Storage Manager server.
4. Delete the path from the server to the drive. For example:
delete path server1 dlt1 srctype=server desttype=drive library=lib1
5. Delete the drive definition. For example, to delete a drive named DLT1 from a
library device named LIB1, enter:
delete drive lib1 dlt1
6. Define the new drive and path. This procedure will vary, depending on the
configuration of drives in your library. See “Defining new drives.”
To add a drive that supports the same media formats as the drive it replaces, you
need to define a new drive and path.
For example, to define a new drive named DRIVE1 and a path to it from
SERVER1, enter the following commands (substitute the device name that is
assigned to your drive):
define drive lib1 drive1
define path server1 drive1 srctype=server desttype=drive library=lib1
device=mt3.0.0.0
You can use your existing library, device class, and storage pool definitions.
Upgrading all of the drives in a library that contained only one type of drive:
To upgrade all the drives in a library that contained only one type of drive, you
need to define a new drive and path. You also need to update device class and
storage pool definitions.
You must decide how to manage any new types of media supported by the new
drives. See “Preventing errors caused by media incompatibility” on page 193 for
more information.
The following scenario assumes you already have a library device defined as
follows:
Library   Library   Private    Scratch    WORM Scratch   External
Name      Type      Category   Category   Category       Manager
-------   -------   --------   --------   ------------   --------
LIB1      349X      200        201
Note: You must specify FORMAT=DRIVE for the new device classes.
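The device class definitions are not shown in this excerpt; a minimal sketch,
assuming the class names used in the storage pool examples that follow and the
FORMAT=DRIVE setting described in the note, might look like this:
define devclass 3590E_class devtype=3590 format=drive library=lib1
define devclass 3590H_class devtype=3590 format=drive library=lib1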
You can then define two storage pools to divide the tapes within the
library:
define stgpool 3590E_pool 3590E_class maxscratch=number_of_3590E_tapes
define stgpool 3590H_pool 3590H_class maxscratch=number_of_3590H_tapes
Upgrading some of the drives in a library that contained only one type of
drive:
To upgrade some of the drives in a library that contained only one type of drive,
you need to define a separate logical library for each type of drive.
The following scenario assumes you already have a library device defined as
follows:
Library   Library   Private    Scratch    WORM Scratch   External
Name      Type      Category   Category   Category       Manager
-------   -------   --------   --------   ------------   --------
LIB1      349X      200        201
Define a new logical library and path for each new type of drive
For example, to add a logical library named LIB2 for the same physical
device already defined as LIB1, enter:
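The command is not included in this excerpt; a minimal sketch, assuming
illustrative category numbers and that a path to the same physical device is also
defined (where the DEVICE value is the symbolic name of the 3494 as defined to
the library driver), might look like this:
define library lib2 libtype=349x privatecategory=300 scratchcategory=301
define path server1 lib2 srctype=server desttype=library device=library1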
You can then issue the CHECKIN LIBVOLUME command to check the
new media into the logical library LIB2.
Upgrading all of the drives in a library that contained more than one type of
drive:
To upgrade all the drives in a library that contained more than one type of drive,
you need to update the drive and path definitions for each logical library.
The following scenario assumes you already have two logical libraries defined. For
example:
Upgrading some of the drives in a library that contained more than one type of
drive:
To upgrade some of the drives in a library that contained more than one type of
drive, you need to update the drive and path definitions for each logical library.
The following scenario assumes you already have two logical libraries defined, for
example:
You must update the drive and path definitions for each logical library. Follow the
guidelines in “Upgrading some of the drives in a library that contained only one
type of drive” on page 195. For accurate reporting of capacity information, you
cannot use a global scratch pool with this configuration.
Define a new DISK storage pool and set it up to migrate its data to a storage pool
created for the new drives. Then update your existing management-class
definitions to begin storing data in the new DISK storage pool.
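A minimal sketch of this approach, assuming a new DISK pool named
NEWDISKPOOL, a tape pool named NEWTAPEPOOL created for the new drives,
and the STANDARD policy objects (all names are illustrative), might look like
this:
define stgpool newdiskpool disk nextstgpool=newtapepool highmig=80 lowmig=20
update copygroup standard standard standard type=backup destination=newdiskpool
activate policyset standard standard
You would also define one or more random-access volumes for the new DISK
storage pool before using it.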
Cleaning drives
You can use the server to manage tape drive cleaning. The server can control
cleaning tape drives in SCSI libraries and offers partial support for cleaning tape
drives in manual libraries.
For automated libraries, you can automate cleaning by specifying the frequency of
cleaning operations and checking a cleaner cartridge into the library's volume
inventory. Tivoli Storage Manager mounts the cleaner cartridge as specified. For
manual libraries, Tivoli Storage Manager issues a mount request for the cleaner
cartridge. There are special considerations if you plan to use server-controlled
drive cleaning with a SCSI library that provides automatic drive cleaning support
in its device hardware.
Note: Use library based cleaning for automated tape libraries that support this
function.
Library based cleaning provides several advantages for automated tape libraries
that support this function:
v Library based cleaning lowers the burden on the Tivoli Storage Manager
administrator to manage cleaning cartridges.
v It can improve cleaning cartridge usage rates. Most tape libraries track the
number of cleans left based on the hardware indicators. Tivoli Storage Manager
uses a raw count.
v Unnecessary cleaning is reduced. Modern tape drives do not need cleaning at
fixed intervals and can detect and request when cleaning is required.
Device manufacturers that include library cleaning recommend its use to prevent
premature wear on the read/write heads of the drives. For example, SCSI libraries
such as StorageTek 9710, IBM 3570, and IBM 3575 have their own automatic
cleaning that is built into the device.
Drives and libraries from different manufacturers differ in how they manage
cleaner cartridges and how they report the presence of a cleaner cartridge in a
drive. The device driver might not be able to open a drive that contains a cleaner
cartridge. Sense codes and error codes that are issued by devices for drive cleaning
vary. Library drive cleaning is usually transparent to all applications. Therefore,
Tivoli Storage Manager might not always detect cleaner cartridges in drives and
might not be able to determine when cleaning begins.
Some devices require a small amount of idle time between mount requests to start
drive cleaning. However, Tivoli Storage Manager tries to minimize the idle time for
a drive. The result may be to prevent the library drive cleaning from functioning
effectively. If this happens, try using Tivoli Storage Manager to control drive
cleaning. Set the frequency to match the cleaning recommendations from the
manufacturer.
If you have Tivoli Storage Manager control drive cleaning, disable the library drive
cleaning function to prevent problems. If the library drive cleaning function is
enabled, some devices automatically move any cleaner cartridge that is found in
the library to slots in the library that are dedicated for cleaner cartridges. An
application does not know that these dedicated slots exist. You cannot check a
cleaner cartridge into the Tivoli Storage Manager library inventory until you
disable the library drive cleaning function.
Restrictions:
a. For IBM 3570, 3590, and 3592 drives, specify a value for the
CLEANFREQUENCY parameter rather than specify ASNEEDED. Using the
cleaning frequency recommended by the product documentation will not
overclean the drives.
b. The CLEANFREQUENCY=ASNEEDED parameter value does not work for
all tape drives. To determine whether a drive supports this function, see the
website: http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html. At this website,
click the drive to view detailed information. If ASNEEDED is not
supported, you can use the gigabytes value for automatic cleaning.
2. Check a cleaner cartridge into the library's volume inventory with the
CHECKIN LIBVOLUME command. For example:
checkin libvolume autolib1 cleanv status=cleaner cleanings=10 checklabel=no
After the cleaner cartridge is checked in, the server will mount the cleaner
cartridge in a drive when the drive needs cleaning. The server will use that
cleaner cartridge for the number of cleanings specified. See “Checking in
cleaner volumes” and “Operations with cleaner cartridges in a library” on page
200 for more information.
To allow the server to control drive cleaning without operator intervention, you
must check a cleaner cartridge into the automated library's volume inventory.
It is recommended that you check in cleaner cartridges one at a time and do not
use the search function of checkin for a cleaner cartridge.
v Check in the cleaner cartridge without searching the library (SEARCH=NO).
The server then requests that the cartridge be placed in the entry/exit port, or
into a specific slot.
v Check in using search, but limit the search by using the VOLRANGE or
VOLLIST parameter:
checkin libvolume autolib1 status=cleaner cleanings=10
search=yes checklabel=barcode vollist=cleanv
The process scans the library by using the bar code reader, looking for the
CLEANV volume.
If your library has limited capacity and you do not want to use a slot in your
library for a cleaner cartridge, the server can issue messages telling you that a
drive needs to be cleaned.
Set the cleaning frequency for the drives in the library. When a drive needs
cleaning based on the frequency setting, the server issues the message, ANR8914I.
For example:
ANR8914I Drive DRIVE1 in library AUTOLIB1 needs to be cleaned.
You can use that message as a cue to manually insert a cleaner cartridge into the
drive. However, the server cannot track whether the drive has been cleaned.
When a drive needs to be cleaned, the server runs the cleaning operation after
dismounting a data volume if a cleaner cartridge is checked in to the library. If the
cleaning operation fails or is canceled, or if no cleaner cartridge is available, then
the indication that the drive needs cleaning is lost. Monitor cleaning messages for
these problems to ensure that drives are cleaned as needed. If necessary, issue the
CLEAN DRIVE command to have the server try the cleaning again, or manually
load a cleaner cartridge into the drive.
The server uses a cleaner cartridge for the number of cleanings that you specify
when you check in the cleaner cartridge. If you check in two or more cleaner
cartridges, the server uses only one of the cartridges until the designated number
of cleanings for that cartridge has been reached. Then the server begins to use the
next cleaner cartridge. If you check in two or more cleaner cartridges and issue
two or more CLEAN DRIVE commands concurrently, the server uses multiple
cartridges at the same time and decrements the remaining cleanings on each
cartridge.
Monitor the activity log or the server console for these messages and load a cleaner
cartridge into the drive as needed. The server cannot track whether the drive has
been cleaned.
When a drive needs cleaning, the server loads what its database shows as a cleaner
cartridge into the drive. The drive then moves to a READY state, and Tivoli
Storage Manager detects that the cartridge is a data cartridge. The server then
performs the following steps:
1. The server attempts to read the internal tape label of the data cartridge.
2. The server ejects the cartridge from the drive and moves it back to the home
slot of the “cleaner” cartridge within the library. If the eject fails, the server
marks the drive offline and issues a message that the cartridge is still in the
drive.
3. The server checks out the “cleaner” cartridge to avoid selecting it for another
drive cleaning request. The “cleaner” cartridge remains in the library but no
longer appears in the Tivoli Storage Manager library inventory.
4. If the server was able to read the internal tape label, the server checks the
volume name against the current library inventory, storage pool volumes, and
the volume history file.
v If there is not a match, you probably checked in a data cartridge as a cleaner
cartridge by mistake. Now that the volume is checked out, you do not need
to do anything else.
v If there is a match, the server issues messages that manual intervention and a
library audit are required. Library audits can take considerable time, so you
should issue the command when you have sufficient time. See “Auditing
volume inventories in libraries” on page 168.
Note: A drive cannot be deleted until the defined path to the drive has been
deleted. Also, a library cannot be deleted until all of the drives defined within it
are deleted.
For details about dismounting, see “Dismounting idle volumes” on page 176.
Managing paths
You can use Tivoli Storage Manager commands to query, update, and delete paths.
You can request either a standard or a detailed report. For example, to display
information about all paths, issue the following command:
query path
Updating paths
You can use the UPDATE PATH command to update the attributes of an existing
path definition.
The examples below show how you can use the UPDATE PATH commands for the
following path types:
v Library Paths
Update the path from SERVER1 to a SCSI library named SCSILIB:
update path server1 scsilib srctype=server desttype=library device=lb4.0.0.0
v Drive Paths
Update the path from the NAS data mover NAS1 to the drive NASDRIVE1 in the
library NASLIB:
update path nas1 nasdrive1 srctype=datamover desttype=drive
library=naslib device=mt3.0.0.0
To delete a path from a NAS data mover NAS1 to the library NASLIB:
delete path nas1 naslib srctype=datamover desttype=library
Attention: If you delete the path to a device or make the path offline, you disable
access to that device.
Managing data movers
You can request either a standard or a detailed report about data movers. For
example, to display a standard report about all data movers, issue the following
command:
query datamover *
For example, to update the data mover for the node named NAS1 to change the IP
address, issue the following command:
update datamover nas1 hladdress=9.67.97.109
Before you can delete a data mover, you must delete all paths defined for the data
mover.
Managing disks
You can query, update, and delete client-owned disks that reside in a storage area
network.
You can request either a standard or a detailed report. For example, to display a
standard report about all defined disks, issue the following command:
query disk *
Updating disks
You can use the UPDATE DISK command to update the attributes of an existing
disk definition.
The example below shows how you can use the UPDATE DISK command to
change the world wide name, serial number, and status of a disk.
Update a disk named Harddisk1 owned by NODE1. Change the world wide name
to 20020060450d00e2 and the serial number to 100047. Change the ONLINE status
to YES.
update disk node1 Harddisk1 wwn=20020060450d00e2 serial=100047 online=yes
Deleting disks
You can use the DELETE DISK command to delete an existing disk definition.
All paths related to a disk must be deleted before the disk itself can be deleted.
See “Managing libraries” on page 185 and “Managing drives” on page 186 for
information about displaying library and drive information, and updating and
deleting libraries and drives.
Defining libraries
Before you can use a drive, you must first define the library to which the drive
belongs.
For both manually mounted drives and drives in automated libraries, the library
must be defined before the drives can be used. For example, you have several
stand-alone tape drives. You can define a library named MANUALMOUNT for
these drives by using the following command:
define library manualmount libtype=manual
For all libraries other than manual libraries, you define the library and then define
a path from the server to the library. For example, if you have an IBM 3583 device,
you can define a library named ROBOTMOUNT using the following command:
define library robotmount libtype=scsi
Next, you use the DEFINE PATH command. In the path, you must specify the
DEVICE parameter. The DEVICE parameter is required and specifies the device
alias name by which the library's robotic mechanism is known.
define path server1 robotmount srctype=server desttype=library
device=lb3.0.0.0
For more information about paths, see “Defining paths” on page 208.
If you choose, you can specify the serial number when you define the library to
the server. For convenience, the default is to allow the server to obtain the serial
number from the library itself at the time that the path is defined.
If you specify the serial number, the server confirms that the serial number is
correct when you define the path to the library. When you define the path, you can
set AUTODETECT=YES to allow the server to correct the serial number if the
number that it detects does not match what you entered when you defined the
library.
Depending on the capabilities of the library, the server may not be able to
automatically detect the serial number. Not all devices are able to return a serial
number when asked for it by an application such as the server. In this case, the
server will not record a serial number for the device, and will not be able to
confirm the identity of the device when you define the path or when the server
uses the device. See “Impact of device changes on the SAN” on page 153.
Defining drives
To inform the server about a drive that can be used to access storage volumes,
issue the DEFINE DRIVE command, followed by the DEFINE PATH command.
When issuing the DEFINE DRIVE command, you must provide some or all of the
following information:
Library name
The name of the library in which the drive resides.
Drive name
The name assigned to the drive.
Serial number
The serial number of the drive. The serial number parameter applies only
to drives in SCSI libraries. With the serial number, the server can confirm
the identity of the device when you define the path or when the server
uses the device.
You can specify the serial number if you choose. The default is to allow the
server to obtain the serial number from the drive itself at the time that the
path is defined. If you specify the serial number, the server confirms that
the serial number is correct when you define the path to the drive. When
you define the path, you can set AUTODETECT=YES to allow the server to
correct the serial number if the number that it detects does not match what
you entered when you defined the drive.
Depending on the capabilities of the drive, the server may not be able to
automatically detect the serial number. In this case, the server will not
record a serial number for the device, and will not be able to confirm the
identity of the device when you define the path or when the server uses
the device.
Element address
The element address of the drive. The ELEMENT parameter applies only
to drives in SCSI libraries. The element address is a number that indicates
the physical location of a drive within an automated library. The server
needs the element address to connect the physical location of the drive to
the drive's SCSI address. You can allow the server to obtain the element
number from the drive itself at the time that the path is defined, or you
can specify the element number when you define the drive.
Depending on the capabilities of the library, the server may not be able to
automatically detect the element address. In this case you must supply the
element address when you define the drive, if the library has more than
one drive. Element numbers for many libraries are available at
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html.
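For example, a minimal sketch of defining a drive before defining its path (the
library name MANLIB and drive name MANDRIVE match the path example that
follows; for a manual library, no element address is needed):
define drive manlib mandrive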
Next, you define the path from the server to the drive, using the device name used
to access the drive:
define path server1 mandrive srctype=server desttype=drive library=manlib
device=mt3.0.0.0
Defining data movers
When issuing the DEFINE DATAMOVER command, you must provide some or all
of the following information:
Data mover name
The name of the defined data mover.
Type The type of data mover (SCSI or NAS).
World wide name
The Fibre Channel world wide name for the data mover device.
Serial number
Specifies the serial number of the data mover.
High level address
The high level address is either the numerical IP address or the domain
name of a NAS file server.
Low level address
The low level address specifies the TCP port number used to access a NAS
file server.
User ID
The user ID specifies the ID for a user when initiating a Network Data
Management Protocol (NDMP) session with a NAS file server.
Password
The password specifies the password associated with a user ID when
initiating an NDMP session with a NAS file server. Check with your NAS
file server vendor for user ID and password conventions.
Copy threads
The number of concurrent copy operations that the SCSI data mover can
support.
Online
The online parameter specifies whether the data mover is online.
Data format
The data format parameter specifies the data format used according to the
type of data mover device used.
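For example, a sketch of defining a NAS data mover (the address, port, user ID,
password, and data format shown here are assumptions for illustration; use the
values that apply to your NAS file server):
define datamover nas1 type=nas hladdress=netapp2.example.com lladdress=10000
userid=root password=admin dataformat=netappdump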
Defining paths
Before a device can be used, a path must be defined between the device and the
server or the device and the data mover responsible for outboard data movement.
When issuing the DEFINE PATH command, you must provide some or all of the
following information:
Source name
The name of the server, storage agent, or data mover that is the source for
the path.
Destination name
The assigned name of the device that is the destination for the path.
Source type
The type of source for the path. (A storage agent is considered a type of
server for this purpose.)
Destination type
The type of device that is the destination for the path.
Library name
The name of the library that a drive is defined to if the drive is the
destination of the path.
GENERICTAPE device class
If you plan to use a device that is not supported by the Tivoli Storage
Manager server and you want to use the GENERICTAPE device class,
specify GENERICTAPE=Yes when defining the path.
Device
The alias name of the device (or for an IBM 3494 library, the symbolic
name). This parameter is used when defining a path between a server or a
storage agent and a library, drive, or disk. This parameter should not be
used when defining a data mover as the source type, except when the data
mover is a NAS data mover. NAS data movers always require a device
parameter. For shared FILE drives, this value is always “FILE.”
Directory
The directory location or locations of the files used in the FILE device
class. The default is the current working directory of the server at the time
the command is issued. Windows registry information is used to determine
the default directory.
Automatic detection of serial number and element address
For devices on a SAN, you can specify whether the server should correct
the serial number or element address of a drive or library, if it was
incorrectly specified on the definition of the drive or library. The server
uses the device name to locate the device and compares the serial number that it
detects for that device with the serial number that was specified in the definition
of the drive or library.
For example, if you had a SCSI type library named AUTODLTLIB that had a
device name of lb3.0.0.0, and you wanted to define it to a server named ASTRO1,
you would issue the following command:
define path astro1 autodltlib srctype=server desttype=library
device=lb3.0.0.0
If you had a drive, DRIVE01, that resided in library AUTODLTLIB, and had a
device name of mt3.0.0.0, and you wanted to define it to server ASTRO1, you
would issue the following command:
define path astro1 drive01 srctype=server desttype=drive library=autodltlib
device=mt3.0.0.0
Sequential-access device types include tape, optical, and sequential-access disk. For
random access storage, Tivoli Storage Manager supports only the DISK device
class, which is defined by Tivoli Storage Manager.
To define a device class, use the DEFINE DEVCLASS command and specify the
DEVTYPE parameter. The DEVTYPE parameter assigns a device type to the device
class. You can define multiple device classes for each device type. For example,
you might need to specify different attributes for different storage pools that use
the same type of tape drive. Variations may be required that are not specific to the
device, but rather to how you want to use the device (for example, mount
retention or mount limit). For all device types other than FILE or SERVER, you
must define libraries and drives to Tivoli Storage Manager before you define the
device classes.
To update an existing device class definition, use the UPDATE DEVCLASS command.
You can also delete a device class and query a device class using the DELETE
DEVCLASS and QUERY DEVCLASS commands, respectively.
Remember:
v One device class can be associated with multiple storage pools, but each storage
pool is associated with only one device class.
v If you include the DEVCONFIG option in the dsmserv.opt file, the files that you
specify with that option are automatically updated with the results of the
DEFINE DEVCLASS, UPDATE DEVCLASS, and DELETE DEVCLASS
commands.
v Tivoli Storage Manager now allows SCSI libraries to include tape drives of more
than one device type. When you define the device class in this environment, you
must declare a value for the FORMAT parameter.
Tasks
“Defining tape and optical device classes” on page 212
“Defining 3592 device classes” on page 215
“Device classes for devices not supported by the Tivoli Storage Manager server” on page
218
“Defining device classes for removable media devices” on page 218
“Defining sequential-access disk (FILE) device classes” on page 218
“Defining LTO device classes” on page 222
“Defining SERVER device classes” on page 225
“Defining device classes for StorageTek VolSafe devices” on page 226
“Defining device classes for CENTERA devices” on page 227
“Obtaining information about device classes” on page 229
“How Tivoli Storage Manager fills volumes” on page 230
For details about commands and command parameters, see the Administrator's
Reference.
For the most up-to-date list of supported devices and valid device class formats,
see the Tivoli Storage Manager Supported Devices website:
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
The examples in topics show how to perform tasks using the Tivoli Storage
Manager command-line interface. For information about the commands, see the
Administrator's Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
The following tables list supported devices, media types, and Tivoli Storage
Manager device types.
For tape and optical device classes, the default values selected by the server
depend on the recording format used to write data to the volume. You can either
accept the default for a given device type or specify a value.
To specify estimated capacity for tape volumes, use the ESTCAPACITY parameter
when you define the device class or update its definition.
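For example, a sketch that sets an estimated capacity of 9 GB on an assumed tape
device class named TAPECLASS:
update devclass tapeclass estcapacity=9G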
For more information about how Tivoli Storage Manager uses the estimated
capacity value, see “How Tivoli Storage Manager fills volumes” on page 230.
To specify a recording format, use the FORMAT parameter when you define the
device class or update its definition.
If all drives associated with that device class are identical, specify
FORMAT=DRIVE. The server selects the highest format that is supported by the
drive on which a volume is mounted.
If some drives associated with the device class support a higher density format
than others, specify a format that is compatible with all drives. If you specify
FORMAT=DRIVE, mount errors can occur. For example, suppose a device class
uses two incompatible devices such as an IBM 7208-2 and an IBM 7208-12. The
server might select the high-density recording format of 8500 for each of two new
volumes. Later, if the two volumes are to be mounted concurrently, one fails
because only one of the drives is capable of the high-density recording format.
If drives in a single SCSI library use different tape technologies (for example, DLT
and LTO Ultrium), specify a unique value for the FORMAT parameter in each
device class definition.
To associate a device class with a library, use the LIBRARY parameter when you
define a device class or update its definition.
When setting a mount limit for a device class, you need to consider the number of
storage devices connected to your system, whether you are using the
simultaneous-write function, whether you are associating multiple device classes
with a single library, and the number of processes that you want to run at the
same time.
When selecting a mount limit for a device class, consider the following issues:
v How many storage devices are connected to your system?
Do not specify a mount limit value that is greater than the number of associated
available drives in your installation. If the server tries to mount as many
volumes as specified by the mount limit and no drives are available for the
required volume, an error occurs and client sessions may be terminated. (This
does not apply when the DRIVES parameter is specified.)
v Are you using the simultaneous-write function to primary storage pools, copy
storage pools, and active-data pools?
Specify a mount limit value that provides a sufficient number of mount points to
support writing data simultaneously to the primary storage pool and all
associated copy storage pools and active-data pools.
v Are you associating multiple device classes with a single library?
A device class associated with a library can use any drive in the library that is
compatible with the device class' device type. Because you can associate more
than one device class with a library, a single drive in the library can be used by
more than one device class. However, Tivoli Storage Manager does not manage
how a drive is shared among multiple device classes.
v How many Tivoli Storage Manager processes do you want to run at the same
time, using devices in this device class?
Tivoli Storage Manager automatically cancels some processes to run other,
higher priority processes. If the server is using all available drives in a device
class to complete higher priority processes, lower priority processes must wait
until a drive becomes available. For example, Tivoli Storage Manager cancels the
process for a client backing up directly to tape if the drive being used is needed
for a server migration or tape reclamation process. Tivoli Storage Manager
cancels a tape reclamation process if the drive that is being used is needed for a
client restore operation.
Best Practice: If the library associated with this device class is EXTERNAL type,
explicitly specify the mount limit instead of using MOUNTLIMIT=DRIVES.
You can control the amount of time that a mounted volume remains mounted after
its last I/O activity. If a volume is used frequently, you can improve performance
by setting a longer mount retention period to avoid unnecessary mount and
dismount operations.
To control the amount of time a mounted volume remains mounted, use the
MOUNTRETENTION parameter when you define the device class or update its
definition. For example, if the mount retention value is 60, and a mounted volume
remains idle for 60 minutes, then the server dismounts the volume.
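For example, a sketch that sets a 60-minute mount retention on an assumed device
class named TAPECLASS:
update devclass tapeclass mountretention=60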
While Tivoli Storage Manager has a volume mounted, the drive is allocated to
Tivoli Storage Manager and cannot be used for anything else. If you need to free
the drive for other uses, you can cancel Tivoli Storage Manager operations that are
using the drive and then dismount the volume. For example, you can cancel server
migration or backup operations. For information on how to cancel processes and
dismount volumes, see:
v “Canceling server processes” on page 651
v “Dismounting idle volumes” on page 176
Controlling the amount of time that the server waits for a drive:
You can specify the maximum amount of time, in minutes, that the Tivoli Storage
Manager server waits for a drive to become available for the current mount
request.
To control wait time, use the MOUNTWAIT parameter when you define the device
class or update its definition.
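For example, a sketch that limits the mount wait time to 10 minutes for an
assumed device class named TAPECLASS:
update devclass tapeclass mountwait=10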
For an example that shows how to configure a VolSafe device using the WORM
parameter, see “Defining device classes for StorageTek VolSafe devices” on page
226
For optimal performance, do not mix generations of 3592 media in a single library.
Media problems can result when different drive generations are mixed. For
example, Tivoli Storage Manager might not be able to read a volume's label.
If you must mix generations of drives in a library, use one of the methods in the
following table to prevent or minimize the potential for problems.
If your library contains three drive generations, the latest drive generation in your library
can only read media from the earliest format, but cannot write with it. For example, if your
library contains generation 2, generation 3, and generation 4 drives, the generation 4 drives
can only read the generation 2 format. In this configuration, mark all media previously
written in generation 2 format to read-only.
Specify a path with the same special file name for each new library object. In addition, for
349X libraries, specify disjoint scratch categories (including the WORMSCRATCH category,
if applicable) for each library object. Specify a new device class and a new storage pool
that points to each new library object.
(SCSI libraries only) Define a new storage pool and device class for the latest drive
generation. For example, suppose you have a storage pool and device class for 3592-2. The
storage pool will contain all the media written in generation 2 format. Suppose that the
value of the FORMAT parameter in the device class definition is set to 3592-2 (not DRIVE).
You add generation 3 drives to the library. Complete the following steps:
1. In the new device-class definition for the generation 3 drives, set the value of the
FORMAT parameter to 3592-3 or 3592-3C. Do not specify DRIVE.
2. In the definition of the storage pool associated with generation 2 drives, update the
MAXSCRATCH parameter to 0, for example:
update stgpool genpool2 maxscratch=0
This method allows both generations to use their optimal format and minimizes potential
media problems that can result from mixing generations. However, it does not resolve all
media issues. For example, competition for mount points and mount failures might result.
(To learn more about mount point competition in the context of LTO drives and media, see
“Defining LTO device classes” on page 222.) The following list describes media restrictions:
v CHECKIN LIBVOL: The issue resides with using the CHECKLABEL=YES option. If the label is
currently written in a generation 3 or later format, and you specify the
CHECKLABEL=YES option, drives of previous generations fail using this command. As
a best practice, use CHECKLABEL=BARCODE.
v LABEL LIBVOL: When the server tries to use drives of a previous generation to read the
label written in a generation 3 or later format, the LABEL LIBVOL command fails unless
OVERWRITE=YES is specified. Verify that the media being labeled with OVERWRITE=YES
does not have any active data.
v CHECKOUT LIBVOL: When Tivoli Storage Manager verifies the label
(CHECKLABEL=YES), if the label was written in a generation 3 or later format
and the drive is a previous-generation drive, the command fails. As a best
practice, use CHECKLABEL=NO.
Tivoli Storage Manager lets you reduce media capacity to create volumes with
faster data-access speeds. The benefit is that you can partition data into storage pools
that have volumes with faster data-access speeds.
To reduce media capacity, use the SCALECAPACITY parameter when you define
the device class or update its definition.
Scale capacity only takes effect when data is first written to a volume. Updates to
the device class for scale capacity do not affect volumes that already have data
written to them until the volume is returned to scratch status.
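For example, a sketch that sets the scale capacity to 90 percent for an assumed
3592 device class named 3592CLASS:
update devclass 3592class scalecapacity=90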
Encrypting data with drives that are 3592 generation 2 and later:
With Tivoli Storage Manager, you can use the following types of drive encryption
with drives that are 3592 generation 2 and later: Application, System, and Library.
These methods are defined through the hardware.
The following simplified example shows how to permit the encryption of data for
empty volumes in a storage pool, using Tivoli Storage Manager as the key
manager:
1. Define a library. For example:
define library 3584 libtype=SCSI
2. Define a device class, 3592_ENCRYPT, and specify the value ON for the
DRIVEENCRYPTION parameter. For example:
define devclass 3592_encrypt library=3584 devtype=3592 driveencryption=on
3. Define a storage pool. For example:
define stgpool 3592_encrypt_pool 3592_encrypt
For more information about using drive encryption, refer to “Encrypting data on
tape” on page 560.
For Windows systems, you must also define a drive path with GENERICTAPE=Yes
to use a tape device.
For a manual library with multiple drives of device type GENERICTAPE, ensure
that the device types and recording formats of the drives are compatible. Because
the devices are controlled by the operating system device driver, the Tivoli Storage
Manager server is not aware of the following:
v The actual type of device: 4 mm, 8 mm, digital linear tape, and so forth. For
example, if you have a 4 mm device and an 8 mm device, you must define
separate manual libraries for each device.
v The actual cartridge recording format. For example, if you have a manual library
defined with two device classes of GENERICTAPE, ensure the recording formats
are the same for both drives.
When using CD-ROM media for the REMOVABLEFILE device type, the library
type must be specified as MANUAL. Access this media through a drive letter, for
example, E:.
To define a FILE device class, use the DEVTYPE=FILE parameter in the device
class definition.
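For example, a minimal sketch of a FILE device class (the class name, directory,
maximum volume size, and mount limit are assumptions for illustration):
define devclass fileclass devtype=file directory=c:\server maxcapacity=5g mountlimit=2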
The Tivoli Storage Manager server allows multiple client sessions (archive,
retrieve, backup, and restore) or server processes (for example, storage pool
backup) to concurrently read a volume in a storage pool that is associated with a
FILE-type device class. In addition, one client session or one server process can
write to the volume while it is being read.
The following server processes are allowed shared read access to FILE volumes:
v BACKUP DB
v BACKUP STGPOOL
v COPY ACTIVEDATA
v EXPORT/IMPORT NODE
v EXPORT/IMPORT SERVER
v GENERATE BACKUPSET
v RESTORE STGPOOL
v RESTORE VOLUME
The following server processes are not allowed shared read access to FILE
volumes:
v AUDIT VOLUME
v DELETE VOLUME
v MIGRATION
v MOVE DATA
v MOVE NODEDATA
v RECLAMATION
You can specify one or more directories as the location of the files used in the FILE
device class. The default is the current working directory of the server at the time
the command is issued.
Attention: Do not specify multiple directories from the same file system. Doing
so can cause incorrect space calculations. For example, if the directories /usr/dir1
and /usr/dir2 are in the same file system, the space check, which does a
preliminary evaluation of available space during store operations, will count each
directory as a separate file system. If space calculations are incorrect, the server
could commit to a FILE storage pool, but not be able to obtain space, causing the
operation to fail. If the space check is accurate, the server can skip the FILE pool in
the storage hierarchy and use the next storage pool if one is available.
If the server needs to allocate a scratch volume, it creates a new file in the
specified directory or directories. (The server can choose any of the directories in
which to create new scratch volumes.) To optimize performance, ensure that
multiple directories correspond to separate physical volumes.
The following table lists the file name extension created by the server for scratch
volumes depending on the type of data that is stored.
For scratch volumes used to store this data: The file extension is:
Client data .BFS
Export .EXP
Database backup .DBV
Tivoli Storage Manager supports the use of remote file systems or drives for
reading and writing storage pool data, database backups, and other data
operations. Disk subsystems and file systems must not report a write operation as
successful when the write can still fail after success has been reported to Tivoli
Storage Manager.
You must ensure that storage agents can access newly created FILE volumes. To
access FILE volumes, storage agents replace names from the directory list in the
device class definition with the names in the directory list for the associated path
definition.
The following example illustrates the importance of matching device classes and
paths to ensure that storage agents can access newly created FILE volumes.
Suppose you want to use these three directories for a FILE library:
c:\server
d:\server
e:\server
1. Use the following command to set up a FILE library named CLASSA with one
drive named CLASSA1 on SERVER1:
define devclass classa devtype=file
directory="c:\server,d:\server,e:\server"
shared=yes mountlimit=1
2. You want the storage agent STA1 to be able to use the FILE library, so you
define the following path for storage agent STA1:
define path server1 sta1 srctype=server desttype=drive device=file
directory="\\192.168.1.10\c\server,\\192.168.1.10\d\server,
\\192.168.1.10\e\server" library=classa
In this scenario, the storage agent, STA1, will replace the directory name
c:\server with the directory name \\192.168.1.10\c\server to access FILE
volumes that are in the c:\server directory on the server.
If the device class directory list is later updated so that it no longer includes
c:\server, SERVER1 will still be able to access file volume c:\server\file1.dsm, but
the storage agent STA1 will not be able to access it because a matching directory
name in the PATH directory list no longer exists. If a directory name is not available in
the directory list associated with the device class, the storage agent can lose access
to a FILE volume in that directory. Although the volume will still be accessible
from the Tivoli Storage Manager server for reading, failure of the storage agent to
access the FILE volume can cause operations to be retried on a LAN-only path or
to fail.
To restrict the size of volumes, use the MAXCAPACITY parameter when you
define a device class or update its definition. When the server detects that a
volume has reached a size equal to the maximum capacity, it treats the volume as
full and stores any new data on a different volume.
When selecting a mount limit for this device class, consider how many Tivoli
Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available mount points in a device class
to complete higher priority processes, lower priority processes must wait until a
mount point becomes available. For example, Tivoli Storage Manager cancels the
process for a client backup if the mount point being used is needed for a server
migration or reclamation process. Tivoli Storage Manager cancels a reclamation
process if the mount point being used is needed for a client restore operation. For
additional information, see “Preemption of client or server operations” on page
652.
If processes are often canceled by other processes, consider whether you can make
more mount points available for Tivoli Storage Manager use. Otherwise, review
your scheduling of operations to reduce the contention for resources.
If you are considering mixing different generations of LTO media and drives, be
aware of the following restrictions:
Table 25. Read - write capabilities for different generations of LTO drives

Drives         Generation 1    Generation 2    Generation 3    Generation 4    Generation 5
               media           media           media           media           media
Generation 1   Read and write  n/a             n/a             n/a             n/a
Generation 2   Read and write  Read and write  n/a             n/a             n/a
Generation 3   Read only       Read and write  Read and write  n/a             n/a
Generation 4   n/a             Read only       Read and write  Read and write  n/a
Generation 5   n/a             n/a             Read only       Read and write  Read and write
Both device classes can point to the same library in which there can be Ultrium
Generation 1 and Ultrium Generation 2 drives. The drives will be shared between
the two storage pools. One storage pool will use the first device class and Ultrium
Generation 1 media exclusively. The other storage pool will use the second device
class and Ultrium Generation 2 media exclusively. Because the two storage pools
share a single library, Ultrium Generation 1 media can be mounted on Ultrium
Generation 2 drives as they become available during mount point processing.
Remember:
v If you are mixing Ultrium Generation 1 with Ultrium Generation 3 drives and
media in a single library, you must mark the Generation 1 media as read-only,
and all Generation 1 scratch volumes must be checked out.
v If you are mixing Ultrium Generation 2 with Ultrium Generation 4 or
Generation 5 drives and media in a single library, you must mark the Generation
2 media as read-only, and all Generation 2 scratch volumes must be checked out.
Consider the example of a mixed library that consists of the following drives and
media:
v Four LTO Ultrium Generation 1 drives and LTO Ultrium Generation 1 media
v Four LTO Ultrium Generation 2 drives and LTO Ultrium Generation 2 media
The number of mount points available for use by each storage pool is specified in
the device class using the MOUNTLIMIT parameter. The MOUNTLIMIT parameter
in the LTO2CLASS device class should be set to 4 to match the number of available
drives that can mount only LTO2 media. The MOUNTLIMIT parameter in the
LTO1CLASS device class should be set to a value higher (5 or possibly 6) than the
number of available drives, to account for the fact that Ultrium Generation 1
media can also be mounted in Ultrium Generation 2 drives.
Monitor and adjust the MOUNTLIMIT setting to suit changing workloads. If the
MOUNTLIMIT for LTO1POOL is set too high, mount requests for the LTO2POOL
might be delayed or fail because the Ultrium Generation 2 drives have been used
to satisfy Ultrium Generation 1 mount requests. In the worst scenario, too much
competition for Ultrium Generation 2 drives might cause mounts for Generation 2
media to fail with the following message:
ANR8447E No drives are currently available in the library.
If the MOUNTLIMIT for LTO1POOL is not set high enough, mount requests that
could potentially be satisfied by LTO Ultrium Generation 2 drives will be delayed.
For more information about using drive encryption, refer to “Encrypting data on
tape” on page 560.
Tivoli Storage Manager supports the Application method of encryption with IBM
and HP LTO-4 drives. Only IBM LTO-4 supports the System and Library methods.
The Library method of encryption is supported only if your system hardware (for
example, IBM 3584) supports it.
Remember: You cannot use drive encryption with write-once, read-many (WORM)
media.
The Application method is defined through the hardware. To use the Application
method, in which Tivoli Storage Manager generates and manages encryption keys,
set the DRIVEENCRYPTION parameter to ON. This permits the encryption of data
for empty volumes. If the parameter is set to ON and the hardware is configured
for another encryption method, backup operations will fail.
The following simplified example shows the steps you would take to permit the
encryption of data for empty volumes in a storage pool:
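A sketch of these steps, following the pattern of the 3592 example earlier in this
chapter (the library, device class, and storage pool names are assumptions):
1. Define a library. For example:
define library 3584 libtype=scsi
2. Define a device class, LTO_ENCRYPT, and specify the value ON for the
DRIVEENCRYPTION parameter. For example:
define devclass lto_encrypt library=3584 devtype=lto driveencryption=on
3. Define a storage pool. For example:
define stgpool lto_encrypt_pool lto_encrypt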
To define a SERVER device class, use the DEFINE DEVCLASS command with the
DEVTYPE=SERVER parameter. For information about how to use a SERVER device
class, see “Using virtual volumes to store data on another server” on page 763.
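For example, a sketch of a SERVER device class that stores data on an assumed
target server named SERVER2:
define devclass serverclass devtype=server servername=server2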
To specify a file size, use the MAXCAPACITY parameter when you define the
device class or update its definition.
The storage pool volumes of this device type are explicitly set to full when the
volume is closed and dismounted.
When specifying a mount limit, consider your network load balancing and how
many Tivoli Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available sessions in a device class to
complete higher priority processes, lower priority processes must wait until a
session becomes available. For example, Tivoli Storage Manager cancels the process
for a client backup if a session is needed for a server migration or reclamation
process. Tivoli Storage Manager cancels a reclamation process if the session being
used is needed for a client restore operation.
If processes are often canceled by other processes, consider whether you can make
more sessions available for Tivoli Storage Manager use. Otherwise, review your
scheduling of operations to reduce the contention for network resources.
VolSafe technology uses media that cannot be overwritten; therefore, do not use
this media for short-term backups of client files, the server database, or export
tapes. There are two methods for using VolSafe media and drives:
v Define a device class using the DEFINE DEVCLASS command and specify
DEVTYPE=VOLSAFE. You can use this device class with EXTERNAL, SCSI, and
ACSLS libraries. All drives in a library must be enabled for VolSafe use.
v Define a device class using the DEFINE DEVCLASS command, and specify
DEVTYPE=ECARTRIDGE and WORM=YES. For VolSafe devices, WORM=YES is
required and must be specified when the device class is defined. You cannot
update the WORM parameter using the UPDATE DEVCLASS command. You
cannot specify DRIVEENCRYPTION=ON if your drives are using WORM
media.
For more information about VolSafe media, see “Write-once, read-many tape
media” on page 164.
Tivoli Storage Manager supports the Application method of encryption with Oracle
StorageTek T10000B or T10000C drives. The Library method of encryption is
supported only if your system hardware supports it.
Remember: You cannot use drive encryption with write-once, read-many (WORM)
media or VolSafe media.
The Application method, in which Tivoli Storage Manager generates and manages
encryption keys, is defined through the hardware. To use the Application method,
set the DRIVEENCRYPTION parameter to ON. This setting permits the encryption
of data for empty volumes. If the parameter is set to ON and the hardware is
configured for another encryption method, backup operations fail.
The following simplified example shows the steps you would take to permit data
encryption for empty volumes in a storage pool:
1. Define a library:
define library sl3000 libtype=scsi
2. Define a device class, ECART_ENCRYPT, and specify Tivoli Storage Manager
as the key manager:
define devclass ecart_encrypt library=sl3000
devtype=ecartridge driveencryption=on
3. Define a storage pool:
define stgpool ecart_encrypt_pool ecart_encrypt
Related concepts:
“Choosing an encryption method” on page 561
Multiple client retrieve sessions, restore sessions, or server processes can read a
volume concurrently in a storage pool that is associated with the CENTERA device
type. In addition, one client session or one server process can write to the volume
while it is being read.
The following server processes can share read access to Centera volumes:
v EXPORT NODE
v EXPORT SERVER
v GENERATE BACKUPSET
The following server processes cannot share read access to Centera volumes:
v AUDIT VOLUME
v DELETE VOLUME
When selecting a mount limit for this device class, consider how many Tivoli
Storage Manager processes you want to run at the same time.
Tivoli Storage Manager automatically cancels some processes to run other, higher
priority processes. If the server is using all available mount points in a device class
to complete higher priority processes, lower priority processes must wait until a
mount point becomes available. For example, the Tivoli Storage Manager server is
currently performing a client backup request to an output volume and another
request from another client to restore data from the same volume. The backup
request is preempted and the volume is released for use by the restore request. For
additional information, see “Preemption of client or server operations” on page
652.
To control the number of mount points concurrently open for Centera devices, use
the MOUNTLIMIT parameter when you define the device class or update its
definition.
If you specify an estimated capacity that exceeds the actual capacity of the volume
in the device class, Tivoli Storage Manager updates the estimated capacity of the
volume when the volume becomes full. When Tivoli Storage Manager reaches the
end of the volume, it updates the capacity for the amount that is written to the
volume.
You can either accept the default estimated capacity for a given device class, or
explicitly specify an estimated capacity. An accurate estimated capacity value is not
required, but is useful. Tivoli Storage Manager uses the estimated capacity of
volumes to determine the estimated capacity of a storage pool, and the estimated
percent utilized. You may want to change the estimated capacity if:
v The default estimated capacity is inaccurate because data compression is being
performed by the drives.
v You have volumes of nonstandard size.
Use either client compression or device compression, but not both. The following
table summarizes the advantages and disadvantages of each type of compression.
Either type of compression can affect tape drive performance, because compression
affects data rate. When the rate of data going to a tape drive is slower than the
drive can write, the drive starts and stops while data is written, meaning relatively
poorer performance. When the rate of data is fast enough, the tape drive can reach
streaming mode, meaning better performance. If tape drive performance is more
important than the space savings that compression can provide, you may want to
perform timed test backups using different approaches to determine what is best
for your system.
Drive compression is specified with the FORMAT parameter for the drive's device
class, and the hardware device must be able to support the compression format.
For information about how to set up compression on the client, see “Node
compression considerations” on page 442 and “Registering nodes with the server”
on page 440.
It may wrongly appear that you are not getting the full use of the capacity of your
tapes, for the following reasons:
v A tape device manufacturer often reports the capacity of a tape based on an
assumption of compression by the device. If a client compresses a file before it is
sent, the device may not be able to compress it any further before storing it.
v Tivoli Storage Manager records the size of a file as it goes to a storage pool. If
the client compresses the file, Tivoli Storage Manager records this smaller size in
the database. If the drive compresses the file, Tivoli Storage Manager is not
aware of this compression.
Figure 14 on page 232 compares what Tivoli Storage Manager sees as the amount
of data stored on tape when compression is done by the device and by the client.
In both cases, Tivoli Storage Manager considers the volume to be full. However,
Tivoli Storage Manager considers the capacity of the volume in the two cases to be
different: 2.4 GB when the drive compresses the file, and 1.2 GB when the client
compresses the file. Use the QUERY VOLUME command to see the capacity of
volumes from Tivoli Storage Manager's viewpoint. See “Monitoring the use of
storage pool volumes” on page 406.
Figure 14. Comparing compression at the client and compression at the device
Tasks:
“Configuring Tivoli Storage Manager for NDMP operations” on page 240
“Determining the location of NAS backup” on page 242
“Setting up tape libraries for NDMP operations” on page 246
“Configuring Tivoli Storage Manager policy for NDMP operations” on page 241
“Registering NAS nodes with the Tivoli Storage Manager server” on page 252
“Defining a data mover for the NAS file server” on page 252
“Defining paths to libraries for NDMP operations” on page 256
“Defining paths for NDMP operations” on page 253
“Labeling and checking tapes into the library” on page 256
“Scheduling NDMP operations” on page 257
“Defining virtual file spaces” on page 257
“Tape-to-tape copy to back up data” on page 257
“Tape-to-tape copy to move data” on page 258
“Backing up and restoring NAS file servers using NDMP” on page 258
“Backing up NDMP file server to Tivoli Storage Manager server backups” on page 260
“Managing table of contents” on page 239
“NDMP operations management” on page 236
“Managing NAS file server nodes” on page 237
“Managing data movers used in NDMP operations” on page 238
“Storage pool management for NDMP operations” on page 238
NDMP requirements
You must meet certain requirements when you use NDMP (network data
management protocol) for operations with network-attached storage (NAS) file
servers.
Tivoli Storage Manager Extended Edition
Licensed program product that includes support for the use of NDMP.
NAS File Server
A NAS file server. The operating system on the file server must be
supported by Tivoli Storage Manager. Visit http://www.ibm.com/
support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager
for a list of NAS file servers that are certified through the “Ready for IBM
Tivoli software.”
| Note: The Tivoli Storage Manager server does not include External
| Library support for the ACSLS library when the library is used for
| NDMP operations.
| VTL library
| A virtual tape library that is supported by the Tivoli Storage
| Manager server. This type of library can be attached directly either
| to the Tivoli Storage Manager server or to the NAS file server. A
| virtual tape library is essentially the same as a SCSI library but is
| enhanced for virtual tape library characteristics and allows for
| better mount performance.
Drive Sharing: The tape drives can be shared by the Tivoli Storage
Manager server and one or more NAS file servers. Also, when a SCSI,
VTL, or a 349X library is connected to the Tivoli Storage Manager server
and not to the NAS file server, the drives can be shared by one or more
NAS file servers and one or more Tivoli Storage Manager:
v Library clients
v Storage agents
Verify the compatibility of specific combinations of a NAS file server, tape devices,
and SAN-attached devices with the hardware manufacturers.
Attention: Tivoli Storage Manager supports NDMP Version 4 for all NDMP
operations. Tivoli Storage Manager continues to support all NDMP backup and
restore operations with a NAS device that runs NDMP version 3. The Tivoli
Storage Manager server negotiates the highest protocol level (either Version 3 or
Version 4) with the NDMP server when it establishes an NDMP connection. If you
experience any issues with Version 4, you might want to try Version 3.
Client Interfaces:
v Backup-archive command-line client (on a Windows, 64 bit AIX, or 64 bit Oracle
Solaris system)
v web client
Server Interfaces:
v Server console
v Command line on the administrative client
The Tivoli Storage Manager web client interface, available with the backup-archive
client, displays the file systems of the network-attached storage (NAS) file server in
a graphical view. The client function is not required, but you can use the client
interfaces for NDMP operations. The client function is recommended for file-level
restore operations. See “File-level backup and restore for NDMP operations” on
page 261 for more information about file-level restore.
Tivoli Storage Manager prompts you for an administrator ID and password when
you perform NDMP functions using either of the client interfaces. See the
Backup-Archive Clients Installation and User's Guide for more information about
installing and activating client interfaces.
The NDMP format is not the same as the data format used for traditional Tivoli
Storage Manager backups. When you define a NAS file server as a data mover and
define a storage pool for NDMP operations, you specify the data format. For
example, you would specify NETAPPDUMP if the NAS file server is a NetApp or
an IBM System Storage N Series device. You would specify CELERRADUMP if the
NAS file server is an EMC Celerra device. For all other devices, you would specify
NDMPDUMP.
The objects that you manage for NDMP operations include:
v NAS nodes
v Data movers
v Tape libraries and drives
v Paths
v Device classes
v Storage pools
v Table of contents
For example, assume you have created a new policy domain named NASDOMAIN
for NAS nodes and you want to update a NAS node named NASNODE1 to
include it in the new domain.
1. Query the node.
query node nasnode1 type=nas
2. Change the domain of the node by issuing the following command:
update node nasnode1 domain=nasdomain
For example, to rename NASNODE1 to NAS1 you must perform the following
steps:
1. Delete all paths between data mover NASNODE1 and libraries and between
data mover NASNODE1 and drives.
2. Delete the data mover defined for the NAS node.
3. To rename NASNODE1 to NAS1, issue the following command:
rename node nasnode1 nas1
4. Define the data mover using the new node name. In this example, you must
define a new data mover named NAS1 with the same parameters used to
define NASNODE1.
Attention: When defining a new data mover for a node that you have
renamed, ensure that the data mover name matches the new node name and
that the new data mover parameters are duplicates of the original data mover
parameters. Any mismatch between a node name and a data mover name or
between new data mover parameters and original data mover parameters can
prevent you from establishing a session with the NAS file server.
5. For SCSI or 349X libraries, define a path between the NAS data mover and a
library only if the tape library is physically connected directly to the NAS file
server.
6. Define paths between the NAS data mover and any drives used for NDMP
(network data management protocol) operations.
Managing data movers used in NDMP operations
You can update, query, and delete the data movers that you define for NAS
(network attached storage) file servers.
For example, if you shut down a NAS file server for maintenance, you might want
to take the data mover offline.
1. Query your data movers to identify the data mover for the NAS file server that
you want to maintain.
query datamover nasnode1
2. Issue the following command to make the data mover offline:
update datamover nasnode1 online=no
To delete the data mover, you must first delete any path definitions in which
the data mover has been used as the source.
3. Issue the following command to delete the data mover:
delete datamover nasnode1
Attention: If the data mover has a path to the library, and you delete the data
mover or make the data mover offline, you disable access to the library.
Remove Tivoli Storage Manager server access by deleting the path definition with
the following command:
delete path server1 nasdrive1 srctype=server desttype=drive library=naslib
You can query and update storage pools. You cannot update the DATAFORMAT
parameter.
You cannot designate a Centera storage pool as a target pool of NDMP operations.
Maintaining separate storage pools for data from different NAS vendors is
suggested even though the data format for both is NDMPDUMP.
The following DEFINE STGPOOL and UPDATE STGPOOL parameters are ignored
because storage pool hierarchies, reclamation, and migration are not supported for
these storage pools:
MAXSIZE
NEXTSTGPOOL
LOWMIG
HIGHMIG
MIGDELAY
MIGCONTINUE
RECLAIMSTGPOOL
OVFLOLOCATION
Issue the QUERY NASBACKUP command to display information about the file system
image objects that have been backed up for a specific NAS (network attached
storage) node and file space. By issuing the command, you can see a display of all
backup images generated by NDMP (network data management protocol) and
whether each image has a corresponding table of contents.
Note: The Tivoli Storage Manager server may store a full backup in excess of the
number of versions you specified, if that full backup has dependent differential
backups. Full NAS backups with dependent differential backups behave like other
base files with dependent subfiles. Due to the retention time specified in the RETAIN
EXTRA setting, the full NAS backup will not be expired, and the version will be
displayed in the output of a QUERY NASBACKUP command. See “File expiration and
expiration processing” on page 501 for details.
Use the QUERY TOC command to display files and directories in a backup image
generated by NDMP. By issuing the QUERY TOC server command, you can
display all directories and files within a single specified TOC. The specified TOC
will be accessed in a storage pool each time the QUERY TOC command is issued
because this command does not load TOC information into the Tivoli Storage
Manager database. Then, use the RESTORE NODE command with the FILELIST
parameter to restore individual files.
Some firewall software is configured to automatically close network connections
that are inactive for a specified length of time. If a firewall exists between a Tivoli
Storage Manager server and a NAS device, it is possible that the firewall can close
NDMP control connections unexpectedly and cause the NDMP operation to fail.
The Tivoli Storage Manager server provides a mechanism, TCP keepalive, that you
can enable to prevent long-running, inactive connections from being closed. If TCP
keepalive is enabled, small packets are sent across the network at predefined
intervals to the connection partner.
To update the server option, you can use the SETOPT command.
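For example, a sketch that enables TCP keepalive (the NDMPENABLEKEEPALIVE
server option is assumed to be available at your server level; verify the option
name in the server options reference):
setopt ndmpenablekeepalive yes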
Perform the following steps to configure the Tivoli Storage Manager for NDMP
operations:
1. Set up the tape library and media. See “Setting up tape libraries for NDMP
operations” on page 246, where the following steps are described in more
detail.
a. Attach the SCSI library to the NAS file server or to the Tivoli Storage
Manager server, or attach the ACSLS library or 349X library to the Tivoli
Storage Manager server.
b. Define the library with a library type of SCSI, ACSLS, or 349X.
c. Define a device class for the tape drives.
d. Define a storage pool for NAS backup media.
e. Define a storage pool for storing a table of contents. This step is optional.
See “Configuring policy for NDMP operations” on page 548 for more information.
Complete the following steps to configure Tivoli Storage Manager policy for
NDMP operations:
1. Create a policy domain for NAS (network attached storage) file servers. For
example, to define a policy domain that is named NASDOMAIN, enter the
following command:
define domain nasdomain description='Policy domain for NAS file servers'
2. Create a policy set in that domain. For example, to define a policy set named
STANDARD in the policy domain named NASDOMAIN, issue the following
command:
define policyset nasdomain standard
3. Define a management class, and then assign the management class as the
default for the policy set. For example, to define a management class named
MC1 in the STANDARD policy set, and assign it as the default, issue the
following commands:
define mgmtclass nasdomain standard mc1
assign defmgmtclass nasdomain standard mc1
4. Define a backup copy group in the default management class. The destination
must be the storage pool you created for backup images produced by NDMP
operations. In addition, you can specify the number of backup versions to
retain. For example, to define a backup copy group for the MC1 management
class where up to four versions of each file system are retained in the storage
pool named NASPOOL, issue the following command:
define copygroup nasdomain standard mc1 destination=naspool verexists=4
You can control the management classes that are applied to backup images
produced by NDMP (network data management protocol) operations regardless of
which node initiates the backup. You can do this by creating a set of options to be
used by the client nodes. The option set can include an include.fs.nas statement
to specify the management class for NAS (network attached storage) file server
backups. See “Creating client option sets on the server” on page 488 for more
information.
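For example, the following sketch creates an option set and adds an include.fs.nas statement to it; the option set name NASOPTS, the node name NAS1, the file system /vol/vol1, and the management class MC1 are illustrative and must match your environment:
define cloptset nasopts description='Options for NAS file server backups'
define clientopt nasopts inclexcl "include.fs.nas nas1/vol/vol1 mc1"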
You can also use a backup-archive client to back up a NAS file server by mounting
the NAS file-server file system on the client machine (with either an NFS [network
file system] mount or a CIFS [common internet file system] map) and then backing
up as usual. Table 26 compares the three backup-and-restore methods.
Table 26. Comparing methods for backing up NDMP data (continued)
Cyclic Redundancy Checking (CRC) when data is moved using Tivoli Storage Manager processes:
NDMP: Filer to server: Supported
NDMP: Filer to attached library: Not supported
Backup-archive client to server: Supported
Validation using Tivoli Storage Manager audit commands:
NDMP: Filer to server: Supported
NDMP: Filer to attached library: Not supported
Backup-archive client to server: Supported
Disaster recovery manager:
NDMP: Filer to server: Supported
NDMP: Filer to attached library: Supported
Backup-archive client to server: Supported
Many of the configuration choices you have for libraries and drives are determined
by the hardware features of your libraries. You can set up NDMP operations with
any supported library and drives. However, the more features your library has, the
more flexibility you can exercise in your implementation.
All drives are defined to the Tivoli Storage Manager server. However, the same
drive may be defined for both traditional Tivoli Storage Manager operations and
NDMP operations. Figure 15 on page 245 illustrates one possible configuration: a tape library with three drives, in which the Tivoli Storage Manager server has access to drives 2 and 3 and the NAS file servers have access to drives 1 and 2.
To create the configuration shown in Figure 15, perform the following steps:
1. Define all three drives to Tivoli Storage Manager.
2. Define paths from the Tivoli Storage Manager server to drives 2 and 3. Because
drive 1 is not accessed by the server, no path is defined.
3. Define each NAS file server as a separate data mover.
4. Define paths from each data mover to drive 1 and to drive 2.
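One possible command sequence for these steps is sketched below. The library name NASLIB, drive names DRIVE1 through DRIVE3, server name SERVER1, data mover name NASNODE1, element address, and device special file names are assumptions; repeat the data mover paths for each additional NAS file server:
define drive naslib drive1 element=82
define drive naslib drive2
define drive naslib drive3
define path server1 drive2 srctype=server desttype=drive library=naslib device=mt2.0.0.1
define path server1 drive3 srctype=server desttype=drive library=naslib device=mt3.0.0.1
define path nasnode1 drive1 srctype=datamover desttype=drive library=naslib device=rst0l
define path nasnode1 drive2 srctype=datamover desttype=drive library=naslib device=rst1l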
To use the Tivoli Storage Manager back end data movement operations, the Tivoli
Storage Manager server requires two available drive paths from a single NAS data
mover. The drives can be in different libraries and can have different device types
that are supported by NDMP. You can make copies between two different tape
devices. For example, the source tape drive can be a DLT drive in one library and
the target drive can be an LTO drive in another library.
During Tivoli Storage Manager back end data movements, the Tivoli Storage
Manager server locates a NAS data mover that supports the same data format as
the data to be copied from and that has two available mount points and paths to
the drives. If the Tivoli Storage Manager server cannot locate such a data mover,
the requested data movement operation is not performed. The number of available
mount points and drives depends on the mount limits of the device classes for the
storage pools involved in the back end data movements.
If the back end data movement function supports multiprocessing, each concurrent
Tivoli Storage Manager back end data movement process requires two available
mount points and two available drives. To run two Tivoli Storage Manager
processes concurrently, at least four mount points and four drives must be
available.
See “Defining paths for NDMP operations” on page 253 for more information.
Setting up tape libraries for NDMP operations
You must complete several tasks to set up a tape library for NDMP (network data
management protocol) operations.
Perform the following steps to set up tape libraries for NDMP operations:
1. Connect the library and drives for NDMP operations.
a. Connect the SCSI library. Before setting up a SCSI tape library for NDMP
operations, you should have already determined whether you want to
attach your library robotics control to the Tivoli Storage Manager server or
to the NAS (network attached storage) file server. See “Tape libraries and
drives for NDMP operations” on page 244. Connect the SCSI tape library
robotics to the Tivoli Storage Manager server or to the NAS file server. See
the manufacturer's documentation for instructions.
Library Connected to Tivoli Storage Manager: Make a SCSI or Fibre
Channel connection between the Tivoli Storage Manager server and the
library robotics control port. Then connect the NAS file server with the
drives you want to use for NDMP operations.
Library Connected to NAS File Server: Make a SCSI or Fibre Channel
connection between the NAS file server and the library robotics and
drives.
b. Connect the ACSLS Library. Connect the ACSLS tape library to the Tivoli
Storage Manager server.
c. Connect the 349X Library. Connect the 349X tape library to the Tivoli
Storage Manager server.
2. Define the library for NDMP operations. (The library must contain a single device type; a library that contains a mixture of device types is not supported.)
SCSI Library
define library tsmlib libtype=scsi
ACSLS Library
define library acslib libtype=acsls acsid=1
349X Library
define library tsmlib libtype=349x
3. Define a device class for NDMP operations. Create a device class for NDMP
operations. A device class defined with a device type of NAS is not explicitly
associated with a specific drive type (for example, 3570 or 8 mm). However, we
recommend that you define separate device classes for different drive
types.
In the device class definition:
v Specify NAS as the value for the DEVTYPE parameter.
v Specify 0 as the value for the MOUNTRETENTION parameter.
MOUNTRETENTION=0 is required for NDMP operations.
v Specify a value for the ESTCAPACITY parameter.
For example, to define a device class named NASCLASS for a library named
NASLIB and media whose estimated capacity is 40 GB, issue the following
command:
define devclass nasclass devtype=nas library=naslib mountretention=0
estcapacity=40g
4. Define a storage pool for NDMP media. When NETAPPDUMP,
CELERRADUMP, or NDMPDUMP is designated as the type of storage pool,
managing the storage pools produced by NDMP operations is different from managing storage pools that contain media for traditional Tivoli Storage Manager operations.
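For example, to define a storage pool named NASPOOL for NetApp backup images that uses the NASCLASS device class from the previous step (the pool name and scratch volume limit are illustrative), you might issue:
define stgpool naspool nasclass maxscratch=10 dataformat=netappdump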
Attention: Ensure that you do not accidentally use storage pools that have
been defined for NDMP operations in traditional Tivoli Storage Manager
operations. Be especially careful when assigning the storage pool name as the
value for the DESTINATION parameter of the DEFINE COPYGROUP command.
Unless the destination is a storage pool with the appropriate data format, the
backup will fail.
5. Define a storage pool for a table of contents. If you plan to create a table of
contents, you should also define a disk storage pool in which to store the table
of contents. You must set up policy so that the Tivoli Storage Manager server
stores the table of contents in a different storage pool from the one where the
backup image is stored. The table of contents is treated like any other object in
that storage pool. This step is optional.
For example, to define a storage pool named TOCPOOL for a DISK device
class, issue the following command:
define stgpool tocpool disk
Then, define volumes for the storage pool. For more information see:
“Configuring random access volumes on disk devices” on page 95.
You must determine whether to attach the library robotics to the Tivoli Storage
Manager server or to the NAS file server. Regardless of where you connect library
robotics, tape drives must always be connected to the NAS file server for NDMP
operations.
Distance and your available hardware connections are factors to consider for SCSI
libraries. If the library does not have separate ports for robotics control and drive
access, the library must be attached to the NAS file server because the NAS file
server must have access to the drives. If your SCSI library has separate ports for
robotics control and drive access, you can choose to attach the library robotics to
either the Tivoli Storage Manager server or the NAS file server. If the NAS file
server is at a different location from the Tivoli Storage Manager server, the distance
may mean that you must attach the library to the NAS file server.
Whether you are using a SCSI, ACSLS, or 349X library, you have the option of
dedicating the library to NDMP operations, or of using the library for NDMP
operations as well as most traditional Tivoli Storage Manager operations.
Table 27. Summary of configurations for NDMP operations
Configuration 1 (SCSI library connected to the Tivoli Storage Manager server):
Distance between Tivoli Storage Manager server and library: Limited by SCSI or FC connection
Library sharing: Supported
Drive sharing between Tivoli Storage Manager and NAS file server: Supported
Drive sharing between NAS file servers: Supported
Drive sharing between storage agent and NAS file server: Supported
Configuration 2 (SCSI library connected to the NAS file server):
Distance between Tivoli Storage Manager server and library: No limitation
Library sharing: Not supported
Drive sharing between Tivoli Storage Manager and NAS file server: Supported
Drive sharing between NAS file servers: Supported
Drive sharing between storage agent and NAS file server: Not supported
Configuration 3 (349X library):
Distance between Tivoli Storage Manager server and library: May be limited by 349X connection
Library sharing: Supported
Drive sharing between Tivoli Storage Manager and NAS file server: Supported
Drive sharing between NAS file servers: Supported
Drive sharing between storage agent and NAS file server: Supported
Configuration 4 (ACSLS library):
Distance between Tivoli Storage Manager server and library: May be limited by ACSLS connection
Library sharing: Supported
Drive sharing between Tivoli Storage Manager and NAS file server: Supported
Drive sharing between NAS file servers: Supported
Drive sharing between storage agent and NAS file server: Supported
In this configuration, the Tivoli Storage Manager server controls the SCSI library
through a direct, physical connection to the library robotics control port. For
NDMP (network data management protocol) operations, the drives in the library
are connected directly to the NAS file server, and a path must be defined from the
NAS data mover to each of the drives to be used. The NAS file server transfers
data to the tape drive at the request of the Tivoli Storage Manager server. To also
use the drives for Tivoli Storage Manager operations, connect the Tivoli Storage
Manager server to the tape drives and define paths from the Tivoli Storage
Manager server to the tape drives. This configuration also supports a Tivoli
Storage Manager storage agent having access to the drives for its LAN-free
operations, and the Tivoli Storage Manager server can be a library manager.
Figure 16. Configuration 1: SCSI library connected to Tivoli Storage Manager server
The Tivoli Storage Manager server controls library robotics by sending library
commands across the network to the NAS file server. The NAS file server passes
the commands to the tape library. Any responses generated by the library are sent
to the NAS file server, and passed back across the network to the Tivoli Storage
Manager server. This configuration supports a physically distant Tivoli Storage
Manager server and NAS file server. For example, the Tivoli Storage Manager
server could be in one city, while the NAS file server and tape library are in
another city.
Figure 17. Configuration 2: SCSI library connected to the NAS file server
In this configuration, the 349X tape library is controlled by the Tivoli Storage
Manager server. The Tivoli Storage Manager server controls the library by passing
the request to the 349X library manager through TCP/IP.
This configuration supports a physically distant Tivoli Storage Manager server and
NAS file server. For example, the Tivoli Storage Manager server could be in one
city, while the NAS file server and tape library are in another city.
Figure 18. Configuration 3: 349X library connected to the Tivoli Storage Manager server
The ACSLS (automated cartridge system library software) tape library is controlled
by the Tivoli Storage Manager server. The Tivoli Storage Manager server controls
the library by passing the request to the ACSLS library server through TCP/IP. The
ACSLS library supports library sharing and LAN-free operations.
This configuration supports a physically distant Tivoli Storage Manager server and
NAS file server. For example, the Tivoli Storage Manager server could be in one
city while the NAS file server and tape library are in another city.
To also use the drives for Tivoli Storage Manager operations, connect the Tivoli
Storage Manager server to the tape drives and define paths from the Tivoli Storage
Manager server to the tape drives.
Figure 19. Configuration 4: ACSLS library connected to the Tivoli Storage Manager server
If you are using a client option set, specify the option set when you register the
node.
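For example, a registration sketch that includes an option set; the node name NASNODE1, its password, the NASDOMAIN policy domain, and the NASOPTS option set are assumptions that must match your environment:
register node nasnode1 nasnodepw domain=nasdomain type=nas cloptset=nasopts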
You can verify that this node is registered by issuing the following command:
query node type=nas
Important: You must specify TYPE=NAS so that only NAS nodes are displayed.
To define a data mover for a NAS node named NASNODE1, enter the following
example command:
define datamover nasnode1 type=nas hladdress=netapp2 lladdress=10000 userid=root
password=admin dataformat=netappdump
In this command:
v NASNODE1 is the name of the node that was registered with TYPE=NAS.
v HLADDRESS specifies the network name or IP address that the Tivoli Storage Manager server uses to contact the NAS file server (NETAPP2 in this example).
v LLADDRESS specifies the TCP port on which the NAS file server listens for NDMP sessions (10000 in this example).
v USERID and PASSWORD specify the ID and password that the server uses to authenticate NDMP sessions with the NAS file server.
v DATAFORMAT specifies the format of the backup images that the NAS file server creates; NETAPPDUMP is the data format for NetApp file servers.
Defining paths for drives attached to a NAS file server and to the Tivoli Storage Manager server:
1. Define the drive to the library by using the DEFINE DRIVE command.
Remember: If the drive is attached to the Tivoli Storage Manager server, the element address is automatically detected.
2. Map the NAS drive name to the corresponding drive definition on the Tivoli
Storage Manager server:
v On the Tivoli Storage Manager server, issue the QUERY DRIVE FORMAT=DETAILED
command to obtain the worldwide name (WWN) and serial number for the
drive that is to be connected to the NAS file server.
v On the NAS device, obtain the tape device name, serial number, and WWN
for the drive.
If the WWN or serial number matches, the drive on the NAS file server is the same physical drive as the drive that is defined to the Tivoli Storage Manager server.
3. Using the drive name, define a path to the drive from the NAS file server and
a path to the drive from the Tivoli Storage Manager server.
v For example, to define a path between a tape drive with a device name of
rst01 and a NetApp file server, issue the following command:
define path nasnode1 nasdrive1 srctype=datamover desttype=drive
library=naslib device=rst01
v To define a path between the tape drive and the Tivoli Storage Manager
server, issue the following command:
define path server1 nasdrive1 srctype=server desttype=drive
library=naslib device=mt3.0.0.2
Related information:
Obtaining device names for devices attached to NAS file servers
Restriction: If the SCSI drive is connected only to a NAS file server, the
element address is not automatically detected, and you must supply it. If a
library has more than one drive, you must specify an element address for each
drive.
To obtain a SCSI element address, go to one of the following Tivoli
device-support websites:
v AIX, HP-UX, Solaris, and Windows: http://www.ibm.com/software/
sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
v Linux: http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_Linux.html
Element number assignment and device WWN assignments are also available
from tape-library device manufacturers.
2. Create drive definitions by specifying the element addresses identified in the
preceding step. Specify the element address in the ELEMENT parameter of the
DEFINE DRIVE command. For example, to define a drive NASDRIVE1 with the
element address 82 for the library NASLIB, issue the following command:
define drive naslib nasdrive1 element=82
Attention: For a drive connected only to the NAS file server, do not specify
ASNEEDED as the value for the CLEANFREQUENCY parameter of the DEFINE DRIVE
command.
3. Obtain the device name, serial number, and WWN for the drive on the NAS
device.
4. Using the information obtained in steps 1 and 3, map the NAS device name to
the element address in the drive definition in the Tivoli Storage Manager
server.
5. Define a path between the tape drive and the NAS file server. For example, to
define a path between a NetApp file server and a tape drive with a device
name of rst0l, issue the following command:
define path nasnode1 nasdrive1 srctype=datamover desttype=drive
library=naslib device=rst0l
For paths from a network-attached storage (NAS) data mover, the value of the
DEVICE parameter in the DEFINE PATH command is the name by which the NAS file
server knows a library or drive.
You can obtain these device names, also known as special file names, by querying
the NAS file server. For information about how to obtain names for devices that
are connected to a NAS file server, consult the product information for the file
server.
v To obtain the device names for tape libraries on a NetApp Release ONTAP 10.0
GX, or later, file server, connect to the file server using telnet and issue the
SYSTEM HARDWARE TAPE LIBRARY SHOW command. To obtain the device names for
tape drives on a NetApp Release ONTAP 10.0 GX, or later, file server, connect to
the file server using telnet and issue the SYSTEM HARDWARE TAPE DRIVE SHOW
command. For details about these commands, see the NetApp ONTAP GX file
server product documentation.
v For releases earlier than NetApp Release ONTAP 10.0 GX, continue to use the
SYSCONFIG command. For example, to display the device names for tape libraries,
connect to the file server using telnet and issue the following command:
sysconfig -m
To display the device names for tape drives, issue the following command:
sysconfig -t
v For fibre-channel-attached drives and the Celerra data mover, complete the
following steps:
1. Log on to the EMC Celerra control workstation using an administrative ID.
Issue the following command:
server_devconfig server_1 -l -s -n
Tip: The -l option for this command lists only the device information that
was saved in the database of the data mover. The command and option do
not display changes to the device configuration that occurred after the last
database refresh on the data mover. For details about how to obtain the most
recent device configuration for your data mover, see the EMC Celerra
documentation.
The output for the server_devconfig command includes the device names
for the devices attached to the data mover. The device names are listed in the
addr column, for example:
server_1:
Scsi Device Table
name addr type info
tape1 c64t0l0 tape IBM ULT3580-TD2 53Y2
ttape1 c96t0l0 tape IBM ULT3580-TD2 53Y2
2. Map the Celerra device name to the device worldwide name (WWN):
a. To list the WWN, log on to the EMC Celerra control workstation and
issue the following command. Remember to enter a period ( . ) as the
first character in this command.
.server_config server_# -v "fcp bind show"
The output for this command includes the WWN, for example:
Chain 0064: WWN 500507630f418e29 HBA 2 N_PORT Bound
Chain 0096: WWN 500507630f418e18 HBA 2 N_PORT Bound
Labeling media and checking media into the library are the same tasks for NDMP operations as for other libraries. For more information, see:
“Labeling media” on page 159
The BACKUP NODE and RESTORE NODE commands can be used only for nodes of
TYPE=NAS. See “Backing up and restoring NAS file servers using NDMP” on
page 258 for information about the commands.
The schedule is active, and is set to run at 8:00 p.m. every day. See Chapter 21,
“Automating server operations,” on page 659 for more information.
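For example, an administrative schedule similar to the following sketch produces that behavior; the schedule name, node name, and file system are assumptions:
define schedule nassched type=administrative cmd="backup node nasnode1 /vol/vol1 mode=full"
active=yes starttime=20:00 period=1 perunits=days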
To create a virtual file space name for the directory path on the NAS device, issue
the DEFINE VIRTUALFSMAPPING command:
define virtualfsmapping nas1 /mikesdir /vol/vol1 /mikes
This command defines a virtual file space name of /MIKESDIR on the server which
represents the directory path of /VOL/VOL1/MIKES on the NAS file server
represented by node NAS1. See “Directory-level backup and restore for NDMP
operations” on page 264 for more information.
Note: When using the NDMP tape-to-tape copy function, your configuration setup
could affect the performance of the Tivoli Storage Manager back end data
movement.
If you have one NAS device with paths to four drives in a library, use the MOVE DATA command after you complete your configuration setup. The following command moves data on volume VOL1 to any available volumes in the same storage pool as VOL1:
move data vol1
Tape-to-tape copy to move data
To move data from an old tape technology to a new tape technology by using an NDMP (network data management protocol) tape-to-tape copy operation, perform the following steps in addition to the regular steps in your configuration setup.
Note: When using the NDMP tape-to-tape copy function, your configuration setup
could affect the performance of the Tivoli Storage Manager back end data
movement.
1. Define one drive in the library, lib1, that has old tape technology:
define drive lib1 drv1 element=1035
2. Define one drive in the library, lib2, that has new tape technology:
define drive lib2 drv1 element=1036
3. Move data on volume vol1 in the primary storage pool to the volumes in
another primary storage pool, nasprimpool2:
move data vol1 stgpool=nasprimpool2
For more information on the command, see the Tivoli Storage Manager
Backup-Archive Clients Installation and User's Guide.
Tip: Whenever you use the client interface, you are asked to authenticate yourself
as a Tivoli Storage Manager administrator before the operation can begin. The
administrator ID must have at least client owner authority for the NAS node.
You can perform the same backup operation with a server interface. For example,
from the administrative command-line client, back up the file system named
/vol/vol1 on a NAS file server named NAS1, by issuing the following command:
backup node nas1 /vol/vol1
Note: The BACKUP NAS and BACKUP NODE commands do not include snapshots. To
back up snapshots see “Backing up and restoring with snapshots” on page 264.
You can restore the image using either interface. Backups are identical whether
they are backed up using a client interface or a server interface. For example,
suppose you want to restore the image backed up in the previous examples. For
this example the file system named /vol/vol1 is being restored to /vol/vol2.
Restore the file system with the following command, issued from a Windows
backup-archive client interface:
dsmc restore nas -nasnodename=nas1 {/vol/vol1} {/vol/vol2}
You can choose to restore the file system, using a server interface. For example, to
restore the file system name /vol/vol1 to file system /vol/vol2, for a NAS file
server named NAS1, enter the following command:
restore node nas1 /vol/vol1 /vol/vol2
When you store NAS backup data in the Tivoli Storage Manager server's storage
hierarchy, you can apply Tivoli Storage Manager back end data management
functions. Migration, reclamation, and disaster recovery are among the supported
features when using the NDMP file server to Tivoli Storage Manager server option.
In order to back up a NAS device to a Tivoli Storage Manager native storage pool,
set the destination storage pool in the copy group to point to the desired native
storage pool. The destination storage pool provides the information about the
library and drives used for backup and restore. You should ensure that there is
sufficient space in your target storage pool to contain the NAS data, which can be
backed up to sequential, disk, or file type devices. Defining a separate device class
is not necessary.
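For example, to point the backup copy group at a native storage pool and make the change take effect, a sketch that reuses the policy objects from earlier in this chapter (the BACKUPPOOL pool name is an assumption) might be:
update copygroup nasdomain standard mc1 type=backup destination=backuppool
activate policyset nasdomain standard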
Firewall considerations are more stringent than they are for filer-to-attached-library
because communications can be initiated by either the Tivoli Storage Manager
server or the NAS file server. NDMP tape servers run as threads within the Tivoli
Storage Manager server, and the tape server accepts connections on port 10001.
This port number can be changed through the following option in the Tivoli
Storage Manager server options file: NDMPPORTRANGE port-number-low,
port-number-high.
Before using the NDMPPREFDATAINTERFACE server option, verify that your NAS device supports NDMP operations
that use a different network interface for NDMP control and NDMP data
connections. NDMP control connections are used by Tivoli Storage Manager to
authenticate with an NDMP server and monitor an NDMP operation while NDMP
data connections are used to transmit and receive backup data during NDMP
operations. You must still configure your NAS device to route NDMP backup and
restore data to the appropriate network interface.
This option does not affect NDMP control connections, because they use the system's default network interface. You can update this server
option without stopping and restarting the server by using the SETOPT command
(Set a server option for dynamic update).
See “Backing up NDMP file server to Tivoli Storage Manager server backups” for
steps on how to perform NDMP filer-to-server backups.
The destination for NAS data is determined by the destination in the copy
group. The storage size estimate for NAS differential backups uses the
occupancy of the file space, the same value that is used for a full backup. You
can use this size estimate as one of the considerations in choosing a storage
pool. One of the attributes of a storage pool is the MAXSIZE value, which
indicates that data be sent to the NEXT storage pool if the MAXSIZE value is
exceeded by the estimated size. Because NAS differential backups to Tivoli
Storage Manager native storage pools use the base file space occupancy size as
a storage size estimate, differential backups end up in the same storage pool as
the full backup. Depending on collocation settings, differential backups may
end up on the same media as the full backup.
4. Set up a node and data mover for the NAS device. The data format signifies
that the backup images created by this NAS device are a dump type of backup
image in a NetApp specific format.
register node nas1 nas1 type=nas domain=standard
define datamover nas1 type=nas hla=nas1 user=root
password=***** dataformat=netappdump
If you specify at the time of backup that file-level information is to be collected, you can later display the table of
contents of the backup image. Through the backup-archive Web client, you can
select individual files or directories to restore directly from the backup images
generated.
You also have the option to do a backup via NDMP without collecting file-level
restore information.
To allow creation of a table of contents for a backup via NDMP, you must define
the TOCDESTINATION attribute in the backup copy group for the management
class to which this backup image is bound. You cannot specify a copy storage pool
or an active-data pool as the destination. The storage pool you specify for the TOC
destination must have a data format of either NATIVE or NONBLOCK, so it
cannot be the tape storage pool used for the backup image.
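For example, a sketch that adds a TOC destination to the copy group defined earlier in this chapter, reusing the TOCPOOL disk storage pool from the tape library setup steps (verify the names in your environment):
update copygroup nasdomain standard mc1 tocdestination=tocpool
activate policyset nasdomain standard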
If you choose to collect file-level information, specify the TOC parameter in the
BACKUP NODE server command. Or, if you initiate your backup using the client, you
can specify the TOC option in the client options file, client option set, or client
command line. You can specify NO, PREFERRED, or YES. When you specify
PREFERRED or YES, the Tivoli Storage Manager server stores file information for a
single NDMP-controlled backup in a table of contents (TOC). The table of contents
is placed into a storage pool. After that, the Tivoli Storage Manager server can
access the table of contents so that file and directory information can be queried by
the server or client. Use of the TOC parameter allows a table of contents to be
generated for some images and not others, without requiring different
management classes for the images.
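For example, a server-initiated backup that requests a table of contents might look like the following sketch (the node and file system names are assumptions):
backup node nasnode1 /vol/vol1 mode=full toc=yes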
See the Administrator's Reference for more information about the BACKUP NODE
command.
To avoid mount delays and ensure sufficient space, use random access storage
pools (DISK device class) as the destination for the table of contents. For sequential
access storage pools, no labeling or other preparation of volumes is necessary if
scratch volumes are allowed.
See “Managing table of contents” on page 239 for more information.
You should install Data ONTAP 6.4.1 or later, if it is available, on your NetApp
NAS file server to obtain full support of international characters in the
names of files and directories.
If your level of Data ONTAP is earlier than 6.4.1, you must have one of the
following two configurations in order to collect and restore file-level information.
Results with configurations other than these two are unpredictable. The Tivoli
Storage Manager server will print a warning message (ANR4946W) during backup
operations. The message indicates that the character encoding of NDMP file history
messages is unknown, and UTF-8 will be assumed in order to build a table of
contents. It is safe to ignore this message only for the following two configurations.
v Your data has directory and file names that contain only English (7-bit ASCII)
characters.
v Your data has directory and file names that contain non-English characters and
the volume language is set to the UTF-8 version of the proper locale (for
example, de.UTF-8 for German).
If your level of Data ONTAP is 6.4.1 or later, you must have one of the following
three configurations in order to collect and restore file-level information. Results
with configurations other than these three are unpredictable.
Tip: Using the UTF-8 version of the volume language setting is more efficient in
terms of Tivoli Storage Manager server processing and table of contents storage
space.
v You only use CIFS to create and access your data.
As with a NAS (network attached storage) file system backup, a table of contents
(TOC) is created during a directory-level backup and you are able to browse the
files in the image, using the Web client. The default is that the files are restored to
the original location. During a file-level restore from a directory-level backup,
however, you can either select a different file system or another virtual file space
name as a destination.
For a TOC of a directory level backup image, the path names for all files are
relative to the directory specified in the virtual file space definition, not the root of
the file system.
The virtual file space name cannot be identical to any file system on the NAS
node. If a file system is created on the NAS device with the same name as a virtual
file space, a name conflict will occur on the Tivoli Storage Manager server when
the new file space is backed up. See the Administrator's Reference for more
information about virtual file space mapping commands.
Note: Virtual file space mappings are only supported for NAS nodes.
Directory-level backup and restore for NDMP operations
The DEFINE VIRTUALFSMAPPING command maps a directory path of a NAS (network
attached storage) file server to a virtual file space name on the Tivoli Storage
Manager server. After a mapping is defined, you can conduct NAS operations such
as BACKUP NODE and RESTORE NODE, using the virtual file space names as if they
were actual NAS file spaces.
To start a backup of the directory, issue the BACKUP NODE command specifying the
virtual file space name instead of a file space name. To restore the directory subtree
to the original location, run the RESTORE NODE command and specify the virtual file
space name.
Virtual file space definitions can also be specified as the destination in a RESTORE
NODE command. This allows you to restore backup images (either file system or
directory) to a directory on any file system of the NAS device.
You can use the Web client to select files for restore from a directory-level backup
image because the Tivoli Storage Manager client treats the virtual file space names
as NAS file spaces.
For example, to back up a snapshot that is created for a NetApp file system, perform the following steps:
1. On the console for the NAS device, issue the command to create the snapshot.
SNAP CREATE is the command for a NetApp device.
snap create vol2 february17
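To complete the backup, you can map the snapshot directory to a virtual file space name and then back up that virtual file space. The following sketch assumes a node named NAS1, a virtual file space name of /FEB17SNAP, and the standard NetApp snapshot directory location; adjust the names for your environment:
define virtualfsmapping nas1 /feb17snap /vol/vol2 /.snapshot/february17
backup node nas1 /feb17snap mode=full toc=yes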
Use the NDMP SnapMirror to Tape feature as a disaster recovery option for
copying large NetApp file systems to auxiliary storage. For most NetApp file
systems, use the standard NDMP full or differential backup method.
Using a parameter option on the BACKUP NODE and RESTORE NODE commands, you
can back up and restore file systems by using SnapMirror to Tape. There are
several limitations and restrictions on how SnapMirror images can be used.
Consider the following guidelines before you use it as a backup method:
v You cannot initiate a SnapMirror to Tape backup or restore operation from the
Tivoli Storage Manager Operations Center, Administration Center, web client, or
command-line client.
v You cannot perform differential backups of SnapMirror images.
v You cannot perform a directory-level backup using SnapMirror to Tape; therefore,
Tivoli Storage Manager does not permit a SnapMirror to Tape backup operation
on a server virtual file space.
v You cannot perform an NDMP file-level restore operation from SnapMirror to
Tape images. Therefore, a table of contents is never created during SnapMirror
to Tape image backups.
v At the start of a SnapMirror to Tape copy operation, the file server generates a
snapshot of the file system. NetApp provides an NDMP environment variable to
control whether this snapshot should be removed at the end of the SnapMirror
to Tape operation. Tivoli Storage Manager always sets this variable to remove
the snapshot.
v After a SnapMirror to Tape image is retrieved and copied to a NetApp file
system, the target file system is left configured as a SnapMirror partner.
NetApp provides an NDMP environment variable to control whether this
SnapMirror relationship should be broken. Tivoli Storage Manager always
"breaks" the SnapMirror relationship during the retrieval. After the restore
operation is complete, the target file system is in the same state as that of the
original file system at the point-in-time of backup.
See the BACKUP NODE and RESTORE NODE commands in the Administrator's Reference
for more information about the SnapMirror to Tape feature.
NDMP backup operations using Celerra file server integrated
checkpoints
When the Tivoli Storage Manager server initiates an NDMP backup operation on a
Celerra data mover, the backup of a large file system might take several hours to
complete. Without Celerra integrated checkpoints enabled, any changes occurring
on the file system are written to the backup image.
As a result, the backup image includes changes made to the file system during the
entire backup operation and is not a true point-in-time image of the file system.
If you are performing NDMP backups of Celerra file servers, you should upgrade
the operating system of your data mover to Celerra file server version T5.5.25.1 or
later. This version of the operating system allows enablement of integrated
checkpoints for all NDMP backup operations from the Celerra Control
Workstation. Enabling this feature ensures that NDMP backups represent true
point-in-time images of the file system that is being backed up.
If your version of the Celerra file server operating system is earlier than version
T5.5.25.1 and if you use NDMP to back up Celerra data movers, you should
manually generate a snapshot of the file system using Celerra's command line
checkpoint feature and then initiate an NDMP backup of the checkpoint file system
rather than the original file system.
Refer to the Celerra file server documentation for instructions on creating and
scheduling checkpoints from the Celerra control workstation.
Only NDMP backup data in NATIVE data format storage pools can be replicated.
You cannot replicate NDMP images that are stored in storage pools that have the
following data formats:
v NETAPPDUMP
v CELERRADUMP
v NDMPDUMP
When you configure devices so that the server can use them to store client data,
you create storage pools and storage volumes. The procedures for configuring
devices use a set of defaults that provides storage pools and volumes. The
defaults can work well. However, you might have specific requirements not met by
the defaults. There are three common reasons to change the defaults:
v Optimize and control storage device usage by arranging the storage hierarchy
and tuning migration through the hierarchy (next storage pool, migration
thresholds).
v Reuse tape volumes through reclamation. Reuse is also related to policy and
expiration.
v Keep a client's files on a minimal number of volumes (collocation).
You can also make other adjustments to tune the server for your systems. See the
following sections to learn more. For some quick tips, see “Task tips for storage
pools” on page 279.
Concepts
“Storage pools” on page 268
“Storage pool volumes” on page 280
“Access modes for storage pool volumes” on page 286
“Storage pool hierarchies” on page 288
“Migrating files in a storage pool hierarchy” on page 299
“Caching in disk storage pools” on page 310
“Writing data simultaneously to primary, copy, and active-data pools” on page 355
“Keeping client files together using collocation” on page 381
“Reclaiming space in sequential-access storage pools” on page 390
“Estimating space needs for storage pools” on page 401
Tasks
“Defining storage pools” on page 273
“Preparing volumes for random-access storage pools” on page 282
“Preparing volumes for sequential-access storage pools” on page 283
“Defining storage pool volumes” on page 284
“Updating storage pool volumes” on page 285
“Setting up a storage pool hierarchy” on page 288
“Monitoring storage-pool and volume usage” on page 403
“Monitoring the use of storage pool volumes” on page 406
“Moving data from one volume to another volume” on page 421
“Moving data belonging to a client node” on page 426
The examples in these topics show how to perform tasks by using the Tivoli Storage
Manager command-line interface. For information about the commands, see the
Administrator's Reference, or issue the HELP command from the command line of a
Tivoli Storage Manager administrative client.
Storage pools
A storage pool is a collection of storage volumes. A storage volume is the basic
unit of storage, such as allocated space on a disk or a single tape cartridge. The
server uses the storage volumes to store backed-up, archived, or space-managed
files.
The server provides three types of storage pools that serve different purposes:
primary storage pools, copy storage pools, and active-data pools. You can arrange
primary storage pools in a storage hierarchy. The group of storage pools that you set
up for the Tivoli Storage Manager server to use is called server storage.
To prevent a single point of failure, create separate storage pools for backed-up
and space-managed files. This also means that you should not share a storage pool between the two storage pool hierarchies. Consider setting up a separate, random-access disk storage
pool to give clients fast access to their space-managed files.
A primary storage pool can use random-access storage (DISK device class) or
sequential-access storage (for example, tape or FILE device classes).
For example, when a client attempts to retrieve a file and the server detects an
error in the file copy in the primary storage pool, the server marks the file as
damaged. At the next attempt to access the file, the server can obtain the file from
a copy storage pool.
You can move copy storage pool volumes off-site and still have the server track the
volumes. Moving copy storage pool volumes off-site provides a means of
recovering from an on-site disaster.
A copy storage pool can use only sequential-access storage (for example, a tape
device class or FILE device class).
Remember:
v You can back up data from a primary storage pool defined with the NATIVE,
NONBLOCK, or any of the NDMP formats (NETAPPDUMP, CELERRADUMP,
or NDMPDUMP). The target copy storage pool must have the same data format
as the primary storage pool.
v You cannot back up data from a primary storage pool defined with a CENTERA
device class.
Active-data pools
An active-data pool contains only active versions of client backup data. Active-data
pools are useful for fast client restores, reducing the number of on-site or off-site
storage volumes, or reducing bandwidth when copying or restoring files that are
vaulted electronically in a remote location.
Data migrated by hierarchical storage management (HSM) clients and archive data
are not permitted in active-data pools. As updated versions of backup data
continue to be stored in active-data pools, older versions are deactivated and
removed during reclamation processing.
Restoring a primary storage pool from an active-data pool might cause some or all
inactive files to be deleted from the database if the server determines that an
inactive file needs to be replaced but cannot find it in the active-data pool. As a
best practice and to protect your inactive data, therefore, you should create a
minimum of two storage pools: one active-data pool, which contains only active
data, and one copy storage pool, which contains both active and inactive data. You
can use the active-data pool volumes to restore critical client node data, and
afterward you can restore the primary storage pools from the copy storage pool.
Active-data pools can use any type of sequential-access storage (for example, a
tape device class or FILE device class). However, the precise benefits of an
active-data pool depend on the specific device type associated with the pool. For
example, active-data pools associated with a FILE device class are ideal for fast
client restores because FILE volumes do not have to be physically mounted and
because the server does not have to position past inactive files that do not have to
be restored. In addition, client sessions restoring from FILE volumes in an
active-data pool can access the volumes concurrently, which also improves restore
performance.
Active-data pools that use removable media, such as tape or optical, offer similar
benefits. Although tapes need to be mounted, the server does not have to position
past inactive files. However, the primary benefit of using removable media in
active-data pools is the reduction of the number of volumes used for on-site and
off-site storage. If you vault data electronically to a remote location, an active-data
pool associated with a SERVER device class lets you save bandwidth by copying
and restoring only active data.
Remember:
v The server will not attempt to retrieve client files from an active-data pool
during a point-in-time restore. Point-in-time restores require both active and
inactive file versions. Active-data pools contain only active file versions. For
optimal efficiency during point-in-time restores and to avoid switching between
active-data pools and primary or copy storage pools, the server retrieves both
active and inactive versions from the same storage pool and volumes.
v You cannot copy active data to an active-data pool from a primary storage pool
defined with the NETAPPDUMP, the CELERRADUMP, or the NDMPDUMP
data format.
v You cannot copy active data from a primary storage pool defined with a
CENTERA device class.
Restriction: You cannot use the BACKUP STGPOOL command for active-data pools.
During client sessions and processes that require active file versions, the Tivoli
Storage Manager server searches certain types of storage pools, if they exist, in the following order:
1. An active-data pool associated with a FILE device class
2. A random-access disk (DISK) storage pool
3. A primary or copy storage pool associated with a FILE device class
4. A primary, copy, or active-data pool associated with on-site or off-site
removable media (tape or optical)
Even though the list implies a selection order, the server might select a volume
with an active file version from a storage pool lower in the order if a volume
higher in the order cannot be accessed because of the requirements of the session
or process, volume availability, or contention for resources such as mount points,
drives, and data.
Figure 20 on page 272 shows one way to set up server storage. In this example, the
storage that is defined for the server includes:
v Three disk storage pools, which are primary storage pools: ARCHIVE, BACKUP,
and HSM
v One primary storage pool that consists of tape cartridges
v One copy storage pool that consists of tape cartridges
v One active-data pool that consists of FILE volumes for fast client restore
Policies that are defined in management classes direct the server to store files from
clients in the ARCHIVE, BACKUP, or HSM disk storage pools. An additional
policy specifies the following:
v A select group of client nodes that requires fast restore of active backup data
v The active-data pool as the destination for the active-data belonging to these
nodes
v The ARCHIVE, BACKUP, or HSM disk storage pools as destinations for archive,
backup (active and inactive versions), and space-managed data
For each of the three disk storage pools, the tape primary storage pool is next in
the hierarchy. As the disk storage pools fill, the server migrates files to tape to
make room for new files. Large files can go directly to tape. For more information
about setting up a storage hierarchy, see “Storage pool hierarchies” on page 288.
For more information about backing up primary storage pools, see “Backing up
primary storage pools” on page 954.
Figure 20. Example of server storage
Tip: When you define or update storage pools that use LTO Ultrium media,
special considerations might apply.
When you define a primary storage pool, be prepared to specify some or all of the
information that is shown in Table 28. Most of the information is optional. Some
information applies only to random-access storage pools or only to
sequential-access storage pools. Required parameters are marked.
Table 28. Information for defining a storage pool
Storage pool name (Required)
Explanation: The name of the storage pool.
Type of storage pool: random, sequential
Device class (Required)
Explanation: The name of the device class assigned for the storage pool.
Type of storage pool: random, sequential
Pool type
Explanation: The type of storage pool (primary or copy). The default is to define a primary storage pool. A storage pool's type cannot be changed after it has been defined.
Type of storage pool: random, sequential
Maximum number of scratch volumes (see note 2) (Required for sequential access)
Explanation: When you specify a value greater than zero, the server dynamically acquires scratch volumes when needed, up to this maximum number. For automated libraries, set this value equal to the physical capacity of the library. For details, see “Adding scratch volumes to automated library devices” on page 169.
Type of storage pool: sequential
Maximum file size
Explanation: Do not set a maximum file size for the last storage pool in the hierarchy unless you want to exclude very large files from being stored in server storage.
Cyclic Redundancy Check (CRC) (see note 1)
Explanation: Specifies whether the server uses CRC to validate storage pool data during audit volume processing. For additional information see “Data validation during audit volume processing” on page 961.
Type of storage pool: random, sequential
Note 1: This information is not available for sequential-access storage pools that use the CELERRADUMP, NDMPDUMP, or NETAPPDUMP data formats.
Note 2: This information is not available or is ignored for Centera sequential-access storage pools.
You can define the storage pools in a storage pool hierarchy from the top down or
from the bottom up. Defining the hierarchy from the bottom up requires fewer
steps. To define the hierarchy from the bottom up, perform the following steps:
1. Define the storage pool named BACKTAPE with the following command:
define stgpool backtape tape
description='tape storage pool for engineering backups'
maxsize=nolimit collocate=node maxscratch=100
2. Define the storage pool named ENGBACK1 with the following command:
define stgpool engback1 disk
description='disk storage pool for engineering backups'
maxsize=5m nextstgpool=backtape highmig=85 lowmig=40
Restrictions:
v You cannot establish a chain of storage pools that lead to an endless loop. For
example, you cannot define StorageB as the next storage pool for StorageA, and
then define StorageA as the next storage pool for StorageB.
v The storage pool hierarchy includes only primary storage pools, not copy
storage pools or active-data pools.
v If a storage pool uses the data format NETAPPDUMP, CELERRADUMP, or
NDMPDUMP, the server will not perform any of the following functions:
– Migration
– Reclamation
– Volume audits
Restrictions:
v You cannot use this command to change the data format for a storage pool.
v For storage pools that have the NETAPPDUMP, the CELERRADUMP, or the
NDMPDUMP data format, you can modify the following parameters only:
– ACCESS
– COLLOCATE
– DESCRIPTION
– MAXSCRATCH
– REUSEDELAY
Table 29 gives tips on how to accomplish some tasks that are related to storage
pools.
Table 29. Task tips for storage pools
Goal: Keep the data for a group of client nodes, a single client node, or a client file space on as few volumes as possible.
Do this: Enable collocation for the storage pool.
For more information: “Keeping client files together using collocation” on page 381
Goal: Reduce the number of volume mounts needed to back up multiple clients.
Do this: Disable collocation for the storage pool.
For more information: “Keeping client files together using collocation” on page 381
Goal: Write data simultaneously to a primary storage pool and to copy storage pools and active-data pools.
Do this: Provide a list of copy storage pools and active-data pools when defining the primary storage pool.
For more information: “Writing data simultaneously to primary, copy, and active-data pools” on page 355
Goal: Specify how the server reuses tapes.
Do this: Set a reclamation threshold for the storage pool. Optional: Identify a reclamation storage pool.
For more information: “Reclaiming space in sequential-access storage pools” on page 390
Goal: Move data from disk to tape automatically as needed.
Do this: Set a migration threshold for the storage pool.
For more information: “Migrating disk storage pools” on page 300
You can define volumes in a sequential-access storage pool or you can specify that
the server dynamically acquire scratch volumes. You can also use a combination of
defined and scratch volumes. What you choose depends on the amount of control
you want over individual volumes.
Defined volumes
Use defined volumes when you want to control precisely which volumes are used
in the storage pool. Defined volumes can also be useful when you want to
establish a naming scheme for volumes.
You can also use defined volumes to reduce potential disk fragmentation and
maintenance overhead for storage pools associated with random-access and
sequential-access disk.
Scratch volumes
Use scratch volumes to enable the server to define a volume when needed and
delete the volume when it becomes empty. Using scratch volumes frees you from
the task of explicitly defining all of the volumes in a storage pool.
The server tracks whether a volume being used was originally a scratch volume.
Scratch volumes that the server acquired for a primary storage pool are deleted
from the server database when they become empty. The volumes are then available
for reuse by the server or other applications.
Scratch volumes in a copy storage pool or an active-data storage pool are handled
in the same way as scratch volumes in a primary storage pool, except for volumes
with the access value of off-site. If an off-site volume becomes empty, the server
does not immediately return the volume to the scratch pool. The delay prevents
the empty volumes from being deleted from the database, making it easier to
determine which volumes should be returned to the on-site location. The
administrator can query the server for empty off-site copy storage pool volumes or
active-data pool volumes, and return them to the on-site location. The volume is
returned to the scratch pool only when the access value is changed to
READWRITE, READONLY, or UNAVAILABLE.
For scratch volumes that were acquired in a FILE device class, the space that the
volumes occupied is freed by the server and returned to the file system.
If you do not specify a full path name for the volume name, the command uses the
path associated with the registry key of this server instance.
You can also define volumes in a single step using the DEFINE VOLUME
command. For example, to define ten 5000 MB volumes in a random-access
storage pool that uses a DISK device class, you would enter the following
command:
define volume diskpool diskvol numberofvolumes=10 formatsize=5000
Tips:
1. For important disk-related information, see “Requirements for disk systems”
on page 89.
2. The file system where storage pool volumes are allocated can have an effect on
performance and reliability. For better performance in backing up and restoring
large numbers of small files, allocate storage pool volumes on a FAT file
system. To take advantage of the ability of the operating system to recover from
problems that can occur during I/O to a disk, allocate storage pool volumes on
NTFS.
You can also use a space trigger to automatically create volumes assigned to a
particular storage pool.
For sequential-access storage pools with a FILE or SERVER device type, no labeling
or other preparation of volumes is necessary. For sequential-access storage pools
associated with device types other than a FILE or SERVER, you must prepare
volumes for use.
When the server accesses a sequential-access volume, it checks the volume name in
the header to ensure that the correct volume is being accessed. To prepare a
volume:
1. Label the volume. Table 30 on page 281 shows the types of volumes that
require labels. You must label those types of volumes before the server can use
them.
For details, see:
“Labeling media” on page 159
Tip: When you use the LABEL LIBVOLUME command with drives in an
automated library, you can label and check in the volumes with one command.
2. For storage pools in automated libraries, use the CHECKIN LIBVOLUME
command to check the volume into the library. For details, see:
“Checking media into automated library devices” on page 161.
When you define a storage pool volume, you inform the server that the volume is
available for storing backup, archive, or space-managed data.
For a sequential-access storage pool, the server can use dynamically acquired
scratch volumes, volumes that you define, or a combination.
To define a volume named VOL1 in the ENGBACK3 tape storage pool, enter:
define volume engback3 vol1
Each volume used by a server for any purpose must have a unique name. This
requirement applies to all volumes, whether the volumes are used for storage
pools, or used for operations such as database backup or export. The requirement
also applies to volumes that reside in different libraries but that are used by the
same server.
For storage pools associated with FILE device classes, you can define private
volumes in a single step using the DEFINE VOLUME command. For example, to
define ten, 5000 MB volumes, in a sequential-access storage pool that uses a FILE
device class, you would enter the following command.
define volume filepool filevol numberofvolumes=10 formatsize=5000
For storage pools associated with the FILE device class, you can also use the
DEFINE SPACETRIGGER and UPDATE SPACETRIGGER commands to have the
server create volumes and assign them to a specified storage pool when
predetermined space-utilization thresholds have been exceeded. One volume must
be predefined.
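For example, a space trigger sketch that expands the FILEPOOL storage pool when it reaches 85% utilization; the percentages and the volume location are illustrative:
define spacetrigger stg fullpct=85 spaceexpansion=20 expansionprefix=c:\tsmdata\filevols\ stgpool=filepool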
Remember: You cannot define volumes for storage pools defined with a Centera
device class.
To allow the storage pool to acquire volumes as needed, set the MAXSCRATCH
parameter to a value greater than zero. The server automatically defines the
volumes as they are acquired. The server also automatically deletes scratch
volumes from the storage pool when the server no longer needs them.
Before the server can use a scratch volume with a device type other than FILE or
SERVER, the volume must have a label.
Restriction: Tivoli Storage Manager only accepts tapes labeled with IBM standard
labels. IBM standard labels are similar to ANSI Standard X3.27 labels except that
the IBM standard labels are written in EBCDIC (extended binary coded decimal
interchange code). For a list of IBM media sales contacts who can provide
compatible tapes, go to the IBM Web site. If you are using non-IBM storage devices
and media, consult your tape-cartridge distributor.
For details about labeling, see “Preparing volumes for sequential-access storage
pools” on page 283.
To change the properties of a volume that has been defined to a storage pool, issue
the UPDATE VOLUME command. For example, suppose you accidentally damage
a volume named VOL1. To change the access mode to unavailable so that the
server does not try to write or read data from the volume, issue the following
command:
update volume vol1 access=unavailable
For details about access modes, see “Access modes for storage pool volumes” on
page 286.
Table 31 on page 286 lists volume properties that you can update.
For example, if the server cannot write to a volume having read/write access
mode, the server automatically changes the access mode to read-only.
You can set up your devices so that the server automatically moves data from one
device to another, or one media type to another. The selection can be based on
characteristics such as file size or storage capacity. A typical implementation might
have a disk storage pool with a subordinate tape storage pool. When a client backs
up a file, the server might initially store the file on disk according to the policy for
that file. Later, the server might move the file to tape when the disk becomes full.
This action by the server is called migration. You can also place a size limit on files
that are stored on disk, so that large files are stored initially on tape instead of on
disk.
For example, your fastest devices are disks, but you do not have enough space on
these devices to store all data that needs to be backed up over the long term. You
have tape drives, which are slower to access, but have much greater capacity. You
define a hierarchy so that files are initially stored on the fast disk volumes in one
storage pool. This provides clients with quick response to backup requests and
some recall requests. As the disk storage pool becomes full, the server migrates, or
moves, data to volumes in the tape storage pool.
Another option to consider for your storage pool hierarchy is IBM 3592 tape
cartridges and drives, which can be configured for an optimal combination of
access time and storage capacity. For more information, see “Controlling
data-access speeds for 3592 volumes” on page 216.
Restrictions:
v You cannot establish a chain of storage pools that leads to an endless loop. For
example, you cannot define StorageB as the next storage pool for StorageA, and
then define StorageA as the next storage pool for StorageB.
v The storage pool hierarchy includes only primary storage pools. It does not
include copy storage pools or active-data pools. See “Backing up the data in a
storage hierarchy” on page 293.
v A storage pool must use the NATIVE or NONBLOCK data formats to be part of
a storage pool hierarchy. For example, a storage pool that uses the
NETAPPDUMP data format cannot be part of a storage pool hierarchy.
You can define the storage pools in a storage pool hierarchy from the top down or
from the bottom up. Defining the hierarchy from the bottom up requires fewer
steps. To define the hierarchy from the bottom up:
1. Define the storage pool named BACKTAPE with the following command:
define stgpool backtape tape
description=’tape storage pool for engineering backups’
maxsize=nolimit collocate=node maxscratch=100
2. Define the storage pool named ENGBACK1 with the following command:
define stgpool engback1 disk
description=’disk storage pool for engineering backups’
maxsize=5M nextstgpool=backtape highmig=85 lowmig=40
If you have already defined the storage pool at the top of the hierarchy, you can
update the storage hierarchy to include a new storage pool. You can update the
storage pool by using the UPDATE STGPOOL command or by using the Tivoli
Storage Manager Console, which includes a wizard. The wizard allows you to
change your storage pool hierarchy by using a drag and drop interface.
To define the new tape storage pool and update the hierarchy:
1. Define the storage pool named BACKTAPE with the following command:
define stgpool backtape tape
description=’tape storage pool for engineering backups’
maxsize=nolimit collocate=node maxscratch=100
2. Update the ENGBACK1 storage pool to specify that BACKTAPE is the next
storage pool in the hierarchy:
update stgpool engback1 nextstgpool=backtape
The size of the aggregate depends on the sizes of the client files being stored, and
the number of bytes and files allowed for a single transaction. Two options affect
the number of files and bytes allowed for a single transaction. TXNGROUPMAX,
located in the server options file, affects the number of files allowed.
TXNBYTELIMIT, located in the client options file, affects the number of bytes
allowed in the aggregate.
v The TXNGROUPMAX option in the server options file indicates the maximum
number of logical files (client files) that a client may send to the server in a
single transaction. The server might create multiple aggregates for a single
transaction, depending on how large the transaction is.
It is possible to affect the performance of client backup, archive, restore, and
retrieve operations by using a larger value for this option. When transferring
multiple small files, increasing the TXNGROUPMAX option can improve
throughput for operations to tape.
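For example, the two options might be set with entries like the following; the
values shown are illustrative only. In the server options file:
TXNGROUPMAX 4096
In the client options file:
TXNBYTELIMIT 102400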
When a Tivoli Storage Manager for Space Management client (HSM client)
migrates files to the server, the files are not grouped into an aggregate.
Server file aggregation is disabled for client nodes storing data associated with a
management class that has a copy group whose destination is a Centera storage
pool.
Using these factors, the server determines if the file can be written to that storage
pool or the next storage pool in the hierarchy.
Subfile backups: When the client backs up a subfile, it still reports the size of the
entire file. Therefore, allocation requests against server storage and placement in
the storage hierarchy are based on the full size of the file. The server does not put
a subfile in an aggregate with other files if the size of the entire file is too large to
put in the aggregate. For example, the entire file is 8 MB, but the subfile is only 10
KB. The server does not typically put a large file in an aggregate, so the server
begins to store this file as a stand-alone file. However, the client sends only 10 KB,
and it is now too late for the server to put this 10 KB file with other files in an
aggregate. As a result, the benefits of aggregation are not always realized when
clients back up subfiles.
[Figure 21. Example of a storage pool hierarchy: DISKPOOL, with TAPEPOOL (read/write access) as the next storage pool]
Assume a user wants to archive a 5 MB file that is named FileX. FileX is bound to
a management class that contains an archive copy group whose storage destination
is DISKPOOL (see Figure 21).
When the user archives the file, the server determines where to store the file based
on the following process:
1. The server selects DISKPOOL because it is the storage destination specified in
the archive copy group.
2. Because the access mode for DISKPOOL is read/write, the server checks the
maximum file size allowed in the storage pool.
The maximum file size applies to the physical file being stored, which may be a
single client file or an aggregate. The maximum file size allowed in DISKPOOL
is 3 MB. FileX is a 5 MB file and therefore cannot be stored in DISKPOOL.
3. The server searches for the next storage pool in the storage hierarchy.
If the DISKPOOL storage pool has no maximum file size specified, the server
checks for enough space in the pool to store the physical file. If there is not
enough space for the physical file, the server uses the next storage pool in the
storage hierarchy to store the file.
4. The server checks the access mode of TAPEPOOL, which is the next storage
pool in the storage hierarchy. The access mode for TAPEPOOL is read/write.
5. The server then checks the maximum file size allowed in the TAPEPOOL
storage pool. Because TAPEPOOL is the last storage pool in the storage
hierarchy, no maximum file size is specified. Therefore, if there is available
space in TAPEPOOL, FileX can be stored in it.
Restoring a primary storage pool from an active-data pool might cause some or all
inactive files to be deleted from the database if the server determines that an
inactive file needs to be replaced but cannot find it in the active-data pool.
As a best practice, therefore, and to prevent the permanent loss of inactive versions
of client backup data, you should create a minimum of one active-data pool, which
contains active data only, and one copy storage pool, which contains both active
and inactive data. To recover from a disaster, use the active-data pool to restore
critical client node data, and then restore the primary storage pools from the copy
storage pool. Do not use active-data pools for recovery of a primary pool or
volume unless the loss of inactive data is acceptable.
“Setting up copy storage pools and active-data pools” on page 294 describes the
high-level steps for implementation.
Neither copy storage pools nor active-data pools are part of a storage hierarchy,
which, by definition, consists only of primary storage pools. Data can be stored in
copy storage pools and active-data pools using the following methods:
v Including the BACKUP STGPOOL and COPY ACTIVEDATA commands in
administrative scripts or schedules so that data is automatically backed up or
copied at regular intervals.
v Enabling the simultaneous-write function so that data is written to primary
storage pools, copy storage pools, and active-data pools during the same
transaction. Writing data simultaneously to copy storage pools is supported for
backup, archive, space-management, and import operations. Writing data
simultaneously to active-data pools is supported only for client backup
operations and only for active backup versions.
v (copy storage pools only) Manually issuing the BACKUP STGPOOL command,
specifying the primary storage pool as the source and a copy storage pool as the
target. The BACKUP STGPOOL command backs up whatever data is in the
primary storage pool (client backup data, archive data, and space-managed
data).
v (active-data pools only) Manually issuing the COPY ACTIVEDATA command,
specifying the primary storage pool as the source and an active-data pool as the
target. The COPY ACTIVEDATA command copies only the active versions of
client backup data. If an aggregate being copied contains all active files, then the
entire aggregate is copied to the active-data pool during command processing. If
an aggregate being copied contains some inactive files, the aggregate is
reconstructed during command processing into a new aggregate without the
inactive files.
For efficiency, you can use a single copy storage pool and a single active-data pool
to back up all primary storage pools that are linked in a storage hierarchy. By
backing up all primary storage pools to one copy storage pool and one active-data
pool, you do not need to repeatedly copy a file when the file migrates from its
original primary storage pool to another primary storage pool in the storage
hierarchy.
Decide which client nodes have data that needs to be restored quickly if a disaster
occurs. Only the data belonging to those nodes should be stored in the active-data
pool.
For the purposes of this example, the following definitions already exist on the
server:
v The default STANDARD domain, STANDARD policy set, STANDARD
management class, and STANDARD copy group.
v A primary storage pool, BACKUPPOOL, and a copy storage pool, COPYPOOL.
BACKUPPOOL is specified in the STANDARD copy group as the storage pool
in which the server initially stores backup data. COPYPOOL contains copies of
all the active and inactive data in BACKUPPOOL.
v Three nodes that are assigned to the STANDARD domain (NODE1, NODE2, and
NODE3).
v Two mount points assigned for each client session.
v A FILE device class named FILECLASS.
You have identified NODE2 as the only high-priority node, so you need to create a
new domain to direct the data belonging to that node to an active-data pool. To set
up and enable the active-data pool, follow these steps:
1. Define the active-data pool:
DEFINE STGPOOL ADPPOOL FILECLASS POOLTYPE=ACTIVEDATA MAXSCRATCH=1000
2. Define a new domain and specify the active-data pool in which you want to
store the data belonging to NODE2:
DEFINE DOMAIN ACTIVEDOMAIN ACTIVEDESTINATION=ADPPOOL
3. Define a new policy set:
DEFINE POLICYSET ACTIVEDOMAIN ACTIVEPOLICY
4. Define a new management class:
DEFINE MGMTCLASS ACTIVEDOMAIN ACTIVEPOLICY ACTIVEMGMT
5. Define a backup copy group:
DEFINE COPYGROUP ACTIVEDOMAIN ACTIVEPOLICY ACTIVEMGMT DESTINATION=BACKUPPOOL
This command specifies that the active and inactive data belonging to client
nodes that are members of ACTIVEDOMAIN will be backed up to
BACKUPPOOL. Note that this is the destination storage pool for data backed
up from nodes that are members of the STANDARD domain.
6. Assign the default management class for the active-data pool policy set:
ASSIGN DEFMGMTCLASS ACTIVEDOMAIN ACTIVEPOLICY ACTIVEMGMT
7. Activate the policy set for the active-data pool:
ACTIVATE POLICYSET ACTIVEDOMAIN ACTIVEPOLICY
8. Assign the high-priority node, NODE2, to the new domain:
UPDATE NODE NODE2 DOMAIN=ACTIVEDOMAIN
A node can belong to only one domain. When you update a node by changing
its domain, you remove it from its current domain.
9. (optional) Update the primary storage pool, BACKUPPOOL, with the name of
the active-data pool, ADPPOOL, where the server simultaneously will write
data during a client backup operation:
UPDATE STGPOOL BACKUPPOOL ACTIVEDATAPOOLS=ADPPOOL
Every time NODE2 stores data into BACKUPPOOL, the server simultaneously
writes the data to ADPPOOL. The schedule, COPYACTIVE_BACKUPPOOL,
ensures that any data that was not stored during simultaneous-write operations is
copied to the active-data pool. When client nodes NODE1 and NODE3 are backed
up, their data is stored in BACKUPPOOL only, and not in ADPPOOL. When the
administrative schedule runs, only the data belonging to NODE2 is copied to the
active-data pool.
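An administrative schedule such as COPYACTIVE_BACKUPPOOL can be created
with the DEFINE SCHEDULE command. The following sketch assumes a daily run
at 8:00 p.m.; the schedule name, start time, and period are illustrative:
DEFINE SCHEDULE COPYACTIVE_BACKUPPOOL TYPE=ADMINISTRATIVE
CMD="COPY ACTIVEDATA BACKUPPOOL ADPPOOL" ACTIVE=YES STARTTIME=20:00 PERIOD=1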
Remember: If you want all the nodes belonging to an existing domain to store
their data in the active-data pool, then you can skip steps 2 through 8. Use the
UPDATE DOMAIN command to update the STANDARD domain, specifying the
name of the active-data pool, ADPPOOL, as the value of the
ACTIVEDESTINATION parameter.
In addition to using active-data pools for fast restore of client-node data, you can
also use active-data pools to reduce the number of tape volumes that are stored
either on-site or off-site for the purpose of disaster recovery. This example assumes
that, in your current configuration, all data is backed up to a copy storage pool
and taken off-site. However, your goal is to create an active-data pool, take the
volumes in that pool off-site, and maintain the copy storage pool on-site to recover
primary storage pools.
Every time data is stored into BACKUPPOOL, the data is simultaneously written
to ADPPOOL. The schedule, COPYACTIVE_BACKUPPOOL, ensures that any data
that was not stored during a simultaneous-write operation is copied to the
active-data pool. You can now move the volumes in the active-data pool to a safe
location off-site.
If your goal is to replace the copy storage pool with the active-data pool, follow
the steps below. As a best practice and to protect your inactive data, however, you
should maintain the copy storage pool so that you can restore inactive versions of
backup data if required. If the copy storage pool contains archive data or files that were
migrated by a Tivoli Storage Manager for Space Management client, do not delete
it.
1. Stop backing up to the copy storage pool:
DELETE SCHEDULE BACKUP_BACKUPPOOL
UPDATE STGPOOL BACKUPPOOL COPYSTGPOOLS=""
Typically, you need to ensure that you have enough disk storage to process one
night's worth of the clients' incremental backups. Although following this guideline
is not always possible, it is valuable when you plan storage pool backups.
For example, suppose you have enough disk space for nightly incremental backups
for clients, but not enough disk space for a FILE-type, active-data pool. Suppose
also that you have tape devices. With these resources, you can set up the following
pools:
v A primary storage pool on disk, with enough volumes assigned to contain the
nightly incremental backups for clients
v A primary storage pool on tape, which is identified as the next storage pool in
the hierarchy for the disk storage pool
v An active-data pool on tape
v A copy storage pool on tape
For more information about storage pool space, see “Estimating space needs for
storage pools” on page 401
The migration process helps to ensure that there is sufficient free space in the
storage pools at the top of the hierarchy, where faster devices can provide the most
benefit to clients. For example, the server can migrate data stored in a
random-access disk storage pool to a slower but less expensive sequential-access
storage pool.
Migration processing can differ for disk storage pools versus sequential-access
storage pools. If you plan to modify the default migration parameter settings for
storage pools or want to understand how migration works, read the following
topics:
v “Migrating disk storage pools”
v “Migrating sequential-access storage pools” on page 305
v “Starting migration manually or in a schedule” on page 308
Remember:
v Data cannot be migrated into or out of storage pools defined with a CENTERA
device class.
v If you receive an error message during the migration process, refer to IBM Tivoli
Storage Manager Messages, which can provide useful information for diagnosing
and fixing problems.
v If a migration process is started from a storage pool that does not have the next
storage pool identified in the hierarchy, a reclamation process is triggered for the
source storage pool. To prevent the reclamation process, define the next storage
pool in the hierarchy. For details, see “Setting up a storage pool hierarchy” on
page 288. As an alternative to prevent automatic migration from running, set the
HIGHMIG parameter of the storage pool definition to 100.
You can use the defaults for the migration thresholds, or you can change the
threshold values to identify the maximum and minimum amount of space for a
storage pool.
To control how long files must stay in a storage pool before they are eligible for
migration, specify a migration delay for a storage pool. For details, see “Keeping
files in a storage pool” on page 304.
If you decide to enable cache for disk storage pools, files can temporarily remain
on disks even after migration. When you use cache, you might want to set lower
migration thresholds.
For more information about migration thresholds, see “How the server selects files
to migrate” on page 301 and “Migration thresholds” on page 303. For information
about using the cache, see “Minimizing access time to migrated files” on page 305
and “Caching in disk storage pools” on page 310.
The server might not reach the low migration threshold for the pool by migrating
only files that were stored longer than the migration delay period. If so, the server
checks the storage pool characteristic that determines whether to stop migration,
even if the pool is still above the low migration threshold. For more information,
see “Keeping files in a storage pool” on page 304.
For example, Table 32 displays information that is contained in the database that is
used by the server to determine which files to migrate. This example assumes that
the storage pool contains no space-managed files. This example also assumes that
the migration delay period for the storage pool is set to zero. Any files can be
migrated regardless of the amount of time they are stored in the pool or the last
time of access.
Table 32. Database information about files stored in DISKPOOL
Client Node    Backed-Up File Spaces and Sizes    Archived Files (All Client File Spaces)
TOMC           TOMC/C 200 MB                      55 MB
               TOMC/D 100 MB
CAROL          CAROL 50 MB                        5 MB
PEASE          PEASE/home 150 MB                  40 MB
               PEASE/temp 175 MB
[Figure 22. Migration from the DISKPOOL disk storage pool to the TAPEPOOL tape storage pool, with a high migration threshold of 80% and a low migration threshold of 20%]
Figure 22 shows what happens when the high migration threshold defined for the
disk storage pool DISKPOOL is exceeded. When the amount of data that can be
migrated in DISKPOOL reaches 80%, the server runs the following tasks:
1. Determines that the TOMC/C file space is taking up the most space in the
DISKPOOL storage pool. It controls more space than any other single
backed-up or space-managed file space and more than any client node's
archived files.
2. Locates all data that belongs to node TOMC stored in DISKPOOL. In this
example, node TOMC backed up or archived files from file spaces TOMC/C
and TOMC/D stored in the DISKPOOL storage pool.
3. Migrates all data from TOMC/C and TOMC/D to the next available storage
pool. In this example, the data is migrated to the tape storage pool,
TAPEPOOL.
The server migrates all of the data from both file spaces that belong to node
TOMC. The migration happens, even if the occupancy of the storage pool drops
below the low migration threshold before the second file space is migrated.
If the cache option is enabled, files that are migrated remain on disk storage
(cached) until space is needed for new files. For more information about using
cache, see “Caching in disk storage pools” on page 310.
4. After all files that belong to TOMC are migrated to the next storage pool, the
server checks the low migration threshold. If the threshold is not reached, the
server determines which client node backed up or migrated the largest single
file space or archived files that occupy the most space. The server begins
migrating files that belong to that node.
In this example, the server migrates all files that belong to the client node
named PEASE to the TAPEPOOL storage pool.
5. After all the files that belong to PEASE are migrated to the next storage pool,
the server checks the low migration threshold again. If the low migration
threshold was reached or passed, then migration ends.
Choosing thresholds appropriate for your situation takes some experimenting. Start
by using the default high and low values. You need to ensure that migration
occurs frequently enough to maintain some free space but not so frequently that
the device is unavailable for other use.
High-migration thresholds:
Before changing the high-migration threshold, you need to consider the amount of
storage capacity provided for each storage pool and the amount of free storage
space needed to store additional files, without having migration occur.
If you set the high-migration threshold too high, the pool may be just under the
high threshold, but not have enough space to store an additional, typical client file.
Or, with a high threshold of 100%, the pool may become full and a migration
process must start before clients can back up any additional data to the disk
storage pool. In either case, the server stores client files directly to tape until
migration completes, resulting in slower performance.
If you set the high-migration threshold too low, migration runs more frequently
and can interfere with other operations.
Low-migration thresholds:
Before setting the low-migration threshold, you need to consider the amount of
free disk storage space needed for normal daily processing, whether you use cache
on disk storage pools, how frequently you want migration to occur, and whether
data in the next storage pool is being collocated by group.
For example, you might have backups of monthly summary data that you want to
keep in your disk storage pool for faster access until the data is 30 days old. After
the 30 days, the server moves the files to a tape storage pool.
To delay migration of files, set the MIGDELAY parameter when you define or
update a storage pool. The number of days is counted from the day that a file was
stored in the storage pool or accessed by a client, whichever is more recent. You
can set the migration delay separately for each storage pool. When you set the
delay to zero, the server can migrate any file from the storage pool, regardless of
how short a time the file has been in the storage pool. When you set the delay to
greater than zero, the server checks how long the file has been in the storage pool
and when it was last accessed by a client. If the number of days exceeds the
migration delay, the server migrates the file.
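For example, to keep files in the DISKPOOL storage pool for at least 30 days before
they become eligible for migration (the pool name and number of days are
illustrative), you might enter:
update stgpool diskpool migdelay=30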
Note: If you want the number of days for migration delay to be counted based
only on when a file was stored and not when it was retrieved, use the
NORETRIEVEDATE server option. For more information about this option, see the
Administrator's Reference.
If you set migration delay for a pool, you must decide what is more important:
either ensuring that files stay in the storage pool for the migration delay period, or
ensuring that there is enough space in the storage pool for new files. For each
storage pool that has a migration delay set, you can choose what happens as the
server tries to move enough data out of the storage pool to reach the low
migration threshold. If the server cannot reach the low migration threshold by
moving only files that have been stored longer than the migration delay, you can
choose one of the following:
v Allow the server to move files out of the storage pool even if the files have not
been in the pool for as long as the migration delay (MIGCONTINUE=YES). This
is the default. Allowing migration to continue ensures that space is made
available in the storage pool for new files that need to be stored there.
v Have the server stop migration without reaching the low migration threshold
(MIGCONTINUE=NO). Stopping migration ensures that files remain in the
storage pool for the time that you specified with the migration delay. The
administrator must ensure that there is always enough space available in the
storage pool to hold the data for the required number of days.
If you allow more than one migration process for the storage pool and allow the
server to move files that do not satisfy the migration delay time
(MIGCONTINUE=YES), some files that do not satisfy the migration delay time
may be migrated unnecessarily. As one process migrates files that satisfy the
migration delay time, a second process could begin migrating files that do not
satisfy the migration delay time to meet the low migration threshold. The first
process that is still migrating files that satisfy the migration delay time might have,
by itself, caused the storage pool to meet the low migration threshold.
Important: For information about the disadvantages of using cache, see “Caching
in disk storage pools” on page 310.
To ensure that files remain on disk storage and do not migrate to other storage
pools, use one of the following methods:
v Do not define the next storage pool.
A disadvantage of using this method is that if the file exceeds the space
available in the storage pool, the operation to store the file fails.
v Set the high-migration threshold to 100%.
When you set the high migration threshold to 100%, files will not migrate at all.
You can still define the next storage pool in the storage hierarchy, and set the
maximum file size so that large files are stored in the next storage pool in the
hierarchy.
A disadvantage of setting the high threshold to 100% is that after the pool
becomes full, client files are stored directly to tape instead of to disk.
Performance may be affected as a result.
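For example, assuming a disk storage pool named ENGBACK1 that already has a
next storage pool defined, the following command (the values are illustrative) keeps
files on disk while directing files larger than 500 MB to the next pool in the
hierarchy:
update stgpool engback1 highmig=100 maxsize=500M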
You probably will not want the server to migrate sequential-access storage pools
on a regular basis. An operation such as tape-to-tape migration has limited benefits
compared to disk-to-tape migration, and requires at least two tape drives.
You can migrate data from a sequential-access storage pool only to another
sequential-access storage pool. You cannot migrate data from a sequential-access
storage pool to a random-access (disk) storage pool; to move data to a
random-access storage pool, use the MOVE DATA command.
To control the migration process, set migration thresholds and migration delays for
each storage pool using the DEFINE STGPOOL and UPDATE STGPOOL
commands. You can also specify multiple concurrent migration processes to better
use your available tape drives or FILE volumes. (For details, see “Specifying
multiple concurrent migration processes” on page 309.) Using the MIGRATE
STGPOOL command, you can control the duration of the migration process and
whether reclamation is attempted prior to migration. For additional information,
see “Starting migration manually or in a schedule” on page 308.
For tape and optical storage pools, the server begins the migration process when
the ratio of volumes containing data to the total number of volumes in the storage
pool, including scratch volumes, reaches the high migration threshold. For
sequential-access disk (FILE) storage pools, the server starts the migration process
when the ratio of data in a storage pool to the pool's total estimated data capacity
reaches the high migration threshold. The calculation of data capacity includes the
capacity of all the scratch volumes specified for the pool.
Tip: When Tivoli Storage Manager calculates the capacity for a sequential-access
disk storage pool, it takes into consideration the amount of disk space available in
the file system. For this reason, be sure that you have enough disk space in the file
system to hold all the defined and scratch volumes specified for the storage pool.
For example, suppose that the capacity of all the scratch volumes specified for a
storage pool is 10 TB. (There are no predefined volumes.) However, only 9 TB of
disk space is available in the file system. The capacity value used in the migration
threshold is 9 TB, not 10 TB. If the high migration threshold is set to 70%,
migration will begin when the storage pool contains 6.3 TB of data, not 7 TB.
Because migration delay can prevent volumes from being migrated, the server can
migrate files from all eligible volumes but still find that the storage pool is above
the low migration threshold. If you set migration delay for a pool, you need to
decide what is more important: either ensuring that files stay in the storage pool
for as long as the migration delay, or ensuring there is enough space in the storage
pool for new files. For each storage pool that has a migration delay set, you can
choose what happens as the server tries to move enough files out of the storage
pool to reach the low migration threshold. If the server cannot reach the low
migration threshold by migrating only volumes that meet the migration delay
requirement, you can choose one of the following:
v Allow the server to migrate volumes from the storage pool even if they do not
meet the migration delay criteria (MIGCONTINUE=YES). This is the default.
Allowing migration to continue ensures that space is made available in the
storage pool for new files that need to be stored there.
v Have the server stop migration without reaching the low migration threshold
(MIGCONTINUE=NO). Stopping migration ensures that volumes are not
migrated for the time you specified with the migration delay. The administrator
must ensure that there is always enough space available in the storage pool to
hold the data for the required number of days.
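For example, to require that files remain in a tape storage pool for seven days and
to have the server stop migration rather than migrate volumes early (the pool name
and number of days are illustrative), you might enter:
update stgpool tapepool migdelay=7 migcontinue=no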
If you decide to migrate data from one sequential-access storage pool to another,
ensure that:
v Two drives (mount points) are available, one in each storage pool.
v The access mode for the next storage pool in the storage hierarchy is set to
read/write.
For information about setting an access mode for sequential-access storage pools,
see “Defining storage pools” on page 273.
v Collocation is set the same in both storage pools. For example, if collocation is
set to NODE in the first storage pool, then collocation should be set to NODE in
the next storage pool.
When you enable collocation for a storage pool, the server attempts to keep all
files belonging to a group of client nodes, a single client node, or a client file
space on a minimal number of volumes. For information about collocation for
sequential-access storage pools, see “Keeping client files together using
collocation” on page 381.
v You have sufficient resources (for example, staff) available to manage any
necessary media mount and dismount operations. (This is especially true for
multiple concurrent processing. For details, see “Specifying multiple concurrent
migration processes” on page 309.) More mount operations occur because the
server attempts to reclaim space from sequential-access storage pool volumes
before it migrates files to the next storage pool.
If you want to limit migration from a sequential-access storage pool to another
storage pool, set the high-migration threshold to a high percentage, such as 95%.
For information about setting a reclamation threshold for tape storage pools, see
“Reclaiming space in sequential-access storage pools” on page 390.
You can specify the maximum number of minutes that the migration runs before it
is automatically canceled. If you prefer, you can include this command in a
schedule to perform migration when it is least intrusive to normal production
needs.
For example, to migrate data from a storage pool named ALTPOOL to the next
storage pool, and specify that it end as soon as possible after one hour, issue the
following command:
migrate stgpool altpool duration=60
Do not use this command if you are going to use automatic migration. To prevent
automatic migration from running, set the HIGHMIG parameter of the storage
pool definition to 100. For details about the MIGRATE STGPOOL command, refer
to the Administrator's Reference.
Restriction: Data cannot be migrated into or out of storage pools defined with a
CENTERA device class.
Each migration process requires at least two simultaneous volume mounts (at least
two mount points) and, if the device type is not FILE, at least two drives. One of
the drives is for the input volume in the storage pool from which files are being
migrated. The other drive is for the output volume in the storage pool to which
files are being migrated.
When calculating the number of concurrent processes to run, carefully consider the
resources you have available, including the number of storage pools that will be
involved with the migration, the number of mount points, the number of drives
that can be dedicated to the operation, and (if appropriate) the number of mount
operators available to manage migration requests. The number of available mount
points and drives depends on other Tivoli Storage Manager and system activity
and on the mount limits of the device classes for the storage pools that are
involved in the migration. For more information about mount limit, see:
“Controlling the number of simultaneously mounted volumes” on page 213
For example, suppose that you want to migrate data on volumes in two sequential
storage pools simultaneously and that all storage pools involved have the same
device class. Each process requires two mount points and, if the device type is not
FILE, two drives. To run four migration processes simultaneously (two for each
storage pool), you need a total of at least eight mount points and eight drives if
the device type is not FILE. The device class must have a mount limit of at least
eight.
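For example, to request two concurrent migration processes for each of two
sequential-access storage pools (the pool names and process counts are illustrative),
you might enter:
update stgpool poola migprocess=2
update stgpool poolb migprocess=2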
If the number of migration processes you specify is more than the number of
available mount points or drives, the processes that do not obtain mount points or
drives will wait indefinitely or until the other migration processes complete and
mount points or drives become available.
The Tivoli Storage Manager server starts the specified number of migration
processes regardless of the number of volumes that are eligible for migration. For
example, if you specify ten migration processes and only six volumes are eligible
for migration, the server will start ten processes and four of them will complete
without processing a volume.
Multiple concurrent migration processing does not affect collocation. If you specify
collocation and multiple concurrent processes, the Tivoli Storage Manager server
attempts to migrate the files for each collocation group, client node, or client file
space onto as few volumes as possible. If files are collocated by group, each
process can migrate only one group at a single time. In addition, if files belonging
to a single collocation group (or node or file space) are on different volumes and
are being migrated at the same time by different processes, the files could be
migrated to separate output volumes.
For example, suppose a copy of a file is made while it is in a disk storage pool.
The file then migrates to a primary tape storage pool. If you then back up the
primary tape storage pool to the same copy storage pool, a new copy of the file is
not needed. The server knows it already has a valid copy of the file.
The only way to store files in copy storage pools is by backing up (the BACKUP
STGPOOL command) or by using the simultaneous-write function. The only way to
store files in active-data pools is by copying active data (the COPY ACTIVEDATA
command) or by using the simultaneous-write function.
If space is needed to store new data in the disk storage pool, cached files are
erased and the space they occupied is used for the new data.
When cache is disabled and migration occurs, the server migrates the files to the
next storage pool and erases the files from the disk storage pool. By default, the
system disables caching for each disk storage pool because of the potential effects
of cache on backup performance. If you leave cache disabled, consider higher
migration thresholds for your disk storage pools.
If fast restore of active client data is your objective, you can also use active-data
pools, which are storage pools containing only active versions of client backup
data. For details, see “Active-data pools” on page 269.
For example, assume that two files, File A and File B, are cached files that are the
same size. If File A was last retrieved on 05/16/08 and File B was last retrieved on
06/19/08, then File A is deleted to reclaim space first.
If you do not want the server to update the retrieval date for files when a client
restores or retrieves the file, specify the server option NORETRIEVEDATE in the
server options file. If you specify this option, the server removes copies of files in
cache regardless of how recently the files were retrieved.
Deduplicating data
Data deduplication is a method for eliminating redundant data in order to reduce
the storage that is required to retain the data. Only one instance of the data is
retained in a deduplicated storage pool. Other instances of the same data are
replaced with a pointer to the retained instance.
Restriction: When a client backs up or archives a file, the data is written to the
primary storage pool specified by the copy group of the management class that is
bound to the data. To deduplicate the client data, the primary storage pool must be
a sequential-access disk (FILE) storage pool that is enabled for data deduplication.
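For example, a sequential-access disk storage pool that is enabled for data
deduplication might be defined as follows. The pool name and scratch-volume limit
are illustrative; FILECLASS is assumed to be an existing FILE device class:
define stgpool dedupfilepool fileclass maxscratch=200 deduplicate=yes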
The ability to deduplicate data on either the backup-archive client or the server
provides flexibility in terms of resource utilization, policy management, and
security. You can also combine both client-side and server-side data deduplication
in the same production environment. For example, you can specify certain nodes
for client-side data deduplication and certain nodes for server-side data
deduplication. You can store the data for both sets of nodes in the same
deduplicated storage pool.
Backup-archive clients that can deduplicate data can also access data that was
deduplicated by server-side processes. Similarly, data that was deduplicated by
client-side processes can be accessed by the server. Furthermore, duplicate data can
be identified across objects regardless of whether the data deduplication is
performed on the client or the server.
In addition to whole files, IBM Tivoli Storage Manager can also deduplicate parts
of files that are common with parts of other files. Data becomes eligible for
duplicate identification as volumes in the storage pool are filled. A volume does
not have to be full before duplicate identification starts.
Benefits
Requirements
If the backup operation is successful and if the next storage pool is enabled for
data deduplication, the files are deduplicated by the server. If the next storage pool
is not enabled for data deduplication, the files are not deduplicated.
For details about client-side data deduplication, including options for controlling
data deduplication, see the Backup-Archive Clients Installation and User's Guide.
Only V6.2 and later storage agents can use LAN-free data movement to access
storage pools that contain data that was deduplicated by clients. V6.1 and earlier
storage agents can complete operations on such storage pools only over the LAN.
Table 33. Paths for data movement

                                 Storage pool          Storage pool contains a        Storage pool
                                 contains only         mixture of client-side and     contains only
                                 client-side           server-side                    server-side
                                 deduplicated data     deduplicated data              deduplicated data
V6.1 or earlier storage agent    Over the LAN          Over the LAN                   LAN-free
V6.2 storage agent               LAN-free              LAN-free                       LAN-free
V6.2 backup-archive clients are compatible with V6.2 storage agents and provide
LAN-free access to storage pools that contain client-side deduplicated data.
Version support
Server-side data deduplication is available only with IBM Tivoli Storage Manager
V6.1 or later servers. For optimal efficiency when using server-side data
deduplication, upgrade to the backup-archive client V6.1 or later.
Client-side data deduplication is available only with Tivoli Storage Manager V6.2
or later servers and backup-archive clients V6.2 or later.
Encrypted files
The Tivoli Storage Manager server and the backup-archive client cannot
deduplicate encrypted files. If an encrypted file is encountered during data
deduplication processing, the file is not deduplicated, and a message is logged.
Tip: You do not have to process encrypted files separately from files that are
eligible for client-side data deduplication. Both types of files can be processed in
the same operation. However, they are sent to the server in different transactions.
As a security precaution, you can take one or more of the following steps:
v Enable storage-device encryption together with client-side data deduplication.
v Use client-side data deduplication only for nodes that are secure.
v If you are uncertain about network security, enable Secure Sockets Layer (SSL).
v If you do not want certain objects (for example, image objects) to be processed
by client-side data deduplication, you can exclude them on the client. If an
object is excluded from client-side data deduplication and it is sent to a storage
pool that is set up for data deduplication, the object is deduplicated on the server.
v Use the SET DEDUPVERIFICATIONLEVEL command to detect possible security
attacks on the server during client-side data deduplication. Using this command,
you can specify a percentage of client extents for the server to verify. If the
server detects a possible security attack, a message is displayed.
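For example, to have the server verify 5% of the extents that are sent by clients that
use client-side data deduplication (the percentage is illustrative), you might enter:
set dedupverificationlevel 5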
File size
Only files that are more than 2 KB are deduplicated. Files that are 2 KB or less are
not deduplicated.
A return code (RC=254) and message are written to the dsmerror.log file. The
message is also displayed in the command-line client. The error message is:
ANS7899E The client referenced a duplicated extent that does not exist
on the Tivoli Storage Manager server.
The workaround for this situation is to ensure that processes that can cause files to
expire are not run at the same time that back up or archive operations with
client-side data deduplication are performed.
Collocation
You can use collocation for storage pools that are set up for data deduplication.
However, collocation might not have the same benefit as it does for storage pools
that are not set up for data deduplication.
By using collocation with storage pools that are set up for data deduplication, you
can control the placement of data on volumes. However, the physical location of
duplicate data might be on different volumes. No-query restore and other
processes remain efficient in selecting volumes that contain non-deduplicated data.
However, the efficiency declines when additional volumes are required to provide
the duplicate data.
Using Tivoli Storage Manager data deduplication can provide several advantages.
However, there are some situations where data deduplication is not appropriate.
Those situations are:
v Your primary storage of backup data is on a Virtual Tape Library or physical
tape. If regular migration to tape is required, the benefits of using data
deduplication are lessened, since the purpose of data deduplication is to reduce
disk storage as the primary location of backup data.
v You have no flexibility with the backup processing window. Tivoli Storage
Manager data deduplication processing requires additional resources, which can
extend backup windows or server processing times for daily backup activities.
v Your restore processing times must be fast. Restore performance from
deduplicated storage pools is slower than from a comparable disk storage pool
that does not use data deduplication. If fast restore performance from disk is a
high priority, restore performance benchmarking must be done to determine
whether the effects of data deduplication can be accommodated.
Related tasks:
“Keeping client files together using collocation” on page 381
“Detecting possible security attacks on the server during client-side deduplication”
on page 329
As part of the planning process, ensure that you will benefit from using data
deduplication. In the following situations, IBM Tivoli Storage Manager data
deduplication can provide a cost-effective method for reducing the amount of disk
storage that is required for backups:
v You have to reduce the disk space that is required for backup storage.
v You must perform remote backups over limited bandwidth connections.
v You are using Tivoli Storage Manager node replication for disaster recovery
across geographically dispersed locations.
v You either have disk-to-disk backup configured (where the final destination of
backup data is on a deduplicating disk storage pool), or data is stored in the
FILE storage pool for a significant time (for example 30 days), or until
expiration.
v For guidance on the scalability of data deduplication with Tivoli Storage
Manager, see Effective Planning and Use of IBM Tivoli Storage Manager V6
Deduplication at http://www.ibm.com/developerworks/mydeveloperworks/
If you are creating a primary sequential-access storage pool and you do not specify
a value, the server starts one duplicate-identification process automatically. If you
are creating a copy storage pool or an active-data pool and you do not specify a
value, the server does not start any processes automatically. In all other cases, the
Tivoli Storage Manager server does not start any duplicate-identification processes
automatically by default.
v Decide whether to define or update a storage pool for data deduplication, but
not actually perform data deduplication. For example, suppose that you have a
primary sequential-access disk storage pool and a copy sequential-access disk
storage pool. Both pools are set up for data deduplication. You might want to
run duplicate-identification processes for only the primary storage pool. In this
way, only the primary storage pool reads and deduplicates data. However, when
the data is moved to the copy storage pool, the data deduplication is preserved,
and no duplicate identification is required.
v Determine the best time to use data deduplication for the storage pool. The
duplicate identification (IDENTIFY) processes can increase the workload on the
processor and system memory. Schedule duplicate identification processes at the
following times:
– When the process does not conflict with other processes such as reclamation,
migration, and storage pool backup
– Before node replication (if node replication is being used) so that node
replication can be used in combination with deduplication
When you use data deduplication, your system can achieve benefits such as these:
v Reduction in the storage capacity that is required for storage pools on the server
that are associated with a FILE-type device class. This reduction applies for both
server-side and client-side data deduplication.
v Reduction in the network traffic between the client and server. This reduction
applies for client-side deduplication only.
When you implement the suggested practices for data deduplication, you can help
to avoid problems such as these on your system:
v Server outages that are caused by running out of active log space or archive log
space
v Server outages or client backup failures that are caused by exceeding the IBM
DB2 internal lock list limit
v Process failures and hangs that are caused during server data management
Properly size the server database, recovery log, and system memory:
When you use data deduplication, considerably more database space is required as
a result of storing the metadata that is related to duplicate data. Data
deduplication also tends to cause longer-running transactions and a related larger
peak in recovery log usage.
In addition, more system memory is required for caching database pages that are
used during duplicate data lookup for both server-side and client-side data
deduplication.
Tips:
v Ensure that the Tivoli Storage Manager server has a minimum of 64 GB of
system memory.
v Allocate a file system with two-to-three times more capacity for the server
database than you would allocate for a server that does not use data
deduplication. You can plan for 150 GB of database storage for every 10 TB of
data that is protected in the deduplicated storage pools.
v Configure the server to have the maximum active log size of 128 GB by setting
the ACTIVELOGSIZE server option to a value of 131072.
v Use a directory for the database archive logs with an initial free capacity of at
least 500 GB. Specify the directory by using the ARCHLOGDIRECTORY server option.
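For example, the server options file might contain entries similar to the following.
The archive log directory path is an illustrative Windows path:
ACTIVELOGSIZE 131072
ARCHLOGDIRECTORY d:\tsmserver\archlog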
For more information about managing resources such as the database and recovery
log, see the Installation Guide. Search for database and recovery log capacity
planning.
Avoid the overlap of server maintenance tasks with client backup windows:
When you schedule client backups for a period during which server maintenance
tasks are not running, you create a backup window. This practice is especially
important when you use data deduplication, but follow it regardless of whether
data deduplication is used with Tivoli Storage Manager.
Migration and reclamation are the tasks most likely to interfere with the success of
client backups.
Tips:
v Schedule client backups in a backup window that is isolated from data
maintenance processes, such as migration and reclamation.
v Schedule each type of data maintenance task with controlled start times and
durations so that they do not overlap with each other.
v If storage-pool backup is used to create a secondary copy, schedule storage-pool
backup operations before you start data deduplication processing to avoid
restoring objects that are sent to a non-deduplicated copy storage pool.
v If you are using node replication to keep a secondary copy of your data,
schedule the REPLICATE NODE command to run after duplicate identification
processes are completed.
For more information about tuning the schedule for daily server maintenance
tasks, see the Optimizing Performance guide. Search for tuning the schedule for daily
operations.
The lock list storage of DB2, which is managed automatically, can become
insufficient. If you deduplicate data that includes large files, or if you deduplicate
many files concurrently, the lock list storage can be exhausted. When the lock list
storage is insufficient, backup failures, data management process failures, or server
outages can occur.
File sizes greater than 500 GB that are processed by data deduplication are most
likely to cause storage to become insufficient. However, if many backups use
client-side data deduplication, this problem can also occur with smaller-sized files.
Tip: When you estimate the lock list storage requirements, follow the information
described in the technote to manage storage for loads that are much larger than
expected.
You can use controls to limit the potential effect of large objects on data
deduplication processing on the Tivoli Storage Manager server.
You can use the following controls when you deduplicate large-object data:
v Server controls that limit the size of objects. These controls limit the size of
objects that are processed by data deduplication.
v Controls on the data management processes of the server. These controls limit
the number of processes that can operate concurrently on the server.
v Scheduling options that control how many clients run scheduled backups
simultaneously. These scheduling options can be used to limit the number of
clients that perform client-side data deduplication at the same time.
v Client controls whereby larger objects can be processed as a collection of smaller
objects. These controls are primarily related to the Tivoli Storage Manager data
protection products.
Use the server controls that are available on Tivoli Storage Manager server to
prevent large objects from being processed by data deduplication.
Use the following parameter and server options to limit the object size for data
deduplication:
MAXSIZE
For storage pools, the MAXSIZE parameter can be used to prevent large
objects from being stored in a deduplicated storage pool. Use the default
NOLIMIT parameter value, or set the value to be greater than
CLIENTDEDUPTXNLIMIT and SERVERDEDUPTXNLIMIT option values.
Use the MAXSIZE parameter with a deduplicated storage pool to prevent
objects that are too large to be eligible for data deduplication from being
stored in a deduplicated storage pool. The objects are then redirected to the
next storage pool in the storage pool hierarchy.
SERVERDEDUPTXNLIMIT
The SERVERDEDUPTXNLIMIT server option limits the total size of objects that
can be deduplicated in a single transaction by duplicate identification
processes. This option limits the maximum file size that is processed by
server-side data deduplication. The default value for this option is 300 GB,
and the maximum value is 2048 GB. Because less simultaneous activity is
typical with server-side data deduplication, consider having a limit larger
than 300 GB on the object size for server-side data deduplication.
CLIENTDEDUPTXNLIMIT
The CLIENTDEDUPTXNLIMIT server option restricts the total size of all objects
that can be deduplicated in a single client transaction. This option limits
the maximum object size that is processed by client-side data
deduplication. However, there are some methods to break up larger
objects. The default value for this option is 300 GB, and the maximum
value is 1024 GB.
Tips:
v Set the MAXSIZE parameter for deduplicated storage pools to a value slightly
greater than CLIENTDEDUPTXNLIMIT and SERVERDEDUPTXNLIMIT option values.
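For example, with the default transaction limits of 300 GB in the server options file,
the storage pool limit might be set slightly higher. The pool name and values are
illustrative:
SERVERDEDUPTXNLIMIT 300
CLIENTDEDUPTXNLIMIT 300
update stgpool dedupfilepool maxsize=320G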
Use the controls for the data management processes of the Tivoli Storage Manager
server. These controls limit the number of large objects that are simultaneously
processed by the server during data deduplication.
Use the following commands and parameters to limit the number of large objects
that are simultaneously processed by the server:
v The storage pool parameters on the DEFINE STGPOOL command or the UPDATE
STGPOOL command.
– The MIGPROCESS parameter controls the number of migration processes for a
specific storage pool.
– The RECLAIMPROCESS parameter controls the number of simultaneous processes
that are used for reclamation.
v The IDENTIFYPROCESS parameter on the IDENTIFY DUPLICATES command. The
parameter controls the number of duplicate identification processes that can run
at one time for a specific storage pool.
Tips:
v You can safely run duplicate identification processes for more than one
deduplicated storage pool at the same time. However, specify the
IDENTIFYPROCESS parameter with the IDENTIFY DUPLICATES command to limit the
total number of all simultaneous duplicate identification processes. Limit the
total number to a number less than or equal to the number of processors that are
available in the system.
v Schedule duplicate identification processes to run when the additional load does
not affect client operations or conflict with other server processes. For example,
schedule the duplicate identification process to run outside the client backup
window. The duplicate identification processes for the server intensively use the
database and system resources. These processes place additional processing on
the processor and memory of the system.
v You can use the Tivoli Storage Manager Administration Center to run a
maintenance script. The Administration Center provides a wizard that guides
you through the steps to configure and schedule an appropriate maintenance
script that runs server processes in a preferred order.
v Do not overlap different types of operations, such as expiration, reclamation,
migration, and storage pool backup.
v Read the information about data deduplication and the server storage pool. The
effect of data deduplication on system resources is also related to the size of the
file for deduplication. As the size of the file increases, more processing time,
processor resources, memory, and active log space are needed on the server.
Review the document for information about data deduplication and the server
storage pool.
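For example, an administrative schedule could run duplicate identification nightly,
outside the client backup window. The schedule name, storage pool name, start
time, and duration are illustrative:
define schedule identify_dedup type=administrative
cmd="identify duplicates dedupfilepool duration=240" active=yes starttime=23:00 period=1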
For scheduled backups, you can limit the number of client backup sessions that
perform client-side data deduplication at the same time.
You can use any of the following approaches to limit the number of client backup
sessions:
v Clients can be clustered in groups by using different schedule definitions that
run at different times during the backup window. Consider spreading clients
that use client-side deduplication among these different groups.
v Increase the duration for scheduled startup windows and increase the
randomization of schedule start times. This limits the number of backups that
use client-side data deduplication that start at the same time.
v Separate client backup destinations by using the server policy definitions of the
Tivoli Storage Manager server, so that different groups of clients use different
storage pool destinations:
– Clients whose data is never to be deduplicated should use a management
class whose destination is a storage pool that does not have data deduplication
enabled.
– Clients that use client-side data deduplication can use storage pools where
they are matched with other clients for which there is a higher likelihood of
duplicate matches. For example, all clients that run Microsoft Windows
operating systems can be set up to use a common storage pool. However,
they do not necessarily benefit from sharing a storage pool with clients that
perform backups of Oracle databases.
Many of the data protection products process objects with sizes in the range of
several hundred GBs to one TB. This range exceeds the maximum object size that
is acceptable for data deduplication.
You can reduce large objects into multiple smaller objects by using the following
methods:
v Use Tivoli Storage Manager client features that back up application data with
the use of multiple streams. For example, a 1 TB database is not eligible for data
deduplication as a whole. However, when backed up with four parallel streams,
the resulting four 250 GB objects are eligible for deduplication. For Tivoli Storage
Manager Data Protection for SQL, you can specify a number of stripes to change
the backup into multiple streams.
v Use application controls that influence the maximum object size that is passed
through to Tivoli Storage Manager. Tivoli Storage Manager Data Protection for
Oracle has several RMAN configuration parameters that can cause larger
databases to be broken into smaller objects. These configuration parameters
include the use of multiple channels, or the MAXPIECESIZE option, or both.
Restriction: In some cases, large objects cannot be reduced in size and therefore
cannot be processed by Tivoli Storage Manager data deduplication.
Processor usage
The amount of processor resources that are used depends on how many client
sessions or server processes are simultaneously active. Additionally, the amount of
processor usage is increased because of other factors, such as the size of the files
that are backed up. When I/O bandwidth is available and the files are large, for
example 1 MB, finding duplicates can use an entire processor during a session or
process. When files are smaller, other bottlenecks can occur. These bottlenecks can
include reading files from the client disk or the updating of the Tivoli Storage
Manager server database. In these bottleneck situations, data deduplication might
not use all of the resources of the processor.
You can control processor resources by limiting or increasing the number of client
sessions for a client or the number of duplicate identification processes on the server. To take
advantage of your processor and to complete data deduplication faster, you can
increase the number of identification processes or client sessions for the client. The
increase can be up to the number of processors that are on the system. It can be
more than that number if the processors support multiple hardware-assisted
threads for each core, such as with simultaneous multithreading. Use a minimum
of eight (2.2 GHz or equivalent) processor cores in any Tivoli Storage Manager
server that is configured for data deduplication.
Network bandwidth
Network bandwidth that is used for data deduplication queries from the Tivoli
Storage Manager client to the server can be reduced by using the enablededupcache
client option. The cache stores information about extents that were previously sent
to the server. If an extent that was previously sent is found in the cache, it is not
necessary to query the server again for that extent.
Restore performance
Compression
If the server detects that a security attack is in progress, the current session is
canceled. In addition, the setting of the DEDUPLICATION parameter for the node is
changed from CLIENTORSERVER to SERVERONLY. The SERVERONLY setting
disables client-side data deduplication for that node.
The server also issues a message that a potential security attack was detected and
that client-side data deduplication was disabled for the node.
To display the current value for SET DEDUPVERIFICATIONLEVEL, issue the QUERY
STATUS command. Check the value in the Client-side Deduplication Verification
Level field.
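For example, to have the server verify 5 percent of the extents that clients send and
then display the current setting, you can issue the following commands; the
percentage is only illustrative:
set dedupverificationlevel 5
query status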
In a FILE storage pool that is not set up for data deduplication, files on a volume
that are being restored or retrieved are read sequentially from the volume before
the next volume is mounted. This process ensures optimal I/O performance and
eliminates the need to mount a volume multiple times.
In a FILE storage pool that is set up for data deduplication, however, extents that
comprise a single file can be distributed across multiple volumes. To restore or
retrieve the file, each volume containing a file extent must be mounted. As a result,
the I/O is more random, which can lead to slower restore-and-retrieve times.
These results occur more often with small files that are less than 100 KB. In
addition, more processor resources are consumed when restoring or retrieving
from a deduplicated storage pool. The additional consumption occurs because the
data is checked to ensure that it has been reassembled properly.
Tip: To reduce the mounting and removing of FILE storage pool volumes, the
server allows for multiple volumes to remain mounted until they are no longer
needed. The number of volumes that can be mounted at a time is controlled by the
NUMOPENVOLSALLOWED option.
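For example, the following sketch raises the limit to 20 mounted volumes, assuming
that the option can be updated dynamically with the SETOPT command (otherwise,
set it in the server options file):
setopt numopenvolsallowed 20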
You can create a storage pool for data deduplication or update an existing storage
pool for data deduplication. You can store client-side deduplicated data and
server-side deduplicated data in the same storage pool.
As data is stored in the pool, the duplicates are identified. When the reclamation
threshold for the storage pool is reached, reclamation begins, and the space that is
occupied by duplicate data is reclaimed.
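For example, a minimal sketch of both approaches; the pool names, device-class
name, and scratch volume limit are hypothetical:
define stgpool deduppool filedevc maxscratch=200 deduplicate=yes identifyprocess=2
update stgpool existingfilepool deduplicate=yes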
Attention: By default, the Tivoli Storage Manager server requires that you back
up deduplication-enabled primary storage pools before volumes in the storage
pool are reclaimed and before duplicate data is discarded.
You can create a copy of the data by using the BACKUP STGPOOL or REPLICATE NODE
command. When you back up a primary storage pool, you create a copy of the
entire storage pool. When you replicate data by using node replication, you copy
the data of one or more nodes from primary storage pools to a primary storage
pool on another Tivoli Storage Manager server.
Table 35 describes the different scenarios that you can use to create a copy of data
in your deduplicated storage pools, and which value of DEDUPREQUIRESBACKUP to
use.
Table 35. Setting the value for the DEDUPREQUIRESBACKUP option
v To back up your primary storage pool data to a non-deduplicated copy pool,
such as a copy pool that uses tape, set DEDUPREQUIRESBACKUP to yes and use the
BACKUP STGPOOL command.
v To back up your primary storage pool data to a deduplicated copy pool, set
DEDUPREQUIRESBACKUP to no and use the BACKUP STGPOOL command.
v To use node replication to create a copy of your data on another Tivoli Storage
Manager server, set DEDUPREQUIRESBACKUP to no and use the REPLICATE NODE
command.
v If no copy is created, set DEDUPREQUIRESBACKUP to no.
Depending on the method that you chose to create a copy of the data in the
primary storage pools, complete one of the following actions:
v Use storage pool backup (see the sketch after this list):
1. Issue the BACKUP STGPOOL command. If you set the DEDUPREQUIRESBACKUP
option to yes, you must back up data to a copy storage pool that is not set
up for data deduplication.
Tip: Copying data to an active-data pool does not provide the same level of
protection as creating a storage pool backup or using node replication.
2. Issue the IDENTIFY DUPLICATES command to identify duplicate data.
Tip: If you back up storage pool data after duplicate data is identified, the
copy process can take longer because the data must be reconstructed to find
any duplicate data.
v Use node replication:
1. Issue the IDENTIFY DUPLICATES command to identify duplicate data.
2. Issue the REPLICATE NODE command to start node replication.
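The following sketch shows the storage pool backup sequence; the pool names are
hypothetical, and the DEDUPREQUIRESBACKUP option is assumed to be set to yes:
backup stgpool deduppool tapecopypool maxprocess=4
identify duplicates deduppool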
The following table illustrates what happens to data deduplication when data
objects are moved or copied.
Deduplicated data, which was in the storage pool before you turned off data
deduplication, is not reassembled. Deduplicated data continues to be removed due
to normal reclamation and deletion. All information about data deduplication for
the storage pool is retained.
To turn off data deduplication for a storage pool, use the UPDATE STGPOOL
command and specify DEDUPLICATE=NO.
If you turn data deduplication on for the same storage pool, duplicate-
identification processes resume, skipping any files that have already been
processed. You can change the number of duplicate-identification processes. When
calculating the number of duplicate-identification processes to specify, consider the
workload on the server and the amount of data requiring data deduplication. The
number of duplicate-identification processes must not exceed the number of
processor cores available on the IBM Tivoli Storage Manager server.
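For example, the following sketch turns off data deduplication for a pool and later
turns it back on with two duplicate-identification processes; the pool name is
hypothetical:
update stgpool deduppool deduplicate=no
update stgpool deduppool deduplicate=yes identifyprocess=2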
The following table shows how the data deduplication settings on the client
interact with the data deduplication settings on the Tivoli Storage Manager server.
Table 37. Data deduplication settings: Client and server
Value of the DEDUPLICATION parameter    Value of the client DEDUPLICATION    Data deduplication
for REGISTER NODE or UPDATE NODE        option in the client options file    location
SERVERONLY                              Yes                                  Server
You can set the DEDUPLICATION option in the client options file, in the
preference editor of the Tivoli Storage Manager client GUI, or in the client option
set on the Tivoli Storage Manager server. Use the DEFINE CLIENTOPT command to
set the DEDUPLICATION option in a client option set. To prevent the client from
overriding the value in the client option set, specify FORCE=YES.
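For example, a minimal sketch that defines an option set, forces client-side data
deduplication on, and assigns the set to a node; the option set and node names are
hypothetical:
define cloptset dedup_opts
define clientopt dedup_opts deduplication yes force=yes
update node matt cloptset=dedup_opts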
Table 38 on page 338 shows how these two controls, the number and duration of
processes, interact for a particular storage pool.
Remember:
v When the amount of time that you specify as a duration expires, the number of
duplicate-identification processes always reverts to the number of processes
specified in the storage pool definition.
v When the server stops a duplicate-identification process, the process completes
the current physical file and then stops. As a result, it might take several
minutes to reach the value that you specify as a duration.
v To change the number of duplicate-identification processes, you can also update
the storage pool definition using the UPDATE STGPOOL command. However, when
you update a storage pool definition, you cannot specify a duration. The
processes that you specify in the storage pool definition run indefinitely, or until
you issue the IDENTIFY DUPLICATES command, update the storage pool definition
again, or cancel a process.
The following example illustrates how you can control data deduplication using a
combination of automatic and manual duplicate-identification processes. Suppose
you create two new storage pools for data deduplication, A and B. When you
create the pools, you specify two duplicate-identification processes for A and one
process for B. The IBM Tivoli Storage Manager server is set by default to run those
processes automatically. As data is stored in the pools, duplicates are identified and
marked for removal. When there is no data to deduplicate, the
duplicate-identification processes go into an idle state, but remain active.
Suppose you want to avoid resource impacts on the server during client-node
backups. You must reduce the number of duplicate-identification processes
manually. For A, you specify a value of 1 for the number of duplicate-identification
processes. For B, you specify a value of 0. You also specify that these changes remain
in effect for 60 minutes, the duration of your backup window.
For example, suppose that you have four storage pools: stgpoolA, stgpoolB,
stgpoolC, and stgpoolD. All the storage pools are associated with a particular IBM
Tivoli Storage Manager server. Storage pools A and B are each running one
duplicate-identification process, and storage pools C and D are each running two.
A 60-minute client backup is scheduled to take place, and you want to reduce the
server workload from these processes by two-thirds.
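The following sketch shows commands that could make this change, assuming that
the IDENTIFY DUPLICATES command accepts the NUMPROCESS and DURATION
parameters:
identify duplicates stgpoola numprocess=0 duration=60
identify duplicates stgpoolb numprocess=0 duration=60
identify duplicates stgpoolc numprocess=1 duration=60
identify duplicates stgpoold numprocess=1 duration=60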
Now two processes are running for 60 minutes, one third of the number running
before the change. At the end of 60 minutes, the Tivoli Storage Manager server
automatically restarts one duplicate-identification process in storage pools A and B,
and one process in storage pools C and D.
For details about client-side data deduplication options, see the Backup-Archive
Clients Installation and User's Guide.
Related concepts:
“Client-side data deduplication” on page 313
In this example, you enable client-side data deduplication for a single node. You
have a policy domain that you use to manage deduplicated data.
The name of the domain that you use to manage deduplicated data is
dedupdomain1. The primary storage pool that is specified by the copy group of the
default management class is a deduplication-enabled storage pool. The client,
MATT, that you want to enable for data deduplication uses a default management
class for backup operations.
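The following sketch shows one way to complete this setup, using the node name
MATT from this example:
update node matt deduplication=clientorserver
On the client, add the following option to the dsm.opt file:
deduplication yes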
To determine the amount of data that was deduplicated, start a backup or archive
operation. At the end of the operation, check the backup or archive report.
In this example, you enable client-side data deduplication for more than one client
node.
The data belonging to client MATT is bound to a management class with a copy
group that specifies a deduplication-enabled destination storage pool.
To change the data deduplication location from the client to the server, issue the
following command:
update node matt deduplication=serveronly
The extent to which these symptoms occur depends on the number and size of the
objects being processed, the intensity and type of concurrent operations taking
place on the IBM Tivoli Storage Manager server, and the Tivoli Storage Manager
server configuration.
With the SERVERDEDUPTXNLIMIT server option, you can limit the size of objects that
can be deduplicated on the server. With the CLIENTDEDUPTXNLIMIT server option,
you can limit the size of transactions when client-side deduplicated data is backed
up or archived.
Data deduplication uses an average extent size of 256 KB. When deduplicating
large objects, for example, over 200 GB, the number of extents for an object can
grow large. Assuming extents are 256 KB, there are 819,200 extents for a 200 GB
object. When you need to restore this object, all 819,200 database records must be
read before the object is accessible.
Tiered data deduplication can manage larger objects because a larger average
extent size is used when deduplicating the data. For example, after an object
reaches 200 GB, the Tivoli Storage Manager server uses 1 MB as the average extent
size, instead of 256 KB. 819,200 extents become 204,800 extents.
Note: By default, objects under 100 GB in size are processed at Tier 1. Objects in
the range of 100 GB to under 400 GB are processed in Tier 2. All objects 400 GB
and larger are processed in Tier 3.
Depending on your environment, you can set different options for using tiered
data deduplication. However, if possible, avoid changing the default tier settings.
Small changes might be tolerated, but frequent changes to these settings can
prevent matches between previously stored backups and future backups.
If you want to use two tiers for data deduplication instead of three, you can set the
DEDUPTIER2FILESIZE and DEDUPTIER3FILESIZE options accordingly.
Use Tier 1 and Tier 2 only
To have two tiers with an average extent size of 256 KB and 1 MB, specify
these values:
DEDUPTIER2FILESIZE 100
DEDUPTIER3FILESIZE 9999
Use Tier 1 and Tier 3 only
To have two tiers with an average extent size of 256 KB and 2 MB, specify
these values:
DEDUPTIER2FILESIZE 100
DEDUPTIER3FILESIZE 100
If you do not want to use tiered data deduplication and instead preserve your
existing environment, set the value for both of the tiered data deduplication
options to 9999. For example:
DEDUPTIER2FILESIZE 9999
DEDUPTIER3FILESIZE 9999
If both options are set to 9999, then all files that are 10 TB or less are processed
with the default extent size of 256 KB.
You can also obtain statistics about client-side data deduplication. For details, see
Backup-Archive Clients Installation and User's Guide.
To query a storage pool for statistics about data deduplication, issue the QUERY
STGPOOL command.
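For example, with a hypothetical pool name:
query stgpool deduppool format=detailed
The detailed output includes the Duplicate Data Not Stored value that is described
in the next paragraph.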
If you run a query before reclamation of the storage pool, the Duplicate Data Not
Stored value in the command output is inaccurate and does not reflect the most
recent data reduction.
You can display information only about files that are linked to a volume or only
about files that are stored on a volume. You can also display information about
both stored files and linked files.
To display information about files on a volume, issue the QUERY CONTENT command
and specify the FOLLOWLINKS parameter.
For example, suppose a volume in a deduplicated storage pool is physically
destroyed. You must restore this volume. Before you do, you want to determine
whether other volumes in the storage pool have files that are linked to files in the
destroyed volume. With that information, you can decide whether to restore the
other volumes. To identify links, you issue the QUERY CONTENT command for the
destroyed volume and specify the FOLLOWLINKS parameter to list all the files with
links to files on the destroyed volume.
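For example, a sketch with a hypothetical FILE volume name:
query content /tsmdata/00000127.bfs followlinks=yes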
You can use the activity log or the Tivoli Storage Manager Administration Center
to view client statistics about data deduplication. The activity log can show
historical information about one or more nodes. You can also view data reduction
information for data deduplication by using the Tivoli Storage Manager API.
To view client statistics for data deduplication, see the activity log, use the Tivoli
Storage Manager Administration Center, or use the Tivoli Storage Manager API.
The following client statistics are taken from the activity log:
tsm> incremental c:\test\* -sub=yes
Incremental backup of volume ’c:\test\*’
Normal File--> 43,387,224 \\naxos\c$\test\newfile [Sent]
Successful incremental backup of ’\\naxos\c$\test\*’
The \\naxos\c$\test directory uses approximately 143.29 MB of space. All files are
already stored on the Tivoli Storage Manager server except the c:\test\newfile
file, which is 41.37 MB (43,387,224 bytes). After client-side data deduplication, it is
determined that only approximately 21 MB will be sent to the server.
The following client statistics are produced using the Tivoli Storage Manager API:
typedef struct tsmEndSendObjExOut_t
{
dsUint16_t stVersion; /* structure version */
dsStruct64_t totalBytesSent; /* total bytes read from app */
dsmBool_t objCompressed; /* was object compressed */
dsStruct64_t totalCompressSize; /* total size after compress */
dsStruct64_t totalLFBytesSent; /* total bytes sent LAN free */
dsUint8_t encryptionType; /* type of encryption used */
dsmBool_t objDeduplicated; /* was object processed for dist. data dedup */
dsStruct64_t totalDedupSize; /* total size after de-dup */
} tsmEndSendObjExOut_t;
After each backup or archive operation, the Tivoli Storage Manager client reports
the data deduplication statistics in the server activity log. For details about the
activity log, see the Tivoli Storage Manager Information Center, and search for
activity log.
To query the data deduplication statistics for the client, issue the QUERY ACTLOG
command.
See the following example for sample information provided by the QUERY ACTLOG
command:
Date/Time Message
-------------------- ----------------------------------------------------------
03/15/10 09:56:56 ANE4952I (Session: 406, Node: MODO)
Total number of objects inspected: 1 (SESSION: 406)
03/15/10 09:56:56 ANE4954I (Session: 406, Node: MODO)
Total number of objects backed up: 1 (SESSION: 406)
03/15/10 09:56:56 ANE4958I (Session: 406, Node: MODO)
Total number of objects updated: 0 (SESSION: 406)
03/15/10 09:56:56 ANE4960I (Session: 406, Node: MODO)
Total number of objects rebound: 0 (SESSION: 406)
03/15/10 09:56:56 ANE4957I (Session: 406, Node: MODO)
Total number of objects deleted: 0 (SESSION: 406)
03/15/10 09:56:56 ANE4970I (Session: 406, Node: MODO)
Total number of objects expired: 0 (SESSION: 406)
03/15/10 09:56:56 ANE4959I (Session: 406, Node: MODO)
Total number of objects failed: 0 (SESSION: 406)
03/15/10 09:56:56 ANE4982I (Session: 406, Node: MODO)
Total objects deduplicated: 1(SESSION: 406)
03/15/10 09:56:56 ANE4977I (Session: 406, Node: MODO)
Total number of bytes inspected: 7.05 MB(SESSION: 406)
03/15/10 09:56:56 ANE4975I (Session: 406, Node: MODO)
Total number of bytes processed: 33 B(SESSION: 406)
03/15/10 09:56:56 ANE4961I (Session: 406, Node: MODO)
Total number of bytes transferred: 33 B (SESSION: 406)
03/15/10 09:56:56 ANE4963I (Session: 406, Node: MODO)
Data transfer time: 0.00 sec (SESSION: 406)
03/15/10 09:56:56 ANE4966I (Session: 406, Node: MODO)
Network data transfer rate: 77.09 KB/sec (SESSION: 406)
03/15/10 09:56:56 ANE4967I (Session: 406, Node: MODO)
Aggregate data transfer rate: 0.01 KB/sec (SESSION: 406)
03/15/10 09:56:56 ANE4968I (Session: 406, Node: MODO)
Objects compressed by: 0% (SESSION: 406)
03/15/10 09:56:56 ANE4981I (Session: 406, Node: MODO)
Deduplication reduction: 100.00%(SESSION: 406)
03/15/10 09:56:56 ANE4976I (Session: 406, Node: MODO)
Total data reduction ratio: 100.00%(SESSION: 406)
03/15/10 09:56:56 ANE4964I (Session: 406, Node: MODO)
Elapsed processing time: 00:00:02 (SESSION: 406)
The following example shows how to use the activity log to gather the data
reduction information across all nodes that belong to the DEDUP domain:
dsmadmc -id=admin -password=admin -displaymode=list -scrollprompt=no "select
DISTINCT A1.MESSAGE, A2.MESSAGE from ACTLOG A1, ACTLOG A2 where A1.NODENAME
in (select NODE_NAME from nodes where domain_name='DEDUP') and
A1.SESSID=A2.SESSID and A1.MSGNO=4977 and A2.MSGNO=4961 and EXISTS
(select A3.SESSID from ACTLOG A3 where A3.SESSID=A1.SESSID and A3.MSGNO=4982)"
| grep 'MESSAGE:' | sed -r 's/MESSAGE:.*:\s+([0-9]+(\.[0-9]+)?)\s+
(B|KB|MB|GB|TB).*(SESSION: .*)/\1 \3/' | sed -r 's/\.//' | awk -f awk.txt
The awk.txt file begins with the following script:
{ if ($2=="B") valueInKB = 0;
if ($2=="KB") valueInKB = $1;
if ($2=="MB") valueInKB = $1 * 1024;
if ($2=="GB") valueInKB = $1 * 1024 * 1024;
if ($2=="TB") valueInKB = $1 * 1024 * 1024 * 1024;
This query gives a summary, as shown in the following example:
Number of bytes inspected: 930808832 KB
Number of bytes transferred: 640679936 KB
Data reduction ratio: 31 %
To query where client file spaces are stored and how much space they occupy,
issue the QUERY OCCUPANCY command.
In the following example, 10 MB of data is placed in the FS1 file space, and 2 MB
is marked for expiration and is removed during the next expiration process.
Therefore, Physical Space Occupied reports 10 MB and Logical Space Occupied
reports 8 MB. The Physical Space Occupied value for storage pools that use data
deduplication is not shown.
tsm: SERVER1>q occupancy dedup*
The occupancy table shows how much physical space is occupied by a file space
after the removal of the deduplication savings. These savings are gained by
removing duplicated data from the file space. You can use select * from
occupancy to get LOGICAL_MB and REPORTING_MB values.
LOGICAL_MB is the amount of space that is used by this file space. REPORTING_MB is
the amount of space that is occupied when the data is not placed in a
deduplication-enabled storage pool.
NODE_NAME: BRIAN
TYPE: Bkup
FILESPACE_NAME: \\brain\c$
STGPOOL_NAME: MYFILEPOOL
NUM_FILES: 63
PHYSICAL_MB: 0.00
LOGICAL_MB: 10.00
REPORTING_MB: 30.00
FILESPACE_ID: 17
Tip: The LOGICAL_MB value takes into account only the amount of data that is
removed or not stored because the data is identified as a duplicate of data that is
stored elsewhere.
For example, IBM Tivoli Storage Manager for Mail and IBM Tivoli Storage
Manager for Databases can use client-side data deduplication through the Tivoli
Storage Manager API to create backup sets and export node data.
Image backup can be full or incremental. In a typical scenario, full image backups
are scheduled less frequently than incremental image backups. For example, a full
image backup is scheduled weekly and incremental backups are scheduled daily,
except for the day of the selective image backup. The frequency of full image
backups is often driven by the available storage space. For example, each image
backup of a 50 GB volume might need 50 GB of space in a storage pool.
You can use VSS on Windows Server 2003, Windows Server 2008, and Windows
Vista operating systems. For details about backing up the Windows system state,
see Tivoli Storage Manager: Client Installation and User Guide.
System state can contain thousands of objects and take a large amount of storage
space on the server. It is likely that system state objects do not change much
between backups. This results in a large amount of duplicate data being stored on
the server. In addition, similar systems are likely to have a similar system state.
Therefore, when you perform system state backups on these systems, there is an
increase in duplicate data.
In the following example, a backup of the system state was performed on two
similar systems that run Windows Server 2008. Before these backups, no data was
stored in the storage pool. On the first system, the system-state data was deduplicated by
45%, as shown in Figure 23. A backup of the system state yielded a deduplication
reduction of 98% on the second system, as shown in Figure 24 on page 350.
This example shows a sample deduplication reduction of 45% for the system state
data:
This example shows a sample deduplication reduction of 98% for the system state
data:
Tivoli Storage Manager for Virtual Environments backups consist of all virtual
machines in the environment. Often, large portions of individual backups are
common with other backups. Therefore, when you perform backup operations,
there is an increase in duplicate data.
When you use client-side data deduplication in combination with backups for
Tivoli Storage Manager for Virtual Environments, you can reduce the amount of
duplicate data that is stored on the server. The reduction amount varies, depending
on the makeup of your data.
Before you use data deduplication, ensure that your system meets all prerequisites.
You can turn on client-side data deduplication by adding DEDUPLICATION YES to the
dsm.opt file.
Related concepts:
“Client-side data deduplication” on page 313
In Tivoli Storage Manager V6.1 or earlier, data protection clients do not provide
data deduplication reduction statistics in the graphical user interface. In this
situation, you can verify that data deduplication occurs.
When only the metadata of the file is changed, for example, with access control
lists or extended attributes, typically the whole file is backed up again. With
client-side data deduplication, although the file is backed up again, only the
metadata is sent to the server.
Client-side data deduplication identifies extents in the data stream and calculates
the associated hash sums. Data deduplication determines whether a data extent
with the same hash sum is already stored on the server. If it is already stored, the
backup-archive client only needs to notify the server about the hash sum, and can
avoid sending the corresponding data extent. This process reduces the amount of
data that is exchanged between the Tivoli Storage Manager backup-archive client
and the server.
The Tivoli Storage Manager client cannot use a cache for data deduplication if
there is not enough file space for a hash sum cache. Client-side data deduplication
can take place, but it has no memory of hash sums that are already sent by the
client or already found on the server. In general, the client must query the
server to determine whether hash sums are duplicates. Hash sum lists are maintained in
memory for the life of a transaction. If a hash sum is encountered multiple times
within the same transaction, the hash sum is detectable without a cache.
The cache for client-side data deduplication can become unsynchronized with the
deduplicated disk storage pool of the server. Object expiration, file space deletion,
or overflow to an associated tape storage pool can cause the cache to be
unsynchronized. When the client cache contains entries that are no longer in the
deduplicated storage pool of the Tivoli Storage Manager server, the client cache
resets. The client cache cannot delete specific entries when objects are deleted from
the storage pool of the server.
When a backup set is created for a node by using the GENERATE BACKUPSET
command, all associated node data is placed onto the backup media. It is also
placed on the backup media when node data is exported for a node by the EXPORT
NODE command. This placement ensures that the associated objects can be restored
without any server dependencies, apart from the backup media.
Compression
Consider the following factors when you use data compression in an environment
that uses multiple clients:
v Extents that are compressed by a backup-archive client that uses Tivoli Storage
Manager V6.1 or earlier are not compatible with compressed extents from a V6.2
client. Extents are also not compatible with uncompressed extents because each
version uses a different compression algorithm.
v With a deduplication storage pool that contains data from clients that are V6.2
and earlier, there is a mixture of compressed and non-compressed extents. For
example, assume that a restore operation is run from a client that is earlier than
V6.2. Compressed extents from a client at a later version of Tivoli Storage
Manager are uncompressed by the server during the restore operation.
v When backup sets are generated for clients that are at a version earlier than
V6.2, V6.2 compressed extents that are also part of the data to be backed up are
uncompressed.
Even though most data is compatible when compression is used, ensure that all
clients are at V6.2 or later. This method minimizes the need to uncompress data
when you restore data or create a backup set.
Data that is stored by earlier client versions and processed for deduplication
extents by the server is compatible with new extents. For example, an extent that is
identified by the server from an earlier client version matches the query from
client-side data deduplication to the server. The extent is not sent to the server again.
Data extents that are created by different operations are compatible. For example,
data extents that are created by file-level backups, image backups, or IBM Tivoli
Storage Manager FastBack mount backups are compatible with one another. This
compatibility can mean that a greater proportion of the extents can be deduplicated.
Assume that you integrate the Tivoli Storage Manager FastBack mount with Tivoli
Storage Manager to back up volumes to a Tivoli Storage Manager server. The
Tivoli Storage Manager client backs up the Tivoli Storage Manager FastBack
repository to a remote server. If you previously performed an image or a file-level
backup of this data with the Tivoli Storage Manager client, it is likely that the
Tivoli Storage Manager FastBack mount backup can use many data extents that are
already stored on the server.
For example, you perform an image backup of a volume by using the Tivoli
Storage Manager client. Then you back up the same volume with Tivoli Storage
Manager FastBack. You can expect a greater amount of data deduplication when
you back up the Tivoli Storage Manager FastBack mount.
Data extents that are created by a file-level backup can be used by the Tivoli
Storage Manager client during an image backup. For example, you perform a full
incremental backup of the C drive on your computer. Then you run an image
backup of the same drive. You can expect a greater amount of data deduplication
during the image backup. You can also expect a greater amount of data
deduplication during a file-level backup or an archive operation that immediately
follows an image backup.
Data deduplication is permitted only for storage pools that are associated with a
devtype=FILE device class. The following scenarios show how you can implement
the data deduplication of storage pools to ensure that you can restore data if a
failure occurs.
Primary storage pool is deduplicated and a single copy storage pool is not
deduplicated
The amount of time required to back up the primary storage pool to a
non-deduplicated copy storage pool can increase. While data is copied to
the copy storage pool, the deduplicated data that represents a file must be
read. The file must be recreated and stored in the copy storage pool.
You can write data simultaneously during any of the following operations:
v Client store sessions, for example:
– Backup and archive sessions by Tivoli Storage Manager backup-archive
clients.
– Backup and archive sessions by application clients using the Tivoli Storage
Manager API.
The maximum number of copy storage pools and active-data pools to which data
can be simultaneously written is three. For example, you can write data
simultaneously to three copy storage pools, or you can write data simultaneously
to two copy storage pools and one active-data pool.
You can specify the simultaneous-write function for a primary storage pool if it is
the target for client store sessions, server import processes, or server
data-migration processes. You can also specify the simultaneous-write function for
a primary storage pool when it is the target for all of the eligible operations.
Writing data simultaneously during client store sessions might be the logical choice
if you have sufficient time for mounting and removing tapes during the client store
session. However, if you choose this option you must ensure that a sufficient
number of mount points and drives are available to accommodate all the client
nodes that are storing data.
As a best practice, you are probably issuing the BACKUP STGPOOL and COPY
ACTIVEDATA commands for all the storage pools in your storage pool hierarchy. If
you are, and if you migrate only a small percentage of data from the primary
storage pool daily, writing data simultaneously during client store sessions is the
better choice.
Use the simultaneous-write function during migration if you have many client
nodes and the number of mount points that are required to write data
simultaneously during client store sessions is unacceptable. Similarly, mounting
and removing tapes when writing data simultaneously during client store sessions
might be taking too much time. If so, consider writing data simultaneously during
migration.
By default, the Tivoli Storage Manager server writes data simultaneously during
client store sessions if you have copy storage pools or active-data pools defined to
the target storage pool.
You can also disable the simultaneous-write function. This option is useful if you
have copy storage pools or active-data pools defined, but you want to disable the
simultaneous-write function without deleting and redefining the pools.
Remember:
v Specify a value for the AUTOCOPY parameter on the primary storage pool that is
the target of data movement. (The default is to write data simultaneously during
client store sessions and server import processes.) For example, if you want to
write data simultaneously only during server data-migration processes, specify
AUTOCOPY=MIGRATION in the definition of the next storage pool in the storage pool
hierarchy.
v The AUTOCOPY parameter is not available for copy storage pools or active-data
pools.
IBM Tivoli Storage Manager provides the following options for controlling when
simultaneous-write operations occur:
v To disable the simultaneous-write function, specify AUTOCOPY=NONE.
This option is useful, if, for example, you have copy storage pools or active-data
pools defined, and you want to temporarily disable the simultaneous-write
function without having to delete and then redefine the pools.
v To specify simultaneous-write operations only during client store sessions and
server import processes, specify AUTOCOPY=CLIENT.
During server import processes, data is simultaneously written only to copy
storage pools. Data is not written to active-data pools during import processes.
v To specify that simultaneous-write operations take place only during server
data-migration processes, specify AUTOCOPY=MIGRATION.
During server data migration, data is simultaneously written to copy storage
pools and active-data pools only if the data does not exist in those pools.
v To specify that simultaneous-write operations take place during client store
sessions, server data-migration processes, and server import processes, specify
AUTOCOPY=ALL.
A primary storage pool can be the target for more than one type of data
movement. For example, the next storage pool in a storage pool hierarchy can be
the target for data migration from the primary storage pool at the top of the
hierarchy. The next storage pool can also be the target for direct backup of
certain types of client files (for example, image files). The AUTOCOPY=ALL setting
on a primary storage pool ensures that data is written simultaneously during
both server data-migration processes and client store sessions.
The following table provides examples of AUTOCOPY settings for some common
scenarios in which the simultaneous-write function is used.
For details about the DEFINE STGPOOL and UPDATE STGPOOL commands and
parameters, see the Administrator's Reference.
The parameters that are used to specify copy storage pools and active-data pools
are on the DEFINE STGPOOL and UPDATE STGPOOL commands.
v To specify copy storage pools, use the COPYSTGPOOLS parameter.
v To specify active-data pools, use the ACTIVEDATAPOOLS parameter.
Ensure that client sessions have sufficient mount points. Each session requires one
mount point for the primary storage pool and a mount point for each copy storage
pool and each active-data pool. To allow a sufficient number of mount points, use
the MAXNUMMP parameter on the REGISTER NODE or UPDATE NODE commands.
For details about the DEFINE STGPOOL and UPDATE STGPOOL commands, refer to the
Administrator's Reference.
Use the COPYCONTINUE parameter on the DEFINE STGPOOL command to specify how
the server reacts to a write failure to copy storage pools during client store
sessions:
v To stop writing to failing copy storage pools for the remainder of the session,
but continue storing files into the primary pool and any remaining copy pools or
active-data pools, specify COPYCONTINUE=YES.
The copy storage pool list is active only for the life of the session and applies to
all the primary storage pools in a particular storage pool hierarchy.
v To fail the transaction and discontinue the store operation, specify
COPYCONTINUE=NO.
Restrictions:
v The setting of the COPYCONTINUE parameter does not affect active-data pools. If a
write failure occurs for any of active-data pools, the server stops writing to the
failing active-data pool for the remainder of the session, but continues storing
files into the primary pool and any remaining active-data pools and copy
storage pools. The active-data pool list is active only for the life of the session
and applies to all the primary storage pools in a particular storage pool
hierarchy.
v The setting of the COPYCONTINUE parameter does not affect the simultaneous-write
function during server import. If data is being written simultaneously and a
write failure occurs to the primary storage pool or any copy storage pool, the
server import process fails.
v The setting of the COPYCONTINUE parameter does not affect the simultaneous-write
function during migration. If data is being written simultaneously and a write
failure occurs to any copy storage pool or active-data pool, the failing storage
pool is removed and the data migration process continues. Write failures to the
primary storage pool cause the migration process to fail.
For details about the DEFINE STGPOOL and UPDATE STGPOOL commands and
parameters, refer to the Administrator's Reference.
Related concepts:
“Rules of inheritance for the simultaneous-write function” on page 362
When a client backs up, archives, or migrates a file, or when the server imports
data, the data is written to the primary storage pool that is specified by the copy
group of the management class that is bound to the data. If a data storage
operation or a server import operation switches from the primary storage pool at
the top of a storage hierarchy to a next primary storage pool in the hierarchy, the
next storage pool inherits the list of copy storage pools, the list of active-data
pools, and the value of the COPYCONTINUE parameter from the primary storage pool
at the top of the storage pool hierarchy.
The following rules apply during a client store session or a server import process
when the server must switch primary storage pools:
v If the destination primary storage pool has one or more copy storage pools or
active-data pools defined using the COPYSTGPOOLS or ACTIVEDATAPOOLS
parameters, the server writes the data to the next storage pool and to the copy
storage pools and active-data pools that are defined to the destination primary
pool, regardless of whether the next pool has copy pools defined.
The setting of the COPYCONTINUE parameter of the destination primary storage
pool is inherited by the next primary storage pool. The COPYCONTINUE parameter
specifies how the server reacts to a copy storage-pool write failure for any of the
copy storage pools that are listed in the COPYSTGPOOLS parameter. If the next pool
has copy storage pools or active-data pools defined, they are ignored, as is the
value of their COPYCONTINUE parameter.
v If no copy storage pools or active-data pools are defined in the destination
primary storage pool, the server writes the data to the next primary storage
pool. If the next pool has copy storage pools or active-data pools defined, they
are ignored.
These rules apply to all the primary storage pools within the storage pool
hierarchy.
Related tasks:
“Specifying copy pools and active-data pools for simultaneous-write operations”
on page 360
“Specifying how the server reacts to a write failure during simultaneous-write
operations” on page 361
With DISKPOOL and TAPEPOOL already defined as your storage pool hierarchy,
issue the following commands to enable the simultaneous-write function:
define stgpool copypool1 mytapedevice pooltype=copy
define stgpool copypool2 mytapedevice pooltype=copy
define stgpool activedatapool mydiskdevice pooltype=activedata
update stgpool diskpool copystgpools=copypool1,copypool2 copycontinue=yes
activedatapools=activedatapool
where MYTAPEDEVICE is the device-class name associated with the copy storage
pools and MYDISKDEVICE is the device-class name associated with the
active-data pool.
The storage pool hierarchy and the copy storage pools and active-data pool
associated with DISKPOOL are displayed in Figure 25 on page 364.
Figure 25. Example of storage pool hierarchy with copy storage pools defined for DISKPOOL
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the backup operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when storage
pools are backed up or when active data is copied.
In this example, the next storage pool in a hierarchy inherits empty copy storage
pool and active-data pool lists from the primary storage pool at the top of the
storage hierarchy.
You do not specify a list of copy storage pools for DISKPOOL. However, you do
specify copy storage pools for TAPEPOOL (COPYPOOL1 and COPYPOOL2) and
an active-data pool (ACTIVEDATAPOOL). You also specify a value of YES for the
COPYCONTINUE parameter. Issue the following commands to enable the
simultaneous-write function:
define stgpool copypool1 mytapedevice pooltype=copy
define stgpool copypool2 mytapedevice pooltype=copy
define stgpool activedatapool mydiskdevice pooltype=activedata
update stgpool tapepool copystgpools=copypool1,copypool2
copycontinue=yes activedatapools=activedatapool
where MYTAPEDEVICE is the device-class name associated with the copy storage
pools and MYDISKDEVICE is the device-class name associated with the
active-data pool. Figure 27 on page 366 displays this configuration.
Figure 27. Example of storage pool hierarchy with copy storage pools defined for TAPEPOOL
When files A, B, C, D, and E are backed up, the following events occur:
v A, B, C, and D are written to DISKPOOL.
v File E is written to TAPEPOOL.
See Figure 28 on page 367.
Although TAPEPOOL has copy storage pools and an active-data pool defined, file
E is not copied because TAPEPOOL inherits empty copy storage pool and
active-data pool lists from DISKPOOL.
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the backup operation has completed. Data that is simultaneously written to
copy storage pools or active-data pools during migration is not copied when
primary storage pools are backed up or when active data is copied.
You specify COPYPOOL1 and COPYPOOL2 as copy storage pools for DISKPOOL
and you set the value of the COPYCONTINUE parameter to YES. You also specify
ACTIVEDATAPOOL as the active-data pool for DISKPOOL. This configuration is
identical to the configuration in the first example.
When files A, B, C, D, and E are backed up, the following events occur:
v An error occurs while writing to COPYPOOL1, and it is removed from the copy
storage pool list that is held in memory by the server. The transaction fails.
v Because the value of the COPYCONTINUE parameter is YES, the client tries the
backup operation again. The in-memory copy storage pool list, which is retained
by the server for the duration of the client session, no longer contains
COPYPOOL1.
v Files A and B are simultaneously written to DISKPOOL, ACTIVEDATAPOOL,
and COPYPOOL2.
v Files C and D are simultaneously written to DISKPOOL and COPYPOOL2.
v File E is simultaneously written to TAPEPOOL and COPYPOOL2.
See Figure 29 on page 368.
In this scenario, if the primary storage pools and COPYPOOL2 become damaged
or lost, you might not be able to recover your data. For this reason, issue the
following BACKUP STGPOOL commands for the copy storage pool that failed:
backup stgpool diskpool copypool1
backup stgpool tapepool copypool1
You can still recover the primary storage pools from COPYPOOL1 and, if
necessary, COPYPOOL2. However, if you want active backup data available in the
active-data pool for fast client restores, you must issue the following command:
copy activedata diskpool activedatapool
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the backup operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when primary
storage pools are backed up or when active data is copied.
In this example, the storage pool hierarchy contains two primary storage pools.
The next storage pool has two copy storage pools defined. A copy of one of the
files to be migrated to the next storage pool exists in one of the copy storage pools.
FILEPOOL and TAPEPOOL are defined in your storage pool hierarchy. Two copy
storage pools, COPYPOOL1 and COPYPOOL2, are defined to TAPEPOOL. Files A,
B, and C are in FILEPOOL and eligible to be migrated. A copy of file C exists in
COPYPOOL2.
The storage pool hierarchy and the copy storage pools that are associated with
TAPEPOOL are displayed in Figure 30.
Tip: In this example, the setting of the AUTOCOPY parameter for FILEPOOL is not
relevant. TAPEPOOL is the target of the data migration.
Figure 31. Simultaneous-write operation during migration to two copy storage pools
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the migration operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when primary
storage pools are backed up or when active data is copied.
In this example, the storage pool hierarchy contains two primary storage pools.
The next storage pool has two copy storage pools defined. A copy of one of the
files to be migrated to the next storage pool exists in a copy storage pool. A write
error to the pool occurs.
FILEPOOL and TAPEPOOL are defined in the storage pool hierarchy. Two copy
storage pools, COPYPOOL1 and COPYPOOL2, are defined to TAPEPOOL. Files A,
B, and C are in FILEPOOL and are eligible to be migrated. A copy of file C exists
in COPYPOOL1.
The storage pool hierarchy and the copy storage pools that are associated with
TAPEPOOL are displayed in Figure 32 on page 371.
Tip: In this example, the setting of the AUTOCOPY parameter for FILEPOOL is not
relevant. TAPEPOOL is the target of the data migration.
During the migration, the copy storage pool to which the write error occurs is
removed for the duration of the migration process.
In this example, three primary storage pools are linked to form a storage pool
hierarchy. The next storage pool in the hierarchy has a storage pool list. The last
pool in the hierarchy inherits the list during a simultaneous-write operation.
The storage pool hierarchy and the copy storage pool are displayed in Figure 34.
Figure 34. Three-tiered storage pool hierarchy with one copy storage pool
Issue the following commands for FILEPOOL2 and TAPEPOOL to enable the
simultaneous-write function only during migration:
update stgpool filepool2 autocopy=migration
update stgpool tapepool autocopy=migration
Tip: In this example, the setting of the AUTOCOPY parameter for FILEPOOL1 is not
relevant. FILEPOOL2 and TAPEPOOL are the targets of the data migration.
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the migration operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when primary
storage pools are backed up or when active data is copied.
Primary storage pools FILEPOOL and TAPEPOOL are linked to form a storage
hierarchy. FILEPOOL is at the top of the storage hierarchy. TAPEPOOL is the next
pool in the storage hierarchy. Two copy storage pools, COPYPOOL1 and
COPYPOOL2, are defined to FILEPOOL. The value of the AUTOCOPY parameter for
FILEPOOL is CLIENT. The value of the AUTOCOPY parameter for TAPEPOOL is
NONE.
v Files A, B, and C were written to FILEPOOL during client backup operations.
v File C was simultaneously written to COPYPOOL1.
v The files in FILEPOOL are eligible to be migrated.
When files A, B and C are migrated, they are written to TAPEPOOL. See Figure 37.
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the migration operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when primary
storage pools are backed up or when active data is copied.
Primary storage pools FILEPOOL and TAPEPOOL are linked to form a storage
hierarchy. FILEPOOL is at the top of the storage hierarchy. TAPEPOOL is the next
pool in the storage hierarchy. One copy storage pool, COPYPOOL, is defined to
both FILEPOOL and TAPEPOOL:
v The simultaneous-write function during client store operations was enabled.
(The setting of the AUTOCOPY parameter for FILEPOOL is CLIENT.)
v During client store operations, files A, B, and C were written to COPYPOOL. A
failure occurred while writing file D to COPYPOOL.
v The simultaneous-write function during migration is enabled for TAPEPOOL.
(The setting of the AUTOCOPY parameter for TAPEPOOL is MIGRATION.)
The storage pool hierarchy and the copy storage pool that are associated with
FILEPOOL and TAPEPOOL are displayed in Figure 38.
Figure 39. A simultaneous-write operation during both migration and client backup operations
As a precaution, issue the BACKUP STGPOOL and COPY ACTIVEDATA commands after
the migration operation has completed. Data that is simultaneously written to copy
storage pools or active-data pools during migration is not copied when primary
storage pools are backed up or when active data is copied.
Give careful consideration to the number of mount points that are available for a
simultaneous-write operation. A client session requires a mount point to store data
to a sequential-access storage pool. For example, if a storage pool hierarchy
includes a sequential primary storage pool, the client node requires one mount
point for that pool plus one mount point for each copy storage pool and
active-data pool.
Suppose, for example, you create a storage pool hierarchy like the hierarchy shown
in Figure 25 on page 364. DISKPOOL is a random-access storage pool, and
TAPEPOOL, COPYPOOL1, COPYPOOL2, and ACTIVEDATAPOOL are
sequential-access storage pools. For each client backup session, the client might
have to acquire four mount points if it has to write data to TAPEPOOL. To run
two backup sessions concurrently, the client requires a total of eight mount points.
To indicate the number of mount points a client can have, specify a value for the
MAXNUMMP parameter on the REGISTER NODE or UPDATE NODE commands. Verify the
value of the MAXNUMMP parameter and, if necessary, update it if you want to enable
the simultaneous-write function. A value of 3 for the MAXNUMMP parameter might be
sufficient if, during a client session, all the data is stored in DISKPOOL,
COPYPOOL1, COPYPOOL2, and ACTIVEDATAPOOL.
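For example, the following sketch gives a node enough mount points for the four
sequential-access pools that are described above; the node name is hypothetical:
update node client1 maxnummp=4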
If the operation involves a copy storage pool, the value of the COPYCONTINUE
parameter in the storage pool definition determines whether the client tries the
operation again:
v If the value of the COPYCONTINUE parameter is YES, the client tries the
operation again.
v If the value of the COPYCONTINUE parameter is NO, the client does not try the
operation again.
Suppose you use a DISK primary storage pool that is accessed by many clients at
the same time during client data-storage operations. If this storage pool is
associated with copy storage pools, active-data pools, or both, the clients might
have to wait until enough tape drives are available to perform the store operation.
In this scenario, simultaneous-write operations could extend the amount of time
required for client store operations. It might be more efficient to store the data in
the primary storage pool and use the BACKUP STGPOOL command to back up the
DISK storage pool to the copy storage pools and the COPY ACTIVEDATA command to
copy active backup data from the DISK storage pool to the active-data pools.
Resources such as disk space, tape drives, and tapes are allocated at the beginning
of a simultaneous-write operation, and typically remain allocated during the entire
operation. If, for any reason, the destination primary pool cannot contain the data
being stored, the IBM Tivoli Storage Manager server attempts to store the data into
a next storage pool in the storage hierarchy. This next storage pool typically uses a
sequential-access device class. If new resources must be acquired for the next
storage pool, the client operation might have to wait until those resources become available.
To reduce the potential for switching storage pools, follow these guidelines:
v Ensure that enough space is available in the primary storage pools that are
targets for the simultaneous-write operation. For example, to make space
available, run the server migration operation before backing up or archiving
client data and before migration operations by Hierarchical Storage Management
(HSM) clients.
v The MAXSIZE parameter on the DEFINE STGPOOL and UPDATE STGPOOL commands
limits the size of the files that the Tivoli Storage Manager server can store in the
primary storage pools during client operations. Honoring the MAXSIZE parameter
for a storage pool during a store operation causes the server to switch pools. To
prevent switching pools, avoid using this parameter if possible.
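As an illustration of these guidelines, assuming a hypothetical primary pool named DISKPOOL, you might start migration ahead of the client backup window and remove any file-size limit with commands similar to the following:
migrate stgpool diskpool lowmig=0 duration=60
update stgpool diskpool maxsize=nolimit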
For example, you can configure production servers to store mission critical data in
one storage pool hierarchy and use the simultaneous-write function to back up the
data to copy storage pools and an active-data pool. See Figure 40. In addition, you
can configure the servers to store noncritical, workstation data in another storage
pool hierarchy and back up that data using the BACKUP STGPOOL command.
Figure 40. Separate storage pool hierarchies for different types of data. The figure shows a policy domain with two management classes: the STANDARD management class points to a hierarchy of DISKPOOL A and TAPEPOOL A, and the Mission Critical management class points to a hierarchy of DISKPOOL B and TAPEPOOL B, with data also written simultaneously to COPYPOOL B1, COPYPOOL B2, and ACTIVEDATAPOOL B.
This example also shows how to use the COPY ACTIVEDATA command to copy active
data from primary storage pools to an on-site sequential-access disk (FILE)
active-data pool. When designing a backup strategy, carefully consider your own
system, data storage, and disaster-recovery requirements.
1. Define the following storage pools:
v Two copy storage pools, ONSITECOPYPOOL and DRCOPYPOOL
v One active-data pool, ACTIVEDATAPOOL
v Two primary storage pools, DISKPOOL and TAPEPOOL
As part of the storage pool definition for DISKPOOL, specify TAPEPOOL as the
next storage pool, ONSITECOPYPOOL as the copy storage pool, and
ACTIVEDATAPOOL as the active-data pool. Set the copy continue parameter
for copy storage pools to YES. If an error occurs writing to a copy storage pool,
the operation will continue storing data into the primary pool, the remaining
copy storage pool, and the active-data pool.
define stgpool tapepool mytapedevice
define stgpool onsitecopypool mytapedevice pooltype=copy
define stgpool drcopypool mytapedevice pooltype=copy
define stgpool activedatapool mydiskdevice pooltype=activedata
define stgpool diskpool mydiskdevice nextstgpool=tapepool
copystgpool=onsitecopypool copycontinue=yes activedatapools=activedatapool
You can set collocation for each sequential-access storage pool when you define or
update the pool.
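For example, to collocate a hypothetical sequential-access storage pool named TAPEPOOL by collocation group, you might issue a command similar to the following:
update stgpool tapepool collocate=group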
Figure 41 shows an example of collocation by client node with three clients, each
having a separate volume containing that client's data.
When collocation is disabled, the server attempts to use all available space on each
volume before selecting a new volume. While this process provides better
utilization of individual volumes, user files can become scattered across many
volumes. Figure 43 on page 382 shows an example of collocation disabled, with
three clients sharing space on a single volume.
Collocation by group is the Tivoli Storage Manager system default for primary
sequential-access storage pools. The default for copy storage pools and active-data
pools is no collocation.
During the following server operations, all the data belonging to a collocation
group, a single client node, or a single client file space is moved or copied by one
process. For example, if data is collocated by group, all data for all nodes
belonging to the same collocation group is migrated by the same process.
1. Moving data from random-access and sequential-access volumes
2. Moving node data from sequential-access volumes
3. Backing up a random-access or sequential-access storage pool
4. Restoring a sequential-access storage pool
5. Reclamation of a sequential-access storage pool or off-site volumes
6. Migration from a random-access storage pool.
When collocating node data, the Tivoli Storage Manager server attempts to keep
files together on a minimal number of sequential-access storage volumes. However,
when the server is backing up data to volumes in a sequential-access storage pool,
the backup process has priority over collocation settings. As a result, the server
completes the backup, but might not be able to collocate the data. For example,
suppose you are collocating by node, and you specify that a node can use two
mount points on the server. Suppose also that the data being backed up from the
node could easily fit on one tape volume. During backup, the server might mount
two tape volumes, and the node's data might be distributed across two tapes,
rather than one.
If collocation is by node or file space, nodes or file spaces are selected for
migration based on the amount of data to be migrated. The node or file space with
the most data is migrated first. If collocation is by group, all nodes in the storage
pool are first evaluated to determine which node has the most data. The node with
the most data is migrated first along with all the data for all the nodes belonging
to that collocation group regardless of the amount of data in the nodes' file spaces
or whether the low migration threshold has been reached.
One reason to collocate by group is that individual client nodes often do not have
sufficient data to fill high-capacity tape volumes. Collocating data by groups of
nodes can reduce unused tape capacity by putting more collocated data on
individual tapes. In addition, because all data belonging to all nodes in the same
collocation group are migrated by the same process, collocation by group can
reduce the number of times a volume containing data to be migrated needs to be
mounted. Collocation by group can also minimize database scanning and reduce
tape passes during data transfer from one sequential-access storage pool to another.
Table 41 shows how the Tivoli Storage Manager server selects the first volume
when collocation is enabled for a storage pool at the client-node, collocation-group,
and file-space level.
Table 41. How the server selects volumes when collocation is enabled
Volume selection order 1:
v When collocation is by group: a volume that already contains files from the collocation group to which the client belongs
v When collocation is by node: a volume that already contains files from the same client node
v When collocation is by file space: a volume that already contains files from the same file space of that client node
Volume selection order 2 (all collocation types): an empty predefined volume
Volume selection order 3 (all collocation types): an empty scratch volume
Volume selection order 4:
v When collocation is by group or by node: a volume with the most available free space among volumes that already contain data
v When collocation is by file space: a volume containing data from the same client node
Volume selection order 5:
v When collocation is by group or by node: not applicable
v When collocation is by file space: a volume with the most available free space among volumes that already contain data
When the server needs to continue to store data on a second volume, it uses the
following selection order to acquire additional space:
1. An empty predefined volume
2. An empty scratch volume
3. A volume with the most available free space among volumes that already
contain data
4. Any available volume in the storage pool
When collocation is by client node or file space, the server attempts to provide the
best use of individual volumes while minimizing the mixing of files from different
clients or file spaces on volumes. This is depicted in Figure 44 on page 385, which
shows that volume selection is horizontal, where all available volumes are used
before all available space on each volume is used. A, B, C, and D represent files
from four different client nodes.
Remember:
1. If collocation is by node and the node has multiple file spaces, the server does
not attempt to collocate those file spaces.
2. If collocation is by file space and a node has multiple file spaces, the server
attempts to put data for different file spaces on different volumes.
Figure 44. Using all available sequential access storage volumes with collocation enabled at the node or file space level
When collocation is by group, the server attempts to collocate data from nodes
belonging to the same collocation group. As shown in the Figure 45, data for the
following groups of nodes has been collocated:
v Group 1 consists of nodes A, B, and C
v Group 2 consists of nodes D and E
v Group 3 consists of nodes F, G, H, and I
Whenever possible, the Tivoli Storage Manager server collocates data belonging to
a group of nodes on a single tape, as represented by Group 2 in the figure. Data
for a single node can also be spread across several tapes associated with a group
(Group 1 and 2). If the nodes in the collocation group have multiple file spaces, the
server does not attempt to collocate those file spaces.
Figure 45. Using all available sequential access storage volumes with collocation enabled at the group level
Remember: Normally, the Tivoli Storage Manager server always writes data to the
current filling volume for the operation being performed. Occasionally, however,
you might notice more than one filling volume in a collocated storage pool. This
can occur if different server processes or client sessions attempt to store data into
the collocated pool at the same time. In this situation, Tivoli Storage Manager will
allocate a volume for each process or session needing a volume so that both
operations complete as quickly as possible.
When the server needs to continue to store data on a second volume, it attempts to
select an empty volume. If none exists, the server attempts to select any remaining
available volume in the storage pool.
Figure 46. Using all available space on sequential volumes with collocation disabled
For example, if collocation is off for a storage pool and you turn it on, from then on
client files stored in the pool are collocated. Files that had previously been stored
in the pool are not moved to collocate them. As volumes are reclaimed, however,
the data in the pool tends to become more collocated. You can also use the MOVE
DATA or MOVE NODEDATA commands to move data to new volumes to increase
collocation. However, this causes an increase in the processing time and the
volume mount activity.
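For example, to consolidate the data of a hypothetical node named NODEA onto as few volumes as possible within the TAPEPOOL storage pool, you might issue a command similar to the following:
move nodedata nodea fromstgpool=tapepool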
Remember: A mount wait can occur or increase when collocation by file space is
enabled and a node has a volume containing multiple file spaces. If a volume is
eligible to receive data, Tivoli Storage Manager will wait for that volume.
Using collocation on copy storage pools and active-data pools requires special
consideration.
Primary storage pools perform a different recovery role than those performed by
copy storage pools and active-data pools. Normally you use primary storage pools
(or active-data pools) to recover data to clients directly. In a disaster, when both
clients and the server are lost, you might use off-site active-data pool volumes to
recover data directly to clients and the copy storage pool volumes to recover the
primary storage pools. The types of recovery scenarios that concern you the most
will help you to determine whether to use collocation on your copy storage pools
and active-data pools.
Collocation typically results in partially filled volumes when you collocate by node
or by file space. (Partially filled volumes are less prevalent, however, when you
collocate by group.) Partially filled volumes might be acceptable for primary
storage pools because the volumes remain available and can be filled during the
next migration process. However, this may be unacceptable for copy storage pools
and active-data pools whose storage pool volumes are taken off-site immediately. If
you use collocation for copy storage pools or active-data pools, you must decide
among the following:
v Taking more partially filled volumes off-site, thereby increasing the reclamation
activity when the reclamation threshold is lowered or reached. Remember that
rate of reclamation for volumes in an active-data pool is typically faster than the
rate for volumes in other types of storage pools.
v Leaving these partially filled volumes on-site until they fill and risk not having
an off-site copy of the data on these volumes.
v Collocating by group in order to use as much tape capacity as possible.
With collocation disabled for a copy storage pool or an active-data pool, typically
there will be only a few partially filled volumes after data is backed up to the copy
storage pool or copied to the active-data pool.
Consider your options carefully before using collocation for copy storage pools and
active-data pools. Even if you use collocation for your primary storage pools, you
may want to disable collocation for copy storage pools and active-data pools.
Collocation on copy storage pools or active-data pools might be desirable if you
have few clients, but each of them has large amounts of incremental backup data
each day.
Table 42 lists the four collocation options that you can specify on the DEFINE
STGPOOL and UPDATE STGPOOL commands. The table also describes the effects of
collocation on data which belongs to nodes that are members of collocation groups
and nodes that are not members of any collocation group.
Table 42. Collocation options and effects on node data
Collocation option No:
v If a node is not defined as a member of a collocation group: the data for the node is not collocated.
v If a node is defined as a member of a collocation group: the data for the node is not collocated.
Collocation option Group:
v If a node is not defined as a member of a collocation group: the server stores the data for the node on as few volumes in the storage pool as possible.
v If a node is defined as a member of a collocation group: the server stores the data for the node and for other nodes that belong to the same collocation group on as few volumes as possible.
Collocation option Node:
v Whether or not the node is a member of a collocation group, the server stores the data for the node on as few volumes as possible.
Collocation option Filespace:
v Whether or not the node is a member of a collocation group, the server stores the data for the node's file space on as few volumes as possible. If a node has multiple file spaces, the server stores the data for different file spaces on different volumes in the storage pool.
When deciding whether and how to collocate data, do the following steps:
1. Familiarize yourself with the potential advantages and disadvantages of
collocation, in general. For a summary of effects of collocation on operations,
see Table 40 on page 382.
2. If the decision is to collocate, determine how the data is to be organized,
whether by client node, group of client nodes, or file space. If the decision is to
collocate by group, you must decide how to group nodes:
v If the goal is to save space, you might want to group small nodes together to
better use tapes.
v If the goal is potentially faster client restores, group nodes together so that
they fill as many tapes as possible. Doing so increases the probability that
individual node data will be distributed across two or more tapes and that
more tapes can be mounted simultaneously during a multi-session No Query
Restore operation.
v If the goal is to departmentalize data, then you can group nodes by
department.
3. If collocation by group is the wanted result:
a. Define collocation groups with the DEFINE COLLOCGROUP command.
b. Add client nodes to the collocation groups with the DEFINE COLLOCMEMBER
command.
The following query commands are available to help in collocating groups:
QUERY COLLOCGROUP
Displays the collocation groups defined on the server.
QUERY NODE
Displays the collocation group, if any, to which a node belongs.
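For example, the following commands sketch how you might group two hypothetical nodes, NODE1 and NODE2, into a collocation group named DEPT_ENG and then verify the result:
define collocgroup dept_eng
define collocmember dept_eng node1
define collocmember dept_eng node2
query collocgroup dept_eng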
Tip: If you use collocation, but want to reduce the number of media mounts and
use space on sequential volumes more efficiently, you can:
v Define a storage pool hierarchy and policy to require that backed-up, archived,
or space-managed files are stored initially in disk storage pools.
When files are migrated from a disk storage pool, the server attempts to migrate
all files that belong to the client node or collocation group that is using the most
disk space in the storage pool. This process works well with the collocation
option because the server tries to place all of the files from a particular client on
the same sequential-access storage volume.
v Use scratch volumes for sequential-access storage pools to allow the server to
select new volumes for collocation.
v Specify the client option COLLOCATEBYFILESPEC to limit the number of tapes to
which objects associated with one file specification are written. This collocation
option makes collocation by the server more efficient; it does not override
collocation by file space or collocation by node.
For details about the COLLOCATEBYFILESPEC option, see the Backup-Archive Clients
Installation and User's Guide.
When creating collocation groups, keep in mind that the ultimate destination of the
data that belongs to nodes in a collocation group depends on the policy domain to
which nodes belong. For example, suppose that you create a collocation group that
consists of nodes that belong to Policy Domain A. Policy Domain A specifies an
active-data pool as the destination of active data only and has a backup copy
group that specifies a primary storage pool, Primary1, as the destination for active
and inactive data. Other nodes in the same collocation group belong to a domain,
Policy Domain B, that does not specify an active-data pool, but that has a backup
copy group that specifies Primary1 as the destination for active and inactive data.
Primary1 has a designated copy storage pool. The collocation setting on
PRIMARY1, the copy storage pool, and the active-data pool is GROUP.
The server reclaims the space in storage pools based on a reclamation threshold that
you can set for each sequential-access storage pool. When the percentage of space
that can be reclaimed on a volume rises above the reclamation threshold, the
server reclaims the volume.
Restrictions:
v Storage pools defined with the NETAPPDUMP, the CELERRADUMP or the
NDMPDUMP data format cannot be reclaimed. However, you can use the
MOVE DATA command to move data out of a volume so that the volume can
be reused. The volumes in the target storage pool must have the same data
format as the volumes in the source storage pool.
v Storage pools defined with a CENTERA device class cannot be reclaimed.
The server checks whether reclamation is needed at least once per hour and begins
space reclamation for eligible volumes. During space reclamation, the server copies
files that remain on eligible volumes to other volumes. For example, Figure 47 on
page 391 shows that the server consolidates the files from tapes 1, 2, and 3 on tape
4. During reclamation, the server copies the files to volumes in the same storage
pool unless you have specified a reclamation storage pool. Use a reclamation
storage pool to allow automatic reclamation for a storage pool with only one drive.
Remember: To prevent contention for the same tapes, the server does not allow a
reclamation process to start if a DELETE FILESPACE process is active. The server
checks every hour for whether the DELETE FILESPACE process has completed so
that the reclamation process can start. After the DELETE FILESPACE process has
completed, reclamation begins within one hour.
The server also reclaims space within an aggregate. An aggregate is a physical file
that contains multiple logical files that are backed up or archived from a client in a
single transaction.
After the server moves all readable files to other volumes, one of the following
occurs for the reclaimed volume:
v If you have explicitly defined the volume to the storage pool, the volume
becomes available for reuse by that storage pool.
v If the server acquired the volume as a scratch volume, the server deletes the
volume from the Tivoli Storage Manager database.
Volumes that have a device type of SERVER are reclaimed in the same way as
other sequential-access volumes. However, because the volumes are actually data
stored in the storage of another Tivoli Storage Manager server, the reclamation
process can consume network resources. See “Controlling reclamation of virtual
volumes” on page 396 for details about how the server reclaims these types of
volumes.
Volumes in a copy storage pool and active-data pools are reclaimed in the same
manner as a primary storage pool except for the following:
Reclamation thresholds
Space is reclaimable because it is occupied by files that have been expired or
deleted from the Tivoli Storage Manager database, or because the space has never
been used. The reclamation threshold indicates how much reclaimable space a
volume must have before the server reclaims the volume.
The server checks whether reclamation is needed at least once per hour. The lower
the reclamation threshold, the more frequently the server tries to reclaim space.
Frequent reclamation optimizes the use of a sequential-access storage pool’s space,
but can interfere with other processes, such as backups from clients.
If you set the reclamation threshold to 50% or greater, the server can combine the
usable files from two or more volumes onto a single new volume.
For example, if you set the reclamation threshold to 100%, first lower the threshold
to 98%. Volumes that have reclaimable space of 98% or greater are reclaimed by
the server. Lower the threshold again to reclaim more volumes.
If you lower the reclamation threshold while a reclamation process is active, the
reclamation process does not immediately stop. If an on-site volume is being
reclaimed, the server uses the new threshold setting when the process begins to
reclaim the next volume. If off-site volumes are being reclaimed, the server does
not use the new threshold setting during the process that is already running.
For copy storage pools and active-data pools, you can also use the RECLAIM
STGPOOL command to specify the maximum number of off-site storage pool
volumes the server should attempt to reclaim:
reclaim stgpool altpool duration=60 offsitereclaimlimit=230
Do not use this command if you are going to use automatic reclamation for the
storage pool. To prevent automatic reclamation from running, set the RECLAIM
parameter of the storage pool definition to 100.
For details about the RECLAIM STGPOOL command, refer to the Administrator's
Reference.
You can specify one or more reclamation processes for each primary
sequential-access storage pool, copy storage pool, or active-data pool using the
RECLAIMPROCESS parameter on the DEFINE STGPOOL and UPDATE STGPOOL
commands.
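For example, to allow two reclamation processes to run concurrently for a hypothetical storage pool named TAPEPOOL, you might issue a command similar to the following:
update stgpool tapepool reclaimprocess=2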
Each reclamation process requires at least two simultaneous volume mounts (at
least two mount points) and, if the device type is not FILE, at least two drives.
One of the drives is for the input volume in the storage pool being reclaimed. The
other drive is for the output volume in the storage pool to which files are being
moved.
When calculating the number of concurrent processes to run, you must carefully
consider the resources you have available, including the number of storage pools
that will be involved with the reclamation, the number of mount points, the
number of drives that can be dedicated to the operation, and (if appropriate) the
number of mount operators available to manage reclamation requests. The number
of available mount points and drives depends on other server and system activity
and on the mount limits of the device classes for the storage pools that are
involved in the reclamation.
For more information about mount limit, see: “Controlling the number of
simultaneously mounted volumes” on page 213
For example, suppose that you want to reclaim the volumes from two sequential
storage pools simultaneously and that all storage pools involved have the same
device class. Each process requires two mount points and, if the device type is not
FILE, two drives. To run four reclamation processes simultaneously (two for each
storage pool), you need a total of at least eight mount points and eight drives. The
device class for each storage pool must have a mount limit of at least eight.
If the device class for the storage pools being reclaimed does not have enough
mount points or drives, you can use the RECLAIMSTGPOOL parameter to direct
the reclamation to a storage pool with a different device class that has the
additional mount points or drives.
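For example, to direct reclamation of a hypothetical single-drive tape pool named TAPEPOOL to a pool named RECLAIMPOOL that uses a different device class, you might issue a command similar to the following:
update stgpool tapepool reclaimstgpool=reclaimpool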
If the number of reclamation processes you specify is more than the number of
available mount points or drives, the processes that do not obtain mount points or
drives will wait indefinitely or until the other reclamation processes complete and
mount points or drives become available.
The Tivoli Storage Manager server will start the specified number of reclamation
processes regardless of the number of volumes that are eligible for reclamation. For
example, if you specify ten reclamation processes and only six volumes are eligible
for reclamation, the server will start ten processes and four of them will complete
without processing a volume.
When the server reclaims volumes, the server moves the data from volumes in the
original storage pool to volumes in the reclamation storage pool. The server always
uses the reclamation storage pool when one is defined, even when the mount limit
is greater than one.
If the reclamation storage pool does not have enough space to hold all of the data
being reclaimed, the server moves as much of the data as possible into the
reclamation storage pool. Any data that could not be moved to volumes in the
reclamation storage pool still remains on volumes in the original storage pool.
The pool identified as the reclamation storage pool must be a primary sequential
storage pool. The primary purpose of the reclamation storage pool is for temporary
storage of reclaimed data. To ensure that data moved to the reclamation storage
pool eventually moves back into the original storage pool, specify the original
storage pool as the next pool in the storage hierarchy for the reclamation storage pool.
Finally, update the reclamation storage pool so that data migrates back to the tape
storage pool:
update stgpool reclaimpool nextstgpool=tapepool1
Tip:
v In a mixed-media library, reclaiming volumes in a storage pool defined with a
device class with a single mount point (that is, a single drive) requires one of the
following:
– At least one other drive with a compatible read/write format
– Enough disk space to create a storage pool with a device type of FILE
To prevent reclamation of WORM media, storage pools that are assigned to device
classes with a device type of WORM have a default reclamation value of 100.
To allow reclamation, you can set the reclamation value to something lower when
defining or updating the storage pool.
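For example, to permit reclamation of a hypothetical WORM storage pool named WORMPOOL when 90% or more of a volume's space is reclaimable, you might issue a command similar to the following:
update stgpool wormpool reclaim=90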
To control when reclamation starts for these volumes, consider setting the
reclamation threshold to 100% for any primary storage pool that uses virtual
volumes. Lower the reclamation threshold at a time when your network is less
busy, so that the server can reclaim volumes.
For virtual volumes in a copy storage pool or an active-data pool, the server
reclaims a volume as follows:
1. The source server determines which files on the volume are still valid.
2. The source server obtains these valid files from volumes in a primary storage
pool, or if necessary, from removable-media volumes in an on-site copy storage
pool or in an on-site active-data pool. The server can also obtain files from
virtual volumes in a copy storage pool or an active-data pool.
3. The source server writes the files to one or more new virtual volumes in the
copy storage pool or active-data pool and updates its database.
4. The server issues a message indicating that the volume was reclaimed.
For information about using the SERVER device type, see “Using virtual volumes
to store data on another server” on page 763.
For off-site volumes, reclamation can occur when the percentage of unused space
on the volume is greater than the reclaim parameter value. The unused space in
copy storage pool volumes includes both space that has never been used on the
volume and space that has become empty because of file deletion or expiration.
For volumes in active-data pools, reclaimable space also includes inactive versions
of files. Most volumes in copy storage pools and active-data pools might be set to
an access mode of off-site, making them ineligible to be mounted. During
reclamation, the server copies valid files on off-site volumes from the original files
in the primary storage pools. In this way, the server copies valid files on off-site
volumes without having to mount these volumes. For more information, see
“Reclamation of off-site volumes” on page 397.
Reclamation of copy storage pool volumes and active-data pool volumes should be
done periodically to allow the reuse of partially filled volumes that are off-site.
Reclamation can be done automatically by setting the reclamation threshold for the
copy storage pool or the active-data pool to less than 100%.
Virtual Volumes: Virtual volumes (volumes that are stored on another Tivoli
Storage Manager server through the use of a device type of SERVER) cannot be set
to the off-site access mode.
Reclamation of primary storage pool volumes does not affect copy storage pool
files or files in active-data pools.
When an off-site volume is reclaimed, the files on the volume are rewritten to a
read/write volume. Effectively, these files are moved back to the on-site location.
The files may be obtained from the off-site volume after a disaster, if the volume
has not been reused and the database backup that you use for recovery references
the files on the off-site volume.
If you are using the disaster recovery manager, see “Moving copy storage pool and
active-data pool volumes on-site” on page 1076.
Suppose you plan to make daily storage pool backups to a copy storage pool, then
mark all new volumes in the copy storage pool as offsite and send them to the
off-site storage location. This strategy works well, with one consideration, if you are
using automatic reclamation (that is, if the reclamation threshold is less than 100%).
Each day's storage pool backups will create a number of new copy-storage pool
volumes, the last one being only partially filled. If the percentage of empty space
on this partially filled volume is higher than the reclaim percentage, this volume
becomes eligible for reclamation as soon as you mark it off-site. The reclamation
process would cause a new volume to be created with the same files on it. The
volume you take off-site would then be empty according to the Tivoli Storage
Manager database. If you do not recognize what is happening, you could
perpetuate this process by marking the new partially filled volume off-site.
One way to resolve this situation is to keep partially filled volumes on-site until
they fill up. However, this would mean a small amount of your data would be
without an off-site copy for another day.
If you send copy storage pool volumes off-site, it is recommended you control pool
reclamation by using the default value of 100. This turns reclamation off for the
copy storage pool. You can start reclamation processing at desired times by
changing the reclamation threshold for the storage pool. To monitor off-site volume
utilization and help you decide what reclamation threshold to use, enter the
following command:
query volume * access=offsite format=detailed
Depending on your data expiration patterns, you may not need to do reclamation
of off-site volumes each day. You may choose to perform off-site reclamation on a
less frequent basis. For example, suppose you ship copy-storage pool volumes to
and from your off-site storage location once a week. You can run reclamation for
the copy-storage pool weekly, so that as off-site volumes become empty they are
sent back for reuse.
When you do perform reclamation for off-site volumes, the following sequence is
recommended:
1. Back up your primary-storage pools to copy-storage pools or copy the active
data in primary-storage pools to active-data pools.
2. Turn on reclamation for copy-storage pools and active-data pools by lowering
the reclamation threshold for copy-storage pools below 100%. The default for
active-data pools is 60.
3. When reclamation processing completes, turn off reclamation by raising the
reclamation thresholds to 100%.
4. Mark any newly created copy-storage pool volumes and active-data pool
volumes as off-site, and then move them to the off-site location.
This sequence ensures that the files on the new copy-storage pool volumes and
active-data pool volumes are sent off-site, and are not inadvertently kept on-site
because of reclamation.
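The following commands are a sketch of this sequence; the pool names DISKPOOL, ONSITECOPYPOOL, and ACTIVEDATAPOOL and the location string are assumptions for illustration:
backup stgpool diskpool onsitecopypool
copy activedata diskpool activedatapool
update stgpool onsitecopypool reclaim=80
update stgpool activedatapool reclaim=80
After the reclamation processing completes, raise the thresholds and mark the new volumes off-site:
update stgpool onsitecopypool reclaim=100
update stgpool activedatapool reclaim=100
update volume * access=offsite location="vault" wherestgpool=onsitecopypool wherestatus=filling,full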
Alternatively, you can use the following Tivoli Storage Manager SQL SELECT
command to obtain records from the SUMMARY table for the off-site volume
reclamation operation:
select * from summary where activity='OFFSITE RECLAMATION'
Two kinds of records are displayed for the off-site reclamation process. One
volume record is displayed for each reclaimed off-site volume. However, the
volume record does not display the following items:
v The number of examined files.
v The number of affected files.
v The total bytes involved in the operation.
This information is summarized in the statistical summary record for the offsite
reclamation. The statistical summary record displays the following items:
v The number of examined files.
v The number of affected files.
v The total bytes involved in the operation.
v The number of off-site volumes that were processed.
v The number of parallel processes that were used.
v The total amount of time required for the processing.
The order in which off-site volumes are reclaimed is based on the amount of
unused space in a volume. (Unused space includes both space that has never been
used on the volume and space that has become empty because of file deletion.)
Volumes with the largest amount of unused space are reclaimed first.
For example, suppose a copy storage pool contains three volumes: VOL1, VOL2,
and VOL3. VOL1 has the largest amount of unused space, and VOL3 has the least
amount of unused space. Suppose further that the percentage of unused space in
each of the three volumes is greater than the value of the RECLAIM parameter. If
you do not specify a value for the OFFSITERECLAIMLIMIT parameter, all three
volumes will be reclaimed when the reclamation runs. If you specify a value of 2,
only VOL1 and VOL2 will be reclaimed when the reclamation runs. If you specify
a value of 1, only VOL1 will be reclaimed.
As a best practice, delay the reuse of any reclaimed volumes in copy storage pools
and active-data pools for as long as you keep your oldest database backup. For
more information about delaying volume reuse, see “Delaying reuse of volumes
for recovery purposes” on page 958.
If collocation is disabled and reclamation occurs, the server tries to move usable
data to new volumes by using the following volume selection criteria, in the order
shown:
1. The volume that contains the most data
2. Any partially full volume
3. An empty predefined volume
4. An empty scratch volume
If you specify collocation and multiple concurrent processes, the server attempts to
move the files for each collocation group, client node, or client file space onto as
few volumes as possible.
See also “Reducing the time to reclaim tape volumes with high capacity” on page
395.
As your storage environment grows, you may want to consider how policy and
storage pool definitions affect where workstation files are stored. Then you can
define and maintain multiple storage pools in a hierarchy that allows you to
control storage costs by using sequential-access storage pools in addition to disk
storage pools, and still provide appropriate levels of service to users.
To help you determine how to adjust your policies and storage pools, get
information about how much storage is being used (by client node) and for what
purposes in your existing storage pools. For more information on how to do this,
see “Obtaining information about the use of storage space” on page 417.
To estimate the amount of storage space required for each random-access disk
storage pool:
v Determine the amount of disk space needed for different purposes:
– For backup storage pools, provide enough disk space to support efficient
daily incremental backups.
– For archive storage pools, provide sufficient space for a user to archive a
moderate size file system without causing migration from the disk storage
pool to occur.
– For storage pools for space-managed files, provide enough disk space to
support the daily space-management load from HSM clients, without causing
migration from the disk storage pool to occur.
To estimate the total amount of space needed for all backed-up files stored in a
single random-access (disk) storage pool, use the following formula:
Backup space = WkstSize * Utilization * VersionExpansion * NumWkst
where:
Backup Space
The total amount of storage pool disk space needed.
WkstSize
The average data storage capacity of a workstation. For example, if the
typical workstation at your installation has a 4 GB hard drive, then the
average workstation storage capacity is 4 GB.
Utilization
An estimate of the fraction of each workstation disk space used, in the
range 0 to 1. For example, if you expect that disks on workstations are 75%
full, then use 0.75.
VersionExpansion
An expansion factor (greater than 1) that takes into account the additional
backup versions, as defined in the copy group. A rough estimate allows 5%
additional files for each backup copy. For example, for a version limit of 2,
use 1.05, and for a version limit of 3, use 1.10.
NumWkst
The estimated total number of workstations that the server supports.
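For example, an illustrative calculation with an assumed population of 500 workstations, each with a 4 GB drive that is 75% full and a version limit of 2, gives:
Backup space = 4 GB * 0.75 * 1.05 * 500 = 1575 GB (approximately 1.6 TB)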
If clients use compression, the amount of space required may be less than the
amount calculated, depending on whether the data is compressible.
Work with policy administrators to calculate this percentage based on the number
and type of archive copy groups defined. For example, if policy administrators
have defined archive copy groups for only half of the policy domains in your
enterprise, then estimate that you need less than 50% of the amount of space you
have defined for backed-up files.
Figure 48 shows a standard report with all storage pools defined to the system. To
monitor the use of storage pool space, review the Estimated Capacity and Pct Util
columns.
Estimated Capacity
Specifies the space available in the storage pool in megabytes (M) or
gigabytes (G).
For a disk storage pool, this value reflects the total amount of available
space in the storage pool, including any volumes that are varied offline.
For sequential-access storage pools, estimated capacity is the total
estimated space of all the sequential-access volumes in the storage pool,
regardless of their access mode. At least one volume must be used in a
sequential-access storage pool (either a scratch volume or a private
volume) to calculate estimated capacity.
For tape and FILE, the estimated capacity for the storage pool includes the
following factors:
v The capacity of all the scratch volumes that the storage pool already
acquired or can acquire. The number of scratch volumes is defined by
the MAXSCRATCH parameter on the DEFINE STGPOOL or UPDATE STGPOOL
command.
v The capacity of all the private volumes that are defined to the storage
pool using the DEFINE VOLUME command.
The calculations for estimated capacity depend on the availability of the
storage for the device assigned to the storage pool.
Tape volumes in a sequential-access storage pool, unlike those in a disk
storage pool, do not contain a precisely known amount of space. Data is
written to a tape volume as necessary until the end of the volume is
reached. For this reason, the estimated capacity is truly an estimate of the
amount of available space in a sequential-access storage pool. This
characteristic does not apply to FILE volumes in sequential-access storage
pools.
Pct Util
Specifies, as a percentage, the space used in each storage pool.
Note: The value for Pct Util can be higher than the value for Pct Migr if
you query for storage pool information while a client transaction (such as a
backup) is in progress. The value for Pct Util is determined by the amount
of space actually allocated (while the transaction is in progress). The value
for Pct Migr represents only the space occupied by committed files. At the
end of the transaction, Pct Util and Pct Migr become synchronized.
For sequential-access storage pools, this value is the percentage of the total
bytes of storage available that are currently being used to store active data
(data that is not expired). Because the server can only estimate the
available capacity of a sequential-access storage pool, this percentage also
reflects an estimate of the actual utilization of the storage pool.
Figure 48 on page 404 shows that the estimated capacity for a disk storage pool
named BACKUPPOOL is 80 MB, which is the amount of available space on disk
storage. More than half (51.6%) of the available space is occupied by either backup
files or cached copies of backup files.
The estimated capacity for the tape storage pool named BACKTAPE is 180 MB,
which is the total estimated space available on all tape volumes in the storage
pool. This report shows that 85% of the estimated space is currently being used to
store workstation files.
Note: This report also shows that volumes have not yet been defined to the
ARCHIVEPOOL and ENGBACK1 storage pools, because the storage pools show
an estimated capacity of 0.0 MB.
You can query the server for information about storage pool volumes:
v General information about a volume, for example:
– Current access mode and status of the volume
– Amount of available space on the volume
– Location
v Contents of a storage pool volume (user files on the volume)
v The volumes that are used by a client node
To request general information about all volumes defined to the server, enter:
query volume
Figure 49 on page 407 shows an example of the output of this standard query. The
example illustrates that data is being stored on the 8 mm tape volume named
WREN01, as well as on several other volumes in various storage pools.
To query the server for a detailed report on volume WREN01 in the storage pool
named TAPEPOOL, enter:
query volume wren01 format=detailed
Figure 50 shows the output of this detailed query. Table 43 gives some suggestions
on how you can use the information.
Check the Access field to determine whether files can be read from or written to this
volume.
The Write Pass Number indicates the number of times the volume has been
written to, starting from the beginning of the volume. A value of one indicates
that a volume is being used for the first time.
In this example, WREN01 has a write pass number of two, which indicates space
on this volume may have been reclaimed or deleted once before.
Compare this value to the specifications provided with the media that you are
using. The manufacturer may recommend a maximum number of write passes
for some types of tape media. You may need to retire your tape volumes after
reaching the maximum passes to better ensure the integrity of your data. To
retire a volume, move the data off the volume by using the MOVE DATA
command. See “Moving data from one volume to another volume” on page 421.
Use the Number of Times Mounted, the Approx. Date Last Written, and the Approx.
Date Last Read to help you estimate the life of the volume. For example, if more
than six months have passed since the last time this volume has been written to
or read from, audit the volume to ensure that files can still be accessed. See
“Auditing storage pool volumes” on page 958 for information about auditing a
volume.
The number given in the field, Number of Times Mounted, is a count of the
number of times that the server has opened the volume for use. The number of
times that the server has opened the volume is not always the same as the
number of times that the volume has been physically mounted in a drive. After a
volume is physically mounted, the server can open the same volume multiple
times for different operations, for example for different client backup sessions.
Use the Location field to determine the location of a volume in a sequential-access
storage pool. When you define or update a sequential-access volume, you can give
location information for the volume. The detailed query displays this location name.
The location information can be useful to help you track volumes (for example,
off-site volumes in copy storage pools or active-data pools).
Use the Date Became Pending field to determine whether a volume in a
sequential-access storage pool is waiting for the reuse delay period to expire. A
sequential-access volume is placed in the pending state after the last file is deleted
or moved from the volume. All the files that the pending volume had contained
were expired or deleted, or were moved from the volume. Volumes remain in the
pending state for as long as specified with the REUSEDELAY parameter for the
storage pool to which the volume belongs.
Because the server tracks the contents of a storage volume through its database,
the server does not need to access the requested volume to determine its contents.
To produce a report that shows the contents of a volume, issue the QUERY
CONTENT command.
This report can be extremely large and may take a long time to produce. To reduce
the size of this report, narrow your search by selecting one or all of the following
search criteria:
Node name
Name of the node whose files you want to include in the query.
File space name
Names of file spaces to include in the query. File space names are
case-sensitive and must be entered exactly as they are known to the server.
Use the QUERY FILESPACE command to find the correct capitalization.
Number of files to be displayed
Enter a positive integer, such as 10, to list the first ten files stored on the
volume. Enter a negative integer, such as -15, to list the last fifteen files
stored on the volume.
Filetype
Specifies which types of files to include: backup versions, archive copies,
space-managed files, or a combination of these. If the volume being
queried is assigned to an active-data pool, the only valid values are ANY
and Backup.
Format of how the information is displayed
Standard or detailed information for the specified volume.
Damaged
Specifies whether to restrict the query output either to files that are known
to be damaged, or to files that are not known to be damaged.
Copied
Specifies whether to restrict the query output to either files that are backed
up to a copy storage pool or files that are not backed up to a copy storage pool.
Note: There are several reasons why a file might have no usable copy in a
copy storage pool:
The file was recently added to the volume and has not yet been backed
up to a copy storage pool
The file should be copied the next time the storage pool is backed
up.
The file is damaged
To determine whether the file is damaged, issue the QUERY
CONTENT command, specifying the DAMAGED=YES parameter.
The volume that contains the files is damaged
To determine which volumes contain damaged files, issue the
following command:
select * from contents where damaged=yes
The file is segmented across multiple volumes, and one or more of the
other volumes is damaged
To determine whether the file is segmented, issue the QUERY
CONTENT command, specifying the FORMAT=DETAILED
parameter. If the file is segmented, issue the following command to
determine whether any of the volumes containing the additional
file segments are damaged:
select volume_name from contents where damaged=yes and
file_name like '%filename%'
For more information about using the SELECT command, see the
Administrator's Reference.
A standard report about the contents of a volume displays basic information such
as the names of files.
To view the first seven backup files on volume WREN01 from file space /usr on
client node TOMC, for example, enter:
query content wren01 node=tomc filespace=/usr count=7 type=backup
Figure 51 displays a standard report which shows the first seven files from file
space /usr on TOMC stored in WREN01.
To display detailed information about the files stored on volume VOL1, enter:
query content vol1 format=detailed
Figure 52 on page 413 displays a detailed report that shows the files stored on
VOL1. The report lists logical files and shows whether each file is part of an
aggregate. If a logical file is stored as part of an aggregate, the information in the
Segment Number, Stored Size, and Cached Copy? fields apply to the aggregate,
not to the individual logical file.
If a logical file is part of an aggregate, the Aggregated? field shows the sequence
number of the logical file within the aggregate. For example, the Aggregated? field
contains the value 2/4 for the file AB0CTGLO.IDE, meaning that this file is the
second of four files in the aggregate. All logical files that are part of an aggregate
are included in the report. An aggregate can be stored on more than one volume,
and therefore not all of the logical files in the report may actually be stored on the
volume being queried.
For disk volumes, the Cached Copy? field identifies whether the file is a cached
copy of a file that has been migrated to the next storage pool in the hierarchy.
The SELECT command queries the VOLUMEUSAGE table in the Tivoli Storage
Manager database. For example, to get a list of volumes used by the EXCH1 client
node in the TAPEPOOL storage pool, enter the following command:
select volume_name from volumeusage where node_name='EXCH1' and
stgpool_name='TAPEPOOL'
For more information about using the SELECT command, see the Administrator's
Reference.
Four fields on the standard storage-pool report provide you with information
about the migration process. They include:
Pct Migr
Specifies the percentage of data in each storage pool that can be migrated.
This value is used to determine when to start or stop migration.
For random-access and sequential-access disk storage pools, this value
represents the amount of disk space occupied by backed-up, archived, or
space-managed files that can be migrated to another storage pool. The
calculation for random-access disk storage pools excludes cached data, but
includes files on volumes that are varied offline.
For sequential-access tape and optical storage pools, this value is the
percentage of the total volumes in the storage pool that actually contain
data at the moment. For example, assume a storage pool has four explicitly
defined volumes, and a maximum scratch value of six volumes. If only
two volumes actually contain data at the moment, then Pct Migr is 20%.
This field is blank for copy storage pools and active-data pools.
High Mig Pct
Specifies when the server can begin migrating data from this storage pool.
Migration can begin when the percentage of data that can be migrated
reaches this threshold. (This field is blank for copy storage pools and
active-data pools.)
Low Mig Pct
Specifies when the server can stop migrating data from this storage pool.
Migration can end when the percentage of data that can be migrated falls
below this threshold. (This field is blank for copy storage pools and
active-data pools.)
Next Storage Pool
Specifies the primary storage pool destination to which data is migrated.
(This field is blank for copy storage pools and active-data pools.)
Figure 48 on page 404 shows that the migration thresholds for BACKUPPOOL
storage pool are set to 50% for the high migration threshold and 30% for the low
migration threshold.
When the amount of migratable data stored in the BACKUPPOOL storage pool
reaches 50%, the server can begin to migrate files to BACKTAPE.
See Figure 53 on page 415 for an example of the results of this command.
If caching is on for a disk storage pool and files are migrated, the Pct Util value
does not change because the cached files still occupy space in the disk storage pool.
You can query the server to monitor the migration process by entering:
query process
Tip: Do this only if you received an out-of-space message for the storage pool to
which data is being migrated.
The Pct Util value includes cached data on a volume (when cache is enabled) and
the Pct Migr value excludes cached data. Therefore, when cache is enabled and
migration occurs, the Pct Migr value decreases while the Pct Util value remains the
same. The Pct Util value remains the same because the migrated data remains on
the volume as cached data. In this case, the Pct Util value only decreases when the
cached data expires.
If you update a storage pool from CACHE=YES to CACHE=NO, the cached files
will not disappear immediately. The Pct Util value will be unchanged. The cache
space will be reclaimed over time as the server needs the space, and no additional
cached files will be created.
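For example, to stop the creation of new cached copies in a hypothetical disk storage pool named BACKUPPOOL, you might issue a command similar to the following:
update stgpool backuppool cache=no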
Figure 56 on page 417 displays a detailed report for the storage pool.
When Cache Migrated Files? is set to Yes, the value for Pct Util should not change
because of migration, because cached copies of files migrated to the next storage
pool remain in disk storage.
This example shows that utilization remains at 42%, even after files have been
migrated to the BACKTAPE storage pool, and the current amount of data eligible
for migration is 29.6%.
When Cache Migrated Files? is set to No, the value for Pct Util more closely
matches the value for Pct Migr because cached copies are not retained in disk
storage.
Each report gives two measures of the space in use by a storage pool:
v Logical space occupied
The amount of space used for logical files. A logical file is a client file. A logical
file is stored either as a single physical file, or in an aggregate with other logical
files. The logical space occupied in active-data pools includes the space occupied
by inactive logical files. Inactive logical files in active-data pools are removed by
reclamation.
v Physical space occupied
The amount of space used for physical files. A physical file is either a single
logical file, or an aggregate composed of logical files.
An aggregate might contain empty space that was used by logical files that are
now expired or deleted, or that were deactivated in active-data pools. Therefore,
the amount of space used by physical files is equal to or greater than the space
used by logical files. The difference gives you a measure of how much unused
space any aggregates may have. The unused space can be reclaimed in
sequential storage pools.
You can also use this report to evaluate the average size of workstation files stored
in server storage.
To determine the amount of server storage space used by the /home file space
belonging to the client node MIKE, for example, enter:
query occupancy mike /home
File space names are case-sensitive and must be entered exactly as they are known
to the server. To determine the correct capitalization, issue the QUERY FILESPACE
command. For more information, see “Managing file spaces” on page 474.
Figure 57 shows the results of the query. The report shows the number of files
backed up, archived, or migrated from the /home file space belonging to MIKE.
The report also shows how much space is occupied in each storage pool.
If you back up the ENGBACK1 storage pool to a copy storage pool, the copy
storage pool would also be listed in the report. To determine how many of the
client node's files in the primary storage pool have been backed up to a copy
storage pool, compare the number of files in each pool type for the client node.
Node Name        Type   Filespace   Storage     Number of   Physical   Logical
                        Name        Pool Name   Files       Space      Space
                                                            Occupied   Occupied
                                                            (MB)       (MB)
---------------  ----   ---------   ---------   ---------   --------   --------
MIKE             Bkup   /home       ENGBACK1    513         3.52       3.01
For details about the QUERY NODEDATA command, refer to the Administrator's
Reference.
To query the server for the amount of data stored in backup tape storage pools
belonging to the TAPECLASS device class, for example, enter:
query occupancy devclass=tapeclass
Figure 58 displays a report on the occupancy of tape storage pools assigned to the
TAPECLASS device class.
Tip: For archived data, you might see “(archive)” in the Filespace Name column
instead of a file space name. This means that the data was archived before
collocation by file space was supported by the server.
For example, to request a report about backup versions stored in the disk storage
pool named BACKUPPOOL, enter:
query occupancy stgpool=backuppool type=backup
Figure 59 displays a report on the amount of server storage used for backed-up
files.
You can use this average to estimate the capacity required for additional storage
pools that are defined to the server.
For information about planning storage space, see “Estimating space needs for
storage pools” on page 401 and “Estimating space for archived files in
random-access storage pools” on page 402.
To request information about the amount of free disk space in each directory for all
device classes with a device type of FILE, issue the QUERY DIRSPACE command.
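For example, to display the free disk space for all FILE device classes, enter:
query dirspace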
Figure 60. A report of the free disk space for all device classes of device type FILE
To obtain the amount of free space associated with a particular device class, issue
the following command:
query dirspace device_class_name
During the data movement process, users cannot access the volume to restore or
retrieve files, and no new files can be written to the volume.
Remember:
v Files in a copy storage pool or an active-data pool do not move when primary
files are moved.
v You cannot move data into or out of a storage pool defined with a CENTERA
device class.
v In addition to moving data from volumes in storage pools that have NATIVE or
NONBLOCK data formats, you can also move data from volumes in storage
pools that have NDMP data formats (NETAPPDUMP, CELERRADUMP, or
NDMPDUMP). The target storage pool must have the same data format as the
source storage pool. If you are moving data out of a storage pool for the
purpose of upgrading to new tape technology, the target primary storage pool
must be associated with a library that has the new device for the tape drives.
Moving files from one volume to other volumes in the same storage pool is useful:
v When you want to free up all space on a volume so that it can be deleted from
the Tivoli Storage Manager server
See “Deleting storage pool volumes” on page 433 for information about deleting
backed-up, archived, or space-managed data before you delete a volume from a
storage pool.
v When you need to salvage readable files from a volume that has been damaged
v When you want to delete cached files from disk volumes
If you want to force the removal of cached files, you can delete them by moving
data from one volume to another volume. During the move process, the server
deletes cached files remaining on disk volumes.
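For example, to move the files on disk volume D:\STORAGE\VOL3 to other volumes in
the same storage pool, and thereby remove any cached copies on that volume, a
command of the following form can be used (this volume name also appears in the
examples later in this section):
move data d:\storage\vol3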
If you move data between volumes within the same storage pool and you run out
of space in the storage pool before all data is moved from the source volume, then
you cannot empty the source volume completely. In this case, consider moving
data to available space in another storage pool as described in “Data movement to
a different storage pool.”
Remember: Data cannot be moved from a primary storage pool to a copy storage
pool or to an active-data pool. Data in a copy storage pool or an active-data pool
cannot be moved to another storage pool.
You can move data from random-access storage pools to sequential-access storage
pools. For example, if you have a damaged disk volume and you have a limited
amount of disk storage space, you could move all files from the disk volume to a
tape storage pool. Moving files from a disk volume to a sequential storage pool
may require many volume mount operations if the target storage pool is
collocated. Ensure that you have sufficient personnel and media to move files from
disk to sequential storage.
When a data move from a shred pool is complete, the original data is shredded.
However, if the destination is not another shred pool, you must set the
SHREDTONOSHRED parameter to YES to force the movement to occur. If this
value is not specified, the server issues an error message and does not allow the
data to be moved. See “Securing sensitive client data” on page 563 for more
information about shredding.
Processing of the MOVE DATA command for volumes in copy storage pools and
active-data pools is similar to that of primary-storage pools, with the following
exceptions:
v Volumes in copy-storage pools and active-data pools might be set to an access
mode of offsite, making them ineligible to be mounted. During processing of the
MOVE DATA command, valid files on off-site volumes are copied from the
original files in the primary-storage pools. In this way, valid files on off-site
volumes are copied without having to mount these volumes. These new copies
of the files are written to another volume in the copy-storage pool or active-data
pool.
v With the MOVE DATA command, you can move data from any primary-storage
pool volume to any primary-storage pool. However, you can move data from a
copy-storage pool volume only to another volume within the same copy storage
pool. Similarly, you can move data from an active-data pool volume only to
another volume within the same active-data pool.
When you move files from a volume marked as off-site, the server performs the
following actions:
1. Determines which files are still active on the volume from which you are
moving data
2. Obtains these active files from a primary-storage pool or from another
copy-storage pool or active-data pool
3. Copies the files to one or more volumes in the destination copy-storage pool or
active-data pool
Processing of the MOVE DATA command for primary-storage pool volumes does
not affect copy-storage pool or active-data pool files.
Moving data
You can move data using the MOVE DATA command. Before moving data,
however, take steps to ensure that the move operation succeeds.
When you move data from a volume, the server starts a background process and
sends informational messages, such as:
ANR1140I Move Data process started for volume D:\STORAGE\VOL3
(process ID 32).
Remember:
v A volume might not be totally empty after a move data operation completes. For
example, the server may be unable to relocate one or more files to another
volume because of input/output errors on the device or because errors were
found in the file. To delete the volume and any remaining files that had I/O or
other errors, issue the DELETE VOLUME command with DISCARDDATA=YES.
v In addition to moving data from volumes in storage pools that have NATIVE or
NONBLOCK data formats, you can also move data from volumes in storage
pools that have NDMP data formats (NETAPPDUMP, CELERRADUMP, or
NDMPDUMP). The target storage pool must have the same data format as the
source storage pool. If you are moving data out of a storage pool for the
purpose of upgrading to new tape technology, the target primary storage pool
must be associated with a library that has the new device for the tape drives.
Figure 61 on page 425 shows an example of the report that you receive about the
data movement process.
Remember:
1. Reclaiming empty space in NDMP-generated images is not an issue because
NDMP-generated images are not aggregated.
2. Reconstruction removes inactive backup files in active-data pools. Specifying
RECONSTRUCT=NO when moving data from volumes in an active-data pool
prevents the inactive backup files from being removed.
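As an illustration of the second point, data can be moved from an active-data pool
volume without removing inactive backup files by specifying the parameter
explicitly; the volume name shown is a placeholder:
move data adpvol01 reconstruct=no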
For example, to see how much data has moved from the source volume in the
move operation example, enter:
query volume d:\storage\vol3 stgpool=backuppool
Near the beginning of the move process, querying the volume from which data is
being moved gives the following results:
Volume Name        Storage      Device       Estimated   Pct    Volume
                   Pool Name    Class Name   Capacity    Util   Status
----------------   ----------   ----------   ---------   ----   --------
D:\STORAGE\VOL3    BACKUPPOOL   DISK         15.0 M      59.9   On-Line
Querying the volume to which data is being moved (VOL1, according to the
process query output) gives the following results:
Volume Name        Storage      Device       Estimated   Pct    Volume
                   Pool Name    Class Name   Capacity    Util   Status
----------------   ----------   ----------   ---------   ----   --------
VOL1               STGTMP1      8500DEV      4.8 G       0.3    Filling
At the end of the move process, querying the volume from which data was moved
gives the following results:
Volume Name        Storage      Device       Estimated   Pct    Volume
                   Pool Name    Class Name   Capacity    Util   Status
----------------   ----------   ----------   ---------   ----   --------
D:\STORAGE\VOL3    BACKUPPOOL   DISK         15.0 M      0.0    On-Line
When the source storage pool is a primary storage pool, you can move data to
other volumes within the same pool or to another primary storage pool. When the
source storage pool is a copy storage pool, data can only be moved to other
volumes within that storage pool. When the source storage pool is an active-data
pool, data can only be moved to other volumes within that same storage pool.
Tips:
v In addition to moving data from volumes in storage pools that have NATIVE or
NONBLOCK data formats, you can also move data from volumes in storage
pools that have NDMP data formats (NETAPPDUMP, CELERRADUMP, or
NDMPDUMP). The target storage pool must have the same data format as the
source storage pool.
v If you are moving files within the same storage pool, there must be volumes
available that do not contain the data you are moving. That is, the server cannot
use a destination volume containing data that will need to be moved.
v When moving data from volumes in an active-data pool, you have the option of
reconstructing file aggregates during data movement. Reconstruction removes
inactive backup files in the pool. Specifying no reconstruction prevents the
inactive files from being removed.
v You cannot move node data into or out of a storage pool defined with a
CENTERA device class.
Best practice: Avoid movement of data into, out of, or within a storage pool while
MOVE NODEDATA is concurrently processing data on the same storage pool.
For example, consider moving data for a single node and restricting the data
movement to files in a specific non-Unicode file space (for this example, \\eng\e$)
as well as a specific Unicode file space (for this example, \\eng\d$ ). The node
name owning the data is ENGINEERING and it currently has data stored in the
ENGPOOL storage pool. After the move is complete, the data is located in the
destination storage pool BACKUPPOOL. To move the data enter the following:
move nodedata engineering fromstgpool=engpool
tostgpool=backuppool filespace=\\eng\e$ unifilespace=\\eng\d$
Another example is to move data for a single node named MARKETING from all
primary sequential-access storage pools to a random-access storage pool named
DISKPOOL. First, obtain a list of storage pools that contain data for node
MARKETING by issuing either of the following commands:
query occupancy marketing
or
SELECT * from OCCUPANCY where node_name='MARKETING';
For this example, the resulting storage pool names all begin with the
characters FALLPLAN. To move the data, repeat the following command for every
instance of FALLPLAN. The following example displays the command for
FALLPLAN3:
move nodedata marketing fromstgpool=fallplan3
tostgpool=diskpool
A final example shows moving both non-Unicode and Unicode file spaces for a
node. For node NOAH move non-Unicode file space \\servtuc\d$ and Unicode
Figure 62 shows an example of the report that you receive about the data
movement process.
When you rename a storage pool, any administrators with restricted storage
privilege for the storage pool automatically have restricted storage privilege to the
storage pool under the new name. If the renamed storage pool is in a storage pool
hierarchy, the hierarchy is preserved.
Copy groups and management classes might contain a storage pool name as a
destination. If you rename a storage pool used as a destination, the destination in a
copy group or management class is not changed to the new name of the storage
pool. To continue to use the policy with the renamed storage pool as a destination,
you must change the destination in the copy groups and management classes. You
then activate the policy set with the changed destinations.
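For example, a storage pool can be renamed with a command of the following form;
the pool names shown are hypothetical:
rename stgpool oldpool newpool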
To define a copy storage pool, issue the DEFINE STGPOOL command and specify
POOLTYPE=COPY. To define an active-data pool, issue the DEFINE STGPOOL
command and specify POOLTYPE=ACTIVEDATA. When you define a copy
storage pool or an active-data pool, be prepared to provide some or all of the
information in Table 44.
Remember:
1. To back up a primary storage pool to an active-data pool, the data format must
be NATIVE or NONBLOCK. You can back up a primary storage pool to a copy
storage pool using NATIVE, NONBLOCK, or any of the NDMP formats. The
target storage pool must have the same data format as the source storage pool.
2. You cannot define copy storage pools or active-data pools for a Centera device
class.
Table 44. Information for defining copy storage pools and active-data pools
Information              Explanation
Device class             Specifies the name of the device class assigned for the
                         storage pool. This is a required parameter.
Pool type                Specifies that you want to define a copy storage pool or an
                         active-data pool. This is a required parameter. You cannot
                         change the pool type when updating a storage pool.
Maximum number of        For automated libraries, set this value equal to the physical
scratch volumes          capacity of the library. For details, see “Adding scratch
                         volumes to automated library devices” on page 169.
Collocation              When collocation is enabled, the server attempts to keep all
                         files belonging to a group of client nodes, a single client
                         node, or a client file space on a minimal number of
                         sequential-access storage volumes. See “Collocation of copy
                         storage pools and active-data pools” on page 387.
Reclamation threshold    Specifies when to initiate reclamation of volumes in the copy
                         storage pool or active-data pool. Reclamation is a process
                         that moves any remaining files from one volume to another
                         volume, thus making the original volume available for reuse.
                         A volume is eligible for reclamation when the percentage of
                         unused space on the volume is greater than the reclaim
                         parameter value.
For more information, see “Backing up primary storage pools” on page 954.
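For example, a copy storage pool named DISASTER-RECOVERY (used in the next
paragraph) might be defined with a command of the following form; the device class
name TAPECLASS and the MAXSCRATCH value are illustrative assumptions:
define stgpool disaster-recovery tapeclass pooltype=copy maxscratch=100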
To store data in the new storage pool, you must back up the primary storage pools
(BACKUPPOOL, ARCHIVEPOOL, and SPACEMGPOOL) to the
DISASTER-RECOVERY pool. See “Backing up primary storage pools” on page 954.
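For illustration, those backups can be run with commands of the following form:
backup stgpool backuppool disaster-recovery
backup stgpool archivepool disaster-recovery
backup stgpool spacemgpool disaster-recovery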
If files that are not cached are deleted from a primary storage pool volume, any
copies of these files in copy storage pools and active-data pools will also be
deleted.
Files in a copy storage pool or an active-data pool are never deleted unless:
v Data retention is off, or the files have met their retention criterion.
You cannot delete a Centera volume if the data in the volume was stored using a
server with retention protection enabled and if the data has not expired.
Tip: If you are deleting many volumes, delete the volumes one at a time.
Concurrently deleting many volumes can adversely affect server performance.
You can delete empty storage pool volumes. For example, to delete an empty
volume named WREN03, enter:
delete volume wren03
Volumes in a shred pool (DISK pools only) are not deleted until shredding is
completed. See “Securing sensitive client data” on page 563 for more information.
After you respond yes, the server generates a background process to delete the
volume.
Tips:
1. The Tivoli Storage Manager server will not delete archive files that are on
deletion hold.
2. If archive retention protection is enabled, the Tivoli Storage Manager server
will delete only archive files whose retention period has expired.
3. Volumes in a shred pool (DISK pools only) are not deleted until the data on
them is shredded. See “Securing sensitive client data” on page 563 for more
information.
For example, to discard all data from volume WREN03 and delete the volume
from its storage pool, enter:
delete volume wren03 discarddata=yes
The server generates a background process and deletes data in a series of batch
database transactions. After all files have been deleted from the volume, the server
deletes the volume from the storage pool. If the volume deletion process is
canceled or if a system failure occurs, the volume might still contain data. Reissue
the DELETE VOLUME command and explicitly request the server to discard the
remaining files on the volume.
To delete a volume but not the files it contains, move the files to another volume.
See “Moving data from one volume to another volume” on page 421 for
information about moving data from one volume to another volume.
Residual data: Even after you move data, residual data may remain on the
volume because of I/O errors or because of files that were previously marked as
damaged. (Tivoli Storage Manager does not move files that are marked as
damaged.) To delete any volume that contains residual data that cannot be moved,
you must explicitly specify that files should be discarded from the volume.
When the Tivoli Storage Manager server is installed, the Tivoli Storage Manager
backup-archive client and the administrative client are installed on the same computer
by default. However, many installations of Tivoli Storage Manager include remote
clients, and application clients on other servers, often running on different
operating systems.
The term “nodes” indicates the following types of clients and servers that you can
register as client nodes:
v Tivoli Storage Manager backup-archive clients
v Tivoli Storage Manager application clients, such as Tivoli Storage Manager for
Mail clients
v Tivoli Storage Manager for Space Management (HSM client)
v Tivoli Storage Manager source server registered as a node on a target server
v Network-attached storage (NAS) file server using NDMP support
Each node must be registered with the server and requires an options file with a
pointer to the server.
For details on many of the topics in this chapter, refer to the Backup-Archive Clients
Installation and User's Guide.
Related concepts:
“Accepting default closed registration or enabling open registration” on page 440
“Overview of clients and servers as nodes”
Related tasks:
“Installing client node software” on page 440
“Registering nodes with the server” on page 440
Related reference:
“Connecting nodes with the server” on page 444
“Comparing network-attached nodes to local nodes” on page 448
The following are the methods for installing client node software:
v Install directly from the CD
v Transfer installable files from the CD to a target server
v Create client software images and install the images
You can also install using the silent installation technique. For backup-archive
clients, use the client auto deployment feature in the Administration Center. This
feature deploys client code to existing backup-archive clients.
Tip: You can connect to a Web backup-archive client directly from a supported
Web browser or from a hyperlink in the Web administrative Enterprise Console. To
do so, specify the node's URL and port number during the registration process or
update the node later with this information.
Related concepts:
“Overview of remote access to web backup-archive clients” on page 469
The administrator must register client nodes with the server when registration is
set to closed. Closed registration is the default.
Windows users can also use the Minimal Configuration option in the Initial
Configuration Task List.
With open registration, the server automatically assigns the node to the
STANDARD policy domain. The server, by default, allows users to delete archive
copies, but not backups in server storage. Nodes are registered with the default
authentication method, which is defined on the server with the SET
DEFAULTAUTHENTICATION command. The default is LOCAL.
1. Enable open registration by entering the following command from an
administrative client command line:
set registration open
For examples and a list of open registration defaults, see the Administrator's
Reference.
2. To change the defaults for a registered node, issue the UPDATE NODE command.
Remember: Use either client compression or drive compression, but not both.
Related concepts:
“Data compression” on page 231
You can complete this task by using the Tivoli Storage Manager Management
Console and completing the following steps:
1. Double-click the desktop icon for the Tivoli Storage Manager Management
Console.
2. Expand the tree until the Tivoli Storage Manager server you want to work with
is displayed. Expand the server and click Wizards. The wizards list is
generated and displayed.
3. Select the Client Node Configuration wizard and click Start. The Client Node
Configuration wizard is displayed.
4. Progress through the instructions in the wizard.
Related tasks:
“Adding clients through the administrative command line client” on page 449
Specify an option set for a node when you register or update the node. Issue the
following example command:
register node mike pass2eng cloptset=engbackup
The REGISTER NODE and UPDATE NODE commands have a default parameter of
TYPE=CLIENT.
To register a NAS file server as a node, specify the TYPE=NAS parameter. Issue
the following command, which is an example, to register a NAS file server with a
node name of NASXYZ and a password of PW4PW:
register node nasxyz pw4pw type=nas
You must use this same node name when you later define the corresponding data
mover name.
Related reference:
Chapter 10, “Using NDMP for operations with NAS file servers,” on page 233
To use virtual volumes, register the source server as a client node on the target
server.
The REGISTER NODE and UPDATE NODE commands have a default parameter of
TYPE=CLIENT.
An administrator can issue the REGISTER NODE command to register the workstation
as a node.
You can determine the compression by using one of the following methods:
v An administrator during registration who can:
– Require that files are compressed
– Restrict the client from compressing files
– Allow the application user or the client user to determine the compression
status
v The client options file. If an administrator does not set compression on or off,
Tivoli Storage Manager checks the compression status that is set in the client
options file. The client options file is required, but the API user configuration file
is optional.
v One of the object attributes. When an application sends an object to the server,
some object attributes can be specified. One of the object attributes is a flag that
indicates whether or not the data has already been compressed. If the
application turns this flag on during either a backup or an archive operation,
then Tivoli Storage Manager does not compress the data a second time. This
process overrides what the administrator sets during registration.
For more information on setting options for the API and on controlling
compression, see IBM Tivoli Storage Manager Using the Application Program Interface.
The administrator who sets the file deletion option can use the following methods:
v An administrator during registration
If an administrator does not allow file deletion, then an administrator must
delete objects or file spaces that are associated with the workstation from server
storage.
If an administrator allows file deletion, then Tivoli Storage Manager checks the
client options file.
v An application using the Tivoli Storage Manager API deletion program calls
If the application uses the dsmDeleteObj or dsmDeleteFS program call, then
objects or files are marked for deletion when the application is executed.
On the Windows platform, you can use a wizard to work with the client options
file.
Important: If any changes are made to the dsm.opt file, the client must be restarted
for the changes in the options file to take effect.
Figure 63 shows the contents of a client options file that is configured to connect to
the server using TCP/IP. The communication options specified in the client options
file satisfy the minimum requirements for the node to connect to the server.
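As a sketch, a minimal client options file of this kind might contain entries similar
to the following; the server address, port, and node name are placeholder values:
COMMMETHOD        TCPIP
TCPSERVERADDRESS  server.example.com
TCPPORT           1500
NODENAME          client1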
Many non-required options are available that can be set at any time. These options
control the behavior of Tivoli Storage Manager processing.
Refer to Backup-Archive Clients Installation and User's Guide for more information
about non-required client options.
Related concepts:
“Creating or updating a client options file” on page 446
For nodes and host servers that run Windows, one more step is required.
Administrators must update passwords by using the IBM Tivoli Storage Manager
Scheduler Configuration Utility (DSMCUTIL). This utility allows administrators to
store Tivoli Storage Manager passwords in the Windows registry. After a password
is stored in the registry, the scheduler can run as a protected account under its own
authority. If the password expires, Tivoli Storage Manager automatically generates
a new password. To specify that the server provide a new password if the current
password expires, remove the asterisk from the following line in the client options
file: * passwordaccess generate.
Editing individual options files is the most direct method, but may not be suitable
for sites with many client nodes.
From the backup-archive client GUI, the client can also display the setup wizard
by selecting Utilities > Setup Wizard. The user can follow the panels in the setup
wizard to browse Tivoli Storage Manager server information in the Active
Directory. The user can determine which server to connect to and what
communication protocol to use.
The Client Options File wizard helps you to detect the network address of the
server.
To use the Tivoli Storage Manager Management Console, complete the following
steps:
1. Double-click the desktop icon for the Tivoli Storage Manager Management
Console.
Tip: With the wizard, administrators can create a client options file for a single
Tivoli Storage Manager client.
Related tasks:
“Creating a client configuration file” on page 448
The administrator uses the wizard to generate a client configuration file and stores
the file in a shared directory.
Nodes access the directory and run the configuration file to create the client
options file. This method is suitable for sites with many client nodes.
Figure 64. Networked Windows Clients with Shared Directory on a File Server
Note: The network address of the Tivoli Storage Manager server is the only
required option. However, many other options can be set to control various aspects
of Tivoli Storage Manager data management and client/server interactions.
Creating options files for one or two Windows clients may be easier with the
Client Options File wizard. However, the Remote Client Configuration wizard is
useful for creating multiple client options files.
Users can access a shared directory on the server and run a batch file that creates
an options file. Using this method, administrators allow users to create options
files for their nodes.
Tip: The Remote Client Configuration wizard also allows the administrator to
add to the minimum Tivoli Storage Manager connection options by appending a
file that contains more client options. The result is a client options file that contains
the minimum options that are required to connect a client with the server, plus any
options the administrator wants to apply to all clients.
Figure 66 on page 449 shows a Tivoli Storage Manager network environment in
which a backup-archive client and an administrative client are on the same
computer as the server. However, network-attached client nodes can also connect
to the server.
Each client requires a client options file. A user can edit the client options file at
the client node. The options file contains a default set of processing options that
identify the server, communication method, backup and archive options, space
management options, and scheduling options.
To change the default to open so users can register their own client nodes, issue
the following command:
set registration open
Before you can assign client nodes to a policy domain, the policy domain must
exist.
You want to let users delete backed up or archived files from storage pools. From
an administrative client, you can use the macro facility to register more than one
client node at a time.
1. Create a macro file named REGENG.MAC, that contains the following REGISTER
NODE commands:
register node ssteiner choir contact='department 21'
domain=engpoldom archdelete=yes backdelete=yes
register node carolh skiing contact='department 21, second shift'
domain=engpoldom archdelete=yes backdelete=yes
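2. Run the macro from the administrative client by issuing the MACRO command
(a typical next step for this example):
macro regeng.mac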
The Tivoli Storage Manager server views its registered clients, application clients,
and source servers as nodes. The term “client node” refers to the following types of
clients and servers:
v Tivoli Storage Manager backup-archive clients
v Tivoli Storage Manager application clients, such as Tivoli Storage Manager for
Mail clients
v Tivoli Storage Manager source servers registered as nodes on a target server
v Network-attached storage (NAS) file servers using network data management
protocol (NDMP) support
Related concepts:
“Accepting default closed registration or enabling open registration” on page 440
“Overview of clients and servers as nodes” on page 439
Related tasks:
“Installing client node software” on page 440
“Registering nodes with the server” on page 440
Related reference:
“Connecting nodes with the server” on page 444
“Comparing network-attached nodes to local nodes” on page 448
Managing nodes
From the perspective of the server, each client and application client is a node
requiring IBM Tivoli Storage Manager services.
Administrators can perform the following activities when managing client nodes.
IBM Tivoli Storage Manager has two methods for enabling communication
between the client and the server across a firewall: client-initiated communication
and server-initiated communication. To allow either client-initiated or
server-initiated communication across a firewall, client options must be set in
concurrence with server parameters on the REGISTER NODE or UPDATE NODE
commands. Enabling server-initiated communication overrides client-initiated
communication, including client address information that the server may have
previously gathered in server-prompted sessions.
Client-initiated sessions
You can enable clients to communicate with a server across a firewall by opening
the TCP/IP port for the server and modifying the dsmserv.opt file.
1. To enable clients to communicate with a server across a firewall, open the
TCP/IP port for the server on the TCPPORT option in the dsmserv.opt file. The
default TCP/IP port is 1500. When authentication is turned on, the information
that is sent over the wire is encrypted.
2. To enable administrative clients to communicate with a server across a firewall,
open the TCP/IP port for the server on the TCPADMINPORT option in the
dsmserv.opt file. The default TCP/IP port is the TCPPORT value. When
authentication is turned on, the information that is sent over the wire is
encrypted. See the Backup-Archive Clients Installation and User's Guide for more
information.
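For illustration, the corresponding entries in the dsmserv.opt file might look like the
following; the port values are examples only:
TCPPORT       1500
TCPADMINPORT  1510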
1. If the TCPADMINPORT option is specified, sessions from clients without
administration authority can be started on the TCPPORT port only. If the
dsmserv.opt file specifies a TCPADMINPORT value that differs from the TCPPORT
value and sets ADMINONCLIENTPORT to NO, then administrative client sessions can
be started on the TCPADMINPORT port only.
2. You can specify either IPv4 or IPv4/IPv6 in the COMMMETHOD option when you
start the server, storage agent, client, or API application. The same port
numbers are used by the server, storage agent, client, or API application for
both IPv4 and IPv6.
IPv6 address formats are acceptable for all functions that support IPv6.
However, if you use IPv6 addresses for functions that do not support IPv6,
communications fail. The following functions do not support IPv6; continue to
use IPv4 address formats for them:
v NDMP: backing up and restoring storage pools, copying and moving data
v ACSLS
v SNMP
v Centera device support
v Shared memory protocol
v Windows Microsoft Management Console functions
v Administration Center
Server-initiated sessions
To limit the start of backup-archive client sessions to the IBM Tivoli Storage
Manager server, specify the SESSIONINITIATION parameter on the server. You must
also synchronize the information in the client option file.
In either the REGISTER NODE or UPDATE NODE command, select the SERVERONLY option
of the SESSIONINITIATION parameter. Provide the HLADDRESS and LLADDRESS
client node addresses. For example,
register node fran secretpw hladdress=9.11.52.125 lladdress=1501
sessioninitiation=serveronly
The HLADDRESS parameter specifies the IP address of the client node, and the
LLADDRESS parameter specifies the low-level address (the port number) of the
client node; both are used whenever the server contacts the client. The client node
listens for sessions from the server on the LLADDRESS port number.
Update client node TOMC to prevent it from deleting archived files from storage
pools by entering the following example command:
update node tomc archdelete=no
After configuring each server for deploying packages, the three steps in the process
are downloading, moving, and importing the packages.
Downloading
The Import Client Deployment Packages wizard accesses the FTP site
where the packages are stored and from where you can select the packages
to import.
Moving
After you download the packages, they must be moved from the
Administration Center workstation to the Tivoli Storage Manager server.
The packages must be moved to a location that is referenced by the
IBM_CLIENT_DEPLOY_IMPORT device class. This device class is created
when you configure your server with the Configure Server for Client
Auto Deployment wizard.
Importing
If you are configured for a local import, the Administration Center finds
the packages that are stored locally and starts the process of deploying
them. An Administration Center with web access deploys the packages
from the FTP site.
See Table 47 for a list of the software packages that are available.
Table 47. Administration Center releases and deployment requirements
Administration Center   Windows deployment packages   AIX, HP-UX, Linux, Macintosh,
                                                      and Solaris deployment packages
6.2                     6.2                           N/A
6.3                     5.5 and later                 5.5.1 and later
To use the feature, the Backup-Archive Client must meet these requirements:
v The IBM Tivoli Storage Manager Windows Backup-Archive Client must be at
version 5.4.0 or later. The deployment feature does not install new
backup-archive clients.
v A Backup-Archive Client on an operating system other than Windows must be
at version 5.5 or later.
v Windows backup-archive clients must have 2 GB of total disk space.
v The PASSWORDACCESS option must be set to generate.
v The client acceptor (CAD) or Backup-Archive Client schedule must be running
at the time of the deployment. The Backup-Archive Client is deployed from the
server as a scheduled task.
The Backup-Archive Client must have the additional disk space that is required for
a deployment, as shown in this table:
Table 48. Disk space required on the Backup-Archive Client workstation for deploying a
Backup-Archive Client package
Operating system      Total required disk space
AIX                   1500 MB
Solaris               1200 MB
HP-UX                 900 MB
Macintosh             200 MB
Linux x86/x86_64      950 MB
Windows               2 GB
To access the Configure Client Auto Deployment wizard, click Tivoli Storage
Manager > Manage Servers. Select a server from the table and then select
Configure Client Auto Deployment from the table actions.
The wizard guides you in setting up the location where imported packages are to
be stored, and how long they are stored.
Configure the server by using the Configure Server for Client Auto Deployment
wizard.
The View Available Client Deployment Packages portlet shows all of the
available packages. You can either import the available deployment packages,
check for new packages on the FTP site, or refresh the table from a local copy.
Complete the following steps to use the Import Client Deployment Packages
wizard:
1. Open the Administration Center.
2. Click Tivoli Storage Manager > Manage Servers.
3. Access the wizard by selecting View Client Deployment Packages > Import
Client Deployment Packages.
The properties file holds critical information for the deployment feature so that
the Administration Center can find and import packages. The
catalog.properties file is updated automatically when you configure the server to
run deployments through the Administration Center.
In the directory descriptions, user_chosen_path is the root directory for the Tivoli
Integrated Portal installation. If the server does not have web access, you must edit
the catalog.properties file to point to the local catalog.xml file. The
catalog.properties file is in these directories:
v For Windows: user_chosen_path\tsmac\tsm\clientDeployCatalog
v For all other platforms: user_chosen_path/tsmac/tsm/clientDeployCatalog
You can copy the packages to the server from media and then access the packages
as if you are connected to the FTP site.
Complete the following steps to schedule a client deployment without direct web
access:
1. Move the packages to a local FTP server that is configured for anonymous
access.
2. Configure the servers for deployments. Access the configure server for client
deployments wizard by clicking Tivoli Storage Manager > Manage Servers.
Select a server and then select Configure Automatic Client Deployment from
the action list.
3. Edit the catalog.properties file to point to the local catalog.xml file. See this
example of the catalog.properties file:
base.url=
ftp://public.dhe.ibm.com/storage/tivoli-storage-management/catalog/client
You can schedule your deployments around your routine IBM Tivoli Storage
Manager activities. When scheduling client deployments, place those schedules at
a lower priority than regular storage management tasks (for example, backup,
archive, restore, and retrieve operations).
You are offered the option to restart the client operating system after the
deployment completes. Restarting the system can affect any critical applications
that are running on the client operating system.
v You must use the SET SERVERHLADDRESS command for all automatic client
deployments.
You can find the deployment packages in the maintenance directory on the FTP
site: ftp://public.dhe.ibm.com/storage/tivoli-storage-management/maintenance/
client.
Related tasks:
“Importing the target level to the server” on page 461
“Defining a schedule for an automatic deployment” on page 462
“Verifying the backup-archive client deployment results” on page 463
Related reference:
“Using the command-line interface to configure the server for a backup-archive
client deployment”
The following example command can be used to configure the server to deploy
backup-archive client packages with the command-line interface:
set serverhladdress server.serveraddress.com
where:
v ibm_client_deploy_import is the temporary location from where the deployment packages
are imported. This parameter is defined by the deployment manager.
v import_directory is a previously defined directory that is accessible from the server.
v stgpool_name is the name of a storage pool of your choosing where the deployment
packages are stored on the server. The storage pool name is based on a previously
defined device class. That device class is different from the one which is used to perform
IMPORT operations.
v storage_dc_name represents the device class where the deployment packages are stored on
the server.
v retention_value (RETVER) of the DEFINE COPYGROUP command sets the retention time for the
package. You can set it to NOLimit or to a number of days. The default for the
Administration Center is five years.
Important: The retention value must be set to a value that includes the amount of time
that the package was on the FTP site. For example, if a deployment package is on the FTP
site for 30 days, the retention value for the copy group must be greater than 30 days. If
not, the package expires when the next EXPIRE INVENTORY command is issued.
v server.serveraddress.com is the server IP address or host name from which you scheduled
the client automatic deployment.
Ensure that you configure the server for backup-archive client automatic
deployments before you import the packages.
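The import itself uses the IMPORT NODE command; a sketch that is consistent with
the parameter descriptions that follow (the device class and volume names are
placeholders) is:
import node ibm_client_deploy_win filedata=archive devclass=upgradedev
volumenames=volname1.exp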
where:
upgradedev is the file device class name.
volname1.exp is the deployment package name. You can also use a
comma-separated list of package names.
If you want to view the progress, issue the QUERY PROCESS command.
3. Verify that the packages are in a location that the server can reach. Enter the
following command:
select * from ARCHIVES where node_name='IBM_CLIENT_DEPLOY_WIN'
where ARCHIVES is the type of file that is imported through the IMPORT NODE
command.
Related reference:
“Using the command-line interface to configure the server for a backup-archive
client deployment” on page 460
where
deployment_package_location is the path to the deployment package
destination_for_package is the path to where you want to store the
deployment package
IBM_CLIENT_DEPLOY_WIN is the predefined name (for a Windows
deployment) for the -fromnode option
AUTODEPLOY can be YES, NO, or NOREBOOT. The default is YES.
nodeinfo=TBD must be entered exactly as shown.
One result of the QUERY ACTLOG command is the publishing of the ANE4200I
message reports. Message ANE4200I displays the status of the deployment and
the session number. You can use the session number to search for more
deployment information.
3. Issue the QUERY ACTLOG command with the session number as the target.
query actlog sessnum=778 begindate=03/11/2010 begintime=00:00:01 node=testsrv
4. Issue the QUERY NODE command:
query node testsrv format=detailed
When users access the server, their IBM Tivoli Storage Manager user IDs match the
host name of their workstations. If the host name changes, you can update a client
node user ID to match the new host name.
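For example, to rename the node CAROLH to ENGNODE (the scenario described in
the next paragraph), a command of the following form can be used:
rename node carolh engnode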
ENGNODE retains the contact information and access to back up and archive data
that belonged to CAROLH. All files backed up or archived by CAROLH now
belong to ENGNODE.
If you rename a node that authenticates with an LDAP directory server, names for
same-named nodes on other servers that share namespace are not renamed. You
must issue a RENAME command for each node. If you want to keep the nodes in
sync, change their name to match the new name. If you do not, the node on the
other server can no longer authenticate with the LDAP directory server if you
specify SYNCLDAPDELETE=YES.
If you have a node that shares namespace on an LDAP directory server with other
nodes, you can rename each node. The renaming must, however, be done on each
server. For example, you can issue the following command on each server:
rename node starship moonship syncldapdelete=yes
The node starship, which authenticates to an LDAP directory server, is renamed
moonship. With SYNCLDAPDELETE=YES, the entry on the LDAP directory
server changes to moonship and node starship is removed from the LDAP server.
Therefore, other servers cannot authenticate node starship with the LDAP server.
You can register node starship with the LDAP server again, or rename node
starship to moonship on the other servers.
You can restore a locked node’s access to the server with the UNLOCK NODE
command.
1. To prevent client node MAB from accessing the server, issue the following
example command:
lock node mab
2. To let client node MAB access the server again, issue the following example
command:
unlock node mab
Before you can delete a network-attached storage (NAS) node, you must first
delete any file spaces, then delete any defined paths for the data mover with the
DELETE PATH command. Delete the corresponding data mover with the DELETE
DATAMOVER command. Then you can issue the REMOVE NODE command to delete the
NAS node.
This is useful when the server responsible for performing the backup may change
over time, such as with a cluster. Consolidating shared data from multiple servers
under a single name space on the Tivoli Storage Manager server means that the
directories and files can be easily found when restore operations are required.
Backup time can be reduced and clustered configurations can store data with
proxy node support. Client nodes can also be configured with proxy node
authority to support many of the systems which support clustering failover.
By granting client nodes proxy node authority to another node, you gain the
ability to back up, archive, migrate, restore, recall, and retrieve shared data on
multiple clients under a single node name on the Tivoli Storage Manager server.
When authorized as agent nodes, Tivoli Storage Manager nodes and Tivoli Storage
Manager for Space Management (HSM) clients can be directed to backup or restore
data on behalf of another node (the target node).
Administrators must then create scripts that change the passwords manually before
they expire. Using proxy node support, it is possible to break up a large GPFS into
smaller units for backup purposes and not have password coordination issues.
An administrator can define the schedule that does a DB2 UDB EEE backup on
behalf of NODE_Z by issuing the following command:
DEFINE SCHEDULE STANDARD BACKUP-SCHED ACTION=INCREMENTAL
OPTIONS='-ASNODENAME=NODE_Z'
Agent nodes are considered traditional nodes in that there is usually a one-to-one
relationship between a traditional node and a physical server. A target node can be
a logical entity, meaning no physical server corresponds to the node. Or, it can be a
predefined node which corresponds to a physical server.
By using the GRANT PROXYNODE command, you can grant proxy node authority to all
nodes sharing data in the cluster environment to access the target node on the
Tivoli Storage Manager server. QUERY PROXYNODE displays the nodes to which a
proxy node relationship was authorized. See the Administrator's Reference for more
information about these commands.
Proxy node relationships will not be imported by default; however, the associations
can be preserved by specifying the PROXYNODEASSOC option on the IMPORT NODE and
IMPORT SERVER commands. Exporting to sequential media maintains proxy node
relationships, but exporting to a server requires specifying the PROXYNODEASSOC
option on EXPORT NODE and EXPORT SERVER.
Important:
v If a proxy node relationship is authorized for incompatible file spaces, there is a
possibility of data loss or other corruption.
The following example shows how to set up proxy node authority for shared
access. In the example, client agent nodes NODE_1, NODE_2, and NODE_3 all
share the same General Parallel File System (GPFS). Because the file space is so
large, it is neither practical nor cost effective to back up this file system from a
single client node. By using Tivoli Storage Manager proxy node support, the very
large file system can be backed up by the three agent nodes for the target
NODE_GPFS. The backup effort is divided among the three nodes. The end result
is that NODE_GPFS has a backup from a given point in time.
All settings used in the proxy node session are determined by the definitions of the
target node, in this case NODE_GPFS. For example, any settings for
DATAWRITEPATH or DATAREADPATH are determined by the target node, not
the agent nodes (NODE_1, NODE_2, NODE_3).
Assume that NODE_1, NODE_2 and NODE_3 each need to execute an incremental
backup and store all the information under NODE_GPFS on the server.
Perform the following steps to set up a proxy node authority for shared access:
1. Define four nodes on the server: NODE_1, NODE_2, NODE_3, and
NODE_GPFS. Issue the following commands:
register node node_1 mysecretpa5s
register node node_2 mysecret9pas
register node node_3 mypass1secret
register node node_gpfs myhiddp3as
2. Define a proxy node relationship among the nodes by issuing the following
commands:
grant proxynode target=node_gpfs agent=node_1,node_2,node_3
3. Define the node name and asnode name for each of the servers in the
respective dsm.sys files. See the Backup-Archive Clients Installation and User's
Guide for more information on the NODENAME and ASNODENAME client options.
Issue the following commands:
nodename node_1
asnodename node_gpfs
4. Optionally, define a schedule:
define schedule standard gpfs-sched action=macro options="gpfs_script"
5. Assign a schedule to each client node by issuing the following commands:
define association standard gpfs-sched node_1
define association standard gpfs-sched node_2
define association standard gpfs-sched node_3
6. Execute the schedules by issuing the following command:
dsmc schedule
For example, as a policy administrator, you might query the server about all client
nodes assigned to the policy domains for which you have authority. Or you might
query the server for detailed information about one client node.
Issue the following command to view information about client nodes that are
assigned to the STANDARD and ENGPOLDOM policy domains:
query node * domain=standard,engpoldom
The data from that command might display similar to the following output:
Node Name    Platform   Policy Domain   Days Since    Days Since     Locked?
                        Name            Last Access   Password Set
----------   --------   -------------   -----------   ------------   -------
JOE          WinNT      STANDARD        6             6              No
ENGNODE      AIX        ENGPOLDOM       <1            1              No
HTANG        Mac        STANDARD        4             11             No
MAB          AIX        ENGPOLDOM       <1            1              No
PEASE        Linux86    STANDARD        3             12             No
SSTEINER     SOLARIS    ENGPOLDOM       <1            1              No
For example, to review the registration parameters defined for client node JOE,
issue the following command:
query node joe format=detailed
A web backup-archive client can be accessed from a web browser or opened from
the Operations Center or Administration Center interface. This allows an
administrator with the proper authority to perform backup, archive, restore, and
retrieve operations on any server that is running the web backup-archive client.
You can establish access to a web backup-archive client for help desk personnel
that do not have system or policy privileges by granting those users client-access
authority to the nodes that they must manage. Help desk personnel can then
perform activities on behalf of the client node such as backup and restore
operations.
To use the web backup-archive client from your web browser, specify the URL and
port number of the Tivoli Storage Manager backup-archive client computer that is
running the web client. The browser that you use to connect to a web
backup-archive client must be Microsoft Internet Explorer 5.0 or Netscape 4.7 or
later. The browser must have the Java Runtime Environment (JRE) 1.3.1, which
includes the Java Plug-in software. The JRE is available at http://
www.oracle.com/.
During node registration, you have the option of granting client owner or client
access authority to an existing administrative user ID. You can also prevent the
server from creating an administrative user ID at registration. If an administrative
user ID exists with the same name as the node that is being registered, the server
registers the node but does not automatically create an administrative user ID. This
process also applies if your site uses open registration.
For more information about installing and configuring the web backup-archive
client, refer to Backup-Archive Clients Installation and User's Guide.
Administrators with system or policy privileges over the client node's domain
have client owner authority by default. The administrative user ID created
automatically at registration has client owner authority by default. This
administrative user ID is displayed when an administrator issues a QUERY ADMIN
command.
The following definitions describe the difference between client owner and client
access authority when defined for a user that has the node privilege class:
Client owner
You can access the client through the Web backup-archive client or
native backup-archive client.
You own the data and have a right to physically gain access to the data
remotely. You can back up and restore files on the same or different
servers, and you can delete file spaces or archive data.
The user ID with client owner authority can also access the data from
another server by using the -NODENAME or -VIRTUALNODENAME parameter.
The administrator can change the client node's password for which they
have authority.
This is the default authority level for the client at registration. An
administrator with system or policy privileges to a client's domain has
client owner authority by default.
Client access
You can only access the client through the Web backup-archive client.
You can restore data only to the original client.
You can grant client access or client owner authority to other administrators by
specifying CLASS=NODE and AUTHORITY=ACCESS or AUTHORITY=OWNER parameters on
the GRANT AUTHORITY command. You must have one of the following privileges to
grant or revoke client access or client owner authority:
v System privilege
v Policy privilege in the client's domain
v Client owner privilege over the node
v Client access privilege over the node
You can grant an administrator client access authority to individual clients or to all
clients in a specified policy domain. For example, you may want to grant client
access privileges to users that staff help desk environments.
Related tasks:
“Example: setting up help desk access to client computers in a specific policy
domain” on page 473
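1. To grant client access authority to administrator FRED for the LABCLIENT
node, a command like the following can be used (an illustrative sketch that
mirrors the GRANT AUTHORITY syntax shown in step 2):
grant authority fred class=node authority=access node=labclient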
The administrator FRED can now access the LABCLIENT client, and perform
backup and restore. The administrator can only restore data to the LABCLIENT
node.
2. Issue the following command to grant client owner authority to ADMIN1 for
the STUDENT1 node:
grant authority admin1 class=node authority=owner node=student1
The user ID ADMIN1 can now perform backup and restore operations for the
STUDENT1 client node. The user ID ADMIN1 can also restore files from the
STUDENT1 client node to a different client node.
When the node is created, the authentication method and Secure Sockets Layer
(SSL) settings are inherited by the administrator.
To give client owner authority to the HELPADMIN user ID when registering the
NEWCLIENT node, issue the following command:
register node newclient pass2new userid=helpadmin
This command results in the NEWCLIENT node being registered with a password
of pass2new, and also grants HELPADMIN client owner authority. This command
would not create an administrator ID. The HELPADMIN client user ID is now able
to access the NEWCLIENT node from a remote location.
You are also granting HELP1 client access authority to the FINANCE domain
without having to grant system or policy privileges.
The help desk person, using HELP1 user ID, has a Web browser with Java
Runtime Environment (JRE) 1.3.1.
1. Register an administrative user ID of HELP1.
register admin help1 05x23 contact="M. Smith, Help Desk x0001"
2. Grant the HELP1 administrative user ID client access authority to all clients in
the FINANCE domain. With client access authority, HELP1 can perform backup
and restore operations for clients in the FINANCE domain. Client nodes in the
FINANCE domain are Dave, Sara, and Joe.
grant authority help1 class=node authority=access domains=finance
The following output is generated by this command:
ANR2126I GRANT AUTHORITY: Administrator HELP1 was granted ACCESS authority for client
DAVE.
ANR2126I GRANT AUTHORITY: Administrator HELP1 was granted ACCESS authority for client
JOE.
ANR2126I GRANT AUTHORITY: Administrator HELP1 was granted ACCESS authority for client
SARA.
3. The help desk person, HELP1, opens the Web browser and specifies the URL
and port number for client computer Sara:
http://sara.computer.name:1581
A Java applet is started, and the client hub window is displayed in the main
window of the Web browser. When HELP1 accesses the backup function from
the client hub, the IBM Tivoli Storage Manager login screen is displayed in a
separate Java applet window. HELP1 authenticates with the administrative user
ID and password. HELP1 can perform a backup for Sara.
For information about what functions are not supported on the Web
backup-archive client, refer to Backup-Archive Clients Installation and User's Guide.
Tip: You can copy the files to any location on the host operating system, but
ensure that all files are copied to the same directory.
5. Ensure that guest virtual machines are running. This step is necessary to ensure
that the guest virtual machines are detected during the hardware scan.
6. To collect PVU information, issue the following command:
retrieve -v
If you restart the host machine or change the configuration, run the retrieve
command again to ensure that current information is retrieved.
Tip: When the IBM Tivoli Storage Manager for Virtual Environments license file is
installed on a VMware vStorage backup server, the platform string that is stored
on the Tivoli Storage Manager server is set to TDP VMware for any node name
that is used on the server. The reason is that the server is licensed for Tivoli
Storage Manager for Virtual Environments. The TDP VMware platform string can
be used for PVU calculations. If a node is used to back up the server with standard
backup-archive client functions, such as file-level and image backup, interpret the
TDP VMware platform string as a backup-archive client for PVU calculations.
Administrators can perform the following activities when managing file spaces:
Related reference:
“Defining client nodes and file spaces”
Typically, each client file system is represented on the server as a unique file space
that belongs to each client node. Therefore, the number of file spaces a node has
depends on the number of file systems on the client computer. For example, a
Windows desktop system may have multiple drives (file systems), such as C: and
D:. In this case, the client's node has two file spaces on the server; one for the C:
drive and a second for the D: drive. The file spaces can grow as a client stores
more data on the server. The file spaces decrease as backup and archive file
versions expire and the server reclaims the space.
IBM Tivoli Storage Manager does not allow an administrator to delete a node
unless the node's file spaces have been deleted.
For client nodes running on NetWare, file spaces map to NetWare volumes. Each
file space is named with the corresponding NetWare volume name.
For clients running on Macintosh, file spaces map to Macintosh volumes. Each file
space is named with the corresponding Macintosh volume name.
For clients running on UNIX or Linux, a file space name maps to a file space in
storage that has the same name as the file system or virtual mount point from
which the files originated. The VIRTUALMOUNTPOINT option allows users to define a
virtual mount point for a file system to back up or archive files beginning with a
specific directory or subdirectory. For information on the VIRTUALMOUNTPOINT
option, refer to the Backup-Archive Clients Installation and User's Guide.
For client nodes that are running on Windows, it is possible to create objects with
long fully qualified names. The IBM Tivoli Storage Manager clients for Windows
are able to support fully qualified names of up to 8704 bytes in length for backup
and restore functions. These long names are often generated with an automatic
naming function or are assigned by an application.
Long object names can be difficult to display and use through normal operating
system facilities, such as a command prompt window or Windows Explorer. To
manage them, Tivoli Storage Manager assigns an identifying token to the name
and abbreviates the length. The token ID is then used to display the full object
name. For example, an error message might display as follows, where
[TSMOBJ:9.1.2084] is the assigned token ID:
ANR9999D file.c(1999) Error handling file [TSMOBJ:9.1.2084] because of
lack of server resources.
The token ID can then be used to display the fully qualified object name by
specifying it in the DISPLAY OBJNAME command.
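For example, to display the full name for the object identified in the preceding
message, you might issue a command similar to the following, where the object ID
is taken from the token in the message (the value shown is illustrative):
display objname 9.1.2084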
The fully qualified object name is displayed. If you are displaying long object
names that are included in backup sets, a token ID might not be included if the
For more information about fully qualified object names and issuing the DISPLAY
OBJNAME command, see the Administrator's Reference.
New clients storing data on the server for the first time require no special setup. If
the client has the latest IBM Tivoli Storage Manager client software installed, the
server automatically stores Unicode-enabled file spaces for that client.
However, if you have clients that already have data stored on the server and the
clients install the Unicode-enabled IBM Tivoli Storage Manager client software, you
need to plan for the migration to Unicode-enabled file spaces. To allow clients with
existing data to begin to store data in Unicode-enabled file spaces, IBM Tivoli
Storage Manager provides a function for automatic renaming of existing file
spaces. The file data itself is not affected; only the file space name is changed.
After the existing file space is renamed, the operation creates a new file space that
is Unicode-enabled. The creation of the new Unicode-enabled file space for clients
can greatly increase the amount of space required for storage pools and the
amount of space required for the server database. It can also increase the amount
of time required for a client to run a full incremental backup, because the first
incremental backup after the creation of the Unicode-enabled file space is a full
backup.
When clients with existing file spaces migrate to Unicode-enabled file spaces, you
need to ensure that sufficient storage space for the server database and storage
pools is available. You also need to allow for potentially longer backup windows
for the complete backups.
Attention: After the server is at the latest level of software that includes support
for Unicode-enabled file spaces, you can only go back to a previous level of the
server by restoring an earlier version of IBM Tivoli Storage Manager and the
database.
When IBM Tivoli Storage Manager cannot convert the code page, clients that use
the command line may receive one or all of the following messages:
ANS1228E, ANS4042E, and ANS1803E. Clients that are using the GUI may see a
“Path not found” message. If you have clients that are experiencing such backup
failures, then you need to migrate the file spaces for these clients to ensure that
these systems are completely protected with backups. If you have a large number
of clients, set the priority for migrating the clients based on how critical each
client's data is to your business.
Any new file spaces that are backed up from client systems with the
Unicode-enabled IBM Tivoli Storage Manager client are automatically stored as
Unicode-enabled file spaces in server storage.
When enabled, IBM Tivoli Storage Manager uses the rename function when it
recognizes that a file space that is not Unicode-enabled in server storage matches
the name of a file space on a client. The existing file space in server storage is
renamed, so that the file space in the current operation is then treated as a new,
Unicode-enabled file space. For example, if the operation is an incremental backup
at the file space level, the entire file space is then backed up to the server as a
Unicode-enabled file space.
If you force the file space renaming for all clients at the same time, backups can
contend for network and storage resources, and storage pools can run out of
storage space.
Related tasks:
“Planning for Unicode versions of existing client file spaces” on page 481
“Examining issues when migrating to Unicode” on page 483
“Example of a migration process” on page 484
Related reference:
“Defining options for automatically renaming file spaces”
“Defining the rules for automatically renaming file spaces” on page 481
As an administrator, you can control whether the file spaces of any existing clients
are renamed to force the creation of new Unicode-enabled file spaces. By default,
no automatic renaming occurs.
To control the automatic renaming, use the parameter AUTOFSRENAME when you
register or update a node. You can also allow clients to make the choice. Clients
can use the client option AUTOFSRENAME.
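For example, to let a client decide whether its file spaces are renamed, you might
update the node with a command similar to the following (the node name ACCTG is
illustrative):
update node acctg autofsrename=client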
Restriction: The setting for AUTOFSRENAME affects only clients that are
Unicode-enabled.
The following table summarizes what occurs with different parameter and option
settings.
Table 50. The effects of the AUTOFSRENAME option settings
Parameter on the server (for each client) | Option on the client | Result for file spaces | Is the file space renamed?
Yes     | Yes, No, Prompt | Renamed | Yes
No      | Yes, No, Prompt | Not renamed | No
Client  | Yes             | Renamed | Yes
Client  | No              | Not renamed | No
Client  | Prompt          | Command-line or GUI: The user receives a one-time-only prompt about renaming | Depends on the response from the user (yes or no)
Client  | Prompt          | Client scheduler: Not renamed (the prompt is displayed during the next command-line or GUI session) | No
Related reference:
“Defining the rules for automatically renaming file spaces” on page 481
With its automatic renaming function, IBM Tivoli Storage Manager renames a file
space by adding the suffix _OLD.
For example, the file space \\maria\c$ is renamed to \\maria\c$_OLD.
If the new name would conflict with the name of another file space, a number is
added to the suffix. For example, if a file space named \\maria\c$_OLD already
exists, the file space \\maria\c$ is renamed to \\maria\c$_OLD1; if
\\maria\c$_OLD1 also exists, it is renamed to \\maria\c$_OLD2, and so on.
If the new name for the file space exceeds the limit of 64 characters, the file space
name is truncated before the suffix _OLD is added.
Several factors must be considered before you plan for Unicode versions of
existing client file spaces.
To minimize problems, you need to plan the storage of Unicode-enabled file spaces
for clients that already have existing file spaces in server storage.
1. Determine which clients need to migrate.
Clients that have had problems with backing up files because their file spaces
contain names of directories or files that cannot be converted to the server's
code page should have the highest priority. Balance that with clients that are
most critical to your operations. If you have a large number of clients that need
to become Unicode-enabled, you can control the migration of the clients.
Change the rename option for a few clients at a time to keep control of storage
space usage and processing time. Also consider staging migration for clients
that have a large amount of data backed up.
2. Allow for increased backup time and network resource usage when the
Unicode-enabled file spaces are first created in server storage.
Based on the number of clients and the amount of data those clients have,
consider whether you need to stage the migration. Staging the migration means
setting the AUTOFSRENAME parameter to YES or CLIENT for only a small number
of clients every day.
When you migrate to Unicode, there are several issues that you must consider.
The server manages a Unicode-enabled client and its file spaces as follows:
v When a client upgrades to a Unicode-enabled client and logs in to the server, the
server identifies the client as Unicode-enabled.
Remember: That same client (same node name) cannot log in to the server with
a previous version of IBM Tivoli Storage Manager or a client that is not
Unicode-enabled.
v The original file space that was renamed (_OLD) remains with both its active
and inactive file versions that the client can restore if needed. The original file
space will no longer be updated. The server will not mark existing active files
inactive when the same files are backed up in the corresponding
Unicode-enabled file space.
Important: Before the Unicode-enabled client is installed, the client can back up
files in a code page other than the current locale, but cannot restore those files.
After the Unicode-enabled client is installed, if the same client continues to use
file spaces that are not Unicode-enabled, the client skips files that are not in the
same code page as the current locale during a backup. Because the files are
skipped, they appear to have been deleted from the client. Active versions of the
files in server storage are made inactive on the server. When a client in this
situation is updated to a Unicode-enabled client, you should migrate the file
spaces for that client to Unicode-enabled file spaces.
v The server does not allow a Unicode-enabled file space to be sent to a client that
is not Unicode-enabled during a restore or retrieve process.
v Clients should be aware that they will not see all their data on the
Unicode-enabled file space until a full incremental backup has been processed.
When a client performs a selective backup of a file or directory and the original
file space is renamed, the new Unicode-enabled file space will contain only the
file or directory specified for that backup operation. All other directories and
files are backed up on the next full incremental backup.
If a client needs to restore a file before the next full incremental backup, the
client can perform a restore from the renamed file space instead of the new
Unicode-enabled file space. For example:
– Sue had been backing up her file space, \\sue-node\d$.
– Sue upgrades the IBM Tivoli Storage Manager client on her system to the
Unicode-enabled IBM Tivoli Storage Manager client.
– Sue performs a selective backup of the HILITE.TXT file.
– The automatic file space renaming function is in effect and IBM Tivoli Storage
Manager renames \\sue-node\d$ to \\sue-node\d$_OLD. IBM Tivoli Storage
Manager then creates a new Unicode-enabled file space on the server with the
name \\sue-node\d$. This new Unicode-enabled file space contains only the
HILITE.TXT file.
– All other directories and files in Sue's file system will be backed up on the
next full incremental backup. If Sue needs to restore a file before the next full
incremental backup, she can restore the file from the \\sue-node\d$_OLD file
space.
Refer to the Backup-Archive Clients Installation and User's Guide for more
information.
The example of a migration process includes one possible sequence for migrating
clients.
This forces the file spaces to be renamed at the time of the next backup or
archive operation on the file servers. If the file servers are large, consider
changing the renaming parameter for one file server each day.
3. Allow backup and archive schedules to run as usual. Monitor the results.
a. Check for the renamed file spaces for the file server clients. Renamed file
spaces have the suffix _OLD or _OLDn, where n is a number.
b. Check the capacity of the storage pools. Add tape or disk volumes to
storage pools as needed.
c. Check database usage statistics to ensure you have enough space.
Note: If you are using the client acceptor to start the scheduler, you must first
modify the default scheduling mode.
4. Migrate the workstation clients. For example, migrate all clients with names
that start with the letter a.
update node a* autofsrename=yes
5. Allow backup and archive schedules to run as usual that night. Monitor the
results.
6. After sufficient time passes, consider deleting the old, renamed file spaces.
Related tasks:
“Modifying the default scheduling mode” on page 604
Related reference:
“Managing the renamed file spaces” on page 485
“Defining the rules for automatically renaming file spaces” on page 481
The file spaces that were automatically renamed (_OLD) to allow the creation of
Unicode-enabled file spaces continue to exist on the server. Users can still access
the file versions in these file spaces.
Because a renamed file space is not backed up again with its new name, the files
that are active (the most recent backup version) in the renamed file space remain
active and never expire. The inactive files in the file space expire according to the
policy settings for how long versions are retained. To determine how long the files
are retained, check the values for the parameters, Retain Extra Versions and
Retain Only Versions, in the backup copy group of the management class to
which the files are bound.
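For example, to display these retention values for the STANDARD management class
in the ACTIVE policy set of a hypothetical domain named ENGPOLDOM, you might issue
a command similar to:
query copygroup engpoldom active standard type=backup format=detailed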
When users no longer have a need for their old, renamed file spaces, you can
delete them. If possible, wait for the longest retention time for the only version
(Retain Only Version) that any management class allows. If your system has
storage constraints, you may need to delete these file spaces before that.
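To delete such a file space, you might issue a command similar to the following,
using the earlier illustrative example (the node name sue-node is assumed from the
file space name):
delete filespace sue-node \\sue-node\d$_OLD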
For example, a Version 5.1.0 client backs up file spaces, and then upgrades to
Version 5.2.0 with support for Unicode-enabled file spaces. That same client can
still restore the non-Unicode file spaces from the backup set.
You can display file space information for the following reasons:
v To identify file spaces that are defined to each client node, so that you can delete
each file space from the server before removing the client node from the server
v To identify file spaces that are Unicode-enabled and identify their file space ID
(FSID)
v To monitor the space that is used on workstations' disks
v To monitor whether backups are completing successfully for the file space
v To determine the date and time of the last backup
Note: File space names are case-sensitive and must be entered exactly as known to
the server.
To view information about file spaces that are defined for client node JOE, issue
the following command:
query filespace joe *
In the command output, the file space name field might display file space names as “...”. This indicates to the administrator that
a file space does exist but could not be converted to the server's code page.
Conversion can fail if the string includes characters that are not available in the
server code page, or if the server has a problem accessing system conversion
routines.
File space names and file names that can be in a different code page or locale than
the server do not display correctly in the Operations Center, the Administration
Center, or the administrative command-line interface. The data itself is backed up
and can be restored properly, but the file space name or file name may display
with a combination of invalid characters or blank spaces.
Refer to the Administrator's Reference for details.
After you delete all of a client node's file spaces, you can delete the node with
the REMOVE NODE command.
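For example, for the node JOE shown earlier, the sequence might look like the
following (verify first that the file spaces are no longer needed):
delete filespace joe *
remove node joe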
For client nodes that support multiple users, such as UNIX or Linux, a file owner
name is associated with each file on the server. The owner name is the user ID of
the operating system, such as the UNIX or Linux user ID. When you delete a file
space belonging to a specific owner, only files that have the specified owner name
in the file space are deleted.
When a node has more than one file space and you issue a DELETE FILESPACE
command for only one file space, a QUERY FILESPACE command for the node during
the delete process shows no file spaces. When the delete process ends, you can
view the remaining file spaces with the QUERY FILESPACE command. If data
retention protection is enabled, the only files which will be deleted from the file
space are those which have met the retention criterion. The file space will not be
deleted if one or more files within the file space cannot be deleted.
Note: Data stored using the System Storage Archive Manager product cannot be
deleted using the DELETE FILESPACE command if the retention period for the data
has not expired. If this data is stored in a Centera storage pool, then it is
additionally protected from deletion by the retention protection feature of the
Centera storage device.
The most important option is the network address of the server, but you can add
many other client options at any time. Administrators can also control client
options by creating client option sets on the server that are used in conjunction
with client option files on client nodes.
Related tasks:
“Creating client option sets on the server”
“Managing client option sets” on page 490
Related reference:
“Connecting nodes with the server” on page 444
Client option sets allow the administrator to specify additional options that may
not be included in the client's option file (dsm.opt). You can specify which clients
use the option set with the REGISTER NODE or UPDATE NODE commands. The client
can use these defined options during a backup, archive, restore, or retrieve process.
See the Backup-Archive Clients Installation and User's Guide for detailed information
about individual client options.
To create a client option set and have the clients use the option set, perform the
following steps:
1. Create the client option set with the DEFINE CLOPTSET command.
2. Add client options to the option set with the DEFINE CLIENTOPT command.
3. Specify which clients should use the option set with the REGISTER NODE or
UPDATE NODE command.
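For example, after defining the ENGBACKUP option set that is shown in the next
example, you might assign it to a client node with a command similar to the
following (the node name MIKE is illustrative):
update node mike cloptset=engbackup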
Related reference:
“Connecting nodes with the server” on page 444
To provide a description of the option set, issue the following example command:
define cloptset engbackup description='Backup options for eng. dept.'
For a list of client options that you can specify, refer to Administrative client options
in the Administrator's Reference.
The server automatically assigns sequence numbers to the specified options, or you
can choose to specify the sequence number for order of processing. This is helpful
if you have defined more than one of the same option as in the following example:
define clientopt engbackup inclexcl "include d:\admin"
define clientopt engbackup inclexcl "include d:\payroll"
The options are processed starting with the highest sequence number.
Any include-exclude statements in the server client option set have priority over
the include-exclude statements in the local client options file. The server
include-exclude statements are always enforced and placed last in the
include-exclude list and evaluated before the client include-exclude statements. If
the server option set has several include-exclude statements, the statements are
processed starting with the first sequence number. The client can issue the QUERY
INCLEXCL command to show the include-exclude statements in the order that they
are processed. QUERY INCLEXCL also displays the source of each include-exclude
statement. For more information on the processing of the include-exclude
statements see the Backup-Archive Clients Installation and User's Guide.
The FORCE parameter allows an administrator to specify whether the server forces
the client to use an option value. This parameter has no effect on additive options
such as INCLEXCL and DOMAIN. The default value is NO. If FORCE=YES, the server
forces the client to use the value, and the client cannot override the value. The
following example shows how you can prevent a client from using subfile backup:
define clientopt engbackup subfilebackup no force=yes
Related reference:
“The include-exclude list” on page 510
The client node MIKE is registered with the password pass2eng. When the client
node MIKE performs a scheduling operation, its schedule log entries are kept for
five days.
Backup-archive clients are eligible for client restartable restore sessions; however,
application clients are not.
Tivoli Storage Manager can hold a client restore session in DSMC loop mode until
one of these conditions is met:
v The device class MOUNTRETENTION limit is satisfied.
v The client IDLETIMEOUT period is satisfied.
v The loop session ends.
Administrators can perform the following activities when managing IBM Tivoli
Storage Manager sessions:
Related concepts:
“Managing client restartable restore sessions” on page 494
Check the session wait time to determine how long (in seconds, minutes, or hours) the session has been in
the current state.
Administrators can display a session number with the QUERY SESSION command.
Users and administrators whose sessions have been canceled must reissue their last
command to access the server again.
If the session you cancel is currently waiting for a media mount, the mount request
is automatically canceled. If a volume associated with the client session is currently
being mounted by an automated library, the cancel may not take effect until the
mount is complete.
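For example, you might display the active sessions and then cancel one by its
session number (the session number 6 is illustrative):
query session
cancel session 6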
The reasons are based on the settings of the following server options:
COMMTIMEOUT
Specifies how many seconds the server waits for an expected client
message during a transaction that causes a database update. If the length
of time exceeds this time-out, the server rolls back the transaction that was
in progress and ends the client session. The amount of time it takes for a
client to respond depends on the speed and processor load for the client
and the network load.
IDLETIMEOUT
Specifies how many minutes the server waits for a client to initiate
communication. If the client does not initiate communication with the
server within the time specified, the server ends the client session. For
example, the server prompts the client for a scheduled backup operation
but the client node is not started. Another example can be that the client
program is idle while waiting for the user to choose an action to perform
(for example, back up, archive, restore, or retrieve files). If a user starts the
client session and does not choose an action to perform, the session will
time out. The client program automatically reconnects to the server when
the user chooses an action that requires server processing. A large number
of idle sessions can inadvertently prevent other users from connecting to
the server.
THROUGHPUTDATATHRESHOLD
Specifies a throughput threshold, in kilobytes per second, a client session
must achieve to prevent being canceled after the time threshold is reached.
Throughput is computed by adding send and receive byte counts and
dividing by the length of the session. The length does not include time
spent waiting for media mounts and starts at the time a client sends data to the server for storage.
The DISABLE SESSIONS command does not cancel sessions currently in progress or
system processes such as migration and reclamation.
To disable client node access to the server, issue the following example command:
disable sessions
You continue to access the server and current client activities complete unless a
user logs off or an administrator cancels a client session. After the client sessions
have been disabled, you can enable client sessions and resume normal operations
by issuing the following command:
enable sessions
You can issue the QUERY STATUS command to determine if the server is enabled or
disabled.
Related tasks:
“Locking and unlocking client nodes” on page 464
After a restore operation that comes directly from tape, the Tivoli Storage Manager
server does not release the mount point to IDLE status from INUSE status. The
server does not close the volume to allow additional restore requests to be made to
that volume. However, if there is a request to perform a backup in the same
session, and that mount point is the only one available, then the backup operation
When a restartable restore session is saved in the server database, the file space is
locked in server storage. The following rules are in effect during the file space lock:
v Files residing on sequential volumes associated with the file space cannot be
moved.
v Files associated with the restore cannot be backed up. However, files not
associated with the restartable restore session that are in the same file space are
eligible for backup. For example, if you are restoring all files in directory A, you
can still back up files in directory B from the same file space.
To determine which client nodes have eligible restartable restore sessions, issue the
following example command:
query restore
These sessions will automatically expire when the specified restore interval has
passed.
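If you must end a restartable restore session before it expires, for example to
unlock the file space for other operations, you can cancel it. A sketch of the
sequence follows (the session number 1 is illustrative):
query restore
cancel restore 1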
For example:
v How and when files are backed up and archived to server storage
v How space-managed files are migrated to server storage
v The number of copies of a file and the length of time copies are kept in server
storage
IBM Tivoli Storage Manager provides a standard policy that sets rules to provide a
basic amount of protection for data on workstations. If this standard policy meets
your needs, you can begin using Tivoli Storage Manager immediately.
The server process of expiration is one way that the server enforces policies that
you define. Expiration processing determines when files are no longer needed, that
is, when the files are expired. For example, if you have a policy that requires only
four copies of a file be kept, the fifth and oldest copy is expired. During expiration
processing, the server removes entries for expired files from the database,
effectively deleting the files from server storage.
You might need more flexibility in your policies than the standard policy provides.
To accommodate individual users' needs, you may fine-tune the STANDARD
policy, or create your own policies. Some types of clients or situations require
special policies. For example, you may want to enable clients to restore backed-up
files to a specific point-in-time.
The server manages files based on whether the files are active or inactive. The
most current backup or archived copy of a file is the active version. All other
versions are called inactive versions. An active version of a file becomes inactive
when:
v A new backup is made
v A user deletes that file on the client node and then runs an incremental backup
Policy determines how many inactive versions of files the server keeps, and for
how long. When files exceed the criteria, the files expire. Expiration processing can
then remove the files from the server database.
Related reference:
“File expiration and expiration processing” on page 501
“Running expiration processing to delete expired files” on page 535
“Reviewing the standard policy”
Related reference:
“The parts of a policy” on page 505
To help users take advantage of IBM Tivoli Storage Manager, you can further tune
the policy environment by performing the following tasks:
v Define sets of client options for the different groups of users.
v Help users with creating the include-exclude list. For example:
– Create include-exclude lists to help inexperienced users who have simple file
management needs. One way to do this is to define a basic include-exclude
list as part of a client option set. This also gives the administrator some
control over client usage.
– Provide a sample include-exclude list to users who want to specify how the
server manages their files. You can show users who prefer to manage their
own files how to:
- Request information about management classes
- Select a management class that meets backup and archive requirements
- Use include-exclude options to select management classes for their files
For information on the include-exclude list, see the user’s guide for the
appropriate client.
v Automate incremental backup procedures by defining schedules for each policy
domain. Then associate schedules with client nodes in each policy domain.
Related tasks:
“Creating client option sets on the server” on page 488
Chapter 16, “Scheduling operations for client nodes,” on page 589
Related reference:
“The include-exclude list” on page 510
Other situations may also require policy changes. See “Policy configuration
scenarios” on page 545 for details.
To change policy that you have established in a policy domain, you must replace
the ACTIVE policy set. You replace the ACTIVE policy set by activating another
policy set.
Note: You cannot directly modify the ACTIVE policy set. If you want to make
a small change to the ACTIVE policy set, copy the policy set, modify the copy, and
follow the steps here.
1. Create a new policy set, either by defining a new policy set or by copying an
existing policy set.
2. Make any changes that you need to make to the management classes, backup
copy groups, and archive copy groups in the new policy set.
3. Validate the policy set.
4. Activate the policy set. The contents of your new policy set become the
ACTIVE policy set.
Related tasks:
“Defining and updating an archive copy group” on page 530
“Policy configuration scenarios” on page 545
Related reference:
“Validating a policy set” on page 532
“Activating a policy set” on page 533
“Defining and updating a management class” on page 523
“Defining and updating a backup copy group” on page 524
Important:
The server deletes expired files from the server database only during expiration
processing. After expired files are deleted from the database, the server can reuse
the space in the storage pools that was occupied by expired files. You should
ensure that expiration processing runs periodically to allow the server to reuse
space.
Expiration processing also removes from the database any restartable restore
sessions that exceed the time limit set for such sessions by the RESTOREINTERVAL
server option.
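For example, to run expiration processing manually and wait for it to complete,
you might issue the following command; expiration can also run automatically based
on the EXPINTERVAL server option or an administrative schedule:
expire inventory wait=yes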
Related concepts:
“Managing client restartable restore sessions” on page 494
“Deletion hold” on page 538
“Expiration processing of base files and subfiles” on page 577
Related tasks:
“Reclaiming space in sequential-access storage pools” on page 390
Related reference:
“Running expiration processing to delete expired files” on page 535
Backup
To guard against the loss of information, the backup-archive client can copy files,
subdirectories, and directories to media controlled by the server. Backups can be
controlled by administrator-defined policies and schedules, or users can request
backups of their own data.
See Backup-Archive Clients Installation and User's Guide for details on backup-archive
clients that can also back up logical volumes. The logical volume must meet some
of the policy requirements that are defined in the backup copy group.
Related reference:
“Policy for logical volume backups” on page 546
Restore
When a user restores a backup version of a file, the server sends a copy of the file
to the client node. The backup version remains in server storage. Restoring a
logical volume backup works the same way.
If more than one backup version exists, a user can restore the active backup
version or any inactive backup versions.
If policy is properly set up, a user can restore backed-up files to a specific time.
Restriction: If you back up or archive data with a Tivoli Storage Manager V6.3
client, you cannot restore or retrieve that data with a V6.2 or earlier client.
Related reference:
“Setting policy to enable point-in-time restore for clients” on page 551
When a user retrieves a file, the server sends a copy of the file to the client node.
The archived file remains in server storage.
Tivoli Storage Manager for Space Management frees space for new data and makes
more efficient use of your storage resources. The installed Tivoli Storage Manager
for Space Management product is also called the space manager client or the HSM
client. Files that are migrated and recalled with the HSM client are called
space-managed files.
For details about using Tivoli Storage Manager for Space Management, see Space
Management for UNIX and Linux User's Guide.
Tivoli Storage Manager for Space Management provides selective and automatic
migration. Selective migration lets users migrate files by name. The two types of
automatic migration are:
Threshold
If space usage exceeds a high threshold set at the client node, migration
begins and continues until usage drops to the low threshold also set at the
client node.
Demand
If an out-of-space condition occurs for a client node, migration begins and
continues until usage drops to the low threshold.
To prepare for efficient automatic migration, Tivoli Storage Manager for Space
Management copies a percentage of user files from the client node to the IBM
Tivoli Storage Manager server. The premigration process occurs whenever Tivoli
Storage Manager for Space Management completes an automatic migration. The
next time free space is needed at the client node, the files that have been
pre-migrated to the server can quickly be changed to stub files on the client. The
default premigration percentage is the difference between the high and low
thresholds.
Files are selected for automatic migration and premigration based on the number
of days since the file was last accessed and also on other factors set at the client
node.
Recall
Tivoli Storage Manager for Space Management provides selective and transparent
recall. Selective recall lets users recall files by name. Transparent recall occurs
automatically when a user accesses a migrated file.
When recalling active file versions, the server searches in an active-data storage
pool associated with a FILE device class, if such a pool exists.
Related concepts:
“Active-data pools as sources of active file versions for server operations” on page
271
Reconciliation
Migration and premigration can create inconsistencies between stub files on the
client node and space-managed files in server storage.
For example, if a user deletes a migrated file from the client node, the copy
remains at the server. At regular intervals set at the client node, IBM Tivoli Storage
Manager compares client node and server storage and reconciles the two by
deleting from the server any outdated files or files that do not exist at the client
node.
Figure 68 shows the parts of a policy and the relationships among the parts.
[Figure 68: A policy domain contains one or more policy sets (including additional policy sets); each policy set contains management classes (including additional management classes); each management class can contain a backup copy group and an archive copy group.]
The numbers in the following list correspond to the numbers in the figure.
[Figure 69. How clients, server storage, and policy work together. The figure shows a policy domain that contains a policy set, management classes, and copy groups; the copy groups point to storage pools (a pool with the DISK device class and a tape pool represented by a device class, library, and drives), with data migrating from disk to tape. The numbered callouts in the figure correspond to the list that follows.]
1 When clients are registered, they are associated with a policy domain.
Within the policy domain are the policy set, management class, and copy
groups.
2, 3
When a client backs up, archives, or migrates a file, it is bound to a
management class. A management class and the backup and archive copy
groups within it specify where files are stored and how they are managed
when they are backed up, archived, or migrated from the client.
Figure 69 on page 507 summarizes the relationships among the physical device
environment, IBM Tivoli Storage Manager storage and policy objects, and clients.
The management classes specify whether client files are migrated to storage pools
(hierarchical storage management). The copy groups in these management classes
specify the number of backup versions retained in server storage and the length of
time to retain backup versions and archive copies.
For example, if a group of users needs only one backup version of their files, you
can create a policy domain that contains only one management class whose backup
copy group allows only one backup version. Then you can assign the client nodes
for these users to the policy domain.
Related tasks:
“Registering nodes with the server” on page 440
Related reference:
“Contents of a management class”
“Default management classes” on page 509
“The include-exclude list” on page 510
“How files and directories are associated with a management class” on page 511
For clients using the server for backup and archive, you can choose what a
management class contains from the following options:
Other management classes can contain copy groups tailored either for the needs of
special sets of users or for the needs of most users under special circumstances.
The options also include how the server controls symbolic links and processing
such as image backup, compression, and encryption.
If a user does not create an include-exclude list, the following default conditions
apply:
v All files belonging to the user are eligible for backup and archive services.
v The default management class governs backup, archive, and space-management
policies.
exclude *:\...\core
exclude *:\home\ssteiner\*
include *:\home\ssteiner\options.scr
include *:\home\ssteiner\driver5\...\* mcengbk2
IBM Tivoli Storage Manager processes the include-exclude list from the bottom up,
and stops when it finds an include or exclude statement that matches the file it is
processing. Therefore, the order in which the include and exclude options are listed
affects which files are included and excluded. For example, suppose you switch the
order of two lines in the example, as follows:
include *:\home\ssteiner\options.scr
exclude *:\home\ssteiner\*
The exclude statement comes last, and excludes all files in the following directory:
v *:\home\ssteiner
When IBM Tivoli Storage Manager is processing the include-exclude list for the
options.scr file, it finds the exclude statement first. This time, the options.scr file
is excluded.
Some options are evaluated after the more basic include and exclude options. For
example, options that exclude or include files for compression are evaluated after
the program determines which files are eligible for the process being run.
You can create include-exclude lists as part of client options sets that you define
for clients.
For detailed information on the include and exclude options, see the user’s guide
for the appropriate client.
Related tasks:
“Creating client option sets on the server” on page 488
The default management class is the management class identified as the default in
the active policy set.
A management class specified with a simple include option can apply to one or
more processes on the client. More specific include options (such as
include.archive) allow the user to specify different management classes. Some
examples of how this works:
v If a client backs up, archives, and migrates a file to the same server, and uses
only a single include option, the management class specified for the file applies
to all three operations (backup, archive, and migrate).
v If a client backs up and archives a file to one server, and migrates the file to a
different server, the client can specify one management class for the file for
backup and archive operations, and a different management class for migrating.
v Clients can specify a management class for archiving that is different from the
management class for backup.
See the user's guide for the appropriate client for more details.
Backup versions
The server rebinds backup versions of files and logical volume images in some
cases.
The following list highlights the cases when a server rebinds backup versions of
files:
v The user changes the management class specified in the include-exclude list and
does a backup.
v An administrator activates a policy set in the same policy domain as the client
node, and the policy set does not contain a management class with the same
name as the management class to which a file is currently bound.
v An administrator assigns a client node to a different policy domain, and the
active policy set in that policy domain does not have a management class with
the same name.
Backup versions of a directory can be rebound when the user specifies a different
management class using the DIRMC option in the client option file, and when the
directory gets backed up.
The most recently backed up files are active backup versions. Older copies of your
backed up files are inactive backup versions. You can configure management classes
to save a predetermined number of copies of a file. If a management class is saving
five backup copies, there would be one active copy saved and four inactive copies
saved. If a file from one management class is bound to a different management
class that retains fewer versions, the excess inactive versions are deleted.
If a file is bound to a management class that no longer exists, the server uses the
default management class to manage the backup versions. When the user does
another backup, the server rebinds the file and any backup versions to the default
management class. If the default management class does not have a backup copy
group, the server uses the backup retention grace period specified for the policy
domain.
Archive copies
Archive copies are never rebound because each archive operation creates a
different archive copy. Archive copies remain bound to the management class
name specified when the user archived them.
If the default management class does not contain an archive copy group, the server
uses the archive retention grace period specified for the policy domain.
Incremental backup
Backup-archive clients can choose to back up their files using full or partial
incremental backup. A full incremental backup ensures that clients' backed-up files
are always managed according to policies. Clients are urged to use full incremental
backup whenever possible.
If the amount of time for backup is limited, clients may sometimes need to use
partial incremental backup. A partial incremental backup should complete more
quickly and require less memory. When a client uses partial incremental backup,
only files that have changed since the last incremental backup are backed up.
Attributes in the management class that would cause a file to be backed up when
doing a full incremental backup are ignored. For example, unchanged files are not
backed up even when they are assigned to a management class that specifies
absolute mode and the minimum days between backups (frequency) has passed.
The server also does less processing for a partial incremental backup. For example,
the server does not expire files or rebind management classes to files during a
partial incremental backup.
If clients must use partial incremental backups, they should periodically perform
full incremental backups to ensure that complete backups are done and backup
files are stored according to policies. For example, clients can do partial
incremental backups every night during the week, and a full incremental backup
on the weekend.
To determine eligibility, IBM Tivoli Storage Manager performs the following checks:
1. Checks each file against the user's include-exclude list:
v Files that are excluded are not eligible for backup.
v If files are not excluded and a management class is specified with the
INCLUDE option, IBM Tivoli Storage Manager uses that management class.
v If files are not excluded but a management class is not specified with the
INCLUDE option, IBM Tivoli Storage Manager uses the default management
class.
v If no include-exclude list exists, all files in the client domain are eligible for
backup, and IBM Tivoli Storage Manager uses the default management class.
Selective backup
When a user requests a selective backup, IBM Tivoli Storage Manager determines
the file's eligibility.
IBM Tivoli Storage Manager performs the following checks:
1. Checks the file against any include or exclude statements contained in the user
include-exclude list:
v Files that are not excluded are eligible for backup. If a management class is
specified with the INCLUDE option, IBM Tivoli Storage Manager uses that
management class.
v If no include-exclude list exists, the files selected are eligible for backup, and
IBM Tivoli Storage Manager uses the default management class.
2. Checks the management class of each included file:
v If the management class contains a backup copy group and the serialization
requirement is met, the file is backed up. Serialization specifies how files are
handled if they are modified while being backed up and what happens if
modification occurs.
v If the management class does not contain a backup copy group, the file is
not eligible for backup.
IBM Tivoli Storage Manager performs the following checks:
1. Checks the specification of the logical volume against any include or exclude
statements contained in the user include-exclude list:
v If no include-exclude list exists, the logical volumes selected are eligible for
backup, and IBM Tivoli Storage Manager uses the default management class.
v Logical volumes that are not excluded are eligible for backup. If the
include-exclude list has an INCLUDE option for the volume with a
management class specified, IBM Tivoli Storage Manager uses that
management class. Otherwise, the default management class is used.
Archive
When a user requests the archiving of a file or a group of files, IBM Tivoli
Storage Manager determines its eligibility.
IBM Tivoli Storage Manager performs the following checks:
1. Checks the files against the user’s include-exclude list to see if any
management classes are specified:
v IBM Tivoli Storage Manager uses the default management class for files that
are not bound to a management class.
v If no include-exclude list exists, IBM Tivoli Storage Manager uses the default
management class unless the user specifies another management class. See
the user’s guide for the appropriate client for details.
2. Checks the management class for each file to be archived.
v If the management class contains an archive copy group and the serialization
requirement is met, the file is archived. Serialization specifies how files are
handled if they are modified while being archived and what happens if
modification occurs.
v If the management class does not contain an archive copy group, the file is
not archived.
If you need to frequently create archives for the same data, consider using instant
archive (backup sets) instead. Frequent archive operations can create a large
amount of metadata in the server database resulting in increased database growth
and decreased performance for server operations such as expiration. Frequently,
you can achieve the same objectives with incremental backup or backup sets.
Although the archive function is a powerful way to store inactive data with fixed
retention, it should not be used on a frequent and large scale basis as the primary
backup method.
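As an illustrative sketch, you might generate a backup set for the node SARA with
a command similar to the following; the backup set prefix ENGDATA and the device
class TAPECLASS are assumed names:
generate backupset sara engdata * devclass=tapeclass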
Related concepts:
“Creating and using client backup sets” on page 566
The criteria for a file to be eligible for automatic migration from an HSM client are
displayed in the following list:
v It resides on a node on which the root user has added and activated hierarchical
storage management. It must also reside in a local file system to which the root
user has added space management, and not in the root (/) or /tmp file system.
v It is not excluded from migration in the include-exclude list.
v It meets management class requirements for migration:
Note: The situation described is valid only when Space Management is installed
and configured. You can perform automatic migration only when using the Space
Management client.
For example, if the file has not been accessed for at least 30 days and a backup
version exists, the file is migrated. You can also define a management class that
allows users to selectively migrate whether or not a backup version exists. Users
can also choose to archive files that have been migrated. IBM Tivoli Storage
Manager manages the following situations:
v If the file is backed up or archived to the server to which it was migrated, the
server copies the file from the migration storage pool to the backup or archive
storage pool. For a tape-to-tape operation, each storage pool must have a tape
drive.
v If the file is backed up or archived to a different server, Tivoli Storage Manager
accesses the file by using the migrate-on-close recall mode. The file resides on
the client node only until the server stores the backup version or the archived
copy in a storage pool.
When a client restores a backup version of a migrated file, the server deletes the
migrated copy of the file from server storage the next time reconciliation is run.
When a client archives a file that is migrated and does not specify that the file is to
be erased after it is archived, the migrated copy of the file remains in server
storage. When a client archives a file that is migrated and specifies that the file is
to be erased, the server deletes the migrated file from server storage the next time
reconciliation is run.
The Tivoli Storage Manager default management class specifies that a backup
version of a file must exist before the file is eligible for migration.
Table 52 shows that an advantage of copying existing policy parts is that some
associated parts are copied in a single operation.
Table 52. Cause and effect of copying existing policy parts
If you copy this...   Then you create this...
Policy Domain         A new policy domain with:
                      v A copy of each policy set from the original domain
                      v A copy of each management class in each original policy set
                      v A copy of each copy group in each original management class
Policy Set            A new policy set in the same policy domain with:
                      v A copy of each management class in the original policy set
                      v A copy of each copy group in the original management class
Management Class      A new management class in the same policy set and a copy of
                      each copy group in the management class
The domain contains two policy sets that are named STANDARD and TEST. The
administrator activated the policy set that is named STANDARD. When you
activate a policy set, the server makes a copy of the policy set and names it
ACTIVE. Only one policy set can be active at a time.
The ACTIVE policy set contains two management classes: MCENG and
STANDARD. The default management class is STANDARD.
Related tasks:
“Defining and updating an archive copy group” on page 530
Related reference:
“Defining and updating a policy domain”
“Defining and updating a policy set” on page 522
“Defining and updating a management class” on page 523
“Defining and updating a backup copy group” on page 524
“Assigning a default management class” on page 532
“Activating a policy set” on page 533
“Running expiration processing to delete expired files” on page 535
When you copy an existing domain, you also copy any associated policy sets,
management classes, and copy groups.
For example, perform the following steps to copy and update an existing domain:
1. Copy the STANDARD policy domain to the ENGPOLDOM policy domain by
entering the following command:
copy domain standard engpoldom
ENGPOLDOM now contains the standard policy set, management class,
backup copy group, and archive copy group.
2. Update the policy domain ENGPOLDOM so that the backup retention grace
period is extended to 90 days and the archive retention grace period is
extended to two years. Specify an active-data pool as the destination for active
versions of backup data belonging to nodes assigned to the domain. Use
engactivedata as the name of the active-data pool, as in the following example:
update domain engpoldom description='Engineering Policy Domain'
backretention=90 archretention=730 activedestination=engactivedata
The policies in the new policy set do not take effect unless you make the new set
the ACTIVE policy set.
Related reference:
“Activating a policy set” on page 533
To create the TEST policy set in the ENGPOLDOM policy domain, the
administrator performs the following steps:
1. Copy the STANDARD policy set and name the new policy set TEST:
copy policyset engpoldom standard test
Note: When you copy an existing policy set, you also copy any associated
management classes and copy groups.
2. Update the description of the policy set named TEST:
update policyset engpoldom test
description='Policy set for testing'
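After you update the policy set, you would typically validate it and then activate
it, for example:
validate policyset engpoldom test
activate policyset engpoldom test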
The following four parameters apply only to HSM clients (Tivoli Storage Manager
for Space Management):
Whether space management is allowed
Specifies that the files are eligible for both automatic and selective
migration, only selective migration, or no migration.
How frequently files can be migrated
Specifies the minimum number of days that must elapse since a file was
last accessed before it is eligible for automatic migration.
Whether backup is required
Specifies whether a backup version of a file must exist before the file can
be migrated.
Where migrated files are to be stored
Specifies the name of the storage pool in which migrated files are stored.
Your choice could depend on factors such as:
v The number of client nodes migrating to the storage pool. When many
user files are stored in the same storage pool, volume contention can
occur as users try to migrate files to or recall files from the storage pool.
v How quickly the files must be recalled. If users need immediate access
to migrated versions, you can specify a disk storage pool as the
destination.
Attention: You cannot specify a copy storage pool or an active-data pool as the
destination.
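As a hedged sketch, these parameters might be set when defining a hypothetical
management class MCHSM in the ENGPOLDOM domain, with SPACEMGPOOL as the
migration destination:
define mgmtclass engpoldom standard mchsm spacemgtechnique=automatic automignonuse=30 migrequiresbkup=yes migdestination=spacemgpool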
The serialization attribute can be one of four values: STATIC, SHRSTATIC (shared static),
DYNAMIC, or SHRDYNAMIC (shared dynamic).
The value you choose depends on how you want IBM Tivoli Storage Manager to
manage files that are modified while they are being backed up.
Do not back up files that are modified during the backup
To prevent the server from backing up a file while it is being modified, use one
of the following values:
STATIC
Specifies that IBM Tivoli Storage Manager will attempt to back up
the file only once. If the file or directory is modified during a
backup, the server does not back it up.
SHRSTATIC (Shared static)
Specifies that if the file or directory is modified during a backup,
the server retries the backup as many times as specified by the
CHANGINGRETRIES option in the client options file. If the file is
modified during the last attempt, the file or directory is not backed
up.
Back up files that are modified during the backup
Some files are in constant use, such as an error log. Consequently, these files
might never be backed up when serialization is set to STATIC or SHRSTATIC. To
back up files even while they are being modified, use the DYNAMIC or
SHRDYNAMIC value.
Attention:
v If a file is modified during backup and DYNAMIC or SHRDYNAMIC is
specified, then the backup may not contain all the changes and may not
be usable. For example, the backup version may contain a truncated
record. Under some circumstances, it may be acceptable to capture a
dynamic or “fuzzy” backup of a file (the file was changed during the
backup). For example, a dynamic backup of an error log file that is
continuously appended may be acceptable. However, a dynamic backup
of a database file may not be acceptable, since restoring such a backup
could result in an unusable database. Carefully consider dynamic
backups of files as well as possible problems that may result from
restoring potentially “fuzzy” backups.
v When certain users or processes open files, they may deny any other
access, including “read” access, to the files by any other user or process.
When this happens, even with serialization set to DYNAMIC or
SHRDYNAMIC, IBM Tivoli Storage Manager will not be able to open
the file at all, so the server cannot back up the file.
The server considers both parameters to determine how frequently files can be
backed up. For example, if frequency is 3 and mode is Modified, a file or directory
is backed up only if it has been changed and if three days have passed since the
last backup. If frequency is 3 and mode is Absolute, a file or directory is backed up
after three days have passed whether or not the file has changed.
Use the Modified mode when you want to ensure that the server retains multiple,
different backup versions. If you set the mode to Absolute, users may find that
they have three identical backup versions, rather than three different backup
versions.
Absolute mode can be useful for forcing a full backup. It can also be useful for
ensuring that extended attribute files are backed up, because Tivoli Storage
Manager does not detect changes if the size of the extended attribute file remains
the same.
When you set the mode to Absolute, set the frequency to 0 if you want to ensure
that a file is backed up each time full incremental backups are scheduled for or
initiated by a client.
These parameters interact to determine the backup versions that the server retains.
When the number of inactive backup versions exceeds the number of versions
allowed (Versions Data Exists and Versions Data Deleted), the oldest version
expires and the server deletes the file from the database the next time expiration
processing runs.
Important: A base file is not eligible for expiration until all its dependent subfiles
have been expired.
For example, see Table 53 and Figure 72. A client node has backed up the file
REPORT.TXT four times in one month, from March 23 to April 23. The settings in the
backup copy group of the management class to which REPORT.TXT is bound
determine how the server treats these backup versions. Table 54 on page 528 shows
some examples of how different copy group settings would affect the versions. The
examples show the effects as of April 24 (one day after the file was last backed
up).
Table 53. Status of REPORT.TXT as of April 24

Version       Date Created    Days the Version Has Been Inactive
Active        April 23        (not applicable)
Inactive 1    April 13        1 (since April 23)
Inactive 2    March 31        11 (since April 13)
Inactive 3    March 23        24 (since March 31)
Figure 72. Active and inactive backup versions of REPORT.TXT, backed up on
Wednesday March 31, Tuesday April 13, and Friday April 23, bound to the default
management class and its backup copy group
If the user deletes the REPORT.TXT file from the client node, the
server notes the deletion at the next full incremental backup of the
client node. From that point, the Versions Data Deleted and
Retain Only Version parameters also have an effect. All versions
are now inactive. Two of the four versions expire immediately (the
March 23 and March 31 versions expire). The April 13 version
expires when it has been inactive for 60 days (on June 23). The
server keeps the last remaining inactive version, the April 23
version, for 180 days after it becomes inactive.
Versions Data Exists: NOLIMIT; Versions Data Deleted: 2 versions; Retain Extra
Versions: 60 days; Retain Only Version: 180 days
Retain Extra Versions controls expiration of the versions. The inactive versions
(other than the last remaining version) are expired when they have been inactive
for 60 days.
If the user deletes the REPORT.TXT file from the client node, the server notes the
deletion at the next full incremental backup of the client node. From that point,
the Versions Data Deleted and Retain Only Version parameters also have an
effect. All versions are now inactive. Two of the four versions expire immediately
(the March 23 and March 31 versions expire) because only two versions are
allowed. The April 13 version expires when it has been inactive for 60 days (on
June 22). The server keeps the last remaining inactive version, the April 23
version, for 180 days after it becomes inactive.

Versions Data Exists: NOLIMIT; Versions Data Deleted: NOLIMIT; Retain Extra
Versions: 60 days; Retain Only Version: 180 days
Retain Extra Versions controls expiration of the versions. The server does not
expire inactive versions based on the maximum number of backup copies. The
inactive versions (other than the last remaining version) are expired when they
have been inactive for 60 days.
If the user deletes the REPORT.TXT file from the client node, the server notes the
deletion at the next full incremental backup of the client node. From that point,
the Retain Only Version parameter also has an effect. All versions are now
inactive. Three of the four versions expire after each of them has been inactive
for 60 days. The server keeps the last remaining inactive version, the April 23
version, for 180 days after it becomes inactive.

Versions Data Exists: 4 versions; Versions Data Deleted: 2 versions; Retain Extra
Versions: NOLIMIT; Retain Only Version: NOLIMIT
Versions Data Exists controls the expiration of the versions until a user deletes
the file from the client node. The server does not expire inactive versions based
on age.
If the user deletes the REPORT.TXT file from the client node, the server notes the
deletion at the next full incremental backup of the client node. From that point,
the Versions Data Deleted parameter controls expiration. All versions are now
inactive. Two of the four versions expire immediately (the March 23 and March 31
versions expire) because only two versions are allowed. The server keeps the two
remaining inactive versions indefinitely.
This new copy group must be able to complete the following tasks:
v Let users back up changed files, regardless of how much time has elapsed since
the last backup, using the default value 0 for the Frequency parameter
(frequency parameter not specified)
v Retain up to four inactive backup versions when the original file resides on the
user workstation, using the Versions Data Exists parameter (verexists=5)
v Retain up to four inactive backup versions when the original file is deleted from
the user workstation, using the Versions Data Deleted parameter
(verdeleted=4)
v Retain inactive backup versions for no more than 90 days, using the Retain
Extra Versions parameter (retextra=90)
v If there is only one backup version, retain it for 600 days after the original is
deleted from the workstation, using the Retain Only Version parameter
(retonly=600)
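For reference, a copy group that satisfies these requirements could be defined
with a command along the following lines. This is a sketch only: it assumes that
the copy group belongs to the STANDARD management class in the STANDARD policy
set of the ENGPOLDOM domain used in the earlier examples and that a storage pool
named BACKUPPOOL is the intended destination; substitute your own names.
define copygroup engpoldom standard standard type=backup destination=backuppool
verexists=5 verdeleted=4 retextra=90 retonly=600
Because the FREQUENCY parameter is not specified, it keeps its default value of 0.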
Note: When certain users or processes open files, they deny read access to the
files for any other user or process. When this happens, even with serialization
set to dynamic or shared dynamic, the server does not back up the file.
3. How long to retain an archived copy specifies the number of days to retain an
archived copy in storage. When the time elapses, the archived copy expires and
the server deletes the file the next time expiration processing runs.
When a user archives directories, the server uses the default management class
unless the user specifies otherwise. If the default management class does not
have an archive copy group, the server binds the directory to the management
class that currently has the shortest retention time for archive. When you
change the retention time for an archive copy group, you may also be changing
the retention time for any directories that were archived using that copy group.
The user can change the archive characteristics by using Archive Options in the
interface or by using the ARCHMC option on the command.
4. The RETMIN parameter in archive copy groups specifies the minimum number of
days an object will be retained after the object is archived. For objects that are
managed by event-based retention policy, this parameter ensures that objects
are retained for a minimum time period regardless of when an event triggers
retention.
After you have defined an archive copy group, using the RETMIN=n parameter,
ensure that the appropriate archive data will be bound to the management class
with this archive copy group. You can do this either by using the default
management class or by modifying the client options file to specify the
management class for the appropriate archive data.
Placing a deletion hold on an object does not extend its retention period. For
example, if an object is thirty days away from the end of its retention period
and it is placed on hold for ninety days, it will be eligible for expiration
immediately upon the hold being released.
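As a sketch of how such an archive copy group might be defined for event-based
retention, consider the following command. The domain, policy set, management
class, destination pool, and retention values shown here are illustrative, not
taken from this example:
define copygroup engpoldom standard standard type=archive destination=archivepool
retinit=event retver=365 retmin=730
With these settings, an archived object is kept at least 730 days after it is
archived and at least 365 days after the retention event is signalled, whichever
is later.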
Related concepts:
“Deletion hold” on page 538
Related tasks:
“Using virtual volumes to store data on another server” on page 763
The STANDARD management class was copied from the STANDARD policy set to
the TEST policy set. Before the new default management class takes effect, you
must activate the policy set.
Related tasks:
“Example: defining a policy set” on page 522
Validation fails if the policy set does not contain a default management class.
Validation results in warning messages if any of the following conditions
exist.
Related reference:
“How files and directories are associated with a management class” on page 511
“Defining and updating a policy domain” on page 520
When you activate a policy set, the server performs a final validation of the
contents of the policy set and copies the original policy set to the ACTIVE policy
set.
You cannot update the ACTIVE policy set; the original and the ACTIVE policy sets
are two separate objects. For example, updating the original policy set has no effect
on the ACTIVE policy set. To change the contents of the ACTIVE policy set, you
must create or change another policy set and then activate that policy set.
If data retention protection is active, the following rules apply during policy set
validation and activation. The server can be a managed server and receive policy
definitions via enterprise configuration, but it will not be possible to activate
propagated policy sets if these rules are not satisfied.
v All management classes in the policy set to be validated and activated must
contain an archive copy group.
v If a management class exists in the active policy set, a management class with
the same name must exist in the policy set to be validated and activated.
v If an archive copy group exists in the active policy set, the corresponding copy
group in the policy set to be validated and activated must have RETVER and
RETMIN values at least as large as the corresponding values in the active copy
group.
You can use the IBM Tivoli Storage Manager Console or the server command line
to assign client nodes to a policy domain.
To use the Tivoli Storage Manager Console, complete the following steps:
1. Double-click the desktop icon for the Tivoli Storage Manager Console.
2. Expand the tree until the Tivoli Storage Manager server you want to work with
is displayed. Expand the server and click Wizards. The list of wizards appears
in the right pane.
3. Select the Client Node Configuration wizard and click Start. The Client Node
Configuration wizard appears.
4. Progress through the wizard to the Define Tivoli Storage Manager client nodes
and policy page.
5. Assign client nodes to a policy domain in one of the following ways:
v Select a client node and click Edit. The Properties dialog appears. Select a
policy domain from the drop-down list. To create a policy domain, click
New.
v To create new client nodes, click the Add Node button. The Properties dialog
appears. Enter the required node information, and select a policy domain
from the drop-down list.
6. Finish the wizard.
To assign client nodes at the server command line, use the UPDATE NODE or
REGISTER NODE command. For example, to assign the client node APPCLIENT1 to the
ENGPOLDOM policy domain, issue the following command:
update node appclient1 domain=engpoldom
To create a new client node, NEWUSER, and assign it to the ENGPOLDOM
policy domain, issue the following command:
register node newuser newuser domain=engpoldom
Note:
1. A base file is not eligible for expiration until all of its dependent subfiles have
been expired.
2. An archive file is not eligible for expiration if there is a deletion hold on it. If a
file is not held, it will be handled according to existing expiration processing.
Related concepts:
“Expiration processing of base files and subfiles” on page 577
“Deletion hold” on page 538
You can set the options by editing the dsmserv.opt file (see the Administrator's
Reference).
You can also set the options by using the server options editor (available in the
Tivoli Storage Manager Console). Follow these steps to set the expiration interval
option through the Tivoli Storage Manager Server Utilities:
1. Click Server Options in the Tivoli Storage Manager Server Utilities. The
options files appear in the right pane.
2. Select a Tivoli Storage Manager server in the Installed Servers list.
3. Click Edit.
4. Click Server Processing.
5. Set the value for expiration interval and whether to use quiet expiration, as
desired.
6. Click Save on the File menu of the Tivoli Storage Manager Server Utilities.
If you use the server options file to control automatic expiration, the server runs
expiration processing each time you start the server. After that, the server runs
expiration processing at the interval you specified with the option, measured from
the start time of the server.
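For example, the following dsmserv.opt entries (the values shown are
illustrative) run expiration processing every 24 hours and suppress the detailed
policy-change messages:
expinterval 24
expquiet yes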
After you issue the EXPIRE INVENTORY command, expired files are deleted from the
database according to the parameters that you specify on the command.
You can control how long the expiration process runs by using the DURATION
parameter with the EXPIRE INVENTORY command. You can run several (up to 40)
expiration processes in parallel by specifying RESOURCE=x, where x equals the
number of nodes that you want to process. Inventory expiration can also be
distributed across more than one resource on a file space level to help distribute
the workload for nodes with many file spaces.
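For example, the following command (the parameter values are illustrative)
limits expiration processing to 120 minutes and uses four parallel processes:
expire inventory duration=120 resource=4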
You can use the DEFINE SCHEDULE command to set a specific schedule for this
command. This automatically starts inventory expiration processing. If you
schedule the EXPIRE INVENTORY command, set the expiration interval to 0 (zero) in
the server options so that the server does not run expiration processing when you
start the server.
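A sketch of such an administrative schedule, with an illustrative schedule name,
start time, and duration, might look like this:
define schedule expinventory type=administrative cmd="expire inventory duration=60"
active=yes starttime=23:00 period=1 perunits=days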
When expiration processing runs, the server normally sends detailed messages
about policy changes made since the last time expiration processing ran. You can
reduce those messages by using the QUIET=YES parameter with the EXPIRE
INVENTORY command, or the following options:
v The Use Quiet Expiration option in the server options
When you use the quiet option or parameter, the server issues messages about
policy changes during expiration processing only when files are deleted, and either
the default management class or retention grace period for the domain has been
used to expire the files.
For example, securities brokers and other regulated institutions enforce retention
requirements for certain records, including electronic mail, customer statements,
trade settlements, check images and new account forms. Data retention protection
prevents deliberate or accidental deletion of data until its specified retention
criterion is met.
Retention protection can only be activated on a new server that does not already
have stored objects (backup, archive, or space-managed). Activating retention
protection applies to all archive objects subsequently stored on that server. After
retention protection has been set, the server cannot store backup objects,
space-managed objects, or backup sets. Retention protection cannot be added for an
object that was previously stored on a Tivoli Storage Manager server. After an
object is stored with retention protection, retention protection cannot be removed.
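For example, to activate retention protection on a new server that has no stored
objects, an administrator with system authority can issue the following command:
set archiveretentionprotection on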
Retention protection is based on the retention criterion for each object, which is
determined by the RETVER parameter of the archive copy group of the management
class to which the object is bound. If an object uses event-based retention, the
object will not expire until the later of the following dates: the date the object was
archived plus the number of days in the RETMIN parameter, or the date the event
was signalled plus the number of days specified in the RETVER parameter. On
servers which have retention protection enabled, the following operations will not
delete objects whose retention criterion has not been satisfied:
v Requests from the client to delete an archive object
v DELETE FILESPACE (from either a client or administrative command)
v DELETE VOLUME DISCARDDATA=YES
v AUDIT VOLUME FIX=YES
Important: A cached copy of data can be deleted, but data in primary storage
pools, copy storage pools, and active-data pools can only be marked damaged
and is never deleted.
If your server has data retention protection activated, additional restrictions
apply.
The server does not send a retention value to an EMC Centera storage device if
retention protection is not enabled. If this is the case, you can use a Centera
storage device as a standard device from which archive and backup files can be
deleted.
Related tasks:
Chapter 34, “Protecting and recovering the server infrastructure and client data,”
on page 941
Deletion hold
If a hold is placed on an object through the client API, the object is not deleted
until the hold is released.
See the Backup-Archive Clients Installation and User's Guide for more information.
There is no limit to how often you alternate holding and releasing an object. An
object can have only one hold on it at a time, so if you attempt to hold an object
that is already held, you will get an error message.
If an object with event-based policy is on hold, an event can still be signalled. The
hold will not extend the retention period for an object. If the retention period
specified in the RETVER and RETMIN parameters expires while the object is on hold,
the object will be eligible for deletion whenever the hold is released.
If an object is held, it will not be deleted whether or not data retention protection
is active. If an object is not held, it is handled according to existing processing such
as normal expiration, data retention protection, or event-based retention. Data that
is in deletion hold status can be exported. The hold status will be preserved when
the data is imported to another system.
Note: A cached copy of data can be deleted, but data in primary storage pools,
copy storage pools, and active-data pools can only be marked damaged and is
never deleted.
Data stored with a retention date cannot be deleted from the file system before the
retention period expires. The SnapLock feature can only be used by Tivoli Storage
Manager servers that have data retention protection enabled.
Data archived by data retention protection servers and stored to NetApp NAS file
servers is stored as Tivoli Storage Manager FILE volumes. At the end of a write
transaction, a retention date is set for the FILE volume, through the SnapLock
interface. This date is calculated by using the RETVER and RETMIN parameters of the
archive copy group used when archiving the data. Having a retention date
associated with the FILE volume gives it a characteristic of WORM media by not
allowing the data to be destroyed or overwritten until the retention date has
passed. These FILE volumes are referred to as WORM FILE volumes. After a
retention date has been set, the WORM FILE volume cannot be deleted until the
retention date has passed. System Storage Archive Manager combined with
WORM FILE volume reclamation ensures protection for the life of the data.
Storage pools can be managed either by threshold or by data retention period. The
RECLAMATIONTYPE storage pool parameter indicates that a storage pool is managed
based on a data retention period. When a traditional storage pool is queried with
the FORMAT=DETAILED parameter, this output is displayed:
Reclamation Type: THRESHOLD
Tivoli Storage Manager servers that have data retention protection enabled through
System Storage Archive Manager and have access to a NetApp filer with the
SnapLock licensed feature can define a storage pool with RECLAMATIONTYPE set
to SNAPLOCK. This means that data created on volumes in this storage pool are
managed by retention date. When a SnapLock storage pool is queried with the
FORMAT=DETAILED parameter, the output displayed indicates that the storage
pools are managed by data retention period.
Reclamation Type: SNAPLOCK
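The output shown above could be produced by a command such as the following,
where the pool name is illustrative:
query stgpool snappool format=detailed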
See the NetApp document Data ONTAP Storage Management Guide for details about the
SnapLock feature.
Attention: It is not recommended that you use this feature to protect data with a
retention period of less than three months.
Related concepts:
“Data retention protection” on page 537
The reclamation of a WORM FILE volume to another WORM FILE volume before
the retention date expiration ensures that data is always protected by the SnapLock
feature.
Because this protection is at a Tivoli Storage Manager volume level, the data on the
volumes can be managed by Tivoli Storage Manager policy without consideration
of where the data is stored. Data stored on WORM FILE volumes is protected both
by data retention protection and by the retention period stored with the physical
file on the SnapLock volume. If a Tivoli Storage Manager administrator issues a
command to delete the data, the command fails. If someone attempts to delete the
file through a series of network file system calls, the SnapLock feature prevents the
data from being deleted.
During reclamation processing, if the Tivoli Storage Manager server cannot move
data from an expiring SnapLock volume to a new SnapLock volume, a warning
message is issued.
Retention periods
Tivoli Storage Manager policies manage the retention time for the WORM FILE
volume. The retention of some files might exceed the retention time for the WORM
FILE volume they were stored on. This could require moving them to another
volume to ensure that the files are stored on WORM media.
Some objects on the volume might need to be retained longer than other objects on
the volume for the following reasons:
v They are bound to management classes with different retention times.
v They cannot be removed because of a deletion hold.
v They are waiting for an event to occur before expiring.
v The retention period for a copy group is increased, requiring a longer retention
time than that specified in the SnapLock feature when the WORM FILE volume
was committed.
Use the DEFINE STGPOOL command to set up a storage pool for use with the
SnapLock feature. Selecting RECLAMATIONTYPE=SNAPLOCK enables Tivoli
Storage Manager to manage FILE volumes by a retention date. After a storage pool
has been set up as a SnapLock storage pool, the RECLAMATIONTYPE parameter
cannot be updated to THRESHOLD. When a SnapLock storage pool is defined, a
check is made to ensure that the directories specified in the device class are
SnapLock WORM volumes. When a FILE device class is defined and storage pools are
created with the reclamation type of SNAPLOCK, all volumes must be WORM
volumes or the operation fails. If a device class is updated to contain additional
directories and there are SnapLock storage pools assigned to it, the same check is
made to ensure all directories are SnapLock WORM volumes.
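A minimal sketch of such a definition follows. It assumes a FILE device class
named SNAPCLASS whose directories reside on SnapLock volumes; the pool name and
scratch limit are illustrative:
define stgpool snappool snapclass maxscratch=200 reclamationtype=snaplock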
There are three retention periods available in the NetApp SnapLock feature. These
must be configured correctly so that the Tivoli Storage Manager server can
properly manage WORM data stored in SnapLock volumes. The Tivoli Storage
Manager server sets the retention period for data being stored on NetApp
SnapLock volumes based on the values in the copy group for the data being
archived. The NetApp filer should not conflict with the ability of the Tivoli Storage
Manager server to set the retention period. The following settings are the Tivoli
Storage Manager recommendations for retention periods in the NetApp filer:
1. Minimum Retention Period: Set to the higher of these values: 30 days, or the
minimum number of days specified by any copy group (using a NetApp SnapLock
filer for WORM FILE storage) for the data retention period. The copy group is
the one in use for storing data on NetApp SnapLock volumes.
2. Maximum Retention Period: Leave at the default of 30 years. This allows the
Tivoli Storage Manager server to set the actual volume retention period based on
the settings in the archive copy group.
3. Default Retention Period: Set to 30 days. If you do not set this value and you
do not set the maximum retention period, each volume's retention period is set
to 30 years. If this occurs, the Tivoli Storage Manager server's ability to
manage expiration and reuse of NetApp SnapLock volumes is largely defeated,
because no volume can be reused for 30 years.
With the NetApp SnapLock retention periods appropriately set, Tivoli Storage
Manager can manage the data in SnapLock storage pools with maximum efficiency.
For each volume that is in a SNAPLOCK storage pool, a Tivoli Storage Manager
reclamation period is created. The Tivoli Storage Manager reclamation period has a
start date, BEGIN RECLAIM PERIOD, and an end date, END RECLAIM PERIOD.
View these dates by issuing the QUERY VOLUME command with the
FORMAT=DETAILED parameter on a SnapLock volume. For example:
Begin Reclaim Period: 09/05/2010
End Reclaim Period: 10/06/2010
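These dates could be displayed by a command such as the following, where the
FILE volume name is purely illustrative:
query volume \\netappfiler\snapvol1\00000242.bfs format=detailed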
When Tivoli Storage Manager archives files to a SnapLock volume, it keeps track
of the latest expiration date of those files, and the BEGIN RECLAIM PERIOD is set
to that latest expiration date. When files with a later expiration date are added
to the SnapLock volume, the start date is reset to that later date. The start date
is therefore always the latest expiration date of any file on the volume. The
expectation is that by the start date all files on the volume will have expired,
or will expire on that day, so that on the following day there is no valid data
left on the volume.
The END RECLAIM PERIOD is set to a month later than the BEGIN RECLAIM
PERIOD. The retention date set in the NetApp filer for that volume is set to the
END RECLAIM PERIOD date. This means the NetApp filer will prevent any
deletion of that volume until the END RECLAIM PERIOD date has passed. This is
approximately a month after the data has actually expired in the Tivoli Storage
Manager server. If an END RECLAIM PERIOD date is calculated by the Tivoli
Storage Manager server for a volume, and the date is later than the current END
RECLAIM PERIOD, the new date will be reset in the NetApp filer for that volume
to the later date. This guarantees that the Tivoli Storage Manager WORM FILE
volume will not be deleted until all data on the volume has expired, or the data
has been moved to another SnapLock volume.
The Tivoli Storage Manager reclamation period is the amount of time between the
begin date and the end date. It is also the time period which the Tivoli Storage
Manager server has to delete volumes on which all the data has expired, or to
move files which have not expired on expiring SnapLock volumes to new
SnapLock volumes with new dates. This month is critical to how the server safely
and efficiently manages the data on WORM FILE volumes. Data on a SnapLock
volume typically expires by the time the beginning date arrives, and the volume
can then be deleted or reused.
However, some events may occur which mean that there is still valid data on a
SnapLock volume:
1. Expiration processing in the Tivoli Storage Manager server for that volume may
have been delayed or has not completed yet.
2. The retention parameters on the copy group or associated management classes
may have been altered for a file after it was archived, and that file is not going
to expire for some period of time.
3. A deletion hold may have been placed on one or more of the files on the
volume.
4. Reclamation processing has either been disabled or is encountering errors
moving data to new SnapLock volumes on a SnapLock storage pool.
5. A file is waiting for an event to occur before the Tivoli Storage Manager server
can begin the expiration of the file.
If there are files which have not expired on a SnapLock volume when the
beginning date arrives, they must be moved to a new SnapLock volume with a
new begin and end date. This will properly protect that data. However, if
expiration processing on the Tivoli Storage Manager server has been delayed, and
those files will expire as soon as expiration processing on the Tivoli Storage
Manager server runs, it is inefficient to move those files to a new SnapLock
volume. To ensure that unnecessary data movement does not occur for files which
are due to expire, movement of files on expiring SnapLock volumes will be
delayed some small number of days after the BEGIN RECLAIM PERIOD date.
Since the data is protected in the SnapLock filer until the END RECLAIM PERIOD
date, there is no risk to the data in delaying this movement. This allows Tivoli
Storage Manager expiration processing to complete. After that number of days, if
there is still valid data on an expiring SnapLock volume, it will be moved to a new
SnapLock volume, thus continuing the protection of the data.
Since the data was initially archived, there may have been changes in the retention
parameters for that data (for example, changes in the management class or copy
pool parameters) or there may be a deletion hold on that data. However, the data
on that volume will only be protected by SnapLock until the END RECLAIM
PERIOD date. Data that has not expired is moved to new SnapLock volumes
during the Tivoli Storage Manager reclamation period. If errors occur moving data
to a new SnapLock volume, a distinct warning message is issued indicating that
the data will soon be unprotected. If the error persists, it is recommended that you
issue a MOVE DATA command for the problem volume.
You can avoid this situation by using the RETENTIONEXTENSION server option. This
option allows the server to set or extend the retention date of a SnapLock volume.
You can specify from 30 to 9999 days. The default is 365 days.
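For example, the following dsmserv.opt entry (the value shown is the default)
lets the server extend a SnapLock volume's retention date by one year:
retentionextension 365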
When selecting volumes in a SnapLock storage pool for reclamation, the server
checks if the volume is within the reclamation period.
v If the volume is not within the reclamation period, no action is taken. The
volume is not reclaimed, and the retention date is unchanged.
v If the volume is within the reclamation period, the server checks if the percent of
reclaimable space on the volume is greater than the reclamation threshold of the
storage pool or of the threshold percentage passed in on the THRESHOLD
parameter of a RECLAIM STGPOOL command.
– If the reclaimable space is greater than the threshold, the server reclaims the
volume and sets the retention date of the target volume to the greater of
these values:
- The remaining retention time of the data plus 30 days for the reclamation
period.
- The RETENTIONEXTENSION value plus 30 days for the reclamation period.
– If the reclaimable space is not greater than the threshold, the server resets the
retention date of the volume by the amount specified in the
RETENTIONEXTENSION option. The new retention period is calculated by adding
the number of days specified to the current date.
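For example, to start reclamation of a SnapLock storage pool with an explicit
threshold, a command such as the following could be used; the pool name and
threshold value are illustrative:
reclaim stgpool snappool threshold=60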
The Tivoli Storage Manager server allows this type of movement, but if data is
moved from a WORM FILE volume to another type of media, the data may no
longer be protected from inadvertent or malicious deletion. If this data is on
WORM volumes to meet data retention and protection requirements for certain
legal purposes and is moved to other media, the data may no longer meet those
requirements. You should configure your storage pools so this type of data is kept
in storage pools which consist of SnapLock WORM volumes during the entire data
retention period.
When you configure the storage pools this way, you ensure that your data is
properly protected. If you define a next, reclaim, copy storage pool, or active-data
pool without selecting the RECLAMATIONTYPE=SNAPLOCK option, you will not have a
protected storage pool. The command succeeds, but a warning message is issued.
Complete the following steps to set up a SnapLock volume for use as a Tivoli
Storage Manager WORM FILE volume:
1. Install and set up SnapLock on the NetApp filer. See NetApp documentation
for more information.
2. Properly configure the minimum, maximum, and default retention periods. If
these retention periods are not configured properly, Tivoli Storage Manager will
not be able to properly manage the data and volumes.
3. Install and configure a Tivoli Storage Manager server with data retention
protection. Ensure that archive retention protection is activated by issuing the
SET ARCHIVERETENTIONPROTECTION command.
4. Set up policy by using the DEFINE COPYGROUP command. Select RETVER and
RETMIN values in the archive copy group which will meet your requirements
for protecting this data in WORM storage. If the RETVER or RETMIN values
are not set, the default management class values are used.
5. Set up storage by using the DEFINE DEVCLASS command.
v Use the FILE device class.
v Specify the DIRECTORY parameter to point to the directory or directories on
the SnapLock volumes.
6. Define a storage pool using the device class you defined above.
v Specify RECLAMATIONTYPE=SNAPLOCK.
7. Update the copy group to point to the storage pool you just defined.
8. Use the Tivoli Storage Manager API to archive your objects into the SnapLock
storage pool. This feature is not available on standard Tivoli Storage Manager
backup-archive clients.
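The following sequence sketches steps 3 through 7 as server commands. It is a
sketch only: all names, the directory path, and the retention values are
illustrative, and the copy group is defined with the SnapLock pool as its
destination directly rather than defined and then updated. Substitute values
that match your filer configuration and your retention requirements.
set archiveretentionprotection on
define devclass snapclass devtype=file mountlimit=20 maxcapacity=10g
directory=\\netappfiler\snapvol1
define stgpool snappool snapclass maxscratch=200 reclamationtype=snaplock
define copygroup engpoldom standard standard type=archive destination=snappool
retver=2555 retmin=2555
activate policyset engpoldom standard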
If you back up directly to tape, the number of clients that can back up data at the
same time is equal to the number of drives available to the storage pool (through
the mount limit of the device class). For example, if you have one drive, only one
client at a time can back up data.
The direct-to-tape backup eliminates the need to migrate data from disk to tape.
However, performance of tape drives is often lower when backing up directly to
tape than when backing up to disk and then migrating to tape. Backing up data
directly to tape usually means more starting and stopping of the tape drive.
Backing up to disk then migrating to tape usually means the tape drive moves
more continuously, meaning better performance.
You may complete this task by using the Client Node Configuration wizard in the
Tivoli Storage Manager Console, or by using the server command line.
To use the Tivoli Storage Manager Console, complete the following steps:
1. Double-click the desktop icon for the Tivoli Storage Manager Console.
2. Expand the tree until the Tivoli Storage Manager server you want to work with
is displayed. Expand the server and click Wizards. The list of wizards appears
in the right pane.
3. Select the Client Node Configuration wizard and click Start. The Client Node
Configuration wizard appears.
4. Progress through the wizard to the “Define Tivoli Storage Manager client nodes
and policy” page.
5. By default, client nodes are associated with BACKUPPOOL. This storage pool
is set to immediately migrate any data it receives. Drag BACKUPPOOL and
drop it on a tape storage pool.
Note: You can also select a client, click Edit > New to create a new policy
domain that will send client data directly to any storage pool.
At the server command line, you may define a new policy domain that enables
client nodes to back up or archive data directly to tape storage pools. For example,
you may define a policy domain named DIR2TAPE with the following steps:
1. Copy the default policy domain STANDARD as a template:
copy domain standard dir2tape
This command creates the DIR2TAPE policy domain that contains a default
policy set, management class, backup and archive copy group, each named
STANDARD.
2. Update the backup or archive copy group in the DIR2TAPE policy domain to
specify the destination to be a tape storage pool. For example, to use a tape
storage pool named TAPEPOOL for backup, issue the following command:
update copygroup dir2tape standard standard destination=tapepool
To use a tape storage pool named TAPEPOOL for archive, issue the following
command:
update copygroup dir2tape standard standard type=archive
destination=tapepool
3. Activate the changed policy set.
activate policyset dir2tape standard
4. Assign client nodes to the DIR2TAPE policy domain. For example, to assign a
client node named TAPEUSER1 to the DIR2TAPE policy domain, issue the
following command:
update node tapeuser1 domain=dir2tape
The Versions Data Exists, Versions Data Deleted, and Retain Extra Versions
parameters work together to determine over what time period a client can restore a
logical volume image and reconcile later file backups. Also, you may have server
storage constraints that require you to control the number of backup versions
allowed for logical volumes. The server handles logical volume backups the same
way that it handles file backups.
For example, a user backs up a logical volume, and the following week deletes one
or more files from the volume. At the next incremental backup, the server records
in its database that the files were deleted from the client. When the user restores
the logical volume, the program can recognize that files have been deleted since
the backup was created. The program can delete the files as part of the restore
process. To ensure that users can use the capability to reconcile later incremental
backups with a restored logical volume, you need to ensure that you coordinate
policy for incremental backups with policy for backups for logical volumes.
For example, you decide to ensure that clients can choose to restore files and
logical volumes from any time in the previous 60 days. You can create two
management classes, one for files and one for logical volumes. Table 55 shows the
relevant parameters. In the backup copy group of both management classes, set the
Retain Extra Versions parameter to 60 days.
In the management class for files, set the parameters so that the server keeps
versions based on age rather than how many versions exist. More than one backup
version of a file may be stored per day if clients perform selective backups or if
clients perform incremental backups more than once a day. The Versions Data
Exists parameter and the Versions Data Deleted parameter control how many of
these versions are kept by the server. To ensure that any number of backup
versions are kept for the required 60 days, set both the Versions Data Exists
parameter and the Versions Data Deleted parameter to NOLIMIT for the
management class for files. This means that the server retains backup versions
based on how old the versions are, instead of how many backup versions of the
same file exist.
For logical volume backups, the server ignores the frequency attribute in the
backup copy group.
Table 55. Example of backup policy for files and logical volumes

Parameter (backup copy group       Management Class    Management Class for
in the management class)           for Files           Logical Volumes
Versions Data Exists               NOLIMIT             3 versions
Versions Data Deleted              NOLIMIT             1
Retain Extra Versions              60 days             60 days
Retain Only Version                120 days            120 days
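To illustrate, the two management classes in Table 55 might be implemented with
backup copy groups like the following. The domain, policy set, management class,
and destination pool names are illustrative:
define copygroup engpoldom standard fileclass type=backup destination=backuppool
verexists=nolimit verdeleted=nolimit retextra=60 retonly=120
define copygroup engpoldom standard volclass type=backup destination=backuppool
verexists=3 verdeleted=1 retextra=60 retonly=120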
The Tivoli Storage Manager server initiates the backup, allocates a drive, and
selects and mounts the media. The NAS file server then transfers the data to tape.
Because the NAS file server performs the backup, the data is stored in its own
format. For most NAS file servers, the data is stored in the NDMPDUMP data
format. For NetApp file servers, the data is stored in the NETAPPDUMP data
format. For EMC file servers, the data is stored in the CELERRADUMP data
format. To manage NAS file server image backups, copy groups for NAS nodes
must point to a storage pool that has a data format of NDMPDUMP,
NETAPPDUMP, or CELERRADUMP.
The following backup copy group attributes are ignored for NAS images:
v Frequency
v Mode
v Retain Only Version
v Serialization
v Versions Data Deleted
To set up the required policy for NAS nodes, you can define a new, separate policy
domain.
Backups for NAS nodes can be initiated from the server, or from a client that has
at least client owner authority over the NAS node. For client-initiated backups, you
can use client option sets that contain include and exclude statements to bind NAS
file system or directory images to a specific management class. The valid options
that can be used for a NAS node are: include.fs.nas, exclude.fs.nas, and
domain.nas. NAS backups initiated from the Tivoli Storage Manager server with
the BACKUP NODE command ignore client options specified in option files or client
option sets. For details on the options see the Backup-Archive Clients Installation and
User's Guide for your particular client platform.
When the Tivoli Storage Manager server creates a table of contents (TOC), you can
view a collection of individual files and directories backed up via NDMP and
select which to restore. To establish where to send data and store the table of
contents, policy should be set so that:
v Image backup data is sent to a storage pool with an NDMPDUMP,
NETAPPDUMP, or CELERRADUMP format.
v The table of contents is sent to a storage pool with either NATIVE or
NONBLOCK format.
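As a sketch, assuming a policy domain named NASDOMAIN that was copied from
STANDARD, a storage pool NASPOOL defined with DATAFORMAT=NETAPPDUMP, and a
native-format pool TOCPOOL for the table of contents (all names illustrative),
the backup copy group could be updated and the policy set activated as follows:
update copygroup nasdomain standard standard destination=naspool tocdestination=tocpool
activate policyset nasdomain standard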
Related tasks:
“Creating client option sets on the server” on page 488
Related reference:
Chapter 10, “Using NDMP for operations with NAS file servers,” on page 233
The storage agent transfers data between the client and the storage device. See
Storage Agent User's Guide for details. See the Web site for details on clients that
support the feature: http://www.ibm.com/support/entry/portal/Overview/
Software/Tivoli/Tivoli_Storage_Manager.
One task in configuring your systems to use this feature is to set up policy for the
clients. Copy groups for these clients must point to the storage pool that is
associated with the SAN devices. If you have defined a path from the client to a
drive on the SAN, the client can then use the SAN to send data directly to the
device for backup, archive, restore, and retrieve operations.
To set up the required policy, either define a new, separate policy domain, or
define a new management class in an existing policy domain.
Related tasks:
“Define a new policy domain”
“Configuring Tivoli Storage Manager for LAN-free data movement” on page 150
Related reference:
“Define a new management class in an existing policy domain” on page 550
Because the new management class is not the default for the policy domain, you
must add an include statement to each client options file to bind objects to that
management class.
For example, suppose sanclientmc is the name of the management class that you
defined for clients that are using devices on a SAN. You want the client to be able
to use the SAN for backing up any file on the c drive. Put the following line at the
end of the client's include-exclude list:
include c:* sanclientmc
For details on the include-exclude list, see Backup-Archive Clients Installation and
User's Guide.
In the default management class, the destination for the archive copy group
determines where the target server stores data for the source server. Other policy
specifications, such as how long to retain the data, do not apply to data stored for
a source server.
Related tasks:
“Using virtual volumes to store data on another server” on page 763
For example, you decide to ensure that clients can choose to restore files from
anytime in the previous 60 days. In the backup copy group, set the Retain Extra
Versions parameter to 60 days. More than one backup version of a file may be
stored per day if clients perform selective backups or if clients perform incremental
backups more than once a day. The Versions Data Exists parameter and the
Versions Data Deleted parameter control how many of these versions are kept by
the server. To ensure that any number of backup versions are kept for the required
60 days, set both the Versions Data Exists parameter and the Versions Data
Deleted parameter to NOLIMIT. This means that the server essentially determines
the backup versions to keep based on how old the versions are, instead of how
many backup versions of the same file exist.
Keeping backed-up versions of files long enough to allow clients to restore their
data to a point in time can mean increased resource costs. Requirements for server
storage increase because more file versions are kept, and the size of the server
database increases to track all of the file versions. Because of these increased costs,
you may want to choose carefully which clients can use the policy that allows for
point-in-time restore operations.
Clients need to run full incremental backup operations frequently enough so that
IBM Tivoli Storage Manager can detect files that have been deleted on the client
file system. Only a full incremental backup can detect whether files have been
deleted since the last backup. If full incremental backup is not done often enough,
clients who restore to a specific time may find that many files that had actually
been deleted from the workstation get restored. As a result, a client’s file system
may run out of space during a restore process.
Important: The server will not attempt to retrieve client files from an active-data
pool during a point-in-time restore. Point-in-time restores require both active and
inactive file versions. Active-data pools contain only active file versions. For
optimal efficiency during point-in-time restores and to avoid switching between
active-data pools and primary or copy storage pools, the server retrieves both
active and inactive versions from the same storage pool and volumes.
To distribute policy, you associate a policy domain with a profile. Managed servers
that subscribe to the profile then receive the following definitions:
v The policy domain itself
v Policy sets in that domain, except for the ACTIVE policy set
v Management classes in the policy sets
v Backup and archive copy groups in the management classes
v Client schedules associated with the policy domain
The names of client nodes and client-schedule associations are not distributed. The
ACTIVE policy set is also not distributed.
The distributed policy becomes managed objects (policy domain, policy sets,
management classes, and so on) defined in the database of each managed server.
To use the managed policy, you must activate a policy set on each managed server.
If storage pools specified as destinations in the policy do not exist on the managed
server, you receive messages pointing out the problem when you activate the
policy set. You can create new storage pools to match the names in the policy set,
or you can rename existing storage pools.
On the managed server you also must associate client nodes with the managed
policy domain and associate client nodes with schedules.
Related tasks:
“Setting up enterprise configurations” on page 735
Querying policy
You can request information about the contents of policy objects. You might want
to do this before creating new objects or when helping users to choose policies that
fit their needs.
You can specify the output of a query in either standard or detailed format. The
examples in this section are in standard format.
On a managed server, you can see whether the definitions are managed objects.
Request the detailed format in the query and check the contents of the “Last
update by (administrator)” field. For managed objects, this field contains the string
$$CONFIG_MANAGER$$.
Issue the following command to request information about the backup copy group
(the default) in the ENGPOLDOM engineering policy domain:
query copygroup engpoldom * *
The following data shows the output from the query. It shows that the ACTIVE
policy set contains two backup copy groups that belong to the MCENG and
STANDARD management classes.
Issue the following command to request information about management classes in
the ENGPOLDOM engineering policy domain:
query mgmtclass engpoldom * *
The output from the query shows that the ACTIVE policy set contains the MCENG
and STANDARD management classes.
Issue the following command to request information about policy sets in the
ENGPOLDOM engineering policy domain:
query policyset engpoldom *
The following figure is the output from the query. It shows an ACTIVE policy set
and two inactive policy sets, STANDARD and TEST.
Issue the following command to request information about a policy domain (for
example, to determine if any client nodes are registered to that policy domain):
query domain *
The following figure is the output from the query. It shows that both the
ENGPOLDOM and STANDARD policy domains have client nodes assigned to
them.
Deleting policy
When you delete a policy object, you also delete any objects belonging to it. For
example, when you delete a management class, you also delete the copy groups in
it.
You cannot delete the ACTIVE policy set or objects that are part of that policy set.
You can delete the policy objects named STANDARD that come with the server.
However, all STANDARD policy objects are restored whenever you reinstall the
server.
Related concepts:
“Protection and expiration of archive data” on page 537
For example, to delete the backup and archive copy groups belonging to the
MCENG and STANDARD management classes in the STANDARD policy set,
enter:
delete copygroup engpoldom standard mceng type=backup
delete copygroup engpoldom standard standard type=backup
delete copygroup engpoldom standard mceng type=archive
delete copygroup engpoldom standard standard type=archive
For example, to delete the MCENG and STANDARD management classes from the
STANDARD policy set, enter:
delete mgmtclass engpoldom standard mceng
delete mgmtclass engpoldom standard standard
When you delete a management class from a policy set, the server deletes the
management class and all copy groups that belong to the management class in the
specified policy domain.
For example, to delete the TEST policy set from the ENGPOLDOM policy domain,
enter:
delete policyset engpoldom test
When you delete a policy set, the server deletes all management classes and copy
groups that belong to the policy set within the specified policy domain.
The ACTIVE policy set in a policy domain cannot be deleted. You can replace the
contents of the ACTIVE policy set by activating a different policy set. Otherwise,
the only way to remove the ACTIVE policy set is to delete the policy domain that
contains the policy set.
Move any client nodes to another policy domain, or delete the nodes.
When you delete a policy domain, the server deletes the policy domain and all
policy sets (including the ACTIVE policy set), management classes, and copy
groups that belong to the policy domain.
Related reference:
“How files and directories are associated with a management class” on page 511
Tasks:
“Validating a node's data during a client session” on page 560
“Securing communications” on page 907
“Encrypting data on tape” on page 560
“Setting up shredding” on page 564
“Generating client backup sets on the server” on page 568
“Restoring backup sets from a backup-archive client” on page 572
“Moving backup sets to other servers” on page 572
“Managing client backup sets” on page 573
“Enabling clients to use subfile backup” on page 576
“Optimizing restore operations for clients” on page 578
“Managing storage usage for archives” on page 586
Concepts:
“Performance considerations for data validation” on page 560
“Securing sensitive client data” on page 563
“Creating and using client backup sets” on page 566
Cyclic redundancy checking is performed at the client when the client requests
services from the server. For example, the client issues a query, backup, or archive
request. The server also performs a CRC operation on the data sent by the client
and compares its value with the value calculated by the client. If the CRC values
do not match, the server will issue an error message once per session. Depending
on the operation, the client may attempt to automatically retry the operation.
After Tivoli Storage Manager completes the data validation, the client and server
discard the CRC values generated in the current session.
Data validation can be enabled for one or all of the following items:
v Tivoli Storage Manager client nodes.
v Tivoli Storage Manager storage agents. For details, refer to the Storage Agent
User's Guide for your particular operating system.
Methods for enabling data validation for a node include choosing data validation
for individual nodes, specifying a set of nodes by using a wildcard search string,
or specifying a group of nodes in a policy domain.
For example, to enable data validation for an existing node, ED, you can issue an
UPDATE NODE command. This user backs up the company payroll records weekly
and you have decided it is necessary to have all the user data validated: the data
itself and metadata.
update node ed validateprotocol=all
Later, the network has proven to be stable and no data corruption has been
identified when user ED has processed backups. You can then disable data
validation to minimize the performance impact of validating all of ED's data
during a client session. For example:
update node ed validateprotocol=no
IBM tape technology supports different methods of drive encryption for the
following devices:
v IBM 3592 generation 2 and generation 3
v IBM linear tape open (LTO) generation 4 and generation 5
Application encryption
Encryption keys are managed by the application, in this case, Tivoli
Storage Manager. Tivoli Storage Manager generates and stores the keys in
the server database. Data is encrypted during WRITE operations, when the
encryption key is passed from the server to the drive. Data is decrypted for
READ operations.
The methods of drive encryption that you can use with Tivoli Storage Manager are
set up at the hardware level. Tivoli Storage Manager cannot control or change
which encryption method is used in the hardware configuration. If the hardware is
set up for the application encryption method, Tivoli Storage Manager can turn
encryption on or off depending on the DRIVEENCRYPTION value on the device
class. For more information about specifying this parameter, see the following
topics:
v “Encrypting data with drives that are 3592 generation 2 and later” on page 217
v “Encrypting data using LTO generation 4 tape drives” on page 224
v “Enabling ECARTRIDGE drive encryption” on page 227 and “Disabling
ECARTRIDGE drive encryption” on page 227
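For example, with hardware that is configured for the application method,
encryption might be turned on for an LTO generation 4 device class as follows;
the device class name is illustrative:
update devclass lto4class driveencryption=on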
This method allows Tivoli Storage Manager to manage the encryption keys. When
using Application encryption, you must take extra care to secure database backups
since the encryption keys are stored in the server database. Without access to
database backups and matching encryption keys, you will not be able to restore
your data.
If you want to encrypt all of your data in a particular logical library or encrypt
data on more than just storage pool volumes, the System or Library method can be
used.
Library managed encryption allows you to control which volumes are encrypted
through the use of their serial numbers. You can specify a range or set of volumes
to encrypt. With Application managed encryption, you can create dedicated storage
pools that only contain encrypted volumes. This way, you can use storage pool
hierarchies and policies to manage the way data is encrypted.
The Library and System methods of encryption can share the same encryption key
manager, which allows the two modes to be interchanged. However, this can only
occur if the encryption key manager is set up to share keys. Tivoli Storage
Manager cannot currently verify if encryption key managers for both methods are
the same. Neither can Tivoli Storage Manager share or use encryption keys
between the application method and either library or system methods of
encryption.
To determine whether or not a volume is encrypted and which method was used,
you can issue the QUERY VOLUME command with FORMAT=DETAILED. For more
information on data encryption using the backup-archive client, see the
Backup-Archive Clients Installation and User's Guide.
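For instance, for a hypothetical volume named VOL001:
query volume vol001 format=detailed
The detailed output indicates whether the volume is encrypted and, if it is, which
key manager was used.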
For example, if you currently have Application managed encryption enabled, and
you decide that you don't want encryption enabled at all, only empty volumes will
be impacted by the change. Filling volumes will continue to be encrypted while
new volumes will not. If you do not want currently filling volumes to continue
being encrypted, the volume status should be changed to READONLY. This will
ensure that Tivoli Storage Manager does not append any more encrypted data to
the volumes. You can use the MOVE DATA command to transfer the data to a new
volume after the update of the DRIVEENCRYPTION parameter. The data will then
be available in an unencrypted format.
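For example, after you change the DRIVEENCRYPTION setting, you might prevent
further writes to a filling volume and then move its data (the volume name is
illustrative):
update volume vol001 access=readonly
move data vol001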
When migrating from one hardware configuration to another, you will need to
move your data from the old volumes to new volumes with new encryption keys
and key managers. You can do this by setting up two logical libraries and storage
pools (each with a different encryption method) and migrating the data from the
old volumes to the new volumes. This will eliminate volumes that were encrypted
using the original method. Assume that you have volumes that were encrypted
using the Library method and you want to migrate to the Application method.
Tivoli Storage Manager will be unable to determine which encryption keys are
needed for data on these volumes because the library's encryption key manager
stores these keys and Tivoli Storage Manager does not have access to them.
Table 56 on page 563 illustrates considerations for changing your hardware
encryption method.
Restriction: If encryption is enabled for a device class, and the device class is
associated with a storage pool, the storage pool should not share a scratch pool
with other device classes that cannot be encrypted. If a tape is encrypted, and you
plan to use it on a drive that cannot be encrypted, you must manually relabel the
tape before it can be used on that drive.
This process increases the difficulty of discovering and reconstructing the data
later. Tivoli Storage Manager performs shredding only on data in random-access
disk storage pools. You can configure the server to ensure that sensitive data is
stored only in storage pools in which shredding is enforced (shred pools).
Shredding occurs only after a data deletion commits, but it is not necessarily
completed immediately after the deletion. The space occupied by the data to be
shredded remains occupied while the shredding takes place, and is not available as
free space for new data until the shredding is complete. When sensitive data is
written to server storage and the write operation fails, the data that was already
written is shredded.
Shredding can be done either automatically after the data is deleted or manually
by command. The advantage of automatic shredding is that it is performed
without administrator intervention whenever deletion of data occurs. This limits
the time that sensitive data might be compromised. Automatic shredding also
limits the time that the space used by deleted data is occupied. The advantage of
manual shredding is that it can be performed when it will not interfere with other
server operations.
Setting up shredding
You must configure Tivoli Storage Manager so that data identified as sensitive is
stored only in storage pools that will enforce shredding after that data is deleted.
You can also set the shredding option dynamically by using the SETOPT
command.
2. Set up one or more random access disk storage pool hierarchies that will
enforce shredding and specify how many times the data is to be overwritten
after deletion. For example,
define stgpool shred2 disk shred=5
define stgpool shred1 disk nextstgpool=shred2 shred=5
3. Define volumes to those pools, and specify disks for which write caching can
be disabled.
define volume shred1 j:\storage\bf.dsm formatsize=100
define volume shred2 m:\storage\bg.dsm formatsize=100
4. Define and activate a policy for the sensitive data. The policy will bind the data
to a management class whose copy groups specify shred storage pools.
define domain shreddom
define policyset shreddom shredpol
define mgmtclass shreddom shredpol shredclass
define copygroup shreddom shredpol shredclass type=backup
destination=shred1
define copygroup shreddom shredpol shredclass type=archive
destination=shred1
activate policyset shreddom shredpol
5. Identify those client nodes whose data should be shredded after deletion, and
assign them to the new domain.
update node engineering12 domain=shreddom
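As noted earlier, you can also change the shredding option dynamically by using the
SETOPT command. For example, to switch to automatic shredding:
setopt shredding automatic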
If you have specified manual shredding with the SHREDDING server option, you can
start the shredding process by issuing the SHRED DATA command. This command
lets you specify how long the process will run before it is canceled and how the
process responds to an I/O error during shredding. For objects that cannot be
shredded, the server reports each object.
Note: If you specify manual shredding, run the SHRED DATA command regularly, at
least as often as you perform other routine server-maintenance tasks (for example,
expiration, reclamation, and so on). Doing so can prevent performance degradation.
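For example, to run a manual shredding process for at most one hour, you might
enter a command similar to the following one (the duration value is illustrative;
see the Administrator's Reference for the full syntax, including the parameter that
controls the response to I/O errors):
shred data duration=60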
To see the status and amount of data waiting to be shredded, you can issue the
QUERY SHREDSTATUS command. The server reports a summary of the number and
size of objects waiting to be shredded. To display detailed information about data
shredding on the server, issue the following command:
query shredstatus format=detailed
When data shredding completes, a message is issued that reports the amount of
data that was successfully shredded and the amount of data that was skipped, if
any.
Some changes to objects and some server operations involving the moving or
copying of data could result in sensitive data that cannot be shredded. This would
compromise the intent and value of shredding.
Currently, the backup object types supported for backup sets include directories,
files, and image data. If you are upgrading from Tivoli Storage Manager Express®,
backup sets can also contain data from Data Protection for Microsoft SQL and Data
Protection for Microsoft Exchange servers. The backup set process is also called
instant archive.
The media may be directly readable by a device such as the following:
v A CD-ROM, JAZ, or ZIP drive attached to a client's computer.
While an administrator can generate a backup set from any client's backed up files,
backup sets can only be used by a backup-archive client.
You cannot generate a backup set with files that were backed up to Tivoli Storage
Manager using NDMP. However, you can create a backup set with files that were
backed up using NetApp SnapShot Difference.
When generating backup sets, the server searches for active file versions in an
active-data storage pool associated with a FILE device class, if such a pool exists.
For details about the complete storage-pool search-and-selection order, see
“Active-data pools as sources of active file versions for server operations” on page
271.
Data from a shred storage pool will not be included in a backup set unless you
explicitly permit it by setting the ALLOWSHREDDABLE parameter to YES in the
GENERATE BACKUPSET command. If this value is specified, and the client node data
includes data from shred pools, that data cannot be shredded. The server will not
issue a warning if the backup set operation includes data from shred pools. See
“Securing sensitive client data” on page 563 for more information about shredding.
For details about creating and using backup sets, see the following sections:
v “Generating client backup sets on the server” on page 568
v “Restoring backup sets from a backup-archive client” on page 572
v “Moving backup sets to other servers” on page 572
v “Managing client backup sets” on page 573
Generate backup set processing attempts to process all available objects onto the
backup set media. However, objects may be skipped due to being unavailable on
the server or other errors (I/O, media, hardware) that can occur at the time of
backup set generation. Some errors may lead to termination of processing before
all available data can be processed. For example, if the source data for a backup set
is on multiple sequential volumes and the second or subsequent segment of an
object spanning volumes is on a volume that is unavailable, processing is
terminated.
If objects are skipped or other problems occur to terminate processing, review all
of the messages associated with the process to determine whether or not it should
be run again. To obtain a complete backup set, correct any problems that are
indicated and reissue the GENERATE BACKUPSET command.
To improve performance when generating backup sets, you can do one or both of
the following tasks:
v Collocate the primary storage pool in which the client node data is stored. If a
primary storage pool is collocated, client node data is likely to be on fewer tape
volumes than it would be if the storage pool were not collocated. With
collocation, less time is spent searching database entries, and fewer mount
operations are required.
v Store active backup data in an active-data pool associated with a FILE device
class. When generating a backup set, the server will search this type of
active-data pool for active file versions before searching other possible sources.
You can write backup sets to sequential media: sequential tape and device class
FILE. The tape volumes containing the backup set are not associated with storage
pools and, therefore, are not migrated through the storage pool hierarchy.
For device class FILE, the server creates each backup set with a file extension of
OST. You can copy FILE device class volumes to removable media that is
associated with CD-ROM, JAZ, or ZIP devices, by using the REMOVABLEFILE
device type.
You can determine whether to use scratch volumes when you generate a backup
set. If you do not use specific volumes, the server uses scratch volumes for the
backup set.
You can use specific volumes for the backup set. If there is not enough space to
store the backup set on the volumes, the server uses scratch volumes to store the
remainder of the backup set.
Consider the following items when you select a device class for writing the backup
set:
v Generate the backup set on any sequential access devices whose device types are
supported on both the client and server. If you do not have access to compatible
devices, you will need to define a device class for a device type that is
supported on both the client and server.
v Ensure that the media type and recording format used for generating the backup
set is supported by the device that will be reading the backup set.
v You must restore, with the IBM Tivoli Storage Manager server, backup sets that
are written to more than one volume and generated to a REMOVABLEFILE
device. Issue the RESTORE BACKUPSET command and specify -location=server to
indicate that the backup set is on the Tivoli Storage Manager server.
For more information, see “Configuring removable media devices” on page 116.
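For example, a command similar to the following one generates a backup set with
the name prefix MYBACKUPSET for all file spaces of a hypothetical node ED, writes
it to a FILE device class, and retains it for 365 days (the node and device class
names are illustrative):
generate backupset ed mybackupset * devclass=file retention=365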
To later display information about this backup set, you can include a wildcard
character with the name, such as mybackupset*, or you can specify the fully
qualified name, such as mybackupset.3099.
Backup sets are retained on the server for 365 days if you do not specify a value.
The server uses the retention period to determine when to expire the volumes on
which the backup set resides.
Backup sets are generated to a point-in-time by using one of two date and time
specifications: the date and time specified on the GENERATE BACKUPSET command, or
the date and time that the GENERATE BACKUPSET command was issued.
Point-in-time backup set generation works best if a recent date and time are
specified. Files that have expired, or are marked as expire-immediately cannot be
included in the backup set.
You can use the DATATYPE parameter to limit the backup set to only one data type.
For example, you might do this if you don't want to store redundant data on the
backup set media. Alternatively, you can specify that both file and image backup
data be included from a machine in order to reduce the number of tapes that must
be included in your off-site tape rotation.
Image backup sets include the image and all files and directories changed or
deleted since the image was backed up so that all backup sets on the media
represent the same point in time. Tables of contents are automatically generated for
any backup sets that contain image or application data. If the GENERATE BACKUPSET
command cannot generate a table of contents for one of these backup sets, then it
will fail.
For file level backup sets, the table of contents generation is optional. By default,
the command attempts to create a table of contents for file level backup sets, but it
will not fail if a table of contents is not created. You can control the table of
contents option by specifying the TOC parameter.
A separate backup set is generated for each specified node, but all of the backup
sets will be stored together on the same set of output volumes. The backup set for
each node has its own entry in the database. The QUERY BACKUPSET command will
display information about all backup sets, whether they are on their own tape or
stacked together with other backup sets onto one tape.
On the DEFINE BACKUPSET command, you can also specify multiple nodes or node
groups, and you can use wildcards with node names. DEFINE BACKUPSET
determines what backup sets are on the set of tapes and defines any that match the
specified nodes. Specifying only a single wildcard character ('*') for the node name
has the effect of defining all the backup sets on the set of tapes. Conversely, you
can define only those backup sets belonging to a particular node by specifying just
the name of that node. Backup sets on tapes belonging to nodes that are not
specified on the command are not defined. They will still exist on the tape, but
cannot be accessed.
The QUERY, UPDATE, and DELETE BACKUPSET commands also allow the specification of
node group names in addition to node names. When you delete backup sets, the
volumes on which the backup sets are stored are not returned to scratch as long as
any backup set on the volumes remains active.
Backup sets can only be used by a backup-archive client, and only if the files in the
backup set originated from a backup-archive client.
For more information about restoring backup sets, see the Backup-Archive Clients
Installation and User's Guide for your particular operating system.
In order to query the contents of a backup set and choose files to restore, tables of
contents need to be loaded into the server database. The backup-archive client can
specify more than one backup set table of contents to be loaded to the server at the
beginning of a restore session.
A table of contents is required when a backup set is generated for image data. If
the table of contents existed but was deleted for some reason, the image backup set
cannot be restored until the table of contents is regenerated with the GENERATE
BACKUPSETTOC command.
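For example, to regenerate the table of contents for a backup set with the fully
qualified name mybackupset.3099 that belongs to a hypothetical node ED:
generate backupsettoc ed mybackupset.3099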
The level of the server defining the backup set must be equal to or greater than the
level of the server that generated the backup set.
Using the example described in “Example: generating a client backup set” on page
570, you can make the backup set that was copied to the CD-ROM available to
another server by issuing the following command:
define backupset johnson project devclass=cdrom volumes=BK1,BK2,BK3
description="backup set copied to CD-ROM"
If you have multiple servers connecting to different clients, the DEFINE BACKUPSET
command makes it possible for you to take a previously generated backup set and
make it available to other servers. The purpose is to allow the user flexibility in
moving backup sets to different servers, thus allowing the user the ability to
restore their data from a server other than the one on which the backup set was
created.
Important:
1. Devclass=cdrom specifies a device class of type REMOVABLEFILE that points
to your CD-ROM drive. CD-ROMs have a maximum capacity of 650MB.
2. Volumes=BK1,BK2,BK3 specifies the names of the volumes containing the
backup set. The volume label of these CD-ROMs must match the name of the
file on the volume exactly.
Tables of contents:
v Reside on the server even if the backup set's media has been moved off-site.
v Can be generated for existing backup sets that do not contain a table of contents.
v Can be re-generated when a backup set is defined on a new server, or if using a
user-generated copy on a different medium.
Backup set tables of contents are stored in the storage pool identified by the
TOCDESTINATION attribute of the backup copy group associated with the
management class to which the backup set is bound. The management class to
which the backup set is bound will either be the default management class in the
policy domain in which the backup set's node is registered, or the management
class specified by the TOCMGmtclass parameter of the GENERATE BACKUPSET,
GENERATE BACKUPSETTOC, or DEFINE BACKUPSET command. Tables of contents for
backup sets are retained until the backup set with which they are associated
expires or is deleted. They are not subject to the policy associated with their
management class. You can issue the QUERY BACKUPSET command to show whether
a given backup set has a table of contents or not. Output from the QUERY BACKUPSET
command can be filtered based on the existence of a table of contents. This allows
you to determine which backup sets may need to have a new table of contents
created, or conversely, which backup sets could be used with the client's file-level
restore.
The following figure shows the report that is displayed after you enter:
query backupset f=d
The FORMAT=DETAILED parameter on the QUERY BACKUPSET command provides the client file
spaces contained in the backup set and the list of volumes of the backup set.
The server displays information about the files and directories that are contained in
a backup set. After you issue the query backupsetcontents jane engdata.3099
command, the following output is displayed:
Tip: To display the contents of an image backup set, specify DATATYPE=IMAGE on the
QUERY BACKUPSETCONTENTS command.
File space names and file names that can be in a different code page or locale than
the server do not display correctly in the Operations Center, the Administration
Center, or the administrative command-line interface. The data itself is backed up
and can be restored properly, but the file space or file name may display with a
combination of invalid characters or blank spaces.
If the file space name is Unicode enabled, the name is converted to the server's
code page for display. The results of the conversion for characters not supported
by the current code page depends on the operating system. For names that Tivoli
Storage Manager is able to partially convert, you may see question marks (??),
blanks, unprintable characters, or “...”. These characters indicate to the
administrator that files do exist. If the conversion is not successful, the name is
displayed as "...". Conversion can fail if the string includes characters that are not
available in the server code page, or if the server has a problem accessing system
conversion routines.
To delete all backup sets belonging to client node JANE, created before 11:59 p.m.
on March 18, 1999, enter:
delete backupset jane * begindate=03/18/1999 begintime=23:59
When that date passes, the server automatically deletes the backup set when
expiration processing runs. However, you can also manually delete the client's
backup set from the server before it is scheduled to expire by using the DELETE
BACKUPSET command.
To help address this problem, you can use subfile backups. When a client's file has
been previously backed up, any subsequent backups are typically made of the
portion of the client's file that has changed (a subfile), rather than the entire file. A
base file is represented by a backup of the entire file and is the file on which
subfiles are dependent. If the changes to a file are extensive, a user can request a
backup on the entire file. A new base file is established on which subsequent
subfile backups are dependent.
This type of backup makes it possible for mobile users to reduce connection time,
network traffic, and the time it takes to do a backup.
To enable this type of backup, see “Setting up clients to use subfile backup” on
page 577.
Subfile backups
The following table describes how Tivoli Storage Manager manages backups of this
file.
Version   Day of subsequent backup   What Tivoli Storage Manager backs up
One       Monday                     The entire CUST.TXT file (the base file)
Two       Tuesday                    A subfile of CUST.TXT. The server compares the file
                                     backed up on Monday with the file that needs to be
                                     backed up on Tuesday. A subfile containing the
                                     changes between the two files is sent to the server
                                     for the backup.
Three     Wednesday                  A subfile of CUST.TXT. Tivoli Storage Manager
                                     compares the file backed up on Monday with the file
                                     that needs to be backed up on Wednesday. A subfile
                                     containing the changes between the two files is sent
                                     to the server for the backup.
Related reference:
“Setting policy to enable point-in-time restore for clients” on page 551
“Policy for logical volume backups” on page 546
Restoring subfiles
When a client issues a request to restore subfiles, Tivoli Storage Manager restores
subfiles along with the corresponding base file back to the client. This process is
transparent to the client. That is, the client does not have to determine whether all
subfiles and corresponding base file were restored during the restore operation.
You can define (move) a backup set that contains subfiles to an earlier version of a
server that is not enabled for subfile backup. That server can restore the backup set
containing the subfiles to a client not able to restore subfiles. However, this process
is not recommended as it could result in a data integrity problem.
If import processing is canceled while a base file and its dependent subfiles are
being imported from the volumes to a target server, the server automatically
deletes any incomplete base files and subfiles that were stored on the target
server.
For example, when expiration processing runs, Tivoli Storage Manager recognizes a
base file as eligible for expiration but does not delete the file until all its dependent
subfiles have expired. For more information on how the server manages file
expiration, see “Running expiration processing to delete expired files” on page 535.
If the base file and dependent subfiles are stored on separate volumes when a
backup set is created, additional volume mounts may be required to create the
backup set.
When you optimize restore operations, performance depends on the type of
media that you use. See Table 58 for information about the media that you can
use to restore data.
Table 58. Advantages and disadvantages of the different device types for restore operations
Random access disk
   Advantages:
   v Quick access to files
   v No mount point needed
   Disadvantages:
   v No reclamation of unused space in aggregates
   v No deduplication of data
Sequential access disk (FILE)
   Advantages:
   v Reclamation of unused space in aggregates
   v Quick access to files (disk based)
   v Allows deduplication of data
   Disadvantages:
   v Requires a mount point, but not as severe an impact as real tape
Virtual tape library
   Advantages:
   v Quick access to files because of disk-based media
   v Existing applications that were written for real tape do not have to be rewritten
   Disadvantages:
   v Requires a mount point, but not as severe an impact as real tape
   v No deduplication of data
The following tasks can help you balance the costs against the need for optimized
restore operations:
v Identify systems that are most critical to your business. Consider where your
most important data is, what is most critical to restore, and what needs the
fastest restore. Identify which systems and applications you want to focus on
when you optimize for restore operations.
v Identify your goals and order the goals by priority. The following list has some
goals to consider:
– Disaster recovery or recovery from hardware crashes, requiring file system
restores
– Recovery from loss or deletion of individual files or groups of files
– Recovery for database applications (specific to the API)
– Point-in-time recovery of groups of files
The importance of each goal can vary for the different client systems that you
identified as being most critical.
For more information about restore operations for clients, see “Concepts for client
restore operations” on page 582.
Environment considerations
Tivoli Storage Manager performance depends upon the environment.
You can also use active-data pools to store active versions of client backup data.
Archive and space-managed data is not allowed in active-data pools. Inactive files
are removed from the active-data pool during expiration processing. Active-data
pools that are associated with a FILE device class do not require tape mounts, and
the server does not have to position past inactive files. In addition, FILE volumes
can be accessed concurrently by multiple client sessions or server processes. You
can also create active-data pools that use tape or optical media, which can be
moved off-site, but which require tape mounts.
If you do not use FILE or active-data pools, consider how restore performance is
affected by the layout of data across single or multiple tape volumes. You can have
multiple simultaneous sessions when you use FILE to restore, and mount overhead
is skipped with FILE volumes. Major causes of performance problems are excessive
tape mounts and needing to skip over expired or inactive data on a tape. After a
long series of incremental backups, perhaps over years, the active data for a single
file space can be spread across many tape volumes. A single tape volume can have
active data that is mixed with inactive and expired data.
Consider the following information when you run file system restore operations:
v Combine image backups with progressive incremental backups for the file
system to allow for full restore to an arbitrary point-in-time.
v To minimize disruption to the client during backup, use either hardware-based
or software-based snapshot techniques for the file system.
v Perform image backups infrequently. More frequent image backups give better
point-in-time granularity, but there is a cost. The frequent backups affect the tape
usage, there is an interruption of the client system during backup, and there is
greater network bandwidth needed.
As a guideline, run an image backup after a certain percentage of the data in
the file system has changed since the last image backup.
Image backup is not available for all clients. If image backup is not available for
your client, use file-level restore as an alternative.
For more information about collocation, see “Keeping client files together using
collocation” on page 381.
For information about data protection for databases, see the Tivoli Storage
Manager information center.
If you also schedule incremental backups regularly, you might have greater
granularity in restoring to a discrete point-in-time. However, keeping many
versions can degrade restore operation performance. Setting policy to keep many
versions also has costs, in terms of database space and storage pool space. Your
policies might have overall performance implications.
If you cannot afford the resource costs of keeping the large numbers of file
versions and must restore to a point-in-time, consider the following options:
v Use backup sets
v Export the client data
v Use an archive
v Take a volume image, including virtual machine backups
You can restore to the point-in-time when the backup set was generated, the export
was run, or the archive was created. Remember, when you restore the data, your
selection is limited to the time at which you created the backup set, export, or
archive.
Tip: If you use the archive function, create a monthly or yearly archive. Do not
use archive as a primary backup method because frequent archives with large
amounts of data can affect server and client performance.
The no-query restore requires less interaction between the client and the server,
and the client can use multiple sessions for the restore operation. The no-query
restore operation is useful when you restore large file systems on a client with
limited memory. The advantage is that no-query restore avoids some processing
that can affect the performance of other client applications. In addition, it can
achieve a high degree of parallelism by restoring with multiple sessions from the
server and storage agent simultaneously.
With no-query restore operations, the client sends a single restore request to the
server instead of querying the server for each object to be restored. The server
returns the files and directories to the client without further action by the client.
The client accepts the data that comes from the server and restores it to the
destination named on the restore command.
The no-query restore operation is used by the client only when the restore request
meets both of the following criteria:
v You enter the restore command with a source file specification that has an
unrestricted wildcard.
An example of a source file specification with an unrestricted wildcard is:
/home/mydocs/2002/*
An example of a source file specification with a restricted wildcard is:
/home/mydocs/2002/sales.*
v You do not specify any of the following client options:
inactive
latest
pick
fromdate
todate
To force classic restore operations, use ?* in the source file specification rather than
*. For example:
/home/mydocs/2002/?*
For more information about restore processes, see the Backup-Archive Clients
Installation and User's Guide.
You can issue the commands one after another in a single session or window, or
issue them at the same time from different command windows.
When you enter multiple commands to restore files from a single file space, specify
a unique part of the file space in each restore command. Be sure that you do not
use any overlapping file specifications in the commands. To display a list of the
directories in a file space, issue the QUERY BACKUP command on the client. For
example:
dsmc query backup -dirsonly -subdir=no /usr/
For more information, see the Backup-Archive Clients Installation and User's Guide.
Set the client option for resource utilization to one greater than the number of
sessions that you want. Use the number of drives that you want that single client
to use. The client option can be included in a client option set.
At the client, the option for resource utilization also affects how many drives
(sessions) the client can use. The client option, resource utilization, can be included
in a client option set. If the number specified in the MAXNUMMP parameter is too low
and there are not enough mount points for each of the sessions, it might not be
possible to achieve the benefit of the multiple sessions that are specified in the
resource utilization client option.
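For example, to let a hypothetical node ED use two tape drives, you might set the
mount point limit on the server and then set the client option to one greater than
the number of sessions that you want (the values are illustrative):
update node ed maxnummp=2
In the client options file:
resourceutilization 3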
Archiving data
Managing archive data on the server becomes important when you have client
nodes that archive large numbers (hundreds or thousands) of files every day.
If you archive files with automated tools that start the command-line client or API,
you might encounter large numbers. If performance degrades over time during an
archive operation, or you have a large amount of storage that is used by archives,
consider advanced techniques. See “Archive operations overview” and “Managing
storage usage for archives” on page 586.
All files that are archived with the same description become members of the same
archive package. If the user does not specify a description when archiving, the
client program provides a default description with each archive request. The
default description includes the date.
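For example, a command-line client user can group related archives into a single
package by supplying the same description on each request (the path and description
are illustrative):
dsmc archive c:\payroll\* -description="payroll records"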
When files are archived, the client program archives the paths (directories) to those
files to preserve access permissions which are specific to the operating system.
Directories are also included in archive packages. If the same directory is archived
with different descriptions, the directory is stored once with each package. If a
command line user issues a QUERY ARCHIVE command, multiple entries for the same
directory may appear. Closer inspection shows that each entry has a different
description.
The GUI and Web client programs allow a user to navigate through a client node's
archives by first displaying all descriptions (the package identifiers), then the
directories, and finally the files. Users can retrieve or delete individual files or all
files in a directory. Command line client and API users can specify a description
when they archive files, or when they send requests to query, retrieve or delete
archived files.
When retrieving files, the server searches for the most current file versions. It will
search in an active-data storage pool associated with a FILE device class, if such a
pool exists.
Consider the following two actions that you can take to minimize the storage
usage:
Minimize the number of unique descriptions
You can reduce storage usage by archiving more files into fewer packages
(by reducing the number of unique descriptions). The amount of storage
used for directories is also affected by the number of packages. If you
archive a file three different times using three different descriptions, the
server stores both the file and the directory three times, once in each
package. If you archive the same file three different times using just one
description, the server stores the file three times, but the directory is stored
just one time.
Archive directories only if needed
Archiving directories might be necessary if the directories are needed to
group files for query or retrieve, or if the directory-level access permission
information needs to be archived.
Users of the GUI and Web client programs need descriptions to aid in
navigating to archived files. You can minimize storage usage for archives by
reducing the number of packages. For client nodes that are always accessed via
the command-line interface, you can also use some other techniques.
If the user follows these guidelines, the client node will have one or a limited
number of archive packages. Because of the small number of packages, there are
only small numbers of copies of each directory entry. The savings in storage space
that result are noticeable when files with the same path specification are archived
multiple times over multiple days.
See the Backup-Archive Clients Installation and User's Guide for details about archive
operations and client options.
Do not run the UPDATE ARCHIVE command while any other processing for the node
is running. If this command is issued for a node with any other object insertion or
deletion activity occurring at the same time, locking contention may occur. This
may result in processes and sessions hanging until the resource timeout is reached
and the processes and sessions terminate.
When you update archives for a node, you have two choices for the action to take:
Delete directory entries in all archive packages
This action preserves the archive packages, but removes directory entries
for all packages, reducing the amount of storage used for archives. Do this
only when directory entries that include access permissions are not needed
in the archive packages, and the paths are not needed to query or retrieve
a group of files. The amount of reduction depends on the number of
packages and the number of directory entries. For example, to remove
directory entries for the client node SNOOPY, enter this command:
update archive snoopy deletedirs
Attention: After you delete the directory entries, the directory entries
cannot be recreated in the archive packages. Do not use this option if users
of the client node need to archive access permissions for directories.
Reduce the number of archive packages to a single package for the node
This action removes all unique descriptions, thereby reducing the number
of archive packages to one for the client node. Do this only when the
descriptions are not needed and are causing large use of storage. This
action also removes directory entries in the archive packages. Because there
is now one package, there is one entry for each directory. For example, to
reduce the archive packages to one for the client node SNOOPY, enter this
command:
update archive snoopy resetdescriptions
After updating the archives for a node in this way, keep the archive
package count to a minimum.
Attention: You cannot recreate the packages after the descriptions have
been deleted. Do not use this option if users of the client node manage
archives by packages, or if the client node is accessed via the GUI or Web
client interface.
See Backup-Archive Clients Installation and User's Guide for details about the option.
Tip: The GUI and Web client programs use the directories to allow users to
navigate to the archived files. This option is not recommended for GUI or Web
client interface users.
Tasks:
“Scheduling a client operation” on page 590
“Starting the scheduler on the clients” on page 591
“Displaying information about schedules” on page 599
“Creating schedules for running command files” on page 593
“Updating the client options file to automatically generate a new password” on page 594
You can modify, copy, and delete any schedule you create. See Chapter 17,
“Managing schedules for client nodes,” on page 597 for more information.
Tivoli Storage Manager provides two sample schedules: one for daily backups and
one for weekly backups. The sample schedules use defaults for many of their
values. You can copy and modify them to create customized schedules that meet
your requirements.
Administrators can follow these steps to create schedules for client node
operations. To later modify, copy, and delete these schedules, see Chapter 17,
“Managing schedules for client nodes,” on page 597.
1. Double-click the Tivoli Storage Manager Console icon on the server desktop.
2. Expand the tree until the Tivoli Storage Manager server that you want to work
with displays. Expand the server and click Wizards. The list of wizards appears
in the right pane.
3. Select the Schedule Configuration wizard and click Start. The Scheduling
Wizard appears.
4. Follow the instructions in the wizard, clicking Next until the Tivoli Storage
Manager Scheduling Options dialog appears.
5. Click the Add button. The Add Schedules dialog appears.
6. Click Help for assistance with this dialog.
7. When you are finished, click OK or Apply.
As an alternative to using the Tivoli Storage Manager Console, you can define and
associate schedules by using the Tivoli Storage Manager command line interface or
the Administration Center.
You must have system privilege, unrestricted policy, or restricted policy (for the
policy domain to which the schedule belongs) to associate client nodes with
schedules. Issue the DEFINE ASSOCIATION command to associate client nodes with a
schedule.
Complete the following step to associate the ENGNODE client node with the
WEEKLY_BACKUP schedule, both of which belong to the ENGPOLDOM policy
domain:
define association engpoldom weekly_backup engnode
After a client schedule is defined, you can associate client nodes with it by
identifying the following information:
v Policy domain to which the schedule belongs
v List of client nodes to associate with the schedule
Administrators must ensure that users start the Tivoli Storage Manager scheduler
on the client or application client directory, and that the scheduler is running at the
schedule start time. After the client scheduler starts, it continues to run and
initiates scheduled events until it is stopped.
The way that users start the Tivoli Storage Manager scheduler varies, depending
on the operating system that the machine is running. The user can choose to start
the client scheduler automatically when the operating system is started, or can
start it manually at any time. The user can also have the client acceptor manage
the scheduler, starting the scheduler only when needed. For instructions on these
tasks, see the Backup-Archive Clients Installation and User's Guide.
The client and the Tivoli Storage Manager server can be set up to allow all sessions
to be initiated by the server. See “Server-initiated sessions” on page 453 for
instructions.
Note: Tivoli Storage Manager does not recognize changes that you made to the
client options file while the scheduler is running. For Tivoli Storage Manager to
use the new values immediately, you must stop the scheduler and restart it.
The following output shows an example of a report for a classic schedule that is
displayed after you enter:
query schedule engpoldom
Domain * Schedule Name Action Start Date/Time Duration Period Day
------------ - ---------------- ------ -------------------- -------- ------ ---
ENGPOLDOM MONTHLY_BACKUP Inc Bk 09/04/2002 12:45:14 2 H 2 Mo Sat
ENGPOLDOM WEEKLY_BACKUP Inc Bk 09/04/2002 12:46:21 4 H 1 W Sat
For enhanced schedules, the standard schedule format displays a blank period
column and an asterisk in the day of week column. Issue FORMAT=DETAILED to
display complete information about an enhanced schedule. Refer to the
Administrator's Reference for command details. The following output shows an
example of a report for an enhanced schedule that is displayed after you enter:
query schedule engpoldom
Domain * Schedule Name Action Start Date/Time Duration Period Day
------------ - ---------------- ------ -------------------- -------- ------ ---
ENGPOLDOM MONTHLY_BACKUP Inc Bk 09/04/2002 12:45:14 2 H 2 Mo Sat
ENGPOLDOM WEEKLY_BACKUP Inc Bk 09/04/2002 12:46:21 4 H (*)
The default schedules do not support command files, so you must create a new
schedule to schedule command files.
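For example, a schedule definition along the following lines corresponds to the
description that follows (the domain and schedule names are illustrative):
define schedule standard daily_incr action=command objects="c:\incr.cmd"
starttime=18:00 duration=5 durunits=minutes period=1 perunits=days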
Associate the client with the schedule and ensure that the scheduler is started on
the client or application client directory. The schedule runs the file called
c:\incr.cmd once a day between 6:00 p.m. and 6:05 p.m., every day of the week.
If a password expires and is not updated, scheduled operations fail. You can
prevent failed operations by allowing Tivoli Storage Manager to generate a new
password when the current password expires. If you set the PASSWORDACCESS
option to GENERATE in the Tivoli Storage Manager client options file, dsm.opt,
Tivoli Storage Manager automatically generates a new password for your client
node each time it expires, encrypts and stores the password in a file, and retrieves
the password from that file during scheduled operations. You are not prompted for
the password.
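For example, the following line in the client options file, dsm.opt, enables
automatic password generation:
passwordaccess generate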
To access the Exchange Server APIs, the application client must be running under
the Site Services Account. The Site Services Account is the account under which the
Exchange services are running. The scheduler service must log on using this
account. The scheduler service account information can be specified using the
services applet in the Windows Control Panel. For more information about the Site
Services Account, see the Microsoft Exchange Server documentation.
The Client Acceptor daemon (CAD) cannot be used by a client node when
SESSIONINITIATION=SERVERONLY.
Figure 74 on page 595 shows three Windows machines configured for scheduling.
Tasks:
“Managing node associations with schedules” on page 600
“Specifying one-time actions for client nodes” on page 610
“Managing event records” on page 601
“Managing the throughput of scheduled operations” on page 603
“Managing IBM Tivoli Storage Manager schedules”
For a description of what Tivoli Storage Manager views as client nodes, see
Chapter 12, “Adding client nodes,” on page 439. For information about the
scheduler and creating schedules, see Chapter 16, “Scheduling operations for client
nodes,” on page 589.
As an alternative to using the wizard, you can add and associate schedules by
using the Tivoli Storage Manager command line interface or the Administration
Center. For more information, see “Creating Tivoli Storage Manager schedules” on
page 590.
Client node associations are not copied to the new schedule. You must associate
client nodes with the new schedule before it can be used. The associations for the
old schedule are not changed.
To copy the WINTER schedule from policy domain DOMAIN1 to DOMAIN2 and
name the new schedule WINTERCOPY, enter:
copy schedule domain1 winter domain2 wintercopy
For information, see “Associating client nodes with schedules” on page 591.
Modifying schedules
You can modify existing schedules by issuing the UPDATE SCHEDULE command.
You can also modify existing schedules by using the schedule configuration wizard
in the IBM Tivoli Storage Manager Console.
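For example, to change the start time of the WEEKLY_BACKUP schedule in the
ENGPOLDOM policy domain to 3:00 a.m. (an illustrative change), you might enter:
update schedule engpoldom weekly_backup starttime=03:00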
Rather than delete a schedule, you may want to remove all nodes from the
schedule and save the schedule for future use. For information, see “Removing
nodes from schedules” on page 601.
See “Associating client nodes with schedules” on page 591 for more information.
The following output shows an example of a report for a classic schedule that is
displayed after you enter:
query schedule engpoldom
Domain * Schedule Name Action Start Date/Time Duration Period Day
------------ - ---------------- ------ -------------------- -------- ------ ---
ENGPOLDOM MONTHLY_BACKUP Inc Bk 09/04/2002 12:45:14 2 H 2 Mo Sat
ENGPOLDOM WEEKLY_BACKUP Inc Bk 09/04/2002 12:46:21 4 H 1 W Sat
For enhanced schedules, the standard schedule format displays a blank period
column and an asterisk in the day of week column. Issue FORMAT=DETAILED to
display complete information about an enhanced schedule. Refer to the
Administrator's Reference for command details. The following output shows an
example of a report for an enhanced schedule that is displayed after you enter:
query schedule engpoldom
Domain * Schedule Name Action Start Date/Time Duration Period Day
------------ - ---------------- ------ -------------------- -------- ------ ---
ENGPOLDOM MONTHLY_BACKUP Inc Bk 09/04/2002 12:45:14 2 H 2 Mo Sat
ENGPOLDOM WEEKLY_BACKUP Inc Bk 09/04/2002 12:46:21 4 H (*)
You can perform the following activities to manage associations of client nodes
with schedules.
To associate client nodes with a schedule, you can use one of the following
methods:
v Issue the DEFINE ASSOCIATION command from the command-line interface.
v Use the Administration Center to associate a node with a schedule.
v Use the Schedule Configuration wizard in the Tivoli Storage Manager Console or
  the Administration Center.
For more information, see “Associating client nodes with schedules” on page 591.
For example, you should query an association before deleting a client schedule.
Figure 75 on page 601 shows the report that is displayed after you enter:
query association engpoldom
To delete the association of the ENGNOD client with the ENGWEEKLY schedule,
in the policy domain named ENGPOLDOM, enter:
delete association engpoldom engweekly engnod
Instead of deleting a schedule, you may want to delete all associations to it and
save the schedule for possible reuse in the future.
You can also find information about scheduled events by checking the log file
described in “Checking the schedule log file” on page 603.
For example, you can issue the following command to find out which events were
missed in the previous 24 hours, for the DAILY_BACKUP schedule in the
STANDARD policy domain:
query event standard daily_backup begindate=-1 begintime=now
enddate=today endtime=now exceptionsonly=yes
Figure 77 shows an example of the results of this query. To find out why a
schedule was missed or failed, you may need to check the schedule log on the
client node itself. For example, a schedule can be missed because the scheduler
was not started on the client node.
Such events are displayed with a status of Uncertain, indicating that complete
information is not available because the event records have been deleted. To
determine if event records have been deleted, check the message that is issued
after the DELETE EVENT command is processed.
The default name for the schedule log file is dsmsched.log. The file is located in
the directory where the Tivoli Storage Manager backup-archive client is installed.
You can override this file name and location by specifying the SCHEDLOGNAME option
in the client options file. See the Backup-Archive Clients Installation and User's
Guide for more information.
You can specify how long event records stay in the database before the server
automatically deletes them by using the SET EVENTRETENTION command. You
can also manually delete event records from the database, if database space is
required.
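For instance, to keep event records in the database for 15 days (an illustrative
value), you might enter:
set eventretention 15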
For example, to delete all event records written prior to 11:59 p.m. on June 30,
2002, enter:
delete event 06/30/2002 23:59
With client-polling mode, client nodes poll the server for the next scheduled event.
With server-prompted mode, the server contacts the nodes at the scheduled start
time. By default, the server permits both scheduling modes. The default (ANY)
allows nodes to specify either scheduling mode in their client options files. You can
modify this scheduling mode.
If you modify the default server setting to permit only one scheduling mode, all
client nodes must specify the same scheduling mode in their client options file.
Clients that do not have a matching scheduling mode will not process the
scheduled operations. The default mode for client nodes is client-polling.
The scheduler must be started on the client node's machine before a schedule can
run in either scheduling mode.
For more information about modes, see “Overview of scheduling modes” on page
605.
You can instead prevent clients from starting sessions, and allow only the server to
start sessions with clients.
To limit the start of backup-archive client sessions to the server only, complete the
following steps for each node:
1. Use the REGISTER NODE command or the UPDATE NODE command to change the
value of the SESSIONINITIATION parameter to SERVERONLY. Specify the high-level
address and low-level address options. These options must match the values that the
client is using; otherwise, the server cannot contact the client. An example is
shown after this list.
2. Set the scheduling mode to server-prompted. All sessions must be started by
server-prompted scheduling on the port that was defined for the client with the
REGISTER NODE or the UPDATE NODE commands.
3. Ensure that the scheduler on the client is started. You cannot use the client
acceptor (dsmcad) to start the scheduler when SESSIONINITIATION is set to
SERVERONLY.
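For example, step 1 might use a command similar to the following one for a
hypothetical node named ED (the address and port values are illustrative and must
match the values that the client uses):
update node ed sessioninitiation=serveronly hladdress=192.0.2.10 lladdress=1501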
See Table 60 and Table 59 for the advantages and disadvantages of client-polling
and server-prompted modes.
Table 59. Client-Polling mode
How the mode works:
1. A client node queries the server at prescribed time intervals to obtain a
   schedule. This interval is set with a client option, QUERYSCHEDPERIOD. For
   information about client options, refer to the appropriate Backup-Archive
   Clients Installation and User's Guide.
2. At the scheduled start time, the client node performs the scheduled operation.
3. When the operation completes, the client sends the results to the server.
4. The client node queries the server for its next scheduled operation.
Advantages and disadvantages:
v Useful when a high percentage of clients start the scheduler manually on a
  daily basis, for example when their workstations are powered off nightly.
v Supports randomization, which is the random distribution of scheduled start
  times. The administrator can control randomization. By randomizing the start
  times, Tivoli Storage Manager prevents all clients from attempting to start the
  schedule at the same time, which could overwhelm server resources.
v Valid with all communication methods.
Table 60. Server-Prompted mode
How the mode works:
1. The server contacts the client node when scheduled operations need to be
   performed and a server session is available.
2. When contacted, the client node queries the server for the operation, performs
   the operation, and sends the results to the server.
Advantages and disadvantages:
v Useful if you change the schedule start time frequently. The new start time is
  implemented without any action required from the client node.
v Useful when a high percentage of clients are running the scheduler and are
  waiting for work.
v Useful if you want to restrict sessions to server-initiated.
v Does not allow for randomization of scheduled start times.
v Valid only with client nodes that use TCP/IP to communicate with the server.
Client-Polling Scheduling Mode: To have clients poll the server for scheduled
operations, enter:
set schedmodes polling
Ensure that client nodes specify the same mode in their client options files.
Server-Prompted Scheduling Mode: To have the server contact client nodes when
scheduled operations must be performed, enter:
set schedmodes prompted
Ensure that client nodes specify the same mode in their client options files.
Any Scheduling Mode: To return to the default scheduling mode so that the
server supports both client-polling and server-prompted scheduling modes, enter:
set schedmodes any
For more information, refer to the appropriate Backup-Archive Clients Installation and
User's Guide.
When you define a schedule, you specify the length of time between processing of
the schedule. Consider how these schedule settings interact to ensure that the
clients get the backup coverage that you intend.
To enable the server to complete all schedules for clients, you may need to use trial
and error to control the workload. To estimate how long client operations take, test
schedules on several representative client nodes. Keep in mind, for example, that
the first incremental backup for a client node takes longer than subsequent
incremental backups.
Of these sessions, you can set a maximum percentage to be available for processing
scheduled operations. Limiting the number of sessions available for scheduled
operations ensures that sessions are available when users initiate any unscheduled
operations, such as restoring or retrieving files.
If the number of sessions for scheduled operations is insufficient, you can increase
either the total number of sessions or the maximum percentage of scheduled
sessions. However, increasing the total number of sessions can adversely affect
server performance. Increasing the maximum percentage of scheduled sessions can
reduce the server availability to process unscheduled operations.
For example, assume that the maximum number of sessions between client nodes
and the server is 80. If you want 25% of these sessions to be used for scheduled
operations, enter:
set maxschedsessions 25
The following table shows the trade-offs of using either the SET
MAXSCHEDSESSIONS command or the MAXSESSIONS server option.
A startup window is defined by the start time and duration during which a
schedule must be initiated. For example, if the start time is 1:00 a.m. and the
duration is 4 hours, the startup window is 1:00 a.m. to 5:00 a.m. For the
client-polling scheduling mode, specify the percentage of the startup window that
the server can use to randomize start times for different client nodes that are
associated with a schedule.
The settings for randomization and the maximum percentage of scheduled sessions
can affect whether schedules are successfully completed for client nodes. Users
receive a message if all sessions are in use when they attempt to process a
schedule.
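For example, to allow the server to randomize start times over 50 percent of each
startup window (an illustrative percentage), you might enter:
set randomize 50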
It is possible, especially after a client node or the server has been restarted, that a
client node may not poll the server until after the beginning of the startup window
in which the next scheduled event is to start. In this case, the starting time is
randomized over the specified percentage of the remaining duration of the startup
window.
The result is that the nine client nodes that polled the server before the beginning
of the startup window are assigned randomly selected starting times between 8:00
and 8:30. The client node that polled at 8:30 receives a randomly selected starting
time that is between 8:30 and 8:45.
A larger startup window gives the client node more time to attempt initiation of a
session with the server.
Users can also set these values in their client user options files. (Root users on
UNIX and Linux systems set the values in client system options files.) However,
user values are overridden by the values that the administrator specifies on the
server.
The communication paths from client node to server can vary widely with regard
to response time or the number of gateways. In such cases, you can choose not to
set these values so that users can tailor them for their own needs.
For the client-polling scheduling mode, you can specify the maximum number of
hours that the scheduler on a client node waits between attempts to contact the
server to obtain a schedule. You can set this period to correspond to the frequency
with which the schedule changes are being made. If client nodes poll more
frequently for schedules, changes to scheduling information (through administrator
commands) are propagated more quickly to client nodes.
If you want to have all clients using polling mode contact the server every 24
hours, enter:
set queryschedperiod 24
This setting has no effect on clients that use the server-prompted scheduling mode.
The clients also have a QUERYSCHEDPERIOD option that can be set on each
client. The server value overrides the client value once the client successfully
contacts the server.
The maximum number of command retry attempts does not limit the number of
times that the client node can contact the server to obtain a schedule. The client
node never gives up when trying to query the server for the next schedule.
Be sure not to specify so many retry attempts that the total retry time is longer
than the average startup window.
If you want to have all client schedulers retry a failed attempt to process a
scheduled command up to two times, enter:
set maxcmdretries 2
Maximum command retries can also be set on each client with a client option,
MAXCMDRETRIES. The server value overrides the client value once the client
successfully contacts the server.
Typically, this setting is effective when set to half of the estimated time it takes to
process an average schedule. If you want to have the client scheduler retry every
15 minutes any failed attempts to either contact the server or process scheduled
commands, enter:
set retryperiod 15
You can use this setting in conjunction with the SET MAXCMDRETRIES command
(number of command retry attempts) to control when a client node contacts the
server to process a failed command. See “Setting the number of command retry
attempts” on page 609.
The retry period can also be set on each client with a client option, RETRYPERIOD.
The server value overrides the client value once the client successfully contacts the
server.
If the scheduling mode is set to prompted, the client performs the action within 3
to 10 minutes. If the scheduling mode is set to polling, the client processes the
command at its prescribed time interval. The time interval is set by the
QUERYSCHEDPERIOD client option. The DEFINE CLIENTACTION command
causes Tivoli Storage Manager to automatically define a schedule and associate
client nodes with that schedule. With the schedule name provided, you can later
query or delete the schedule and associated nodes. The names of one-time client
action schedules can be identified by a special character followed by numerals, for
example @1.
The schedule name and association information is returned to the server console or
the administrative client with messages ANR2500I and ANR2510I.
For example, you can issue a DEFINE CLIENTACTION command that specifies an
incremental backup command for client node HERMIONE in domain
ENGPOLDOM:
define clientaction hermione domain=engpoldom action=incremental
Tivoli Storage Manager defines a schedule and associates client node HERMIONE
with the schedule. The server assigns the schedule priority 1, sets the period units
(PERUNITS) to ONETIME, and determines the number of days to keep the
schedule active based on the value set with the SET CLIENTACTDURATION
command.
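For example, to keep one-time client action schedules active for five days, you might
issue a command similar to the following (the number of days is illustrative):
set clientactduration 5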
For a list of valid actions, see the DEFINE CLIENTACTION command in the
Administrator's Reference. You can optionally include the OPTIONS and OBJECTS
parameters.
If the duration of client actions is set to zero, the server sets the DURUNITS
parameter (duration units) as indefinite for schedules defined with the DEFINE
CLIENTACTION command. The indefinite setting for DURUNITS means that the
schedules are not deleted from the database.
| You can use the Operations Center to identify potential issues at a glance, manage
| alerts, and access the Tivoli Storage Manager command line.
| The Administration Center interface is also available, but the Operations Center is
| the preferred monitoring interface.
| Related concepts:
| Chapter 27, “Alert monitoring,” on page 833
| Related tasks:
| Chapter 28, “Sending alerts by email,” on page 835
|
| Opening the Operations Center
| You can open the Operations Center with a web browser.
| You can open the Operations Center by using any supported web browser. For a
| list of supported web browsers, see the chapter about web browser requirements in
| the Installation Guide.
| Configuring the hub server: If you are connecting to the Operations Center for
| the first time, you are redirected to the initial configuration wizard. In that wizard,
| you must provide the following information:
| v Connection information for the Tivoli Storage Manager server that you designate
| as a hub server
| v Login credentials for an administrator who is defined to that Tivoli Storage
| Manager server
| If the event-record retention period of the Tivoli Storage Manager server is less
| than 14 days, the value automatically increases to 14 days when you configure the
| server as a hub server.
| If you have multiple Tivoli Storage Manager servers in your environment, add the
| other Tivoli Storage Manager servers as spoke servers to the hub server, as
| described in “Adding spoke servers” on page 619.
| v To view help for the current page, hover your mouse pointer over the Help
| icon ( ? ) in the Operations Center menu bar and click the page name.
| To view general help for the Operations Center, including message help and
| conceptual and task topics, click Documentation.
| v To open the command-line interface, hover your mouse pointer over the Global
| Settings icon ( ) in the Operations Center menu bar, and click Command
| Line.
| In the command-line interface, you can run commands to manage Tivoli Storage
| Manager servers that are configured as hub or spoke servers.
| v To log out, click the administrator name in the menu bar, and click Log Out.
|
| Viewing the Operations Center on a mobile device
| You can view the Overview page of the Operations Center in the web browser of a
| mobile device to remotely monitor your storage environment. The Operations
| Center supports the Apple Safari web browser on the iPad. Other mobile devices
| can also be used.
| Open a web browser on your mobile device, and enter the web address of the
| Operations Center. See “Opening the Operations Center” on page 615.
|
| Administrator IDs and passwords
| An administrator must have a valid ID and password on the hub server to log in
| to the Tivoli Storage Manager Operations Center. An administrator ID is also
| assigned to the Operations Center so that the Operations Center can monitor
| servers.
| The following Tivoli Storage Manager administrator IDs are required to use the
| Operations Center:
| Operations Center administrator IDs
| Any administrator ID that is registered on the hub server can be used to
| log in to the Operations Center. The authority level of the ID determines
| which tasks can be completed. You can create new administrator IDs by
| using the REGISTER ADMIN command. For information about this command,
| see the Administrator's Reference.
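| For example, to register a new administrator and grant system authority, you
| might issue commands similar to the following (the administrator name and
| password are illustrative only):
| register admin opsadmin secretpw
| grant authority opsadmin classes=system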
| The Operations Center shows you a consolidated view of alerts and status
| information for the hub server and any spoke servers.
| You can install the Operations Center on the same computer as a Tivoli Storage
| Manager server or on a different computer.
| When you open the Operations Center for the first time, you connect it to one
| Tivoli Storage Manager server instance, which becomes the dedicated hub server.
| You can then connect more Tivoli Storage Manager servers as spoke servers.
| Tip: If you use library sharing, and the library manager server meets the
| Operations Center system requirements, consider designating this server as the
| hub server. Few, if any, Tivoli Storage Manager clients are typically registered to
| the library manager server. The smaller client workload of this server can make it a
| good candidate to take on the additional processing requirements of a hub server.
| Performance
| As a rule, a hub server can support 10-20 spoke servers. This number can vary,
| depending on your configuration.
| The following factors have the most significant impact on system performance:
| v The number of Tivoli Storage Manager clients or virtual machine file systems
| that are managed by the hub and spoke servers.
| v The frequency at which data is refreshed in the Operations Center.
| Consider grouping hub and spoke servers by geographic location. For example,
| managing a set of hub and spoke servers within the same data center can help
| prevent issues that can be caused by firewalls or the lack of appropriate network
| bandwidth between different locations.
| If necessary, you can further divide servers according to one or more of the
| following characteristics:
| v The administrator who manages the servers
| v The organizational entity that funds the servers
| v Server operating systems
| You can manage a hub server and multiple spoke servers from the same instance
| of the Operations Center.
| If you have more than 10-20 spoke servers, or if resource limitations require the
| environment to be partitioned, you can configure multiple hub servers and connect
| a subset of the spoke servers to each hub server.
| Restrictions:
| v A single server cannot be both a hub server and a spoke server.
| v Each spoke server can be assigned to only one hub server.
| v Each hub server requires a separate instance of the Operations Center, each of
| which has a separate web address.
| Tip: In the table on the TSM Servers page, a server might have a status of
| Unmonitored. An unmonitored server is a server that an administrator defined
| to the hub server by using the DEFINE SERVER command, but which is not yet
| configured as a spoke server.
| 2. Complete one of the following steps:
| v Click the server to highlight it, and from the table menu bar, click Monitor
| Spoke.
| v If the server that you want to add is not shown in the table, click
| Connect Spoke in the table menu bar.
| 3. Provide the necessary information, and complete the steps in the spoke
| configuration wizard.
| Note: If the event-record retention period of the server is less than 14 days, the
| value automatically increases to 14 days when you configure the server as a
| spoke server.
|
| You are not required to complete this procedure to change the following settings:
| v The frequency at which status data is refreshed
| v The duration for which alerts remain active, inactive, or closed
| v The conditions for which clients are shown as being at risk
| To change those settings, use the Settings page in the Operations Center.
| To restart the initial configuration wizard, you must delete a properties file. When
| you delete the file, you delete information about the hub server connection.
| However, any alerting, monitoring, at-risk, or multiserver settings that were
| configured for the hub server are not deleted. These settings are used as the
| default settings in the configuration wizard when the wizard restarts.
| 1. Stop the web server of the Operations Center. For instructions, see “Stopping
| and starting the web server” on page 621.
| 2. On the computer where the Operations Center is installed, go to the following
| directory:
| v AIX and Linux systems: installation_dir/ui/Liberty/usr/servers/
| guiServer
| v Windows systems: installation_dir\ui\Liberty\usr\servers\guiServer
| where installation_dir represents the directory in which the Operations
| Center is installed. For example:
| v AIX and Linux systems: /opt/tivoli/tsm/ui/Liberty/usr/servers/
| guiServer
| v Windows systems: c:\Program Files\Tivoli\TSM\ui\Liberty\usr\servers\
| guiServer
| 3. In the guiServer directory, delete the serverConnection.properties file.
| 4. Start the web server of the Operations Center.
| 5. Open the Operations Center. Start a web browser, and enter the following
| address: https://hostname:secure_port/oc, where hostname represents the
| name of the computer where the Operations Center is installed, and secure_port
| represents the port number that the Operations Center uses for HTTPS
| communication on that computer.
| 6. Use the configuration wizard to reconfigure the Operations Center. Specify a
| new password for the monitoring administrator ID.
| 7. Update the password for the monitoring administrator ID on any spoke servers
| that were previously connected to the hub server. Issue the following command
| from the Tivoli Storage Manager command-line interface:
| UPDATE ADMIN IBM-OC-hub_server_name new_password
| Restriction: Do not change any other settings for this administrator ID. After
| you specify the initial password, it is managed automatically by the Operations
| Center.
| If you must stop and start the web server for the Operations Center, for example,
| to restart the initial configuration wizard, use the following methods:
| From the Services window, stop or start the service Tivoli Storage Manager
| Operations Center.
| Tip: Consider using the new Operations Center interface to monitor your storage
| management environment, complete some administrative tasks, and access the
| Tivoli Storage Manager command-line interface. For additional information, see
| Chapter 18, “Managing servers with the Operations Center,” on page 615.
Basic items (for example, server maintenance, storage devices, and so on) are listed
in the navigation tree on the Tivoli Integrated Portal. When you click on an item, a
work page containing a portlet (for example, the Servers portlet) is displayed in a
work area. You use portlets to perform individual tasks, such as creating storage
pools.
When you click an item in the navigation tree, a new portlet populates the work
page, taking the place of the most recent portlet. To open multiple portlets, select
Open Page in New Tab from the Select Action menu. A tab is created with the
same portlet content as the original tab. To navigate among open items or to close
a specific page, use the tabs in the page bar.
Many portlets contain tables. The tables display objects like servers, policy
domains, or reports. To work with any table object, complete the following actions:
1. Click its radio button or check box in the Select column.
2. Click Select Action to display the table action list.
3. Select the action that you would like performed.
For some table objects, you can also click the object name to open a portlet or work
page pertaining to it. In most cases, a properties notebook portlet is opened. This
provides a fast way to work with table objects.
If you want more space in the work area, you can hide the navigation tree.
Do not use the Back, Forward and Refresh buttons in your browser. Doing so can
cause unexpected results. Using your keyboard's Enter key can also cause
unexpected results. Use the controls in the Administration Center interface instead.
The following task will help familiarize you with Administration Center controls.
Suppose you want to create a new client node and add it to the STANDARD
policy domain associated with a particular server.
1. If you have not already done so, access the Administration Center by entering
one of the following addresses in a supported web browser:
v http://workstation_name:16310/ibm/console
v https://workstation_name:16311/ibm/console
The workstation_name is the network name or IP address of the workstation on
which you installed the Administration Center. The default web administration
port (HTTP) is 16310. The default web administration port (HTTPS) is 16311. To
get started, log on using the Tivoli Integrated Portal user ID and password that
you created during the installation. Save this password in a safe location
because you need it not only to log on but also to uninstall the Administration
Center.
2. Click Tivoli Storage Manager, and then click Policy Domains in the navigation
tree. The Policy Domains work page is displayed with a table that lists the
servers that are accessible from the Administration Center. The table also lists
the policy domains defined for each server:
3. In the Server Name column of the Policy Domains table, click the name of the
server with the STANDARD domain to which you want to add a client node. A
portlet is displayed with a table that lists the policy domains created for that
server:
6. In the client nodes table, click Select Action, and then select Create a Client
Node. The Create Client Node wizard is displayed:
In the following task descriptions, TIP_HOME is the root directory for your Tivoli
Integrated Portal installation and tip_admin and tip_pw are a valid Tivoli Integrated
Portal user ID and password.
The following table shows commands that are supported with some restrictions or
that are supported only by the command line in the Administration Center.
Command Supported only by command line
ACCEPT DATE Yes
AUDIT LICENSES Yes
BEGIN EVENTLOGGING Yes
CANCEL EXPIRATION Yes
CANCEL MOUNT Yes
CANCEL RESTORE Yes
CONVERT ARCHIVE Yes
COPY DOMAIN Yes
COPY MGMTCLASS Yes
COPY POLICYSET Yes
COPY PROFILE Yes
COPY SCHEDULE Yes
COPY SCRIPT Yes
COPY SERVERGROUP Yes
DEFINE EVENTSERVER Yes
DEFINE STGPOOL Supported in the user interface except for the
RECLAMATIONTYPE parameter, which is needed only
for EMC Centera devices.
DELETE DATAMOVER Yes
DELETE DISK Yes
DELETE EVENT Yes
DELETE EVENTSERVER Yes
DELETE SUBSCRIBER Yes
DISABLE EVENTS Yes
DISMOUNT DEVICE Yes
DISPLAY OBJNAME Yes
ENABLE EVENTS Yes
Event logging commands (BEGIN EVENTLOGGING, END EVENTLOGGING,
ENABLE EVENTS, DISABLE EVENTS)
Yes. Some SNMP options can be viewed in the user interface, in the
properties notebook of a server.
MOVE GRPMEMBER Yes
QUERY AUDITOCCUPANCY Yes
QUERY ENABLED Yes
QUERY EVENTRULES Yes
QUERY EVENTSERVER Yes
QUERY LICENSE Yes
QUERY NASBACKUP Yes
QUERY RESTORE Yes
QUERY SSLKEYRINGPW Yes
QUERY SYSTEM Yes
QUERY TAPEALERTMSG Yes
For more information about backup operations, see the Backup-Archive Client
Installation and User's Guide.
In the following task description, TIP_HOME is the root directory for your Tivoli
Integrated Portal installation.
Tasks:
“Licensing IBM Tivoli Storage Manager”
“Starting the Tivoli Storage Manager server” on page 643
“Moving the Tivoli Storage Manager server to another system” on page 648
“Date and time on the server” on page 649
“Managing server processes” on page 650
“Preemption of client or server operations” on page 652
“Setting the server name” on page 653
“Adding or updating server options” on page 655
“Getting help on commands and error messages” on page 657
For current information about supported clients and devices, visit the IBM Tivoli
Storage Manager home page at http://www.ibm.com/support/entry/portal/
Overview/Software/Tivoli/Tivoli_Storage_Manager.
The base IBM Tivoli Storage Manager feature includes the following support:
To register a license, you must issue the REGISTER LICENSE command. The
command registers new licenses for server components, including Tivoli Storage
Manager (base), Tivoli Storage Manager Extended Edition, and System Storage
Archive Manager. You must specify the name of the enrollment certificate file
containing the license to be registered when you issue the REGISTER LICENSE
command. To unregister licenses, erase the NODELOCK file found in the server
instance directory and reregister the licenses.
The file specification can contain a wildcard character (*). The following are
possible certificate file names:
tsmbasic.lic
Registers IBM Tivoli Storage Manager base edition.
tsmee.lic
Registers IBM Tivoli Storage Manager Extended Edition. This includes the
disaster recovery manager, large libraries, and NDMP.
dataret.lic
Registers the System Storage Archive Manager. This is required to enable
Data Retention Protection and Expiration and Deletion Suspension
(Deletion Hold).
*.lic Registers all IBM Tivoli Storage Manager licenses for server components.
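For example, to register the Extended Edition license by using the certificate file
listed above, you might enter:
register license file=tsmee.lic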
Notes:
v You cannot register licenses for components that are licensed on the basis of
processors. For example, Tivoli Storage Manager for Mail, Tivoli Storage
Manager for Databases, Tivoli Storage Manager for Enterprise Resource
Planning, Tivoli Storage Manager for Hardware, and Tivoli Storage Manager for
Space Management.
Monitoring licenses
When license terms change (for example, a new license is specified for the server),
the server conducts an audit to determine if the current server configuration
conforms to the license terms. The server also periodically audits compliance with
license terms. The results of an audit are used to check and enforce license terms.
If 30 days have elapsed since the previous license audit, the administrator cannot
cancel the audit. If an IBM Tivoli Storage Manager system exceeds the terms of its
license agreement, one of the following occurs:
v The server issues a warning message indicating that it is not in compliance with
the licensing terms.
v If you are running in Try and Buy mode, operations fail because the server is not
licensed for specific features.
You must contact your IBM Tivoli Storage Manager account representative to
modify your agreement.
Note: During a license audit, the server calculates, by node, the amount of
backup, archive, and space management storage in use. This calculation
can take a great deal of CPU time and can stall other server activity. Use
the AUDITSTORAGE server option to specify that storage is not to be
calculated as part of a license audit.
Displaying license information
Use the QUERY LICENSE command to display details of your current
licenses and determine licensing compliance.
Scheduling automatic license audits
Use the SET LICENSEAUDITPERIOD command to specify the number of
days between automatic audits.
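For example, to display license details and to schedule automatic license audits
every 30 days, you might issue commands similar to the following (the audit period
shown is illustrative):
query license
set licenseauditperiod 30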
Important: The PVU calculations that are provided by Tivoli Storage Manager are
considered estimates and are not legally binding. The PVU information reported by
Tivoli Storage Manager is not considered an acceptable substitute for the IBM
License Metric Tool.
To view the metrics that are used to calculate processor value units (PVUs), issue
the QUERY PVUESTIMATE command, or issue the following SQL command:
select * from pvuestimate_details
Device classification
For purposes of PVU calculation, you can classify devices, such as workstations
and servers, as client nodes, server nodes, or other. By default, devices are
classified as client or server:
Client Backup-archive clients that run on Microsoft Windows 7, Microsoft
Windows XP Professional, and Apple systems are classified as client
devices.
Server Backup-archive clients that run on all platforms except for Microsoft
Windows 7, Microsoft Windows XP Professional, and Apple systems are
classified as server devices. All other node types are also classified as
server devices. The server on which Tivoli Storage Manager is running is
classified as a server device.
You can change the node classification to reflect how the device is used in the
system. For example, if a node is classified as a server, but functions as a client,
you can reclassify it as a client. If a node is not used in the system, you can
reclassify it as other.
When you assign a classification, consider the services that are associated with the
device. For example, a Microsoft Windows XP Professional notebook might be
classified as a client device or as a server device, depending on the services that it provides.
In a Tivoli Storage Manager system, you can assign multiple client node names to
the same physical workstation. For example, a clustering solution can have several
node names that are defined in the Tivoli Storage Manager server environment to
provide protection if a failover occurs. Redundant node names, or node names that
manage data for physical workstations that no longer exist, should not be counted
for licensing purposes. In this case, you might classify the node as other by using
the UPDATE NODE command.
Limitations
The PVU calculations are estimates because the software cannot determine all of
the factors that are required for a final number. The following factors affect the
accuracy of the calculations:
v PVU estimates are provided only for Tivoli Storage Manager V6.3 server devices
that have established a connection with the Tivoli Storage Manager server since
the installation of or upgrade to Tivoli Storage Manager V6.3.
v The default classification of nodes is based on assumptions, as described in
“Device classification” on page 635.
v The PVU estimate might not reflect the actual number of processors or processor
cores in use.
v The PVU estimate might not reflect cluster configurations.
v The PVU estimate might not reflect virtualization, including VMware and AIX
LPAR and WPAR.
v Common Inventory Technology might not be able to identify some processors,
and some processors might not have corresponding entries in the PVU table.
To calculate the total PVUs, sum the PVUs for all nodes.
Related information
Table 61. Information about PVUs and licensing
IBM PVU table: ftp://public.dhe.ibm.com/software/tivoli_support/misc/CandO/PVUTable/
PVU calculator: https://www.ibm.com/software/howtobuy/passportadvantage/valueunitcalculator/vucalc.wss
Before you begin, review the information about how PVUs are estimated and what
the limitations are. For more information, see “Role of processor value units in
assessing licensing requirements” on page 634. Tivoli Storage Manager offers
several options for viewing PVU information. Select the option that best meets
your needs. To export the PVU estimates to a spreadsheet, use the SELECT * FROM
PVUESTIMATE_DETAILS command or export the data from the Administration Center.
Important: The PVU calculations that are provided by Tivoli Storage Manager are
considered estimates and are not legally binding.
5. To obtain a more accurate PVU estimate, you might want to change the
classifications of nodes. To change node classifications, issue the UPDATE NODE
command or update the role in the node notebook of the Administration
Center. For more information about the UPDATE NODE command, see the Tivoli
Storage Manager Administrator's Reference.
6. To calculate the PVUs for a node, use the following formula: PVUs = number of
processors on the node * processor type (core count) * pvu value. To
calculate the total PVUs, sum the PVUs for all nodes. For more information
about the PVU estimation formula, see Formula for PVU estimation.
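For example, under this formula, a hypothetical node with 2 processors of a 4-core
processor type and a PVU value of 70 per core would be estimated at
2 * 4 * 70 = 560 PVUs. These values are illustrative only.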
7. After you generate a PVU report, additional analysis might include removing
redundancies, deleting obsolete information from the report, and accounting for
known systems that have not logged in to and connected to the server.
Tip: If you cannot obtain PVU information from a client node that is running
on a Linux operating system, ensure that Common Inventory Technology is
installed on that client node. After you install Common Inventory Technology,
obtain a new PVU estimate.
Tip: You can copy the files to any location on the host operating system, but
ensure that all files are copied to the same directory.
5. Ensure that guest virtual machines are running. This step is necessary to ensure
that the guest virtual machines are detected during the hardware scan.
6. To collect PVU information, issue the following command:
retrieve -v
If you restart the host machine or change the configuration, run the retrieve
command again to ensure that current information is retrieved.
Tip: When the IBM Tivoli Storage Manager for Virtual Environments license file is
installed on a VMware vStorage backup server, the platform string that is stored
on the Tivoli Storage Manager server is set to TDP VMware for any node name
that is used on the server. The reason is that the server is licensed for Tivoli
Storage Manager for Virtual Environments. The TDP VMware platform string can
be used for PVU calculations. If a node is used to back up the server with standard
backup-archive client functions, such as file-level and image backup, interpret the
TDP VMware platform string as a backup-archive client for PVU calculations.
Working with the IBM Tivoli Storage Manager Server and Active
Directory
A directory service provides a place to store information about network resources
and makes the information available to administrators and users. Tivoli Storage
Manager uses Active Directory to publish information about Tivoli Storage
Manager servers.
The Tivoli Storage Manager backup-archive client Setup wizard includes the
capability to browse server information in the Active Directory. The client can use
the information to determine which server to connect to and what communication
protocol to use.
Refer to the online help in the Active Directory Configuration wizard for more
information. You can also refer to the online help available from the Windows
Server Start menu.
For more information about the Active Directory schema, search for Active
Directory schema in the Windows online help.
The Windows Administration Tools are available on the Windows Server compact
disc (CD).
1. Load the CD into the Windows computer CD drive.
2. Open the I386 folder.
3. Double click the Adminpak.msi file.
4. Follow the instructions from the setup wizard.
Complete the following steps on the IBM Tivoli Storage Manager server:
1. Double-click the IBM Tivoli Storage Manager Console icon on the desktop.
2. Expand the tree until the IBM Tivoli Storage Manager server you want to work
with is displayed. Expand the server and click Wizards. The Wizards list
appears in the right pane.
3. Select the Active Directory Configuration wizard and click Start.
4. To start the wizard, click on Start, then Next.
5. Click on Detect, then click Next.
6. No entries are listed at this time, but the schema has been updated. Click
Cancel.
If you want to disable the permissions to extend the schema, do the following:
1. Return to the schema snap-in
2. Right-click Active Directory Schema, then click Permissions....
3. Select your account name, and uncheck the “Full Control” check box. Click OK.
Note: If you do not see any entries, you must first initialize the server. You can
use the Server Initialization Wizard in the IBM Tivoli Storage Manager Console.
6. Click the Active Directory tab. When the Active Directory options appear,
check Register with the Active Directory on Tivoli Storage Manager server
start-up.
The next time the IBM Tivoli Storage Manager server starts, it will define itself to
Active Directory and add information that includes the list of registered nodes and
communication protocol information. You can verify this information at any time
by using the Active Directory Configuration wizard in the IBM Tivoli Storage
Manager Console.
IBM Tivoli Storage Manager backup-archive clients in the domain can select an
IBM Tivoli Storage Manager server by clicking the browse button on the protocol
page of the Backup-Archive Client Setup Wizard. The wizard lists the IBM Tivoli
Storage Manager servers that the node is already registered with. It also lists the
Tivoli Storage Manager servers that support the selected protocol. When the client
selects a server and the wizard selections are complete, the wizard includes the
corresponding communication protocol information in the client options file.
The following events occur when you start or restart the IBM Tivoli Storage
Manager server:
v The server invokes the communication methods specified in the server options
file.
Windows requires that all applications be closed before you log off. As a
production server, Tivoli Storage Manager must be available to clients 24 hours a
day. At many sites, it is a security exposure to leave an administrator ID logged on
at an unattended computer. The solution is to run the server as a Windows service.
You can start the server as a console application during configuration, or when you
use it in a test environment. When starting the server as a console application,
Tivoli Storage Manager provides a special administrator user ID named
SERVER_CONSOLE. All server messages are displayed directly on the screen. The
console can be useful when debugging startup problems.
If you installed a single Tivoli Storage Manager server on a computer and start it
as a console application, you cannot start the server as a service until you have
first stopped the console application. Similarly, if you start the server as a
Windows service, you must stop the server before you can successfully start it as a
console application.
When you run the server as a service, it can be configured to start automatically
upon system reboot. Use the Tivoli Storage Manager Management Console to
change the mode of the service to start automatically or manually.
Tip: If the Tivoli Storage Manager server service is configured to run under the
Local System account, the Local System account must be explicitly granted access
to the Tivoli Storage Manager database. For more information, see “Starting the
Tivoli Storage Manager server as a service” on page 645.
For more information about starting the server, see Taking the first steps after you
install Tivoli Storage Manager.
Here are some examples of operations that require starting the server in
stand-alone mode:
v Verifying the Tivoli Storage Manager server operations after completing a server
upgrade.
v Verifying the Tivoli Storage Manager server operations after performing one of
the following operations:
– Restoring the server database by using the DSMSERV RESTORE DB
command.
– Dumping, reinitializing, and reloading the server database if a catastrophic
error occurs (recovery log corruption, for example), and if the DSMSERV
RESTORE DB command cannot be used.
v Running Tivoli Storage Manager recovery utilities when asked by IBM Customer
Support.
To perform these tasks, you should disable the following server activities:
Note: You can continue to access the server. Any current client activities
complete unless a user logs off or you cancel a client session.
4. You can perform maintenance, reconfiguration, or recovery operations, and
then halt the server.
To restart the server after completing the operations, follow this procedure:
1. To return the server options to their original settings, edit the dsmserv.opt file.
2. Start the server as described in “Starting the Tivoli Storage Manager server” on
page 643.
3. Enable client sessions, administrative sessions, and server-to-server sessions by
issuing the following command:
enable sessions all
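The counterpart command, which you issue before the maintenance to prevent new
sessions from starting, is DISABLE SESSIONS. For example, to disable all new client,
administrative, and server-to-server sessions, you might enter:
disable sessions all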
If the Tivoli Storage Manager server service is configured to run under the Local
System account, the Local System account must be explicitly granted access to the
Tivoli Storage Manager database by using DB2 commands. To grant the Local
System account access to the Tivoli Storage Manager database, complete the
following steps:
db2 grant dbadm with dataaccess with accessctrl on database to user system
Important: When the server service is configured to run under the Local
System account, the database can be accessed by anyone who can log on to the
system. In addition, anyone who can log on to the system can run the Tivoli
Storage Manager server.
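A minimal sketch of that sequence, issued from a DB2 command window and
assuming the default database alias TSMDB1 and a DB2 instance named SERVER1
(adjust these names for your installation), might look like the following:
set DB2INSTANCE=server1
db2 connect to tsmdb1
db2 grant dbadm with dataaccess with accessctrl on database to user system
db2 connect reset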
To start the Tivoli Storage Manager server as a Windows service, complete the
following steps:
1. Double-click the IBM Tivoli Storage Manager Console icon on the desktop.
2. Expand the tree until the Tivoli Storage Manager server that you want to work
with is displayed. Expand the server, and then expand the Reports tree under
the selected server.
3. Select Service Information.
4. Select the server in the right pane.
5. Click Start.
At this time, you can also set up the Tivoli Storage Manager server start mode and
options by completing the following steps:
1. Double-click the IBM Tivoli Storage Manager Console icon on the desktop.
2. Expand the tree until the Tivoli Storage Manager server that you want to work
with is displayed. Expand the server, and then expand the Reports tree under
the selected server.
3. Select Service Information.
4. Select the server in the right pane.
5. Click Properties.
6. Select the Automatic radio button.
7. In the Log on as field, enter the user ID that owns the server DB2 instance and
has permissions for starting the server service. Then, enter and confirm the
password for that user ID.
To view start and stop completion messages that are logged in the Windows
Application log, you can use the Windows Event Viewer in Administrative Tools.
IBM Tivoli Storage Manager displays the following information when the server is
started:
v Product licensing and copyright information
v Processing information about the server options file
v Communication protocol information
v Database and recovery log information
v Storage pool volume information
v Server generation date
v Progress messages and any errors encountered during server initialization
When you halt the server, all processes are abruptly stopped and client sessions are
canceled, even if they are not complete. Any in-progress transactions are rolled
back when the server is restarted. Administrator activity is not possible.
If possible, halt the server only after current administrative and client node
sessions have completed or canceled. To shut down the server without severely
impacting administrative and client node activity with the server, you must:
1. Disable the server to prevent new client node sessions from starting by issuing
the DISABLE SESSIONS command. This command does not cancel sessions
currently in progress or system processes like migration and reclamation.
2. Notify any existing administrative and client node sessions that you plan to
shut down the server. The server does not provide a network notification
facility; you must use external means to notify users.
3. Cancel any existing administrative or client node sessions by issuing the
CANCEL SESSION command and the associated session number. To obtain
session numbers and determine if any sessions are running, use the QUERY
SESSION command. If a session is running, a table appears that shows the
session number on the far left side of the screen.
4. Find out if any server processes are in progress by issuing the QUERY
PROCESS command, and cancel any processes that are running by issuing the
CANCEL PROCESS command with the associated process number.
Note: If the process you want to cancel is currently waiting for a tape volume
to be mounted (for example, a process initiated by EXPORT, IMPORT, or
MOVE DATA commands), the mount request is automatically cancelled. If a
volume associated with the process is currently being mounted by an automated
library, the cancel may not take effect until the mount is complete.
5. Halt the server to shut down all server operations by using the HALT
command.
To stop the IBM Tivoli Storage Manager server from the IBM Tivoli Storage
Manager Console, complete the following steps:
a. Double-click the IBM Tivoli Storage Manager Console icon on the desktop.
b. Expand the tree until the Tivoli Storage Manager server you want to work
with is displayed.
c. Expand Reports.
d. Click on Service Information.
e. Select the server, and click on Stop.
Note:
1. The HALT command can be replicated using the ALIASHALT server option.
The server option allows you to define a term other than HALT that will
perform the same function. The HALT command will still function, however
the server option provides an additional method for issuing the HALT
command.
2. In order for the administrative client to recognize an alias for the HALT
command, the client must be started with the CHECKALIASHALT option
specified. See the Administrator's Reference for more information.
These are the prerequisites to back up the database from one server and restore it
to another server:
v The same operating system must be running on both servers.
v The sequential storage pool that you use to back up the server database must be
accessible from both servers. Only manual and SCSI library types are supported
for the restore operation.
v The restore operation must be done by a Tivoli Storage Manager server at a code
level that is the same as that on the server that was backed up.
Every time the server is started and for each hour thereafter, a date and time check
occurs. An invalid date can be one of the following:
v Earlier than the server installation date and time.
v More than one hour earlier than the last time the date was checked.
v More than 30 days later than the last time the date was checked.
Most processes occur quickly and are run in the foreground, but others that take
longer to complete run as background processes.
Note: To prevent contention for the same tapes, the server does not allow a
reclamation process to start if a DELETE FILESPACE process is active. The server
checks every hour for whether the DELETE FILESPACE process has completed so
that the reclamation process can start. After the DELETE FILESPACE process has
completed, reclamation begins within one hour.
The server assigns each background process an ID number and displays the
process ID when the operation starts. This process ID number is used for tracking
purposes. For example, if you issue an EXPORT NODE command, the server
displays a message similar to the following:
EXPORT NODE started as Process 10
Some of these processes can also be run in the foreground by using the WAIT=YES
parameter when you issue the command from an administrative client. See
Administrator's Reference for details.
If you do not know the process ID, you can display information about all
background processes by entering:
query process
The following figure shows a server background process report after a DELETE
FILESPACE command was issued. The report displays a process ID number, a
description, and a completion status for each background process.
To find the process number, issue the QUERY PROCESS command. For details,
see “Requesting information about server processes.”
Note:
1. To list open mount requests, issue the QUERY REQUEST command. You can
also query the activity log to determine if a given process has a pending
mount request.
2. A mount request indicates that a volume is needed for the current process.
However, the volume might not be available in the library. If the volume is
not available, the reason might be that you either issued the MOVE MEDIA
command or CHECKOUT LIBVOLUME command, or that you manually
removed the volume from the library.
The following operations can be preempted and are listed in order of priority. The
server selects the lowest priority operation to preempt, for example, reclamation.
1. Move data
2. Migration from disk to sequential media
3. Backup, archive, or HSM migration
4. Migration from sequential media to sequential media
5. Reclamation
To disable preemption, specify NOPREEMPT in the server options file. If you specify
this option, the BACKUP DB command and the export and import commands are the
only operations that can preempt other operations.
The following high priority operations can preempt operations for access to a
specific volume:
v HSM recall
v Node replication
v Restore
v Retrieve
The following operations can be preempted, and are listed in order of priority. The
server preempts the lowest priority operation, for example reclamation.
1. Move data
2. Migration from disk to sequential media
3. Backup, archive, or HSM migration
4. Migration from sequential media to sequential media
5. Reclamation
To disable preemption, specify NOPREEMPT in the server options file. If you specify
this option, the BACKUP DB command and the export and import commands are the
only operations that can preempt other operations.
You can issue the QUERY STATUS command to see the name of the server.
To specify the server name, you must have system privilege. For example, to
change the server name to WELLS_DESIGN_DEPT., enter the following:
set servername wells_design_dept.
You must set unique names on servers that communicate with each other. See
“Setting up communications among servers” on page 726 for details. On a network
where clients connect to multiple servers, it is preferable that all servers have
unique names.
Attention:
v If this is a source server for a virtual volume operation, changing its name can
impact its ability to access and manage the data it has stored on the
corresponding target server.
v To prevent problems related to volume ownership, do not change the name of a
server if it is a library client.
You can change the server name with the SET SERVERNAME command, but doing
so can have unwanted results that vary by platform. Some examples to be aware of
are:
Use the following steps to change a host name when the Tivoli Storage Manager
server is installed.
1. Back up the Tivoli Storage Manager database.
2. Stop the Tivoli Storage Manager server.
3. Change the startup service of the Tivoli Storage Manager server to manual
startup:
a. In the Tivoli Storage Management Console, expand the tree until the server
is displayed. Then, expand the server node and the Reports node under
the selected server.
b. Select Service Information.
c. Select the server in the right pane and right-click it. Then, click Properties.
d. In the Startup type field, select Manual.
4. Issue the following commands from the DB2 command prompt window to
update the DB2SYSTEM registry variable, turn off extended security, and
verify settings:
db2set -g DB2SYSTEM=new_host_name
db2set -g DB2_EXTSECURITY=NO
db2set -all
Tip: The DB2_EXTSECURITY parameter is reset to YES when you restart the
system.
5. Check for the presence of the db2nodes.cfg file. Depending on your version of
Windows, the db2nodes.cfg file may be in one of the following directories:
| v Windows 2008 or later:
| C:\ProgramData\IBM\DB2\DB2TSM1\<DB2 Instance name>
v Other versions of Windows:
C:\Documents and Settings\All Users\Application Data\IBM\DB2\DB2TSM1\
<DB2 Instance name>
Tip: The db2nodes.cfg file is a hidden file. Ensure that you show all files by
going to Windows Explorer and selecting Tools > Folder Options and
specifying to view hidden files.
If the db2nodes.cfg file does not exist on your system, proceed to the next
step. If the file does exist, issue the following command to update the host
name:
db2nchg /n:0 /i:<instance> /h:<new host name>
6. Change the Windows host name, as described in the documentation for the
Windows system that you are using.
7. Restart the server.
8. Update the security settings by running the following command:
You can add or update server options using the SETOPT command, the Edit Options
File editor in the Tivoli Storage Manager Console, or the dsmserv.opt file.
For information about editing the server options file, refer to the Administrator's
Reference.
You can update existing server options by issuing the SETOPT command. For
example, to update the existing server option value for MAXSESSIONS to 20,
enter:
setopt maxsessions 20
The contents of the volume history file are created by using the volume history
table in the server database. When opening a volume, the server might check the
table to determine whether the volume is already used. If the table is large, it can
take a long time to search. Other sessions or processes, such as backups and other
processes that use multiple sequential volumes, can be delayed due to locking.
For example, if you keep backups for seven days, information older than seven
days is not needed. If information about database backup volumes or export
volumes is deleted, the volumes return to scratch status. For scratch volumes of
device type FILE, the files are deleted. When information about storage pools
volumes is deleted, the volumes themselves are not affected.
To delete volume history, issue the DELETE VOLHISTORY command. For example, to
delete volume history that is seven days old or older, issue the following
command:
delete volhistory type=all todate=today-8
When deleting information about volume history, keep in mind the following
guidelines:
v Ensure that you delete volume history entries such as STGNEW, STGDELETE,
and STGREUSE that are older than the oldest database backup that is required
to perform a point-in-time database restore. If necessary, you can delete other
types of entries.
v Existing volume history files are not automatically updated with the DELETE
VOLHISTORY command.
v Do not delete information about sequential volumes until you no longer need
that information. For example, do not delete information about the reuse of
storage volumes unless you backed up the database after the time that was
specified for the delete operation.
v Do not delete the volume history for database backup or export volumes that
are stored in automated libraries unless you want to return the volumes to
scratch status. When the DELETE VOLHISTORY command removes information for
such volumes, the volumes automatically return to scratch status. The volumes
are then available for reuse by the server and the information stored on them
can be overwritten.
v To ensure that you have a backup from which to recover, you cannot remove the
most current database snapshot entry by deleting volume history. Even if a more
current, standard database backup exists, the latest database snapshot is not
deleted.
v To display volume history, issue the QUERY VOLHISTORY command. For example,
to display volume history up to yesterday, issue the following command:
query volhistory enddate=today-1
DRM: DRM automatically expires database backup series and deletes the volume history
entries.
You can issue the HELP command with no operands to display a menu of help
selections. You also can issue the HELP command with operands that specify help
menu numbers, commands, or message numbers.
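For example, to display the help menu, or to display help for a specific command,
you might enter:
help
help define schedule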
Tivoli Storage Manager includes a central scheduling component that allows the
automatic processing of administrative commands during a specific time period
when the schedule is activated. Schedules that are started by the scheduler can run
in parallel. You can process scheduled commands sequentially by using scripts that
contain a sequence of commands with WAIT=YES. You can also use an external
scheduler to invoke the administrative client to start one or more administrative
commands.
Each scheduled administrative command is called an event. The server tracks and
records each scheduled event in the database. You can delete event records as
needed to recover database space.
Concepts:
“Automating a basic administrative command schedule” on page 660
“Tailoring schedules” on page 661
“Copying schedules” on page 664
“Deleting schedules” on page 664
Notes:
1. Scheduled administrative command output is directed to the activity log. This
output cannot be redirected. For information about the length of time activity
log information is retained in the database, see “Using the Tivoli Storage
Manager activity log” on page 829.
2. You cannot schedule MACRO or QUERY ACTLOG commands.
To later update or tailor your schedules, see “Tailoring schedules” on page 661.
Include the following parameters when you define a schedule with the DEFINE
SCHEDULE command:
v Specify the administrative command to be issued (CMD= ).
v Specify whether the schedule is activated (ACTIVE= ).
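For example, a schedule like the BACKUP_ARCHIVEPOOL schedule that is queried
below might be defined with a command similar to the following (the storage pool
names, start time, and period are illustrative only):
define schedule backup_archivepool type=administrative
cmd="backup stgpool archivepool recoverypool" active=yes
starttime=20:00 period=1 perunits=days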
The following figure shows an example of a report that is displayed after you
enter:
query schedule backup_archivepool type=administrative
Note: The asterisk (*) in the first column specifies whether the corresponding
schedule has expired. If there is an asterisk in this column, the schedule has
expired.
You can check when the schedule is projected to run and whether it ran
successfully by using the QUERY EVENT command. For information about
querying events, see “Querying events” on page 665.
Tailoring schedules
To control your schedules more precisely, specify values for the schedule
parameters instead of accepting the default settings when you define or update
schedules.
You can specify the following values when you issue the DEFINE SCHEDULE or
UPDATE SCHEDULE command:
Schedule name
All schedules must have a unique name, which can be up to 30 characters.
Schedule style
You can specify either classic or enhanced scheduling. With classic
scheduling, you can define the interval between the startup windows for a
schedule. With enhanced scheduling, you can choose the days of the week,
days of the month, weeks of the month, and months the startup window
can begin on.
Initial start date, initial start time, and start day
You can specify a past date, the current date, or a future date for the initial
start date for a schedule with the STARTDATE parameter.
You can specify a start time, such as 6 p.m. with the STARTTIME parameter.
Copying schedules
You can create a new schedule by copying an existing administrative schedule.
When you copy a schedule, Tivoli Storage Manager copies the following
information:
v A description of the schedule
v All parameter values from the original schedule
You can then update the new schedule to meet your needs.
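For example, to copy the administrative schedule BACKUP_ARCHIVEPOOL to a new
schedule named BACKUP_ARCHIVEPOOL2 (both names are illustrative), you might
enter:
copy schedule backup_archivepool backup_archivepool2 type=administrative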
Deleting schedules
To delete the administrative schedule ENGBKUP, enter:
delete schedule engbkup type=administrative
All scheduled events, including their status, are tracked by the server. An event
record is created in the server database whenever a scheduled command is started
or missed.
To minimize the processing time when querying events, minimize the time range.
To query an event for an administrative command schedule, you must specify the
TYPE=ADMINISTRATIVE parameter. Figure 79 shows an example of the results of
the following command:
query event * type=administrative
If you issue a query for events, past events may display even if the event records
have been deleted. The events displayed with a status of Uncertain indicate that
complete information is not available because the event records have been deleted.
To determine if event records have been deleted, check the message that is issued
after the DELETE EVENT command is processed.
Event records are automatically removed from the database after both of the
following conditions are met:
v The specified retention period has passed
v The startup window for the event has elapsed
You can change the retention period from the default of 10 days by using the SET
EVENTRETENTION command.
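For example, to retain event records for 15 days instead of the default 10 days, you
might enter:
set eventretention 15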
Use the DELETE EVENT command to manually remove event records. For example,
to delete all event records written prior to 11:59 p.m. on June 30, 2002, enter:
delete event type=administrative 06/30/2002 23:59
The administrator can run the script from the Administration Center, or schedule
the script for processing using the administrative command scheduler on the
server.
You can define a script with the DEFINE SCRIPT command. You can initially
define the first line of the script with this command. For example:
define script qaixc "select node_name from nodes where platform='aix'"
desc='Display AIX clients'
To define additional lines, use the UPDATE SCRIPT command. For example, if you
want to add a QUERY SESSION command, enter:
update script qaixc "query session *"
You can also easily define and update scripts using the Administration Center
where you can also use local workstation cut and paste functions.
Note: The Administration Center only supports ASCII characters for input. If you
need to enter characters that are not ASCII, do not use the Administration Center.
Issue the DEFINE SCRIPT and UPDATE SCRIPT commands from the server
console.
You can specify a WAIT parameter with the DEFINE CLIENTACTION command.
This allows the client action to complete before processing the next step in a
command script or macro. To determine where a problem is within a command in
a script, use the ISSUE MESSAGE command.
Restriction: You cannot redirect the output of a command within a Tivoli Storage
Manager script. Instead, run the script and then specify command redirection. For
example, to direct the output of script1 to the c:\temp\test.out directory, run the
script and specify command redirection as in the following example:
run script1 > c:\temp\test.out
For example, to define a script whose command lines are read in from the file
BKUP12.MAC, issue:
define script admin1 file=bkup12.mac
The script is defined as ADMIN1, and the contents of the script have been read in
from the file BKUP12.MAC.
Note: The file must reside on the server and must be readable by the server.
You must schedule the maintenance script to run. The script typically includes
commands to back up, copy, and delete data. You can automate your server
maintenance by creating a maintenance script, and running it when your server is
not in heavy use.
A custom maintenance script can be created using the maintenance script editor or
by converting a predefined maintenance script.
When you click Server Maintenance in the navigation tree, a list of servers is
displayed in the Maintenance Script table with either None, Custom, or
Predefined noted in the Maintenance Script column.
Perform the following steps to create a custom maintenance script using the
maintenance script editor:
1. Select a server.
2. Click Select Action > Create Custom Maintenance Script.
3. Click Select an Action and construct your maintenance script by adding a
command to the script. The following actions are available:
v Back Up Server Database
v Back Up Storage Pool
v Copy Active Data to Active-data Pool
v Create Recovery Plan File
v Insert Comment
v Delete Volume History
v Delete Expired Data
v Migrate Stored Data
v Move Disaster Recovery Media
v Run Script Commands in Parallel
v Run Script Commands Serially
v Reclaim Primary Storage Pool
v Reclaim Copy Storage Pool
To edit your custom script after it is created and saved, click Server Maintenance
in the navigation tree, select the server with the custom script and click Select
Action > Modify Maintenance Script. Your custom maintenance script opens in
the script editor where you can add, remove, or change the order of the
commands.
You can produce a predefined maintenance script using the maintenance script
wizard.
When you click Server Maintenance in the navigation tree, a list of servers is
displayed in the Maintenance Script table with either None, Custom, or
Predefined noted in the Maintenance Script column.
Perform the following steps to create a maintenance script using the maintenance
script wizard:
1. Select a server that requires a maintenance script to be defined (None is
specified in the Maintenance Script column).
2. Click Select Action > Create Maintenance Script.
3. Follow the steps in the wizard.
After completing the steps in the wizard, you can convert your predefined
maintenance script into a custom maintenance script. If you choose to convert your
script into a custom script, select the server and click Select Action > Convert to
Custom Maintenance Script. Your predefined maintenance script is converted and
opened in the maintenance script editor where you can modify the schedule and
the maintenance actions.
Running commands serially in a script ensures that any preceding commands are
complete before proceeding and ensures that any following commands are run
serially. When a script starts, all commands are run serially until a PARALLEL
command is encountered. Multiple commands that run in parallel and access
common resources, such as tape drives, can run serially.
Script return codes remain the same before and after a PARALLEL command is run.
When a SERIAL command is encountered, the script return code is set to the
maximum return code from any previous commands run in parallel.
When using server commands that support the WAIT parameter after a PARALLEL
command, the behavior is as follows:
v If you specify (or use the default) WAIT=NO, a script does not wait for the
completion of the command when a subsequent SERIAL command is
encountered. The return code from that command reflects processing only up to
that point.
In most cases, you can use WAIT=YES on commands that are run in parallel.
The following example illustrates how the PARALLEL command is used to back up,
migrate, and reclaim storage pools.
/*run multiple commands in parallel and wait for
them to complete before proceeding*/
PARALLEL
/*back up four storage pools simultaneously*/
BACKUP STGPOOL PRIMPOOL1 COPYPOOL1 WAIT=YES
BACKUP STGPOOL PRIMPOOL2 COPYPOOL2 WAIT=YES
BACKUP STGPOOL PRIMPOOL3 COPYPOOL3 WAIT=YES
BACKUP STGPOOL PRIMPOOL4 COPYPOOL4 WAIT=YES
/*wait for all previous commands to finish*/
SERIAL
/*after the backups complete, migrate stgpools
simultaneously*/
PARALLEL
MIGRATE STGPOOL PRIMPOOL1 DURATION=90 WAIT=YES
MIGRATE STGPOOL PRIMPOOL2 DURATION=90 WAIT=YES
MIGRATE STGPOOL PRIMPOOL3 DURATION=90 WAIT=YES
MIGRATE STGPOOL PRIMPOOL4 DURATION=90 WAIT=YES
/*wait for all previous commands to finish*/
SERIAL
/*after migration completes, reclaim storage
pools simultaneously*/
PARALLEL
RECLAIM STGPOOL PRIMPOOL1 DURATION=120 WAIT=YES
RECLAIM STGPOOL PRIMPOOL2 DURATION=120 WAIT=YES
RECLAIM STGPOOL PRIMPOOL3 DURATION=120 WAIT=YES
RECLAIM STGPOOL PRIMPOOL4 DURATION=120 WAIT=YES
When you run the script you must specify two values, one for $1 and one for $2.
For example:
run sqlsample node_name aix
The command that is processed when the SQLSAMPLE script is run is:
select node_name from nodes where platform=’aix’
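For reference, the SQLSAMPLE script in this example could have been defined
with two substitution variables, similar to the following sketch:
define script sqlsample "select $1 from nodes where platform='$2'"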
As each command is processed in a script, the return code is saved for possible
evaluation before the next command is processed. The return code can be one of
three severities: OK, WARNING, or ERROR. Refer to Administrator's Reference for a
list of valid return codes and severity levels.
You can use the IF clause at the beginning of a command line to determine how
processing of the script should proceed based on the current return code value. In
the IF clause you specify a return code symbolic value or severity.
The server initially sets the return code at the beginning of the script to RC_OK.
The return code is updated by each processed command. If the current return code
from the processed command is equal to any of the return codes or severities in
the IF clause, the remainder of the line is processed. If the current return code is
not equal to one of the listed values, the line is skipped.
The following script example backs up the BACKUPPOOL storage pool only if
there are no sessions currently accessing the server. The backup proceeds only if a
return code of RC_NOTFOUND is received:
/* Backup storage pools if clients are not accessing the server */
select * from sessions
/* There are no sessions if rc_notfound is received */
if(rc_notfound) backup stg backuppool copypool
The following script example backs up the BACKUPPOOL storage pool if a return
code with a severity of warning is encountered:
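A minimal sketch of such a script, reusing the SELECT query from the previous
example, might be:
/* Back up storage pools if a warning severity is encountered */
select * from sessions
if(warning) backup stg backuppool copypool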
The following example uses the IF clause together with RC_OK to determine if
clients are accessing the server. If a RC_OK return code is received, this indicates
that client sessions are accessing the server. The script proceeds with the exit
statement, and the backup does not start.
/* Back up storage pools if clients are not accessing the server */
select * from sessions
/* There are sessions if rc_ok is received */
if(rc_ok) exit
backup stg backuppool copypool
The GOTO statement is used in conjunction with a label statement. The label
statement is the target of the GOTO statement. The GOTO statement directs script
processing to the line that contains the label statement to resume processing from
that point.
The label statement always has a colon (:) after it; the remainder of the line after
the colon can be blank.
The following example uses the GOTO statement to back up the storage pool only
if there are no sessions currently accessing the server. In this example, the return
code of RC_OK indicates that clients are accessing the server. The GOTO statement
directs processing to the done: label which contains the EXIT statement that ends
the script processing:
/* Back up storage pools if clients are not accessing the server */
select * from sessions
/* There are sessions if rc_ok is received */
if(rc_ok) goto done
backup stg backuppool copypool
done:exit
The following is an example of the QSTATUS script. The script has lines 001, 005,
and 010 as follows:
001 /* This is the QSTATUS script */
005 QUERY STATUS
010 QUERY PROCESS
To append the QUERY SESSION command at the end of the script, issue the
following:
update script qstatus "query session"
The QUERY SESSION command is assigned a command line number of 015 and
the updated script is as follows:
001 /* This is the QSTATUS script */
005 QUERY STATUS
010 QUERY PROCESS
015 QUERY SESSION
You can change an existing command line by specifying the LINE= parameter.
Line number 010 in the QSTATUS script contains a QUERY PROCESS command.
To replace the QUERY PROCESS command with the QUERY STGPOOL command,
specify the LINE= parameter as follows:
update script qstatus "query stgpool" line=10
To add the SET REGISTRATION OPEN command as the new line 007 in the
QSTATUS script, issue the following:
update script qstatus "set registration open" line=7
The QUERY1 command script now contains the same command lines as the
QSTATUS command script.
The various formats you can use to query scripts are as follows:
Format Description
Standard Displays the script name and description. This is the default.
Detailed Displays commands in the script and their line numbers, date of
last update, and update administrator for each command line in the
script.
Lines Displays the name of the script, the line numbers of the commands,
comment lines, and the commands.
Raw Outputs only the commands contained in the script without all
other attributes. You can use this format to direct the script to a file
so that it can be loaded into another server with the DEFINE SCRIPT
command specifying the FILE= parameter.
You can create additional server scripts by querying a script and specifying the
FORMAT=RAW and OUTPUTFILE parameters. You can use the resulting output as
input into another script without having to create a script line by line.
The following is an example of querying the SRTL2 script and directing the output
to newscript.script:
query script srtl2 format=raw outputfile=newscript.script
You can then edit the newscript.script with an editor that is available to you on
your system. To create a new script using the edited output from your query, issue:
define script srtnew file=newscript.script
For example, to delete the 007 command line from the QSTATUS script, issue:
delete script qstatus line=7
Note: There is no Tivoli Storage Manager command that can cancel a script after it
starts. To stop a script, an administrator must halt the server.
You can preview the command lines of a script without actually executing the
commands by using the PREVIEW=YES parameter with the RUN command. If the
script contains substitution variables, the command lines are displayed with the
substituted variables. This is useful for evaluating a script before you run it.
For example, to preview the QAIXC script, enter:
run qaixc preview=yes node_name aix
Using macros
Tivoli Storage Manager supports macros on the administrative client. A macro is a
file that contains one or more administrative client commands. You can only run a
macro from the administrative client in batch or interactive modes. Macros are
stored as a file on the administrative client. Macros are not distributed across
servers and cannot be scheduled on the server.
The name for a macro must follow the naming conventions of the administrative
client running on your operating system. For more information about file naming
conventions, refer to the Administrator's Reference.
In macros that contain several commands, use the COMMIT and ROLLBACK
commands to control command processing within the macro. For more information
about using these commands, see “Command processing in a macro” on page 679.
You can include the MACRO command within a macro file to invoke other macros
up to ten levels deep. A macro invoked from the Tivoli Storage Manager
administrative client command prompt is called a high-level macro. Any macros
invoked from within the high-level macro are called nested macros.
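For illustration, a high-level macro might simply invoke other macros by name
(the macro file names in this sketch are hypothetical):
/* nightly.mac - high-level macro that invokes nested macros */
macro cleanup.mac
macro reports.mac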
The administrative client ignores any blank lines included in your macro.
However, a completely blank line terminates a command that is continued (with a
continuation character).
The following is an example of a macro called REG.MAC that registers and grants
authority to a new administrator:
register admin pease mypasswd -
contact=’david pease, x1234’
grant authority pease -
classes=policy,storage -
domains=domain1,domain2 -
stgpools=stgpool1,stgpool2
This example uses continuation characters in the macro file. For more information
on continuation characters, see “Using continuation characters” on page 678.
After you create a macro file, you can update the information that it contains and
use it again. You can also copy the macro file, make changes to the copy, and then
run the copy. Refer to the Administrator's Reference for more information on how
commands are entered and the general rules for entering administrative
commands.
To write a comment:
v Write a slash and an asterisk (/*) to indicate the beginning of the comment.
v Write the comment.
v Write an asterisk and a slash (*/) to indicate the end of the comment.
You can put a comment on a line by itself, or you can put it on a line that contains
a command or part of a command.
For example, to use a comment to identify the purpose of a macro, write the
following:
/* auth.mac-register new nodes */
Comments cannot be nested and cannot span lines. Every line of a comment must
contain the comment delimiters.
To use a continuation character, enter a dash or a back slash at the end of the line
that you want to continue. With continuation characters, you can do the following:
v Continue a command. For example:
register admin pease mypasswd -
contact="david, ext1234"
v Continue a list of values by entering a dash or a back slash, with no preceding
blank spaces, after the last comma of the list that you enter on the first line.
Then, enter the remaining items in the list on the next line with no preceding
blank spaces. For example:
stgpools=stg1,stg2,stg3,-
stg4,stg5,stg6
v Continue a string of values enclosed in quotation marks by entering the first
part of the string enclosed in quotation marks, followed by a dash or a back
slash at the end of the line. Then, enter the remainder of the string on the next
line enclosed in the same type of quotation marks. For example:
contact="david pease, bldg. 100, room 2b, san jose,"-
"ext. 1234, alternate contact-norm pass,ext 2345"
Tivoli Storage Manager concatenates the two strings with no intervening blanks.
You must use only this method to continue a quoted string of values across more
than one line.
For example, to create a macro named AUTH.MAC to register new nodes, write it
as follows:
/* register new nodes */
register node %1 %2 - /* userid password */
contact=%3 - /* ’name, phone number’ */
domain=%4 /* policy domain */
Then, when you run the macro, you enter the values you want to pass to the
server to process the command.
For example, to register the node named DAVID with a password of DAVIDPW,
with his name and phone number included as contact information, and assign him
to the DOMAIN1 policy domain, enter:
macro auth.mac david davidpw "david pease, x1234" domain1
If your system uses the percent sign as a wildcard character, the administrative
client interprets a pattern-matching expression in a macro where the percent sign is
immediately followed by a numeric digit as a substitution variable.
Running a macro
Use the MACRO command when you want to run a macro. You can enter the
MACRO command in batch or interactive mode.
If the macro does not contain substitution variables (such as the REG.MAC macro
described in the “Writing commands in a macro” on page 677), run the macro by
entering the MACRO command with the name of the macro file. For example:
macro reg.mac
If you enter fewer values than there are substitution variables in the macro, the
administrative client replaces the remaining variables with null strings.
If you want to omit one or more values between values, enter a null string ("") for
each omitted value. For example, if you omit the contact information in the
previous example, you must enter:
macro auth.mac pease mypasswd "" domain1
If an error occurs in any command in the macro or in any nested macro, the server
terminates processing and rolls back any changes caused by all previous
commands.
If you specify the ITEMCOMMIT option when you enter the DSMADMC
command, the server commits each command in a script or a macro individually,
after successfully completing processing for each command. If an error occurs, the
server continues processing and only rolls back changes caused by the failed
command.
You can control precisely when commands are committed with the COMMIT
command. If an error occurs while processing the commands in a macro, the server
terminates processing of the macro and rolls back any uncommitted changes.
Uncommitted changes are commands that have been processed since the last
COMMIT. Make sure that your administrative client session is not running with the
ITEMCOMMIT option if you want to control command processing with the
COMMIT command.
Chapter 21. Automating server operations 679
Note: Commands that start background processes cannot be rolled back. For a list
of commands that can generate background processes, see “Managing server
processes” on page 650.
You can test a macro before implementing it by using the ROLLBACK command.
You can enter the commands (except the COMMIT command) you want to issue in
the macro, and enter ROLLBACK as the last command. Then, you can run the
macro to verify that all the commands process successfully. Any changes to the
database caused by the commands are rolled back by the ROLLBACK command
you have included at the end. Remember to remove the ROLLBACK command
before you make the macro available for actual use. Also, make sure your
administrative client session is not running with the ITEMCOMMIT option if you
want to control command processing with the ROLLBACK command.
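For example, a test version of a macro might end with a ROLLBACK command so
that no changes are committed (the node names and passwords shown are
placeholders):
/* testreg.mac - test node registration, then roll back */
register node testnode1 testpw1 domain=domain1
register node testnode2 testpw2 domain=domain1
rollback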
If you have a series of commands that process successfully via the command line,
but are unsuccessful when issued within a macro, there are probably dependencies
between commands. It is possible that a command issued within a macro cannot
be processed successfully until a previous command that is issued within the same
macro is committed. Either of the following actions allow successful processing of
these commands within a macro:
v Insert a COMMIT command before the command dependent on a previous
command. For example, if COMMAND C is dependent upon COMMAND B,
you would insert a COMMIT command before COMMAND C. An example of
this macro is:
command a
command b
commit
command c
v Start the administrative client session using the ITEMCOMMIT option. This
causes each command within a macro to be committed before the next command
is processed.
The following sections provide detailed concept and task information about the
database and recovery log.
Concepts:
“Database and recovery log overview”
Tasks:
“Estimating database space requirements” on page 689
“Estimating recovery log space requirements” on page 693
“Monitoring the database and recovery log” on page 708
“Increasing the size of the database” on page 709
“Reducing the size of the database” on page 710
“Increasing the size of the active log” on page 712
“Step 4: Running database backups” on page 946
“Restoring the database” on page 970
“Moving the database and recovery log on a server” on page 713
“Adding optional logs after server initialization” on page 718
“Transaction processing” on page 718
Tivoli Storage Manager version 6.3 is installed with the IBM DB2 database
application. Users who are experienced DB2 administrators can choose to perform
advanced SQL queries and use DB2 tools to monitor the database. However, do
not use DB2 tools to change DB2 configuration settings from those settings that are
preset by Tivoli Storage Manager. Do not alter the DB2 environment for Tivoli
Storage Manager in other ways, such as with other products. The Tivoli Storage
Manager Version 6.3 server was built and tested with the data definition language
(DDL) and database configuration that Tivoli Storage Manager deploys.
Database: Overview
The database does not store client data; it points to the locations of the client files
in the storage pools. The Tivoli Storage Manager database contains information
about the Tivoli Storage Manager server. The database also contains information
about the data that is managed by the Tivoli Storage Manager server.
The database cannot be mirrored through Tivoli Storage Manager, but it can be
mirrored by using hardware mirroring, such as Redundant Array of Independent
Disks (RAID) 5.
The database manager manages database volumes, and there is no need to format
them. Some advantages of the database manager are:
Automatic backups
When the server is started for the first time, a full backup begins
Using TCP/IP to communicate with DB2 can greatly extend the number of
concurrent connections. The TCP/IP connection is part of the default configuration.
When the Tivoli Storage Manager V6.3 server is started for the first time, it
inspects the current configuration of the DB2 instance. It then makes any necessary
changes to ensure that both IPC and TCP/IP can be used to communicate with the
database manager. Any changes are made only as needed. For example, if the
TCP/IP node exists and has the correct configuration, it is not changed. If the node
was cataloged but has an incorrect IP address or port, it is deleted and replaced by
a node having the correct configuration.
When cataloging the remote database, the Tivoli Storage Manager server generates
a unique alias name based on the name of the local database. By default, a remote
database alias of TSMAL001 is created to go with the default database name of
TSMDB1.
Tip: Tivoli Storage Manager disables the TCP/IP connections if it cannot find an
alias in the range TSMAL001-TSMAL999 that is not already in use.
By default, the Tivoli Storage Manager server uses IPC to establish connections for
the first two connection pools, with a maximum of 480 connections for each pool.
After the first 960 connections are established, the Tivoli Storage Manager server
uses TCP/IP for any additional connections.
You can use the DBMTCPPORT server option to specify the port on which the TCP/IP
communication driver for the database manager waits for requests for client
sessions. The port number must be reserved for use by the database manager.
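For example, to reserve port 51500 (an arbitrary example value) for the database
manager, the server options file entry might be:
dbmtcpport 51500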
If Tivoli Storage Manager cannot connect to the database by using TCP/IP, it issues
an error message and halts. The administrator must determine the cause of the
problem and correct it before restarting the server. The server verifies that it can
connect by using TCP/IP at startup even if it is configured to initially favor IPC
connections over TCP/IP connections.
Recovery log
The recovery log helps to ensure that a failure (such as a system power outage or
application error) does not leave the database in an inconsistent state. The recovery
log is essential when you restart the Tivoli Storage Manager server or the database, and is
required if you must restore the database.
When you issue a command that makes changes, the changes are committed to the
database when the operation completes. A committed change is permanent and cannot be rolled
back. If a failure occurs, the changes that were made but not committed are rolled
back. Then all committed transactions, which might not have been physically
written to disk, are reapplied and committed again.
During the installation process, you specify the directory location, the size of the
active log, and the location of the archive logs. You can also specify the directory
location of a log mirror if you want the additional protection of mirroring the
active log. The amount of space for the archive logs is not limited, which improves
the capacity of the server for concurrent operations compared to previous versions.
The space that you designate for the recovery log is managed automatically by the
database manager program. Space is used as needed, up to the capacity of the
defined log directories. You do not need to create and format volumes for the
recovery log.
Ensure that the recovery log has enough space. Monitor the space usage for the
recovery log to prevent problems.
Attention: To protect your data, locate the database directories and all the log
directories on separate physical disks.
Related concepts:
“Transaction processing” on page 718
“Active log”
Changes to the database are recorded in the recovery log to maintain a consistent
database image. You can restore the server to the latest time possible, by using the
active and archive log files, which are included in database backups.
To help ensure that the required log information is available for restoring the
database, you can specify that the active log is mirrored to another file system
location. For the best availability, locate the active log mirror on a different
physical device.
Active log
The active log files record transactions that are in progress on the server.
The active log stores all the transactions that have not yet been committed. The
active log always contains the most recent log records. If a failure occurs, the
changes that were made but not committed are rolled back, and all committed
transactions, which might not have been physically written to disk, are reapplied
and committed again.
The location and size of the active log are set during initial configuration of a new
or upgraded server. You can also set these values by specifying the
ACTIVELOGDIRECTORY and the ACTIVELOGSIZE parameters of the DSMSERV FORMAT or
DSMSERV LOADFORMAT utilities. Both the location and size can be changed later. To
change the size of the active log, see “Increasing the size of the active log” on page
712. To change the location of the active log directory, see “Moving only the active
log, archive log, or archive failover log” on page 715.
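As an illustrative sketch only (the directory paths and the log size are example
values, not recommendations), the active log values might be set when a new
server is formatted:
dsmserv format dbdir=d:\tsm\db001 activelogsize=32768 activelogdirectory=e:\tsm\activelog archlogdirectory=f:\tsm\archivelog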
Mirroring the active log can protect the database when a hardware failure occurs
on the device where the active log is stored. Mirroring the active log provides
another level of protection in addition to placing the active log on hardware that
has high-availability features. Creating a log mirror is optional but recommended.
Place the active log directory and the log mirror directory on different physical
devices. If you increase the size of the active log, the log mirror size is increased
automatically.
Mirroring the log can affect performance, because of the doubled I/O activity that
is required to maintain the mirror. The additional space that the log mirror requires
is another factor to consider.
You can create the log mirror during initial configuration of a new or upgraded
server. If you use the DSMSERV LOADFORMAT utility instead of the wizard to configure
the server, specify the MIRRORLOGDIRECTORY parameter. If the log mirror directory is
not created at that time, you can create it later by specifying the
MIRRORLOGDIRECTORY option in the server options file, dsmserv.opt.
Archive log
The archive log contains copies of closed log files that had been in the active log.
The archive log is not needed for normal processing, but it is typically needed for
recovery of the database.
To provide roll-forward recovery of the database to the current point in time, all
logs since the last database backup must be available for the restore operation. The
archive log files are included in database backups and are used for roll-forward
recovery of the database to the current point-in-time. All logs since the last full
database backup must be available to the restore function. These log files are
stored in the archive log. The pruning of the archive log files is based on full
database backups. The archive log files that are included in a database backup are
automatically pruned after a full database backup cycle has been completed.
The archive log is not needed during normal processing, but it is typically needed
for recovery of the database. Archived log files are saved until they are included in
a full database backup. The amount of space for the archive log is not limited.
Archive log files are automatically deleted as part of the full backup processes and
must not be deleted manually. Monitor both the active and archive logs. If the
active log is close to filling, check the archive log. If the archive log is full or close
to full, run one or more full database backups.
If the file systems or drives where the archive log directory and the archive
failover log directory are located become full, the archived logs are stored in the
active log directory. Those archived logs are returned to the archive log directory
when the space problem is resolved, or when a full database backup is run.
Specifying an archive failover log directory can prevent problems that occur if the
archive log runs out of space. Place the archive log directory and the archive
failover log directory on different physical drives.
You can specify the location of the failover log directory during initial
configuration of a new or upgraded server. You can also specify its location with
the ARCHFAILOVERLOGDIRECTORY parameter of the DSMSERV FORMAT or DSMSERV
LOADFORMAT utility. If it is not created through the utilities, it can be created later by
specifying the ARCHFAILOVERLOGDIRECTORY option in the server options file,
dsmserv.opt. See “Adding optional logs after server initialization” on page 718 for
details.
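For example, the server options file might contain entries similar to the following
(the directory paths are examples only):
archfailoverlogdirectory g:\tsm\archfaillog
mirrorlogdirectory h:\tsm\mirrorlog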
For information about the space required for the log, see “Archive failover log
space” on page 706.
The active log files contain information about in-progress transactions. This
information is needed to restart the server and database after a disaster.
Transactions are stored in the log files of the active log, and a transaction can span
multiple log files.
When all transactions that are part of an active log file complete, that log file is
copied from the active log to the archive log. Transactions continue to be written to
the active log files while the completed active log files are copied to the archive
log. If a transaction spans all the active log files, and the files are filled before the
transaction is committed, the Tivoli Storage Manager server halts.
When an active log file is full, and there are no active transactions referring to it,
the file is copied to the archive log directory. An active log file cannot be deleted
until all transactions in the log file are either committed or discontinued.
If the archive log is full and there is no failover archive log, the log files remain in
the active log. If the active log then becomes full and there are in-progress
transactions, the Tivoli Storage Manager server halts. If there is an archive failover
log, it is used only if the archive log fills. It is important to monitor the archive log
directory to ensure that there is space in the active log.
When the database is backed up, the database manager deletes the archive log files
that are no longer needed for future database backups or restores.
The archive log is included in database backups and is used for roll-forward
recovery of the database. The archive log files that are included in a database
backup are automatically pruned after a full database backup cycle has completed.
Therefore, ensure that the archive log has enough space to store the log files for the
database backups.
The user data limit that is displayed when you issue the ulimit -d command is the
soft user data limit. It is not necessary to set the hard user data limit for DB2. The
default soft user data limit is 128 MB. This is equivalent to the value of 262,144
512-byte units as set in the /etc/security/limits file, or 131,072 KB units as
displayed by the ulimit -d command. This setting limits private memory usage to
about one half of what is available in the 256 MB private memory segment
available for a 32-bit process on AIX.
Note: A DB2 server instance cannot make use of the Large Address Space or of
very large address space AIX 32-bit memory models due to shared memory
requirements. On some systems, for example those requiring large amounts of sort
memory for performance, it is best to increase the user data limit to allow DB2 to
allocate more than 128 MB of memory in a single process.
You can set the user data memory limit to "unlimited" (a value of "-1"). This setting
is not recommended for 32-bit DB2 because it allows the data region to overwrite
the stack, which grows downward from the top of the 256 MB private memory
segment. The result would typically be to cause the database to end abnormally. It
is, however, an acceptable setting for 64-bit DB2 because the data region and stack
are allocated in separate areas of the very large address space available to 64-bit
AIX processes.
Disk space requirements for the server database and recovery log
The drives or file systems on which you locate the database and log directories are
important to the proper operation of your IBM Tivoli Storage Manager server.
Placing each database and recovery log directory on a separate disk provides the
best performance and the best disaster protection.
For the optimal database performance, choose the fastest and most reliable disks
that are configured for random access I/O, such as Redundant Array of
Independent Disks (RAID) hardware. The internal disks included by default in
most servers and consumer grade Parallel Advanced Technology Attachment
(PATA) disks and Serial Advanced Technology Attachment (SATA) disks are too
slow.
It is best to use multiple directories for the database, with four to eight directories
for a large Tivoli Storage Manager database. Locate each database directory on a
disk volume that uses separate physical disks from other database directories. The
Tivoli Storage Manager server database I/O workload is spread over all
directories, thus increasing the read and write I/O performance. Having many
small capacity physical disks is better than having a few large capacity physical
disks with the same rotation speed.
Locate the active log, mirror log, and archive log directories also on high-speed,
reliable disks. The failover archive log can be on slower disks, assuming that the
archive log is sufficiently large and that the failover log is used infrequently.
The access pattern for the active log is always sequential. Physical placement on
the disk is important. It is best to isolate the active log from the database and from
the disk storage pools. If they cannot be isolated, then place the active log with
storage pools and not with the database.
Enable read cache for the database and recovery log, and enable write cache if the
disk subsystems support it.
Restriction: You cannot use raw logical volumes for the database. To reuse space
on the disk where raw logical volumes were located for an earlier version of the
server, create file systems on the disk first.
Capacity planning
Capacity planning for Tivoli Storage Manager includes managing resources such as
the database and recovery log. To maximize resources as part of capacity planning,
you must estimate space requirements for the database and the recovery log.
For information about the benefits of deduplication and guidance on how to make
effective use of the Tivoli Storage Manager deduplication feature, see Optimizing
Performance.
Consider using at least 25 GB for the initial database space. Provision file system
space appropriately. A database size of 25 GB is adequate for a test environment or
a library-manager-only environment. For a production server supporting client
workloads, the database size is expected to be larger. If you use random-access
disk (DISK) storage pools, more database and log storage space is needed than for
sequential-access storage pools.
Restriction: The guideline does not include space that is used during data
deduplication.
v 100 - 200 bytes for each cached file, copy storage pool file, active-data pool file,
and deduplicated file.
v Additional space is required for database optimization to support varying
data-access patterns and to support server back-end processing of the data. The
amount of extra space is equal to 50% of the estimate for the total number of
bytes for file objects.
In the following example for a single client, the calculations are based on the
maximum values in the preceding guidelines. The examples do not take into
account that you might use file aggregation. In general, when you aggregate small
files, it reduces the amount of required database space. File aggregation does not
affect space-managed files.
1. Calculate the number of file versions. Add each of the following values to
obtain the number of file versions:
a. Calculate the number of backed-up files. For example, as many as 500,000
client files might be backed up at a time. In this example, storage policies
are set to keep up to three copies of backed up files:
500,000 files * 3 copies = 1,500,000 files
b. Calculate the number of archive files. For example, as many as 100,000
client files might be archived copies.
c. Calculate the number of space-managed files. For example, as many as
200,000 client files might be migrated from client workstations.
Using 1000 bytes per file, the total amount of database space that is required
for the files that belong to the client is 1.8 GB:
(1,500,000 + 100,000 + 200,000) * 1000 = 1.8 GB
2. Calculate the number of cached files, copy storage-pool files, active-data pool
files, and deduplicated files:
a. Calculate the number of cached copies. For example, caching is enabled in a
5 GB disk storage pool. The high migration threshold of the pool is 90%
and the low migration threshold of the pool is 70%. Thus, 20% of the disk
pool, or 1 GB, is occupied by cached files.
If the average file size is about 10 KB, approximately 100,000 files are in
cache at any one time:
100,000 files * 200 bytes = 19 MB
b. Calculate the number of copy storage-pool files. All primary storage pools
are backed up to the copy storage pool:
(1,500,000 + 100,000 + 200,000) * 200 bytes = 343 MB
Tip: In the preceding examples, the results are estimates. The actual size of the
database might differ from the estimate because of factors such as the number of
directories and the length of the path and file names. Periodically monitor your
database and adjust its size as necessary.
During normal operations, the Tivoli Storage Manager server might require
temporary database space. This space is needed for the following reasons:
v To hold the results of sorting or ordering that are not already being kept and
optimized in the database directly. The results are temporarily held in the
database for processing.
v To give administrative access to the database through one of the following
methods:
– A DB2 open database connectivity (ODBC) client
– An Oracle Java database connectivity (JDBC) client
– Structured Query Language (SQL) to the server from an administrative-client
command line
Consider using an extra 50 GB of temporary space for every 500 GB of space for
file objects and optimization. See the guidelines in the following table. In the
example that is used in the preceding step, a total of 1.7 TB of database space is
required for file objects and optimization for 500 clients. Based on that calculation,
200 GB is required for temporary space. The total amount of required database
space is 1.9 TB.
For example, expiration processing can use a large amount of database space. If
there is not enough system memory in the database to store the files identified for
expiration, some of the data is allocated to temporary disk space. During
expiration processing, if a node or file space is selected that is too large to process,
the database manager cannot sort the data.
To run database operations, consider adding more database space for the following
scenarios:
v The database has a small amount of space and the server operation that requires
temporary space uses the remaining free space.
v The file spaces are large, or the file spaces have a policy assigned to them that
creates many file versions.
v The Tivoli Storage Manager server must run with limited memory.
v An out of database space error is displayed when you deploy a Tivoli Storage
Manager V6 server.
Attention: Do not alter the DB2 software that is installed with IBM Tivoli
Monitoring for Tivoli Storage Manager installation packages and fix packs. Do not
install or upgrade to a different version, release, or fix pack of DB2 software
because doing so can damage the database.
The database manager sorts data in a specific sequence, according to the SQL
statement that you issue to request the data. Depending on the workload on the
server, and if there is more data than the database manager can manage, the data
(that is ordered in sequence) is allocated to temporary disk space.
For example, expiration processing can produce a large result set. If there is not
enough system memory on the database to store the result set, some of the data is
allocated to temporary disk space. During expiration processing, if a node or file
space are selected that are too large to process, the database manager does not
have enough memory to sort the data.
To run database operations, consider adding more database space for the following
scenarios:
v The database has a small amount of space and the server operation that requires
temporary space uses the remaining free space.
v The file spaces are large, or the file spaces have a policy assigned to them that
creates many file versions.
v The Tivoli Storage Manager server must run with limited memory. The database
uses the Tivoli Storage Manager server main memory to run database
operations. However, if there is insufficient memory available, the Tivoli Storage
Manager server allocates temporary space on disk to the database. For example,
if 10 GB of memory is available and database operations require 12 GB of memory,
the database uses temporary space.
v An out of database space error is displayed when you deploy a Tivoli Storage
Manager V6 server. Monitor the server activity log for messages related to
database space.
Important: Do not change the DB2 software that is installed with the Tivoli
Storage Manager installation packages and fix packs. Do not install or upgrade to a
different version, release, or fix pack, of DB2 software to avoid damage to the
database.
In Tivoli Storage Manager servers V6.1 and later, the active log can be a maximum
size of 128 GB. The archive log size is limited to the size of the file system that it is
installed on.
Use the following general guidelines when you estimate the size of the active log:
v The suggested starting size for the active log is 16 GB.
v Ensure that the active log is at least large enough for the amount of concurrent
activity that the server typically handles. As a precaution, try to anticipate the
largest amount of work that the server manages at one time. Provision the active
log with extra space that can be used if needed. Consider using 20% of extra
space.
The archive log directory must be large enough to contain the log files that are
generated since the previous full backup. For example, if you perform a full
backup of the database every day, the archive log directory must be large enough
to hold the log files for all the client activity that occurs during 24 hours. To
recover space, the server deletes obsolete archive log files after a full backup of the
database. If the archive log directory becomes full and a directory for archive
failover logs does not exist, log files remain in the active log directory. This
condition can cause the active log directory to fill up and stop the server. When the
server restarts, some of the existing active-log space is released.
After the server is installed, you can monitor archive log utilization and the space
in the archive log directory. If the space in the archive log directory fills up, it can
cause the following problems:
v The server is unable to perform full database backups. Investigate and resolve
this problem.
v Other applications write to the archive log directory, exhausting the space that is
required by the archive log. Do not share archive log space with other
applications including other Tivoli Storage Manager servers. Ensure that each
server has a separate storage location that is owned and managed by that
specific server.
For guidance about the layout and tuning of the active log and archive log, see
Optimizing Performance.
Related tasks:
“Increasing the size of the active log” on page 712
Example: Estimating active and archive log sizes for basic client-store
operations:
Basic client-store operations include backup, archive, and space management. Log
space must be sufficient to handle all store transactions that are in progress at one
time.
To determine the sizes of the active and archive logs for basic client-store
operations, use the following calculation:
number of clients x files stored during each transaction
x log space needed for each file
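For example, using example values that appear elsewhere in this chapter (300
clients, 4096 files stored during each transaction, and 3053 bytes of log space for
each file), the calculation yields approximately 3.5 GB:
300 clients x 4096 files x 3053 bytes = 3.5 GB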
Active log: Suggested size 19.5 GB (1)
3.5 + 16 = 19.5 GB
Archive log: Suggested size 58.5 GB (1)
Because of the requirement to be able to store archive logs across three server
database-backup cycles, multiply the estimate for the active log by 3 to estimate
the total archive log requirement.
3.5 x 3 = 10.5 GB
10.5 + 48 = 58.5 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that does
not use deduplication, 16 GB is the suggested minimum size for an active log. The
suggested minimum size for an archive log in a production environment that does
not use deduplication is 48 GB. If you substitute values from your environment
and the results are larger than 16 GB and 48 GB, use your results to size the active
log and archive log.
If the client option RESOURCEUTILIZATION is set to a value that is greater than the
default, the concurrent workload for the server increases.
To determine the sizes of the active and archive logs when clients use multiple
sessions, use the following calculation:
number of clients x sessions for each client x files stored
during each transaction x log space needed for each file
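As an illustration only (assuming the same 300 clients, 4096 files stored during
each transaction, and 3053 bytes for each file, with each client using three
sessions), the estimate grows to about 10.5 GB:
300 clients x 3 sessions x 4096 files x 3053 bytes = 10.5 GB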
Active log: Suggested sizes 26.5 GB and 51 GB (1)
10.5 + 16 = 26.5 GB
35 + 16 = 51 GB
Archive log: Suggested sizes 79.5 GB and 153 GB (1)
10.5 x 3 = 31.5 GB
31.5 + 48 = 79.5 GB
35 x 3 = 105 GB
105 + 48 = 153 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that does
not use deduplication, 16 GB is the suggested minimum size for an active log. The
suggested minimum size for an archive log in a production environment that does
not use deduplication is 48 GB. If you substitute values from your environment
and the results are larger than 16 GB and 48 GB, use your results to size the active
log and archive log.
Example: Estimating active and archive log sizes for simultaneous write
operations:
If client backup operations use storage pools that are configured for simultaneous
write, the amount of log space that is required for each file increases.
The log space that is required for each file increases by about 200 bytes for each
copy storage pool that is used for a simultaneous write operation. In the example
in the following table, data is stored to two copy storage pools in addition to a
primary storage pool. The estimated log size increases by 400 bytes for each file. If
you use the suggested value of 3053 bytes of log space for each file, the total
number of required bytes is 3453.
Active log: Suggested size 20 GB (1)
4 + 16 = 20 GB
Archive log: Suggested size 60 GB (1)
Because of the requirement to be able to store archive logs across three server
database-backup cycles, multiply the estimate for the active log by 3 to estimate
the archive log requirement:
4 GB x 3 = 12 GB
12 + 48 = 60 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that does
not use deduplication, 16 GB is the suggested minimum size for an active log. The
suggested minimum size for an archive log in a production environment that does
not use deduplication is 48 GB. If you substitute values from your environment
and the results are larger than 16 GB and 48 GB, use your results to size the active
log and archive log.
Example: Estimating active and archive log sizes for basic client store operations
and server operations:
For example, migration of files from the random-access (DISK) storage pool to a
sequential-access disk (FILE) storage pool uses approximately 110 bytes of log
space for each file that is migrated. For example, suppose that you have 300
clients and each one migrates, for example, 100,000 files every night:
300 clients x 100,000 files for each client x 110 bytes = 3.1 GB
Add this value to the estimate for the size of the active log that was calculated for
basic client store operations.
Example: Estimating active and archive log sizes under conditions of extreme
variation:
Problems with running out of active log space can occur if you have many
transactions that complete quickly and some transactions that take much longer to
complete. A typical case occurs when many workstation or file-server backup
sessions are active and a few very large database server-backup sessions are active.
If this situation applies to your environment, you might need to increase the size
of the active log so that the work completes successfully.
The Tivoli Storage Manager server deletes unnecessary files from the archive log
only when a full database backup occurs. Consequently, when you estimate the
space that is required for the archive log, you must also consider the frequency of
full database backups.
For example, if a full database backup occurs once a week, the archive log space
must be able to contain the information in the archive log for a full week.
The difference in archive log size for daily and full database backups is shown in
the example in the following table.
Table 65. Full database backups
Item: Maximum number of client nodes that back up, archive, or migrate files
concurrently at any time
Example value: 300
Description: The number of client nodes that back up, archive, or migrate files
every night.
Item: Files stored during each transaction
Example value: 4096
Description: The default value of the server option TXNGROUPMAX is 4096.
Item: Log space that is required for each file
Example value: 3453 bytes
Description: 3053 bytes for each file plus 200 bytes for each copy storage pool.
Item: Active log: Suggested size
Example value: 20 GB (1)
Description: 4 + 16 = 20 GB
Item: Archive log: Suggested size with a full database backup every day
Example value: 60 GB (1)
Description: Because of the requirement to be able to store archive logs across
three backup cycles, multiply the estimate for the active log by 3 to estimate the
total archive log requirement: 4 GB x 3 = 12 GB; 12 + 48 = 60 GB
Item: Archive log: Suggested size with a full database backup every week
Example value: 132 GB (1)
Description: Because of the requirement to be able to store archive logs across
three server database-backup cycles, multiply the estimate for the active log by 3
to estimate the total archive log requirement. Multiply the result by the number of
days between full database backups: (4 GB x 3) x 7 = 84 GB; 84 + 48 = 132 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that does
not use deduplication, 16 GB is the suggested minimum size for an active log. The
suggested starting size for an archive log in a production environment that does
not use deduplication is 48 GB. If you substitute values from your environment
and the results are larger than 16 GB and 48 GB, use your results to size the active
log and archive log.
Example: Estimating active and archive log sizes for data deduplication
operations:
If you deduplicate data, you must consider its effects on space requirements for
active and archive logs.
The following factors affect requirements for active and archive log space:
The amount of deduplicated data
The effect of data deduplication on the active log and archive log space
depends on the percentage of data that is eligible for deduplication. If the
percentage of data that can be deduplicated is relatively high, more log
space is required.
The size and number of extents
Approximately 1,500 bytes of active log space are required for each extent
Active log: Suggested sizes 66 GB and 79.8 GB (1)
50 + 16 = 66 GB
63.8 + 16 = 79.8 GB
Archive log: Suggested sizes 198 GB and 239.4 GB (1)
50 GB x 3 = 150 GB
150 + 48 = 198 GB
63.8 GB x 3 = 191.4 GB
191.4 + 48 = 239.4 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that uses
deduplication, 32 GB is the suggested minimum size for an active log. The
suggested minimum size for an archive log in a production environment that uses
deduplication is 96 GB. If you substitute values from your environment and the
results are larger than 32 GB and 96 GB, use your results to size the active log and
archive log.
Active log: Suggested sizes 71.6 GB and 109.4 GB (1)
55.6 + 16 = 71.6 GB
93.4 + 16 = 109.4 GB
Archive log: Suggested sizes 214.8 GB and 328.2 GB (1)
The estimated size of the active log multiplied by a factor of 3:
55.6 GB x 3 = 166.8 GB
166.8 + 48 = 214.8 GB
93.4 GB x 3 = 280.2 GB
280.2 + 48 = 328.2 GB
(1) The example values in this table are used only to illustrate how the sizes for
active logs and archive logs are calculated. In a production environment that uses
deduplication, 32 GB is the suggested minimum size for an active log. The
suggested minimum size for an archive log in a production environment that uses
deduplication is 96 GB. If you substitute values from your environment and the
results are larger than 32 GB and 96 GB, use your results to size the active log and
archive log.
Clustering indexes are prone to filling up the index pages, causing index splits and
merges that must also be logged. A number of the tables implemented by the
server have more than one index. A table that has four indexes would require 16
index log records for each row that is moved for the reorganization.
The server monitors characteristics of the database, the active log, and the archive
log to determine if a database backup is needed. For example, during an online
table reorganization, if the file system for the archive log space begins to fill up,
the server triggers a database backup. When a database backup is started, any
online table reorganization in progress is paused so that the database backup can
operate without contending for resources with the reorganization.
Creating a log mirror is a suggested option. If you increase the size of the active
log, the log mirror size is increased automatically. Mirroring the log can affect
performance because of the doubled I/O activity that is required to maintain the
mirror. The additional space that the log mirror requires is another factor to
consider when deciding whether to create a log mirror.
If the mirror log directory becomes full, the server issues error messages to the
activity log and to the db2diag.log. Server activity continues.
Specifying an archive failover log directory is optional, but it can prevent problems
that occur if the archive log runs out of space. If both the archive log directory and
the drive or file system where the archive failover log directory is located become
full, the data remains in the active log directory. This condition can cause the
active log to fill up, which causes the server to halt. If you use an archive failover
log directory, place the archive log directory and the archive failover log directory
on different physical drives.
Important: Maintain adequate space for the archive log directory, and consider
using an archive failover log directory. For example, suppose the drive or file
system where the archive log directory is located becomes full and the archive
failover log directory does not exist or is full. If this situation occurs, the log files
that are ready to be moved to the archive log remain in the active log directory. If
the active log becomes full, the server stops.
By monitoring the usage of the archive failover log, you can determine whether
additional space is needed for the archive log. The goal is to minimize the need to
use the archive failover log by ensuring that the archive log has adequate space.
The locations of the archive log and the archive failover log are set during initial
configuration. If you use the DSMSERV LOADFORMAT utility instead of the wizard to
configure the server, you specify the ARCHLOGDIRECTORY parameter for the archive
log directory. In addition, you specify the ARCHFAILOVERLOGDIRECTORY parameter for
the archive failover log directory. If the archive failover log is not created at initial
configuration, you can create it by specifying the ARCHFAILOVERLOGDIRECTORY option
in the server options file.
Active log
If the amount of available active log space is too low, the following messages are
displayed in the activity log:
ANR4531I: IC_AUTOBACKUP_LOG_USED_SINCE_LAST_BACKUP_TRIGGER
This message is displayed when the active log space exceeds the maximum
specified size. The Tivoli Storage Manager server starts a full database
backup.
To change the maximum log size, halt the server. Open the dsmserv.opt
file, and specify a new value for the ACTIVELOGSIZE option. When you are
finished, restart the server.
ANR0297I: IC_BACKUP_NEEDED_LOG_USED_SINCE_LAST_BACKUP
This message is displayed when the active log space exceeds the maximum
specified size. You must back up the database manually.
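For example, to increase the active log to 32 GB as described for message
ANR4531I, the dsmserv.opt entry might look like the following (the size shown is
only an example):
activelogsize 32768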
Archive log
If the amount of available archive log space is too low, the following message is
displayed in the activity log:
ANR0299I: IC_BACKUP_NEEDED_ARCHLOG_USED
The ratio of used archive-log space to available archive-log space exceeds
the log utilization threshold. The Tivoli Storage Manager server starts a full
automatic database backup.
Database
If the amount of space available for database activities is too low, the following
messages are displayed in the activity log:
ANR2992W: IC_LOG_FILE_SYSTEM_UTILIZATION_WARNING_2
The used database space exceeds the threshold for database space
utilization. To increase the space for the database, use the EXTEND DBSPACE
command or the DSMSERV FORMAT utility with the DBDIR parameter.
ANR1546W: FILESYSTEM_DBPATH_LESS_1GB
The available space in the directory where the server database files are
located is less than 1 GB.
When a Tivoli Storage Manager server is created with the DSMSERV
FORMAT utility or with the configuration wizard, a server database and
recovery log are also created. In addition, files are created to hold database
information used by the database manager. The path specified in this
message indicates the location of the database information used by the
database manager. If space is unavailable in the path, the server can no
longer function.
You must add space to the file system or make space available on the file
system or disk.
You can monitor the database and recovery log space whether the server is online
or offline.
v When the Tivoli Storage Manager server is online, you can issue the QUERY
DBSPACE command to view the total space, used space, and free space for the file
systems or drives where your database is located. To view the same information
when the server is offline, issue the DSMSERV DISPLAY DBSPACE command. The
following example shows the output of this command:
Location: d:\tsm\db001
Total Space (MB): 46,080.00
Used Space (MB): 20,993.12
Free Space (MB): 25,086.88
Location: e:\tsm\db002
Total Space (MB): 46,080.00
Used Space (MB): 20,992.15
Free Space (MB): 25,087.85
Location: f:\tsm\db003
Total Space (MB): 46,080.00
Used Space (MB): 20,993.16
Free Space (MB): 25,086.84
Location: g:\tsm\db004
Total Space (MB): 46,080.00
Used Space (MB): 20,992.51
Free Space (MB): 25,087.49
v To view more detailed information about the database when the server is online,
issue the QUERY DB command. The following example shows the output of this
command if you specify FORMAT=DETAILED:
Database Name: TSMDB1
Total Size of File System (MB): 184,320
Space Used by Database (MB): 83,936
Free Space Available (MB): 100,349
Total Pages: 6,139,995
Usable Pages: 6,139,451
Used Pages: 6,135,323
Free Pages: 4,128
Buffer Pool Hit Ratio: 100.0
Total Buffer Requests: 97,694,823,985
Sort Overflows: 0
Package Cache Hit Ratio: 100.0
Last Database Reorganization: 06/25/2009 01:33:11
Full Device Class Name: LTO1_CLASS
Incrementals Since Last Full: 0
Last Complete Backup Date/Time: 06/06/2009 14:01:30
v When the Tivoli Storage Manager server is online, issue the QUERY LOG
FORMAT=DETAILED command to display the total space, used space, and free space
for the active log, and the locations of all the logs. To display the same
information when the Tivoli Storage Manager server is offline, issue the
DSMSERV DISPLAY LOG command.
v You can view information about the database on the server console and in the
activity log. You can set the level of database information by using the SET
DBREPORTMODE command. Specify that no diagnostic information is displayed
(NONE), that all diagnostic information is displayed (FULL), or that the only
events that are displayed are those that are exceptions and might represent
errors (PARTIAL). The default is PARTIAL.
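For example, to display all diagnostic information about the database, issue
the following command:
set dbreportmode full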
The server can use all the space that is available to the drives or file systems where
the database directories are located. To ensure that database space is always
available, monitor the space in use by the server and the file systems where the
directories are located.
The QUERY DB command displays the number of free pages in the table space and
the free space available to the database. If the number of free pages is low and
there is a lot of free space available, the database allocates additional space.
However, if free space is low, it might not be possible to expand the database.
For example, to add a directory and a drive to the storage space for the database,
issue the following command:
extend dbspace h:\tsmdb005,I:
After a directory is added to a Tivoli Storage Manager server, the directory might
not be used to its full extent. Some Tivoli Storage Manager events can cause the
added database space to be used over time. For example, table reorganizations
or some temporary database transactions, such as long-running select
statements, gradually begin to fill the added database space. The
database space redistribution among all directories can require a few days or
weeks. If the existing database directories are nearly full when the directory is
added, the server might encounter an out-of-space condition, as reported in the
db2diag.log.
If this condition occurs, halt and restart the server. If the restart does not correct
the condition, remove the database and then restore it to the same or new
directories.
Reorganization of table data can be initiated by the Tivoli Storage Manager server
or by DB2. If server-initiated reorganization is enabled, the server analyzes
selected database tables and indexes based on table activity, and determines when
reorganization is required. The database manager runs a reorganization while
server operations continue. If reorganization by DB2 is enabled, DB2 controls the
reorganization process. Reorganization by DB2 is not recommended.
The best time to start a reorganization is when server activity is low and when
access to the database is optimal. Schedule table reorganization for databases on
servers that are not running deduplication. Schedule table and index
reorganization on servers that are running deduplication.
Important: Ensure that the system on which the Tivoli Storage Manager server is
running has sufficient memory and processor resources. To assess how busy the
system is over time, use operating system tools to assess the load on the system.
You can also review the db2diag.log file and the server activity log. If the system
does not have sufficient resources, reorganization processing might be incomplete,
or it might degrade or destabilize the system.
Table reorganization
Index reorganization
| If you set only the REORGBEGINTIME option, reorganization is enabled for an entire
| day. If you do not specify the REORGBEGINTIME option, but you specify a value for
| the REORGDURATION option, the reorganization interval starts at 6:00 a.m. and runs
| for the specified number of hours.
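As an illustration, and assuming that server-initiated reorganization is
enabled, server options of the following form in the dsmserv.opt file limit
reorganization to a six-hour window that begins at 8:00 p.m. The time and
duration values are examples only:
reorgbegintime 20:00
reorgduration 6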
To increase the size of the active log while the server is halted, complete the
following steps:
1. Issue the DSMSERV DISPLAY LOG offline utility to display the size of the active
log.
2. Ensure that the location for the active log has enough space for the increased
log size. If a log mirror exists, its location must also have enough space for the
increased log size.
3. Halt the server.
4. In the dsmserv.opt file, update the ACTIVELOGSIZE option to the new maximum
size of the active log, in megabytes. For example, to change the active log to its
maximum size of 128 GB, enter the following server option:
activelogsize 131072
5. If you plan to use a new active log directory, update the directory name
specified in the ACTIVELOGDIRECTORY server option. The new directory must be
empty and must be accessible to the user ID of the database manager.
6. Restart the server.
| If you have too much active log space, you can reduce the size of the active log by
| completing the following steps:
| 1. Stop the Tivoli Storage Manager server.
| 2. In the dsmserv.opt file, change the ACTIVELOGSIZE option to the new size of the
| active log, in megabytes. For example, to reduce the size of the active log to
| approximately 8 GB, enter the following server option:
| activelogsize 8000
| 3. Restart the server.
| When you reduce the size of the active log, you must restart the Tivoli Storage
| Manager server twice. The first restart changes the DB2 parameters. The second
| restart removes the log files that are no longer required on the disk.
You might want to move the database and logs to take advantage of a larger or
faster disk. You have the following options:
v “Moving both the database and recovery log”
v “Moving only the database” on page 714
v “Moving only the active log, archive log, or archive failover log” on page 715
For information about moving a Tivoli Storage Manager server to another machine,
see “Moving the Tivoli Storage Manager server to another system” on page 648.
To move the database from one location on the server to another location, follow
this procedure:
1. Back up the database by issuing the following command:
backup db type=full devclass=files
2. Halt the server.
3. Create directories for the database. The directories must be accessible to the
user ID of the database manager. For example:
mkdir l:\tsm\db005
mkdir m:\tsm\db006
mkdir n:\tsm\db007
mkdir o:\tsm\db008
4. Create a file that lists the locations of the database directories. This file will be
used if the database must be restored. Enter each location on a separate line.
For example, here are the contents of the dbdirs.txt file:
l:\tsm\db005
m:\tsm\db006
n:\tsm\db007
o:\tsm\db008
5. Remove the database instance by issuing the following command:
dsmserv removedb TSMDB1
6. Issue the DSMSERV RESTORE DB utility to move the database to the new
directories. For example:
dsmserv restore db todate=today on=dbdirs.txt
7. Start the server.
| To specify alternative locations for the database log files, complete the following
| steps:
| 1. To specify the location of subdirectories RstDbLog and failarch, use the
| ARCHFAILOVERLOGDIRECTORY server option. The Tivoli Storage Manager server
| creates the RstDbLog and failarch subdirectories in the directory that is
| specified by the server option.
| Restriction: If you do not specify the location of the subdirectories, the Tivoli
| Storage Manager server automatically creates the two subdirectories under the
| archive log directory.
| If the archive log directory becomes full, it can limit the amount of space that is
| available for archived log files. If you must use the archive log directory, you
| can increase its size to accommodate both the RstDbLog and failarch
| directories.
| 2. Use a file system that is different from the file system that is specified by the
| ACTIVELOGDIRECTORY and ARCHLOGDIRECTORY parameters.
| Tip: If you do not set the ARCHFAILOVERLOGDIRECTORY option, the Tivoli Storage
| Manager server creates the RstDbLog and failarch subdirectories automatically
| in the directory that is specified for the ARCHLOGDIRECTORY parameter on the
| DSMSERV FORMAT or DSMSERV LOADFORMAT command. You must specify the
| ARCHLOGDIRECTORY parameter for these commands.
| 3. For a database restore operation, you can specify the location of the RstDbLog
| subdirectory, but not the failarch subdirectory, by using the RECOVERYDIR
| parameter on the DSMSERV RESTORE DB command. Consider allocating a
| relatively large amount of temporary disk space for the restore operation.
| Because database restore operations occur relatively infrequently, the RstDbLog
| subdirectory can contain many logs from backup volumes that are stored in
| preparation for pending roll-forward-restore processing.
The server also updates the DB2 parameter OVERFLOWLOGPATH, which points to the
RstDbLog subdirectory, and the DB2 parameter FAILARCHPATH, which points to the
failarch subdirectory. For details about these parameters, see the DB2
Information Center at http://pic.dhe.ibm.com/infocenter/db2luw/v9r7.
For example, suppose that you specify archlogfailover as the value of the
ARCHFAILOVERLOGDIRECTORY parameter on the DSMSERV FORMAT command.
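For illustration, a command of the following form specifies that failover
directory. The database, active log, and archive log directory values are
placeholders only:
dsmserv format
dbdir=a:\db001
activelogdirectory=b:\activelog
archlogdirectory=c:\archlog
archfailoverlogdirectory=d:\archlogfailover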
The server creates the subdirectories RstDbLog and failarch in the parent directory
archlogfailover. The server also updates the following DB2 parameters:
OVERFLOWLOGPATH=d:\archlogfailover\RstDbLog
FAILARCHPATH=d:\archlogfailover\failarch
The server also updates the value of the ARCHFAILOVERLOGDIRECTORY option in the
server options file, dsmserv.opt:
ARCHFAILOVERLOGDIRECTORY d:\archlogfailover
For details about these parameters, see the DB2 Information Center at
http://pic.dhe.ibm.com/infocenter/db2luw/v9r7.
For example, suppose that you specify a value of archlog for the ARCHLOGDIRECTORY
parameter in a DSMSERV FORMAT command. You do not specify the
ARCHFAILOVERLOGDIRECTORY parameter:
dsmserv format
dbdir=a:\db001
activelogdirectory=b:\activelog
archlogdirectory=c:\archlog
The Tivoli Storage Manager server creates the subdirectories RstDbLog and
failarch under the archlog parent directory. The server also updates the following
DB2 parameters:
OVERFLOWLOGPATH=c:\archlog\RstDbLog
FAILARCHPATH=c:\archlog\failarch
The server also updates the value of the ARCHLOGDIRECTORY option in the server
options file, dsmserv.opt:
ARCHLOGDIRECTORY c:\archlog
The server also updates the DB2 parameter OVERFLOWLOGPATH, which points to
RstDbLog. For details about this parameter, see the DB2 Information Center at
http://pic.dhe.ibm.com/infocenter/db2luw/v9r7.
For example, for a point-in-time database restore, you can issue the following
command:
dsmserv restore db
todate=5/12/2011
totime=14:45
recoverydir=e:\recovery
The server creates the RstDbLog subdirectory in the parent recovery directory. The
server also updates the OVERFLOWLOGPATH parameter:
OVERFLOWLOGPATH=e:\recovery\RstDbLog
After the database is restored, the RstDbLog subdirectory reverts to its location as
specified by the server option ARCHFAILOVERLOGDIRECTORY or ARCHLOGDIRECTORY in
the server options file, dsmserv.opt.
Transaction processing
A transaction is the unit of work exchanged between the client and server.
The log records for a given transaction are moved into stable storage when the
transaction is committed. The database information that is stored on disk remains
consistent because the server ensures that the recovery log records, which represent
the updates to these database pages, are written to disk.
During restart-recovery, the server uses the active and archive log information to
maintain the consistency of the server by redoing and, if necessary, undoing
ongoing transactions from the time that the server was halted. The transaction is
then committed to the database.
A transaction is committed only after all the log records for that transaction are
written to the recovery log. This requirement ensures that the necessary redo and undo
information is available to replay these transaction changes against the database
information.
If you increase the value of TXNGROUPMAX by a large amount, monitor the effects on
the recovery log. A larger value for the TXNGROUPMAX option can have the following
impact:
v Affect the performance of client backup, archive, restore, and retrieve operations.
v Increase utilization of the recovery log, as well as increase the length of time for
a transaction to commit.
Also consider the number of concurrent sessions to be run. It might be possible to
run with a higher TXNGROUPMAX value with a few clients running. However, if there
are hundreds of clients running concurrently, you might need to reduce the
TXNGROUPMAX to help manage the recovery log usage and support this number of
concurrent clients. If the performance effects are severe, they might affect server
operations. See “Monitoring the database and recovery log” on page 708 for more
information.
The following examples show how the TXNGROUPMAX option can affect performance
throughput for operations to tape and the recovery log.
v The TXNGROUPMAX option is set to 20. The MAXSESSIONS option, which specifies the
maximum number of concurrent client/server sessions, is set to 5. Five
concurrent sessions are processing, and each file in the transaction requires 10
logged database operations. This would be a concurrent load of:
20*10*5=1000
This represents 1000 log records in the recovery log. Each time a transaction
commits the data, the server can free 200 log records.
v The TXNGROUPMAX option is set to 2000. The MAXSESSIONS option is set to 5. Five
concurrent sessions are processing, and each file in the transaction requires 10
logged database operations, resulting in a concurrent load of:
2000*10*5=100,000
This represents 100,000 log records in the recovery log. Each time a transaction
commits the data, the server can free 20,000 log records.
Remember: The recovery log can release the space that is used by the oldest
transactions only as those transactions end. While long-running transactions
remain open, log space usage continues to increase.
You can use several server options to tune server performance and reduce the risk
of running out of recovery log space:
v Use the THROUGHPUTTIMETHRESHOLD and THROUGHPUTDATATHRESHOLD options with
the TXNGROUPMAX option to prevent a slower performing node from holding a
transaction open for extended periods.
v Increase the size of the recovery log when you increase the TXNGROUPMAX setting.
Evaluate the performance and characteristics of each node before increasing the
TXNGROUPMAX setting. Nodes that have only a few larger objects to transfer do not
benefit as much as nodes that have multiple, smaller objects to transfer. For
example, a file server benefits more from a higher TXNGROUPMAX setting than does a
database server that has one or two large objects. Other node operations can
consume the recovery log at a faster rate. Be careful when increasing the
TXNGROUPMAX settings for nodes that often perform high log-usage operations. The
raw or physical performance of the disk drives that are holding the database and
recovery log can become an issue with an increased TXNGROUPMAX setting. The
drives must handle higher transfer rates to handle the increased load on the
recovery log and database.
You can set the TXNGROUPMAX option as a global server option value, or you can set
it for a single node. For optimal performance, specify a lower TXNGROUPMAX value
(between 4 and 512). Select higher values for individual nodes that can benefit
from the increased transaction size.
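For example, you might keep the global value moderate in the dsmserv.opt file
and raise the value only for a node that transfers many small objects. The node
name and the values that are shown are illustrations only:
txngroupmax 256
update node fileserver1 txngroupmax=4096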
Refer to the REGISTER NODE command and the server options in the Administrator's
Reference.
An administrator working at one Tivoli Storage Manager server can work with
Tivoli Storage Manager servers at other locations around the world.
Concepts:
v “Concepts for managing server networks”
v “Enterprise configuration” on page 722
Tasks:
v “Setting up communications among servers” on page 726
v “Setting up communications for enterprise configuration and enterprise event
logging” on page 726
v “Setting up communications for command routing with multiple source servers”
on page 731
v “Completing tasks on multiple servers” on page 757
v “Using virtual volumes to store data on another server” on page 763
To manage a network of servers, you can use the following Tivoli Storage Manager
capabilities:
v Configure and manage multiple servers with enterprise configuration.
Distribute a consistent configuration for Tivoli Storage Manager servers through
a configuration manager to managed servers. By having consistent
configurations, you can simplify the management of a large number of servers
and clients.
v Perform tasks on multiple servers by using command routing, enterprise logon,
and enterprise console.
v Send server and client events to another server for logging.
v Monitor many servers and clients from a single server.
v Store data on another server by using virtual volumes.
In the descriptions for working with a network of servers, when a server sends
data, that server is sometimes referred to as a source server, and when a server
receives data, it is sometimes referred to as a target server. In other words, one
server can be both a source server and a target server, depending on the operation.
For details, see "Licensing IBM Tivoli Storage Manager" on page 631.
Enterprise configuration
The Tivoli Storage Manager enterprise configuration functions make it easier to
consistently set up and manage a network of Tivoli Storage Manager servers. You
can set up configurations on one server and distribute the configurations to other
servers. You can make changes to configurations and have the changes
automatically distributed.
On each server that is to receive the configuration information, identify the server
as a managed server by defining a subscription to one or more profiles owned by the
configuration manager. All the definitions associated with the profiles are then
copied into the managed server's database. Objects that are defined to the
managed server in this way are managed objects, which cannot be changed by the
managed server.
From then on, the managed server gets any changes to the managed objects from
the configuration manager via the profiles. Managed servers receive changes to
configuration information at time intervals set by the servers, or by command.
The figure for this topic shows a configuration manager, with its
administrators, profiles, schedules, and scripts, and the managed servers that
subscribe to the profiles and store the distributed definitions as managed
objects.
Command routing
| Use the command-line interface to route commands to other servers.
The other servers must be defined to the server to which you are connected. You
must also be registered on the other servers as an administrator with the
administrative authority that is required for the command. To make routing
commands easier, you can define a server group that has servers as members.
Commands that you route to a server group are sent to all servers in the group.
For details, see “Setting up server groups” on page 761 and “Routing commands”
on page 758.
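For example, assuming that servers named MUNICH and STRASBOURG are already
defined, commands of the following form define a group, add the servers as
members, and route a command to every server in the group. The group name is an
illustration:
define servergroup branches
define grpmember branches munich,strasbourg
branches: query status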
The following methods are ways in which you can centrally monitor activities:
v Enterprise event logging, in which events are sent from one or more servers
to be logged at an event server.
For a description of the function, see “Enterprise event logging: logging events
to another server” on page 897. For information about communications setup,
see “Setting up communications for enterprise configuration and enterprise
event logging” on page 726.
| v Use the Operations Center to view server status and alerts. See “Monitoring
| operations daily using the Operations Center” on page 817 for more information.
v Allowing designated administrators to log in to any of the servers in the
network with a single user ID and password.
The data can also be a recovery plan file created by using disaster recovery
manager (DRM). The source server is a client of the target server, and the data for
the source server is managed only by the source server. In other words, the source
server controls the expiration and deletion of the files that comprise the virtual
volumes on the target server.
To use virtual volumes to store database and storage pool backups and recovery
plan files, you must have the disaster recovery manager function. For details, see
“Licensing IBM Tivoli Storage Manager” on page 631.
For information about using virtual volumes with DRM, see Chapter 36, “Disaster
recovery manager,” on page 1053.
Here are two scenarios to give you some ideas about how you can use the
functions:
v Setting up and managing Tivoli Storage Manager servers primarily from one
location. For example, an administrator at one location controls and monitors
servers at several locations.
v Setting up a group of Tivoli Storage Manager servers from one location, and
then managing the servers from any of the servers. For example, several
administrators are responsible for maintaining a group of servers. One
administrator defines the configuration information on one server for
distributing to servers in the network. Administrators on the individual servers
in the network manage and monitor the servers.
For example, suppose that you are an administrator who is responsible for Tivoli
Storage Manager servers at your own location, plus servers at branch office
locations. Servers at each location have similar storage resources and client
requirements. You can set up the environment as follows:
v Set up an existing or new Tivoli Storage Manager server as a configuration
manager.
After you complete the setup, you can manage many servers as if there was just
one. You can perform any of the following tasks:
v Have administrators that can manage the group of servers from anywhere in the
network by using the enterprise console, an interface available through a Web
browser.
v Have consistent policies, schedules, and client option sets on all servers.
v Make changes to configurations and have the changes automatically distributed
to all servers. Allow local administrators to monitor and tune their own servers.
v Perform tasks on any server or all servers by using command routing from the
enterprise console.
v Back up the databases of the managed servers on the automated tape library
that is attached to the server that is the configuration manager. You use virtual
volumes to accomplish this.
v Log on to individual servers from the enterprise console without having to
re-enter your password, if your administrator ID and password are the same on
each server.
For example, suppose that you are an administrator responsible for servers located
in different departments on a college campus. The servers have some requirements
in common, but also have many unique client requirements. You can set up the
environment as follows:
v Set up an existing or new Tivoli Storage Manager server as a configuration
manager.
v Set up communications so that commands can be sent from any server to any
other server.
v Define any configuration that you want to distribute by defining policy
domains, schedules, and so on, on the configuration manager. Associate the
configuration information with profiles.
v Have the managed servers subscribe to profiles as needed.
v Activate policies and set up storage pools as needed on the managed servers.
v Set up enterprise monitoring by setting up one server as an event server. The
event server can be the same server as the configuration manager or a different
server.
After setting up in this way, you can manage the servers from any server. You can
do any of the following tasks:
v Use enterprise console to monitor all the servers in your network.
Enterprise-administration planning
To take full advantage of the functions of enterprise administration, you should
decide on the servers you want to include in the enterprise network, the server
from which you want to manage the network, and other important issues.
The examples shown here apply to both enterprise configuration and enterprise
event logging. If you are set up for one, you are set up for the other. However,
be aware that the configuration manager and
event server are not defined simply by setting up communications. You must
identify a server as a configuration manager (SET CONFIGMANAGER command)
or an event server (DEFINE EVENTSERVER command). Furthermore, a
configuration manager and an event server can be the same server or different
servers.
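For example, to identify the server on which you issue the command as a
configuration manager, and to designate an already defined server
(HEADQUARTERS in this illustration) as the event server, you issue commands of
the following form:
set configmanager on
define eventserver headquarters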
Enterprise configuration
Each managed server must be defined to the configuration manager, and
the configuration manager must be defined to each managed server.
Figure 81 on page 728 shows the servers and the commands that are issued on each
server.
Figure 82 on page 729 shows the servers and the commands that are issued on each
server.
Note: Issuing the SET SERVERNAME command can affect scheduled backups
until a password is re-entered. Windows clients use the server name to identify
which passwords belong to which servers. Changing the server name after the
clients are connected forces the clients to re-enter the passwords. On a network
where clients connect to multiple servers, it is recommended that all of the servers
have unique names. See the Administrator's Reference for more details.
Communication security
Security for this communication configuration is enforced through the exchange of
passwords (which are encrypted) and, in the case of enterprise configuration only,
verification keys.
Communication among servers, which is through TCP/IP, requires that the servers
verify server passwords (and verification keys). For example, assume that
HEADQUARTERS begins a session with MUNICH:
1. HEADQUARTERS, the source server, identifies itself by sending its name to
MUNICH.
2. The two servers exchange verification keys (enterprise configuration only).
3. HEADQUARTERS sends its password to MUNICH, which verifies it against
the password stored in its database.
4. If MUNICH verifies the password, it sends its password to HEADQUARTERS,
which, in turn, performs password verification.
Note: You must be registered as an administrator with the same name and
password on the source server and all target servers. The privilege classes do not
need to be the same on all servers. However, to successfully route a command to
another server, an administrator must have the minimum required privilege class
for that command on the server from which the command is being issued.
For command routing in which one server will always be the sender, you would
only define the target servers to the source server. If commands can be routed from
any server to any other server, each server must be defined to all the others.
The example provided shows you how you can set up communications for
administrator HQ on the server HEADQUARTERS who will route commands to
the servers MUNICH and STRASBOURG. Administrator HQ has the password
SECRET and has system privilege class.
The procedure for setting up communications for command routing with one
source server is shown in the following list:
v On HEADQUARTERS: register administrator HQ and specify the server names
and addresses of MUNICH and STRASBOURG:
register admin hq secret
grant authority hq classes=system
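The DEFINE SERVER commands that specify the server names and addresses of
MUNICH and STRASBOURG take a form similar to the following. The addresses and
port numbers that are shown are placeholders:
define server munich hladdress=munich.example.com lladdress=1500
define server strasbourg hladdress=strasbourg.example.com lladdress=1500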
Note: Command routing uses the ID and password of the Administrator. It does
not use the password or server password set in the server definition.
v On MUNICH and STRASBOURG: register administrator HQ with the required
privilege class on each server:
register admin hq secret
grant authority hq classes=system
Note: If your server network is using enterprise configuration, you can automate
the preceding operations. You can distribute the administrator and server lists to
MUNICH and STRASBOURG. In addition, all server definitions and server groups
are distributed by default to a managed server when it first subscribes to any
profile on a configuration manager. Therefore, it receives all the server definitions
that exist on the configuration manager, thus enabling command routing among
the servers.
The examples provided below show you how to set up communications if the
administrator, HQ, can route commands from any of the three servers to any of the
other servers. You can separately define each server to each of the other servers, or
you can “cross define” the servers. In cross definition, defining MUNICH to
HEADQUARTERS also results in automatically defining HEADQUARTERS to
MUNICH.
When setting up communications for command routing, you can define each
server to each of the other servers.
Figure 83 on page 732 shows the servers and the commands issued on each.
When setting up communications for command routing, you can cross-define the
other servers.
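As a sketch of one way to cross-define servers, on MUNICH you enable cross
definition and set the server's own name, password, and addresses. On
HEADQUARTERS, you then define MUNICH with CROSSDEFINE=YES, which also defines
HEADQUARTERS to MUNICH. The passwords, addresses, and port numbers are
placeholders:
set servername munich
set serverpassword secretm
set serverhladdress munich.example.com
set serverlladdress 1500
set crossdefine on
define server munich serverpassword=secretm hladdress=munich.example.com
lladdress=1500 crossdefine=yes
For the cross definition to succeed, HEADQUARTERS must also have its own server
name, password, and high-level and low-level addresses set.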
Note: If your server network is using enterprise configuration, you can automate
the preceding operations. You can distribute the administrator lists and server lists
to MUNICH and STRASBOURG. In addition, all server definitions and server
groups are distributed by default to a managed server when it first subscribes to
any profile on a configuration manager. Therefore, it receives all the server
definitions that exist on the configuration manager, thus enabling command
routing among the servers.
Figure 84 on page 734 shows the servers and the commands issued on each.
You can update a server definition by issuing the UPDATE SERVER command.
v For server-to-server virtual volumes:
– If you update the node name, you must also update the password.
– If you update the password but not the node name, the node name defaults
to the server name specified by the SET SERVERNAME command.
v For enterprise configuration and enterprise event logging: If you update the
server password, it must match the password specified by the SET
SERVERPASSWORD command at the target server.
v For enterprise configuration: When a server is first defined at a managed server,
that definition cannot be replaced by a server definition from a configuration
manager.
You can delete a server definition by issuing the DELETE SERVER command. For
example, to delete the server named NEWYORK, enter the following:
delete server newyork
The deleted server is also deleted from any server groups of which it is a member.
You cannot delete a server if any of the following conditions are true:
v The server is defined as an event server.
You must first issue the DELETE EVENTSERVER command.
v The server is a target server for virtual volumes.
A target server is named in a DEFINE DEVCLASS (DEVTYPE=SERVER)
command. You must first change the server name in the device class or delete
the device class.
v The server is named in a device class definition whose device type is SERVER.
v The server has paths defined to a file drive.
v The server has an open connection to or from another server.
You can find an open connection to a server by issuing the QUERY SESSION
command.
See “Setting up server groups” on page 761 for information about server groups.
Each managed server stores the distributed information as managed objects in its
database. Managed servers receive periodic updates of the configuration
information from the configuration manager, or an administrator can trigger an
update by command.
If you use an LDAP directory server to authenticate passwords, any target servers
must be configured for LDAP passwords. Data that is replicated from a node that
authenticates with an LDAP directory server is inaccessible if the target server is
not properly configured. If your target server is not configured, replicated data
from an LDAP node can still go there. But the target server must be configured to
use LDAP in order for you to access the data.
“Enterprise configuration scenario” gives you an overview of the steps to take for
one possible implementation of enterprise configuration. Sections that follow give
more details on each step. For details on the attributes that are distributed with
these objects, see “Associating configuration information with a profile” on page
741. After you set up server communication as described in “Setting up
communications for enterprise configuration and enterprise event logging” on page
726, you set up the configuration manager and its profiles.
The figure for this scenario shows the HEADQUARTERS server as the configuration
manager for managed servers in London, Munich, New York, Santiago, Delhi, and
Tokyo.
The following sections give you an overview of the steps to take to complete this
setup. For details on each step, see the section referenced.
Figure 86 illustrates the commands that you must issue to set up one Tivoli Storage
Manager server as a configuration manager. The following procedure gives you an
overview of the steps required to set up a server as a configuration manager.
The figure shows the HEADQUARTERS server being set up as the configuration
manager with the SET CONFIGMANAGER ON, DEFINE PROFILE, and DEFINE
PROFASSOCIATION commands.
1. Decide whether to use the existing Tivoli Storage Manager server in the
headquarters office as the configuration manager or to install a new Tivoli
Storage Manager server on a system.
2. Set up the communications among the servers.
3. Identify the server as a configuration manager.
Use the following command:
set configmanager on
This command automatically creates a profile named DEFAULT_PROFILE. The
default profile includes all the server and server group definitions on the
configuration manager. As you define new servers and server groups, they are
also associated with the default profile.
4. Create the configuration to distribute.
The tasks that might be involved include:
v Register administrators and grant authorities to those that you want to be
able to work with all the servers.
v Define policy objects and client schedules
v Define administrative schedules
v Define Tivoli Storage Manager server scripts
v Define client option sets
v Define servers
v Define server groups
Example 1: You need a shorthand way to send commands to different groups
of managed servers. You can define server groups. For example, you can define
a server group named AMERICAS for the servers in the offices in North
America and South America.
Note: You must set up the storage pool itself (and associated device class) on
each managed server, either locally or by using command routing. If a
managed server already has a storage pool associated with the automated
tape library, you can rename the pool to TAPEPOOL.
Example 4: You want to ensure that client data is consistently backed up and
managed on all servers. You want all clients to be able to store three backup
versions of their files. You can do the following:
v Verify or define client schedules in the policy domain so that clients are
backed up on a consistent schedule.
v In the policy domain that you will point to in the profile, update the backup
copy group so that three versions of backups are allowed.
v Define client option sets so that basic settings are consistent for clients as
they are added.
5. Define one or more profiles.
For example, you can define one profile named ALLOFFICES that points to all
the configuration information (policy domain, administrators, scripts, and so
on). You can also define profiles for each type of information, so that you have
one profile that points to policy domains, and another profile that points to
administrators, for example.
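For example (the profile name and the choice of objects are illustrations only),
you might define a single profile and associate all policy domains and all
administrators with it:
define profile alloffices description='Configuration for all offices'
define profassociation alloffices domains=* admins=*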
See “Setting up communications among servers” on page 726 for details. For
more information, see “Creating the default profile on a configuration
manager” on page 740. See “Defining a server group and members of a server
group” on page 761 for details. For details, see “Creating and changing
configuration profiles” on page 740.
Figure 87 on page 739 shows the specific commands needed to set up one Tivoli
Storage Manager server as a managed server. The following procedure gives you
an overview of the steps required to set up a server as a managed server.
A server becomes a managed server when that server first subscribes to a profile
on a configuration manager.
1. Query the server to look for potential conflicts.
Look for definitions of objects on the managed server that have the same name
as those defined on the configuration manager. With some exceptions, these
objects will be overwritten when the managed server first subscribes to the
profile on the configuration manager.
If the managed server is a new server and you have not defined anything, the
only objects you will find are the defaults (for example, the STANDARD policy
domain).
2. Subscribe to one or more profiles.
A managed server can only subscribe to profiles on one configuration manager.
If you receive error messages during the configuration refresh, such as a local
object that could not be replaced, resolve the conflict and refresh the
configuration again. You can either wait for the automatic refresh period to be
reached, or initiate a refresh by issuing the SET CONFIGREFRESH command to set or
reset the refresh interval.
3. If the profile included policy domain information, activate a policy set in the
policy domain, add or move clients to the domain, and associate any required
schedules with the clients.
You may receive warning messages about storage pools that do not exist, but
that are needed for the active policy set. Define any storage pools needed by
the active policy set, or rename existing storage pools.
4. If the profile included administrative schedules, make the schedules active.
Administrative schedules are not active when they are distributed by a
configuration manager. The schedules do not run on the managed server until
you make them active on the managed server. See “Tailoring schedules” on
page 661.
5. Set how often the managed server contacts the configuration manager to
update the configuration information associated with the profiles.
The initial setting for refreshing the configuration information is 60 minutes.
For more information, see the following topics:
v “Associating configuration information with a profile” on page 741
v “Defining storage pools” on page 273
v “Getting information about profiles” on page 748
v “Refreshing configuration information” on page 754
v “Renaming storage pools” on page 429
v “Subscribing to a profile” on page 750
After you define the profile and its associations, a managed server can subscribe to
the profile and obtain the configuration information.
After you define a profile and associate information with the profile, you can
change the information later. While you make changes, you can lock the profiles to
prevent managed servers from refreshing their configuration information. To
distribute the changed information associated with a profile, you can unlock the
profile.
Before you can associate specific configuration information with a profile, the
definitions must exist on the configuration manager. For example, to associate a
policy domain named ENGDOMAIN with a profile, you must have already
defined the ENGDOMAIN policy domain on the configuration manager.
Suppose you want the ALLOFFICES profile to distribute policy information from
the STANDARD and ENGDOMAIN policy domains on the configuration manager.
Enter the following command:
define profassociation alloffices domains=standard,engdomain
You can make the association more dynamic by specifying the special character, *
(asterisk), by itself. When you specify the *, you can associate all existing objects
with a profile without specifically naming them. If you later add more objects of
the same type, the new objects are automatically distributed via the profile. For
example, suppose that you want the ADMINISTRATORS profile to distribute all
administrators registered to the configuration manager. Enter the following
commands on the configuration manager:
define profile administrators
description='Profile to distribute administrator IDs'
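The association of all administrators with this profile is made with a DEFINE
PROFASSOCIATION command. Based on the ADMINS=* specification that is referred to
later in this chapter, the command takes the following form:
define profassociation administrators admins=*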
The administrator with the name SERVER_CONSOLE is never distributed from the
configuration manager to a managed server.
For administrator definitions that have node authority, the configuration manager
only distributes information such as password and contact information. Node
authority for the managed administrator can be controlled on the managed server
using the GRANT AUTHORITY and REVOKE AUTHORITY commands specifying
the CLASS=NODE parameter.
A subscribing managed server may already have a policy domain with the same
name as the domain associated with the profile. The configuration refresh
overwrites the domain defined on the managed server unless client nodes are
already assigned to the domain. Once the domain becomes a managed object on
the managed server, you can associate clients with the managed domain. Future
configuration refreshes can then update the managed domain.
If nodes are assigned to a domain with the same name as a domain being
distributed, the domain is not replaced. This safeguard prevents inadvertent
replacement of policy that could lead to loss of data. To replace an existing policy
domain with a managed domain of the same name, perform the following steps on
the managed server:
1. Copy the domain.
2. Move all clients assigned to the original domain to the copied domain.
3. Trigger a configuration refresh.
4. Activate the appropriate policy set in the new, managed policy domain.
5. Move all clients back to the original domain, which is now managed.
Any servers and server groups that you define later are associated automatically
with the default profile and the configuration manager distributes the definitions at
the next refresh. For a server definition, the following attributes are distributed:
v Communication method
v TCP/IP address (high-level address), Version 4 or Version 6
v Port number (low-level address)
v Server password
v Server URL
v The description
When server definitions are distributed, the attribute for allowing replacement is
always set to YES. You can set other attributes, such as the server's node name, on
the managed server by updating the server definition.
A managed server may already have a server defined with the same name as a
server associated with the profile. The configuration refresh does not overwrite the
local definition unless the managed server allows replacement of that definition.
On a managed server, you allow a server definition to be replaced by updating the
local definition. For example:
update server santiago allowreplace=yes
A configuration refresh does not replace or remove any local schedules that are
active on a managed server. However, a refresh can update an active schedule that
is already managed by a configuration manager.
Changing a profile
You can change a profile and its associated configuration information.
For example, if you want to add a policy domain named FILESERVERS to objects
already associated with the ALLOFFICES profile, enter the following command:
define profassociation alloffices domains=fileservers
You can also delete associated configuration information, which results in removal
of configuration from the managed server. Use the DELETE PROFASSOCIATION
command.
You can change the description of the profile. Enter the following command:
update profile alloffices
description='Configuration for all offices with file servers'
See “Removing configuration information from managed servers” on page 746 for
details.
For example, to lock the ALLOFFICES profile for two hours (120 minutes), enter
the following command:
lock profile alloffices 120
You can let the lock expire after two hours, or unlock the profile with the following
command:
unlock profile alloffices
From the configuration manager, to notify all servers that are subscribers to the
ALLOFFICES profile, enter the following command:
notify subscribers profile=alloffices
The managed servers then refresh their configuration information, even if the time
period for refreshing the configuration has not passed.
See “Refreshing configuration information” on page 754 for how to set this period.
On the configuration manager, you can delete the association of objects with a
profile. For example, you may want to remove some of the administrators that are
associated with the ADMINISTRATORS profile. With an earlier command, you had
included all administrators defined on the configuration manager (by specifying
ADMINS=*). To change the administrators included in the profile you must first
delete the association of all administrators, then associate just the administrators
that you want to include. Do the following:
1. Before you make these changes, you may want to prevent any servers from
refreshing their configuration until you are done. Enter the following
command:
lock profile administrators
2. Now make the change by entering the following commands:
delete profassociation administrators admins=*
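3. Associate only the administrators that you want to include with the profile.
The administrator IDs in this sketch are placeholders:
define profassociation administrators admins=admin1,admin2
4. Unlock the profile:
unlock profile administrators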
When you delete the association of an object with a profile, the configuration
manager no longer distributes that object via the profile. Any managed server
subscribing to the profile deletes the object from its database when it next contacts
the configuration manager to refresh configuration information. However, a
managed server does not delete the following objects:
v An object that is associated with another profile to which the server subscribes.
v A policy domain that has client nodes still assigned to it. To delete the domain,
you must assign the affected client nodes to another policy domain on the
managed server.
v An administrator that currently has a session open with the server.
v An administrator that is the last administrator with system authority on the
managed server.
Also, the managed server does not change the authority of an administrator if
doing so would leave the managed server without any administrators having
the system privilege class.
You can avoid both problems by ensuring that you have locally defined at least
one administrator with system privilege on each managed server.
Deleting profiles
You can delete a profile from a configuration manager. Before deleting a profile,
you should ensure that no managed server still has a subscription to the profile. If
the profile still has some subscribers, delete the subscriptions on each managed
server first.
When you delete subscriptions, consider whether you want the managed objects to
be deleted on the managed server at the same time. For example, to delete the
subscription to profile ALLOFFICES from managed server SANTIAGO without
deleting the managed objects, log on to the SANTIAGO server and enter the
following command:
delete subscription alloffices
Note: You can use command routing to issue the DELETE SUBSCRIPTION
command for all managed servers.
If you try to delete a profile that still has subscriptions, the command fails unless
you force the operation:
delete profile alloffices force=yes
If you do force the operation, managed servers that still subscribe to the deleted
profile will later contact the configuration manager to try to get updates to the
deleted profile. The managed servers will continue to do this until their
subscriptions to the profile are deleted. A message will be issued on the managed
server alerting the administrator of this condition.
See “Deleting subscriptions” on page 753 for more details about deleting
subscriptions on a managed server.
For example, from a configuration manager, you can display information about
profiles defined on that server or on another configuration manager. From a
managed server, you can display information about any profiles on the
configuration manager to which the server subscribes. You can also get profile
information from any other configuration manager defined to the managed server,
even though the managed server does not subscribe to any of the profiles.
You may need to get detailed information about profiles and the objects associated
with them, especially before subscribing to a profile. You can get the names of the
objects associated with a profile by entering the following command:
query profile server=headquarters format=detailed
If the server from which you issue the query is already a managed server
(subscribed to one or more profiles on the configuration manager being queried),
by default the query returns profile information as it is known to the managed
server. Therefore the information is accurate as of the last configuration refresh
done by the managed server. You may want to ensure that you see the latest
version of profiles as they currently exist on the configuration manager. Enter the
following command:
query profile uselocal=no format=detailed
To get more than the names of the objects associated with a profile, you can do one
of the following:
v If command routing is set up between servers, you can route query commands
from the server to the configuration manager. For example, to get details on the
ENGDOMAIN policy domain on the HEADQUARTERS server, enter this
command:
headquarters: query domain engdomain format=detailed
Subscribing to a profile
After an administrator at a configuration manager has created profiles and
associated objects with them, managed servers can subscribe to one or more of the
profiles.
Note:
v Unless otherwise noted, the commands in this section would be run on a
managed server:
v An administrator at the managed server could issue the commands.
v You could log in from the enterprise console and issue them.
v If command routing is set up, you could route them from the server that you are
logged in to.
Before a managed server subscribes to a profile, be aware that if you have defined
any object with the same name and type as an object associated with the profile
that you are subscribing to, those objects will be overwritten. You can check for
such occurrences by querying the profile before subscribing to it.
Note: Although a managed server can subscribe to more than one profile on a
configuration manager, it cannot subscribe to profiles on more than one
configuration manager at a time.
Subscription scenario
The scenario that is documented is a typical one, where a server subscribes to a
profile on a configuration manager, in this case HEADQUARTERS.
You might want to get detailed information on some of the objects by issuing
specific query commands on either your server or the configuration manager.
Note: If any object name matches and you subscribe to a profile containing an
object with the matching name, the object on your server will be replaced, with
the following exceptions:
v A policy domain is not replaced if the domain has client nodes assigned to it.
v An administrator with system authority is not replaced by an administrator
with a lower authority level if the replacement would leave the server
without a system administrator.
v The definition of a server is not replaced unless the server definition on the
managed server allows replacement.
v A server with the same name as a server group is not replaced.
v A locally defined, active administrative schedule is not replaced.
2. Subscribe to the ADMINISTRATORS and ENGINEERING profiles.
After the initial subscription, you do not have to specify the server name on the
DEFINE SUBSCRIPTION commands. If at least one profile subscription already
exists, any additional subscriptions are automatically directed to the same
configuration manager. Issue these commands:
define subscription administrators server=headquarters
define subscription engineering
The object definitions in these profiles are now stored on your database. In
addition to ADMINISTRATORS and ENGINEERING, the server is also
subscribed by default to DEFAULT_PROFILE. This means that all the server
and server group definitions on HEADQUARTERS are now also stored in your
database.
3. Set the time interval for obtaining refreshed configuration information from the
configuration manager.
If you do not perform this step, your server checks for updates to the profiles
at startup and every 60 minutes after that. Set up your server to check
HEADQUARTERS for updates once a day (every 1440 minutes). If there is an
update, HEADQUARTERS sends it to the managed server automatically when
the server checks for updates.
set configrefresh 1440
Note: You can initiate a configuration refresh from a managed server at any time.
To initiate a refresh, simply reissue the SET CONFIGREFRESH with any value
greater than 0. The simplest approach is to use the current setting:
set configrefresh 1440
Querying subscriptions
From time to time you might want to view the profiles to which a server is
subscribed. You might also want to view the last time that the configuration
associated with that profile was successfully refreshed on your server.
The QUERY SUBSCRIPTION command gives you this information. You can name
a specific profile or use a wildcard character to display all or a subset of profiles to
which the server is subscribed. For example, the following command displays
ADMINISTRATORS and any other profiles that begin with the string “ADMIN”:
query subscription admin*
To see what objects the ADMINISTRATORS profile contains, use the following
command:
query profile administrators uselocal=no format=detailed
The field Managing profile shows the profile to which the managed server
subscribes to get the definition of this object.
Deleting subscriptions
If you decide that a server no longer needs to subscribe to a profile, you can delete
the subscription.
When you delete a subscription to a profile, you can choose to discard the objects
that came with the profile or keep them in your database. For example, to request
that your subscription to PROFILEC be deleted and to keep the objects that came
with that profile, issue the following command:
delete subscription profilec discardobjects=no
After the subscription is deleted on the managed server, the managed server issues
a configuration refresh request to inform the configuration manager that the
subscription is deleted. The configuration manager updates its database with the
new information.
When you choose to delete objects when deleting the subscription, the server may
not be able to delete some objects. For example, the server cannot delete a
managed policy domain if the domain still has client nodes registered to it. The
server skips objects it cannot delete, but does not delete the subscription itself. If
you take no action after an unsuccessful subscription deletion, at the next
configuration refresh the configuration manager will again send all the objects
associated with the subscription. To successfully delete the subscription, do one of
the following:
v Fix the reason that the objects were skipped. For example, reassign clients in the
managed policy domain to another policy domain. After handling the skipped
objects, delete the subscription again.
v Delete the subscription again, except this time do not discard the managed
objects. The server can then successfully delete the subscription. However, the
objects that were created because of the subscription remain.
By issuing the SET CONFIGREFRESH command with a value greater than zero, you
cause the managed server to immediately start the refresh process.
At the configuration manager, you can cause managed servers to refresh their
configuration information by notifying the servers. For example, to notify
subscribers to all profiles, enter the following command:
notify subscribers profile=*
The managed servers then start to refresh configuration information to which they
are subscribed through profiles.
The configuration manager sends the objects that it can distribute to the managed
server. The configuration manager skips (does not send) objects that conflict with
local objects. If the configuration manager cannot send all objects that are
associated with the profile, the managed server does not record the configuration
refresh as complete. The objects that the configuration manager successfully sent
are left as local instead of managed objects in the database of the managed server.
The local objects left as a result of an unsuccessful configuration refresh become
managed objects at the next successful configuration refresh of the same profile
subscription.
See “Associating configuration information with a profile” on page 741 for details
on when objects cannot be distributed.
To do this from the configuration manager, you do not simply delete the
association of the object from the profile, because that would cause the object to be
deleted from subscribing managed servers. To ensure the object remains in the
databases of the managed servers as a locally managed object, you can copy the
current profile, make the deletion, and change the subscriptions of the managed
servers to the new profile.
For example, servers are currently subscribed to the ENGINEERING profile. The
ENGDOMAIN policy domain is associated with this profile. You want to return
control of the ENGDOMAIN policy domain to the managed servers. You can do
the following:
1. Copy the ENGINEERING profile to a new profile, ENGINEERING_B:
copy profile engineering engineering_b
2. Delete the association of the ENGDOMAIN policy domain from
ENGINEERING_B:
delete profassociation engineering_b domains=engdomain
3. Use command routing to delete subscriptions to the ENGINEERING profile:
americas,europe,asia: delete subscription engineering
discardobjects=no
4. Delete the ENGINEERING profile:
delete profile engineering
5. Use command routing to define subscriptions to the new ENGINEERING_B
profile:
americas,europe,asia: define subscription engineering_b
To return objects to local control when working on a managed server, you can
delete the subscription to one or more profiles. When you delete a subscription,
you can choose whether to delete the objects associated with the profile. To return
objects to local control, you do not delete the objects. For example, use the
following command on a managed server:
delete subscription engineering discardobjects=no
To ensure passwords stay valid for as long as expected on all servers, set the
password expiration period to the same time on all servers. One way to do this is
to route a SET PASSEXP command from one server to all of the others.
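For example, assuming that servers named AMERICAS, EUROPE, and ASIA are defined
and that you want a 90-day password expiration period on all of them (the server
names and the period are illustrative), you might route the command as follows:
americas,europe,asia: set passexp 90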
Ensure that you have at least one administrator that is defined locally on each
managed server with system authority. This avoids an error on configuration
refresh when all administrators for a server would be removed as a result of a
change to a profile on the configuration manager.
It might appear that the configuration information is more recent on the managed
server than on the configuration manager. This could occur in the following
situations:
v The database on the configuration manager has been restored to an earlier time
and now has configuration information from profiles that appear to be older
than what the managed server has obtained.
v On the configuration manager, an administrator deleted a profile, forcing the
deletion even though one or more managed servers still subscribed to the
profile. The administrator redefined the profile (using the same name) before the
managed server refreshed its configuration information.
If the configuration manager still has a record of the managed server's subscription
to the profile, the configuration manager does not send its profile information at
the next request for refreshed configuration information. The configuration
manager informs the managed server that the profiles are not synchronized. The
managed server then issues a message indicating this condition so that an
administrator can take appropriate action. The administrator can perform the
following steps:
1. If the configuration manager's database has been restored to an earlier point in
time, the administrator may want to query the profile and associated objects on
the managed server and then manually update the configuration manager with
that information.
2. Use the DELETE SUBSCRIPTION command on the managed server to delete
subscriptions to the profile that is not synchronized. If desired, you can also
delete definitions of the associated objects, then define the subscription again.
It is possible that the configuration manager may not have a record of the
managed server's subscription. In this case, no action is necessary. When the
managed server requests a refresh of configuration information, the configuration
manager sends current profile information and the managed server updates its
database with that information.
When you issue the DELETE SUBSCRIPTION command, the managed server
automatically notifies the configuration manager of the deletion by refreshing its
configuration information. As part of the refresh process, the configuration
manager is informed of the profiles to which the managed server subscribes and to
which it does not subscribe. If the configuration manager cannot be contacted
immediately for a refresh, the configuration manager will find out that the
subscription was deleted the next time the managed server refreshes configuration
information.
See “Setting the server name” on page 653 for more information before using the
SET SERVERNAME command.
| You can use the Operations Center to view status and alerts for multiple Tivoli
| Storage Manager servers, to issue commands to those servers, and to access web
| clients.
| You can also use the Administration Center and access all of the Tivoli Storage
| Manager servers and web clients for which you have administrative authority.
For more information, see Chapter 18, “Managing servers with the Operations
Center,” on page 615, and Chapter 19, “Managing servers with the Administration
Center,” on page 623.
Routing commands
Command routing enables an administrator to send commands for processing to
one or more servers at the same time. The output is collected and displayed at the
server that issued the routed commands.
You can route commands to one server, multiple servers, servers defined to a
named group, or a combination of these servers. A routed command cannot be
further routed to other servers; only one level of routing is allowed.
Each server that you identify as the target of a routed command must first be
defined with the DEFINE SERVER command. If a server has not been defined, that
server is skipped and the command routing proceeds to the next server in the
route list.
Tivoli Storage Manager does not run a routed command on the server from which
you issue the command unless you also specify that server. To be able to specify
the server on a routed command, you must define the server just as you did any
other server.
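For example, a sketch of defining a server named ADMIN1 for command routing,
assuming its address, port, and password (the values shown are illustrative):
define server admin1 serverpassword=secretpw hladdress=9.115.3.45 lladdress=1500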
Routed commands run independently on each server to which you send them. The
success or failure of the command on one server does not affect the outcome on
any of the other servers to which the command was sent.
The return codes for command routing can be one of three severities: 0, ERROR, or
WARNING. See Administrator's Reference for a list of valid return codes and
severity levels.
To route a command to a single server, enter the defined server's name, a colon,
and then the command to be processed.
For example, to route a QUERY STGPOOL command to the server that is named
ADMIN1, enter:
admin1: query stgpool
The colon after the server name indicates the end of the routing information. This
is also called the server prefix. Another way to indicate the server routing
information is to use parentheses around the server name, as follows:
(admin1) query stgpool
Note: When writing scripts, you must use the parentheses for server routing
information.
To route a command to more than one server, separate the server names with a
comma. For example, to route a QUERY OCCUPANCY command to three servers named
ADMIN1, GEO2, and TRADE5, enter:
admin1,geo2,trade5: query occupancy
or
(admin1,geo2,trade5) query occupancy
The routed command output of each server is displayed in its entirety at the server
that initiated command routing. In the previous example, output for ADMIN1
would be displayed, followed by the output of GEO2, and then the output of
TRADE5.
Processing of a command on one server does not depend upon completion of the
command processing on any other servers in the route list. For example, if GEO2
server does not successfully complete the command, the TRADE5 server continues
processing the command independently.
A server group is a named group of servers. After you set up the groups, you can
route commands to the groups.
For example, to route a QUERY STGPOOL command to the server group named
WEST_COMPLEX, enter:
west_complex: query stgpool
or
(west_complex) query stgpool
The QUERY STGPOOL command is sent for processing to servers BLD12 and
BLD13 which are members of group WEST_COMPLEX.
To route the command to both the WEST_COMPLEX and NORTH_COMPLEX server
groups, enter:
west_complex,north_complex: query stgpool
or
(west_complex,north_complex) query stgpool
The QUERY STGPOOL command is sent for processing to servers BLD12 and
BLD13 which are members of group WEST_COMPLEX, and servers NE12 and
NW13 which are members of group NORTH_COMPLEX.
See “Setting up server groups” on page 761 for how to set up a server group.
You can route commands to multiple single servers and to server groups at the
same time.
For example, to route the QUERY DB command to servers HQSRV, REGSRV, and
groups WEST_COMPLEX and NORTH_COMPLEX, enter:
hqsrv,regsrv,west_complex,north_complex: query db
or
(hqsrv,regsrv,west_complex,north_complex) query db
After you have the server groups set up, you can manage the groups and group
members.
To route commands to a server group, complete the following steps (an example
sequence follows the steps):
1. Define the server with the DEFINE SERVER command if it is not already
defined.
2. Define a new server group with the DEFINE SERVERGROUP command. Server
group names must be unique because both groups and server names are
allowed for the routing information.
3. Define servers as members of a server group with the DEFINE GRPMEMBER
command.
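For example, a minimal sketch that creates the WEST_COMPLEX group used in the
earlier examples, assuming that the member servers BLD12 and BLD13 are already
defined:
define servergroup west_complex
define grpmember west_complex bld12,bld13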
You can obtain information about server groups using the QUERY SERVERGROUP
command.
You can copy a server group using the COPY SERVERGROUP command.
This command creates the new group. If the new group already exists, the
command fails.
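For example, to copy the WEST_COMPLEX group to a new group named NEWWEST
(the group used in the later member examples), you might enter:
copy servergroup west_complex newwest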
You can rename a server group using the RENAME SERVERGROUP command.
You can update a server group using the UPDATE SERVERGROUP command.
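For example, assuming a group named TEST_GROUP exists, commands like the
following rename that group and update the description of WEST_COMPLEX (the
names and description text are illustrative):
rename servergroup test_group test_group2
update servergroup west_complex description="Western data center servers"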
You can delete a server group using the DELETE SERVERGROUP command.
To delete WEST_COMPLEX server group from the Tivoli Storage Manager server,
enter:
delete servergroup west_complex
This command removes all members from the server group. The server definition
for each group member is not affected. If the deleted server group is a member of
other server groups, the deleted group is removed from the other groups.
You can move group members to another group using the MOVE GRPMEMBER
command.
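For example, a command of the following form moves member NW13 from the
NORTH_COMPLEX group to the WEST_COMPLEX group (the member and group
names are taken from the earlier examples):
move grpmember nw13 north_complex west_complex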
You can delete group members from a group using the DELETE GRPMEMBER
command.
To delete group member BLD12 from the NEWWEST server group, enter:
delete grpmember newwest bld12
When you delete a server, the deleted server is removed from any server groups of
which it was a member.
The PING SERVER command uses the user ID and password of the administrative
ID that issued the command. If the administrator is not defined on the server
being pinged, the ping fails even if the server is running.
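For example, assuming the server GEO2 is defined, you can test the connection by
entering:
ping server geo2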
Tivoli Storage Manager allows a server (a source server) to store these items on
another server (a target server):
v database backups
v export operations
v storage pool operations
v DRM PREPARE command
The data is stored as virtual volumes, which appear to be sequential media volumes
on the source server, but which are actually stored as archive files on a target
server. Virtual volumes can contain any of these types of data.
The source server is a client of the target server, and the data for the source server
is managed only by the source server. In other words, the source server controls
the expiration and deletion of the files that comprise the virtual volumes on the
target server. You cannot use virtual volumes when the source server and the
target server are located on the same Tivoli Storage Manager server.
At the target server, the virtual volumes from the source server are seen as archive
data. The source server is registered as a client node (of TYPE=SERVER) at the
target server and is assigned to a policy domain. The archive copy group of the
default management class of that domain specifies the storage pool for the data
from the source server.
Note: If the default management class does not include an archive copy group,
data cannot be stored on the target server.
You can benefit from the use of virtual volumes in the following ways:
v Smaller Tivoli Storage Manager source servers can use the storage pools and
tape devices of larger Tivoli Storage Manager servers.
v For incremental database backups, virtual volumes can decrease wasted space on
volumes and under-utilization of high-end tape drives.
v The source server can use the target server as an electronic vault for recovery
from a disaster.
For details, see “Reconciling virtual volumes and archive files” on page 769.
Related concepts:
“Performance limitations for virtual volume operations” on page 766
Related tasks:
“Setting up source and target servers for virtual volumes”
In the following example (illustrated in Figure 88 on page 766), the source server is
named TUCSON and the target server is named MADERA.
v At Tucson site:
1. Define the target server:
– MADERA has a TCP/IP address of 127.0.0.1:1845
– Assign the password CALCITE to MADERA.
– Assign TUCSON as the node name by which the source server TUCSON
will be known by the target server. If no node name is assigned, the server
name of the source server is used. To see the server name, you can issue
the QUERY STATUS command.
2. Define a device class for the data to be sent to the target server. The device
type for this device class must be SERVER, and the definition must include
the name of the target server.
v At Madera site:
Register the source server as a client node. The target server can use an existing
policy domain and storage pool for the data from the source server. However,
you can define a separate management policy and storage pool for the source
server. Doing so can provide more control over storage pool resources.
1. Use the REGISTER NODE command to define the source server as a node of
TYPE=SERVER. The policy domain to which the node is assigned determines
where the data from the source server is stored. Data from the source server
is stored in the storage pool specified in the archive copy group of the
default management class of that domain.
2. You can set up a separate policy and storage pool for the source server.
a. Define a storage pool named SOURCEPOOL:
define stgpool sourcepool autotapeclass maxscratch=20
b. Copy an existing policy domain STANDARD to a new domain named
SOURCEDOMAIN:
copy domain standard sourcedomain
c. Assign SOURCEPOOL as the archive copy group destination in the
default management class of SOURCEDOMAIN:
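For example, assuming the copied domain keeps the STANDARD policy set and
STANDARD default management class names (adjust these names to match your
configuration), you might enter:
update copygroup sourcedomain standard standard type=archive
destination=sourcepool
activate policyset sourcedomain standard
Similarly, the Tucson-side definitions in steps 1 and 2 might be made with
commands of the following form, using the address, password, and node name
from this example and a device class named TARGETCLASS:
define server madera password=calcite hladdress=127.0.0.1 lladdress=1845
nodename=tucson
define devclass targetclass devtype=server servername=madera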
Related tasks:
“Changing policy” on page 501
Some of the factors that can affect volume performance when using virtual
volumes are:
v Distance between locations
v Network infrastructure and bandwidth between locations
v Network configuration
v Data size and distribution
v Data read and write patterns
Use the server-to-server virtual volumes feature to share a single tape library with
multiple servers. Although there are other situations that can use this feature, such
as cross-server or off-site vaulting, this feature is not optimized for long distances.
Avoid moving large amounts of data between the servers, which might slow down
communications significantly, depending on the network bandwidth and
availability.
Specify, in the device class definition (DEVTYPE=SERVER), how often and for how
long you want the source server to attempt to contact the target server. Keep in
mind that frequent attempts to contact the target server over an extended period
can affect your communications.
To minimize mount wait times, set the total mount limit for all server definitions
that specify the target server to a value that does not exceed the total mount limit
at the target server. For example, a source server has two device classes, each
specifying a mount limit of 2. A target server has only two tape drives. In this case,
the source server mount requests might exceed the number of drives available on
the target server.
For example, to perform an incremental backup of the source server and send the
volumes to the target server, issue the following command:
backup db type=incremental devclass=targetclass
See “Moving copy storage pool and active-data pool volumes on-site” on page
1076 for more information.
For example, a primary storage pool named TAPEPOOL is on the source server.
You can define a copy storage pool named TARGETCOPYPOOL, also on the
source server. TARGETCOPYPOOL must have an associated device class whose
device type is SERVER. When you back up TAPEPOOL to TARGETCOPYPOOL,
the backup is sent to the target server. To accomplish this, issue the following
commands:
define stgpool targetcopypool targetclass pooltype=copy
maxscratch=20
backup stgpool tapepool targetcopypool
To configure your system, ensure that the management policy for the client nodes
whose data is to be stored on the target server specifies a storage pool that has a
device class whose device type is SERVER. For example, the following command
defines the storage pool named TARGETPOOL.
define stgpool targetpool targetclass maxscratch=20
reclaim=100
For details about storage pool reclamation and how to begin it manually, see
“Reclaiming space in sequential-access storage pools” on page 390.
For example, storage pool TAPEPOOL is on the source server. The TAPEPOOL
definition specifies NEXTSTGPOOL=TARGETPOOL. TARGETPOOL has been
defined on the source server as a storage pool of device type SERVER. When data
is migrated from TAPEPOOL, it is sent to the target server.
define stgpool tapepool tapeclass nextstgpool=targetpool
maxscratch=20
For example, to copy server information directly to a target server, issue the
following command:
export server devclass=targetclass
If data has been exported from a source server to a target server, you can import
that data from the target server to a third server. The server that will import the
data uses the node ID and password of the source server to open a session with
the target server. That session is in read-only mode because the third server does
not have the proper verification code.
For example, to import server information from a target server, issue the following
command:
import server devclass=targetclass
Two methods are available to perform the export and import operation:
v Export directly to another server on the network. This results in an immediate
import process without the need for compatible sequential device types between
the two servers.
v Export to sequential media. Later, you can use the media to import the
information to another server that has a compatible device type.
This chapter takes you through the export and import tasks. See the following
sections:
Concepts:
“Reviewing data that can be exported and imported”
Tasks for Exporting Directly to Another Server:
“Exporting data directly to another server” on page 774
“Preparing to export to another server for immediate import” on page 778
“Monitoring the server-to-server export process” on page 780
Tasks for Exporting to Sequential Media:
“Exporting and importing data using sequential media volumes” on page 782
“Exporting tasks” on page 784
“Importing data from sequential media volumes” on page 787
Exporting restrictions
The export function does have some limitations and restrictions. One restriction is
that you can export information from an earlier version and release of Tivoli
Storage Manager to a later version and release, but not from a later version and
release to an earlier version and release.
For example, you can export from a V6.1 server to a V6.2 server, but you cannot
export from a V6.2 server to a V6.1 server.
Important:
1. Because results could be unpredictable, ensure that expiration, migration,
backup, or archive processes are not running when the EXPORT NODE command
is issued.
2. The EXPORT NODE and EXPORT SERVER commands will not export data from shred
pools unless you explicitly permit it by setting the ALLOWSHREDDABLE parameter
to YES. If this value is specified and the exported data includes data from
shred pools, that data can no longer be shredded.
Related concepts:
“Securing sensitive client data” on page 563
When you export to sequential media, administrators or users might modify data
shortly after it has been exported. If they do, the information copied to tape might
not be consistent with the data stored on the source server. If you want to export
an exact point-in-time copy of server control information, you can prevent
administrative and other client nodes from accessing the server.
When you export directly to another server, administrators or users may modify
data shortly after it has been exported. You can decide to merge file spaces, use
incremental export, or prevent administrative and other client nodes from
accessing the server.
Related concepts:
“Preventing administrative clients from accessing the server” on page 774
Related tasks:
“Preventing client nodes from accessing the server” on page 774
Related reference:
“Options to consider before exporting” on page 774
To prevent users from accessing the server during export operations, cancel
existing client sessions. Then perform one of the following steps (an example
sequence follows this list):
1. Disable server access to prevent client nodes from accessing the server.
This option is useful when you export all client node information from the
source server and want to prevent all client nodes from accessing the server.
2. Lock out particular client nodes from server access.
This option is useful when you export a subset of client node information from
the source server and want to prevent particular client nodes from accessing
the server until the export operation is complete.
After the export operation is complete, allow client nodes to access the server
again by:
v Enabling the server
v Unlocking client nodes
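For example, a minimal sketch that disables all client sessions before the export
and enables them again when the export completes:
disable sessions
enable sessions
To lock out only a particular client node instead, assuming a node named NODE1,
issue lock node node1 before the export and unlock node node1 after it completes.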
If you do not want to merge file spaces, see the topic on how duplicate file spaces
are managed.
Choosing to merge file spaces allows you to restart a cancelled import operation
because files that were previously imported can be skipped in the subsequent
import operation. This option is available when you issue an EXPORT SERVER or
EXPORT NODE command.
When you merge file spaces, the server performs versioning of the imported
objects based on the policy bound to the files. An import operation may leave the
target file space with more versions than policy permits. Files are versioned to
maintain the policy intent for the files, especially when incremental export (using
the FROMDATE and FROMTIME parameters) is used to maintain duplicate client file
copies on two or more servers.
The following definitions show how the server merges imported files, based on the
type of object, when you specify MERGEFILESPACES=YES.
Archive Objects
If an archive object for the imported node having the same TCP/IP
address, TCP/IP port, name, insert date, and description is found to
already exist on the target server, the imported object is skipped.
Otherwise, the archive object is imported.
Backup Objects
If a backup object for the imported node has the same TCP/IP address,
TCP/IP port, insert date, and description as the imported backup object,
the imported object is skipped. When backup objects are merged into
existing file spaces, versioning will be done according to policy just as it
occurs when backup objects are sent from the client during a backup
operation. Setting their insert dates to zero (0) will mark excessive file
versions for expiration.
Otherwise, the server performs the following tasks:
v If the imported backup object has a later (more recent) insert date than
an active version of an object on the target server with the same node,
file space, TCP/IP address, and TCP/IP port, then the imported backup
object becomes the new active copy, and the active copy on the target
server is made inactive. Tivoli Storage Manager expires this inactive
version based on the number of versions that are allowed in policy.
v If the imported backup object has an earlier (less recent) insert date than
an active copy of an object on the target server with the same node, file
space, TCP/IP address, and TCP/IP port, then the imported backup object is
inserted as an inactive version.
v If there are no active versions of an object with the same node, file
space, TCP/IP address, and TCP/IP port on the target server, and the
imported object has the same node, file space, TCP/IP address, and
TCP/IP port as the versions, then:
– An imported active object with a later insert date than the most recent
inactive copy will become the active version of the file.
The number of objects imported and skipped is displayed with the final statistics
for the import operation.
Related concepts:
“Managing duplicate file spaces” on page 795
Related tasks:
“Querying the activity log for export or import information” on page 800
You can use the FROMDATE and FROMTIME parameters to export data based on the
date and time the file was originally stored in the server. The FROMDATE and
FROMTIME parameters only apply to client user file data; these parameters have no
effect on other exported information such as policy. If clients continue to back up
to the originating server while their data is moving to a new server, you can move
the backup data that was stored on the originating server after the export
operation was initiated. This option is available when you issue an EXPORT SERVER
or EXPORT NODE command.
You can use the TODATE and TOTIME parameters to further limit the time you specify
for your export operation.
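For example, a command of the following form exports NODE1 data that was
stored on the originating server on or after 10/25/2007 directly to SERVERB (the
date is illustrative):
export node node1 filedata=all fromdate=10/25/2007 toserver=serverb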
Alternatively, you can have the server skip duplicate definitions. This option is
available when you issue any of the EXPORT commands.
Related concepts:
“Determining whether to replace existing definitions” on page 789
The resumed export continues at a point where the suspension took place.
Therefore, data that has already been exported is not exported again and only the
data that was not sent is included in the restarted export. Issue the QUERY EXPORT
command to view all running and suspended restartable export operations, the
RESTART EXPORT command to restart an export operation, or the SUSPEND EXPORT to
suspend a running server-to-server EXPORT NODE or EXPORT SERVER process.
Suspended server-to-server export operations are not affected by a server restart.
Note: Do not issue the CANCEL PROCESS command if you want to restart the
operation at a later time. CANCEL PROCESS ends the export process and deletes all
saved status.
If an export operation fails prior to identifying all eligible files, when the export
operation is restarted it continues to identify eligible files and may export files that
were backed up while the operation was suspended.
A restarted export operation will export only the data that was identified. During a
suspension, some files or nodes identified for export might be deleted or might
expire. To ensure that all data is exported, restart the export operation at the
earliest time and restrict operations on the selected data.
At any given time, a restartable export operation will be in one of the following
states:
Running - Not Suspendible
This state directly corresponds to phase 1 of a restartable export, “Creating
definitions on target server.”
Attention: Ensure that the target server's Tivoli Storage Manager level is newer
or the same as the source server's level. If you suspend export operations and
upgrade the source server's database, the target server may stop the export
operation if the new source server's Tivoli Storage Manager level is incompatible
with the target server's level.
To determine how much space is required to export all server data, issue the
following command:
export server filedata=all previewimport=yes
After you issue the EXPORT SERVER command, a message similar to the following
message is issued when the server starts a background process:
EXPORT SERVER started as Process 4
You can view the preview results by querying the activity log.
You can also view the results on the following applications:
v Server console
Related tasks:
“Requesting information about an export or import process” on page 798
“Canceling server processes” on page 651
You can direct import messages to an output file to capture any error messages
that are detected during the import process. Do this by starting an administrative
client session in console mode before you invoke the import command.
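For example, assuming the administrative command-line client is installed, a
session like the following captures messages to a file (the file name is illustrative):
dsmadmc -consolemode -outfile=import_messages.out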
If you want to view the status of any server-to-server exports that can be
suspended, issue the QUERY EXPORT command. The QUERY EXPORT command lists all
running or suspended operations.
If a process completes, you can query the activity log for status information from
an administrative client running in batch or interactive mode.
You can also query the activity log for status information from the server console.
The process first builds a list of what is to be exported. The process can therefore
be running for some time before any data is transferred. The connection between
the servers might time-out. You may need to adjust the COMMTIMEOUT and
IDLETIMEOUT server options on one or both servers.
You can specify a list of administrator names, or you can export all administrator
names.
Issue the following command to export all the administrator definitions to the
target server defined as OTHERSERVER.
export admin * toserver=otherserver previewimport=yes
This lets you preview the export without actually exporting the data for immediate
import.
You can also specify whether to export file data. File data includes file space
definitions and authorization rules. You can request that file data be exported in
any of the following groupings of files:
v Active and inactive versions of backed up files, archive copies of files, and
space-managed files
v Active versions of backed up files, archive copies of files, and space-managed
files
v Active and inactive versions of backed up files
v Active versions of backed up files
v Archive copies of files
v Space-managed files
To export client node information and all client files for NODE1 directly to
SERVERB, issue the following example command:
export node node1 filedata=all toserver=serverb
Important: When you specify a list of node names or node patterns, the server
will not report the node names or patterns that do not match any entries in the
database. Check the summary statistics in the activity log to verify that the server
exported all intended nodes.
To export server data to another server on the network, have the file spaces
merged with any existing file spaces on the target server, replace definitions on
the target server, and export only data that was inserted in the originating server
on or after 10/25/2007, issue the following command:
export server toserver=serv23 fromdate=10/25/2007 filedata=all
mergefilespaces=yes dates=relative
You can view the preview results by querying the activity log or the following
place:
v Server console
You can request information about the background process. If necessary, you
can cancel an export or import process.
Related tasks:
“Requesting information about an export or import process” on page 798
“Canceling server processes” on page 651
Note:
a. If the mount limit for the device class selected is reached when you request
an export (that is, if all the drives are busy), the server automatically cancels
lower priority operations, such as reclamation, to make a mount point
available for the export.
b. You can export data to a storage pool on another server by specifying a
device class whose device type is SERVER.
2. Estimate the number of removable media volumes to label.
To estimate the number of removable media volumes to label, divide the
number of bytes to be moved by the estimated capacity of a volume.
You can estimate the following forms of removable media volumes:
v The number of removable media volumes needed to store export data
For example, you have 8 mm tapes with an estimated capacity of 2472 MB. If
the preview shows that you need to transfer 4 GB of data, then label at least
two tape volumes before you export the data.
3. Use scratch media. The server allows you to use scratch media to ensure that
you have sufficient space to store all export data. If you use scratch media,
record the label names and the order in which they were mounted.
Or, use the USEDVOLUMELIST parameter on the export command to create a file
containing the list of volumes used.
4. Label the removable media volumes.
Exporting tasks
You can export all server control information or a subset of server control
information.
When you export data, you must specify the device class to which export data will
be written. You must also list the volumes in the order in which they are to be
mounted when the data is imported.
You can specify the USEDVOLUMELIST parameter to indicate the name of a file where
a list of volumes used in a successful export operation will be stored. If the
specified file is created without errors, it can be used as input to the IMPORT
command on the VOLUMENAMES=FILE:filename parameter. This file will contain
comment lines with the date and time the export was done, and the command
issued to create the export.
Note: An export operation will not overwrite an existing file. If you perform an
export operation and then try the same operation again with the same volume
name, the file is skipped, and a scratch file is allocated. To use the same volume
name, delete the volume entry from the volume history file.
Related tasks:
“Planning for sequential media used to export data” on page 783
You can specify a list of administrator names, or you can export all administrator
names.
Issue the following command to export definitions for the DAVEHIL and PENNER
administrator IDs to the DSM001 tape volume, which the TAPECLASS device class
supports, and to not allow any scratch media to be used during this export
process:
export admin davehil,penner devclass=tapeclass
volumenames=dsm001 scratch=no
You can also specify whether to export file data. File data includes file space
definitions and authorization rules. You can request that file data be exported in
any of the following groupings of files:
v Active and inactive versions of backed up files, archive copies of files, and
space-managed files
v Active versions of backed up files, archive copies of files, and space-managed
files
v Active and inactive versions of backed up files
v Active versions of backed up files
v Archive copies of files
v Space-managed files
When exporting active versions of client backup data, the server searches for active
file versions in an active-data pool associated with a FILE device class, if such a
pool exists. This process minimizes the number of mounts that are required during
the export process.
If you do not specify that you want to export file data, then the server only exports
client node definitions.
For example, suppose that you want to perform the following steps:
When you issue the EXPORT POLICY command, the server exports the following
information belonging to each specified policy domain:
v Policy domain definitions
v Policy set definitions, including the active policy set
v Management class definitions, including the default management class
v Backup copy group and archive copy group definitions
v Schedule definitions
v Associations between client nodes and schedules
For example, suppose that you want to export policy and scheduling definitions
from the policy domain named ENGPOLDOM. You want to use tape volumes
DSM001 and DSM002, which belong to the TAPECLASS device class, but allow the
server to use scratch tape volumes if necessary.
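For example, a command of the following form performs this export; because
SCRATCH=YES is the default, the server can use scratch volumes if DSM001 and
DSM002 are not sufficient:
export policy engpoldom devclass=tapeclass volumenames=dsm001,dsm002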
For example, you want to export server data to four defined tape cartridges, which
the TAPECLASS device class supports. You want the server to use scratch volumes
if the four volumes are not enough, and so you use the default of SCRATCH=YES.
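For example, assuming the four volumes are named DSM001 through DSM004 (the
volume names are illustrative), a command of the following form performs this
export:
export server filedata=all devclass=tapeclass
volumenames=dsm001,dsm002,dsm003,dsm004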
After Tivoli Storage Manager is installed and set up on the target server, a system
administrator can import all server control information or a subset of server
control information by specifying one or more of the following import commands:
v IMPORT ADMIN
v IMPORT NODE
v IMPORT POLICY
v IMPORT SERVER
You can merge imported client backup, archive, and space-managed files into
existing file spaces, and automatically skip duplicate files that may exist in the
target file space on the server. Optionally, you can have new file spaces created.
If you do not want to merge file spaces, look into how duplicate file spaces are
managed. Choosing to merge file spaces allows you to restart a cancelled import
operation since files that were previously imported can be skipped in the
subsequent import operation.
When you merge file spaces, the server performs versioning of the imported
objects based on the policy bound to the files. An import operation may leave the
target file space with more versions than policy permits. Files are versioned to
maintain the policy intent for the files, especially when incremental export (using
the FROMDATE and FROMTIME parameters) is used to maintain duplicate client file
copies on two or more servers.
The following definitions show how the server merges imported files, based on the
type of object, when you specify MERGEFILESPACES=YES.
Archive Objects
If an archive object for the imported node having the same TCP/IP
address, TCP/IP port, insert date, and description is found to already exist
on the target server, the imported object is skipped. Otherwise, the archive
object is imported.
Backup Objects
If a backup object for the imported node has the same TCP/IP address,
TCP/IP port, insert date, and description as the imported backup object,
the imported object is skipped. When backup objects are merged into
existing file spaces, versioning will be done according to policy just as it
occurs when backup objects are sent from the client during a backup
operation. Setting their insert dates to zero (0) will mark excessive file
versions for expiration.
Otherwise, the server performs the following tasks:
v If the imported backup object has a later (more recent) insert date than
an active version of an object on the target server with the same node,
file space, TCP/IP address, and TCP/IP port, then the imported backup
object becomes the new active copy. The active copy on the target server
is made inactive. Tivoli Storage Manager expires this inactive version
based on the number of versions that are allowed in policy.
The number of objects imported and skipped is displayed with the final statistics
for the import operation.
Related concepts:
“Managing duplicate file spaces” on page 795
Related tasks:
“Querying the activity log for export or import information” on page 800
By using the REPLACEDEFS parameter with the IMPORT command, you can specify
whether to replace existing definitions on the target server when Tivoli Storage
Manager encounters an object with the same name during the import process.
For example, if a definition exists for the ENGPOLDOM policy domain on the
target server before you import policy definitions, then you must specify
REPLACEDEFS=YES to replace the existing definition with the data from the
export tape.
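For example, assuming the policy definitions were exported to the DSM001 tape
volume in the TAPECLASS device class, a command like the following replaces the
existing ENGPOLDOM definition during the import:
import policy engpoldom replacedefs=yes devclass=tapeclass volumenames=dsm001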
When you import file data, you can keep the original creation date for backup
versions and archive copies, or you can specify that the server use an adjusted
date.
If you want to keep the original dates set for backup versions and archive copies,
use DATES=ABSOLUTE, which is the default. If you use the absolute value, any
files whose retention period has passed will be expired shortly after they are
imported to the target server.
When you specify a relative date, the dates of the file versions are adjusted to the
date of import on the target server. This is helpful when you export from a server
When you set PREVIEW=YES, tape operators must mount export tape volumes so
that the target server can calculate the statistics for the preview.
Issue the following command to preview information for an IMPORT SERVER
operation:
import server devclass=tapeclass preview=yes
volumenames=dsm001,dsm002,dsm003,dsm004
Figure 89 on page 791 shows an example of the messages sent to the activity log
and the following place:
Server console
Figure 89. Sample report created by issuing preview for an import server command
Use the value reported for the total number of bytes copied to estimate storage
pool space needed to store imported file data.
For example, Figure 89 shows that 8 856 358 bytes of data will be imported.
Ensure that you have at least 8 856 358 bytes of available space in the backup
storage pools defined to the server. You can issue the QUERY STGPOOL and QUERY
VOLUME commands to determine how much space is available in the server storage
hierarchy.
In addition, the preview report shows that 0 archive files and 462 backup files will
be imported. Because backup data is being imported, ensure that you have
sufficient space in the backup storage pools used to store this backup data.
Importing definitions
After you preview the information, import the server control information first.
This includes administrator definitions, client node definitions, policy domain,
policy set, management class, and copy group definitions, schedule definitions,
and client node associations.
However, do not import file data at this time, because some storage pools named
in the copy group definitions may not exist yet on the target server.
Before you import server control information, perform the following tasks:
1. Read the following topics:
v “Determining whether to replace existing definitions” on page 789
v “Determining how the server imports active policy sets”
2. Start an administrative client session in console mode to capture import
messages to an output file.
3. Import the server control information from specified tape volumes.
Related tasks:
“Directing import messages to an output file” on page 793
“Importing server control information” on page 793
When the server imports policy definitions, several objects are imported to the
target server.
If the server encounters a policy set named ACTIVE on the tape volume during the
import process, it uses a temporary policy set named $$ACTIVE$$ to import the
active policy set.
After each $$ACTIVE$$ policy set has been activated, the server deletes that
$$ACTIVE$$ policy set from the target server. To view information about active
policy on the target server, you can use the following commands:
v QUERY COPYGROUP
v QUERY DOMAIN
v QUERY MGMTCLASS
v QUERY POLICYSET
Results from issuing the QUERY DOMAIN command show the activated policy set as
$$ACTIVE$$. The $$ACTIVE$$ name shows you that the policy set which is
currently activated for this domain is the policy set that was active at the time the
export was performed.
The information generated by the validation process can help you define a storage
hierarchy that supports the storage destinations currently defined in the import
data.
You can direct import messages to an output file to capture any error messages
that are detected during the import process. Do this by starting an administrative
client session in console mode before you invoke the import command.
If you have completed the prerequisite steps, you might be ready to import the
server control information.
Based on the information generated during the preview operation, you know that
all definition information has been stored on the first tape volume named DSM001.
Specify that this tape volume can be read by a device belonging to the
TAPECLASS device class.
You can issue the command from an administrative client session or from the
server console.
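For example, based on the preview, a command of the following form imports the
server control information from DSM001 (FILEDATA defaults to NONE, so only
definitions are imported):
import server devclass=tapeclass volumenames=dsm001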
To tailor server storage definitions on the target server, complete the following
steps:
1. Identify any storage destinations specified in copy groups and management
classes that do not match defined storage pools:
v If the policy definitions you imported included an ACTIVE policy set, that
policy set is validated and activated on the target server. Error messages
generated during validation include whether any management classes or
copy groups refer to storage pools that do not exist on the target server. You
have a copy of these messages in a file if you directed console messages to
an output file.
v Query management class and copy group definitions to compare the storage
destinations specified with the names of existing storage pools on the target
server.
To request detailed reports for all management classes, backup copy groups,
and archive copy groups in the ACTIVE policy set, enter these commands:
query mgmtclass * active * format=detailed
query copygroup * active * standard type=backup format=detailed
query copygroup * active * standard type=archive format=detailed
2. If storage destinations for management classes and copy groups in the ACTIVE
policy set refer to storage pools that are not defined, perform one of the
following tasks:
v Define storage pools that match the storage destination names for the
management classes and copy groups.
v Change the storage destinations for the management classes and copy
groups. To do so, perform the following steps (a sketch of the commands
follows this list):
a. Copy the ACTIVE policy set to another policy set
b. Modify the storage destinations of management classes and copy groups
in that policy set, as required
c. Activate the new policy set
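A minimal sketch of these steps, assuming the STANDARD policy domain, a new
policy set named REBOUND, a management class named STANDARD, and a
backup destination pool named DISKPOOL (all of these names are illustrative):
copy policyset standard active rebound
update copygroup standard rebound standard type=backup destination=diskpool
activate policyset standard rebound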
Depending on the amount of client file data that you expect to import, you may
want to examine the storage hierarchy to ensure that sufficient storage space is
available. Storage pools specified as storage destinations by management classes
and copy groups may fill up with data. For example, you may need to define
additional storage pools to which data can migrate from the initial storage
destinations.
Related tasks:
“Directing import messages to an output file” on page 793
“Defining storage pools” on page 273
Related reference:
“Defining and updating a policy set” on page 522
You can request that file data be imported in any of the following groupings:
v Active and inactive versions of backed up files, archive copies of files, and
space-managed files
v Active versions of backed up files, archive copies of files, and space-managed
files
v Active and inactive versions of backed up files
v Active versions of backed up files
v Archive copies of files
v Space-managed files
Data being imported will not be stored in active-data pools. Use the COPY
ACTIVEDATA command to store newly imported data into an active-data pool.
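For example, assuming a primary storage pool named BACKUPPOOL that received
the imported data and an active-data pool named CLIENTACTIVE (both names are
illustrative), you might enter:
copy activedata backuppool clientactive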
When the server imports file data information, it imports any file spaces belonging
to each specified client node. If a file space definition already exists on the target
server for the node, the server does not replace the existing file space name.
If the server encounters duplicate file space names when it imports file data
information, it creates a new file space name for the imported definition by
replacing the final character or characters with a number. A message showing the
old and new file space names is written to the system log, to the activity log, and
to the server console.
For example, if the C_DRIVE and D_DRIVE file space names reside on the target
server for node FRED and on the tape volume for FRED, then the server imports
the C_DRIVE file space as C_DRIV1 file space and the D_DRIVE file space as
D_DRIV1 file space, both assigned to node FRED.
When you import file data, you can keep the original creation date for backup
versions and archive copies, or you can specify that the server use an adjusted
date.
Because tape volumes containing exported data might not be used for some time,
the original dates defined for backup versions and archive copies may be old
enough that files are expired immediately when the data is imported to the target
server.
For example, assume that data exported to tape includes an archive copy archived
five days prior to the export operation. If the tape volume resides on the shelf for
six months before the data is imported to the target server, the server resets the
archival date to five days prior to the import operation.
If you want to keep the original dates set for backup versions and archive copies,
use DATES=ABSOLUTE, which is the default. If you use the absolute value, any
files whose retention period has passed will be expired shortly after they are
imported to the target server.
You can import file data, either by issuing the IMPORT SERVER or IMPORT NODE
command. When you issue either of these commands, you can specify which type
of files should be imported for all client nodes specified and found on the export
tapes.
You can specify any of the following values to import file data:
All Specifies that all active and inactive versions of backed up files, archive
copies of files, and space-managed files for specified client nodes are
imported to the target server
None Specifies that no files are imported to the target server; only client node
definitions are imported
Archive
Specifies that only archive copies of files are imported to the target server
Backup
Specifies that only backup copies of files, whether active or inactive, are
imported to the target server
Backupactive
Specifies that only active versions of backed up files are imported to the
target server
Allactive
Specifies that only active versions of backed up files, archive copies of files,
and space-managed files are imported to the target server
Spacemanaged
Specifies that only files that have been migrated from a user’s local file
system (space-managed files) are imported
For example, suppose you want to import all backup versions of files, archive
copies of files, and space-managed files to the target server. You do not want to
replace any existing server control information during this import operation.
Specify the four tape volumes that were identified during the preview operation.
These tape volumes can be read by any device in the TAPECLASS device class. To
issue this command, enter:
import server filedata=all replacedefs=no
devclass=tapeclass volumenames=dsm001,dsm002,dsm003,dsm004
If the ENGDOM policy domain exists on the target server, the imported nodes are
assigned to that domain. If ENGDOM does not exist on the target server, the
imported nodes are assigned to the STANDARD policy domain.
If you do not specify a domain on the IMPORT NODE command, the imported node
is assigned to the STANDARD policy domain.
While the server allows you to issue any import command, data cannot be
imported to the server if it has not been exported to tape. For example, if a tape is
created with the EXPORT POLICY command, an IMPORT NODE command will not find
any data on the tape because node information is not a subset of policy
information.
See Table 74 for the commands that you can use to import a subset of exported
information to a target server.
Table 74. Importing a subset of information from tapes
If tapes were created with    You can issue this import    You cannot issue this import
this export command:          command:                     command:

EXPORT SERVER                 IMPORT SERVER                Not applicable.
                              IMPORT ADMIN
                              IMPORT NODE
                              IMPORT POLICY

EXPORT NODE                   IMPORT NODE                  IMPORT ADMIN
                              IMPORT SERVER                IMPORT POLICY

EXPORT ADMIN                  IMPORT ADMIN                 IMPORT NODE
                              IMPORT SERVER                IMPORT POLICY

EXPORT POLICY                 IMPORT POLICY                IMPORT ADMIN
                              IMPORT SERVER                IMPORT NODE
If invalid data is encountered during an import operation, the server uses the
default value for the new object's definition. If the object already exists, the existing
parameter is not changed.
During import and export operations, the server reports on the affected objects to
the activity log and also to the server console.
You should query these objects when the import process is complete to see if they
reflect information that is acceptable.
A file space definition may already exist on the target server for the node. If so, an
administrator with system privilege can issue the DELETE FILESPACE command to
remove file spaces that are corrupted or no longer needed. For more information
on the DELETE FILESPACE command, refer to the Administrator's Reference.
Related concepts:
“Managing duplicate file spaces” on page 795
An imported file space can have the same name as a file space that already exists
on a client node. In this case, the server does not overlay the existing file space,
and the imported file space is given a new system generated file space name.
This new name may match file space names that have not been backed up and are
unknown to the server. In this case, you can use the RENAME FILESPACE command
to rename the imported file space to the naming convention used for the client
node.
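For example, using the node FRED and the imported file space C_DRIV1 from the
earlier example, a command of the following form renames the file space (the new
name is illustrative):
rename filespace fred c_driv1 c_drive_imported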
You can use the following two ways to monitor export or import processes:
v You can view information about a process that is running on the server console
or from an administrative client running in console mode.
v After a process has completed, you can query the activity log for status
information from an administrative client running in batch or interactive mode.
Watch for mount messages, because the server might request mounts of volumes
that are not in the library. The process first builds a list of what is to be exported.
The process can therefore be running for some time before any data is transferred.
You can query an export or import process by specifying the process ID number.
For example, to request information about the EXPORT SERVER operation, which
started as process 4, enter:
query process 4
If you issue a preview version of an EXPORT or IMPORT command and then query
the process, the server reports the types of objects to be copied, the number of
objects to be copied, and the number of bytes to be copied.
When you export or import data and then query the process, the server displays
To minimize processing time when querying the activity log for export or import
information, restrict the search by specifying EXPORT or IMPORT in the SEARCH
parameter of the QUERY ACTLOG command.
To determine how much data will be moved after issuing the preview version of
the EXPORT SERVER command, query the activity log by issuing the following
command:
query actlog search=export
Tasks:
Chapter 26, “Basic monitoring methods,” on page 819
Chapter 25, “Daily monitoring tasks,” on page 805
“Using IBM Tivoli Storage Manager queries to display information” on page 819
“Using SQL to query the IBM Tivoli Storage Manager database” on page 824
“Using the Tivoli Storage Manager activity log” on page 829
Chapter 32, “Logging IBM Tivoli Storage Manager events to receivers,” on page 885
Chapter 29, “Monitoring Tivoli Storage Manager accounting records,” on page 837
Tivoli Monitoring for Tivoli Storage Manager
“Cognos Business Intelligence” on page 846
Backing up and restoring Tivoli Monitoring for Tivoli Storage Manager
| You can complete the monitoring tasks by using the command-line interface (CLI).
| A subset of tasks can also be completed by using the Operations Center, the
| Administration Center, or IBM Tivoli Monitoring for Tivoli Storage Manager.
The following list describes some of the items that are important to monitor daily.
Instructions for monitoring these items, and for other monitoring tasks, can be
found in the topics in this section. Not all of these tasks apply to all environments.
v Verify that the database file system has enough space.
v Examine the database percent utilization, available free space, and free-pages.
v Verify that there is enough disk space in the file systems that contain these log
files.
– Active log
– Archive log
– Mirror log
– Archive failover log
v Verify that the instance directory file system has enough space.
v Verify that the database backups completed successfully, and that they are
running frequently enough.
v Check the database and recovery log statistics.
v Verify that you have current backup files for device configuration and volume
history information. You can find the file names for the backups by looking in
the dsmserv.opt file for the DEVCONFIG and VOLUMEHISTORY options. Ensure that
file systems where the files are stored have sufficient space.
v Search the summary table for failed processes.
v Search the activity log for error messages.
v For storage pools that have deduplication enabled, ensure that processes are
completing successfully.
v Check the status of your storage pools to ensure that there is enough space
available.
v Check for any failed storage pool migrations.
v Check the status of sequential access storage pools.
v Check how many scratch volumes are available.
v Determine whether any tape drives, or the paths to those drives, are offline.
v Determine whether any libraries, or the paths to those libraries, are offline.
v Verify that all of the tapes have the appropriate write-access.
v Verify the status and settings for disaster recovery manager (DRM).
v Check for failed or missed schedules.
v Check the summary table for scheduled client operations such as backup,
restore, archive, and retrieve.
For detailed information about the commands mentioned here, see the
Administrator's Reference.
The examples used here are based on a 24-hour period, but your values can differ
depending on the time frame you specify.
The following steps describe the commands that you can use to monitor server
processes:
1. Search the summary table for any server processes that failed within the
previous 24-hour period:
select activity as process, number as processnum from summary where
activity in ('EXPIRATION','RECLAMATION','MIGRATION','STGPOOL BACKUP',
'FULL_DBBACKUP','INCR_DBBACKUP','REPLICATION') and successful='NO'
and end_time> (current_timestamp - interval '24' hours)
This example output indicates that backup storage pool process number 7
failed:
PROCESS: STGPOOL BACKUP
PROCESSNUM: 7
2. Search the activity log for the messages associated with the failed process
number that was indicated in the output of the command in Step 1.
select message from actlog where process=7 and date_time>(current_timestamp
- interval '24' hours) and severity in ('W','E','S')
Example output:
MESSAGE: ANR1221E BACKUP STGPOOL: Process 7 terminated - insufficient space in
target storage pool FILECOPYPOOL. (SESSION: 1, PROCESS: 7)
Example output:
FREQUENCY
------------
3
Example output:
ACTIVITY: IDENTIFY
NUMBER: 5
FILESPROCESSED: 12946
DUPLICATEEXTENTS: 10504
DUPLICATEBYTES: 127364341
SUCCESSFUL: YES
Related tasks:
“Monitoring your database daily”
“Monitoring disk storage pools daily” on page 810
“Monitoring sequential access storage pools daily” on page 811
“Monitoring scheduled operations daily” on page 814
“Monitoring operations daily with Tivoli Monitoring for Tivoli Storage Manager”
on page 815
“Monitoring operations daily using the Operations Center” on page 817
For detailed information about the commands mentioned here, see the
Administrator's Reference.
The following steps describe the commands that you can use to monitor the
database:
1. Use the QUERY DBSPACE command, and then examine the file system information
reported through the query to ensure that the file system has adequate space.
Examine the total, used, and free space.
query dbspace
Example output:
2. Examine the file systems where the database is located, using the appropriate
operating system commands for the following:
v Ensure that the file systems are not approaching full.
v Ensure that other applications, or unexpected users of the file system space
are not storing data in the server database directories.
v Check the operating system and device error logs for any early signs or
indications of device failures.
3. Query the database to ensure that the percent utilization is acceptable, and that
the remaining space is sufficient for the next few days or weeks of expected
activity. This includes examining the free space available, and the free-pages
values. If you find that you are approaching your space limits, take action to
ensure that you get additional space provisioned to avoid any potential
problems.
query db format=detailed
Example output:
Database Name: mgsA2
Total Size of File System (MB): 253,952
Space Used by Database(MB): 544
Free Space Available (MB): 191,821
Total Pages: 40,964
Usable Pages: 40,828
Used Pages: 33,116
Free Pages: 7,712
Buffer Pool Hit Ratio: 97.7
Total Buffer Requests: 102,279
Sort Overflows: 0
Package Cache Hit Ratio: 78.9
Last Database Reorganization: 08/24/2011 17:28:28
Full Device Class Name: FILECLASS
Incrementals Since Last Full: 1
Last Complete Backup Date/Time: 08/25/2011 15:02:31
4. Monitor the file systems to ensure that they are not running out of space. Verify
that there is enough disk space in the file systems that contain these log files:
v Active log
v Archive log
v Mirror log
v Archive failover log
If the archive log directory fills up, it overflows to the active log directory. If
you see the archive log file systems filling up, it might be an indication that
a database backup is not being run, or not being run often enough. It might
also be an indication that the space is shared with other applications that are
contending for the same space.
Issue this command to look at the total space used, free space, and so on.
query log format=detailed
Example output:
5. Examine the instance directory to ensure that it has enough space. If there is
insufficient space in this directory, the Tivoli Storage Manager server fails to
start.
You should also examine the instance_dir/sqllib/db2dump directory and delete
*.trap.txt and *.dump.bin files regularly.
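For example, on a Windows system, you might remove these files with commands
similar to the following, where instance_dir is the path of your server instance
directory:
del /q instance_dir\sqllib\db2dump\*.trap.txt
del /q instance_dir\sqllib\db2dump\*.dump.bin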
V6.1 servers:
v Servers that are running version 6.1 must periodically delete the db2diag.log
file.
6. Verify that the database backups completed successfully, and examine the
details to determine if there are any problems:
select * from summary where end_time>(current_timestamp - interval
'24' hours) and activity in ('FULL_DBBACKUP','INCR_DBBACKUP')
If there are no results to this select command, then there were no database
backups in the previous 24-hour period.
a. Issue the QUERY PROCESS command to look at current status of an active
backup:
query process
Example output:
Process Process Description Status
Number
-------- -------------------- -------------------------------------------------
5 Database Backup TYPE=FULL in progress. 62,914,560 bytes
backed up to volume /fvt/kolty/srv/Storage/143-
12072.DSS .
7. Check that the files that are specified by the DEVCONFIG and VOLUMEHISTORY options
in the dsmserv.opt file are current. Ensure that the file systems where these
files are written are not running out of space. If there are old or
unnecessary volume history entries, consider pruning the old entries with the
DELETE VOLHISTORY command.
Important: Save the volume history file to multiple locations. Ensure that these
different locations represent different underlying disks and file systems.
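For example, the dsmserv.opt file might contain entries similar to the following;
the file names and directories are illustrative and show each file being written
to two different disks:
devconfig c:\tsmdata\devconf.dat
devconfig d:\tsmbackup\devconf.dat
volumehistory c:\tsmdata\volhist.dat
volumehistory d:\tsmbackup\volhist.dat
To prune volume history entries that are older than, for example, 45 days, you
might enter:
delete volhistory type=all todate=today-45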
For detailed information about the commands mentioned here, see the
Administrator's Reference.
The following steps describe the commands that you can use to monitor disk
storage pools:
1. Check the status of storage pools, and ensure that there is enough space
available.
v Examine the percent utilization to ensure that the amount of space is
sufficient for ingestion rates.
v Set the high and low migration thresholds to values that allow for proper
migration cycles.
v If the storage pool is set to CACHE=YES, the percent migration should be
approaching zero, which indicates that items are being cleared out of the
pool appropriately.
Issue the QUERY STGPOOL command to display information about one or more
storage pools.
query stgpool
Example output:
Storage     Device     Estimated  Pct   Pct   High Low Next
Pool        Class      Capacity   Util  Migr  Mig  Mig Storage-
Name        Name                              Pct  Pct Pool
----------- ---------- ---------- ----- ----- ---- --- -----------
ARCHIVEPOOL DISK        1,000.0 M   0.0   0.0   90  70 storage_pool
BACKUPPOOL  DISK        1,000.0 M   0.0   0.0    5   1 storage_pool
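If the thresholds need to be adjusted, you can change them with the UPDATE
STGPOOL command; for example (the storage pool name and threshold values are
illustrative):
update stgpool backuppool highmig=80 lowmig=20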
2. Check the status of the disk volumes. Issue the SELECT command and specify a
particular device class name:
select volume_name, status from volumes
where devclass_name='devclass name'
Example output:
VOLUME_NAME: /fvt/kolty/srv/Storage/ar1
STATUS: ONLINE
VOLUME_NAME: /fvt/kolty/srv/Storage/bk1
STATUS: ONLINE
Example output:
START_TIME: 2011-08-23 14:53:37.000000
END_TIME: 2011-08-23 14:53:38.000000
PROCESS: MIGRATION
PROCESSNUM: 7
POOLNAME: storage_pool_example
Related tasks:
“Monitoring your server processes daily” on page 806
“Monitoring your database daily” on page 807
“Monitoring sequential access storage pools daily”
“Monitoring scheduled operations daily” on page 814
“Monitoring operations daily with Tivoli Monitoring for Tivoli Storage Manager”
on page 815
“Monitoring operations daily using the Operations Center” on page 817
For detailed information about the commands mentioned here, see the
Administrator's Reference.
The following steps describe the commands that you can use to monitor sequential
access storage pools:
1. Check the status of your storage pools, and ensure that there is enough space
available. Examine the percent utilization to ensure that the amount of space is
sufficient for the amount of data that is being taken in. Set the high and low
migration thresholds to values that will allow for proper migration cycles.
Issue the QUERY STGPOOL command to display information about one or more
storage pools.
query stgpool
Example output:
Storage     Device     Estimated  Pct   Pct   High Low Next Storage-
Pool Name   Class Name Capacity   Util  Migr  Mig  Mig Pool
                                              Pct  Pct
----------- ---------- ---------- ----- ----- ---- --- -----------
ARCHIVEPOOL DISK        1,000.0 M   0.0   0.0   90  70 storage_pool
BACKUPPOOL  DISK        1,000.0 M   0.0   0.0    5   1 storage_pool
2. Check the status of the sequential access storage pool volumes with this SELECT
command:
select volume_name,status,access,write_errors,read_errors,
error_state from volumes where stgpool_name='STORAGE_POOL_NAME'
The select statement can be modified to limit the results based on error state,
read-write errors, or current-access state. Example output:
VOLUME_NAME: /fvt/kolty/srv/Storage/00000153.BFS
STATUS: FULL
ACCESS: READWRITE
WRITE_ERRORS: 0
READ_ERRORS: 0
ERROR_STATE: NO
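For example, to limit the results to volumes that are in an error state (the
storage pool name is illustrative), you might enter:
select volume_name,status,access from volumes
where stgpool_name='TAPEPOOL' and error_state='YES'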
3. Verify that all of the tapes have the appropriate write-access by issuing this
command:
select volume_name,access from volumes
where stgpool_name='TAPEPOOL' and access!='READWRITE'
For example, this output indicates that the following volumes are not available
for use:
VOLUME_NAME: A00011L4
ACCESS: DESTROYED
VOLUME_NAME: KP0033L3
ACCESS: UNAVAILABLE
4. Use the QUERY DIRSPACE command to display information about free space in
the directories that are associated with a device class with a device type of
FILE.
query dirspace
Example output:
Device Class Directory Estimated Estimated
Name Capacity Available
------------ ------------------------------------ --------- ---------
FILECLASS /fvt/kolty/srv/Storage 253,952 M 185,616 M
Tip: Ensure that the amount of available space is higher than the total capacity
of all storage pools assigned to the device class or classes using that directory.
5. Determine how many scratch volumes are available in tape libraries with this
SELECT command:
select library_name,count(*) "Scratch volumes" from libvolumes
where status='Scratch' group by library_name
Example output:
LIBRARY_NAME Scratch volumes
------------------------- ----------------
TS3310 6
6. Determine how many scratch volumes can be potentially allocated out of the
storage pools using those tape libraries.
select stgpool_name,(maxscratch-numscratchused)
as "Num Scratch Allocatable" from stgpools
where devclass='DEVICE_CLASS_NAME'
Example output:
Tip: Ensure that the number of allocatable scratch volumes is equal to the
number of available scratch library volumes in the assigned tape library.
7. Issue these SELECT commands to determine if there are any tape drives or paths
that are offline:
a. Check to ensure that the drives are online:
select drive_name,online from drives
where online<>'YES'
Example output:
DRIVE_NAME ONLINE
-------------------------------- -----------------------------------------
DRIVEA NO
b. Check to ensure that the paths to the drives are also online. A drive can be
online, while the path is offline.
select library_name,destination_name,online
from paths where online<>'YES' and destination_type='DRIVE'
Example output:
LIBRARY_NAME: TS3310
DESTINATION_NAME: DRIVEA
ONLINE: NO
8. Check to see if there are any library paths that are offline with this SELECT
command:
select destination_name,device,online from paths
where online<>'YES' and destination_type='LIBRARY'
Example output:
DESTINATION_NAME: TS3310
DEVICE: /dev/smc0
ONLINE: NO
9. If you are using the DRM, check the status and settings.
a. Check to see which copy storage pool volumes are onsite:
select stgpool_name,volume_name,upd_date,voltype from drmedia
where state in ('MOUNTABLE','NOTMOUNTABLE')
Example output:
STGPOOL_NAME: COPYPOOL
VOLUME_NAME: CR0000L5
UPD_DATE: 2011-04-17 16:09:47.000000
VOLTYPE: Copy
b. Check the DRM settings by issuing the QUERY DRMSTATUS command:
query drmstatus
Example output:
PLANPREFIX:
INSTRPREFIX:
PLANVPOSTFIX: @
NONMOUNTNAME: NOTMOUNTABLE
COURIERNAME: COURIER
VAULTNAME: VAULT
DBBEXPIREDAYS: 60
CHECKLABEL: Yes
FILEPROCESS: No
CMDFILENAME:
RPFEXPIREDAYS: 60
Related tasks:
“Monitoring your server processes daily” on page 806
“Monitoring your database daily” on page 807
“Monitoring disk storage pools daily” on page 810
“Monitoring scheduled operations daily”
“Monitoring operations daily with Tivoli Monitoring for Tivoli Storage Manager”
on page 815
“Monitoring operations daily using the Operations Center” on page 817
For detailed information about the commands mentioned here, see the
Administrator's Reference.
The following steps describe the commands that you can use to monitor scheduled
operations:
1. The most valuable command that you can use to check the status of your
scheduled operations is the QUERY EVENT command. Issue this command and
look for any missed or failed scheduled operations that might indicate a
problem:
query event * * type=client
query event * type=admin
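For example, to limit the output to client schedules that did not complete
successfully, you can add the EXCEPTIONSONLY parameter:
query event * * type=client exceptionsonly=yes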
All of these steps are completed from the Tivoli Enterprise Portal. For additional
information about logging on to the Tivoli Enterprise Portal, see Monitoring Tivoli
Storage Manager real time.
1. Start the Tivoli Enterprise Portal, log on with your sysadmin ID and password,
and navigate to Tivoli Storage Manager.
2. Many of the items that you want to check on a daily basis are displayed in the
dashboard view when it opens. The dashboard displays a grouping of
commonly viewed items in a single view. Examine these items and look for any
values that might indicate a potential problem:
Node storage space used
Check this graph for disk, storage, and tape space used.
For some commands, you can display the information in either a standard or
detailed format. The standard format presents less information than the detailed
format, and is useful for displaying an overview of many objects. To display more
information about a particular object, use the detailed format when the command
supports it.
For information about creating customized queries of the database, see “Using SQL
to query the IBM Tivoli Storage Manager database” on page 824.
Most of these definition queries let you request standard format or detailed format.
Standard format limits the information and usually displays it as one line per
object. Use the standard format when you want to query many objects, for
example, all registered client nodes. Detailed format displays the default and
specific definition parameters. Use the detailed format when you want to see all
the information about a limited number of objects.
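For example, both of the following forms of the QUERY NODE command are valid; the
node name in the second command is illustrative:
query node
query node client1 format=detailed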
Here is an example of the standard output for the QUERY NODE command:
Node Name  Platform Policy    Days   Days     Locked?
                    Domain    Since  Since
                    Name      Last   Password
                              Access Set
---------- -------- --------- ------ -------- -------
CLIENT1    AIX      STANDARD       6        6 No
GEORGE     Linux86  STANDARD       1        1 No
JANET      HPUX     STANDARD       1        1 No
JOE2       Mac      STANDARD      <1       <1 No
TOMC       WinNT    STANDARD       1        1 No
Here is an example of the detailed output for the QUERY NODE command:
You can use the QUERY SESSION command to request information about client
sessions. Figure 92 shows a sample client session report.
Sess   Comm.  Sess   Wait   Bytes   Bytes   Sess  Platform Client Name
Number Method State  Time   Sent    Recvd   Type
------ ------ ------ ------ ------- ------- ----- -------- --------------------
     3 Tcp/Ip IdleW  9 S    7.8 K   706     Admin WinNT    TOMC
     5 Tcp/Ip IdleW  0 S    1.2 K   222     Admin AIX      GUEST
     6 Tcp/Ip Run    0 S    117     130     Admin Mac2     MARIE
Check the wait time to determine the length of time (seconds, minutes, hours) the
server has been in the current state. The session state reports status of the session
and can be one of the following:
Start Connecting with a client session.
Run Running a client request.
End Ending a client session.
Most commands run in the foreground, but others generate background processes.
In some cases, you can specify that a process run in the foreground. Tivoli Storage
Manager issues messages that provide information about the start and end of
processes. In addition, you can request information about active background
processes. If you know the process ID number, you can use the number to limit the
search. However, if you do not know the process ID, you can display information
about all background processes by issuing the QUERY PROCESS command.
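For example, to display information about all active background processes, enter:
query process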
This list is not all-inclusive. For a detailed explanation of the QUERY STATUS
command, see the Administrator's Reference.
You can issue the QUERY OPTION command with no operands to display general
information about all defined server options. You can also issue it with a specific
option name or a pattern-matching expression to display information about one or more
server options. You can set options by editing the server options file.
Options can also be set through the IBM Tivoli Storage Manager Console.
See the QUERY OPTION command in the Administrator's Reference for more
information.
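For example, to display all server options whose names end in timeout (the pattern
is only an example), you might enter:
query option *timeout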
When you enter the QUERY SYSTEM command, the server issues the following
queries:
QUERY ASSOCIATION
Displays all client nodes that are associated with one or more client
schedules
IBM Tivoli Storage Manager Versions 6.1 and later use the DB2 open database
connectivity (ODBC) driver to query the database and display the results.
DB2 provides its own ODBC driver, which can also be used to access the Tivoli
Storage Manager server DB2 database. For more information about the DB2 native
ODBC driver, see the DB2 documentation at http://pic.dhe.ibm.com/infocenter/
db2luw/v9r7 and search on Introduction to DB2 CLI and ODBC.
You can issue the SELECT command from the command line of an administrative
client. You cannot issue this command from the server console.
To help you find what information is available in the database, Tivoli Storage
Manager provides three system catalog tables:
SYSCAT.TABLES
Contains information about all tables that can be queried with the SELECT
command.
SYSCAT.COLUMNS
Describes the columns in each table.
SYSCAT.ENUMTYPES
Defines the valid values for each enumerated type and the order of the
values for each type.
You can issue the SELECT command to query these tables and determine the
location of the information that you want. For example, to get a list of all tables
available for querying in the database TSMDB1 enter the following command:
select tabname from syscat.tables where tabschema='TSMDB1' and type='V'
You can also issue the SELECT command to query columns. For example, to get a
list of columns for querying in the database TSMDB1 and the table name ACTLOG,
enter the following command:
select colname from syscat.columns where tabschema='TSMDB1' and tabname='ACTLOG'
COLNAME: DATE_TIME
COLNAME: DOMAINNAME
COLNAME: MESSAGE
COLNAME: MSGNO
COLNAME: NODENAME
COLNAME: ORIGINATOR
COLNAME: OWNERNAME
COLNAME: PROCESS
COLNAME: SCHEDNAME
COLNAME: SERVERNAME
COLNAME: SESSID
COLNAME: SESSION
COLNAME: SEVERITY
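You can query the SYSCAT.ENUMTYPES table in the same way, for example:
select * from syscat.enumtypes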
For many more examples of the command, see the Administrator's Reference.
Example 1: Find the number of nodes by type of operating system by issuing the
following command:
select platform_name,count(*) as "Number of Nodes" from nodes
group by platform_name
Example 2: For all active client sessions, determine how long they have been
connected and their effective throughput in bytes per second:
select session_id as "Session", client_name as "Client", state as "State",
current_timestamp-start_time as "Elapsed Time",
(cast(bytes_sent as decimal(18,0)) /
cast(second(current_timestamp-start_time) as decimal(18,0)))
as "Bytes sent/second",
(cast(bytes_received as decimal(18,0)) /
cast(second(current_timestamp-start_time) as decimal(18,0)))
as "Bytes received/second"
from sessions
Session: 24
Client: ALBERT
State: Run
Elapsed Time: 4445.000000
Bytes sent/second: 564321.9302768451
Bytes received/second: 0.0026748857944
Session: 26
Client: MILTON
State: Run
Elapsed Time: 373.000000
Bytes sent/second: 1638.5284210992221
Bytes received/second: 675821.6888561849
For example:
DATABASE_NAME: mgsA62
TOT_FILE_SYSTEM_MB: 511872
USED_DB_SPACE_MB: 448
FREE_SPACE_MB: 452802
PAGE_SIZE: 16384
TOTAL_PAGES: 32772
USABLE_PAGES: 32636
USED_PAGES: 24952
FREE_PAGES: 768
BUFF_HIT_RATIO: 99.7
TOTAL_BUFF_REQ: 385557
SORT_OVERFLOW: 0
LOCK_ESCALATION: 0
PKG_HIT_RATIO: 99.8
LAST_REORG:
FULL_DEV_CLASS:
NUM_BACKUP_INCR: 0
LAST_BACKUP_DATE:
PHYSICAL_VOLUMES: 1
A script can be run from an administrative client or the server console. You can
also include it in an administrative command schedule to run automatically. See
“Tivoli Storage Manager server scripts” on page 666 for details.
Tivoli Storage Manager is shipped with a file that contains a number of sample
scripts. The file, scripts.smp, is in the server directory. To create and store the
scripts as objects in your server's database, issue the DSMSERV RUNFILE
command during installation:
> dsmserv runfile scripts.smp
You can also run the file as a macro from an administrative command line client:
macro scripts.smp
The sample scripts file contains Tivoli Storage Manager commands. These
commands first delete any scripts with the same names as those to be defined,
then define the scripts. The majority of the samples create SELECT commands, but
others do such things as back up storage pools. You can also copy and change the
sample scripts file to create your own scripts.
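For example, you can define and run a simple script of your own; the script name
and the SQL statement that it contains are illustrative:
define script checkdb "select * from db" desc="Display database details"
run checkdb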
Some of the client operations recorded to the table are BACKUP, RESTORE,
ARCHIVE and RETRIEVE. Server processes include MIGRATION,
RECLAMATION and EXPIRATION.
To list column names and their descriptions from the activity summary table, enter
the following command:
select colname,remarks from columns where tabname='summary'
You can determine how long to keep information in the summary table. For
example, to keep the information for 5 days, enter the following command:
set summaryretention 5
Tivoli Storage Manager does not create records in the SQL activity summary table
for manual backups or for successful scheduled backups of 0 bytes. Records are
created in the summary table for successful scheduled backups only if data is
backed up.
For details about using command line options and redirecting command output,
see the Administrator's Reference.
You can also query the activity log for client session information. For example,
issue the following command to search the activity log for any messages that were
issued in relation to session 4:
query actlog search="(SESSION:4)"
Any error messages sent to the server console are also stored in the activity log.
Use the following sections to adjust the size of the activity log, set an activity log
retention period, and request information about the activity log.
To minimize processing time when querying the activity log, you can:
v Specify a time period in which messages have been generated. The default for
the QUERY ACTLOG command shows all activities that have occurred in the
previous hour.
v Specify the message number of a specific message or set of messages.
v Specify a string expression to search for specific text in messages.
v Issue the QUERY ACTLOG command from the command line for large queries
instead of using the graphical user interface.
v Specify whether the originator is the server or client. If it is the client, you can
specify the node, owner, schedule, domain, or session number. If you are doing
client event logging to the activity log and are only interested in server events,
then specifying the server as the originator will greatly reduce the size of the
results.
For example, to review messages generated on May 30 between 8 a.m. and 5 p.m.,
enter:
query actlog begindate=05/30/2002 enddate=05/30/2002
begintime=08:00 endtime=17:00
To request information about messages related to the expiration of files from the
server storage inventory, enter:
query actlog msgno=0813
You can also request information only about messages logged by one or all clients.
For example, to search the activity log for messages from the client for node JEE:
query actlog originator=client node=jee
Note: With retention-based management, you lose some control over the amount
of space that the activity log occupies. For more information on size-based activity
log management, see “Setting a size limit for the activity log.”
The server will periodically remove the oldest activity log records until the activity
log size no longer exceeds the configured maximum size allowed. To manage the
activity log by size, the parameter MGMTSTYLE must be set to the value SIZE. To
change the maximum size of the activity log to 12 MB, for example, enter:
set actlogretention 12 mgmtstyle=size
Note: With size-based management, you lose some control over the length of time
that activity log messages are kept. For more information on retention-based
activity log management, see “Setting a retention period for the activity log.”
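For example, to manage the activity log by date and keep messages for 30 days (the
number of days is illustrative), you might enter:
set actlogretention 30 mgmtstyle=date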
| For a newly installed server or for an upgraded server without defined alerts, a
| default set of messages is defined to trigger alerts. The administrator can add
| messages to, or remove messages from, the default set.
| You can configure alert monitoring and its characteristics, such as defining which
| messages trigger alerts and configuring email notification for administrators about
| alerts.
| To configure alert monitoring, use the following server commands, which are
| grouped according to the general configuration task to which they apply. For more
| information about these commands and about configuring alerts, see the
| Administrator's Reference.
| Activate alert monitoring
| v SET ALERTMONITOR
| v SET ALERTUPDATEINTERVAL
| Define which messages trigger alerts
| v DEFINE ALERTTRIGGER
| v UPDATE ALERTTRIGGER
| v DELETE ALERTTRIGGER
| v QUERY ALERTTRIGGER
| Define the time interval for alerts to be kept in the database
| v SET ALERTACTIVEDURATION
| v SET ALERTINACTIVEDURATION
| v SET ALERTCLOSEDDURATION
| Query existing alerts
| v QUERY ALERTSTATUS
| Update the status of an alert
| v UPDATE ALERTSTATUS
| Configure email notification for administrators about alerts
| v QUERY MONITORSETTINGS
| v SET ALERTEMAIL
| v SET ALERTEMAILFROMADDR
| v SET ALERTEMAILSMTPHOST
| v SET ALERTEMAILSMTPPORT
| v REGISTER ADMIN
| For detailed information about the commands that are mentioned here, see the
| Administrator's Reference.
| An administrator with system privilege can complete the following steps on the
| server to enable alerts to be sent by email:
| 1. Issue the QUERY MONITORSETTINGS command to verify that alert monitoring is set
| to ON. If the monitoring settings output indicates Off, issue the SET
| ALERTMONITOR command to start alert monitoring on the server:
| set alertmonitor on
| Tip: If alert monitoring is on, alerts are displayed in the Operations Center
| even though the alert email feature might not be enabled.
| 2. Enable alerts to be sent by email by issuing the SET ALERTEMAIL command:
| set alertemail on
| 3. Define the SMTP host server that is used to send email by issuing the SET
| ALERTEMAILSMTPHOST command:
| set alertemailsmtphost host_name
| 4. Set the SMTP port by issuing the SET ALERTEMAILSMTPPORT command:
| set alertemailsmtpport port_number
| Tip: You can suspend email alerts for an administrator by using one of the
| following methods:
| v Use the UPDATE ADMIN command, and specify ALERT=no.
| v Use the UPDATE ALERTTRIGGER command, and specify the DELADMIN parameter.
| The following example describes the commands that are used to enable the
| administrators myadmin, djadmin, and csdadmin to receive email alerts for
| ANR1075E messages.
| set alertmonitor on
| set alertemail on
| set alertemailsmtphost mymailserver.domain.com
| set alertemailsmtpport 450
| set alertemailfromaddr [email protected]
| update admin myadmin alert=yes [email protected]
| update admin djadmin alert=yes [email protected]
| update admin csdadmin alert=yes [email protected]
| define alerttrigger anr1075e admin=myadmin,djadmin,csdadmin
| Related concepts:
| Chapter 27, “Alert monitoring,” on page 833
| Related tasks:
| Chapter 18, “Managing servers with the Operations Center,” on page 615
The accounting file contains text records that can be viewed directly or can be read
into a spreadsheet program. The file remains open while the server is running
and accounting is set to ON. The file continues to grow until you delete it or prune
old records from it. To close the file for pruning, either temporarily set accounting
off or stop the server.
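Accounting is controlled with the SET ACCOUNTING command. For example, to begin
collecting accounting records at the end of each client session, enter:
set accounting on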
There are 31 fields, which are delimited by commas (,). Each record ends with a
new-line character. Each record contains the following information:
Field Contents
1 Product version
2 Product sublevel
3 Product name, 'ADSM',
4 Date of accounting (mm/dd/yyyy)
5 Time of accounting (hh:mm:ss)
6 Node name of Tivoli Storage Manager client
7 Client owner name (UNIX)
8 Client Platform
9 Authentication method used
10 Communication method used for the session
11 Normal server termination indicator (Normal=X'01', Abnormal=X'00')
12 Number of archive store transactions requested during the session
13 Amount of archived files, in kilobytes, sent by the client to the server
14 Number of archive retrieve transactions requested during the session
15 Amount of space, in kilobytes, retrieved by archived objects
16 Number of backup store transactions requested during the session
17 Amount of backup files, in kilobytes, sent by the client to the server
18 Number of backup retrieve transactions requested during the session
19 Amount of space, in kilobytes, retrieved by backed up objects
20 Amount of data, in kilobytes, communicated between the client node and the
server during the session
21 Duration of the session, in seconds
22 Amount of idle wait time during the session, in seconds
Tivoli Monitoring for Tivoli Storage Manager also provides reports based on the
historical data retrieved. You can use the existing historical reports provided, or
you can create your own custom reports.
Tivoli Monitoring for Tivoli Storage Manager consists of the following components:
IBM DB2
Stores historical data that is obtained from Tivoli Storage Manager servers
that are monitored by IBM Tivoli Monitoring.
IBM Tivoli Monitoring
Consists of a number of components that accumulate and monitor
historical data for reporting:
v Tivoli Enterprise Portal server
v Tivoli Data Warehouse
v Tivoli Enterprise Monitoring server
v Summarization and Pruning agent
v Warehouse Proxy agent
v Tivoli Monitoring for Tivoli Storage Manager agent
The Tivoli Monitoring for Tivoli Storage Manager agent queries and formats data
to be presented to you in the following ways:
v As workspaces from the Tivoli Enterprise Portal
v As reports using the Tivoli Data Warehouse and the reporting portion of Tivoli
Monitoring for Tivoli Storage Manager
The agent is installed on the Tivoli Storage Manager server or the IBM Tivoli
Monitoring server, and is a multi-instance data collection agent.
The Tivoli Monitoring for Tivoli Storage Manager agent communicates with the
Tivoli Monitoring for Tivoli Storage Manager server to retrieve data from its
database and return this data to the Tivoli Monitoring server.
Tivoli Monitoring for Tivoli Storage Manager reports on the Tivoli Storage
Manager server activities from data that is collected using the Tivoli Storage
Manager monitoring agent. The monitoring feature uses the Tivoli Enterprise
Portal to view the current status of the Tivoli Storage Manager server.
[Figure: Overview of Tivoli Monitoring for Tivoli Storage Manager. The figure
shows the Tivoli Enterprise Portal server, the Tivoli Enterprise Monitoring
server, the Tivoli Data Warehouse, the DB2 database, and the Tivoli Monitoring
for Tivoli Storage Manager agent instances, together with the user IDs that the
components use on AIX, Linux, and Windows systems (sysadmin, itmuser or ITMuser,
and db2inst1 or db2admin), and examples of monitoring and historical reports.]
You can create your own custom reports using IBM Cognos 8 Business Intelligence,
or you can install the Business Intelligence and Reporting Tools (BIRT) software.
See the IBM Tivoli Storage Manager Installation Guide for details on installing BIRT
software.
When you open the Tivoli Enterprise Portal and navigate to the Tivoli Storage
Manager view, a dashboard workspace displays commonly viewed information in
a single location. To view more details, click the chain-link icon in the first
column. To return to the dashboard view, click the back arrow in the upper left.
The dashboard workspace can be customized to suit your monitoring needs, but
the default settings display the following information:
v Storage space that is used for each node that is defined on the server
v Storage pool summary details
v Unsuccessful client and server schedules, including all missed or failed
schedules
v Client node activity for all nodes on the server
v Activity log errors, including all severe error messages
Tip: The data in these reports can be sorted by clicking the column that you want
to sort by. To display subworkspaces, select the main workspace, right-click, select
Workspace, and click the subworkspace that you want to view.
Table 75 lists the attribute groups, their workspaces, and descriptions.
Table 75. Tivoli Enterprise Portal workspaces and subworkspaces
Attribute group name Description
Activity log This workspace provides information about activity log messages based on the parameters
selected. The data can be used to generate aggregated reports that are grouped by server,
and subgrouped by client.
Activity summary This workspace provides summarized activity log information about virtual environments.
Agent log This workspace provides trace file information that is produced by the agent without
having to enable tracing. It provides messages information such as login successes and
failures, and agent processes.
Availability This workspace provides the status and the performance of the agent that is running for
each of the different workspaces that are listed under the Tivoli Storage Manager agent. It
can help to identify problems with the gathering of historical data.
Client node storage The main workspace displays information about client node storage, disk, and tape usage
data. This data can help you identify the clients that are using the most resources on the
server. Disk and tape usage information is displayed in graph format.
The subworkspaces display data in a tabular format and a graph format. To display the
subworkspaces, select the Client Node Storage workspace, right-click and select
Workspace, and click the subworkspace that you want to view.
Total capacity and total space used data is displayed in a bar chart format, and database
details such as percent space used, and total space used is displayed in a tabular format.
Drives This workspace provides status about the drives, including drive name, library name,
device type, drive status such as loaded or empty, the volume name, and whether the
drive is online.
Additional subworkspace:
v Drives drill down
Libraries This workspace provides status about libraries, such as the library name, type, if it is
shared or not, LAN-free, auto label, number of available scratch volumes, whether the
path is online, and the serial number.
The subworkspaces display data in a tabular format and a graph format. To display the
subworkspaces, select the Node Activity workspace, right-click and select Workspace, and
click the subworkspace that you want to view.
The subworkspace displays data in a tabular format and a graph format. To display the
subworkspaces, select the Occupancy workspace, right-click and select Workspace, and
click the subworkspace that you want to view.
Additional subworkspace:
v Drives drill down
Processor Value Unit (PVU) details This workspace provides PVU details by product, and PVU
details by node. It includes information such as node name, product, license name,
last used date, try buy, release, and level. If the Tivoli Storage Manager server is
not a version 6.3 server, the workspace will be blank.
Replication details This workspace provides byte by byte replication details. It describes all of the replication
details such as node name, file space ID and name, version, start and end times, status,
complete stat, incomplete reason, estimated percent complete, estimated time remaining,
and estimated time to completion.
Replication status This workspace provides the replication status for a node without all the details that the
replication details workspace provides. It displays node name, server, file space type,
name and ID, target server, source and target server number of files.
Schedule This workspace provides details about client and server schedules. You can group the
data by node name, schedule name, or status to help in identifying any potential
problems. It displays information such as schedule name, node name, server name,
scheduled start, actual start, and the status of the schedule which can be success, missed,
or failed, along with any error or warning text.
Sessions This workspace provides a view of all the client sessions that are running on the specified
server. This workspace is useful for determining which clients are connected to the Tivoli
Storage Manager server and how much data has been sent or received. The workspace
also shows tape mount information which can give an indication about library and tape
usage.
Note: By default, historical data collection is not enabled by this workspace, and is used
more as a monitoring tool. You can modify the historical collection settings to enable this
data to be stored, but this type of data can cause the WAREHOUS database to grow very
large over time.
Storage pool This workspace provides you with detailed information about your storage pools. Tivoli
Storage Manager can contain multiple storage pools. These storage pools define the
methods and resources that are used to store data being backed up or archived to the
Tivoli Storage Manager server. The data displayed in this workspace includes storage pool
names, server name, device classes, total space, utilized space, total volumes used, percent
space used, disk space used, and deduplication savings. It also displays a graph with the
total space, total usage, and total volumes used.
Server This workspace provides the operational status of the Tivoli Storage Manager server.
These operations are measured in megabytes per operation. After they are reported, the
values are reset back to zero. The counts reported for each operation are not cumulative
over time. You can view the following activities or status:
v What activities are taking time to complete?
v As the server migrates data or mounts storage onto devices, what are the possible
problem activities?
v The status of server-only activities.
The data that is displayed includes information such as server name, current disk storage
pool space, tape usage count, current database size, previous days information for client
operations, object count reclamation by byte and duration, migration by byte and
duration, backup by byte and duration.
Bar graphs are also provided to display server operation duration and server operation
byte counts.
Storage device This workspace provides you with the read and write error status of the storage devices.
This status helps you identify possible problems with any of your storage devices. Bar
chart graphs also display read and write error count.
Tape usage This workspace provides you with tape usage data for each client.
Tape volume This workspace provides the status of all tape storage devices. This information can help
you identify any storage devices that are near full capacity.
To view the available Tivoli Storage Manager monitoring workspaces, complete the
following steps:
1. Log in to Tivoli Enterprise Portal with the sysadmin user ID and password
using one of the following methods:
a. Start the Tivoli Enterprise Monitoring Services console:
b. Double-click the Tivoli Enterprise Portal icon on your desktop. IBM Tivoli
Monitoring creates a shortcut on your desktop to open Tivoli Enterprise
Portal.
Tip: If you do not have a shortcut on your desktop you can click Start >
Programs > IBM Tivoli Monitoring > Manage Tivoli Monitoring Services
and select Tivoli Enterprise Portal under Service/Application.
c. Open a web browser and enter the address of the server where the Tivoli
Enterprise Portal server is installed, similar to the following example:
http://hostname:1920///cnp/kdh/lib/cnp.html
Tip: Some of these attribute groups have sub-workspaces that you can view
when you right-click the main attribute group. See the section on the overview
of the monitoring workspaces to learn more details about using the
workspaces.
6. The details of your selection are displayed in the workspace in the right panel
and in the bottom panel.
Related reference:
“Types of information to monitor with Tivoli Enterprise Portal workspaces” on
page 841
After you have completed the installation and created and configured your Tivoli
Monitoring for Tivoli Storage Manager agent instance, you can view reports from
the Tivoli Integrated Portal.
To run the available Tivoli Storage Manager client and server reports, complete
these steps:
1. Log in to the Tivoli Storage Manager Tivoli Integrated Portal.
a. If the Tivoli Integrated Portal is not running, start it. For additional details,
see Starting and stopping the Tivoli Integrated Portal.
b. Open a supported web browser and enter the following address:
https://hostname:port/ibm/console, where port is the port number
specified when you installed the Tivoli Integrated Portal. The default port is
16311.
If you are using a remote system, you can access the Tivoli Integrated Portal
by entering the IP address or fully qualified host name of the remote
system. If there is a firewall, you must authenticate to the remote system.
c. The Tivoli Integrated Portal window opens. In the User ID field, enter the
Tivoli Integrated Portal user ID that was defined when you installed Tivoli
Monitoring for Tivoli Storage Manager. For example, tipadmin.
d. In the Password field, enter the Tivoli Integrated Portal password that you
defined in the installation wizard and click Log in.
Tip: Create a desktop shortcut, or bookmark in your browser for quick access
to the portal in the future.
2. On the left side of the window, expand and click Reporting > Common
Reporting.
3. In the Work with reports pane, click the Public Folders tab.
4. To work with Cognos reports, select IBM Tivoli Storage Manager Cognos
Reports.
5. To work with BIRT reports, select Tivoli Products > Tivoli Storage Manager.
The report name and descriptions are displayed in the Reports pane. Double-click
the report to open the parameter selections page, or use the icons at the top of the
reports listing. You can view reports in HTML, PDF, Excel, and CSV formats.
Items added from the package to your report are called report items. Report items
display as columns in list reports, and as rows and columns in cross-tab reports. In
charts, report items display as data markers and axis labels.
You can expand the scope of an existing report by inserting additional report
items, or you can focus on specific data by removing unnecessary report items.
If you frequently use items from different query subjects or dimensions in the
same reports, ask your modeler to organize these items into a folder or model
query subject and then to republish the relevant package. For example, if you use
the product code item in sales reports, the modeler can create a folder that contains
the product code item and the sales items you want.
IBM Cognos Business Intelligence includes many components that you can use, but
only the basic report tasks are documented here. For additional information
regarding Cognos you can visit the IBM Cognos Information Center at:
http://publib.boulder.ibm.com/infocenter/c8bi/v8r4m0/topic/
com.ibm.swg.im.cognos.wig_cr.8.4.0.doc/wig_cr_id262gtstd_c8_bi.html.
These Cognos reports are available in HTML, PDF, Microsoft Excel, XML, and CSV
(delimited text) formats. There are limitations when producing reports in Microsoft
Excel formats, such as timestamps not displaying. For a complete list of all
limitations see: Limitations when producing reports in Microsoft Excel format.
| You can customize the data that is displayed in your reports by specifying the
| parameter values that you want to include or exclude. After you run the report,
| the parameter values that you specified are displayed at the bottom.
| Important: When you modify existing reports in Report Studio, be sure to save the
| new report with a different name. Customized, or modified reports are not
| supported by our technical support staff.
Table 76. Report parameters (continued)
Parameter            Description
Client node name     Use this parameter to specify a client from the server to report
                     on. This parameter can also accept wildcard characters by using
                     the percent symbol (%). The default selects all the client nodes.
Summarization type   Use this parameter to select how to group or summarize the data.
                     You can specify daily, hourly, weekly, monthly, quarterly, or
                     yearly. The default is monthly.
Number of clients    Use this parameter to specify the number of top clients that you
to display           want to display in the report.
Table 77. Cognos status and trend reports (continued)
| Client storage summary and details (Status reports)
|     This report provides details about how much storage space client nodes are
|     currently using.
|     v Details are grouped by server and domain.
|     v Ability to sort on multiple columns in ascending or descending order.
|     v Details for the client nodes include storage space used by each type of
|       storage. These storage space types include disk and file storage, server
|       storage, and tape storage, where server storage are virtual volumes used
|       by the client node.
|     v Data is displayed in a tabular table format, with totals at the bottom of
|       every Tivoli Storage Manager server.
| Client storage usage trends (Trending reports)
|     This report provides details about the storage usage of a client node over a
|     specified time period.
|     v Data can be summarized daily, weekly, monthly, quarterly, and yearly.
|     v Details for the client nodes include storage space used by each type of
|       storage. These storage space types include disk and file storage, server
|       storage, and tape storage, where server storage are virtual volumes used
|       by the client node.
|     v Report shows one client for one server at a time.
|     v Data is displayed in a line chart format, and a tabular table.
Current client occupancy summary (Status reports)
    This report provides details about servers, client nodes, and associated
    storage pools to show how much space your nodes are using.
    v Details are grouped by node name, storage pool name. Details include file
      space name, filespace ID, occupancy date, MB, physical MB, logical MB, and
      number of files.
    v Data is displayed in a tabular table format.
    v Node names are links that provide more details when clicked.
Current storage pool summary (Status reports)
    This report displays the space used on the server within each storage pool.
    The report also displays deduplication savings to help with evaluating the
    effectiveness of deduplication.
    v Details include total space, space used, file space used, disk space used,
      dedup space saved, and % of deduped saved.
    v Data is displayed in a tabular table format.
| VE backup type summary (Status reports)
|     This report shows the number of incremental and full backups for each
|     selected client node. The report is useful to determine which client node
|     backups might be having problems when the backups are always full instead
|     of incremental.
|     v It includes the data center node name, virtual machine name, the number
|       of full backups, and the number of incremental backups, over the
|       specified amount of time.
|     v Data is displayed in a tabular table format.
|     Important: Run this report on Tivoli Storage Manager servers that are at
|     version 6.3.3 or later.
| VE current occupancy summary (Status reports)
|     This report provides current details about the storage occupancy that a VE
|     guest operating system is using on the Tivoli Storage Manager server.
|     v Details are grouped by data center node and virtual machine name. Details
|       include file space information, reporting MB, physical MB, logical MB,
|       and number of files.
|     v Data is displayed in a tabular table format.
|     v Data center node names are links that provide more details when clicked
|       by linking to the VE Node Activity Status report to get current
|       information about the activity of the VE on the Tivoli Storage Manager
|       server.
|     Important: Run this report on Tivoli Storage Manager servers that are at
|     version 6.3.3 or later.
Yesterday's missed and failed client schedules (Status reports)
    This report provides details about client schedule completion status from the
    day before the report is run.
    v Data is displayed in a tabular table format.
    v Failed schedules are highlighted in red.
    v Missed schedules are highlighted in yellow.
In Report Studio you can view data, create reports, change the appearance of
reports, and then use that data for comparison and analysis purposes.
1. Log in to the Tivoli Storage Manager Tivoli Integrated Portal.
a. If the Tivoli Integrated Portal is not running, start it. For additional details,
see Starting and stopping the Tivoli Integrated Portal.
b. Open a supported web browser and enter the following address:
https://hostname:port/ibm/console, where port is the port number
specified when you installed the Tivoli Integrated Portal. The default port is
16311.
If you are using a remote system, you can access the Tivoli Integrated Portal
by entering the IP address or fully qualified host name of the remote
system. If there is a firewall, you must authenticate to the remote system.
Tip: Create a desktop shortcut, or bookmark in your browser for quick access
to the portal in the future.
2. On the left side of the window, expand and click Reporting > Common
Reporting.
3. In the upper-right corner, click the Launch icon, and select Report Studio.
4. Select the Tivoli Storage Manager Cognos Reports package as your data
source.
5. Click Allow access to allow data to be written to your clipboard, and Report
Studio to access it.
6. Choose whether you want to create a report or template, or open an existing
report or template. To learn more about creating a custom report, see Creating a
custom Cognos report.
For additional information, visit the IBM Cognos Information Center at:
http://publib.boulder.ibm.com/infocenter/c8bi/v8r4m0/topic/
com.ibm.swg.im.cognos.wig_cr.8.4.0.doc/wig_cr_id262gtstd_c8_bi.html.
Complete these example steps to create a simple custom report that displays
details about your IBM Tivoli Storage Manager server databases:
1. Open the Report Studio portal application in a web browser and provide the
logon ID and password if prompted. See Opening the Cognos Report Studio
portal.
2. In the Welcome window, click the Create a new report or template, or from
the main menu, click File > New.
3. Click the blank icon.
4. From the Insertable Objects pane, click the Toolbox tab, and drag in a
container for your report values. For example, drag the list container over to
the report.
5. From the Insertable Objects pane, click the Source tab, and expand Tivoli
Storage Manager Cognos Reports > Consolidation View > Tivoli Storage
Manager Report Data > Key Metrics > Performance > Detailed.
6. A list of attribute groups is displayed. Expand any of the attribute groups to
display attributes that you can use to build your report.
7. Drag any of the attributes into the list container to include this data in your
report. For example, from the Database attribute group, click and drag the
Server Name and Total Capacity GB attributes in to the list container
side-by-side.
8. Run the report. Click Run from the main menu, and select the format in which
you want your report to be displayed, for example, HTML or PDF.
9. To save the new report, click File > Save as.
Tip: To avoid naming conflicts, save all reports with unique report names.
You can create a folder in the Public Folders directory to store your new
reports. For example, create a folder that is called Server Reports for your
server reports.
10. Inside the directory that you want to save the report to, specify a unique
report name and click Save.
You can view the newly created report in Tivoli Common Reporting. The name of
the report is the name that you saved it as and it is in the folder where you saved
it. For example, if you created a report that is called Server Storage Details in the
Server Reports directory in the Public Folders directory, from Tivoli Common
Reporting, click Server Reports to find your report.
For more information, visit the IBM Cognos Information Center at:
http://publib.boulder.ibm.com/infocenter/c8bi/v8r4m0/topic/
com.ibm.swg.im.cognos.wig_cr.8.4.0.doc/wig_cr_id262gtstd_c8_bi.html.
Tip: You can select Run Options to configure options to run your reports. For
example, you can specify the format, and number of rows per page.
Note: If prompted, specify the fields that you want to display in the report
and click Finish.
4. From the drop-down list in the upper-right corner, click Keep this Version >
Email report.
5. Type the email address or addresses of the people that you want to receive
the report.
6. Optionally, click the Attach the report check box.
7. Click OK to complete the process.
Tip: Click the house icon in the upper-right corner to return to the previous
menu.
v Automatically schedule a report to run and email to recipients
1. Log on to the Tivoli Integrated Portal with the tipadmin ID and password
and click Reporting > Common Reporting > IBM Tivoli Storage Manager
Cognos Reports.
2. Navigate to the report that you want to schedule. For example, select the
Status Reports or Trending Reports folder to display a list of reports.
3. Click the small calendar icon that is located to the right of the report that
you want scheduled.
4. Type in the start dates, end dates, times, the days you want the report to
run, and so on.
5. Check the Override the default values check box to display further options.
6. Select the report format that you want.
7. Click the Send the report and a link to the report by email check box.
8. Click the Edit the options check box.
9. Type the email address or addresses of the people that you want to receive
this report.
10. Optionally, you can also click the Attach the report check box.
11. Click OK to complete the email process.
12. Click OK again to complete the scheduling process.
For additional information, visit the IBM Cognos Information Center at:
http://publib.boulder.ibm.com/infocenter/c8bi/v8r4m0/topic/
com.ibm.swg.im.cognos.wig_cr.8.4.0.doc/wig_cr_id262gtstd_c8_bi.html.
| Sharing Cognos Reports
| Cognos reports can be distributed to other organizations by importing the reports
| into any supported Tivoli Common Reporting instance.
| To share Cognos reports, you must export them from one Administration Center
| instance and import them into another. Alternatively, you can use a stand-alone
| Tivoli Common Reporting instance to export and import Cognos reports.
| After a custom Cognos report is created, the report can be shared and used by
| other Tivoli Common Reporting instances. Tivoli Common Reporting can be a
| stand-alone instance or a component that is installed in the Administration Center.
| To share a report, you must export it to an XML format, and then you can import it
| into another Tivoli Common Reporting instance.
| You are now ready to import the report into any other Tivoli Common Reporting
| instance. For more information, see “Importing a Cognos report.”
| After you export Cognos reports, you can distribute them to be used by other
| teams and organizations.
| You can import Cognos reports into any supported Tivoli Common Reporting
| instance. Tivoli Common Reporting can be a stand-alone instance or a component
| that is installed in the Administration Center. To complete the task of importing
| Cognos reports, complete the following steps:
| 1. In a text editor, open the report file that you want to import and copy the XML
| code to the clipboard. For more information about exporting a report, see
| “Exporting a Cognos report”
| 2. Log on to the Tivoli Integrated Portal.
| 3. Expand Reporting in the navigation tree, and select Common Reporting to
| open the reporting workspace.
| When you refresh the Tivoli Integrated Portal web browser window, the Cognos
| report you imported is available.
| You can import the packaged Tivoli Monitoring for Tivoli Storage Manager Cognos
| reports, or you can import your own custom report. The packaged reports are the
| reports that come with the Tivoli Monitoring for Tivoli Storage Manager software.
| To use a stand-alone Tivoli Common Reporting instance to view historical reports,
| you must install the DB2 client and configure it to set up a connection to the
| WAREHOUS database.
| The Cognos reports and data model are bundled together in the Administration
| Center software package. The data model allows for the connection between the
| Tivoli Common Reporting user interface and the DB2 database. The Cognos
| reports and data model require Tivoli Common Reporting V2.1 and Cognos V8.4.1
| and a connection to DB2. For Cognos to communicate with DB2, the DB2 client
| libraries for Cognos must be installed.
| You must import the Tivoli Storage Manager packaged Cognos reports into a stand-alone
| Tivoli Common Reporting environment. After this is complete, you can import a
| custom Cognos report. For more information, see “Importing a Cognos report” on
| page 856.
| Complete the following steps to import the Tivoli Storage Manager packaged
| Cognos reports into a stand-alone Tivoli Common Reporting instance:
| 1. Log on to the Tivoli Common Reporting system.
| 2. Obtain the TSM_Cognos.zip file, from the Administration Center installation
| media (DVD or downloaded package), in the COI\PackageSteps\BirtReports\
| FILES directory. This compressed file contains the Cognos reports and data
| model.
| 3. From a command prompt, change directories to the following location:
| C:\IBM\tivoli\tipv2Components\TCRComponent\bin\
| 4. Import Cognos reports by issuing the following command:
| trcmd.bat -import -bulk path/TSM_Cognos.zip -username tipadmin
| -password password
| where path refers to the path to the compressed file from Step 2. Replace
| password with your password for tipadmin.
| If the command was successful, the following message is displayed:
| CTGTRQ092I Import operation successfully performed
| If the command failed, complete the following steps to restart the Tivoli
| Common Reporting server, and then try the trcmd command again:
| a. Open a command prompt window, and change directories to
| install_dir\tipv2Components\TCRComponent\bin, where install_dir is the
| path where the Tivoli Common Reporting instance is located. The default
| path is C:\IBM\tivoli.
| b. Stop the server by issuing the following command:
| stopTCRserver server1
| c. Start the server by issuing the following command:
| startTCRserver server1
| In order for the Cognos reports to run successfully, Tivoli Common Reporting must
| be configured to connect to the WAREHOUS database. Complete the following
| steps:
| 1. Install and configure the DB2 client by completing the steps in one of the
| following topics, based on your operating system:
| Installing and configuring the DB2 client on Windows
| 2. Configure the data source from within Cognos by following the steps in
| “Creating a data source by using Cognos Administration” on page 860.
| 3. Optional: You can import custom Cognos reports by completing the
| instructions in Importing a Cognos report.
| Related tasks:
| “Importing a Cognos report” on page 856
| “Installing and configuring the DB2 client on AIX and Linux”
| “Installing and configuring the DB2 client on Windows” on page 859
| “Creating a data source by using Cognos Administration” on page 860
| Tip: Use the Search the network option to find the WAREHOUS database on
| another system.
| 11. In the Add Database Confirmation window, click Test Connection and enter
| the user name and password for the database. The user name is itmuser.
| 12. For Cognos to communicate with DB2, the DB2 libraries must be in the path.
| Open the /opt/IBM/tivoli/tipv2Components/TCRComponent/bin/
| startTCRserver.sh file in a text editor and add the following call:
| # Add call to db2profile to set DB2 library path for Cognos
| . /home/db2inst1/sqllib/db2profile
| 13. Recycle the Tivoli Common Reporting server by completing the following
| steps:
| a. Open a command prompt window, and go to /opt/IBM/tivoli/
| tipv2Components/TCRComponent/bin.
| b. Stop the server by issuing the following command:
| ./stopTCRserver
| c. Start the server by issuing the following command:
| ./startTCRserver
| Related tasks:
| “Importing Cognos reports in to a Tivoli Common Reporting instance” on page 857
| “Installing and configuring the DB2 client on Windows”
| “Creating a data source by using Cognos Administration” on page 860
| 6. Click Start > All Programs > IBM DB2 > DB2COPY1 (Default) > Set-up
| Tools > Configuration Assistant.
| 7. In the DB2 Message window, click Yes to the message that states: Would you
| like to add a database now?
| 8. In the Add Database Wizard window, select an option to add one or more
| existing WAREHOUS databases.
| Tip: Use the Search the network option to find the WAREHOUS database on
| another system.
| 9. In the Add Database Confirmation window, click Test Connection and enter
| the user name and password for the database. The user name is itmuser.
| 10. Recycle the Tivoli Common Reporting server by completing the following
| steps:
| a. Open a command window, and go to install_dir\tipv2Components\
| TCRComponent\bin, where install_dir is the path where the Tivoli Common
| Reporting instance is located. The default path is C:\IBM\tivoli.
| b. Stop the server by issuing the following command:
| stopTCRserver server1
| c. Start the server by issuing the following command:
| startTCRserver server1
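If you prefer a command line to the Configuration Assistant wizard, you can catalog the
remote WAREHOUS database with DB2 commands. The following lines are a sketch only; the
node name, host name, and port are assumptions that you must replace with the values for
your IBM Tivoli Monitoring system:
db2 catalog tcpip node itmnode remote itmhost.example.com server 50000
db2 catalog database warehous at node itmnode
db2 terminate
You can then verify the connection by issuing db2 connect to warehous user itmuser.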
| Related tasks:
| “Importing Cognos reports in to a Tivoli Common Reporting instance” on page 857
| “Installing and configuring the DB2 client on AIX and Linux” on page 858
| “Creating a data source by using Cognos Administration”
| You must create and configure a new data source to allow Tivoli Common
| Reporting to access the WAREHOUS database when you run Cognos reports.
| The data source defines the connection between Cognos and the WAREHOUS
| database. After you import Cognos reports in to a stand-alone Tivoli Common
| Reporting environment, you must create a data source. Create and configure a data
| source by completing the following steps:
| 1. Open a web browser and log on to Tivoli Common Reporting.
| 2. Select Reporting > Common Reporting.
| 3. Click Launch, and select Administration.
| 4. Click the Configuration tab.
| 5. Create a data source by clicking the New Data Source icon, in the upper right
| of the Configuration tabbed window.
| 6. For the name of the data source, enter TDW and click Next.
| 7. For the type, select DB2. Leave the isolation level set to Use the default
| object gateway, and click Next.
| 8. For the DB2 database name, enter WAREHOUS.
| 9. Click Signon and select the Password check box.
| 10. In the Create a signon that the Everyone group can use section, enter the user
| ID as itmuser.
| 11. Enter the itmuser password and confirm the password in the fields.
| 12. Click Test the connection, and then click Test to test the connection. The
| Status column in the table displays Succeeded.
These reports are generated by the Tivoli Monitoring for Tivoli Storage Manager
agent, and are available in HTML, PDF, Microsoft Excel, XML, and CSV (delimited
text) formats.
You can customize the data that gets displayed in your reports by specifying the
values that you want in the On-Demand report parameters window.
This list specifies the client reports that you can run. The reports are
described in Table 79 on page 862.
v Client activity details
v Client activity history
v Client backup currency
v Client backup missed files
v Client storage summary details
v Client storage pool media details
v Client top activity
v Node replication details
v Node replication growth
v Node replication summary
v Schedule status
Table 78. On-demand BIRT report parameters
Activity type
This parameter is used to select the following client activities:
v Backup (incremental only)
v Archive
v Restore
v Retrieve
Report period
This parameter is used to select one of the following date ranges to display:
v All
v Today
v Yesterday
v The last 24 hours
v The last 7 days
v The last 30 days
v The last 90 days
v The last 365 days
v The current week
v The last month
v The last 3 months
v Year to date
Start date and end date
This parameter is used to override the report period by choosing a start date and
an end date.
Server name
This parameter is used to select which server to report on.
Client node name
This parameter is used to supply a client from the server or a wildcard (% or A%)
to report on.
Summarization type
This parameter is used to select how to group or summarize the data, by either
daily (default), hourly, weekly, monthly, quarterly, or yearly.
Number of clients to display
This parameter specifies the number of top clients that you want to see in the
report.
This report displays only scheduled backups and does not display
manual backups. If a node runs manual backups daily, this report shows
that the node has never run a backup.
Client backup missed files
This report lists the details and reasons that a file was not backed up for
a specific client. The report can be run for a specific date range, server,
or client. This data is displayed in a tabular format.
Related tasks:
“Viewing historical data and running reports” on page 845
Related reference:
“BIRT Server reports”
These reports are generated by the Tivoli Monitoring for Tivoli Storage Manager
agent and are available in HTML, PDF, PostScript, and Microsoft Excel format.
This list specifies the server reports that you can view and run. The reports are
described in Table 81 on page 864.
v Activity log
v Server activity details
v Server database details
v Server resource usage
v Server throughput
v Server throughput (pre version 6.3 agents)
v Tape volume capacity analysis
Depending on the type of report you want to run, and the parameters available for
that report, you can choose the parameters in the On-Demand Report Parameters
window to customize how the data is displayed in the reports. Table 80 describes
these parameters.
Table 80. Reporting parameters
Activity type
This parameter is used to select the following server activity:
v Database backup
Report period
This parameter is used to select one of the following date ranges to display:
v All
v Today
v Yesterday
v The last 24 hours
v The last 7 days
v The last 30 days
v The last 90 days
v The last 365 days
v The current week
v The last month
v The last 3 months
v Year to date
Start date and end date
This parameter is used to override the report period by choosing a start date and
an end date.
Server name
This parameter is used to select which server to report on.
Summarization type
This parameter is used to specify how you want to display the summarized data.
You can specify either daily (default), hourly, weekly, monthly, quarterly, or yearly.
Related tasks:
“Viewing historical data and running reports” on page 845
Related reference:
“BIRT Client reports” on page 861
When you create a Tivoli Storage Manager monitoring agent instance in the Tivoli
Enterprise Monitoring server application, a new environment file is created. You
can modify this file to change the behavior of the monitoring agent.
There are many variables that can be configured, but take care not to degrade the
performance of the Tivoli Storage Manager server by setting variables incorrectly.
The environment file is named KSKENV_xxx, where xxx is the instance name of the
monitoring agent you created. This file is located in the IBM Tivoli Monitoring
installation directory. For example, C:\IBM\ITM\TMAITM6.
IBM Tivoli Monitoring environment file reporting queries
An environment file is created for each agent instance. You can modify the
environment variables to customize the data that is collected from the Tivoli
Storage Manager server where the agent instance is installed.
You can use any text editor to edit the environment file.
v Valid values are 0 and 1.
v A value of 0 disables the query.
v A value of 1 enables the query.
v An invalid value disables the query.
For example: pompeii2150020110620101220000.txt, where instance name =
pompeii2, port number = 1500, and date = June 20, 2011 at 10:12 a.m.
There are other variables included in this environment file that can affect the
performance of the server. See the IBM Tivoli Storage Manager Performance Tuning
Guide for details of these environment variables.
After Tivoli Monitoring for Tivoli Storage Manager is installed and the agent
instance is created and configured, the agent begins collecting data. The data
collected is not written directly to the database, but is first stored as temporary
files on the host system where the agent is running. Over time, the data gets
moved to the DB2 database named WAREHOUS, where it is permanently stored
and used to create reports by the Tivoli Integrated Portal Common Reporting
function.
If you modified your configuration, or customized any reports, you might need to
back up and restore those modified configurations.
If a system failure occurs that affects your data and configuration modifications,
you must first reinstall and configure Tivoli Monitoring for Tivoli Storage Manager,
then restore the backed-up data and configurations.
These are the tasks that you must perform to back up your system, ensure that
your backups are successful, and then restore your system.
v Backing up the system includes these tasks:
– Installing the Tivoli Storage Manager client
– Backing up the IBM Tivoli Monitoring server
– Configuring the system to back up the DB2 WAREHOUS database, and
performing backups
– Validating the success of the backups
– Exporting any customized Tivoli Enterprise Portal workspaces and queries to
the file system and backing them up using the Tivoli Storage Manager client
– Backing up any customized configuration files for the storage agent using the
Tivoli Storage Manager client
– Exporting any customized reports to the file system and backing them up by
using the Tivoli Storage Manager client
v Restoring the system includes these tasks:
– Reinstalling and configuring Tivoli Monitoring for Tivoli Storage Manager
– Restoring the DB2 WAREHOUS database from backups
– Restoring the IBM Tivoli Monitoring and Tivoli Enterprise Monitoring server
data from backups
– Importing any customized storage agent configuration files
– Importing any customized Cognos reports
The following scenario outlines the tasks that must be completed to back up your
system, and verify that your backups are successful.
1. Install the Tivoli Storage Manager client (both 32-bit and 64-bit runtimes):
v Installing Tivoli Storage Manager clients
2. Back up the IBM Tivoli Monitoring and Tivoli Enterprise Monitoring server
data by using Tivoli Storage Manager client:
v “Backing up IBM Tivoli Monitoring, Tivoli Enterprise Portal server, and
agent configuration settings” on page 874
3. Configure the system to back up the DB2 WAREHOUS database, and then
perform backups:
v “Backing up the DB2 WAREHOUS database on Windows systems”
4. Validate the success of your backups:
v “Verifying and deleting backups of Tivoli Monitoring for Tivoli Storage
Manager” on page 873
5. Export any customized Tivoli Enterprise Portal workspaces and queries to the
file system and back them up using the Tivoli Storage Manager client:
v “Exporting and importing Tivoli Enterprise Portal workspaces and queries”
on page 875
6. Back up any customized configuration files for the storage agent by using the
Tivoli Storage Manager client:
v “Backing up IBM Tivoli Monitoring, Tivoli Enterprise Portal server, and
agent configuration settings” on page 874
7. Export any customized Cognos reports to the file system and back them up by
using the Tivoli Storage Manager client:
v “Exporting customized Cognos reports” on page 876
Related tasks:
“Backing up the DB2 WAREHOUS database on Windows systems”
The following steps describe how you can back up the historical data that is
gathered by Tivoli Monitoring for Tivoli Storage Manager and stored in the DB2
WAREHOUS database. They also describe how to use the Tivoli Storage Manager
server as the backup repository. You can also back up the database to other media
such as a hard disk drive. Learn more from the IBM DB2 Database Information
Center at: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/
index.jsp?topic=/com.ibm.db2.luw.admin.ha.doc/doc/c0052073.html.
1. To back up your database to a Tivoli Storage Manager server, install the Tivoli
Storage Manager backup-archive client on the same system where IBM Tivoli
Monitoring is installed. See Installing Tivoli Storage Manager clients in the
Backup-Archive Clients Installation and User's Guide for additional information.
2. From the Tivoli Storage Manager server, create a management class for the
DB2 WAREHOUS backups and log files.
Notes:
a. You can use the Administration Center to create the management class, or
you can use the DEFINE MGMTCLASS command.
b. The management class in these examples is called
WAREHOUS_BACKUPS.
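For example, you might define the management class from the administrative
command line. The following command is a sketch that assumes the default
STANDARD policy domain and policy set; substitute the names that are used on
your server:
define mgmtclass standard standard warehous_backups description="DB2 WAREHOUS backups"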
3. In the backup and archive copy groups of the management class you created,
apply these settings:
Note: You can use the Administration Center to apply these settings, or you
can use the DEFINE COPYGROUP or UPDATE COPYGROUP commands.
a. Apply these settings to the backup copy group:
verexists=1
verdeleted=0
retextra=0
retonly=0
b. Apply this setting to the archive copy group:
retver=nolimit
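For example, the following commands are a sketch of how the settings might be
applied from the administrative command line. The STANDARD domain and policy set
and the BACKUPPOOL and ARCHIVEPOOL destinations are assumptions; the policy set
must also be activated before the new copy groups take effect:
define copygroup standard standard warehous_backups type=backup destination=backuppool verexists=1 verdeleted=0 retextra=0 retonly=0
define copygroup standard standard warehous_backups type=archive destination=archivepool retver=nolimit
activate policyset standard standard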
4. Register a node for the DB2 backup client, and note the node name and
password for later use.
register node node_name password domain=domain_name backdelete=yes
5. As Administrator, log in to the system where IBM Tivoli Monitoring is
installed and create a file called dsm.opt in the client installation directory,
which by default is the c:\Program Files\Common Files\Tivoli\TSM\api\
directory, with this information in it:
servername myserver
tcpport 1500
tcpserveraddress myaddress.mycompany.com
passwordaccess generate
nodename mynode
tcpclientaddress 11.22.33.44
*This is the include list that binds the mgmtclass to backup and log files
INCLUDE \...\*
INCLUDE \WAREHOUS\...\* WAREHOUS_BACKUPS
INCLUDE \WAREHOUS\...\*.LOG WAREHOUS_BACKUPS
6. From the System Properties > Advanced tab, select Environment Variables,
and specify these system environment variables that are required by the Tivoli
Storage Manager client:
DSMI_DIR set to c:\Program Files\Common Files\Tivoli\TSM\api64\
DSMI_CONFIG set to c:\Program Files\Common Files\Tivoli\TSM\api64\dsm.opt
DSMI_LOG set to c:\Program Files\Common Files\Tivoli\TSM\api64\
Notes:
a. The DSMI_DIR variable must point to the API client installation directory.
b. The DSMI_CONFIG variable must be set to the location of the dsm.opt file
created in Step 5. The default directory is c:\Program Files\Common
Files\Tivoli\TSM\api\.
c. The DSMI_LOG variable specifies the logging directory. The default directory is
c:\Program Files\Common Files\Tivoli\TSM\api\.
Important: Perform this step using the db2inst1 user ID, because
the /home/db2inst1/tsm/dsierror.log file is owned by the first ID to issue this
command.
db2adutl query
If the command returns a message that states that no db2 objects are found,
you successfully set the password.
15. Optional: You can check the activity log on the Tivoli Storage Manager server
to confirm that the node successfully authenticated when you ran the
db2adutl command.
16. Configure DB2 to roll forward:
db2 update db cfg for WAREHOUS using logarchmeth1 tsm
17. Configure the database to use the management class that you created in Step 2 on
page 870:
db2 update db cfg for WAREHOUS using TSM_MGMTCLASS WAREHOUS_BACKUPS
18. Set TRACKMOD to ON by issuing this command:
db2 update db cfg for WAREHOUS using TRACKMOD ON
a. If you see the SQL1363W message displayed in response to these
commands, one or more of the parameters submitted for modification
were not dynamically changed, so issue this command:
db2 force applications all
b. Ensure that the settings for LOGARCHMETH1, TSM_MGMTCLASS, and
TRACKMOD have been updated by issuing this command:
db2 get db cfg for warehous
19. Perform a full offline backup of the database:
db2 backup db WAREHOUS use tsm
20. Restart IBM Tivoli Monitoring for Tivoli Storage Manager services using the
Manage Tivoli Monitoring Services console. Start all of the agents and
services in this order:
a. Tivoli Storage Manager agents
b. Summarization and Pruning agent
c. Warehouse Proxy agent
d. Tivoli Enterprise Portal server
e. Tivoli Enterprise Monitoring server
Note: All backups, incremental and full, can now be performed online
without stopping and restarting these services.
21. The first online backup must be a full backup, followed by as many
incremental backups as needed:
db2 backup db WAREHOUS online use tsm
22. Start the non-cumulative incremental backups by using the following
command:
db2 backup db warehous online incremental delta use tsm
Tip: Specify the keyword delta to ensure that the incremental backups are
not cumulative. This reduces the size of the backups and the amount of time
each backup takes to run. If you want your incremental backups to be
cumulative, do not specify the keyword delta. This increases the size of the
backups, but reduces the number of incremental backups required to perform
a restore. If your backups do not take much time or storage space, you might
choose to only perform full backups, which would only require a single
backup to restore.
23. After you complete a full set of incremental backups, perform a full backup:
db2 backup db warehous online use tsm
Tip: The db2adutl utility uses the keyword delta to mean a non-cumulative,
incremental backup.
v If you perform cumulative incremental backups, you can issue this command
to retain the most recent backup:
db2adutl delete full incremental keep 1 db warehous
You can back up the entire contents of the repository directories, and the agent
configuration file, using an application such as the Tivoli Storage Manager client.
The monitoring agent must be stopped for the duration of the backup process.
Failing to stop the agent might result in file-in-use errors and an internally
inconsistent snapshot of the data. The agent can be restarted after the backup is
complete.
Complete these steps to back up the IBM Tivoli Monitoring and Tivoli Enterprise
Portal server configuration settings:
1. Back up the Derby database cache that is stored in a directory named DERBY.
This directory is created by the monitoring agent on the system where the
agent runs. If there are multiple monitoring agents installed on one system,
they all use this directory.
The default directory is:
C:\IBM\ITM\TMAITM6\DERBY
Tip: If the monitoring agent is started from a command shell, the DERBY
directory is created in the directory where the agent is started.
2. Back up the collection of binary files that are created by the monitoring agent.
The system where these files reside depends on the collection location that is
specified in the historical settings for the Tivoli Enterprise Portal server. See the
configuration steps for more information about accessing these settings.
v TEMA binary files are kept on the monitoring agent system in the following
directory:
installation_directory\ITM\TMAITM6\logs\History\KSK\agent_instance_name
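For example, with the monitoring agent stopped, backup-archive client commands
similar to the following sketch back up both locations. The paths are the defaults
that are shown above; adjust them if your installation uses different directories:
dsmc selective "C:\IBM\ITM\TMAITM6\DERBY\*" -subdir=yes
dsmc selective "C:\IBM\ITM\TMAITM6\logs\History\KSK\*" -subdir=yes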
Complete these steps to export and import the workspaces and queries:
1. Log in to the Tivoli Enterprise Portal client with the sysadmin user ID to
modify the authority that is necessary to export and import the workspaces and
queries.
2. From the main menu click Edit > Administer Users.
3. Select the SYSADMIN user ID, and in the Authorities pane, select Workspace
Administration.
4. Select the Workspace Administration Mode check box, and click OK.
Note: Ensure that the Workspace Administration Mode and Workspace Author
Mode check boxes are selected.
5. Export the workspaces to a file. From a shell or command window, navigate to
the directory containing the tacmd command and issue this command.
cd C:\IBM\itm\BIN\
tacmd exportworkspaces -t sk -x workspaces_output_filename -u sysadmin
-p sysadmin_password -f
Note: The tacmd file is located in the bin directory where you installed IBM
Tivoli Monitoring.
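The queries are exported in the same way with the tacmd exportqueries command.
The following line is a sketch only; it assumes the same options as the workspace
export and as the importqueries command that is shown later:
tacmd exportqueries -t sk -x queries_output_filename -u sysadmin -p sysadmin_password -f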
After exporting the queries to the two .xml output files, you can back them up
using a backup utility such as the Tivoli Storage Manager client.
Import the workspaces and queries with the following commands, after you have
reinstalled and configured your Tivoli Monitoring for Tivoli Storage Manager
system:
./tacmd importworkspaces -x workspaces_output_filename -u sysadmin -p
sysadmin_password -f
./tacmd importqueries -x queries_output_filename -u sysadmin -p
sysadmin_password -f
Related tasks:
“Exporting customized Cognos reports”
Related information:
IBM DB2 Data recovery
After you have exported your reports to a file, ensure that they are backed up, and
validate that you can restore the data.
Related tasks:
“Restoring Tivoli Monitoring for Tivoli Storage Manager” on page 878
Related information:
IBM DB2 Data recovery
Complete these steps to export any customized BIRT reports to a .zip file:
1. Log on to the system where Tivoli Integrated Portal is installed and open a
command prompt.
2. Navigate to the applicable directory:
v If you are exporting from version 6.3 to version 6.3:
C:\IBM\tivoli\tipv2Components\TCRComponent\bin
v If you are exporting from version 6.2 to version 6.3:
C:\installation_directory\ac\products\tcr\bin
v If you are exporting from version 6.1 to version 6.3:
C:\installation_directory\ac\bin
3. To obtain a list of all reports, issue this command:
trcmd.bat -list reports
Your output should look similar to this:
"/TSMReports/TSM_server_tape_volume_capacity"
"/Custom Reports/TSM_client_storage_details_test"
"/TSMReports/TSM_client_activity_details"
"/TSMReports/TSM_client_backup_currency"
"/TSMReports/TSM_server_database_details"
"/TSMReports/TSM_client_backup_missed_files"
"/TSMReports/TSM_client_schedule_status"
"/Custom Reports/TSM_client_top_activity_test"
"/TSMReports/TSM_client_storage"
"/TSMReports/TSM_server_Throughput"
"/TSMReports/TSM_server_activity_details"
"/TivoliProducts/TCR/Overview"
"/TSMReports/TSM_server_resource_usage"
"/TSMReports/TSM_client_top_activity"
"/TSMReports/TSM_client_activity_history"
4. Issue this command, on one line, to export the .zip file to your home directory.
Specify the names of the reports to be exported, within quotation marks, and a
name for the output file. You can also specify a different directory, if you
prefer:
Tip: Do not specify report names unless they have been added or customized.
Doing so results in overwriting the installed version 6.3 reports of the same
name.
trcmd.bat -export -bulk C:\users\Administrator\customized_reports.zip
-reports "/Custom Reports/TSM_client_storage_details_test"
"/Custom Reports/TSM_client_top_activity_test"
After you have exported your reports to a .zip file, ensure that they are backed up,
and perform a restore to validate that you have a successful backup.
Restoring Tivoli Monitoring for Tivoli Storage Manager
You can restore your Tivoli Monitoring for Tivoli Storage Manager system, which
includes the Tivoli Integrated Portal, data collected by Tivoli Monitoring for Tivoli
Storage Manager, the DB2 WAREHOUS database, any customized BIRT or Cognos
reports, and any configuration settings that might be needed.
This scenario outlines the tasks required to restore your Tivoli Monitoring for
Tivoli Storage Manager system by using your backups.
1. Reinstall and configure Tivoli Monitoring for Tivoli Storage Manager: Installing
Tivoli Monitoring for Tivoli Storage Manager.
2. Restore your DB2 WAREHOUS database from backup: “Restoring backups of
Tivoli Monitoring for Tivoli Storage Manager.”
3. Restore your IBM Tivoli Monitoring, Tivoli Enterprise Portal server, and agent
configuration files from backup: “Restoring IBM Tivoli Monitoring, Tivoli
Enterprise Portal server, and agent configuration settings” on page 880.
4. Import any customized Cognos reports: “Importing customized Cognos
reports” on page 881.
5. Import any customized BIRT reports: “Importing customized BIRT reports” on
page 881.
Related tasks:
“Restoring backups of Tivoli Monitoring for Tivoli Storage Manager”
This procedure assumes that the system where Tivoli Monitoring for Tivoli Storage
Manager was installed has been lost. Before you can perform a restore from
backups, you must reinstall and configure Tivoli Monitoring for Tivoli Storage
Manager and the Tivoli Storage Manager client.
To learn more about reinstalling see Installing Tivoli Monitoring for Tivoli Storage
Manager, and Installing the Tivoli Storage Manager backup-archive clients, in the
Installation Guide.
1. To restore the DB2 WAREHOUS database, you must first stop all Tivoli
Monitoring for Tivoli Storage Manager agents and services. From the Manage
Tivoli Monitoring Services console, which is also referred to as CandleManage,
stop these agents and services in this order:
a. Tivoli Storage Manager agents
b. Summarization and Pruning agent
c. Warehouse Proxy agent
d. Tivoli Enterprise Portal server
e. Tivoli Enterprise Monitoring server
2. Open a DB2 command window, select Start > Programs > IBM DB2 >
DB2COPY1 > Command line tools > Command Window.
3. Determine if there are any existing application connections by issuing this
command:
db2 list applications for db warehous
4. Stop the active connections by issuing this command:
db2 force applications all
The output lists the available full, delta, and incremental backups:
Query for database WAREHOUS
6. To perform a restore, you must issue a restore command for each backup
involved in the restore. DB2 requires configuration information that is
contained in your most recent backup, therefore you must restore it first before
proceeding to restore the entire series.
For example, if you perform daily backups with #7 being the most recent
backup and #1 being the oldest backup, restore backup #7 first, followed by
backup #1, #2, #3, #4, #5, #6, and then #7 again.
Table 82. Backup scenario: restore order for backups
Backup # Day Type of backup Restore order
7 Sunday, December 31 Incremental 1st
1 Monday, December 25 Full 2nd
2 Tuesday, December 26 Incremental 3rd
3 Wednesday, December 27 Incremental 4th
4 Thursday, December 28 Incremental 5th
5 Friday, December 29 Incremental 6th
6 Saturday, December 30 Incremental 7th
7 Sunday, December 31 Incremental 8th
db2 restore database warehous incremental use tsm taken at 20101231110157
db2 restore database warehous incremental use tsm taken at 20101225110426
db2 restore database warehous incremental use tsm taken at 20101226110346
db2 restore database warehous incremental use tsm taken at 20101227110224
db2 restore database warehous incremental use tsm taken at 20101228110145
db2 restore database warehous incremental use tsm taken at 20101229110234
db2 restore database warehous incremental use tsm taken at 20101230110157
db2 restore database warehous incremental use tsm taken at 20101231110257
7. If the most recent backup completed was a full backup, you can restore only
that backup without having to restore the whole series of incremental backups.
For example:
db2 restore database warehous use tsm taken at 20101229110234
8. Because the backups were configured for rollforward recovery, you must
complete the restore process with the rollforward command:
db2 rollforward database warehous to end of logs and complete
After completing this restore procedure, perform a full, offline backup before
starting the IBM Tivoli Monitoring services and agents.
Related tasks:
“Backing up the DB2 WAREHOUS database on Windows systems” on page 869
Related information:
IBM DB2 Data recovery
Complete these steps to restore the IBM Tivoli Monitoring and Tivoli Enterprise
Portal repository directories and the agent configuration files:
1. Restore the Derby database that you backed up to the directory that was
created by the monitoring agent on the system where the agent runs. If there
are multiple monitoring agents installed on one system, they all use this
directory. The default directory is:
C:\IBM\ITM\TMAITM6\DERBY
Tip: If the monitoring agent is started from a command shell, the DERBY
directory is created in the current directory from where it is started.
2. Restore the collection of binary files that were created by the monitoring agent
to their directories. The system where these files reside depends on the
collection location that is specified in the historical settings for the Tivoli
Enterprise Portal server.
v TEMA binary files are restored to the monitoring agent system in the
following directory:
installation_directory\ITM\TMAITM6\logs\History\KSK\agent_instance_name
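For example, a backup-archive client command similar to the following sketch
restores the Derby database cache from the backup that was taken earlier; the path
is the default that is shown above:
dsmc restore "C:\IBM\ITM\TMAITM6\DERBY\*" -subdir=yes -replace=all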
Complete these steps to import a Cognos .zip package file that has been restored
from backup:
1. Log in to the Tivoli Integrated Portal.
2. Expand the Reporting item in the navigation tree and select Common
Reporting to open the reporting workspace.
3. Copy the restored Cognos .zip package file into the appropriate directory:
\IBM\tivoli\tip\products\tcr\Cognos\c8\deployment
4. Click Launch > Administration. This switches to a tabbed workspace.
5. Select the Configuration tab, and then select Content Administration in the
box on the left.
6. Click the New Import icon on the Administration toolbar.
7. Start the New Import wizard.
8. Click Refresh in the upper-right corner until you see the final status of the
import.
Related tasks:
“Exporting customized Cognos reports” on page 876
Related information:
IBM DB2 Data recovery
Before you can import the previously exported BIRT reports, you must remove the
reportdata.xml file from the .zip file. You can either use a tool that allows you to
remove the file without extracting the files, or you can extract the files, remove the
reportdata.xml file, and then compress the files in the directory again.
Complete these steps to import the previously exported .zip file of your
customized BIRT reports:
1. Log on to the Tivoli Integrated Portal and open a command prompt.
2. Copy the previously exported .zip file containing the customized reports to a
directory on the system where Tivoli Integrated Portal is installed, for example,
to a temporary directory.
3. Navigate to this directory:
C:\IBM\tivoli\tipv2Components\TCRComponent\bin\
4. Issue this command to import the .zip file, specifying the directory where the
.zip file is located:
trcmd.bat -import -bulk C:\users\Administrator\customized_reports.zip
After you have imported the customized BIRT reports, log on to Tivoli Integrated
Portal, and validate that your customized reports were successfully imported.
The client performance monitor uses the Tivoli Storage Manager API to collect
performance data about backup and restore operations.
The client performance monitor is automatically installed with the Tivoli Storage
Manager Administration Center. You can access the client performance monitor
and detailed information about how to view and analyze performance information
from the Reporting portlet of the Administration Center.
To change the client performance monitor parameters after installation, specify the
following parameters in the assist.cfg file:
validTime
The number of hours that the client performance monitor keeps state
information about unfinished operations. The default is 24 hours.
validOperationSaveTime
The number of days that operation data is kept in the client performance
monitor history. The default is 14 days.
port The communication port where the client performance monitor listens for
performance data. The default is 5129.
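For example, an assist.cfg file that keeps the default values might contain entries
similar to the following sketch. The name=value layout is an assumption; follow the
format of the file that your installation created:
validTime=24
validOperationSaveTime=14
port=5129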
If you change the configuration file after the client performance monitor is
installed, restart the client performance monitor server for the changes to become
effective.
The client performance monitor runs as a Windows service. The display name for
the service is Tivoli Storage Manager Client Performance Monitor. Use the
Windows service management tools to start, stop, or view the status of the client
performance monitor.
You can log the events to any combination of the following receivers:
Tivoli Storage Manager server console and activity log
See “Logging events to the IBM Tivoli Storage Manager server console and
activity log” on page 887.
File and user exits
See “Logging events to a file exit and a user exit” on page 888.
Tivoli event console
See “Logging events to the Tivoli Enterprise Console” on page 889.
Event server receiver (Enterprise Event Logging)
Routes the events to an event server. See “Enterprise event logging:
logging events to another server” on page 897.
Simple Network Management Protocol (SNMP)
See “Logging events to an SNMP manager” on page 893.
The Windows Event Log
See “Logging events to the Windows event log” on page 897.
In addition, you can filter the types of events to be enabled for logging. For
example, you might enable only severe messages to the event server receiver and
one or more specific messages, by number, to another receiver. Figure 95 shows a
possible configuration in which both server and client messages are filtered by the
event rules and logged to a set of specified receivers.
Figure 95. Server and client messages pass through the event rules and are routed to the
configured receivers: the activity log, server console, file, user exit, Tivoli event
console, and event server.
When you enable or disable events, you can specify the following:
v A message number or an event severity (ALL, INFO, WARNING, ERROR, or
SEVERE).
v Events for one or more client nodes (NODENAME) or for one or more servers
(SERVERNAME).
To enable or disable events, issue the ENABLE EVENTS and DISABLE EVENTS
commands. For example,
v To enable event logging to a user exit for all error and severe server messages,
enter:
enable events userexit error,severe
v To enable event logging to a user exit for severe client messages for all client
nodes, enter:
enable events userexit severe nodename=*
v To disable event logging to a user exit for error server messages, enter:
disable events userexit error
If you specify a receiver that is not supported on any platform, or if you specify an
invalid event or name, Tivoli Storage Manager issues an error message. However,
any valid receivers, events, or names that you specified are still enabled. Certain
events, such as messages that are issued during server startup and shutdown,
automatically go to the console. They do not go to other receivers, even if they are
enabled.
Note: Server messages in the SEVERE category and message ANR9999D can provide
valuable diagnostic information if there is a serious problem. For this reason, you
should not disable these messages. Use the SET CONTEXTMESSAGING ON command to
get additional information that could help determine the cause of ANR9999D
messages. IBM Tivoli Storage Manager polls the server components for
information that includes process name, thread name, session ID, transaction data,
locks that are held, and database tables that are in use.
At server startup, event logging begins automatically to the server console and
activity log and for any receivers that are started based on entries in the server
options file. A receiver for which event logging has begun is an active receiver.
To begin logging events to receivers for which event logging is not started
automatically, issue the BEGIN EVENTLOGGING command. You can also use this
command after you have disabled event logging to one or more receivers. To end
event logging for an active receiver issue the END EVENTLOGGING command.
For example,
v To begin logging events to the event server, enter:
begin eventlogging eventserver
v To end logging events to the event server, enter:
end eventlogging eventserver
Logging events to the IBM Tivoli Storage Manager server console and
activity log
Logging events to the server console and activity log begins automatically at server
startup.
Enabling client events to the activity log will increase the database utilization. You
can set a retention period or size limit for the log records by using the SET
ACTLOGRETENTION command (see “Setting a retention period for the activity
log” on page 831 and “Setting a size limit for the activity log” on page 831). At
server installation, activity log management is retention-based, and this value is set
to one day. If you increase the retention period or the size limit, utilization is
further increased. For more information about the activity log, see “Using the
Tivoli Storage Manager activity log” on page 829.
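For example, the following commands show the two management styles; the values are
illustrations only:
set actlogretention 30 mgmtstyle=date
set actlogretention 300 mgmtstyle=size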
You can disable server and client events to the server console and client events to
the activity log. However, you cannot disable server events to the activity log.
Also, certain messages, such as those issued during server startup and shutdown
and responses to administrative commands, will still be displayed at the console
even if disabled.
To enable all error and severe client events to the console and activity log, you can
issue the ENABLE EVENTS command. See the Administrator's Reference for more
information.
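For example, the following commands enable error and severe client events for all
client nodes to the server console and to the activity log:
enable events console error,severe nodename=*
enable events actlog error,severe nodename=*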
Logging events to a file exit and a user exit
A file exit is a file that receives all the information related to its enabled events.
You can log events to a file exit and a user exit.
Be aware that this file can rapidly grow in size depending on the events enabled
for it. There are two versions of the file exit: binary and text. The binary file exit
stores each logged event as a record, while the text file exit stores each logged
event as a fixed-sized, readable line. For more information about the text file exit,
see “Readable text file exit (FILETEXTEXIT) format” on page 903.
See “Beginning and ending event logging” on page 887 for more information.
Application clients, Data Protection for IBM ESS for DB2, and Data Protection for
IBM ESS for Oracle must have enhanced Tivoli Enterprise Console support enabled
in order to route the events to the Tivoli Enterprise Console. Because of the
number of messages, you should not enable all messages from a node to be logged
to the Tivoli Enterprise Console.
Enabling either of these options not only changes the event class format, but also
generates a unique event class for individual Tivoli Storage Manager messages for
the client, the server, application clients, Data Protection for IBM ESS for DB2, Data
Protection for IBM ESS for Oracle, and Data Protection for IBM ESS for R/3.
UNIQUETDPTECEVENTS
Changes the event class format and generates a unique event class for all client,
server, and all Data Protection messages,
where #### represents the message number. For exact details of the event class
format, look at the appropriate baroc file.
Application clients can issue unique events in the following ranges. All events
follow the IBM 3.4 naming convention, which uses a three-character prefix
followed by four digits.
Based upon the setting of the option or options on the Tivoli Storage Manager
server, the Tivoli Enterprise Console administrator must create a rule base using
one of the following baroc files:
Each successive baroc file accepts the events of the previous baroc file. For
example, itsmuniq.baroc accepts all events in ibmtsm.baroc, and itsmdpex.baroc
accepts all events contained in itsmuniq.baroc.
To determine whether this option is enabled, issue the QUERY OPTION command.
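For example, to display the current setting of the option that is described above, you
might issue:
query option uniquetdptecevents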
To set up Tivoli as a receiver for event logging, complete the following procedure:
1. Define the Tivoli Storage Manager event classes to the Tivoli Enterprise Console
with the baroc file for your operating system:
ibmtsm.baroc
This file is distributed with the server.
Note: See the Tivoli Enterprise Console documentation for instructions
on removing an existing baroc file, if needed, and installing a new baroc file.
Before the events are displayed on a Tivoli Enterprise Console, you must
import the baroc file into an existing rule base or create a new rule base and
activate it. To do this, complete the following steps:
a. From the Tivoli desktop, click on the Rule Base icon to display the pop-up
menu.
b. Select Import, then specify the location of the baroc file.
c. Select the Compile pop-up menu.
d. Select the Load pop-up menu and, from the resulting dialog, select Load, but
activate only when server restarts.
e. Shut down the event server and restart it.
To create a new rule base, complete the following steps:
a. Click on the Event Server icon from the Tivoli desktop. The Event Server
Rules Bases window will open.
b. Select Rule Base from the Create menu.
c. Optionally, copy the contents of an existing rule base into the new rule base
by selecting the Copy pop-up menu from the rule base to be copied.
d. Click on the RuleBase icon to display the pop-up menu.
e. Select Import and specify the location of the baroc file.
f. Select the Compile pop-up menu.
g. Select the Load pop-up menu and, from the resulting dialog, select Load, but
activate only when server restarts.
h. Shut down the event server and restart it.
2. To define an event source and an event group:
a. From the Tivoli desktop, select Source from the EventServer pop-up menu.
Define a new source whose name is Tivoli Storage Manager from the
resulting dialog.
b. From the Tivoli desktop, select Event Groups from the EventServer pop-up
menu. From the resulting dialog, define a new event group for Tivoli
Storage Manager and a filter that includes event classes
IBMTSMSERVER_EVENT and IBMTSMCLIENT_EVENT.
c. Select the Assign Event Group pop-up menu item from the Event Console
icon and assign the new event group to the event console.
d. Double-click on the Event Console icon to start the configured event
console.
3. Enable events for logging to the Tivoli receiver. See “Enabling and disabling
events” on page 886 for more information.
4. In the server options file, specify the location of the host on which the Tivoli
server is running. For example, to specify a Tivoli server at the IP address
9.114.22.345:1555, enter the following:
techost 9.114.22.345
tecport 1555
5. Begin event logging for the Tivoli receiver. You do this in one of two ways:
v To begin event logging automatically at server start up, specify the following
server option:
tecbegineventlogging yes
Or
v Enter the following command:
begin eventlogging tivoli
See “Beginning and ending event logging” on page 887 for more information.
Tivoli Storage Manager also implements an SNMP subagent that can be configured
to report exception conditions and provide support for a management information
base (MIB). The management information base (MIB), which is shipped with Tivoli
Storage Manager, defines the variables that will run server scripts and return the
server scripts' results. You must register SNMPADMIN, the administrative ID under
which the server runs these scripts. Although a password is not required for the
subagent to communicate with the server and run scripts, a password should be
defined for SNMPADMIN to prevent access to the server from unauthorized users.
An SNMP password (community name) is required, however, to access the SNMP
agent, which forwards the request to the subagent.
Note: Because the SNMP environment has weak security, you should consider not
granting SNMPADMIN any administrative authority. This restricts SNMPADMIN
to issuing only Tivoli Storage Manager queries.
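For example, the registration might look like the following sketch; the password is a
placeholder, and no additional authority is granted so that SNMPADMIN can issue only
queries:
register admin snmpadmin snmpadminpassword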
SNMP SET requests are accepted for the name and input variables associated with
the script names stored in the MIB by the SNMP subagent. This allows a script to
be processed by running a GET request for the ibmAdsmReturnValue1 and
ibmAdsmReturnValue2 variables. A GETNEXT request will not cause the script to
run. Instead, the results of the previous script processed will be retrieved. When an
entire table row is retrieved, the GETNEXT request is used. When an individual
variable is retrieved, the GET request is used.
1. Choose the name and parameters for a Tivoli Storage Manager script.
2. Use the application to communicate with the SNMP agent. This agent changes
the Tivoli Storage Manager MIB variable for one of the two script names that
the Tivoli Storage Manager subagent maintains. The SNMP agent also sets the
parameter variables for one of the two scripts.
3. Use the application to retrieve the variable ibmAdsmReturnValue1.x or
ibmAdsmReturnValue2.x, where x is the index of the server that is registered
with the subagent.
To set the variables associated with the script, the nodes on which the subagent
and the agent are run must have read-write authority to the MIB variables. This is
done through the SNMP configuration process on the system that the SNMP agent
runs on.
An SNMP agent is needed for communication between an SNMP manager and its
managed systems. The SNMP agent is realized through the snmpd daemon. The
Distributed Protocol Interface (DPI) Version 2 is an extension of this SNMP agent.
SNMP managers can use the MIB that is shipped with Tivoli Storage Manager to
manage the server. Therefore, an SNMP agent supporting DPI Version 2 must be
used to communicate with the Tivoli Storage Manager subagent. This SNMP agent
is not included with Tivoli Storage Manager. A supported DPI agent ships with
AIX. The Tivoli Storage Manager subagent is included with Tivoli Storage Manager
and, before server startup, must be started as a separate process communicating
with the DPI-enabled SNMP agent.
The SNMP manager system can reside on the same system as the Tivoli Storage
Manager server, but typically would be on another system connected through
SNMP. The SNMP management tool can be any application, such as NetView or
Tivoli Enterprise Console, which can manage information through SNMP MIB
monitoring and traps. The Tivoli Storage Manager server system runs the processes
needed to send Tivoli Storage Manager event information to an SNMP
management system. The processes are:
v SNMP agent (snmpd)
v Tivoli Storage Manager SNMP subagent (dsmsnmp)
v Tivoli Storage Manager server (dsmserv)
Figure 97 shows how the communication for SNMP works in a Tivoli Storage
Manager system:
v The SNMP manager and agent communicate with each other through the SNMP
protocol. The SNMP manager passes all requests for variables to the agent.
v The agent then passes the request to the subagent and sends the answer back to
the manager. The agent responds to the manager's requests and informs the
manager about events by sending traps.
v The agent communicates with both the manager and subagent. It sends queries
to the subagent and receives traps that inform the SNMP manager about events
taking place on the application monitored through the subagent. The SNMP
agent and subagent communicate through the Distributed Protocol Interface
(DPI). Communication takes place over a stream connection, which typically is a
TCP connection but could be another stream-connected transport mechanism.
v The subagent answers MIB queries of the agent and informs the agent about
events by sending traps. The subagent can also create and delete objects or
subtrees in the agent's MIB. This allows the subagent to define to the agent all
the information needed to monitor the managed application.
Figure 97. The SNMP manager and the SNMP agent exchange get/set requests, responses, and
traps over the SNMP protocol; the SNMP agent and the SNMP subagent exchange queries,
replies, registrations, and traps over the SNMP DPI.
Note:
1. You can start dsmsnmp and the server in any order. However, starting dsmsnmp
first is more efficient in that it avoids retries.
2. The MIB file name is adsmserv.mib. The file is located in the directory in
which the server is installed.
3. Merge the contents of the adsmserv.mib file into the /etc/mib.defs file.
In this example configuration, an SNMP manager communicates over the SNMP protocol
with the snmpd agent on AIX, and the agent communicates over the SNMP DPI with the
dsmsnmp subagent and the Tivoli Storage Manager server on Windows, where the SNMP
options are specified in dsmserv.opt.
1. Specify the SNMP communication method and subagent options in the server
options file (dsmserv.opt). For example:
commmethod snmp
snmpsubagent hostname jimbo communityname public timeout 600
snmpsubagentport 1521
snmpheartbeatinterval 5
snmpmessagecategory severity
For details about server options, see the server options section in
Administrator's Reference.
2. Install, configure, and start the SNMP agent as described in the documentation
for that agent. The SNMP agent must support the DPI Version 2.0 standard.
Tivoli Storage Manager supports the SNMP agent that is built into the AIX
operating system.
Before starting the agent, ensure that the dpid2 and snmpd subsystems have
been started.
To enable severe and error events for logging to the Windows Event Log, issue the
ENABLE EVENTS command. For example:
enable events nteventlog severe,error
The sending server receives the enabled events and routes them to a designated
event server. This is done by a receiver that IBM Tivoli Storage Manager provides.
At the event server, an administrator can enable one or more receivers for the
events being routed from other servers. Figure 100 on page 898 shows the
relationship of a sending Tivoli Storage Manager server and a Tivoli Storage
Manager event server.
(Figure 100: Client messages and server messages arrive at the sending Tivoli
Storage Manager server, where event rules route them to receivers such as the
activity log, the server console, a user exit, the Tivoli Enterprise Console, and
the event server. At the Tivoli Storage Manager event server, event rules route
the received events to receivers such as an event file.)
The following scenario is a simple example of how enterprise event logging can
work.
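For example, at each sending server the administrator might first identify the
event server and route severe and error events to it. The event server name
SERVER_B is only a placeholder and must already be defined with the DEFINE SERVER
command:
define eventserver server_b
enable events eventserver severe,error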
Then the administrator enables the events by issuing the ENABLE EVENTS
command for each sending server. For example, for SERVER_A the
administrator would enter:
enable events file severe,error servername=server_a
Note: By default, logging of events from another server is enabled to the event
server activity log. However, unlike events originating from a local server,
events originating from another server can be disabled for the activity log at an
event server.
Because the lists of enabled and disabled events could be very long, Tivoli Storage
Manager displays the shorter of the two lists.
For example, assume that 1000 events for client node HSTANFORD were enabled
for logging to the user exit and that later two events were disabled. To query the
enabled events for HSTANFORD, you can enter:
query enabled userexit nodename=hstanford
The output would specify the number of enabled events and the message names of
disabled events:
998 events are enabled for node HSTANFORD for the USEREXIT receiver.
The following events are DISABLED for the node HSTANFORD for the USEREXIT
receiver:
ANE4000, ANE49999
The QUERY EVENTRULES command displays the history of events that are
enabled or disabled by a specific receiver for the server or for a client node.
query eventrules userexit nodename=hstanford
The samples for the C, H, and make files are shipped with the server code in the
\win32app\ibm\adsm directory.
You can also use Tivoli Storage Manager commands to control event logging. For
details, see Chapter 32, “Logging IBM Tivoli Storage Manager events to receivers,”
on page 885 and Administrator's Reference.
Sample user-exit declarations
USEREXITSAMPLE.H contains declarations for a user-exit program.
/***********************************************************************
* Name: USEREXITSAMPLE.H
* Description: Declarations for a user exit
* Environment: WINDOWS NT
***********************************************************************/
#ifndef _H_USEREXITSAMPLE
#define _H_USEREXITSAMPLE
#include <stdio.h>
#include <sys/types.h>
#ifndef uchar
typedef unsigned char uchar;
#endif
/* DateTime Structure Definitions - TSM representation of a timestamp */
typedef struct
{
uchar year; /* Years since BASE_YEAR (0-255) */
uchar mon; /* Month (1 - 12) */
uchar day; /* Day (1 - 31) */
uchar hour; /* Hour (0 - 23) */
uchar min; /* Minutes (0 - 59) */
uchar sec; /* Seconds (0 - 59) */
} DateTime;
/******************************************
* Some field size definitions (in bytes) *
******************************************/
#define MAX_SERVERNAME_LENGTH 64
#define MAX_NODE_LENGTH 64
#define MAX_COMMNAME_LENGTH 16
#define MAX_OWNER_LENGTH 64
#define MAX_HL_ADDRESS 64
#define MAX_LL_ADDRESS 32
#define MAX_SCHED_LENGTH 30
#define MAX_DOMAIN_LENGTH 30
#define MAX_MSGTEXT_LENGTH 1600
/**********************************************
* Event Types (in elEventRecvData.eventType) *
**********************************************/
/***************************************************
* Application Types (in elEventRecvData.applType) *
***************************************************/
/*****************************************************
* Event Severity Codes (in elEventRecvData.sevCode) *
*****************************************************/
/************************************************************
* Data Structure of Event that is passed to the User-Exit. *
* The same structure is used for a file receiver *
************************************************************/
/************************************
* Size of the Event data structure *
************************************/
/*************************************
* User Exit EventNumber for Exiting *
*************************************/
/**************************************
*** Do not modify above this line. ***
**************************************/
#endif
Sample user exit program
USEREXITSAMPLE.C is a sample user-exit program invoked by the Tivoli Storage
Manager server.
/***********************************************************************
* Name: USEREXITSAMPLE.C
* Description: Example user-exit program that is invoked by
* the TSM V3 Server
* Environment: *********************************************
* ** This is a platform-specific source file **
* ** versioned for: "WINDOWS NT" **
* *********************************************
***********************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <io.h>
#include <windows.h>
#include "USEREXITSAMPLE.H"
/**************************************
*** Do not modify below this line. ***
**************************************/
/****************
*** DLL MAIN ***
****************/
BOOL WINAPI
DllMain(HMODULE hMod, DWORD fdwReason, LPVOID lpvReserved)
{
return(TRUE);
} // End of WINAPI
/******************************************************************
* Procedure: adsmV3UserExit
* If the user-exit is specified on the server, a valid and
* appropriate event will cause an elEventRecvData structure
* (see USEREXITSAMPLE.H) to be passed to a procedure named
* adsmV3UserExit that returns a void.
*
* This procedure can be named differently:
* ----------------------------------------
* The procedure name must match the function name specified in
* the server options file (4th arg). The DLL name generated from
* this module must also match in the server options file
* (3rd arg).
* INPUT : A (void *) to the elEventRecvData structure
* RETURNS: Nothing
******************************************************************/
/**************************************
*** Do not modify above this line. ***
**************************************/
/* Be aware that certain function calls are process-wide and can cause
* synchronization of all threads running under the TSM Server process!
* Among these is the system() function call. Use of this call can
* cause the server process to hang and otherwise affect performance.
* Also avoid any functions that are not thread-safe. Consult your
* system’s programming reference material for more information.
*/
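A minimal exit procedure might look like the following sketch. It is not the
shipped sample: it assumes the elEventRecvData structure that USEREXITSAMPLE.H
declares and, rather than processing the event fields, only appends a marker to a
local file.
void adsmV3UserExit(void *anEvent)
{
   /* Cast the incoming pointer to the event structure from the header. */
   elEventRecvData *eventData = (elEventRecvData *) anEvent;
   FILE *log;

   if (eventData == NULL)
      return;

   /* Record that an event arrived. Avoid system() and other process-wide
    * or non-thread-safe calls, as noted in the comments above. */
   log = fopen("userexit.log", "a");
   if (log != NULL)
   {
      fprintf(log, "adsmV3UserExit: event received\n");
      fclose(log);
   }
   return;
}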
The following table presents the format of the output. Fields are separated by
blank spaces.
Table 83. Readable text file exit (FILETEXTEXIT) format
Column     Description
0001-0006  Event number (with leading zeros)
0008-0010  Severity code number
0012-0013  Application type number
0015-0023  Session ID number
0025-0027  Event structure version number
0029-0031  Event type number
0033-0046  Date/Time (YYYYMMDDHHmmSS)
0048-0111  Server name (right padded with spaces)
0113-0176  Node name (1)
0178-0193  Communications method name (1)
0195-0258  Owner name (1)
0260-0323  High-level internet address (n.n.n.n) (1)
0325-0356  Port number from high-level internet address (1)
0358-0387  Schedule name (1)
0389-0418  Domain name (1)
0420-2019  Event text
2020-2499  Unused spaces
2500       New line character
(1) Columns 113 - 418 contain data only for events that originate in a client or in another
Tivoli Storage Manager server. Otherwise, columns 113 - 418 contain blanks.
The security of your data is the most important aspect of managing data. You can
control access to the server and client nodes, encrypt data transmission, and
protect administrator and node passwords through authentication processes. The
two methods of authentication are LOCAL and LDAP. The LOCAL password
authentication takes place on the Tivoli Storage Manager server, and those
passwords are not case-sensitive.
LDAP password authentication takes place on the LDAP directory server, and the
passwords are case-sensitive. When using LDAP authentication, the password is
sent to the server by the client. By default, Secure Sockets Layer (SSL) is required
when LDAP authentication is used, to avoid exposing the password. SSL is used
when authenticating the server to the client and secures all communication
between the client and server. You can choose not to use SSL with LDAP
authentication if other security measures are in place to protect the password. One
example of an alternative security measure is a virtual private network (VPN)
connection.
Related concepts:
“Managing Tivoli Storage Manager administrator IDs” on page 920
“Managing passwords and logon procedures” on page 926
“Securing the server console” on page 918
“Securing sensitive client data” on page 563
Related reference:
“Managing access to the server and clients” on page 925
“Administrative authority and privilege classes” on page 918
Securing communications
You can add more protection for your data and passwords by using Secure Sockets
Layer (SSL).
SSL is the standard technology for creating encrypted sessions between servers and
clients. SSL provides a secure channel for servers and clients to communicate over
open communication paths. With SSL, the identity of the server is verified through
the use of digital certificates.
To ensure better system performance, use SSL only for sessions when it is needed.
Consider adding additional processor resources on the Tivoli Storage Manager
server to manage the increased requirements.
Tip: The SSL implementation described here is different from the Administration
Center SSL, which is implemented in Tivoli Integrated Portal. Both methods use
the same SSL technology, but they have different implementations and purposes.
See “Finding documentation about TLS for Tivoli Integrated Portal” on page 912.
Setting up TLS
The Tivoli Storage Manager server and client installation procedures include the
silent installation of the Global Security Kit (GSKit). The backup-archive client and
server communicate with Transport Layer Security (TLS) through services provided
by GSKit.
If you use passwords that are authenticated with an LDAP directory server, the
Tivoli Storage Manager server connects securely to the LDAP server with TLS.
LDAP server connections are secured by the TLS protocol. The LDAP directory
server must supply a trusted certificate to the Tivoli Storage Manager server. If the
Tivoli Storage Manager server “trusts” the certificate, a TLS connection is
established. If not, the connection fails. The root certificate that helps sign the
LDAP Directory server certificate must be added to the Tivoli Storage Manager
server key database file. If the certificate is not added, the certificate cannot be
trusted.
Tip: Any Tivoli Storage Manager documentation that indicates “SSL” or to “select
SSL” applies to TLS.
For more information about TLS, see “Configuring TLS for LDAP directory
servers” on page 914.
The backup-archive client must import a .arm file according to the default label
that is used. The following table shows you which file to import:
Table 84. Determining the .arm file to use according to the default label
Type of certificate             Default label in the key database   Import this file
Server self-signed certificate  "TSM Server SelfSigned Key"         cert.arm
Server self-signed certificate  "TSM Server SelfSigned SHA Key"     cert256.arm
The cert256.arm file is generated by the V6.3 server for distribution to the V6.3 or
later backup-archive clients. The TLS protocol requires the cert256.arm file. The
cert.arm file might also be generated by the V6.3 server, but is not designed for
passwords that authenticate with an LDAP server. To show the available
certificates, issue the gsk8capicmd_64 -cert -list -db cert.kdb -stashed
command.
Important: To use TLS, the default label must be “TSM Server SelfSigned SHA
key” and you must specify the SSLTLS12 YES server option.
To configure Tivoli Storage Manager servers and clients for TLS, complete the
following steps:
1. Specify the TCP/IP port on which the server waits for TLS-enabled client
communications. You can use the SSLTCPADMINPORT or SSLTCPPORT options, or
both to specify TLS port numbers.
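For example, the server options file might contain the following entries; the port
numbers shown are only illustrative:
ssltcpport 1543
ssltcpadminport 1542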
If the key database file (cert.kdb) does not exist, it is created. For Tivoli
Storage Manager server V6.3.3 and later, the cert256.arm file and other
TLS-related files are created when the server is first started. If a password exists
for the server database, it is reused for the new key database. After creating the
database, the key database access password is generated and stored.
The label, in this case TSM061, must be unique within the client key database.
Choose a label that identifies the server to which it is associated. Ensure that
the transfer method is secure. This public certificate is made the default
certificate if a self-signed certificate from an earlier release is not found in the
key database.
4. Using a backup-archive client user ID, specify SSL YES in the dsm.opt client
options file. The TLS communications start and the TCPPORT administrative
client option value is updated.
5. If you want to use a different certificate, install the certificate authority (CA)
root certificate on all clients. A set of default root certificates are already
installed if you specified the -populate parameter in the command when you
created the key database file.
For more information, see the Backup-Archive Clients Installation and User's Guide.
Related reference:
“Adding a certificate to the key database” on page 910
For IPv4 or IPv6, the COMMMETHOD server option must specify either TCPIP or
V6TCPIP. The server options for TLS communications are SSLTCPPORT and
SSLTCPADMINPORT. The server can listen on separate ports for the following
communications:
v Backup-archive clients that use the regular protocol
v Administrator IDs that use the regular protocol
The backup-archive client user decides which protocol to use and which port to
specify in the dsmserv.opt file for the SSLTCPADMINPORT option. If the
backup-archive client requires TLS authentication but the server is not in TLS
mode, the session fails.
Related concepts:
“Managing passwords and logon procedures” on page 926
Related tasks:
“Configuring Tivoli Directory Server for TLS on the iKeyman GUI” on page 914
“Configuring Tivoli Directory Server for TLS on the command line” on page 916
Related reference:
“Configuring Windows Active Directory for TLS/SSL” on page 917
You can use your own certificates or purchase certificates from a CA. Either can be
installed and added to the key database. If you include the -stashpw parameter on
a GSKit gsk8capicmd_64 command, the password that you define is saved for later
use.
The key database is created when you start the Tivoli Storage Manager server. If
the certificate is signed by a trusted CA, obtain the certificate, install it in the key
database, and restart the server. Because the certificate is provided by a trusted
authority, the certificate is accepted by Tivoli Storage Manager and communication
between server and client can start.
You can use a Transport Layer Security (TLS) certificate if the client trusts the
certificate authority (CA). Trust is established when you add a signed certificate to
the server key database and use a root certificate for the CA in the client key
database.
The Global Security Kit (GSKit) is included in the Tivoli Storage Manager server
installation. The backup-archive client and server communicate with SSL through
services provided by GSKit.
Complete the following steps to add a certificate to the key database using GSKit:
1. Obtain a signed, server key database certificate from your CA.
2. To receive the signed certificate and make it the default for communicating
with clients, issue the following command:
gsk8capicmd_64 -cert -receive -db cert.kdb
-pw password -stash -file cert_signed.arm -default_cert yes
Tip: For this example, the client key database name is dsmcert.kdb.
6. To verify that the client can successfully connect, issue the dsmc query session
command.
If you do not have a backup copy of the cert.kdb file, perform the following steps:
1. Issue the DELETE KEYRING server command to delete the entry for it that is
located in the Tivoli Storage Manager database.
2. Delete all remaining cert.* files.
3. Shut down the server.
4. Start the server. The server automatically creates a new cert.kdb file and a
corresponding entry in the Tivoli Storage Manager database. If you do not issue
the DELETE KEYRING command, the server attempts, on startup, to create the key
database with the previous password.
5. Redistribute the new cert.arm file to all backup-archive clients that are using
TLS. Reinstall any third-party certificates on the backup-archive client. If you
are using an LDAP directory server to authenticate passwords, add the root
certificate that was used to sign the LDAP server’s certificate. If the root
certificate is already a default trusted certificate, you do not have to add it
again.
The documentation for configuring TLS for Tivoli Integrated Portal is available
within the Tivoli Integrated Portal information center.
Log on to the Administration Center and click Help to open the information center.
Search for “SSL.”
To set up the storage agent to use SSL communication with the Tivoli Storage
Manager server and client, complete the following steps:
1. On the storage agent, issue the DSMSTA SETSTORAGESERVER command to
initialize the storage agent and add communication information to the device
configuration file and the storage agent options file dsmsta.opt:
Hint: The following command is entered on one line, but is displayed here on
multiple lines to make it easier to read.
dsmsta setstorageserver myname=sta
mypa=sta_password
myhla=ip_address
servername=server_name
serverpa=server_password
hla=ip_address
lla=ssl_port
STAKEYDBPW=password
ssl=yes
Requirement:
v When you set the SSL=YES and STAKEYDBPW=password parameters, a key
database file is set up in the storage agent options file, dsmsta.opt. All
passwords are obfuscated in dsmsta.opt.
v To enable SSL communication, ensure that the Tivoli Storage Manager LLA
parameter specifies the server SSLTCPADMIN port and set the SSL
parameter to YES.
2. Import the Tivoli Storage Manager server certificate, cert256.arm, to the key
database file for the storage agent. Ensure that the required SSL certificates are
in the key database file that belongs to each storage agent that uses SSL
communication. To import the SSL certificate, switch to the storage agent
directory and issue the following command:
gsk8capicmd_64 -cert -add -label server_example_name
-db cert.kdb -stashed -file cert256.arm -format ascii
3. Specify the SSLTCPPORT and the SSLTCPADMINPORT options in the dsmsta.opt
options file.
4. Create the key database certificate and default certificates by starting the
storage agent.
Tip: To provide the new password to the storage agent, specify the
STAKEYDBPW=newpassword parameter with the DSMSTA SETSTORAGESERVER
command. Rerun the DSMSTA SETSTORAGESERVER command.
5. On the Tivoli Storage Manager server, issue the following command:
define server sta
hla=ip_address
lla=ssl_port
serverpa=password
ssl=yes
6. Stop the storage agent.
7. Stop the Tivoli Storage Manager server.
8. Import the cert256.arm certificate from the storage agent to the key database
file for the Tivoli Storage Manager server. Ensure that the required SSL
certificates are in the key database file for the server. Then restart the Tivoli
Storage Manager server and the storage agent.
When the Tivoli Storage Manager server and storage agent initiate communication,
SSL certificate information is displayed to indicate that SSL is in use.
Related reference:
“Adding a certificate to the key database” on page 910
The directory servers that are available are IBM Tivoli Directory Server V6.2 or 6.3
or Windows Active Directory V2003 or 2008. You can configure Tivoli Directory
Server with the graphical user interface (GUI) or with the command-line interface.
See the following topics for more information about configuring a directory server
for TLS:
v “Configuring Tivoli Directory Server for TLS on the iKeyman GUI”
v “Configuring Tivoli Directory Server for TLS on the command line” on page 916
v “Configuring Windows Active Directory for TLS/SSL” on page 917
Configuring IBM Tivoli Directory Server is one of the preliminary tasks you must
do before you can authenticate passwords with an LDAP directory server. The
Tivoli Directory Server can use a self-signed certificate to secure the
communication between server and backup-archive client, and the LDAP directory
server.
You can use the iKeyman graphical user interface (GUI) to set up Tivoli Directory
Server. If the Tivoli Storage Manager server already has a trusted certificate from
your LDAP server, you do not have to complete the steps that are documented
here. If the LDAP directory server already has a signed certificate, you do not have
to complete these steps.
To configure Tivoli Directory Server for Transport Layer Security (TLS) by using
the iKeyman GUI, complete the following steps:
1. Install and configure Java Runtime Environment 1.4.1 or later before you
install Tivoli Directory Server.
You must configure IBM Tivoli Directory Server before you can authenticate
passwords with an LDAP directory server. The Tivoli Directory Server can use a
self-signed certificate to secure the communication between server and
backup-archive client, and the LDAP directory server.
If the Tivoli Storage Manager server already has a trusted certificate from your
LDAP server, you do not have to complete the steps that are documented here. If
the LDAP directory server already has a signed certificate, you do not have to
complete these steps.
To configure Tivoli Directory Server for Transport Layer Security (TLS), complete
the following steps:
1. Using the Tivoli Directory Server instance user name, create the key database
by issuing the following command:
gsk8capicmd_64 -keydb -create -db "directory/filename.kdb"
-pw "pa$$=w0rd" -stashpw -populate
2. Create a self-signed certificate or get one from a certificate authority (CA). To
create a self-signed certificate, issue the following command:
gsk8capicmd_64 -cert -create -db "directory/filename.kdb" -stashed -label
"LDAP_directory_server" -dn "cn=ldapserver.company.com"
-san_dnsname ldapserver.company.com -size 2048
-sigalg SHA256WithRSA -expire 3650
3. Extract the certificate to a file by issuing the following command:
gsk8capicmd_64 -cert -extract -db "directory/filename.kdb" -stashed -label
"LDAP_directory_server" -target ldapcert.arm -format ascii
4. Copy the certificate file (ldapcert.arm) to the Tivoli Storage Manager server.
5. To add the certificate to the Tivoli Storage Manager server key database, issue
the following command from the Tivoli Storage Manager server. You must issue
the command from the instance user ID from the instance directory.
gsk8capicmd_64 -cert -add -db "cert.kdb" -stashed -label
"LDAP_directory_server" -format ascii -file ldapcert.arm
6. Configure the key database file to work with Tivoli Directory Server. To set the
key database for TLS, issue the following command:
idsldapmodify -D <adminDN> -w <adminPW> -i <filename>
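The file that you specify with the -i parameter contains the TLS configuration
attributes for the directory server. The following entries are only an
illustration; the entry and attribute names are assumed from a typical Tivoli
Directory Server configuration, so verify them against your Tivoli Directory
Server documentation:
dn: cn=SSL, cn=Configuration
changetype: modify
replace: ibm-slapdSecurity
ibm-slapdSecurity: ssl
-
replace: ibm-slapdSslKeyDatabase
ibm-slapdSslKeyDatabase: directory/filename.kdb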
Tip: The Tivoli Storage Manager server authenticates with the “LDAP simple
password authentication” method.
You must configure Windows Active Directory before the Tivoli Storage Manager
server can authenticate passwords.
To set up the Windows Active Directory server, complete the following steps:
1. Turn off automatic root certificate updates to Windows Update if your
Windows Active Directory server does not have access to the internet.
2. Synchronize the system times of the Tivoli Storage Manager server and the
Windows Active Directory system. You can use a Network Time Protocol (NTP)
server. For more information about synchronizing the system times, see your
operating system documentation. You can also see the Microsoft website for
information about synchronizing Active Directory (http://
technet.microsoft.com/en-us/library/cc786897).
3. Set up Transport Layer Security (TLS) for LDAP server connections. Go to the
Microsoft website (http://www.microsoft.com) and search for LDAP and SSL.
a. Obtain a signed certificate. Active Directory requires that a signed certificate
be in the Windows certificate store to enable TLS. You can obtain a signed
certificate from the following sources:
v A third-party certificate authority (CA)
v Install the Certificate Services role on a system that is joined to the Active
Directory domain and configure an enterprise root CA
Tip: To determine whether the file is DER binary or ASCII, open the certificate
in a text editor. If you can read the characters, then the file is ASCII.
Ensure that you have the root certificate and that the subject on the certificate
matches the CA name. The “Issued by” and “Issued to/subject” for the root
certificate must be the same. Export the CA certificate by using one of the
following methods:
v Export the CA certificate from the “Certificates (Local Computer)” Microsoft
Management Console (MMC) snap-in.
v Copy the certificate from C:\Windows\system32\certsrv\CertEnroll\*.crt
into the server key database. The file is in DER binary format.
v Download the CA certificate file from the Certificate Services web interface
http://<certificate server hostname>/certsrv/, if it is enabled through
the Certificate Enrollment Web Services.
6. Copy the certificate to the Tivoli Storage Manager server.
Tip: The Tivoli Storage Manager server authenticates with the “LDAP simple
password authentication” method.
Related tasks:
Setting up TLS
An administrator with system privilege can revoke or grant new privileges to the
SERVER_CONSOLE user ID. However, an administrator cannot update, lock,
rename, or remove the SERVER_CONSOLE user ID. The SERVER_CONSOLE user
ID does not have a password.
Therefore, you cannot use the user ID from an administrative client unless you set
authentication to off.
Important: Two server options give you additional control over the ability of
administrators to perform tasks.
v QUERYAUTH allows you to select the privilege class that an administrator must
have to issue QUERY and SELECT commands. By default, no privilege class is
required. You can change the requirement to one of the privilege classes,
including system.
v REQSYSAUTHOUTFILE allows you to specify that system authority is required for
commands that cause the server to write to an external file (for example,
BACKUP DB). By default, system authority is required for such commands.
See the Administrator's Reference for details on server options.
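For example, the following server options, with values shown only as an
illustration, require system privilege for QUERY and SELECT commands and keep the
default requirement of system authority for commands that write to external files:
queryauth system
reqsysauthoutfile yes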
Table 85 summarizes the privilege classes, and gives examples of how to set
privilege classes.
Table 85. Authority and privilege classes

System (for example: grant authority rocko classes=system)
Perform any administrative task with the server.
v System-wide responsibilities
v Manage the enterprise
v Manage IBM Tivoli Storage Manager security

Unrestricted Policy (for example: grant authority smith classes=policy)
Manage the backup and archive services for nodes assigned to any policy domain.
v Manage nodes
v Manage policy
v Manage schedules

Restricted Policy (for example: grant authority jones domains=engpoldom)
Same capabilities as unrestricted policy except authority is limited to specific
policy domains.

Unrestricted Storage (for example: grant authority coyote classes=storage)
Manage server storage, but not definition or deletion of storage pools.
v Manage the database and recovery log
v Manage IBM Tivoli Storage Manager devices
v Manage IBM Tivoli Storage Manager storage

Restricted Storage (for example: grant authority holland stgpools=tape*)
Manage server storage, but limited to specific storage pools.
v Manage IBM Tivoli Storage Manager devices
v Manage IBM Tivoli Storage Manager storage
Related concepts:
“Overview of remote access to web backup-archive clients” on page 469
“Managing Tivoli Storage Manager administrator IDs”
Related reference:
“Administrative authority and privilege classes” on page 918
To query the system for a detailed report on administrator ID DAVEHIL, issue the
following example QUERY ADMIN command:
query admin davehil format=detailed
Only administrator IDs that authenticate to the LDAP directory server are listed in
the report.
For example, JONES has restricted policy privilege for policy domain
ENGPOLDOM.
1. To extend JONES’ authority to policy domain MKTPOLDOM and add operator
privilege, issue the following example command:
grant authority jones domains=mktpoldom classes=operator
2. As an additional example, assume that three tape storage pools exist:
TAPEPOOL1, TAPEPOOL2, and TAPEPOOL3. To grant restricted storage
privilege for these storage pools to administrator HOLLAND, you can issue the
following command:
grant authority holland stgpools=tape*
3. HOLLAND is restricted to managing storage pools with names that begin with
TAPE, if the storage pools existed when the authority was granted. HOLLAND
is not authorized to manage any storage pools that are defined after authority
has been granted. To add a new storage pool, TAPEPOOL4, to HOLLAND’s
authority, issue the following command:
grant authority holland stgpools=tapepool4
For example, rather than revoking all of the privilege classes for administrator
JONES, you want to revoke only the operator authority and the policy authority to
policy domain MKTPOLDOM.
Issue the following command to revoke only the operator authority and the policy
authority to policy domain MKTPOLDOM:
revoke authority jones classes=operator domains=mktpoldom
For example, administrator HOGAN has system authority. To reduce authority for
HOGAN to the operator privilege class, perform the following steps:
1. Revoke the system privilege class by issuing the following command:
revoke authority hogan classes=system
2. Grant operator privilege class by issuing the following command:
grant authority hogan classes=operator
For example, to revoke both the storage and operator privilege classes from
administrator JONES, issue the following command:
revoke authority jones
Tip: If you authenticate a password with an LDAP directory server, the letters and
characters that comprise the password are case-sensitive.
Renaming an administrator ID
You can rename an administrator ID if it needs to be identified by a new ID. You
can also assign an existing administrator ID to another person by issuing the
RENAME command. You cannot rename an administrator ID to one that exists on the
system.
For example, if administrator HOLLAND leaves your organization, you can assign
administrative privilege classes to another user by completing the following steps:
1. Assign HOLLAND's user ID to WAYNESMITH by issuing the RENAME ADMIN
command:
rename admin holland waynesmith
By renaming the administrator's ID, you remove HOLLAND as a registered
administrator from the server. In addition, you register WAYNESMITH as an
administrator with the password, contact information, and administrative
privilege classes previously assigned to HOLLAND.
2. Change the password to prevent the previous administrator from accessing the
server by entering:
update admin waynesmith new_password contact="development"
Important:
1. You cannot remove the last system administrator from the system.
2. You cannot remove the administrator SERVER_CONSOLE.
You can also lock or unlock administrator IDs according to the form of
authentication that they use. When you specify AUTHENTICATION=LOCAL in the
command, all administrator IDs that authenticate with the Tivoli Storage Manager
server are affected. When you specify AUTHENTICATION=LDAP in the command, all
administrator IDs that authenticate with an LDAP directory server are affected.
Table 86 describes the typical tasks for managing access to the server and clients.
Table 86. Managing access

Allow a new administrator to access the server:
1. "Registering administrator IDs" on page 920
2. "Granting authority to administrators" on page 922

Modify authority for registered administrators: "Managing Tivoli Storage Manager
administrator IDs" on page 920

Give a user authority to access a client remotely: "Managing client access
authority levels" on page 471

Give an administrator authority to create a backup set for a client node:
"Generating client backup sets on the server" on page 568

Prevent administrators from accessing the server: "Locking and unlocking
administrator IDs from the server" on page 924

Prevent new sessions with the server, but allow current sessions to complete:
"Disabling or enabling access to the server" on page 494

Prevent clients from accessing the server: "Locking and unlocking client nodes" on
page 464

Change whether passwords are required to access IBM Tivoli Storage Manager:
"Disabling the default password authentication" on page 937

Change requirements for passwords:
v "Modifying the default password expiration period for passwords that are
managed by the Tivoli Storage Manager server" on page 933
v "Setting a limit for invalid password attempts" on page 936
v "Setting a minimum length for a password" on page 937

Prevent clients from initiating sessions within a firewall: "Server-initiated
sessions" on page 453
Tip: For information on connecting with IBM Tivoli Storage Manager across a
firewall, refer to the Installation Guide.
Figure 105. Configuring the server to authenticate passwords with an LDAP directory server
(The Tivoli Storage Manager backup-archive clients connect to the Tivoli Storage
Manager server, which uses its DB2 database and communicates with the LDAP server.)
The LDAP directory server interprets letters differently from the Tivoli Storage
Manager server. The LDAP directory server distinguishes the case that is used,
either uppercase or lowercase. For example, the LDAP directory server can
distinguish between secretword and SeCretwOrd. The Tivoli Storage Manager server
interprets all letters for LOCAL passwords as uppercase.
The following terms are used when describing the LDAP directory server
environment:
Distinguished name (DN)
A unique name in an LDAP directory. The DN consists of the following
information. The information must be ordered in this way.
v The relative distinguished name (RDN)
v The organizational unit (ou)
v The organization (o)
v The country (c)
For example:
uid=jackspratt,ou=marketing,o=corp.com,c=us
uid=cbukowski,ou=manufacturing,o=corp.com,c=us
uid=abbynormal,ou=sales,o=corp.com,c=us
You must know the user ID that was specified in the SET LDAPUSER command. For
information about the Tivoli Directory access control lists, go to the Tivoli
Directory server information center (http://publib.boulder.ibm.com/infocenter/
tivihelp/v2r1/topic/com.ibm.IBMDS.doc/admin_gd410.htm).
Note: Windows Active Directory users who change passwords when the “Enforce
password history” policy is enabled can authenticate with the previous password
for one hour. For more information, see the Microsoft site (http://
support.microsoft.com/?id=906305).
Complete the following steps to set up the LDAP directory server so that it can
authenticate passwords:
1. Ensure that you have a directory server installed on the LDAP server. Use one
of the following directory servers:
v IBM Tivoli Directory Server V6.2 or 6.3
v Windows Active Directory version 2003 or 2008
Requirement: If you use Tivoli Directory Server V6.2, you must update Global
Security Kit (GSKit) to V7.0.4.33 or later. For more information, see SSL errors
after upgrading to ITDS 6.3 client (http://www.ibm.com/support/
docview.wss?uid=swg21469388).
2. Create the base distinguished name (Base DN) on the LDAP directory server
for the Tivoli Storage Manager namespace. The Base DN is the part of the
LDAP directory structure from which Tivoli Storage Manager operates,
specified in the LDAPURL option. For example, ou=armonk,cn=tsmdata can be a
Base DN. See your LDAP documentation for how to create a Base DN.
3. Edit the access controls on the LDAP directory server and grant access to the
Base DN to the user ID, which is specified in the SET LDAPUSER command. This
ID cannot be a part of the Base DN. You can grant access to the Base DN to
more than one user ID. However, the security of the LDAP server can be easily
compromised if you have too many user IDs with full permission over the Base
DN.
4. Set up the directory server. See “Configuring TLS for LDAP directory servers”
on page 914.
You establish policies for passwords that will be authenticated by each server.
Restriction: You can issue Tivoli Storage Manager server commands to manage
your password policies. If you set a password policy on both the LDAP server and
Tivoli Storage Manager server, the settings might conflict. The result might be that
you are not able to access a node or log on with an administrator ID. For
information on the maximum invalid attempts policy, see the table in “Setting a
limit for invalid password attempts” on page 936.
In addition to setting a policy for case sensitivity, you can configure the
LDAP-authenticated password policy to set the following options:
Password history
The password history is the number of times that you must define a new
password before you can reuse a password.
Minimum age
The minimum age is the length of time before you can change the
password.
Maximum age
The maximum age is the length of time before you must change the
password.
A combination of characters
You can determine the number of special characters, numbers, and
alphabetical characters for your passwords. For example, some products set
up a password policy to enforce the following rules:
v The password cannot contain the user account name or parts of the user
full name that exceed three consecutive characters
v The password must be at least eight characters in length
v The password must contain characters from two of the following four
categories:
– English uppercase characters (A through Z)
– English lowercase characters (a through z)
– Base 10 digits (0 through 9)
– Non-alphabetic characters (for example, !, $, #, %)
The LDAP server that you use determines the complexity that you can
have for passwords outside of Tivoli Storage Manager.
Complete the following steps on the Tivoli Storage Manager server to authenticate
passwords with an LDAP directory server:
1. Import the key database file from the LDAP directory server. You can use any
method to copy the file from the LDAP directory server to the Tivoli Storage
Manager server.
2. Open the dsmserv.opt file and specify the LDAP directory server with the
LDAPURL option. Specify the LDAP directory server URL and the base
distinguished name (DN) on the LDAPURL option. For example:
LDAPURL ldap://server.dallas.gov/cn=project_x
The default port is 389. If you want to use a different port number, specify it as
part of the LDAPURL option. For example, to specify a port of 222:
LDAPURL ldap://server.dallas.gov:222/cn=project_x
3. Restart the Tivoli Storage Manager server.
4. Issue the SET LDAPUSER command to define the ID of the user who can
administer Tivoli Storage Manager operations on the LDAP directory server.
This user ID must have full administrative authority over the Base DN and be
able to add, delete, and modify all Base DN entries. For example:
set ldapuser "cn=apastolico,ou=manufacturing,o=dhs,c=us"
See the Administrator’s Reference for more information about the SET LDAPUSER
command.
5. Issue the SET LDAPPASSWORD command to define the password for the user ID
that is defined in the LDAPUSER option. For example:
set ldappassword "boX=T^p$"
If the user ID and password are verified to be correct, communication lines are
opened and the node or administrator ID can run Tivoli Storage Manager
applications.
For example:
register admin admin1 "c0m=p1e#Pa$$w0rd?s" authentication=ldap
register node node1 "n0de^Passw0rd%s" authentication=ldap
After you issue the commands, the passwords for administrator ID admin1 and
the node ID node1 can be authenticated with an LDAP directory server.
Tip: A node and its password or an administrator ID and its password each
occupy one inetOrgPerson object on the LDAP directory server. For information
about inetOrgPerson objects, see Definition of the inetOrgPerson LDAP Object
Class (http://www.ietf.org/rfc/rfc2798.txt).
To know which authentication method is in use, issue the QUERY NODE
FORMAT=DETAILED or QUERY ADMIN FORMAT=DETAILED command.
2. Optional: To register all new nodes and administrator IDs with a default
authentication method, issue the SET DEFAULTAUTHENTICATION command. Any
REGISTER NODE or REGISTER ADMIN commands that are issued after you issue the
SET DEFAULTAUTHENTICATION command create nodes and administrators with the
default authentication method. You can set the authentication methods to
LDAP or LOCAL.
For information about the SET DEFAULTAUTHENTICATION command, see the
Administrator's Reference.
When you authenticate nodes and administrator IDs with an LDAP directory
server, you ensure more protection for your passwords. Communication lines
between the LDAP directory server and Tivoli Storage Manager are protected with
Transport Layer Security (TLS).
You can change a password authentication method after you configure the LDAP
directory server and the Tivoli Storage Manager server. However, you cannot
update the authentication method for your own user ID unless you have system
authority. If necessary, another administrator must change the authentication
method.
The following example UPDATE NODE command has a password that is made up
of characters that are supported by the Tivoli Storage Manager server:
update node node1 n0de^87^n0de authentication=ldap
Tip: A shared LDAP server might have a password that is on the LDAP
directory server. In that case, the user is not prompted to enter a new
password.
2. Optional: Issue the QUERY NODE FORMAT=DETAILED or the QUERY ADMIN
FORMAT=DETAILED command to view the results. If you must change the
authentication method for several nodes or administrator IDs, you can use a
wildcard character (*). For example,
update node * authentication=ldap
In the preceding example, the authentication method for all nodes is changed
to “LDAP pending.”
All nodes and administrator IDs require new passwords after you run the
UPDATE command. Before the node and administrative IDs are given a password,
they are in the LDAP pending state. The node and administrator IDs are updated
to use LDAP authentication, but you must first give them a password.
Find the nodes that are authenticated with the LDAP directory server:
query node authentication=ldap
Find the administrator IDs that do not authenticate their passwords with an LDAP
directory server:
query admin authentication=local
You can query individual nodes or administrator IDs to determine whether they
authenticate with an LDAP directory server. To determine the password
authentication method for node tivnode_12 issue the following command:
query node tivnode_12 format=detailed
Issue the SET PASSEXP command to set the password expiration period for selected
administrator IDs or client nodes. You must specify the administrator ID or node
name with the ADMIN or NODE parameter in the SET PASSEXP command. If you set
the expiration period only for selected users, the expiration period can be 0 - 9999
days. A value of 0 means that user's password never expires.
Restriction: The SET PASSEXP command does not affect administrator IDs and
nodes if their passwords are authenticated with an LDAP directory server.
Issue the following command to set the expiration period of client node
node_tsm12 to 120 days:
set passexp 120 node=node_tsm12
| The Tivoli Storage Manager server administrator has a new node that must
| authenticate its password with an LDAP directory server. The first action is to
| create the “cn=tsmdata” entry and Base DN on the LDAP directory server. The
| server administrator can then set up the LDAPURL option that is based on the Base
| DN. Here is an example entry for the LDAPURL option:
| dsmserv.opt
| LDAPURL ldaps://mongo.storage.tucson.ibm.com:389/cn=tsmdata
| After you set the LDAPURL option, restart the server. Complete the following steps
| to configure the server:
| 1. Issue the query option ldapurl command to validate that you entered all of
| the values correctly.
| 2. Issue the set ldapuser uid=tsmserver,ou=Users,cn=aixdata command to
| configure the LDAPUSER.
| 3. Issue the SET LDAPPASSWORD adsm4Data command to define the password.
| 4. For this scenario, the node that must be added is NODE1. Issue the following
| command:
| register node node1 c0mplexPassw0rd authentication=ldap
| A single node (UPDNODE1) that currently authenticates with the Tivoli Storage
| Manager server is now required to authenticate with an LDAP directory server. For
| UPDNODE1, use the AUTHENTICATION parameter in the UPDATE NODE command. For
| example:
| update node updnode1 newC0mplexPW$ authentication=ldap
| If you want to update all your nodes to authenticate with an LDAP directory
| server, you can use a wildcard. Issue the following command to have all the nodes
| authenticate with an LDAP directory server:
| update node * authentication=ldap
| If you have nodes that authenticate with the Tivoli Storage Manager server and
| nodes that authenticate with an LDAP directory server, you can determine where
| nodes are authenticating. Issue the following command to determine which nodes
| authenticate with an LDAP directory server:
| query node authentication=ldap
| Issue the following command to determine which nodes authenticate with the
| Tivoli Storage Manager server:
| query node authentication=local
| You can issue a LOCK NODE command to lock all nodes that authenticate with the
| Tivoli Storage Manager server. These nodes might be rarely used, and you might
| not know by which password authentication method they are supposed to be
| managed. When you lock the nodes, the node owners must consult with you. At
| that point, you can find out whether they want to use the LDAP directory server
| or stay with the Tivoli Storage Manager server. You can issue the LOCK NODE or
| UNLOCK NODE commands with a wildcard to lock or unlock all nodes in that group.
| To lock all nodes that authenticate with the Tivoli Storage Manager server, issue
| the following command:
| lock node * authentication=local
| After you configure everything, you can design it so that every new node and
| administrator authenticate with an LDAP directory server. After you issue the SET
| DEFAULTAUTH command, you do not have to designate the authentication method
| for any REGISTER NODE or REGISTER ADMIN commands. Issue the following
| command to set the default authentication method to LDAP:
| set defaultauthentication ldap
| Any REGISTER NODE or REGISTER ADMIN command that is issued after this SET
| DEFAULTAUTH command inherits the authentication method (LDAP). If you want to
| register a node that authenticates with the Tivoli Storage Manager server, include
| AUTHENTICATION=LOCAL in the REGISTER NODE command.
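| For example, to register a node that continues to authenticate with the Tivoli
| Storage Manager server after the default is changed to LDAP, you might issue the
| following command; the node name and password are only placeholders:
| register node localnode1 l0calPassw0rd authentication=local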
On the Tivoli Storage Manager server, issue the SET INVALIDPWLIMIT command to
limit the invalid password attempts for the Tivoli Storage Manager namespace.
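For example, to lock a node or administrator ID after four consecutive invalid
password attempts, where the limit shown is only an example, issue the following
command:
set invalidpwlimit 4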
If you initially set a limit of 4 and then lower the limit, some clients might fail
verification during the next logon attempt.
After a client node is locked, only an administrator with storage authority can
unlock the node.
An administrator can also force a client to change their password on the next
logon by specifying the FORCEPWRESET=YES parameter on the UPDATE NODE or UPDATE
ADMIN command. For more information, see the Administrator's Reference.
Related tasks:
“Locking and unlocking client nodes” on page 464
“Locking and unlocking administrator IDs from the server” on page 924
This feature affects all node and administrator passwords, whether the password
authenticates with the Tivoli Storage Manager server or the LDAP directory server.
To set the minimum password length to eight characters, issue the following
example command:
set minpwlength 8
The default value at installation is 0. A value of 0 means that the password length
is not checked. You can set the length value from 0 to 64.
You can only disable password authentication for passwords that authenticate with
the Tivoli Storage Manager server (LOCAL).
To allow administrators and client nodes to access the Tivoli Storage Manager
server without entering a password, issue the following command:
set authentication off
With this feature, a user can log on to a Windows computer and access the
backup-archive client without entering another password. When unified logon is
enabled, the server continues to use its normal authentication methods for
protocols other than Named Pipes.
The procedure described here assumes that the Tivoli Storage Manager server and
all the Tivoli Storage Manager client computers are in the same Windows domain.
A Windows domain is a way of allowing the Windows Domain Controller to
manage the user accounts for all members of the domain. The Tivoli Storage
Manager unified logon procedure takes advantage of the convenience of allowing
the Windows domain to manage user accounts.
Tip: The Tivoli Storage Manager server can run successfully on the Windows
server or workstation operating system and does not have to reside on the
Windows Domain Controller computer.
To enable unified logon, you must have the following system requirements:
v The backup-archive client must be installed on a supported Windows operating
system.
v The Tivoli Storage Manager server must enable the Named Pipes protocol.
v The backup-archive client must use the Named Pipes communications method.
The unified logon feature applies only to node and administrator passwords that
authenticate to the Tivoli Storage Manager server. Passwords that are stored and
authenticate to an LDAP directory server are not affected by this feature.
Authentication must be LOCAL (Tivoli Storage Manager server) for unified logon to
apply.
Unified logon affects node and administrator passwords that authenticate to the
Tivoli Storage Manager server. Passwords that authenticate to an LDAP directory
server are not affected.
Tip: The preceding procedure is the same as entering the following lines
in the server options file (dsmserv.opt):
adsmgroupname tsmserver
commmethod namedpipe
namedpipename \\.\pipe\server1
securepipes yes
5. Perform the following steps to restart the Tivoli Storage Manager server:
a. From the left pane, expand the Tivoli Storage Manager server.
b. Expand Reports.
c. Click Service Information.
d. From the right pane, select the server and click Start. The status for the
server changes to Running.
6. Ensure that the backup-archive clients that you added to the Tivoli Storage
Manager server group are registered Tivoli Storage Manager client nodes.
Tip:
a. In the example, server_name is the NetBIOS name of the computer where
the Tivoli Storage Manager server is running.
b. In the example, nodename can be substituted with the name of the
workstation where the Tivoli Storage Manager server is installed.
c. The username must be the same as the Windows account name that the
user is logged in as.
10. To verify that unified logon is enabled, start the backup-archive client. You can
also perform a backup and restore.
Related tasks:
“Registering nodes with the server” on page 440
Database backups, infrastructure setup files, and copies of client data can be stored
offsite, as shown in Figure 106.
(Figure 106: On-site, the server maintains the database, the recovery log, and a
disk storage pool. Off-site, database backups, infrastructure setup files, and
backup and archive copies of client data are stored.)
DRM: The disaster recovery manager (DRM) can automate some disaster recovery tasks. A
note like this one identifies those tasks.
Related tasks:
“Storage pool hierarchies” on page 288
Related information:
Configuring clustered environments
DRM: To store database backup media and setup files offsite, you can use disaster recovery
manager.
Related tasks:
Chapter 36, “Disaster recovery manager,” on page 1053
Automatic backups by the database manager are based on the following values
that are set by Tivoli Storage Manager:
v The active log space that was used since the last backup, which triggers a full
database backup
v The active log utilization ratio, which triggers an incremental database backup
You can back up the database to tape, FILE, or to remote virtual volumes.
Reserve the device class that you want to use for backups so that the server does
not attempt to back up the database if a device is not available. If a database
backup shares a device class with a lower priority operation, such as reclamation,
and all the devices are in use, the lower priority operation is automatically
canceled. The canceled operation frees a device for the database backup.
Restriction: Tivoli Storage Manager does not support database backup (loading
and unloading) to a Centera device.
To specify the device class to be used for database backups, issue the SET
DBRECOVERY command. For example, to specify a device class named DBBACK,
issue the following command:
set dbrecovery dbback
Tips:
v When you issue the SET DBRECOVERY command, you can also specify the number
of concurrent data streams to use for the backup. Use the NUMSTREAMS
parameter, as shown in the example after this list.
v To change the device class, reissue the SET DBRECOVERY command.
v If you issue the BACKUP DB command with the TYPE=FULL parameter, and the
device class is not the one that is specified in the SET DBRECOVERY command, a
warning message is issued. However, the backup operation continues and is not
affected.
v Device class definitions are saved in the device configuration files.
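For example, to keep DBBACK as the device class for database backups and request
two concurrent data streams, where the stream count is only illustrative, you
might issue the following command:
set dbrecovery dbback numstreams=2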
Related concepts:
“Configuring concurrent multistreaming” on page 944
Related tasks:
“Protecting the device configuration file” on page 950
By default, the percentage of the virtual address space that is dedicated to all
database manager processes is set to 70 - 80 percent of system random-access
memory.
To change this setting, specify the DBMEMPERCENT server option. Ensure that the
value that you specify provides adequate memory for applications other than
Tivoli Storage Manager that are running on the system.
Configuring concurrent multistreaming
Multiple, concurrent data streams reduce the time required to back up or restore
the database. You can specify the number of data streams that the IBM Tivoli
Storage Manager server uses for backup and restore operations.
For example, if you assign four drives to database backup processing, Tivoli
Storage Manager attempts to write to all four drives concurrently. For restore
operations, the server uses the information that is in the volume history file to
determine the number of data streams that were used during the backup
operation. The server attempts to use the same number of data streams during the
restore operation. For example, if the backup operation used four data streams, the
server attempts the restore operation using four data streams.
The number of drives that the server uses depends on how many drives are available
when the operation starts:
v Backup: If the number of available drives equals or exceeds the specified number
of streams, the server uses the number of drives that is equal to the specified
number of streams. If fewer drives are available than the specified number of
streams, the server uses all available drives.
v Restore: If the number of available drives equals or exceeds the number of
streams that were used in the backup operation, the server uses the number of
drives that is equal to that number of streams. A restore process never uses more
drives than the number of streams that were used to back up the database. If
fewer drives are available, the server uses all available drives; at least one
drive is required for restore processing.
Suppose that you specify four data streams for database backup operations. To
indicate the maximum number of volumes that can be simultaneously mounted,
you specify 4 as the value of the MOUNTLIMIT parameter in the device class
definition. If only three drives are available at the time of the backup operation,
the operation runs using three drives. A message is issued that indicates that fewer
drives are being used for the backup operation than the number requested. If all
four drives for the device class are online, but one drive is in use by another
operation, the backup operation has a higher priority and preempts use of the
drive. If you specify four data streams, but the value of the MOUNTLIMIT parameter
is 2, only two streams are used.
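For example, assuming that the device class DBBACK from the earlier example is a
sequential-access (tape) device class, you might allow four simultaneous mounts so
that each of four data streams can use a drive; the value is illustrative:
update devclass dbback mountlimit=4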
Important: Although multiple, concurrent data streams can reduce the time that is
required for a backup operation, the amount of time that you can save depends on
the size of the database. In general, the benefit of using multiple, concurrent data
streams for database backup and restore operations is limited if the database is less
than 100 GB.
Another potential disadvantage is that more volumes are required for multistream
processing than for single-stream processing. For example, if the backup of an 850
GB database requires a single Linear Tape-Open (LTO) volume, switching to four
data streams requires four volumes. Furthermore, those volumes might be partially
filled, especially if you use high-capacity volumes and device compression.
The decision to use multiple, concurrent data streams for database backup and
restore operations depends on the size of the database, the cost of media, and
performance impacts.
When deciding whether to use data streaming, consider the following issues to
determine whether the benefits of concurrent data streaming are sufficient. If the
disadvantages of multiple, concurrent data streaming exceed the benefits, continue
to use single-stream processing.
v What is the size of your database? In general, the amount of time that you save
by using multiple, concurrent data streams decreases as the size of the database
decreases because of the extra time caused by additional tape mounts. If your
database is less than 100 GB, the amount of time that you save might be
relatively small.
In many environments with databases larger than 100 GB, two database-backup
streams can provide superior performance. However, depending on your
environment, additional streams might not provide enough I/O throughput
relative to the size of your database, the devices that you use, and the I/O
capability of your environment. Consider using three or four database-backup
streams only for environments in which the following conditions apply:
– The Tivoli Storage Manager database is located on very high-performing disk
subsystems.
– The database is spread across several different RAID arrays that use multiple
database directories.
v How many drives are available for the device class to be used for database
backup?
v Will server operations other than database backup operations compete for
drives?
v If drives are preempted by a database backup operation, what will be the effect
on server operations?
v What is the cost of the tape volumes that you use for database backup
operations? For example, suppose that the backup of an 850 GB database
requires a single high-capacity LTO volume. If you specify four streams, the
same backup operation requires four volumes.
You can specify multiple data streams for automatic or manual database-backup
operations. For database restore operations, the server attempts to use the same
number of data streams that you specified for the backup operation.
v For automatic database-backup operations, issue the SET DBRECOVERY command
and specify a value for the NUMSTREAMS parameter.
v For manual database-backup operations, issue the BACKUP DB command and
specify a value for the NUMSTREAMS parameter. The value of the NUMSTREAMS
parameter that you specify with the BACKUP DB command overrides the value for
the NUMSTREAMS parameter that you specify with the SET DBRECOVERY command.
For example, if you have a device class DBBACK, issue the following command
to specify three data streams:
backup db dbback numstreams=3
Tips:
v To change the number of data streams for automatic database backup
operations, reissue the SET DBRECOVERY command and specify a different value
for the NUMSTREAMS parameter. For example, reissue the SET DBRECOVERY
command if you add additional drives to the target library or if drives are not
available because of maintenance or device failure. The new value specified by
the NUMSTREAMS parameter is used for the next backup operation.
v To display the number of data streams that are to be used for a database backup
operation, issue the QUERY DB command.
v During a database backup operation, the number of sessions that is displayed in
the output of the QUERY SESSION command or the SELECT command is equal to or
less than the number of specified data streams. For example, if you specified
four data streams, but only three drives are online, 3 sessions are displayed in
the output. If you issue the QUERY DRIVE command, the number of drives in use
is also 3.
v If you reduce the number of data streams after a database backup operation, this
information will not be available to the server when the database is restored. To
specify fewer data streams for the restore operation, take one or both of the
following actions in the device configuration file:
– Reduce the number of online and usable drive definitions by removing
DEFINE DRIVE commands.
– Update the value of the MOUNTLIMIT parameter of the DEFINE DEVCLASS
command.
v During database backup operations, stop other Tivoli Storage Manager
database activities. Other database activities compete for database I/O and
affect throughput during database backup operations that use multiple
streams.
Ensure that you can recover the database to its most current state or to a specific
point-in-time by making both full and incremental database backups:
v To restore the database to its most current state, you need the last full backup,
the last incremental backup after that full backup, and the active and archive log
files.
Tivoli Storage Manager can make full and incremental database backups to tape
while the server is running and available to clients. However, when deciding what
backups to do and when to do them, consider the following properties of backups:
v Full backups take longer than incremental backups.
v Full backups have shorter recovery times than incremental backups because you
must load only one set of volumes to restore the entire database.
v Full backups are required for the first backup and after extending the database
size.
v Only full backups prune archive log space in the archive log directory. If the
available active and archive log space gets low, full database backups occur
automatically. To help prevent space problems, schedule regular full backups
frequently.
For a full database backup, specify TYPE=FULL. For an incremental database backup,
specify TYPE=INCREMENTAL. For example, to run a full database backup using a
device class LTOTAPE, three volumes, and three concurrent data streams, issue the
following command:
backup db devclass=ltotape type=full volumenames=vol1,vol2,vol3
numstreams=3
Database backups require devices, media, and time. Consider scheduling backups
at specific times of the day and after major storage operations.
To schedule database backups, use the DEFINE SCHEDULE command. For a full
database backup, specify TYPE=FULL. For an incremental database backup, specify
TYPE=INCREMENTAL. For example, to set up a schedule to run a full backup to device
class FILE every day at 1:00 a.m., enter the following command:
define schedule daily_backup type=administrative
cmd="backup db deviceclass=file type=full" starttime=01:00
Tip: You can also schedule database backups as part of a maintenance
script that you create in the Administration Center.
A snapshot database backup is a full database backup that does not interrupt the
full and incremental backup series. Consider using snapshot database backups in
addition to full and incremental backups.
To make a snapshot database backup, issue the BACKUP DB command. For example,
to make a snapshot database backup to the TAPECLASS device class, enter the
following command:
backup db type=dbsnapshot devclass=tapeclass
New volume history entries are created for the snapshot database volumes.
Restriction: To prevent the accidental loss of what might be the only way to
recover the server, you cannot delete the most current snapshot database using the
DELETE VOLHISTORY command.
Related concepts:
“Volume history file and volume reuse” on page 98
Related tasks:
“Protecting the volume history file” on page 949
For protection against database and log media failures, place the active log and the
archive log in different file systems. In addition, mirror both logs. Mirroring
simultaneously writes data to two independent disks. For example, suppose that a
sudden power outage causes a partial page write. The active log is corrupted and
is not readable. Without mirroring, recovery operations cannot complete when the
server is restarted. However, if the active log is mirrored and a partial write is
detected, the log mirror can be used to construct valid images of the missing data.
To protect the active log, the archive log, and the archive failover log, take the
following steps:
v To specify the active log mirror, use the MIRRORLOGDIRECTORY parameter on the
DSMSERV FORMAT command. Mirror the active log in a file system that exists on a
different disk drive than the primary active log.
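For example, when you format a new server instance, a command like the
following one places the active log and its mirror on different drives; the directory
names and the active log size are illustrative:
dsmserv format dbdir=d:\tsm\db activelogsize=32768
activelogdirectory=e:\tsm\activelog mirrorlogdirectory=f:\tsm\mirrorlog
archlogdirectory=g:\tsm\archlog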
Tips:
v Consider mirroring the active log and the archive log if retention protection is
enabled. If a database restore is needed, you can restore the database to the
current point in time with no data loss.
v You can dynamically start or stop mirroring while Tivoli Storage Manager is
running.
v Despite its benefits, mirroring does not protect against a disaster or a hardware
failure that affects multiple drives or causes the loss of the entire system. In
addition, mirroring doubles the amount of disk space that is required for logs.
Mirroring also results in decreased performance.
Related concepts:
“Active log” on page 685
“Archive log” on page 686
“Archive failover log” on page 687
The following volume history information is stored in the Tivoli Storage Manager
database and updated in the volume history files:
v Sequential-access storage-pool volumes that were added, reused through
reclamation or move data operations, or deleted during delete volume or
reclamation operations
v Full and incremental database-backup volumes
v Export volumes for administrator, node, policy, and server data
v Snapshot database-backup volumes
v Backup set volumes
To specify the file path and name for a volume history file, use the VOLUMEHISTORY
server option. To specify more than one path and name, use multiple
VOLUMEHISTORY entries. Tivoli Storage Manager stores duplicate volume histories in
all the files that are specified with VOLUMEHISTORY options. To find the required
volume-history information during a database restore operation, the server tries to
open volume history files in the order in which the VOLUMEHISTORY entries occur in
the server options file. If the server cannot read a file, the server tries to open the
next volume history file.
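For example, the following server options file entries keep duplicate volume
history files on two different drives; the file names are illustrative:
volumehistory d:\tsm\volhist.out
volumehistory e:\tsmbackup\volhist.out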
Ensure that volume history is protected by taking one or more of the following
steps:
v Store at least one copy of the volume history file offsite or on a disk separate
from the database.
v Store a printout of the file offsite.
v Store a copy of the file offsite with your database backups and device
configuration file.
v Store a remote copy of the file, for example, on an NFS-mounted file system.
Tip: To manually update the volume history file, you can use the BACKUP
VOLHISTORY command. Ensure that updates are complete by following these
guidelines:
v If you must halt the server, wait a few minutes after issuing the BACKUP
VOLHISTORY command.
v Specify multiple VOLUMEHISTORY options in the server options file.
v Review the volume history files to verify that the files were updated.
DRM: DRM saves a copy of the volume history file in its disaster recovery plan file.
Related tasks:
“Deleting information about volume history” on page 656
To specify the file path and name for a device configuration file, use the DEVCONFIG
server option. To specify more than one path and name, use multiple DEVCONFIG
entries. Tivoli Storage Manager stores duplicate device configuration information in
all the files that are specified with DEVCONFIG options.
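For example, the following server options file entries keep duplicate device
configuration files on two different drives; the file names are illustrative:
devconfig d:\tsm\devcnfg.out
devconfig e:\tsmbackup\devcnfg.out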
To find the required device-configuration information during a database restore
operation, the server tries to open device configuration files in the order in which
the DEVCONFIG entries occur in the server options file. If the server cannot read a
file, the server tries to open the next device configuration file.
To ensure the availability of device configuration information, take one or more of
the following steps:
v Store at least one copy of the device configuration file offsite or on a disk
separate from the database.
v Store a printout of the file offsite.
v Store a copy of the file offsite with your database backups and volume history
file.
v Store a remote copy of the file, for example, on an NFS-mounted file system.
Tips:
v To manually update the device configuration file, use the BACKUP DEVCONFIG
command. Ensure that updates are complete by following these guidelines:
– If you must halt the server, wait a few minutes after issuing the BACKUP
DEVCONFIG command.
– Specify multiple DEVCONFIG options in the server options file.
– Review the device configuration files to verify that the files were updated.
v If you are using automated tape libraries, volume location information is
saved in the device configuration file. The file is updated whenever CHECKIN
LIBVOLUME, CHECKOUT LIBVOLUME, and AUDIT LIBRARY commands are issued,
and the information is saved as comments (/*....*/). This information is used
during restore or load operations to locate a volume in an automated library.
If a disaster occurs, you might have to restore Tivoli Storage Manager with devices
that are not included in the device configuration file.
DRM: DRM automatically saves a copy of the device configuration file in its disaster
recovery plan file.
Related tasks:
“Updating the device configuration file” on page 975
To ensure the availability of the server options file, take one or more of the following
steps:
v Store at least one copy of the server options file offsite or on a disk separate
from the database.
v Store a printout of the file offsite.
v Store a copy of the file offsite with your database backups and device
configuration file.
v Store a remote copy of the file, for example, on an NFS-mounted file system.
DRM: DRM automatically saves a copy of the server options file in its disaster recovery
plan file.
Protecting information about the database and recovery logs
To restore the database, you need detailed information about the database and
recovery log. The recovery log includes the active log, the active log mirror, the
archive log, and the archive failover log. The recovery log contains records of
changes to the database.
You can determine the following information from the recovery log:
v The directory where the recovery log is located
v The amount of disk space required
If you lose the recovery log, you lose the changes that were made since the last
database backup.
DRM: DRM helps you save database and recovery log information.
The cert.kdb file includes the server's public key, which allows the client to
encrypt data. The digital certificate file cannot be stored in the server database
because the Global Security Kit (GSKit) requires a separate file in a certain format.
The cert256.arm file is generated by the V6.3 server for distribution to the V6.3
clients.
Keep backup copies of the cert.kdb and cert256.arm files in a secure location. If
the original files and all copies are lost or corrupted, you can generate a
new certificate file.
Attention: If client data object encryption is in use and the encryption key is not
available, data cannot be restored or retrieved under any circumstance. When
using ENABLECLIENTENCRYPTKEY for encryption, the encryption key is stored
on the server database. This means that for objects using this method, the server
database must exist and have the proper values for the objects for a proper restore
operation. Ensure that you back up the server database frequently to prevent data
loss.
For more information about encryption keys, see IBM Tivoli Storage Manager Using
the Application Programming Interface.
Related tasks:
“Troubleshooting the certificate key database” on page 912
You can use server-to-server communications to store copies of the recovery plan
on a remote target server, in addition to traditional disk-based files. Storing
recovery plan files on a target server provides the following advantages:
v A central repository for recovery plan files
v Automatic expiration of plan files
v Query capabilities for displaying information about plan files and their contents
v Fast retrieval of a recovery plan file if a disaster occurs
You can also store the recovery plan locally, on CD, or in print.
DRM: DRM can query the server and generate a detailed recovery plan for your
installation.
Related tasks:
“Storing the disaster recovery plan locally” on page 1067
“Storing the disaster recovery plan on a target server” on page 1067
Related reference:
“The disaster recovery plan file” on page 1096
A typical Tivoli Storage Manager configuration includes a primary disk pool and
primary tape pool for data backup. Copy storage pools contain active and inactive
versions of data that is backed up from primary storage pools. Figure 107 on page
954 shows a configuration with an onsite FILE-type active-data pool and an offsite
copy storage pool.
Figure 107. Server storage and onsite storage: backup, archive, and space-managed (HSM)
data flows into server storage; the onsite disk storage pool (FILE device type) contains
active backup data only.
Related concepts:
“Active-data pools” on page 269
“Copy storage pools” on page 269
“Primary storage pools” on page 268
Related tasks:
“Storage pool hierarchies” on page 288
Tip: Backing up storage pools requires an additional 200 bytes of space in the
database for each file copy. As more files are added to the copy storage pools and
active-data pools, reevaluate your database size requirements.
Each of the commands in the following examples uses four parallel processes
(MAXPROCESS=4) to perform an incremental backup of the primary storage pool
to the copy storage pool or a copy to the active-data pool. Set the MAXPROCESS
parameter in the BACKUP STGPOOL command to the number of mount points or
drives that can be dedicated to this operation.
v To back up data in a primary storage pool to a copy storage pool, use the BACKUP
STGPOOL command. For example, to back up a primary storage pool named
ARCHIVEPOOL to a copy storage pool named DISASTER-RECOVERY, issue the
following command:
backup stgpool archivepool disaster-recovery maxprocess=4
The only files backed up to the DISASTER-RECOVERY pool are files for which a
copy does not exist in the copy storage pool. The data format of the copy
storage pool and the primary storage pool can be NATIVE, NONBLOCK, or the
NDMP formats NETAPPDUMP, CELERRADUMP, or NDMPDUMP. The server
copies data from the primary storage pool only to a copy storage pool that has
the same format.
Tip: To further minimize the potential loss of data, you can mark the backup
volumes in the copy storage pool as OFFSITE and move them to an offsite
location. In this way, the backup volumes are preserved and are not reused or
mounted until they are brought on-site. Ensure that you mark the volumes as
OFFSITE before you back up the database. To avoid having to mark volumes as
offsite or physically move volumes:
– Specify a device class of SERVER in your database backup.
– Back up a primary storage pool to a copy storage pool that is associated with a
device class of SERVER.
v To copy active data, use the COPY ACTIVEDATA command. For example, to copy
active data from a primary storage pool named BACKUPPOOL to an active-data
pool named CLIENT-RESTORE, issue the following command:
copy activedata backuppool client-restore maxprocess=4
The primary storage pool must have a data format of NATIVE or NONBLOCK.
Copies from primary storage pools with any of the NDMP formats are not
permitted. The only files copied to the CLIENT-RESTORE pool are active backup
files for which a copy does not exist in the active-data pool.
Because backups and active-data copies are made incrementally, you can cancel the
processes. If you reissue the BACKUP STGPOOL or COPY ACTIVEDATA command, the
backup or active-data copy continues from the point at which the process was
canceled.
Restrictions:
v If a backup is to be made to a copy storage pool and the file exists with the
same insertion date, no action is taken. Similarly, if a copy is to be made to an
active-data pool and the file exists with the same insertion date, no action is
taken.
v When a disk storage pool is backed up, cached files (copies of files that remain
on disk after being migrated to the next storage pool) are not backed up.
v Files in a copy storage pool or an active-data pool do not migrate to another
storage pool.
v After a file is backed up to a copy storage pool or a copy is made to an
active-data pool, the file might be deleted from the primary storage pool. When
an incremental backup of the primary storage pool occurs, the file is then
deleted from the copy storage pool. Inactive files in active-data pools are deleted
during the process of reclamation. If an aggregate being copied to an active-data
pool contains some inactive files, the aggregate is reconstructed into a new
aggregate without the inactive files.
Related concepts:
“Active-data pools” on page 269
“Copy storage pools” on page 269
“Primary storage pools” on page 268
“Securing sensitive client data” on page 563
Related tasks:
“Backing up the data in a storage hierarchy” on page 293
Chapter 21, “Automating server operations,” on page 659
Create a schedule for backing up two primary storage pools to the same copy
storage pool.
Assume that you have two primary storage pools: one random access storage pool
(DISKPOOL) and one tape storage pool (TAPEPOOL, with device class
TAPECLASS). Files stored in DISKPOOL are migrated to TAPEPOOL. You want to
back up the files in both primary storage pools to a copy storage pool.
1. Define a copy storage pool named COPYPOOL, using the same device class
(TAPECLASS) as TAPEPOOL and allowing scratch volumes, by issuing the
following command:
define stgpool copypool tapeclass pooltype=copy maxscratch=50
Note:
a. Because scratch volumes are allowed in this copy storage pool, you do not
need to define volumes for the pool.
b. All storage volumes in COPYPOOL are located onsite.
2. Perform the initial backup of the primary storage pools by issuing the
following commands:
backup stgpool diskpool copypool maxprocess=2
backup stgpool tapepool copypool maxprocess=2
3. Define schedules to automatically run the commands for backing up the
primary storage pools. The commands to schedule are those that you issued in
step 2.
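For example, the schedules might look like the following commands; the schedule
names and start times are illustrative:
define schedule backup_diskpool type=administrative
cmd="backup stgpool diskpool copypool maxprocess=2" active=yes starttime=21:00
define schedule backup_tapepool type=administrative
cmd="backup stgpool tapepool copypool maxprocess=2" active=yes starttime=23:00
Scheduling the disk storage pool backup to run first helps to minimize tape
mounts, as described in the tips that follow.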
Tips:
v To minimize tape mounts, you can take one or both of the following steps:
– Back up the disk storage pool first, then the tape storage pool.
– If you schedule storage pool backups and migrations and have enough disk
storage, back up or copy as many files as possible from the disk storage pool
to copy storage pools and active-data pools. After the backup and copy
operations are complete, migrate the files from the disk storage pools to
primary tape storage pools.
v If you have active-data pools, you can schedule the COPY ACTIVEDATA command
to copy the active data that is in primary storage pools to the active-data pools.
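For example, using the storage pool names from the earlier example, a schedule
like the following one copies active data nightly; the schedule name and start time
are illustrative:
define schedule copy_activedata type=administrative
cmd="copy activedata backuppool client-restore maxprocess=2" active=yes starttime=22:00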
Performing a storage pool backup for data stored in a Centera storage pool is not
supported. To ensure the safety of the data, therefore, consider using the
replication feature of the Centera storage device.
With this feature, you can copy data to a replication Centera storage device at a
different location. If the data in the primary Centera storage pool becomes
unavailable, you can access the replication Centera storage device by specifying its
IP address using the HLADDRESS parameter on the UPDATE DEVCLASS command for
the device class pointed to by the Centera storage pool. After the primary Centera
storage device is re-established, you can issue the UPDATE DEVCLASS command again
and change the value of the HLADDRESS parameter to point back to the primary
Centera storage device. You must restart the server each time you update the
HLADDRESS parameter on the UPDATE DEVCLASS command.
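For example, assuming a CENTERA device class named CENTERACLASS and a
replication device at IP address 9.10.111.222 (both names are illustrative), you
might issue the following command and then restart the server:
update devclass centeraclass hladdress=9.10.111.222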
Related concepts:
“Files on sequential volumes (CENTERA)” on page 67
You can also enable the simultaneous-write function so that active client backup
data is written to active-data pools at the same time it is written to the primary
storage pool. The active-data pools must be specified in the definition of the
primary storage pool, and the clients whose active data is to be saved must be
members of a policy domain that specifies the active-data pool as the destination
for active backup data.
Delaying reuse of volumes for recovery purposes
When you define or update a sequential access storage pool, you can use the
REUSEDELAY parameter. This parameter specifies the number of days that must
elapse before a volume can be reused or returned to scratch status after all files are
expired, deleted, or moved from the volume.
When you delay reuse of such volumes and they no longer contain any files, they
enter the pending state. Volumes remain in the pending state for the time that is
specified with the REUSEDELAY parameter for the storage pool to which the volume
belongs.
Delaying reuse of volumes can be helpful under certain conditions for disaster
recovery. When files are expired, deleted, or moved from a volume, they are not
erased from the volumes; only the database references to these files are removed. Thus
the file data might still exist on sequential volumes if the volumes are not
immediately reused.
A disaster might force you to restore the database using a database backup that is
not the most recent backup. In this case, some files might not be recoverable
because the server cannot find them on current volumes. However, the files might
exist on volumes that are in pending state. You might be able to use the volumes
in pending state to recover data by doing the following steps:
1. Restore the database to a point-in-time before file expiration.
2. Use a primary, copy-storage, or active-data pool volume that is not rewritten
and that contains the expired file at the time of database backup.
If you back up your primary storage pools, set the REUSEDELAY parameter for the
primary storage pools to 0 to efficiently reuse primary scratch volumes. For your
copy storage pools and active-data pools, delay the reuse of volumes for as long as
you keep your oldest database backup.
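For example, if you keep database backups for 14 days, you might delay the reuse
of volumes in the DISASTER-RECOVERY copy storage pool from the earlier example
for the same period; the value is illustrative:
update stgpool disaster-recovery reusedelay=14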
Related tasks:
“Scenario: Protecting the database and storage pools” on page 968
Related reference:
“Running expiration processing to delete expired files” on page 535
Use this section to help you audit storage pool volumes for data integrity.
To ensure that all files are accessible on volumes in a storage pool, audit any
volumes you suspect might have problems by using the AUDIT VOLUME command.
You can audit multiple volumes by using a date range, or you can audit all
volumes in a storage pool.
If a storage pool has data validation enabled, run an audit for the volumes in the
storage pool to have the server validate the data.
Note: If Tivoli Storage Manager detects a damaged file on a Centera volume, a
command is sent to Centera to delete the file. If Centera cannot delete the
file because the retention period for the file has not expired, the volume that
contains the file is not deleted.
To display the results of a volume audit after it completes, use the QUERY ACTLOG
command.
Related tasks:
“Requesting information from the activity log” on page 830
During the auditing process, the server performs the following actions:
v Sends informational messages about processing to the server console.
v Prevents new files from being written to the volume.
v Generates a cyclic redundancy check, if data validation is enabled for the storage
pool.
You can specify whether you want the server to correct the database if
inconsistencies are detected. Tivoli Storage Manager corrects the database by
deleting database records that refer to files on the volume that cannot be accessed.
The default is to report inconsistencies that are found (files that cannot be
accessed) but not to correct the errors.
If files with read errors are detected, their handling depends on the following
conditions:
v The type of storage pool to which the volume is assigned
v The FIX parameter on the AUDIT VOLUME command
v The location of file copies (whether a copy of the file exists in a copy storage
pool)
Errors in an audit of a primary storage pool volume:
When a volume in a primary storage pool is audited, the setting of the FIX
parameter determines how errors are handled.
The FIX parameter on an AUDIT VOLUME command can have the following effects:
FIX=NO
The server reports, but does not delete, any database records that refer to
files found with logical inconsistencies. If the AUDIT VOLUME command
detects a read error in a file, the file is marked as damaged in the database.
You can do one of the following actions:
v If a backup copy of the file is stored in a copy storage pool, you can
restore the file by using the RESTORE VOLUME or RESTORE STGPOOL
command.
v If the file is a cached copy, you can delete references to the file on this
volume by using the AUDIT VOLUME command again. Specify FIX=YES.
If the AUDIT VOLUME command does not detect a read error in a damaged
file, the file state is reset, and the file can be used. For example, if a dirty
tape head caused some files to be marked damaged, you can clean the
head and then audit the volume to make the files accessible again.
FIX=YES
Any inconsistencies are fixed as they are detected.
If the AUDIT VOLUME command detects a read error in a file:
v If the file is not a cached copy and a backup copy is stored in a copy
storage pool, the file is marked as damaged in the database. The file can
then be restored using the RESTORE VOLUME or RESTORE STGPOOL
command.
v If the file is not a cached copy and a backup copy is not stored in a copy
storage pool, all database records that refer to the file are deleted.
v If the file is a cached copy, the database records that refer to the cached
file are deleted. The primary file is stored on another volume.
If the AUDIT VOLUME command does not detect a read error in a damaged
file, the file state is reset, and the file can be used. For example, if a dirty
tape head caused some files to be marked damaged, you can clean the
head and then audit the volume to make the files accessible again.
Errors in an audit of copy storage pool volumes:
When a volume in a copy storage pool is audited, the setting of the FIX parameter
determines how errors are handled.
The FIX parameter on an AUDIT VOLUME command can have the following effects:
FIX=NO
The server reports the error and marks the file copy as damaged in the
database.
FIX=YES
The server deletes references to the file on the audited volume from the
database.
Errors in an audit of active-data storage pool volumes:
When a volume in an active-data storage pool is audited, the setting of the FIX
parameter determines how errors are handled.
The FIX parameter on an AUDIT VOLUME command can have the following effects:
FIX=NO
The server reports the error and marks the file copy as damaged in the
database.
FIX=YES
The server deletes references to the file on the audited volume from the
database. The physical file is deleted from the active-data pool.
When auditing a volume in an active-data pool, the server skips inactive files in
aggregates that were removed by reclamation. These files are not reported as
skipped or marked as damaged.
Data validation is helpful if you introduce new hardware devices. The validation
assures that the data is not corrupted as it moves through the hardware, and then
is written to the volume in the storage pool. You can use the DEFINE STGPOOL or
UPDATE STGPOOL commands to enable data validation for storage pools.
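For example, to enable data validation for an existing primary tape storage pool
named TAPEPOOL (the pool name is illustrative), you might issue the following
command:
update stgpool tapepool crcdata=yes
If the devices later prove to be stable, you can disable validation by setting
CRCDATA=NO.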
When you enable data validation for an existing storage pool, the server validates
data that is written from that time forward. The server does not validate existing
data which was written to the storage pool before data validation was enabled.
When data validation is enabled for storage pools, the server generates a cyclic
redundancy check (CRC) value and stores it with the data when it is written to the
storage pool. The server validates the data when it audits the volume, by
generating a cyclic redundancy check and comparing this value with the CRC
value stored with the data. If the CRC values do not match, then the server
processes the volume in the same manner as a standard audit volume operation.
This process can depend on the following conditions:
v The type of storage pool to which the volume is assigned
v The FIX parameter of the AUDIT VOLUME command
v The location of file copies (whether a copy of the file exists in a copy storage
pool or an active-data pool)
Check the activity log for details about the audit operation.
The server removes the CRC values before it returns the data to the client node.
Related reference:
“Errors in an audit of active-data storage pool volumes”
“Errors in an audit of copy storage pool volumes” on page 960
“Errors in an audit of a primary storage pool volume” on page 960
Choosing when to enable data validation:
Data validation is available for nodes and storage pools. The forms of validation
are independent of each other.
Figure 108. Data validation points 1 through 5 among the Tivoli Storage Manager client, the
storage agent, the Tivoli Storage Manager server, and the storage pool.
Table 89 provides information that relates to Figure 108. This information explains
the type of data being transferred and the appropriate command to issue.
Table 89. Setting data validation
Number 1 in Figure 108
Where to set data validation: Node definition
Type of data transferred: File data and metadata
Command: See note
Command parameter setting: See note
Number 2 in Figure 108
Where to set data validation: Node definition
Type of data transferred: File data and metadata
Command: REGISTER NODE, UPDATE NODE
Command parameter setting: VALIDATEPROTOCOL=ALL or VALIDATEPROTOCOL=DATAONLY
Number 3 in Figure 108
Where to set data validation: Server definition (storage agent only)
Type of data transferred: Metadata
Command: DEFINE SERVER, UPDATE SERVER
Command parameter setting: VALIDATEPROTOCOL=ALL
Note: The storage agent reads the VALIDATEPROTOCOL setting for the client from the
Tivoli Storage Manager server.
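For example, to validate both file data and metadata for an existing node named
PAYROLL (the node name is illustrative), you might issue the following command:
update node payroll validateprotocol=all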
Figure 109 is similar to the previous figure; however, note that the top section
encompassing 1, 2, and 3 is shaded. All three of these data validations are
related to the VALIDATEPROTOCOL parameter. What is significant about this validation
is that it is active only during the client session. After validation, the client and
server discard the CRC values generated in the current session. This is in contrast
to storage pool validation, 4 and 5, which is always active when the storage
pool CRCDATA setting is YES.
The validation of data transfer between the storage pool and the storage agent 4
is managed by the storage pool CRCDATA setting defined by the Tivoli Storage
Manager server. Even though the flow of data is between the storage agent and the
storage pool, data validation is determined by the storage pool definition.
Therefore, if you always want your storage pool data validated, set your primary
storage pool CRCDATA setting to YES.
Figure 109. Protocol data validation versus storage pool data validation
If the network is unstable, you might decide to enable only data validation for
nodes. Tivoli Storage Manager generates a cyclic redundancy check when the data
is sent over the network to the server. Certain nodes might have more critical data
than others and might require the assurance of data validation. When you identify
the nodes that require data validation, you can choose to have only the user's data
validated or all the data validated. Tivoli Storage Manager validates both the file
data and the file metadata when you choose to validate all data.
If the network is fairly stable but your site is perhaps using new hardware devices,
you might decide to enable only data validation for storage pools. When the server
sends data to the storage pool, the server generates cyclic redundancy checking,
and stores the CRC value with the data. The server validates the CRC value when
the server audits the volume. Later, you might decide that data validation for
storage pools is no longer required after the devices prove to be stable.
Related tasks:
“Using virtual volumes to store data on another server” on page 763
“Auditing storage pool volumes” on page 958
Related reference:
“Validating a node's data during a client session” on page 560
Consider the impact on performance when you decide whether data validation is
necessary for storage pools. This method of validation is independent of validating
data during a client session with the server. When you choose to validate storage
pool data, there is no performance impact on the client.
If you enable CRC for storage pools on devices that later prove to be stable, you
can increase performance by updating the storage pool definition to disable data
validation.
Use the AUDIT VOLUME command to specify an audit for data written to volumes
within a range of days, or to run an audit for a storage pool.
You can manage when the validation of data in storage pools occurs by scheduling
the audit volume operation. You can choose a method suitable to your
environment, for example:
v Select volumes at random to audit. A random selection does not require
significant resources or cause much contention for server resources but can
provide assurance that the data is valid.
v Schedule a daily audit of all volumes written in the last day. This method
validates data written to a storage pool on a daily basis.
To display the results of a volume audit after it completes, you can issue the QUERY
ACTLOG command.
To specify that only summary messages for d:\tsm\admvol.1 are sent to the
activity log and server console, issue the following command:
audit volume d:\tsm\admvol.1 quiet=yes
The audit volume process is run in the background and the server returns the
following message:
ANR2313I Audit Volume (Inspect Only) process started for volume
D:\TSM\ADMVOL.1 (process ID 4).
To view the status of the audit volume process, issue the following command:
query process 4
The server then begins the audit process with the first volume on which the first
file is stored. For example, Figure 110 shows five volumes defined to ENGBACK2.
In this example, File A spans VOL1 and VOL2, and File D spans VOL2, VOL3,
VOL4, and VOL5.
Figure 110. Five volumes (VOL1, VOL2, VOL3, VOL4, and VOL5) defined to ENGBACK2.
File A spans VOL1 and VOL2; File D spans VOL2, VOL3, VOL4, and VOL5.
If you request that the server audit volume VOL3, the server first accesses volume
VOL2, because File D begins at VOL2. When volume VOL2 is accessed, the server
only audits File D. It does not audit the other files on this volume.
Because File D spans multiple volumes, the server accesses volumes VOL2, VOL3,
VOL4, and VOL5 to ensure that there are no inconsistencies between the database
and the storage pool volumes.
For volumes that require manual mount and demount operations, the audit
process can require significant manual intervention.
The SKIPPARTIAL=YES option is useful when the volume that you want to audit
contains part of a file, the rest of which is stored on a different, damaged volume.
For example, to audit only volume VOL5 in the example in Figure 110 on page 965
and have the server fix any inconsistencies found between the database and the
storage volume, enter:
audit volume vol5 fix=yes skippartial=yes
When you use the parameters FROMDATE, TODATE, or both, the server limits the audit
to only the sequential media volumes that meet the date criteria, and automatically
includes all online disk volumes. When you also include the STGPOOL parameter, you
limit the audit to volumes in that storage pool, which might include disk volumes.
Issue the AUDIT VOLUME command with the FROMDATE and TODATE parameters.
For example, to audit the volumes in storage pool BKPPOOL1 that were written
from March 20, 2002, to March 22, 2002, issue the following command:
audit volume stgpool=bkppool1 fromdate=03/20/2002 todate=03/22/2002
The server audits all volumes that were written to starting at 12:00:01 a.m. on
March 20 and ending at 11:59:59 p.m. on March 22, 2002.
For example, you can audit the volumes in storage pool BKPPOOL1 by issuing the
following command:
audit volume stgpool=bkppool1
For example, if your critical users store data in storage pool STGPOOL3 and you
want all volumes in the storage pool audited every two days at 9:00 p.m., issue the
following command:
define schedule crcstg1 type=administrative
cmd=’audit volume stgpool=stgpool3’ active=yes starttime=21:00 period=2
A data error, which results in a file being unreadable, can be caused by such things
as a tape deteriorating or being overwritten or by a drive needing cleaning. If a
data error is detected when a client tries to restore, retrieve, or recall a file or
during a volume audit, the file is marked as damaged. If the same file is stored in
other copy storage pools or active-data pools, the status of those file copies is not
changed.
If files are marked as damaged, you cannot perform the following operations on them:
v Restore, retrieve, or recall the files
v Move the files by migration, reclamation, or the MOVE DATA command
v Back up during a BACKUP STGPOOL operation if the primary file is damaged
v Restore during a RESTORE STGPOOL or RESTORE VOLUME operation if the backup
copy in a copy storage pool or active-data pool volume is damaged
v Migrate or reclaim during migration and reclamation
To maintain the data integrity of user files, you can perform the following steps:
1. Detect damaged files before the users do. The AUDIT VOLUME command marks a
file as damaged if a read error is detected for the file. If an undamaged copy is
in an on-site copy storage pool or an active-data pool volume, it is used to
provide client access to the file.
2. Reset the damaged status of files if the error that caused the change to
damaged status was temporary. You can use the AUDIT VOLUME command to
correct situations when files are marked damaged due to a temporary hardware
problem, such as a dirty tape head. The server resets the damaged status of
files if the volume in which the files are stored is audited and no read errors
are detected.
3. Correct files that are marked as damaged. If a primary file copy is marked as
damaged and a usable copy exists in a copy storage pool or an active-data pool
volume, the primary file can be corrected using the RESTORE VOLUME or RESTORE
STGPOOL command.
4. Regularly run commands to identify files that are marked as damaged:
v The RESTORE STGPOOL command displays the name of each volume in the
restored storage pool that contains one or more damaged primary files. Use
this command with the preview option to identify primary volumes with
damaged files without actually performing the restore operation.
v The QUERY CONTENT command with the DAMAGED parameter displays damaged
files on a specific volume.
Related tasks:
“Data validation during audit volume processing” on page 961
“Restoring damaged files” on page 968
Restoring damaged files
If you use copy storage pools, you can restore damaged client files. You can also
check storage pools for damaged files and restore the files.
This section explains how to restore damaged files based on the scenario in
“Example: Scheduling a backup with one copy storage pool” on page 956.
If a client tries to access a file stored in TAPEPOOL and a read error occurs, the file
in TAPEPOOL is automatically marked as damaged. Future accesses to the file
automatically use the copy in COPYPOOL as long as the copy in TAPEPOOL is
marked as damaged.
To restore any damaged files in TAPEPOOL, you can define a schedule that issues
the following command periodically:
restore stgpool tapepool
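For example, a schedule like the following one runs the restore weekly; the
schedule name, start time, and period are illustrative:
define schedule fix_tapepool type=administrative
cmd="restore stgpool tapepool" active=yes starttime=02:00 period=7 perunits=days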
You can check for and replace any files that develop data-integrity problems in
TAPEPOOL or in COPYPOOL. For example, every three months, query the
volumes in TAPEPOOL and COPYPOOL by entering the following commands:
query volume stgpool=tapepool
query volume stgpool=copypool
Then issue the following command for each volume in TAPEPOOL and
COPYPOOL:
audit volume <volname> fix=yes
If a read error occurs on a file in TAPEPOOL, that file is marked damaged and an
error message is produced. If a read error occurs on a file in COPYPOOL, that file is
deleted and a message is produced.
This scenario assumes a storage hierarchy that consists of the following storage
pools:
v Default random-access storage pools named BACKUPPOOL, ARCHIVEPOOL,
and SPACEMGPOOL
v A tape storage pool named TAPEPOOL
To provide extra levels of protection for client data, the scenario also specifies an
offsite copy storage pool and an onsite active-data pool.
The standard procedures for the company include the following activities:
c. Back up the database by using the BACKUP DB command. For example, issue
the following command:
backup db type=incremental devclass=tapeclass scratch=yes
Restriction: Do not run the MOVE DRMEDIA and BACKUP STGPOOL or BACKUP DB
commands concurrently. Ensure that the storage pool backup processes are
complete before you issue the MOVE DRMEDIA command.
5. Perform the following operations nightly after the scheduled operations
complete:
a. Back up the volume history and device configuration files. If they change,
back up the server options files and the database and recovery log setup
information.
b. Move the copy storage pool volumes marked offsite, the database backup
volumes, volume history files, device configuration files, server options
files, and the database and recovery log setup information to the offsite
location.
c. Identify offsite volumes that must be returned onsite. For example, issue the
following command:
query volume stgpool=disaster-recovery access=offsite status=empty
For database restore operations, the Tivoli Storage Manager server reads the
information that is in the volume history file to determine the number of data
streams to read. The server attempts to match the number of streams that were
used during the backup operation. For example, if the backup operation used four
streams, the Tivoli Storage Manager server attempts the restore operation using
four streams.
If you reduce the number of data streams after a database backup operation, this
information will not be available to the server when the database is restored. To
specify fewer data streams for the restore operation, take one or both of the
following actions in the device configuration file:
To restore a database to a point in time, you need the latest full backup before the
point in time. You also need the latest incremental backup after that last full
backup. You can also use snapshot database backups to restore a database to a
specific point in time.
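For example, to restore the database to its state as of 2:30 p.m. on a particular day,
you might issue a command like the following one; the date and time are
illustrative:
dsmserv restore db todate=05/12/2013 totime=14:30:00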
Before restoring the database, have available the following infrastructure setup
files:
v Server options file
v Volume history file:
Copy the volume history file pointed to by the server options file. The backup
copy must have a different name. If the restore fails and you must try it again, you
might need the backup copy of the volume history file. After the database is
restored, any volume history information pointed to by the server options is lost.
This information is required to identify the volumes to be audited.
If your old volume history file shows that any of the copy storage pool volumes
that are required to restore your storage pools were reused (STGREUSE) or
deleted (STGDELETE), you might not be able to restore all your files. You can
avoid this problem by including the REUSEDELAY parameter when you define
your copy storage pools.
v Device configuration file:
You might need to modify the device configuration file based on the hardware
available at the recovery site. For example, the recovery site might require a
different device class, library, and drive definitions.
v Detailed query output about the database and recovery log
If files were migrated, reclaimed, or moved after a backup, the files might be lost
and the space occupied by those files might be reused. You can minimize this loss
by using the REUSEDELAY parameter when defining or updating sequential-access
storage pools. This parameter delays volumes from being returned to scratch or
being reused.
Similarly, the volume inventories for Tivoli Storage Manager and for any
automated libraries might also be inconsistent. Issue the AUDIT LIBRARY command
to synchronize these inventories.
Related tasks:
“Updating the device configuration file” on page 975
“Restoring to a point-in-time in a shared library environment” on page 983
“Delaying reuse of volumes for recovery purposes” on page 958
You can use full and incremental backups to restore a database to its most current
state. Snapshot database backups are complete database copies of a point in time.
You can restore a database to its most current state if the last backup series that
was created for the database is available. A backup series consists of a full backup,
the latest incremental backup, and all active and archive logs for database changes
since the last backup in the series was run.
Attention: Recovering the database to its most current state is not possible if the
active or archive logs are lost.
To restore a database to its most current state, issue the DSMSERV RESTORE DB
command. For example:
dsmserv restore db
If the original database and recovery log directories are available, use the DSMSERV
RESTORE DB utility to restore the database. However, if the database and recovery
log directories are lost, recreate them first, and then issue the DSMSERV RESTORE DB
utility.
In a Tivoli Storage Manager shared library environment, the server that manages
and controls the shared library is known as the library manager. The library
manager maintains a database of the volumes within the shared library.
3. Gather the outputs from your detailed queries about your database and
recovery log setup information.
4. Determine whether the original database and recovery log directories exist. If
the original database or recovery log directories were lost, recreate them using
the operating system mkdir command.
Note: The directories must have the same name as the original directories.
5. Use the DSMSERV RESTORE DB utility to restore the database to the current time.
6. Start the Tivoli Storage Manager server instance.
7. Issue an AUDIT LIBRARY command from each library client for each shared
library.
8. Create a list from the old volume history information (generated by the QUERY
VOLHISTORY command) that shows all of the volumes that were reused
(STGREUSE), added (STGNEW), and deleted (STGDELETE) since the original
backup. Use this list to perform the rest of this procedure.
9. Audit all disk volumes, all reused volumes, and any deleted volumes located
by the AUDIT VOLUME command using the FIX=YES parameter.
10. Issue the RESTORE STGPOOL command to restore those files detected as
damaged by the audit. Include the FIX=YES parameter on the AUDIT VOLUME
command to delete database entries for files not found in the copy storage
pool or active-data pool.
11. Mark any volumes that cannot be located as destroyed, and recover those
volumes from copy storage pool backups. Recovery from active-data pool
volumes is not suggested unless the loss of inactive data is acceptable. If no
backups are available, delete the volumes from the database by using the
DELETE VOLUME command with the DISCARDDATA=YES parameter.
12. Redefine any storage pool volumes that were added since the database
backup.
In a Tivoli Storage Manager shared library environment, the servers that share a
library and rely on a library manager to coordinate and manage the library usage
are known as library clients. Each library client maintains a database of volume
usage and volume history. If the database of the library client becomes corrupted,
it might be restored by following these steps:
1. Copy the volume history file to a temporary location and rename the file.
After the database is restored, any volume history information that is pointed
to by the server options is lost. You need this information to identify the
volumes to be audited.
2. Put the device configuration file and the server options file in the server
working directory. You can no longer recreate the device configuration file;
you must have a copy of the original.
3. Gather the outputs from your detailed queries about your database and
recovery log setup information.
4. Determine whether the original database and recovery log directories exist. If
the original database or recovery log directories were lost, recreate them by
using the operating system mkdir command.
Note: The directories must have the same name as the original directories.
5. Use the DSMSERV RESTORE DB utility to restore the database to the current time.
6. Create a list from the old volume history information (generated by the QUERY
VOLHISTORY command) that shows all of the volumes that were reused
(STGREUSE), added (STGNEW), and deleted (STGDELETE) since the original
backup. Use this list to perform the rest of this procedure.
7. Audit all disk volumes, all reused volumes, and any deleted volumes located
by the AUDIT VOLUME command using the FIX=YES parameter.
8. Issue the RESTORE STGPOOL command to restore those files detected as
damaged by the audit. Include the FIX=YES parameter on the AUDIT VOLUME
command to delete database entries for files not found in the copy storage
pool.
9. Mark any volumes that cannot be located as destroyed, and recover those
volumes from copy storage pool backups. If no backups are available, delete
the volumes from the database by using the DELETE VOLUME command with the
DISCARDDATA=YES parameter.
10. Issue the AUDIT LIBRARY command for all shared libraries on this library client.
11. Redefine any storage pool volumes that were added since the database
backup.
If a disaster occurs and you must restore Tivoli Storage Manager by using devices
that are not included in the device configuration file, update the device
configuration files manually with
information about the new devices. Whenever you define, update, or delete device
information in the database, the device configuration file is automatically updated.
This information includes definitions for device classes, libraries, drives, and
servers.
For virtual volumes, the device configuration file stores the password (in encrypted
form) for connecting to the remote server. If you regressed the server to an earlier
point-in-time, this password might not match what the remote server expects. In
this case, manually set the password in the device configuration file. Then ensure
that the password on the remote server matches the password in the device
configuration file.
Note: Set the password in clear text. After the server is operational again, you can
issue a BACKUP DEVCONFIG command to store the password in encrypted form.
Related tasks:
“Recovering with different hardware at the recovery site” on page 1088
“Automated SCSI library at the original and recovery sites” on page 1088
Related reference:
Automated SCSI library at the original site and a manual SCSI library at the
recovery site
The RESTORE STGPOOL command restores specified primary storage pools that have
files with the following problems:
v The primary copy of the file had read errors during a previous operation. Files
with read errors are marked as damaged.
v The primary copy of the file is on a volume that has an access mode of
DESTROYED.
v The primary file is in a storage pool that is UNAVAILABLE, and the operation is
for restore, retrieve, or recall of files to a user, or export of file data.
Restrictions:
v Cached copies of files in a disk storage pool are never restored. References to
any cached files that were identified with read errors, or that are stored on a
destroyed volume, are removed from the database during restore processing.
v Restoring from an active-data pool might cause some or all inactive files to be
deleted from the database if the server determines that an inactive file needs to
be replaced but cannot find it in the active-data pool. Do not consider
active-data pools for recovery of a primary pool unless the loss of inactive data
is acceptable.
v You cannot restore a storage pool defined with a CENTERA device class.
Restore processing copies files from a copy storage pool or an active-data pool
onto new primary storage pool volumes.
After the files are restored, the old references to these files in the primary storage
pool are deleted from the database. Tivoli Storage Manager locates these files on
the volumes to which they were restored, rather than on the volumes on which
they were previously stored. If a destroyed volume becomes empty because all
files were restored to other locations, the destroyed volume is automatically
deleted from the database.
To restore a storage pool, use the RESTORE STGPOOL command. To identify volumes
that contain damaged primary files, use the PREVIEW=YES parameter. During
restore processing, a message is issued for every volume in the restored storage
pool that contains damaged, noncached files. To identify the specific files that are
damaged on these volumes, use the QUERY CONTENT command.
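For example, to preview which damaged files can be restored for a primary storage
pool and then to list the damaged files on a specific volume, you might issue
commands like the following ones (the pool name TAPEPOOL is used elsewhere in this
chapter; the volume name is only an example):
restore stgpool tapepool preview=yes
query content vol001 damaged=yes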
DRM: DRM can help you track your on-site and offsite primary and copy storage pool
volumes. DRM can also query the server and generate a current, detailed disaster recovery
plan for your installation.
Related tasks:
“Fixing damaged files” on page 967
This process preserves the collocation of client files. However, if the copy storage
pool or active-data pool being used to restore files does not have collocation
enabled, restore processing can be slow.
If you need to use a copy storage pool or an active-data pool that is not collocated
to restore files to a primary storage pool that is collocated, you can improve
performance by completing the following steps:
1. Restore the files first to a random access storage pool (on disk).
2. Allow or force the files to migrate to the target primary storage pool.
For the random access pool, set the target storage pool as the next storage pool.
Adjust the migration threshold to control when migration occurs to the target
storage pool.
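For example, assuming a random access pool named RESTOREPOOL and a collocated
primary tape pool named PRIMTAPE (both names are hypothetical), you might set the
storage pool hierarchy and force migration with commands like the following ones:
update stgpool restorepool nextstgpool=primtape
update stgpool restorepool highmig=0 lowmig=0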
Related tasks:
“Keeping client files together using collocation” on page 381
Fixing an incomplete storage pool restoration
If the restoration of storage pool volumes is incomplete, you can get more
information about the remaining files on those volumes.
The restoration might be incomplete for one or more of the following reasons:
v Either files were never backed up, or the backup copies were marked as
damaged.
v A copy storage pool or active-data pool was specified on the RESTORE STGPOOL
command, but files were backed up to a different copy storage pool or
active-data pool. If you suspect this problem, use the RESTORE STGPOOL command
again without specifying a copy storage pool or active-data pool from which to
restore files. You can specify the PREVIEW parameter on the second RESTORE
STGPOOL command, if you do not actually want to restore files.
v Volumes in the copy storage pool or active-data pool needed to perform the
restore operation are offsite or unavailable. Check the activity log for messages
that occurred during restore processing.
v Backup file copies in copy storage pools or active-data pools were moved or
deleted by other processes during restore processing. To prevent this problem,
do not issue the following commands for copy storage pool volumes or
active-data pool volumes while restore processing is in progress:
– MOVE DATA
– DELETE VOLUME with the DISCARDDATA parameter set to YES
– AUDIT VOLUME with the FIX parameter set to YES
– MIGRATE STGPOOL
– RECLAIM STGPOOL
You can prevent reclamation processing for your copy storage pools and
active-data pools by setting the RECLAIM parameter to 100 with the UPDATE
STGPOOL command.
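For example, to prevent reclamation of a copy storage pool named COPYPOOL (a
hypothetical name) while the restore operation runs, you might issue:
update stgpool copypool reclaim=100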
After files are restored, the server deletes database references to files on the
original primary storage pool volumes. Tivoli Storage Manager now locates these
files on the volumes to which they were restored, rather than on the volume on
which they were previously stored. A primary storage pool volume becomes empty
if all files that were stored on that volume are restored to other volumes. In this
case, the server automatically deletes the empty volume from the database.
To recreate files for one or more volumes that were lost or damaged, use the
RESTORE VOLUME command. The RESTORE VOLUME command changes the access mode
of the volumes being restored to destroyed. When the restoration is complete (when
all files are restored to other volumes), the empty destroyed volumes are
automatically deleted from the database.
Attention:
v Cached copies of files in a disk storage pool are never restored. References to
any cached files that are on a volume that is being restored are removed from
the database during restore processing.
v You can also recreate active versions of client backup files in storage pool
volumes by using duplicate copies in active-data pools. However, do not
consider active-data pools for recovery of a volume unless the loss of inactive
data is acceptable. If the server determines that an inactive file must be replaced
but cannot find it in the active-data pool, restoring from an active-data pool
might cause some or all inactive files to be deleted from the database.
v You cannot restore volumes in a storage pool defined with a CENTERA device
class.
Note: This precaution prevents the movement of files stored on these volumes
until volume DSM087 is restored.
3. Bring the identified volumes to the on-site location and set their access mode to
READONLY to prevent accidental writes. If these offsite volumes are being
used in an automated library, the volumes must be checked into the library
when they are brought back on-site.
4. Restore the destroyed files. Issue this command:
restore volume dsm087
This command sets the access mode of DSM087 to DESTROYED and attempts
to restore all the files that were stored on volume DSM087. The files are not
restored to volume DSM087, but to another volume in the TAPEPOOL storage
pool. All references to the files on DSM087 are deleted from the database and
the volume itself is deleted from the database.
5. Set the access mode of the volumes used to restore DSM087 to OFFSITE using
the UPDATE VOLUME command.
6. Set the access mode of the restored volumes that are now on-site, to
READWRITE.
7. Return the volumes to the offsite location. If the offsite volumes used for the
restoration were checked into an automated library, these volumes must be
checked out of the automated library when the restoration process is complete.
Fixing an incomplete volume restoration:
If the restoration of a volume is incomplete, you can get more information about
the remaining files on that volume.
The restoration might be incomplete for one or more of the following reasons:
v Files were either never backed up or the backup copies are marked as damaged.
v A copy storage pool or active-data pool was specified on the RESTORE VOLUME
command, but files were backed up to a different copy storage pool or a
different active-data pool. If you suspect this problem, use the RESTORE VOLUME
command again without specifying a copy storage pool or active-data pool from
which to restore files. You can specify the PREVIEW parameter on the second
RESTORE VOLUME command, if you do not actually want to restore files.
v Volumes in the copy storage pool or active-data pool needed to perform the
restore operation are offsite or unavailable. Check the activity log for messages
that occurred during restore processing.
v Backup file copies in copy storage pools or active-data pools were moved or
deleted by other processes during restore processing. To prevent this problem,
do not issue the following commands for copy storage pool volumes or
active-data pool volumes while restore processing is in progress:
– MOVE DATA
– DELETE VOLUME with the DISCARDDATA parameter set to YES
– AUDIT VOLUME with the FIX parameter set to YES
– MIGRATE STGPOOL
– RECLAIM STGPOOL
You can prevent reclamation processing for your copy storage pools and
active-data pools by setting the RECLAIM parameter to 100 with the UPDATE
STGPOOL command.
The destroyed volume access mode designates primary volumes for which files are
to be restored.
If the files that are being restored have copies in more than one copy storage pool
or active-data pool, Tivoli Storage Manager can use volumes from multiple copy
storage pools or active-data pools to restore the data. This process can result in
duplicate data being restored. To prevent this duplication, keep one complete set of
copy storage pools and one complete set of active-data pools available to the
server. Alternatively, ensure that only one copy storage pool or one active-data
pool has an access of read/write during the restore operation.
The primary storage pool Main contains volumes Main1, Main2, and Main3.
v Main1 contains files File11, File12, File13
v Main2 contains files File14, File15, File16
v Main3 contains files File17, File18, File19
The copy storage pool DuplicateA contains volumes DupA1, DupA2, and DupA3.
v DupA1 contains copies of File11, File12
v DupA2 contains copies of File13, File14
v DupA3 contains copies of File15, File16, File17, File18 (File19 is missing because
BACKUP STGPOOL was run on the primary pool before the primary pool
contained File 19.)
The copy storage pool DuplicateB contains volumes DupB1 and DupB2.
v DupB1 contains copies of File11, File12
v DupB2 contains copies of File13, File14, File15, File16, File17, File18, File19
If you do not designate copy storage pool DuplicateB as the only copy storage
pool to have read/write access for the restore operation, then Tivoli Storage
Manager can choose the copy storage pool DuplicateA, and use volumes DupA1,
DupA2, and DupA3. Because copy storage pool DuplicateA does not include file
File19, Tivoli Storage Manager would then use volume DupB2 from the copy
storage pool DuplicateB. The program does not track the restoration of individual
files, so File15, File16, File17, and File18 are restored a second time, and duplicate
copies are generated when volume DupB2 is processed.
Restoring and recovering an LDAP server
If you use an LDAP directory server to authenticate passwords, you might need to
restore its contents at some time.
There are ways to avoid locking your ID, being unable to log on to the server, or
rendering data unavailable.
v Give system privilege class to the console administrator ID.
v Make sure that at least one administrator with system privilege class can access
the server with LOCAL authentication.
v Do not back up the LDAP directory server to the IBM Tivoli Storage Manager
server. An administrator who backs up the Windows Active Directory or the
IBM Tivoli Directory Server to the Tivoli Storage Manager server might render
them unusable. The Tivoli Storage Manager server requires an external directory
for the initial administrator authentication. Backing up the directory server to
the Tivoli Storage Manager server locks the administrator ID and renders the
administrator unable to log on to the LDAP directory server.
You must configure the LDAP settings on a target server before replicating,
exporting, or importing nodes and administrators onto it.
You must run the SET LDAPUSER and SET LDAPPASSWORD commands, and define the
LDAPURL option on the target server. If the target server is not configured properly,
you can still replicate, export, import, or use Enterprise Configuration on it.
However, all nodes and administrators that are transferred from the source to the
target with the LDAP server are changed to use LDAP authentication. Nodes and
administrators that changed to LDAP authentication on the target server then
become inaccessible.
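For example, on the target server you might configure the LDAP connection with
commands like the following ones; the distinguished name, password, host name, and
base DN are placeholders for your own LDAP values:
set ldapuser "cn=tsmserver,ou=servers,dc=example,dc=com"
set ldappassword ldapUserPassword
In addition, set the LDAPURL option in the server options file, for example:
ldapurl ldap://ldap.example.com:389/dc=example,dc=com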
You can configure the target server for LDAP authentication after replicating or
exporting to it, but the data is unavailable until that occurs. After configuring the
LDAP settings at the target server level, the node or administrator entries must be
set up on the LDAP server. Either share the LDAP server between the source and
the target server, or replicate the source LDAP server to the target server. All
applicable nodes and administrators are transferred to the target.
If the transfer is unsuccessful, the LDAP administrator must manually add the
node and administrator passwords onto the LDAP server. Or you can issue the
UPDATE NODE or UPDATE ADMIN commands on the IBM Tivoli Storage Manager server.
After you issue the AUDIT LDAPDIRECTORY FIX=YES command, the following events
occur:
v All nodes and administrators that were removed from the LDAP directory
server are listed for you.
v All nodes and administrators that are missing from the LDAP directory server
are listed for you. You can correct these missing entries by issuing the UPDATE
NODE or UPDATE ADMIN command.
If multiple Tivoli Storage Manager servers share an LDAP directory server, avoid
issuing the AUDIT LDAPDIRECTORY FIX=YES command.
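In that case, to list discrepancies without removing any entries, you might issue
the command with the default FIX setting, for example:
audit ldapdirectory fix=no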
The restore operation removes all library client server transactions that occurred
after the point in time from the volume inventory of the library manager server.
However, the volume inventory of the library client server still contains those
transactions. New transactions can then be written to these volumes, resulting in a
loss of client data. Complete the following steps after the restore:
1. Halt further transactions on the library manager server: Disable all schedules,
migration, and reclamations on the library client and library manager servers.
2. Audit all libraries on all library client servers. The audits re-enter those volume
transactions that were removed by the restore on the library manager server.
Audit the library clients from the oldest to the newest servers. Use the volume
history file from the library client and library manager servers to resolve any
conflicts.
3. Delete the volumes from the library clients that do not own the volumes.
4. Resume transactions by enabling all schedules, migration, and reclamations on
the library client and library manager servers.
If a library client server acquired scratch volumes after the point-in-time to which
the server is restored, these volumes would be set to private in the volume
inventories of the library client and library manager servers. After the restore, the
volume inventory of the library client server can be regressed to a point-in-time
before the volumes were acquired, thus removing them from the inventory. These
volumes would still exist in the volume inventory of the library manager server as
private volumes owned by the client.
The restored volume inventory of the library client server and the volume
inventory of the library manager server would be inconsistent. The volume
inventory of the library client server must be synchronized with the volume
inventory of the library manager server in order to return those volumes to scratch
and enable them to be overwritten. To synchronize the inventories, complete the
following steps:
1. Audit the library on the library client server to synchronize the volume
inventories of the library client and library manager servers.
2. To resolve any remaining volume ownership concerns, review the volume
history and issue the UPDATE VOLUME command as needed.
The processor on which Tivoli Storage Manager is located, the database, and all
on-site storage pool volumes are destroyed by fire. You can use either full and
incremental backups or snapshot database backups to restore a database to a
point-in-time.
Note: Do not change the access mode of these volumes until after you
complete step 7.
3. If a current, undamaged volume history file exists, save it.
4. Restore the volume history and device configuration files, the server options,
and the database and recovery log setup. For example, the recovery site might
require different device class, library, and drive definitions.
5. Restore the database from the latest backup level by issuing the DSMSERV
RESTORE DB utility.
6. Change the access mode of all the existing primary storage pool volumes in
the damaged storage pools to DESTROYED. For example, issue the following
commands:
update volume * access=destroyed wherestgpool=backuppool
update volume * access=destroyed wherestgpool=archivepool
update volume * access=destroyed wherestgpool=spacemgpool
update volume * access=destroyed wherestgpool=tapepool
7. Issue the QUERY VOLUME command to identify any volumes in the
DISASTER-RECOVERY storage pool that were on-site at the time of the
disaster. Any volumes that were on-site were destroyed in the disaster
and cannot be used for restore processing. Delete each of these volumes
from the database by using the DELETE VOLUME command with the
DISCARDDATA option. Any files backed up to these volumes cannot be
restored.
8. Change the access mode of the remaining volumes in the
DISASTER-RECOVERY pool to READWRITE. Issue the following command:
update volume * access=readwrite wherestgpool=disaster-recovery
Clients can now access files. If a client tries to access a file that was stored on
a destroyed volume, the retrieval request goes to the copy storage pool. In this
way, clients can restore their files without waiting for the primary storage pool
to be restored. When you update volumes brought from offsite to change their
access, you greatly speed recovery time.
9. Define new volumes in the primary storage pool so the files on the damaged
volumes can be restored to the new volumes. With the new volumes, clients
can also back up, archive, or migrate files to the server. If you use only scratch
volumes in the storage pool, you are not required to complete this step.
10. Restore files in the primary storage pool from the copies in the
DISASTER-RECOVERY pool. To restore files from DISASTER-RECOVERY
pool, issue the following commands:
restore stgpool backuppool maxprocess=2
restore stgpool tapepool maxprocess=2
restore stgpool archivepool maxprocess=2
restore stgpool spacemgpool maxprocess=2
Chapter 35. Replicating client node data
Node replication is the process of incrementally copying, or replicating, data that
belongs to backup-archive client nodes. Data is replicated from one IBM Tivoli
Storage Manager server to another Tivoli Storage Manager server.
The server from which client node data is replicated is called a source replication
server. The server to which client node data is replicated is called a target replication
server. A server can function as the source of replicated data for some client nodes
and as the target of replicated data for other client nodes.
The purpose of replication is to maintain the same level of files on the source and
the target replication servers. As part of replication processing, client node data
that was deleted from the source replication server is also deleted from the target
replication server. When client node data is replicated, only the data that is not on
the target replication server is copied.
| You can use only Tivoli Storage Manager V6.3 servers for node replication.
| However, you can replicate data for client nodes that are V6.3 or earlier. You can
| also replicate data that was stored on a Tivoli Storage Manager V6.2 or earlier
| server before you upgraded it to V6.3. You cannot replicate nodes from a Tivoli
| Storage Manager V6.3.3 server to a server that is running on an earlier version of
| Tivoli Storage Manager.
Before you configure your system, however, be sure to read about basic replication
concepts in the overview topic. When you are ready to begin implementation, read
“Setting up the default replication configuration” on page 1012.
Related concepts:
Managing passwords and logon procedures
NODE4
NODE1
CHICAGO_SRV
PHOENIX_SRV DALLAS_SRV
NODE4 data
NODE1 and NODE2 data
NODE3 data
NODE5 data
NODE2 ATLANTA_SRV
NODE3
NODE5
When a client node is registered on a target replication server, the domain for the
node is sent to the target server. If the target server does not have a domain with
the same name, the node on the target server is placed in the standard domain on
the target server and bound to the default management class.
| To maintain the same number of file versions on the source and the target
| replication servers, the source replication server manages file expiration and
| deletion. If a file on the source replication server is marked for deletion, but not
| yet deleted by the expiration processing, the target replication server deletes the
| file during the next replication process. Expiration processing on the target
| replication server is disabled for replicated data. The file on the target replication
| server is deleted by the source replication server after the file is expired and
| deleted on the source.
If a client node is removed from replication on the target replication server, the
policies on the target replication server are enabled. Data on the target replication
server is then managed by the policies that are on the target replication server, and
expiration processing can delete expired files.
Important: Policies that are defined on replication servers and that are dissimilar
can cause undesirable side-effects. As newer versions of backup files are replicated,
versions that exceed the value of the VEREXISTS parameter for the copy group are
marked for immediate deletion. If the node that owns the files is configured for
replication, expiration does not delete the files. However, because these files are
marked for immediate deletion, they are not available for the client to restore. The
files remain in the storage pool until replication deletes them based on the policy
on the source server.
Tips:
v Policies and storage pool hierarchies on the source and target replication servers
can be different. You can use deduplicated storage pools on the source server, on
the target server, or both. However, to keep the data on source and target
replication servers synchronized, configure the management classes on the
source and target servers to manage data similarly. To coordinate policies,
consider using Tivoli Storage Manager enterprise configuration.
v Ensure that sufficient space is available in the storage pool on the target
replication server.
Replication rules
Replication rules control what data is replicated and the order in which it is
replicated.
The Tivoli Storage Manager server has the following predefined set of replication
rules. You cannot create replication rules.
ALL_DATA
Replicates backup, archive, or space-managed data. The data is replicated
with a normal priority. For example, you can assign the ALL_DATA rule to
backup data and archive data, and assign a different rule to
space-managed data.
ACTIVE_DATA
Replicates only active backup data. The data is replicated with a normal
priority. You can assign this rule only to the backup data type.
ALL_DATA_HIGH_PRIORITY
Replicates backup, archive, or space-managed data. The data is replicated
with a high priority. In a replication process that includes both
normal-priority and high-priority data, high-priority data is replicated first.
ACTIVE_DATA_HIGH_PRIORITY
Replicates active backup data. The data is replicated with a high priority.
You can assign this rule only to the backup data type.
DEFAULT
Replicates data according to the rule that is assigned to the data type at the
next higher level in the replication-rule hierarchy. The replication-rule
hierarchy comprises file space rules, individual client-node rules, and
server rules. Server rules apply collectively to all nodes that are defined to
a source replication server and that are configured for replication.
Rules that are assigned to data types in file spaces take precedence over
rules that are assigned to data types for individual nodes. Rules that are
assigned to data types for individual nodes take precedence over server
rules. For example, if the DEFAULT replication rule is assigned to backup
data in a file space, the server checks the replication rule for backup data
that is assigned to the client node. If the client node rule for backup data is
DEFAULT, the server checks the server rule for backup data.
The DEFAULT rule is valid only for data types at the file space level and
the client node level. It is not valid for data types at the server level.
Tip: When you set up the default replication configuration, you do not have to
assign or change replication rules. Tivoli Storage Manager automatically assigns
the DEFAULT replication rule to all data types in the file spaces and in the client
nodes that you configured. The system-wide replication rules are automatically set
to ALL_DATA. You can change file-space, client-node, and system-wide rules after
you set up the default configuration.
If a file space is added to a client node that is configured for replication, the file
space rules for data types are initially set to DEFAULT. If you do not change the
file space rules, the client node and server rules determine whether data in the file
space is replicated.
To display the attributes of replication rules, issue the QUERY REPLRULE command.
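For example, to display the attributes of the ALL_DATA_HIGH_PRIORITY rule, you
might issue:
query replrule all_data_high_priority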
In a client node that is configured for replication, each file space has three
replication rules. One rule applies to backup data in the file space. The other rules
apply to archive data and to space-managed data. The rules for the file space exist
regardless of whether the file space has backup, archive, or space-managed data.
Similarly, each client node that is configured for replication has replication rules for
backup data, archive data, and space-managed data. Client node rules apply to all
the file spaces that belong to a node. Replication rules also exist at the server level
that apply collectively to every client node that is configured for replication on a
source replication server.
During replication processing, file space rules take precedence over rules for
individual nodes. Rules for individual nodes take precedence over server rules.
The replication rule that has precedence is called the controlling replication rule.
Replication process
[Figure 113: A replication process in which backup and archive data in file spaces
/a and /b that belong to NODE1 and file space /a that belongs to NODE2 is
replicated to the target replication server]
When the REPLICATE NODE command is issued, a single replication process begins.
The source replication server identifies client nodes that are configured for
replication and the rules that apply to the file spaces in nodes that are enabled.
The backup data in file space /a that belongs to NODE2 is also high priority. The
file space rule for backup data, which is ALL_DATA_HIGH_PRIORITY, takes
precedence over the client node rule of DEFAULT and the server rule of
ALL_DATA.
Tips:
v Figure 113 on page 993 shows one possible configuration to achieve the specified
results. In general, multiple configurations can exist that accomplish the same
purpose.
For example, to replicate archive data first, you can assign the
ALL_DATA_HIGH_PRIORITY replication rule to the archive data type in each
file space that belongs to NODE1 and NODE2.
v Figure 113 on page 993 shows one replication process. To replicate certain client
nodes ahead of other client nodes, you can issue multiple REPLICATE NODE
commands in sequence, either manually or in a maintenance script. Each
command can specify a different client node or different file spaces in an
individual client node. For example, suppose NODE1 contains a large amount of
data and you want to conserve bandwidth. To replicate client node data
sequentially, you can specify NODE1 in a single REPLICATE NODE command and
NODE2 in another REPLICATE NODE command.
Related concepts:
“Replication rule hierarchy” on page 991
“Replication rule definitions” on page 990
Replication state
Replication state indicates whether replication is enabled or disabled. When you
disable replication, replication does not occur until you enable it.
Figure 114 on page 996 shows the interaction of replication states and replication
rules. In the example, NODE1 has a single file space /a that contains archive data.
Assume that the replication state of NODE1 on the target replication server is
ENABLED and that replication processing for all nodes is enabled.
[Figure 114: Decision flow showing how the replication rule for archive data in
file space /a (ALL_DATA, ALL_DATA_HIGH_PRIORITY, DEFAULT, or NONE) and the
replication states of the file space and the node determine whether the archive
data in /a is replicated or whether replication processing ends]
To determine the replication state of a file space, issue the QUERY FILESPACE
command. To determine the replication state of a client node, issue the QUERY NODE
command, and to determine the replication state of a rule, issue the QUERY
REPLRULE command.
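For example, to view the replication state, among other details, for a file space,
a client node, and a replication rule, you might issue commands like the following
ones (NODE1 and file space /a are names that are used in examples elsewhere in this
chapter):
query filespace node1 /a format=detailed
query node node1 format=detailed
query replrule active_data_high_priority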
Replication mode
Replication mode is part of a client node definition and indicates whether a client
node is set up to send or receive replicated data. The replication mode can also
indicate whether the data that belongs to a client node is to be synchronized the
first time that replication occurs. Data synchronization applies only to client nodes
whose data was exported from the source replication server and imported on the
target replication server.
The following modes are possible for a client node whose data is not being
synchronized:
SEND Indicates that the client node is set up to send data to the target replication
server. The SEND replication mode applies only to the client node
definition on a source replication server.
RECEIVE
Indicates that the client node is set up to receive replicated data from the
source replication server. The RECEIVE replication mode applies only to
the client node definition on a target replication server.
NONE
The client node is not configured for replication. To be configured for
replication, the client node must be enabled or disabled for replication.
If the data that belongs to a client node was previously exported from a source
replication server and imported on a target replication server, the data must be
synchronized. Synchronization is also required after a database restore to preserve
the client node data that is on the target replication server. When the data that
belongs to a client node is synchronized, entries in the databases of the source and
target replication servers are updated.
The following special settings for replication mode are required to synchronize
data.
Restriction: To synchronize data, the date of the imported data on the target
replication server must be the original creation date.
SYNCSEND
Indicates that data that belongs to the client node on the source replication
server is to be synchronized with the client node data on the target
replication server. The SYNCSEND mode applies only to the client node
definition on a source replication server.
When data synchronization is complete, the replication mode for the node
on the source replication server is set to SEND.
SYNCRECEIVE
Indicates that data that belongs to the client node on the target replication
server is synchronized with the client node data on the source replication
server. This SYNCRECEIVE mode applies only to the client node definition
on the target replication server.
The following table shows the results when storage pools on source and target
replication servers are enabled for data deduplication. The destination storage pool
is specified in the backup or archive copy-group definition of the management
class for each file. If the destination storage pool does not have enough space and
data is migrated to the next storage pool, the entire file is sent, whether or not the
next storage pool is set up for deduplication.
Tip: If you have a primary storage pool that is enabled for deduplication on a
source replication server, you can estimate a size for a new deduplicated storage
pool on the target replication server. Issue the QUERY STGPOOL command for the
primary deduplicated storage pool on the source replication server. Obtain the
The following client node attributes are updated during node replication:
v Aggregation (on or off)
v Automatic file space rename
v Archive delete authority
v Backup delete authority
v Backup initiation (root user or all users)
v Cipher strength
v Compression option
v Contact
v Data-read path
v Data-write path
v Email address
v High-level address
v Low-level address
v Node lock state
v Option set name
v Password
Attention: A conflict occurs if a node password is authenticated on the source
server by one server and on the target server by a different server. Because
authentication can happen on a Lightweight Directory Access Protocol (LDAP)
directory server or the Tivoli Storage Manager server, data can be lost. In the
case of this kind of dual authentication, the password is not updated during
replication.
v Password expiration period
v Operating system
v Role override (client, server, other, or usereported)
v Session initiation (client or server, or server only)
v Transaction group maximum
v URL
v Validate protocol (no, data only, or all)
The following client node attributes are not updated during node replication:
v Domain name (might not exist on target server)
v File-space access rules that are created with the client SET ACCESS command
v Node conversion type
v Client schedules.
Tip: If you want to convert client nodes for store operations to a target
replication server, you can duplicate the client schedules that are on the
source replication server.
v Client option sets.
Tip: If you want client option sets on the target replication server, you
must duplicate them.
v Backup sets.
Tip: You can generate backup sets on the target replication server for a
replicated client node.
v Network-attached storage data in nonnative storage pools.
Retention protection
You cannot configure servers for replication on which archive retention
protection is enabled.
Replication and file groups
When you are replicating files from one server to another, it is possible that
some of the files that are being replicated belong to a group of files that
are managed as a single logical entity. If a replication process ends without
replicating all the files in a group, client nodes will be unable to restore,
retrieve, or recall the file group. When replication runs again, the source
replication server attempts to replicate the missing files.
Renaming a node
If a node is configured for replication, it cannot be renamed.
Backing up a single client node to two source replication servers
If you have been backing up, archiving, or migrating a client node to two
different servers, do not set up replication of the node from both source
replication servers to the same target replication server. Replicating from
two source servers might create different versions of the same file on the
target server and cause unpredictable results when restoring, retrieving, or
recalling the file.
Password propagation to the target replication server
When client node data is replicated for the first time, the source server
sends the node definition, including the password, to the target server.
During subsequent replications, if the node password is updated, the
source server attempts to send the updated password to the target server.
Whether these attempts succeed depends on the node authentication
method and on the combination of methods that are used on the source
and target servers. A conflict occurs if a node password is authenticated on
the source server by one server and on the target server by a different
server. Because authentication can happen on an LDAP (Lightweight
Directory Access Protocol) directory server or on the Tivoli Storage Manager
server, data can be lost. In the case of this kind of dual authentication, the
password is not updated during replication.
You can use the following commands to set up and manage node replication:
v To add client nodes for replication, use the REGISTER NODE and UPDATE NODE
commands. For more information, see "Adding client nodes for replication
processing" on page 1027.
v To set up Secure Sockets Layer (SSL) communications between a source and target
replication server, use the DEFINE SERVER and UPDATE SERVER commands. For more
information, see "Configuring a server for SSL communications" on page 1031.
v To change a target replication server, use the SET REPLSERVER command. For more
information, see "Selecting a new target replication server" on page 1030.
v To remove a target replication server, use the SET REPLSERVER command. For more
information, see "Removing a target replication server" on page 1031.
v To control the number of node replication sessions, use the REPLICATE NODE
command. For more information, see "Controlling throughput for node replication"
on page 1038.
v To disable or enable inbound or outbound sessions from a source or target
replication server, use the DISABLE SESSIONS and ENABLE SESSIONS commands. For
more information, see "Disabling and enabling outbound or inbound sessions" on
page 1042.
v To disable or enable outbound replication processing from a source replication
server, use the DISABLE REPLICATION and ENABLE REPLICATION commands. For more
information, see "Disabling and enabling outbound node replication processing"
on page 1043.
v To remove a replication configuration, use the REMOVE REPLNODE and SET REPLSERVER
commands. For more information, see "Removing a node replication configuration"
on page 1051.
v To replicate data by individual file space, by priority, and by data type, use
the REPLICATE NODE and DEFINE SCHEDULE commands. For more information, see
"Replicating data by command" on page 1034.
v To temporarily disable replication for a data type in a file space, use the
UPDATE FILESPACE command. For more information, see "Disabling and enabling
replication of data types in a file space" on page 1040.
v To temporarily disable replication for an individual client node, use the UPDATE
NODE command. For more information, see "Disabling and enabling replication for
individual client nodes" on page 1041.
v To temporarily disable replication of data that is assigned a particular
replication rule, use the UPDATE REPLRULE command. For more information, see
"Disabling and enabling replication rules" on page 1043.
v To temporarily disable inbound and outbound server sessions, including
replication sessions for all client nodes, use the DISABLE SESSIONS and ENABLE
SESSIONS commands. For more information, see "Disabling and enabling outbound or
inbound sessions" on page 1042.
v To temporarily disable outbound replication processing from a source replication
server, use the DISABLE REPLICATION and ENABLE REPLICATION commands. For more
information, see "Disabling and enabling outbound node replication processing"
on page 1043.
v To prevent replication of backup, archive, or space-managed data in a file space
on a source replication server, and to delete the data from the target
replication server, use the UPDATE FILESPACE command. For more information, see
"Purging replicated data in a file space" on page 1044.
v To cancel all replication processes, use the CANCEL REPLICATION command. For more
information, see "Canceling replication processes" on page 1046.
v To specify the number of days to retain replication records in the Tivoli
Storage Manager database, use the SET REPLRETENTION command. For more
information, see "Retaining replication records" on page 1049.
v To display information about the replication settings for a file space, use the
QUERY FILESPACE command. For more information, see "Displaying information about
node replication settings for file spaces" on page 1046.
v To display information about the replication settings for a client node, use
the QUERY NODE command. For more information, see "Displaying information about
node replication settings for client nodes" on page 1047.
v To display information about replication rules, use the QUERY REPLRULE command.
For more information, see "Displaying information about node replication rules"
on page 1047.
v To display records of running and ended replication processes, use the QUERY
REPLICATION command. For more information, see "Displaying information about node
replication processes" on page 1047.
v To determine whether replication to the target replication server is keeping
pace with the number of files that are eligible for replication on the source
replication server, use the QUERY REPLNODE command. For more information, see
"Measuring the effectiveness of a replication configuration" on page 1048.
v To measure the effects of data deduplication, use the QUERY REPLICATION command.
For more information, see "Measuring the effects of data deduplication on node
replication processing" on page 1049.
As you plan, remember that a target replication server must be accessible from a
source replication server by using an IP connection. The connection must provide
sufficient bandwidth to accommodate the volume of data to be replicated. If the
connection is insufficient and becomes a bottleneck for replication, keeping the
data on the two servers synchronized can be a problem. Keep in mind that you
can use client-side data deduplication with node replication to reduce network
bandwidth requirements and storage requirements.
The destination storage pool on a target replication server must have sufficient
space to store replicated data.
To determine whether the database can manage additional space requirements, you
must estimate how much more database space node replication will use.
Requirement: Place the database and database logs on separate disks that have a
high performance capability. Use a separate disk or mount point for the following:
v Other applications that use the database and logs
v System tasks, such as system paging
1. Determine the number of files for each node and data type that is in use. Issue the
QUERY OCCUPANCY command for each node and data type that you plan to
replicate. For example, you can display information about the file spaces that
are assigned to the node named PAYROLL by issuing the following command:
query occupancy payroll
2. Determine how much more database space is required by using the total number
of files that are used by all nodes and data types. Use the following formula to
calculate the amount of database space that is required:
Total_number_of_files_from_all_nodes_and_data_types * 300 (the number of
additional bytes that are needed for each replicated file)
For a worked example, see the calculation after this procedure.
Important: You must increase the available database space when the additional
required space approaches or exceeds the size of your database. Ensure that
you examine both replication servers and their databases and increase the
database size if necessary.
3. Increase the size of the database by the additional database space required and
include an additional 10% of the database size.
Tip: Tune the performance of replication to the data type. For example, if you
do not plan to replicate a data type in a file space, exclude the number of files
for that data type.
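For example, suppose that the nodes and data types that you plan to replicate
contain a total of 10 million files (a hypothetical figure for illustration). The
additional database space that is required is approximately 10,000,000 * 300 bytes =
3 GB. Increase the database by that amount, plus the additional 10% of the database
size that is described in step 3.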
2. Determine the amount of data that is backed up daily by the client nodes.
Complete the following steps to estimate the amount of data that is replicated
incrementally daily:
a. When client nodes complete a store operation, the client logs completion
messages with the server. The completion messages report statistics or
If the value for required network bandwidth exceeds the capabilities of your
network, you must adjust the values in the formula. Reduce the TD value or
increase the replication time, to reduce the value for Required_Network_Bandwidth. If
you cannot adjust the TD or the RWT time values, adjust or replace your existing
network to reduce the additional workload.
When you determine how long it takes for replication to finish, you can decide
which method you use to complete the initial replication. The method that you use
for the initial replication is based on the data, time, and bandwidth values that you
calculated.
Related tasks:
“Selecting a method for the initial replication”
| Tip: You can also export client data directly to another server so that it can be
| immediately imported. For example, to export client node information and all
| client files for NODE1 directly to SERVERB, issue the following command:
| export node node1 filedata=all toserver=serverb
When you decide how many nodes to add to a group, consider the amount of data
that is replicated daily by the nodes.
1. Prioritize the subset of nodes that have critical data. Replicate critical data first,
by issuing the REPLICATE NODE command.
2. Continue to replicate the high-priority nodes daily while incrementally adding
the replication of other subsets of nodes that contain important, but not critical,
data.
3. Repeat this process until all subsets of all nodes that must be replicated
complete their initial replication.
Related concepts:
“Node replication processing” on page 990
“Replication rules” on page 990
During the next scheduled replication, any new active versions, including all
inactive versions, are replicated. The files that were active but are now inactive are
not replicated again.
Remember: If you do not have time to complete replication, you can cancel it
after it has started, by issuing the CANCEL REPLICATION command.
4. Use the summary information to determine whether the values of the
controlled test match the actual replication values. You calculate the values of
the controlled test in “Tuning replication processing” on page 1039. For
example, to display information about a replication process 23, issue the
following command:
query process 23
| If you are unable to complete the replication process in the amount of time that
| you scheduled, increase the number of data sessions that transfer data to the target
| server. Replication performance improves when more deduplicated data is stored
| on the target server. When more extents are stored on the target server, more
| duplicates are found for an extent.
| If you are replicating data from storage pools that are enabled for data
| deduplication, run processes in the following order:
1. To identify duplicates, issue the IDENTIFY DUPLICATES command. Break files
into extents to reduce the amount of data that is sent to the target server when
replication occurs.
The following figure shows the replication rules that are created in the default
configuration. Backup data includes both active and inactive backup data.
[Figure: Replication process in the default configuration, in which all data is
replicated as normal-priority data to the target replication server]
After you complete the default configuration, you can change the replication rules
to meet your specific replication requirements.
Server definitions are required for the source replication server to communicate
with the target replication server and for the target replication server to report
status to the source replication server.
Important: You can specify only one target replication server for a source
replication server. However, you can specify one or more source replication servers
for a single target replication server. Source and target replication servers must be
V6.3.
The method that you use to set up servers depends on whether the server
definitions exist and on whether you are using the cross-define function to
automatically define one server to another.
Remember: If you want an SSL connection, the value for the SET
SERVERLLADDRESS command on the target replication server must be an SSL
port. The value of the SET SERVERNAME command must match the server
name in the server definition.
2. On the source replication server, issue the following commands:
| set servername source_server_name
| set serverpassword source_server_password
| set serverhladdress source_server_ip_address
| set serverlladdress source_server_tcp_port
Remember: If you want an SSL connection, the value for the SET
SERVERLLADDRESS command on the source replication server must be an SSL
port. The value of the SET SERVERNAME command must match the server
name in the server definition.
| 3. On the source replication server, connect to the target replication server by
| using the DEFINE SERVER command. If you want an SSL connection, specify
| SSL=YES. To use the cross-define function, specify CROSSDEFINE=YES, for example:
| define server target_server_name hladdress=target_server_ip_address
| lladdress=target_server_tcp_port serverpassword=target_server_password
| crossdefine=yes ssl=yes
A server definition is created on the source replication server, and the source
replication server is connected to the target replication server. A definition that
points to the source replication server is also created on the target replication
server.
v If server definitions do not exist and you are not using the cross-define function,
complete the following steps:
1. Issue the following commands on both the source and target replication
servers:
set servername server_name
set serverpassword server_password
set serverhladdress ip_address
set serverlladdress tcp_port
Remember: If you want an SSL connection, the value for the SET
SERVERLLADDRESS command on the replication servers must be an SSL port.
The value of the SET SERVERNAME command must match the server name in
the server definition.
2. Issue the DEFINE SERVER command on each server. Do not specify the
CROSSDEFINE parameter. If you want an SSL connection, specify SSL=YES, for
example:
– On the source server:
define server target_server_name hladdress=target_server_ip_address
lladdress=target_server_tcp_port serverpassword=target_server_password
ssl=yes
– On the target server:
define server source_server_name hladdress=source_server_ip_address
lladdress=source_server_tcp_port serverpassword=source_server_password
ssl=yes
v If definitions exist for both the source and target replication servers, issue the
UPDATE SERVER command on each server. Do not specify the CROSSDEFINE
parameter. You can use the QUERY STATUS command to determine the server
names. If you want an SSL connection, specify SSL=YES, for example:
– On the source server:
update server target_server_name hladdress=target_server_ip_address
lladdress=target_server_tcp_port serverpassword=target_server_password
ssl=yes
– On the target server:
update server source_server_name hladdress=source_server_ip_address
lladdress=source_server_tcp_port serverpassword=source_server_password
ssl=yes
Before beginning this procedure, issue the PING SERVER command to verify that the
definitions for the source and target replication servers are valid and that the
servers are connected.
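For example, assuming that the target replication server is named PHOENIX_SRV, as in
the command that follows, you might verify the connection by issuing:
ping server phoenix_srv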
To specify a target replication server, issue the SET REPLSERVER command on the
source replication server. For example, to specify a server named PHOENIX_SRV
as the target replication server, issue the following command:
set replserver phoenix_srv
Issuing the SET REPLSERVER command also sets replication rules to ALL_DATA. To
display replication rules, you can issue the QUERY STATUS command.
Related concepts:
“Replication server configurations” on page 988
Restrictions:
v If a client node definition does not exist on the target replication server, do not
create it. The definition for the client node on the target server is created
automatically when the node's data is replicated the first time.
v If a client node definition exists on both the source and target replication servers,
but the data that belongs to the client node was not exported and imported, you
must rename or remove the client node on the target replication server before
data can be replicated.
v If you previously removed a client node from replication on the source
replication server, but not on the target replication server, you do not have to
rename or remove the node on the target replication server.
To configure a client node for replication, take one of the following actions,
depending on whether a node’s data was exported from the source server and
imported on the target server:
v If the node’s data was not exported from the source server and imported on the
target server, complete one of the following steps:
– If the client node is not already registered on a source replication server, issue
the REGISTER NODE command on the source replication server. Specify
REPLSTATE=ENABLED or REPLSTATE=DISABLED.
For example, to enable a new client node, NODE1, for replication, issue the
following command:
register node node1 password replstate=enabled
– If the client node is already registered on a source replication server, issue the
UPDATE NODE command on the source replication server. Specify
REPLSTATE=ENABLED or REPLSTATE=DISABLED.
For example, to enable an existing client node, NODE1, for replication, issue
the following command:
update node node1 replstate=enabled
v If the node’s data was exported from the source replication server and imported
to the target replication server, complete the following steps:
1. On the source replication server, issue the UPDATE NODE command:
a. Specify REPLSTATE=ENABLED or REPLSTATE=DISABLED.
b. Specify REPLMODE=SYNCSEND.
2. On the target replication server, issue the UPDATE NODE command and specify
REPLMODE=SYNCRECEIVE.
Data is synchronized during replication. After replication is complete, the
REPLMODE parameter in the client node definition on the source replication server
is set to SEND. The REPLMODE parameter in the client node definition on the
target replication server is set to RECEIVE, and the REPLSTATE parameter is set to
ENABLED.
If you set the replication state of the client node to DISABLED, the replication
mode is set to SEND, but replication does not occur. If you set the replication state
of the client node to ENABLED, the client node definition is created on the target
replication server when replication occurs for the first time. In addition, the
replication mode of the client node on the target replication server is set to
RECEIVE, and the replication state is set to ENABLED.
If you add a file space to a client node that is configured for replication, the
file-space replication rules for data types are automatically set to DEFAULT. To
change file-space replication rules, issue the UPDATE FILESPACE command.
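For example, to assign the ACTIVE_DATA_HIGH_PRIORITY rule to backup data in a newly
added file space /a that belongs to NODE1 (names that are used in examples elsewhere
in this chapter), you might issue:
update filespace node1 /a datatype=backup replrule=active_data_high_priority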
To determine the replication mode and the replication state that a client node is in,
issue the QUERY NODE command.
The default configuration is complete after client nodes are configured for
replication. You are now ready to replicate. If you do not change the default
replication rules, all backup, archive, and space-managed data in all
replication-enabled client nodes is replicated.
Related concepts:
“Replication mode” on page 997
“Replication state” on page 994
Rules for file spaces are either normal priority or high priority. In a replication
process that includes both normal-priority and high-priority data, high-priority
data is replicated first. If you issue the REPLICATE NODE command for two or more
clients, all high priority data for all file spaces in the specified nodes is processed
before normal priority data.
Before you select a rule, consider the order in which you want the data to be
replicated. For example, suppose that a file space contains active backup data and
archive data. Replication of the active backup data is a higher priority than the
archive data. To prioritize the active backup data, specify DATATYPE=BACKUP
REPLRULE=ACTIVE_DATA_HIGH_PRIORITY. To prioritize the archive data, issue the
UPDATE FILESPACE command again, and specify DATATYPE=ARCHIVE
REPLRULE=ALL_DATA.
Attention:
v If you specify ACTIVE_DATA, inactive backup data in the file space is
not replicated, and inactive backup data in the file space on the target
replication server is deleted.
v If you specify ACTIVE_DATA, you cannot specify ARCHIVE or
SPACEMANAGED as values for the parameter DATATYPE in the same
command instance.
ALL_DATA_HIGH_PRIORITY
Replicates backup, archive, or space-managed data. The data is replicated
with a high priority.
ACTIVE_DATA_HIGH_PRIORITY
Replicates active backup data. The data is replicated with a high priority.
To display the replication rules for a file space, issue the QUERY FILESPACE
command. Specify FORMAT=DETAILED.
In the following example, assume that you have two client nodes, NODE1 and
NODE2. The nodes have the following file spaces:
v NODE1: /a, /b, /c
v NODE2: /a, /b, /c, /d, /e
All the file space rules are set to DEFAULT. The backup, archive, and
space-managed rules for NODE1 and NODE2 are also set to DEFAULT. The server
The data that belongs to the two nodes is replicated in the following order:
1. High Priority: Data in file space /a that belongs to NODE1 and data in file
space /c in NODE2
2. Normal priority: Data in file spaces /b and /c that belongs to NODE1 and data
in file spaces /a, /b, /d, and /e that belongs to NODE2
Important: Data types in new file spaces that are added to a client node after the
node is configured for replication are automatically assigned the DEFAULT
replication rule.
Related concepts:
“Replication rules” on page 990
Rules for client nodes are either normal priority or high priority. In a replication
process that includes both normal-priority and high-priority data, high-priority
data is replicated first. If you issue the REPLICATE NODE command for two or more
clients, all high priority data for all file spaces in the specified nodes is processed
before normal priority data.
Before you select a rule, consider the order in which you want the data to be
replicated. For example, suppose that a client node contains active backup data
and archive data. Replication of the active backup data is a higher priority than
replication of the archive data. To prioritize the active backup data, specify the
ACTIVE_DATA_HIGH_PRIORITY replication rule for backup data. Specify the
ALL_DATA rule for archive data.
Attention:
v If you specify ACTIVE_DATA, inactive backup data that belongs to the
client node is not replicated.
v If the replication rule for backup data in any file spaces that belong to
the client node is DEFAULT, inactive backup data in those file spaces on
the target replication server is deleted.
ALL_DATA_HIGH_PRIORITY
Replicates backup, archive, or space-managed data. The data is replicated
with a high priority.
ACTIVE_DATA_HIGH_PRIORITY
Replicates active backup data. The data is replicated with a high priority.
Attention:
v If you specify ACTIVE_DATA_HIGH_PRIORITY, inactive backup data
that belongs to the client node is not replicated.
v If the replication rule for backup data in any file spaces that belong to
the client node is DEFAULT, inactive backup data in those file spaces on
the target replication server is deleted.
DEFAULT
Replicates data according to the server rule for the data type.
For example, suppose that you want to replicate the archive data in all
client nodes that are configured for replication. Replication of the archive
data is a high priority. One method to accomplish this task is to set the
file-space and client-node replication rules for archive data to DEFAULT.
Set the server rule for archive data to ALL_DATA_HIGH_PRIORITY.
NONE
Data is not replicated. For example, if you do not want to replicate the
space-managed data in a client node, specify the NONE replication rule for
space-managed data.
To display the replication rules that apply to all file spaces that belong to a node,
issue the QUERY NODE command and specify FORMAT=DETAILED.
Remember: File spaces are not displayed for client nodes that are registered on the
source replication server but that have not performed store operations. Only after
the client stores data to the source replication server are file spaces created.
Replication rules for data types in file spaces are automatically assigned values of
DEFAULT.
To change replication rules for a node, issue one or more of the following
commands on the source replication server:
v To change a replication rule for backup data, issue the UPDATE NODE command
and specify the BKREPLRULEDEFAULT parameter. For example, to specify the
ACTIVE_DATA rule for backup data in NODE1, issue the following command:
update node node1 bkreplruledefault=active_data
v To change a replication rule for archive data, issue the UPDATE NODE command
and specify the ARREPLRULEDEFAULT parameter. For example, to specify the
ALL_DATA_HIGH_PRIORITY rule for archive data in NODE1, issue the
following command:
update node node1 arreplruledefault=all_data_high_priority
v To change a replication rule for space-managed data, issue the UPDATE NODE
command and specify the SPREPLRULEDEFAULT parameter. For example, to specify
the NONE rule for space-managed data in NODE1, issue the following
command:
update node node1 spreplruledefault=none
Related concepts:
“Replication rules” on page 990
Server rules are either normal priority or high priority. In a replication process that
includes both normal-priority and high-priority data, high-priority data is
replicated first. If you issue the REPLICATE NODE command for two or more clients,
all high priority data for all file spaces in the specified nodes is processed before
normal priority data.
Before you select a rule, consider the order in which you want the data to be
replicated. For example, suppose that your client nodes contain active backup data
and archive data. Replication of the active backup data is a high priority. To
prioritize the active backup data, specify the ACTIVE_DATA_HIGH_PRIORITY
replication rule. Specify the ALL_DATA rule for archive data.
Attention:
v If you specify ACTIVE_DATA, inactive backup data that belongs to
client nodes is not replicated.
v If the replication rules for backup data in any file spaces and any client
nodes are DEFAULT, inactive backup data in those file spaces on the
target replication server is deleted. For example, suppose the rules for
backup data in file space /a in NODE1 and file space /c in NODE2 are
DEFAULT. The rules for backup data in NODE1 and NODE2 are also
DEFAULT. If you specify ACTIVE_DATA as the server rule, inactive data
in file spaces /a and /c is deleted.
ALL_DATA_HIGH_PRIORITY
Replicates backup, archive, or space-managed data. The data is replicated
with a high priority.
ACTIVE_DATA_HIGH_PRIORITY
Replicates only the active backup data in client nodes. The data is
replicated with a high priority.
To change server replication rules, issue one or more of the following commands
on the source replication server:
v To change the server replication rule that applies to backup data, issue the SET
BKREPLRULEDEFAULT command on the source replication server. For example, to
specify the ACTIVE_DATA rule for backup data, issue the following command:
set bkreplruledefault active_data
v To change the server replication rule that applies to archive data, issue the SET
ARREPLRULEDEFAULT command on the source replication server. For example, to
specify the ALL_DATA_HIGH_PRIORITY rule for archive data, issue the
following command:
set arreplruledefault all_data_high_priority
v To change the server replication rule that applies to space-managed data, issue
the SET SPREPLRULEDEFAULT command on the source replication server. For
example, to specify the NONE rule for space-managed data, issue the following
command:
set spreplruledefault none
Related concepts:
“Replication rules” on page 990
NODE1 has two file spaces, /a and /b. NODE2 has one file space, /a. File space
and client replication rules for backup, archive, and space-managed data are set to
DEFAULT. Server replication rules are set to ALL_DATA. You have the following
goals:
v Replicate only the active backup data in file space /a that belongs to NODE1.
v Do not replicate any space-managed data in any of the file spaces that belong to
NODE1.
v Replicate the archive data in all file spaces that belong to NODE1 and NODE2.
Make the replication of this data a high priority.
v Replicate the active and inactive backup data in file space /a that belongs to
NODE2. Make replication of this data a high priority.
Figure 115. Replication process: data in file spaces /a and /b of NODE1 and file space /a of
NODE2, including archive data and backup data, is replicated to the target replication server
Tips:
v In Figure 115 on page 1024, all the data in all the file spaces of both client nodes
is replicated in one process. However, if the amount of node data is large and
you do not have enough bandwidth to replicate data in a single process, you can
use one of the following methods:
– Schedule or manually issue separate REPLICATE NODE commands at different
times for NODE1 and NODE2.
– Replicate high-priority and normal-priority data separately at different times
by specifying the PRIORITY parameter on the REPLICATE NODE command.
– Replicate different data types at different times by specifying the DATATYPE
parameter on the REPLICATE NODE command.
– Combine replication by priority and by data type by specifying both the
PRIORITY and DATATYPE parameters on the REPLICATE NODE command.
v To verify the replication rules that apply to the file spaces in the client nodes,
issue the VALIDATE REPLICATION command. You can also use this command to
verify that the source replication server can communicate with the target
replication server. To preview results, issue the REPLICATE NODE command and
specify PREVIEW=YES, as shown in the following example.
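The following sketch verifies the rules and the connection for the two example client
nodes and then previews replication without transferring data:
validate replication node1,node2 verifyconnection=yes
replicate node node1,node2 preview=yes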
Related concepts:
“Replication rules” on page 990
Client node data that was exported and imported must be synchronized between
the source and target replication servers. You set up client nodes to synchronize
their data as part of the process of configuring nodes for replication. Data is
synchronized the first time that replication occurs. To synchronize data, the data
must be imported to the disaster recovery server using ABSOLUTE as the value for
the DATES parameter on the IMPORT NODE command.
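For example, an import that preserves the original insertion dates might look like
the following sketch; the node name and device class name are illustrative:
import node node1 filedata=all dates=absolute devclass=tapeclass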
Important: You cannot display information about running replication processes for
client nodes that are being converted from import and export operations to
replication operations. The conversion process might run for a long time, but it
occurs only once for a client node that is being converted.
After you set up a basic replication configuration, you can change file-space,
client-node, and server replication rules. To replicate data, issue the REPLICATE NODE
command in an administrative schedule or on a command line.
Before adding a client node for replication, ask the following questions:
v Was the data that belongs to the client node previously exported from the server
that is to be the source for replicated data?
v If the data was exported, was it imported on the server that is now the target for
replicated data?
v When you imported the data, did you specify DATES=ABSOLUTE on the IMPORT
NODE command?
If you answered "yes" to all of the preceding questions, you must set up to
synchronize the data on the source and target replication servers. The following
procedure explains how to set up synchronization when adding client nodes for
replication. Synchronization occurs during replication.
Restrictions:
v If a client node definition does not exist on the target replication server, do not
create it. The definition for the client node on the target server is created
automatically when the node's data is replicated the first time.
v If a client node definition exists on both the source and target replication servers,
but the data that belongs to the client node was not exported and imported, you
must rename or remove the client node on the target replication server before
data can be replicated.
v If you previously removed a client node from replication on the source
replication server, but not on the target replication server, you do not have to
rename or remove the node on the target replication server.
If you set the replication state of the client node to DISABLED, the replication
mode is set to SEND, but replication does not occur. If you set the replication state
of the client node to ENABLED, the client node definition is created on the target
replication server when replication occurs for the first time. In addition, the
replication mode of the client node on the target replication server is set to
RECEIVE, and the replication state is set to ENABLED.
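As a sketch of the synchronization setup, assuming a client node named NODE1
whose data was previously exported and imported, issue the UPDATE NODE
command on both servers.
On the source replication server:
update node node1 replstate=enabled replmode=syncsend
On the target replication server:
update node node1 replstate=enabled replmode=syncreceive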
If you add a file space to a client node that is configured for replication, the
file-space replication rules for data types are automatically set to DEFAULT.
After you add client nodes for replication, ensure that they are included in any
existing administrative schedules for replication. Alternatively, you can create a
schedule for replication that includes the new client nodes.
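For example, the following administrative schedule is a sketch that replicates two
new client nodes daily at 6:00 a.m.; the schedule name, node names, and times are
illustrative:
define schedule replicate_new_nodes type=administrative active=yes
 cmd="replicate node node3,node4" starttime=06:00 period=1 perunits=days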
Related concepts:
“Replication mode” on page 997
“Replication state” on page 994
Removing a client node from replication deletes only information about replication
from the server database. Removing a node from replication does not delete the
data that belongs to the node that was replicated.
To completely remove a client node from replication, issue the REMOVE REPLNODE
command on the source and target replication servers that have the node
configured for replication. For example, to remove NODE1 and NODE2 from
replication, issue the following command:
remove replnode node1,node2
To verify that the node was removed, issue the QUERY NODE command on the source
and the target replication servers. For example, to verify that NODE1 and NODE2
were removed, issue the following command:
query node node1,node2 format=detailed
If the node was removed, the Replication State and Replication Mode fields are
blank. If you do not want to keep the node data that is stored on the target
replication server, you can delete it.
If you remove a client node from replication, rename the node, or delete the node
data and then remove the node, you can later add the node for replication again.
All the data that belongs to the node is then replicated to the target replication server.
For example, suppose that you updated the definition of a client node whose data
you wanted to replicate. The data that belongs to the node was previously
exported from the source replication server and imported to the target replication
server. You specified ENABLED as the setting of the REPLSTATE parameter.
However, you did not specify SYNCSEND as the replication mode on the source
replication server. As a result, the REPLMODE parameter was automatically set to
SEND, and data that belongs to the node was not synchronized or replicated.
To reconfigure the client node for replication, complete the following steps:
1. Issue the REMOVE REPLNODE command for the client node. For example, to
remove a client node, NODE1, from replication, issue the following command:
remove replnode node1
Issuing the REMOVE REPLNODE command resets the replication state and the
replication mode for the client node to NONE.
2. Issue the UPDATE NODE command with the correct parameters and values.
For example, to enable NODE1 for replication and synchronize the data that
belongs to the node, complete the following steps:
a. On the source replication server, issue the following command:
update node node1 replstate=enabled replmode=syncsend
b. On the target replication server, issue the following command:
update node node1 replstate=enabled replmode=syncreceive
After synchronization and replication are complete, the REPLMODE parameter in the
client node definition on the source replication server is set to SEND. The REPLMODE
parameter in the client node definition on the target replication server is set to
RECEIVE.
Related concepts:
“Replication mode” on page 997
“Replication state” on page 994
You can add a source replication server to an existing configuration. For example,
suppose that you have a replication configuration comprising a single
source-replication server and a single target-replication server. You can add another
source server that replicates data to the existing target server.
Related concepts:
“Replication server configurations” on page 988
To change a target replication server, issue the SET REPLSERVER command on the
source replication server. Specify the name of the new target replication server. For
example, to specify NEW_TGTSRV as the new target replication server, issue the
following command:
set replserver new_tgtsrv
The following example describes what occurs when you change or add target
replication servers. Suppose that TGTSRV is the target replication server for
SRCSRV. SRCSRV has one client node, NODE1.
1. Files A, B, and C that belong to NODE1 are replicated to TGTSRV.
2. You change the target replication server to NEW_TGTSRV.
3. NODE1 backs up files D, E, and F to SRCSRV.
4. Replication occurs for NODE1. Files A, B, and C, which were replicated to
TGTSRV, are replicated to NEW_TGTSRV. New files D, E, and F are also
replicated to NEW_TGTSRV.
Before you begin this procedure, delete any administrative schedules on the source
replication server that issue the REPLICATE NODE command.
To remove a target replication server, issue the SET REPLSERVER command. Do not
specify the name of a target replication server. For example, to remove a target
server TGTSRV, issue the following command:
set replserver
Remember: If you do not want to keep replicated node data on the target
replication server, you can delete it.
A server that uses SSL can obtain a unique certificate that is signed by a certificate
authority (CA), or the server can use a self-signed certificate. Before starting the
source and target replication servers, install the certificates and add them to the
key database files. Required SSL certificates must be in the key database file that
belongs to each server. SSL support is active if the server options file contains the
SSLTCPPORT or SSLTCPADMINPORT option or if a server is defined with SSL=YES at
startup.
The server and its database are updated with the new password. After updating
the password, shut down the server, add the certificates, and start the server.
To determine whether a server is using SSL, issue the QUERY SERVER command.
To update a server definition for SSL, issue the UPDATE SERVER command. For
example, to update the server definition for server PHOENIX_SRV, issue the
following command:
update server phoenix_srv ssl=yes
Restriction: For event servers, library servers, and target replication servers, the
name of the source replication server must match the value of the SET SERVERNAME
command on the target. Because the source replication server uses the name of the
If you enable SSL communications and are using the following functions, you must
create separate source and target definitions that use TCP/IP for the corresponding
server-to-server communications:
v Enterprise configuration
v Command routing
v Virtual volumes
v LAN-free
Replication is the only server-to-server function that can use SSL.
If you use SSL with node replication, you must create separate server definitions
for enterprise configuration, command routing, virtual volumes, and LAN-free
communications.
Suppose that you want to use a source replication server to replicate data and to
route commands. In the option file of the target replication server, the value of the
TCPPORT option is 1500. The value of the SSLTCPPORT option is 1542.
You can define a server named SSL to use for node replication:
define server ssl hladdress=1.2.3.4 lladdress=1542 ssl=yes
serverpassword=xxxxx
A controlling rule is the rule that the source replication server uses to replicate data
in a file space. For example, suppose the replication rule for backup data in file
space /a is DEFAULT. If the client-node rule for backup data is ALL_DATA, the
controlling rule for the backup data in file space /a is ALL_DATA.
All file spaces are displayed regardless of whether the state of the data types in
the file spaces is enabled or disabled.
v To display the controlling replication rules and verify the connection with the
target replication server, issue the following command:
validate replication node1,node2 verifyconnection=yes
Specifying the LISTFILES parameter signifies that the WAIT parameter is set to
YES and that you cannot issue the WAIT parameter from the server console.
If the data that belongs to a client node is being replicated, any attempt to replicate
the data by issuing another REPLICATE NODE command fails. For example, suppose
the backup data that belongs to a client node is scheduled for replication at 6:00
a.m. Replication of the archive data is scheduled for 8:00 a.m. Replication of the
backup data must complete before replication of the archive data starts.
Example
If you have many client nodes and are replicating a large amount of data, you can
replicate data more efficiently by issuing several REPLICATE NODE commands in
separate schedules. For example, replicate the data that belongs to the most
important client nodes first in a single command. After the data that belongs to
those client nodes is replicated, replicate the data that belongs to the other nodes.
Tip: To ensure that replication for the first group of client nodes finishes before the
replication for the other nodes starts, specify WAIT=YES on the first REPLICATE NODE
command. For example, if you want to replicate the data that belongs to NODE1
and NODE2 before the data that belongs to NODE3 and NODE4, issue the
following commands:
replicate node node1,node2 wait=yes
replicate node node3,node4
Data is replicated for a file space only when the following conditions are true:
v The replication state for data types in file spaces is enabled. For example, if the
replication state for archive data in a file space is enabled, archive data in the
file space is replicated.
v The controlling rule for the data type in the file space is not NONE. For
example, suppose the replication rule for archive data in a file space is
DEFAULT. If the file space rules and client node rules for archive data are both
DEFAULT and the server rule for archive data is NONE, archive data in the file
space is not replicated.
To replicate data by file space, issue the REPLICATE NODE command and specify the
file space name or file space identifier. For example, to replicate data in file space
/a in NODE1, issue the following command:
replicate node node1 /a
Tip: With the REPLICATE NODE command, you can also replicate data by priority
and by data type. To achieve greater control over replication processing, you can
combine replication by file space, data type, and priority.
To obtain information about the node replication process while it is running, issue
the QUERY PROCESS command:
query process
For node replication purposes, each file space contains three logical file spaces:
v One for backup objects
v One for archive objects
v One for space-managed objects
By default, the QUERY PROCESS command reports results for each logical file space.
Other factors also affect the output of the QUERY PROCESS command:
v If a file space has a replication rule that is set to NONE, the file space is not
included in the count of file spaces that are being processed.
v If you specify data types in the REPLICATE NODE command, only those data types
are included in the count of file spaces that are being processed, minus any file
spaces that are specifically excluded.
In this example, NODE1 has four file spaces with three object types. Because the
example includes four file spaces with three object types, 12 logical file spaces are
processed for replication. The QUERY PROCESS command output for node
replication shows that 11 logical file spaces completed replication.
Related concepts:
“Node replication processing” on page 990
If you do not specify a type on the REPLICATE NODE command, all data types are
replicated.
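For example, to replicate only the backup data that belongs to NODE1 (an
illustrative node name), specify the DATATYPE parameter:
replicate node node1 datatype=backup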
Tip: Using the REPLICATE NODE command, you can also replicate data by file space
and by priority. To achieve greater control over replication processing, you can
combine replication by data type, file space, and priority.
Related concepts:
“Node replication processing” on page 990
Tip: Using the REPLICATE NODE command, you can also replicate data by file space
and by data type. To achieve greater control over replication processing, you can
combine replication by priority, file space, and data type.
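For example, the following command is a sketch that replicates only the
high-priority data that belongs to NODE1, assuming that the PRIORITY parameter
accepts HIGH; the normal-priority form is shown in the next example:
replicate node node1 priority=high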
Related concepts:
“Node replication processing” on page 990
The name of the file space is /a. It is common to NODE1 and NODE2.
To replicate the data in the file space, issue the following command:
replicate node node1,node2 /a priority=normal datatype=archive,spacemanaged
Issuing this command replicates archive and space-managed data that is assigned
the replication rule ALL_DATA.
Related concepts:
“Node replication processing” on page 990
Use the MAXSESSIONS parameter to specify the maximum number of sessions to use.
When you calculate the value for the MAXSESSIONS parameter, consider the available
network bandwidth and the processor capacity of source and target replication
servers.
Consider the number of logical and physical drives that can be dedicated to the
replication process. You must ensure that there are enough drives available for
replication processing because other server processes or client sessions might also
be using drives. The number of mount points and drives available for replication
operations depends on the following factors:
v Tivoli Storage Manager server activity that is not related to replication
v System activity
v The mount limits of the device classes for the sequential-access storage pools
that are involved
v The availability of a physical drive on the source and target replication servers,
if the device type is not FILE
v The available network bandwidth and the processor capacity of source and
target replication servers
Issue the REPLICATE NODE command and specify the MAXSESSIONS parameter to
determine the number of data sessions. For example, to set the maximum number
of replication sessions to 6 for NODE_GROUP1, issue the following command:
replicate node node_group1 maxsessions=6
| Do not use a storage pool that is enabled for data deduplication to test replication.
| By using storage pools that are not enabled for data deduplication to test
| replication processing, you avoid processing extents that can increase the amount
| of preprocessing time of the replication process. By determining the data transfer
| and network capability of your replication operation without extent processing,
| you get a better representation of the capability of your system. Test replication
| processing with storage pools that are enabled for data deduplication if you want
| to determine the effect of data deduplication on replication performance alone.
| You must calculate the bytes-per-hour value for each source server individually.
| You can determine which method is the most suitable for the server, based on its
| bytes-per-hour value.
Complete the following steps to determine how much data you can replicate
during a specified timeframe so that you can tune replication processing for a
server. Repeat these steps to obtain the bytes-per-hour value for each server that you
want to use for replication processing.
| 1. Complete the following steps to select the appropriate data:
| a. Select one or more nodes and file spaces that have approximately 500 GB to
| 1 TB of total data.
| b. Select data that is typical of the data that you replicate on a routine basis.
| c. Select nodes that are configured for replication.
| 2. To display the amount of data in a file space, issue the QUERY OCCUPANCY
| command.
3. Select a timeframe during which replication is running normally.
4. If you plan to use Secure Sockets Layer (SSL) as the communication protocol
for replication processing, ensure that SSL is enabled.
After you determine the bytes-per-hour value for each server, you can choose a
method to use for the initial replication.
To see how your network manages more workload during replication, complete
the following tasks:
1. Increase the value of the MAXSESSIONS parameter by 10 on the REPLICATE NODE
command and run the test again. Increasing the number of replication sessions
transfers more data concurrently during replication.
2. Alternatively, if you determine that 10 replication sessions (the default
MAXSESSIONS value) cause your network to degrade below acceptable levels,
decrease the value of the MAXSESSIONS parameter.
3. Repeat the process, and adjust the value of the MAXSESSIONS parameter to
determine optimal data transfer capability.
To determine the replication state of a data type in a file space, issue the QUERY
FILESPACE command with the FORMAT parameter set to DETAILED.
Restriction: You cannot disable or enable replication for an entire file space. You
can only disable and enable replication of a data type in a file space.
To determine the replication state of a node, issue the QUERY NODE command.
v To disable replication for a node, issue the UPDATE NODE command and specify
REPLSTATE=DISABLED. For example, to disable replication for NODE1, issue the
following command:
update node node1 replstate=disabled
v To enable replication for a node, issue the UPDATE NODE command and specify
REPLSTATE=ENABLED. For example, to enable replication for NODE1, issue the
following command:
update node node1 replstate=enabled
Remember: If you disable replication for a client node while data that belongs to
the node is being replicated, the replication process is not affected. Replication of
the data continues until all the data that belongs to the client node is replicated.
However, replication for the client node will be skipped the next time that
replication runs.
Related concepts:
“Replication state” on page 994
Disabling outbound or inbound sessions can be useful if, for example, you have a
planned network outage that will affect communication between source and target
replication servers. Disabling and enabling sessions affects not only node
replication operations but also certain other types of operations.
To display the status and direction of sessions for a particular server, issue the
QUERY STATUS command.
Remember:
v When you disable sessions for a particular server, you disable the following
types of sessions in addition to replication:
– Server-to-server event logging
– Enterprise management
– Server registration
– LAN-free sessions between storage agents and the Tivoli Storage Manager
server
– Data storage using virtual volumes
v If you disable only outbound sessions on a source replication server, data that
belongs to client nodes that store data on the source server is not replicated.
However, inbound sessions to the target server can occur.
If a server is the target for multiple source replication servers and you disable
outbound sessions on a single source server, the target replication server
continues to receive replicated data from the other source replication servers.
When you disable outbound node replication processing, you prevent new
replication processes from starting on a source replication server. Enabling
outbound node replication processing is required after a database restore.
Restriction: When you restore the Tivoli Storage Manager database, replication is
automatically disabled. Disabling replication prevents the server from deleting
copies of data on the target replication server that are not referenced by the
restored database. After a database restore, you must re-enable replication.
To display the status of replication processing for a particular server, issue the
QUERY STATUS command.
Issue the following commands on the source replication server to disable and
enable replication processing:
v To disable replication, issue the DISABLE REPLICATION command.
v To enable replication, issue the ENABLE REPLICATION command.
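For example, to disable replication processing and then enable it again:
disable replication
enable replication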
Disabling a replication rule can be useful if, for example, you replicate groups of
normal-priority and high-priority client nodes on different schedules. For example,
suppose that the data that belongs to some client nodes is assigned the
ALL_DATA_HIGH_PRIORITY replication rule. The data that belongs to other
client nodes is assigned the ALL_DATA replication rule. The client nodes are
separated into groups, in which some of the nodes in each group have
high-priority data and other nodes in the group have normal-priority data.
You schedule replication for each group to take place at different times. However, a
problem occurs, and replication processes take longer than expected to complete.
As a result, the high-priority data that belongs to client nodes in groups that are
scheduled late in the replication cycle is not being replicated.
To replicate the high-priority data as soon as possible, you can disable the
ALL_DATA rule and rerun replication. When you rerun replication, only the client
node data that is assigned the ALL_DATA_HIGH_PRIORITY rule is replicated.
To disable and enable replication rules, complete one of the following steps:
v To disable a replication rule, issue the UPDATE REPLRULE command and specify
STATE=DISABLED. For example, to disable the replication rule
ACTIVE_DATA_HIGH_PRIORITY, issue the following command:
update replrule active_data_high_priority state=disabled
v To enable a replication rule, issue the UPDATE REPLRULE command and specify
STATE=ENABLED. For example, to enable the replication rule
ACTIVE_DATA_HIGH_PRIORITY, issue the following command:
update replrule active_data_high_priority state=enabled
Related concepts:
“Replication state” on page 994
To prevent replication of a data type and purge the data from the file space on the
target replication server, issue the UPDATE FILESPACE command and specify
REPLSTATE=PURGEDATA. For example, to prevent replication of backup data in file
space /a on NODE1 and delete the backup data in file space /a on the target
replication server, issue the following command:
update filespace node1 /a datatype=backup replstate=purgedata
Data is purged the next time that replication runs for the file space. After data is
purged, the replication rule for the specified data type is set to DEFAULT.
Replication for the data type is disabled.
Disabling replication prevents the Tivoli Storage Manager server from deleting
copies of data on the target replication server that are not referenced by the
restored database. Before re-enabling replication, determine whether copies of data
that are on the target replication server are needed. If they are, complete the steps
described in the following example. In the example, the name of the source
replication server is PRODSRV. DRSRV is the name of the target replication server.
NODE1 is a client node with replicated data on PRODSRV and DRSRV.
Restriction: You cannot use Secure Sockets Layer (SSL) for database restore
operations.
1. Remove NODE1 from replication on PRODSRV and DRSRV by issuing the
REMOVE REPLNODE command:
remove replnode node1
2. Update the NODE1 definitions on PRODSRV and DRSRV. When replication occurs,
DRSRV sends the data to PRODSRV that was lost because of the database
restore.
a. On DRSRV, issue the UPDATE NODE command and specify the replication
mode SYNCSEND:
update node node1 replstate=enabled replmode=syncsend
b. On PRODSRV, issue the UPDATE NODE command and specify the replication
mode SYNCRECEIVE:
update node node1 replstate=enabled replmode=syncreceive
3. On DRSRV, set the replication rules to match the rules on PRODSRV. For
example, if only archive data was being replicated from PRODSRV to DRSRV,
set the rules on DRSRV to replicate only archive data from DRSRV to
PRODSRV. Backup and space-managed data will not be replicated to
PRODSRV.
To set rules, you can issue the following commands:
v UPDATE FILESPACE
v UPDATE NODE
v SET ARREPLRULEDEFAULT
v SET BKREPLRULEDEFAULT
v SET SPREPLRULEDEFAULT
4. On DRSRV, issue the SET REPLSERVER command to set PRODSRV as the target
replication server:
set replserver prodsrv
5. On DRSRV, issue the REPLICATE NODE command to replicate data belonging to
NODE1:
replicate node node1
The original replication configuration is restored. PRODSRV has all the data that
was lost because of the database restore.
Remember: In step 4 on page 1045, you set PRODSRV as the target replication
server for DRSRV. If, in your original configuration, you were replicating data from
DRSRV to another server, you must reset the target replication server on DRSRV.
For example, if you were replicating data from DRSRV to BKUPDRSRV, issue the
following command on DRSRV:
set replserver bkupdrsrv
Important: You cannot display information about running replication processes for
client nodes that are being converted from import and export operations to
replication operations. The data synchronization process might run for a long time,
but it occurs only once for a client node that is being converted.
The default record-retention period for completed processes is 30 days. To display
the retention period, issue the QUERY STATUS command and check the value in the
Replication Record Retention Period field.
The record for a running process is updated only after a group of files is processed
and committed. A file group consists of 2,000 files or 2 GB of data, whichever is
smaller. For example, if a single file is 450 GB, the record is not updated for a
relatively long time. If you notice that the number of files not yet replicated for a
running process is not decreasing fast enough, network bandwidth or time might
be insufficient to replicate the amount of data. Take one of the following actions:
v Provide more time for replication.
v Decrease the amount of data to replicate.
v Create more parallel data-transmission sessions between the source and target
replication servers by increasing the value of the MAXSESSIONS parameter.
Increase the value of the MAXSESSIONS parameter only if network bandwidth and
processor resources for the source and target replication servers are sufficient.
The server activity log contains messages with the following information:
v The nodes that were enabled or disabled for replication
v The number of files that were eligible to be replicated compared to the number
of those files that were already stored on the target server
v The number of files that were successfully replicated and the number of files
that were missed
v The number of files on the target server that were deleted
To display the number of files stored on source and target replication servers, issue
the QUERY REPLNODE command. You can issue the command on a source or a target
replication server.
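For example, to display the file counts for NODE1 (an illustrative node name):
query replnode node1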
The information in the output for QUERY REPLNODE includes files that are stored at
the time the command is issued. If a replication process is running, the information
does not include files that are waiting to be transferred. Information is reported by
data type. For example, you can determine the number of backup files that belong
to a client node that are stored on the source and the target replication servers.
In the output, check the values in the fields that represent bytes replicated and
bytes transferred for each data type:
v Replicated bytes are bytes that were replicated to the target replication server. If
a file was stored in a deduplicated storage pool, the number of bytes in the
stored file might be less than the number of bytes in the original file. The value
in this field represents the number of physical bytes in the original file.
v Transferred bytes represent the number of bytes that were sent to the target
replication server. For files stored in a deduplicated storage pool, the value in
this field includes the number of bytes in the original file before duplicate
extents were removed. If duplicate extents were already on the target replication
server, the number of bytes in the original file is more than the number of bytes
transferred.
Related concepts:
“Replication of deduplicated data” on page 998
“Active log mirror” on page 686
Related tasks:
Part 6, “Protecting the server,” on page 905
To display the retention period for replication records, issue the QUERY STATUS
command on the source replication server.
To set the retention period for replication records, issue the SET REPLRETENTION
command.
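For example, to retain replication records for 60 days (an illustrative value):
set replretention 60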
Replication records that exceed the retention period are deleted from the database
by Tivoli Storage Manager during automatic inventory-expiration processing. As a
result, the amount of time that replication records are retained can exceed the
specified retention period.
If a replication process runs longer than the retention period, the record of the
process is not deleted until the process ends, the retention period passes, and
expiration runs.
If any schedules were defined on the source replication server, you can redefine
them on the target replication server. Client node data on the target replication
server is now managed by policies on the target replication server. For example,
file expiration and deletion are managed by the target replication server.
Before you begin this procedure, delete any administrative schedules on source
replication servers that issue the REPLICATE NODE command for the client nodes that
are included in the configuration.
To verify that the target replication server was removed, issue the QUERY STATUS
command on the source replication server. If the target replication server was
removed, the field Target Replication Server is blank.
Tip: If you do not want to keep replicated node data on the target replication
server, you can delete it.
To recover from a disaster, you must know the location of your offsite recovery
media. DRM helps you to determine which volumes to move offsite and back
onsite and track the location of the volumes.
| You can use complementary technologies to protect the Tivoli Storage Manager
| server and to provide an alternative to disaster recovery. For example, you can use
| DB2 high availability disaster recovery (HADR) to replicate the Tivoli Storage
| Manager database, or you can use device-to-device replication.
Before you use DRM, familiarize yourself with Chapter 34, “Protecting and
recovering the server infrastructure and client data,” on page 941.
| Note: Unless otherwise noted, you need system privilege class to perform DRM
| tasks.
Related reference:
“Disaster recovery manager checklist” on page 1091
“The disaster recovery plan file” on page 1096
The following table describes how to set defaults for the disaster recovery plan file.
Table 90. Defaults for the disaster recovery plan file
Process Default
Primary storage pools to be processed
When the recovery plan file is generated, you can limit processing to specified pools.
The recovery plan file will not include recovery information and commands for
storage pools with a data format of NETAPPDUMP.
For example, to specify that only the primary storage pools named PRIM1 and PRIM2
are to be processed, enter:
set drmprimstgpool prim1,prim2
Note: To remove all previously specified primary storage pool names and thus select
all primary storage pools for processing, specify a null string ("") in SET
DRMPRIMSTGPOOL.
To override the default: Specify primary storage pool names in the PREPARE
command
Copy storage pools to be processed
For example, to specify that only the copy storage pools named COPY1 and COPY2
are to be processed, enter:
set drmcopystgpool copy1,copy2
To remove any specified copy storage pool names, and thus select all copy storage
pools, specify a null string ("") in SET DRMCOPYSTGPOOL. If you specify both
primary storage pools (using the SET DRMPRIMSTGPOOL command) and copy
storage pools (using the SET DRMCOPYSTGPOOL command), the specified copy
storage pools should be those used to back up the specified primary storage pools.
To override the default: Specify copy storage pool names in the PREPARE command
Active-data pools to be processed
When the recovery plan file is generated, you can limit processing to specified pools.
The default at installation: None
For example, to specify that only the active-data pools named ACTIVEPOOL1 and
ACTIVEPOOL2 are to be processed, enter:
set drmactivedatastgpool activepool1,activepool2
To remove any specified active-data pool names, specify a null string ("") in SET
DRMACTIVEDATASTGPOOL.
Active-data pool volumes in MOUNTABLE state are processed only if you specify the
active-data pools using the SET DRMACTIVEDATASTGPOOL command or the
ACTIVEDATASTGPOOL parameter on the MOVE DRMEDIA, QUERY DRMEDIA,
and PREPARE commands. Processing of active-data pool volumes in MOUNTABLE
state is different than the processing of copy storage pool volumes in MOUNTABLE
state. All MOUNTABLE copy storage pool volumes are processed regardless of whether
you specify copy storage pools with either the SET DRMCOPYSTGPOOL command
or the COPYSTGPOOL parameter.
To override the default: Specify active-data pool names using the MOVE DRMEDIA,
QUERY DRMEDIA, or PREPARE command.
Prefix for recovery instructions
The default at installation: For a description of how DRM determines the default
prefix, see the INSTRPREFIX parameter of the PREPARE command section in the
Administrator's Reference or enter HELP PREPARE from administrative client
command line.
The disaster recovery plan file will include, for example, the following file:
c:\Program Files\Tivoli\TSM\server2\recinstr\
rpp.RECOVERY.INSTRUCTIONS.GENERAL
To override the default: The INSTRPREFIX parameter with the PREPARE command
Prefix for the recovery plan file
You can specify a prefix to the path name of the recovery plan file. DRM uses this
prefix to identify the location of the recovery plan file and to generate the macros and
script file names included in the RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE
and RECOVERY.SCRIPT.NORMAL.MODE stanzas.
The default at installation: For a description of how DRM determines the default
prefix, see the PLANPREFIX parameter of the PREPARE command section in the
Administrator's Reference or enter HELP PREPARE from administrative client
command line.
The disaster recovery plan file name created by PREPARE processing will be in the
following format:
c:\Program Files\Tivoli\TSM\server2\recplans\20000603.013030
To override the default: The PLANPREFIX parameter with the PREPARE command
The default at installation: All copy storage pool volumes in the MOUNTABLE state
For example, to specify that DRM should not read the volume labels, enter:
set drmchecklabel no
Expiration period of a database backup series
A database backup series (full plus incremental and snapshot) is eligible for
expiration if all of these conditions are true:
v The volume state is VAULT or the volume is associated with a device type of
SERVER (for virtual volumes).
v It is not the most recent database backup series.
v The last volume of the series exceeds the expiration value, number of days since
the last backup in the series.
Vault name
For example, to specify the vault name as IRONVAULT, the contact name as J.
SMITH, and the telephone number as 1-555-000-0000, enter:
set drmvaultname "Ironvault, J. Smith, 1-555-000-0000"
Tip: Enter your site-specific information in the stanzas when you first create the
plan file or after you test it.
Enter your instructions in flat files that have the following names:
v prefix.RECOVERY.INSTRUCTIONS.GENERAL
v prefix.RECOVERY.INSTRUCTIONS.OFFSITE
v prefix.RECOVERY.INSTRUCTIONS.INSTALL
v prefix.RECOVERY.INSTRUCTIONS.DATABASE
v prefix.RECOVERY.INSTRUCTIONS.STGPOOL
Note: The files created for the recovery instructions must be physical sequential
files.
RECOVERY.INSTRUCTIONS.GENERAL
Include information such as administrator names, telephone numbers, and
location of passwords. For example:
Recovery Instructions for Tivoli Storage Manager Server ACMESRV on system ZEUS
Joe Smith (wk 002-000-1111 hm 002-003-0000): primary system programmer
Sally Doe (wk 002-000-1112 hm 002-005-0000): primary recovery administrator
Jane Smith (wk 002-000-1113 hm 002-004-0000): responsible manager
Security Considerations:
Joe Smith has the password for the Admin ID ACMEADM. If Joe is unavailable,
you need to either issue SET AUTHENTICATION OFF or define a new
administrative user ID at the replacement Tivoli Storage Manager server console.
RECOVERY.INSTRUCTIONS.OFFSITE
Include information such as the offsite vault location, courier name, and
telephone numbers.
RECOVERY.INSTRUCTIONS.INSTALL
Include installation information, such as the location of the server installation
volumes and the license number. For example:
You will need to reinstall the Tivoli Storage Manager server and administrative
client after installing the Windows operating system.
The install volume for the Tivoli Storage Manager server is INS001. If that is
lost, you will need to contact Copy4You Software, at 1-800-000-0000, and obtain
a new copy. Another possibility is the local IBM Branch office at 555-7777.
A sample file, recinsti.txt, is shipped with DRM. You may want to copy
recinsti.txt into your RECOVERY.INSTRUCTIONS.INSTALL file to
supplement your installation-specific instructions.
- Obtain a workstation with at least the following:
- Install the Tivoli Storage Manager Server and Tivoli Storage Manager
Administrative Client
RECOVERY.INSTRUCTIONS.DATABASE
Include information about how to recover the database and about the
hardware requirements, such as the amount of disk space that is needed. For example:
You will need to find replacement disk space for the server database. We
have an agreement with Joe Replace that in the event of a disaster, he
will provide us with disk space.
RECOVERY.INSTRUCTIONS.STGPOOL
Include instructions for recovering the primary storage pools.
Tip: The plan file that DRM generates is a template that contains information,
including commands for recovering the database, that might not apply to your
replacement systems or to your particular recovery scenario. To modify the plan or
to store additional instructions that you will need during recovery from an actual
disaster, use the RECOVERY.INSTRUCTIONS stanzas. Enter your site-specific
information in the stanzas when you first create the plan file or after you test it.
Use the following procedure to specify information about server and client
machines and to store it in the server database:
1. Specify server machine information by issuing the DEFINE MACHINE
command with ADSMSERVER=YES. For example, to define the machine TSM1 as
the machine that hosts the Tivoli Storage Manager server, with a priority of 1,
enter the following command:
define machine tsm1 adsmserver=yes priority=1
2. Specify the client node location and business priority by issuing the DEFINE
MACHINE command. For example, to define machine MACH22 in building
021, 2nd floor, in room 2929, with a priority of 1, enter:
define machine mach22 building=021 floor=2 room=2929 priority=1
3. Associate one or more client nodes with a machine by issuing the DEFINE
MACHNODEASSOCIATION command. Use this association information to
identify client nodes on machines that were destroyed. You should restore the
file spaces associated with these nodes. For example, to associate node
CAMPBELL with machine MACH22, enter:
define machnodeassociation mach22 campbell
4. To query machine definitions, issue the QUERY MACHINE command. See the
example in “Client recovery scenario” on page 1085.
5. To add machine characteristics and recovery instructions to the database, issue
the INSERT MACHINE command. You must first query the operating system
to identify the characteristics for your client machine.
You can add the information manually or use a Microsoft VBScript command
procedure. A sample program is shipped with DRM.
v Add information manually:
The following partial output is from a query on an AIX client machine.
--1 Host Name: mach22 with 256 MB Memory Card
--- 256 MB Memory Card
---
--4 Operating System: AIX Version 4 Release 3
---
--- Hardware Address: 10:00:5x:a8:6a:46
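For example, to add the first line of those characteristics for MACH22 manually,
you might enter the following command; the sequence number and text are
illustrative:
insert machine mach22 1 char='--1 Host Name: mach22 with 256 MB Memory Card'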
'***************************************************************************
' Get input arguments: MACHINENAME = machinename
'                      INFILE      = inputmachinefilename
'                      OUTFILE     = outputmacrofilename
'***************************************************************************
'***************************************************************************
' Create the TSM macro file.
'***************************************************************************
'***************************************************************************
' Place a TSM command in the TSM macro to delete any existing machine
' characteristics for this machine from the TSM server database.
'***************************************************************************
'***************************************************************************
' Read each line from the input machine characteristics file, add the TSM
' command to insert the line of machine characteristics into the TSM server
' database, and write the result to the output TSM macro.
'***************************************************************************
SEQUENCE = 1
Do Until fi.AtEndOfStream
   INLINE = fi.ReadLine
   fo.WriteLine "insert machine " & MACHINENAME & " " & SEQUENCE & _
      " char='" & INLINE & "'"
   SEQUENCE = SEQUENCE + 1
Loop
'***************************************************************************
' Close the files.
'***************************************************************************
fo.Close
fi.Close
You can define media that contain softcopy manuals that you would need during
recovery. For example, to define a CD-ROM containing the AIX 5.1 manuals that
are on volume CD0001, enter:
define recoverymedia aix51manuals type=other volumes=cd0001
description="AIX 5.1 Bookshelf"
For details about the recovery plan file, see “The disaster recovery plan file” on
page 1096.
DRM creates one copy of the disaster recovery plan file each time you issue the
PREPARE command. You should create multiple copies of the plan for safekeeping.
For example, keep copies in print, on CD, on disk space that is located offsite, or
on a remote server.
Before you create a disaster recovery plan, back up your storage pools and then
back up the database. See “Backing up primary storage pools” on page 954 and “Backing
up the server database” on page 942 for details about these procedures.
If you manually send backup media offsite, see “Moving copy storage pool and
active-data pool volumes offsite” on page 1074. If you use virtual volumes, see
“Using virtual volumes to store data on another server” on page 763.
When your backups are both offsite and marked offsite, you can create a disaster
recovery plan.
You can use the Tivoli Storage Manager scheduler to periodically run the
PREPARE command (see Chapter 21, “Automating server operations,” on page
659).
Tips:
v The plan file that DRM generates is a template that contains information,
including commands for recovering the database, that might not apply to your
replacement systems or to your particular recovery scenario. To modify the plan
or to store additional instructions that you will need during recovery from an
actual disaster, use the RECOVERY.INSTRUCTIONS stanzas. Enter your
site-specific information in the stanzas when you first create the plan file or after
you test it.
v DRM creates a plan that assumes that the latest database full plus incremental
series would be used to restore the database. However, you may want to use
DBSNAPSHOT backups for disaster recovery and retain your full plus
incremental backup series on site to recover from possible availability problems.
In this case, you must specify the use of DBSNAPSHOT backups on the
PREPARE command (SOURCE=DBSNAPSHOT). For example:
prepare source=dbsnapshot
For example, to store the recovery plan file locally in the c:\Program
Files\Tivoli\TSM\server2\recplans\ directory, enter:
prepare planprefix=c:\Program Files\Tivoli\TSM\server2\recplans\
Recovery plan files that are stored locally are not automatically expired. You
should periodically delete down-level recovery plan files manually. DRM appends
to the file name the date and time (yyyymmdd.hhmmss). For example:
c:\Program Files\Tivoli\TSM\server2\recplans\20000925.120532
Set up the source and target servers and define a device class with a device type
of SERVER (see “Setting up source and target servers for virtual volumes” on page
765 for details). For example, assume a device class named TARGETCLASS is
defined on the source server where you create the recovery plan file. Then to
create the plan file, enter:
prepare devclass=targetclass
Each instance of the server has a unique set of files. For example, you might see
the following in this instance-specific directory:
v Server options file, disk file, database and log paths, and storage pool volumes:
c:\Program Files\Tivoli\TSM\server2\dsmserv.opt
c:\Program Files\Tivoli\TSM\server2\dsmserv.dsk
c:\Program Files\Tivoli\TSM\server2\db1
c:\Program Files\Tivoli\TSM\server2\activlog
c:\Program Files\Tivoli\TSM\server2\archlog
c:\Program Files\Tivoli\TSM\server2\data1.dsm
The database and log paths, and storage pool volumes could also be in a different
directory. For example, you might see:
c:\Program Files\Tivoli\TSM\server2\stg\db1
c:\Program Files\Tivoli\TSM\server2\stg\activelog
c:\Program Files\Tivoli\TSM\server2\stg\data1.dsm
When the disaster recovery plan is created, information about the server
environment is used in the stanzas within the plan file. This environmental
information includes the location of dsmserv.exe, the location of the disk formatting
utility, the instance-specific directory, the directories for storage pool volumes, and so
on. During a recovery, it is assumed that the same server environment exists.
Additionally, the plan file itself resides either in a directory that you specified or
in the default directory, which is the instance-specific directory.
The disaster recovery plan file prefix that you specified (or the instance-specific
directory, if no prefix was specified) is also used in the stanzas
within the plan file. During a recovery, when the plan file has been split into
individual files, it is assumed that these individual files will reside in this same
directory.
To summarize, the environment for a recovery using the disaster recovery plan file
is assumed to be the same as the original environment which includes:
v The directory structure and location of the server executable and enrollment
certificates (for licensing)
v The directory structure and location of the administrative command line client
v The directory structure for server instance-specific files
v The directory structure for the database path, active log, and archive log
directories and storage pool volumes.
v The directory structure and the files created when the plan file was split into
multiple files, such as the following based on the earlier plan file example (the
following is not the entire output):
c:\Program Files\Tivoli\TSM\server2\prepare\COPYSTGPOOL.VOLUMES.AVAILABLE.MAC
c:\Program Files\Tivoli\TSM\server2\prepare\COPYSTGPOOL.VOLUMES.DESTROYED.MAC
c:\Program Files\Tivoli\TSM\server2\prepare\ACTIVEDATASTGPOOL.VOLUMES.AVAILABLE.MAC
c:\Program Files\Tivoli\TSM\server2\prepare\ACTIVEDATASTGPOOL.VOLUMES.DESTROYED.MAC
If the recovery environment is not the same, then you must edit the plan file to
account for the changes in the environment.
To help understand where these various directories and expected locations for
executables are used within the plan file, see “Example disaster recovery plan file”
on page 1102 and you will see the following usage:
Usage                                    Directory
Server executable                        c:\Program Files\Tivoli\TSM\server
Enrollment certificates (licensing)      c:\Program Files\Tivoli\TSM\server
Administrative command line client       c:\Program Files\Tivoli\TSM\saclient
Disk formatting utility                  c:\Program Files\Tivoli\TSM\console
Instance-specific files                  c:\Program Files\Tivoli\TSM\server2
Storage pool volumes                     c:\Program Files\Tivoli\TSM\server2\stg
Plan file location                       c:\Program Files\Tivoli\TSM\server2\prepare
Individual files split out from plan     c:\Program Files\Tivoli\TSM\server2\prepare
For an example of the contents of a recovery plan file, see “The disaster recovery
plan file” on page 1096. You cannot issue the commands shown below from a
server console. An output delay can occur if the plan file is located on tape.
v From the source server: Issue the following command for a recovery plan file
created on September 1, 2000 at 4:39 a.m. with the device class TARGETCLASS:
query rpfcontent marketing.20000901.043900 devclass=targetclass
v From the target server: Issue the following command for a recovery plan file
created on August 31, 2000 at 4:50 a.m. on a source server named MARKETING
whose node name is BRANCH8:
query rpfcontent marketing.20000831.045000 nodename=branch8
To display a list of recovery plan files, use the QUERY RPFILE command. See
“Displaying information about recovery plan files” on page 1070 for more
information.
All recovery plan files that meet the criteria are eligible for expiration if both of the
following conditions exist:
v The last recovery plan file of the series is over 90 days old.
v The recovery plan file is not associated with the most recent backup series. A
backup series consists of a full database backup and all incremental backups that
apply to that full backup. Another series begins with the next full backup of the
database.
Expiration applies to plan files based on both full plus incremental and snapshot
database backups. Note, however, that expiration does not apply to plan files
stored locally. See “Storing the disaster recovery plan locally” on page 1067.
When the records are deleted from the source server and the grace period is
reached, the objects are deleted from the target server. The record for the latest
recovery plan file is not deleted.
To limit the operation to recovery plan files that were created assuming database
snapshot backups, specify TYPE=RPFSNAPSHOT.
1. Move new backup media offsite and update the database with their locations.
See “Moving copy storage pool and active-data pool volumes offsite” on page
1074 for details.
2. Return expired or reclaimed backup media onsite and update the database with
their locations. See “Moving copy storage pool and active-data pool volumes
on-site” on page 1076 for details.
3. Offsite recovery media management does not process virtual volumes. To
display all virtual copy storage pool, active-data pool, and database backup
volumes that have their backup objects on the remote target server, issue the
QUERY DRMEDIA command. For example, enter the following command.
query drmedia * wherestate=remote
Offsite recovery media management does not move or display any two-sided
volumes that have a REMOVABLEFILE device type.
The disaster recovery plan includes the location of copy storage pool volumes and
active-data pool volumes. The plan can provide a list of offsite volumes required to
restore a server.
The following diagram shows the typical life cycle of the recovery media:
Storage Hierarchy
COURIER
Backup Active-
storage data
pool pool
NOTMOUNTABLE
MOUNTABLE
KUPEXPIREDAYS
VAULT
EDELAY
Backup
database
US
Private Scratch
RE
BAC
r/w
DB
M
DR
Scratch
VAULTRETRIEVE
ONSITERETRIEVE
COURIERRETRIEVE
DRM assigns the following states to volumes. The location of a volume is known
at each state.
MOUNTABLE
The volume contains valid data, and Tivoli Storage Manager can access it.
NOTMOUNTABLE
The volume contains valid data and is onsite, but Tivoli Storage Manager
cannot access it.
COURIER
The volume contains valid data and is in transit to the vault.
VAULT
The volume contains valid data and is at the vault.
VAULTRETRIEVE
The volume, which is located at the offsite vault, no longer contains valid
data and is to be returned to the site. For more information about
reclamation of offsite copy storage pool volumes and active-data pool
volumes, see “Reclamation of off-site volumes” on page 397. For
information on expiration of database backup volumes, see step 1 on page
1076.
Complete the following steps to identify the database backup, copy storage pool,
and active-data pool volumes and move them offsite:
1. Identify the copy storage pool, active-data pool, and database backup volumes
to be moved offsite. For example, issue the following command:
query drmedia * wherestate=mountable
DRM displays information similar to the following output:
Volume Name State Last Update Automated
Date/Time LibName
--------------- ---------------- ------------------- -----------------
TPBK05 Mountable 01/01/2000 12:00:31 LIBRARY
TPBK99 Mountable 01/01/2000 12:00:32 LIBRARY
TPBK06 Mountable 01/01/2000 12:01:03 LIBRARY
| Restriction: Do not run the MOVE DRMEDIA and BACKUP STGPOOL commands
| concurrently. Ensure that the storage pool backup processes are complete before
| you issue the MOVE DRMEDIA command.
2. Indicate the movement of the volumes by issuing the following command:
move drmedia * wherestate=mountable
For all volumes in the MOUNTABLE state, DRM does the following:
v Updates the volume state to NOTMOUNTABLE and the volume location
according to the SET DRMNOTMOUNTABLENAME command. If this command is not
issued, the default location is NOTMOUNTABLE.
v For a copy storage pool volume or active-data pool volume, updates the
access mode to unavailable.
v For a volume in an automated library, checks the volume out of the library.
a. During checkout processing, SCSI libraries request operator intervention. To
bypass these requests and eject the cartridges from the library, first issue the
following command:
move drmedia * wherestate=mountable remove=no
b. Access a list of the volumes by issuing the following command:
query drmedia wherestate=notmountable
From this list identify and remove the cartridges (volumes) from the library.
3. Send the volumes to the vault and record that they were given to the courier by
issuing the following command:
move drmedia * wherestate=notmountable
For all volumes in the NOTMOUNTABLE state, DRM updates the volume state
to COURIER and the volume location according to the SET
DRMCOURIERNAME command. If the SET command is not yet issued, the default
location is COURIER. For more information, see “Specifying defaults for offsite
recovery media management” on page 1057.
4. When the vault location confirms receipt of the volumes, issue the MOVE
DRMEDIA command in the COURIER state. For example:
move drmedia * wherestate=courier
For all volumes in the COURIER state, DRM updates the volume state to
VAULT and the volume location according to the SET DRMVAULTNAME command.
If the SET command is not yet issued, the default location is VAULT. For more
information, see “Specifying defaults for offsite recovery media management”
on page 1057.
5. Display a list of volumes that contain valid data at the vault. Issue the
following command:
query drmedia wherestate=vault
6. If you do not want to step through all the states, you can use the TOSTATE
parameter on the MOVE DRMEDIA command to specify the destination state. For
example, to change the volumes from NOTMOUNTABLE state to VAULT state,
issue the following command:
move drmedia * wherestate=notmountable tostate=vault
For all volumes in the NOTMOUNTABLE state, DRM updates the volume state
to VAULT and the volume location according to the SET DRMVAULTNAME
command. If the SET command is not yet issued, the default location is VAULT.
See “Preparing for disaster recovery” on page 1079 for an example that
demonstrates sending server backup volumes offsite using MOVE DRMEDIA and QUERY
DRMEDIA commands.
To ensure that the database can be returned to an earlier level and database
references to files in the copy storage pool or active-data pool are still valid,
specify the same value for the REUSEDELAY parameter in your copy storage
pool and active-data pool definitions. If copy storage pools or active-data pools
managed by DRM have different REUSEDELAY values, set the
DRMDBBACKUPEXPIREDAYS value to the highest REUSEDELAY value.
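For example, if the highest REUSEDELAY value among the copy storage pools and
active-data pools that DRM manages is 60 days (an assumed value), you could set the
expiration period as follows:
set drmdbbackupexpiredays 60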
A database backup volume is considered eligible for expiration if all of the
following conditions are true:
v The age of the last volume of the series has exceeded the expiration value.
This value is the number of days since the last backup in the series. At
installation, the expiration value is 60 days. To override this value, issue the
SET DRMDBBACKUPEXPIREDAYS command.
v For volumes that are not virtual volumes, all volumes in the series are in the
VAULT state.
v The volume is not part of the most recent database backup series.
Database backup volumes that are virtual volumes are removed during
expiration processing. This processing is started manually by issuing the
EXPIRE INVENTORY command or automatically through the EXPINTERVAL
option setting specified in the server options file.
2. Move a copy storage pool volume or an active-data pool volume on-site for
reuse or disposal. A copy storage pool volume or an active-data pool volume
can be moved on-site if it has been EMPTY for at least the number of days
specified with the REUSEDELAY parameter on the DEFINE STGPOOL
command. A database backup volume can be moved on-site if the database
backup series is EXPIRED according to the rules outlined in step 1. To
determine which volumes to retrieve, issue the following command:
query drmedia * wherestate=vaultretrieve
The server dynamically determines which volumes can be moved back on-site.
When you issue QUERY DRMEDIA WHERESTATE=VAULTRETRIEVE, the
field Last Update Date/Time in the output will contain the date and time that
the state of the volume was moved to VAULT, not VAULTRETRIEVE. Because
the server makes the VAULTRETRIEVE determination dynamically, issue
QUERY DRMEDIA WHERESTATE=VAULTRETRIEVE without the
BEGINDATE, ENDDATE, BEGINTIME or ENDTIME parameters. Doing so will
ensure that you identify all volumes that are in the VAULTRETRIEVE state.
3. After the vault location acknowledges that the volumes have been given to the
courier, issue the MOVE DRMEDIA command.
move drmedia * wherestate=vaultretrieve
The server does the following for all volumes in the VAULTRETRIEVE state:
v Change the volume state to COURIERRETRIEVE.
4. When the courier delivers the volumes from the vault, acknowledge receipt and
issue the following command:
move drmedia * wherestate=courierretrieve
The server does the following for all volumes in the COURIERRETRIEVE state:
v Moves the volumes on-site where they can be reused or disposed of.
v Deletes the database backup volumes from the volume history table.
v For scratch copy storage pool volumes or active-data pool volumes, deletes
the record in the database. For private copy storage pool volumes or
active-data pool volumes, updates the access to read/write.
If IBM Tivoli Storage Manager is set up to use Secure Sockets Layer (SSL) for
client/server authentication, a digital certificate file, cert.kdb, is created as part of
the process. This file includes the server's public key, which allows the client to
encrypt data. The digital certificate file cannot be stored in the server database
because the Global Security Kit (GSKit) requires a separate file in a certain format.
1. Keep backup copies of the cert.kdb and cert256.arm files.
2. Regenerate a new certificate file, if both the original files and any copies are
lost or corrupted. For details about this procedure, see “Troubleshooting the
certificate key database” on page 912.
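A minimal sketch of step 1 from a Windows command prompt; the instance directory and
the backup destination shown are assumptions and must be adjusted for your installation:
copy "C:\Program Files\Tivoli\TSM\server1\cert.kdb" "D:\certbackup\cert.kdb"
copy "C:\Program Files\Tivoli\TSM\server1\cert256.arm" "D:\certbackup\cert256.arm"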
| Ensure that you set up the DRM and perform the daily operations to protect the
| database, data, and storage pools.
Setup
| 1. License DRM by issuing the REGISTER LICENSE command.
| 2. Ensure that the device configuration and volume history files exist.
| 3. Back up the storage pools by issuing the BACKUP STGPOOL command.
| 4. Copy active data to active-data pools by using the COPY ACTIVEDATA
| command.
| Restriction: Ensure that the BACKUP STGPOOL command and the BACKUP
| DB command are complete before you issue the MOVE DRMEDIA
| command.
6. Send the backup volumes and disaster recovery plan file to the vault.
7. Generate the disaster recovery plan.
Day 2
1. Back up client files
2. Back up active and inactive data that is in the primary storage pools to
copy storage pools. Copy the active data that is in primary storage
pools to active-data pools.
3. Back up the database (for example, a database snapshot backup).
| Restriction: Ensure that the BACKUP STGPOOL command and the BACKUP
| DB command are complete before you issue the MOVE DRMEDIA
| command.
5. Send the backup volumes and disaster recovery plan file to the vault.
6. Generate the disaster recovery plan.
Day 3
1. Automatic storage pool reclamation processing occurs.
2. Back up client files.
3. Back up the active and inactive data that is in primary storage pools to
copy storage pools. Copy the active data that is in primary storage
pools to active-data pools.
4. Back up the database (for example, a database snapshot backup).
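The following sequence is a minimal sketch of the daily backup and offsite commands
that these steps describe. The pool names BACKUPPOOL, COPYPOOL, and ADP1 and the
device class LTODEV are assumptions; substitute the names that are used at your site:
backup stgpool backuppool copypool
copy activedata backuppool adp1
backup db devclass=ltodev type=dbsnapshot
move drmedia * wherestate=mountable
prepare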
| Tip: You can maintain and schedule custom maintenance scripts by using the
| Administration Center.
1. Record the following information in the RECOVERY.INSTRUCTIONS stanza
source files:
v Software license numbers
v Sources of replacement hardware
v Any recovery steps specific to your installation
2. Store the following information in the database:
| v Server and client node machine information (DEFINE MACHINE, DEFINE
| MACHNODEASSOCIATION, and INSERT MACHINE commands)
| v The location of the boot recovery media (DEFINE RECOVERYMEDIA command)
3. Schedule automatic nightly backups to occur in the following order:
| Restriction: Ensure that the BACKUP STGPOOL command and the BACKUP DB
| command are complete before you issue the MOVE DRMEDIA command.
b. Send the volumes offsite and record that the volumes were given to the
courier:
move drmedia * wherestate=notmountable
| 5. Create a recovery plan:
prepare
6. Give a copy of the recovery plan file to the courier.
7. Create a list of tapes that contain data that is no longer valid and that should
be returned to the site:
query drmedia * wherestate=vaultretrieve
8. Give the courier the database backup tapes, storage pool backup tapes,
active-data pool tapes, the recovery plan file, and the list of volumes to be
returned from the vault.
9. The courier gives you any tapes that were on the previous day's return from
the vault list.
Update the state of these tapes and check them into the library:
move drmedia * wherestate=courierretrieve cmdf=c:\drm\checkin.mac
cmd="checkin libvol libauto &vol status=scratch"
| The volume records for the tapes that were in the COURIERRETRIEVE state
| are deleted from the database. The MOVE DRMEDIA command also generates
| the CHECKIN LIBVOL command for each tape that is processed in the file
| c:\drm\checkin.mac. For example:
checkin libvol libauto tape01 status=scratch
checkin libvol libauto tape02 status=scratch
...
| Restriction: Ensure that the BACKUP STGPOOL command and the BACKUP DB
| command complete before you issue other commands, for example, the MOVE
| DRMEDIA command.
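The nightly backups in step 3 can be run as administrative command schedules. The
following is a minimal sketch, again assuming the pool names BACKUPPOOL, COPYPOOL,
and ADP1 and the device class LTODEV; the schedule names and start times are also
assumptions:
define schedule stg_backup type=administrative cmd="backup stgpool backuppool copypool" active=yes starttime=20:00 period=1 perunits=days
define schedule act_copy type=administrative cmd="copy activedata backuppool adp1" active=yes starttime=21:00 period=1 perunits=days
define schedule db_backup type=administrative cmd="backup db devclass=ltodev type=full" active=yes starttime=22:00 period=1 perunits=days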
Related tasks:
“Creating a custom maintenance script” on page 668
Recovering the Server: Here are guidelines for recovering your server:
1. Obtain the latest disaster recovery plan file.
2. Break out the file to view, update, print, or run as macros or scripts (for
example, batch programs or batch files).
3. Obtain the copy storage pool volumes and active-data pool volumes from the
vault.
4. Locate a suitable replacement machine.
5. Restore the Windows operating system and Tivoli Storage Manager to your
replacement machine. When using the Tivoli Storage Manager device driver
(ADSMSCSI), you will also need to start ADSMSCSI.
6. Review the RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE and
RECOVERY.SCRIPT.NORMAL.MODE scripts because they are important for
restoring the server to a point where clients can be recovered (see “Disaster
recovery mode stanza” on page 1108).
Restriction: When running the disaster recovery script or the commands that the
script contains, the determination must be made whether to run as root or as the
DB2 instance user ID.
1. Review the recovery steps described in the
RECOVERY.INSTRUCTIONS.GENERAL stanza of the plan.
2. Request the server backup tapes from the offsite vault.
3. Break out the recovery plan file stanzas into multiple files (see “Breaking out a
disaster recovery plan file” on page 1096.) These files can be viewed, updated,
printed, or run as Tivoli Storage Manager macros or scripts.
4. Print the RECOVERY.VOLUMES.REQUIRED file. Give the printout to the
courier to retrieve the copy storage pool volumes and active-data pool
volumes.
5. Find a replacement server. The RECOVERY.DEVICES.REQUIRED stanza
specifies the device type that is needed to read the backups. The
SERVER.REQUIREMENTS stanza specifies the disk space required.
Note: When using the Tivoli Storage Manager device driver (ADSMSCSI), you
must start ADSMSCSI.
6. The recovery media names and their locations are specified in the
RECOVERY.INSTRUCTIONS.INSTALL stanza and the
MACHINE.RECOVERY.MEDIA.REQUIRED stanza. Ensure that the
environment is the same as when the disaster recovery plan file was created.
The environment includes:
v The directory structure of the Tivoli Storage Manager server executable and
disk formatting utility
v The directory structure for Tivoli Storage Manager server configuration files
(disk log, volume history file, device configuration file, and server options
file)
v The directory structure and the files created when the disaster recovery plan
file was split into multiple files
7. Restore the operating system, the Tivoli Storage Manager server software, the
Tivoli Storage Manager licenses, and the administrative client on the
replacement hardware.
a. Select "Minimal configuration" from the Tivoli Storage Manager Console. If
the Tivoli Storage Manager server had been running as a service, ensure
that you specify this on the Server Service Logon Parameters panel in the
wizard. Recovery information and media names and locations are specified
in the RECOVERY.INSTRUCTIONS.INSTALL stanza and the
MACHINE.RECOVERY.MEDIA.REQUIRED stanza.
The copy storage pool volumes and active-data pool volumes used in the
recovery already have the correct ORMSTATE.
15. Issue the BACKUP DB command to back up the newly restored database.
16. Issue the following command to check the volumes out of the library:
move drmedia * wherestate=mountable
17. Create a list of the volumes to be given to the courier:
query drmedia * wherestate=notmountable
18. Give the volumes to the courier and issue the following command:
move drmedia * wherestate=notmountable
19. Issue the PREPARE command.
1. Identify which client machines have the highest priority so that restores can
begin using active-data pool volumes.
2. For each machine, issue the following commands:
a. Determine the location of the boot media. For example:
query recoverymedia mksysb1
The server displays the location of the recovery media and related information.
Note: You may also need to audit the library after the database is restored in order
to update the server inventory of the library volumes.
In this example, database backup volume DBBK01 was placed in element 1 of the
automated library. Then a comment is added to the device configuration file to
identify the location of the volume. Tivoli Storage Manager needs this information
to restore the database. Comments that no longer apply at the recovery site
are removed.
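A minimal sketch of such a comment; the library name LIB1 is an assumption, and the
DEFINE LIBRARY, DEFINE DRIVE, and DEFINE PATH statements already in the file are left
unchanged:
/* Database backup volume DBBK01 is in element 1 of library LIB1 */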
For example, if an automated tape library was used originally and cannot be used
at the recovery site, update the device configuration file. Include the DEFINE
LIBRARY and DEFINE DRIVE commands that are needed to define the manual
drive to be used. In this case, you must manually mount the backup volumes.
Note: If you are using an automated library, you may also need to update the
device configuration file to specify the location of the database backup volume.
After you restore the database, modify the device configuration file in the
database. After starting the server, define, update, and delete your library and
drive definitions to match your new configuration.
Note: If you are using an automated library, you may need to use the AUDIT
LIBRARY command to update the server inventory of the library volumes.
The restored server uses copy storage pool volumes to satisfy requests (for
example, from backup/archive clients) and to restore primary storage pool
volumes that were destroyed. If they are available, the server uses active-data
pools to restore critical client data.
After the database is restored, you can handle copy storage pool volumes and
active-data pool volumes at the recovery site in the following ways:
v Mount each volume as requested by Tivoli Storage Manager. If an automated
library is used at the recovery site, check the volumes into the library.
v Check the volumes into an automated library before Tivoli Storage Manager
requests them.
If you are using an automated library, you may also need to audit the library after
the database is restored in order to update the Tivoli Storage Manager inventory of
the volumes in the library.
Locally:
v What is the recovery plan file pathname
prefix?
v How will recovery plan files be made
available at the recovery site?
– Print and store offsite
– Copy stored offsite
– Copy sent/NFS to recovery site
On Another Server:
v What server is to be used as the target
server?
v What is the name of the target server's
device class?
v How long do you want to keep recovery
plan files?
Determine where you want to create the
user-specified recovery instructions
Issue:
v SET DRMDBBACKUPEXPIREDAYS to
define the database backup expiration
v SET DRMPRIMSTGPOOL to specify the
DRM-managed primary storage pools
v SET DRMCOPYSTGPOOL to specify the
DRM-managed copy storage pools
v SET DRMACTIVEDATASTGPOOL to
specify the DRM-managed active-data
pools
v SET DRMPLANVPOSTFIX to specify a
character to be appended to the names of
replacement storage pool volumes
v SET DRMPLANPREFIX to specify the
RPF prefix
v SET DRMINSTRPREFIX to specify the
user instruction file prefix
v SET DRMNOTMOUNTABLENAME to
specify the default location for media to
be sent offsite
v SET DRMCOURIERNAME to specify the
default courier
v SET DRMVAULTNAME to specify the
default vault
v SET DRMCMDFILENAME to specify the
default file name to contain the
commands specified with the CMD
parameter on MOVE and QUERY
DRMEDIA
v SET DRMCHECKLABEL to specify
whether volume labels are verified when
checked out by the MOVE DRMEDIA
command
v SET DRMRPFEXPIREDAYS to specify the
number of days before recovery plan files
expire (when plan files are stored on
another server)
Identify:
v Target disaster recovery server location
v Target server software requirements
v Target server hardware requirements
(storage devices)
v Tivoli Storage Manager administrator
contact
v Courier name and telephone number
v Vault location and contact person
Create:
v Enter the site-specific recovery
instructions data into files created in the
same path/HLQ as specified by SET
DRMINSTRPREFIX
Test disaster recovery manager
Test the installation and customization
v QUERY DRMSTATUS to display the
DRM setup
v Back up the active and inactive data that
is in primary storage pools to copy
storage pools. Copy the active data that
is in primary storage pools to active-data
pools.
v Back up the Tivoli Storage Manager
database
v QUERY DRMEDIA to list the copy
storage pool and active-data pool
volumes
v MOVE DRMEDIA to move offsite
v PREPARE to create the recovery plan file
Examine the recovery plan file created
Test the recovery plan file break out
v VBScript procedure planexpl.vbs
v Locally written procedure
Set up the schedules for automated
functions
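A minimal sketch of the setup commands that this checklist refers to; all of the pool
names, locations, and values shown are assumptions and must be replaced with your own:
set drmdbbackupexpiredays 60
set drmprimstgpool backuppool
set drmcopystgpool copypool
set drmactivedatastgpool adp1
set drmplanprefix c:\drm\plans\
set drminstrprefix c:\drm\instr\
set drmplanvpostfix @
set drmnotmountablename local_vault_staging
set drmcouriername courier1
set drmvaultname ironvault
set drmchecklabel yes
set drmrpfexpiredays 60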
Tip: The plan file that DRM generates is a template that contains information,
including commands for recovering the database, that might not apply to your
replacement systems or to your particular recovery scenario. To modify the plan or
to store additional instructions that you will need during recovery from an actual
disaster, use the RECOVERY.INSTRUCTIONS stanzas. Enter your site-specific
information in the stanzas when you first create the plan file or after you test it.
You can use a Microsoft VBScript command procedure or an editor to break out
the stanzas in the disaster recovery plan file into individual files. A sample
procedure, planexpl.vbs, is shipped with DRM. You can modify the procedure for
your installation. Store a copy of the procedure offsite for recovery.
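For example, the procedure might be started from a Windows command prompt as
follows; the plan file path is the one from the earlier example, and the exact
argument form depends on the argument-parsing section of the script, which is not
shown in full here:
cscript planexpl.vbs "c:\Program Files\Tivoli\TSM\server2\recplans\20000925.120532"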
Dim args
Dim PLANFILE, OUTDIR, OUTFILE
Dim STANZAS
Dim VOLNAMES(100),NbrV,LOGDBVOLS
Dim fso, fi, fo
Dim WORDS
Dim CRLF
Dim RESULTS, RESULTS2
CRLF = Chr(13) & Chr(10)
LOGDBVOLS = False : NbrV = 0
OUTDIR = "" : OUTFILE = ""
RESULTS = "" : RESULTS2 = ""
'*****************************************************************************
'* Get input arguments: PLANFILE=recoveryplanfilename
'*****************************************************************************
'****************************************************************************
' Read a line from the input recovery plan file
'****************************************************************************
ALINE = fi.ReadLine
'****************************************************************************
' Get the first 2 words. We're looking for 'begin'/'end' and a stanza name
'****************************************************************************
RESULTS = RESULTS & "Creating file " & OUTFILE & CRLF
'****************************************************************************
' If the first word is 'end' and this was a stanza that we created a file
' for then close the output file.
'****************************************************************************
'****************************************************************************
' This is the line within the plan file that identifies the plan file prefix.
'****************************************************************************
Elseif OUTDIR = "" And WORD1 = "DRM" And WORD2 = "PLANPREFIX" Then
OUTDIR = THEREST
If Not Right(OUTDIR,1) = "\" Then
OUTDIR = OUTDIR & "."
End If
RESULTS = RESULTS & "set planprefix to " & OUTDIR & CRLF
End If '/* select on first word of input line from the recovery plan file */
fi.close
Tip: The plan file that DRM generates is a template that contains information,
including commands for recovering the database, that might not apply to your
replacement systems or to your particular recovery scenario. To modify the plan or
to store additional instructions that you will need during recovery from an actual
disaster, use the RECOVERY.INSTRUCTIONS stanzas. Enter your site-specific
information in the stanzas when you first create the plan file or after you test it.
Command stanzas
Consist of scripts (for example, batch programs or batch files) and Tivoli
Storage Manager macros. You can view, print, and update these stanzas,
and run them during recovery.
Table 93 lists the recovery plan file stanzas, and indicates what type of
administrative action is required during set up or periodic updates, routine
processing, and disaster recovery. The table also indicates whether the stanza
contains a macro, a script, or a configuration file.
Tip: The plan file that DRM generates is a template that contains information,
including commands for recovering the database, that might not apply to your
replacement systems or to your particular recovery scenario. To modify the plan or
to store additional instructions that you will need during recovery from an actual
disaster, use the RECOVERY.INSTRUCTIONS stanzas. Enter your site-specific
information in the stanzas when you first create the plan file or after you test it.
PLANFILE.DESCRIPTION
begin PLANFILE.DESCRIPTION
end PLANFILE.DESCRIPTION
PLANFILE.TABLE.OF.CONTENTS
begin PLANFILE.TABLE.OF.CONTENTS
PLANFILE.DESCRIPTION
PLANFILE.TABLE.OF.CONTENTS
end PLANFILE.TABLE.OF.CONTENTS
The replacement server must have enough disk space to install the database and
recovery log.
This stanza also identifies the Tivoli Storage Manager installation directory. When
Tivoli Storage Manager is re-installed on the replacement server, specify this
directory on the Setup Type panel during installation. If you specify a different
directory, edit the plan file to account for this change.
Location: E:\tsmdata\DBSpace
Total Space(MB): 285,985
Used Space(MB): 457
Free Space(MB): 285,527
end SERVER.REQUIREMENTS
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
begin RECOVERY.VOLUMES.REQUIRED
Location = dkvault
Device Class = VTL
Volume Name =
003902L4
Location = dkvault
Copy Storage Pool = COPYPOOL
Device Class = VTL
Volume Name =
003900L4
Location = dkvault
Active-data Storage Pool = ADP1
Device Class = VTL
Volume Name =
003901L4
end RECOVERY.VOLUMES.REQUIRED
See “Specifying recovery instructions for your site” on page 1059 for details. In the
following descriptions, prefix represents the prefix portion of the file name. See
“Specifying defaults for the disaster recovery plan file” on page 1054 for details.
RECOVERY.INSTRUCTIONS.GENERAL
Identifies site-specific instructions that the administrator has entered in the file
identified by prefix RECOVERY.INSTRUCTIONS.GENERAL. The instructions
should include the recovery strategy, key contact names, an overview of key
applications backed up by this server, and other relevant recovery instructions.
begin RECOVERY.INSTRUCTIONS.GENERAL
This server contains the backup and archive data for FileRight Company
accounts receivable system. It also is used by various end users in the
finance and materials distribution organizations.
The storage administrator in charge of this server is Jane Doe 004-001-0006.
If a disaster is declared, here is the outline of steps that must be completed.
1. Determine the recovery site. Our alternate recovery site vendor is IBM
BRS in Tampa, Fl, USA 213-000-0007.
2. Get the list of required recovery volumes from this recovery plan file
and contact our offsite vault so that they can start pulling the
volumes for transfer to the recovery site.
3. etc...
end RECOVERY.INSTRUCTIONS.GENERAL
RECOVERY.INSTRUCTIONS.OFFSITE
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.OFFSITE. The instructions should include the
name and location of the offsite vault, and how to contact the vault (for example, a
name and phone number).
begin RECOVERY.INSTRUCTIONS.OFFSITE
end RECOVERY.INSTRUCTIONS.OFFSITE
RECOVERY.INSTRUCTIONS.INSTALL
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.INSTALL. The instructions should include how
to rebuild the base server machine and the location of the system image backup
copies.
end RECOVERY.INSTRUCTIONS.INSTALL
RECOVERY.INSTRUCTIONS.DATABASE
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.DATABASE. The instructions should include
how to prepare for the database recovery. For example, you may enter instructions
on how to initialize or load the backup volumes for an automated library. No
sample of this stanza is provided.
RECOVERY.INSTRUCTIONS.STGPOOL
Contains instructions that the administrator has entered in the file identified by
prefix RECOVERY.INSTRUCTIONS.STGPOOL. The instructions should include the
names of your software applications and the copy storage pool names containing
the backup of these applications. No sample of this stanza is provided.
RECOVERY.VOLUMES.REQUIRED
Provides a list of the database backup, copy storage-pool volumes, and active-data
pool volumes required to recover the server. This list can include both virtual
volumes and nonvirtual volumes. A database backup volume is included if it is
part of the most recent database backup series. A copy storage pool volume or an
active-data pool volume is included if it is not empty and not marked destroyed.
If you are using a nonvirtual volume environment and issuing the MOVE
DRMEDIA command, a blank location field means that the volumes are onsite and
available to the server. This volume list can be used in periodic audits of the
volume inventory of the courier and vault. You can use the list to collect the
required volumes before recovering the server.
For virtual volumes, the location field contains the target server name.
Location = dkvault
Device Class = VTL
Volume Name =
003902L4
Location = dkvault
Copy Storage Pool = COPYPOOL
Device Class = VTL
Volume Name =
003900L4
Location = dkvault
Active-data Storage Pool = ADP1
Device Class = VTL
Volume Name =
003901L4
end RECOVERY.VOLUMES.REQUIRED
RECOVERY.DEVICES.REQUIRED
Provides details about the devices needed to read the backup volumes.
begin RECOVERY.DEVICES.REQUIRED
end RECOVERY.DEVICES.REQUIRED
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE
You can use the script as a guide and run the commands from a command line. Or
you can copy it to a file, modify it and the files it refers to, and run the script.
Tip: The commands in the plan file that is generated by DRM might not work on
your replacement systems. If necessary, use the recovery.instructions stanzas in the
plan file to store information about the particular commands to be used during
recovery from an actual disaster. Enter your site-specific information in the
recovery.instructions stanzas when you first create the plan file or after you test it.
At the completion of these steps, client requests for file restores are satisfied
directly from copy storage pool volumes and active-data pool volumes.
The disaster recovery plan issues commands by using the administrative client.
Note: Because this script invokes the administrative command-line client, ensure
that the communications options in the administrative client options file are set to
communicate with the recovered server before running this script. To review the
communications options used in the recovered server, see the server options file in
the DSMSERV.OPT.FILE stanza.
For more information, see the entry for the recovery plan prefix in Table 90 on
page 1054.
@echo off
rem Purpose: This script contains the steps required to recover the server
rem to the point where client restore requests can be satisfied
rem directly from available copy storage pool volumes.
rem Note: This script assumes that all volumes necessary for the restore have
rem been retrieved from the vault and are available. This script assumes
rem the recovery environment is compatible (essentially the same) as the
rem original. Any deviations require modification to this script and the
rem macros and scripts it runs. Alternatively, you can use this script
rem as a guide, and manually execute each step.
rem Restore the server database to latest version backed up per the
rem volume history file.
"D:\TSM\SERVER\DSMSERV" -k "Server1" restore db todate=09/26/2008 totime=13:28:52 +
source=dbb
rem Active-data pool volumes in this macro were not marked as 'offsite' at the time
rem PREPARE ran. They were likely destroyed in the disaster.
rem Recovery Administrator: Remove from macro any volumes not destroyed.
dsmadmc -id=%1 -pass=%2 -ITEMCOMMIT +
-OUTFILE="D:\TSM\SERVER1\PLANPRE.ACTIVEDATASTGPOOL.VOLUMES.DESTROYED.LOG" +
macro "D:\TSM\SERVER1\PLANPRE.ACTIVEDATASTGPOOL.VOLUMES.DESTROYED.MAC"
rem Tell the server these copy storage pool volumes are available for use.
rem Recovery Administrator: Remove from macro any volumes not obtained from vault.
dsmadmc -id=%1 -pass=%2 -ITEMCOMMIT +
-OUTFILE="D:\TSM\SERVER1\PLANPRE.COPYSTGPOOL.VOLUMES.AVAILABLE.LOG" +
macro "D:\TSM\SERVER1\PLANPRE.COPYSTGPOOL.VOLUMES.AVAILABLE.MAC"
rem Copy storage pool volumes in this macro were not marked as 'offsite' at the time
rem PREPARE ran. They were likely destroyed in the disaster.
rem Recovery Administrator: Remove from macro any volumes not destroyed.
dsmadmc -id=%1 -pass=%2 -ITEMCOMMIT +
-OUTFILE="D:\TSM\SERVER1\PLANPRE.COPYSTGPOOL.VOLUMES.DESTROYED.LOG" +
macro "D:\TSM\SERVER1\PLANPRE.COPYSTGPOOL.VOLUMES.DESTROYED.MAC"
:end
end RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script
Related tasks:
“Restoring to a point-in-time in a shared library environment” on page 983
“Scenario: Protecting the database and storage pools” on page 968
“Scenario: Recovering a lost or damaged storage pool volume” on page 979
“Example: Restoring a library manager database” on page 973
“Example: Restoring a library client database” on page 974
Related reference:
“Recovery instructions stanzas” on page 1105
RECOVERY.SCRIPT.NORMAL.MODE
You can use the script as a guide and run the commands from a command line. Or
you can copy it to a file, modify it and the files it refers to, and run the script. You
may need to modify the script because of differences between the original and the
replacement systems.
The disaster recovery plan issues commands using the administrative client.
Note: Because this script invokes the administrative client, you should ensure that
the communications options in the client options file are set to communicate with
the recovered server before running this script. To review the communications
options used in the recovered server, see the server options file in the
DSMSERV.OPT.FILE stanza.
For more information, see the entry for the recovery plan prefix in Table 90 on
page 1054.
The following stanza contains text strings that are too long to display in the
hardcopy or softcopy publications. The long text strings utilize a plus symbol (+)
to indicate string continuation on the next line.
@echo off
rem Purpose: This script contains the steps required to recover the server
rem primary storage pools. This mode allows you to return the
rem copy storage pool volumes to the vault and to run the
rem server as normal.
rem Note: This script assumes that all volumes necessary for the restore
rem have been retrieved from the vault and are available. This script
rem assumes the recovery environment is compatible (essentially the
rem same) as the original. Any deviations require modification to this
rem script and the macros and scripts it runs. Alternatively, you
rem can use this script as a guide, and manually execute each step.
rem Restore the primary storage pools from the copy storage pools.
dsmadmc -id=%1 -pass=%2 -ITEMCOMMIT +
-OUTFILE="D:\TSM\SERVER1\PLANPRE.STGPOOLS.RESTORE.LOG" +
macro "D:\TSM\SERVER1\PLANPRE.STGPOOLS.RESTORE.MAC"
:end
end RECOVERY.SCRIPT.NORMAL.MODE script
Related tasks:
“Restoring to a point-in-time in a shared library environment” on page 983
“Scenario: Protecting the database and storage pools” on page 968
“Scenario: Recovering a lost or damaged storage pool volume” on page 979
“Example: Restoring a library manager database” on page 973
“Example: Restoring a library client database” on page 974
LICENSE.REGISTRATION
COPYSTGPOOL.VOLUMES.AVAILABLE
Contains a macro to mark copy storage pool volumes that were moved offsite and
then moved back onsite. This stanza does not include copy storage pool virtual
volumes. You can use the information as a guide and issue the administrative
commands, or you can copy it to a file, modify it, and run it. This macro is
invoked by the RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
After a disaster, compare the copy storage pool volumes listed in this stanza with
the volumes that were moved back onsite. You should remove entries from this
stanza for any missing volumes.
begin COPYSTGPOOL.VOLUMES.AVAILABLE macro
/* Purpose: Mark copy storage pool volumes as available for use in recovery. */
/* Recovery Administrator: Remove any volumes that have not been obtained */
/* from the vault or are not available for any reason. */
/* Note: It is possible to use the mass update capability of the server */
/* UPDATE command instead of issuing an update for each volume. However, */
/* the ’update by volume’ technique used here allows you to select */
/* a subset of volumes to be processed. */
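The body of this macro (not shown above) typically consists of one UPDATE VOLUME
command per volume. The following line is a sketch of one such command; the volume
name and location value are assumptions based on the examples in this chapter:
upd vol TPBK05 acc=readwrite location="" wherestgpool=copypool wherestatus=empty wherelocation="dkvault"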
COPYSTGPOOL.VOLUMES.DESTROYED
After a disaster, compare the copy storage pool volumes listed in this stanza with
the volumes that were left onsite. If you have any of the volumes and they are
usable, you should remove their entries from this stanza.
begin COPYSTGPOOL.VOLUMES.DESTROYED macro
ACTIVEDATASTGPOOL.VOLUMES.AVAILABLE
Contains a macro to mark active-data pool volumes that were moved offsite and
then moved back onsite. This stanza does not include active-data pool virtual
volumes. You can use the information as a guide and issue the administrative
commands, or you can copy it to a file, modify it, and run it. This macro is
invoked by the RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
After a disaster, compare the active-data pool volumes listed in this stanza with the
volumes that were moved back onsite. You should remove entries from this stanza
for any missing volumes.
begin ACTIVEDATASTGPOOL.VOLUMES.AVAILABLE macro
/* Purpose: Mark active-data storage pool volumes as available for use in recovery. */
/* Recovery Administrator: Remove any volumes that have not been obtained */
/* from the vault or are not available for any reason. */
/* Note: It is possible to use the mass update capability of the server */
/* UPDATE command instead of issuing an update for each volume. However, */
/* the ’update by volume’ technique used here allows you to select */
/* a subset of volumes to be processed. */
ACTIVEDATASTGPOOL.VOLUMES.DESTROYED
After a disaster, compare the active-data pool volumes listed in this stanza with the
volumes that were left onsite. If you have any of the volumes and they are usable,
you should remove their entries from this stanza.
begin ACTIVEDATASTGPOOL.VOLUMES.DESTROYED macro
PRIMARY.VOLUMES.DESTROYED
During recovery, compare the primary storage pool volumes listed in this stanza
with the volumes that were onsite. If you have any of the volumes and they are
usable, remove their entries from the stanza.
This stanza does not include primary storage pool virtual volumes. These volumes
are considered offsite and have not been destroyed in a disaster.
begin PRIMARY.VOLUMES.DESTROYED macro
PRIMARY.VOLUMES.REPLACEMENT
Contains a macro to define primary storage pool volumes to the server. You can
use the macro as a guide and run the administrative commands from a command
line, or you can copy it to a file, modify it, and execute it. This macro is invoked
by the RECOVERY.SCRIPT.NORMAL.MODE script.
Primary storage pool volumes with entries in this stanza have at least one of the
following three characteristics:
v Original volume in a storage pool whose device class was DISK.
The SET DRMPLANVPOSTFIX command adds a character to the end of the names
of the original volumes listed in this stanza. This character does the following:
v Improves the retrievability of volume names that must be renamed in the
stanzas. Before using the volume names, change these names to new names that
are valid for the device class on the replacement system.
v Generates a new name that can be used by the replacement server. Your naming
convention must take into account the appended character.
Note:
1. Replacement primary volume names must be different from any other
original volume name or replacement name.
2. The RESTORE STGPOOL command restores storage pools on a logical basis.
There is no one-to-one relationship between an original volume and its
replacement.
3. There could be entries for the same volume in
PRIMARY.VOLUMES.REPLACEMENT if the volume has a device class of
DISK.
This stanza does not include primary storage pool virtual volumes. These volumes
are considered offsite and have not been destroyed in a disaster.
STGPOOLS.RESTORE
You can use the stanza as a guide and execute the administrative commands from
a command line. You can also copy it to a file, modify it, and execute it. This
macro is invoked by the RECOVERY.SCRIPT.NORMAL.MODE script.
This stanza does not include primary storage pool virtual volumes. These volumes
are considered offsite and have not been destroyed in a disaster.
/* Purpose: Restore the primary storage pools from copy storage pool(s). */
/* Recovery Administrator: Delete entries for any primary storage pools */
/* that you do not want to restore. */
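The body of this macro typically consists of one RESTORE STGPOOL command per primary
storage pool, for example (the pool name is an assumption):
restore stgpool backuppool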
Configuration stanzas
These stanzas contain copies of the following information: volume history, device
configuration, and server options.
VOLUME.HISTORY.FILE
Contains a copy of the volume history information when the recovery plan was
created. The DSMSERV RESTORE DB command uses the volume history file to
determine what volumes are needed to restore the database. It is used by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
The following rules determine where to place the volume history file at restore
time:
v If the server option file contains VOLUMEHISTORY options, the server uses the
fully qualified file name associated with the first entry. If the file name does not
begin with a directory specification, the server uses the prefix volhprefix.
v If the server option file does not contain VOLUMEHISTORY options, the server
uses the default name volhprefix followed by drmvolh.txt. The volhprefix is set to
the directory representing this instance of the server, which is typically the
directory from which the server was originally installed.
If a fully qualified file name was not specified in the server options file for the
VOLUMEHISTORY option, the server adds it to the DSMSERV.OPT.FILE stanza.
DEVICE.CONFIGURATION.FILE
Contains a copy of the server device configuration information when the recovery
plan was created. The DSMSERV RESTORE DB command uses the device
configuration file to read the database backup volumes. It is used by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
At recovery time, you may need to modify this stanza. You must update the device
configuration information if the hardware configuration at the recovery site has
changed. Examples of changes requiring updates to the configuration information
are:
v Different device names
v Use of a manual library instead of an automated library
v For automated libraries, the requirement to manually place the database backup
volumes in the automated library and update the configuration information to
identify the element within the library. This allows the server to locate the
required database backup volumes.
For details, see “Updating the device configuration file” on page 975.
The following rules determine where the device configuration file is placed at
restore time:
Note: The devcprefix is set to the directory representing this instance of the server
which is typically the directory from which the server was originally installed.
If a fully qualified file name was not specified for the DEVCONFIG option in the
server options file, the server adds it to the stanza DSMSERV.OPT.FILE.
begin DEVICE.CONFIGURATION.FILE
end DEVICE.CONFIGURATION.FILE
DSMSERV.OPT.FILE
Contains a copy of the server options file. This stanza is used by the
RECOVERY.SCRIPT.DISASTER.RECOVERY.MODE script.
Note: The following figure contains text strings that are too long to display in
hardcopy or softcopy publications. The long text strings have a plus symbol (+) at
the end of the string to indicate that they continue on the next line.
The disaster recovery plan file adds the DISABLESCHEDS option to the server
options file and sets it to YES. This option disables administrative and client
schedules while the server is being recovered. After the server is recovered, you
can enable scheduling by deleting the option or setting it to NO and then
restarting the server.
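For example, after the server is recovered, you might edit the recovered dsmserv.opt
and change the option that the plan file added from DISABLESCHEDS YES to:
disablescheds no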
begin DSMSERV.OPT.FILE
end DSMSERV.OPT.FILE
LICENSE.INFORMATION
begin LICENSE.INFORMATION
end LICENSE.INFORMATION
MACHINE.GENERAL.INFORMATION
Provides information for the server machine (for example, machine location). This
stanza is included in the plan file if the machine information is saved in the
database by using the DEFINE MACHINE command with ADSMSERVER=YES.
begin MACHINE.GENERAL.INFORMATION
Purpose: General information for machine DSMSRV1.
This is the machine that contains DSM server DSM.
Machine Name: DSMSRV1
Machine Priority: 1
Building: 21
Floor: 2
Room: 2749
Description: DSM Server for Branch 51
Recovery Media Name: DSMSRVIMAGE
end MACHINE.GENERAL.INFORMATION
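For reference, machine information such as that shown above might have been stored
with a command of the following form; the values are taken from the sample stanza,
and the exact command issued at the original site is not shown in this guide:
define machine dsmsrv1 adsmserver=yes priority=1 building=21 floor=2 room=2749 description="DSM Server for Branch 51"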
MACHINE.RECOVERY.INSTRUCTIONS
Provides the recovery instructions for the server machine. This stanza is included
in the plan file if the machine recovery instructions are saved in the database.
begin MACHINE.RECOVERY.INSTRUCTIONS
Purpose: Recovery instructions for machine DSMSRV1.
Primary Contact:
Jane Smith (wk 520-000-0000 hm 520-001-0001)
Secondary Contact:
John Adams (wk 520-000-0001 hm 520-002-0002)
end MACHINE.RECOVERY.INSTRUCTIONS
MACHINE.RECOVERY.CHARACTERISTICS
Provides the hardware and software characteristics for the server machine. This
stanza is included in the plan file if the machine characteristics are saved in the
database.
end MACHINE.CHARACTERISTICS
MACHINE.RECOVERY.MEDIA
Provides information about the media (for example, boot media) needed for
rebuilding the machine that contains the server. This stanza is included in the plan
file if recovery media information is saved in the database and it has been
associated with the machine that contains the server.
begin MACHINE.RECOVERY.MEDIA.REQUIRED
Purpose: Recovery media for machine DSMSRV1.
Recovery Media Name: DSMSRV
Type: Other
Volume Names:
Location: IRONMNT
Description: Server Installation CD
Product:
Product Information:
end MACHINE.RECOVERY.MEDIA.REQUIRED
| The framework for evaluating disaster recovery strategies consists of the following
| tiers:
| Figure 117. Tiers of disaster recovery
| Each tier corresponds to different recovery times and potentials for data loss. For
| example, in a tier 1 production site data is typically saved only selectively, and
| volumes that are stored at an offsite facility can be difficult to track. In addition,
| recovery time is unpredictable. After a disaster, hardware and software must be
| restored, and storage volumes must be sent back to the production site.
| Use the following questions as a guide to help you in the planning process:
| Cost How much can you afford for your disaster recovery implementation?
| Performance
| Do you want a low or a high performance disaster recovery solution?
| Recovery Time Objective (RTO) and Recovery Point Objective (RPO)
| What are your system requirements?
| Current disaster recovery strategy
| What disaster recovery strategy is implemented in your environment?
| Data What data do you need? Categorize and prioritize the data that you
| require.
| When you plan a disaster recovery strategy that might be suitable for your site,
| consider using DRM and Tivoli Storage Manager node replication for these
| reasons:
| v DRM is an effective tool for managing offsite vaulting. With DRM, you can
| configure and automatically generate a disaster recovery plan that contains the
| information, scripts, and procedures that are required to automatically restore
| the server and recover client data after a disaster.
| DRM also manages and tracks the media on which client data is stored, whether
| the data is on site, in-transit, or in a vault, so that the data can be more easily
| located if disaster strikes. DRM also generates scripts that assist you in
| documenting information-technology systems and recovery procedures that you
| can use, including procedures to rebuild the server.
| Use DRM alone to meet the disaster recovery objectives in tier 1, or use it
| together with other backup-and-recovery tools and technologies in tiers 2, 3 and
| 4.
| v Tivoli Storage Manager node replication meets the objectives of tier 5. After a
| successful node replication, the target server contains all metadata updates and
| data that is stored on the source server.
| In addition to fast recovery and minimal potential data loss, Tivoli Storage
| Manager node replication offers the following advantages:
| – Node replication is easier to manage than device-based replication.
| Device-based replication requires that you keep the database and the data it
| represents synchronized. You manually schedule database backups to match
| the point in time when the device synchronizes.
| – Results for Tivoli Storage Manager operations are reported in terms such as
| "node names" and "file names." In contrast, device-based replication results
| are reported in terms of "disks," "sectors," and "blocks."
|
| In the following figure, the Tivoli Storage Manager server and database, tape
| libraries, and tapes are in a single facility. If a disaster occurs, recovery time is
| unpredictable. Tier 0 is not recommended and data might never be recovered.
(Figure: Tier 0 configuration, with the server, database, tape libraries, and tapes all in a single data center.)
| As shown in the following figure, storage volumes, such as tape cartridges and
| media volumes, are vaulted at an offsite location. Transportation is typically
| handled by couriers. If a disaster occurs, the volumes are sent back to the
| production site after the hardware and the Tivoli Storage Manager server are restored.
(Figure: Tier 1 configuration, with DRM-managed daily backups moved from the data center to an offsite vault.)
| When you evaluate a tier 1 strategy, consider that an extended recovery time can
| impact business operations for several months or longer.
| A dedicated recovery site can reduce recovery time compared to the single
| production site in tier 1. The potential for data loss is also less. However, tier 2
| architecture increases the cost of disaster recovery because more hardware and
| software must be maintained. The recovery site must also have hardware and
| software that are compatible with the hardware and software at the primary site.
| For example, the recovery site must have compatible tape devices and Tivoli
| Storage Manager server software. Before the production site can be recovered, the
| hardware and software at the recovery site must be set up and running.
| Transporting the storage volumes to the recovery site also affects recovery time.
Electronic vaulting moves critical data offsite faster and more frequently than
traditional courier methods. Recovery time is reduced because critical data is
already stored at the recovery site. The potential for lost or misplaced data is also
reduced. However, because the recovery site runs continuously, a tier 3 strategy is
relatively more expensive than a tier 1 or a tier 2 strategy.
As shown in the following figure, the recovery site is physically separated from the
production site. Often, the recovery site is a second data center that is operated by
the same organization or by a storage service provider. If a disaster occurs at the
primary site, storage media with the non-critical data are transported from the
offsite storage facility to the recovery site.
(Figure: Tier 3 configuration, with critical data vaulted electronically to the recovery site and daily backups of non-critical data stored offsite for use at recovery time.)
| If you implement a tier 3 strategy, you can use Tivoli Storage Manager
| server-to-server communications for enterprise configuration of the Tivoli Storage
| Manager servers and command routing.
| As shown in the following figure, critical data is replicated between the two sites
| by using high-bandwidth connections and data replication technology, for example,
| Peer-to-Peer Remote Copy (PPRC). Data is transmitted over long distances
| by using technologies such as extended storage area network (SAN), Dense
| Wavelength Division Multiplexing (DWDM), and IP/WAN channel extenders.
(Figure: Tier 4 configuration, with data center A and data center B (hot site) connected by high-bandwidth links; DRM manages daily backups, and non-critical backups go to an offsite vault for use at recovery time.)
Non-critical backups from both sites are moved to a single offsite storage facility. If
a disaster occurs, the backup volumes are recovered by courier from the offsite
vault and transported to the designated recovery site.
If you implement a tier-4 disaster-recovery strategy, you can use Tivoli Storage
Manager server-to-server communications for enterprise configuration of multiple
Tivoli Storage Manager servers and command routing.
| Recovery time for a tier 4 strategy is faster than the recovery time for a tier 1, tier
| 2, or tier 3 strategy. Recovery time is faster because hardware, software, and data
| are available or can be made available at two sites.
(Figure: Tier 5 configuration, with critical data copied between two sites over high-bandwidth connections.)
Copies of critical data are available at both sites, and each server is able to recover the server at the alternate site. Only the data transactions that are in-flight are lost during a disaster.
If you implement a tier-5 disaster-recovery strategy, you can also use Tivoli Storage Manager server-to-server communications to configure multiple Tivoli Storage Manager servers and command routing.
As shown in the following figure, two sites are fully synchronized by using a high-bandwidth connection.
Figure: Two fully synchronized sites with data sharing.
Tier 6 is the most expensive disaster recovery strategy because it requires coupling or clustering applications, additional hardware to support data sharing, and high-bandwidth connections over extended distances. However, this strategy also offers the fastest recovery time and the least amount of data loss. The typical length of time for recovery is a few minutes.
Part 7. Appendixes
You can use a clustered environment with the following cluster software:
v IBM PowerHA® SystemMirror for AIX
v IBM Tivoli System Automation for Multiplatforms for AIX and Linux
v Microsoft Failover Cluster for Windows
You can use other cluster products with Tivoli Storage Manager; however, documentation is not available and support is limited. For the latest information about support for clustered environments, see http://www.ibm.com/support/docview.wss?uid=swg21609772.
Before you use another cluster product, verify that DB2 supports the required file
systems. For more information about the level of DB2 that you are using, refer to
the DB2 documentation at: http://pic.dhe.ibm.com/infocenter/db2luw/v9r7.
Search on Recommended file systems.
For more information about upgrading the server in a clustered environment, see
the Installation Guide.
This configuration provides the nodes with the ability to share data, which allows for higher server availability and minimized downtime. For example, you can configure, monitor, and control applications and hardware components that are deployed on a cluster. You can use an administrative cluster interface and Tivoli Storage Manager to designate cluster arrangements and define a failover pattern. The server is part of the cluster, which provides an extra level of security by ensuring that no transactions are missed due to a failed server. The failover pattern that you establish limits the impact of future failures.
Components in a server cluster are known as cluster objects. Cluster objects are
associated with a set of properties that have data values that describe the identity
and behavior of an object in the cluster. Cluster objects can include the following
components:
v Nodes
v Storage
v Services and applications
v Networks
You manage cluster objects by manipulating their properties, typically through a
cluster management application.
The Tivoli Storage Manager instance network name is independent of the name of
the physical node on which the Tivoli Storage Manager cluster group runs and
migrates from node to node. Clients connect to a Tivoli Storage Manager server by
using the Tivoli Storage Manager instance network name, rather than the Windows
node name. The Tivoli Storage Manager instance network name maps to a primary
or backup node. The mapping is dependent on which node owns the Tivoli
Storage Manager cluster group. Any client that uses Windows Internet Name
Service (WINS) or directory services to locate servers can automatically track the
Tivoli Storage Manager clustered server as it moves between nodes. You can
automatically track the Tivoli Storage Manager clustered server without modifying
or reconfiguring the client.
Each Tivoli Storage Manager cluster group has its own disk as part of a cluster
resource group. Tivoli Storage Manager cluster groups cannot share data between
the cluster groups. Each Tivoli Storage Manager server that is configured in a
Tivoli Storage Manager cluster group has its database, active logs, recovery logs,
and set of storage pool volumes on a separate disk owned by that Tivoli Storage
Manager cluster group.
The following example demonstrates how Microsoft Failover Cluster Manager works for a Tivoli Storage Manager cluster server.
Assume that a clustered Tivoli Storage Manager server that is named JUPITER is
running on Node Z and a clustered Tivoli Storage Manager server that is named
SATURN is running on Node X. Clients connect to the Tivoli Storage Manager
server JUPITER and the Tivoli Storage Manager server SATURN without knowing
which node hosts their server.
Figure: Clients connect to the clustered servers, which run on Node Z and Node X; each node has a local disk.
When one of the software or hardware resources fails, failover occurs. Resources
(for example: applications, disks, and an IP address) migrate from the failed node
to the remaining node. The remaining node:
v Takes over the Tivoli Storage Manager cluster group
v Brings the disk resources, the network resources, and the DB2 resource online
v Restarts the Tivoli Storage Manager service
v Provides access to administrators and clients
The following table describes the hardware and software that you can use with
Tivoli Storage Manager fiber-tape failover.
Table 94. Hardware and software supported for fiber-tape failover
Operating system: Microsoft Windows Server 2008, Microsoft Windows Server 2008 R2 (64-bit), or Microsoft Windows 2012
Fibre Channel adapter: QLogic QLE2462 with a Storport driver
Fibre Channel tape library and drives: IBM or other vendor Fibre Channel directly attached tape and library devices. For the latest list of supported devices, see the IBM Support Portal.
Ensure that the same level of Windows (Windows 2008, Windows 2008 R2, or Windows 2012) is installed on all computers in the cluster.
To meet the minimum requirements for failover, ensure that one of the following
components is installed on the Windows system:
v Directly attached Fibre Channel tape or library device from IBM
v Directly attached tape or library device from another vendor
For best results, install both a Fibre Channel adapter and a Fibre Channel tape
library or drive on the Windows system. If you install a Fibre Channel adapter, it
must use a Storport driver.
If you use persistent reservation, ensure that you select a tape drive that you can
use for persistent reservation for all clusters within the system. For IBM tape
When a node fails or must be taken offline, which node or nodes in the cluster pick up the transaction processing? In a two-node cluster, little planning is necessary. In a more complex arrangement, consider how your transaction processing is best handled. Account for a form of load balancing among your nodes so that you maintain peak performance. Another consideration is to ensure that your customers see little or no lag and no drop in productivity.
Microsoft Cluster Servers and Microsoft Failover Clusters require each Tivoli
Storage Manager server instance to have a private set of disk resources. Although
nodes can share disk resources, only one node can actively control a disk at a time.
Attention: Ensure that the same level of Windows (Windows 2008, Windows 2008 R2, or Windows 2012) is installed on all computers in the cluster.
Is one configuration better than the other? To determine your best installation, you
need to look at the differences in performance and cost. Assume that you have a
Tivoli Storage Manager server-dedicated cluster whose nodes have comparable
power. During failover, the performance of a configuration might degrade because
one node must manage both Tivoli Storage Manager Cluster Instances. If each
node handles 100 clients in a normal operation, one node must handle 200 clients
during a failure.
The following guidelines help determine what resources are needed for a
successful Tivoli Storage Manager cluster:
1. Decide which cluster configuration you must use with servers that use disk
devices. Each Tivoli Storage Manager Cluster Instance needs a separate set of
disk resources on the shared disk subsystem. Therefore, you might have
problems if you configure the I/O subsystem as one large array when you
configure a two server cluster and later decide to expand to a four server
cluster.
2. Identify the disk resources to be dedicated to Tivoli Storage Manager. Do not
divide a shared disk into multiple partitions with each partition assigned to a
different application and thus a different cluster group.
For example, Application A, a stable application, might be forced to fail over because of a software problem with Application B if both applications use partitions that are part of the same physical disk. A software problem with Application B causes the Cluster Services to fail over Application B and its corequisite disk resource. Because the partitions exist on the same physical drive, Application A is also forced to fail over. Therefore, as a best practice, when you install and configure a Tivoli Storage Manager application, dedicate a shared disk as a resource that can fail over if necessary.
3. Ensure that you have an IP address and network name for each Tivoli Storage
Manager server instance that you configure. For a cluster that involves two
Tivoli Storage Manager cluster instances, two network names are required.
4. Create a cluster resource group and move disk resources to it. Each Tivoli
Storage Manager server instance requires a cluster resource group. Initially, the
group should contain only disk resources. You might choose just to rename an
existing resource group that contains only disk resources.
5. Tivoli Storage Manager is installed to a local disk on each node in the cluster.
Determine the disk to be used on each node. It is a best practice to use the
same drive letter on each system.
6. You can attach tape devices in either of the following configurations, if you
choose not to use Tivoli Storage Manager tape failover support:
Steps for the procedure vary depending on which node you are currently configuring. When you configure the primary node in the set, the Tivoli Storage Manager server instance is created and configured. When you configure the remaining nodes in the set, each node is updated so that it can host the Tivoli Storage Manager server instance that was created on the primary node. A Tivoli Storage Manager server must be installed and configured on the first node in the set before you configure the remaining nodes in the set. Violating this requirement causes the configuration to fail.
When you configure multiple Tivoli Storage Manager cluster groups, completely configure one cluster group before you move on to the next. Because each Tivoli Storage Manager cluster group has its own IP address and network name, configuring each Tivoli Storage Manager cluster group separately lessens the possibility of mistakes.
Use the Failover Cluster Manager program on the computer that owns the shared
disk or tape resource to prepare your resource group. Initially, the group must
contain only disk resources. You can create a group and move disk resources to it.
You can also choose to rename an existing resource group that contains only disk
resources.
To prepare a resource group for cluster configuration, complete the following steps:
1. Open the Failover Cluster Manager program. Right-click on Services and
Applications and then choose More Actions > Create Empty Service or
Application.
2. Right-click on New Service or Application, select Change the name and
choose a new name for the resource group, for example, TSMGROUP.
3. Right-click on the resource group TSMGROUP and select Add storage.
4. On the Add storage area panel, select the shared volume or volumes for Tivoli
Storage Manager and click OK. The resource group TSMGROUP, which
contains the disk volumes you just added, is displayed.
Complete the following steps for each node in your cluster to install the Tivoli
Storage Manager server:
1. Log in with the domain user ID. The domain user must be a member of the
Domain Administrators group.
2. Install the Tivoli Storage Manager server to a local disk on each node. Use the
same local disk drive letter for each node.
3. Restart the system after the server installation completes.
To verify that the Tivoli Storage Manager server instance in a Microsoft Failover
Cluster is created and configured correctly, complete the following steps:
1. From the Failover Cluster Manager, select the server instance. The network
name that you configured is displayed in the Server Name pane.
2. In the Other Resources pane, confirm that the server instance and the IBM DB2
server resource are displayed.
3. Right-click the Tivoli Storage Manager server instance and select Bring this
resource online.
Check your Windows Event log on a regular, if not daily, basis to monitor the activity of the nodes in the cluster. By checking the log, you can determine whether a node failed and needs maintenance.
The following list of topics describes situations that might affect the configuration
or format of your cluster after it is operational.
To migrate an existing Tivoli Storage Manager server into a cluster, you can either
move the clients or perform a backup and restore procedure. The choice depends
primarily on the availability and capacity of other Tivoli Storage Manager server
computers in your site and your familiarity with the backup and restore procedure.
If you move clients from a non-clustered Tivoli Storage Manager server computer to a clustered one, you can gradually migrate your users to the new system without interrupting services. However, you must have the hardware that is needed to run two Tivoli Storage Manager servers simultaneously.
For example, suppose that you have no hardware other than the two servers to be
clustered and you plan to use the computer that is currently running the Tivoli
Storage Manager server as a node. Follow this procedure to remove Tivoli Storage
Manager from the computer and reinstall it in the cluster:
1. Back up all disk storage pools to a copy storage pool.
2. Back up the database of the existing Tivoli Storage Manager server.
3. Perform the installation and configuration of the cluster.
4. Restore the database to the clustered Tivoli Storage Manager server.
5. Restore the disk storage pool volumes from the copy storage pool.
6. After you verify that all of your data is on the clustered server, delete the old
server.
There are reasons other than a system failure for manually moving a virtual Tivoli Storage Manager server. For example, if the Windows server that acts as the primary node requires hardware or system maintenance, you might use the Cluster Administrator to move control of the virtual Tivoli Storage Manager server to the secondary node until the maintenance is completed. Clients experience a failover as if the primary server failed and the secondary server had taken over the virtual Tivoli Storage Manager server. After the Tivoli Storage Manager server is moved to the secondary node, the Tivoli Storage Manager console is no longer available from the primary node. Run the Tivoli Storage Manager Console from the secondary node of the cluster.
The cluster log is a more complete record of cluster activity than the Microsoft Windows Event Log; it records the cluster service activity, including activity that is also recorded in the event log. Although the event log can point you to a problem, the cluster log helps you resolve the problem.
The cluster log is enabled by default in Windows. Its output is written to a log file in %SystemRoot%\Cluster. For more information, see the Windows online help documentation.
To use the interface, you must first define an EXTERNAL-type Tivoli Storage
Manager library that represents the media manager. You do not define drives, label
volumes, or check in media. Refer to your media manager's documentation for that
product's setup information and instructions for operational usage.
The details of the request types and the required processing are described in the
sections that follow. The request types are:
v Initialization of the external program
v Begin Batch
v End Batch
v Volume Query
v Volume Eject
v Volume Release
v Volume Mount
v Volume Dismount
The libraryname passed in a request must be returned in the response. The volume
specified in an eject request or a query request must be returned in the response.
The volume specified in a mount request (except for 'SCRTCH') must be returned
in the response. When 'SCRTCH' is specified in a mount request, the actual volume
mounted must be returned.
CreateProcess call
The server creates two anonymous unidirectional pipes and maps them to the
stdin and stdout streams during the CreateProcess call. When a standard handle is
redirected to refer to a file or a pipe, the handle can only be used by the ReadFile
and WriteFile functions.
This precludes normal C functions such as gets or printf. Since the server will
never terminate the external program process, it is imperative that the external
program recognize a read or write failure on the pipes and exit the process. In
addition, the external program should exit the process if it reads an unrecognized
command.
The external program can obtain values for the read and write handles by using the following calls:
readPipe=GetStdHandle(STD_INPUT_HANDLE)
writePipe=GetStdHandle(STD_OUTPUT_HANDLE)
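The following minimal sketch illustrates only the pipe-handling rules that are described above: obtain the standard handles with GetStdHandle, use ReadFile and WriteFile rather than functions such as gets or printf, and exit the process when a read or write fails or when an unrecognized command arrives. The handleRequest and sendResponse functions are hypothetical placeholders; the actual request and response strings are defined by the server interface and are not shown here.

/* Sketch of an external media-manager program's pipe handling. */
#include <windows.h>
#include <string.h>
#include <stdlib.h>

static HANDLE readPipe, writePipe;

/* Write a response string to the server; exit if the pipe is broken. */
static void sendResponse(const char *text)
{
    DWORD written;
    if (!WriteFile(writePipe, text, (DWORD)strlen(text), &written, NULL))
        exit(1);                    /* write failure: end the process */
}

/* Hypothetical request handler: returns the response text that is defined
 * by the server interface, or NULL if the command is not recognized. */
static const char *handleRequest(const char *request)
{
    (void)request;                  /* parsing is product-specific and omitted */
    return NULL;
}

int main(void)
{
    char buffer[4096];
    DWORD bytesRead;

    readPipe  = GetStdHandle(STD_INPUT_HANDLE);
    writePipe = GetStdHandle(STD_OUTPUT_HANDLE);

    for (;;) {
        if (!ReadFile(readPipe, buffer, sizeof(buffer) - 1, &bytesRead, NULL)
            || bytesRead == 0)
            exit(0);                /* read failure or closed pipe: exit */
        buffer[bytesRead] = '\0';

        const char *response = handleRequest(buffer);
        if (response == NULL)
            exit(1);                /* unrecognized command: exit */
        sendResponse(response);
    }
}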
For each external library defined to the server, the following must occur during
server initialization:
1. The server loads the external program (CreateProcess) in a newly created
process and creates pipes to the external program.
2. The server sends an initialization request description string, in text form, into
the standard input (stdin) stream of the external program. The server waits for
the response.
3. When the external process completes the request, the process must write an
initialization response string, in text form, into its standard output (stdout)
stream.
4. The server closes the pipes.
5. When the agent detects that the pipes are closed, it performs any necessary
cleanup and calls the stdlib exit routine.
The move commands cause a QUERY to be issued for a volume. If the QUERY
indicates that the volume is in the library, a subsequent EJECT for that volume is
issued. Because the move commands can match any number of volumes, a QUERY
and an EJECT request is issued for each matching volume.
The QUERY MEDIA command results in QUERY requests being sent to the agent.
During certain types of processing, Tivoli Storage Manager might need to know if
a volume is present in a library. The external agent should verify that the volume
is physically present in the library.
1. The server loads the external program in a newly created process and creates
pipes to the external program.
2. The server sends an initialization request description string (in text form) into
the standard input (stdin) stream of the external program. The server waits for
the response.
3. When the external process completes the request, the process must write an
initialization response string (in text form) into its standard output (stdout)
stream.
4. The server sends the BEGIN BATCH request (stdin).
5. The agent sends the BEGIN BATCH response (stdout).
6. The server sends 1 to n volume requests (n > 1). These can be any number of
QUERY or EJECT requests. For each request, the agent will send the applicable
QUERY response or EJECT response.
7. The server sends the END BATCH request (stdin).
8. The agent sends the END BATCH response (stdout), performs any necessary
cleanup, and calls the stdlib exit routine.
If the code for any response (except for EJECT and QUERY) is not equal to
SUCCESS, Tivoli Storage Manager does not proceed with the subsequent steps.
After the agent sends a non-SUCCESS return code for any response, the agent will
perform any necessary cleanup and call the stdlib exit routine.
However, even if the code for EJECT or QUERY requests is not equal to SUCCESS,
the agent will continue to send these requests.
If the server gets an error while trying to write to the agent, it will close the pipes,
perform any necessary cleanup, and terminate the current request.
where:
resultCode
One of the following:
v SUCCESS
v INTERNAL_ERROR
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volume
Specifies the volume name to be queried.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volume
Specifies the volume name queried.
resultCode
One of the following:
v SUCCESS
v LIBRARY_ERROR
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v CANCELLED
v TIMED_OUT
v INTERNAL_ERROR
If resultCode is not SUCCESS, the exit must return statusValue set to UNDEFINED.
If resultCode is SUCCESS, STATUS must be one of the following values:
v IN_LIBRARY
v NOT_IN_LIBRARY
IN_LIBRARY means that the volume is currently in the library and available to be
mounted.
Tivoli Storage Manager does not attempt any other type of operation with that
library until an initialization request has succeeded. The server sends an
initialization request first. If the initialization is successful, the request is sent. If the
initialization is not successful, the request fails. The external media management
program can detect whether the initialization request is being sent by itself or with
another request by detecting end-of-file on the stdin stream. When end-of-file is
detected, the external program must end by using the stdlib exit routine (not the
return call).
When a valid response is sent by the external program, the external program must
end by using the exit routine.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
resultcode
One of the following:
v SUCCESS
v NOT_READY
v INTERNAL_ERROR
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volume
Specifies the ejected volume.
resultCode
One of the following:
v SUCCESS
v LIBRARY_ERROR
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v CANCELLED
v TIMED_OUT
v INTERNAL_ERROR
The external program must send a response to the release request. No matter what
response is received from the external program, Tivoli Storage Manager returns the
volume to scratch. For this reason, Tivoli Storage Manager and the external
program can have conflicting information on which volumes are scratch. If an error
occurs, the external program should log the failure so that the external library
inventory can be synchronized later with Tivoli Storage Manager. The
synchronization can be a manual operation.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume returned to scratch (released).
resultcode
One of the following:
v SUCCESS
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v INTERNAL_ERROR
The volume mounted by the external media management program must be a tape
with a standard IBM label that matches the external volume label. When the
external program completes the mount request, the program must send a response.
If the mount was successful, the external program must remain active. If the
mount failed, the external program must end immediately by using the stdlib exit
routine.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the actual volume name if the request is for an existing volume. If a
scratch mount is requested, the volname is set to SCRTCH.
accessmode
Specifies the access mode required for the volume. Possible values are
READONLY and READWRITE.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume mounted for the request.
specialfile
The fully qualified path name of the device special file for the drive in which
the volume was mounted. If the mount request fails, the value should be set to
/dev/null.
The external program must ensure that the special file is closed before the
response is returned to the server.
resultcode
One of the following:
v SUCCESS
v DRIVE_ERROR
v LIBRARY_ERROR
v VOLUME_UNKNOWN
v VOLUME_UNAVAILABLE
v CANCELLED
v TIMED_OUT
v INTERNAL_ERROR
After the dismount response is sent, the external process ends immediately by
using the stdlib exit routine.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume to be dismounted.
where:
libraryname
Specifies the name of the EXTERNAL library as defined to Tivoli Storage
Manager.
volname
Specifies the name of the volume dismounted.
resultcode
One of the following:
v SUCCESS
v DRIVE_ERROR
v LIBRARY_ERROR
v INTERNAL_ERROR
The samples for the C, H, and make files are shipped with the server code in the
\win32app\ibm\adsm directory.
You can also use Tivoli Storage Manager commands to control event logging. For
details, see Chapter 32, “Logging IBM Tivoli Storage Manager events to receivers,”
on page 885 and Administrator's Reference.
/***********************************************************************
* Name: USEREXITSAMPLE.H
* Description: Declarations for a user exit
* Environment: WINDOWS NT
***********************************************************************/
#ifndef _H_USEREXITSAMPLE
#define _H_USEREXITSAMPLE
#include <stdio.h>
#include <sys/types.h>
#ifndef uchar
typedef unsigned char uchar;
#endif
/* DateTime Structure Definitions - TSM representation of a timestamp */
typedef struct
{
uchar year; /* Years since BASE_YEAR (0-255) */
uchar mon; /* Month (1 - 12) */
uchar day; /* Day (1 - 31) */
uchar hour; /* Hour (0 - 23) */
uchar min; /* Minutes (0 - 59) */
uchar sec; /* Seconds (0 - 59) */
} DateTime;
/******************************************
* Some field size definitions (in bytes) *
******************************************/
#define MAX_SERVERNAME_LENGTH 64
#define MAX_NODE_LENGTH 64
/**********************************************
* Event Types (in elEventRecvData.eventType) *
**********************************************/
/***************************************************
* Application Types (in elEventRecvData.applType) *
***************************************************/
/*****************************************************
* Event Severity Codes (in elEventRecvData.sevCode) *
*****************************************************/
/************************************************************
* Data Structure of Event that is passed to the User-Exit. *
* The same structure is used for a file receiver *
************************************************************/
/************************************
* Size of the Event data structure *
************************************/
/**************************************
*** Do not modify above this line. ***
**************************************/
#endif
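The DateTime structure that is declared in USEREXITSAMPLE.H can be converted to a readable timestamp for logging. The following helper is not part of the shipped samples; because the value of BASE_YEAR is not shown in this excerpt, the base year is supplied by the caller, and the base year that is used in the example call is only an assumption for illustration.

/* Illustrative helper: format the DateTime structure as a readable timestamp. */
#include <stdio.h>

typedef unsigned char uchar;

typedef struct
{
    uchar year; /* Years since BASE_YEAR (0-255) */
    uchar mon;  /* Month (1 - 12) */
    uchar day;  /* Day (1 - 31) */
    uchar hour; /* Hour (0 - 23) */
    uchar min;  /* Minutes (0 - 59) */
    uchar sec;  /* Seconds (0 - 59) */
} DateTime;

/* Write the timestamp into buf as YYYY-MM-DD HH:MM:SS. */
static void formatDateTime(const DateTime *dt, int baseYear,
                           char *buf, size_t bufLen)
{
    snprintf(buf, bufLen, "%04d-%02d-%02d %02d:%02d:%02d",
             baseYear + dt->year, dt->mon, dt->day,
             dt->hour, dt->min, dt->sec);
}

int main(void)
{
    DateTime dt = { 13, 7, 24, 16, 5, 30 }; /* example values only */
    char buf[32];
    formatDateTime(&dt, 2000, buf, sizeof(buf)); /* base year of 2000 is assumed for illustration */
    printf("%s\n", buf);
    return 0;
}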
/***********************************************************************
* Name: USEREXITSAMPLE.C
* Description: Example user-exit program that is invoked by
* the TSM V3 Server
* Environment: *********************************************
* ** This is a platform-specific source file **
* ** versioned for: "WINDOWS NT" **
* *********************************************
***********************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <io.h>
#include <windows.h>
#include "USEREXITSAMPLE.H"
/**************************************
*** Do not modify below this line. ***
**************************************/
/****************
*** DLL MAIN ***
****************/
BOOL WINAPI
DllMain(HMODULE hMod, DWORD fdwReason, LPVOID lpvReserved)
{
return(TRUE);
} // End of DllMain
/******************************************************************
* Procedure: adsmV3UserExit
* If the user-exit is specified on the server, a valid and
* appropriate event will cause an elEventRecvData structure
* (see USEREXITSAMPLE.H) to be passed to a procedure named
* adsmV3UserExit that returns a void.
*
* This procedure can be named differently:
* ----------------------------------------
/**************************************
*** Do not modify above this line. ***
**************************************/
/* Be aware that certain function calls are process-wide and can cause
* synchronization of all threads running under the TSM Server process!
* Among these is the system() function call. Use of this call can
* cause the server process to hang and otherwise affect performance.
* Also avoid any functions that are not thread-safe. Consult your
* system’s programming reference material for more information.
*/
The following table presents the format of the output. Fields are separated by
blank spaces.
Table 95. Readable text file exit (FILETEXTEXIT) format
Column Description
0001-0006 Event number (with leading zeros)
0008-0010 Severity code number
0012-0013 Application type number
0015-0023 Session ID number
0025-0027 Event structure version number
0029-0031 Event type number
0033-0046 Date/Time (YYYYMMDDHHmmSS)
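The following sketch shows one way to extract the fields that are listed in Table 95 from a line of FILETEXTEXIT output that is read from standard input. Only the columns shown above are parsed; the positions are 1-based and inclusive, as in the table, and the column helper function is illustrative rather than part of the product.

/* Sketch: parse the Table 95 columns from one line of FILETEXTEXIT output. */
#include <stdio.h>
#include <string.h>

/* Copy the 1-based, inclusive column range [first, last] out of line. */
static void column(const char *line, int first, int last,
                   char *out, size_t outLen)
{
    size_t len = (size_t)(last - first + 1);
    if (len >= outLen)
        len = outLen - 1;
    strncpy(out, line + first - 1, len);
    out[len] = '\0';
}

int main(void)
{
    char line[512];

    while (fgets(line, sizeof(line), stdin) != NULL) {
        char eventNum[8], sevCode[4], applType[4], sessionId[16];
        char structVer[4], eventType[4], dateTime[16];

        if (strlen(line) < 46)
            continue;                 /* line too short to contain all fields */

        column(line, 1, 6,   eventNum,  sizeof(eventNum));  /* event number   */
        column(line, 8, 10,  sevCode,   sizeof(sevCode));   /* severity code  */
        column(line, 12, 13, applType,  sizeof(applType));  /* appl. type     */
        column(line, 15, 23, sessionId, sizeof(sessionId)); /* session ID     */
        column(line, 25, 27, structVer, sizeof(structVer)); /* struct version */
        column(line, 29, 31, eventType, sizeof(eventType)); /* event type     */
        column(line, 33, 46, dateTime,  sizeof(dateTime));  /* date/time      */

        printf("event %s type %s at %s\n", eventNum, eventType, dateTime);
    }
    return 0;
}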
Active Directory can be used to automate Tivoli Storage Manager client node
registration and management, and Active Directory services are scalable, so
administrators can add and remove Tivoli Storage Manager servers and their entire
inventories of client nodes as required.
When Tivoli Storage Manager servers are added, their registered client nodes also
become part of the domain and are included in the Active Directory. Tivoli Storage
Manager provides an Active Directory Configuration wizard that can be used to
add and remove Tivoli Storage Manager servers. This wizard can be accessed from
the Tivoli Storage Manager Console. Tivoli Storage Manager commands that add,
remove, and rename Tivoli Storage Manager client nodes automatically update the
Active Directory.
The Tivoli Storage Manager server interacts with the Active Directory service when
it is started. At that time the following sequence of events takes place with respect
to Active Directory:
Tivoli Storage Manager Server:
Defines itself to the Active Directory when it is started
Tivoli Storage Manager Client:
1. Connects to the Active Directory server for communication protocol
information
2. Looks up protocol information in Active Directory and stores it in its
options file
3. Connects with the Tivoli Storage Manager server
The following tasks are required to set up the Active Directory environment and
Tivoli Storage Manager:
v Configure Active Directory on the Windows machine
v Perform a one-time configuration for Tivoli Storage Manager and Active
Directory
v Configure each Tivoli Storage Manager server instance
From the domain controller containing the Active Directory schema, perform the
following steps:
1. Click Start > Run. The Run dialog opens.
2. Type schmmgmt.msc in the Run dialog entry field and click OK. The Active
Directory schema snap-in opens.
3. In the console tree, right-click Active Directory Schema and select Operations
Master.
4. Click The Schema may be modified on this domain controller.
5. In the console tree, right-click Active Directory Schema and select Permissions.
6. If you do not see your name in the name section, click Add.
7. Select your account name, click Add, and click OK.
8. Select the account name and check the Full Control checkbox in the Allow
column of the Permissions area. Click OK.
This one-time configuration allows Tivoli Storage Manager to extend the schema
by adding objects to the schema that define Tivoli Storage Manager servers.
At this point, you can disable the permissions to extend the schema. To disable
permissions, return to the schema snap-in, right-click Active Directory Schema,
and click Permissions. Select your account name, uncheck the Full Control
checkbox, and click OK. If you want to disable further schema updates you can
right-click on the Active Directory Schema, and click Operations Master. Uncheck
The Schema may be modified on this Domain Controller and click OK.
Tivoli Storage Manager allows administrators to add or edit server entries so that they can define non-Windows Tivoli Storage Manager servers to Active Directory. When Tivoli Storage Manager clients look up Tivoli Storage Manager servers in Active Directory, the platform on which a server runs does not matter to them. The clients are looking only for communication parameters that they can use to connect to a Tivoli Storage Manager server with which they are registered.
Administrators can modify the three server options that control Tivoli Storage
Manager server behavior regarding Active Directory.
Note: Typically, the Tivoli Storage Manager server is run as a Windows service.
The Tivoli Storage Manager server service should be configured to run under an
account other than the default System Account because the System Account does
not have the permissions needed to access Active Directory over the network.
Administrators can modify the service account using the Service Configuration
wizard in the Tivoli Storage Manager Console.
To define the Tivoli Storage Manager server to Active Directory, perform the
following steps:
1. Expand the Tivoli Storage Manager Console tree until the Tivoli Storage
Manager server for which you want to modify options is displayed. Expand the
Server and expand Reports.
2. Click Service Information. The Service Information report appears in the right
pane. The Tivoli Storage Manager server, running as a service, should appear in
the Service Information report. If the server does not appear in the report,
ensure that you have initialized the server using the Server Initialization wizard
in the Tivoli Storage Manager Console.
3. Right-click the Tivoli Storage Manager server service and select Edit Options
File. The Server Options File tabbed dialog appears.
4. Click the Active Directory tab. The Active Directory options appear.
5. Check Register with Active Directory on TSM server startup.
6. Check Unregister with Active Directory on TSM server shutdown.
7. Select Automatically Detect in the Domain Controller section and click OK.
The next time the Tivoli Storage Manager server starts, it defines itself to Active
Directory and adds information including the list of registered nodes and protocol
information. This can be verified at any time using the Active Directory
Configuration wizard in the Tivoli Storage Manager Console.
Perform the following steps to remove a Tivoli Storage Manager server from Active
Directory:
1. Expand the Tivoli Storage Manager Console tree until the Tivoli Storage
Manager server you want to remove from the Active Directory is displayed.
2. Expand the server and click Wizards. The Wizards list appears in the right
pane.
You can also connect a client node with a Tivoli Storage Manager server during the
client configuration process. To select a server, click the Browse button on the
communications protocol parameters page of the Client Configuration Wizard. The
wizard displays a list of Tivoli Storage Manager servers with which the node is
registered and that support the selected protocol. When you select a server and
complete the wizard, the corresponding communication protocol information is
included in the client options file.
Table 96 describes the attributes Tivoli Storage Manager uses to store information in the Active Directory.
Table 96. Tivoli Storage Manager Attribute Names
Attribute Common Name     Description         Parent Container/Class
IBM-TSM-SRV-ADDRESS       HTTP Address        IBM-TSM-SRV-TCPHTTPCLASS
IBM-TSM-SRV-PORT          HTTP Port           IBM-TSM-SRV-TCPHTTPCLASS
IBM-TSM-SRV-ADDRESS       Named Pipe Address  IBM-TSM-SRV-NAMEDPIPECLASS
IBM-TSM-SRV-ADDRESS       TCPIP Address       IBM-TSM-SRV-TCPHTTPCLASS
IBM-TSM-SRV-PORT          TCPIP Port          IBM-TSM-SRV-TCPHTTPCLASS
IBM-TSM-SRV-NODENAME      Node Name           IBM-TSM-SRV-NODECLASS
The Active Directory disk storage requirements are dependent on the number of
Tivoli Storage Manager servers and clients registered for the particular installation.
The disk storage requirement for a full replica can be represented by the following
formula:
Disk Usage = NumberOfServers * ( 4.1KB + ( NumberofClients * 2.04KB ) )
The Active Directory disk storage requirements for Global Catalog servers (partial
replicas only) are dependent on the same factors. The disk storage requirement for
a partial replica can be represented by the following formula:
Disk Usage = NumberOfServers * ( 4.1KB + ( NumberofClients * 2.04KB ) )
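The following small sketch computes the estimate given by the formula above, using the constants shown (4.1 KB per server and 2.04 KB per registered client node). The adDiskUsageKB function name and the example server and client counts are illustrative only.

/* Sketch of the Active Directory disk-usage estimate:
 * Disk Usage = NumberOfServers * (4.1KB + (NumberOfClients * 2.04KB))
 * The same formula is shown for both full and partial replicas. */
#include <stdio.h>

static double adDiskUsageKB(int numberOfServers, int numberOfClients)
{
    return numberOfServers * (4.1 + numberOfClients * 2.04);
}

int main(void)
{
    /* Example: 2 servers, each with 500 registered client nodes. */
    printf("Estimated disk usage: %.1f KB\n", adDiskUsageKB(2, 500));
    return 0;
}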
Typical Tivoli Storage Manager usage consists of only minor changes to the data that is stored in the Active Directory. This information changes only when new servers are defined, client nodes are registered, or communications parameters are changed. Because these parameters change infrequently on a day-to-day basis, the network traffic requirement is very low: less than 100 KB of data per day for both partial and full replicas.
Accessibility features
The following list includes the major accessibility features in the Tivoli Storage
Manager family of products:
v Keyboard-only operation
v Interfaces that are commonly used by screen readers
v Keys that are discernible by touch but do not activate just by touching them
v Industry-standard devices for ports and connectors
v The attachment of alternative input and output devices
If you install the IBM Tivoli Storage Manager Operations Center in console mode,
the installation is fully accessible.
The accessibility features of the Operations Center are fully supported only in the
Mozilla Firefox browser that is running on a Windows system.
The Tivoli Storage Manager Information Center, and its related publications, are
accessibility-enabled. For information about the accessibility features of the
information center, see the following topic: http://pic.dhe.ibm.com/infocenter/
tsminfo/v6r3/topic/com.ibm.help.ic.doc/iehs36_accessibility.html.
Keyboard navigation
Vendor software
The Tivoli Storage Manager product family includes certain vendor software that is
not covered under the IBM license agreement. IBM makes no representation about
the accessibility features of these products. Contact the vendor for the accessibility
information about its products.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
Licensees of this program who want to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758
U.S.A.
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
This information is for planning purposes only. The information herein is subject to
change before the products described become available.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work, must
include a copyright notice as follows: © (your company name) (year). Portions of
this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp.
_enter the year or years_.
If you are viewing this information in softcopy, the photographs and color
illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com® are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web at “Copyright and
trademark information” at http://www.ibm.com/legal/copytrade.shtml.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.
LTO and Ultrium are trademarks of HP, IBM Corp. and Quantum in the U.S. and
other countries.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other product and service names might be trademarks of IBM or other companies.
Glossary
This glossary includes terms and definitions for IBM Tivoli Storage Manager and IBM Tivoli Storage
FlashCopy Manager products.
client
    A software program or computer that requests services from a server.
client acceptor
    An HTTP service that serves the applet for the web client to web browsers. On Windows systems, the client acceptor is installed and run as a service. On AIX, UNIX, and Linux systems, the client acceptor is run as a daemon, and is also called the client acceptor daemon (CAD).
client acceptor daemon (CAD)
    See client acceptor.
client domain
    The set of drives, file systems, or volumes that the user selects to back up or archive data, using the backup-archive client.
client node
    A file server or workstation on which the backup-archive client program has been installed, and which has been registered to the server.
client node session
    A session in which a client node communicates with a server to perform backup, restore, archive, retrieve, migrate, or recall requests. Contrast with administrative session.
client option set
    A group of options that are defined on the server and used on client nodes in conjunction with client options files.
client options file
    An editable file that identifies the server and communication method, and provides the configuration for backup, archive, hierarchical storage management, and scheduling.
client-polling scheduling mode
    A method of operation in which the client queries the server for work. Contrast with server-prompted scheduling mode.
client schedule
    A database record that describes the planned processing of a client operation during a specific time period. The client operation can be a backup, archive, restore, or retrieve operation, a client operating system command, or a macro. See also administrative command schedule.
client/server
    Pertaining to the model of interaction in distributed data processing in which a program on one computer sends a request to a program on another computer and awaits a response. The requesting program is called a client; the answering program is called a server.
client system-options file
    A file, used on AIX, UNIX, or Linux system clients, containing a set of processing options that identify the servers to be contacted for services. This file also specifies communication methods and options for backup, archive, hierarchical storage management, and scheduling. This file is also called the dsm.sys file. See also client user-options file.
client user-options file
    A file that contains the set of processing options that the clients on the system use. The set can include options that determine the server that the client contacts, and options that affect backup operations, archive operations, hierarchical storage management operations, and scheduled operations. This file is also called the dsm.opt file. For AIX, UNIX, or Linux systems, see also client system-options file.
closed registration
    A registration process in which only an administrator can register workstations as client nodes with the server. Contrast with open registration.
collocation
    The process of keeping all data belonging to a single-client file space, a single client node, or a group of client nodes on a minimal number of sequential-access volumes within a storage pool. Collocation can reduce the number of volumes that must be accessed when a large amount of data must be restored.
collocation group
    A user-defined group of client nodes whose data is stored on a minimal number of volumes through the process of collocation.
commit point
    A point in time when data is considered consistent.
    media. Other instances of the same data are replaced with a pointer to the retained instance.
data manager server
    A server that collects metadata information for client inventory and manages transactions for the storage agent over the local area network. The data manager server informs the storage agent with applicable library attributes and the target volume identifier.
data mover
    A device that moves data on behalf of the server. A network-attached storage (NAS) file server is a data mover.
data storage-management application-programming interface (DSMAPI)
    A set of functions and semantics that can monitor events on files, and manage and maintain the data in a file. In an HSM environment, a DSMAPI uses events to notify data management applications about operations on files, stores arbitrary attribute information with a file, supports managed regions in a file, and uses DSMAPI access rights to control access to a file object.
deduplication
    See data deduplication.
default management class
    A management class that is assigned to a policy set. This class is used to govern backed up or archived files when a file is not explicitly associated with a specific management class through the include-exclude list.
demand migration
    The process that is used to respond to an out-of-space condition on a file system for which hierarchical storage management (HSM) is active. Files are migrated to server storage until space usage drops to the low threshold that was set for the file system. If the high threshold and low threshold are the same, one file is migrated.
desktop client
    The group of backup-archive clients that includes clients on Microsoft Windows, Apple, and Novell NetWare operating systems.
destination
    A copy group or management class attribute that specifies the primary storage pool to which a client file will be backed up, archived, or migrated.
device class
    A named set of characteristics that are applied to a group of storage devices. Each device class has a unique name and represents a device type of disk, file, optical disk, or tape.
device configuration file
    (1) For a server, a file that contains information about defined device classes, and, on some servers, defined libraries and drives. The information is a copy of the device configuration information in the database. (2) For a storage agent, a file that contains the name and password of the storage agent, and information about the server that is managing the SAN-attached libraries and drives that the storage agent uses.
device driver
    A program that provides an interface between a specific device and the application program that uses the device.
disaster recovery manager (DRM)
    A function that assists in preparing and using a disaster recovery plan file for the server.
disaster recovery plan
    A file that is created by the disaster recovery manager (DRM) that contains information about how to recover computer systems if a disaster occurs and scripts that can be run to perform some recovery tasks. The file includes information about the software and hardware that is used by the server, and the location of recovery media.
domain
    A grouping of client nodes with one or more policy sets, which manage data or storage resources for the client nodes. See policy domain or client domain.
DRM
    See disaster recovery manager.
DSMAPI
    See data storage-management application-programming interface.
external library
    A type of library that is provided by Tivoli Storage Manager that permits LAN-free data movement for StorageTek libraries that are managed by Automated Cartridge System Library Software (ACSLS). To activate this function, the Tivoli Storage Manager library type must be EXTERNAL.
F
file access time
    On AIX, UNIX, or Linux systems, the time when the file was last accessed.
file age
    For migration prioritization purposes, the number of days since a file was last accessed.
file device type
    A device type that specifies the use of sequential access files on disk storage as volumes.
file server
    A dedicated computer and its peripheral storage devices that are connected to a local area network that stores programs and files that are shared by users on the network.
file space
    A logical space in server storage that contains a group of files that have been backed up or archived by a client node, from a single logical partition, file system, or virtual mount point. Client nodes can restore, retrieve, or delete their file spaces from server storage. In server storage, files belonging to a single file space are not necessarily stored together.
file space ID (FSID)
    A unique numeric identifier that the server assigns to a file space when it is stored in server storage.
file state
    The space management mode of a file that resides in a file system to which space management has been added. A file can be in one of three states: resident, premigrated, or migrated. See also resident file, premigrated file, and migrated file.
file system migrator (FSM)
    A kernel extension that intercepts all file system operations and provides any space management support that is required. If no space management support is required, the operation is passed to the operating system, which performs its normal functions. The file system migrator is mounted over a file system when space management is added to the file system.
file system state
    The storage management mode of a file system that resides on a workstation on which the hierarchical storage management (HSM) client is installed. A file system can be in one of these states: native, active, inactive, or global inactive.
frequency
    A copy group attribute that specifies the minimum interval, in days, between incremental backups.
FSID
    See file space ID.
FSM
    See file system migrator.
full backup
    The process of backing up the entire server database. A full backup begins a new database backup series. See also database backup series and incremental backup. Contrast with database snapshot.
fuzzy backup
    A backup version of a file that might not accurately reflect what is currently in the file because the file was backed up at the same time as it was being modified.
fuzzy copy
    A backup version or archive copy of a file that might not accurately reflect the original contents of the file because it was backed up or archived while the file was being modified. See also backup version and archive copy.
G
General Parallel File System
    A high-performance shared-disk file system that can provide data access from nodes in a cluster environment.
gigabyte (GB)
    In decimal notation, 1 073 741 824 when referring to memory capacity; in all other cases, it is defined as 1 000 000 000.
global inactive state
    The state of all file systems to which
    for storage pools and file sets. See also General Parallel File System.
inode
    The internal structure that describes the individual files on AIX, UNIX, or Linux systems. An inode contains the node, type, owner, and location of a file.
inode number
    A number specifying a particular inode file in the file system.
IP address
    A unique address for a device or logical unit on a network that uses the IP standard.
J
job file
    A generated file that contains configuration information for a migration job. The file is XML format and can be created and edited in the hierarchical storage management (HSM) client for Windows client graphical user interface.
journal-based backup
    A method for backing up Windows clients and AIX clients that exploits the change notification mechanism in a file to improve incremental backup performance by reducing the need to fully scan the file system.
journal daemon
    On AIX, UNIX, or Linux systems, a program that tracks change activity for files residing in file systems.
journal service
    In Microsoft Windows, a program that tracks change activity for files residing in file systems.
K
kilobyte (KB)
    For processor storage, real and virtual storage, and channel volume, 2 to the power of 10 or 1 024 bytes. For disk storage capacity and communications volume, 1 000 bytes.
L
LAN
    See local area network.
LAN-free data movement
    The movement of client data between a client system and a storage device on a storage area network (SAN), bypassing the local area network. This process is also referred to as LAN-free data transfer.
LAN-free data transfer
    See LAN-free data movement.
leader data
    Bytes of data, from the beginning of a migrated file, that are stored in the file's corresponding stub file on the local file system. The amount of leader data that is stored in a stub file depends on the stub size that is specified.
library
    (1) A repository for demountable recorded media, such as magnetic disks and magnetic tapes. (2) A collection of one or more drives, and possibly robotic devices (depending on the library type), which can be used to access storage volumes.
library client
    A server that uses server-to-server communication to access a library that is managed by another storage management server. See also library manager.
library manager
    A server that controls device operations when multiple storage management servers share a storage device. See also library client.
local
    (1) Pertaining to a device, file, or system that is accessed directly from a user system, without the use of a communication line. (2) For HSM products, pertaining to the destination of migrated files that are being moved.
local area network (LAN)
    A network that connects several devices in a limited area (such as a single building or campus) and that can be connected to a larger network.
local shadow volumes
    Data that is stored on shadow volumes localized to a disk storage subsystem.
LOFS
    See loopback virtual file system.
logical file
    A file that is stored in one or more server storage pools, either by itself or as part of an aggregate. See also aggregate and physical file.
    (2) For processor storage, real and virtual storage, and channel volume, 2 to the power of 20 or 1 048 576 bits. For disk storage capacity and communications volume, 1 000 000 bits.
metadata
    Data that describes the characteristics of data; descriptive data.
migrate
    To move data from one storage location to another. In Tivoli Storage Manager products, migrating can mean moving data from a client node to server storage, or moving data from one storage pool to the next storage pool defined in the server storage hierarchy. In both cases the movement is controlled by policy, such as thresholds that are set. See also migration threshold.
migrated file
    A file that has been copied from a local file system to Tivoli Storage Manager storage. For HSM clients on UNIX or Linux systems, the file is replaced with a stub file on the local file system. On Windows systems, creation of the stub file is optional. See also stub file and resident file. For HSM clients on UNIX or Linux systems, contrast with premigrated file.
migrate-on-close recall mode
    A mode that causes a migrated file to be recalled back to its originating file system temporarily. Contrast with normal recall mode and read-without-recall recall mode.
migration job
    A specification of files to migrate, and actions to perform on the original files after migration. See also job file.
migration threshold
    High and low capacities for storage pools or file systems, expressed as percentages, at which migration is set to start and stop.
mirroring
    The process of writing the same data to multiple locations at the same time. Mirroring data protects against data loss within the recovery log.
mode
    A copy group attribute that specifies whether to back up a file that has not been modified since the last time the file was backed up. See modified mode and absolute mode.
modified mode
    In storage management, a backup copy-group mode that specifies that a file is considered for incremental backup only if it has changed since the last backup. A file is considered a changed file if the date, size, owner, or permissions of the file have changed. See also absolute mode.
mount limit
    The maximum number of volumes that can be simultaneously accessed from the same device class. The mount limit determines the maximum number of mount points. See also mount point.
mount point
    On the Tivoli Storage Manager server, a logical drive through which volumes in a sequential access device class are accessed. For removable-media device types, such as tape, a mount point is a logical drive that is associated with a physical drive. For the file device type, a mount point is a logical drive that is associated with an I/O stream. The number of mount points for a device class is defined by the value of the mount limit attribute for that device class. See also mount limit.
mount retention period
    The maximum number of minutes that the server retains a mounted sequential-access media volume that is not being used before it dismounts the sequential-access media volume.
mount wait period
    The maximum number of minutes that the server waits for a sequential-access volume mount request to be satisfied before canceling the request.
MTU
    See maximum transmission unit.
N
Nagle algorithm
    An algorithm that reduces congestion of TCP/IP networks by combining smaller packets and sending them together.
named pipe
    A type of interprocess communication
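As a brief illustration of the mount limit, mount retention period, mount wait period, and migration threshold entries above: these values are set as device-class and storage-pool parameters. The commands below are a sketch only; the names LTOCLASS, AUTOLIB, and BACKUPPOOL are hypothetical examples.

        DEFINE DEVCLASS ltoclass DEVTYPE=LTO LIBRARY=autolib MOUNTLIMIT=2 MOUNTRETENTION=5 MOUNTWAIT=60
        UPDATE STGPOOL backuppool HIGHMIG=80 LOWMIG=20

With these settings, at most two volumes of the device class can be mounted at one time, an idle volume is dismounted after 5 minutes, a mount request is canceled if it is not satisfied within 60 minutes, and migration out of the storage pool starts when the pool is 80 percent full and stops at 20 percent.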
        ...called dsm.opt. On AIX, UNIX, Linux, and Mac OS X systems, the file is called dsm.sys.

originating file system
        The file system from which a file was migrated. When a file is recalled using normal or migrate-on-close recall mode, it is always returned to its originating file system.

orphaned stub file
        A file for which no migrated file can be found on the Tivoli Storage Manager server that the client node is contacting for space management services. For example, a stub file can be orphaned when the client system-options file is modified to contact a server that is different than the one to which the file was migrated.

out-of-space protection mode
        A mode that controls whether the program intercepts out-of-space conditions. See also execution mode.

P

pacing
        In SNA, a technique by which the receiving system controls the rate of transmission of the sending system to prevent overrun.

packet
        In data communication, a sequence of binary digits, including data and control signals, that is transmitted and switched as a composite whole.

page
        A defined unit of space on a storage medium or within a database volume.

partial-file recall mode
        A recall mode that causes the hierarchical storage management (HSM) function to read just a portion of a migrated file from storage, as requested by the application accessing the file.

password generation
        A process that creates and stores a new password in an encrypted password file when the old password expires. Automatic generation of a password prevents password prompting. Password generation can be set in the options file (passwordaccess option). See also options file.

path
        An object that defines a one-to-one relationship between a source and a destination. Using the path, the source accesses the destination. Data can flow from the source to the destination, and back. An example of a source is a data mover (such as a network-attached storage [NAS] file server), and an example of a destination is a tape drive.

pattern-matching character
        See wildcard character.

physical file
        A file that is stored in one or more storage pools, consisting of either a single logical file, or a group of logical files that are packaged together as an aggregate. See also aggregate and logical file.

physical occupancy
        The amount of space that is used by physical files in a storage pool. This space includes the unused space that is created when logical files are deleted from aggregates. See also physical file, logical file, and logical occupancy.

plug-in
        A self-contained software component that modifies (adds, or changes) the function in a particular system. When a plug-in is added to a system, the foundation of the original system remains intact.

policy domain
        A grouping of policy users with one or more policy sets, which manage data or storage resources for the users. The users are client nodes that are associated with the policy domain.

policy privilege class
        A privilege class that gives an administrator the authority to manage policy objects, register client nodes, and schedule client operations for client nodes. Authority can be restricted to certain policy domains. See also privilege class.

policy set
        A group of rules in a policy domain. The rules specify how data or storage resources are automatically managed for client nodes in the policy domain. Rules can be contained in management classes. See also active policy set and management class.
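As an illustration of the policy domain and policy set entries above: a domain and a policy set within it are defined with administrative commands, management classes are then added to the set, and one of them is assigned as the default. The object names ENGDOMAIN, ENGPOLICY, and STANDARDMC below are hypothetical examples only.

        DEFINE DOMAIN engdomain DESCRIPTION="Example policy domain"
        DEFINE POLICYSET engdomain engpolicy
        DEFINE MGMTCLASS engdomain engpolicy standardmc
        ASSIGN DEFMGMTCLASS engdomain engpolicy standardmc

Before the rules in the policy set take effect, the set must be validated and activated, which makes it the active policy set for the domain.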
        ...is migrated back to Tivoli Storage Manager storage when it is closed, or is read from Tivoli Storage Manager storage without storing it on the local file system.

receiver
        A server repository that contains a log of server and client messages as events. For example, a receiver can be a file exit, a user exit, or the Tivoli Storage Manager server console and activity log. See also event.

reclamation
        The process of consolidating the remaining data from many sequential-access volumes onto fewer, new sequential-access volumes.

reclamation threshold
        The percentage of space that a sequential-access media volume must have before the server can reclaim the volume. Space becomes reclaimable when files are expired or are deleted.

reconciliation
        The process of synchronizing a file system with the Tivoli Storage Manager server, and then removing old and obsolete objects from the Tivoli Storage Manager server.

recovery log
        A log of updates that are about to be written to the database. The log can be used to recover from system and media failures. The recovery log consists of the active log (including the log mirror) and archive logs.

register
        To define a client node or administrator ID that can access the server.

registry
        A repository that contains access and configuration information for users, systems, and software.

remote
        (1) Pertaining to a system, program, or device that is accessed through a communication line.
        (2) For HSM products, pertaining to the origin of migrated files that are being moved.

resident file
        On a Windows system, a complete file on a local file system that might also be a migrated file because a migrated copy can exist in Tivoli Storage Manager storage. On a UNIX or Linux system, a complete file on a local file system that has not been migrated or premigrated, or that has been recalled from Tivoli Storage Manager storage and modified. Contrast with stub file and premigrated file. See migrated file.

restore
        To copy information from its backup location to the active storage location for use. For example, to copy information from server storage to a client workstation.

retention
        The amount of time, in days, that inactive backed-up or archived files are kept in the storage pool before they are deleted. Copy group attributes and default retention grace periods for the domain define retention.

retrieve
        To copy archived information from the storage pool to the workstation for use. The retrieve operation does not affect the archive version in the storage pool.

roll back
        To remove changes that were made to database files since the last commit point.

root user
        A system user who operates without restrictions. A root user has the special rights and privileges needed to perform administrative tasks.

S

SAN     See storage area network.

schedule
        A database record that describes client operations or administrative commands to be processed. See administrative command schedule and client schedule.

scheduling mode
        The type of scheduling operation for the server and client node that supports two scheduling modes: client-polling and server-prompted.

scratch volume
        A labeled volume that is either blank or contains no valid data, that is not defined, and that is available for use.
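To illustrate the register, schedule, and scheduling mode entries above: a client node is registered, a client schedule is defined in its policy domain, and the node is associated with the schedule. The node name, password, domain, and schedule name below (NODE1, ENGDOMAIN, NIGHTLYINCR) are hypothetical examples.

        REGISTER NODE node1 secretpw DOMAIN=engdomain
        DEFINE SCHEDULE engdomain nightlyincr ACTION=INCREMENTAL STARTTIME=21:00 DURATION=2 DURUNITS=HOURS
        DEFINE ASSOCIATION engdomain nightlyincr node1

Whether the client polls the server for this schedule or waits to be prompted depends on the scheduling mode that is set on the server and on the client node.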
        ...serialization. Contrast with dynamic serialization, shared dynamic serialization, and static serialization.

snapshot
        An image backup type that consists of a point-in-time view of a volume.

space-managed file
        A file that is migrated from a client node by the space manager client. The space manager client recalls the file to the client node on demand.

space management
        The process of keeping sufficient free storage space available on a local file system for new data by migrating files to server storage. Synonymous with hierarchical storage management.

space manager client
        A program that runs on a UNIX or Linux system to manage free space on the local file system by migrating files to server storage. The program can recall the files either automatically or selectively. Also called hierarchical storage management (HSM) client.

space monitor daemon
        A daemon that checks space usage on all file systems for which space management is active, and automatically starts threshold migration when space usage on a file system equals or exceeds its high threshold.

sparse file
        A file that is created with a length greater than the data it contains, leaving empty spaces for the future addition of data.

special file
        On AIX, UNIX, or Linux systems, a file that defines devices for the system, or temporary files that are created by processes. There are three basic types of special files: first-in, first-out (FIFO); block; and character.

SSL     See Secure Sockets Layer.

stabilized file space
        A file space that exists on the server but not on the client.

stanza
        A group of lines in a file that together have a common function or define a part of the system. Each stanza is identified by a name that occurs in the first line of the stanza. Depending on the type of file, a stanza is ended by the next occurrence of a stanza name in the file, or by an explicit end-of-stanza marker. A stanza can also be ended by the end of the file.

startup window
        A time period during which a schedule must be initiated.

static serialization
        A copy-group serialization value that specifies that a file must not be modified during a backup or archive operation. If the file is in use during the first attempt, the storage manager cannot back up or archive the file. See also serialization. Contrast with dynamic serialization, shared dynamic serialization, and shared static serialization.

storage agent
        A program that enables the backup and restoration of client data directly to and from storage attached to a storage area network (SAN).

storage area network (SAN)
        A dedicated storage network that is tailored to a specific environment, combining servers, systems, storage products, networking products, software, and services.

storage hierarchy
        (1) A logical order of primary storage pools, as defined by an administrator. The order is typically based on the speed and capacity of the devices that the storage pools use. The storage hierarchy is defined by identifying the next storage pool in a storage pool definition. See also storage pool.
        (2) An arrangement of storage devices with different speeds and capacities. The levels of the storage hierarchy include: main storage, such as memory and direct-access storage device (DASD) cache; primary storage (DASD containing user-accessible data); migration level 1 (DASD containing data in a space-saving format); and migration level 2 (tape cartridges containing data in a space-saving format).

storage pool
        A named set of storage volumes that are the destination that is used to store client ...
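As an illustration of the storage hierarchy and storage pool entries above: a hierarchy is built by naming the next storage pool in each storage pool definition, and because the next pool must already exist, the pools are typically defined in reverse order. The pool and device-class names below (TAPEPOOL, LTOCLASS, DISKPOOL) are hypothetical examples.

        DEFINE STGPOOL tapepool ltoclass MAXSCRATCH=50
        DEFINE STGPOOL diskpool DISK NEXTSTGPOOL=tapepool HIGHMIG=80 LOWMIG=20

Client data that is stored in DISKPOOL can then migrate to TAPEPOOL when the disk pool reaches its high migration threshold.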
timeout
        A time interval that is allotted for an event to occur or complete before operation is interrupted.

timestamp control mode
        A mode that determines whether commands preserve the access time for a file or set it to the current time.

Tivoli Storage Manager command script
        A sequence of Tivoli Storage Manager administrative commands that are stored in the database of the Tivoli Storage Manager server. The script can run from any interface to the server. The script can include substitution for command parameters and conditional logic.

tombstone object
        A small subset of attributes of a deleted object. The tombstone object is retained for a specified period, and at the end of the specified period, the tombstone object is permanently deleted.

Transmission Control Protocol/Internet Protocol (TCP/IP)
        An industry-standard, nonproprietary set of communication protocols that provides reliable end-to-end connections between applications over interconnected networks of different types.

transparent recall
        The process that is used to automatically recall a file to a workstation or file server when the file is accessed. See also recall mode. Contrast with selective recall.

trusted communications agent (TCA)
        A program that handles the sign-on password protocol when clients use password generation.

U

UCS-2
        A 2-byte (16-bit) encoding scheme based on ISO/IEC specification 10646-1. UCS-2 defines three levels of implementation: Level 1-No combining of encoded elements allowed; Level 2-Combining of encoded elements is allowed only for Thai, Indic, Hebrew, and Arabic; Level 3-Any combination of encoded elements are allowed.

UNC     See Universal Naming Convention name.

Unicode
        A character encoding standard that supports the interchange, processing, and display of text that is written in the common languages around the world, plus some classical and historical texts. The Unicode standard has a 16-bit character set defined by ISO 10646.

Unicode-enabled file space
        Unicode file space names provide support for multilingual workstations without regard for the current locale.

Unicode transformation format 8
        Unicode Transformation Format (UTF), 8-bit encoding form, which is designed for ease of use with existing ASCII-based systems. The CCSID value for data in UTF-8 format is 1208.

Universal Naming Convention (UNC) name
        The server name and network name combined. These names together identify the resource on the domain.

Universally Unique Identifier (UUID)
        The 128-bit numeric identifier that is used to ensure that two components do not have the same identifier.

UTF-8   See Unicode transformation format 8.

UUID    See Universally Unique Identifier.

V

validate
        To check a policy set for conditions that can cause problems if that policy set becomes the active policy set. For example, the validation process checks whether the policy set contains a default management class.

version
        A backup copy of a file stored in server storage. The most recent backup copy of a file is the active version. Earlier copies of the same file are inactive versions. The number of versions retained by the server is determined by the copy group attributes in the management class.

virtual file space
        A representation of a directory on a network-attached storage (NAS) file system as a path to that directory.

virtual volume
        An archive file on a target server that represents a sequential media volume to a source server.
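As a brief illustration of the Tivoli Storage Manager command script entry above: a server script stores a short sequence of administrative commands in the server database and can then be run from any administrative interface. The script name DAILYCHK is a hypothetical example, and the commands shown are only placeholders for whatever sequence an administrator wants to keep together.

        DEFINE SCRIPT dailychk "QUERY STGPOOL" DESC="Example script"
        UPDATE SCRIPT dailychk "QUERY DB" LINE=20
        RUN dailychk

A script defined this way can also include substitution variables and conditional logic, as the glossary entry notes, and can be run on an administrative command schedule rather than interactively.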
Index
Special characters active data 1010
active data, protecting with active-data pools 269
$$CONFIG_MANAGER$$ 753 active directory
configure schema 641, 1164
overview 640, 1163
Numerics Active Directory 642
3480 tape drive add or remove client nodes 1168
cleaner cartridge 200 configuration 1163
device support 62 configuring on a Windows server 1164
device type 211 configuring the server 1167
mixing drive generations 215 create Windows account with required permissions 1165
3490 tape drive extending the schema 1165
cleaner cartridge 200 installing Windows administration tools 1165
device support 62 one-time configuration 1164
device type 211 remove a server 1167
mixing drive generations 215 storage and replication impact 1169
3494 automated library device 64, 107, 123 active directory schema 642
3494 library 107 active files, storage-pool search order 271
configuration with a single drive device 125 active log 713, 948
defining devices 133 description 685
3494SHARED server option 88 increasing the size 712
3570 tape drive 106 move to another directory 715
ASSISTVCRRECOVERY server option 88 out of space 712
defining device class 84, 209 space requirements 693
device support 62 active log mirror 948
3590 tape drive description 686
defining device class 84, 209, 211 active log size 713
device class 106 ACTIVE policy set
device support 124, 159 creating 522, 532
3592 drives and media replacing 501
as element of storage hierarchy 288 active-data pool
cleaning 199 auditing volumes in 961
data encryption 188, 217, 560 backup-set file source 566
defining device class 84 client restore operations, optimizing 579
enabling for WORM media 164 collocation on 387
mixing drive generations 215 defining 429
4mm tape device support 211 export file source 777, 785, 786
8mm tape device support 211 import operations 795
overview 269, 293
reclamation of 396
A RECONSTRUCT parameter 581
simultaneous-write function 355
absolute mode, description of 525 specifying in policy definition 520
access authority, client 470, 471 storage pool search-and-selection order 271
access mode, volume ACTIVELOGDIRECTORY server option 712, 715
changing 285 ACTIVELOGSIZE server option 712
description 286 activity log
determining for storage pool 273, 430 description of 829
access, managing 907, 925 logging events to 887
accessibility features 1171 monitoring 829
accounting record querying 830
description of 837 setting size limit 831
monitoring 837 setting the retention period 831
accounting variable 837 adding media to RSM libraries 182
ACSLS (Automated Cartridge System Library Software) administration center
StorageTek library 63 deploying backup-archive packages automatically 454
configuring 136 using 623
description 65 Administration center
mixing 3592 drive generations 215 managing servers 623
Tivoli Storage Manager server options for 88 Administration Center
ACTIVATE POLICYSET command 532 backing up 629
activating server options 56
administrator (continued) archiving
removing 924 file 503, 517
renaming 923 file management 503
restrictions when registering 920 FILE-type volume, archiving many small objects to 219
unlocking 924 ASCII restriction for browser script definition 666
updating 920, 923 ASSIGN DEFMGMTCLASS command 532
viewing information about 921 association, client with schedule 55
administrator ID 617 defining 591
administrator password 617 deleting 601
administrators association, client with storage pools 40
managing 920 association, file with management class 511, 512
aggregates association, object with profile
controlling the size of 290 administrative command schedule 744
estimating size 410 administrator 741, 755
RECONSTRUCT parameter 425, 581 client option set 741
reconstructing 391, 397, 425 deleting 746
viewing information about 412, 418 policy domain 742
Alert monitoring 833 script 741
alerts AUDIT LIBRARY command 168
email alerts 835 AUDIT LICENSE command 633
sending and receiving by email 835 AUDIT VOLUME command 958, 965
alias name, device 101 auditing
ANR9999D message 886 LDAP directory server 982
application client library's volume inventory 168
adding node for 440 license, automatic by server 633
description 4 multiple volumes in sequential access storage pool 965
policy for 546 single volume in sequential access storage pool 966
application program interface (API) volume in disk storage pool 965
client, registering 443 volume, reasons for 958
compression option 444 volumes by date 966
deletion option 444 volumes by storage pool 966
registering to server 443 authority
simultaneous-write function, version support for 357 client access 471
application programming interface (API) granting to administrators 918
description of 3 privilege classes 918
ARCHFAILOVERLOGDIRECTORY server option 718 server options 918
archive auto deployment 454
allowing while file changes 530 auto-update 454, 455, 456, 457, 458
backup set, uses for 9, 13 AUTOFSRENAME parameter 479
determining storage usage 420 AUTOLABEL parameter for tape volumes 159
directory 585 Automated Cartridge System Library Software (ACSLS)
increasing throughput using DSMMAXSG utility 154 StorageTek library
instant 9, 13 configuring 136
package 585 description 65
performing 47 mixing 3592 drive generations 215
policy, defining 519 Tivoli Storage Manager server options for 88
policy, introduction 28 automated library device
process description 517 adding, using commands 119
storage usage, minimizing 586 adding, using wizard 113
storage usage, reducing 586, 587 auditing 168
uses for 9, 12 checking in media 161
archive copy group defining 64
defining 530, 531 managing media 166
deleting 555 overflow location 273
description of 505 replacing tape drive 193
archive data reusing media 172
expiration 537 scratch and private volumes 72
managing 585 updating 185
protection 537 volume inventory 72, 158
archive failover log 948 automatically renaming file spaces 479
description 687 automating
move to another directory 715 administrative commands 30
archive log 948 client operations 590
description 686 server operations 660
move to another directory 715 awk script 1096
space requirements 693
center, administration class, device (continued)
using 623 requesting information about 229
Centera libraries 110 selecting for import and export 783
Centera SDK sequential 211
installing 110 SERVER 209, 211, 765
Centera storage device StorageTek devices 211, 226
concurrent access 228 tape 211, 218
overview 67 Ultrium, LTO 211
restore improve 228 updating 211, 218
unsupported functions 277 VOLSAFE 226
unsupported server operations 228 WORM 209, 211
Centera storage pool, backing up 957 WORM12 209, 211
central monitoring 723 WORM14 209, 211
Central Processing Unit class, policy privilege
CPU 634 description 918, 920
central scheduling granting 922
client operations 559, 589, 597, 603 revoking 922, 923
controlling the workload 606 class, storage privilege
coordinating 603 description 918
description of 28, 30, 589 granting 922
server operations 660 reducing 922
certificate revoking 922, 923
adding to the key database 910, 911 CLEAN DRIVE command 197
homegrown certificate authority 911 cleaner cartridge
changing date and time on server 649 checking in 199
changing hostname 654 operations with 200
changing SSL settings 620 client
characteristics, machine 1061 access user ID 471
check in administrative 3
cleaner cartridge 199 API (application program interface) 443
library volume 161 API (application programming interface) 4
setting a time interval for volume 214 application client 4, 546
VolSafe-enabled volumes 226 backup-archive 4
CHECKIN LIBVOLUME command 161 controlling resource utilization 584
checking the log file generated by processed schedules 603 how to protect 8
checklist for DRM project plan 1091 operations summary 10
CHECKOUT LIBVOLUME command 167 options file 444
CHECKTAPEPOS server option 88 restore without primary volumes available 981
class, administrator privilege Tivoli Storage Manager for Space Management (HSM
description 918 client) 4, 508
granting authority 918 using to back up NAS file server 242, 258
reducing 922 client configuration file 448
revoking all 923 client file
class, device allowing archive while changing 499
3570 209, 211 allowing backup while changing 499, 524
3590 209, 211 archive package 585
3592 211 associating with management class 511, 512
4MM 209, 211 damaged 981
8MM 209, 211 delaying migration of 304
amount of space used 419 deleting 433
CARTRIDGE 211 deleting from a storage pool 432
CENTERA 67 deleting from cache 311
defining 209 deleting when deleting a volume 433
description of 22 duplication when restoring 981
DISK 209 eligible for archive 499, 514
DLT 209, 211 eligible for backup 499, 514
DTF 209, 211 eligible for expiration 501
ECARTRIDGE 211 eligible for space management 517
FILE 209 how IBM Tivoli Storage Manager stores 290
for devices using native Windows device driver 116 on a volume, querying 410
FORMAT parameter 212 server migration of 299
GENERICTAPE 116, 209, 211 client migration 517, 518
LTO 222 client node
OPTICAL 209 adding 439
overview 121 agent 466
QIC 209, 211 amount of space used 418
REMOVABLEFILE 218 associating with storage pools 40
commands, administrative (continued) commands, administrative (continued)
CANCEL PROCESS 651 DELETE SUBSCRIBER 757
CANCEL REQUEST 175 DELETE SUBSCRIPTION 747, 753
CANCEL RESTORE 495 DELETE VOLHISTORY 656
CANCEL SESSION 492 DELETE VOLUME 434, 435
CHECKIN LIBVOLUME 161 DISABLE EVENTS 886
CHECKOUT LIBVOLUME 167 DISABLE SESSIONS 494
CLEAN DRIVE 197 DISMOUNT VOLUME 176
COMMIT 679 DSMSERV DISPLAY DBSPACE 708
COPY ACTIVEDATA 270, 293 DSMSERV DISPLAY LOG 708
COPY CLOPTSET 490 ENABLE EVENTS 886
COPY DOMAIN 522 ENABLE SESSIONS 494
COPY POLICYSET 522 END EVENTLOGGING 887
COPY SCHEDULE 598 EXPIRE INVENTORY 30
COPY SCRIPT 674 EXPORT ADMIN 771
COPY SERVERGROUP 762 EXPORT NODE 784
DEFINE ASSOCIATION 591 EXPORT POLICY 784
DEFINE BACKUPSET 572 EXPORT SERVER 784
DEFINE CLIENTACTION 610 EXTEND DBSPACE 709, 710
DEFINE CLIENTOPT 610 GENERATE BACKUPSET 568
DEFINE CLOPTSET 488 GRANT AUTHORITY 918
DEFINE COPYGROUP 524, 529, 530 HALT 647
DEFINE DATAMOVER 207, 252 HELP 657
DEFINE DEVCLASS IMPORT 798, 799
3592 215 IMPORT ADMIN 787
FILE device classes 218 IMPORT NODE 787, 796
LTO device classes 222 IMPORT POLICY 787
REMOVEABLEFILE device classes 218 IMPORT SERVER 787, 796
SERVER device classes 225 LOCK ADMIN 924
tape device classes 211 LOCK NODE 464
VOLSAFE device classes 226 LOCK PROFILE 745, 746
DEFINE DRIVE 206 MACRO 169
DEFINE GRPMEMBER 761 MOVE DATA 422
DEFINE LIBRARY 64, 205 MOVE MEDIA 169
DEFINE MACHINE 1061 MOVE NODEDATA 426
DEFINE MACHNODEASSOCIATION 1061 NOTIFY SUBSCRIBERS 745, 746
DEFINE PATH 208 PING SERVER 763
DEFINE POLICYSET 522, 523 PREPARE 1066
DEFINE PROFASSOCIATION 741 QUERY ACTLOG 830
DEFINE PROFILE 741 QUERY BACKUPSETCONTENTS 575
DEFINE RECMEDMACHASSOCIATION 1065 QUERY CONTENT 410
DEFINE RECOVERYMEDIA 1065 QUERY COPYGROUP 553, 794
DEFINE SCHEDULE 661 QUERY DB 708
DEFINE SCRIPT 666 QUERY DBSPACE 708
DEFINE SERVER 728, 758, 763 QUERY DEVCLASS 229
DEFINE SERVERGROUP 761 QUERY DOMAIN 554
DEFINE STGPOOL 277, 278, 289 QUERY DRIVE 186
DEFINE SUBSCRIPTION 751 QUERY DRMSTATUS 1054
DEFINE VIRTUALFSMAPPING 264 QUERY ENABLED 899
DEFINE VOLUME 71, 284 QUERY EVENT 592
DELETE ASSOCIATION 601 QUERY FILESPACE 486
DELETE BACKUPSET 575 QUERY LIBRARY 185
DELETE COPYGROUP 555 QUERY LICENSE 633
DELETE DOMAIN 556 QUERY MEDIA 169
DELETE DRIVE 202 QUERY MGMTCLASS 553
DELETE EVENT 603 QUERY MOUNT 175
DELETE GRPMEMBER 763 QUERY NODE 468
DELETE LIBRARY 186 QUERY NODEDATA 419
DELETE MGMTCLASS 556 QUERY OCCUPANCY
DELETE POLICYSET 556 backed-up, archived, and space-managed files 420
DELETE PROFASSOCIATION 746 client file spaces 418
DELETE PROFILE 747 client nodes 418
DELETE SCHEDULE 599 device classes 419
DELETE SCRIPT 675 storage pools 419
DELETE SERVER 734 QUERY OPTION 822
DELETE SERVERGROUP 762 QUERY POLICYSET 554
DELETE STGPOOL 433 QUERY PROCESS 424
configuration information, enterprise management (continued) creating backup sets
script 738, 741 benefits of 568
server 743 example for 570
server group 743 creating custom Cognos report 853
configuration manager cross definition 727, 728, 732
communication setup 726 current server status workspaces 841
default profile 737, 743 custom Cognos report 852
scenario 737 customer support
setting up 737, 740 contact xxi
configuration wizard 37 cyclic redundancy check
configure during a client session 559
server instance 1166 for storage pool volumes 961
configuring 35, 38 for virtual volumes 763
client nodes 40 performance considerations for nodes 560
communication protocols 56 performance considerations for storage pools 964
connect 349x library to server 250
connecting ACSLS library to server 251
devices not supported by the IBM Tivoli Storage Manager
device driver 116
D
daily monitoring
devices, automated library example 124
Tivoli Monitoring for Tivoli Storage Manager 815
devices, with device configuration wizard 112, 113
daily monitoring disk storage pools 810
hub server 615, 618
daily monitoring of databases 807
NDMP operations for NAS file servers 240
daily monitoring of server processes 806
network clients 37, 40, 47
daily monitoring scheduled operations 814
planning your storage environment 86
daily monitoring sequential access storage pools 811
remote client nodes 53
damaged files 967, 968
shared library 129, 146
data
spoke server 618, 619
active backup versions, storing 269
storage pool hierarchy 40
considering user needs for recovering 85
VTL 145
exporting 771
configuring a Windows system 37
importing 771
configuring storage 111
data compression 442
configuring the environment 37
data deduplication 342
configuring the server 37
checklist for configuration 320
console mode 799
client-side 339
contents of a volume 410
changing location 341
context messaging for ANR9999D 886
client and server settings 312, 336
continuation characters, using 670
multiple nodes 340
controlling devices 109
overview 313
conventions
single node 340
typographic xxiii
controlling duplicate-identification manually 337
COPY CLOPTSET command 490
data deduplication 322, 323, 324, 325, 326, 327, 328, 348,
COPY DOMAIN command 522
349, 350, 351, 352, 353, 354
copy group
DEDUPLICATION parameter 336
archive, description of 505
DEDUPREQUIRESBACKUP server option 333
backup, description of 505
definition 311
defining archive 530
detecting security attacks 329
defining backup 524
duplicate-identification processes 332, 336, 339
deleting 555
IDENTIFY DUPLICATES command 337
COPY MGMTCLASS command 523
limitations 315
COPY POLICYSET command 522
managing 332
COPY SCHEDULE command 598, 664
moving or copying data 334
COPY SCRIPT command 674
node replication 998, 1049
COPY SERVERGROUP command 762
options for 342
copy storage pool
planning 317, 318
compared with primary 431
processing 331
defining a 429
protecting data 333
restore from multiple 981
reclamation 333
role in storage pool migration 310
requirements 320
simultaneous-write function 355
server-side 312, 336
creating
specifying the size of objects to be deduplicated 341
backup sets 26
statistics
node name, local client 38
displaying information about files with links to a
password, administrative 38
volume 344
password, local client 37
querying a duplicate-identification process 344, 345,
server scripts 666
347
user id, administrative 38
querying a storage pool 343
define drive 145 deployment (continued)
DEFINE DRIVE command 206 importing backup-archive client packages 457
DEFINE GRPMEMBER command 761 mirror FTP site 457
DEFINE LIBRARY command 205 overview 455
DEFINE MACHNODEASSOCIATION command 1061 properties file 457
DEFINE MGMTCLASS command 523 requirements 456
define path 145 schedule 458, 462
DEFINE POLICYSET command 522 verifying 463
DEFINE PROFASSOCIATION command 741 descriptions, for archive packages 585, 586
DEFINE PROXYNODE command 466 DESTINATION parameter (storage pool) 499, 524
DEFINE RECMEDMACHASSOCIATION command 1065 destroyed volume access mode 287, 980
DEFINE RECOVERYMEDIA command 1065 determining
DEFINE SCHEDULE command 30, 661 cause of ANR9999D messages 886
DEFINE SCRIPT command 666 the time interval for volume check in 214
DEFINE SERVER command 728, 758, 763 device 102
DEFINE STGPOOL command 277, 278, 289 adding 112
DEFINE SUBSCRIPTION command 751 adding using commands 119
DEFINE VIRTUALFSMAPPING command 264 alias name for IBM Tivoli Storage Manager 102
DEFINE VOLUME command 284 attaching to server 119, 247
defining configuring manual 113
administrator 57 configuring optical 114
client nodes 475 configuring removable media 116
defining RSM libraries to IBM Tivoli Storage Manager defining to IBM Tivoli Storage Manager 120
using IBM Tivoli Storage Manager commands 183 multiple types in a library 78
using the device configuration wizard 181 selecting device drivers 107
definitions 1177 supported by IBM Tivoli Storage Manager 62
delaying migration for files 304 troubleshooting 152
delaying reuse of volumes 400 unsupported 116
DELETE ASSOCIATION command 601 Windows device drivers 108
DELETE BACKUPSET command 575 device class
DELETE CLIENTOPT command 490 3570 209, 211
DELETE COPYGROUP command 555 3590 209, 211
DELETE DOMAIN command 556 3592 211
DELETE EVENT command 603, 666 4MM 209, 211
DELETE FILESPACE command 487 8MM 209, 211
DELETE GRPMEMBER command 763 amount of space used 419
DELETE KEYRING command 912 CARTRIDGE 211
DELETE MGMTCLASS command 556 CENTERA 67
DELETE POLICYSET command 556 defining 209
DELETE PROFASSOCIATION command 746 description of 22
DELETE PROFILE command 747 DISK 209
DELETE SCHEDULE command 599, 664 DLT 209, 211
DELETE SCRIPT command 675 DTF 209, 211
DELETE SERVER command 735 ECARTRIDGE 211
DELETE SERVERGROUP command 762 FILE 209
DELETE STGPOOL command 433 for devices using native Windows device driver 116
DELETE SUBSCRIBER command 757 FORMAT parameter 212
DELETE SUBSCRIPTION command 753 GENERICTAPE 116, 209, 211
DELETE VOLHISTORY command 656 LTO 222
DELETE VOLUME command 434, 435 OPTICAL 209
deleting overview 121
cached files on disk 422 QIC 209, 211
empty volume 434, 656 REMOVABLEFILE 218
file spaces 487 requesting information about 229
files 433, 535 selecting for import and export 783
scratch volume 282, 656 sequential 211
storage volume 435 SERVER 209, 211, 765
subfile 578 StorageTek devices 211, 226
volume history information 656 tape 211, 218
volume with residual data 435 Ultrium, LTO 211
deletion hold 538 updating 211, 218
deployment VOLSAFE 226
automatic 454 WORM 209, 211
command-line interface 459, 460 WORM12 209, 211
configure 457 WORM14 209, 211
FTP site 455 device classes
importing 461 database backups 943
domain, policy (continued) encryption
changing 501 changing method 562
creating 522 choosing a method 561
deleting 556 DRIVEENCRYPTION parameter
description of 505 3592 Generation 2 217
distributing via profile 552, 738 ECARTRIDGE 227
for NAS file server node 241 LTO-4 224
querying 554 methods 188, 560
updating 518, 520 END EVENTLOGGING command 887
drive Enterprise Administration
cleaning 197 description 721
defining 206 enterprise configuration
defining path for 208 communication setup 726
deleting 202 description 722, 735
detecting changes on a SAN 153 procedure for setup 736
element address 206, 208 profile for 737
querying 186 scenario 724, 736
serial number 206 subscription to 738
server retries for acquiring 88 enterprise event logging 726, 897
simultaneous-write function, requirements for 377 environment file
updating 187 modifying queries 865, 866
updating to use for NDMP operations 238 modifying reporting performance 865
drive configuration 144 environment variable, accounting 837
DRIVEACQUIRERETRY server option 88 error analysis 822
DRIVEENCRYPTION parameter error checking for drive cleaning 201
3592 device class 217 error reporting for ANR9999D messages 886
ECARTRIDGE device class 227 error reports for volumes 408
LTO device class 224 establishing server-to-server communications
driver, device enterprise configuration 726
for manual tape devices 99, 100 enterprise event logging 726
IBM Tivoli Storage Manager, installing 99, 100 virtual volumes 734
installing 99 estimate network bandwidth 1008
requirements 99 estimate replication 1007
stopping 649 estimated capacity for storage pools 404
Windows 108 estimated capacity for tape volumes 408
drivers event logging 626, 885, 891
for IBM devices 105 event record (for a schedule)
drives 144 deleting 603, 666
defining in the library 120 description of 592, 601
dsm.opt file 444, 488, 589 managing 664
dsmaccnt.log 837 querying 665
DSMADMC command 780, 793, 799 removing from the database 603, 665
DSMC loop session 491 setting retention period 603, 665
DSMMAXSG utility 154 event server 897
dsmsched.log file 603 example
DSMSERV DISPLAY DBSPACE command 708 assigning a default management class 532
DSMSERV DISPLAY LOG command 708, 712 register three client nodes with CLI 449
DSMSERV_ACCOUNTING_DIR 837 validating and activating a policy set 534
duplicate-identification processes 332, 336 expiration 97
duplication of restored data 981 expiration date, setting 662
DVD-RAM support expiration processing 30
configuring 118 description 958
defining and updating a device class 218 files eligible 501, 535
dismounting volumes from DVD-RAM devices 177 of subfiles 501, 526, 535, 577
dynamic serialization, description of 524, 530 starting 535
using disaster recovery manager 536
EXPIRE INVENTORY command 30
E duration of process 536
export
ECARTRIDGE device class 211
administrator information 780
education
client node information 781
see Tivoli technical training xix
data from virtual volumes 801
element address 206
decided when 773
ENABLE EVENTS command 886
directly to another server 774
ENABLE SESSIONS command 494
labeling tapes 776, 783
enabling client/server communications 56
monitoring 798
encoding events to UTF-8 891
options to consider 774
GRANT AUTHORITY command 918 IBM Tivoli Storage Manager Operations Center (continued)
group backup, on the client 12 web server 621
group, server IDLETIMEOUT server option 492, 493
copying 762 image backup
defining 761 policy for 546, 548
deleting 762 suggested use 8, 11
member, deleting 763 import
moving a member 763 data from virtual volumes 801
querying 762 monitoring 798
renaming 762 options to consider 788
updating description 762 PREVIEW parameter 782, 790
querying about a process 798
querying the activity log 800
H recovering from an error 797
replace existing definitions 789
HALT command 647
viewing information about a process 798
halting the server 52, 647
IMPORT ADMIN command 787
hardware scans
import Cognos reports 856
VMware environment 474, 640
stand-alone Tivoli Common Reporting 857
held volume in a client session 491
IMPORT commands 799
HELP command 657
IMPORT NODE command 787, 796
hierarchy, storage 22
IMPORT POLICY command 787
copying active backup data 269
import pool 182
defining in reverse order 277, 289
IMPORT SERVER command 787, 796
establishing 288
importing
example 271
active-data pools 795
for LAN-free data movement 288
data 787
how the server stores files in 290
data storage definitions 792, 794
next storage pool
date of creation 789, 795
definition 288
description of 771
deleting 433
directing messages to an output file 780, 793
migration to 299, 414
duplicate file spaces 795
restrictions 288
file data 795
staging data on disk for tape storage 298
node replication restriction 1000
hierarchy, storage pool 40
policy definitions 792
historical reports
server control data 793
server trends 863
subfiles 577
HL ADDRESS 453
subsets of information 797
Host Bus Adapter (HBA), increasing maximum transfer length
importing BIRT reports 881
for 154
importing customized BIRT reports 881
hostname
Importing customized Cognos reports 881
changing 654
importing reports 881
how to cause the server to accept date and time 649
include-exclude file
hub server 618
description of 28, 510
changing 620
for policy environment 505, 510
configuring 615
incomplete copy storage pool, using to restore 981
reconfiguring 620
incremental backup 514
incremental backup, client
file eligibility for 514
I frequency, specifying 606
IBM Cognos 846 full 514
IBM error analysis 822 partial 515
IBM Publications Center xv, xviii progressive 13
IBM Support Assistant xx incremental replication 1011
IBM Tivoli Monitoring 839 inheritance model for the simultaneous-write function 364
IBM Tivoli Storage Manager (Tivoli Storage Manager) initial configuration 37
introduction 3 initial configuration wizards 35, 36, 37
server network 32 initial configuration, stopping 37
setting up 52 initial replication 1010
starting 52 initial start date for schedule 661
IBM Tivoli Storage Manager device driver 104 initial start time for schedule 661
IBM Tivoli Storage Manager MACRO command 169 initializing 1141
IBM Tivoli Storage Manager Operations Center server 38
getting started 616 tape volumes 43
hub server 618 installing 107
performance 618 installing IBM Tivoli Storage Manager 440
spoke server 618, 619 client scheduler 55
LTO Ultrium devices and media manual library device (continued)
device class, defining and updating 222 adding, using wizard 113
driver 106 managing media 173
encryption 188, 224, 560 preparing media 172
WORM 164, 226 reusing media 174
LUN manually configuring a device
using in paths 208, 209 Windows 115
manuals
See publications
M MAXSCRATCH parameter 273, 285, 430
media
machine characteristics 1061
checking in 44, 161
machine recovery information 1062
labeling for automated library devices 159
macro
labeling for bar-code reader 160
commit individual commands 679
mount operations 174
continuation characters 678
reusing 172
controlling command processing 679
selecting and labeling 43
running 679
tape rotation 81, 177
scheduling on client 593
media label
substitution variables 678
for automated library device 159
testing 680
for devices unrecognized by IBM Tivoli Storage
using 676
Manager 172
writing commands 677
for manual library device 173
writing comments 677
for use with bar-code reader 160
MACRO administrative command, using 449
Media Labeling wizard 43
magnetic disk devices 66, 89
media managers, removable 179
maintenance 454, 459
media pools in RSM libraries 182
maintenance distribution 454, 459
merging file spaces 775, 788
maintenance plan
messages
modify 669
determining cause of ANR9999D message 886
maintenance script
directing import messages to an output file 780, 793
create 668
for drive cleaning 201
custom 668
getting help on 657
modify 30
severe 886
maintenance updates 454, 459
Microsoft Failover Cluster 1139
managed server
Microsoft Failover Cluster Manager 1134
changing the configuration manager 751, 756
Microsoft Management Console (MMC) snap-in 19
communication setup 726
MIGRATE STGPOOL command 308
deleting a subscription 753
migrating a file 503, 517
description 722
migration, client
managed objects 722, 750
automatic, for HSM client
refreshing configuration information 754
demand 504
renaming 757
files, eligible 517
returning managed objects to local control 755
threshold 504
setting up 738
using management class 518
subscribing to a profile 738, 750, 751
premigration for HSM client 504
management class
reconciliation 504
assigning a default 532
selective, for HSM client 504
associating a file with 511
stub file on HSM client 504
binding a file to 511
migration, server
configuration 508
canceling the server process 415
controlling user access 508
controlling by file age 304
copying 518, 523
controlling duration 308
default 509
controlling start of, server 303
define new 550
copy storage pool, role of 310
defining 523
defining threshold for disk storage pool 303
deleting 556
defining threshold for tape storage pool 305
description of 505, 508
delaying by file age 304
querying 553
description, server process 301
rebinding a file 513
minimizing access time to migrated files 305
updating 512, 518, 523
monitoring thresholds for storage pools 414
managing servers with the Operations Center 615
multiple concurrent processes
managing Tivoli Storage Manager servers 51
random access storage pool 273, 301
managingserver operation 30
sequential access storage pool 273, 309
manual drive
problems, diagnosing and fixing 299
attaching 99
providing additional space for server process 416
manual library device
starting manually 308
adding, using commands 119
NDMP operations for NAS file servers (continued) node replication (continued)
defining paths to drives (continued) disabling and enabling
obtaining names for devices attached to file server 255 rules 1043
defining paths to libraries 256 disabling and enabling replication
differential image backup, description 77 all client nodes 1042, 1043
full image backup, description 77 data types in file spaces 1040
interfaces used with 235 individual client nodes 1041
managing NAS nodes 237 disaster recovery
path, description 70, 71, 208 recovering data from the target 1050
planning 244 store operations on the target server 1050
policy configuration 241, 548 file spaces
prevent closing of inactive connections data types, disabling and enabling 1040
enabling TCP keepalive 240 purging data in 1044
overview 239 querying replication results 1047
specifying connection idle time 240 rules, changing 1018
registering a NAS node 252, 443 import-and-export operations
requirements for set up 233 converting from 1026
restoring a NAS file server 258 restriction 1000
scheduling a backup 257 migration by HSM for Windows client 1000
storage pools for NDMP operations 246 mode, replication 997
NetApp file server nodes
data format for backup 236 adding for replication 1027
international characters 262 all client nodes, disabling and enabling
NETAPPDUMP data format 236, 246 replication 1042, 1043
NetView 885 attributes updated during replication 999
Network Appliance file server individual client nodes, disabling and enabling
backup methods 242 replication 1041
requirements 233 removing from replication 1028
storage pool for backup 277 resetting the replication mode 1029
tape device for backup 233 overview 987
using NDMP operations 76, 233 planning 1005
network attached storage policy management 989
virtual file spaces 257 previewing results 1033
network bandwidth 1008 process information
network client 47 activity log 1048
Network Client Options File Wizard 55 file spaces 1047
network environment 37, 47 record retention 1049
network of IBM Tivoli Storage Manager servers 32 summary records 1048
network of Tivoli Storage Manager servers 721 records
network-attached nodes displaying 1047
comparing to local nodes 448 retaining 1049
network-attached storage (NAS) file server replicating
backup methods 242 canceling processes 1046
registering a NAS node for 252 data by file space 1035
using NDMP operations 76, 233 data by priority 1037
new tape drive 193 data by type 1036
next storage pool scheduling or starting manually 1034
definition 288 throughput, managing 1038
deleting 433 restoring, retrieving, and recalling data from the
migration to 299, 414 target 1050
no query restore 583 results, previewing 1033
node retention protection, archive 1000
registering 472, 500 rules
node privilege class attributes 991
description of 470 definitions 990
granting 471 disabling and enabling 1043
node replication 1011, 1040 file spaces 1018
configuration hierarchy 991
effectiveness, measuring 1048 nodes, individual 1020
removing 1051 processing example 992
setting up the default 1014, 1016 server 1022
source and target replication servers 988 Secure Sockets Layer (SSL) 1031, 1032
validating 1032 servers
database requirements 1007 communications, setting up 1014
database restore, replicating after 1045 configurations 988
deduplicated data 998, 1049 source, adding 1030
target 1030, 1031, 1050
options, querying point-in-time restore
VIRTUALMOUNTPOINT client option 475 enable for clients 9, 551
overflow location 273 policy
overview default 5, 499
IBM Tivoli Storage Manager Operations Center 616 deleting 555
owner authority, client 470, 472 description of 505
distributing with enterprise management 552
effect of changing 532, 533
P for application clients 546
for clients using SAN devices 549
PARALLEL command 669
for direct-to-tape backup 545
Passport Advantage xxi
for logical volume backups 546
password
for NAS file server node 241
changing the key database 910, 911
for point-in-time restore 551
default, administrative 38
for server as client 551
default, local client 38
for space management 499, 517, 523
LDAP-authenticated policy 929
importing 792
resetting an administrative 923
managing 497
setting authentication for a client 937
operations controlled by 502
setting expiration 933
planning 498
setting invalid limit 936
querying 552
setting minimum length 937
policy domain
update for scheduling operations 446
active-data pools, specifying 520
using with unified logon 938
assigning client node 534
password, change administrator 58
changing 501
path
creating 522
defining 208
define 549
description 70, 71, 244
deleting 556
paths
description of 505
defining 205
distributing via profile 552, 738
pending, volume state 409
for NAS file server node 241
per product ID (PID) 637
querying 554
PERFORM LIBACTION 145
updating 518, 520
performance
policy objects 45
cache, considerations for using 97, 310
default 45
clients, optimizing restore 269, 578
policy privilege class
concurrent client/server operation considerations 607
description 918, 920
data protection, increasing with simultaneous-write
granting 922
function 355
revoking 922, 923
data validation for nodes 560
policy set
data validation for storage pools 964
activating 533
file system effects on 95, 282
changing, via the active policy set 501
random-access disk (DISK) 89
copying 501, 518, 522
FILE-type volumes, backing up many small objects to 219
defining 522
fragmentation, private FILE volumes for reducing disk 66
deleting 556
migration, multiple concurrent processes 273, 309
description of 505
mobile client 576
querying 554
reclamation, multiple concurrent processes
updating 522
copy storage pools 397
validating 532, 534
primary sequential access storage pools 273, 393
pool, storage
storage pool backup, reducing time required for 355
3592, special considerations for 215
storage pool volume 305
active-data pool 269
volume frequently used, improve with longer mount
amount of space used 419
retention 214
auditing a volume 958
period, specifying for an incremental backup 606
backup 41
plan
comparing primary and copy types 431
Disaster Recovery Manager 1124
copy 269
DRM 1124
creating a hierarchy 288
planning, capacity
data format 236, 273, 277
database space requirements
defining 273
estimates based on number of files 690
defining a copy storage pool 429
estimates based storage pool capacity 692
defining for disk, example 277, 289
starting size 689
defining for NDMP operations 246
recovery log space requirements
defining for tape, example 277, 289
active and archive logs 693
deleting 433
active log mirror 705
description of 41, 268
archive failover log 706
destination in copy group 524, 530
archive log space for database reorganization 705
QUERY LICENSE command 633 recovery log 681, 685
QUERY MEDIA command 169 active log 684, 685, 687
QUERY MGMTCLASS command 553 active log mirror 686
QUERY NODE command 468 alternative file locations
QUERY OCCUPANCY command 417, 418, 419, 420 overview 716
QUERY OPTION command 822 specifying with ARCHFAILOVERLOGDIRECTORY
QUERY POLICYSET command 554 option or parameter 716
QUERY PROCESS command 651, 798, 821 specifying with ARCHLOGDIRECTORY
identification numbers of migration processes 415 parameter 717
information about data movement process 424 specifying with RECOVERYDIR parameter 717
QUERY PVUESTIMATE 637 archive failover log 684, 687
QUERY RESTORE command 495 archive log 684, 686, 687
QUERY RPFCONTENT command 1070 description of 31, 681
QUERY RPFILE command 1070 increasing the size 712
QUERY SCHEDULE command 592 log mirror 684, 686, 687
QUERY SCRIPT command 674 managing 681
QUERY SERVERGROUP command 762 monitoring 708
QUERY SESSION command 491, 820 out of space 712
QUERY SHREDSTATUS command 564 recovery logs
QUERY STATUS command 822 move to another directory 713
QUERY STGPOOL command 403, 414, 416 relocating on a server 713
QUERY SUBSCRIPTION command 752 recovery plan file
QUERY SYSTEM command 822 break out stanzas 1096
QUERY VOLHISTORY command 656 using VBScript procedure 1096
QUERY VOLUME command 406, 425 creating 1066
QUERYAUTH server option 918 example 1102
prefix 1054
stanzas 1099
R recovery, disaster
auditing storage pool volumes 968
random mode for libraries 100
general strategy 763
randomize, description of 607
media 1065
raw logical volume 24
methods 763
read-only access mode 286
providing 763
read/write access mode 286
REGISTER ADMIN command 920
real-time monitoring 844
REGISTER LICENSE command 632
rebinding
REGISTER NODE command 472
description of 513
registering
file to a management class 513
administrator 57
recalling a file
client node 40
selective 504
client option sets 442
transparent 504
workstation 443
receiver 885
registration
RECLAIM STGPOOL command 393
description of 440
reclamation 399
licensing for a client node 631
active-data pools 396
licensing for an administrator 631
aggregate reconstruction 391
managing client node 440, 451
controlling duration 393
setting for a client node 440
delayed start of process 390
source server 443
delaying reuse of volumes 400, 958
relationships
description of 23
among clients, storage, and policy 506
effects of collocation 400
remote access to clients 469
effects of DELETE FILESPACE 390
remote client 37, 47
multiple concurrent processes
removable file system device
copy storage pools 397
example of setting up 117
primary sequential access storage pools 273, 393
REMOVABLEFILE device type, defining and
off-site volume
updating 218
controlling when reclamation occurs 398
support for 116, 218
setting a threshold for sequential storage pool 273, 390,
removable media 66
430
removable media device, adding 116
starting reclamation manually 393
Removable Storage Manager (RSM) 179
storage pool for 273
REMOVE ADMIN command 924
virtual volumes 396
REMOVE NODE command 465
with single drive 394
RENAME ADMIN command 923
RECONCILE VOLUMES command 769
RENAME FILESPACE command 798
reconfiguring the Operations Center 620
RENAME NODE command 464
reconstructing aggregates 391, 397, 425
RENAME SCRIPT command 675
recovery instructions file 1105
schedule (continued)
   querying 592
   results of 601, 665
   server administrative command 660
   startup window 606, 662
   type of action 663
   uncertain status 602, 665
   updating 661
   viewing information about 592
schedule event
   managing 601, 664
   querying 601, 665
   viewing information about 601, 665
schedule replication 1011
scheduled operations, setting the maximum 607
scheduler workload, controlling 606
scheduling
   administrative commands 30
   verifying results 56
scheduling mode
   client-polling 605
   overview of 605
   selecting 605
   server-prompted 605
   setting on a client node 606
   setting on the server 605
scratch category, 349X library 123
scratch volume
   deleting 282, 656
   description 71
   FILE volumes 98
   number allowed in a storage pool 273, 430
   using in storage pools 285
script
   maintenance 669
script, scheduling on client 593
script, server
   continuation characters 670
   copying 674
   defining 666
   deleting 675
   EXIT statement 672
   GOTO statement 672
   IF clause 671
   querying 674
   renaming 675
   routing commands in 759
   running 675
   running commands in parallel 669
   running commands serially 669
   substitution variables 671
   updating 673
   used with SNMP 893
   Web browser, restricted to ASCII entry 666
SCSI
   automatic labeling of volumes 159
   library with different tape technologies 215
SCSI library
   connect to NAS file server 249
   connecting to the server 248
SCSI tape library
   setting up for NDMP operations 246
SEARCHMPQUEUE server option 88
secure sockets layer 913
   configuration 913
Secure Sockets Layer
   changing settings 620
Secure Sockets Layer (SSL)
   Administration Center 912
   certificate
      adding CA-signed 911
      adding to key database 910
   communication using 907
   digital certificate file protection 952
   Global Security Kit 908
security
   client access, controlling 471
   data encryption
      3592 generation 2 560
      3592 Generation 2 217
      3592 generation 3 560
      ECARTRIDGE 227
      IBM LTO Generation 4 188, 224, 560
      Oracle StorageTek T10000B 188, 560
      Oracle StorageTek T10000C 188, 560
   data encryption, 3592 Generation 2 and later 188
   features, overview 27
   for the server 907
   locking and unlocking administrators 924
   locking and unlocking nodes 464
   managing access 907, 925
   password expiration for nodes 933
   privilege class authority for administrators 918
   Secure Sockets Layer (SSL) for node replication 1031, 1032
   server options 918
security, replicating node data 987
SELECT command 824
   customizing queries 825
selective backup 47, 502, 516
selective recall 504
sending commands to servers 758
sequence number 489, 490
sequential mode for libraries 100
sequential storage pool
   auditing a single volume in 966
   auditing multiple volumes in 965
   collocation 387
   estimating space 403
   migration threshold 305
   reclamation 390
SERIAL command 669
serial number
   automatic detection by the server 153, 205, 206, 208
   for a drive 206
   for a library 205, 206, 208
serialization parameter 499, 524, 530
server
   activating 38, 52
   authority 57
   backing up subfiles on 576
   canceling process 651
   changing the date and time 649
   console, MMC snap-in 19
   creating initial storage pool volume 38
   deleting 734
   description of 3
   disabling access 494
   disaster recovery 33
   enabling access 494
   halting 52, 647
   importing subfiles from 577
   maintaining, overview 19
   managing multiple 32
   managing operations 631
session (continued)
   server-initiated 453
   setting the maximum percentage for scheduled operations 607
session, client
   canceling 492
   DSMC loop 491
   held volume 491
   managing 491
   querying 491, 820
   viewing information about 491, 820
sessions, maximum number for scheduled operations 1038
SET ACCOUNTING command 837
SET ACTLOGRETENTION command 831
SET AUTHENTICATION command 937
SET CLIENTACTDURATION command 611
SET CONFIGMANAGER command 737, 740
SET CONFIGREFRESH command 752
SET CONTEXTMESSAGING command 886
SET CROSSDEFINE command 729, 732
SET DBREPORTMODE command 708
SET DRMCHECKLABEL command 1057
SET DRMCOPYSTGPOOL command 1054
SET DRMCOURIERNAME command 1057
SET DRMDBBACKUPEXPIREDAYS command 1057
SET DRMFILEPROCESS command 1057
SET DRMINSTPREFIX command 1054
SET DRMNOTMOUNTABLE command 1057
SET DRMPLANPOSTFIX command 1054
SET DRMPLANPREFIX command 1054
SET DRMPRIMSTGPOOL command 1054
SET DRMRPFEXPIREDAYS 1070
SET DRMVAULTNAME command 1057
SET EVENTRETENTION command 603, 665
SET INVALIDPWLIMIT command 936
SET LICENSEAUDITPERIOD command 633
SET MAXCMDRETRIES command 609
SET MAXSCHEDSESSIONS command 607
SET PASSEXP command 933
SET QUERYSCHEDPERIOD command 609
SET RANDOMIZE command 607
SET REGISTRATION command 440
SET RETRYPERIOD command 610
SET SCHEDMODES command 605
SET SERVERHLADDRESS command 729, 732
SET SERVERLLADDRESS command 729, 732
SET SERVERNAME command 653, 728, 729, 732
SET SERVERPASSWORD 728, 729, 732
SET SUBFILE 576
SET SUMMARYRETENTION 828
set up storage agent 913
SETOPT command 655
setting
   clients to use subfile backup 577
   compression 442
   library mode 100
   password 933
   server options 56
   time interval for checking in volumes 214
setting data deduplication options 342
shared access, nodes 467
shared dynamic serialization, description of 524, 530
shared file system 96
shared library 129, 146
shared static serialization, description of 524, 530
sharing Cognos reports 856, 857, 858, 859, 860
SHRED DATA command 564
shredding
   BACKUP STGPOOL command 565
   COPY ACTIVEDATA command 565
   DEFINE STGPOOL command 565
   DELETE FILESPACE, command 565
   DELETE VOLUME, command 565
   deleting empty volumes 434
   deleting volumes with data 435
   description 563
   enforcing 565
   EXPIRE INVENTORY command 565
   EXPORT NODE command 565, 772
   EXPORT SERVER command 565, 772
   GENERATE BACKUPSET command 565, 566
   MOVE DATA command 422, 565
   setting up 564
   UPDATE STGPOOL command 565
SHREDDING server option 564
simultaneous-write operations to primary and copy storage pools
   drives 377, 378
   inheritance model 363
   mount points 376
   storage pools 378
size
   Tivoli Storage Manager database, initial 38
SnapLock
   data protection, ensuring 544
   event-based retention 543
   reclamation 540
   retention periods 540
   WORM FILE volumes, setting up 544
SnapMirror to Tape 265
snapshot, using in backup 9, 11, 948
   using in directory-level backups 264
SNMP
   agent 893
   communications 57, 893
   configuring 896
   enabled as a receiver 885, 893
   heartbeat monitor 885, 893
   manager 893
   subagent 893
software support
   describing problem for IBM Software Support xxii
   determining business impact for IBM Software Support xxii
   submitting a problem xxii
Software Support
   contact xxi
Sony WORM media (AIT50 and AIT100) 164
source server 765
space
   directories associated with FILE-type device classes 420
space requirements 1007
space-managed file 503
special file names 102
spoke server 618
   adding 619
SQL 824
SQL activity summary table 828
SQL SELECT * FROM PVUESTIMATE_DETAILS 637
ssl 913
   configuration 913
SSL
   changing settings 620
subfile backups (continued)
   restoring 577
subordinate storage pool 288
subscriber, deleting 757
subscription
   defining 750, 751
   deleting 753
   scenario 751
subset node replication 1010
substitution variables, using 671
support contract xxi
support information xviii
support subscription xxi
supported devices 62
system catalog tables 824
system privilege class
   revoking 923

T
table of contents 261
   generating for a backup set 573
   managing 239, 262
tape
   backup to 39
   capacity 230
   compatibility between drives 193
   devices 39
   exporting data 784
   finding for client node 413
   monitoring life 409
   number of times mounted 409
   planning for exporting data 783
   recording format 212
   rotation 81, 177
   scratch, determining use 273, 285, 430
   setting mount retention period 214
   volumes
      initializing 43
      labeling 43
tape drive, replacing 193
tape failover 1136
target server 765
TCP keepalive
   enabling 240
   overview 239
   specifying connection idle time 240
TCP/IP 452
   connect server to database 684
   IPv4 452
   IPv6 452
TCP/IP options 56
   named pipes option 57
TECUTF8EVENT option 891
temporary disk space 692
temporary space 692
test replication 1039
text editor
   to work with client 446
threshold
   migration, for storage pool
      random access 301
      sequential access 306
   reclamation 273, 390, 430
throughput capability 1039
THROUGHPUTDATATHRESHOLD server option 493
THROUGHPUTTIMETHRESHOLD server option 493
tiered data deduplication 342
tiering 342
time interval, setting for checking in volumes 214
timeout
   client session 493
Tivoli Directory Server
   configure for TLS 914
   configure for TLS on the CLI 916
Tivoli Enterprise Console 889
   setting up as a receiver 891
Tivoli Enterprise Portal workspaces 841
Tivoli event console 885, 889
Tivoli Integrated Portal
   configuring SSL 912
Tivoli Monitoring V6.3.3 updates xxvii
Tivoli Monitoring V6.3.4 updates xxvi
Tivoli Storage Manager 844
   overview 35, 36
   server network 721
   starting as a service 645
Tivoli Storage Manager definitions 819
Tivoli Storage Manager device driver 107, 108
Tivoli Storage Manager for Space Management 523
   archive policy, relationship to 518
   backup policy, relationship to 518
   description 503
   files, destination for 523
   migration of client files
      description 504
      eligibility 517
      policy for, setting 517, 523
   premigration 504
   recall of migrated files 504
   reconciliation between client and server 504
   selective migration 504
   setting policy for 518, 523
   simultaneous-write function, version support for 357
   space-managed file, definition 503
   stub file 504
Tivoli Storage Manager Server Console 647
Tivoli technical training xix
TLS (Transport Layer Security)
   specifying communication ports 909
training, Tivoli technical xix
transactions, database 681, 718
transparent recall 504
Transport Layer Security (TLS) 908
   specifying communication ports 909
troubleshooting
   device configuration 152
   errors in database with external media manager 184
tsmdlst 102
tsmdlst utility 102
TXNBYTELIMIT client option 290
TXNGROUPMAX server option 290
type, device
   3570 209, 211
   3590 211
   4MM 209, 211
   8MM 209, 211
   CARTRIDGE 211
   CENTERA 67
   DISK 209
   DLT 209, 211
   DTF 209, 211
   ECARTRIDGE 211
   FILE 209
U
Ultrium, LTO device type
   device class, defining and updating 222
   driver 106
   encryption 188, 224, 560
   WORM 164, 226
unavailable access mode
   description 287
   marked with PERMANENT parameter 176
uncertain, schedule status 602, 665
Unicode
   automatically renaming file space 479
   client platforms supported 477
   clients and existing backup sets 485
   deciding which clients need enabled file spaces 478
   description of 477
   displaying Unicode-enabled file spaces 485
   example of migration process 484
   file space identifier (FSID) 485, 486
   how clients are affected by migration 483
   how file spaces are automatically renamed 481
   migrating client file spaces 478
   options for automatically renaming file spaces 479
Unicode versions
   planning for 481
unified logon
   enable 938
unified logon for Windows NT 938
uninstalling 108
UNIQUETDPTECEVENTS option 889
UNIQUETECEVENTS option 889
UNLOCK ADMIN command 924
UNLOCK NODE command 464
UNLOCK PROFILE command 745, 746
unplanned shutdown 647
unreadable files 967, 968
unrecognized pool 182
UPDATE ADMIN command 923
UPDATE ARCHIVE command 587
UPDATE BACKUPSET command 574
UPDATE CLIENTOPT command 490
UPDATE CLOPTSET command 490
UPDATE COPYGROUP command 524, 530
UPDATE DEVCLASS command 211
UPDATE DOMAIN command 522
UPDATE LIBVOLUME command 71
UPDATE MGMTCLASS command 523
UPDATE NODE command 454, 484, 488
UPDATE POLICYSET command 522
UPDATE RECOVERYMEDIA command 1065
UPDATE SCHEDULE command 661
UPDATE SCRIPT command 673

V
validate
   node data 560
VALIDATE LANFREE command 151
VALIDATE POLICYSET command 532
validating data
   during a client session 559
   for storage pool volumes 961
   for virtual volumes 763
   logical block protection 189
   performance considerations for nodes 560
   performance considerations for storage pools 964
variable, accounting log 837
VARY command 97
varying volumes on or off line 97
VERDELETED parameter 499, 526
VEREXISTS parameter 499, 526
verify
   cluster configuration 1141
Verifying and deleting Tivoli Monitoring for Tivoli Storage Manager backups
   DB2
      verifying and deleting backups 873
versions data deleted, description of 499, 526
versions data exists, description of 499, 526
viewing a Cognos report 855
virtual file space mapping, command 263
virtual tape libraries 143, 145
   configuring 143
   managing 143
virtual tape library 64, 144, 145
   configuring 145
   storage capacity 144
virtual volume
   performance expectations 766
virtual volumes, server-to-server
   deduplication 763
   reclaiming 396
   using to store data 763
VIRTUALMOUNTPOINT client option 475
Vital Cartridge Records (VCR), corrupted condition 88
VMware host environment
   hardware scans 474, 640
VOLSAFE device class 226
volume capacity 212
volume history 949
   deleting information from 656
volume history file 98, 949
volume reuse 98
volumes
   access preemption 652
volumes (continued)
   allocating space for disk 95, 282
   assigning to storage pool 282
   auditing 168, 958
   auditing considerations 958
   automated library inventory 72, 158
   capacity, compression effect 231
   checking out 167
   contents, querying 410
   defining to storage pools 284
   delaying reuse 400, 958
   deleting 434, 435, 656
   detailed report 412
   determining which are mounted 783
   disk storage 284
   disk storage pool, auditing 965
   errors, read and write 408
   estimated capacity 408
   finding for client node 413
   help in dsmc loop session 491
   labeling using commands 179
   location 409
   managing 166
   monitoring life 409
   monitoring movement of data 425
   monitoring use 406
   mount retention time 214
   moving files between 421
   number of times mounted 409
   off-site, limiting number to be reclaimed 399
   offsite, limiting number to be reclaimed 273
   overview 71
   pending status 409
   private 71
   querying contents 410
   querying for general information 406
   random access storage pools 268, 282, 285
   reclamation 394
   restoring random-access 626
   reuse delay 400, 958
   scratch 71
   scratch, using 285
   sequential 284
   sequential storage pools 159, 283
   setting access mode 286
   standard report 411
   status, in automated library 72, 157
   status, information on 407
   updating 284
   using private 71, 72, 157
   varying on and off 97
   WORM scratch category 123
VTL 144, 145

W
web administrative interface
   description 19
Web administrative interface
   limitation of browser for script definitions 666
Web backup-archive client
   granting authority to 471
   remote access overview 469
   URL 440, 469
web server
   starting 621
   stopping 621
Windows
   starting Tivoli Storage Manager as a service 52
Windows Active Directory
   configuring 917
Windows Administration Tools 641
Windows cluster configuration 1134
Windows clustered environment 1134
Windows device driver 108
Windows Server 2008 109
Windows unified logon 938
wizard
   client configuration 446
   client node 40
   client options file 446
   cluster configuration 1142
   description 19
   device configuration
      automated library devices 113
      manual devices 113
      optical devices 114
      RSM configuration 181
   initial configuration environment 37
   labeling 173
   media labeling 43
   remote client configuration 447
   server initialization 38
   setup 446
workstation, registering 443
WORM devices and media
   DLT WORM 164
   IBM 3592 164
   LTO WORM 164
   Oracle StorageTek T10000B drives 165
   Oracle StorageTek T10000C drives 165
   Quantum LTO3 164
   reclamation of optical media 395
   Sony AIT50 and AIT100 164
   special considerations for WORM media 164
   VolSafe
      considerations for media 164
      defining VOLSAFE device classes 226
WORM FILE and SnapLock 539
WORM parameter 226
WORM scratch category for volumes in 349X library 123
writing data simultaneously to primary and copy storage pools
   use during client storage operations 355