
Dell EMC NetWorker

Version 19.1

Cluster Integration Guide


302-005-690
REV 01
Copyright © 1990-2019 Dell Inc. or its subsidiaries. All rights reserved.

Published May, 2019

Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.

Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.

Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America: 1-866-464-7381)
www.DellEMC.com



CONTENTS

Figures

Preface

Chapter 1   Introduction
    Stand-alone application
    Cluster-aware application
    Highly available application

Chapter 2   Configuring the Cluster
    Prepare to install NetWorker on a cluster
    Microsoft Failover Cluster Server
        Preparing to install NetWorker on MSFCS clusters
        Configuring a highly available NetWorker server on Windows
        Configuring a cluster-aware NetWorker client
    SLES High Availability Extension
        Preparing to install NetWorker on SLES cluster
        Configuring a highly available NetWorker Server in the cluster
        Configuring a cluster-aware NetWorker client
    Red Hat Enterprise Linux High Availability
        Preparing to install NetWorker on RHEL
        Configuring a highly available NetWorker server in a RHEL 6.x cluster
        Configuring a highly available NetWorker Server in a RHEL 7.x cluster
        Configuring a cluster-aware client
    Sun Cluster and Oracle Solaris Cluster
        Preparing to install NetWorker on Sun and Oracle Solaris Clusters
        Configuring a cluster-aware NetWorker client
    AIX HACMP/PowerHA SystemMirror
        Preparing to install NetWorker on HACMP/PowerHA SystemMirror
        Configuring a cluster-aware NetWorker client
    HP MC/ServiceGuard
        Preparing to install NetWorker on MC/ServiceGuard
        Configuring NetWorker on MC/ServiceGuard
        Configuring a cluster-aware NetWorker client
    VERITAS Cluster Server
        Preparing to install NetWorker on VERITAS cluster
        Configuring a highly available NetWorker server
        Configuring NetWorker Client on a VERITAS cluster
    Troubleshooting configuration
        Slow backups
        NetWorker virtual server fails to start nsrmmd
        System hostname key is missing from the Lockbox
        REST API does not work after NetWorker server cluster fail-over

Chapter 3   Configuring Devices for a Highly Available NetWorker Server
    Configuring an autochanger with shared tape devices
    Configuring an autochanger with non-shared tape devices
    Configuring the robotics on a stand-alone host

Chapter 4   Configuring Backup and Recovery
    Setting NetWorker environment variables in a cluster
    Limiting NetWorker server access to a client
    Configuring the NetWorker virtual server
    Creating client resources for physical node backups
    Creating a client resource for virtual client backups
    Configuring a backup device for the NetWorker virtual server
        Configuring a virtual client to back up to a local storage node
    Performing manual backups of a cluster node
        Configuring manual backups for non-root or non-administrator users
        Performing manual backups from the command prompt
        Performing manual backups from NetWorker User
    Troubleshooting backups
        RAP error: Unable to extract resource info for client
        File systems omitted during a scheduled save
        File system backup information written to the wrong client file index
        No matching devices found when backing up to HACMP devices
    Recovering data
        Configuring a virtual client to recover from a local storage node
    Troubleshooting recovery
        NSR server 'nw_server_name': client 'virtual_hostname' is not properly configured on the NetWorker Server

Chapter 5   Uninstalling the NetWorker Software in a Cluster
    Uninstalling NetWorker from HACMP
    Uninstalling NetWorker from HP MC/ServiceGuard
    Uninstalling NetWorker from MSFCS
    Uninstalling NetWorker from RHEL High Availability
    Uninstalling NetWorker from SLES HAE
    Uninstalling NetWorker from SUN Cluster and Oracle Solaris Cluster
    Uninstalling NetWorker from VCS
        Uninstalling NetWorker on VCS for Solaris and Linux
        Uninstalling NetWorker on VCS for Windows

Chapter 6   Updating a Highly Available NetWorker Application
    Updating a NetWorker application

Glossary


FIGURES

1  Highly-available NetWorker Server
2  Autochanger with shared devices
3  Autochanger with non-shared devices
4  External stand-alone storage node


Preface

As part of an effort to improve product lines, periodic revisions of software and hardware are released. Therefore, all versions of the software or hardware currently in use might not support some functions that are described in this document. The product release notes provide the most up-to-date information on product features.
If a product does not function correctly or does not function as described in this document, contact a technical support professional.

Note

This document was accurate at publication time. To ensure that you are using the latest version of this document, go to the Support website https://www.dell.com/support.

Purpose
This document describes how to install, update, and uninstall the NetWorker software in a cluster environment.
Audience
This document is part of the NetWorker documentation set and is intended for use by
system administrators during the installation and setup of NetWorker software in a
cluster environment.
Revision history
The following table presents the revision history of this document.

Table 1 Revision history

Revision   Date            Description

01         May 20, 2019    First release of this document for NetWorker 19.1

Related documentation
The NetWorker documentation set includes the following publications, available on the
Support website:
l NetWorker E-LAB Navigator
Provides compatibility information, including specific software and hardware
configurations that NetWorker supports. To access E-LAB Navigator, go to
https://elabnavigator.emc.com/eln/elnhome.
l NetWorker Administration Guide
Describes how to configure and maintain the NetWorker software.
l NetWorker Network Data Management Protocol (NDMP) User Guide
Describes how to use the NetWorker software to provide data protection for
NDMP filers.
l NetWorker Cluster Integration Guide
Contains information related to configuring NetWorker software on cluster servers
and clients.
l NetWorker Installation Guide
Provides information on how to install, uninstall, and update the NetWorker software for clients, storage nodes, and servers on all supported operating systems.
l NetWorker Updating from a Previous Release Guide
Describes how to update the NetWorker software from a previously installed
release.
l NetWorker Release Notes
Contains information on new features and changes, fixed problems, known
limitations, environment and system requirements for the latest NetWorker
software release.
l NetWorker Command Reference Guide
Provides reference information for NetWorker commands and options.
l NetWorker Data Domain Boost Integration Guide
Provides planning and configuration information on the use of Data Domain
devices for data deduplication backup and storage in a NetWorker environment.
l NetWorker Performance Optimization Planning Guide
Contains basic performance tuning information for NetWorker.
l NetWorker Server Disaster Recovery and Availability Best Practices Guide
Describes how to design, plan for, and perform a step-by-step NetWorker disaster
recovery.
l NetWorker Snapshot Management Integration Guide
Describes the ability to catalog and manage snapshot copies of production data
that are created by using mirror technologies on storage arrays.
l NetWorker Snapshot Management for NAS Devices Integration Guide
Describes how to catalog and manage snapshot copies of production data that are
created by using replication technologies on NAS devices.
l NetWorker Security Configuration Guide
Provides an overview of security configuration settings available in NetWorker,
secure deployment, and physical security controls needed to ensure the secure
operation of the product.
l NetWorker VMware Integration Guide
Provides planning and configuration information on the use of VMware in a
NetWorker environment.
l NetWorker Error Message Guide
Provides information on common NetWorker error messages.
l NetWorker Licensing Guide
Provides information about licensing NetWorker products and features.
l NetWorker REST API Getting Started Guide
Describes how to configure and use the NetWorker REST API to create
programmatic interfaces to the NetWorker server.
l NetWorker REST API Reference Guide
Provides the NetWorker REST API specification used to create programmatic
interfaces to the NetWorker server.
l NetWorker 19.1 with CloudBoost 19.1 Integration Guide
Describes the integration of NetWorker with CloudBoost.
l NetWorker 19.1 with CloudBoost 19.1 Security Configuration Guide
Provides an overview of security configuration settings available in NetWorker and CloudBoost, secure deployment, and physical security controls needed to ensure the secure operation of the product.

l NetWorker Management Console Online Help
Describes the day-to-day administration tasks performed in the NetWorker
Management Console and the NetWorker Administration window. To view the
online help, click Help in the main menu.
l NetWorker User Online Help
Describes how to use the NetWorker User program, which is the Windows client
interface, to connect to a NetWorker server to back up, recover, archive, and
retrieve files over a network.
Special notice conventions that are used in this document
The following conventions are used for special notices:

NOTICE

Identifies content that warns of potential business or data loss.

Note

Contains information that is incidental, but not essential, to the topic.

Typographical conventions
The following type style conventions are used in this document:

Table 2 Style conventions

Bold Used for interface elements that a user specifically selects or clicks,
for example, names of buttons, fields, tab names, and menu paths.
Also used for the name of a dialog box, page, pane, screen area with
title, table label, and window.

Italic Used for full titles of publications that are referenced in text.
Monospace Used for:
l System code
l System output, such as an error message or script
l Pathnames, file names, file name extensions, prompts, and
syntax
l Commands and options

Monospace italic Used for variables.


Monospace bold Used for user input.

[] Square brackets enclose optional values.

|                 Vertical line indicates alternate selections. The vertical line means "or" for the alternate selections.

{}                Braces enclose content that the user must specify, such as x, y, or z.

...               Ellipses indicate non-essential information that is omitted from the example.

You can use the following resources to find more information about this product,
obtain support, and provide feedback.
Where to find product documentation
l https://www.dell.com/support

NetWorker 19.1 Cluster Integration Guide 9


Preface

l https://community.emc.com
Where to get support
The Support website https://www.dell.com/support provides access to product
licensing, documentation, advisories, downloads, and how-to and troubleshooting
information. The information can enable you to resolve a product issue before you
contact Support.
To access a product-specific page:
1. Go to https://www.dell.com/support.
2. In the search box, type a product name, and then from the list that appears, select
the product.
Knowledgebase
The Knowledgebase contains applicable solutions that you can search for either by
solution number (for example, KB000xxxxxx) or by keyword.
To search the Knowledgebase:
1. Go to https://www.dell.com/support.
2. On the Support tab, click Knowledge Base.
3. In the search box, type either the solution number or keywords. Optionally, you
can limit the search to specific products by typing a product name in the search
box, and then selecting the product from the list that appears.
Live chat
To participate in a live interactive chat with a support agent:
1. Go to https://www.dell.com/support.
2. On the Support tab, click Contact Support.
3. On the Contact Information page, click the relevant support, and then proceed.
Service requests
To obtain in-depth help from Licensing, submit a service request. To submit a service
request:
1. Go to https://www.dell.com/support.
2. On the Support tab, click Service Requests.

Note

To create a service request, you must have a valid support agreement. For details about an account or obtaining a valid support agreement, contact a sales representative. To get the details of a service request, in the Service Request Number field, type the service request number, and then click the right arrow.

To review an open service request:


1. Go to https://www.dell.com/support.
2. On the Support tab, click Service Requests.
3. On the Service Requests page, under Manage Your Service Requests, click
View All Dell Service Requests.
Online communities
For peer contacts, conversations, and content on product support and solutions, go to
the Community Network https://community.emc.com. Interactively engage with
customers, partners, and certified professionals online.

How to provide feedback
Feedback helps to improve the accuracy, organization, and overall quality of publications. You can send feedback to [email protected].



CHAPTER 1
Introduction

This document describes how to configure and use the NetWorker software in a
clustered environment. This guide also provides cluster specific information that you
need to know before you install NetWorker on a clustered host. You must install the
NetWorker software on each physical node in a cluster.
This guide does not describe how to install the NetWorker software. The NetWorker
Installation Guide describes how to install the NetWorker software on supported
operating systems. You can configure the NetWorker software in a cluster in one of
the following ways:

l Stand-alone application
l Cluster-aware application
l Highly available application


Stand-alone application
When you install the NetWorker server, storage node, or client software as a stand-alone application, the required daemons run on each node. When the NetWorker daemons stop on a node, the cluster management software does not restart them automatically.
In this configuration:
l NetWorker does not know which node owns the shared disk. To ensure that there
is always a backup of the shared disks, configure a NetWorker client resource for
each physical node to back up the shared and local disks.
l Shared disk backups will fail for each physical node that does not own or control
the shared disk.
l NetWorker writes client file index entries for the shared backup to the physical
node that owns the shared disk.
l To recover data from a shared disk backup, you must determine which physical
node owned the shared disk at the time of backup.

Note

NMC should always be used as a stand-alone application. NMC is not a cluster-aware or highly available application.

Cluster-aware application
On supported operating systems, when you configure a cluster-aware NetWorker
client, all required daemons run on each physical node. When the NetWorker daemons
stop on a node, the Cluster Management software does not restart them
automatically.
A cluster-aware NetWorker application determines path ownership of the virtual
applications that run in the cluster. This allows the NetWorker software to back up the
shared file system and write the client file index entries for the virtual client.
When you configure a cluster-aware NetWorker application, you must:
l Create a NetWorker client resource for the virtual node in the cluster to back up
the shared disk.
l Create a NetWorker client resource for each physical node to back up the local
disks.
l Select the virtual node to recover data from a shared disk backup.

Highly available application


On supported operating systems, such as Windows, SLES, and RHEL, you can configure the NetWorker Server software as a highly available application. A highly available NetWorker Server is also called a NetWorker virtual server.
When the NetWorker Server software is a highly available application:
l The active node runs the NetWorker Server daemons and accesses the
global /nsr or C:\Program Files\EMC NetWorker\nsr directory on the
shared drive.

l The passive nodes run the NetWorker Client daemon, nsrexecd.
l When a failover occurs, the new active node runs the NetWorker server daemons.
l The NetWorker virtual server uses the IP address and hostname of the NetWorker
virtual host, regardless of which cluster node owns the NetWorker Server
application.
l NetWorker determines path ownership of the virtual applications that run in the
cluster. This allows the NetWorker software to back up the shared file system and
write the client file index entries for the virtual client.
When you configure a highly available NetWorker Server, you must:
l Create a NetWorker Client resource for the virtual node in the cluster to back up
the shared disk.
l Create a NetWorker Client resource for each physical node to back up the local
disks.
l Select the virtual node to recover data from a shared disk backup.
The following figure provides an example of a highly available NetWorker Server in a
general cluster configuration consisting of two nodes and one virtual server. In this
illustration:
l Node 1, clus_phy1, is a physical node with local disks.
l Node 2, clus_phy2, is a physical node with local disks.
l Virtual Server, clus_vir1:
n Owns the shared disks. A volume manager manages the shared disk.
n Can fail over between Node 1 and Node 2. However, the NetWorker Server
software only runs on one node at a time.


Figure 1 Highly-available NetWorker Server



CHAPTER 2
Configuring the Cluster

This chapter describes how to prepare for a NetWorker installation on a cluster and
how to configure NetWorker on each cluster. Perform these steps after you install the
NetWorker software on each physical node.
The steps to install and update the NetWorker software in a clustered environment are the same as the steps to install and update the software in a non-clustered environment. The NetWorker Installation Guide describes how to install NetWorker on each supported operating system.

l Prepare to install NetWorker on a cluster
l Microsoft Failover Cluster Server
l SLES High Availability Extension
l Red Hat Enterprise Linux High Availability
l Sun Cluster and Oracle Solaris Cluster
l AIX HACMP/PowerHA SystemMirror
l HP MC/ServiceGuard
l VERITAS Cluster Server
l Troubleshooting configuration


Prepare to install NetWorker on a cluster


This section provides general information to review before you install the NetWorker
software on the nodes in a cluster.
l Ensure that the physical and virtual node names are resolvable in Domain Name System (DNS) or by using a hosts file.
l Ensure that the output of the hostname command on each physical node corresponds to an IP address that can be pinged (see the verification sketch after this list).
l You can publish the virtual hostname in the DNS or Network Information Services
(NIS).
l Install the most recent cluster patch for the operating system.
l Install the NetWorker software in the same location on a private disk, on each cluster node.
l Install the NetWorker software on all the nodes in the cluster.
l Ensure that authc is configured on all the nodes of the NetWorker server cluster.
l It is recommended that you install NMC on a stand-alone machine and use the virtual hostname of the clustered NetWorker server.
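
For example, the following commands are a minimal sketch of how to verify the name-resolution requirements on a Linux node; clus_vir1 is a placeholder virtual hostname, not a value from this guide:

# Confirm that the local hostname maps to a pingable IP address
hostname
ping -c 1 `hostname`
# Confirm that the virtual hostname resolves
getent hosts clus_vir1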

Microsoft Failover Cluster Server


This section describes how to prepare the Microsoft Failover Cluster Server (MSFCS) cluster, including AD-Detached Clusters, before you install the NetWorker software. This section also describes how to configure the NetWorker server software as a highly available application after you install the NetWorker software on each physical node of the cluster.
The NetWorker Installation Guide describes how to install the NetWorker software.

Note

This section does not apply when you install NetWorker as a stand-alone application.

Preparing to install NetWorker on MSFCS clusters


Review this section before you install the NetWorker software on a MSFCS cluster.
l Reboot the cluster node after you install the NetWorker software. If you do not reboot, you cannot start the cluster administrator program. In that case, close the cluster administrator interface and reload the software by running the following command from the command line:

regsvr32 /u nsrdresex.dll

l To back up a host that is a member of multiple domains, an Active Directory (AD)


domain, and a DNS domain, you must define the AD domain name in:
n The host file on the NetWorker Server.
n The Alias attribute for the Client resource on the NetWorker Server.
l The WINDOWS ROLES AND FEATURES save set includes the MSFCS database. When you back up the WINDOWS ROLES AND FEATURES save set, NetWorker automatically backs up the cluster configuration. The cluster maintains the MSFCS database synchronously on two nodes; as a result, the database backup on one node might not reflect changes that were made on the other node.
l The NetWorker Server and Client software supports backup and recovery of file system data on Windows Server 2019, Windows Server 2016, Windows Server 2012, and Windows Server 2012 R2 File Servers configured for Windows Continuous Availability with Cluster Shared Volumes (CSV). Support of CSV and deduplicated CSV backups includes the levels Full, Incremental, and incr_synth_full. NetWorker supports CSV and deduplicated CSV backups with the following restrictions:
n The volume cannot be a critical volume.
n NetWorker cannot shadow copy a CSV and local disks that are in the same
volume shadow copy set.

Note

The NetWorker software does not protect the Microsoft application data stored on a CSV or deduplicated CSV, such as SQL databases or Hyper-V virtual machines. To protect Microsoft application data, use the NetWorker Module for Microsoft (NMM) software. The NMM documentation provides more information about specific backup and recovery instructions for Microsoft application data.

The section Windows Optimized Deduplication in the NetWorker Administration Guide provides more information about performing a backup and recovery of deduplicated CSV volumes.

Configuring a highly available NetWorker server on Windows


This section describes how to configure the NetWorker server software as a highly available application and the NetWorker client as a cluster-aware application, after you install the NetWorker software on each physical node of the cluster.
Perform the following steps on each physical node as the administrator user.
Procedure
1. On one cluster node, type regcnsrd -c to create the NetWorker server
resource.
2. On the remaining cluster nodes, type regcnsrd -r to register the NetWorker
server resource.
If prompted with a message similar to the following, then type y:
Is this machine a member of the cluster on which you want
to register Resource Extension for NetWorker Server
resource?
3. Verify that a NetWorker Server resource type exists:
a. In the Failover Cluster Management program, right-click the name of the
cluster and select Properties.
b. From the Resource Types tab, verify that the User Defined Resource
Types list contains the NetWorker Server resource.
4. Perform the following according to the Windows version:
a. For Windows 2012, 2016, and 2019, from the Action menu, select Configure Role.... The High Availability Wizard appears.
b. For Windows 2008 R2 SP1, from the Action menu, select Configure a Service or Application.


5. On the Before You Begin page, click Next.


6. Perform the following according to the Windows version:

Note

Do not create a Generic Application resource for the NetWorker virtual server.

a. For Windows 2012, 2016, and 2019, on the Select Role page, select Other Server, and then click Next.
b. For Windows 2008 R2 SP1, on the Select Service or Application page,
select Other Server, and click Next.
7. On the Client Access Point page, specify a hostname that is not already in use and an available IP address, and then click Next.
Note

The Client Access Point resource type defines the virtual identity of the
NetWorker server, and the wizard registers the hostname and IP address in
DNS.

8. On the Select Storage page, select the shared storage volume for the shared
nsr directory, and then click Next.
9. In the Select Resource Type list, select the NetWorker Server resource type,
and then click Next.
10. On the Confirmation page, review the resource configurations and then click
Next. The High Availability Wizard creates the resources components and the
group.
When the Summary page appears, a message similar to the following appears,
which you can ignore:
The clustered role will not be started because the
resources may need additional configuration. Finish
configuration, and then start the clustered role.

11. Click Finish.


12. For Windows 2012, 2016, and 2019, on the Roles window, select the new NetWorker role. For Windows 2008 R2 SP1, expand Services and Applications. Perform the following steps on the Resources tab:
a. In the Server Name section, expand the NetWorker server resource then
right-click the new IP Address resource and then select Properties.
b. On the Dependencies tab, select the shared disk associated with the NetWorker server resource from the Dependencies list and then click OK.
c. In the Other Resources section, right-click New NetWorker server and
select Properties.
d. On the Dependencies tab, in the Resource list, select the name of the
NetWorker resource.
e. On the Parameter tab, in the NsrDir field, specify the path on the shared
drive in which NetWorker will create the nsr directory. For example, e:
\nsr.


Note

Leave the ServerName and AdditionalArguments fields blank.

f. Click OK.

NOTICE

Do not create multiple NetWorker server resources. Creating more than one
instance of a NetWorker Server resource interferes with how the existing
NetWorker Server resources function.

A dependency is set between the NetWorker server resource and the shared
disk.
13. Right-click the NetWorker cluster resource and select Start Role.
The NetWorker server resource starts.
14. Confirm that the state of the NetWorker Server resource changes to Online.

Changing the default timeout of NetWorker daemons


A NetWorker server failover occurs when the time to start up any NetWorker server daemon exceeds 10 minutes.
To prevent a failover, use the Failover Cluster Manager program to change the default timeout of the NetWorker daemons.
Procedure
1. Expand the cluster and then select Roles. On the Roles window, select the new
NetWorker role. On the Resources tab, right-click the New NetWorker Server
resource, then select Properties.
2. On the Parameters tab of the NetWorker Server cluster resource, edit the
value for the AdditionalArguments field and add the ServerStartupTimeout
keyword.
For example, ServerStartupTimeout=time
where time is a numeric value in seconds.

Note

The ServerStartupTimeout keyword is case sensitive.
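
For example, to allow the daemons 20 minutes to start before a failover is triggered, you might set the following value in the AdditionalArguments field; 1200 is an illustrative value, not a recommendation:

ServerStartupTimeout=1200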

Configuring a cluster-aware NetWorker client


Before you begin
To install the NetWorker client software as cluster-aware:
Procedure
1. Install the NetWorker client software on the private disk of each node in the
cluster.
2. Configure each node in the cluster as a client of the NetWorker server. These
are physical cluster clients. If the NetWorker server is configured as a Cluster
resource, add the hostname and user of this NetWorker virtual server to these
Client resource attributes of the physical cluster clients:


l Remote Access
l Administrator
3. Configure each of the virtual servers in a cluster as a client of the NetWorker
virtual server. These are virtual cluster clients.
4. Add the hostname and user of each node to these Client resource attributes of
the virtual clients:
l Remote Access
l Administrator

SLES High Availability Extension


This section describes how to prepare the SLES High Availability Extension (SLES
HAE) cluster before you install the NetWorker software. This section also describes
how to configure the NetWorker client as a cluster-aware application, after you install
the NetWorker software on each physical node of the cluster.
The NetWorker Installation Guide describes how to install the NetWorker software.
SLES HAE provides three cluster management tools: Pacemaker GUI, HA Web
Konsole, and the crm shell.

Note

This section does not apply when you install NetWorker as a stand-alone application.

Preparing to install NetWorker on SLES cluster


Review this section before you install the NetWorker software on a Linux cluster.
Set the following environment variables:

export OCF_ROOT=/usr/lib/ocf

Update your profile file to make the change persistent across reboots.
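
As one possible way to persist the variable, you can append the export to a system-wide profile script; the file name networker.sh below is an arbitrary choice, not a documented requirement:

echo 'export OCF_ROOT=/usr/lib/ocf' >> /etc/profile.d/networker.sh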

Configuring a highly available NetWorker Server in the cluster


To configure a highly available NetWorker Server, you must configure each active
node and each passive node.
Before you begin
Perform the following steps on each physical node as the root user.
Procedure
1. To define the resource types that the NetWorker software requires, run the
cluster configuration script file, /usr/sbin/networker.cluster.
2. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory that is
provided during the install procedure. For example: /nsr.
3. At the Do you wish to configure for both NetWorker server and client?
> Yes or No [Yes]? prompt, type Yes.
4. At the In what path will the shared nsr directory be created/located?
prompt, specify the pathname of the globally mounted /nsr directory that contains the configuration information for the highly available NetWorker server. For example: /share1.
5. At the Enter the Logical Hostname to be used for NetWorker? prompt,
specify the published logical hostname for the highly available NetWorker
server. For example: clus_vir1.
To change the configuration at a later time, run the networker.cluster -r
option and then run the networker.cluster command again.
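
For example, a later reconfiguration runs the same script from the steps above:

/usr/sbin/networker.cluster -r
/usr/sbin/networker.cluster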

Note

The logical hostname should be a unique FQDN/shortname. The FQDN/shortname must resolve to a unique IP address.

6. On one node, create the required resource groups for the NetWorker resources:
a. Start the crm tool by typing:
crm configure

b. Create a file system resource for the nsr directory. For example, type:

primitive fs ocf:heartbeat:Filesystem \
operations $id="fs-operations" \
op monitor interval="20" timeout="40" \
params device="/dev/sdb1" directory="/share1" fstype="ext3"

c. Create an IP address resource for the NetWorker Server name. For example,
type:
primitive ip ocf:heartbeat:IPaddr \
operations $id="ip-operations" \
op monitor interval="5s" timeout="20s" \
params ip="10.5.172.250" cidr_netmask="255.255.254.0" nic="eth1"

d. Create the NetWorker Server resource. For example, type:


primitive nws ocf:EMC_NetWorker:Server \
operations $id="nws-operations" \
op monitor interval="100" timeout="100" \
op start interval="0" timeout="120" \
op stop interval="0" timeout="300" \
op migrate_to interval="0" timeout="60" \
op migrate_from interval="0" timeout="120" \
op meta-data interval="0" timeout="10" \
op validate-all interval="0" timeout="10" \
meta is-managed="true"

Adjust the timeout values, as required for your environment.


Note

For SLES 11 SP4 and SLES 15, do not include the following unsupported
default operations:

op meta-data interval="0" timeout="10" \
op validate-all interval="0" timeout="10" \

e. Define the NetWorker Server resource group that contains the file system,
NetWorker Server, and IP address resources. For example, type:
group NW_group fs ip nws

f. Commit the changes by typing:


commit
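
After the commit, a quick status check can confirm that the group and its resources come online; this sketch assumes the NW_group name from the example above:

crm status
crm resource status NW_group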

7. For SLES 11 SP4 only, perform the following steps:


a. Open the Pacemaker GUI.
b. Connect to the highly available cluster server by clicking Login to cluster, type the username and password, and then click OK.
c. Expand Configuration in the left navigation pane, and then click Resources.
d. Click the item NW_group, and then click Edit.
The Edit Group box appears.
e. On the Primitive tab, click the item nws, and then click Edit.
The Edit Primitive box appears.
f. On the Operations tab, click Add, and then select meta-data and validate-
all.
g. Click OK, and then exit the Pacemaker GUI.

Configuring a cluster-aware NetWorker client


A cluster-aware NetWorker client is aware of the clustered IP address and shared file
systems in a cluster. Perform these steps to configure a cluster-aware NetWorker
client, which allows you to create a client resource for the virtual node.
Before you begin
All the resources should be aggregated under a resource group. Perform the following
steps on each physical node as the root user.
Procedure
1. To define the resource types that the NetWorker software requires, run the
cluster configuration script file, /usr/sbin/networker.cluster.
2. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory that is
provided during the install procedure. For example: /nsr.
3. At the Do you wish to configure for both NetWorker server and client?
--> Yes or No [Yes]? prompt, type No.


Red Hat Enterprise Linux High Availability


This section describes how to prepare the Red Hat Enterprise Linux (RHEL) High
Availability Add-on before you install the NetWorker software. This section also
describes how to configure the NetWorker client as a cluster-aware application, after
you install the NetWorker software on each physical node of the cluster.

Note

This section does not apply when you install NetWorker as a stand-alone application.

The NetWorker Installation Guide describes how to install the NetWorker software.

Preparing to install NetWorker on RHEL


Review this section before you install the NetWorker software on RHEL.
Before you install and configure the NetWorker server software, perform the following tasks:
l Create a shared volume group and a logical volume in the cluster.
l Install the Conga web interface and start the luci service. For example:

yum install luci

service luci start
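
Optionally, to make the luci service start automatically after a reboot on RHEL 6, you can also enable it with chkconfig; this is a suggestion, not part of the documented procedure:

chkconfig luci on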

Configuring a highly available NetWorker server in a RHEL 6.x cluster


This section describes how to configure the NetWorker server software as a highly
available application and the NetWorker client as a cluster-aware application, after you
install the NetWorker software on each physical node of the cluster.
Before you begin
Perform the following steps on each physical node as the root user.
Procedure
1. To define the resource types that the NetWorker software requires, run the
cluster configuration script file, /usr/sbin/networker.cluster.
The cluster configuration script detects the Red Hat Cluster Manager.
2. At the Would you like to configure NetWorker for it [Yes]? prompt, type:
Yes.
3. At the Do you wish to continue? [Yes]? prompt, type: Yes.
The configuration script stops the NetWorker services.
4. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory
provided during the install procedure. For example: /nsr.
5. At the Do you wish to configure for both NetWorker server and client? Yes
or No [Yes]? prompt, type: Yes.
6. At the Do you wish to add now the site-specific values for:
NSR_SHARED_DISK_DIR and NSR_SERVICE_ID in /usr/sbin/nw_redhat?
Yes or No [Yes]? prompt, type Yes.


7. At the In what path will the shared nsr directory be created/located?
prompt, specify the pathname of the globally mounted /nsr directory that contains the configuration information for the highly available NetWorker server. For example: /vg1.
8. At the Enter the Logical Hostname to be used for NetWorker? prompt,
specify the published logical hostname for the highly available NetWorker
server. For example: clus_vir1.
To change the configuration at a later time, run the networker.cluster -r
option and then run the networker.cluster command again.

Note

The logical hostname should be a unique FQDN/shortname. The FQDN/shortname must resolve to a unique IP address.

The configuration script creates the nw_redhat file and the lcmap file.
9. Create a service group:
a. Connect to the Conga web interface.
b. On the Service tab, click Add.
c. In the Service Name field, specify a name for the resource. For example, rg1.
10. Add an LVM resource for the shared volume to the service group:
a. Click Add resource.
b. From the Global Resources drop-down, select HA LVM.
c. In the Name field, specify the name of the resource. For example,
ha_lvm_vg1.

d. In the Volume Group Name field, specify the name of the volume group for
the shared disk that contains the /nsr directory. For example, vg1.
e. In the Logical Volume Name field, specify the logical volume name. For example, vg1_lv.
11. Add a file system resource for the shared file system to the service group.
a. After the HA LVM Resource section, click Add Child Resource.
b. From the Global Resources drop-down, select Filesystem.
c. In the Name field, specify the name of the file system. For example, ha_fs_vg1.

d. In the Mount point field, specify the mount point. For example: /vg1.
e. In the Device, FS label or UUID field, specify the device information. For example, /dev/vg1/vg1_lv.
12. Add an IP address resource to the group:
a. After the Filesystem section, click Add Child Resource.
b. From the Global Resources drop-down, select IP Address.
c. In the IP Address field, specify the IP address of the virtual NetWorker
server.


d. Optionally, in the Netmask field, specify the netmask that is associated with the IP address.
13. Add a script resource to the group:
a. After the IP address section, click Add Child Resource.
b. From the Global Resources drop-down, select Script.
c. In the Name field, specify the name for the script resource. For example,
nwserver.

d. In the Path field, specify the path to the script file. For example, /usr/
sbin/nw_redhat.
14. Click Submit.

Configuring a highly available NetWorker Server in a RHEL 7.x cluster


This section describes how to configure the NetWorker Server software as a highly
available application after you install the NetWorker software on each physical node of
the cluster.
Before you begin
Set the following environment variable:

export OCF_ROOT=/usr/lib/ocf

Update the rc file to make the change persistent across reboots.


Perform the following steps on each physical node as the root user.
To configure a highly available NetWorker Server, you must configure each active
node and each passive node.
Procedure
1. To define the resource types that the NetWorker software requires, run the
cluster configuration script file, /usr/sbin/networker.cluster.
2. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory
provided during the install procedure. For example: /nsr.
3. At the Do you wish to configure for both NetWorker server and client? > Yes or No [Yes]? prompt, type Yes.
4. At the In what path will the shared nsr directory be created/located?
prompt, specify the pathname of the globally mounted /nsr directory that
contains the configuration information for the highly available NetWorker
server. For example: /share1.
5. At the Enter the Logical Hostname to be used for NetWorker? prompt,
specify the published logical hostname for the highly available NetWorker
Server. For example: clus_vir1.
To change the configuration at a later time, run the networker.cluster -r option and then run the networker.cluster command again.


Note

The logical hostname should be a unique FQDN/shortname. The FQDN/shortname must resolve to a unique IP address.

6. On one node, create the required resource groups for the NetWorker resources:
a. Create a file system resource for the nsr directory. For example, type:

pcs resource create fs ocf:heartbeat:Filesystem \
device="/dev/sdb1" directory="/share_storage" fstype=ext3 \
op monitor interval="20" timeout="40" \
--group NW_group

Note

--group NW_group adds the file system resource to the resource group.

b. Create an IP address resource for the NetWorker Server name. For example,
type:

pcs resource create ip ocf:heartbeat:IPaddr \
ip="192.168.8.108" cidr_netmask=24 nic="eno16777736" \
op monitor interval="5s" timeout="20s" \
--group NW_group

Note

--group NW_group adds the IP address resource to the resource group.

c. Create the NetWorker Server resource. For example, type:

pcs resource create nws ocf:EMC_NetWorker:Server \
op monitor interval="100" timeout="100" \
op start interval="0" timeout="120" \
op stop interval="0" timeout="300" \
op migrate_to interval="0" timeout="60" \
op migrate_from interval="0" timeout="120" \
op meta-data interval="0" timeout="10" \
op validate-all interval="0" timeout="10" \
meta is-managed="true" \
--group NW_group

Note

--group NW_group adds the NetWorker Server resource to the resource group.

7. If any resource fails to start, confirm that the shared volume is mounted. If the
shared volume is not mounted, manually mount the volume, and then reset the
status by typing the following command:
pcs resource cleanup nws
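
After the cleanup, you can verify the state of the resources; this sketch assumes the NW_group name from the examples above:

pcs status
pcs resource show NW_group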


Configuring a cluster-aware client


A cluster-aware NetWorker client is aware of the clustered IP address and shared file
systems in a cluster. Perform these steps to configure a cluster-aware NetWorker
client, which allows you to create a client resource for the virtual node.
Before you begin
All the resources should be aggregated under a resource group. Perform the following
steps on each physical node as the root user.
Procedure
1. To define the resource types that the NetWorker software requires, run the
cluster configuration script file, /usr/sbin/networker.cluster.
The cluster configuration script detects the Red Hat Cluster Manager.
2. At the Would you like to configure NetWorker for it [Yes]? prompt, type:
Yes.
3. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory
provided during the install procedure. For example: /nsr.
4. At the Do you wish to configure for both NetWorker server and client? Yes
or No [Yes]? prompt, type No.

Sun Cluster and Oracle Solaris Cluster


This section describes how to prepare the Sun Cluster or Oracle Solaris Cluster before
you install the NetWorker software. This section also describes how to configure the
NetWorker client as a cluster-aware application, after you install the NetWorker
software on each physical node of the cluster.
The NetWorker Installation Guide describes how to install the NetWorker software.

Note

This section does not apply when you install NetWorker as a stand-alone application.

Preparing to install NetWorker on Sun and Oracle Solaris Clusters


Review this section before you install the NetWorker software on Sun and Oracle Solaris Clusters.
Before you install the NetWorker software:
l Install Volume Manager software in the cluster. For example: Solaris Volume
Manager.
l Ensure that the PATH environment variable includes the /usr/sbin and /usr/cluster/bin directories (a sketch follows this list).
l Ensure that a resource group owns each globally mounted file system (except
the /global/.devices/... file system). To enable a resource group to own a globally
mounted file system (except the /global/.devices/... file systems), specify the file
system in only one NetWorker Client type resource. If you incorrectly configure
the ownership of global file systems in a NetWorker client type resource, then
multiple backup copies occur for each cluster node.
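
As a minimal sketch of the PATH requirement from the list above, you might add the following lines to the root profile on each node; the profile file location depends on the shell in use:

PATH=/usr/sbin:/usr/cluster/bin:$PATH
export PATH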


Configuring a cluster-aware NetWorker client


A cluster-aware NetWorker client is aware of the clustered IP address and shared file
systems in a cluster. Perform these steps to configure a cluster-aware NetWorker
client, which allows you to create a client resource for the virtual node.
Before you begin
All the resources should be aggregated under a resource group. Perform the following
steps on each physical node as the root user.
Procedure
1. To define the resource types that the NetWorker software requires, run the
cluster configuration script file /usr/sbin/networker.cluster.
2. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory
provided during the install procedure. For example: /nsr.
3. On one node in the cluster, create a resource group for the backup and a
resource instance for the LGTO.clnt resource:
a. Create a resource group:
clresourcegroup create resource_group_name
For example, to create the resource group backups, type:
clresourcegroup create backups

Note

A resource group must own all globally mounted file systems (except the /
global/.devices/... file systems). All globally mounted filesystems (except
the /global/.devices/... file systems) must have a NetWorker Client
resource type. A misconfigured file system results in multiple backup copies
for each cluster node.

b. Add the logical hostname resource type to the new resource group:
clreslogicalhostname create -g resource_group_name
logical_name
For example, when the logical hostname is clus_vir1, type:
clreslogicalhostname create -g backups clus_vir1

c. Optionally, create an instance of the SUNW.HAStoragePlus resource type:
l Determine if the HAStoragePlus resource type is registered within the
cluster:
clresourcetype list
l If required, register the HAStoragePlus resource type within the cluster:
clresourcetype register SUNW.HAStoragePlus
l Create the SUNW.HAStoragePlus resource:
clresource create -g resource_group_name -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=pathname_1,pathname_2[,...] \
-x AffinityOn=True hastorageplus


For example, to create the resource with mount points /global/db and /
global/space, type:
clresource create -g backups -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/db,/global/space \
-x AffinityOn=True hastorageplus
The Sun Cluster documentation provides more information about the
SUNW.HAStoragePlus resource and locally mounted global systems.

d. Create an instance of the LGTO.clnt resource:


clresource create -g resource_group_name -t LGTO.clnt \
-x clientname=virtual_hostname \
-x owned_paths=pathname_1,pathname_2[,...] client
where:
l virtual_hostname is the name of the resource used by the Sun Cluster
logical hostname (SUNW.LogicalHostname) or shared address
(SUNW.SharedAddress) that you want to configure as a virtual
hostname.
l owned_paths is a list of file systems or raw devices on a shared storage device to back up, separated by commas.
For example:
clresource create -g backups -t LGTO.clnt \
-x clientname=clus_vir1 \
-x owned_paths=/global/db,/global/space client
When the logical host resource name differs from the hostname it
specifies, define the clientname variable as the virtual hostname, then set
the network_resource property to the logical host resource name.
For example:
clresource create -g resource_group_name -t LGTO.clnt \
-x clientname=virtual_hostname \
-x network_resource=logical_host_resource_name \
-x owned_paths=pathname_1,pathname_2[,...] client
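
After you create the resources, you can bring the resource group online and check its state; this sketch assumes the backups group from the examples above, and the -M option (manage the group) is an assumption about your configuration:

clresourcegroup online -M backups
clresource status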

AIX HACMP/PowerHA SystemMirror


This section describes how to prepare the AIX HACMP/PowerHA SystemMirror
cluster before you install the NetWorker software. This section also describes how to
configure the NetWorker client as a cluster-aware application, after you install the
NetWorker software on each physical node of the cluster.
The NetWorker Installation Guide describes how to install the NetWorker software.

Note

This section does not apply when you install NetWorker as a stand-alone application.


Preparing to install NetWorker on HACMP/PowerHA SystemMirror


Review this section before you install the NetWorker software on HACMP/PowerHA
SystemMirror.
l To back up a physical client:
n Each node requires persistent IPs or an extra NIC that is configured outside of
the control of the HACMP environment.
n NetWorker requires an address that uniquely connects to a physical client. The service and boot addresses of HACMP for AIX do not meet this requirement because a cluster configured with IP address takeover (IPAT) replaces the boot address with the service address when a resource group is attached.
l If you use IP address takeover (IPAT) and you do not define a resource group,
then you must use the boot address to connect to the host. Service addresses are
associated with a resource group, not physical nodes.
l Set the hostname to the name equivalent to the address that the dedicated NIC of
the physical client uses. Configure this NIC as the primary network adapter, for
example, en0.
l The output of the hostname command on a computer must correspond to a pingable IP address, and the computer hostname must be set to the name equivalent of the address that the physical client's persistent IP or dedicated NIC uses. Whether you use a persistent IP or a dedicated NIC, you must use the primary network adapter (for example, en0).

Configuring a cluster-aware NetWorker client


A cluster-aware NetWorker client is aware of the clustered IP address and shared file
systems in a cluster. Perform these steps to configure a cluster-aware NetWorker
client, which allows you to create a client resource for the virtual node.
Before you begin
All the resources should be aggregated under a resource group. Perform the following
steps on each physical node as the root user.
Procedure
1. Run the cluster configuration script /usr/sbin/networker.cluster.
2. At the Do you wish to continue? [Yes]? prompt, type Yes.
3. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory you
provided when you installed NetWorker. For example: /space/nsr.

HP MC/ServiceGuard
This section describes how to prepare the HP MC/ServiceGuard cluster before you
install the NetWorker software. This section also describes how to configure the
NetWorker client as a cluster-aware application, after you install the NetWorker
software on each physical node of the cluster.
The NetWorker Installation Guide describes how to install the NetWorker software.


Note

This section does not apply when you install NetWorker as a stand-alone application.

Preparing to install NetWorker on MC/ServiceGuard


Review this section before you install the NetWorker software on MC/ServiceGuard.
l To ensure the cluster services automatically start after a reboot, set the
AUTOSTART_CMCLD=1 value in the /etc/rc.config.d/cmcluster file.
l For HP-UX 11.11/ServiceGuard 11.16 only, perform the following steps to ensure
that the NetWorker daemons start:
1. Edit the /opt/networker/bin/nsr_mk_cluinfo.sg file.
2. Search for the following line:
FS=`cmgetconf -v 0 -p ${pkg_name}
3. Remove the 0 from the -v option:
FS=`cmgetconf -v -p ${pkg_name}
4. Save the file.
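As an alternative to editing the file by hand, the following sketch makes the same change with sed after saving a backup copy; the paths are as shown in the steps above:

cp /opt/networker/bin/nsr_mk_cluinfo.sg /opt/networker/bin/nsr_mk_cluinfo.sg.orig
sed 's/cmgetconf -v 0 -p/cmgetconf -v -p/' /opt/networker/bin/nsr_mk_cluinfo.sg.orig > /opt/networker/bin/nsr_mk_cluinfo.sg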

Configuring NetWorker on MC/ServiceGuard


After you install the NetWorker software on each physical node, you can use the LC
integration framework method or the non-LC integration framework method to
configure the NetWorker software.

LC integration framework only


This section describes the benefits of using the LC integration framework method
when you configure NetWorker in the cluster.
The benefits of using the LC integration framework method include:
l Support for multiple IPs in one package.
l Support for the lcmap caching mechanism.
l Does not require the creation and configuration of the NetWorker.clucheck
and .nsr_cluster files. The configuration process automatically creates and
uses the nsr_mk_cluinfo and lcmap files in the /opt/networker/bin
directory.

Non-LC integration framework method only—Creating configuration files


This section describes how to create the configuration files that the non-LC
integration framework method requires when you configure NetWorker in the cluster.
Procedure
1. On the active node, create the NetWorker.clucheck and .nsr_cluster
files in the /etc/cmcluster directory.
For example:

touch /etc/cmcluster/NetWorker.clucheck
touch /etc/cmcluster/.nsr_cluster


Note

Ensure everyone has read ownership and access permissions for the .nsr_cluster file.

2. Define the mount points that the MC/ServiceGuard or MC/LockManager
package owns in the .nsr_cluster file. Include the NetWorker shared mount
point.
For example:

pkgname:published_ip_address:owned_path[:...]

where:
l pkgname is the name of the package.
l published_ip_address is the IP address assigned to the package that owns
the shared disk. Enclose IPv6 addresses in square brackets. You can enclose
IPv4 addresses in square brackets, but it is not necessary.
l owned_path is the path to the mount point. Separate additional paths with a
colon.
For example:
l IPv6 address:

client:[3ffe:80c0:22c:74:6eff:fe4c:2128]:/share/nw

l IPv4 address:

client:192.168.109.10:/share/nw

Note

An HP-UX MC/ServiceGuard package that does not contain a disk resource
does not require an entry in the .nsr_cluster file. If an online diskless
package is the only package on that cluster node, cmgetconf messages may
appear in the /var/admin file during a backup.
To avoid these messages, allocate a mounted file system to a mount point,
then add this mount point, the package name, and the IP address to
the .nsr_cluster file. The NetWorker software does not back up the file
system. However, you can mount the file system on each cluster node that
the diskless package might fail over to.

3. Copy the NetWorker.clucheck and .nsr_cluster files to the
/etc/cmcluster directory on each passive node.
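For example, assuming a passive node named clus_phys2 and remote copy over ssh (the node name and transport are illustrative):

scp /etc/cmcluster/NetWorker.clucheck /etc/cmcluster/.nsr_cluster clus_phys2:/etc/cmcluster/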


Configuring a cluster-aware NetWorker client


A cluster-aware NetWorker client is aware of the clustered IP address and shared file
systems in a cluster. Perform these steps to configure a cluster-aware NetWorker
client, which allows you to create a client resource for the virtual node.
Before you begin
All the resources should be aggregated under a resource group. Perform the following
steps on each physical node as the root user.
Procedure
1. To define the resource types that the NetWorker software requires, run the
cluster configuration script file /opt/networker/bin/
networker.cluster.
2. At the Do you wish to continue? [Yes]? prompt, type Yes.
3. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory
provided during the install procedure.
4. At the Do you wish to use the updated NetWorker integration framework?
Yes or No [Yes]? prompt:
l To use the non-LC integration method, type No.
l To use the LC integration method, type Yes.

VERITAS Cluster Server


This section describes how to configure the NetWorker client or server as a cluster-
aware application on a VERITAS Cluster Server (VCS). Before configuration, you must
install the NetWorker software separately on each physical node of the cluster.
The NetWorker Installation Guide describes how to install the NetWorker software.

Note

This section does not apply when you install NetWorker as a stand-alone application.

Preparing to install NetWorker on VERITAS cluster


Review this section before you install the NetWorker software on a Linux or Solaris
VERITAS cluster.
l When the VERITAS Cluster Server installation and configuration directories are
not the default directories, set the following environment variables:
n VCS_HOME
The default directory is /opt/VRTSvcs.
n VCS_CONF
The default directory is /etc/VRTSvcs.
l Ensure that the PATH environment variable includes the /usr/sbin and
$VCS_HOME/bin directories. The default $VCS_HOME/bin directory is
/opt/VRTSvcs/bin.


Note

l Create mount points on all nodes. For example, on Linux, /dg1vol1 should be
created on all nodes.
l When you configure vxfs, use dsk instead of rdsk for the block device.
For example: /dev/vx/dsk/dg2/dg2vol1

Configuring a highly available NetWorker server


First run the NetWorker cluster configuration script file and then create a NetWorker
resource group.

Creating the service group


This section provides a high-level overview of how to create and configure the
NetWorker server service group.
l Add the IP type resource. Use the IP address for the virtual NetWorker server
specified in the NetWorker service group.
l For Windows and VxVM: Add the VMDg and MountV type resources for the
shared disk to the NetWorker service group.
l For Solaris and Linux: Add the Mount type resource for the shared disk to the
NetWorker service group.
l Set the CleanProgramTimeout attribute of the NetWorker server process to a
minimum value of 180. Set the StopProgramTimeout attribute to a minimum
value of 120. A command-line sketch follows this list.
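A minimal sketch of one way to set these timeouts from the command line, assuming the NetWorker server resource is named nw_server and that these attributes are tunable per resource in your VCS version (verify with hares -display before relying on this):

haconf -makerw
hares -modify nw_server CleanProgramTimeout 180
hares -modify nw_server StopProgramTimeout 120
haconf -dump -makero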

Example 1 An instance of a NetWorker resource group definition on Linux

The following example shows an instance of the NetWorker resource group defined in
the /etc/VRTSvcs/conf/config/main.cf VCS cluster configuration file.

group networker (
SystemList = { arrow = 0, canuck = 1 }
)
Application nw_server (
StartProgram = "/usr/sbin/nw_vcs start"
StopProgram = "/usr/sbin/nw_vcs stop"
CleanProgram = "/usr/sbin/nw_vcs stop_force"
MonitorProgram = "/usr/sbin/nw_vcs monitor"
MonitorProcesses = {"/usr/sbin/nsrd -k Virtual_server_hostname"}
)
DiskGroup dg1 (
DiskGroup = dg1
)

IP NW_IP (
Device = eth0
Address = "137.69.104.104"
)
Mount NW_Mount (
MountPoint = "/mnt/share"
BlockDevice = "/dev/sdc3"
FSType = ext2
FsckOpt = "-n"
)
NW_Mount requires dg1
NW_IP requires NW_Mount
nw_server requires NW_IP
// resource dependency tree
//
// group networker
// {
// Application nw_server
// {
// IP NW_IP
// {
// Mount NW_Mount
// {
// DiskGroup dg1
// }
// }
// }
// }

Example 2 An instance of a NetWorker resource group definition on Windows

The following example shows an instance of the NetWorker resource group defined in
the C:\Program Files\Veritas\cluster server\conf\config\main.cf
VCS cluster configuration file.

group networker (
SystemList = { BU-ZEUS32 = 0, BU-HERA32 = 1 }
)
IP NWip1 (
Address = "10.5.163.41"
SubNetMask = "255.255.255.0"
MACAddress @BU-ZEUS32 = "00-13-72-5A-FC-06"
MACAddress @BU-HERA32 = "00-13-72-5A-FC-1E"
)
MountV NWmount1 (
MountPath = "S:\\"
VolumeName = SharedVolume1
VMDGResName = NWdg_1
)
Process NW_1 (
Enabled = 0
StartProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe start"
StopProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe stop"
CleanProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe stop_force"
MonitorProgram = "D:\\program files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe monitor"
UserName = "bureng\\administrator"
Password = BHFlGHdNDpGNkNNnF
)
VMDg NWdg_1 (
DiskGroupName = "32dg1"
)
NWip1 requires NWmount1
NWmount1 requires NWdg_1
NW_1 requires NWip1
// resource dependency tree


//
// group networker
// {
// Process NW_1
// {
// IP NWip1
// {
// MountV NWmount1
// {
// VMDg NWdg_1
// }
// }
// }
// }

Configuring the NetWorker software on Windows


Before you begin
Perform the following steps on each physical node as the administrator user.
Procedure
1. Bring the NetWorker server service group online.
2. To define the resource types that the NetWorker software requires, run the
cluster configuration binary, NetWorker_installation_path
\lc_config.exe.
3. At the Do you want to configure NetWorker virtual server?[y/n] prompt,
type Yes.
4. At the Enter shared nsr dir: prompt, specify the pathname of the shared nsr
directory that will contain the configuration information for the highly available
NetWorker server. For example: S:\nsr.
5. At the Enter the directory in which your Veritas Cluster Server software is
installed (typically something like C:\Program Files\Veritas\cluster
server): prompt, specify the location where you installed the Veritas Cluster
Server.
6. At the Is this OK [y/n] prompt, type Y to update the configuration.

Note

To change the configuration at a later time, run lc_config.exe -r, and then run
lc_config.exe again.

Configuring NetWorker on Linux


Before you begin
Perform the following steps on each physical node as the root user.
Procedure
1. To define the resource types that the NetWorker software requires, run the
cluster configuration script file /usr/sbin/networker.cluster.
2. At the Veritas Cluster Server is detected. Would you like to configure
NetWorker for it [Yes]? prompt, type Yes.


3. At the Do you wish to continue? [Yes]? prompt, type Yes.


4. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory that you
provided when you installed NetWorker. For example: /space/nsr.
5. At the Do you want to configure NetWorker virtual server?[y/n] prompt,
type Yes.
6. At the Do you wish to add now the site-specific values for:
NSR_SHARED_DISK_DIR and NSR_SERVICE_ID Yes or No [Yes]? prompt,
type Yes to ensure compatibility with other cluster environments.
7. At the In what path will the shared nsr directory be created/located?
prompt, specify the pathname of the globally mounted /nsr directory that
contains the configuration information for the highly available NetWorker
server. For example: /global/nw.
8. At the Enter the Logical Hostname to be used for NetWorker? prompt,
specify the published logical hostname that the highly available NetWorker
server uses. For example: clus_vir1.

Note

l The logical hostname should be a unique FQDN/shortname. The FQDN/shortname
must resolve to a unique IP address.
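For example, one way to confirm the resolution on Linux, using clus_vir1 from the examples above:

getent hosts clus_vir1

The command should return a single line with one IP address, for example (address illustrative): 192.168.109.10 clus_vir1.corp.com clus_vir1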

Note

To change the configuration at a later time, run networker.cluster -r, and then run
networker.cluster again.

Adding the NetWorker server resource to the NetWorker service group


The NetWorker server is an Application resource type on Linux and a Process resource
type on Windows. Add these resource types to the NetWorker service group.
Before you add the NetWorker server resource to the NetWorker service group,
ensure that the following dependencies are resolved.
l For Linux:
n Application resource depends on the IP resource.
n IP resource depends on the Mount resource.
l For Windows systems:
n Process resource depends on the IP resource.
n IP resource depends on the MountV resource.

Example 3 NWserver resource on VCS for Linux

The following example shows an instance of the Application resource type defined on
a Linux VCS cluster.

Resource type: Application

Attributes:

User = root
StartProgram = "/usr/sbin/nw_vcs start"
StopProgram = "/usr/sbin/nw_vcs stop"
CleanProgram = "/usr/sbin/nw_vcs stop_force"
MonitorProgram = "/usr/sbin/nw_vcs monitor"
MonitorProcesses = "/usr/sbin/nsrd -k Virtual_server_hostname"

Example 4 NWserver resource on VCS for Windows

The following example shows an instance of the Process resource type defined on a
Windows VCS cluster.

Resource type: Process

Attributes:
StartProgram = "C:\program files\EMC NetWorker\nsr\bin\nw_vcs.exe start"
StopProgram = "C:\program files\EMC NetWorker\nsr\bin\nw_vcs.exe stop"
CleanProgram = "C:\program files\EMC NetWorker\nsr\bin\nw_vcs.exe stop_force"
MonitorProgram = "C:\program files\EMC NetWorker\nsr\bin\nw_vcs.exe monitor"
UserName = "<administrator user name>"
Password = "<administrator password>"
Domain = "<Active Directory domain name>"

Configuring NetWorker Client on a VERITAS cluster


This section describes how to configure the NetWorker client as a cluster-aware
application on a VERITAS Cluster Server (VCS), after you install the NetWorker
software on each physical node of the cluster.

Creating NetWorker Client resource instances


This section applies to Windows and UNIX.
Procedure
l A NetWorker virtual server requires an instance of the NWClient resource type in
any VCS group that:
n Contains raw devices or raw logical volumes to back up.
n Contains more than one IP type resource.
n Contains storage resources that are not automatically detected. For example:
– Storage resources defined in dependent groups.
– Storage resources that are not of the type Mount or CFSmount.
l Optionally create an instance of the NWClient resource type for a NetWorker
virtual server in the following configurations:
n The failover VCS group has only one IP type resource.

40 NetWorker 19.1 Cluster Integration Guide


Configuring the Cluster

n The owned file systems on the shared devices are instances of the mount type
resource contained in the same service group.

About the NWClient resource


Before you create an NWClient resource, review this section to become familiar with
the structure of the NWClient resource.
The following table describes the required NWClient resource attributes.

Table 3 NWClient resource type attributes

Required attribute    Type and dimension    Definition

IPAddress             string, scalar        IP address of the virtual NetWorker client. An IP type resource with a matching Address attribute must exist in the service group.

Owned_paths           string, vector        A list of file systems or raw devices on a shared storage device. The virtual NetWorker client that the IPAddress attribute specifies owns these file systems or raw devices.

Example 5 NWClient resource sample configuration

The following is a sample of a configured NWClient resource:

NWClient nw_helene (
IPAddress="137.69.104.251"
Owned_paths={ "/shared1", "/shared2", "/dev/rdsk/c1t4d0s4" }
)

Configuring a cluster-aware NetWorker client on Windows


A cluster-aware NetWorker client is aware of the clustered IP address and shared file
systems in a cluster. Perform these steps to configure a cluster-aware NetWorker
client, which allows you to create a client resource for the virtual node.
All the resources should be aggregated under a resource group. Log in to each
physical node as the administrator user, and define the resource types that the
NetWorker software requires by running the cluster configuration binary
NetWorker_installation_path\lc_config.exe.

Configuring a cluster-aware NetWorker client on Solaris and Linux


A cluster-aware NetWorker client is aware of the clustered IP address and shared file
systems in a cluster. Perform these steps to configure a cluster-aware NetWorker
client, which allows you to create a client resource for the virtual node.
Before you begin
All the resources should be aggregated under a resource group. Perform the following
steps on each physical node as the root user.


Procedure
1. To define the resource types that the NetWorker software requires, run the
cluster configuration script file /usr/sbin/networker.cluster.
2. At the Would you like to configure NetWorker for it [Yes]? prompt, type Yes.
3. At the Do you wish to continue? [Yes]? prompt, type Yes.
4. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory that you
provided when you installed NetWorker. For example: /space/nsr.

Registering the resource type and creating resource instances


Register the NWClient resource and create NWClient resource instances on Windows
and UNIX.
Before you begin
Perform the following steps as the root user on UNIX or the administrator user on
Windows.
Procedure
1. To save the existing VCS configuration and prevent further changes while you
modify the main.cf file, type:
haconf -dump -makero

2. To stop the VCS software on all nodes and leave the resources available, type:
hastop -all -force

3. To make a backup copy of the main.cf file:


l For UNIX systems, type:
cd /etc/VRTSvcs/conf/config
cp main.cf main.cf.orig
l For Windows systems, type:
cd C:\Program Files\Veritas\cluster server\conf\config
cp main.cf main.cf.orig
4. To copy the NWClient resource definition file to the VCS configuration
directory:
l For UNIX systems, type:
cp /etc/VRTSvcs/conf/NWClient.cf /etc/VRTSvcs/conf/config/NWClient.cf
l For Windows systems, type:
cp C:\Program Files\Veritas\cluster server\conf\NWClient.cf C:\Program Files\Veritas\cluster server\conf\config\NWClient.cf
5. To add the NWClient resource type and the NWClient resource type instances
to the main.cf file:
a. Add the following line to the main.cf file:
include "NWClient.cf"


b. Save and close the file.

c. To verify the syntax of the main.cf file, type:
hacf -verify config

d. To start the VCS engine, type:
hastart

e. Log in on the remaining nodes in the cluster, and then type:
hastart

f. To verify the status of all service groups, type:
hagrp -display

g. Add a NWClient resource instance for each service group that requires the
resource.
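Pulling the pieces together, a minimal sketch of the main.cf additions, reusing the group from Example 1 and the NWClient instance from Example 5 (all names and values are illustrative):

include "NWClient.cf"

group networker (
SystemList = { arrow = 0, canuck = 1 }
)
NWClient nw_helene (
IPAddress = "137.69.104.251"
Owned_paths = { "/shared1", "/shared2" }
)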

Troubleshooting configuration
This section describes how to troubleshoot NetWorker configuration issues in a
cluster.

Slow backups
The lcmap program, queries cluster nodes and creates a map that includes
information such as path ownership of resource groups. In large cluster
configurations, lcmap may take a long time to complete and thus slow down certain
operations. This is most often noticed in very long backup times.
In these situations, consider adjusting cluster cache timeout. This attribute specifies a
time, in seconds, in which to cache the cluster map information on a NetWorker client.
Edit the cluster cache timeout attribute with caution. Values for the attribute can vary
from several minutes to several days and depend on the following factors:
l How often the cluster configuration changes.
l The possibility of resource group failover.
l The frequency of NetWorker operations.
If you set the value too large, then an out-of-date cluster map can result and cause
incorrect path resolution. For example, if the cluster cache timeout value is set to
86400 (one day), then any changes to the cluster map will not be captured for up to
one day. If cluster map information changes before the next refresh period, then some
paths may not resolve correctly.

Note

If you set the value too small, then cache updates can occur too frequently, which
negatively affects performance. Experiment with one physical cluster node to find a
satisfactory timeout value. If you cannot obtain a significant improvement in
performance by adjusting this attribute, then reset the attribute value to 0 (zero).
When the attribute value is 0, NetWorker does not use the attribute.


Editing the cluster cache timeout attribute


The cluster cache timeout attribute resides in the NSRLA database of the NetWorker
client and is visible only when NetWorker is configured for a cluster. For example on
UNIX, a NetWorker client is configured for a cluster when the networker.cluster script
runs and the nsrexecd program restarts.
To edit the cluster cache timeout value, perform these steps on each physical node as
the root user on UNIX or an administrator on Windows:
Procedure
1. Connect to the NSRLA database:
nsradmin -p nsrexecd

2. Display the current settings for attributes in the NSRLA resource. For example,
type:
print type:NSRLA

3. Change the value of the cluster cache timeout attribute. For example, type:
update cluster cache timeout: value
where value is the timeout value in seconds. A value of 0 (zero) specifies that
the cache is not used.

4. When prompted to confirm the change, type:
Yes

5. To confirm that the attribute was updated, type:
print type:NSRLA
NetWorker updates the NSRLA database with the new cache value. The
updated value takes effect after the next cache update, which is based on the
previous timeout value.

6. To make the timeout value take effect immediately, delete the cache file on the
physical node:
l UNIX: /tmp/lcmap.out
l Windows: NetWorker_install_path\nsr\bin\lcmap.out
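For example, a session that sets a one-hour cache might look like the following sketch; 3600 is an illustrative value, and the print output is abbreviated:

nsradmin -p nsrexecd
nsradmin> print type:NSRLA
nsradmin> update cluster cache timeout: 3600
Update? yes
nsradmin> print type:NSRLA
nsradmin> quit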

NetWorker virtual server fails to start nsrmmd


When the NetWorker virtual server cannot start an nsrmmd process on a NetWorker
storage node, then a message similar to the following appears in the NetWorker server
daemon.raw file:

06/08/00 10:00:11 nsrmon #217: connect to nsrexec prog 390113 vers 1 on `uranus' failed: RPC error: Remote system error
06/08/00 10:00:11 nsrd: media notice: check storage node: uranus (RPC error: Remote system error)
06/08/00 10:00:11 nsrd: media info: restarting nsrmmd #1 on uranus in 2 minute(s)
06/08/00 10:02:12 nsrd: media info: restarting nsrmmd #1 on uranus now


06/08/00 10:02:42 nsrmon #183: connect to nsrexec prog 390113 vers 1 on `

The error also appears when the nsrexecd daemon on a UNIX host or the NetWorker
Remote Exec service on a Windows host is not running on the storage node.
To resolve this issue, start the nsrexecd process on UNIX or the NetWorker Remote
Exec service on Windows.

System hostname key is missing from the Lockbox


In the Clustered NetWorker environment, during resource failover, a message similar
to the following appears in the NetWorker server daemon.raw file.

The system hostname key is missing from the Lockbox

To resolve this issue, add the following to the Remote access list:

For Windows:

administrator@clus_phy1_IP
administrator@clus_phy1_shortname
administrator@clus_phy1_FQDN
administrator@clus_phy2_IP
administrator@clus_phy2_shortname
administrator@clus_phy2_FQDN
system@clus_phy1_IP
system@clus_phy1_shortname
system@clus_phy1_FQDN
system@clus_phy2_IP
system@clus_phy2_shortname
system@clus_phy2_FQDN

For Linux:

root@clus_phy1_IP
root@clus_phy1_shortname
root@clus_phy1_FQDN
root@clus_phy2_IP
root@clus_phy2_shortname
root@clus_phy2_FQDN

Note

Replace IP, shortname and FQDN with current environmental details.


REST API does not work after NetWorker server cluster fail-over
REST API does not work after NetWorker server cluster fail-over.
To resolve this issue, add the LDAPS certificate to the keystore by using the
following command:

/usr/java/latest/bin/keytool -import -alias aliasname.domain \
-keystore /usr/java/latest/lib/security/cacerts \
-storepass changeit -file CA.txt



CHAPTER 3
Configuring Devices for a Highly Available
NetWorker Server

NetWorker supports the use of tape, AFTD, and Data Domain devices to back up
cluster host data. This chapter describes three common configuration scenarios when
using autochangers and tape devices to back up a highly available NetWorker server.
The information that describes how to configure AFTD and Data Domain devices in the
NetWorker Administration Guide and the NetWorker Data Domain Boost Integration Guide
applies to clustered and non-clustered hosts.

l Configuring an autochanger with shared tape devices
l Configuring an autochanger with non-shared tape devices
l Configuring the robotics on a stand-alone host


Configuring an autochanger with shared tape devices


In this configuration, the NetWorker virtual server manages the robotic arm.
NetWorker uses Dynamic Drive Sharing (DDS) to allow the virtual node and each
physical node to share tape devices. Each physical and virtual node sends backup
data directly to a tape device and not over the network. Use this configuration when
most of the backup data originates from the inactive physical node.
Before you configure a shared autochanger and DDS devices, perform the following
steps:
Procedure
1. Ensure that the device-sharing infrastructure supports complete isolation and
protection of the path session between the autochanger and the node that
owns the NetWorker server resource. Protect the path from stray bus signals
and unauthorized session access from the other nodes.

Note

If processes on nodes other than the one that owns the NetWorker server
resource can access the tape devices, data corruption might occur. The
NetWorker software might not detect the data corruption.

2. Zone the robotic arm and all drives to each physical node in the cluster.
3. Configure the same path (bus, target and LUNs) to the robotics and tape drives
on each node.
4. If you configured the bridge with node device-reassignment reservation
commands, then add these commands to the nsrrc startup script on the
NetWorker virtual server. The NetWorker Administration Guide describes how to
modify the nsrrc script.
5. Install the cluster vendor-supplied special device file for the robotic arm on each
physical node. The special device file creates a link to the tape or autochanger
device driver. Ensure that the name that is assigned to the link is the same on
each node for the same device. If you do not have matching special device files
across cluster nodes, you might be required to install fibre HBAs in the same
PCI slots on all the physical nodes within the cluster.
The following figure provides a graphical view of this configuration option.


Figure 2 Autochanger with shared devices

6. To configure the autochanger and devices by using the NMC device
configuration wizard, specify the hostname of the virtual server, clus_vir1, when
prompted for the storage node name and the prefix name. The NetWorker
Administration Guide describes how to use NMC to configure autochangers and
devices.
7. To configure the autochanger and devices by using the jbconfig command,
run jbconfig -s clus_vir1 on the physical node that owns the NetWorker
server resource.
a. When prompted for the hostname to use as a prefix, specify the virtual
server name, clus_vir1.
b. When prompted to configure shared devices, select Yes.
The NetWorker Administration Guide describes how to use NMC to configure
autochangers and devices.
8. The storage node attribute value for each host is as follows:
l clus_phys1: clus_phys1
l clus_phys2: clus_phys2
l clus_vir1: nsrserverhost
Configuring backup and recovery describes how to configure the Client
resource for each cluster node.
9. When a failover occurs, NetWorker relocates and restarts savegroup operations
that were in progress on the failover node. However, standard autochanger
operations (for example, performing an inventory, labeling, mounting, or
unmounting a volume) do not automatically restart on the new failover node.

Configuring an autochanger with non-shared tape devices


In this configuration, the robotic arm and tape devices are configured for the virtual
node only. The NetWorker virtual server and the physical node that owns the


NetWorker server resource sends backup data directly to the tape devices. The
inactive physical node sends backup data to the tape devices over the network. Use
this configuration when most of the backup data originates from the active physical
node, the shared disk resource, and hosts external to the cluster.
The following figure provides a graphical view of this configuration option.
Figure 3 Autochanger with non-shared devices

In this example, use the following procedure to configure an autochanger with non-
shared tape devices:

Procedure
1. To configure the autochanger and devices by using the NMC device
configuration wizard, specify the hostname of the virtual server, clus_vir1, when
prompted for the storage node name and the prefix name. The NetWorker
Administration Guide describes how to use NMC to configure autochangers and
devices.
2. To configure the autochanger and devices by using the jbconfig command,
run jbconfig -s clus_vir1 on the physical node that owns the
NetWorker server resource.
l When prompted for the hostname to use as a prefix, specify the virtual
server name, clus_vir1.
l When prompted to configure shared devices, select Yes. The NetWorker
Administration Guide describes how to use jbconfig to configure
autochangers and devices.

3. The storage node attribute value for each host is as follows:


l clus_phys1: nsrserverhost
l clus_phys2: nsrserverhost


l clus_vir1: nsrserverhost
The "Configuring backup and recovery" chapter describes how to configure the
Client resource for each cluster node.

Configuring the robotics on a stand-alone host


You can set up a stand-alone physical host as a Storage Node outside the cluster to
control the robotic arm when you cannot match bus target LUNs across the cluster
nodes or when you do not have a NetWorker Server within the cluster. The stand-
alone physical host can control the robotic arm through a Fibre Channel or SCSI
connection. Each node in the cluster sends backup data over the network to the tape
devices. The NetWorker virtual server requires a local device to back up the indexes
and bootstrap.
The following figure provides a graphical view of this configuration option.
Figure 4 External stand-alone storage node

In this example, use the following procedure to configure a stand-alone storage node:
l The NetWorker virtual server uses local device AFTD1 to back up the bootstrap
and indexes.
l To configure the autochanger and devices by using the NMC device configuration
wizard, specify the hostname of the stand-alone host, ext_SN, when prompted for
the storage node name and the prefix name.
l To configure the autochanger and devices by using the jbconfig command, run
jbconfig -s clus_vir1 on the ext_SN host. The NetWorker Administration Guide
describes how to use jbconfig to configure autochangers and devices.
n When prompted for the hostname to use as a prefix, specify the external
storage node, ext_SN.


n When prompted to configure shared devices, select Yes.


l The Storage nodes attribute value in the Client resource for each host is as
follows:
n clus_phys1: clus_phys1
n clus_phys2: clus_phys2
n clus_vir1: nsrserverhost
The "Configuring backup and recovery" chapter describes how to configure the Client
resource for each cluster node.



CHAPTER 4
Configuring Backup and Recovery

This chapter describes how to back up virtual and physical nodes in a cluster, and how
to configure a virtual client to back up to a local storage node.

Note

NetWorker supports the use of multiple IP addresses for a resource group (resource
service for MC/ServiceGuard). However, use only one of these IP addresses to
configure the virtual client resource. The name of the NetWorker Client resource can
be the short name, the FQDN corresponding to the IP address, or the IP address. For
example: resgrp1 is a resource group that is defined in a cluster and there are two IP
resources defined in the group, IP1 and IP2. If the IP address for IP1 is defined as a
NetWorker Client resource, then all shared paths in resgrp1 are saved under the IP
address for IP1 index.

l Setting NetWorker environment variables in a cluster
l Limiting NetWorker server access to a client
l Configuring the NetWorker virtual server
l Creating client resources for physical node backups
l Creating a client resource for virtual client backups
l Configuring a backup device for the NetWorker virtual server
l Performing manual backups of a cluster node
l Troubleshooting backups
l Recovering data
l Troubleshooting recovery


Setting NetWorker environment variables in a cluster


In a UNIX cluster, specify environment variables for a highly-available NetWorker
Server in the global /nsr/nsrrc file. The NetWorker Administration Guide describes
how to use the /nsr/nsrrc file.
To define environment variables for the cluster-aware or stand-alone UNIX NetWorker
host, modify or create the /nsr/nsrrc file in the local /nsr directory.
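For example, a minimal sketch of an nsrrc file that exports one variable; the nsrrc file is a Bourne shell script that the NetWorker daemons read at startup, and the variable shown here is illustrative:

NSR_DEV_BLOCK_SIZE_LTO=256
export NSR_DEV_BLOCK_SIZE_LTO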

Limiting NetWorker server access to a client


By default, any NetWorker Server can back up a NetWorker host and perform a
directed recover to any NetWorker host. Use the servers files on a NetWorker host to
limit NetWorker Server access.
A highly available NetWorker Server or cluster-aware client uses multiple servers files.
To limit NetWorker Server access to a cluster node, you must create and edit these
servers files:
l Global servers file, located on the shared drive.
l Local servers file for each physical cluster node.
A stand-alone NetWorker application on a cluster node uses one servers file,
located in /nsr/res/servers on UNIX or
NetWorker_installation_path\nsr\res\servers on Windows.
To limit NetWorker server access to a cluster node:
Procedure
1. For a highly available NetWorker Server or cluster-aware NetWorker Client,
take the NetWorker virtual server offline on the active cluster node:
l For MSFCS on Windows 2008 R2 SP1, in the Failover Cluster Management
program, right-click on the NetWorker cluster service and select Take this
service or application Offline.
l For MSFCS on Windows 2012, 2016, and 2019 in the Failover Cluster
Management program, right-click the NetWorker cluster resource and
select Stop Role.
2. On each node, stop the NetWorker processes:
l From a command prompt on Linux or UNIX, type: nsr_shutdown
l On Windows, stop the NetWorker Remote Exec service. This also stops the
NetWorker Backup and Recover service on a NetWorker server.
3. On each physical node, edit or create the servers file:
l UNIX: /nsr/res/servers
l Windows: NetWorker_installation_path\nsr\res\servers

4. Specify the shortname and FQDN for each NetWorker Server, one per line, that
requires access to the NetWorker host.
When the NetWorker Server is highly available:

a. Add an entry for the NetWorker logical or virtual hostname first.


b. Add entries for each physical host.


For example:
clus_vir1
clus_vir1.corp.com
clus_phys1
clus_phys1.corp.com
clus_phys2
clus_phys2.corp.com

When the servers file does not contain any hosts, any NetWorker Server can
back up or perform a directed recovery to the host.

5. On the node with access to the shared disk, edit the global servers file.

Note

Ensure that the hostnames defined in the global servers file are the same as the
local servers file on each physical node.

6. For Linux only, edit the NetWorker boot-time startup file,
/etc/init.d/networker, and delete any nsrexecd -s arguments that exist.
For example, when the /etc/init.d/networker file contains the following
entry:

nsrexecd -s venus -s mars

Modify the file so the entry appears as:

nsrexecd

7. Start the NetWorker daemons on each node.


8. For a highly available NetWorker host only, bring the NetWorker application
online:
l For MSFCS on Windows 2008 R2 SP1, in the Failover Cluster Management
program, right-click on the NetWorker cluster service, and then select Bring
this service or application online.
l For MSFCS on Windows 2012, 2016, and 2019 in the Failover Cluster
Management program, right-click the NetWorker cluster resource, and then
select Start Role.
Confirm that the state of the NetWorker server resource changes to Online.

Configuring the NetWorker virtual server


This section applies only to a highly available NetWorker Server, and describes how to
configure the NetWorker virtual server and how to back up the shared disk.
NetWorker supports the use of multiple IP addresses for a resource group. However,
use only one of these IP addresses to configure the virtual client resource. The name
of the NetWorker Client resource can be the short name, the FQDN corresponding to
the IP address, or the IP address.
For example: resgrp1 is a resource group defined in a cluster and there are two IP
resources defined in the group, IP1 and IP2. If the IP address for IP1 is defined as a
NetWorker Client resource, then all shared paths in resgrp1 are saved under the IP
address for IP1 index.


To configure the NetWorker virtual server:


Procedure
1. Use NMC to connect to the NetWorker virtual server.
2. In the Configuration window, right-click the NetWorker Server and select
Properties.
3. In the Administrator attribute, specify the root user account for each RHEL
physical node, and specify the administrator and system accounts for each
Windows physical node.
For example:
RHEL physical nodes:
root@clus_phys1
root@clus_phys2

Windows physical nodes:


administrator@clus_phys1
system@clus_phys1
administrator@clus_phys2
system@clus_phys2

4. Click OK.
5. For NetWorker Server configured to use the lockbox only:
a. In the left navigation pane, select Clients.
b. Right-click the client resource for the NetWorker virtual service and select
Modify Client Properties.
c. On the Globals (2 of 2) tab specify the name of each cluster node in the
Remote Access field.
l For RHEL cluster nodes, specify the name of the host that appears when
you use the hostname command.
l For Windows cluster nodes, use the full computer name that appears in the
Control Panel > System > Computer name field.
6. Click OK.

Note

When you configure the NetWorker Server to use a lockbox, you must update
the Remote Access field before the virtual node fails over to another cluster
node. If you do not update the Remote Access field before failover, you must
delete and create the lockbox resource. The NetWorker Security Configuration
Guide describes how to configure the lockbox resource.


Creating client resources for physical node backups


This section describes how to create a NetWorker client resource to back up the local
disks of a physical cluster node.
Procedure
1. Connect to the NetWorker server in NMC. For a highly-available NetWorker
server connect by using the virtual node name.
2. Click Protection and select Groups. Configure a Group resource or select an
existing group to back up the physical nodes.
3. Create a NetWorker client for each physical node within the cluster:
a. Right-click Clients and select Create.
b. In the Name attribute, type the name of the physical client.
c. In the Save set field, specify the local disks or ALL.

Note

For Windows, do not specify the quorum disk.

The ALL save set:

l Does not include shared disks.
l Includes local disks that belong to the physical node.
l Includes the DISASTER_RECOVERY:\ save set for Windows clusters.
l Includes the WINDOWS ROLES AND FEATURES save set for Windows
2012 clusters.

d. In the Group attribute, select the Group configured in step 2.


e. Define the remaining attributes in the Client properties window, as required,
and click Ok.

Creating a client resource for virtual client backups


This section describes how to create a NetWorker client resource to back up a shared
disk or Cluster Shared Volume (CSV), including deduplication-enabled CSV. These
steps apply to cluster-aware clients and the NetWorker virtual server.
Procedure
1. Connect to the NetWorker server by using NMC. For a highly available
NetWorker server, connect by using the virtual node name.
2. Click Protection, and then create a Policy and Workflow to back up the cluster
node, or select an existing workflow.
3. Create a NetWorker client resource for the virtual node.
For Microsoft Failover Cluster, ensure that you configure a network name
resource for the virtual client and that you add the resource to the resource
group that contains the disks for backup. The full name of the network name
resource should match the name of the NetWorker client resource or one of its
aliases.


4. In the Save set field, specify the save set to back up. To back up:
l All the shared drives and CSVs that a virtual client owns, specify All.
l A single drive volume of a shared disk that a virtual client owns, specify the
drive volume letter.
For example, to back up a single drive volume, specify G:\.
To back up a single CSV, specify C:\clusterstorage\volumeX, where X
is the volume number, and C: is the system drive.

Note

If you specify a subdirectory of a deduplicated CSV volume, except when the
subdirectory is the root of a mount point, NetWorker creates an unoptimized
data deduplication backup.
5. For HACMP only, add the boot adapter name in the Aliases attribute.
6. On the Globals (2 of 2) tab, in the Remote Access field, specify the root user
account for each UNIX physical node or the system account for each Windows
physical node within the cluster.
For UNIX physical nodes:
root@clus_phys1
root@clus_phys2

For Windows physical nodes:
system@clus_phys1
system@clus_phys2

7. On the Apps and Modules tab, in the Application Information field, specify
environment variables, as required.
l For Snapshot Management backups only, use the NSR_PS_SHARED_DIR
variable to specify the share directory. For example:
NSR_PS_SHARED_DIR=P:\share
The NetWorker Snapshot Management Integration Guide describes how to
configure Snapshot backups.
l For Windows Server 2012 and Windows 2012 R2 CSV and deduplicated CSV
backups only:
As part of a deduplicated CSV backup, the preferred node tries to move
ownership of the CSV volume to itself. If the ownership move succeeds,
then NetWorker performs a backup locally. If the ownership move fails, then
NetWorker performs the backup over SMB. When the CSV ownership
moves, NetWorker restores the ownership to the original node after the
backup completes.
You can optionally specify the preferred cluster node to perform the backup.
To specify the preferred server, use the NetWorker client Preferred Server
Order List (PSOL) variable NSR_CSV_PSOL.
When you do not specify a PSOL, NetWorker performs the backup by using
the Current Host Server node (virtual node).
Review the following information before you specify a PSOL:
n The save.exe process uses the first available server in the list to start
the CSV backup. The first node that is available and responds becomes
the preferred backup host. If none of the specified nodes in the PSOL are
available, then NetWorker tries the backup on the Current Host Server
node.


n The Remote access list attribute on the NetWorker client must contain
the identified cluster nodes.
n Use the NetBIOS name when you specify the node names. You cannot
specify the IP address or FQDN of the node.
To specify the PSOL, include a key/value pair in the client resource
Application information attribute. Specify the key/value pair in the
following format:
NSR_CSV_PSOL=MachineName1,MachineName2,MachineName3...
For example, physical node clus_phys2 owns the cluster resources for virtual
node clus_vir1. By default, clus_vir1 runs the backup request.
To offload operations, define clus_phys1 as the preferred node to start the
save operation. If clus_phys1 is unavailable, then NetWorker should try to use
clus_phys2 to start the save operation.
In this case, set the NSR_CSV_PSOL variable in the clus_vir1 client resource to:
NSR_CSV_PSOL=clus_phys1,clus_phys2
When a physical node performs the backup, NetWorker saves the backup
information to the client file index of the virtual client resource. When you
recover the CSV backup, specify clus_vir1 as the source client.
8. For deduplicated CSV backups only, to configure an unoptimized deduplication
backup, specify VSS:NSR_DEDUP_NON_OPTIMIZED=yes in the Save
operations attribute.
9. Define the remaining attributes in the Client properties window, as required,
and then click OK.

Configuring a backup device for the NetWorker virtual server

The NetWorker virtual server requires a local backup device to save the bootstrap and
the server indexes. To ensure that the device is always available, configure a device
that belongs to the NetWorker virtual server and is shared between the physical
nodes.
Procedure
1. Edit the properties of the client resource for the NetWorker virtual server by
using NMC.
2. Select Globals (2 of 2).
3. In the Storage nodes attribute, specify the hostnames of each physical cluster
node followed by nsrserverhost.
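For example, with the cluster node names used throughout this guide, the Storage nodes attribute would contain:

clus_phys1
clus_phys2
nsrserverhost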

Note
MSFCS does not support shared tapes. You cannot configure the NetWorker
virtual server with tape devices connected to a shared bus. MSFCS supports
disk devices connected to a shared bus. It is recommended that you do not use
file type devices connected to a shared bus.


Configuring a virtual client to back up to a local storage node


By default, NetWorker sends the data from a virtual client to the first storage node
listed in the Storage Nodes attribute in the virtual client resource.
Use the keyword curphyhost to direct virtual client backups to a storage node device
on the physical host that currently owns the virtual client. The curphyhost keyword
applies only to virtual clients. Do not specify this keyword in the Clone Storage nodes
attribute of the Storage node resource or in the client resource of a NetWorker
virtual server; doing so can cause unexpected behavior. For example, NetWorker
might write the bootstrap and index backups to the local storage node for the virtual
clients, instead of to a local device on the NetWorker virtual server.

Note

If you enable the Autoselect storage node attribute in the client resource, then
NetWorker will override the curphyhost setting for the client. The NetWorker
Administration Guide provides more information about the Autoselect storage node
attribute.

For example, consider a two-node cluster where:

l Nodes A and B are the two physical nodes in the cluster.
l The virtual client is saturn, which can reside on Node A or fail over to Node B.

During a backup without curphyhost listed in the Storage Nodes attribute for the
virtual client, NetWorker directs the backup data to the remote device (rd=) on
Node A. When saturn fails over to Node B and a backup for saturn starts,
NetWorker still directs the backup data to the remote device (rd=) on Node A.
When you specify curphyhost first in the Storage Nodes attribute for saturn, if
saturn fails over to Node B and a backup of saturn starts, NetWorker directs the
backup data to the remote device (rd=) on Node B. This happens because, after
the failover, saturn resides on Node B, the current physical host.
The following procedure describes how to use curphyhost:
Procedure
1. Edit the properties of the virtual client resource in NMC.
2. Select Globals (2 of 2).
3. In the Storage nodes attribute, add the curphyhost keyword.
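For example, to direct saturn's backups to whichever node currently owns it, the Storage nodes attribute might read (listing nsrserverhost as a fallback is illustrative):

curphyhost
nsrserverhost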

Performing manual backups of a cluster node


You can perform manual backups of the physical or virtual nodes in a cluster from the
command prompt on UNIX and Windows, or from the NetWorker User GUI on
Windows only.
This section describes how to configure NetWorker to allow a non-root or
non-administrator account to perform manual backups, and how to perform a manual
backup.

Configuring manual backups for non-root or non-administrator users


The backup operation uses the lcmap script to query the cluster and determine path
ownership. When you perform a manual backup with a non-root account on UNIX or a
non-administrator account on Windows, NetWorker cannot determine path ownership
information. As a result, NetWorker writes the backup information to the client file


index of the physical node that owns the file system, instead of the client file index for
the virtual node.
This section describes how to configure each supported operating system to allow
the lcmap script to query the cluster and determine path ownership for non-root or
non-administrator users.

Using non-root accounts on HP MC/ServiceGuard


Before you perform a manual backup of data from a virtual cluster client with non-root
privileges on HP MC/ServiceGuard, perform one of the following tasks:
l On each node in the cluster, ensure that the .rhosts file in the home directory of
the non-root account includes the hostname of each cluster node. For example:

nodeA
nodeB

l As the root user on each node in the cluster, edit or create the /etc/
cmcluster/cmclnodelist file and add the following information to the file:

nodeA user_name
nodeB user_name

Note

If the cmclnodelist file exists, the cluster software ignores any .rhosts file.

Using non-administrator accounts on MSFCS


Before you perform a manual backup of data from a virtual cluster client with non-
administrator privileges on MSFCS, modify the security descriptor properties on the
cluster so that the user can access the cluster resources.
For example:

Cluster ClusterName /prop "security descriptor"=DOMAIN\USER,grant,f:security

Using non-root accounts on VCS for UNIX


When you perform a manual backup of a physical or virtual cluster client in VCS as a
non-root user, the operating system might prompt you for a password.
To avoid the password prompt:
l In VCS 4.0, set the AllowNativeCliUsers attribute to 1.
l In VCS version 4.1 or later, use the VCS halogin command to store
authentication information.

Note

For information on how to set up VCS authentication, see the VCS documentation.

Using non-administrator accounts on VCS for Windows


For VCS 6.0 on Windows 2008 R2 SP1, to perform a backup you must start the
NetWorker User application or command prompt window, as an administrator.
For example:


l To start a backup operation from the NetWorker User application, right-click
the NetWorker User application and select Run as Administrator.
l To start a backup operation from the command prompt, right-click the command
prompt application and select Run as Administrator.

Performing manual backups from the command prompt


To perform a manual backup of a virtual or physical node, use the save command.
For example:

save -c client save_set

where:
l client is the virtual hostname to back up shared disk data or the physical node
hostname to back up data that is local to the node on which you run the save
command.
l save_set specifies the path to the backup data.
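For example, to back up the shared file system /share/nw that the virtual client clus_vir1 owns (both names are taken from earlier examples in this guide):

save -c clus_vir1 /share/nw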

Performing manual backups from NetWorker User


You can use the NetWorker User program on a Windows physical node to back up
shared or local data.
To back up shared data, open NetWorker User on the active physical node.

Troubleshooting backups
This section provides resolutions for the following common backup and configuration
errors.

RAP error: Unable to extract resource info for client


This message appears when the NetWorker server fails to back up a virtual cluster
client because a NetWorker client resource does not exist for each physical node.
To resolve this issue, create a client resource for each physical node that is allowed to
own the virtual cluster client, and then start the backup.

File systems omitted during a scheduled save


In a cluster environment, the NetWorker software must distinguish between the
following:
l File systems that are associated with a physical client.
l File systems that are managed by a resource group (a virtual client).
To distinguish between these types of file systems, NetWorker uses criteria called
the path-ownership rules. These rules determine which client file index should
contain the information about a backup save set.
By default, when a conflict in the path-ownership rules occurs, the NetWorker
software does not:
l Back up scheduled save sets, which prevents a virtual NetWorker client from
writing save set information to multiple client file indexes.


l Consider there to be a match between the client that owns the file system and the
client resource that is configured to back up the file system.
The following conditions cause NetWorker to omit a file system backup during a
scheduled save:
l The save set attribute for a physical client resource contains a file system that is
owned by a virtual client.
l The save set attribute for a virtual Client resource contains a file system that is
owned by a physical client.
Resolve this issue in one of the ways outlined in the following sections.

Correct the save set attribute for the client


Configure the NetWorker client to only back up the file systems that the client owns.
1. Use the nsrpolicy command to check the NetWorker path-ownership rules and
display the list of file systems owned by the client.
2. Modify the Save set attribute for the client to contain only the file systems that
the client owns.

Override default path-ownership rules


To force NetWorker to back up file systems that a client does not own, you can create
the pathownerignore file in the NetWorker bin directory on the client. This file
causes NetWorker to ignore default path-ownership rules and write information about
the file system save set to the client file index of the correct owner.

Note

Use the mminfo command to confirm that the backup information saves to the
correct client file index. By design, the NMC Server Group Details window indicates
that the backup corresponds to the physical client where you configured the save set.

File system backup information written to the wrong client file index
When the pathownerignore file exists on a client at the time of a backup,
NetWorker backs up save sets that the client does not own, but writes information
about the backup save set to the client file index of the host that owns the file system.
To determine which client file index will contain the save set information, run a test
probe with the verbose option set. For example:

savegrp -pv -c client_name group_name

where:
l client_name is the name of the cluster client.
l group_name is the name of a group that contains the client backup.
To force NetWorker to write the save set information to the index of a client that does
not own the file system, perform one of the following tasks:
l For a manual save operation, use the -c option with the save command to specify
the name of the client to receive the save set information.
l For a scheduled save operation, to force NetWorker to write the save set
information to the index of the client that backs up the save set:
1. Edit the properties of the client in NMC.
2. Select the Apps & Modules tab.


3. In the Backup command attribute, specify the save command with the name
of the client to receive the save set information:

save -c client_name

Note

l With the pathownerignore file, you must explicitly specify the individual save
sets; the All save set is not supported.
l Use the mminfo command to confirm that the backup information saves to the
correct client file index. By design, the NMC Server Group Details window and the
savegrp completion report state that the backup corresponds to the physical client
where you configured the save set.

No matching devices found when backing up to HACMP devices


This error message appears when backups to devices attached to an AIX HACMP
cluster fail because the physical node name is not configured with an IP address that is
attached to the primary NIC.
To resolve this issue, configure the physical node IP address on the primary NIC.
Preparing to install NetWorker on HACMP provides more information.

Recovering data
This section describes how to recover data from shared disks that belong to a virtual
client.

Note
The steps to recover data that originated on a private disk on a physical cluster client
are the same as when you recover data from a host that is not part of a cluster. The
NetWorker Administration Guide provides more information.
To recover Windows clusters, see the Windows Bare Metal Recovery (BMR) chapter
in the NetWorker Administration Guide.

To recover data that is backed up from a shared disk that belongs to a virtual client,
perform the following steps:
Procedure
1. Ensure that you have correctly configured remote access to the virtual client:
a. Edit the properties of the virtual client resource in NMC.
b. On the Globals (2 of 2) tab, ensure that the Remote Access attribute
contains an entry for the root or Administrator user for each physical cluster
node.
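
For example, for a hypothetical two-node cluster, the Remote Access attribute
might contain the following entries (use the Administrator account instead of root
on Windows):

root@clus_phys1
root@clus_phys2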
2. To recover a CSV backup for a client that uses the NSR_CSV_PSOL variable,
ensure that the system account for each host in the preferred server order list
is a member of the NetWorker Operators User Group.
For example, if you configure the virtual node client resource that specifies the
CSV volumes with the following variable: NSR_CSV_PSOL=clu_virt1, clu_virt2,
specify the following users in the NetWorker Operators User Group:


system@clu_virt1
system@clu_virt2

3. Mount the file systems of the virtual client.


4. Recover the data.
l When you use the NetWorker User program on Windows, the source client is
the virtual client.
l When you perform a command line recovery, use the recover command with
the -c option to specify the name of the client you are trying to recover. For
example:
recover -s server_name -c virtual_client

Note

The -c virtual_client option is optional when you run the recover command
from the global file system that the virtual client owns. The recover man page and
the NetWorker Command Reference Guide provide more information.
n To recover data from a VCS 6.0 cluster on Windows 2008 R2 SP1, you must
also start the NetWorker User program or the command prompt window as
administrator.
n To start a recover operation from the NetWorker User application, right-
click the NetWorker User application and select Run as Administrator.
n To start a recover operation from the command prompt, right-click the
command prompt application and select Run as Administrator.

Configuring a virtual client to recover from a local storage node


During a recover operation of virtual client data, NetWorker attempts to mount the
required volume in a device on the first storage node listed in the Recovery Storage
Nodes attribute in the virtual client resource.
Use the keyword currechost to instruct a virtual client recovery to mount the
required volume in a storage node device on the physical host that owns the virtual
client.

Note

The currechost keyword only applies to virtual client recoveries. Do not specify this
keyword in the Clone storage nodes attribute of the Storage node resource or in the
Client resource of the NetWorker virtual server. Doing so can cause unexpected
behavior; for example, the NetWorker software might write the bootstrap and index
backups to the local storage node for the virtual clients instead of to a local device on
the NetWorker virtual server.

The following restrictions apply when you configure the recovery of virtual client data
from a local storage node:
l Ensure that there are no hosts or machines named currechost on the network.
l Do not specify currechost in the Clone storage nodes attribute of a virtual client
storage node resource.
l Do not apply the currechost keyword to the Storage nodes attribute or the
Recover Storage Nodes attribute of the virtual server's Client resource.


To configure the virtual client to recover data from a local storage node:
Procedure
1. Edit the properties of the virtual client resource in NMC.
2. In the Globals (2 of 2) tab, in the Storage nodes attribute or the Recover
storage nodes attribute, add the currechost keyword. Position the keyword in
the list based on the required priority. The keyword at the top of the list has the
highest priority. Ensure that this keyword is not the only keyword in the list.
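
For example, a Recover storage nodes attribute might contain the following list,
where nsrserverhost is the keyword that NetWorker uses for the server's own
devices:

currechost
nsrserverhost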

Troubleshooting recovery
This section provides resolutions to issues that you may encounter when recovering
data from a cluster node backup.

NSR server ‘nw_server_name’: client ‘virtual_hostname’ is not properly configured on the NetWorker Server

This message appears when you attempt to recover data from the physical node of a
highly available NetWorker server that was backed up by a NetWorker server that is
external to the cluster. To resolve this issue, create a client resource for the highly
available virtual NetWorker server on the external NetWorker server and retry the
recover operation.



CHAPTER 5
Uninstalling the NetWorker Software in a
Cluster

Before you remove the NetWorker server software, you must remove the NetWorker
configuration from the cluster. This section describes how to take a highly available
NetWorker server offline and remove the NetWorker configuration from the cluster.
This section does not apply when the NetWorker server software is a stand-alone
application (not cluster managed) or when only the client software is installed.
The process of removing the NetWorker software from a cluster is the same as
removing the software on a stand-alone machine. The NetWorker Installation Guide
describes how to remove the NetWorker software.

l Uninstalling NetWorker from HACMP................................................................ 68
l Uninstalling NetWorker from HP MC/ServiceGuard.......................................... 68
l Uninstalling NetWorker from MSFCS................................................................ 68
l Uninstalling NetWorker from RHEL High Availability..........................................69
l Uninstalling NetWorker from SLES HAE............................................................ 69
l Uninstalling NetWorker from SUN Cluster and Oracle Solaris Cluster................70
l Uninstalling NetWorker from VCS......................................................................70


Uninstalling NetWorker from HACMP


Before you begin
Before you uninstall the NetWorker software from each node in the cluster, first
remove the NetWorker configuration from the cluster, then remove the NetWorker
software.
Procedure
1. Perform the following steps on each cluster node as the root user:
a. Shut down the NetWorker daemons:
nsr_shutdown

b. Remove the NetWorker configuration:

networker.cluster -r

2. Uninstall the NetWorker software. The NetWorker Installation Guide provides
complete instructions.

Uninstalling NetWorker from HP MC/ServiceGuard


Before you begin
Before you uninstall the NetWorker software from each node in the cluster, first
remove the NetWorker configuration from the cluster, then remove the NetWorker
software.
Perform the following steps on each physical node as the root user.
Procedure
1. Shut down the NetWorker daemons:
nsr_shutdown

2. Remove the NetWorker configuration from the cluster:


/opt/networker/bin/networker.cluster -r

3. Stop the NetWorker services:


nsr_shutdown

4. Uninstall the NetWorker software. The NetWorker Installation Guide provides
complete instructions.
5. If you used the non-LC integration method to configure the NetWorker
software, remove the /etc/cmcluster/NetWorker.clucheck file.

Uninstalling NetWorker from MSFCS


Procedure
1. Run the following command on each cluster node:

regcnsrd -u


2. Remove the NetWorker Server resource from MSFCS by running the following
command from any cluster node:

regcnsrd -d

3. Uninstall the NetWorker software on each node.


The NetWorker Installation Guide provides complete instructions.

Uninstalling NetWorker from RHEL High Availability


Before you begin
Before you uninstall the NetWorker software from each node in the cluster, first
remove the NetWorker configuration from the cluster, then remove the NetWorker
software.
Procedure
1. Perform the following steps on each node in the cluster:
a. Stop the NetWorker daemons:
nsr_shutdown

b. Remove the NetWorker configuration:


networker.cluster -r
2. Uninstall the NetWorker software.
The NetWorker Installation Guide provides complete instructions.

Uninstalling NetWorker from SLES HAE


Before you begin
Before you uninstall the NetWorker software from each node in the cluster, first
remove the NetWorker configuration from the cluster, then remove the NetWorker
software.
Perform the following steps as the root user.
Procedure
1. Perform the following steps on each node in the cluster:
a. Stop the NetWorker daemons:
nsr_shutdown

b. Remove the NetWorker configuration:


networker.cluster -r

c. Uninstall the NetWorker software. The NetWorker Installation Guide provides
complete instructions.


Uninstalling NetWorker from SUN Cluster and Oracle Solaris Cluster

Before you begin
Before you uninstall the NetWorker software from each node in the cluster, first
remove the NetWorker configuration from the cluster, then remove the NetWorker
software.
Perform the following steps as the root user.
Procedure
1. Perform the following steps on each node of the cluster:
a. Stop the NetWorker daemons:
nsr_shutdown

b. Remove the NetWorker configuration from the cluster:


networker.cluster -r

2. Uninstall the NetWorker software. The NetWorker Installation Guide provides
complete instructions.

Uninstalling NetWorker from VCS


This section describes how to remove the NetWorker configuration from the cluster
and remove the NetWorker software on Solaris, Linux and Windows.

Uninstalling NetWorker on VCS for Solaris and Linux


Before you begin
Before you uninstall the NetWorker software from each node in the cluster, first
remove the NetWorker configuration from the cluster, then remove the NetWorker
software.
Perform the following steps as the root user.
Procedure
1. Remove all the instances of the NWClient resource type and remove the
NWClient type definition from the configuration.
For more information, refer to the hares(1m) and hatype(1m) man pages.
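
For example, the following sketch removes a hypothetical resource named
nw_client_res and then removes the NWClient type definition (your resource
names will differ):

haconf -makerw
hares -delete nw_client_res
hatype -delete NWClient
haconf -dump -makero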

2. Perform the following steps on each cluster node:


a. Shut down the NetWorker daemons:
nsr_shutdown

b. Remove the NetWorker configuration:


networker.cluster -r

c. Uninstall the NetWorker software. The NetWorker Installation Guide provides
complete instructions.


Uninstalling NetWorker on VCS for Windows


Before you begin
Before you uninstall the NetWorker software from each node in the cluster, first
remove the NetWorker configuration from the cluster, then remove the NetWorker
software.
Perform the following steps as the administrator user.
Procedure
1. Remove all the instances of the NWClient resource type and remove the
NWClient type definition from the configuration.
2. Perform the following steps on each node in the cluster:
a. Stop the NetWorker services.
b. From a command prompt, remove the NetWorker configuration from the
cluster. For example, type:
lc_config.exe -r

c. Uninstall the NetWorker software. The NetWorker Installation Guide provides
complete instructions.



CHAPTER 6
Updating a Highly Available NetWorker
Application

This chapter provides an overview of how to update the NetWorker software in a
highly available cluster.

l Updating a NetWorker application......................................................................74


Updating a NetWorker application


Perform these steps on each node in the cluster.
Before you begin
Before performing the upgrade on a Windows cluster, rename the
C:\Program Files\EMC NetWorker\nsr\authc-server\tomcat\data_local folder to
C:\Program Files\EMC NetWorker\nsr\authc-server\tomcat\data_local_backup
on all the cluster nodes.
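
For example, from an administrative command prompt, you might run:

ren "C:\Program Files\EMC NetWorker\nsr\authc-server\tomcat\data_local" data_local_backup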
Procedure
1. Uninstall the NetWorker software from each node in the cluster. The
"Uninstalling the NetWorker software in a cluster" chapter describes how to
remove the NetWorker software in each supported cluster.
2. Install the NetWorker software on each node in the cluster. The NetWorker
Installation Guide describes how to install the NetWorker software.
3. Configure the NetWorker software in the cluster. The “Configuring the cluster”
chapter describes how to configure the NetWorker software in each supported
cluster.



GLOSSARY

This glossary contains definitions for terms used in this guide.

administrator Person who normally installs, configures, and maintains software on network
computers, and who adds users and defines user privileges.

advanced file type device (AFTD) Disk storage device that uses a volume manager to enable multiple
concurrent backup and recovery operations and dynamically extend available disk space.

attribute Name or value property of a resource.

authorization code Unique code that in combination with an associated enabler code unlocks the software
for permanent use on a specific host computer. See license key.

backup 1. Duplicate of database or application data, or an entire computer system, stored
separately from the original, which can be used to recover the original if it is lost or
damaged.
2. Operation that saves data to a volume for use as a backup.

backup group See group.

BMR Windows Bare Metal Recovery, formerly known as Disaster Recovery. For more
information on BMR, refer to the Windows Bare Metal Recovery chapter in the
NetWorker Administration Guide.

boot address The address used by a node name when it boots up, but before HACMP/PowerHA for
AIX starts.

bootstrap Save set that is essential for disaster recovery procedures. The bootstrap consists of
three components that reside on the NetWorker server: the media database, the
resource database, and a server index.

client Host on a network, such as a computer, workstation, or application server whose data
can be backed up and restored with the backup server software.

client file index Database maintained by the NetWorker server that tracks every database object, file,
or file system backed up. The NetWorker server maintains a single index file for each
client computer. The tracking information is purged from the index after the browse
time of each backup expires.


Client resource NetWorker server resource that identifies the save sets to be backed up on a client.
The Client resource also specifies information about the backup, such as the schedule,
browse policy, and retention policy for the save sets.

cluster client A NetWorker client within a cluster; this can be either a virtual client, or a NetWorker
Client resource that backs up the private data that belongs to one of the physical
nodes.

cluster virtual server Cluster network name, sometimes referred to as cluster server name or cluster alias. A
cluster virtual server has its own IP address and is responsible for starting cluster
applications that can fail over from one cluster node to another.

Console server See NetWorker Management Console (NMC).

current host server Cluster physical node that is hosting the Cluster Core Resources or owns the Cluster
Group. The cluster virtual server resolves to the current host server for a scheduled
NetWorker backup.

database 1. Collection of data arranged for ease and speed of update, search, and retrieval by
computer software.
2. Instance of a database management system (DBMS), which in a simple case might
be a single file containing many records, each of which contains the same set of
fields.

datazone Group of clients, storage devices, and storage nodes that are administered by a
NetWorker server.

device 1. Storage folder or storage unit that can contain a backup volume. A device can be a
tape device, optical drive, autochanger, or disk connected to the server or storage
node.
2. General term that refers to storage hardware.
3. Access path to the physical drive, when dynamic drive sharing (DDS) is enabled.

device-sharing infrastructure The hardware, firmware, and software that permit several nodes in a
cluster to share access to a device.

disaster recovery Restore and recovery of data and business operations in the event of hardware failure
or software corruption.

enabler code Unique code that activates the software:


l Evaluation enablers or temporary enablers expire after a fixed period of time.
l Base enablers unlock the basic features for software.
l Add-on enablers unlock additional features or products, for example, library
support.
See license key.


failover A means of ensuring application availability by relocating resources in the event of a
hardware or software failure. Two-node failover capability allows operations to switch
from one cluster node to the other. Failover capability can also be used as a resource
management tool.

failover cluster Windows high-availability clusters, also known as HA clusters or failover clusters, are
groups of computers that support server applications that can be reliably utilized with a
minimum of down-time. They operate by harnessing redundant computers in groups or
clusters that provide continued service when system components fail.

group One or more client computers that are configured to perform a backup together,
according to a single designated schedule or set of conditions.

Highly available application An application that is installed in a cluster environment and configured
for failover capability. On an MC/ServiceGuard cluster this is called a highly-available
package.

Highly available package An application that is installed in an HP MC/ServiceGuard cluster
environment and configured for failover capability.

host Computer on a network.

host ID Eight-character alphanumeric number that uniquely identifies a computer.

hostname Name or address of a physical or virtual host computer that is connected to a network.

license key Combination of an enabler code and authorization code for a specific product release to
permanently enable its use. Also called an activation key.

managed application Program that can be monitored or administered, or both, from the Console server.

media index Database that contains indexed entries of storage volume location and the life cycle
status of all data and volumes managed by the NetWorker server. Also known as media
database.


networker_install_path The path or directory where the installation process places the NetWorker software.
l AIX: /usr/sbin
l Linux: /usr/bin
l Solaris: /usr/sbin
l HP-UX: /opt/networker/bin
l Windows (New installs): C:\Program Files\EMC NetWorker\nsr\bin
l Windows (Updates): C:\Program Files\Legato\nsr\bin

NetWorker Management Console (NMC) Software program that is used to manage NetWorker servers
and clients. The NMC server also provides reporting and monitoring capabilities for all
NetWorker processes.

NetWorker server Computer on a network that runs the NetWorker server software, contains the online
indexes, and provides backup and restore services to the clients and storage nodes on
the same network.

node A physical computer that is a member of a cluster.

node name The HACMP/PowerHA for AIX defined name for a physical node.

pathname Set of instructions to the operating system for accessing a file:


l An absolute pathname indicates how to find a file by starting from the root directory
and working down the directory tree.
l A relative pathname indicates how to find a file by starting from the current
location.

physical client The client associated with a physical node. For example, the / and /usr file systems
belong to the physical client.

Physical host address (physical hostname) The address used by the physical client. For HACMP for
AIX 4.5, this is equivalent to a persistent IP address.

private disk A local disk on a cluster node. A private disk is not available to other nodes within the
cluster.

recover To restore data files from backup storage to a client and apply transaction (redo) logs
to the data to make it consistent with a given point-in-time.

remote device 1. Storage device that is attached to a storage node that is separate from the
NetWorker server.
2. Storage device at an offsite location that stores a copy of data from a primary
storage device for disaster recovery.


resource Software component whose configurable attributes define the operational properties of
the NetWorker server or its clients. Clients, devices, schedules, groups, and policies are
all NetWorker resources.

resource database NetWorker database of information about each configured resource.

resource group (application service) The AutoStart defined name for a virtual server.

save NetWorker command that backs up client files to backup media volumes and makes
data entries in the online index.

save set 1. Group of files or a file system copied to storage media by a backup or snapshot
rollover operation.
2. NetWorker media database record for a specific backup or rollover.

scheduled backup Type of backup that is configured to start automatically at a specified time for a group
of one or more NetWorker clients. A scheduled backup generates a bootstrap save set.

service address The address used by highly-available services in an HACMP/PowerHA for AIX
environment.

shared disk Storage disk that is connected to multiple nodes in a cluster.

stand-alone server A NetWorker server that is running within a cluster, but not configured as a highly-
available application. A stand-alone server does not have failover capability.

storage device See device.

storage node Computer that manages physically attached storage devices or libraries, whose backup
operations are administered from the controlling NetWorker server. Typically a
“remote” storage node that resides on a host other than the NetWorker server.

virtual client A NetWorker Client resource that backs up data that belongs to a highly-available
service or application within a cluster. Virtual clients can fail over from one cluster node
to another. For HACMP/PowerHA for AIX, the virtual client is the client associated with
a highly-available resource group. The file system defined in a resource group belongs
to a virtual client. The virtual client uses the service address. The HACMP/PowerHA for
AIX resource group must contain an IP service label to be considered a NetWorker
virtual client.


