Docu93960 - NetWorker 19.1 Cluster Integration Guide PDF
Version 19.1
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.DellEMC.com
Figures
Preface
Chapter 1 Introduction
Stand-alone application
Cluster-aware application
Highly available application
REST API does not work after NetWorker server cluster fail-over
Glossary
Note
This document was accurate at publication time. To ensure that you are using the
latest version of this document, go to the Support website at https://www.dell.com/support.
Purpose
This document describes how to install, update, and uninstall the NetWorker software
in a cluster environment.
Audience
This document is part of the NetWorker documentation set and is intended for use by
system administrators during the installation and setup of NetWorker software in a
cluster environment.
Revision history
The following table presents the revision history of this document.
Related documentation
The NetWorker documentation set includes the following publications, available on the
Support website:
• NetWorker E-LAB Navigator
Provides compatibility information, including specific software and hardware
configurations that NetWorker supports. To access E-LAB Navigator, go to
https://elabnavigator.emc.com/eln/elnhome.
• NetWorker Administration Guide
Describes how to configure and maintain the NetWorker software.
• NetWorker Network Data Management Protocol (NDMP) User Guide
Describes how to use the NetWorker software to provide data protection for
NDMP filers.
• NetWorker Cluster Integration Guide
Contains information related to configuring NetWorker software on cluster servers
and clients.
• NetWorker Installation Guide
Typographical conventions
The following type style conventions are used in this document:
Bold Used for interface elements that a user specifically selects or clicks,
for example, names of buttons, fields, tab names, and menu paths.
Also used for the name of a dialog box, page, pane, screen area with
title, table label, and window.
Italic Used for full titles of publications that are referenced in text.
Monospace Used for:
• System code
• System output, such as an error message or script
• Pathnames, file names, file name extensions, prompts, and
syntax
• Commands and options
You can use the following resources to find more information about this product,
obtain support, and provide feedback.
Where to find product documentation
• https://www.dell.com/support
• https://community.emc.com
Where to get support
The Support website https://www.dell.com/support provides access to product
licensing, documentation, advisories, downloads, and how-to and troubleshooting
information. The information can enable you to resolve a product issue before you
contact Support.
To access a product-specific page:
1. Go to https://www.dell.com/support.
2. In the search box, type a product name, and then from the list that appears, select
the product.
Knowledgebase
The Knowledgebase contains applicable solutions that you can search for either by
solution number (for example, KB000xxxxxx) or by keyword.
To search the Knowledgebase:
1. Go to https://www.dell.com/support.
2. On the Support tab, click Knowledge Base.
3. In the search box, type either the solution number or keywords. Optionally, you
can limit the search to specific products by typing a product name in the search
box, and then selecting the product from the list that appears.
Live chat
To participate in a live interactive chat with a support agent:
1. Go to https://www.dell.com/support.
2. On the Support tab, click Contact Support.
3. On the Contact Information page, click the relevant support option, and then proceed.
Service requests
To obtain in-depth help from Licensing, submit a service request. To submit a service
request:
1. Go to https://www.dell.com/support.
2. On the Support tab, click Service Requests.
Note
To create a service request, you must have a valid support agreement. For details
about creating an account or obtaining a valid support agreement, contact a sales
representative. To get the details of a service request, in the Service Request
Number field, type the service request number, and then click the right arrow.
This document describes how to configure and use the NetWorker software in a
clustered environment. This guide also provides cluster-specific information that you
need to know before you install NetWorker on a clustered host. You must install the
NetWorker software on each physical node in a cluster.
This guide does not describe how to install the NetWorker software. The NetWorker
Installation Guide describes how to install the NetWorker software on supported
operating systems. You can configure the NetWorker software in a cluster in one of
the following ways:
• Stand-alone application
• Cluster-aware application
• Highly available application
Stand-alone application
When you install the NetWorker server, storage node, or client software as a
stand-alone application, the required daemons run on each node. When the NetWorker
daemons stop on a node, the cluster management software does not restart them
automatically.
In this configuration:
• NetWorker does not know which node owns the shared disk. To ensure that there
is always a backup of the shared disks, configure a NetWorker client resource for
each physical node to back up the shared and local disks.
• Shared disk backups will fail for each physical node that does not own or control
the shared disk.
• NetWorker writes client file index entries for the shared backup to the physical
node that owns the shared disk.
• To recover data from a shared disk backup, you must determine which physical
node owned the shared disk at the time of backup.
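One way to make that determination is to query the media database from the NetWorker server. The following sketch is illustrative only; /share1 is an assumed mount point, not a value from this guide:

```
# Illustrative sketch: the client column of the matching save sets shows
# which physical node owned the shared disk when each backup ran.
mminfo -avot -q "name=/share1" -r "client,name,savetime"
```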
Cluster-aware application
On supported operating systems, when you configure a cluster-aware NetWorker
client, all required daemons run on each physical node. When the NetWorker daemons
stop on a node, the cluster management software does not restart them
automatically.
A cluster-aware NetWorker application determines path ownership of the virtual
applications that run in the cluster. This allows the NetWorker software to back up the
shared file system and write the client file index entries for the virtual client.
When you configure a cluster-aware NetWorker application, you must:
• Create a NetWorker client resource for the virtual node in the cluster to back up
the shared disk.
• Create a NetWorker client resource for each physical node to back up the local
disks.
• Select the virtual node to recover data from a shared disk backup.
This chapter describes how to prepare for a NetWorker installation on a cluster and
how to configure NetWorker on each cluster node. Perform these steps after you install
the NetWorker software on each physical node.
The steps to install and update the NetWorker software in a clustered environment are
the same as the steps to install and update the software in a non-clustered
environment. The NetWorker Installation Guide describes how to install
NetWorker on each supported operating system.
Note
This section does not apply when you install NetWorker as a stand-alone application.
regsvr32 /u nsrdresex.dll
Backups of the physical nodes will automatically back up the cluster configuration.
The cluster maintains the MSFCS database synchronously on two nodes; as a result,
the database backup on one node might not reflect changes made on the other node.
• The NetWorker Server and Client software supports backup and recovery of file
system data on Windows Server 2019, Windows Server 2016, Windows Server 2012,
and Windows Server 2012 R2 File Servers configured for Windows Continuous
Availability with Cluster Shared Volumes (CSV). Support of CSV and deduplicated
CSV backups includes levels Full, Incremental, and incr_synth_full. NetWorker
supports CSV and deduplicated CSV backups with the following restrictions:
– The volume cannot be a critical volume.
– NetWorker cannot shadow copy a CSV and local disks that are in the same
volume shadow copy set.
Note
The NetWorker software does not protect the Microsoft application data stored
on a CSV or deduplicated CSV, such as SQL databases or Hyper-V virtual
machines. To protect Microsoft application data use the NetWorker Module for
Microsoft (NMM) software. The NMM documentation provides more information
about specific backup and recovery instructions of Microsoft application data.
Note
Do not create a Generic Application resource for the NetWorker virtual server.
a. For Windows 2012, 2016, and 2019, on the Select Role page, select Other
Server, and then click Next.
b. For Windows 2008 R2 SP1, on the Select Service or Application page,
select Other Server, and click Next.
7. On the Client Access Point page, specify a hostname that is not already in
use and an available IP address, and then click Next.
Note
The Client Access Point resource type defines the virtual identity of the
NetWorker server, and the wizard registers the hostname and IP address in
DNS.
8. On the Select Storage page, select the shared storage volume for the shared
nsr directory, and then click Next.
9. In the Select Resource Type list, select the NetWorker Server resource type,
and then click Next.
10. On the Confirmation page, review the resource configurations and then click
Next. The High Availability Wizard creates the resource components and the
group.
When the Summary page appears, a message similar to the following appears,
which you can ignore:
The clustered role will not be started because the
resources may need additional configuration. Finish
configuration, and then start the clustered role.
f. Click OK.
NOTICE
Do not create multiple NetWorker server resources. Creating more than one
instance of a NetWorker Server resource interferes with how the existing
NetWorker Server resources function.
A dependency is set between the NetWorker server resource and the shared
disk.
13. Right-click the NetWorker cluster resource and select Start Role.
The NetWorker server resource starts.
14. Confirm that the state of the NetWorker Server resource changes to Online.
• Remote Access
• Administrator
3. Configure each of the virtual servers in a cluster as a client of the NetWorker
virtual server. These are virtual cluster clients.
4. Add the hostname and user of each node to these Client resource attributes of
the virtual clients:
• Remote Access
• Administrator
Note
This section does not apply when you install NetWorker as a stand-alone application.
export OCF_ROOT=/usr/lib/ocf
Update your profile file to make the change persistent across reboots.
6. On one node, create the required resource groups for the NetWorker
resources:
a. Start the crm tool by typing:
crm configure
b. Create a file system resource for the nsr directory. For example, type:
primitive fs ocf:heartbeat:Filesystem \
operations $id="fs-operations" \
op monitor interval="20" timeout="40" \
params device="/dev/sdb1" directory="/share1" fstype="ext3"
c. Create an IP address resource for the NetWorker Server name. For example,
type:
primitive ip ocf:heartbeat:IPaddr \
operations $id="ip-operations" \
op monitor interval="5s" timeout="20s" \
params ip="10.5.172.250" cidr_netmask="255.255.254.0" nic="eth1"
Note
For SLES 11 SP4 and SLES 15, do not include the following unsupported
default operations:
e. Define the NetWorker Server resource group that contains the file system,
NetWorker Server, and IP address resources. For example, type:
group NW_group fs ip nws
Note
This section does not apply when you install NetWorker as a stand-alone application.
The NetWorker Installation Guide describes how to install the NetWorker software.
Note
The configuration script creates the nw_redhat file and the lcmap file.
9. Create a service group:
a. Connect to the Conga web interface.
b. On the Service tab, click Add.
c. In the Service Name field, specify a name for the resource. For example,
rg1.
10. Add an LVM resource for the shared volume to the service group:
a. Click Add resource.
b. From the Global Resources drop-down list, select HA LVM.
c. In the Name field, specify the name of the resource. For example,
ha_lvm_vg1.
d. In the Volume Group Name field, specify the name of the volume group for
the shared disk that contains the /nsr directory. For example, vg1.
e. In the Logical Volume Name field, specify the logical volume name. For
example, vg1_lv.
11. Add a file system resource for the shared file system to the service group.
a. After the HA LVM Resource section, click Add Child Resource.
b. From the Global Resources drop-down list, select Filesystem.
c. In the Name field, specify the name of the file system. For example,
ha_fs_vg1.
d. In the Mount point field, specify the mount point. For example: /vg1.
e. In the Device, FS label or UUID field, specify the device information. For
example, /dev/vg1/vg1_lv.
12. Add an IP address resource to the group:
a. After the Filesystem section, click Add Child Resource.
b. From the Global Resources drop-down list, select IP Address.
c. In the IP Address field, specify the IP address of the virtual NetWorker
server.
d. Optionally, in the Netmask field, specify the netmask that is associated with
the IP address.
13. Add a script resource to the group:
a. After the IP address section, click Add Child Resource.
b. From the Global Resources drop-down list, select Script.
c. In the Name field, specify the name for the script resource. For example,
nwserver.
d. In the Path field, specify the path to the script file. For example,
/usr/sbin/nw_redhat.
14. Click Submit.
export OCF_ROOT=/usr/lib/ocf
6. On one node, create the required resource groups for the NetWorker resources:
a. Create a file system resource for the nsr directory. For example, type:
Note
--group NW_group adds the file system resource to the resource group.
b. Create an IP address resource for the NetWorker Server name. For example,
type:
Note
--group NW_group adds the IP address resource to the resource group.
Note
--group NW_group adds the NetWorker Server resource to the resource group.
7. If any resource fails to start, confirm that the shared volume is mounted. If the
shared volume is not mounted, manually mount the volume, and then reset the
status by typing the following command:
pcs resource cleanup nws
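The pcs commands elided above might resemble the following sketch. The device, mount point, IP address, and group name are assumptions that mirror the crm example earlier in this chapter, not values from this guide:

```
pcs resource create fs ocf:heartbeat:Filesystem \
    device="/dev/sdb1" directory="/share1" fstype="ext3" \
    op monitor interval=20s timeout=40s --group NW_group

pcs resource create ip ocf:heartbeat:IPaddr \
    ip="10.5.172.250" cidr_netmask="255.255.254.0" \
    op monitor interval=5s timeout=20s --group NW_group
```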
Note
This section does not apply when you install NetWorker as a stand-alone application.
Note
A resource group must own all globally mounted file systems (except the
/global/.devices/... file systems). All globally mounted file systems (except
the /global/.devices/... file systems) must have a NetWorker Client
resource type. A misconfigured file system results in multiple backup copies
for each cluster node.
b. Add the logical hostname resource type to the new resource group:
clreslogicalhostname create -g resource_group_name logical_name
For example, when the logical hostname is clus_vir1, type:
clreslogicalhostname create -g backups clus_vir1
For example, to create the resource with mount points /global/db and
/global/space, type:
clresource create -g backups -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/db,/global/space \
-x AffinityOn=True hastorageplus
The Sun Cluster documentation provides more information about the
SUNW.HAStoragePlus resource and locally mounted global file systems.
Note
This section does not apply when you install NetWorker as a stand-alone application.
HP MC/ServiceGuard
This section describes how to prepare the HP MC/ServiceGuard cluster before you
install the NetWorker software. This section also describes how to configure the
NetWorker client as a cluster-aware application, after you install the NetWorker
software on each physical node of the cluster.
The NetWorker Installation Guide describes how to install the NetWorker software.
Note
This section does not apply when you install NetWorker as a stand-alone application.
touch /etc/cmcluster/NetWorker.clucheck
touch /etc/cmcluster/.nsr_cluster
pkgname:published_ip_address:owned_path[:...]
where:
• pkgname is the name of the package.
• published_ip_address is the IP address assigned to the package that owns
the shared disk. Enclose IPv6 addresses in square brackets. You can enclose
IPv4 addresses in square brackets, but it is not necessary.
• owned_path is the path to the mount point. Separate additional paths with a
colon.
For example:
• IPv6 address:
client:[3ffe:80c0:22c:74:6eff:fe4c:2128]:/share/nw
• IPv4 address:
client:192.168.109.10:/share/nw
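As a rough illustration of the pkgname:published_ip_address:owned_path format, the following shell sketch checks sample entries against a pattern. The regular expression is an assumption for illustration only and is not part of the NetWorker tooling:

```shell
# Assumed pattern: package name, then an IPv4 address or a bracketed
# IPv6 address, then one or more colon-separated mount-point paths.
pattern='^[^:]+:(\[[0-9a-fA-F:]+\]|[0-9.]+)(:/[^:]*)+$'

check() { printf '%s\n' "$1" | grep -Eq "$pattern" && echo valid || echo invalid; }

check 'client:192.168.109.10:/share/nw'                    # valid
check 'client:[3ffe:80c0:22c:74:6eff:fe4c:2128]:/share/nw' # valid
check 'client:192.168.109.10'                              # invalid: no owned_path
```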
This section does not apply when you install NetWorker as a stand-alone application.
Note
• Create mount points on all nodes. For example, on Linux, create /dg1vol1
on all nodes.
• When configuring vxfs, use dsk instead of rdsk for the block device.
For example: /dev/vx/dsk/dg2/dg2vol1
The following example shows an instance of the NetWorker resource group defined in
the /etc/VRTSvcs/conf/config/main.cf VCS cluster configuration file.
group networker (
SystemList = { arrow = 0, canuck = 1 }
)
Application nw_server (
StartProgram = "/usr/sbin/nw_vcs start"
StopProgram = "/usr/sbin/nw_vcs stop"
CleanProgram = "/usr/sbin/nw_vcs stop_force"
MonitorProgram = "/usr/sbin/nw_vcs monitor"
MonitorProcesses = {"/usr/sbin/nsrd -k Virtual_server_hostname"}
)
DiskGroup dg1 (
DiskGroup = dg1
)
IP NW_IP (
Device = eth0
Address = "137.69.104.104"
)
Mount NW_Mount (
MountPoint = "/mnt/share"
BlockDevice = "/dev/sdc3"
FSType = ext2
FsckOpt = "-n"
)
NW_Mount requires dg1
NW_IP requires NW_Mount
nw_server requires NW_IP
// resource dependency tree
//
// group networker
// {
// Application nw_server
// {
// IP NW_IP
// {
// Mount NW_Mount
// {
// DiskGroup dg1
// }
// }
// }
// }
The following example shows an instance of the NetWorker resource group defined in
the C:\Program Files\Veritas\cluster server\conf\config\main.cf
VCS cluster configuration file.
group networker (
SystemList = { BU-ZEUS32 = 0, BU-HERA32 = 1 }
)
IP NWip1 (
Address = "10.5.163.41"
SubNetMask = "255.255.255.0"
MACAddress @BU-ZEUS32 = "00-13-72-5A-FC-06"
MACAddress @BU-HERA32 = "00-13-72-5A-FC-1E"
)
MountV NWmount1 (
MountPath = "S:\\"
VolumeName = SharedVolume1
VMDGResName = NWdg_1
)
Process NW_1 (
Enabled = 0
StartProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe start"
StopProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe stop"
CleanProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe stop_force"
MonitorProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe monitor"
UserName = "bureng\\administrator"
Password = BHFlGHdNDpGNkNNnF
)
VMDg NWdg_1 (
DiskGroupName = "32dg1"
)
NWip1 requires NWmount1
NWmount1 requires NWdg_1
NW_1 requires NWip1
// resource dependency tree
//
// group networker
// {
// Process NW_1
// {
// IP NWip1
// {
// MountV NWmount1
// {
// VMDg NWdg_1
// }
// }
// }
// }
The following example shows an instance of the Application resource type defined on
a Linux VCS cluster.
User = root
StartProgram = "/usr/sbin/nw_vcs start"
StopProgram = "/usr/sbin/nw_vcs stop"
CleanProgram = "/usr/sbin/nw_vcs stop_force"
MonitorProgram = "/usr/sbin/nw_vcs monitor"
MonitorProcesses = "/usr/sbin/nsrd -k Virtual_server_hostname"
The following example shows an instance of the Process resource type defined on a
Windows VCS cluster.
– The owned file systems on the shared devices are instances of the mount type
resource contained in the same service group.
NWClient nw_helene (
IPAddress="137.69.104.251"
Owned_paths={ "/shared1", "/shared2", "/dev/rdsk/c1t4d0s4" }
)
Procedure
1. To define the resource types that the NetWorker software requires, run the
cluster configuration script file /usr/sbin/networker.cluster.
2. At the Would you like to configure NetWorker for it [Yes]? prompt, type Yes.
3. At the Do you wish to continue? [Yes]? prompt, type Yes.
4. At the Enter directory where local NetWorker database is installed [/nsr]?
prompt, specify the location of the local NetWorker database directory that you
provided when you installed NetWorker. For example: /space/nsr.
2. To stop the VCS software on all nodes and leave the resources available, type:
hastop -all -force
g. Add a NWClient resource instance for the service groups that require the
resource.
Troubleshooting configuration
This section describes how to troubleshoot NetWorker configuration issues in a
cluster.
Slow backups
The lcmap program queries cluster nodes and creates a map that includes
information such as path ownership of resource groups. In large cluster
configurations, lcmap may take a long time to complete and thus slow down certain
operations. This is most often noticed in very long backup times.
In these situations, consider adjusting the cluster cache timeout attribute. This
attribute specifies a time, in seconds, in which to cache the cluster map information
on a NetWorker client.
Edit the cluster cache timeout attribute with caution. Values for the attribute can
vary from several minutes to several days and depend on the following factors:
• How often the cluster configuration changes.
• The possibility of resource group failover.
• The frequency of NetWorker operations.
If you set the value too large, then an out-of-date cluster map can result and cause
incorrect path resolution. For example, if the cluster cache timeout value is set to
86400 (one day), then any changes to the cluster map will not be captured for up to
one day. If cluster map information changes before the next refresh period, then some
paths may not resolve correctly.
Note
If you set the value too small, then cache updates can occur too frequently, which
negatively affects performance. Experiment with one physical cluster node to find a
satisfactory timeout value. If you cannot obtain a significant improvement in
performance by adjusting this attribute, then reset the attribute value to 0 (zero).
When the attribute value is 0, NetWorker does not use the attribute.
2. Display the current settings for attributes in the NSRLA resource. For example,
type:
print type:NSRLA
3. Change the value of the cluster cache timeout attribute. For example, type:
update cluster cache timeout: value
where value is the timeout value in seconds. A value of 0 (zero) specifies that
the cache is not used.
6. To make the timeout value take effect immediately, delete the cache file on the
physical node:
• UNIX: /tmp/lcmap.out
• Windows: NetWorker_install_path\nsr\bin\lcmap.out
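Taken together, the steps in this section might look like the following nsradmin session on a physical node. The timeout value of 600 seconds is illustrative only:

```
nsradmin -p nsrexec
. type: NSRLA
print
update cluster cache timeout: 600
y
quit
rm /tmp/lcmap.out    # UNIX: force the new value to take effect immediately
```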
The error also appears when the nsrexecd daemon on a UNIX host or the NetWorker
Remote Exec service on a Windows host is not running on the storage node.
To resolve this issue, start the nsrexecd process on UNIX or the NetWorker Remote
Exec service on Windows.
To resolve this issue, add the following to the Remote access list:
For Windows:
administrator@clus_phy1_IP
administrator@clus_phy1_shortname
administrator@clus_phy1_FQDN
administrator@clus_phy2_IP
administrator@clus_phy2_shortname
administrator@clus_phy2_FQDN
system@clus_phy1_IP
system@clus_phy1_shortname
system@clus_phy1_FQDN
system@clus_phy2_IP
system@clus_phy2_shortname
system@clus_phy2_FQDN
For Linux:
root@clus_phy1_IP
root@clus_phy1_shortname
root@clus_phy1_FQDN
root@clus_phy2_IP
root@clus_phy2_shortname
root@clus_phy2_FQDN
REST API does not work after NetWorker server cluster fail-over
REST API does not work after NetWorker server cluster fail-over.
To resolve this issue, add the LDAPS certificate to the keystore using the
following:
NetWorker supports the use of tape, AFTD, and Data Domain devices to back up
cluster host data. This chapter describes three common configuration scenarios when
using autochangers and tape devices to back up a highly available NetWorker server.
The information that describes how to configure AFTD and Data Domain devices in the
NetWorker Administration Guide and the NetWorker Data Domain Boost Integration Guide
applies to clustered and non-clustered hosts.
Note
If processes on nodes other than the one that owns the NetWorker server
can access the tape devices, data corruption might occur. The NetWorker
software might not detect the data corruption.
2. Zone the robotic arm and all drives to each physical node in the cluster.
3. Configure the same path (bus, target, and LUN) to the robotics and tape drives
on each node.
4. If you configured the bridge with node device-reassignment reservation
commands, then add these commands to the nsrrc startup script on the
NetWorker virtual server. The NetWorker Administration Guide describes how to
modify the nsrrc script.
5. Install the cluster vendor-supplied special device file for the robotic arm on each
physical node. The special device file creates a link to the tape or autochanger
device driver. Ensure that the name that is assigned to the link is the same on
each node for the same device. If you do not have matching special device files
across cluster nodes, you might be required to install fibre HBAs in the same
PCI slots on all the physical nodes within the cluster.
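One way to confirm that the link name resolves identically on every node is to compare the device files over ssh. The hostnames and device path below are illustrative assumptions:

```
for node in clus_phys1 clus_phys2; do
    ssh "$node" "ls -l /dev/rmt/0cbn"   # link name and target must match on all nodes
done
```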
The following figure provides a graphical view of this configuration option.
The physical node that owns the NetWorker server resource sends backup data directly to the tape devices. The
inactive physical node sends backup data to the tape devices over the network. Use
this configuration when most of the backup data originates from the active physical
node, the shared disk resource, and hosts external to the cluster.
The following figure provides a graphical view of this configuration option.
Figure 3 Autochanger with non-shared devices
In this example, use the following procedure to configure an autochanger with
non-shared tape devices:
Procedure
1. To configure the autochanger and devices by using the NMC device
configuration wizard, specify the hostname of the virtual server, clus_vir1, when
prompted for the storage node name and the prefix name. The NetWorker
Administration Guide describes how to use NMC to configure autochangers and
devices.
2. To configure the autochanger and devices by using the jbconfig command,
run jbconfig -s clus_vir1 on the physical node that owns the
NetWorker server resource.
• When prompted for the hostname to use as a prefix, specify the virtual
server name, clus_vir1.
• When prompted to configure shared devices, select Yes. The NetWorker
Administration Guide describes how to use jbconfig to configure
autochangers and devices.
• clus_vir1: nsrserverhost
The "Configuring backup and recovery" chapter describes how to configure the
Client resource for each cluster node.
In this example, use the following procedure to configure a stand-alone storage node:
• The NetWorker virtual server uses local device AFTD1 to back up the bootstrap
and indexes.
• To configure the autochanger and devices by using the NMC device configuration
wizard, specify the hostname of the stand-alone host, ext_SN, when prompted for
the storage node name and the prefix name.
• To configure the autochanger and devices by using the jbconfig command, run
jbconfig -s clus_vir1 on ext_SN. The NetWorker Administration Guide
describes how to use jbconfig to configure autochangers and devices.
– When prompted for the hostname to use as a prefix, specify the external
storage node, ext_SN.
This chapter describes how to back up virtual and physical nodes in a cluster, and how
to configure a virtual client to back up to a local storage node.
Note
NetWorker supports the use of multiple IP addresses for a resource group (resource
service for MC/ServiceGuard). However, use only one of these IP addresses to
configure the virtual client resource. The name of the NetWorker Client resource can
be the short name, the FQDN corresponding to the IP address, or the IP address. For
example: resgrp1 is a resource group that is defined in a cluster and there are two IP
resources defined in the group, IP1 and IP2. If the IP address for IP1 is defined as a
NetWorker Client resource, then all shared paths in resgrp1 are saved under the IP
address for IP1 index.
4. Specify the shortname and FQDN for each NetWorker Server, one per line, that
requires access to the NetWorker host.
When the NetWorker Server is highly available:
For example:
clus_vir1
clus_vir1.corp.com
clus_phys1
clus_phys1.corp.com
clus_phys2
clus_phys2.corp.com
When the servers file does not contain any hosts, any NetWorker Server can
back up or perform a directed recovery to the host.
5. On the node with access to the shared disk, edit the global servers file.
Note
Ensure that the hostnames defined in the global servers file are the same as the
local servers file on each physical node.
6. For Linux only, edit the NetWorker boot-time startup file,
/etc/init.d/networker, and delete any nsrexecd -s arguments that exist.
For example, when the /etc/init.d/networker file contains the following
entry:
nsrexecd
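For illustration only, the -s arguments can be stripped with a one-line sed expression. The hostnames below are assumptions, and the sketch assumes simple space-separated arguments with no quoting:

```shell
# Sample startup line with two -s arguments (hostnames are illustrative):
line='nsrexecd -s clus_vir1.corp.com -s clus_phys1.corp.com'

# Remove every "-s <server>" pair, leaving the bare nsrexecd invocation:
cleaned=$(printf '%s\n' "$line" | sed 's/ -s [^ ]*//g')
printf '%s\n' "$cleaned"
```

The same expression can be applied to the file in place (for example with sed -i) after taking a backup copy.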
4. Click OK.
5. For NetWorker Server configured to use the lockbox only:
a. In the left navigation pane, select Clients.
b. Right-click the client resource for the NetWorker virtual service and select
Modify Client Properties.
c. On the Globals (2 of 2) tab, specify the name of each cluster node in the
Remote Access field.
• For RHEL cluster nodes, specify the name of the host that appears when
you use the hostname command.
• For Windows cluster nodes, use the full computer name that appears in the
Control Panel > System > Computer name field.
6. Click OK.
Note
When you configure the NetWorker Server to use a lockbox, you must update
the Remote Access field before the virtual node fails over to another cluster
node. If you do not update the Remote Access field before failover, you must
delete and create the lockbox resource. The NetWorker Security Configuration
Guide describes how to configure the lockbox resource.
4. In the Save set field, specify the save set to back up. To back up:
• All the shared drives and CSVs that a virtual client owns, specify All.
• A single drive volume of a shared disk that a virtual client owns, specify the
drive volume letter.
For example, to back up a single drive volume, specify G:\.
To back up a single CSV, specify C:\clusterstorage\volumeX, where X
is the volume number, and C: is the system drive.
7. On the Apps and Modules tab, in the Application Information field, specify
environment variables, as required.
• For Snapshot Management backups only, use the NSR_PS_SHARED_DIR
variable to specify the share directory. For example:
NSR_PS_SHARED_DIR=P:\share
The NetWorker Snapshot Management Integration Guide describes how to
configure Snapshot backups.
• For Windows Server 2012 and Windows Server 2012 R2 CSV and deduplicated CSV
backups only:
As part of a deduplicated CSV backup, the preferred node tries to move
ownership of the CSV volume to itself. If the ownership move succeeds,
then NetWorker performs a backup locally. If the ownership move fails, then
NetWorker performs the backup over SMB. When the CSV ownership
moves, NetWorker restores the ownership to the original node after the
backup completes.
You can optionally specify the preferred cluster node to perform the backup.
To specify the preferred server, use the NetWorker client Preferred Server
Order List (PSOL) variable NSR_CSV_PSOL.
When you do not specify a PSOL, NetWorker performs the backup by using
the Current Host Server node (virtual node).
Review the following information before you specify a PSOL:
n The save.exe process uses the first available server in the list to start
the CSV backup. The first node that is available and responds becomes
the preferred backup host. If none of the specified nodes in the PSOL are
available, then NetWorker tries the backup on the Current Host Server
node.
n The Remote access list attribute on the NetWorker client must contain
the identified cluster nodes.
n Use the NetBIOS name when you specify the node names. You cannot
specify the IP address or FQDN of the node.
To specify the PSOL, include a key/value pair in the client resource
Application information attribute. Specify the key/value pair in the
following format:
NSR_CSV_PSOL=MachineName1,MachineName2,MachineName3...
For example, physical node clus_phy2 owns the cluster resources for virtual
node clus_vir1. By default, clus_vir1 runs the backup request.
To offload operations, define clus_phy1 as the preferred node to start the
save operation. If clus_phy1 is unavailable, then NetWorker should try to use
clus_phy2 to start the save operation.
In this case, set the NSR_CSV_PSOL variable in the clus_vir1 client resource to:
NSR_CSV_PSOL=clus_phy1,clus_phy2
When a physical node performs the backup, NetWorker saves the backup
information to the client file index of the virtual client resource. When you
recover the CSV backup, specify clus_vir1 as the source client.
8. For deduplicated CSV backups only, to configure an unoptimized deduplication
backup, specify VSS:NSR_DEDUP_NON_OPTIMIZED=yes in the Save
operations attribute.
9. Define the remaining attributes in the Client properties window, as required,
and then click OK.
Note
MSFCS does not support shared tapes. You cannot configure the NetWorker
virtual server with tape devices connected to a shared bus. MSFCS supports
disk devices connected to a shared bus. It is recommended that you do not use
file type devices connected to a shared bus.
Note
If you enable the Autoselect storage node attribute in the client resource, then
NetWorker will override the curphyhost setting for the client. The NetWorker
Administration Guide provides more information about the Autoselect storage node
attribute.
index of the physical node that owns the file system instead of the client file index for
the virtual node.
This section describes how to configure each supported operating system to allow
the lcmap script to query the cluster and determine path ownership for non-root or
non-administrator users.
nodeA
nodeB
l As the root user on each node in the cluster, edit or create the
/etc/cmcluster/cmclnodelist file and add the following information to the file:
nodeA user_name
nodeB user_name
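As a sketch, the entries can be generated with a short shell command. The node names nodeA and nodeB come from the example above; backupuser is a hypothetical user name, and in practice you would append these lines to /etc/cmcluster/cmclnodelist as root on each node:

```shell
# Sketch only: generate example cmclnodelist entries. nodeA, nodeB, and
# backupuser are placeholders; on a real cluster, append these lines to
# /etc/cmcluster/cmclnodelist as root on each node.
printf '%s backupuser\n' nodeA nodeB > cmclnodelist.example
cat cmclnodelist.example
```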
Note
If the cmclnodelist file exists, the cluster software ignores any .rhosts file.
Note
For information on how to set up VCS authentication, see the VCS documentation.
where:
l client is the virtual hostname to back up shared disk data or the physical node
hostname to back up data that is local to the node on which you run the save
command.
l save_set specifies the path to the backup data.
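The two cases above can be sketched as follows. The server name nw_server, the hostnames clus_vir1 and clus_phy1, and the paths are hypothetical placeholders:

```shell
# Hypothetical examples; nw_server, clus_vir1, clus_phy1, and the paths
# are placeholders for your environment.

# Back up shared disk data that belongs to the virtual client:
save -s nw_server -c clus_vir1 /shared/data

# Back up data that is local to the physical node where the command runs:
save -s nw_server -c clus_phy1 /usr/local/data
```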
Troubleshooting backups
This section provides resolutions for the following common backup and configuration
errors.
l NetWorker requires a match between the client that owns the file system and the
client resource that is configured to back up the file system.
The following conditions cause NetWorker to omit a file system backup during a
scheduled save:
l The save set attribute for a physical client resource contains a file system that is
owned by a virtual client.
l The save set attribute for a virtual Client resource contains a file system that is
owned by a physical client.
Resolve this issue in one of the ways outlined in the following sections.
Note
Use the mminfo command to confirm that the backup information saves to the
correct client file index. By design, the NMC Server Group Details window indicates
that the backup corresponds to the physical client where you configured the save set.
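A query of this kind might look as follows; clus_vir1 is a hypothetical virtual client name:

```shell
# Hypothetical query; clus_vir1 is a placeholder client name.
# List save sets recorded under this client to confirm that the backup
# information was written to the expected client file index:
mminfo -q "client=clus_vir1" -r "client,name,savetime,ssid"
```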
File system backup information written to the wrong client file index
When the pathownerignore file exists on a client at the time of a backup,
NetWorker backs up save sets that the client does not own, but writes information
about the backup save set to the client file index of the host that owns the file system.
To determine which client file index will contain save set information, run a test probe
with the verbose option set. For example:
savegrp -pv -c client_name group_name
where:
l client_name is the name of the cluster client.
l group_name is the name of a group that contains the client backup.
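A concrete invocation of the probe, with hypothetical client and group names, might look like this:

```shell
# Hypothetical probe; clus_vir1 and Cluster_Backups are placeholder names.
# The verbose output reports the client file index that receives each save set:
savegrp -pv -c clus_vir1 Cluster_Backups
```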
To force NetWorker to write the save set information to the client that does not own
the file system, perform one of the following tasks:
l For a manual save operation, use the -c option with the save command to specify
the name of the client with the save set information.
l For a scheduled save operation, to write the save set information to the
index of the client that backs up the save set:
1. Edit the properties of the client in NMC.
2. Select the Apps and Modules tab.
3. In the Backup command attribute, specify the save command with the name
of the client to receive the save set information:
save -c client_name
Recovering data
This section describes how to recover data from shared disks that belong to a virtual
client.
Note
The steps to recover data that originated on a private disk on a physical cluster client
are the same as when you recover data from a host that is not part of a cluster. The
NetWorker Administration Guide provides more information.
For information about recovering Windows clusters, see the Windows Bare Metal
Recovery (BMR) chapter in the NetWorker Administration Guide.
To recover data that is backed up from a shared disk that belongs to a virtual client,
perform the following steps:
Procedure
1. Ensure that you have correctly configured remote access to the virtual client:
a. Edit the properties of the virtual client resource in NMC.
b. On the Globals (2 of 2) tab, ensure that the Remote Access attribute
contains an entry for the root or Administrator user for each physical cluster
node.
2. To recover a CSV backup for a client that uses the NSR_CSV_PSOL variable,
ensure that the system account for each host in the preferred server order list
is a member of the NetWorker Operators User Group.
For example, if you configure the virtual node client resource that specifies the
CSV volumes with the following variable: NSR_CSV_PSOL=clu_virt1,clu_virt2,
specify the following users in the NetWorker Operators User Group:
system@clu_virt1
system@clu_virt2
Note
The -c virtual_client option is optional when you run the recover command from the
global file system that the virtual client owns. The recover man page and the
NetWorker Command Reference Guide provide more information.
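Recovering shared-disk data on behalf of the virtual client might look as follows; nw_server, clus_vir1, and the path are hypothetical placeholders:

```shell
# Hypothetical examples; nw_server, clus_vir1, and the path are placeholders.

# Automatically recover a file that belongs to the virtual client:
recover -s nw_server -c clus_vir1 -a /shared/data/file.txt

# From the global file system that clus_vir1 owns, -c can be omitted:
recover -s nw_server -a /shared/data/file.txt
```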
n To recover data from VCS 6.0 on Windows 2008 R2 SP1, you must also
start the NetWorker User program or the command prompt window as
administrator.
n To start a recover operation from the NetWorker User application, right-
click the NetWorker User application and select Run as Administrator.
n To start a recover operation from the command prompt, right-click the
command prompt application and select Run as Administrator.
Note
The currechost keyword only applies to virtual client recoveries. Do not specify this
keyword in the Clone storage nodes attribute of the Storage node resource or in the
client resource of the NetWorker virtual server. Doing so can cause unexpected
behavior; for example, the NetWorker software might write the bootstrap and index
backups to the local storage node for the virtual clients instead of to a local device on
the NetWorker virtual server.
The following restrictions apply when you configure the recovery of virtual client data
from a local storage node:
l Ensure that there are no hosts or machines named currechost on the network.
l Do not specify currechost in the Clone storage nodes attribute of a virtual client
storage node resource.
l Do not apply the currechost keyword to the Storage nodes attribute or the
Recover Storage Nodes attribute of the virtual server's Client resource.
To configure the virtual client to recover data from a local storage node:
Procedure
1. Edit the properties of the virtual client resource in NMC.
2. In the Globals (2 of 2) tab, in the Storage nodes attribute or the Recover
storage nodes attribute, add the currechost keyword. Position the keyword in
the list based on the required priority. The keyword at the top of the list has the
highest priority. Ensure that this keyword is not the only keyword in the list.
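The steps above can also be sketched with nsradmin in non-interactive mode. The hostnames nw_server and clus_vir1 are placeholders, the attribute value shown is only an example, and the exact confirmation behavior may vary by release:

```shell
# Sketch only: nw_server and clus_vir1 are placeholder hostnames, and the
# storage node list is an example; keep at least one entry besides currechost.
cat > nsradmin_cmds <<'EOF'
. type: NSR client; name: clus_vir1
update recover storage nodes: currechost, nsrserverhost
EOF
nsradmin -s nw_server -i nsradmin_cmds
```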
Troubleshooting recovery
This section provides resolutions to issues that you may encounter when recovering
data from a cluster node backup.
Before you remove the NetWorker server software, you must remove the NetWorker
configuration from the cluster. This section describes how to take a highly available
NetWorker server offline and remove the NetWorker configuration from the cluster.
This section does not apply when the NetWorker server software is a stand-alone
application (not cluster managed) or when only the client software is installed.
The process of removing the NetWorker software from a cluster is the same as
removing the software on a stand-alone machine. The NetWorker Installation Guide
describes how to remove the NetWorker software.
regcnsrd -u
2. Remove the NetWorker Server resource from MSFCS by running the following
command from any cluster node:
regcnsrd -d
administrator Person who normally installs, configures, and maintains software on network
computers, and who adds users and defines user privileges.
advanced file type device (AFTD) Disk storage device that uses a volume manager to enable multiple concurrent backup
and recovery operations and dynamically extend available disk space.
authorization code Unique code that in combination with an associated enabler code unlocks the software
for permanent use on a specific host computer. See license key.
BMR Windows Bare Metal Recovery, formerly known as Disaster Recovery. For more
information on BMR, refer to the Windows Bare Metal Recovery chapter in the
NetWorker Administration Guide.
boot address The address used by a node name when it boots up, but before HACMP/PowerHA for
AIX starts.
bootstrap Save set that is essential for disaster recovery procedures. The bootstrap consists of
three components that reside on the NetWorker server: the media database, the
resource database, and a server index.
client Host on a network, such as a computer, workstation, or application server whose data
can be backed up and restored with the backup server software.
client file index Database maintained by the NetWorker server that tracks every database object, file,
or file system backed up. The NetWorker server maintains a single index file for each
client computer. The tracking information is purged from the index after the browse
time of each backup expires.
Client resource NetWorker server resource that identifies the save sets to be backed up on a client.
The Client resource also specifies information about the backup, such as the schedule,
browse policy, and retention policy for the save sets.
cluster client A NetWorker client within a cluster; this can be either a virtual client, or a NetWorker
Client resource that backs up the private data that belongs to one of the physical
nodes.
cluster virtual server Cluster network name, sometimes referred to as cluster server name or cluster alias. A
cluster virtual server has its own IP address and is responsible for starting cluster
applications that can fail over from one cluster node to another.
current host server Cluster physical node that is hosting the Cluster Core Resources or owns the Cluster
Group. The cluster virtual server resolves to the current host server for a scheduled
NetWorker backup.
database 1. Collection of data arranged for ease and speed of update, search, and retrieval by
computer software.
2. Instance of a database management system (DBMS), which in a simple case might
be a single file containing many records, each of which contains the same set of
fields.
datazone Group of clients, storage devices, and storage nodes that are administered by a
NetWorker server.
device 1. Storage folder or storage unit that can contain a backup volume. A device can be a
tape device, optical drive, autochanger, or disk connected to the server or storage
node.
2. General term that refers to storage hardware.
3. Access path to the physical drive, when dynamic drive sharing (DDS) is enabled.
device-sharing infrastructure The hardware, firmware, and software that permit several nodes in a cluster to share
access to a device.
disaster recovery Restore and recovery of data and business operations in the event of hardware failure
or software corruption.
failover cluster Windows high-availability clusters, also known as HA clusters or failover clusters, are
groups of computers that support server applications that can be reliably utilized with a
minimum of down-time. They operate by harnessing redundant computers in groups or
clusters that provide continued service when system components fail.
group One or more client computers that are configured to perform a backup together,
according to a single designated schedule or set of conditions.
Highly available application An application that is installed in a cluster environment and configured for failover
capability. On an MC/ServiceGuard cluster this is called a highly available package.
Highly available package An application that is installed in an HP MC/ServiceGuard cluster environment and
configured for failover capability.
hostname Name or address of a physical or virtual host computer that is connected to a network.
license key Combination of an enabler code and authorization code for a specific product release to
permanently enable its use. Also called an activation key.
managed application Program that can be monitored or administered, or both from the Console server.
media index Database that contains indexed entries of storage volume location and the life cycle
status of all data and volumes managed by the NetWorker server. Also known as media
database.
networker_install_path The path or directory where the installation process places the NetWorker software.
l AIX: /usr/sbin
l Linux: /usr/bin
l Solaris: /usr/sbin
l HP-UX: /opt/networker/bin
l Windows (New installs): C:\Program Files\EMC NetWorker\nsr\bin
l Windows (Updates): C:\Program Files\Legato\nsr\bin
NetWorker Management Console (NMC) Software program that is used to manage NetWorker servers and clients. The NMC
server also provides reporting and monitoring capabilities for all NetWorker processes.
NetWorker server Computer on a network that runs the NetWorker server software, contains the online
indexes, and provides backup and restore services to the clients and storage nodes on
the same network.
node name The HACMP/PowerHA for AIX defined name for a physical node.
physical client The client associated with a physical node. For example, the / and /usr file systems
belong to the physical client.
Physical host address (physical hostname) The address used by the physical client. For HACMP for AIX 4.5, this is equivalent to a
persistent IP address.
private disk A local disk on a cluster node. A private disk is not available to other nodes within the
cluster.
recover To restore data files from backup storage to a client and apply transaction (redo) logs
to the data to make it consistent with a given point-in-time.
remote device 1. Storage device that is attached to a storage node that is separate from the
NetWorker server.
2. Storage device at an offsite location that stores a copy of data from a primary
storage device for disaster recovery.
resource Software component whose configurable attributes define the operational properties of
the NetWorker server or its clients. Clients, devices, schedules, groups, and policies are
all NetWorker resources.
save NetWorker command that backs up client files to backup media volumes and makes
data entries in the online index.
save set 1. Group of files or a file system copied to storage media by a backup or snapshot
rollover operation.
2. NetWorker media database record for a specific backup or rollover.
scheduled backup Type of backup that is configured to start automatically at a specified time for a group
of one or more NetWorker clients. A scheduled backup generates a bootstrap save set.
service address The address used by highly-available services in an HACMP/PowerHA for AIX
environment.
stand-alone server A NetWorker server that is running within a cluster, but not configured as a highly-
available application. A stand-alone server does not have failover capability.
storage node Computer that manages physically attached storage devices or libraries, whose backup
operations are administered from the controlling NetWorker server. Typically a
“remote” storage node that resides on a host other than the NetWorker server.
virtual client A NetWorker Client resource that backs up data that belongs to a highly-available
service or application within a cluster. Virtual clients can fail over from one cluster node
to another. For HACMP/PowerHA for AIX, the virtual client is the client associated with
a highly-available resource group. The file system defined in a resource group belongs
to a virtual client. The virtual client uses the service address. The HACMP/PowerHA for
AIX resource group must contain an IP service label to be considered a NetWorker
virtual client.