OneFS CLI Administration Guide
OneFS Version 7.2
CONTENTS
Chapter 1    Introduction to this guide    27
Chapter 2    Isilon scale-out NAS    29
Chapter 3    Introduction to the OneFS command-line interface    41
Chapter 4    General cluster administration    51
Chapter 5    Access zones    167
Chapter 6    185
Chapter 7    Identity management    319
Chapter 8    Auditing    337
Auditing overview........................................................................................338
Protocol audit events.................................................................................. 338
Supported event types................................................................................ 338
Supported audit tools................................................................................. 339
Managing audit settings..............................................................................340
Enable system configuration auditing............................................ 340
Enable protocol access auditing.....................................................340
Auditing settings............................................................................341
Integrating with the EMC Common Event Enabler.........................................342
Install CEE for Windows..................................................................343
Configure CEE for Windows............................................................ 344
Auditing commands.................................................................................... 344
isi audit settings modify.................................................................344
isi audit settings view.................................................................... 346
isi audit topics list..........................................................................347
isi audit topics modify....................................................................348
isi audit topics view....................................................................... 348
Chapter 9    File sharing    349
Chapter 10    Home directories    457
Chapter 11    Snapshots    469
Chapter 12    Deduplication    513
Deduplication overview...............................................................................514
Deduplication jobs......................................................................................514
Data replication and backup with deduplication..........................................515
Snapshots with deduplication.....................................................................515
Deduplication considerations......................................................................515
Shadow-store considerations......................................................................516
SmartDedupe license functionality..............................................................516
Managing deduplication............................................................................. 516
Assess deduplication space savings ............................................. 517
Specify deduplication settings ...................................................... 517
View deduplication space savings .................................................518
View a deduplication report ...........................................................518
Deduplication job report information............................................. 518
Deduplication information............................................................. 519
Deduplication commands........................................................................... 520
isi dedupe settings modify............................................................. 520
isi dedupe settings view.................................................................521
isi dedupe stats............................................................................. 521
isi dedupe reports list.................................................................... 522
isi dedupe reports view ................................................................. 522
Chapter 13    525
Chapter 14    FlexProtect    601
FlexProtect overview....................................................................................602
File striping................................................................................................. 602
Requested data protection.......................................................................... 602
FlexProtect data recovery.............................................................................603
Smartfail........................................................................................ 603
Node failures................................................................................. 603
Requesting data protection......................................................................... 604
Requested protection settings.....................................................................604
Requested protection disk space usage...................................................... 605
Chapter 15    NDMP backup    607
Chapter 16    645
Chapter 17    Protection domains    669
Chapter 18    Data-at-rest-encryption    673
Chapter 19    SmartQuotas    681
Chapter 20    Storage Pools    733
Chapter 21    System jobs    787
Chapter 22    Networking    821
Chapter 23    Hadoop    873
Chapter 24    Antivirus    901
Chapter 25    VMware integration    923
CHAPTER 1
Introduction to this guide
Live Chat
Create a Service Request
Telephone Support
CHAPTER 2
Isilon scale-out NAS
Node series    Use case
S-Series       IOPS-intensive applications
X-Series       High-concurrency and throughput-driven workflows
Isilon cluster
An Isilon cluster consists of three or more hardware nodes, up to 144. Each node runs the
Isilon OneFS operating system, the distributed file-system software that unites the nodes
into a cluster. A cluster's storage capacity ranges from a minimum of 18 TB to a maximum
of 15.5 PB.
Cluster administration
OneFS centralizes cluster management through a web administration interface and a
command-line interface. Both interfaces provide methods to activate licenses, check the
status of nodes, configure the cluster, upgrade the system, generate alerts, view client
connections, track performance, and change various settings.
In addition, OneFS simplifies administration by automating maintenance with a job
engine. You can schedule jobs that scan for viruses, inspect disks for errors, reclaim disk
space, and check the integrity of the file system. The engine manages the jobs to
minimize impact on the cluster's performance.
With SNMP versions 2c and 3, you can remotely monitor hardware components, CPU
usage, switches, and network interfaces. EMC Isilon supplies management information
bases (MIBs) and traps for the OneFS operating system.
OneFS also includes a RESTful application programming interface, known as the Platform
API, to automate access, configuration, and monitoring. For example, you can retrieve
performance statistics, provision users, and tap the file system. The Platform API
integrates with OneFS role-based access control to increase security. See the Isilon
Platform API Reference.
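As a rough sketch of calling the Platform API with a generic HTTPS client: port 8080 is the standard Platform API port, but the endpoint path, user name, and password below are placeholders rather than values taken from this guide.

curl -k -u <username>:<password> "https://<cluster-ip>:8080/platform/1/cluster/identity"

The -k option skips certificate verification, which is reasonable only while the cluster still uses its self-signed SSL certificate.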
Quorum
An Isilon cluster must have a quorum to work properly. A quorum prevents data conflicts
(for example, conflicting versions of the same file) in case two groups of nodes become
unsynchronized. If a cluster loses its quorum for read and write requests, you cannot
access the OneFS file system.
For a quorum, more than half the nodes must be available over the internal network. A
seven-node cluster, for example, requires a four-node quorum. A 10-node cluster requires
a six-node quorum. If a node is unreachable over the internal network, OneFS separates
the node from the cluster, an action referred to as splitting. After a cluster is split, cluster
operations continue as long as enough nodes remain connected to have a quorum.
In a split cluster, the nodes that remain in the cluster are referred to as the majority
group. Nodes that are split from the cluster are referred to as the minority group.
When split nodes can reconnect with the cluster and resynchronize with the other nodes,
the nodes rejoin the cluster's majority group, an action referred to as merging.
A OneFS cluster contains two quorum properties: read quorum, reported by the
efs.gmp.has_quorum sysctl, and write quorum, reported by efs.gmp.has_super_block_quorum.
By connecting to a node with SSH and running the sysctl command-line tool as root,
you can view the status of both types of quorum. Here is an example for a cluster that has
a quorum for both read and write operations, as the command's output indicates with a
1, for true:
sysctl efs.gmp.has_quorum
efs.gmp.has_quorum: 1
sysctl efs.gmp.has_super_block_quorum
efs.gmp.has_super_block_quorum: 1
Storage pools
Storage pools segment nodes and files into logical divisions to simplify the management
and storage of data.
A storage pool comprises node pools and tiers. Node pools group equivalent nodes to
protect data and ensure reliability. Tiers combine node pools to optimize storage by
need, such as a frequently used high-speed tier or a rarely accessed archive.
The SmartPools module groups nodes and files into pools. If you do not activate a
SmartPools license, the module provisions node pools and creates one file pool. If you
activate the SmartPools license, you receive more features. You can, for example, create
multiple file pools and govern them with policies. The policies move files, directories, and
file pools among node pools or tiers. You can also define how OneFS handles write
operations when a node pool or tier is full. SmartPools reserves a virtual hot spare to
reprotect data if a drive fails regardless of whether the SmartPools license is activated.
IP address pools
Within a subnet, you can partition a cluster's external network interfaces into pools of IP
address ranges. The pools empower you to customize your storage network to serve
different groups of users. Although you must initially configure the default external IP
subnet in IPv4 format, you can configure additional subnets in IPv4 or IPv6.
You can associate IP address pools with a node, a group of nodes, or NIC ports. For
example, you can set up one subnet for storage nodes and another subnet for accelerator
nodes. Similarly, you can allocate ranges of IP addresses on a subnet to different teams,
such as engineering and sales. Such options help you create a storage topology that
matches the demands of your network.
In addition, network provisioning rules streamline the setup of external connections.
After you configure the rules with network settings, you can apply the settings to new
nodes.
As a standard feature, the OneFS SmartConnect module balances connections among
nodes by using a round-robin policy with static IP addresses and one IP address pool for
each subnet. Activating a SmartConnect Advanced license adds features, such as
defining IP address pools to support multiple DNS zones.
Data-access protocols
With the OneFS operating system, you can access data with multiple file-sharing and
transfer protocols. As a result, Microsoft Windows, UNIX, Linux, and Mac OS X clients can
share the same directories and files.
OneFS supports the following protocols.
SMB
The Server Message Block (SMB) protocol enables Windows users to access the
cluster. OneFS works with SMB 1, SMB 2, and SMB 2.1, as well as SMB 3.0 for
Multichannel only. With SMB 2.1, OneFS supports client opportunity locks (oplocks)
and large (1 MB) MTU sizes. The default file share is /ifs.
NFS
The Network File System (NFS) protocol enables UNIX, Linux, and Mac OS X systems
to remotely mount any subdirectory, including subdirectories created by Windows
users. OneFS works with NFS versions 3 and 4. The default export is /ifs.
HDFS
The Hadoop Distributed File System (HDFS) protocol enables a cluster to work with
Apache Hadoop, a framework for data-intensive distributed applications. HDFS
integration requires you to activate a separate license.
FTP
FTP allows systems with an FTP client to connect to the cluster and exchange files.
HTTP
HTTP gives systems browser-based access to resources. OneFS includes limited
support for WebDAV.
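As a brief client-side sketch of the SMB and NFS defaults above (cluster.example.com and the mount point are placeholders, and the share name ifs is assumed from the default /ifs file share):

From a Linux or UNIX NFS client:
mount -t nfs cluster.example.com:/ifs /mnt/isilon

From a Windows SMB client:
net use I: \\cluster.example.com\ifs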
A file provider for accounts in /etc/spwd.db and /etc/group files. With the file
provider, you can add an authoritative third-party source of user and group
information.
You can manage users with different identity management systems; OneFS maps the
accounts so that Windows and UNIX identities can coexist. A Windows user account
managed in Active Directory, for example, is mapped to a corresponding UNIX account in
NIS or LDAP.
To control access, an Isilon cluster works with both the access control lists (ACLs) of
Windows systems and the POSIX mode bits of UNIX systems. When OneFS must
transform a file's permissions from ACLs to mode bits or from mode bits to ACLs, OneFS
merges the permissions to maintain consistent security settings.
OneFS presents protocol-specific views of permissions so that NFS exports display mode
bits and SMB shares show ACLs. You can, however, manage not only mode bits but also
ACLs with standard UNIX tools, such as the chmod and chown commands. In addition,
ACL policies enable you to configure how OneFS manages permissions for networks that
mix Windows and UNIX systems.
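For example, from an NFS client that has mounted /ifs, permissions on a file can be adjusted with the standard UNIX tools mentioned above (the path, user, and group names are illustrative only):

chown jsmith:engineering /mnt/isilon/data/report.txt
chmod 640 /mnt/isilon/data/report.txt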
Access zones
OneFS includes an access zones feature. Access zones allow users from different
authentication providers, such as two untrusted Active Directory domains, to access
different OneFS resources based on an incoming IP address. An access zone can
contain multiple authentication providers and SMB namespaces.
RBAC for administration
OneFS includes role-based access control (RBAC) for administration. In place of a
root or administrator account, RBAC lets you manage administrative access by role.
A role limits privileges to an area of administration. For example, you can create
separate administrator roles for security, auditing, storage, and backup.
It is recommended that you do not save data to the root /ifs file path but in directories
below /ifs. The design of your data storage structure should be planned carefully. A
well-designed directory optimizes cluster performance and cluster administration.
Data layout
OneFS evenly distributes data among a cluster's nodes with layout algorithms that
maximize storage efficiency and performance. The system continuously reallocates data
to conserve space.
OneFS breaks data down into smaller sections called blocks, and then the system places
the blocks in a stripe unit. By referencing either file data or erasure codes, a stripe unit
helps safeguard a file from a hardware failure. The size of a stripe unit depends on the
file size, the number of nodes, and the protection setting. After OneFS divides the data
into stripe units, OneFS allocates, or stripes, the stripe units across nodes in the cluster.
When a client connects to a node, the client's read and write operations take place on
multiple nodes. For example, when a client connects to a node and requests a file, the
node retrieves the data from multiple nodes and rebuilds the file. You can optimize how
OneFS lays out data to match your dominant access pattern: concurrent, streaming, or
random.
Writing files
On a node, the input-output operations of the OneFS software stack split into two
functional layers: A top layer, or initiator, and a bottom layer, or participant. In read and
write operations, the initiator and the participant play different roles.
When a client writes a file to a node, the initiator on the node manages the layout of the
file on the cluster. First, the initiator divides the file into blocks of 8 KB each. Second, the
initiator places the blocks in one or more stripe units. At 128 KB, a stripe unit consists of
16 blocks. Third, the initiator spreads the stripe units across the cluster until they span a
width of the cluster, creating a stripe. The width of the stripe depends on the number of
nodes and the protection setting.
After dividing a file into stripe units, the initiator writes the data first to non-volatile
random-access memory (NVRAM) and then to disk. NVRAM retains the information when
the power is off.
During the write transaction, NVRAM guards against failed nodes with journaling. If a
node fails mid-transaction, the transaction restarts without the failed node. When the
node returns, it replays the journal from NVRAM to finish the transaction. The node also
runs the AutoBalance job to check the file's on-disk striping. Meanwhile, uncommitted
writes waiting in the cache are protected with mirroring. As a result, OneFS eliminates
multiple points of failure.
Reading files
In a read operation, a node acts as a manager to gather data from the other nodes and
present it to the requesting client.
Because an Isilon cluster's coherent cache spans all the nodes, OneFS can store different
data in each node's RAM. By using the internal InfiniBand network, a node can retrieve
file data from another node's cache faster than from its own local disk. If a read operation
requests data that is cached on any node, OneFS pulls the cached data to serve it
quickly.
In addition, for files with an access pattern of concurrent or streaming, OneFS pre-fetches
in-demand data into a managing node's local cache to further improve sequential-read
performance.
Metadata layout
OneFS protects metadata by spreading it across nodes and drives.
Metadata, which includes information about where a file is stored, how it is protected,
and who can access it, is stored in inodes and protected with locks in a B+ tree, a
standard structure for organizing data blocks in a file system to provide instant lookups.
OneFS replicates file metadata across the cluster so that there is no single point of
failure.
Working together as peers, all the nodes help manage metadata access and locking. If a
node detects an error in metadata, the node looks up the metadata in an alternate
location and then corrects the error.
Striping
In a process known as striping, OneFS segments files into units of data and then
distributes the units across nodes in a cluster. Striping protects your data and improves
cluster performance.
To distribute a file, OneFS reduces it to blocks of data, arranges the blocks into stripe
units, and then allocates the stripe units to nodes over the internal network.
At the same time, OneFS distributes erasure codes that protect the file. The erasure codes
encode the file's data in a distributed set of symbols, adding space-efficient redundancy.
With only a part of the symbol set, OneFS can recover the original file data.
Taken together, the data and its redundancy form a protection group for a region of file
data. OneFS places the protection groups on different drives on different nodes, creating
data stripes.
Because OneFS stripes data across nodes that work together as peers, a user connecting
to any node can take advantage of the entire cluster's performance.
By default, OneFS optimizes striping for concurrent access. If your dominant access
pattern is streaming--that is, lower concurrency, higher single-stream workloads, such as
with video--you can change how OneFS lays out data to increase sequential-read
performance. To better handle streaming access, OneFS stripes data across more drives.
Streaming is most effective on clusters or subpools serving large files.
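A minimal sketch of switching a directory tree to the streaming layout with the isi set command, assuming this release accepts -a for the access pattern and -R for recursive application (confirm with isi set --help; the path is illustrative):

isi set -R -a streaming /ifs/data/video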
The following features help protect data and do not require you to activate a separate
license:

Antivirus
    OneFS can send files to servers running the Internet Content Adaptation Protocol
    (ICAP) to scan for viruses and other threats.
Clones
    OneFS enables you to create clones that share blocks with other files to save space.
NDMP backup and restore
    OneFS can back up data to tape and other devices through the Network Data Management
    Protocol. Although OneFS supports both NDMP 3-way and 2-way backup, 2-way backup
    requires an Isilon Backup Accelerator node.
Protection domains
The following software modules also help protect data, but they require you to activate a
separate license:
SyncIQ
SyncIQ replicates data on another Isilon cluster and automates failover and
failback operations between clusters. If a cluster becomes unusable, you can
fail over to another Isilon cluster.
SnapshotIQ
You can protect data with a snapshot, a logical copy of data stored on a cluster.
SmartLock
The SmartLock tool prevents users from modifying and deleting files. You can
commit files to a write-once, read-many state: The file can never be modified
and cannot be deleted until after a set retention period. SmartLock can help
you comply with Securities and Exchange Commission Rule 17a-4.
Data mirroring
You can protect on-disk data with mirroring, which copies data to multiple locations.
OneFS supports two to eight mirrors. You can use mirroring instead of erasure codes, or
you can combine erasure codes with mirroring.
Mirroring, however, consumes more space than erasure codes: mirroring data three times,
for example, stores three full copies of the data. In exchange, mirroring avoids the
overhead of computing and reconstructing erasure codes, so it suits transactions that
require high performance.
You can also mix erasure codes with mirroring. During a write operation, OneFS divides
data into redundant protection groups. For files protected by erasure codes, a protection
group consists of data blocks and their erasure codes. For mirrored files, a protection
group contains all the mirrors of a set of blocks. OneFS can switch the type of protection
group as it writes a file to disk. By changing the protection group dynamically, OneFS can
continue writing data despite a node failure that prevents the cluster from applying
erasure codes. After the node is restored, OneFS automatically converts the mirrored
protection groups to erasure codes.
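To see how an individual file is currently protected, you can run the isi get command against it (the path is illustrative, and the exact output fields vary by release); the output indicates whether the file is protected with erasure codes or with mirroring:

isi get /ifs/data/file.txt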
VMware integration
OneFS integrates with several VMware products, including vSphere, vCenter, and ESXi.
For example, OneFS works with the VMware vSphere API for Storage Awareness (VASA) so
that you can view information about an Isilon cluster in vSphere. OneFS also works with
the VMware vSphere API for Array Integration (VAAI) to support the following features for
block storage: hardware-assisted locking, full copy, and block zeroing. VAAI for NFS
requires an ESXi plug-in.
With the Isilon for vCenter plug-in, you can back up and restore virtual machines on an
Isilon cluster. With the Isilon Storage Replication Adapter, OneFS integrates with the
VMware vCenter Site Recovery Manager to recover virtual machines that are replicated
between Isilon clusters.
Software modules
You can access advanced features by activating licenses for EMC Isilon software
modules.
SmartLock
SmartLock protects critical data from malicious, accidental, or premature alteration
or deletion to help you comply with SEC 17a-4 regulations. You can automatically
commit data to a tamper-proof state and then retain it with a compliance clock.
SyncIQ automated failover and failback
SyncIQ replicates data on another Isilon cluster and automates failover and failback
between clusters. If a cluster becomes unusable, you can fail over to another Isilon
cluster. Failback restores the original source data after the primary cluster becomes
available again.
File clones
OneFS provides provisioning of full read/write copies of files, LUNs, and other
clones. OneFS also provides virtual machine linked cloning through VMware API
integration.
SnapshotIQ
SnapshotIQ protects data with a snapshot, a logical copy of data stored on a
cluster. A snapshot can be restored to its top-level directory.
SmartPools
SmartPools enable you to create multiple file pools governed by file-pool policies.
The policies move files and directories among node pools or tiers. You can also
define how OneFS handles write operations when a node pool or tier is full.
SmartConnect
If you activate a SmartConnect Advanced license, you can balance policies to evenly
distribute CPU usage, client connections, or throughput. You can also define IP
address pools to support multiple DNS zones in a subnet. In addition, SmartConnect
supports IP failover, also known as NFS failover.
InsightIQ
The InsightIQ virtual appliance monitors and analyzes the performance of your Isilon
cluster to help you optimize storage resources and forecast capacity.
Aspera for Isilon
Aspera moves large files over long distances fast. Aspera for Isilon is a cluster-aware
version of Aspera technology for non-disruptive, wide-area content delivery.
HDFS
OneFS works with the Hadoop Distributed File System protocol to help clients
running Apache Hadoop, a framework for data-intensive distributed applications,
analyze big data.
SmartQuotas
The SmartQuotas module tracks disk usage with reports and enforces storage limits
with alerts.
CHAPTER 3
Introduction to the OneFS command-line
interface
Syntax diagrams
The format of each command is described in a syntax diagram.
The following conventions apply for syntax diagrams:
Element Description
[]
Square brackets indicate an optional element. If you omit the contents of the square
brackets when specifying a command, the command still runs successfully.
<>
Angle brackets indicate a placeholder value. You must replace the contents of the
angle brackets with a valid value, otherwise the command fails.
{}
Braces indicate a group of elements. If the contents of the braces are separated by a
vertical bar, the contents are mutually exclusive. If the contents of the braces are not
separated by a bar, the contents must be specified together.
...
Ellipses indicate that the preceding element can be repeated more than once. If
ellipses follow a brace or bracket, the contents of the braces or brackets can be
repeated more than once.
Each isi command is broken into three parts: command, required options, and optional
options. Required options are positional, meaning that you must specify them in the
order that they appear in the syntax diagram. However, you can specify a required option
in an alternative order by preceding the text displayed in angle brackets with a double
dash. For example, consider isi snapshot snapshots create.
isi snapshot snapshots create <name> <path>
[--expires <timestamp>]
[--alias <string>]
[--verbose]
If the <name> and <path> options are prefixed with double dashes, the options can be
moved around in the command. For example, the following command is valid:
isi snapshot snapshots create --verbose --path /ifs/data --alias
newSnap_alias --name newSnap
You can abbreviate commands if each shortened word matches one command
exclusively. If a word belongs to more than one command, the command fails. For
example, isi sn snap c newSnap /ifs/data is not equivalent to isi
snapshot snapshots create newSnap /ifs/data because the root of isi
sn could belong to either isi snapshot or isi snmp.
If you begin typing a word and then press TAB, the rest of the word automatically appears
as long as the word is unambiguous and applies to only one command. For example, isi
snap completes to isi snapshot because that is the only valid possibility. However,
isi sn does not complete, because it is the root of both isi snapshot and isi
snmp.
Universal options
Some options are valid for all commands.
Syntax
isi [--timeout <integer>] [--debug] <command> [--help]
--timeout <integer>
Specifies the number of seconds before the command times out.
--debug
Displays all calls to the Isilon OneFS Platform API. If a traceback occurs, displays
traceback in addition to error message.
--help
Displays a basic description of the command and all valid options for the command.
Examples
The following command causes the isi sync policy list command to time out
after 30 seconds:
isi --timeout 30 sync policy list
The following command displays help output for isi sync policy list:
isi sync policy list --help
However, if you are on the sudoers list, the following command succeeds:
sudo isi sync policy list
The following tables list the available OneFS commands and the associated privilege or root-access requirement.
Note
If you are running in compliance mode, more commands will require sudo.
Table 1 Privileges sorted by CLI command
isi command         Privilege
isi alert           ISI_PRIV_EVENT
isi audit           ISI_PRIV_AUDIT
isi auth            ISI_PRIV_AUTH, ISI_PRIV_ROLE
isi avscan          ISI_PRIV_ANTIVIRUS
isi batterystatus   ISI_PRIV_STATISTICS
isi config          root
isi dedupe          ISI_PRIV_JOB_ENGINE, ISI_PRIV_STATISTICS
isi devices         ISI_PRIV_DEVICES
isi domain          root
isi email           ISI_PRIV_CLUSTER
isi events          ISI_PRIV_EVENT
isi exttools        root
isi fc              root
isi filepool        ISI_PRIV_SMARTPOOLS
isi firmware        root
isi ftp             ISI_PRIV_FTP
isi get             root
isi hdfs            root
isi iscsi           ISI_PRIV_ISCSI
isi job             ISI_PRIV_JOB_ENGINE
isi license         ISI_PRIV_LICENSE
isi lun             ISI_PRIV_ISCSI
isi ndmp            ISI_PRIV_NDMP
isi networks        ISI_PRIV_NETWORK
isi nfs             ISI_PRIV_NFS
isi perfstat        ISI_PRIV_STATISTICS
isi pkg             root
isi quota           ISI_PRIV_QUOTA
isi readonly        root
isi remotesupport   ISI_PRIV_REMOTE_SUPPORT
isi servicelight    ISI_PRIV_DEVICES
isi services        root
isi set             root
isi smartlock       root
isi smb             ISI_PRIV_SMB
isi snapshot        ISI_PRIV_SNAPSHOT
isi snmp            ISI_PRIV_SNMP
isi stat            ISI_PRIV_STATISTICS
isi statistics      ISI_PRIV_STATISTICS
isi status          ISI_PRIV_STATISTICS
isi storagepool     ISI_PRIV_SMARTPOOLS
isi sync            ISI_PRIV_SYNCIQ
isi tape            ISI_PRIV_NDMP
isi target          ISI_PRIV_ISCSI
isi update          root
isi version         ISI_PRIV_CLUSTER
isi worm            root
isi zone            ISI_PRIV_AUTH
Privilege                  isi commands
ISI_PRIV_ANTIVIRUS         isi avscan
ISI_PRIV_AUDIT             isi audit
ISI_PRIV_AUTH              isi auth, isi zone
ISI_PRIV_CLUSTER           isi email, isi version
ISI_PRIV_DEVICES           isi devices, isi servicelight
ISI_PRIV_EVENT             isi alert, isi events
ISI_PRIV_FTP               isi ftp
ISI_PRIV_ISCSI             isi iscsi, isi lun, isi target
ISI_PRIV_JOB_ENGINE        isi job
ISI_PRIV_LICENSE           isi license
ISI_PRIV_NDMP              isi ndmp, isi tape
ISI_PRIV_NETWORK           isi networks
ISI_PRIV_NFS               isi nfs
ISI_PRIV_QUOTA             isi quota
ISI_PRIV_REMOTE_SUPPORT    isi remotesupport
ISI_PRIV_ROLE              isi auth
ISI_PRIV_SMARTPOOLS        isi filepool, isi storagepool
ISI_PRIV_SMB               isi smb
ISI_PRIV_SNAPSHOT          isi snapshot
ISI_PRIV_SNMP              isi snmp
ISI_PRIV_STATISTICS        isi batterystatus, isi perfstat, isi stat, isi statistics, isi status
ISI_PRIV_SYNCIQ            isi sync
root                       isi config, isi domain, isi exttools, isi fc, isi firmware, isi get,
                           isi hdfs, isi pkg, isi readonly, isi services, isi set, isi smartlock,
                           isi update, isi worm

If you are on the sudoers list, you can also run the following isi_* commands through sudo:
isi_bootdisk_finish
isi_bootdisk_provider_dev
isi_bootdisk_status
isi_bootdisk_unlock
isi_checkjournal
isi_clean_idmap
isi_client_stats
isi_cpr
isi_cto_update
isi_disk_firmware_reboot
isi_dmi_info
isi_dmilog
isi_dongle_sync
isi_drivenum
isi_dsp_install
isi_dumpjournal
isi_eth_mixer_d
isi_evaluate_provision_drive
isi_fcb_vpd_tool
isi_flexnet_info
isi_flush
isi_for_array
isi_fputil
isi_gather_info
isi_gather_auth_info
isi_gather_cluster_info
isi_gconfig
isi_get_itrace
isi_get_profile
isi_hangdump
isi_hw_check
isi_hw_status
isi_ib_bug_info
isi_ib_fw
isi_ib_info
isi_ilog
isi_imdd_status
isi_inventory_tool
isi_ipmicmc
isi_job_d
isi_kill_busy
isi_km_diag
isi_lid_d
isi_linmap_mod
isi_logstore
isi_lsiexputil
isi_make_abr
isi_mcp
isi_mps_fw_status
isi_netlogger
isi_nodes
isi_ntp_config
isi_ovt_check
isi_patch_d
isi_promptsupport
isi_radish
isi_rbm_ping
isi_repstate_mod
isi_restill
isi_rnvutil
isi_sasphymon
isi_save_itrace
isi_savecore
isi_sed
isi_send_abr
isi_smbios
isi_stats_tool
isi_transform_tool
isi_ufp
isi_umount_ifs
isi_update_cto
isi_update_serialno
isi_vitutil
isi_vol_copy
isi_vol_copy_vnx
In addition to isi commands, you can run the following UNIX commands through sudo:
date
gcore
ifconfig
kill
killall
nfsstat
ntpdate
nvmecontrol
pciconf
pkill
ps
renice
shutdown
sysctl
tcpdump
top
Module       Month
SnapshotIQ   30 days
SmartLock    31 days
SyncIQ       30 days
CHAPTER 4
General cluster administration
User interfaces
Depending on your preference, location, or the task at hand, OneFS provides four
different interfaces for managing the cluster.
Web administration interface
The browser-based OneFS web administration interface provides secure access with
OneFS-supported browsers. You can use this interface to view robust graphical
monitoring displays and to perform cluster-management tasks.
Command-line interface
Cluster-administration tasks can be performed using the command-line interface
(CLI). Although most tasks can be performed from either the CLI or the web
administration interface, a few tasks can be accomplished only from the CLI.
Node front panel
You can monitor node and cluster details from a node front panel.
OneFS Platform API
OneFS includes a RESTful services application programmatic interface (API). Through
this interface, cluster administrators can develop clients and software to automate
the management and monitoring of their Isilon storage systems.
Licensing
Advanced cluster features are available when you activate licenses for OneFS software
modules. Each optional OneFS software module requires you to activate a separate
license.
For more information about the following optional software modules, contact your EMC
Isilon sales representative.
HDFS
InsightIQ
SmartConnect Advanced
SmartDedupe
SmartLock
SmartPools
SmartQuotas
SnapshotIQ
SyncIQ
License status
The status of a OneFS module license indicates whether the functionality provided by the
module is available on the cluster.
Licenses exist in one of the following states:
Status
Description
Inactive
The license has not been activated on the cluster. You cannot access the features
provided by the corresponding module.
Evaluation The license has been temporarily activated on the cluster. You can access the
features provided by the corresponding module for a limited period of time. After the
license expires, the features will become unavailable, unless the license is
reactivated.
Activated
The license has been activated on the cluster. You can access the features provided
by the corresponding module.
Expired
The evaluation license has expired on the cluster. You can no longer access the
features provided by the corresponding module. The features will remain
unavailable, unless you reactivate the license.
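You can check the current status of each module license from the CLI with the isi license command (referenced again later in this chapter); the listing should show each module together with a status such as Inactive, Evaluation, Activated, or Expired:

isi license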
The following table describes the functionality that is available for each license
depending on the license's status. For example, while a license is inactive:

License                  Functionality while the license is inactive
HDFS                     Clients cannot access the cluster through HDFS.
SmartPools               Only the default file pool is available. The virtual hot spare,
                         which reserves space for data repair if a drive fails, is also
                         available.
SmartConnect Advanced    Client connections are balanced by using a round-robin policy. IP
                         address allocation is static. Each external network subnet can be
                         assigned only one IP address pool.
SmartDedupe              You cannot deduplicate data with SmartDedupe.
License configuration
You can configure or unconfigure some OneFS module licenses.
You can configure a license by performing specific operations through the corresponding
module. Not all actions that require you to activate a license will configure the license.
Also, not all licenses can be configured. Configuring a license does not add or remove
access to any features provided by a module.
You can unconfigure a license only through the isi license unconfigure
command. You may want to unconfigure a license for a OneFS software module if, for
example, you enabled an evaluation version of a module but later decided not to
purchase a permanent license. Unconfiguring a module license does not deactivate the
license. Unconfiguring a license does not add or remove access to any features provided
by a module.
The following table describes both the actions that cause each license to be configured
and the results of unconfiguring each license:
License        Cause of configuring                         Result of unconfiguring
HDFS                                                        No system impact.
InsightIQ      No system impact.                            No system impact.
SmartPools     Create a file pool policy (other than the    OneFS deletes all file pool policies
               default file pool policy).                   (except the default file pool policy).
SmartConnect
SmartDedupe                                                 No system impact.
SmartLock                                                   No system impact.
SnapshotIQ
SmartQuotas    Create a quota.                              No system impact.
SyncIQ                                                      No system impact.
Unconfigure a license
You can unconfigure a licensed module through the command-line interface.
You must have root user privileges on your Isilon cluster to unconfigure a module license.
This procedure is available only through the command-line interface (CLI).
Note
If you do not know the module name, run the isi license command for a list of
OneFS modules and their status.
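The unconfigure step itself looks like the following sketch; the module is identified by name, and the exact option spelling used here (-m) is an assumption to verify against isi license unconfigure --help:

isi license unconfigure -m SmartConnect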
OneFS returns a confirmation message similar to the following text: The
SmartConnect module has been unconfigured. The license is
unconfigured, and any processes enabled for the module are disabled.
Certificates
You can renew the Secure Sockets Layer (SSL) certificate for the Isilon web administration
interface or replace it with a third-party SSL certificate.
All Platform API communication, which includes communication through the web
administration interface, is over SSL. You can replace or renew the self-signed certificate
with a certificate that you generate. To replace or renew an SSL certificate, you must be
logged in as root.
Procedure
1. Establish an SSH connection to any node in the cluster.
2. At the command prompt, run the following command to create the appropriate
directory.
mkdir /ifs/local/
3. At the command prompt, run the following command to change to the directory.
cd /ifs/local/
Option: Third-party (public or private) CA-issued certificate
Description: After you receive the signed certificate (now a .crt file) from the CA,
copy the certificate to /ifs/local/<common-name>.crt.

Option: Self-signed certificate based on the existing (stock) ssl.key
Description:
a. At the command prompt, run the following commands to create a two-year certificate.
   Increase or decrease the value for -days to generate a certificate with a different
   expiration date.

   cp /usr/local/apache2/conf/ssl.key/server.key ./
   openssl req -new -days 730 -nodes -x509 -key server.key -out server.crt
In addition, you should add the following attributes to be sent with your certificate
request:
Cluster identity
You can specify identity attributes for the EMC Isilon cluster.
Cluster name
The cluster name appears on the login page, and it makes the cluster and its nodes
more easily recognizable on your network. Each node in the cluster is identified by
the cluster name plus the node number. For example, the first node in a cluster
named Images may be named Images-1.
Cluster description
The cluster description appears below the cluster name on the login page. The
cluster description is useful if your environment has multiple clusters.
Login message
The login message appears as a separate box on the login page. The login message
can convey cluster information, login instructions, or warnings that a user should
know before logging into the cluster.
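A minimal sketch of setting the cluster name from the CLI through the isi config subsystem, assuming its name subcommand is available in this release (Images is the example cluster name from above, and commit saves the change, as described later in this chapter):

isi config
>>> name Images
>>> commit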
Note
If the cluster and Active Directory become out of sync by more than 5 minutes,
authentication will not work.
Procedure
1. Run the isi email command.
The following example configures SMTP email settings:
isi email --mail-relay 10.7.180.45 \
--mail-sender [email protected] \
--mail-subject "Isilon cluster event" --use-smtp-auth yes \
--auth-user SMTPuser --auth-pass Password123 --use-encryption yes
Mode     Description
Manual   Allows you to manually add a node to the cluster without requiring
         authorization.
Secure   Requires authorization of every node added to the cluster; the node must be
         added through the web administration interface or through the
         isi devices -a add -d <unconfigured_node_serial_no> command in the
         command-line interface.
Note
If you specify a secure join mode, you cannot join a node to the cluster through serial
console wizard option [2] Join an existing cluster.
If the cluster character encoding is not set to UTF-8, SMB share names are case-sensitive.
CAUTION: You must restart the cluster to apply character encoding changes.
3. Run the commit command to save your changes and exit the isi config
subsystem.
4. Restart the cluster to apply character encoding modifications.
2. To specify how often to update the last-accessed time, set the atime_grace_period
system control.
Specify the amount of time as a number of seconds.
The following command configures OneFS to update the last-accessed time every two
weeks:
sysctl efs.bam.atime_grace_period=1209600
Cluster monitoring
You can view health and status information for the EMC Isilon cluster and monitor cluster
and node performance.
Run the isi status command to review the following information:
IP addresses
Throughput
Critical events
Job status
Additional commands are available to review performance information for specific areas
of cluster activity.
Advanced performance monitoring and analytics are available through the InsightIQ
module, which requires you to activate a separate license. For more information about
optional software modules, contact your EMC Isilon sales representative.
OneFS reports the following drive states:

State                                             Interface
HEALTHY                                           Command-line interface, web administration interface
SMARTFAIL or Smartfail or restripe in progress    Command-line interface, web administration interface
NOT AVAILABLE                                     Command-line interface, web administration interface
SUSPENDED                                         Command-line interface, web administration interface
NOT IN USE                                        Command-line interface, web administration interface
REPLACE                                           Command-line interface only
STALLED                                           Command-line interface only
NEW                                               Command-line interface only
USED                                              Command-line interface only
PREPARING                                         Command-line interface only
EMPTY                                             Command-line interface only
WRONG_TYPE                                        Command-line interface only
BOOT_DRIVE                                        Command-line interface only
SED_ERROR                                         Command-line interface, web administration interface
ERASE                                             Command-line interface only
INSECURE                                          Command-line interface only
UNENCRYPTED SED                                   Web administration interface only

NOT AVAILABLE, SED_ERROR, INSECURE, and UNENCRYPTED SED are error states; a drive that the
command-line interface reports as INSECURE appears as UNENCRYPTED SED in the web
administration interface.
SNMP monitoring
You can use SNMP to remotely monitor the EMC Isilon cluster hardware components,
such as fans, hardware sensors, power supplies, and disks. The default Linux SNMP tools
or a GUI-based SNMP tool of your choice can be used for this purpose.
You can enable SNMP monitoring on individual nodes on your cluster, and you can also
monitor cluster information from any node. Generated SNMP traps are sent to your SNMP
network. You can configure an event notification rule that specifies the network station
where you want to send SNMP traps for specific events, so that when an event occurs, the
cluster sends the trap to that server. OneFS supports SNMP in read-only mode. OneFS
supports SNMP version 2c, which is the default value, and SNMP version 3.
Note
OneFS does not support SNMP v1. Although an option for v1/v2c may be displayed, if you
select the v1/v2c pair, OneFS will only monitor through SNMP v2c.
You can configure settings for SNMP v3 alone or for both SNMP v2c and v3.
Note
If you configure SNMP v3, OneFS requires the SNMP-specific security level of AuthNoPriv
as the default value when querying the cluster. The security level AuthPriv is not
supported.
Elements in an SNMP hierarchy are arranged in a tree structure, similar to a directory tree.
As with directories, identifiers move from general to specific as the string progresses from
left to right. Unlike a file hierarchy, however, each element is not only named, but also
numbered.
For example, the SNMP
entity .iso.org.dod.internet.private.enterprises.isilon.oneFSss.ssLocalNodeId.0
maps to .1.3.6.1.4.1.12124.3.2.0. The part of the name that
refers to the OneFS SNMP namespace is the 12124 element. Anything further to the right
of that number is related to OneFS-specific monitoring.
Management Information Base (MIB) documents define human-readable names for
managed objects and specify their data types and other properties. You can download
MIBs that are created for SNMP monitoring of an Isilon cluster from the web administration
interface or manage them using the command-line interface. MIBs are stored
in /usr/local/share/snmp/mibs/ on a OneFS node. The OneFS ISILON MIBs serve two purposes:
ISILON-MIB is a registered enterprise MIB. Isilon clusters have two separate MIBs:
ISILON-MIB
Defines a group of SNMP agents that respond to queries from a network monitoring
system (NMS) called OneFS Statistics Snapshot agents. As the name implies, these
agents snapshot the state of the OneFS file system at the time that it receives a
request and reports this information back to the NMS.
ISILON-TRAP-MIB
Generates SNMP traps to send to an SNMP monitoring station when the
circumstances occur that are defined in the trap protocol data units (PDUs).
The OneFS MIB files map the OneFS-specific object IDs with descriptions. Download or
copy MIB files to a directory where your SNMP tool can find them, such as
/usr/share/snmp/mibs/ or /usr/local/share/snmp/mibs, depending on the tool that you use.
To enable Net-SNMP tools to read the MIBs to provide automatic name-to-OID mapping,
add -m All to the command, as in the following example:
snmpwalk -v2c -c public -m All <node IP> isilon
If the MIB files are not in the default Net-SNMP MIB directory, you may need to specify the
full path, as in the following example. Note that all three lines are a single command.
snmpwalk -m /usr/local/share/snmp/mibs/ISILON-MIB.txt:/usr/local\
/share/snmp/mibs/ISILON-TRAP-MIB.txt:/usr/local/share/snmp/mibs \
/ONEFS-TRAP-MIB.txt -v2c -C c -c public <node IP> enterprises.onefs
Note
The previous examples are run from the snmpwalk command on a cluster. Your SNMP
version may require different arguments.
When SNMP v3 is used, OneFS requires the SNMP-specific security level of AuthNoPriv as
the default value when querying the EMC Isilon cluster. The security level AuthPriv is not
supported.
Procedure
1. Run the isi snmp command.
The following command allows access only through SNMP version 3:
isi snmp --protocols "v3"
The Isilon cluster does not generate SNMP traps unless you configure an event
notification rule to send events.
Procedure
1. Click Cluster Management > General Settings > SNMP Monitoring.
2. In the Service area of the SNMP Monitoring page, enable or disable SNMP monitoring.
a. To disable SNMP monitoring, click Disable, and then click Submit.
b. To enable SNMP monitoring, click Enable, and then continue with the following
steps to configure your settings.
3. In the Downloads area, click Download for the MIB file that you want to download.
Follow the download process that is specific to your browser.
4. Optional: If you are using Internet Explorer as your browser, right-click the Download
link, select Save As from the menu, and save the file to your local drive.
You can save the text in the file format that is specific to your Net-SNMP tool.
5. Copy MIB files to a directory where your SNMP tool can find them, such as
/usr/share/snmp/mibs/ or /usr/local/share/snmp/mibs, depending on the SNMP tool that you use.
To have Net-SNMP tools read the MIBs to provide automatic name-to-OID mapping,
add -m All to the command, as in the following example: snmpwalk -v2c -c
public -m All <node IP> isilon
6. Navigate back to the SNMP Monitoring page and configure General Settings.
a. In the Settings area, configure protocol access by selecting the version that you
want.
OneFS does not support writable OIDs; therefore, no write-only community string
setting is available.
b. In the System location field, type the system name.
This setting is the value that the node reports when responding to queries. Type a
name that helps to identify the location of the node.
c. Type the contact email address in the System contact field.
7. Optional: If you selected SNMP v1/v2 as your protocol, locate the SNMP v1/v2c
Settings section and type the community name in the Read-only community field.
The default community name is I$ilonpublic.
Note
OneFS no longer supports SNMP v1. Although an option for v1/v2c may be displayed,
if you select the v1/v2c pair, OneFS will only monitor through SNMP v2c.
8. Configure SNMP v3 Settings.
a. In the Read-only user field, type the SNMP v3 security name to change the name of
the user with read-only privileges.
The default read-only user is general.
The password must contain at least eight characters and no spaces.
b. In the SNMP v3 password field, type the new password for the read-only user to
set a new SNMP v3 authentication password.
The default password is password. It is recommended that you change the
password to improve security.
c. Type the new password in the Confirm password field to confirm the new
password.
9. Click Submit.
Coalesced events
OneFS coalesces related events into a single group event, and repeated, duplicate events into a single duplicate event.
Because the events are triggered by a single occurrence, OneFS creates a group event
and combines the related messages under the new group event numbered 24.924.
Instead of seeing four events, you will see a single group event alerting you to storage
transport issues. You can still view all the grouped events individually if you choose.
To view this coalesced event, run the following command:
isi events show 24.924
The system displays the following example output of the coalesced group event:
ID:             24.924
Type:           199990001
Severity:       critical
Value:          0.0
Message:        Disk Errors detected (Bay 1)
Node:           21
Lifetime:       Sun Jun 17 23:29:29 2012 - Now
Quieted:        Not quieted
Specifiers:     disk: 35
                val: 0.0
                devid: 24
                drive_serial: 'XXXXXXXXXXXXX'
                lba: 1953520064L
                lnn: 21
                drive_type: 'HDD'
                device: 'da1'
                bay: 1
                unit: 805306368
Coalesced by:   --
Coalescer Type: Group
Coalesced events:
ID      STARTED      ENDED SEV LNN MESSAGE
24.911  06/17 23:29  --    I   21  Disk stall: Bay 1, Type HDD, LNUM 35. Disk ...
24.912  06/17 23:29  --    I   21  Sector error: da1 block 1953520064
24.913  06/17 23:29  --    I   21  Sector error: da1 block 2202232
24.914  06/17 23:29  --    I   21  Sector error: da1 block 2202120
24.915  06/17 23:29  --    I   21  Sector error: da1 block 2202104
24.916  06/17 23:29  --    I   21  Sector error: da1 block 2202616
24.917  06/17 23:29  --    I   21  Sector error: da1 block 2202168
24.918  06/17 23:29  --    I   21  Sector error: da1 block 2202106
24.919  06/17 23:29  --    I   21  Sector error: da1 block 2202105
24.920  06/17 23:29  --    I   21  Sector error: da1 block 1048670
24.921  06/17 23:29  --    I   21  Sector error: da1 block 223
24.922  06/17 23:29  --    C   21  Disk Repair Initiated: Bay 1, Type HDD, LNUM...
The system displays the following example output of the coalesced duplicate event:
ID:             1.3035
Type:           500010001
Severity:       info
Value:          0.0
Message:        SmartQuotas threshold violation on quota violated, domain direc...
Node:           All
Lifetime:       Thu Jun 14 01:00:00 2012 - Now
Quieted:        Not quieted
Specifiers:     enforcement: 'advisory'
                domain: 'directory /ifs/quotas'
                name: 'violated'
                val: 0.0
                devid: 0
                lnn: 0
Coalesced by:   --
Coalescer Type: Duplicate
Coalesced events:
ID      STARTED      ENDED SEV LNN MESSAGE
18.621  06/14 01:00  --    I       SmartQuotas threshold vio...
18.630  06/15 01:00  --    I       SmartQuotas threshold vio...
18.638  06/16 01:00  --    I       SmartQuotas threshold vio...
18.647  06/17 01:00  --    I       SmartQuotas threshold vio...
18.655  06/18 01:00  --    I       SmartQuotas threshold vio...
You can view oldest events, newest events, historical events, and coalesced events, and
you can filter the list by severity level and event type.
Procedure
1. Run the isi events list command.
The system displays a table of events, one row per event, in columns labeled ID, STARTED,
ENDED, SEV, LNN, and MESSAGE.
The following example command displays a list of events with a severity value of
critical:
isi events list --severity=critical
The output contains the same columns, restricted to critical events.
2. To view the details of a specific event, run the isi events show command and
specify the event instance ID.
The following example command displays the details for the event with the instance
ID of 2.57:
isi events show --instanceid=2.57
COALESCED
CREATOR EV COALID UPDATED
    A group was created and the placeholder first event label was updated to include
    actual group information.
DROPPED
    An event did not include any new information and was not stored in the master
    event database.
FORWARDED_TO_MASTER
DB: STORED
DB: PURGED
Responding to events
You can view event details and respond to cluster events.
You can view and manage new events, open events, and recently ended events. You can
also view coalesced events and additional, more-detailed information about a specific
event. You also can quiet or cancel events.
Quiet
Acknowledges and removes the event from the list of new events and adds the event
to a list of quieted events.
Note
If a new event of the same event type is triggered, it is a separate new event and
must be quieted.
Unquiet
Returns a quieted event to an unacknowledged state in the list of new events and
removes the event from the list of quieted events.
Cancel
Permanently ends an occurrence of an event. The system cancels an event when
conditions are met that end its duration, which is bounded by a start time and an
end time, or when you cancel the event manually.
Most events are canceled automatically by the system when the event reaches the end of
its duration. The event remains in the system until you manually acknowledge or quiet
the event. You can acknowledge events through either the web administration interface
or the command-line interface.
Quiet an event
You can acknowledge and remove an event from the event list by quieting it.
Procedure
1. Optional: To identify the instance ID of the event that you want to quiet, run the
following command:
isi events list
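The quieting step itself is not reproduced above. A likely form, assuming an isi events quiet subcommand that accepts the instance ID (verify with isi events --help), using the example ID from earlier in this chapter:

isi events quiet 1.3035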
Unquiet an event
You can return a quieted event to an unacknowledged state by unquieting the event.
Procedure
1. Optional: To identify the instance ID of the event that you want to unquiet, run the
following command:
isi events list
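As with quieting, a likely form of the unquiet step, assuming an isi events unquiet subcommand that accepts the instance ID:

isi events unquiet 1.3035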
Cancel an event
You can end the occurrence of an event by canceling it.
Procedure
1. Optional: To identify the instance ID of the event that you want to cancel, run the
following command:
isi events list
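A likely form of the cancel step, again assuming the subcommand accepts the instance ID:

isi events cancel 1.3035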
Setting                         Option
Notification batch mode         Batch all
                                Batch by severity
                                Batch by category
                                No batching
Custom notification template    No custom notification template is set
2. To view the details of a specific notification rule, run the isi events
notifications list command and specify the rule name.
The name of the notification rule is case-sensitive.
The following example command displays the details for the SupportIQ notification
rule:
isi events notifications list --name=SupportIQ
Procedure
1. Run the isi events notifications create command.
The following example command creates a rule called test-rule that specifies that
a notification should be sent to [email protected] when any critical event occurs:
isi events notifications create --name=test-rule \
[email protected] --include-critical=all
The following example command deletes the notification rule named test-rule:
isi events notifications delete --name=test-rule
Cluster maintenance
Trained service personnel can replace or upgrade components in Isilon nodes.
Isilon Technical Support can assist you with replacing node components or upgrading
components to increase performance.
battery
SATA/SAS Drive
memory (DIMM)
fan
front panel
intrusion switch
IB/NVRAM card
SAS controller
power supply
If you configure your cluster to send alerts to Isilon, Isilon Technical Support will contact
you if a component needs to be replaced. If you do not configure your cluster to send
alerts to Isilon, you must initiate a service request.
drive
memory (DIMM)
If you want to upgrade components in your nodes, contact Isilon Technical Support.
You can update the drive firmware to the latest revision by installing the drive support
package or the drive firmware package.
You can determine whether the drive firmware on your cluster is of the latest revision by
viewing the status of the drive firmware.
Note
It is recommended that you contact EMC Isilon Technical Support before updating the
drive firmware.
Automatically updates the drive firmware for new and replacement drives to the latest
revision before those drives are formatted and used in a cluster. This is applicable
only for clusters running OneFS 7.2 and later.
Note
To view the drive firmware status of all the nodes, run the following command:
isi drivefirmware status
To view the drive firmware status of drives on a specific node, run the isi
devices command with the -a fwstatus option. Run the following command
to view the drive firmware status of each drive on node 1:
isi devices -a fwstatus -d 1
Model                  FW        Desired FW
HGST HUS724030ALA640   MF80AAC0
HGST HUS724030ALA640   MF80AAC0
HGST HUS724030ALA640   MF80AAC0
Run the following command to view the drive firmware status on node 1 and disk
12:
isi devices -a fwstatus -d 1:12
Note
Power cycling drives during a firmware update might return unexpected results. As a best
practice, do not restart or power off nodes when the drive firmware is being updated in a
cluster.
Procedure
1. Run the isi devices command with the -a fwupdate option.
The following command updates the drive firmware for all drives on node 1:
isi devices -a fwupdate -d 1
You must run the command once for each node that requires a drive firmware update.
Updating the drive firmware of a single drive takes approximately 15 seconds,
depending on the drive model. OneFS updates drives sequentially.
The output of the isi drivefirmware status command contains the following columns:
Model
Displays the name of the drive model.
FW
Displays the version number of the firmware currently running on the drives.
Desired FW
If the drive firmware should be upgraded, displays the version number of the drive
firmware that the firmware should be updated to.
Count
Displays the number of drives of this model that are currently running the specified
drive firmware.
Nodes
Displays the LNNs of nodes that the specified drives exist in.
The following example shows the output of the isi devices command with the -a
fwstatus option:
Node 1
        Model                  FW        Desired FW
Bay 1   HGST HUS724030ALA640   MF80AAC0
Bay 2   HGST HUS724030ALA640   MF80AAC0
Bay 3   HGST HUS724030ALA640   MF80AAC0
Where:
Drive
Displays the number of the bay that the drive is in.
Note
This column is not labeled in the output. The information appears under the node
number.
Model
Displays the name of the drive model.
FW
Displays the version number of the firmware currently running on the drive.
Desired FW
Displays the version number of the drive firmware that the drive should be updated
to. If a drive firmware update is not required, the Desired FW column is empty.
New and replacement drives added to a cluster are formatted regardless of the status of
their firmware revision. You can identify a firmware update failure by viewing the firmware
status for the drives on a specific node. In case of a failure, run the isi devices
command with the fwupdate action on the node to update the firmware manually. For
example, run the following command to manually update the firmware on node 1:
isi devices -a fwupdate -d 1
Procedure
1. Identify the serial number of the node to be added by running the following command:
isi devices --action discover
2. Join the node to the cluster by running the following command where <serial num> is
the serial number of the node.
isi devices --action add --device <serial num>
To restart a single node or all nodes on the cluster, run the reboot command.
The following command restarts a single node by specifying the logical node
number (lnn):
reboot 7
To shut down a single node or all nodes on the cluster, run the shutdown
command.
The following command shuts down all nodes on the cluster:
shutdown all
Upgrading OneFS
Two options are available for upgrading the OneFS operating system: a rolling upgrade or
a simultaneous upgrade. Before upgrading OneFS software, a pre-upgrade check must be
performed.
A rolling upgrade individually upgrades and restarts each node in the EMC Isilon cluster
sequentially. During a rolling upgrade, the cluster remains online and continues serving
clients with no interruption in service, although some connection resets may occur on
SMB clients. Rolling upgrades are performed sequentially by node number, so a rolling
upgrade takes longer to complete than a simultaneous upgrade. The final node in the
upgrade process is the node that you used to start the upgrade process.
Note
Rolling upgrades are not available for all clusters. For instructions on how to plan an
upgrade, prepare the cluster for upgrade, and perform an upgrade of the operating
system, see the OneFS Upgrade Planning and Process Guide.
A simultaneous upgrade installs the new operating system and restarts all nodes in the
cluster at the same time. Simultaneous upgrades are faster than rolling upgrades but
require a temporary interruption of service during the upgrade process. Your data is
inaccessible during the time that it takes to complete the upgrade process.
Before beginning either a simultaneous or rolling upgrade, OneFS compares the current
cluster and operating system with the new version to ensure that the cluster meets
certain criteria, such as configuration compatibility (SMB, LDAP, SmartPools), disk
availability, and the absence of critical cluster events. If upgrading puts the cluster at
risk, OneFS warns you, provides information about the risks, and prompts you to confirm
whether to continue the upgrade.
If the cluster does not meet the pre-upgrade criteria, the upgrade does not proceed, and
the unsupported statuses are listed.
Remote support
Isilon Technical Support personnel can remotely manage your Isilon cluster to
troubleshoot an open support case with your permission.
You can enable remote customer service support through SupportIQ or the EMC Secure
Remote Support (ESRS) Gateway.
The SupportIQ scripts are based on the Isilon isi_gather_info log-gathering tool.
The SupportIQ module is included with the OneFS operating system and does not require
you to activate a separate license. You must enable and configure the SupportIQ module
before SupportIQ can run scripts to gather data. The feature may have been enabled
when the cluster was first set up, but you can enable or disable SupportIQ through the
Isilon web administration interface.
In addition to enabling the SupportIQ module to allow the SupportIQ agent to run scripts,
you can enable remote access, which allows Isilon Technical Support personnel to
monitor cluster events and remotely manage your cluster using SSH or the web
administration interface. Remote access helps Isilon Technical Support to quickly identify
and troubleshoot cluster issues. Other diagnostic tools are available for you to use in
conjunction with Isilon Technical Support to gather and upload information such as
packet capture metrics.
Note
If you enable remote access, you must also share cluster login credentials with Isilon
Technical Support personnel. Isilon Technical Support personnel remotely access your
cluster only in the context of an open support case and only after receiving your
permission.
Configuring SupportIQ
OneFS logs contain data that Isilon Technical Support personnel can securely upload,
with your permission, and then analyze to troubleshoot cluster problems. The SupportIQ
technology must be enabled and configured for this process.
When SupportIQ is enabled, Isilon Technical Support personnel can request logs through
scripts that gather cluster data and then upload the data to a secure location. You must
enable and configure the SupportIQ module before SupportIQ can run scripts to gather
data. The feature may have been enabled when the cluster was first set up.
You can also enable remote access, which allows Isilon Technical Support personnel to
troubleshoot your cluster remotely and run additional data-gathering scripts. Remote
access is disabled by default. To enable remote SSH access to your cluster, you must
provide the cluster password to a Technical Support engineer.
Disable SupportIQ
You can disable SupportIQ so the SupportIQ agent does not run scripts to gather and
upload data about your Isilon cluster.
Procedure
1. Run the following command:
isi_promptsupport -e
SupportIQ scripts
When SupportIQ is enabled, Isilon Technical Support personnel can request logs with
scripts that gather cluster data and then upload the data. The SupportIQ scripts are
located in the /usr/local/SupportIQ/Scripts/ directory on each node.
The SupportIQ scripts can be run automatically, at the request of an Isilon Technical Support representative, to collect information about your cluster's configuration settings and operations. The SupportIQ agent then uploads the information to a secure Isilon FTP site, so that it is available for Isilon Technical Support personnel to analyze. The SupportIQ scripts do not affect cluster services or the availability of your data.
The script actions include Get IB data and Get messages, and the scripts are run through isi_gather_info, isi_gather_info --incremental, and single-node runs of isi_gather_info. The data-gathering activities include the following:
Collects and uploads information about the state and health of the OneFS /ifs/ file system.
Collects and uploads only the most recent cluster log information.
Collects and uploads information about cluster-wide and node-specific network configuration settings and operations.
Warns if the chassis is open and uploads a text file of the event information.
Enable support personnel to run the same scripts used by SupportIQ to gather data
from your devices.
An important difference between SupportIQ and the ESRS Gateway is that SupportIQ
management is cluster-wide; SupportIQ manages all nodes. The ESRS Gateway manages
nodes individually; you select which nodes should be managed.
You can only enable one remote support system on your Isilon cluster. The EMC products
you use and your type of environment determine which system is most appropriate for
your Isilon cluster:
If your environment comprises one or more EMC products that can be monitored, use
the ESRS Gateway.
If your use of ESRS requires the ESRS Client, use SupportIQ. Isilon nodes do not
support ESRS Client connectivity.
If the only EMC products in your environment are Isilon nodes, use SupportIQ.
See the most recent version of the document titled EMC Secure Remote Support Technical
Description for a complete description of EMC Secure Remote Support features and
functionality.
Additional documentation on ESRS can be found on the EMC Online Support site.
Select which nodes you want managed through the ESRS Gateway with the ESRS
Configuration Tool.
Create rules for remote support connection to Isilon nodes with the ESRS Policy
Manager.
See the most recent version of the document titled EMC Secure Remote Site Planning Guide
for a complete description of ESRS Gateway server requirements, installation, and
configuration.
See the most recent version of the document titled EMC Secure Remote Support Gateway
for Windows Operations Guide for a complete description of the ESRS Configuration Tool.
See the most recent version of the document titled EMC Secure Remote Support Policy
Manager Operations Guide for a complete description of the ESRS Policy Manager.
Additional documentation on ESRS can be found on the EMC Online Support site.
Enabled: yes
Primary ESRS Gateway server: gw-serv-esrs1
Secondary ESRS Gateway server: gw-serv-esrs2
Use SMTP failover: yes
Email customer on failure: no
Remote support subnet: subnet0
isi config
Opens a new prompt where node and cluster settings can be altered.
The command-line prompt changes to indicate that you are in the isi config
subsystem. While you are in the isi config subsystem, other OneFS commands are
unavailable and only isi config commands are valid.
Syntax
isi config
When you are in the isi config subsystem, the syntax for each option is <command>
unless otherwise noted. Commands are not recognized unless you are currently at the
isi config command prompt.
Note
Changes are updated when you run the commit command. Some commands require you
to restart the cluster.
Commands
changes
Displays a list of changes to the configuration that have not been committed.
commit
Commits configuration settings and then exits isi config.
date <time-and-date>
Displays or sets the current date and time on the cluster.
<time-and-date>
Sets cluster time to the time specified.
Specify <time-and-date> in the following format:
<YYYY>-<MM>-<DD>[T<hh>:<mm>[:<ss>]]
ipset
Obsolete. Use lnnset to renumber cluster nodes. The IP address cannot be set
manually.
joinmode [<mode>]
Displays the setting for how nodes are added to the current cluster. The optional <mode> argument sets the cluster add-node setting to one of the following values.
manual
Configures the cluster so that joins can be initiated by either the node or the
cluster.
secure
Configures the cluster so that joins can be initiated by only the cluster.
lnnset [<old-lnn> <new-lnn>]
Displays a table of logical node number (LNN), device ID, and internal IP address for
each node in the cluster when run without arguments. Changes the LNN when
specified.
<old lnn>
Specifies the old LNN that is to be changed.
<new lnn>
Specifies the new LNN that is replacing the old LNN value for that node.
Note
The new LNN must not be currently assigned to another node. Users logged in to
the shell or web administration interface of a node whose LNN is changed must
log in again to view the new LNN.
migrate [<interface-name> [[<old-ip-range>] {<new-ip-range> [-n <netmask>]}]]
Displays a list of IP address ranges that can be assigned to nodes, or adds and removes IP ranges from that list.
<interface-name>
Specifies the name of the interface: int-a, int-b, or failover.
<old-ip-range>
Specifies the range of IP addresses that can no longer be assigned to nodes. If
unspecified, all existing IP ranges are removed before the new IP range is added.
Specify in the form of <lowest-ip>-<highest-ip>.
<new-ip-range>
Specifies the range of IP addresses that can be assigned to nodes. Specify in the
form of <lowest-ip>-<highest-ip>.
-n <netmask>
Specifies a new netmask for the interface.
Note
If more than one node is given a new IP address, the cluster reboots when the change
is committed. If only one node is given a new IP address, only that node is rebooted.
mtu [<value>]
Displays the size of the maximum transmission unit (MTU) that the cluster uses for
internal network communications when run with no arguments. Sets a new size of the
MTU value, when specified. This command is for the internal network only.
Note
This command is not valid for clusters with an InfiniBand back end.
<value>
Specifies the new size of the MTU value. Any value is valid, but not all values
may be compatible with your network. The most common settings are 1500 for
standard frames and 9000 for jumbo frames.
name [<new_name>]
Displays the names currently assigned to clusters when run with no arguments.
Assigns new names to clusters, as specified.
<new name>
Specifies a new name for the cluster.
If run on an unconfigured node, this command does not accept any arguments.
remove
If run on an unconfigured node, this command does not accept any arguments.
status [advanced]
Displays current information on the status of the cluster. To display additional
information, including device health, specify advanced.
timezone [<timezone identifier>]
Displays the current time zone when run with no arguments, or sets a new time zone for the cluster.
<timezone identifier>
Specifies the new time zone for the cluster.
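For example, the following session is a sketch only (the LNN values 4 and 10 are hypothetical). It enters the isi config subsystem, renumbers a node from LNN 4 to LNN 10, reviews the uncommitted change, and commits it:
isi config
lnnset 4 10
changes
commit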
isi email
Configures email settings for the cluster.
Syntax
isi email
[--mail-relay <string>]
[--smtp-port <integer>]
[--mail-sender <string>]
[--mail-subject <string>]
Options
--mail-relay <string>
Specifies the SMTP relay address.
--smtp-port <integer>
Specifies the SMTP relay port. The default value is 25.
--mail-sender <string>
Specifies the originator email address.
--mail-subject <string>
Specifies the prefix string for the email subject.
--use-smtp-auth {yes | no}
Specifies whether to use SMTP authentication. Yes enables SMTP authentication.
{--auth-user | -u} <string>
Specifies the username for SMTP authentication.
{--auth-pass | -p} <string>
Specifies the password for SMTP authentication.
--use-encryption {yes | no}
Specifies whether to use encryption (TLS) for SMTP authentication. Yes enables
encryption.
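The following command is for illustration only; the relay host and sender address are hypothetical values. It points the cluster at an SMTP relay and sets a subject prefix for notification email:
isi email --mail-relay smtp.example.com --smtp-port 25 --mail-sender [email protected] --mail-subject "Isilon alert"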
isi exttools
Provides subcommands for interacting with supported third-party tools.
Nagios is the only third-party tool that is supported in this release. Multiple Isilon clusters
can be monitored with the configuration file that is generated when this command is
used.
Syntax
isi exttools nagios_config
Options
{--key | -k} <key>
Activates the specified license key. Multiple keys can be specified by separating
them with either spaces or commas.
Options
--module <module>
Specifies the name of the license that you want to unconfigure.
isi perfstat
Displays a realtime snapshot of network and file I/O performance.
Syntax
isi perfstat
Options
<patch-spec-file>
Provide the description and parameters for the patch that you are creating.
Options
<patch-name>
Required. Uninstalls a patch from the cluster. The patch name must be the name of
an installed patch.
Examples
To uninstall a package named patch-example.tar from the cluster, run the following
command.
isi pkg delete patch-example.tar
Use the isi pkg info command to verify that the patch was successfully uninstalled.
Options
<patch-name>
Displays information about only the specified patch. <patch-name> can be the path to
a tar archive or the URL of a patch on an HTTP or FTP site. If you omit this option, the
system displays all installed patches.
Examples
When you examine a specific patch, more information is returned than when you run isi
pkg info without arguments to get information on all patches.
To get information for a patch named patch-example.tar, run the following command.
isi pkg info patch-example.tar
The system displays the package name and date of installation, similar to the following
output.
Information for patch-example:
Description:
Package Name : patch-example - 2009-10-11
If the patch is not installed, the system displays the following output.
patch-example.tar is not installed.
Options
<patch-name>
Required. Installs the specified patch on the cluster. <patch-name> can be either a
path to a tar archive or a URL for a patch on an HTTP or FTP site.
Examples
To install a patch named patch-example.tar on the cluster, run the following command.
isi pkg install patch-example.tar
If necessary, use the isi pkg info command to verify that the patch was successfully
installed.
Options
--enabled {yes|no}
Specifies whether support for ESRS Gateway is enabled on the Isilon cluster.
--primary-esrs-gateway <string>
Specifies the name of the primary ESRS Gateway server.
The gateway server acts as the single point of entry and exit for IP-based remote
support activities and monitoring notifications.
--secondary-esrs-gateway <string>
Specifies the name of a secondary ESRS Gateway server used as a failover server.
--use-smtp-failover {yes|no}
Specifies whether to send event notifications to a failover SMTP address upon ESRS
transmission failure.
The SMTP email address is specified using the isi email command.
--email-customer-on-failure {yes|no}
Specifies whether to send an alert to a customer email address upon failure of other
notification methods.
--remote-support-subnet <string>
Specifies the subnet on the Isilon cluster to be used for remote support connections.
Examples
To enable support for ESRS Gateway on a cluster and specify a primary ESRS Gateway
server, run the following command:
isi remotesupport connectemc modify --enabled yes --primary-esrs-gateway gw-serv-esrs1
Options
This command has no options.
isi services
Displays a list of available services. The -l and -a options can be used separately or
together.
Syntax
isi services
[-l | -a]
[<service> [{enable | disable}]]
Options
-l
Lists all available services and the current status of each. This is the default value for
this command.
-a
Lists all services, including hidden services, and the current status of each.
<service> {enable | disable}
Enables or disables the specified service.
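For example, the following commands list all services, including hidden services, and then enable one of them. The service name vsftpd is illustrative; run isi services -a to see the services available on your cluster:
isi services -a
isi services vsftpd enable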
isi set
Works similarly to chmod, providing a mechanism to adjust OneFS-specific file attributes,
such as the requested protection, or to explicitly restripe files. Files can be specified by
path or LIN.
Syntax
isi set
[-f -F -L -n -v -r -R]
[-p <policy>]
[-w <width>]
[-c {on | off}]
[-g <restripe_goal>]
[-e <encoding>]
[-d <@r drives>]
[-a {<default> | <streaming> | <random> | <custom{1..5}>}]
[-l {<concurrency> | <streaming> | <random>}]
[--diskpool {<id> | <name>}]
[-A {on | off}]
[-P {on | off}]
[{--strategy | -s} {<avoid> | <metadata> | <metadata-write> | <data>}]
[<file> {<path> | <lin>}]
Options
-f
Suppresses warnings on failures to change a file.
-F
Includes the /ifs/.ifsvar directory content and any of its subdirectories. Without
-F, the /ifs/.ifsvar directory content and any of its subdirectories are skipped.
This setting allows the specification of potentially dangerous, unsupported protection
policies.
-L
Specifies file arguments by LIN instead of path.
-n
Displays the list of files that would be changed without taking any action.
-v
Displays each file as it is reached.
-r
Runs a restripe.
-R
Sets protection recursively on files.
-p <policy>
Specifies protection policies in the following forms:
+M
Where M is the number of node failures that can be tolerated without loss of data. M must be a number from 1 through 4.
+D:M
Where D indicates the number of drive failures and M indicates number of node
failures that can be tolerated without loss of data. D must be a number from 1
through 4 and M must be any value that divides into D evenly. For example, +2:2
and +4:2 are valid, but +1:2 and +3:2 are not.
Nx
Where N is the number of independent mirrored copies of the data that will be
stored. N must be a number, with 1 through 8 being valid choices.
-w <width>
Specifies the number of nodes across which a file is striped. Typically, w = N + M, but width can also mean the total number of nodes that are used.
You can set a maximum width policy of 32, but the actual protection is still subject to
the limitations on N and M.
-c {on | off}
Specifies whether write-coalescing is turned on.
-g <restripe goal>
Specifies the restripe goal. The following values are valid:
repair
reprotect
rebalance
retune
-e <encoding>
Specifies the encoding of the filename. The following values are valid:
EUC-JP
EUC-JP-MS
EUC-KR
ISO-8859-1
ISO-8859-10
ISO-8859-13
ISO-8859-14
ISO-8859-15
ISO-8859-16
ISO-8859-2
ISO-8859-3
ISO-8859-4
ISO-8859-5
ISO-8859-6
ISO-8859-7
ISO-8859-8
ISO-8859-9
UTF-8
UTF-8-MAC
Windows-1252
Windows-949
Windows-SJIS
-d <@r drives>
Specifies the minimum number of drives that the file is spread across.
-a <value>
Specifies the file access pattern optimization setting. The following values are valid:
default
streaming
random
custom1
custom2
custom3
custom4
custom5
-l <value>
Specifies the file layout optimization setting. This is equivalent to setting both the -a
and -d flags.
concurrency
streaming
random
--diskpool <id | name>
Sets the preferred diskpool for a file.
-A {on | off}
Specifies whether file access and protections settings should be managed manually.
-P {on | off}
Specifies whether the file inherits values from the applicable file pool policy.
{--strategy | -s} <value>
Sets the SSD strategy for a file. The following values are valid:
If the value is metadata-write, all copies of the file's metadata are laid out on SSD storage if possible, and user data still avoids SSDs. If the value is data, both the file's metadata and user data (one copy if using mirrored protection, all blocks if FEC) are laid out on SSD storage if possible.
avoid
Writes all associated file data and metadata to HDDs only. The data and
metadata of the file are stored so that SSD storage is avoided, unless doing so
would result in an out-of-space condition.
metadata
Writes both file data and metadata to HDDs. One mirror of the metadata for the
file is on SSD storage if possible, but the strategy for data is to avoid SSD
storage.
metadata-write
Writes file data to HDDs and metadata to SSDs, when available. All copies of
metadata for the file are on SSD storage if possible, and the strategy for data is
to avoid SSD storage.
data
Uses SSD node pools for both data and metadata. Both the metadata for the file
and user data, one copy if using mirrored protection and all blocks if FEC, are on
SSD storage if possible.
<file> {<path> |<lin>}
Specifies a file by path or LIN.
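As an illustration only (the directory path is hypothetical), the following command previews, without making any changes, a recursive change to +2:2 requested protection and a streaming access pattern; removing -n applies the change:
isi set -n -R -p +2:2 -a streaming /ifs/data/media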
isi snmp
Manages SNMP settings.
When SNMP v3 is used, OneFS requires AuthNoPriv as the default value. AuthPriv is not
supported.
Syntax
isi snmp
[--syslocation <string>]
[--syscontact <string>]
[--protocols <value>]
[--rocommunity <string>]
[--v3-rouser <string>]
[--v3-password <string>]
Options
--syslocation <string>
Specifies the SNMP system location, a read-only field that the SNMP implementation on the cluster can report back to a user when queried. It is purely informational and sets the value of the standard system location OID.
--syscontact <string>
Sets the SNMP network contact address.
--protocols <value>
Specifies SNMP protocols. The following values are valid:
v1/v2c
v3
all
Options
{--csv | -c}
Displays data as comma-separated values.
--nodes <value>
Specifies which nodes to report statistics on. The following values are valid:
all
<int>
<int>-<int>
--protocols <value>
Specifies which protocols to report statistics on. Multiple values can be specified in a
comma-separated list, for example --protocols=http,papi. The following
values are valid:
all
external
ftp
hdfs
http
internal
irp
iscsi
jobd
lsass_in
lsass_out
nlm
nfs3
nfs4
papi
siq
smb1
smb2
--classes <string>
Specifies which operation classes to report statistics on. The default setting is all classes. The following values are valid:
all
All classes
read
File and stream reading
write
File and stream writing
create
File, link, node, stream, and directory creation
delete
File, link, node, stream, and directory deletion
namespace_read
Attribute, stat, and ACL reads; lookup, directory reading
namespace_write
Renames; attribute setting; permission, time, and ACL writes
file_state
Open, close; locking: acquire, release, break, check; notification
session_state
Negotiation, inquiry, or manipulation of protocol connection or session state
other
File-system information for other uncategorized operations
--orderby <column>
Specifies how rows are ordered. The following values are valid:
Class
In
InAvg
InMax
InMin
LocalAddr
LocalName
Node
NumOps
Ops
Out
OutAvg
OutMax
OutMin
Proto
RemoteAddr
RemoteName
TimeAvg
TimeMax
TimeMin
TimeStamp
UserID
UserName
--total
Groups and aggregates results as implied by filtering options.
--totalby <column>
Aggregates results according to specified fields. The following values are valid:
Class
Group
LocalAddr
LocalName
Node
Proto
RemoteAddr
RemoteName
UserId
UserName
TimeMin
Displays the minimum elapsed time taken to complete an operation. Displayed
in microseconds.
TimeAvg
Displays the average elapsed time taken to complete an operation. Displayed in
microseconds.
Node
Displays the node on which the operation was performed.
Proto
Displays the protocol of the operation.
Class
Displays the class of the operation.
UserID
Displays the numeric UID of the user issuing the operation request or the unique
logical unit number (LUN) identifier in the case of the iSCSI protocol.
UserName
Displays the resolved text name of the UserID, or the target and LUN in the case
of the iSCSI protocol. In either case, if resolution cannot be performed,
UNKNOWN is displayed.
LocalAddr
Displays the IP address of the host receiving the operation request. Displayed in
dotted-quad form.
LocalName
Displays the resolved text name of the LocalAddr, if resolution can be performed.
RemoteAddr
Displays the IP address of the host sending the operation request. Displayed in
dotted-quad form.
RemoteName
Displays the resolved text name of the RemoteAddr, if resolution can be
performed.
--long
Displays all possible columns.
--zero
Shows table entries with no values.
--local_addrs <integer>
Specifies local IP addresses for which statistics will be reported.
--local-names <string>
Specifies local host names for which statistics will be reported.
--remote_addrs <integer>
Specifies remote IP addresses for which statistics will be reported.
--remote_names <string>
Specifies remote client names for which statistics will be reported.
--user_ids <integer>
Specifies user ids for which statistics will be reported. The default setting is all users.
--user_names <string>
Specifies user names for which statistics will be reported. The default setting is all
users.
--numeric
If text identifiers of local hosts, remote clients, or users are in the list of columns to
display (the default setting is for them to be displayed), display the unresolved
numeric equivalent of these columns.
--wide
Displays resolved names with a wide column width.
Options
{--stats | -s} <statistics-key-string>
Displays documentation on specified statistics. For a complete list of statistics, run
isi statistics list stats.
Options
{--csv | -c}
Displays data as a comma-separated list.
Note
Overrides --csv, if specified, and disables top-style display and dynamic unit
conversion.
--noconversion
Displays all data in base quantities, without dynamic conversion. If set, this
parameter also disables the display of units within the data table.
--noheader
Displays data without column headings.
{--top | -t}
Displays the results in a top-style display, where data is continuously overwritten in a
single table.
--nodes <value>
Specifies which nodes to report statistics on. The following values are valid:
all
<int>
<int>-<int>
*
--orderby <column>
Specifies how the rows are ordered. The following values are valid:
Busy
BytesIn
BytesOut
Drive
Inodes
OpsIn
OpsOut
Queued
SizeIn
SizeOut
Slow
TimeAvg
TimeInQ
Type
Used
--timestamp
Displays the time at which the isi statistics tool last gathered data. Time is
displayed in Epoch seconds.
--type
Specifies the drive types for which statistics will be reported. The default setting is all
drives. The following values are valid:
sata
sas
ssd
[--output <column>]
[--long]
Options
{--csv | -c}
Displays data as a comma-separated list.
Note
Overrides --csv, if specified, and disables top-style display and dynamic unit
conversion.
--noconversion
Displays all data in base quantities, without dynamic conversion. If set, this option
also disables the display of units within the data table.
--noheader
Displays data without column headings.
{--top | -t}
Displays the results in top-style display, where data is continuously overwritten in a
single table.
--nodes <value>
Specifies which nodes to report statistics on. The following values are valid:
all
<int>
<int>-<int>
--events <string>
Specifies which event types for the specified information are reported. The following
values are valid:
blocked
Access to the LIN was blocked waiting for a resource to be released by another
operation. Class is other.
contended
A LIN is experiencing cross-node contention; it is being accessed simultaneously
through multiple nodes. Class is other.
deadlocked
The attempt to lock the LIN resulted in deadlock. Class is other.
link
The LIN has been linked into the file system; the LIN associated with this event is
the parent directory and not the linked LIN. Class is namespace_write.
lock
The LIN is locked. Class is other.
lookup
A name is looked up in a directory; the LIN for the directory searched is the one
associated with the event. Class is namespace_read.
read
A read was performed. Class is read.
rename
A file or directory was renamed. The LIN associated with this event is the
directory where the rename took place for either the source directory or the
destination directory, if they differ. Class is namespace_write.
getattr
A file or directory attribute has been read. Class is namespace_read.
setattr
A file or directory attribute has been added, modified, or deleted. Class is
namespace_write.
unlink
A file or directory has been unlinked from the file system, the LIN associated with
this event is the parent directory of the removed item. Class is
namespace_write.
write
A write was performed. Class is write.
--classes <string>
Specifies which classes for the specified information will be reported. The default
setting is all classes. The following values are valid:
read
File and stream reading
write
File and stream writing
create
File, link, node, stream, and directory creation
delete
File, link, node, stream, and directory deletion
namespace_read
Attribute, stat, and ACL reads; lookup, directory reading
namespace_write
Renames; attribute setting; permission, time, and ACL writes
file_state
Open, close; locking: acquire, release, break, check; notification
session_state
Negotiation, inquiry, or manipulation of protocol connection or session state
other
File-system information
--orderby <column>
Specifies how rows are ordered. The following values are valid:
Class
Event
LIN
Node
Ops
Path
TimeStamp
--totalby <column>
Aggregates results according to specified fields. The following values are valid:
Class
Event
LIN
Node
Ops
Path
TimeStamp
--maxpath <integer>
Specifies the maximum path length to look up in the file system.
--pathdepth <integer>
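A sketch of a typical invocation (the event and column selections are arbitrary examples) that reports the most active paths for read and write events, aggregated by path and ordered by operation rate:
isi statistics heat --events read,write --totalby Path --orderby Ops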
Options
{--csv | -c}
Displays data as a comma-separated list.
Note
Overrides --csv, if specified, and disables top-style display and dynamic unit
conversion.
--noconversion
Displays all data in base quantities, without dynamic conversion. If set, this option
also disables the display of units within the data table.
--noheader
Displays data without column headings.
{--top | -t}
Displays results in a top-style display, where data is continuously overwritten in a
single table.
--nodes <value>
Specifies which nodes to report statistics on. The following values are valid:
all
<int>
<int>-<int>
--stats <string>
Specifies which statistics should be reported for requested nodes, where the value
for <string> is a statistics key. Use the isi statistics list stats command
for a complete listing of statistics keys.
{--onecolumn | -1}
Displays output one series at a time in one column rather than in a grid format.
{--formattime | -F}
Formats series times rather than using UNIX Epoch timestamp format.
--begin <number>
Specifies begin time in UNIX Epoch timestamp format.
--end <number>
Specifies end time in UNIX Epoch timestamp format.
--resolution <number>
Specifies the minimum interval between series data points in seconds.
--zero
Displays grid rows with no valid series points.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
--client
Displays valid option values for client mode.
--protocol
Displays valid option values for protocol mode.
--heat
Displays valid option values for heat mode.
--drive
Displays valid option values for drive mode.
Options
{--csv | -c}
Displays data as comma-separated values.
--nodes <value>
Specifies which nodes to report statistics on. The following values are valid:
all
<int>
<int>-<int>
--protocols <value>
Specifies which protocols to report statistics on. Multiple values can be specified in a
comma-separated list, for example --protocols=http,papi. The following
values are valid:
all
external
ftp
hdfs
http
internal
irp
iscsi
jobd
lsass_in
lsass_out
nlm
nfs3
nfs4
papi
siq
smb1
smb2
--classes <class>
Specifies which operation classes to report statistics on. The default setting is all.
The following values are valid:
all
All classes
read
File and stream reading
write
File and stream writing
create
File, link, node, stream, and directory creation
delete
File, link, node, stream, and directory deletion
namespace_read
Attribute, stat, and ACL reading; lookup, directory reading
namespace_write
Renames; attribute setting; permission, time, and ACL writes
file_state
Open, close; locking: acquire, release, break, check; notification
session_state
Negotiation, inquiry, or manipulation of protocol connection or session state
other
File-system information. Multiple values can be specified in a comma-separated
list.
--orderby <column>
Specifies how rows are ordered. The following values are valid:
Class
In
InAvg
InMax
InMin
InStdDev
Node
NumOps
Ops
Out
OutAvg
OutMax
OutMin
OutStdDev
Proto
TimeAvg
TimeMax
TimeMin
TimeStamp
TimeStdDev
--total
Groups and aggregates the results according to the filtering options.
--totalby <column>
Aggregates results according to specified fields. The following values are valid:
Class
Node
Proto
Op
Class
In
InAvg
InMax
InMin
InStdDev
Node
NumOps
Ops
Out
OutAvg
OutMax
OutMin
OutStdDev
Proto
TimeAvg
TimeMax
TimeMin
TimeStamp
TimeStdDev
--output <column>
Specifies which columns to display. The following values are valid:
timestamp
Displays the time at which the isi statistics tool last gathered data.
Displayed in POSIX time (number of seconds elapsed since January 1, 1970).
Specify <time-and-date> in the following format:
<YYYY>-<MM>-<DD>[T<hh>:<mm>[:<ss>]]
NumOps
Displays the number of times an operation has been performed. Multiple values
can be specified in a comma-separated list.
Ops
Displays the rate at which an operation has been performed. Displayed in
operations per second.
InMax
Displays the maximum input (received) bytes for an operation.
InMin
Displays the minimum input (received) bytes for an operation.
In
Displays the rate of input for an operation since the last time isi statistics
collected the data. Displayed in bytes per second.
InAvg
Displays the average input (received) bytes for an operation.
OutMax
Displays the maximum output (sent) bytes for an operation.
OutMin
Displays the minimum output (sent) bytes for an operation.
Out
Displays the rate of output for an operation since the last time isi statistics
collected the data. Displayed in bytes per second.
OutAvg
Displays the average output (sent) bytes for an operation.
TimeMax
Displays the maximum elapsed time taken to complete an operation. Displayed
in microseconds.
TimeMin
Displays the minimum elapsed time taken to complete an operation. Displayed
in microseconds.
TimeAvg
Displays the average elapsed time taken to complete an operation. Displayed in
microseconds.
Node
Displays the node on which the operation was performed.
Proto
Displays the protocol of the operation.
Class
Displays the class of the operation.
InStdDev
Displays the standard deviation of the input (received) bytes for an operation.
Displayed in bytes.
OutStdDev
Displays the standard deviation of the output (sent) bytes for an operation.
Displayed in bytes.
Op
Displays the name of the operation
--long
Displays all possible columns.
--zero
Shows table entries with no values.
--operations <operation>
Specifies the operations on which statistics are reported. To view a list of valid
values, run the following command: isi statistics list operations.
Multiple values can be specified in a comma-separated list.
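For illustration only (the protocol and class selections are arbitrary), the following command summarizes NFSv3 and SMB2 read and write activity per protocol:
isi statistics protocol --protocols nfs3,smb2 --classes read,write --totalby Proto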
Options
{--csv | -c}
Displays data as comma-separated values.
--noconversion
Displays all data in base quantities, without dynamic conversion. If set, this option also disables the display of units within the data table.
--noheader
Displays data without column headings.
{--top | -t}
Displays results in top-style display, where data is continuously overwritten in a
single table.
Options
{--csv | -c}
Displays data as comma-separated values.
Options
{--csv | -c}
Displays data as a comma-separated list.
Note
Overrides --csv, if specified, and disables top-style display and dynamic unit
conversion.
--noconversion
Displays all data in base quantities, without dynamic conversion. If set, this option
also disables the display of units within the data table.
--noheader
Displays data without column headings.
{--top | -t}
Displays results in a top-style display, where data is continuously overwritten in a
single table.
Note
isi status
Displays information about the current status of the cluster, events, and jobs.
Syntax
isi status
[-q]
[-r]
[-w]
[-D]
[-d <storage-pool-name>]
[-n <id>]
Options
-q
Omits event and protection operations and displays only information on the status of
the cluster.
-r
Displays the raw size.
-w
Displays results without truncations.
-D
Displays more detailed information on running protection operations, including a list
of worker processes. Also displays more information on failed protection operations,
including a list of errors.
-d <storage-pool-name>
Displays a node pool or tier view of the file system instead of a cluster view. If a
storage pool name such as a tier or a node pool is specified, only information for that
pool is reported.
-n <id>
Displays the same information for an individual node, specified by logical node
number (LNN), in addition to statistics for each disk in the node.
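For example, the following commands (node 1 is an arbitrary choice) display an abbreviated cluster summary and then the same information for a single node:
isi status -q
isi status -n 1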
isi update
Updates a cluster to a newer version of OneFS.
You are prompted to specify where the image to use to update the cluster is located. After
the image is loaded, you are prompted to reboot the cluster.
Syntax
isi update
[--rolling]
[--manual]
[--drain-time <duration>]
[--check-only]
Options
--rolling
Performs a rolling update, allowing the cluster to remain available during the update.
When a rolling update is interrupted, the same update command can be issued to
restart the rolling update. The update then attempts to continue where the previous
update was interrupted. Rolling updates are not supported for all versions. Contact
your Isilon representative for information about which versions support this option.
--manual
Causes the rolling update process to pause and wait for user input before rebooting each
node.
--drain-time <duration>
Sets the update process to suspend a node from its SmartConnect pool. The process
then waits for clients to disconnect or for the specified <duration> to elapse before
rebooting the node. The default <duration> units are in seconds. You can specify
different time units by adding a letter to the end of the time, however. The following
values are valid:
m
Minutes
h
Hours
d
Days
w
Weeks
--check-only
Provides information about potential failures across the cluster but does not initiate
the upgrade process.
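A cautious sequence might look like the following sketch; the 15-minute drain time is an arbitrary example. The first command only reports potential failures, and the second performs a rolling update that waits for clients to disconnect from each node:
isi update --check-only
isi update --rolling --drain-time 15m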
isi version
Displays detailed information about the Isilon cluster software properties.
Syntax
isi version
[<os-info>]
Options
<os-info>
Optional variable that limits the output to specified pieces of information. If you do
not include an <os-info> value, the system displays all information. Only the following
values for <os-info> are acceptable.
osversion
Displays the name, build, release date, and current operating system version.
osbuild
Displays build information.
osrelease
Displays the version string for the software.
ostype
Displays the name of the operating system.
osrevision
Displays the revision number as a base-10 number.
copyright
Displays the current copyright information for the software.
Examples
The following command displays the name of the operating system only.
isi version ostype
isi_for_array
Runs commands on multiple nodes in an array, either in parallel or in serial.
When options conflict, the one specified last takes precedence.
Note
The -k, -u, -p, and -q options are valid only for SSH transport.
Syntax
isi_for_array
[--array-name <array>]
[--array-file <filename>]
[--directory <directory>]
[--diskless]
[--known-hosts-file <filename>]
[--user <user>]
[--nodes <nodes>]
[--password <password>]
[--pre-command <command>]
[--query-password]
[--quiet]
[--serial]
[--storage]
[--transport <transport-type>]
[--throttle <setting>]
[--exclude-nodes <nodes>]
[--exclude-down-nodes]
Options
{--array-name | -a} <array>
Uses <array>.
{--array-file | -A} <filename>
Reads array information from <filename>. The default looks first for
$HOME/.array.xml, then for /etc/ifs/array.xml.
{--directory | -d} <directory>
Runs commands from the specified directory on remote computers. The current
working directory is the default directory. An empty <directory> results in commands
being run in the user's home directory on the remote computer.
{--diskless | -D}
Runs commands from diskless nodes.
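For illustration only (the node list and the command to run are arbitrary, and the node list is assumed to be a comma-separated list of LNNs), the following runs a command serially on two nodes:
isi_for_array --serial --nodes 1,2 uptime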
isi get
Displays information about a set of files, including the requested protection, current
actual protection, and whether write-coalescing is enabled.
Requested protection appears in one of three colors: green, yellow, or red. Green
indicates full protection. Yellow indicates degraded protection under a mirroring policy.
Red indicates a loss of one or more data blocks under a parity policy.
Syntax
isi get {{[-a] [-d] [-g] [-s] [{-D | -DD | -DDC}] [-R] <path>}
| {[-g] [-s] [{-D | -DD | -DDC}] [-R] -L <lin>}}
Options
-a
Displays the hidden "." and ".." entries of each directory.
-d
Displays the attributes of a directory instead of the contents.
-g
Displays detailed information, including snapshot governance lists.
-s
Displays the protection status using words instead of colors.
-D
Displays more detailed information.
-DD
Includes information about protection groups and security descriptor owners and
groups.
-DDC
Includes cyclic redundancy check (CRC) information.
-R
Displays information about the subdirectories and files of the specified directories.
<path>
Displays information about the specified file or directory.
Specify as a file or directory path.
-L <lin>
Displays information about the specified file or directory.
Specify as a file or directory LIN.
Examples
The following command displays information on ifs/home/ and all of its
subdirectories:
isi get -R /ifs/home
LEVEL   PERFORMANCE   COAL   FILE
4x/2    concurrency   on     ./
8x/3    concurrency   on     ../
4x/2    concurrency   on     admin/
4x/2    concurrency   on     ftp/
4x/2    concurrency   on     newUser1/
4x/2    concurrency   on     newUser2/
/ifs/home/admin:
default 4+2/2 concurrency on   .zshrc
/ifs/home/ftp:
default 4x/2  concurrency on   incoming/
default 4x/2  concurrency on   pub/
/ifs/home/ftp/incoming:
/ifs/home/ftp/pub:
/ifs/home/newUser1:
default 4+2/2 concurrency on   .cshrc
default 4+2/2 concurrency on   .login
default 4+2/2 concurrency on   .login_conf
default 4+2/2 concurrency on   .mail_aliases
default 4+2/2 concurrency on   .mailrc
default 4+2/2 concurrency on   .profile
default 4+2/2 concurrency on   .rhosts
default 4+2/2 concurrency on   .shrc
default 4+2/2 concurrency on   .zshrc
/ifs/home/newUser2:
default 4+2/2 concurrency on   .cshrc
default 4+2/2 concurrency on   .login
default 4+2/2 concurrency on   .login_conf
default 4+2/2 concurrency on   .mail_aliases
default 4+2/2 concurrency on   .mailrc
default 4+2/2 concurrency on   .profile
default 4+2/2 concurrency on   .rhosts
default 4+2/2 concurrency on   .shrc
default 4+2/2 concurrency on   .zshrc
isi_gather_info
Collects and uploads the most recent cluster log information to SupportIQ.
Multiple instances of -i, -f, -s, -S, and -1 are allowed.
gather_expr and analysis_expr can be quoted.
The default temporary directory is /ifs/data/Isilon_Support/ (change with -L or
-T).
Syntax
isi_gather_info
[-h]
[-v]
[-u <user>]
[-p <password>]
[-i]
[--incremental]
[-l]
[-f <filename>]
[-n <nodes>]
[--local-only]
[--skip-node-check]
[-s gather-script]
[-S gather-expr]
[-1 gather-expr]
[-a analysis-script]
[-A analysis-expr]
[-t <tarfile>]
[-x exclude_tool]
[-I]
[-L]
[-T <temp-dir>]
[--tardir <dir>]
[--symlinkdir <dir>]
[--varlog_recent]
[--varlog_all]
[--nologs]
[--group <name>]
[--noconfig]
[--save-only]
[--save]
[--upload]
[--noupload]
[--re-upload <filename>]
[--verify-upload]
[--http]
[--nohttp]
[--http-host <host>]
[--http-path <dir>]
[--http-proxy <host>]
[--http-proxy-port <port>]
[--ftp]
[--noftp]
[--ftp-user <user>]
[--ftp-pass <password>]
[--ftp-host <host>]
[--ftp-path <dir>]
[--ftp-port <alt-port>]
[--ftp-proxy <host>]
[--ftp-proxy-port <port>]
[--ftp-mode <mode>]
[--esrs]
[--email]
[--noemail]
[--email-addresses]
[--email-subject]
[--email-body]
[--skip-size-check]
Options
-h
Prints this message and exits.
-v
Prints version info and exits.
-u <user>
Specifies the login as <user> instead of as the default root user.
-p <password>
Uses <password>.
-i
Includes only the listed utility. See also the -l option for a list of utilities to include.
The special value all may be used to include every known utility.
--incremental
Gathers only those logs that changed since last log upload.
-l
Lists utilities and groups that can be included. See -i and --group.
-f <filename>
Gathers <filename> from each node. The value must be an absolute path.
-n <nodes>
Gathers information from only the specified nodes. Nodes must be a list or range of
LNNs, for example, 1,4-10,12,14. If no nodes are specified, the whole array is
used. Note that nodes are automatically excluded if they are down.
--local-only
Gathers information from only the local node. Use this option when gathering files from the /ifs file system.
--skip-node-check
Skips the check for node availability.
-s gather-script
Runs <gather-script> on every node.
-S gather-expr
Runs <gather-expr> on every node.
-1 gather-expr
Runs <gather-expr> on the local node.
-a analysis-script
Runs <analysis-script> on results.
-A analysis-expr
Runs <analysis-expr> on every node.
-t <tarfile>
Saves all results to the specified <tarfile> rather than to the default tar file.
-x exclude_tool
Excludes the specified tool or tools from being gathered from each node. Multiple
tools can be listed as comma-separated values.
-I
Saves results to /ifs. This is the default setting.
-L
Saves all results to local storage in /var/crash/support/.
-T <temp-dir>
Saves all results to <temp-dir> instead of the default directory. -T overrides -L and -I.
--tardir <dir>
Places the final package directly into the specified directory.
--symlinkdir <dir>
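As an example only (the node numbers are hypothetical), the following command gathers only the logs that have changed since the last upload, from nodes 1 and 3, and skips the upload step:
isi_gather_info --incremental -n 1,3 --noupload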
Event commands
You can access and configure OneFS events and notification rules settings using the
event commands. Running isi events without subcommands is equivalent to running
isi events list.
Options
{--instanceids | -i} {<id> | <id,id,id...>}
Specifies the instance ID of the event that you want to cancel.
Specifies multiple event instances in a comma-separated list.
You can specify all event instances with all.
Options
{--oldest | -o} {-<rel-time> | <spec-time>}
Displays only events that have a start time after a specified date and time.
Specify -<rel-time> in the following format, where d, h, and m specify days, hours and
minutes:
<integer>{d | h | m}
Specify <spec-time> in the following format, where <MM>, <DD>, <YYYY>, <hh>, and <mm> are the numerical month, day, year, hour, and minute:
<MM>/<DD>/<YYYY> <hh>:<mm>
{--history | -s}
Retrieves only historical events, which are those that have an end date or are quieted
events.
{--coalesced | -C}
Includes coalesced events in results.
{--severity | -v} <value>
Retrieves events for a specified level or levels. Multiple levels can be specified in a comma-separated list. The following values are valid:
info
warn
critical
emergency
{--limit | -L} <integer>
Limits results to the specified number of events.
{--localdb | -l}
Uses the local DB rather than the master.
--nodes <node>
Specifies which nodes to report statistics on. Default is all. The following values are
valid:
all
<int>
<int>-<int>
{--sort-by | -b} <column>
Specifies which column to sort the rows by. The default sort column is start_time. The following values are valid:
lnn
severity
value
quiet
message
--csv
Displays rows in CSV format and suppresses headers.
{--wide | -w}
Displays table in wide mode without truncations.
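For example, the following command (the limit value is arbitrary) lists up to 10 critical and emergency events without truncating the table:
isi events list --severity critical,emergency --limit 10 --wide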
Options
--name <name>
Specifies the name of the notification rule being created.
--email <email-address>
Specifies an email address to which the event notification is sent. Multiple email addresses can be delimited with commas.
--snmp <SNMP-community>@<SNMP host>
Specifies the SNMP community and host to which an SNMP notification is sent. The community and hostname are joined by an @ symbol. Multiple entries can be specified in a comma-separated list.
--include-all <id>[,<id>]
Configures the specified events for all severities (info, warn, critical, emergency). Setting this option to all configures all events for all severities.
--include-info <id>[,<id>]
Options
<name>
Options
{--name | -n} <name>
Specifies the name of the notification rule to delete.
Options
{--name | -n} <name>
Specifies the name of the event notification rule.
Options
{--instanceids | -i} [<id> | <id,id,id...>]
Specifies the instance ID of the event that you want to quiet.
Specifies multiple event instances in a comma-separated list.
You can specify all event instances with all.
Options
{--wait | -w}
Options
{--name | -n} <name>
Specifies the name of the setting to display.
Options
{--name | -n} <name>
Specifies the name of the setting to be changed.
{--value | -v} <value>
Specifies the new value for the specified setting.
Options
{--instanceid | -i} <id>
Specifies the ID of the event to view.
{--wide | -w}
Displays the event information in wide mode.
{--localdb | -l}
Uses localdb instead of the master.
Options
{--instanceid | -i} {<id> | <id,id,id...>}
Instance ID of the event that you want to unquiet.
Specify multiple event instances in a comma-separated list.
Specify all event instances with all.
Hardware commands
You can check the status of your cluster hardware, including specific node components,
through the hardware commands.
isi batterystatus
Displays the current state of NVRAM batteries and charging systems on node hardware
that supports this feature.
Syntax
isi batterystatus
Options
There are no options for this command.
Examples
To view the current state of NVRAM batteries and charging systems, run the following
command:
isi batterystatus
If the node hardware is not compatible, the system displays output similar to the
following:
Battery status not supported on this hardware.
isi devices
Displays information about devices in the cluster and changes their status.
Syntax
isi devices
[--action <action>]
[--device {<LNN>:<drive> | <node-serial>}]
[--grid]
[--log <syslog-tag>]
[--timeout <timeout>]
Options
If no options are specified with this command, the current status of the local node is
displayed.
--action <action>
Designates the action to perform on a target device.
The following actions are available:
status
Displays the status of the given device.
smartfail
SmartFails the given node or drive.
stopfail
Ends the SmartFail operation on the given node. If the node is still attached to
the cluster, the node is returned to a healthy state. If the node has been removed
from the cluster, the node enters a suspended state.
add
Adds the given node or drive to the cluster.
format
Formats the specified drive.
fwstatus
Displays the firmware status of drives.
fwupdate
Updates the firmware of drives.
discover
Scans the internal cluster network for nodes that have not been added to the
cluster.
confirm
Displays the join status of the node.
queue
Queues the specified node to be added to the cluster. Once the node is
connected to the internal cluster network, the node will be automatically added
to the cluster. Specify --device as a node serial number.
unqueue
Removes the specified node from the queue of nodes to be added to the cluster.
If the node is connected to the internal cluster network, the node will not be
automatically added to the cluster. Specify --device as a node serial number.
queuelist
Displays the list of nodes that are queued to be added to the cluster.
Drives are specified in the format <LNN>:bay<N>.
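For example, a command similar to the following starts a SmartFail operation on a single
drive; the node number and bay shown here are hypothetical:
isi devices --action smartfail --device 2:bay3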
isi servicelight status
Displays the status of the LED service light on the back panel of a node.
Syntax
isi servicelight status
Options
There are no options for this command.
Examples
To display the status of the service light, run the following command.
isi servicelight status
isi servicelight off
Turns off the LED service light on the back panel of a node.
Syntax
isi servicelight off
Options
There are no options for this command.
Examples
To turn off the LED service light on the back panel of a node, run the following command.
isi servicelight off
isi servicelight on
Turns on the LED service light on the back panel of a node.
Syntax
isi servicelight on
Options
There are no options for this command.
Examples
To turn on the LED service light on the back panel of a node, run the following command.
isi servicelight on
Options
If no options are specified with this command, the current drive firmware status of the
local node is displayed.
{--local | -L}
Displays information from the local node only.
{--diskless | -D}
Displays information only from diskless nodes such as accelerators.
{--storage | -S}
Displays information only from storage nodes.
{--include-nodes | -n} <nodes>
Displays information only from the specified nodes.
{--exclude-nodes | -x} <nodes>
Displays information from all nodes except those that are specified.
--verbose
Displays more detailed information.
Options
{--local | -L}
Displays information from the local node only.
{--diskless | -D}
Displays information only from diskless nodes such as accelerators.
{--storage | -S}
Displays information only from storage nodes.
{--include-nodes | -n} <nodes>
Displays information only from the specified nodes.
{--exclude-nodes | -x} <nodes>
Displays information from all nodes except those that are specified.
Examples
To display the status of all firmware, run the following command:
isi firmware status
To display firmware package information from all storage nodes in the cluster, run the
following command:
isi firmware package --storage
Options
{--local | -L}
Displays information from the local node only.
{--diskless | -D}
Displays information only from diskless nodes such as accelerators.
{--storage | -S}
Displays information only from storage nodes.
{--include-nodes | -n} <nodes>
Displays information only from the specified nodes.
{--exclude-nodes | -x} <nodes>
Displays information from all nodes except those that are specified.
{--include-device | -d} <device>
Displays information only from the specified device.
--exclude-device <device>
Displays information from all devices except those that are specified.
{--include-type | -t} <device-type>
Displays information only from the specified device type.
--exclude-type <device-type>
Displays information from all device types except those that are specified.
--save
Save the output of the status to /etc/ifs/firmware_versions.
{--verbose | -v}
Displays more detailed information.
Examples
To display firmware package information from nodes two and three, run the following
command:
isi firmware status --include-nodes 2,3
Type       Firmware                            Nodes
---------  ----------------------------------  -----
Network    4.8.930+205-0002-05_A               2-3
DiskCtrl   6.28.00.00+01.28.02.00+1.17+0.99c   2-3
To display firmware package information for the network device type, run the following
command:
isi firmware status --include-type network
Type       Firmware                 Nodes
---------  -----------------------  -----
Network    4.8.930+205-0002-05_A    1-6
Options
{--local | -L}
Updates the local node only.
{--diskless | -D}
Updates diskless nodes such as accelerators only.
{--storage | -S}
Updates storage nodes only.
{--include-nodes | -n} <nodes>
Updates the specified nodes only.
{--exclude-nodes | -x} <nodes>
Updates all nodes except those that are specified.
{--include-device | -d} <device>
Updates the specified device only.
--exclude-device <device>
Updates all devices except those that are specified.
{--include-type | -t} <device-type>
Updates the specified device type only.
--exclude-type <device-type>
Updates all device types except those that are specified.
--force
Forces the update.
{--verbose | -v}
Displays more detailed information.
Examples
To update the firmware on nodes two and three, run the following command:
isi firmware update --include-nodes 2,3
To update the firmware for the network device type only, run the following command:
isi firmware update --include-type network
isi readonly off
Sets nodes to read/write mode.
Syntax
isi readonly off
[--nodes <nodes>]
Options
If no options are specified, the local node is set to read-write mode.
--nodes <nodes>
Specifies the nodes to apply read-write settings to. The following values for <nodes>
are valid:
all
<int>
<int>-<int>
Use the isi readonly show command to confirm the read-write settings of the
cluster. The system displays output similar to the following example.
node   mode         status
----   ----------   ------
1      read/write
2      read/write
3      read/write
4      read/write
5      read/write
6      read/write
isi readonly on
Sets nodes to read-only mode.
If read-only mode is currently disallowed for this node, it will remain read/write until
read-only mode is allowed again.
Syntax
isi readonly on
[--nodes <nodes>]
[--verbose]
Options
If no options are specified, the local node is set to read-only mode.
--nodes <nodes>
Specifies the nodes to apply read-only settings to. The following values for <nodes>
are valid:
all
<int>
<int>-<int>
Use the isi readonly show command to confirm the read-only settings of the
cluster. The system displays output similar to the following example.
node   mode        status
----   ---------   -------
1      read-only   user-ui
2      read-only   user-ui
3      read-only   user-ui
4      read-only   user-ui
5      read-only   user-ui
6      read-only   user-ui
isi readonly show
Displays whether nodes are in read-only or read/write mode.
Syntax
isi readonly show
Options
There are no options for this command.
Examples
To display the read-only settings for the cluster, run the following command.
isi readonly show
node   mode         status
----   ----------   ------
1      read/write
2      read/write
3      read/write
4      read/write
5      read/write
6      read/write
CHAPTER 5
Access zones
The base directory cannot be identical to the base directory of any other access zone,
except the System zone. For example, you cannot specify /ifs/data/hr to both
the zone2 and zone3 access zones.
The base directory cannot overlap with the file system tree of a base directory in any
other access zone, except the System zone. For example, if /ifs/data/hr is assigned to
zone2, you cannot assign /ifs/data/hr/personnel to zone3.
The base directory of the default System access zone is /ifs and cannot be
modified.
Note
Assigning a base directory that is identical to or overlaps with the System zone is
allowed, but only recommended as a temporary base directory when modifying the base
directory path and migrating data to the new directory.
OneFS does not support one DNS server per access zone. It is recommended that all
access zones point to a single DNS server.
Entity                       Limit
Access zones                 20
LDAP providers               20
NIS providers                20
Local providers              20
File providers               20
Kerberos providers           20
SMB shares per access zone   A cluster cannot exceed a total of 30000 SMB shares
                             across all the access zones.
Note
SSH
HTTP
FTP
WebHDFS
RAN API
Quality of service
You can set upper bounds on quality of service by assigning specific physical resources
to each access zone.
Quality of service addresses physical hardware performance characteristics that can be
measured, improved, and sometimes guaranteed. Characteristics measured for quality of
service include but are not limited to throughput rates, CPU usage, and disk capacity.
When you share physical hardware in an EMC Isilon cluster across multiple virtual
instances, competition exists for the following services:
CPU
Memory
Network bandwidth
Disk I/O
Disk capacity
Access zones do not provide logical quality of service guarantees to these resources, but
you can partition these resources between access zones on a single cluster. The following
table describes a few ways to partition resources to improve quality of service:
Use           Notes
NICs          You can assign specific NICs on specific nodes to an IP address pool that
              is associated with an access zone. By assigning these NICs, you can
              determine the nodes and interfaces that are associated with an access
              zone. This enables the separation of CPU, memory, and network bandwidth.
SmartPools
SmartQuotas   Through SmartQuotas, you can limit disk capacity by a user or a group or
              in a directory. By applying a quota to an access zone's base directory,
              you can limit disk capacity used in that access zone.
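As an illustration of the SmartQuotas approach, a command along the following lines
creates a hard directory quota on an access zone's base directory; the path and threshold
are hypothetical, and the option names are assumptions based on the isi quota commands
rather than part of this section:
isi quota quotas create /ifs/data/zone3 directory --hard-threshold=1T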
The following command creates an access zone named zone3, sets the base directory
to /ifs/hr/data and creates the directory on the EMC Isilon cluster if it does not
already exist:
isi zone zones create zone3 /ifs/hr/data --create-path
The following command removes all authentication providers from the zone3 access
zone:
isi zone zones modify zone3 --clear-auth-providers
When you delete an access zone, the associated directories and data still exist, and you
can map new shares, exports, or paths in another
access zone.
Procedure
1. Run the isi zone zones delete command.
The following command deletes the zone3 access zone:
isi zone zones delete zone3
Options
<zone>
Specifies an access zone by name.
<user>
Specifies a user by name.
--uid <integer>
Specifies a user by UID.
--group <string>
Specifies a group by name.
--gid <integer>
Specifies a group by GID.
--sid <string>
Specifies an object by user or group SID.
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
{--verbose | -v}
Returns a success or fail message after running the command.
Options
<zone>
Specifies an access zone by name.
<user>
Specifies a user by name.
--uid <integer>
Specifies a user by UID.
--group <string>
Specifies a group by name.
--gid <integer>
Specifies a group by GID.
--sid <string>
Specifies an object by user or group SID.
--wellknown <string>
Specifies an object by well-known SID.
{--force | -f}
Suppresses command-line prompts and messages.
{--verbose | -v}
Returns a success or fail message after running the command.
Options
<zone>
Specifies an access zone by name.
{--limit | -l} <integer>
Displays no more than the specified number of items.
Options
<name>
Specifies the name of the access zone.
<path>
Specifies the base directory path for the zone. Paths for zones must not overlap,
meaning that you cannot create nested zones.
--cache-size <size>
Specifies the maximum size of the zone's in-memory identity cache in bytes. Valid
values are integers in the range 1000000 - 50000000. The default value is
5000000.
--map-untrusted <workgroup>
Maps untrusted domains to the specified NetBIOS workgroup during authentication.
--auth-providers <provider-type>:<provider-name>
Specifies one or more authentication providers, separated by commas, for
authentication to the access zone. Authentication providers are checked in the order
specified. You must specify the name of the authentication provider in the following
format: <provider-type>:<provider-name>.
--netbios-name <string>
Specifies the NetBIOS name.
--all-auth-providers {yes | no}
Specifies whether to authenticate through all available authentication providers. If
no, authentication is through the list of providers specified by the --auth-providers option.
--user-mapping-rules <string>
Specifies one or more user mapping rules, separated by commas, for the access
zone.
--home-directory-umask <integer>
Specifies the permissions to set on auto-created user home directories.
--skeleton-directory <string>
Sets the skeleton directory for user home directories.
--audit-success <operations>
Specifies one or more filters, separated by commas, for auditing protocol operations
that succeeded. The following operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
--audit-failure <operations>
Specifies one or more filters, separated by commas, for auditing protocol operations
that failed. The following operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
all
--hdfs-authentication <operations>
Specifies the allowed authentication type for the HDFS protocol. Valid values are
all, simple_only, or kerberos_only.
--hdfs-root-directory <path>
Specifies the root directory for the HDFS protocol.
--webhdfs-enabled {yes | no}
Enables or disables WebHDFS on the zone.
--hdfs-ambari-server <string>
Specifies the Ambari server that receives communication from an Ambari agent. The
value must be a resolvable hostname, FQDN, or IP address.
--hdfs-ambari-namenode <string>
Specifies a point of contact in the access zone that Hadoop services managed
through the Ambari interface should connect through. The value must be a resolvable
IP address or a SmartConnect zone name.
--syslog-forwarding-enabled {yes | no}
Enables or disables syslog forwarding of zone audit events.
--syslog-audit-events <operations>
Sets the filter for the auditing protocol operations to forward to syslog. You must
specify the --syslog-audit-events parameter for each additional filter. The
following operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
all
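To combine several of these options, you might run a command similar to the following,
which creates an access zone with an LDAP authentication provider and success auditing
of delete and write operations; the zone name, path, and provider name are hypothetical:
isi zone zones create zone5 /ifs/data/zone5 --create-path \
--auth-providers=lsa-ldap-provider:UnixLDAP \
--audit-success=delete,write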
Options
<zone>
Specifies the name of the access zone to delete.
{--force | -f}
Suppresses command-line prompts and messages.
{--verbose | -v}
Displays the results of running the command.
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Examples
To view a list of all access zones in the cluster, run the following command:
isi zone zones list
Options
<zone>
Specifies the name of the access zone to modify.
--map-untrusted <string>
Specifies the NetBIOS workgroup to map untrusted domains to during authentication.
--auth-providers <provider-type>:<provider-name>
Specifies one or more authentication providers, separated by commas, for
authentication to the access zone. This option overwrites any existing entries in the
authentication providers list. To add or remove providers without affecting the current
entries, configure settings for --add-auth-providers or --remove-auth-providers.
--clear-auth-providers
Removes all authentication providers from the access zone.
--add-auth-providers <provider-type>:<provider-name>
Adds one or more authentication providers, separated by commas, to the access
zone.
--remove-auth-providers <provider-type>:<provider-name>
Removes one or more authentication providers, separated by commas, from the
access zone.
--netbios-name <string>
Specifies the NetBIOS name.
--all-auth-providers {yes | no}
Specifies whether to authenticate through all available authentication providers. If
no, authentication is through the list of providers specified by the --auth-providers option.
--user-mapping-rules <string>
Specifies one or more user mapping rules, separated by commas, for the access zone.
This option overwrites all entries in the user mapping rules list. To add or remove
mapping rules without overwriting the current entries, configure settings with
--add-user-mapping-rules or --remove-user-mapping-rules.
--clear-user-mapping-rules
Removes all user mapping rules from the access zone.
--add-user-mapping-rules <string>
Adds one or more user mapping rules, separated by commas, to the access zone.
--remove-user-mapping-rules <string>
Removes one or more user mapping rules, separated by commas, from the access
zone.
--home-directory-umask <integer>
Specifies the permissions to set on auto-created user home directories.
--skeleton-directory <string>
Sets the skeleton directory for user home directories.
--audit-success <operations>
Specifies one or more filters, separated by commas, for auditing protocol operations
that succeeded. This option overwrites the current list of filter operations. The
following operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
all
To add or remove filters without affecting the current list, configure settings with --add-audit-success or --remove-audit-success.
--clear-audit-success
Clears all filters for auditing protocol operations that succeeded.
--add-audit-success <operations>
Adds one or more filters, separated by commas, for auditing protocol operations that
succeeded. The following operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
all
--remove-audit-success <operations>
Removes one or more filters, separated by commas, for auditing protocol operations
that succeeded. The following operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
all
--audit-failure <operations>
Specifies one or more filters, separated by commas, for auditing protocol operations
that failed. The following operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
all
This option overwrites the current list of filter operations. To add or remove filters
without affecting the current list, configure settings with --add-audit-failure
or --remove-audit-failure.
--clear-audit-failure
Clears all filters for auditing protocol operations that failed.
--add-audit-failure <operations>
Adds one or more filters, separated by commas, for auditing protocol operations that
failed. The following operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
all
--remove-audit-failure <operations>
Removes one or more filters, separated by commas, for auditing protocol operations
that failed. The following operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
all
--hdfs-authentication <operations>
Specifies the allowed authentication type for the HDFS protocol. Valid values are
all, simple_only, or kerberos_only.
--hdfs-root-directory <path>
Specifies the root directory for the HDFS protocol.
--webhdfs-enabled {yes | no}
Enables or disables WebHDFS on the zone.
--hdfs-ambari-server <string>
Specifies the Ambari server that receives communication from an Ambari agent. The
value must be a resolvable hostname, FQDN, or IP address.
--hdfs-ambari-namenode <string>
Specifies a point of contact in the access zone that Hadoop services managed
through the Ambari interface should connect through. The value must be a resolvable
IP address or a SmartConnect zone name.
--syslog-forwarding-enabled {yes | no}
Enables or disables syslog forwarding of zone audit events.
--syslog-audit-events <operations>
Sets the filter for the auditing protocol operations to forward to syslog. You must
specify the --syslog-audit-events parameter for each additional filter. The
following operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
all
--add-syslog-audit-events <operations>
Adds a filter for the auditing protocol operations to forward to syslog. The following
operations are valid:
close
create
delete
get_security
logoff
logon
read
rename
set_security
tree_connect
write
all
--remove-syslog-audit-events <string>
Removes a filter for the auditing protocol operations to forward to syslog. The all
option specifies all valid filter operations. Specify --remove-syslog-audit-events for
each filter setting that you want to remove.
--create-path
Specifies that the zone path is to be created if it doesn't already exist.
{--verbose | -v}
Displays the results of running the command.
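For example, a command similar to the following adds an audit-failure filter and sets the
HDFS root directory for an existing access zone; the zone name and path are hypothetical:
isi zone zones modify zone5 --add-audit-failure=logon,logoff \
--hdfs-root-directory=/ifs/data/zone5/hadoop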
Options
<zone>
Specifies the name of the access zone to view.
CHAPTER 6
Authentication and access control
In most situations, the default settings are sufficient. You can configure additional access
zones, custom roles, and permissions policies as necessary for your particular
environment.
Role-based access
You can assign role-based access to delegate administrative tasks to selected users.
Role-based access control (RBAC) allows the right to perform particular administrative
actions to be granted to any user who can authenticate to a cluster. Roles are created by
a Security Administrator, assigned privileges, and then assigned members. All
administrators, including those given privileges by a role, must connect to the System
zone to configure the cluster. When these members log in to the cluster through a
configuration interface, they have these privileges. All administrators can configure
settings for access zones, and they always have control over all access zones on the
cluster.
Roles also give you the ability to assign privileges to member users and groups. By
default, only the root user and the admin user can log in to the web administration
interface through HTTP or the command-line interface through SSH. Using roles, the root
and admin users can assign others to built-in or custom roles that have login and
administrative privileges to perform specific administrative tasks.
Note
As a best practice, assign users to roles that contain the minimum set of necessary
privileges. For most purposes, the default permission policy settings, system access
zone, and built-in roles are sufficient. You can create role-based access management
policies as necessary for your particular environment.
Roles
You can permit and limit access to administrative areas of your EMC Isilon cluster on a
per-user basis through roles.
OneFS includes built-in administrator roles with predefined sets of privileges that cannot
be modified. The following list describes what you can and cannot do through roles:
You can add any user to a role as long as the user can authenticate to the cluster.
You can create custom roles and assign privileges to those roles.
You can add a group to a role, which grants all of the privileges associated with the role
to all users who are members of that group.
Note
When OneFS is first installed, only users with root- or admin-level access can log in and assign
users to roles.
Built-in roles
Built-in roles include privileges to perform a set of administrative functions.
The following tables describe each of the built-in roles from most powerful to least
powerful. The tables include the privileges and read/write access levels, if applicable,
that are assigned to each role. You can assign users and groups to built-in roles and to
roles that you create.
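For example, a command similar to the following adds a hypothetical user to the built-in
SecurityAdmin role; the --add-user option is described in Managing roles later in this
chapter:
isi auth roles modify SecurityAdmin --add-user jsmith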
Table 3 SecurityAdmin role

Privileges                Read/write access
ISI_PRIV_LOGIN_CONSOLE    N/A
ISI_PRIV_LOGIN_PAPI       N/A
ISI_PRIV_LOGIN_SSH        N/A
ISI_PRIV_AUTH             Read/write
ISI_PRIV_ROLE             Read/write
Privileges                Read/write access
ISI_PRIV_LOGIN_CONSOLE    N/A
ISI_PRIV_LOGIN_PAPI       N/A
ISI_PRIV_LOGIN_SSH        N/A
ISI_PRIV_SYS_SHUTDOWN     N/A
ISI_PRIV_SYS_SUPPORT      N/A
ISI_PRIV_SYS_TIME         N/A
ISI_PRIV_ANTIVIRUS        Read/write
ISI_PRIV_AUDIT            Read/write
ISI_PRIV_CLUSTER          Read/write
ISI_PRIV_DEVICES          Read/write
ISI_PRIV_EVENT            Read/write
ISI_PRIV_FTP              Read/write
ISI_PRIV_HDFS             Read/write
ISI_PRIV_HTTP             Read/write
ISI_PRIV_ISCSI            Read/write
ISI_PRIV_JOB_ENGINE       Read/write
ISI_PRIV_LICENSE          Read/write
ISI_PRIV_NDMP             Read/write
ISI_PRIV_NETWORK          Read/write
ISI_PRIV_NFS              Read/write
ISI_PRIV_NTP              Read/write
ISI_PRIV_QUOTA            Read/write
ISI_PRIV_REMOTE_SUPPORT   Read/write
ISI_PRIV_SMARTPOOLS       Read/write
ISI_PRIV_SMB              Read/write
ISI_PRIV_SNAPSHOT         Read/write
ISI_PRIV_STATISTICS       Read/write
ISI_PRIV_SYNCIQ           Read/write
ISI_PRIV_VCENTER          Read/write
ISI_PRIV_WORM             Read/write
ISI_PRIV_NS_TRAVERSE      N/A
ISI_PRIV_NS_IFS_ACCESS    N/A
Privileges                Read/write access
ISI_PRIV_LOGIN_CONSOLE    N/A
ISI_PRIV_LOGIN_PAPI       N/A
ISI_PRIV_LOGIN_SSH        N/A
ISI_PRIV_ANTIVIRUS        Read-only
ISI_PRIV_AUDIT            Read-only
ISI_PRIV_CLUSTER          Read-only
ISI_PRIV_DEVICES          Read-only
ISI_PRIV_EVENT            Read-only
ISI_PRIV_FTP              Read-only
ISI_PRIV_HDFS             Read-only
ISI_PRIV_HTTP             Read-only
ISI_PRIV_ISCSI            Read-only
ISI_PRIV_JOB_ENGINE       Read-only
ISI_PRIV_LICENSE          Read-only
ISI_PRIV_NDMP             Read-only
ISI_PRIV_NETWORK          Read-only
ISI_PRIV_NFS              Read-only
ISI_PRIV_NTP              Read-only
ISI_PRIV_QUOTA            Read-only
ISI_PRIV_REMOTE_SUPPORT   Read-only
ISI_PRIV_SMARTPOOLS       Read-only
ISI_PRIV_SMB              Read-only
ISI_PRIV_SNAPSHOT         Read-only
ISI_PRIV_STATISTICS       Read-only
ISI_PRIV_SYNCIQ           Read-only
ISI_PRIV_VCENTER          Read-only
ISI_PRIV_WORM             Read-only
Privileges               Read/write access
ISI_PRIV_LOGIN_PAPI      N/A
ISI_PRIV_ISCSI           Read/write
ISI_PRIV_NETWORK         Read/write
ISI_PRIV_SMARTPOOLS      Read/write
ISI_PRIV_SNAPSHOT        Read/write
ISI_PRIV_SYNCIQ          Read/write
ISI_PRIV_VCENTER         Read/write
ISI_PRIV_NS_TRAVERSE     N/A
ISI_PRIV_NS_IFS_ACCESS   N/A
Table 7 BackupAdmin role

Privileges             Read/write access
ISI_PRIV_IFS_BACKUP    Read-only
ISI_PRIV_IFS_RESTORE   Read-only
Custom roles
Custom roles supplement built-in roles.
You can create custom roles and assign privileges mapped to administrative areas in your
EMC Isilon cluster environment. For example, you can create separate administrator roles
for security, auditing, storage provisioning, and backup.
You can designate certain privileges as read-only or read/write when adding the privilege
to a role. You can modify this option at any time.
You can add or remove privileges as user responsibilities grow and change.
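For example, a sequence similar to the following creates a custom role and grants it
read-only access to audit configuration; the role and user names are hypothetical, and the
isi auth roles create subcommand shown here is an assumption, while the modify options
are described in Managing roles later in this chapter:
isi auth roles create AuditReaders
isi auth roles modify AuditReaders --add-priv-ro ISI_PRIV_AUDIT
isi auth roles modify AuditReaders --add-user jsmith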
Privileges
Privileges permit users to complete tasks on an EMC Isilon cluster.
Privileges are associated with an area of cluster administration such as Job Engine, SMB,
or statistics.
Privileges have one of two forms:
Action
Allows a user to perform a specific action on a cluster. For example, the
ISI_PRIV_LOGIN_SSH privilege allows a user to log in to a cluster through an SSH
client.
Read/Write
Allows a user to view or modify a configuration subsystem such as statistics,
snapshots, or quotas. For example, the ISI_PRIV_SNAPSHOT privilege allows an
administrator to create and delete snapshots and snapshot schedules. A read/write
privilege can grant either read-only or read/write access. Read-only access allows a
user to view configuration settings; read/write access allows a user to view and
modify configuration settings.
Privileges are granted to the user on login to a cluster through the OneFS API, the web
administration interface, SSH, or a console session. A token is generated for the user,
which includes a list of all privileges granted to the user. Each URI, web-administration
interface page, and command requires a specific privilege to view or modify the
information available through any of these interfaces.
Note
Privileges are not granted to users that do not connect to the System Zone during login or
to users that connect through the deprecated Telnet service, even if they are members of
a role.
OneFS privileges
Privileges in OneFS are assigned through role membership; privileges cannot be assigned
directly to users and groups.
Table 8 Login privileges

OneFS privilege          Privilege type
ISI_PRIV_LOGIN_CONSOLE   Action
ISI_PRIV_LOGIN_PAPI      Action
ISI_PRIV_LOGIN_SSH       Action
OneFS privilege         Privilege type
ISI_PRIV_SYS_SHUTDOWN   Action
ISI_PRIV_SYS_SUPPORT    Action
ISI_PRIV_SYS_TIME       Action
OneFS privilege   User right                                     Privilege type
ISI_PRIV_AUTH     Configure external authentication providers    Read/write
ISI_PRIV_ROLE                                                    Read/write
OneFS privilege           User right                         Privilege type
ISI_PRIV_ANTIVIRUS        Configure antivirus scanning       Read/write
ISI_PRIV_AUDIT            Configure audit capabilities       Read/write
ISI_PRIV_CLUSTER                                             Read/write
ISI_PRIV_DEVICES                                             Read/write
ISI_PRIV_EVENT                                               Read/write
ISI_PRIV_FTP                                                 Read/write
ISI_PRIV_HDFS                                                Read/write
ISI_PRIV_HTTP                                                Read/write
ISI_PRIV_ISCSI                                               Read/write
ISI_PRIV_JOB_ENGINE       Schedule cluster-wide jobs         Read/write
ISI_PRIV_LICENSE                                             Read/write
ISI_PRIV_NDMP                                                Read/write
ISI_PRIV_NETWORK          Configure network interfaces       Read/write
ISI_PRIV_NFS                                                 Read/write
ISI_PRIV_NTP              Configure NTP                      Read/write
ISI_PRIV_QUOTA                                               Read/write
ISI_PRIV_REMOTE_SUPPORT                                      Read/write
ISI_PRIV_SMARTPOOLS                                          Read/write
ISI_PRIV_SMB                                                 Read/write
ISI_PRIV_SNAPSHOT                                            Read/write
ISI_PRIV_SNMP                                                Read/write
ISI_PRIV_STATISTICS                                          Read/write
ISI_PRIV_SYNCIQ           Configure SyncIQ                   Read/write
ISI_PRIV_VCENTER                                             Read/write
ISI_PRIV_WORM             Configure SmartLock directories    Read/write
OneFS privilege       Privilege type
ISI_PRIV_EVENT        Read/write
ISI_PRIV_LICENSE      Read/write
ISI_PRIV_STATISTICS   Read/write
OneFS privilege        Privilege type
ISI_PRIV_IFS_BACKUP    Action
ISI_PRIV_IFS_RESTORE   Action
Note
If you are on the sudoers list because you are a member of a role that has the
ISI_PRIV_EVENT privilege, the following command succeeds:
sudo isi alert list
The following tables list all OneFS commands available, the associated privilege or root-access requirement, and whether sudo is required to run the command.
Note
If you are running in compliance mode, additional sudo commands are available.
Table 14 Privileges sorted by CLI command
isi command         Privilege
isi alert           ISI_PRIV_EVENT
isi audit           ISI_PRIV_AUDIT
isi auth            ISI_PRIV_AUTH, ISI_PRIV_ROLE
isi avscan          ISI_PRIV_ANTIVIRUS
isi batterystatus   ISI_PRIV_STATISTICS
isi config          root
                    ISI_PRIV_JOB_ENGINE
                    ISI_PRIV_STATISTICS
isi devices         ISI_PRIV_DEVICES
isi drivefirmware   root
isi domain          root
isi email           ISI_PRIV_CLUSTER
isi events          ISI_PRIV_EVENT
isi exttools        root
isi fc              root
isi filepool        ISI_PRIV_SMARTPOOLS
isi firmware        root
isi ftp             ISI_PRIV_FTP
isi get             root
isi hdfs            ISI_PRIV_HDFS
isi iscsi           ISI_PRIV_ISCSI
isi job             ISI_PRIV_JOB_ENGINE
isi license         ISI_PRIV_LICENSE
isi lun             ISI_PRIV_ISCSI
isi ndmp            ISI_PRIV_NDMP
isi networks        ISI_PRIV_NETWORK
isi nfs             ISI_PRIV_NFS
isi perfstat        ISI_PRIV_STATISTICS
isi pkg             root
isi quota           ISI_PRIV_QUOTA
isi readonly        root
isi remotesupport   ISI_PRIV_REMOTE_SUPPORT
isi servicelight    ISI_PRIV_DEVICES
isi services        root
isi set             root
isi smb             ISI_PRIV_SMB
isi snapshot        ISI_PRIV_SNAPSHOT
isi snmp            ISI_PRIV_SNMP
isi stat            ISI_PRIV_STATISTICS
isi statistics      ISI_PRIV_STATISTICS
isi status          ISI_PRIV_STATISTICS
isi storagepool     ISI_PRIV_SMARTPOOLS
isi sync            ISI_PRIV_SYNCIQ
isi tape            ISI_PRIV_NDMP
isi target          ISI_PRIV_ISCSI
isi update          root
isi version         ISI_PRIV_CLUSTER
isi worm            ISI_PRIV_WORM
isi zone            ISI_PRIV_AUTH
Privilege                 isi commands
ISI_PRIV_ANTIVIRUS        isi avscan
ISI_PRIV_AUDIT            isi audit
ISI_PRIV_AUTH             isi zone
ISI_PRIV_IFS_BACKUP       N/A
ISI_PRIV_CLUSTER          isi email, isi version
ISI_PRIV_DEVICES          isi devices, isi servicelight
ISI_PRIV_EVENT            isi alert, isi events
ISI_PRIV_FTP              isi ftp
ISI_PRIV_HDFS             isi hdfs
ISI_PRIV_ISCSI            isi iscsi, isi lun, isi target
ISI_PRIV_JOB_ENGINE       isi job
ISI_PRIV_LICENSE          isi license
ISI_PRIV_NDMP             isi ndmp, isi tape
ISI_PRIV_NETWORK          isi networks
ISI_PRIV_NFS              isi nfs
ISI_PRIV_QUOTA            isi quota
ISI_PRIV_ROLE
ISI_PRIV_REMOTE_SUPPORT   isi remotesupport
ISI_PRIV_IFS_RESTORE      N/A
ISI_PRIV_SMARTPOOLS       isi filepool, isi storagepool
ISI_PRIV_SMB              isi smb
ISI_PRIV_SNAPSHOT         isi snapshot
ISI_PRIV_SNMP             isi snmp
ISI_PRIV_STATISTICS       isi batterystatus, isi perfstat, isi stat, isi statistics, isi status
ISI_PRIV_SYNCIQ           isi sync
ISI_PRIV_WORM             isi worm
root                      isi config, isi domain, isi drivefirmware, isi exttools, isi fc,
                          isi firmware, isi get, isi pkg, isi readonly, isi services,
                          isi set, isi update
These privileges circumvent traditional file access checks, such as mode bits or NTFS
ACLs.
Most cluster privileges allow changes to cluster configuration in some manner. The
backup and restore privileges allow access to cluster data from the System zone, the
traversing of all directories, and reading of all file data and metadata regardless of file
permissions.
Users assigned these privileges use the protocol as a backup protocol to another
machine without generating access-denied errors and without connecting as the root
user. These two privileges are supported over the following client-side protocols:
SMB
RAN API
FTP
SSH
Authentication
OneFS supports local and remote authentication providers to verify that users attempting
to access an EMC Isilon cluster are who they claim to be. Anonymous access, which does
not require authentication, is supported for protocols that allow it.
OneFS supports concurrent multiple authentication provider types, which are analogous
to directory services. For example, OneFS is often configured to authenticate Windows
clients with Active Directory and to authenticate UNIX clients with LDAP. You can also
configure NIS, designed by Sun Microsystems, to authenticate users and groups when
they access a cluster.
OneFS supports the following authentication provider types: Active Directory, LDAP, NIS,
Local, File, and MIT Kerberos.
Kerberos authentication
Kerberos is a network authentication provider that negotiates encryption tickets for
securing a connection. OneFS supports Active Directory Kerberos and MIT Kerberos
authentication providers on an EMC Isilon cluster. If you configure an Active Directory
provider, Kerberos authentication is provided automatically. MIT Kerberos works
independently of Active Directory.
For MIT Kerberos authentication, you define an administrative domain known as a realm.
Within this realm, an authentication server has the authority to authenticate a user, host,
or service. You can optionally define a Kerberos domain to allow additional domain
extensions to be associated with a realm.
The authentication server in a Kerberos environment is called the Key Distribution Center
(KDC) and distributes encrypted tickets. When a user authenticates with an MIT Kerberos
provider within a realm, an encrypted ticket with the user's service principal name (SPN)
is created and validated to securely pass the user's identification for the requested
service.
You can include an MIT Kerberos provider in specific access zones for authentication.
Each access zone may include at most one MIT Kerberos provider. You can discontinue
authentication through an MIT Kerberos provider by removing the provider from all the
referenced access zones.
SPNs must match the SmartConnect zone name and the FQDN hostname of the cluster. If
the SmartConnect zone settings are changed, you must update the SPNs on the cluster to
match the changes.
LDAP
The Lightweight Directory Access Protocol (LDAP) is a networking protocol that enables
you to define, query, and modify directory services and resources.
OneFS can authenticate users and groups against an LDAP repository in order to grant
them access to the cluster. OneFS supports Kerberos authentication for an LDAP provider.
The LDAP service supports the following features:
Users, groups, and netgroups.
Configurable LDAP schemas. For example, the ldapsam schema allows NTLM
authentication over the SMB protocol for users with Windows-like attributes.
Simple bind authentication, with and without SSL.
Redundancy and load balancing across servers with identical directory data.
Multiple LDAP provider instances for accessing servers with different user data.
Encrypted passwords.
Active Directory
The Active Directory directory service is a Microsoft implementation of Lightweight
Directory Access Protocol (LDAP), Kerberos, and DNS technologies that can store
information about network resources. Active Directory can serve many functions, but the
primary reason for joining the cluster to an Active Directory domain is to perform user and
group authentication.
When the cluster joins an Active Directory domain, a single Active Directory machine
account is created. The machine account establishes a trust relationship with the domain
and enables the cluster to authenticate and authorize users in the Active Directory forest.
By default, the machine account is named the same as the cluster. If the cluster name is
more than 15 characters long, the name is hashed and displayed after joining the
domain.
Note
Configure a single Active Directory instance if all domains have a trust relationship.
Configure multiple Active Directory instances only to grant access to multiple sets of
mutually-untrusted domains.
NIS
The Network Information Service (NIS) provides authentication and identity uniformity
across local area networks. OneFS includes an NIS authentication provider that enables
you to integrate the cluster with your NIS infrastructure.
NIS, designed by Sun Microsystems, can authenticate users and groups when they
access the cluster. The NIS provider exposes the passwd, group, and netgroup maps from
an NIS server. Hostname lookups are also supported. You can specify multiple servers for
redundancy and load balancing.
File provider
A file provider enables you to supply an authoritative third-party source of user and group
information to an EMC Isilon cluster. A third-party source is useful in UNIX and Linux
environments that synchronize /etc/passwd, /etc/group, and /etc/netgroup
files across multiple servers.
Standard BSD /etc/spwd.db and /etc/group database files serve as the file
provider backing store on a cluster. You generate the spwd.db file by running the
pwd_mkdb command in the OneFS command-line interface (CLI). You can script updates
to the database files.
On a cluster, a file provider hashes passwords with libcrypt. The Modular Crypt
Format is parsed to determine the hashing algorithm. The following algorithms are
supported:
MD5
Blowfish
NT-Hash
SHA-256
SHA-512
Note
The built-in System file provider includes services to list, manage, and authenticate
against system accounts such as root, admin, and nobody. It is recommended that you
do not modify the System file provider.
Local provider
The local provider provides authentication and lookup facilities for user accounts added
by an administrator.
Local authentication is useful when Active Directory, LDAP, or NIS directory services are
not configured or when a specific user or application needs access to the cluster. Local
groups can include built-in groups and Active Directory groups as members.
In addition to configuring network-based authentication sources, you can manage local
users and groups by configuring a local password policy for each node in the cluster.
OneFS settings specify password complexity, password age and re-use, and password-attempt lockout policies.
Authorization
OneFS supports two types of authorization data on a file: Windows-style access control
lists (ACLs) and POSIX mode bits (UNIX permissions). Authorization type is based on the
ACL policies that are set and on the file-creation method.
Access to a file or directory is governed by either a Windows access control list (ACL) or
UNIX mode bits. Regardless of the security model, OneFS enforces access rights
consistently across access protocols. A user is granted or denied the same rights to a file
when using SMB for Windows file sharing as when using NFS for UNIX file sharing.
An EMC Isilon cluster includes global policy settings that enable you to customize the
default ACL and UNIX permissions to best support your environment. Generally, files that
are created over SMB or in a directory that has an ACL receive an ACL; otherwise, OneFS
relies on the POSIX mode bits that define UNIX permissions. In either case, the owner is
represented by a UNIX identifier (UID or GID) or by its Windows identifier (SID). The
primary group is represented by a GID or SID. Although mode bits are present when a file
has an ACL, the mode bits are provided for only protocol compatibility, not for access
checks.
Note
Although you can configure ACL policies to optimize a cluster for UNIX or Windows, you
should do so only if you understand how ACL and UNIX permissions interact.
The OneFS file system installs with UNIX permissions as the default. By using Windows
Explorer or OneFS administrative tools, you can give a file or directory an ACL. In addition
to Windows domain users and groups, ACLs in OneFS can include local, NIS, and LDAP
users and groups. After you give a file an ACL, OneFS stops enforcing the file's mode bits,
which remain only as an estimate of the effective permissions.
SMB
You can configure SMB shares to provide Windows clients network access to file system
resources on the cluster. You can grant permissions to users and groups to carry out
operations such as reading, writing, and setting access permissions on SMB shares.
ACLs
In Windows environments, file and directory permissions, referred to as access rights, are
defined in access control lists (ACLs). Although ACLs are more complex than mode bits,
ACLs can express much more granular sets of access rules. OneFS checks the ACL
processing rules commonly associated with Windows ACLs.
A Windows ACL contains zero or more access control entries (ACEs), each of which
represents the security identifier (SID) of a user or a group as a trustee. In OneFS, an ACL
can contain ACEs with a UID, GID, or SID as the trustee. Each ACE contains a set of rights
that allow or deny access to a file or folder. An ACE can optionally contain an inheritance
flag to specify whether the ACE should be inherited by child folders and files.
Note
Instead of the standard three permissions available for mode bits, ACLs have 32 bits of
fine-grained access rights. Of these, the upper 16 bits are general and apply to all object
types. The lower 16 bits vary between files and directories but are defined in a way that
allows most applications to apply the same bits for files and directories.
Rights grant or deny access for a given trustee. You can block user access explicitly
through a deny ACE or implicitly by ensuring that a user does not directly, or indirectly
through a group, appear in an ACE that grants the right.
NFS
You can configure NFS exports to provide UNIX clients network access to file system
resources on the cluster.
UNIX permissions
In a UNIX environment, file and directory access is controlled by POSIX mode bits, which
grant read, write, or execute permissions to the owning user, the owning group, and
everyone else.
OneFS supports the standard UNIX tools for viewing and changing permissions, ls,
chmod, and chown. For more information, run the man ls, man chmod, and man
chown commands.
All files contain 16 permission bits, which provide information about the file or directory
type and the permissions. The lower 9 bits are grouped as three 3-bit sets, called triples,
which contain the read, write, and execute (rwx) permissions for each class of users:
owner, group, and other. You can set permission flags to grant permissions to each of
these classes.
Unless the user is root, OneFS checks the class to determine whether to grant or deny
access to the file. The classes are not cumulative: The first class matched is applied. It is
therefore common to grant permissions in decreasing order.
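For example, the following standard UNIX command (the path is hypothetical) grants read,
write, and execute permissions to the owner, read and execute permissions to the group,
and no permissions to everyone else:
chmod 750 /ifs/data/projects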
Mixed-permission environments
When a file operation requests an object's authorization data, for example, with the
ls -l command over NFS or with the Security tab of the Properties dialog box in Windows
environment that mixes UNIX and Windows systems, some translation may be required
when performing create file, set security, get security, or access operations.
SID-to-UID and SID-to-GID mappings are cached in both the OneFS ID mapper and the
stat cache. If a mapping has recently changed, the file might report inaccurate
information until the file is updated or the cache is flushed.
contains the corresponding rights that are denied. In both cases, the trustee of the
ACE corresponds to the file owner, group, or everyone. After all of the ACEs are
generated, any that are not needed are removed before the synthetic ACL is returned.
Managing roles
You can view, add, or remove members of any role. Except for built-in roles, whose
privileges you cannot modify, you can add or remove OneFS privileges on a role-by-role
basis.
Note
Roles take both users and groups as members. If a group is added to a role, all users who
are members of that group are assigned the privileges associated with the role. Similarly,
members of multiple roles are assigned the combined privileges of each role.
View roles
You can view information about built-in and custom roles.
Procedure
1. Run one of the following commands to view roles.
To view a basic list of all roles on the cluster, run the following command:
isi auth roles list
To view detailed information about each role on the cluster, including member and
privilege lists, run the following command:
isi auth roles list --verbose
To view detailed information about a single role, run the following command,
where <role> is the name of the role:
isi auth roles view <role>
View privileges
You can view user privileges.
This procedure must be performed through the command-line interface (CLI). You can
view a list of your privileges or the privileges of another user using the following
commands:
Procedure
1. Establish an SSH connection to any node in the cluster.
2. To view privileges, run one of the following commands.
To view a list of privileges for another user, run the following command, where
<user> is a placeholder for another user by name:
isi auth mapping token <user>
3. Run the following command to add a user to the role, where <role> is the name of the
role and <string> is the name of the user:
isi auth roles modify <role> [--add-user <string>]
Note
You can also modify the list of users assigned to a built-in role.
4. Run the following command to add a privilege with read/write access to the role,
where <role> is the name of the role and <string> is the name of the privilege:
isi auth roles modify <role> [--add-priv <string>]
5. Run the following command to add a privilege with read-only access to the role, where
<role> is the name of the role and <string> is the name of the privilege:
isi auth roles modify <role> [--add-priv-ro <string>]
The following command creates an LDAP provider named test that connects to the LDAP server test-ldap.example.com:
isi auth ldap create test \
--base-dn="dc=test-ldap,dc=example,dc=com" \
--server-uris="ldap://test-ldap.example.com"
Note
You can specify multiple servers by repeating the --server-uris parameter with
the URI value or with a comma-separated list, such as --server-uris="ldap://
a.example.com,ldap://b.example.com".
2. If the LDAP server does not allow anonymous queries, you can create an LDAP
provider by running the isi auth ldap create command, where variables in
angle brackets are placeholders for values specific to your environment:
isi auth ldap create <name> --bind-dn=<distinguished-name> \
--bind-password=<password> --server-uris=<uri>
The following command creates an LDAP provider named test-ldap that connects to the
LDAP server test-ldap.example.com and binds as the user test in the users organizational unit:
isi auth ldap create test-ldap \
--bind-dn="cn=test,ou=users,dc=test-ldap,dc=example,dc=com" \
--bind-password="mypasswd" \
--server-uris="ldap://test-ldap.example.com"
Note
Consider the following information when you configure an Active Directory provider:
When you join Active Directory from OneFS, cluster time is updated from the Active
Directory server, as long as an NTP server has not been configured for the cluster.
If you migrate users to a new or different Active Directory domain, you must re-set the
ACL domain information after you configure the new provider. You can reset the
domain information with third-party tools, such as Microsoft SubInACL.
Procedure
1. Run the following command to configure an Active Directory provider, where <name> is
a placeholder for the fully qualified Active Directory name and <user> is a placeholder
for a user name with permission to join machines to the given domain.
isi auth ads create <name> <user>
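For example, a command similar to the following joins the cluster to a hypothetical
Active Directory domain as a hypothetical account that has permission to join machines to
that domain:
isi auth ads create ad.example.com administrator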
You can specify multiple servers by repeating the --servers parameter for each
server or with a comma-separated list, such as --servers="a.example.com,b.example.com".
If the replacement files are located outside the /ifs directory tree, you must distribute
them manually to every node in the cluster. Changes that are made to the system
provider's files are automatically distributed across the cluster.
Procedure
1. Establish an SSH connection to any node in the cluster.
2. Run the pwd_mkdb <file> command, where <file> is the location of the source password
file.
Note
By default, the binary password file, spwd.db, is created in the /etc directory. You
can override the location to store the spwd.db file by specifying the -d option with a
different target directory.
The following command generates an spwd.db file in the /etc directory from a
password file that is located at /ifs/test.passwd:
pwd_mkdb /ifs/test.passwd
The following command generates an spwd.db file in the /ifs directory from a
password file that is located at /ifs/test.passwd:
pwd_mkdb -d /ifs /ifs/test.passwd
Although you can rename a file provider, there are two caveats: you can rename a file
provider through only the web administration interface and you cannot rename the
System file provider.
Procedure
1. Run the following command to modify a file provider, where <provider-name> is a
placeholder for the name that you supplied for the provider.
isi auth file modify <provider-name>
The fields are defined below in the order in which they appear in the file.
Note
UNIX systems often define the passwd format as a subset of these fields, omitting the
Class, Change, and Expiry fields. To convert a file from passwd to master.passwd
format, add :0:0: between the GID field and the Gecos field.
Username
The user name. This field is case-sensitive. OneFS does not limit the length; many
applications truncate the name to 16 characters, however.
Password
The user's encrypted password. If authentication is not required for the user, you can
substitute an asterisk (*) for a password. The asterisk character is guaranteed to not
match any password.
UID
The UNIX user identifier. This value must be a number in the range 0-4294967294
that is not reserved or already assigned to a user. Compatibility issues occur if this
value conflicts with an existing account's UID.
GID
The group identifier of the user's primary group. All users are a member of at least
one group, which is used for access checks and can also be used when creating
files.
Class
This field is not supported by OneFS and should be left empty.
Change
OneFS does not support changing the passwords of users in the file provider. This
field is ignored.
Expiry
OneFS does not support the expiration of user accounts in the file provider. This field
is ignored.
Gecos
This field can store a variety of information but is usually used to store the user's full
name.
Home
The absolute path to the user's home directory, beginning at /ifs.
Shell
The absolute path to the user's shell. If this field is set to /sbin/nologin, the
user is denied command-line access.
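For example, a single entry containing these ten colon-separated fields might look like the
following; the user name, identifiers, and paths are hypothetical:
jsmith:*:2001:2001::0:0:Jane Smith:/ifs/home/jsmith:/bin/zsh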
The fields are defined below in the order in which they appear in the file.
Group name
The name of the group. This field is case-sensitive. Although OneFS does not limit
the length of the group name, many applications truncate the name to 16 characters.
Password
This field is not supported by OneFS and should contain an asterisk (*).
GID
The UNIX group identifier. Valid values are any number in the range 0-4294967294
that is not reserved or already assigned to a group. Compatibility issues occur if this
value conflicts with an existing group's GID.
Group members
A comma-delimited list of user names.
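For example, a group entry containing these four colon-separated fields might look like the
following; the group name and members are hypothetical:
analysts:*:3001:jsmith,akumar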
Where <host> is a placeholder for a machine name, <user> is a placeholder for a user name,
and <domain> is a placeholder for a domain name. Any combination is valid except an
empty triple: (,,).
The following sample file contains two netgroups. The rootgrp netgroup contains four
hosts: two hosts are defined in member triples and two hosts are contained in the nested
othergrp netgroup, which is defined on the second line.
rootgrp (myserver, root, somedomain.com) (otherserver, root,
somedomain.com) othergrp
othergrp (other-win,, somedomain.com) (other-linux,, somedomain.com)
Note
A new line signifies a new netgroup. You can continue a long netgroup entry to the next
line by typing a backslash character (\) in the right-most position of the first line.
2. To list users and groups for an LDAP provider type that is named Unix LDAP, run a
command similar to the following example:
isi auth users list --provider="lsa-ldap-provider:Unix LDAP"
The maximum name length is 104 characters. It is recommended that names do not
exceed 64 characters.
Names can contain any special character that is not in the list of invalid characters. It
is recommended that names do not contain spaces.
Separate password policies are configured for each access zone. Each access zone in the
cluster contains a separate instance of the local provider, which allows each access zone
to have its own list of local users who can authenticate. Password complexity is
configured for each local provider, not for each user.
Procedure
1. Establish an SSH connection to any node in the cluster.
2. Optional: Run the following command to view the current password settings:
isi auth local view system
3. Run the isi auth local modify command, choosing from the parameters
described in Local password policy default settings.
The --password-complexity parameter must be specified for each setting.
isi auth local modify system --password-complexity=lowercase \
--password-complexity=uppercase --password-complexity=numeric \
--password-complexity=symbol
The following command configures a local password policy for a local provider:
isi auth local modify <provider-name> \
--min-password-length=20 \
--lockout-duration=20m \
--lockout-window=5m \
--lockout-threshold=5 \
--add-password-complexity=uppercase \
--add-password-complexity=numeric
number of possible passwords that an attacker must check before the correct password
is guessed.
Setting
min-password-length
password-complexity (for example, lowercase, numeric)
min-password-age
max-password-age
password-history-length
lockout-duration
lockout-threshold
You can run the command with <gid> or <sid> instead of <group>.
Procedure
1. Run the isi auth krb5 realm modify command to modify an MIT Kerberos
realm.
For example, run the following command to modify an MIT Kerberos realm by
specifying an alternate KDC and an administrative server:
isi auth krb5 realm modify <realm> --is-default-realm true --kdc
<kdc> --admin-server <admin-server>
Accessing the Kerberos administration server and creating keys for services on the
OneFS cluster.
Create an MIT Kerberos provider and join a realm with administrator credentials
You can create an MIT Kerberos provider and join an MIT Kerberos realm using the
credentials authorized to access the Kerberos administration server. You can then create
keys for the various services on the EMC Isilon cluster. This is the recommended method
for creating a Kerberos provider and joining a Kerberos realm.
Before you begin
You must be a member of a role that has ISI_PRIV_AUTH privileges to access the Kerberos
administration server.
Procedure
1. Run the following command to create a Kerberos provider and join a Kerberos realm,
where <realm> is the name of the Kerberos realm which already exists or is created
if it does not exist:
isi auth krb5 create <realm> <user> --kdc=<string>
The following example joins a user with admin credentials to the
cluster-name.company.com realm:
isi auth krb5 create cluster-name.company.com aima/admin \
--kdc=<kdc-name>.domain.company.com
Create an MIT Kerberos provider and join a realm with a keytab file
You can create an MIT Kerberos provider and join an MIT Kerberos realm through a keytab
file. Follow this method only if your Kerberos environment is managed by manually
transferring the Kerberos key information through the keytab files.
Before you begin
Make sure that the following prerequisites are met:
You must create and copy a keytab file to a node on the cluster.
You must be a member of a role that has ISI_PRIV_AUTH privileges to access the
Kerberos administration server.
Procedure
1. Run the following command to create a Kerberos provider and join a Kerberos realm,
where <realm> is the placeholder for the realm name which should exist as a realm
object already:
isi auth krb5 create <realm> --keytab-file=<string>
For example, run the following command to create a Kerberos provider and join the
cluster-name.company.com realm using a keytab file:
isi auth krb5 create cluster-name.company.com --keytab-file=/tmp/
krb5.keytab
You can perform the following operations on the SPNs and their keys:
- Update the SPNs if there are any changes to the SmartConnect zone settings that are based on those SPNs
- List the registered SPNs to compare them against a list of discovered SPNs
- Delete specific key versions or delete all the keys associated with an SPN
Delete keys
You can delete specific key versions or all the keys associated with a service principal
name (SPN).
Before you begin
You must be a member of a role that has ISI_PRIV_AUTH privileges to delete keys.
After creating new keys for security reasons or to match configuration changes, follow
this procedure to delete older versions of the keys so that the keytab table is not
populated with redundant keys.
Procedure
1. Run the isi auth krb5 spn delete command to delete all keys for a specified
SPN or a specific version of a key.
For example, run the following command to delete all the keys associated with an SPN
for an MIT Kerberos provider:
isi auth krb5 spn delete <provider-name> <spn> --all
The <provider-name> is the name of the MIT Kerberos provider. You can delete a
specific version of the key by specifying a key version number value for the kvno
argument and including that value in the command syntax.
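For example, assuming the key version number is supplied through the --kvno option as described above, a command of the following form (the version number shown is only a placeholder) deletes a single key version instead of all keys:
isi auth krb5 spn delete <provider-name> <spn> --kvno=2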
You can optionally specify a password for <user> which is the placeholder for a user
who has the permission to join clients to the given domain.
You must create and copy a keytab file to a node on the cluster where you will
perform this procedure.
You must be a member of a role that has ISI_PRIV_AUTH privileges to import a keytab
file.
Procedure
1. Import the keys of a keytab file by running the isi auth krb5 spn import
command.
For example, run the following command to import the keys of the <keytab-file> to the
provider referenced as <provider-name>:
isi auth krb5 spn import <provider-name> <keytab-file>
Permissions
Expected : user:<username> \
allow
dir_gen_read,dir_gen_write,dir_gen_execute,delete_child
3. View mode-bits permissions for a user by running the isi auth access command.
The following command displays verbose-mode file permissions information in /ifs/
for the user that you specify in place of <username>:
isi auth access <username> /ifs/ -v
4. View expected ACL user permissions on a file for a user by running the isi auth
access command.
The following command displays verbose-mode ACL file permissions for the file
file_with_acl.tx in /ifs/data/ for the user that you specify in place of
<username>:
isi auth access <username> /ifs/data/file_with_acl.tx -v
Because ACL policies change the behavior of permissions throughout the system, they
should be modified only as necessary by experienced administrators with advanced
knowledge of Windows ACLs. This is especially true for the advanced settings, which are
applied regardless of the cluster's environment.
For UNIX, Windows, or balanced environments, the optimal permission policy settings are
selected and cannot be modified. However, you can manually configure the cluster's
default permission settings if necessary to support your particular environment.
Procedure
1. Run the following command to modify ACL policy settings, where <provider-name>
specifies the name of the provider:
isi auth ads modify <provider-name>
You cannot combine the --template parameter with the convert mode option,
but you can combine the parameter with the clone and inherit mode options.
Conversely, you cannot combine the --mapping-type and --zone parameters
with the clone and inherit mode options, but you can combine the parameters
with the convert mode option.
Options
<name>
Specifies a fully qualified Active Directory domain name, which will also be used as
the provider name.
<user>
Specifies the user name of an account that has permission to join machine accounts
to the Active Directory domain.
--password <string>
Specifies the password of the provided user account. If you omit this option, you will
be prompted to supply a password.
--account <string>
Specifies the machine account name to use in Active Directory. By default, the cluster
name is used.
--organizational-unit <string>
Specifies the name of the organizational unit (OU) to connect to on the Active
Directory server. Specify the OU in the form OuName or OuName1/SubName2.
--kerberos-nfs-spn {yes | no}
Specifies whether to add SPNs for using Kerberized NFS.
--dns-domain <dns-domain>
Specifies a DNS search domain to use instead of the domain that is specified in the
--name setting.
--allocate-gids {yes | no}
Enables or disables GID allocation for unmapped Active Directory groups. Active
Directory groups without GIDs can be proactively assigned a GID by the ID mapper. If
this option is disabled, GIDs are not proactively assigned, but when a user's primary
group does not include a GID, the system may allocate one.
--allocate-uids {yes | no}
Enables or disables UID allocation for unmapped Active Directory users. Active
Directory users without UIDs can be proactively assigned a UID by the ID mapper. If
this option is disabled, UIDs are not proactively assigned, but when a user's identity
does not include a UID, the system may allocate one.
--cache-entry-expiry <duration>
Specifies how long to cache a user or group, in the format <integer>{Y|M|W|D|H|m|s}.
--assume-default-domain {yes | no}
Specifies whether to look up unqualified user names in the primary domain. If this
option is set to no, the primary domain must be specified for each authentication
operation.
--check-online-interval <duration>
Specifies the time between provider online checks, in the format <integer>{Y|M|W|D|H|
m|s}.
--create-home-directory {yes | no}
Specifies whether to create a home directory the first time that a user logs in, if a
home directory does not already exist for the user.
--domain-offline-alerts {yes | no}
Specifies whether to send an alert if the domain goes offline. If this option is set to
yes, notifications are sent as specified in the global notification rules. The default
value is no.
--home-directory-template <path>
Specifies the template path to use when creating home directories. The path must
begin with /ifs and can include special character sequences that are dynamically
replaced with strings at home directory creation time that represent specific
variables. For example, %U, %D, and %Z are replaced with the user name, provider
domain name, and zone name, respectively. For more information, see the Home
directories section.
--ignore-all-trusts {yes | no}
Specifies whether to ignore all trusted domains.
--ignored-trusted-domains <dns-domain>
Specifies a list of trusted domains to ignore if --ignore-all-trusts is disabled.
Repeat this option to specify multiple list items.
--include-trusted-domains <dns-domain>
Specifies a list of trusted domain to include if --ignore-all-trusts is enabled.
Repeat this option to specify multiple list items.
--ldap-sign-and-seal {yes | no}
Specifies whether to use encryption and signing on LDAP requests to a domain
controller.
{--node-dc-affinity | -x} <string>
Specifies the domain controller that the node should exclusively communicate with
(affinitize). This option should be used with a timeout value, which is configured
using the --node-dc-affinity-timeout option. Otherwise, the default timeout
value of 30 minutes is assigned.
Note
This setting is for debugging purposes and should be left unconfigured during normal
operation. To disable this feature, use a timeout value of 0.
--node-dc-affinity-timeout <timestamp>
Specifies the timeout setting for the local node affinity to a domain controller, using
the date format <YYYY>-<MM>-<DD> or the date/time format <YYYY>-<MM>-<DD>T<hh>:<mm>[:<ss>].
--nss-enumeration {yes | no}
Specifies whether to allow the Active Directory provider to respond to getpwent and
getgrent requests.
--sfu-support {none | rfc2307}
Specifies whether to support RFC 2307 attributes for Windows domain controllers.
RFC 2307 is required for Windows UNIX Integration and for Services For UNIX (SFU)
technologies.
--store-sfu-mappings {yes | no}
Specifies whether to store SFU mappings permanently in the ID mapper.
{--verbose | -v}
Displays the results of running the command.
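For example, a command of the following form, shown here only as an illustration with placeholder domain, account, and path values, joins the cluster to an Active Directory domain, connects to a specific organizational unit, and enables home directory creation:
isi auth ads create ad.example.com administrator \
--organizational-unit=Isilon --create-home-directory=yes \
--home-directory-template=/ifs/home/%U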
Options
<provider-name>
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Examples
To view a list of all the Active Directory providers that the cluster is joined to, run the
following command:
isi auth ads list
[--node-dc-affinity-timeout <timestamp>]
[--login-shell <path>]
[--lookup-domains <dns-domain>]
[--clear-lookup-domains]
[--add-lookup-domains <dns-domain>]
[--remove-lookup-domains <dns-domain>]
[--lookup-groups {yes | no}]
[--lookup-normalize-groups {yes | no}]
[--lookup-normalize-users {yes | no}]
[--lookup-users {yes | no}]
[--machine-password-changes {yes | no}]
[--machine-password-lifespan <duration>]
[--nss-enumeration {yes | no}]
[--sfu-support {none | rfc2307}]
[--store-sfu-mappings {yes | no}]
[--verbose]
Options
<provider-name>
Specifies the domain name that the Active Directory provider is joined to, which is
also the Active Directory provider name.
--reset-schannel {yes | no}
Resets the secure channel to the primary domain.
--domain-controller <dns-domain>
Specifies a domain controller.
--allocate-gids {yes | no}
Enables or disables GID allocation for unmapped Active Directory groups. Active
Directory groups without GIDs can be proactively assigned a GID by the ID mapper. If
this option is disabled, GIDs are not assigned proactively, but when a user's primary
group does not include a GID, the system may allocate one.
--allocate-uids {yes | no}
Enables or disables UID allocation for unmapped Active Directory users. Active
Directory users without UIDs can be proactively assigned a UID by the ID mapper. If
this option is disabled, UIDs are not assigned proactively, but when a user's identity
does not include a UID, the system may allocate one.
--cache-entry-expiry <duration>
Specifies how long to cache a user or group, in the format <integer>{Y|M|W|D|H|m|s}.
--assume-default-domain {yes | no}
Specifies whether to look up unqualified user names in the primary domain. If this
option is set to no, the primary domain must be specified for each authentication
operation.
--check-online-interval <duration>
Specifies the time between provider online checks, in the format <integer>{Y|M|W|D|H|
m|s}.
--create-home-directory {yes | no}
Specifies whether to create a home directory the first time a user logs in, if a home
directory does not already exist for the user.
--domain-offline-alerts {yes | no}
Specifies whether to send an alert if the domain goes offline. If this option is set to
yes, notifications are sent as specified in the global notification rules. The default
value is no.
--home-directory-template <path>
Specifies the template path to use when creating home directories. The path must
begin with /ifs and can include special character sequences that are dynamically
replaced with strings at home directory creation time that represent specific
variables. For example, %U, %D, and %Z are replaced with the user name, provider
domain name, and zone name, respectively. For more information, see the Home
directories section.
--ignore-all-trusts {yes | no}
Specifies whether to ignore all trusted domains.
--ignored-trusted-domains <dns-domain>
Specifies a list of trusted domains to ignore if --ignore-all-trusts is disabled.
Repeat this option to specify multiple list items.
--clear-ignored-trusted-domains
Clears the list of ignored trusted domains if --ignore-all-trusts is disabled.
--add-ignored-trusted-domains <dns-domain>
Adds a domain to the list of trusted domains to ignore if --ignore-all-trusts is
disabled. Repeat this option to specify multiple list items.
--remove-ignored-trusted-domains <dns-domain>
Removes a specified domain from the list of trusted domains to ignore if --ignore-all-trusts is disabled. Repeat this option to specify multiple list items.
--include-trusted-domains <dns-domain>
Specifies a list of trusted domains to include if --ignore-all-trusts is enabled.
Repeat this option to specify multiple list items.
--clear-include-trusted-domains
Clears the list of trusted domains to include if --ignore-all-trusts is enabled.
--add-include-trusted-domains <dns-domain>
Adds a domain to the list of trusted domains to include if --ignore-all-trusts
is enabled. Repeat this option to specify multiple list items.
--remove-include-trusted-domains <dns-domain>
Removes a specified domain from the list of trusted domains to include if --ignore-all-trusts is enabled. Repeat this option to specify multiple list items.
--ldap-sign-and-seal {yes | no}
Specifies whether to use encryption and signing on LDAP requests to a domain
controller.
{--node-dc-affinity | -x} <string>
Specifies the domain controller that the node should exclusively communicate with
(affinitize). This option should be used with a timeout value, which is configured
using the --node-dc-affinity-timeout option. Otherwise, the default timeout
value of 30 minutes is assigned.
Note
This setting is for debugging purposes and should be left unconfigured during normal
operation. To disable this feature, use a timeout value of 0.
--node-dc-affinity-timeout <timestamp>
Specifies the timeout setting for the local node affinity to a domain controller, using
the date format <YYYY>-<MM>-<DD> or the date/time format <YYYY>-<MM>-<DD>T<hh>:<mm>[:<ss>].
--machine-password-lifespan <duration>
Sets the maximum age of the machine account password, in the format <integer>{Y|M|
W|D|H|m|s}.
--nss-enumeration {yes | no}
Specifies whether to allow the Active Directory provider to respond to getpwent and
getgrent requests.
--sfu-support {none | rfc2307}
Specifies whether to support RFC 2307 attributes for domain controllers. RFC 2307 is
required for Windows UNIX Integration and for Services For UNIX (SFU) technologies.
--store-sfu-mappings {yes | no}
Specifies whether to store SFU mappings permanently in the ID mapper.
{--verbose | -v}
Displays the results of running the command.
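For example, the following command, shown with a placeholder provider name and interval, enables lookups of unqualified user names in the primary domain and sets the provider online-check interval, using the options described above:
isi auth ads modify ad.example.com --assume-default-domain=yes \
--check-online-interval=5m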
Options
{--domain | -D} <string>
Specifies the DNS domain name for the user or group that is attempting to connect to
the cluster.
--machinecreds
Directs the system to use machine credentials when connecting to the cluster.
{--user | -U} <string>
Specifies an administrative user account to connect to the cluster, if required.
{--password | -P} <string>
Specifies the administrative user account password.
{--repair | -r}
Repairs missing SPNs.
Options
{--spn | -s} <string>
Specifies an SPN to register. Repeat this option to specify multiple list items.
{--domain | -D} <string>
Specifies the DNS domain name for the user or group that is attempting to connect to
the cluster.
{--account | -a} <string>
Specifies the address of the machine account. If no account is specified, the machine
account of the cluster is used.
{--user | -U} <string>
Specifies an administrative user account to connect to the cluster, if required.
{--password | -P} <string>
Specifies the administrative user account password.
--machinecreds
Directs the system to use machine credentials when connecting to the cluster.
Options
{--spn | -s} <string>
Specifies an SPN to delete. Repeat this option to specify multiple list items.
{--domain | -D} <string>
Specifies the DNS domain name for the user or group that is attempting to connect to
the cluster.
{--account | -a} <string>
Specifies the address of the machine account. If no account is specified, the machine
account of the cluster is used.
--machinecreds
Directs the system to use machine credentials when connecting to the cluster.
{--user | -U} <string>
Specifies an administrative user account to connect to the cluster, if required.
{--password | -P} <string>
Specifies the administrative user account password.
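For example, a command of the following form, in which the SPN and user name are placeholders, deletes a single registered SPN using administrative credentials:
isi auth ads spn delete --spn=HOST/cluster.example.com --user=administrator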
Options
{--domain | -D} <string>
Specifies the DNS domain name for the user or group that is attempting to connect to
the cluster.
{--account | -a} <string>
Specifies the address of the machine account. If no account is specified, the machine
account of the cluster is used.
--machinecreds
Directs the system to use machine credentials when connecting to the cluster.
{--user | -U} <string>
Specifies an administrative user account to connect to the cluster, if required.
{--password | -P} <string>
Specifies the administrative user account password.
Examples
Run the following command to view a list of SPNs that are currently registered against the
machine account:
isi auth ads spn list
Options
<provider>
Options
<provider>
Options
<provider>
Options
<provider-name>
Options
<error-code>
[--unfindable-users <string>]
[--unlistable-groups <string>]
[--unlistable-users <string>]
[--unmodifiable-groups <string>]
[--unmodifiable-users <string>]
[--user-domain <string>]
[--verbose]
Options
<name>
Options
<provider-name>
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
[--clear-listable-users]
[--add-listable-users <string>]
[--remove-listable-users <string>]
[--login-shell <path>]
[--modifiable-groups <string>]
[--clear-modifiable-groups]
[--add-modifiable-groups <string>]
[--remove-modifiable-groups <string>]
[--modifiable-users <string>]
[--clear-modifiable-users]
[--add-modifiable-users <string>]
[--remove-modifiable-users <string>]
[--netgroup-file <path>]
[--normalize-groups {yes | no}]
[--normalize-users {yes | no}]
[--ntlm-support {all | v2only | none}]
[--provider-domain <string>]
[--restrict-findable {yes | no}]
[--restrict-listable {yes | no}]
[--restrict-modifiable {yes | no}]
[--unfindable-groups <string>]
[--clear-unfindable-groups]
[--add-unfindable-groups <string>]
[--remove-unfindable-groups <string>]
[--unfindable-users <string>]
[--clear-unfindable-users]
[--add-unfindable-users <string>]
[--remove-unfindable-users <string>]
[--unlistable-groups <string>]
[--clear-unlistable-groups]
[--add-unlistable-groups <string>]
[--remove-unlistable-groups <string>]
[--unlistable-users <string>]
[--clear-unlistable-users]
[--add-unlistable-users <string>]
[--remove-unlistable-users <string>]
[--unmodifiable-groups <string>]
[--clear-unmodifiable-groups]
[--add-unmodifiable-groups <string>]
[--remove-unmodifiable-groups <string>]
[--unmodifiable-users <string>]
[--clear-unmodifiable-users]
[--add-unmodifiable-users <string>]
[--remove-unmodifiable-users <string>]
[--user-domain <string>]
[--verbose]
Options
<provider-name>
Specifies the name of the file provider to modify. This setting cannot be modified.
--provider <string>
Specifies an authentication provider of the format <type>:<instance>. Valid provider
types are ads, ldap, nis, file, and local. For example, an LDAP provider named
auth1 can be specified as ldap:auth1.
--password-file <path>
Specifies the path to a passwd.db replacement file.
--group-file <path>
Specifies the path to a group replacement file.
--authentication {yes | no}
Enables or disables the use of the provider for authentication as well as identity. The
default value is yes.
--cache-entry-expiry <duration>
Specifies the length of time after which the cache entry will expire, in the format
<integer>[{Y | M | W | D | H | m | s}]. To turn off cache expiration, set this value to off.
--create-home-directory {yes | no}
Specifies whether to create a home directory the first time a user logs in, if a home
directory does not already exist for the user.
--enabled {yes | no}
Enables or disables the provider.
--enumerate-groups {yes | no}
Specifies whether to allow the provider to enumerate groups.
--enumerate-users {yes | no}
Specifies whether to allow the provider to enumerate users.
--findable-groups <string>
Specifies a group that can be found in this provider if --restrict-findable is
enabled. Repeat this option to specify multiple list items. If populated, any groups
that are not included in this list cannot be resolved. This option overwrites any
existing entries in the findable groups list; to add or remove groups without affecting
current entries, use --add-findable-groups or --remove-findable-groups.
--clear-findable-groups
Removes all entries from the list of findable groups.
--add-findable-groups <string>
Adds an entry to the list of findable groups that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--remove-findable-groups <string>
Removes an entry from the list of findable groups that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--findable-users <string>
Specifies a user that can be found in the provider if --restrict-findable is
enabled. Repeat this option to specify multiple list items. If populated, any users that
are not included in this list cannot be resolved. This option overwrites any existing
entries in the findable users list; to add or remove users without affecting current
entries, use --add-findable-users or --remove-findable-users.
--clear-findable-users
Removes all entries from the list of findable users.
--add-findable-users <string>
Adds an entry to the list of findable users that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--remove-findable-users <string>
Removes an entry from the list of findable users that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--group-domain <string>
Specifies the domain that the provider will use to qualify groups. The default group
domain is FILE_GROUPS.
--group-file <path>
Specifies the path to a group replacement file.
--home-directory-template <path>
Specifies the path to use as a template for naming home directories. The path must
begin with /ifs and can include special character sequences that are dynamically
replaced with strings at home directory creation time that represent specific
variables. For example, %U, %D, and %Z are replaced with the user name, provider
domain name, and zone name, respectively. For more information, see the Home
directories section.
--listable-groups <string>
Specifies a group that can be viewed in this provider if --restrict-listable is
enabled. Repeat this option to specify multiple list items. If populated, any groups
that are not included in this list cannot be viewed. This option overwrites any existing
entries in the listable groups list; to add or remove groups without affecting current
entries, use --add-listable-groups or --remove-listable-groups.
--clear-listable-groups
Removes all entries from the list of viewable groups.
--add-listable-groups <string>
Adds an entry to the list of viewable groups that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--remove-listable-groups <string>
Removes an entry from the list of viewable groups that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--listable-users <string>
Specifies a user that can be viewed in this provider if --restrict-listable is
enabled. Repeat this option to specify multiple list items. If populated, any users that
are not included in this list cannot be viewed. This option overwrites any existing
entries in the listable users list; to add or remove users without affecting current
entries, use --add-listable-users or --remove-listable-users.
--clear-listable-users
Removes all entries from the list of viewable users.
--add-listable-users <string>
Adds an entry to the list of viewable users that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--remove-listable-users <string>
Removes an entry from the list of viewable users that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--login-shell <path>
Specifies the path to the user's login shell. This setting applies only to users who
access the file system through SSH.
--modifiable-groups <string>
Specifies a group that can be modified if --restrict-modifiable is enabled.
Repeat this option to specify multiple list items. If populated, any groups that are not
included in this list cannot be modified. This option overwrites any existing entries in
the modifiable groups list; to add or remove groups without affecting current entries,
use --add-modifiable-groups or --remove-modifiable-groups.
--clear-modifiable-groups
Removes all entries from the list of modifiable groups.
--add-modifiable-groups <string>
Adds an entry to the list of modifiable groups that is checked if --restrict-modifiable is enabled. Repeat this option to specify multiple list items.
--remove-modifiable-groups <string>
Removes an entry from the list of modifiable groups that is checked if --restrict-modifiable is enabled. Repeat this option to specify multiple list items.
--modifiable-users <string>
Specifies a user that can be modified if --restrict-modifiable is enabled.
Repeat this option to specify multiple list items. If populated, any users that are not
included in this list cannot be modified. This option overwrites any existing entries in
the modifiable users list; to add or remove users without affecting current entries,
use --add-modifiable-users or --remove-modifiable-users.
--clear-modifiable-users
Removes all entries from the list of modifiable users.
--add-modifiable-users <string>
Adds an entry to the list of modifiable users that is checked if --restrict-modifiable is enabled. Repeat this option to specify multiple list items.
--remove-modifiable-users <string>
Removes an entry from the list of modifiable users that is checked if --restrict-modifiable is enabled. Repeat this option to specify multiple list items.
--netgroup-file <path>
Specifies the path to a netgroup replacement file.
--normalize-groups {yes | no}
Normalizes group names to lowercase before lookup.
--normalize-users {yes | no}
Normalizes user names to lowercase before lookup.
--ntlm-support {all | v2only | none}
For users with NTLM-compatible credentials, specifies which NTLM versions to
support. Valid values are all, v2only, and none. NTLMv2 provides additional
security over NTLM and is recommended.
--password-file <path>
Specifies the path to a passwd.db replacement file.
--provider-domain <string>
Specifies the domain that this provider will use to qualify user and group names.
--restrict-findable {yes | no}
Specifies whether to check this provider for filtered lists of findable and unfindable
users and groups.
--restrict-listable {yes | no}
Specifies whether to check this provider for filtered lists of viewable and unviewable
users and groups.
--restrict-modifiable {yes | no}
Specifies whether to check this provider for filtered lists of modifiable and
unmodifiable users and groups.
--unfindable-groups <string>
If --restrict-findable is enabled and the findable groups list is empty,
specifies a group that cannot be resolved by this provider. Repeat this option to
specify multiple list items. This option overwrites any existing entries in the
unfindable groups list; to add or remove groups without affecting current entries, use
--add-unfindable-groups or --remove-unfindable-groups.
--clear-unfindable-groups
Removes all entries from the list of unfindable groups.
--add-unfindable-groups <string>
Adds an entry to the list of unfindable groups that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--remove-unfindable-groups <string>
Removes an entry from the list of unfindable groups that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--unfindable-users <string>
If --restrict-findable is enabled and the findable users list is empty, specifies
a user that cannot be resolved by this provider. Repeat this option to specify multiple
list items. This option overwrites any existing entries in the unfindable users list; to
add or remove users without affecting current entries, use --add-unfindable-users or --remove-unfindable-users.
--clear-unfindable-users
Removes all entries from the list of unfindable users.
--add-unfindable-users <string>
Adds an entry to the list of unfindable users that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--remove-unfindable-users <string>
Removes an entry from the list of unfindable users that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--unlistable-groups <string>
If --restrict-listable is enabled and the viewable groups list is empty,
specifies a group that cannot be listed by this provider. Repeat this option to specify
multiple list items. This option overwrites any existing entries in the unlistable groups
list; to add or remove groups without affecting current entries, use --add-unlistable-groups or --remove-unlistable-groups.
--clear-unlistable-groups
Removes all entries from the list of unviewable groups.
--add-unlistable-groups <string>
Adds an entry to the list of unviewable groups that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--remove-unlistable-groups <string>
Removes an entry from the list of unviewable groups that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--unlistable-users <string>
If --restrict-listable is enabled and the viewable users list is empty,
specifies a user that cannot be listed by this provider. Repeat this option to specify
multiple list items. This option overwrites any existing entries in the unlistable users
list; to add or remove users without affecting current entries, use --add-unlistable-users or --remove-unlistable-users.
--clear-unlistable-users
Removes all entries from the list of unviewable users.
--add-unlistable-users <string>
Adds an entry to the list of unviewable users that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--remove-unlistable-users <string>
Removes an entry from the list of unviewable users that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--unmodifiable-groups <string>
If --restrict-modifiable is enabled and the modifiable groups list is empty,
specifies a group that cannot be modified. Repeat this option to specify multiple list
items. This option overwrites any existing entries in this provider's unmodifiable
groups list; to add or remove groups without affecting current entries, use --add-unmodifiable-groups or --remove-unmodifiable-groups.
--clear-unmodifiable-groups
Removes all entries from the list of unmodifiable groups.
--add-unmodifiable-groups <string>
Adds an entry to the list of unmodifiable groups that is checked if --restrict-modifiable is enabled. Repeat this option to specify multiple list items.
--remove-unmodifiable-groups <string>
Removes an entry from the list of unmodifiable groups that is checked if --restrict-modifiable is enabled. Repeat this option to specify multiple list
items.
--unmodifiable-users <string>
If --restrict-modifiable is enabled and the modifiable users list is empty,
specifies a user that cannot be modified. Repeat this option to specify multiple list
items. This option overwrites any existing entries in this provider's unmodifiable
users list; to add or remove users without affecting current entries, use --add-unmodifiable-users or --remove-unmodifiable-users.
--clear-unmodifiable-users
Removes all entries from the list of unmodifiable users.
--add-unmodifiable-users <string>
Adds an entry to the list of unmodifiable users that is checked if --restrict-modifiable is enabled. Repeat this option to specify multiple list items.
--remove-unmodifiable-users <string>
Removes an entry from the list of unmodifiable users that is checked if --restrict-modifiable is enabled. Repeat this option to specify multiple list
items.
--user-domain <string>
Specifies the domain that this provider will use to qualify users. The default user
domain is FILE_USERS.
{--verbose | -v}
Displays the results of running the command.
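For example, the following command is an illustration only; the provider and user names are placeholders. It enables the listable-users filter for a file provider and adds a user to the list of viewable users:
isi auth file modify FileProvider1 --restrict-listable=yes \
--add-listable-users=jsmith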
Options
<provider-name>
Options
<name>
Specifies the SID of the user to add to the group. Repeat this option to specify
multiple users.
--add-wellknown <name>
Specifies a wellknown persona name to add to the group. Repeat this option to
specify multiple personas.
--sid <string>
Sets the Windows security identifier (SID) for the group, for example S-1-5-21-13.
--zone <string>
Specifies the access zone in which to create the group.
--provider <string>
Specifies a local authentication provider in the specified access zone.
{--verbose | -v}
Displays more detailed information.
{--force | -f}
Suppresses command-line prompts and messages.
Options
This command requires <group>, --gid <integer>, or --sid <string>.
<group>
Options
There are no options for this command.
Examples
To flush all cached group information, run the following command:
isi auth groups flush
Options
--domain <string>
Specifies the provider domain.
--zone <string>
Specifies an access zone.
--provider <string>
Specifies an authentication provider.
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
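For example, the following command, with a placeholder zone name, lists the groups in a specific access zone in JSON format:
isi auth groups list --zone=zone3 --format=json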
Options
This command requires <group>, --gid <integer>, or --sid <string>.
<group>
--add-wellknown <name>
Specifies a well-known SID to add to the group. Repeat this option to specify multiple
list items.
--remove-wellknown <name>
Specifies a well-known SID to remove from the group. Repeat this option to specify
multiple list items.
--zone <string>
Specifies the group's access zone.
--provider <string>
Specifies the group's authentication provider.
{--verbose | -v}
Displays more detailed information.
{--force | -f}
Suppresses command-line prompts and messages.
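For example, a command of the following form, in which the group name is a placeholder, adds the well-known Everyone persona to a group:
isi auth groups modify ftpusers --add-wellknown=Everyone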
Options
This command requires <group>, --gid <integer>, or --sid <string>.
<group>
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
This command requires <group>, --gid <integer>, or --sid <string>.
<group>
isi auth id
Displays your access token.
Syntax
isi auth id
Options
There are no options for this command.
Options
<realm>
Options
<realm>
Options
<realm>
Options
{--limit | -l} <integer>
Specifies the number of Kerberos realms to display.
--format {table | json | csv | list}
Specifies whether to display the Kerberos realms in a tabular, JSON, CSV, or list
format.
{--no-header | -a}
Specifies not to display the headers in the CSV or tabular formats.
{--no-footer | -z}
Specifies not to display the table summary footer information.
Options
<realm>
Options
<realm>
Specifies the name of the Kerberos realm.
<user>
Specifies a user name with permissions to create the service principal names (SPNs)
in the given Kerberos realm.
--keytab-file <string>
Specifies the keytab file to import.
--password <string>
Specifies the password used for joining a Kerberos realm.
--spn <string>
Specifies the SPNs to register. Specify --spn for each additional SPN that you want to
register.
--is-default-realm <boolean>
Specifies whether the Kerberos realm will be the default.
--kdc <string>
Specifies the hostname or IP address of the Key Distribution Center (KDC). Specify --kdc for each additional hostname or IP address of the KDC.
--admin-server <string>
Specifies the hostname or IP address of the administrative server (master KDC).
--default-domain <string>
Specifies the default Kerberos domain for the Kerberos realm used for translating v4
principal names.
Options
<provider-name>
Options
{--limit | -l} <integer>
Specifies the number of Kerberos providers to display.
--format {table | json | csv | list}
Specifies to display the Kerberos providers in a tabular, JSON, CSV, or list format.
{--no-header | -a}
Specifies not to display the headers in the CSV or tabular formats.
{--no-footer | -z}
Specifies not to display the table summary footer information.
Options
<provider-name>
Options
<domain>
Options
<domain>
Options
<domain>
Options
{--limit | -l} <integer>
Specifies the number of Kerberos domain mappings to display.
--format {table | json | csv | list}
Specifies whether to display the Kerberos domain mappings in a tabular, JSON, CSV,
or list formats.
{--no-header | -a}
Specifies not to display the headers in the CSV or tabular formats.
{--no-footer | -z}
Options
<domain>
Options
<provider-name>
Specifies the Kerberos provider name.
<user>
Specifies a user name with permissions to create the service principal names (SPNs)
in the Kerberos realm.
<spn>
Options
<provider-name>
Options
<provider-name>
Options
<provider-name>
Specifies the Kerberos provider name.
<user>
Specifies a user name with permissions to join clients to the given Kerberos domain.
--password <string>
Specifies the password that was used when modifying the Kerberos realm.
{--force | -f}
Specifies not to ask for a confirmation.
Options
<provider-name>
Options
<provider-name>
Specifies the Kerberos provider name.
{--limit | -l} <integer>
Specifies the number of SPNs and keys to display.
--format {table | json | csv | list}
Specifies to display the SPNs and keys in a tabular, JSON, CSV, or list format.
{--no-header | -a}
Specifies not to display the headers in the CSV or tabular formats.
{--no-footer | -z}
Specifies not to display the table summary footer information.
Options
--always-send-preauth <boolean>
Specifies whether to send preauth.
--revert-always-send-preauth
Sets the value of --always-send-preauth to the system default.
--default-realm <string>
Specifies the default Kerberos realm name.
--dns-lookup-kdc <boolean>
Allows DNS to find Key Distribution Centers (KDCs).
--revert-dns-lookup-kdc
Sets the value of --dns-lookup-kdc to the system default.
--dns-lookup-realm <boolean>
Allows DNS to find the Kerberos realm names.
--revert-dns-lookup-realm
Sets the value of --dns-lookup-realm to the system default.
[--search-scope <scope>]
[--search-timeout <integer>]
[--shell-attribute <string>]
[--uid-attribute <string>]
[--unfindable-groups <string>]
[--unfindable-users <string>]
[--unique-group-members-attribute <string>]
[--unlistable-groups <string>]
[--unlistable-users <string>]
[--user-base-dn <string>]
[--user-domain <string>]
[--user-filter <string>]
[--user-search-scope <scope>]
[--bind-password <string>]
[--set-bind-password]
[--verbose]
Options
<name>
Specifies the LDAP attribute that contains common names. The default value is cn.
--create-home-directory {yes | no}
Specifies whether to automatically create a home directory the first time a user logs
in, if a home directory does not already exist for the user.
--crypt-password-attribute <string>
Specifies the LDAP attribute that contains UNIX passwords. This setting has no
default value.
--email-attribute <string>
Specifies the LDAP attribute that contains email addresses. The default value is
mail.
--enabled {yes | no}
Enables or disables the provider.
--enumerate-groups {yes | no}
Specifies whether to allow the provider to enumerate groups.
--enumerate-users {yes | no}
Specifies whether to allow the provider to enumerate users.
--findable-groups <string>
Specifies a list of groups that can be found in this provider if --restrict-findable is enabled. Repeat this option to specify each additional findable group.
If populated, groups that are not included in this list cannot be resolved.
--findable-users <string>
Specifies a list of users that can be found in this provider if --restrict-findable is enabled. Repeat this option to specify each additional findable user. If
populated, users that are not included in this list cannot be resolved.
--gecos-attribute <string>
Specifies the LDAP attribute that contains GECOS fields. The default value is gecos.
--gid-attribute <string>
Specifies the LDAP attribute that contains GIDs. The default value is gidNumber.
--group-base-dn <string>
Specifies the distinguished name of the entry at which to start LDAP searches for
groups.
--group-domain <string>
Specifies the domain that the provider will use to qualify groups. The default group
domain is LDAP_GROUPS.
--group-filter <string>
Sets the LDAP filter for group objects.
--group-members-attribute <string>
Specifies the LDAP attribute that contains group members. The default value is
memberUid.
--group-search-scope <scope>
Defines the default depth from the base distinguished name (DN) to perform LDAP
searches for groups.
The following values are valid:
default
Applies the setting in --search-scope.
--netgroup-base-dn <string>
Specifies the distinguished name of the entry at which to start LDAP searches for
netgroups.
--netgroup-filter <string>
Sets the LDAP filter for netgroup objects.
--netgroup-members-attribute <string>
Specifies the LDAP attribute that contains netgroup members. The default value is
memberNisNetgroup.
--netgroup-search-scope <scope>
Defines the depth from the base distinguished name (DN) to perform LDAP searches
for netgroups.
The following values are valid:
default
Applies the setting in --search-scope.
Options
<provider-name>
{--force | -f}
Suppresses command-line prompts and messages.
<provider-name>
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
[--remove-unlistable-users <string>]
[--user-base-dn <string>]
[--user-domain <string>]
[--user-filter <string>]
[--user-search-scope <scope>]
[--bind-password <string>]
[--set-bind-password]
[--verbose]
Options
<provider-name>
Specifies the time between provider online checks, in the format <integer>[{Y | M | W | D
| H | m | s}].
--cn-attribute <string>
Specifies the LDAP attribute that contains common names. The default value is cn.
--create-home-directory {yes | no}
Specifies whether to create a home directory the first time a user logs in, if a home
directory does not already exist for the user. The directory path is specified in the
path template through the --home-directory-template option.
--crypt-password-attribute <string>
Specifies the LDAP attribute that contains UNIX passwords. This setting has no
default value.
--email-attribute <string>
Specifies the LDAP attribute that contains email addresses. The default value is
mail.
--enabled {yes | no}
Enables or disables this provider.
--enumerate-groups {yes | no}
Specifies whether to allow the provider to enumerate groups.
--enumerate-users {yes | no}
Specifies whether to allow the provider to enumerate users.
--findable-groups <string>
Specifies a list of groups that can be found in this provider if --restrict-findable is enabled. Repeat this option to specify multiple list items. If populated,
groups that are not included in this list cannot be resolved in this provider. This
option overwrites the entries in the findable groups list; to add or remove groups
without affecting current entries, use --add-findable-groups or --remove-findable-groups.
--clear-findable-groups
Removes the list of findable groups.
--add-findable-groups <string>
Adds an entry to the list of findable groups that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--remove-findable-groups <string>
Removes an entry from the list of findable groups that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--findable-users <string>
Specifies a list of users that can be found in this provider if --restrict-findable is enabled. Repeat this option to specify multiple list items. If populated,
users that are not included in this list cannot be resolved in this provider. This option
overwrites the entries in the findable users list; to add or remove users without
affecting current entries, use --add-findable-users or --remove-findable-users.
--clear-findable-users
Removes the list of findable users.
--add-findable-users <string>
Adds an entry to the list of findable users that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--remove-findable-users <string>
Removes an entry from the list of findable users that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--gecos-attribute <string>
Specifies the LDAP attribute that contains GECOS fields. The default value is gecos.
--gid-attribute <string>
Specifies the LDAP attribute that contains GIDs. The default value is gidNumber.
--group-base-dn <string>
Specifies the distinguished name of the entry at which to start LDAP searches for
groups.
--group-domain <string>
Specifies the domain that this provider will use to qualify groups. The default group
domain is LDAP_GROUPS.
--group-filter <string>
Sets the LDAP filter for group objects.
--group-members-attribute <string>
Specifies the LDAP attribute that contains group members. The default value is
memberUid.
--group-search-scope <scope>
Defines the default depth from the base distinguished name (DN) to perform LDAP
searches for groups.
The following values are valid:
default
Applies the setting in --search-scope.
replaced with strings at home directory creation time that represent specific
variables. For example, %U, %D, and %Z are replaced with the user name, provider
domain name, and zone name, respectively. For more information, see the Home
directories section.
--homedir-attribute <string>
Specifies the LDAP attribute that is used when searching for the home directory. The
default value is homeDirectory.
--ignore-tls-errors {yes | no}
Continues over a secure connection even if identity checks fail.
--listable-groups <string>
Specifies a list of groups that can be viewed in this provider if --restrict-listable is enabled. Repeat this option to specify multiple list items. If populated,
groups that are not included in this list cannot be viewed in this provider. This option
overwrites the entries in the listable groups list; to add or remove groups without
affecting current entries, use --add-listable-groups or --remove-listable-groups.
--clear-listable-groups
Removes all entries from the list of viewable groups.
--add-listable-groups <string>
Adds an entry to the list of listable groups that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--remove-listable-groups <string>
Removes an entry from the list of viewable groups that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--listable-users <string>
Specifies a list of users that can be viewed in this provider if --restrict-listable is enabled. Repeat this option to specify multiple list items. If populated,
users that are not included in this list cannot be viewed in this provider. This option
overwrites the entries in the listable users list; to add or remove users without
affecting current entries, use --add-listable-users or --remove-listable-users.
--clear-listable-users
Removes all entries from the list of viewable users.
--add-listable-users <string>
Adds an entry to the list of listable users that is checked if --restrict-listable
is enabled. Repeat this option to specify multiple list items.
--remove-listable-users <string>
Removes an entry from the list of viewable users that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--login-shell <path>
Specifies the pathname to the user's login shell, for users who access the file system
through SSH.
--member-of-attribute <string>
Sets the attribute to be used when searching LDAP for reverse memberships. This
LDAP value should be an attribute of the user type posixAccount that describes the
groups in which the POSIX user is a member.
--name-attribute <string>
Specifies the LDAP attribute that contains UIDs, which are used as login names. The
default value is uid.
--netgroup-base-dn <string>
Specifies the distinguished name of the entry at which to start LDAP searches for
netgroups.
--netgroup-filter <string>
Sets the LDAP filter for netgroup objects.
--netgroup-members-attribute <string>
Specifies the LDAP attribute that contains netgroup members. The default value is
memberNisNetgroup.
--netgroup-search-scope <scope>
Defines the depth from the base distinguished name (DN) to perform LDAP searches
for netgroups.
The following values are valid:
default
Applies the setting in --search-scope.
--clear-unfindable-groups
Removes all entries from the list of unfindable groups.
--add-unfindable-groups <string>
Adds an entry to the list of unfindable groups that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--remove-unfindable-groups <string>
Removes an entry from the list of unfindable groups that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--unfindable-users <string>
Specifies a user that cannot be found in this provider if --restrict-findable is
enabled. Repeat this option to specify multiple list items. This option overwrites the
entries in the unfindable users list; to add or remove users without affecting current
entries, use --add-unfindable-users or --remove-unfindable-users.
--clear-unfindable-users
Removes all entries from the list of unfindable users.
--add-unfindable-users <string>
Adds an entry to the list of unfindable users that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--remove-unfindable-users <string>
Removes an entry from the list of unfindable users that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--unique-group-members-attribute <string>
Specifies the LDAP attribute that contains unique group members. This attribute is
used to determine which groups a user belongs to if the LDAP server is queried by the
user's DN instead of the user's name. This setting has no default value.
--unlistable-groups <string>
Specifies a group that cannot be listed in this provider if --restrict-listable
is enabled. Repeat this option to specify multiple list items. This option overwrites
the entries in the unlistable groups list; to add or remove groups without affecting
current entries, use --add-unlistable-groups or --remove-unlistable-groups.
--clear-unlistable-groups
Removes all entries from the list of unviewable groups.
--add-unlistable-groups <string>
Adds an entry to the list of unviewable groups that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--remove-unlistable-groups <string>
Removes an entry from the list of unviewable groups that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--unlistable-users <string>
Specifies a user that cannot be viewed in this provider if --restrict-listable
is enabled. Repeat this option to specify multiple list items. This option overwrites
the entries in the unlistable users list; to add or remove users without affecting
current entries, use --add-unlistable-users or --remove-unlistable-users.
--set-bind-password
Interactively sets the password for the distinguished name that is used when binding
to the LDAP server. This option cannot be used with --bind-password.
{--verbose | -v}
Displays the results of running the command.
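For example, the following command is only a sketch with a placeholder provider name and base DN; it points an existing LDAP provider at a different user search base and then prompts for a new bind password:
isi auth ldap modify ldap1 --user-base-dn="ou=People,dc=example,dc=com" \
--set-bind-password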
Options
<provider-name>
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<provider-name>
Options
<provider-name>
--login-shell <string>
Specifies the path to the UNIX login shell.
--machine-name <string>
Specifies the domain to use to qualify user and group names for the provider.
--min-password-age <duration>
Sets the minimum password age, in the format <integer>[{Y | M | W | D | H | m | s}].
--max-password-age <duration>
Sets the maximum password age, in the format <integer>[{Y | M | W | D | H | m | s}].
--min-password-length <integer>
Sets the minimum password length.
--password-prompt-time <duration>
Sets the remaining time until a user is prompted for a password change, in the format
<integer>[{Y | M | W | D | H | m | s}].
--password-complexity {lowercase | uppercase | numeric | symbol}
Specifies the conditions that a password is required to meet. A password must
contain at least one character from each specified option to be valid. For example, if
lowercase and numeric are specified, a password must contain at least one
lowercase character and one digit to be valid. Symbols are valid, excluding # and @.
--clear-password-complexity
Clears the list of parameters against which to validate new passwords.
--add-password-complexity {lowercase | uppercase | numeric |
symbol}
Adds items to the list of parameters against which to validate new passwords. Repeat
this command to specify additional password-complexity options.
--remove-password-complexity <string>
Removes items from the list of parameters against which to validate new passwords.
Repeat this command to specify each password-complexity option that you want to
remove.
--password-history-length <integer>
Specifies the number of previous passwords to store to prevent reuse of a previous
password. The max password history length is 24.
{--verbose | -v}
Displays more detailed information.
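For example, the following command, shown with a placeholder provider name, requires passwords to be changed at least every 90 days and prevents reuse of the last 10 passwords:
isi auth local modify <provider-name> --max-password-age=90D \
--password-history-length=10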
Options
{--set | -s} <string>
Sets the log level for the current node. The log level determines how much
information is logged.
The following values are valid and are organized from least to most information:
always
error
warning
info
verbose
debug
trace
Note
Levels verbose, debug, and trace may cause performance issues. Levels debug and
trace log information that likely will be useful only when consulting EMC Isilon
Technical Support.
Examples
To set the log level to debug, run the following command:
isi auth log-level --set=debug
Options
<source>
--only-external
Only deletes identity mappings that were created automatically and that include a
UID or GID from an external authentication source. Must be used in conjunction with
--all.
--2way
Specifies or deletes a two-way, or reverse, mapping.
--target <string>
Specifies the mapping target by identity type, in the format <type>:<value>, for
example, UID:2002.
--target-uid <integer>
Specifies the mapping target by UID.
--target-gid <integer>
Specifies the mapping target by GID.
--target-sid <string>
Specifies the mapping target by SID.
--zone <string>
Deletes identity mappings in the specified access zone. If no access zone is
specified, mappings are deleted from the default System zone.
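For example, a command of the following form, in which the UID is a placeholder, deletes the two-way mapping for a source identity in the zone3 access zone:
isi auth mapping delete UID:2002 --2way --zone=zone3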
Options
If no option is specified, the full kernel mapping database is displayed.
{--file | -f} <path>
Prints the database to the specified output file.
--zone <string>
Displays the database from the specified access zone. If no access zone is specified,
displays all mappings.
Examples
To view the kernel mapping database, run the following command:
isi auth mapping dump
Options
You must specify either --all or one of the source options.
--all
Flushes all identity mappings on the EMC Isilon cluster.
--source <string>
Specifies the mapping source by identity type, in the format <type>:<value>, for
example, UID:2002.
--source-uid <integer>
Specifies the source identity by UID.
--source-gid <integer>
Specifies the source identity by GID.
--source-sid <string>
Specifies the source identity by SID.
--zone <string>
Specifies the access zone of the source identity. If no access zone is specified, any
mapping for the specified source identity is flushed from the default System zone.
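For example, the following command, shown with a placeholder UID, flushes the mappings for a single source identity from the zone3 access zone:
isi auth mapping flush --source-uid=2002 --zone=zone3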
Options
Note
When modifying a UID or GID range, make sure that your settings meet the following
requirements:
A mapping does not overlap with another range that might be used by other IDs on
the cluster
The mapping is large enough to avoid running out of unused IDs; if all IDs in the
range are in use, ID allocation will fail.
--set-uid-low <integer>
Sets the lowest UID value in the range.
--set-uid-high <integer>
Sets the highest UID value in the range.
--set-uid-hwm <integer>
Specifies the next UID that will be allocated (the high water mark).
Note
If the high water mark is set to more than the high UID value, UID allocation will fail.
The high water mark cannot be set to less than the lowest UID value in the range.
If the specified <integer> value is less than the low UID value, the high water mark
is set to the low UID value.
--set-gid-low <integer>
Sets the lowest GID value in the range.
--set-gid-high <integer>
Sets the highest GID value in the range.
--set-gid-hwm <integer>
Specifies the next GID that will be used (the high water mark).
Note
If the high water mark is set to more than the high GID value, GID allocation will fail.
The high water mark cannot be set to less than the lowest GID value in the range. If the specified <integer> value is less than the low GID value, the high water mark is set to the low GID value.
--get-uid-range
Displays the current UID range.
--get-gid-range
Displays the current GID range.
Options
{--file | -f} <path>
Specifies the full path to the file to import. File content must be in the same format as
the output that is displayed by running the isi auth mapping dump command.
{--overwrite | -o}
Overwrites existing entries in the mapping database file.
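For example, assuming the import is run through the isi auth mapping import command, a previously exported database could be loaded from an illustrative file path as follows:
isi auth mapping import --file=/ifs/data/mappings.txt --overwrite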
Options
<id>
Specifies the ID of the source identity type in the format <type>:<value>; for example, UID:2002.
--uid <integer>
Specifies the mapping source by UID.
--gid <integer>
Specifies the mapping source by GID.
--sid <string>
Specifies the mapping source by SID.
--nocreate
Specifies that nonexistent mappings should not be created.
--zone <string>
Specifies the access zone of the source identity. If no access zone is specified, OneFS
displays mappings from the default System zone.
Examples
The following command displays mappings for a user whose UID is 2002 in the zone3
access zone:
isi auth mapping view uid:2002 --zone=zone3
Mapping
---------------------------------------------
test1
UID:2002
2002
None
S-1-5-21-1776575851-2890035977-2418728619-1004
test1
Options
<source>
--zone <string>
Specifies the access zone that the ID mapping is applied to. If no access zone is specified, the mapping is applied to the default System zone.
Options
<source>
Options
This command requires <user> or --uid <integer> or --kerberos-principal <string>.
<user>
Options
--netgroup <string>
Specifies the name of a netgroup.
--recursive
Recursively resolves nested netgroups.
--ignore
Ignores errors and unresolvable netgroups.
--raw
Displays raw netgroup information.
Options
<name>
--login-shell <path>
Specifies the path to the user's login shell. This setting applies only to users who
access the file system through SSH.
--normalize-groups {yes | no}
Normalizes group names to lowercase before lookup.
--normalize-users {yes | no}
Normalizes user names to lowercase before lookup.
--provider-domain <string>
Specifies the domain that this provider will use to qualify user and group names.
--ntlm-support {all | v2only | none}
For users with NTLM-compatible credentials, specifies which NTLM versions to
support. Valid values are all, v2only, and none. NTLMv2 provides additional
security over NTLM.
--request-timeout <integer>
Specifies the request timeout interval in seconds.
--restrict-findable {yes | no}
Specifies whether to check this provider for filtered lists of findable and unfindable
users and groups.
--restrict-listable {yes | no}
Specifies whether to check this provider for filtered lists of viewable and unviewable
users and groups.
--retry-time <integer>
Sets the timeout period in seconds after which a request will be retried.
--unfindable-groups <string>
If --restrict-findable is enabled and the findable groups list is empty,
specifies a group that cannot be resolved by this provider. Repeat this option to
specify multiple list items.
--unfindable-users <string>
If --restrict-findable is enabled and the findable users list is empty, specifies
a user that cannot be resolved by this provider. Repeat this option to specify multiple
list items.
--unlistable-groups <string>
If --restrict-listable is enabled and the listable groups list is empty,
specifies a group that cannot be viewed by this provider. Repeat this option to specify
multiple list items.
--unlistable-users <string>
If --restrict-listable is enabled and the listable users list is empty, specifies
a user that cannot be viewed by this provider. Repeat this option to specify multiple
list items.
--user-domain <string>
Specifies the domain that this provider will use to qualify users. The default user
domain is NIS_USERS.
--ypmatch-using-tcp {yes | no}
Uses TCP for YP Match operations.
{--verbose | -v}
Displays the results of running the command.
Options
<provider-name>
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<provider-name>
--clear-findable-groups
Removes all entries from the list of findable groups.
--add-findable-groups <string>
Adds an entry to the list of findable groups that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--remove-findable-groups <string>
Removes an entry from the list of findable groups that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--findable-users <string>
Specifies a user that can be found in this provider if --restrict-findable is
enabled. Repeat this option to specify multiple list items. If populated, users that are
not included in this list cannot be resolved. This option overwrites the entries in the
findable users list; to add or remove users without affecting current entries, use --add-findable-users or --remove-findable-users.
--clear-findable-users
Removes all entries from the list of findable users.
--add-findable-users <string>
Adds an entry to the list of findable users that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--remove-findable-users <string>
Removes an entry from the list of findable users that is checked if --restrict-findable is enabled. Repeat this option to specify multiple list items.
--group-domain <string>
Specifies the domain that this provider will use to qualify groups. The default group
domain is NIS_GROUPS.
--home-directory-template <path>
Specifies the path to use as a template for naming home directories. The path must
begin with /ifs and can include special character sequences that are dynamically
replaced with strings at home directory creation time that represent specific
variables. For example, %U, %D, and %Z are replaced with the user name, provider
domain name, and zone name, respectively. For more information, see the Home
directories section.
--hostname-lookup {yes | no}
Enables or disables host name lookups.
--listable-groups <string>
Specifies a group that can be viewed in this provider if --restrict-listable is
enabled. Repeat this option to specify multiple list items. If populated, groups that
are not included in this list cannot be viewed. This option overwrites the entries in the
listable groups list; to add or remove groups without affecting current entries, use --add-listable-groups or --remove-listable-groups.
--clear-listable-groups
Removes all entries from the list of viewable groups.
--add-listable-groups <string>
Adds an entry to the list of viewable groups that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--remove-listable-groups <string>
Removes an entry from the list of viewable groups that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--listable-users <string>
Specifies a user that can be viewed in this provider if --restrict-listable is
enabled. Repeat this option to specify multiple list items. If populated, users that are
not included in this list cannot be viewed. This option overwrites the entries in the
listable users list; to add or remove users without affecting current entries, use --add-listable-users or --remove-listable-users.
--clear-listable-users
Removes all entries from the list of viewable users.
--add-listable-users <string>
Adds an entry to the list of viewable users that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--remove-listable-users <string>
Removes an entry from the list of viewable users that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--login-shell <path>
Specifies the path to the user's login shell. This setting applies only to users who
access the file system through SSH.
--normalize-groups {yes | no}
Normalizes group names to lowercase before lookup.
--normalize-users {yes | no}
Normalizes user names to lowercase before lookup.
--provider-domain <string>
Specifies the domain that this provider will use to qualify user and group names.
--ntlm-support {all | v2only | none}
For users with NTLM-compatible credentials, specifies which NTLM versions to
support. Valid values are all, v2only, and none. NTLMv2 provides additional
security over NTLM.
--request-timeout <integer>
Specifies the request timeout interval in seconds.
--restrict-findable {yes | no}
Specifies whether to check this provider for filtered lists of findable and unfindable
users and groups.
--restrict-listable {yes | no}
Specifies whether to check this provider for filtered lists of viewable and unviewable
users and groups.
--retry-time <integer>
Sets the timeout period in seconds after which a request will be retried.
--unfindable-groups <string>
entries in the unlistable users list; to add or remove users without affecting current
entries, use --add-unlistable-users or --remove-unlistable-users.
--clear-unlistable-users
Removes all entries from the list of unviewable users.
--add-unlistable-users <string>
Adds an entry to the list of unviewable users that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--remove-unlistable-users <string>
Removes an entry from the list of unviewable users that is checked if --restrict-listable is enabled. Repeat this option to specify multiple list items.
--user-domain <string>
Specifies the domain that this provider will use to qualify users. The default user
domain is NIS_USERS.
--ypmatch-using-tcp {yes | no}
Uses TCP for YP Match operations.
{--verbose | -v}
Displays the results of running the command.
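For example, a command of the following form (the provider name nisprovider is illustrative) would restrict the provider to NTLMv2 and normalize user names before lookup:
isi auth nis modify nisprovider --ntlm-support=v2only --normalize-users=yes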
Options
<provider-name>
Options
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Note
When using the --verbose option, the output Read Write: No means that the
privileges are read-only.
Options
There are no options for this command.
Options
<name>
Options
<role>
{--verbose | -v}
Displays more detailed information.
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<role>
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Examples
To view the members of the SystemAdmin role, run the following command:
isi auth roles members list systemadmin
In the following sample output, the SystemAdmin role currently contains one member, a
user named admin:
Type Name
----------
user admin
----------
Total: 1
Options
<role>
--remove-group <string>
Removes a group with the specified name from the role. Repeat this option for each
additional item.
--add-gid <integer>
Adds a group with the specified GID to the role. Repeat this option for each additional
item.
--remove-gid <integer>
Removes a group with the specified GID from the role. Repeat this option for each
additional item.
--add-uid <integer>
Adds a user with the specified UID to the role. Repeat this option for each additional
item.
--remove-uid <integer>
Removes a user with the specified UID from the role. Repeat this option for each
additional item.
--add-user <string>
Adds a user with the specified name to the role. Repeat this option for each
additional item.
--remove-user <string>
Removes a user with the specified name from the role. Repeat this option for each
additional item.
--add-sid <string>
Adds a user or group with the specified SID to the role. Repeat this option for each
additional item.
--remove-sid <string>
Removes a user or group with the specified SID from the role. Repeat this option for
each additional item.
--add-wellknown <string>
Adds a well-known SID with the specified name (for example, Everyone) to the role.
Repeat this option for each additional item.
--remove-wellknown <string>
Removes a well-known SID with the specified name from the role. Repeat this option
for each additional item.
--add-priv <string>
Adds a read/write privilege to the role. Applies to custom roles only. Repeat this
option for each additional item.
--add-priv-ro <string>
Adds a read-only privilege to the role. Applies to custom roles only. Repeat this
option for each additional item.
--remove-priv <string>
Removes a privilege from the role. Applies to custom roles only. Repeat this option for
each additional item.
{--verbose | -v}
Displays the results of running the command.
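For example, assuming these options belong to the isi auth roles modify command, a command of the following form (the user name jsmith is illustrative) would add a user to the SystemAdmin role:
isi auth roles modify SystemAdmin --add-user=jsmith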
Options
<role>
Options
<role>
Specifies the name of the role to view.
Options
--send-ntlmv2 {yes | no}
Specifies whether to send only NTLMv2 responses to an SMB client. The default value is no.
--revert-send-ntlmv2
Reverts the --send-ntlmv2 setting to the system default value.
--space-replacement <character>
For clients that have difficulty parsing spaces in user and group names, specifies a
substitute character. Be careful to choose a character that is not in use.
--revert-space-replacement
Reverts the --space-replacement setting to the system default value.
--workgroup <string>
Specifies the NetBIOS workgroup. The default value is WORKGROUP.
--revert-workgroup
Reverts the --workgroup setting to the system default value.
--provider-hostname-lookup <string>
Allows hostname lookup through authentication providers. Applies to NIS only.
--alloc-retries <integer>
Specifies the number of times to retry an ID allocation before failing.
--revert-alloc-retries
Reverts the --alloc-retries setting to the system default value.
--cache-cred-lifetime <duration>
Specifies the length of time to cache credential responses from the ID mapper, in the
format <integer>[{Y | M | W | D | H | m | s}].
--revert-cache-cred-lifetime
Reverts the --cache-cred-lifetime setting to the system default value.
--cache-id-lifetime <duration>
Specifies the length of time to cache ID responses from the ID mapper, in the format
<integer>[{Y | M | W | D | H | m | s}].
--revert-cache-id-lifetime
Reverts the --cache-id-lifetime setting to the system default value.
--on-disk-identity <string>
Controls the preferred identity to store on disk. If OneFS is unable to convert an
identity to the preferred format, it is stored as is. This setting does not affect
identities that are already stored on disk.
The accepted values are listed below.
native
Allows OneFS to determine the identity to store on disk. This is the
recommended setting.
unix
Always stores incoming UNIX identifiers (UIDs and GIDs) on disk.
sid
Stores incoming Windows security identifiers (SIDs) on disk unless the SID was
generated from a UNIX identifier. If the SID was generated from a UNIX identifier,
OneFS converts it back to the UNIX identifier and stores it on disk.
Note
To prevent permission errors after changing the on-disk identity, run isi job jobs
start PermissionRepair with the convert mode specified.
--revert-on-disk-identity
Sets the --on-disk-identity setting to the system default value.
--rpc-max-requests <integer>
Specifies the maximum number of simultaneous ID mapper requests allowed. The
default value is 64.
--revert-rpc-max-requests
Sets the --rpc-max-requests setting to the system default value.
--unknown-gid <integer>
Specifies the GID to use for the unknown (anonymous) group.
--revert-unknown-gid
Sets the --unknown-gid setting to the system default value.
--unknown-uid <integer>
Specifies the UID to use for the unknown (anonymous) user.
--revert-unknown-uid
Sets the --unknown-uid setting to the system default value.
{--verbose | -v}
Displays more detailed information.
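For example, the following command (the workgroup name EXAMPLE is illustrative) would require NTLMv2 responses from SMB clients and set the NetBIOS workgroup:
isi auth settings global modify --send-ntlmv2=yes --workgroup=EXAMPLE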
Options
There are no options for this command.
Examples
To view the current authentication settings on the cluster, run the following command:
isi auth settings global view
The system displays the current global authentication settings, including the send-NTLMv2 setting, the NetBIOS workgroup, the credential and ID cache lifetimes, the on-disk identity, and the UID and GID allocation settings.
Options
--zone <string>
Specifies an access zone by name.
{--limit | -l} <integer>
Specifies the number of providers to display.
--format {table | json | csv | list}
Displays providers in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<name>
Office Phone:
Home Phone:
Other information:
Values must be entered as a comma-separated list, and values that contain spaces
must be enclosed in quotation marks. For example, the --gecos="Jane
Doe",Seattle,555-5555,,"Temporary worker" option with these values
results in the following entries:
Full Name: Jane Doe
Office Location: Seattle
Office Phone: 555-5555
Home Phone:
Other information: Temporary worker
--home-directory <path>
Specifies the path to the user's home directory.
--password <string>
Sets the user's password to the specified value. This option cannot be used with the
--set-password option.
--password-expires {yes | no}
Specifies whether to allow the password to expire.
--primary-group <name>
Specifies the user's primary group by name.
--primary-group-gid <integer>
Specifies the user's primary group by GID.
--primary-group-sid <string>
Specifies the user's primary group by SID.
--prompt-password-change {yes | no}
Prompts the user to change the password during the next login.
--shell <path>
Specifies the path to the UNIX login shell.
--uid <integer>
Overrides automatic allocation of the UNIX user identifier (UID) with the specified
value. Setting this option is not recommended.
--zone <string>
Specifies the access zone in which to create the user.
--provider <string>
Specifies a local authentication provider in the specified access zone.
--set-password
Sets the password interactively. This option cannot be used with the --password
option.
{--verbose | -v}
Displays the results of running the command.
{--force | -f}
Suppresses command-line prompts and messages.
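For example, a command of the following form (the user name and home directory path are illustrative) creates a local user and prompts for the password interactively:
isi auth users create jsmith --set-password \
--home-directory=/ifs/home/jsmith --zone=System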
Options
This command requires <user>, --uid <integer>, or --sid <string>.
<user>
Options
There are no options for this command.
Examples
To flush all cached user information, run the following command:
isi auth users flush
Options
--domain <string>
Displays only the users in the specified provider domain.
--zone <string>
Specifies the access zone whose users you want to list. The default access zone is
System.
--provider <string>
Displays only the users in the specified authentication provider. The syntax for specifying providers is <provider-type>:<provider-name>; be sure to include the colon separator. For example, isi auth users list --provider="lsa-ldap-provider:Unix LDAP".
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
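For example, a command of the following form (the domain name EXAMPLE is illustrative) lists up to 10 users from the specified provider domain:
isi auth users list --domain=EXAMPLE --limit=10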
Options
This command requires <user>, --uid <integer>, or --sid <string>.
<user>
Home Phone:
Other information:
Values must be entered as a comma-separated list, and values that contain spaces
must be enclosed in quotation marks. For example, the --gecos="Jane
Doe",Seattle,555-5555,,"Temporary worker" option with these values
results in the following entries:
Full Name: Jane Doe
Office Location: Seattle
Office Phone: 555-5555
Home Phone:
Other information: Temporary worker
--home-directory <path>
Specifies the path to the user's home directory.
--password <string>
Sets the user's password to the specified value. This option cannot be used with the
--set-password option.
--password-expires {yes | no}
Specifies whether to allow the password to expire.
--primary-group <name>
Specifies the user's primary group by name.
--primary-group-gid <integer>
Specifies the user's primary group by GID.
--primary-group-sid <string>
Specifies the user's primary group by SID.
--prompt-password-change {yes | no}
Prompts the user to change the password during the next login.
--shell <path>
Specifies the path to the UNIX login shell.
--new-uid <integer>
Specifies a new UID for the user. Setting this option is not recommended.
--zone <string>
Specifies the name of the access zone that contains the user.
--add-group <name>
Specifies the name of a group to add the user to. Repeat this option to specify
multiple list items.
--add-gid <integer>
Specifies the GID of a group to add the user to. Repeat this option to specify multiple
list items.
--remove-group <name>
Specifies the name of a group to remove the user from. Repeat this option to specify
multiple list items.
--remove-gid <integer>
Specifies the GID of a group to remove the user from. Repeat this option to specify
multiple list items.
--provider <string>
Specifies an authentication provider of the format <type>:<instance>. Valid provider
types are ads, ldap, nis, file, and local. For example, an LDAP provider named
auth1 can be specified as ldap:auth1.
--set-password
Sets the password interactively. This option cannot be used with the --password
option.
{--verbose | -v}
Displays the results of running the command.
{--force | -f}
Suppresses command-line prompts and messages.
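For example, a command of the following form (the user and group names are illustrative) adds a user to a group and forces a password change at the next login:
isi auth users modify jsmith --add-group=marketing --prompt-password-change=yes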
Options
This command requires <user>, --uid <integer>, or --sid <string>.
<user>
CHAPTER 7
Identity management
For example, you can use the user mapping service to:
Authenticate a user with Active Directory but give the user a UNIX identity.
Disallow login of users that do not exist in both Active Directory and LDAP.
For more information about identity management, see the white paper Managing identities
with the Isilon OneFS user mapping service at EMC Online Support.
Identity types
OneFS supports three primary identity types, each of which you can store directly on the
file system. The identity types are the user identifier (UID) and group identifier (GID) for UNIX, and the security identifier (SID) for Windows.
When you log on to an EMC Isilon cluster, the user mapper expands your identity to
include your other identities from all the directory services, including Active Directory,
LDAP, and NIS. After OneFS maps your identities across the directory services, it
generates an access token that includes the identity information associated with your
accounts. A token includes the following identifiers:
A UNIX user identifier (UID) and a group identifier (GID). A UID or GID is a 32-bit
number with a maximum value of 4,294,967,295.
A security identifier (SID) for a Windows user account. A SID is a series of authorities
and sub-authorities ending with a 32-bit relative identifier (RID). Most SIDs have the
form S-1-5-21-<A>-<B>-<C>-<RID>, where <A>, <B>, and <C> are specific to a domain or
computer and <RID> denotes the object in the domain.
A list of supplemental identities, including all groups in which the user is a member.
The token also contains privileges that stem from administrative role-based access
control.
On an Isilon cluster, a file contains permissions, which appear as an access control list (ACL). The ACL controls access to directories, files, and other securable system objects. When a user tries to access a file, OneFS compares the identities in the user's access token with the file's ACL. OneFS grants access when the file's ACL includes an access control entry (ACE) that allows the identity in the token to access the file and that does not include an ACE that denies the identity access.
Note
For more information about access control lists, including a description of the
permissions and how they correspond to POSIX mode bits, see the white paper titled EMC
Isilon multiprotocol data access with a unified security model on the EMC Online Support
web site.
When a name is provided as an identifier, it is converted into the corresponding user or
group object and the correct identity type. You can enter or display a name in various
ways:
UNIX assumes unique case-sensitive namespaces for users and groups. For example, Name and name represent different objects.
Windows provides a single, case-insensitive namespace for all objects and also specifies a prefix to target an Active Directory domain; for example, domain\name.
Kerberos and NFSv4 define principals, which require names to be formatted the same way as email addresses; for example, user@example.com.
Multiple names can reference the same object. For example, given the name support and the domain example.com, support, EXAMPLE\support, and support@example.com are all names for a single object in Active Directory.
Access tokens
An access token is created when the user first makes a request for access.
Access tokens represent who a user is when performing actions on the cluster and supply
the primary owner and group identities during file creation. Access tokens are also
compared against the ACL or mode bits during authorization checks.
During user authorization, OneFS compares the access token, which is generated during
the initial connection, with the authorization data on the file. All user and identity
mapping occurs during token generation; no mapping takes place during permissions
evaluation.
An access token includes all UIDs, GIDs, and SIDs for an identity, in addition to all OneFS
privileges. OneFS reads the information in the token to determine whether a user has
access to a resource. It is important that the token contains the correct list of UIDs, GIDs,
and SIDs.
An access token is created from a username that is supplied through an authentication source such as Kerberized NFSv3, Kerberized NFSv4, HTTP, FTP, HDFS, or SMB NTLM. Token generation then proceeds through a user identity lookup, ID mapping, user mapping, and an on-disk identity calculation. During ID mapping, the user's identifiers are associated across directory services; all SIDs are converted to their equivalent UID/GID and vice versa, and these ID mappings are added to the access token. During the on-disk identity calculation, the default on-disk identity is calculated from the final token and the global setting; these identities are used for newly created files.
ID mapping
The Identity (ID) mapping service maintains relationship information between mapped
Windows and UNIX identifiers to provide consistent access control across file sharing
protocols within an access zone.
Note
ID mapping and user mapping are different services, despite the similarity in names.
During authentication, the authentication daemon requests identity mappings from the
ID mapping service in order to create access tokens. Upon request, the ID mapping
service returns Windows identifiers mapped to UNIX identifiers or UNIX identifiers
mapped to Windows identifiers. When a user authenticates to a cluster over NFS with a
UID or GID, the ID mapping service returns the mapped Windows SID, allowing access to
files that another user stored over SMB. When a user authenticates to the cluster over
SMB with a SID, the ID mapping service returns the mapped UNIX UID and GID, allowing
access to files that a UNIX client stored over NFS.
Mappings between UIDs or GIDs and SIDs are stored according to access zone in a
cluster-distributed database called the ID map. Each mapping in the ID map is stored as a
one-way relationship from the source to the target identity type. Two-way mappings are
stored as complementary one-way mappings.
User and group lookups may be disabled or limited, depending on the Active Directory
settings. You enable user and group lookup settings through the isi auth ads
modify command.
If the ID mapping service does not locate and return a mapped UID or GID in the ID map,
the authentication daemon searches other external authentication providers configured
in the same access zone for a user that matches the same name as the Active Directory
user.
If a matching user name is found in another external provider, the authentication daemon
adds the matching user's UID or GID to the access token for the Active Directory user, and
the ID mapping service creates a mapping between the UID or GID and the Active
Directory user's SID in the ID map. This is referred to as an external mapping.
Note
When an external mapping is stored in the ID map, the UID is specified as the on-disk
identity for that user. When the ID mapping service stores a generated mapping, the SID
is specified as the on-disk identity.
If a matching user name is not found in another external provider, the authentication
daemon assigns a UID or GID from the ID mapping range to the Active Directory user's
SID, and the ID mapping service stores the mapping in the ID map. This is referred to as a
generated mapping. The ID mapping range is a pool of UIDs and GIDs allocated in the
mapping settings.
After a mapping has been created for a user, the authentication daemon retrieves the UID
or GID stored in the ID map upon subsequent lookups for the user.
For UIDs, the ID mapping service generates a UNIX SID with a domain of S-1-22-1 and
a resource ID (RID) matching the UID. For example, the UNIX SID for UID 600 is
S-1-22-1-600.
For GIDs, the ID mapping service generates a UNIX SID with a domain of S-1-22-2 and
an RID matching the GID. For example, the UNIX SID for GID 800 is S-1-22-2-800.
ID mapping ranges
In access zones with multiple external authentication providers, such as Active Directory
and LDAP, it is important that the UIDs and GIDs from different providers that are
configured in the same access zone do not overlap. Overlapping UIDs and GIDs between
providers within an access zone might result in some users gaining access to other users'
directories and files.
The range of UIDs and GIDs that can be allocated for generated mappings is configurable
in each access zone through the isi auth settings mappings modify
command. The default range for both UIDs and GIDs is 1000000 to 2000000 in each
access zone.
Do not include commonly used UIDs and GIDs in your ID ranges. For example, UIDs and
GIDs below 1000 are reserved for system accounts and should not be assigned to users
or groups.
User mapping
User mapping provides a way to control permissions by specifying a user's security
identifiers, user identifiers, and group identifiers. OneFS uses the identifiers to check file
or group ownership.
With the user-mapping feature, you can apply rules to modify which user identity OneFS
uses, add supplemental user identities, and modify a user's group membership. The
user-mapping service combines a user's identities from different directory services into a
single access token and then modifies it according to the rules that you create.
Note
You can configure mapping rules on a per-zone basis. Mapping rules must be configured
separately in each access zone that uses them. OneFS maps users only during login or
protocol access.
The user's groups come from Active Directory and LDAP, with the LDAP groups and the
autogenerated group GID added to the list. To pull groups from LDAP, the mapping
service queries the memberUid attribute. The user's home directory, gecos, and shell
come from Active Directory.
Stop all processing before applying a default deny rule. To do so, create a rule
that matches allowed users but does nothing, such as an add operator with no
field options, and has the break option. After enumerating the allowed users,
you can place a catchall deny at the end to replace anybody unmatched with an
empty user.
To prevent explicit rules from being skipped, in each group of rules, order explicit
rules before rules that contain wildcard characters.
Add the LDAP or NIS primary group to the supplemental groups
When an Isilon cluster is connected to Active Directory and LDAP, a best practice is
to add the LDAP primary group to the list of supplemental groups. This lets OneFS
honor group permissions on files created over NFS or migrated from other UNIX
storage systems. The same practice is advised when an Isilon cluster is connected to
both Active Directory and NIS.
On-disk identity
After the user mapper resolves a user's identities, OneFS determines an authoritative
identifier for it, which is the preferred on-disk identity.
OneFS stores either UNIX or Windows identities in file metadata on disk. On-disk identity
types are UNIX, SID, and native. Identities are set when a file is created or a file's access
control data is modified. Almost all protocols require some level of mapping to operate
correctly, so choosing the preferred identity to store on disk is important. You can
configure OneFS to store either the UNIX or the Windows identity, or you can allow OneFS
to determine the optimal identity to store.
Although you can change the type of on-disk identity, the native identity is best for a network with UNIX and Windows systems. In native on-disk identity mode, setting the UID as the on-disk identity improves NFS performance.
Note
The SID on-disk identity is for a homogeneous network of Windows systems managed
only with Active Directory. When you upgrade from a version earlier than OneFS 6.5, the
on-disk identity is set to UNIX. When you upgrade from OneFS 6.5 or later, the on-disk
identity setting is preserved. On new installations, the on-disk identity is set to native.
The native on-disk identity type allows the OneFS authentication daemon to select the
correct identity to store on disk by checking for the identity mapping types in the
following order:
1. Algorithmic mapping
2. External mapping: A user with an explicit UID and GID defined in a directory service (such as Active Directory with RFC 2307 attributes, LDAP, NIS, or the OneFS file provider or local provider) has the UNIX identity set as the on-disk identity.
3. Persistent mapping
4. No mapping: If a user lacks a UID or GID even after querying the other directory services and identity databases, its SID is set as the on-disk identity. In addition, to make sure a user can access files over NFS, OneFS allocates a UID and GID from a preset range of 1,000,000 to 2,000,000. In native on-disk identity mode, a UID or GID that OneFS generates is never set as the on-disk identity.
Note
If you change the on-disk identity type, you should run the PermissionRepair job in
convert mode to make sure that the disk representation of all files is consistent with the
changed setting.
Managing ID mappings
You can create, modify, and delete identity mappings and configure ID mapping settings.
The following command deletes all identity mappings in the zone3 access zone that
were both created automatically and include a UID or GID from an external
authentication source:
isi auth mapping delete --all --only-external --zone=zone3
The following command deletes the identity mapping of the user with UID 4236 in the
zone3 access zone:
isi auth mapping delete --source-uid=4236 --zone=zone3
The following command flushes the mapping of the user with UID 4236 in the zone3
access zone:
isi auth mapping flush --source-uid=4236 --zone=zone3
You can only create user-mapping rules if you are connected to the EMC Isilon cluster
through the System zone; however, you can apply user-mapping rules to specific
access zones. If you create a user-mapping rule for a specific access zone, the rule
applies only in the context of its zone.
When you change user-mapping on one node, OneFS propagates the change to the
other nodes.
After you make a user-mapping change, the OneFS authentication service reloads the
configuration.
The OneFS user access token contains a combination of identities from Active Directory
and LDAP if both directory services are configured. You can run the following commands
to discover the identities that are within each specific directory service.
Procedure
1. Establish an SSH connection to any node in the cluster.
2. View a user identity from Active Directory only by running the isi auth users
view command.
The following command displays the identity of a user named stand in the Active
Directory domain named YORK:
isi auth users view --user=YORK\\stand --show-groups
3. View a user identity from LDAP only by running the isi auth users view
command.
The following command displays the identity of an LDAP user named stand:
isi auth users view --user=stand --show-groups
UID: 4326
SID: S-1-22-1-4326
Primary Group
ID : GID:7222
Name : stand
Additional Groups: stand
sd-group
sd-group2
If you do not specify an access zone, user-mapping rules are created in the System zone.
Procedure
1. To create a rule to merge the Active Directory user with a user from LDAP, run the
following command, where <user-a> and <user-b> are placeholders for the identities to
be merged; for example, user_9440 and lduser_010, respectively:
isi zone zones modify System --add-user-mapping-rules \
"<DOMAIN> <user-a> &= <user-b>"
If the command runs successfully, the system displays the mapping rule, which is
visible in the User Mapping Rules line of the output:
Name: System
Cache Size: 4.77M
Map Untrusted:
SMB Shares:
Auth Providers:
Local Provider: Yes
NetBIOS Name:
All SMB Shares: Yes
All Auth Providers: Yes
User Mapping Rules: <DOMAIN>\<user_a> &= <user_b>
Home Directory Umask: 0077
Skeleton Directory: /usr/share/skel
Zone ID: 1
2. To verify the changes to the token, run a command similar to the following example:
isi auth mapping token <DOMAIN>\\<user-a>
If the command runs successfully, the system displays output similar to the following
example:
User
Name : <DOMAIN>\<user-a>
UID : 1000201
SID : S-1-5-21-1195855716-1269722693-1240286574-11547
ZID: 1
Zone: System
Privileges:
Primary Group
Name : <DOMAIN>\domain users
GID : 1000000
SID : S-1-5-21-1195855716-1269722693-1240286574-513
Supplemental Identities
Name : Users
GID : 1545
SID : S-1-5-32-545
Name : lduser_010
UID : 10010
SID : S-1-22-1-10010
Name : example
GID : 10000
SID : S-1-22-2-10000
Name : ldgroup_20user
GID : 10026
SID : S-1-22-2-10026
3. Write a rule similar to the following example to append the UNIX account to the
Windows account with the groups option:
MYDOMAIN\<win-username> ++ <UNIX-username> [groups]
Procedure
1. Establish an SSH connection to any node in the cluster.
2. Write a rule similar to the following example to insert information from LDAP into a
user's access token:
*\* += * [group]
3. Write a rule similar to the following example to append other information from LDAP to
a user's access token:
*\* ++ * [user,groups]
username
unix_name
primary_uid
primary_user_sid
primary_gid
primary_group_sid
Options control how a rule combines identity information in a token. The break option is
the exception: It stops OneFS from processing additional rules.
Although several options can apply to a rule, not all options apply to all operators. The
following table describes the effect of each option and the operators that they work with.
Option: user
Operators: insert, append
Copies the primary UID and primary user SID, if they exist, to the token.

Option: groups
Operators: insert, append
Copies the primary GID and primary group SID, if they exist, to the token.

Option: default_user
Operators: all operators
If the mapping service fails to find the second user in a rule, the service tries to find the username of the default user. The name of the default user cannot include wildcards. When you set the option for the default user in a rule with the command-line interface, you must set it with an underscore: default_user.

Option: break
Operators: all operators
Stops the mapping service from applying rules that follow the insertion point of the break option. The mapping service generates the final token at the point of the break.
Operator: append
Web interface: Append fields from a user
CLI: ++
Direction: Left-to-right

Operator: insert
Web interface: Insert fields from a user
CLI: +=
Direction: Left-to-right

Operator: replace
Web interface: Replace one user with a different user
CLI: =>
Direction: Left-to-right
If the replacement leaves the token without a user, OneFS denies access with a no such user error.

Operator: remove groups
Web interface: Remove supplemental groups from a user
CLI: --
Direction: Unary

Operator: join
CLI: &=
Direction: Bidirectional
Inserts the new identity into the token. If the new identity is the second user, the mapping service inserts it after the existing identity; otherwise, the service inserts it before the existing identity. The location of the insertion point is relevant when the existing identity is already the first in the list because OneFS uses the first identity to determine the ownership of new file system objects.
CHAPTER 8
Auditing
Auditing overview................................................................................................338
Protocol audit events.......................................................................................... 338
Supported event types........................................................................................ 338
Supported audit tools......................................................................................... 339
Managing audit settings......................................................................................340
Integrating with the EMC Common Event Enabler.................................................342
Auditing commands............................................................................................ 344
Auditing overview
You can audit system configuration changes and SMB and NFS protocol activity on an
EMC Isilon cluster. All audit data is stored and protected in the cluster file system and
organized by audit topics.
When you enable or disable system configuration auditing, no additional configuration is
required. If you enable configuration auditing, all configuration events that are handled
by the API, including writes, modifications, and deletions, are tracked and recorded in
the config audit topic directories.
You can enable and configure protocol auditing for one or more access zones in a cluster.
If you enable protocol auditing for an access zone, file-access events through the SMB
and NFS protocol are recorded in the protocol audit topic directories. The protocol
audit log file is consumable by auditing applications that support the EMC Common Event
Enabler (CEE), such as Varonis DatAdvantage for Windows. By default, OneFS logs only
the events that are handled by Varonis, but you can specify which events to log in each
access zone. For example, you might want to audit the default set of protocol events in
the System access zone but audit only successful attempts to delete files in a different
access zone.
By default, OneFS audits the protocol events that are handled by Varonis DatAdvantage: create, close, rename, delete, and set_security. Example protocol activities for these events include mounting a share, deleting a file, and closing a directory.
The following event types are available for forwarding through CEE but are unsupported by Varonis DatAdvantage: read, write, close, and get_security.
The following protocol audit events are not exported through CEE and are unsupported by Varonis DatAdvantage: logon and logoff.
Varonis DatAdvantage for Windows is a supported audit tool; the audit events that it handles are create, close, delete, rename, and set_security.
Note
It is recommended that you install and configure third-party auditing applications before
you enable the OneFS auditing feature. Otherwise, the backlog consumed by the tool may
be so large that results may be stale for a prolonged time.
For the most current list of supported auditing tools, see the Isilon Third-Party Software &
Hardware Compatibility Guide.
If you are integrating with a third-party auditing application, it is recommended that you
install and configure third-party auditing applications before you enable the OneFS
auditing feature. Otherwise, the backlog consumed by the tool may be so large that
results may be stale for a prolonged time.
Procedure
1. Run the isi audit settings modify command.
The following command enables system configuration auditing on the cluster:
isi audit settings modify --config-auditing-enabled=yes
Note
Because each audited event consumes system resources, it is recommended that you
only configure zones for events that are needed by your auditing application. It is
recommended that you install and configure third-party auditing applications before you
enable the OneFS auditing feature. Otherwise, the backlog consumed by the tool may be
so large that results may be stale for a prolonged time.
Procedure
1. Run the isi audit settings modify command.
The following command enables SMB and NFS protocol access auditing in the System
access zone, and forwards logged events to a CEE server:
isi audit settings modify --protocol-auditing-enabled=yes \
--cee-server-uris=http://sample.com:12228/cee \
--hostname=cluster.domain.com --audited-zones=System
Auditing settings
Basic settings for audit configuration are available through the isi audit settings modify command. When you audit protocol events for an access zone, a default set of audit events is logged. You can modify the list of audit events to log by running the isi zone zones modify <zone> command, where <zone> is the name of an audited access zone, for example, the System zone.
Options
--protocol-auditing-enabled {yes | no}
Enables or disables the auditing of I/O events.
--audited-zones <zones>
Specifies one or more access zones, separated by commas, that will be audited if
protocol auditing is enabled. This option overwrites all entries in the list of access
zones; to add or remove access zones without affecting current entries, use --add-audited-zones or --remove-audited-zones.
--clear-audited-zones
Clears the list of access zones to audit.
--add-audited-zones <zones>
Adds one or more access zones, separated by commas, to the list of zones that will
be audited if protocol auditing is enabled.
--remove-audited-zones <zones>
Removes one or more access zones, separated by commas, that will be audited if
protocol auditing is enabled.
--cee-server-uris <uris>
Specifies one or more CEE server URIs, separated by commas, where audit logs will
be forwarded if protocol auditing is enabled. This option overwrites all entries in the
list of CEE server URIs; to add or remove URIs without affecting current entries, use --add-cee-server-uris or --remove-cee-server-uris.
--clear-cee-server-uris
Clears the list of CEE server URIs to which audit logs are forwarded.
--add-cee-server-uris <uris>
Adds one or more CEE server URIs, separated by commas, where audit logs are
forwarded if protocol auditing is enabled.
--remove-cee-server-uris <uris>
Removes one or more CEE server URIs, separated by commas, from the list of URIs
where audit logs are forwarded if protocol auditing is enabled.
--cee-log-time <date>
Specifies a date after which the audit CEE forwarder will forward logs. To forward SMB
or NFS traffic logs, specify protocol. Specify <date> in the following format:
[protocol]@<YYYY>-<MM>-<DD> <HH>:<MM>:<SS>
--syslog-log-time <date>
Specifies a date after which the audit syslog forwarder will forward logs. To forward
SMB or NFS traffic logs, specify protocol. To forward configuration change logs,
specify config. Specify <date> in the following format:
[protocol|config]@<YYYY>-<MM>-<DD> <HH>:<MM>:<SS>
--hostname <string>
Specifies the hostname of this cluster for reporting protocol events to CEE servers.
This is typically the SmartConnect zone name. The hostname is used to construct the
UNC path of audited files and directories, for example, \\hostname\ifs\data\file.txt.
--config-auditing-enabled {yes | no}
Enables or disables the auditing of requests to modify application programming
interface (API) configuration settings.
{--verbose | -v}
Displays the results of running the command.
It is recommended that you install and configure third-party auditing applications before
you enable the OneFS auditing feature. Otherwise, the backlog consumed by the tool may
be so large that results may be stale for a prolonged time.
Install WinRAR or another suitable archival program that can open .iso files as an
archive, and copy the files.
Install SlySoft Virtual CloneDrive, which allows you to mount an ISO image as a drive
that you can copy files from.
Note
You should install a minimum of two servers. It is recommended that you install CEE v6.2
or higher.
Procedure
1. Download the CEE framework software from EMC Online Support:
a. In a web browser, go to https://support.emc.com/search/.
b. In the Search Support field, type Common Event Enabler for Windows, and
then click the Search icon.
c. Click Common Event Enabler <Version> for Windows, where <Version> is 6.0.0 or
later, and then follow the instructions to open or save the .iso file.
2. From the .iso file, extract the 32-bit or 64-bit EMC_CEE_Pack executable file that
you need.
After the extraction completes, the EMC Common Event Enabler installation wizard
opens.
3. Click Next to proceed to the License Agreement page.
4. Select the I accept... option to accept the terms of the license agreement, and then
click Next.
5. On the Customer Information page, type your user name and organization, select your
installation preference, and then click Next.
6. On the Setup Type page, select Complete, and then click Next.
7. Click Install to begin the installation.
The Installing EMC Common Event Enabler page displays the progress of the
installation. When the installation is complete, the InstallShield Wizard Completed
page appears.
8. Click Finish to exit the wizard.
9. Restart the system.
The following registry locations and keys are used to configure CEE for Windows:

CEE HTTP listen port
Registry location: [HKEY_LOCAL_MACHINE\SOFTWARE\EMC\CEE\Configuration]
Key: HttpPort
Value: 12228

Enable audit remote endpoints
Registry location: [HKEY_LOCAL_MACHINE\SOFTWARE\EMC\CEE\CEPP\Audit\Configuration]
Key: Enabled

Audit remote endpoints
Registry location: [HKEY_LOCAL_MACHINE\SOFTWARE\EMC\CEE\CEPP\Audit\Configuration]
Key: EndPoint
Value: <EndPoint>

Note
The HttpPort value must match the port in the CEE URIs that you specify during OneFS protocol audit configuration.
Auditing commands
You can audit system configuration events and SMB and NFS protocol access events on
the EMC Isilon cluster. All audit data is stored in files called audit topics, which collect log
information that you can process further with auditing tools such as Varonis
DatAdvantage for Windows.
To enable auditing of protocol access, you must set the --protocol-auditing-enabled option to yes, and you must also specify which access zones to audit by
setting the --audited-zones option.
Note
If you are integrating with a third-party auditing application, it is recommended that you
install and configure third-party auditing applications before you enable the OneFS
auditing feature. Otherwise, the backlog consumed by the tool may be so large that
results may be stale for a prolonged time.
Syntax
isi audit settings modify
[--protocol-auditing-enabled {yes | no} ]
[--audited-zones <zones> | --clear-audited-zones]
[--add-audited-zones <zones>]
[--remove-audited-zones <zones>]
[--cee-server-uris <uris> | --clear-cee-server-uris]
[--add-cee-server-uris <uris>]
[--remove-cee-server-uris <uris>]
[--hostname <string>]
[--config-auditing-enabled {yes | no}]
[--config-syslog-enabled {yes | no}]
[--verbose]
Options
--protocol-auditing-enabled {yes | no}
Enables or disables the auditing of data-access requests through the SMB and NFS
protocol.
--audited-zones <zones>
Specifies one or more access zones, separated by commas, that will be audited if
protocol auditing is enabled. This option overwrites all entries in the list of access
zones; to add or remove access zones without affecting current entries, use --add-audited-zones or --remove-audited-zones.
--clear-audited-zones
Clears the list of access zones to audit.
--add-audited-zones <zones>
Adds one or more access zones, separated by commas, to the list of zones that will
be audited if protocol auditing is enabled.
--remove-audited-zones <zones>
Removes one or more access zones, separated by commas, that will be audited if
protocol auditing is enabled.
--cee-server-uris <uris>
Specifies one or more CEE server URIs, separated by commas, where audit logs will
be forwarded if protocol auditing is enabled. The OneFS CEE export service uses
round robin load-balancing when exporting events to multiple CEE servers. This
option overwrites all entries in the list of CEE server URIs; to add or remove URIs
without affecting current entries, use --add-cee-server-uris or --remove-cee-server-uris.
--clear-cee-server-uris
Clears the list of CEE server URIs to which audit logs are forwarded.
--add-cee-server-uris <uris>
Adds one or more CEE server URIs, separated by commas, to the list of URIs where
audit logs are forwarded.
--remove-cee-server-uris <uris>
Removes one or more CEE server URIs, separated by commas, from the list of URIs
where audit logs are forwarded.
--cee-log-time <date>
Specifies a date after which the audit CEE forwarder will forward logs. To forward SMB
or NFS traffic logs, specify protocol. Specify <date> in the following format:
[protocol]@<YYYY>-<MM>-<DD> <HH>:<MM>:<SS>
--syslog-log-time <date>
Specifies a date after which the audit syslog forwarder will forward logs. To forward
SMB or NFS traffic logs, specify protocol. To forward configuration change logs,
specify config. Specify <date> in the following format:
[protocol|config]@<YYYY>-<MM>-<DD> <HH>:<MM>:<SS>
--hostname <string>
Specifies the name of the storage cluster to use when forwarding protocol events; typically, the SmartConnect zone name. When SmartConnect is not implemented, the
value must match the hostname of the cluster as Varonis recognizes it. If the field is
left blank, events from each node are filled with the node name (clustername + lnn).
This setting is required only if needed by your third-party audit application.
--config-auditing-enabled {yes | no}
Enables or disables the auditing of requests made through the API for system
configuration changes.
--config-syslog-enabled {yes | no}
Enables or disables the forwarding of system configuration changes to syslog.
{--verbose | -v}
Displays the results of running the command.
Note
OneFS collects the following protocol events for audited access zones by default:
create, close, delete, rename, and set_security. Only these events are
handled by Varonis DatAdvantage. You can specify the successful and failed events that
are audited in an access zone by running the isi zone zones modify command.
Because each audited event consumes system resources, you should only log events that
are supported by your auditing application.
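For example, the following command enables forwarding of system configuration changes to syslog:
isi audit settings modify --config-syslog-enabled=yes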
isi audit settings view
Options
There are no options for this command.
Examples
To view current audit settings, run the following command:
isi audit settings view
isi audit topics list
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
isi audit topics modify
Options
<name>
Specifies the name of the audit topic to modify. Valid values are protocol and
config.
--max-cached-messages <integer>
Specifies the maximum number of audit messages to cache before writing them to a
persistent store. The larger the number, the more efficiently audit events can be
processed. If you specify 0, each audit event is sent synchronously.
{--verbose | -v}
Displays the results of running the command.
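For example, the following command (using an arbitrary illustrative cache size) raises the message cache for the protocol audit topic:
isi audit topics modify protocol --max-cached-messages=1024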
isi audit topics view
Options
<name>
Specifies the name of the audit topic whose properties you want to view. Valid values
are protocol and config.
CHAPTER 9
File sharing
It is recommended that you do not save data to the root /ifs path, but rather in directories below /ifs. Plan the design of your data storage structure carefully; a well-designed directory structure optimizes cluster performance and administration.
You can set Windows- and UNIX-based permissions on OneFS files and directories. Users
who have the required permissions and administrative privileges can create, modify, and
read data on the cluster through one or more of the supported file sharing protocols.
SMB. Allows Microsoft Windows and Mac OS X clients to access files that are stored on the cluster.
NFS. Allows Linux and UNIX clients that adhere to the RFC1813 (NFSv3) and RFC3530 (NFSv4) specifications to access files that are stored on the cluster.
HTTP and HTTPS (with optional DAV). Allows clients to access files that are stored on the cluster through a web browser.
FTP. Allows any client that is equipped with an FTP client program to access files that are stored on the cluster through the FTP protocol.
SMB
OneFS includes a configurable SMB service to create and manage SMB shares. SMB
shares provide Windows clients network access to file system resources on the cluster.
You can grant permissions to users and groups to carry out operations such as reading,
writing, and setting access permissions on SMB shares.
The /ifs directory is configured as an SMB share and is enabled by default. OneFS
supports both user and anonymous security modes. If the user security mode is enabled,
users who connect to a share from an SMB client must provide a valid user name with
proper credentials.
The SMB protocol uses security identifiers (SIDs) for authorization data. All identities are
converted to SIDs during retrieval and are converted back to their on-disk representation
before they are stored on the cluster.
When a file or directory is created, OneFS checks the access control list (ACL) of its parent
directory. If the ACL contains any inheritable access control entries (ACEs), a new ACL is
generated from those ACEs. Otherwise, OneFS creates an ACL from the combined file and
directory create mask and create mode settings.
OneFS supports SMB clients that use SMB versions 1.0, 2.0, and 2.1. SMB 2.1 clients include Windows 7 or later and Windows Server 2008 R2 or later.
SMB shares in access zones
You can migrate multiple SMB servers, such as Windows file servers or NetApp filers, to a single Isilon cluster. You can then configure a separate access zone for each SMB server.
You can configure each access zone with a unique set of SMB share names that do
not conflict with share names in other access zones, and then join each access zone
to a different Active Directory domain.
You can reduce the number of available and accessible shares to manage by
associating an IP address pool with an access zone to restrict authentication to the
zone.
You can configure default SMB share settings that apply to all shares in an access
zone.
The Isilon cluster includes a built-in access zone named System, where you manage all
aspects of the cluster and other access zones. If you don't specify an access zone when
managing SMB shares, OneFS will default to the System zone.
SMB Multichannel
SMB Multichannel supports establishing a single SMB session over multiple network
connections.
SMB Multichannel is a feature of the SMB 3.0 protocol that provides the following
capabilities:
Increased throughput
OneFS can transmit more data to a client through multiple connections over high-speed network adapters or over multiple network adapters.
Connection failure tolerance
When an SMB Multichannel session is established over multiple network
connections, the session is not lost if one of the connections has a network fault,
which enables the client to continue to work.
Automatic discovery
SMB Multichannel automatically discovers supported hardware configurations on
the client that have multiple available network paths and then negotiates and
establishes a session over multiple network connections. You are not required to
install components, roles, role services, or features.
SMB Multichannel must be enabled on both the EMC Isilon cluster and the Windows
client computer. It is enabled on the Isilon cluster by default.
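If you need to turn the feature off on the cluster, you can use the global SMB settings command described later in this chapter, for example:
isi smb settings global modify --support-multichannel=no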
SMB Multichannel establishes a single SMB session over multiple network connections
only on supported network interface card (NIC) configurations. SMB Multichannel
requires at least one of the following NIC configurations on the client computer:
One or more network interface cards that support Receive Side Scaling (RSS).
One or more network interface cards configured with link aggregation. Link
aggregation enables you to combine the bandwidth of multiple NICs on a node into a
single logical interface.
The following client-side NIC configurations are supported:
Single RSS-capable NIC
Multiple NICs: allows SMB Multichannel to leverage the combined bandwidth of multiple NICs and provides connection fault tolerance if a connection or a NIC fails.
Note
When you connect to a zone through the MMC Shared Folders snap-in, you can view and
manage all SMB shares assigned to that zone; however, you can only view active SMB
sessions and open files on the specific node that you are connected to in that zone.
Changes you make to shares through the MMC Shared Folders snap-in are propagated
across the cluster.
You must run the Microsoft Management Console (MMC) from a Windows workstation
that is joined to the domain of an Active Directory (AD) provider configured on the
cluster.
Role-based access control (RBAC) privileges do not apply to the MMC. A role with
SMB privileges is not sufficient to gain access.
The four symbolic link evaluation types are:
local to local
local to remote
remote to local
remote to remote
To allow Windows SMB clients to traverse each type of symbolic link, the type must be
enabled on the client. The following Windows command enables all four link types:
fsutil behavior set SymlinkEvaluation L2L:1 R2R:1 L2R:1 R2L:1
In another example, the following options must be enabled in your Samba configuration
file (smb.conf) to allow Samba clients to traverse symbolic links and wide links:
follow symlinks=yes
wide links=yes
For more information, refer to the symbolic link configuration information for your specific
SMB client and version.
NFS
OneFS provides an NFS server so you can share files on your cluster with NFS clients that
adhere to the RFC1813 (NFSv3) and RFC3530 (NFSv4) specifications.
In OneFS, the NFS server is fully optimized as a multi-threaded service running in user
space instead of the kernel. This architecture load balances the NFS service across all
nodes of the cluster, providing the stability and scalability necessary to manage up to
thousands of connections across multiple NFS clients.
NFS mounts execute and refresh quickly, and the server constantly monitors fluctuating
demands on NFS services and makes adjustments across all nodes to ensure continuous,
reliable performance. Using a built-in process scheduler, OneFS ensures fair allocation of
node resources so that no client can seize more than its fair share of NFS services.
The NFS server also supports access zones defined in OneFS, so that clients can access
only the exports appropriate to their zone. For example, if NFS exports are specified for
Zone 2, only clients assigned to Zone 2 can access these exports.
To simplify client connections, especially for exports with large pathnames, the NFS
server also supports aliases, which are shortcuts to mount points that clients can specify
directly.
For secure NFS file sharing, OneFS supports NIS and LDAP authentication providers.
NFS exports
You can manage individual NFS export rules that define mount-points (paths) available to
NFS clients and how the server should perform with these clients.
In OneFS, you can create, delete, list, view, modify, and reload NFS exports.
NFS export rules are zone-aware. Each export is associated with a zone, can only be
mounted by clients on that zone, and can only expose paths below the zone root. By
default, any export command applies to the client's current zone.
Each rule must have at least one path (mount-point), and can include additional paths.
You can also specify that all subdirectories of the given path or paths are mountable.
Otherwise, only the specified paths are exported, and child directories are not
mountable.
An export rule can specify a particular set of clients, enabling you to restrict access to
certain mount-points or to apply a unique set of options to these clients. If the rule does
not specify any clients, then the rule applies to all clients that connect to the server. If the
rule does specify clients, then that rule is applied only to those clients.
NFS aliases
You can create and manage aliases as shortcuts for directory pathnames in OneFS. If
those pathnames are defined as NFS exports, NFS clients can specify the aliases as NFS
mount points.
NFS aliases are designed to give functional parity with SMB share names within the
context of NFS. Each alias maps a unique name to a path on the file system. NFS clients
can then use the alias name in place of the path when mounting.
Aliases must be formed as top-level Unix pathnames, having a single forward slash
followed by name. For example, you could create an alias named /q4 that mapped
to /ifs/data/finance/accounting/winter2013 (a path in OneFS). An NFS
client could mount that directory through either of:
mount cluster_ip:/q4
mount cluster_ip:/ifs/data/finance/accounting/winter2013
Aliases and exports are completely independent. You can create an alias without
associating it with an NFS export. Similarly, an NFS export does not require an alias.
Each alias must point to a valid path on the file system. While this path is absolute, it
must point to a location beneath the zone root (/ifs on the System zone). If the alias
points to a path that does not exist on the file system, any client trying to mount the alias
would be denied in the same way as attempting to mount an invalid full pathname.
NFS aliases are zone-aware. By default, an alias applies to the client's current access
zone. To change this, you can specify an alternative access zone as part of creating or
modifying an alias.
Each alias can only be used by clients on that zone, and can only apply to paths below
the zone root. Alias names are unique per-zone, but the same name can be used in
different zones, for example, /home.
When you create an alias in the Web Administration interface, the alias list displays the
status of the alias. Similarly, using the --check option of the isi nfs aliases
command, you can check the status of an NFS alias. Status can be one of good, illegal
path, name conflict, not exported, or path not found.
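For example, the following command checks the status of all aliases in the current access zone:
isi nfs aliases list --check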
NFS log files include nfs.log, rpc_lockd.log, rpc_statd.log, and isi_netgroup_d.log.
FTP
OneFS includes a secure FTP service called vsftpd, which stands for Very Secure FTP
Daemon, that you can configure for standard FTP and FTPS file transfers.
It is recommended that you configure ACL and UNIX permissions only if you fully
understand how they interact with one another.
Note
It is recommended that you keep write caching enabled. You should also enable write
caching for all file pool policies.
OneFS interprets writes to the cluster as either synchronous or asynchronous, depending
on a client's specifications. The impacts and risks of write caching depend on what
protocols clients use to write to the cluster, and whether the writes are interpreted as
synchronous or asynchronous. If you disable write caching, client specifications are
ignored and all writes are performed synchronously.
The impact of a node failure on asynchronous (uncommitted) writes depends on the protocol:
NFS: If a node fails, no data will be lost except in the unlikely event that a client of that node also crashes before it can reconnect to the cluster. In that situation, asynchronous writes that have not been committed to disk will be lost.
SMB: If a node fails, asynchronous writes that have not been committed to disk will be lost.
It is recommended that you do not disable write caching, regardless of the protocol that
you are writing with. If you are writing to the cluster with asynchronous writes, and you
decide that the risks of data loss are too great, it is recommended that you configure your
clients to use synchronous writes, rather than disable write caching.
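On a Linux NFS client, for example, synchronous writes can be requested at mount time (a sketch that assumes a hypothetical export path and mount point):
mount -o sync cluster_ip:/ifs/data/projects /mnt/projects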
Modifying global SMB file sharing settings could result in operational failures. Be aware
of the potential consequences before modifying these settings.
Procedure
1. Run the isi smb settings global modify command.
The following example command specifies that read-only files cannot be deleted from SMB shares:
isi smb settings global modify --allow-delete-readonly=no
You can determine whether the service is enabled or disabled by running the isi
services -l command.
Procedure
1. Run the isi services command.
The following command disables the SMB service:
isi services smb disable
The following command disables SMB Multichannel on the EMC Isilon cluster:
isi smb settings global modify --support-multichannel=no
If you modify the default settings, the changes are applied to all existing shares in the
access zone.
Procedure
1. Run the isi smb settings shares modify command.
The following command specifies that guests are never allowed access to shares in
zone5:
isi smb settings shares modify --zone=zone5 --impersonate-guest=never
Note
It is recommended that you configure advanced SMB share settings only if you have a
solid understanding of the SMB protocol.
Share names can contain up to 80 characters, and can only contain alphanumeric
characters, hyphens, and spaces. Also, if the cluster character encoding is not set to
UTF-8, SMB share names are case-sensitive.
The following command creates a directory at /ifs/data/share2, converts it to an
SMB share, and adds the share to the default System zone because no zone is
specified:
isi smb shares create share2 --path=/ifs/data/share2 \
--create-path --browsable=true --description="Example Share 2"
Note
If no default ACL is configured and the parent directory does not have an inheritable
ACL, an ACL is created for the share with the directory-create-mask and
directory-create-mode settings.
The following command creates the directory /ifs/data/share4 and converts it to
a non-browsable SMB share. The command also configures the use of mode bits for
permissions control:
isi smb shares create --name=share4 --path=/ifs/data/share4 \
--create-path --browsable=false --description="Example Share 4" \
--inheritable-path-acl=true --create-permissions="use create mask and mode"
2. The following command creates home directories for each user that connects to the
share, based on the user's NetBIOS domain and user name.
In this example, if a user is in a domain named DOMAIN and has a username of
user_1, the path /ifs/home/%D/%U expands to /ifs/home/DOMAIN/user_1.
isi smb shares modify HOMEDIR --path=/ifs/home/%D/%U \
--allow-variable-expansion=yes --auto-create-directory=yes
The following command creates a share named HOMEDIR with the existing
path /ifs/share/home:
isi smb shares create HOMEDIR /ifs/share/home
3. Run the isi smb shares permission modify command to enable access to
the share.
The following command allows the well-known user Everyone full permissions to the
HOMEDIR share:
isi smb shares permission modify HOMEDIR --wellknown Everyone \
--permission-type allow --permission full
Note
If the cluster character encoding is not set to UTF-8, SMB share names are case-sensitive.
2. Optional: Verify the change by running the following command to list permissions on
the share:
isi smb shares permission list ifs
The following command enables the guest user in the access zone named zone3:
isi auth users modify Guest --enabled=yes --zone=zone3
2. Set guest impersonation on the share you want to allow anonymous access to by
running the isi smb share modify command.
The following command configures guest impersonation on a share named share1 in
zone3:
isi smb share modify share1 --zone=zone3 \
--impersonate-guest=always
3. Verify that the Guest user account has permission to access the share by running the
isi smb share permission list command.
The following command lists the permissions for share1 in zone3:
isi smb share permission list share1 --zone=zone3
2. Set guest impersonation as the default value for all shares in the access zone by
running the isi smb settings share modify command.
The following command configures guest impersonation for all shares in zone3:
isi smb settings share modify --zone=zone3 \
--impersonate-guest=always
3. Verify that the Guest user account has permission to each share in the access zone by
running the isi smb share permission list command.
The following command lists the permissions for share1 in zone3:
isi smb share permission list share1 --zone=zone3
When you create an SMB share through the web administration interface, you must select
the Allow Variable Expansion check box or the string is interpreted literally by the system.
Variable  Expanded value
%U        User name
%D        NetBIOS domain name
%Z        Access zone name
%L        Host name of the cluster, in lowercase
%0        First character of the user name
%1        Second character of the user name
%2        Third character of the user name
Note
If the user name includes fewer than three characters, the %0, %1, and %2 variables wrap around. For example, for a user named ab, the variables map to a, b, and a, respectively. For a user named a, all three variables map to a.
You can determine whether NFS services are enabled or disabled by running the isi
nfs settings global view command.
Procedure
1. Run the isi nfs settings global modify command.
The following command disables the NFSv3 service:
isi nfs settings global modify --nfsv3-enabled no
It is recommended that you modify the default export to limit access only to trusted
clients, or to restrict access completely. To ensure that sensitive data is not
compromised, other exports that you create should be lower in the OneFS file hierarchy,
and can be protected by access zones or limited to specific clients with either root, read-write, or read-only access, as appropriate.
It is recommended that you not modify default export settings unless you are sure of the
result.
Procedure
1. Run the isi nfs settings export modify command.
The following command specifies a maximum export file size of one terabyte:
isi nfs settings export modify --max-file-size 1099511627776
You could also restore the maximum export file size to the system default by using the
following command:
isi nfs settings export modify --revert-max-file-size
2. Confirm the following default values for these settings, which show that root is
mapped to nobody, thereby restricting root access:
Map Root
    Enabled: True
    User: Nobody
    Primary Group: -
    Secondary Groups: -
3. If the root-squashing rule, for some reason, is not in effect, you can implement it for
the default NFS export by running the isi nfs exports modify command, as follows:
isi nfs exports modify 1 --map-root-enabled true --map-root nobody
Results
With these settings, regardless of the users' credentials on the NFS client, they would not
be able to gain root privileges on the NFS server.
The following command creates an export supporting client access to multiple paths
and their subdirectories:
isi nfs exports create /ifs/data/projects,/ifs/home --all-dirs=yes
2. Optional: To view the export ID, which is required for modifying or deleting the export,
run the isi nfs exports list command.
In the following example output, export 1 contains a directory path that does not
currently exist:
ID   Message
-----------------------------------
1    '/ifs/test' does not exist
-----------------------------------
Total: 1
This command would override the export's access-restriction setting if there was a
conflict. For example, if the export was created with read-write access disabled, the
client, 10.1.249.137, would still have read-write permissions on the export.
2. If you did not specify the --force option, type yes at the confirmation prompt.
When you create an NFS alias, OneFS performs a health check. If, for example, the full
path that you specify is not a valid path, OneFS issues a warning:
Warning: health check on alias '/home' returned 'path not found'
Nonetheless, the alias is created, and you can create the directory that the alias
points to at a later time.
When you modify an NFS alias, OneFS performs a health check. If, for example, the
path to the alias is not valid, OneFS issues a warning:
Warning: health check on alias '/home' returned 'not exported'
Nonetheless, the alias is modified, and you can create the export at a later time.
When you delete an NFS alias, OneFS asks you to confirm the operation:
Are you sure you want to delete NFS alias /home? (yes/[no])
This command lists aliases that have been created in an access zone named hq-home:
isi nfs aliases list --zone hq-home
You can determine whether the service is enabled or disabled by running the isi
services -l command.
Procedure
1. Run the following command:
isi services vsftpd enable
Enable and configure HTTP
Procedure
1. Click Protocols > HTTP Settings.
2. From the Service options, select one of the following settings:
Enable HTTP
Disable HTTP entirely: Closes the HTTP port used for file access. Users can continue to access the web administration interface by specifying the port number in the URL. The default port is 8080.
3. In the Document root directory field, type or click Browse to navigate to an existing
directory in /ifs, or click File System Explorer to create a new directory and set its
permissions.
Note
The HTTP server runs as the daemon user and group. To properly enforce access
controls, you must grant the daemon user or group read access to all files under the
document root, and allow the HTTP server to traverse the document root.
4. In the Server hostname field, type the HTTP server name. The server hostname must
be a fully-qualified, SmartConnect zone name and valid DNS name. The name must
begin with a letter and contain only letters, numbers, and hyphens (-).
5. In the Administrator email address field, type an email address to display as the
primary contact for issues that occur while serving files.
6. From the Active Directory Authentication list, select an authentication setting:
Off
Integrated Authentication Only
7. Click the Enable DAV check box. This allows multiple users to manage and modify files
collaboratively across remote web servers.
SMB commands
You can access and configure the SMB file sharing service through the SMB commands.
Options
{--set | -s} <string>
Specifies the level of data logged for SMB shares on the node. Specify one of the
following valid options:
always
error
warning
info
verbose
debug
trace
Options
{--level | -l} <string>
Sets the level of information to be logged. The following values are valid:
always
error
warning
info
verbose
debug
trace
read
write
session-setup
logoff
flush
notify
tree-connect
tree-disconnect
create
delete
oplock
locking
set-info
query
close
create-directory
delete-directory
Options
{--id | -i} <string>
Specifies the ID of the log-level filter to be deleted from the node.
Options
There are no options for this command.
To view a list of open files, run the isi smb openfiles list command.
Syntax
isi smb openfiles close <id>
[--force]
Options
<id>
Specifies the ID of the open file to close.
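Examples
The following example (which assumes an open-file ID of 100 obtained from isi smb openfiles list) forcibly closes the file:
isi smb openfiles close 100 --force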
Options
{--limit | -l} <integer>
Displays no more than the specified number of smb openfiles.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Any open files are automatically closed before an SMB session is deleted.
Syntax
isi smb sessions delete <computer-name>
[{--user <name> | --uid <id> | --sid <sid>}]
[--force]
[--verbose]
Options
<computer-name>
Required. Specifies the computer name. If a --user, --uid, or --sid option is not
specified, the system deletes all SMB sessions associated with this computer.
--user <string>
Specifies the name of the user. Deletes only those SMB sessions to the computer that
are associated with the specified user.
--uid <id>
Specifies a numeric user identifier. Deletes only those SMB sessions to the computer
that are associated with the specified user identifier.
--sid <sid>
Specifies a security identifier. Deletes only those SMB sessions to the computer that
are associated with the security identifier.
{--force | -f}
Specifies that the command execute without prompting for confirmation.
Examples
The following command deletes all SMB sessions associated with a computer named
computer1:
isi smb sessions delete computer1
The following command deletes all SMB sessions associated with a computer named
computer1 and a user named user1:
isi smb sessions delete computer1 --user=user1
Any open files are automatically closed before an SMB session is deleted.
Syntax
isi smb sessions delete-user {<user> | --uid <id> | --sid <sid> }
[--computer-name <string>]
[--force]
[--verbose]
Options
<user>
Required. Specifies the user name. If the --computer-name option is omitted, the
system deletes all SMB sessions associated with this user.
{--computer-name | -C} <string>
Deletes only the user's SMB sessions that are associated with the specified
computer.
{--force | -f}
Suppresses command-line prompts and messages.
{--verbose | -v}
Displays more detailed information.
Examples
The following command deletes all SMB sessions associated with a user called user1:
isi smb sessions delete-user user1
The following command deletes all SMB sessions associated with a user called user1 and
a computer called computer1:
isi smb sessions delete-user user1 \
--computer-name=computer1
Options
{--limit | -l} <integer>
Specifies the maximum number of SMB sessions to list.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
--access-based-share-enum {yes | no}
Enumerates only the files and folders that the requesting user has access to.
--revert-access-based-share-enum
Sets the value to the system default for --access-based-share-enum.
--dot-snap-accessible-child {yes | no}
Specifies whether to make the /ifs/.snapshot directory visible in subdirectories
of the share root. The default setting is no.
--revert-dot-snap-accessible-child
Sets the value to the system default for --dot-snap-accessible-child.
--dot-snap-accessible-root {yes | no}
Specifies whether to make the /ifs/.snapshot directory accessible at the share
root. The default setting is yes.
--revert-dot-snap-accessible-root
Sets the value to the system default for --dot-snap-accessible-root.
--dot-snap-visible-child {yes | no}
Specifies whether to make the /ifs/.snapshot directory visible in subdirectories
of the share root. The default setting is no.
--revert-dot-snap-visible-child
Sets the value to the system default for --dot-snap-visible-child.
--dot-snap-visible-root {yes | no}
Specifies whether to make the /ifs/.snapshot directory visible at the root of the
share. The default setting is no.
--revert-dot-snap-visible-root
Sets the value to the system default for --dot-snap-visible-root.
--enable-security-signatures {yes | no}
Indicates whether the server supports signed SMB packets.
--revert-enable-security-signatures
Sets the value to the system default for --enable-security-signatures.
--guest-user <integer>
Specifies the fully qualified user to use for guest access.
--revert-guest-user
Sets the value to the system default for --guest-user.
--ignore-eas {yes | no}
Specifies whether to ignore EAs on files.
--revert-ignore-eas
Sets the value to the system default for --ignore-eas.
--onefs-cpu-multiplier <integer>
Specifies the number of OneFS worker threads to configure based on the number of
CPUs. Valid numbers are 1-4.
--revert-onefs-cpu-multiplier
Sets the value to the system default for --onefs-cpu-multiplier.
--onefs-num-workers <integer>
Specifies the number of OneFS worker threads that are allowed to be configured.
Valid numbers are 0-1024. If set to 0, the number of OneFS workers will equal the value
specified by --onefs-cpu-multiplier times the number of CPUs.
--revert-onefs-num-workers
Sets the value to the system default for --onefs-num-workers.
--require-security-signatures {yes | no}
Specifies whether packet signing is required. If set to yes, signing is always required.
If set to no, signing is not required but clients requesting signing will be allowed to
connect if the --enable-security-signatures option is set to yes.
--revert-require-security-signatures
Sets the value to the system default for --require-security-signatures.
--server-string <string>
Provides a description of the server.
--revert-server-string
Sets the value to the system default for --server-string.
--srv-cpu-multiplier <integer>
Specifies the number of SRV worker threads to configure per CPU. Valid numbers are
1-8.
--revert-srv-cpu-multiplier
Sets the value to the system default for --srv-cpu-multiplier.
--srv-num-workers <integer>
Specifies the number of SRV worker threads that are allowed to be configured.
Valid numbers are 0-1024. If set to 0, the number of SRV workers will equal the value
specified by --srv-cpu-multiplier times the number of CPUs.
--revert-srv-num-workers
Sets the value to the system default for --srv-num-workers.
--support-multichannel {yes | no}
Specifies whether Multichannel for SMB 3.0 is enabled on the cluster. SMB
Multichannel is enabled by default.
--revert-support-multichannel
Set the value of --support-multichannel back to the default system value.
--support-netbios {yes | no}
Specifies whether to support the NetBIOS protocol.
--revert-support-netbios
Sets the value to the system default for --support-netbios.
--support-smb2 {yes | no}
Specifies whether to support the SMB 2.0 protocol. The default setting is yes.
--revert-support-smb2
Sets the value to the system default for --support-smb2.
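For example, the following command (a sketch combining two of the options described above, with a hypothetical server description) enables access-based share enumeration and sets the server description string:
isi smb settings global modify --access-based-share-enum=yes \
--server-string="Example Isilon cluster"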
Options
There are no options for this command.
Options
--access-based-enumeration {yes | no}
Specifies whether access-based enumeration is enabled.
--revert-access-based-enumeration
Sets the value to the system default for --access-based-enumeration.
--access-based-enumeration-root-only {yes | no}
Specifies whether access-based enumeration is enabled only on the root directory of the SMB share.
--hide-dot-files {yes | no}
Specifies whether to hide files that begin with a period, for example, UNIX configuration files.
--revert-hide-dot-files
Sets the value to the system default for --hide-dot-files.
--host-acl <string>
Specifies which hosts are allowed access. Specify --host-acl for each additional
host ACL clause. This will replace any existing ACL.
--revert-host-acl
Sets the value to the system default for --host-acl.
--clear-host-acl <string>
Clears the value for an ACL expressing which hosts are allowed access.
--add-host-acl <string>
Adds an ACE to the already-existing host ACL. Specify --add-host-acl for each
additional host ACL clause to be added.
--remove-host-acl <string>
Removes an ACE from the already-existing host ACL. Specify --remove-host-acl
for each additional host ACL clause to be removed.
--impersonate-guest {always | "bad user" | never}
Allows guest access to the share. The acceptable values are always, "bad user",
and never.
--revert-impersonate-guest
Sets the value to the system default for --impersonate-guest.
--impersonate-user <string>
Allows all file access to be performed as a specific user. This must be a fully qualified
user name.
--revert-impersonate-user
Sets the value to the system default for --impersonate-user.
--mangle-byte-start <string>
Specifies the wchar_t starting point for automatic invalid byte mangling.
--revert-mangle-byte-start
Sets the value to the system default for --mangle-byte-start.
--mangle-map <string>
Maps characters that are valid in OneFS but are not valid in SMB names.
--revert-mangle-map
Sets the value to the system default for --mangle-map.
--clear-mangle-map <string>
Clears the values for character mangle map.
--add-mangle-map <string>
Adds a character mangle map. Specify --add-mangle-map for each additional Add
character mangle map.
--remove-mangle-map <string>
Removes a character mangle map. Specify --remove-mangle-map for each additional character mangle map to remove.
Options
--zone <string>
Specifies the name of the access zone. Displays only the settings for shares in the
specified zone.
Options
<name>
Required. Specifies the name for the new SMB share.
<path>
Required. Specifies the full path of the SMB share to create, beginning at /ifs.
--zone <string>
Specifies the access zone the new SMB share is assigned to. If no access zone is
specified, the new SMB share is assigned to the default System zone.
{--inheritable-path-acl | -i} {yes | no}
If set to yes, if the parent directory has an inheritable access control list (ACL), its
ACL will be inherited on the share path. The default setting is no.
--create-path
Creates the SMB-share path if one doesn't exist.
--host-acl <string>
Specifies the ACL that defines host access. Specify --host-acl for each additional
host ACL clause.
--description <string>
Specifies a description for the SMB share.
--csc-policy {none | documents | manual | programs}, -C {none |
documents | manual | programs}
Sets the client-side caching policy for the share. Valid values are none, documents,
manual, and programs.
--allow-variable-expansion {yes | no}
Specifies automatic expansion of variables for home directories.
--auto-create-directory {yes | no}
Creates home directories automatically.
--browsable {yes | no}, -b {yes | no}
If set to yes, makes the share visible in net view and the browse list. The default
setting is yes.
--allow-execute-always {yes | no}
If set to yes, allows a user with read access to a file to also execute the file. The
default setting is no.
--directory-create-mask <integer>
Defines which mask bits are applied when a directory is created.
--strict-locking {yes | no}
If set to yes, directs the server to check for and enforce file locks. The default setting
is no.
--hide-dot-files {yes | no}
If set to yes, hides files that begin with a period, for example, UNIX configuration files. The default setting is no.
--impersonate-guest {always | "bad user" | never}
Allows guest access to the share. The acceptable values are always, "bad user",
and never.
--strict-flush {yes | no}
If set to yes, flush requests are always honored. The default setting is yes.
--access-based-enumeration {yes | no}
If set to yes, enables access-based enumeration only on the files and folders that
the requesting user can access. The default setting is no.
--access-based-enumeration-root-only {yes | no}
If set to yes, enables access-based enumeration only on the root directory of the
SMB share. The default setting is no.
--mangle-byte-start <string>
Specifies the wchar_t starting point for automatic invalid byte mangling.
--file-create-mask <integer>
Defines which mask bits are applied when a file is created.
--create-permissions {"default acl" | "inherit mode bits" |
"use create mask and mode"}
Sets the default permissions to apply when a file or directory is created. Valid values
are "default acl", "inherit mode bits", and "use create mask and
mode"
--mangle-map <string>
Maps characters that are valid in OneFS but are not valid in SMB names.
--impersonate-user <string>
Allows all file access to be performed as a specific user. This value must be a fully
qualified user name.
--change-notify {norecurse | all | none}
Defines the change notify setting. The acceptable values are norecurse, all, or
none.
--oplocks {yes | no}
Supports oplocks.
The following sample command specifies that the subnet 10.7.216.0/24 is allowed access to the share, but all other subnets are denied access:
--host-acl=allow:10.7.216.0/24 --host-acl=deny:ALL
Options
<share>
Options
--zone <string>
Specifies the access zone. Displays all SMB shares in the specified zone. If no access
zone is specified, the system displays all SMB shares in the default System zone.
{--limit | -l} <integer>
Specifies the maximum number of items to list.
--sort {name | path | description}
Specifies the field to sort items by.
{--descending | -d}
Sorts the data in descending order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
--verbose | -v
Displays more detailed information.
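For example, the following command (which assumes a hypothetical access zone named zone5) lists the SMB shares in that zone in JSON format:
isi smb shares list --zone=zone5 --format=json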
[--description <string>]
[--csc-policy {manual | documents | programs | none}]
[--revert-csc-policy]
[--allow-variable-expansion {yes | no}]
[--revert-allow-variable-expansion]
[--auto-create-directory {yes | no}]
[--revert-auto-create-directory {yes | no}]
[--browsable {yes | no}]
[--revert-browsable]
[--allow-execute-always {yes | no}]
[--revert-allow-execute-always]
[--directory-create-mask <integer>]
[--revert-directory-create-mask]
[--strict-locking {yes | no}]
[--revert-strict-locking]
[--hide-dot-files {yes | no}]
[--revert-hide-dot-files]
[--impersonate-guest {always | "bad user" | never}]
[--revert-impersonate-guest]
[--strict-flush {yes | no}]
[--revert-strict-flush]
[--access-based-enumeration {yes | no}]
[--revert-access-based-enumeration]
[--access-based-enumeration-root-only {yes | no}]
[--revert-access-based-enumeration-root-only]
[--mangle-byte-start <integer>]
[--revert-mangle-byte-start]
[--file-create-mask <integer>]
[--revert-file-create-mask]
[--create-permissions {"default acl" | "inherit mode bits"
| "use create mask and mode"}]
[--revert-create-permissions]
[--mangle-map <mangle-map>]
[--revert-mangle-map]
[--clear-mangle-map]
[--add-mangle-map <string>]
[--remove-mangle-map <string>]
[--impersonate-user <string>]
[--revert-impersonate-user]
[--change-notify {all | norecurse | none}]
[--revert-change-notify]
[--oplocks {yes | no}]
[--revert-oplocks]
[--allow-delete-readonly {yes | no}]
[--revert-allow-delete-readonly]
[--directory-create-mode <integer>]
[--revert-directory-create-mode]
[--ntfs-acl-support {yes | no}]
[--revert-ntfs-acl-support]
[--file-create-mode <integer>]
[--revert-file-create-mode]
[--verbose]
Options
<share>
Specifies the name of the SMB share to modify.
--zone <string>
Specifies the access zone that the SMB share is assigned to. If no access zone is
specified, the system modifies the SMB share with the specified name assigned to
the default System zone, if found.
--new-zone <string>
Specifies the new access zone that SMB share will be reassigned to.
--host-acl <host-acl>
An ACL expressing which hosts are allowed access. Specify --host-acl for each
additional host ACL clause.
--revert-host-acl
Sets the value to the system default for --host-acl.
--clear-host-acl
Clears the value of an ACL that expresses which hosts are allowed access.
--add-host-acl <string>
Adds an ACL expressing which hosts are allowed access. Specify --add-host-acl
for each additional host ACL clause to add.
--remove-host-acl <string>
Removes an ACL expressing which hosts are allowed access. Specify --remove-host-acl for each additional host ACL clause to remove.
--description <string>
The description for this SMB share.
--csc-policy, -C {manual | documents | programs | none}
Specifies the client-side caching policy for the shares.
--revert-csc-policy
Sets the value to the system default for --csc-policy.
{--allow-variable-expansion | -a} {yes | no}
Allows the automatic expansion of variables for home directories.
--revert-allow-variable-expansion
Sets the value to the system default for --allow-variable-expansion.
{--auto-create-directory | -d} {yes | no}
Automatically creates home directories.
--revert-auto-create-directory
Sets the value to the system default for --auto-create-directory.
{--browsable | -b} {yes | no}
The share is visible in the net view and the browse list.
--revert-browsable
Sets the value to the system default for --browsable.
--allow-execute-always {yes | no}
Allows users to execute files they have read rights for.
--revert-allow-execute-always
Sets the value to the system default for --allow-execute-always.
--directory-create-mask <integer>
Defines which mask bits are applied when a directory is created.
--mangle-map <mangle-map>
The character mangle map. Specify --mangle-map for each additional character
mangle map.
--revert-mangle-map
Sets the value to the system default for --mangle-map.
--clear-mangle-map
Clears the value for character mangle map.
--add-mangle-map <string>
Adds a character mangle map. Specify --add-mangle-map for each additional Add
character mangle map.
--remove-mangle-map <string>
Removes a character mangle map. Specify --remove-mangle-map for each
additional Remove character mangle map.
--impersonate-user <string>
Allows all file access to be performed as a specific user. This must be a fully qualified user name.
--revert-impersonate-user
Sets the value to the system default for --impersonate-user.
--change-notify {all | norecurse | none}
Specifies the level of change notification alerts on a share.
--revert-change-notify
Sets the value to the system default for --change-notify.
--oplocks {yes | no}
Supports oplocks.
--revert-oplocks
Sets the value for the system default of --oplocks.
--allow-delete-readonly {yes | no}
Allows the deletion of read-only files in the share.
--revert-allow-delete-readonly
Sets the value for the system default of --allow-delete-readonly.
--directory-create-mode <integer>
Specifies the directory create mode bits.
--revert-directory-create-mode
Sets the value for the system default of --directory-create-mode.
--ntfs-acl-support {yes | no}
Supports NTFS ACLs on files and directories.
--revert-ntfs-acl-support
Sets the value for the system default of --ntfs-acl-support.
--file-create-mode <integer>
Specifies the file create mode bits.
--revert-file-create-mode
Sets the value for the system default of --file-create-mode.
Options
<share>
[--zone <string>]
[--force]
[--verbose]
Options
<share>
Options
<share>
Options
<share>
Options
<share>
Specifies the name of the SMB share.
<user>
Specifies a user name.
--group <name>
Specifies a group name.
--gid <integer>
Specifies a numeric group identifier.
--uid <integer>
Specifies a numeric user identifier.
--sid <string>
Specifies a security identifier.
--wellknown <string>
Specifies a well-known user, group, machine, or account name.
--zone <string>
Specifies an access zone.
Options
<share>
NFS commands
You can access and configure the NFS file sharing service through the NFS commands.
Options
None
With no optional parameter specified, displays the currently set log level.
--set {always | error | warning | info | verbose | debug | trace}
Specifies the desired NFS log level for this node, as described in the following table.
Log level  Description
always     Specifies that all NFS events are logged in NFS log files.
error      Specifies that only NFS error conditions are logged in NFS log files.
warning    Specifies that only NFS warning conditions are logged in NFS log files.
info       Specifies that only NFS information conditions are logged in NFS log files.
verbose
debug
trace      Adds tracing information that EMC Isilon can use to pinpoint issues.
--reset
Restores the log level to the system default.
Examples
The following command sets the NFS log level to error only:
isi nfs log-level --set error
The following command resets the NFS log level to the system default:
isi nfs log-level --reset
Options
<name>
The name of the alias. Alias names must be formed like a top-level Unix pathname: a single forward slash followed by the name. For example, /home.
<path>
The OneFS directory pathname the alias links to. The pathname must be an absolute
path below the access zone root. For example, /ifs/data/ugroup1/home.
--zone
The access zone in which the alias is active.
--force
Forces creation of the alias without requiring confirmation.
Example
The following command creates an alias in a zone named ugroup1:
isi nfs aliases create /home /ifs/data/ugroup1/home
--zone ugroup1
Options
<name>
The name of the alias to be deleted.
--zone <string>
The access zone in which the alias is active.
--force
Forces the alias to be deleted without having to confirm the operation.
Example
The following command deletes an alias from a zone named ugroup1.
isi nfs aliases delete /projects --zone ugroup1
If you do not use the --force option in the command, OneFS asks you to confirm the
deletion. Type yes to confirm or no to decline the operation, then press ENTER.
Options
--check
For the current zone, displays a list of aliases and their health status.
--zone <string>
The access zone in which the alias is active.
--limit <integer>
Displays no more than the specified number of NFS aliases.
--sort {zone | name | path | health}
Specifies the field to sort by.
--descending
Specifies to sort the data in descending order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), commaseparated value (CSV), or list format.
--no-header
Displays table and CSV output without headers.
--no-footer
Displays table output without footers.
Example
The following command displays a table of the aliases in a zone named ugroup1
including their health status.
isi nfs aliases list --zone ugroup1 --check
Options
<alias>
The current name of the alias, for example, /home.
--zone <string>
The access zone in which the alias is currently active.
--new-zone <string>
The new access zone in which the alias is to be active.
--name <string>
A new name for the alias.
--path <string>
The new OneFS directory pathname the alias should link to. The pathname must be
an absolute path below the access zone root. For example, /ifs/data/ugroup2/
home.
Example
The following command modifies the zone, name, and path of an existing alias:
isi nfs aliases modify /home --name /users --zone ugroup1 --new-zone
ugroup2
--path /ifs/data/ugroup2/users
Options
<name>
The name of the alias.
--zone <string>
The access zone in which the alias is active.
--check
Include the health status of the alias.
Example
The following command displays a table of information, including the health status, of an
alias named /projects in the current zone.
isi nfs aliases view /projects --check
Options
--limit <integer>
Displays no more than the specified number of NFS exports.
--zone <string>
Specifies the access zone in which the export was created.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
--no-header
Displays table and CSV output without headers.
--no-footer
Displays table output without footers.
--verbose
Displays more detailed information.
Examples
The following command checks the exports in a zone named Zone-1:
isi nfs exports check --zone Zone-1
If the check finds no problems, it returns an empty table. If, however, the check finds a
problem, it returns a display similar to the following:
ID   Message
---------------------------------------
3    '/ifs/data/project' does not exist
---------------------------------------
Total: 1
To view the default NFS export settings that will be applied when creating an export, run
the isi nfs settings export view command.
Syntax
isi nfs exports create <paths>
[--block-size <size>]
[--can-set-time {yes | no}]
[--case-insensitive {yes | no}]
[--case-preserving {yes | no}]
[--chown-restricted {yes | no}]
[--directory-transfer-size <size>]
[--link-max <integer>]
[--max-file-size <size>]
[--name-max-size <integer>]
[--no-truncate {yes | no}]
[--return-32bit-file-ids {yes | no}]
[--symlinks {yes | no}]
[--zone <string>]
[--clients <string>]
[--description <string>]
[--root-clients <string>]
[--read-write-clients <string>]
[--read-only-clients <string>]
[--all-dirs {yes | no}]
[--encoding <string>]
[--security-flavors {unix | krb5 | krb5i | krb5p}]
[--snapshot <snapshot>]
[--map-lookup-uid {yes | no}]
[--map-retry {yes | no}]
[--map-root-enabled {yes | no}]
[--map-non-root-enabled {yes | no}]
[--map-failure-enabled {yes | no}]
[--map-all <identity>]
[--map-root <identity>]
[--map-non-root <identity>]
[--map-failure <identity>]
[--map-full {yes | no}]
[--commit-asynchronous {yes | no}]
[--read-only {yes | no}]
[--readdirplus {yes | no}]
[--read-transfer-max-size <size>]
[--read-transfer-multiple <integer>]
[--read-transfer-size <size>]
[--setattr-asynchronous {yes | no}]
[--time-delta <time delta>]
[--write-datasync-action {datasync | filesync |unstable}]
[--write-datasync-reply {datasync | filesync}]
[--write-filesync-action {datasync | filesync |unstable}]
[--write-filesync-reply filesync]
[--write-unstable-action {datasync | filesync |unstable}]
[--write-unstable-reply {datasync | filesync |unstable}]
[--write-transfer-max-size <size>]
[--write-transfer-multiple <integer>]
[--write-transfer-size <size>]
[--force]
[--verbose]
Options
--paths <paths> ...
Required. Specifies the path to be exported, starting at /ifs. This option can be
repeated to specify multiple paths.
--block-size <size>
Specifies the block size, in bytes.
--can-set-time {yes | no}
If set to yes, enables the export to set time. The default setting is no.
--case-insensitive {yes | no}
If set to yes, the server will report that it ignores case for file names. The default
setting is no.
--case-preserving {yes | no}
If set to yes, the server will report that it always preserves case for file names. The
default setting is no.
--chown-restricted {yes | no}
If set to yes, the server will report that only the superuser can change file ownership.
The default setting is no.
--directory-transfer-size <size>
Specifies the preferred directory transfer size. Valid values are a number followed by
a case-sensitive unit of measure: b for bytes; K for KB; M for MB; or G for GB. If no unit
is specified, bytes are used by default. The maximum value is 4294967295b. The
initial default value is 128K.
--link-max <integer>
The reported maximum number of links to a file.
--max-file-size <size>
Specifies the maximum allowed file size on the server (in bytes). If a file is larger than
the specified value, an error is returned.
--name-max-size <integer>
The reported maximum length of characters in a filename.
--no-truncate {yes | no}
If set to yes, too-long file names will result in an error rather than be truncated.
--return-32bit-file-ids {yes | no}
Applies to NFSv3 and NFSv4. If set to yes, limits the size of file identifiers returned
from readdir to 32-bit values. The default value is no.
Note
This setting is provided for backward compatibility with older NFS clients, and should
not be enabled unless necessary.
--symlinks {yes | no}
If set to yes, advertises support for symlinks. The default setting is no.
--zone <string>
Access zone in which the export should apply. The default zone is system.
--clients <string>
Specifies a client to be allowed access through this export. Specify clients as an IP,
hostname, netgroup, or CIDR range. You can add multiple clients by repeating this
option.
Note
This option replaces the entire list of clients. To add or remove a client from the list,
specify --add-clients or --remove-clients.
--description <string>
The description for this NFS export.
--root-clients <string>
Allows the root user of the specified client to execute operations as the root user of
the cluster. This option overrides the --map-all and --map-root option for the
specified client.
Specify clients as an IP, hostname, netgroup, or CIDR range. You can specify multiple
clients in a comma-separated list.
--read-write-clients <string>
Grants read/write privileges to the specified client for this export. This option
overrides the --read-only option for the specified client.
Specify clients as an IP, hostname, netgroup, or CIDR range. You can specify multiple
clients in a comma-separated list.
--read-only-clients <string>
Makes the specified client read-only for this export. This option overrides the --read-only option for the specified client.
Specify clients as an IP, hostname, netgroup, or CIDR range. You can specify multiple
clients in a comma-separated list.
--all-dirs {yes | no}
If set to yes, this export will cover all directories. The default setting is no.
--encoding <string>
Specifies the character encoding of clients connecting through this NFS export.
Valid values and their corresponding character encodings are provided in the
following table. These values are taken from the node's /etc/encodings.xml file,
and are not case sensitive.
Value       Encoding
cp932       Windows-SJIS
cp949       Windows-949
cp1252      Windows-1252
euc-kr      EUC-KR
euc-jp      EUC-JP
euc-jp-ms   EUC-JP-MS
utf-8-mac   UTF-8-MAC
utf-8       UTF-8
iso-8859-1  ISO-8859-1 (Latin-1)
iso-8859-2  ISO-8859-2 (Latin-2)
iso-8859-3  ISO-8859-3 (Latin-3)
iso-8859-4  ISO-8859-4 (Latin-4)
iso-8859-5  ISO-8859-5 (Cyrillic)
iso-8859-6  ISO-8859-6 (Arabic)
iso-8859-7  ISO-8859-7 (Greek)
iso-8859-8  ISO-8859-8 (Hebrew)
iso-8859-9  ISO-8859-9 (Latin-5)
--read-transfer-max-size <size>
Specifies the maximum read transfer size to report to NFSv3 and NFSv4 clients. Valid values are a number followed by a case-sensitive unit of measure: b for bytes; K for KB; M for MB; or G for GB. If no unit is specified, bytes are used by default. The maximum value is 4294967295b. The initial default value is 512K.
--read-transfer-multiple <integer>
Specifies the suggested multiple read size to report to NFSv3 and NFSv4 clients. Valid
values are 0-4294967295. The initial default value is 512.
--read-transfer-size <size>
Specifies the preferred read transfer size to report to NFSv3 and NFSv4 clients. Valid
values are a number followed by a case-sensitive unit of measure: b for bytes; K for
KB; M for MB; or G for GB. If no unit is specified, bytes are used by default. The
maximum value is 4294967295b. The initial default value is 128K.
--setattr-asynchronous {yes | no}
If set to yes, performs set-attributes operations asynchronously. The default setting
is no.
--time-delta <float>
Specifies server time granularity, in seconds.
--write-datasync-action {datasync | filesync | unstable}
Applies to NFSv3 and NFSv4 only. Specifies an alternate datasync write method. The
following values are valid:
datasync
filesync
unstable
The following command creates an NFS export with multiple directory paths and a custom
security type (Kerberos 5):
isi nfs exports create /ifs/data/projects /ifs/data/templates
--security-flavors krb5
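As a further illustration, a command similar to the following creates an export that limits access to a subnet and allows one host to connect as root. The client addresses shown are documentation examples only:
isi nfs exports create /ifs/data/projects --zone=System \
--description="Project data" --clients=203.0.113.0/24 \
--root-clients=203.0.113.10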
Options
<id>
Specifies the ID of the NFS export to delete. You can use the isi nfs exports
list command to view a list of exports and their IDs in the current zone.
--zone <string>
Specifies the access zone in which the export was created. Without this switch, the
command defaults to the current zone.
--force
Suppresses command-line prompts and messages.
--verbose
Displays more detailed information.
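For example, a command similar to the following deletes an export without prompting for confirmation. The export ID shown is an example only:
isi nfs exports delete 3 --zone=System --force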
Options
--zone <string>
Specifies the name of the access zone in which the export was created.
--limit <integer>
Displays no more than the specified number of NFS exports.
--sort <field>
Specifies the field to sort by. Valid values are as follows:
id
zone
paths
description
clients
root_clients
read_only_clients
read_write_clients
unresolved_clients
all_dirs
block_size
can_set_time
commit_asynchronous
directory_transfer_size
encoding
map_lookup_uid
map_retry
map_all
map_root
map_full
max_file_size
read_only
readdirplus
readdirplus_prefetch
return_32bit_file_ids
read_transfer_max_size
read_transfer_multiple
read_transfer_size
security_flavors
setattr_asynchronous
symlinks
time_delta
write_datasync_action
write_datasync_reply
write_filesync_action
write_filesync_reply
write_unstable_action
write_unstable_reply
write_transfer_max_size
write_transfer_multiple
write_transfer_size
--descending
Specifies to sort the data in descending order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
--no-header
Displays table and CSV output without headers.
--no-footer
Displays table output without footers.
--verbose
Displays more detailed information.
Examples
The following command lists NFS exports, by default in the current zone:
isi nfs exports list
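You can combine the display options described above; for example, a command similar to the following lists the exports in an access zone sorted by path and formatted as CSV:
isi nfs exports list --zone=System --sort=paths --format=csv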
You can use the isi nfs settings export view command to see the full list of
default settings for exports.
Syntax
isi nfs exports modify <ID>
[--block-size <size>]
[--revert-block-size]
[--can-set-time {yes | no}]
[--revert-can-set-time]
[--case-insensitive {yes | no}]
[--revert-case-insensitive]
[--case-preserving {yes | no}]
[--revert-case-preserving]
[--chown-restricted {yes | no}]
[--revert-chown-restricted]
[--directory-transfer-size <size>]
[--revert-directory-transfer-size]
[--link-max <integer>]
[--revert-link-max]
[--max-file-size <size>]
[--revert-max-file-size]
[--name-max-size <integer>]
[--revert-name-max-size]
[--no-truncate {yes | no}]
[--revert-no-truncate]
[--return-32bit-file-ids {yes | no}]
[--revert-return-32bit-file-ids]
[--symlinks {yes | no}]
[--revert-symlinks]
[--new-zone <string>]
[--description <string>]
[--paths <path>]
[--clear-paths]
[--add-paths <string>]
[--remove-paths <string>]
[--clients <string>]
[--clear-clients]
[--add-clients <string>]
[--remove-clients <string>]
[--root-clients <string>]
[--clear-root-clients]
[--add-root-clients <string>]
[--remove-root-clients <string>]
[--read-write-clients <string>]
[--clear-read-write-clients]
[--add-read-write-clients <string>]
[--remove-read-write-clients <string>]
[--read-only-clients <string>]
[--clear-read-only-clients]
[--add-read-only-clients <string>]
[--remove-read-only-clients <string>]
[--all-dirs {yes | no}]
[--revert-all-dirs]
[--encoding <string>]
[--revert-encoding]
[--security-flavors {unix | krb5 | krb5i | krb5p}]
[--revert-security-flavors]
[--clear-security-flavors]
[--add-security-flavors {unix | krb5 | krb5i | krb5p}]
[--remove-security-flavors <string>]
[--snapshot <snapshot>]
[--revert-snapshot]
[--map-lookup-uid {yes | no}]
[--revert-map-lookup-uid]
[--map-retry {yes | no}]
[--revert-map-retry]
[--map-root-enabled {yes | no}]
[--revert-map-root-enabled]
[--map-non-root-enabled {yes | no}]
[--revert-map-non-root-enabled]
[--map-failure-enabled {yes | no}]
[--revert-map-failure-enabled]
[--map-all <identity>]
[--revert-map-all]
[--map-root <identity>]
[--revert-map-root]
[--map-non-root <identity>]
[--revert-map-non-root]
[--map-failure <identity>]
[--revert-map-failure]
[--map-full {yes | no}]
[--revert-map-full]
[--commit-asynchronous {yes | no}]
[--revert-commit-asynchronous]
[--read-only {yes | no}]
[--revert-read-only]
[--readdirplus {yes | no}]
[--revert-readdirplus]
[--read-transfer-max-size <size>]
[--revert-read-transfer-max-size]
[--read-transfer-multiple <integer>]
[--revert-read-transfer-multiple]
[--read-transfer-size <size>]
[--revert-read-transfer-size]
[--setattr-asynchronous {yes | no}]
[--revert-setattr-asynchronous]
[--time-delta <time delta>]
[--revert-time-delta]
[--write-datasync-action {datasync | filesync | unstable}]
[--revert-write-datasync-action]
[--write-datasync-reply {datasync | filesync}]
[--revert-write-datasync-reply]
[--write-filesync-action {datasync | filesync | unstable}]
[--revert-write-filesync-action]
[--write-filesync-reply filesync]
[--write-unstable-action {datasync | filesync | unstable}]
[--revert-write-unstable-action]
[--write-unstable-reply {datasync | filesync | unstable}]
[--revert-write-unstable-reply]
[--write-transfer-max-size <size>]
[--revert-write-transfer-max-size]
[--write-transfer-multiple <integer>]
[--revert-write-transfer-multiple]
[--write-transfer-size <size>]
[--revert-write-transfer-size]
[--zone <string>]
[--force]
[--verbose]
Options
<ID>
The export ID number. You can use the isi nfs exports list command to view
all the exports and their ID numbers in the current access zone.
--block-size <size>
Specifies the block size, in bytes.
--revert-block-size
Restores the setting to the system default.
--can-set-time {yes | no}
If set to yes, enables the export to set time. The default setting is no.
--revert-can-set-time
Restores the setting to the system default.
--case-insensitive {yes | no}
If set to yes, the server will report that it ignores case for file names. The default
setting is no.
--revert-case-insensitive
Restores the setting to the system default.
--case-preserving {yes | no}
If set to yes, the server will report that it always preserves case for file names. The
default setting is no.
--revert-case-preserving
Restores the setting to the system default.
--chown-restricted {yes | no}
If set to yes, the server will report that only the superuser can change file ownership.
The default setting is no.
--revert-chown-restricted
Restores the setting to the system default.
--directory-transfer-size <size>
Specifies the preferred directory transfer size. Valid values are a number followed by
a case-sensitive unit of measure: b for bytes; K for KB; M for MB; or G for GB. If no unit
is specified, bytes are used by default. The maximum value is 4294967295b. The
initial default value is 128K.
--revert-directory-transfer-size
Restores the setting to the system default.
--link-max <integer>
The reported maximum number of links to a file.
--revert-link-max
Restores the setting to the system default.
--max-file-size <size>
Specifies the maximum allowed file size on the server (in bytes). If a file is larger than
the specified value, an error is returned.
--revert-max-file-size
Restores the setting to the system default.
--name-max-size <integer>
The reported maximum length of characters in a filename.
--revert-name-max-size
Restores the setting to the system default.
--no-truncate {yes | no}
If set to yes, file names that are too long result in an error rather than being truncated.
--revert-no-truncate
Restores the setting to the system default.
--return-32bit-file-ids {yes | no}
Applies to NFSv3 and later. If set to yes, limits the size of file identifiers returned
from readdir to 32-bit values. The default value is no.
Note
This setting is provided for backward compatibility with older NFS clients, and should
not be enabled unless necessary.
--revert-return-32bit-file-ids
Restores the setting to the system default.
--symlinks {yes | no}
If set to yes, advertises support for symlinks. The default setting is no.
--revert-symlinks
Restores the setting to the system default.
--new-zone <string>
Specifies a new access zone in which the export should apply. The default zone is
system.
--description <string>
The description for this NFS export.
--paths <paths> ...
Specifies the path to be exported, starting at /ifs. This option can be
repeated to specify multiple paths.
--clear-paths
Clears all paths originally specified for the export.
--add-paths <paths> ...
Add to the paths originally specified for the export. The path must be within /ifs.
This option can be repeated to specify multiple paths.
--remove-paths <paths> ...
Remove a path from the paths originally specified for the export. The path must be
within /ifs. This option can be repeated to specify multiple paths to be removed.
--clients <string>
Specifies a client to be allowed access through this export. Specify clients as an IP,
hostname, netgroup, or CIDR range. You can add multiple clients by repeating this
option.
--clear-clients
Clear the full list of clients originally allowed access through this export.
--add-clients <string>
Specifies a client to be added to the list of clients with access through this export.
Specify clients to be added as an IP, hostname, netgroup, or CIDR range. You can add
multiple clients by repeating this option.
--remove-clients <string>
Specifies a client to be removed from the list of clients with access through this
export. Specify clients to be removed as an IP, hostname, netgroup, or CIDR range.
You can remove multiple clients by repeating this option.
--root-clients <string>
Allows the root user of the specified client to execute operations as the root user of
the cluster. This option overrides the --map-all and --map-root options for the
specified client.
Specify clients as an IP, hostname, netgroup, or CIDR range. You can specify multiple
clients in a comma-separated list.
--clear-root-clients
Clear the full list of root clients originally allowed access through this export.
--add-root-clients <string>
Specifies a root client to be added to the list of root clients with access through this
export. Specify root clients to be added as an IP, hostname, netgroup, or CIDR range.
You can add multiple root clients by repeating this option.
--remove-root-clients <string>
Specifies a root client to be removed from the list of root clients with access through
this export. Specify root clients to be removed as an IP, hostname, netgroup, or CIDR
range. You can remove multiple root clients by repeating this option.
--read-write-clients <string>
Grants read/write privileges to the specified client for this export. This option
overrides the --read-only option for the specified client.
Specify clients as an IP, hostname, netgroup, or CIDR range. You can specify multiple
clients in a comma-separated list.
--clear-read-write-clients
Clear the full list of read-write clients originally allowed access through this export.
--add-read-write-clients <string>
Specifies a read-write client to be added to the list of read-write clients with access
through this export. Specify read-write clients to be added as an IP, hostname,
netgroup, or CIDR range. You can add multiple read-write clients by repeating this
option.
--remove-read-write-clients <string>
Specifies a read-write client to be removed from the list of read-write clients with
access through this export. Specify read-write clients to be removed as an IP,
hostname, netgroup, or CIDR range. You can remove multiple read-write clients by
repeating this option.
--read-only-clients <string>
Makes the specified client read-only for this export. This option overrides the --read-only option for the specified client.
Specify clients as an IP, hostname, netgroup, or CIDR range. You can specify multiple
clients in a comma-separated list.
--clear-read-only-clients
Clear the full list of read-only clients originally allowed access through this export.
--add-read-only-clients <string>
Specifies a read-only client to be added to the list of read-only clients with access
through this export. Specify read-only clients to be added as an IP, hostname,
netgroup, or CIDR range. You can add multiple read-only clients by repeating this
option.
--remove-read-only-clients <string>
Specifies a read-only client to be removed from the list of read-only clients with
access through this export. Specify read-only clients to be removed as an IP,
hostname, netgroup, or CIDR range. You can remove multiple read-only clients by
repeating this option.
--all-dirs {yes | no}
If set to yes, this export will cover all directories. The default setting is no.
--revert-all-dirs
Restores the setting to the system default.
--encoding <string>
Specifies the character encoding of clients connecting through this NFS export.
Valid values and their corresponding character encodings are provided in the
following table. These values are taken from the node's /etc/encodings.xml file,
and are not case sensitive.
Value         Encoding
cp932         Windows-SJIS
cp949         Windows-949
cp1252        Windows-1252
euc-kr        EUC-KR
euc-jp        EUC-JP
euc-jp-ms     EUC-JP-MS
utf-8-mac     UTF-8-MAC
utf-8         UTF-8
iso-8859-1    ISO-8859-1 (Latin-1)
iso-8859-2    ISO-8859-2 (Latin-2)
iso-8859-3    ISO-8859-3 (Latin-3)
iso-8859-4    ISO-8859-4 (Latin-4)
iso-8859-5    ISO-8859-5 (Cyrillic)
iso-8859-6    ISO-8859-6 (Arabic)
iso-8859-7    ISO-8859-7 (Greek)
iso-8859-8    ISO-8859-8 (Hebrew)
iso-8859-9    ISO-8859-9 (Latin-5)
--revert-encoding
Restores the setting to the system default.
--security-flavors {unix | krb5 | krb5i | krb5p} ...
Specifies a security flavor to support. To support multiple security flavors, repeat this
option for each additional entry. The following values are valid:
unix
UNIX (sys) authentication.
krb5
Kerberos V5 authentication.
krb5i
Kerberos V5 authentication with integrity.
krb5p
Kerberos V5 authentication with privacy.
--revert-security-flavors
Restores the setting to the system default.
--snapshot {<snapshot> | <snapshot-alias>}
Specifies the ID of a snapshot or snapshot alias to export. If you specify this option,
directories will be exported in the state captured in either the specified snapshot or
the snapshot referenced by the specified snapshot alias. If the snapshot does not
capture the exported path, the export will be inaccessible to users.
If you specify a snapshot alias, and the alias is later modified to reference a new
snapshot, the new snapshot will be automatically applied to the export.
Because snapshots are read-only, clients will not be able to modify data through the
export unless you specify the ID of a snapshot alias that references the live version of
the file system.
Specify <snapshot> or <snapshot-alias> as the ID or name of a snapshot or snapshot
alias.
--revert-snapshot
Restores the setting to the system default.
--map-lookup-uid {yes | no}
If set to yes, incoming UNIX user identifiers (UIDs) will be looked up locally. The
default setting is no.
--revert-map-lookup-uid
Restores the setting to the system default.
--map-retry {yes | no}
If set to yes, the system will retry failed user-mapping lookups. The default setting is
no.
--revert-map-retry
Restores the setting to the system default.
--map-root-enabled {yes | no}
Enable/disable mapping incoming root users to a specific account.
--revert-map-root-enabled
Restores the setting to the system default.
--map-non-root-enabled {yes | no}
Enable/disable mapping incoming non-root users to a specific account.
--revert-map-non-root-enabled
Restores the setting to the system default.
--map-failure-enabled {yes | no}
Enable/disable mapping users to a specific account after failing an auth lookup.
--revert-map-failure-enabled
Restores the setting to the system default.
--map-all <identity>
Specifies the default identity that operations by any user will execute as. If this
option is not set to root, you can allow the root user of a specific client to execute
operations as the root user of the cluster by including the client in the --root-clients list.
--revert-map-all
Restores the setting to the system default.
--map-root <identity>
Map incoming root users to a specific user and/or group ID.
--revert-map-root
Restores the setting to the system default.
--map-non-root <identity>
Map non-root users to a specific user and/or group ID.
--revert-map-non-root
Restores the setting to the system default.
--map-failure <identity>
Map users to a specific user and/or group ID after a failed auth attempt.
--revert-map-failure
Restores the setting to the system default.
--map-full {yes | no}
If set to yes, full identity mapping resolution will be used for mapped users. The
default setting is no.
--revert-map-full
Restores the setting to the system default.
--commit-asynchronous {yes | no}
If set to yes, enables commit data operations to be performed asynchronously. The
default setting is no.
--revert-commit-asynchronous
Restores the setting to the system default.
--read-only {yes | no}
Determines the default privileges for all clients accessing the export.
If set to yes, you can grant read/write privileges to a specific client by including the
client in the --read-write-clients list.
If set to no, you can make a specific client read-only by including the client in the --read-only-clients list. The default setting is no.
--revert-read-only
Restores the setting to the system default.
--readdirplus {yes | no}
Applies to NFSv3 only. If set to yes, enables processing of readdir-plus requests. The
default setting is no.
--revert-readdirplus
Restores the setting to the system default.
--read-transfer-max-size <size>
Specifies the maximum read transfer size to report to NFSv3 and NFSv4 clients. Valid
values are a number followed by a case-sensitive unit of measure: b for bytes; K for
KB; M for MB; or G for GB. If no unit is specified, bytes are used by default. The
maximum value is 4294967295b. The initial default value is 512K.
--revert-read-transfer-max-size
Restores the setting to the system default.
--read-transfer-multiple <integer>
Specifies the suggested multiple read size to report to NFSv3 and NFSv4 clients. Valid
values are 0-4294967295. The initial default value is 512.
--revert-read-transfer-multiple
Restores the setting to the system default.
--read-transfer-size <size>
Specifies the preferred read transfer size to report to NFSv3 and NFSv4 clients. Valid
values are a number followed by a case-sensitive unit of measure: b for bytes; K for
KB; M for MB; or G for GB. If no unit is specified, bytes are used by default. The
maximum value is 4294967295b. The initial default value is 128K.
--revert-read-transfer-size
Restores the setting to the system default.
--setattr-asynchronous {yes | no}
If set to yes, performs set-attributes operations asynchronously. The default setting
is no.
--revert-setattr-asynchronous
Restores the setting to the system default.
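The following commands illustrate typical modifications; the export ID, client address, and snapshot alias name are examples only. The first command grants read/write access to an additional client, the second adjusts the reported read transfer size using the case-sensitive unit syntax, and the third exports data from a snapshot alias:
isi nfs exports modify 2 --zone=System --add-read-write-clients=203.0.113.50
isi nfs exports modify 2 --zone=System --read-transfer-size=128K
isi nfs exports modify 2 --zone=System --snapshot=WeeklyAlias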
Options
There are no options for this command.
Options
<id>
Specifies the ID of the NFS export to display. If you do not know the ID, use the isi
nfs exports list command to view a list of exports and their associated IDs.
--zone <string>
Specifies the name of the access zone in which the export was created.
Options
{--minutes | -m} <number>
Specifies the time in minutes between writes to the backup drive.
Options
{--node | -n} <string>
Specifies the node to send the update command to.
Options
{--minutes | -m} <number>
Specifies the time in minutes between each netgroup expiration.
Options
{--node | -n} <string>
Specifies the node to send the flush command to.
Options
{--minutes | -m} <number>
Specifies the time in minutes before stale cache entries are wiped.
Options
{--seconds | -s} <integer>
Specifies the time in seconds between attempts to retry an NFS netgroup.
Options
--limit <integer>
Displays no more than the specified number of NFS NLM locks.
--sort {client | path | lock_type | range | created}
Specifies the field to sort by.
--descending
Specifies to sort the data in descending order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
--no-header
Displays table and CSV output without headers.
--no-footer
Displays table output without footers.
--verbose
Displays more detailed information.
Examples
To view a detailed list of all current NLM locks, run the following command:
isi nfs nlm locks list --verbose
In the following sample output, there are currently three locks: one on /ifs/home/
test1/file.txt and two on /ifs/home/test2/file.txt.
Client                      Path                       Lock Type  Range
--------------------------  -------------------------  ---------  --------
machineName/10.72.134.119   /ifs/home/test1/file.txt   exclusive  [0, 2]
machineName/10.59.166.125   /ifs/home/test2/file.txt   shared     [10, 20]
machineName/10.63.119.205   /ifs/home/test2/file.txt   shared     [10, 20]
Options
--limit <integer>
Displays no more than the specified number of NLM locks.
--sort {client | path | lock_type | range | created}
Specifies the field to sort by.
--descending
Specifies to sort the data in descending order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
--no-header
Displays table and CSV output without headers.
--no-footer
Displays table output without footers.
--verbose
Displays more detailed information.
Examples
The following command displays a detailed list of clients waiting to lock a currently locked file:
isi nfs nlm locks waiters --verbose
Options
<client>
Specifies the client whose NLM sessions you want to disconnect.
Options
--limit <integer>
Displays no more than the specified number of NFS NLM sessions.
--sort {client | path | lock_type | range | created}
Specifies the field to sort by.
--descending
Specifies to sort the data in descending order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
--no-header
Displays table and CSV output without headers.
--no-footer
Displays table output without footers.
--verbose
Displays more detailed information.
Example
To view a list of active NLM sessions, run the following command:
isi nfs nlm sessions list
You can view the currently configured default NFS export settings by running the isi
nfs settings export view command.
Syntax
isi nfs settings export modify
[--block-size <size>]
[--revert-block-size]
[--can-set-time {yes | no}]
[--revert-can-set-time]
[--case-insensitive {yes | no}]
[--revert-case-insensitive]
[--case-preserving {yes | no}]
[--revert-case-preserving]
[--chown-restricted {yes | no}]
[--revert-chown-restricted]
[--directory-transfer-size <size>]
[--revert-directory-transfer-size]
[--link-max <integer>]
[--revert-link-max]
[--max-file-size <size>]
[--revert-max-file-size]
[--name-max-size <integer>]
[--revert-name-max-size]
[--no-truncate {yes | no}]
[--revert-no-truncate]
[--return-32bit-file-ids {yes | no}]
[--revert-return-32bit-file-ids]
[--symlinks {yes | no}]
[--revert-symlinks]
[--all-dirs {yes | no}]
[--revert-all-dirs]
[--encoding <string>]
[--revert-encoding]
[--security-flavors {unix | krb5 | krb5i | krb5p}]
[--revert-security-flavors]
[--clear-security-flavors]
[--add-security-flavors {unix | krb5 | krb5i | krb5p}]
[--remove-security-flavors <string>]
[--snapshot <snapshot>]
[--revert-snapshot]
[--map-lookup-uid {yes | no}]
[--revert-map-lookup-uid]
[--map-retry {yes | no}]
[--revert-map-retry]
[--map-root-enabled {yes | no}]
[--revert-map-root-enabled]
[--map-non-root-enabled {yes | no}]
[--revert-map-non-root-enabled]
[--map-failure-enabled {yes | no}]
[--revert-map-failure-enabled]
[--map-all <identity>]
[--revert-map-all]
[--map-root <identity>]
[--revert-map-root]
[--map-non-root <identity>]
[--revert-map-non-root]
[--map-failure <identity>]
[--revert-map-failure]
[--map-full {yes | no}]
[--revert-map-full]
[--commit-asynchronous {yes | no}]
[--revert-commit-asynchronous]
[--read-only {yes | no}]
[--revert-read-only]
[--readdirplus {yes | no}]
[--revert-readdirplus]
[--read-transfer-max-size <size>]
[--revert-read-transfer-max-size]
[--read-transfer-multiple <integer>]
[--revert-read-transfer-multiple]
[--read-transfer-size <size>]
[--revert-read-transfer-size]
[--setattr-asynchronous {yes | no}]
[--revert-setattr-asynchronous]
[--time-delta <integer>]
[--revert-time-delta]
[--write-datasync-action {datasync | filesync | unstable}]
[--revert-write-datasync-action]
[--write-datasync-reply {datasync | filesync}]
[--revert-write-datasync-reply]
[--write-filesync-action {datasync | filesync | unstable}]
[--revert-write-filesync-action]
[--write-filesync-reply filesync]
[--write-unstable-action {datasync | filesync | unstable}]
[--revert-write-unstable-action]
[--write-unstable-reply {datasync | filesync | unstable}]
[--revert-write-unstable-reply]
[--write-transfer-max-size <size>]
[--revert-write-transfer-max-size]
[--write-transfer-multiple <integer>]
[--revert-write-transfer-multiple]
[--write-transfer-size <size>]
[--revert-write-transfer-size]
[--zone <string>]
[--force]
[--verbose]
Options
--block-size <size>
Specifies the block size, in bytes.
--revert-block-size
Restores the setting to the system default.
--can-set-time {yes | no}
If set to yes, enables the export to set time. The default setting is no.
--revert-can-set-time
Restores the setting to the system default.
--case-insensitive {yes | no}
If set to yes, the server will report that it ignores case for file names. The default
setting is no.
--revert-case-insensitive
Restores the setting to the system default.
--case-preserving {yes | no}
If set to yes, the server will report that it always preserves case for file names. The
default setting is no.
--revert-case-preserving
Restores the setting to the system default.
--chown-restricted {yes | no}
If set to yes, the server will report that only the superuser can change file ownership.
The default setting is no.
--revert-chown-restricted
Restores the setting to the system default.
--directory-transfer-size <size>
Specifies the preferred directory transfer size. Valid values are a number followed by
a case-sensitive unit of measure: b for bytes; K for KB; M for MB; or G for GB. If no unit
is specified, bytes are used by default. The maximum value is 4294967295b. The
initial default value is 128K.
--revert-directory-transfer-size
Restores the setting to the system default.
--link-max <integer>
The reported maximum number of links to a file.
--revert-link-max
Restores the setting to the system default.
--max-file-size <size>
Specifies the maximum allowed file size on the server (in bytes). If a file is larger than
the specified value, an error is returned.
--revert-max-file-size
Restores the setting to the system default.
--name-max-size <integer>
The reported maximum length of characters in a filename.
--revert-name-max-size
Restores the setting to the system default.
--no-truncate {yes | no}
If set to yes, file names that are too long result in an error rather than being truncated.
--revert-no-truncate
Restores the setting to the system default.
--return-32bit-file-ids {yes | no}
Applies to NFSv3 and later. If set to yes, limits the size of file identifiers returned
from readdir to 32-bit values. The default value is no.
Note
This setting is provided for backward compatibility with older NFS clients, and should
not be enabled unless necessary.
--revert-return-32bit-file-ids
Restores the setting to the system default.
--symlinks {yes | no}
If set to yes, advertises support for symlinks. The default setting is no.
--revert-symlinks
Restores the setting to the system default.
--new-zone <string>
Specifies a new access zone in which the export should apply. The default zone is
system.
--all-dirs {yes | no}
If set to yes, this export will cover all directories. The default setting is no.
--revert-all-dirs
Restores the setting to the system default.
--encoding <string>
Specifies the character encoding of clients connecting through this NFS export.
Valid values and their corresponding character encodings are provided in the
following table. These values are taken from the node's /etc/encodings.xml file,
and are not case sensitive.
Value         Encoding
cp932         Windows-SJIS
cp949         Windows-949
cp1252        Windows-1252
euc-kr        EUC-KR
euc-jp        EUC-JP
euc-jp-ms     EUC-JP-MS
utf-8-mac     UTF-8-MAC
utf-8         UTF-8
iso-8859-1    ISO-8859-1 (Latin-1)
iso-8859-2    ISO-8859-2 (Latin-2)
iso-8859-3    ISO-8859-3 (Latin-3)
iso-8859-4    ISO-8859-4 (Latin-4)
iso-8859-5    ISO-8859-5 (Cyrillic)
iso-8859-6    ISO-8859-6 (Arabic)
iso-8859-7    ISO-8859-7 (Greek)
iso-8859-8    ISO-8859-8 (Hebrew)
iso-8859-9    ISO-8859-9 (Latin-5)
--revert-encoding
Restores the setting to the system default.
--security-flavors {unix | krb5 | krb5i | krb5p} ...
Specifies a security flavor to support. To support multiple security flavors, repeat this
option for each additional entry. The following values are valid:
unix
UNIX (sys) authentication.
krb5
Kerberos V5 authentication.
krb5i
Kerberos V5 authentication with integrity.
krb5p
Kerberos V5 authentication with privacy.
--revert-security-flavors
Restores the setting to the system default.
--snapshot {<snapshot> | <snapshot-alias>}
Specifies the ID of a snapshot or snapshot alias to export. If you specify this option,
directories will be exported in the state captured in either the specified snapshot or
the snapshot referenced by the specified snapshot alias. If the snapshot does not
capture the exported path, the export will be inaccessible to users.
If you specify a snapshot alias, and the alias is later modified to reference a new
snapshot, the new snapshot will be automatically applied to the export.
Because snapshots are read-only, clients will not be able to modify data through the
export unless you specify the ID of a snapshot alias that references the live version of
the file system.
Specify <snapshot> or <snapshot-alias> as the ID or name of a snapshot or snapshot
alias.
--revert-snapshot
Restores the setting to the system default.
--map-lookup-uid {yes | no}
If set to yes, incoming UNIX user identifiers (UIDs) will be looked up locally. The
default setting is no.
--revert-map-lookup-uid
Restores the setting to the system default.
--map-retry {yes | no}
If set to yes, the system will retry failed user-mapping lookups. The default setting is
no.
--revert-map-retry
Restores the setting to the system default.
--read-only {yes | no}
Determines the default privileges for all clients accessing the export.
If set to yes, you can grant read/write privileges to a specific client by including the
client in the --read-write-clients list.
If set to no, you can make a specific client read-only by including the client in the --read-only-clients list. The default setting is no.
--revert-read-only
Restores the setting to the system default.
--readdirplus {yes | no}
Applies to NFSv3 only. If set to yes, enables processing of readdir-plus requests. The
default setting is no.
--revert-readdirplus
Restores the setting to the system default.
--read-transfer-max-size <size>
Specifies the maximum read transfer size to report to NFSv3 and NFSv4 clients. Valid
values are a number followed by a case-sensitive unit of measure: b for bytes; K for
KB; M for MB; or G for GB. If no unit is specified, bytes are used by default. The
maximum value is 4294967295b. The initial default value is 512K.
--revert-read-transfer-max-size
Restores the setting to the system default.
--read-transfer-multiple <integer>
Specifies the suggested multiple read size to report to NFSv3 and NFSv4 clients. Valid
values are 0-4294967295. The initial default value is 512.
--revert-read-transfer-multiple
Restores the setting to the system default.
--read-transfer-size <size>
Specifies the preferred read transfer size to report to NFSv3 and NFSv4 clients. Valid
values are a number followed by a case-sensitive unit of measure: b for bytes; K for
KB; M for MB; or G for GB. If no unit is specified, bytes are used by default. The
maximum value is 4294967295b. The initial default value is 128K.
--revert-read-transfer-size
Restores the setting to the system default.
--setattr-asynchronous {yes | no}
If set to yes, performs set-attributes operations asynchronously. The default setting
is no.
--revert-setattr-asynchronous
Restores the setting to the system default.
--time-delta <integer>
Specifies server time granularity, in seconds.
--revert-time-delta
Restores the setting to the system default.
--write-datasync-action {datasync | filesync | unstable}
Applies to NFSv3 and NFSv4 only. Specifies an alternate datasync write method. The
following values are valid:
datasync
filesync
unstable
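For example, the following commands illustrate the modify and revert pattern for a default export setting; the setting shown is an example only:
isi nfs settings export modify --zone=System --commit-asynchronous=yes
isi nfs settings export modify --zone=System --revert-commit-asynchronous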
Options
--zone <string>
Specifies the access zone in which the default settings apply.
Example
To view the currently-configured default export settings, run the following command:
isi nfs settings export view
Options
--lock-protection <integer>
Specifies the number of node failures that can happen before a lock might be lost.
--nfsv3-enabled {yes | no}
Specifies that NFSv3 is enabled.
--nfsv4-enabled {yes | no}
Specifies that NFSv4 is enabled.
--force
Causes the command to be executed without your confirmation.
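For example, assuming these options belong to the isi nfs settings global modify command, a command similar to the following enables the NFSv4 service on the cluster:
isi nfs settings global modify --nfsv4-enabled=yes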
Options
There are no options for this command.
Options
--nfsv4-domain <string>
Specifies the NFSv4 domain name.
--revert-nfsv4-domain
Restores the setting to the system default. The default is localdomain.
--nfsv4-replace-domain {yes | no}
Replaces the owner and group domain with the NFSv4 domain name.
--revert-nfsv4-replace-domain
Restores the setting to the system default. The default is yes.
--nfsv4-no-domain {yes | no}
Options
There are no options for this command.
FTP commands
You can access and configure the FTP service through the FTP commands.
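For example, commands similar to the following enable anonymous access, set the anonymous root path, and prevent anonymous uploads, using options described later in this section; the directory path shown is an example only:
isi ftp allow-anon-access YES
isi ftp anon-root-path --value /ifs/data/ftproot
isi ftp allow-anon-upload NO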
Options
<value>
Specifies the time, in seconds, that a remote client has to establish a PASV style data
connection before timeout. All integers between 30 and 600 are valid values. If no
options are specified, the current timeout is displayed. The default value is 60.
Examples
To set the data connection timeout to 5 minutes, run the following command:
isi ftp accept-timeout 300
Options
<value>
Controls whether directory list commands are enabled. Valid values are YES and NO. If
no options are specified, displays whether directory list commands are permitted.
The default value is YES.
Examples
To disable directory list commands, run the following command:
isi ftp allow-dirlists NO
Options
<value>
Controls whether anonymous logins are permitted or not. If enabled, both the
usernames ftp and anonymous are recognized as anonymous logins. Valid values are
YES and NO. If no options are specified, displays whether or not anonymous logins
are permitted. The default value is NO.
Examples
To allow anonymous access, run the following command:
isi ftp allow-anon-access YES
Options
<value>
Controls whether anonymous users are able to upload files under certain conditions.
Valid values are YES and NO. For anonymous users to be able to upload, the isi ftp
allow-writes command must be set to YES, and the anonymous user must have
write permission on the desired upload location. If no options are specified, displays
whether anonymous users are permitted to upload files. The default value is YES.
Examples
To disable anonymous users from uploading files, run the following command:
isi ftp allow-anon-upload NO
Options
<value>
Controls whether files can be downloaded. Valid values are YES and NO. If no options
are specified, displays whether downloads are permitted. The default value is YES.
Examples
To disable downloads from being permitted, run the following command:
isi ftp allow-downloads NO
Options
<value>
Controls whether local user accounts can be used to log in. Valid values are YES and
NO. If set to YES, normal user accounts can be used to log in. If no options are
specified, displays whether local logins are permitted. The default value is YES.
Examples
To deny local login permission, run the following command:
isi ftp allow-local-access NO
Options
<value>
Controls whether any of the following commands, which change the file system, are
allowed: STOR, DELE, RNFR, RNTO, MKD, RMD, APPE, and SITE. Valid values are YES
and NO. If no options are specified, displays whether commands that change the file
system are permitted. The default value is YES.
Examples
To disable commands that change the file system, run the following command:
isi ftp allow-writes NO
Options
<value>
Controls whether FTP always initially changes directories to the home directory of the
user. The default value is YES. If set to NO, you can set up a chroot area in FTP
without having a home directory for the user. If no options are specified, displays the
current setting. Valid values are YES and NO.
Options
<value>
Gives ownership of anonymously uploaded files to the specified user. The value must
be a local username. If no options are specified, displays the owner of anonymously
uploaded files. The default value is root.
Examples
The following command sets the owner of anonymously uploaded files to be "user1":
isi ftp anon-chown-username user1
Options
There are no options for this command.
Examples
To display a list of anonymous user passwords, run the following command:
isi ftp anon-password-list
password
Options
<value>
Specifies the password being added to the anonymous password list.
Examples
The following command adds "1234" to the anonymous password list:
isi ftp anon-password-list add 1234
Options
<value>
Specifies which password to remove from the anonymous password list.
Examples
The following command removes "1234" from the anonymous password list:
isi ftp anon-password-list remove 1234
Options
{--value | -v} <ifs-directory>
Represents a directory in /ifs that the Very Secure FTP Daemon (VSFTPD) will try to
change to after an anonymous login. Valid values are paths in /ifs. If no options are
specified, displays the root path for anonymous users. The default value is /ifs/home/ftp.
--reset
Resets the value to /ifs/home/ftp.
Examples
The following command sets the root path for anonymous users to /ifs/home/newUser/:
isi ftp anon-root-path --value /ifs/home/newUser/
Options
<value>
Specifies the umask for file creation by anonymous users. Valid values are octal
umask numbers. If no options are specified, displays the current anonymous user file
creation umask. The default value is 077.
Note
The value must contain the '0' prefix; otherwise, the value will be treated as a base 10 integer.
Examples
The following command sets the umask for file creation by anonymous users to 066:
isi ftp anon-umask 066
Options
<value>
Determines whether ASCII downloads and uploads are enabled. The following values
are valid:
both
ASCII mode data transfers are honored on both downloads and uploads.
If no options are specified, displays whether ASCII downloads and uploads are
permitted. The default value is off.
Examples
To allow both ASCII downloads and uploads, run the following command:
isi ftp ascii-mode both
Options
<value>
Specifies the timeout, in seconds, for a remote client to respond to a PORT-style
data connection. Valid values are integers between 30 and 600. If no options are
specified, displays the current data connection response timeout. The default value
is 60.
Examples
To set the timeout to two minutes, run the following command:
isi ftp connect-timeout 120
Options
There are no options for this command.
Examples
To view a list of local user chroot exceptions, run the following command:
isi ftp chroot-exception-list
Options
<value>
Specifies the user being added to the chroot exception list.
Examples
The following command adds newUser to the chroot exception list:
isi ftp chroot-exception-list add newUser
Options
<value>
Required. Specifies the user being removed from the chroot exception list.
Examples
The following command removes newUser from the chroot exception list:
isi ftp chroot-exception-list remove newUser
Options
<value>
Specifies which users are placed in a chroot jail in their home directory after they
log in. The following values are valid:
all
All local users are placed in a chroot jail in their home directory after they log in.
all-with-exceptions
All local users except those in the chroot exception list are placed in a chroot jail in their home directory after they log in.
none
No local users are placed in a chroot jail in their home directory after they log in.
none-with-exceptions
Only local users in the chroot exception list are placed in a chroot jail in their home directory after they log in.
If no options are specified, displays the current setting. The default value is none.
Examples
To place users who are not on the chroot exception list in a chroot jail in their home
directory after they log in, run the following command:
isi ftp chroot-local-mode --value=all-with-exceptions
To place only users in the chroot exception list in a chroot jail in their home directory after
they log in, run the following command:
isi ftp chroot-local-mode --value=none-with-exceptions
Options
<value>
Enables support for using chroot-exception-list entries as exceptions to chroot mode
when set to YES. Valid values are YES and NO. If no options are specified, displays
whether the lookup is enabled. The default value is NO.
Options
<value>
Specifies the maximum time (in seconds) data transfers are allowed to stall with no
progress before the remote client is removed. Valid values are integers between 30
and 600. The default value is 300.
Examples
To set the timeout to 1 minute, run the following command:
isi ftp data-timeout 60
Options
There are no options for this command.
Examples
To view the list of denied users, run the following command:
isi ftp denied-user-list
Options
<value>
Specifies the name of the user being added to the denied user list.
Examples
The following command adds unwelcomeUser to the list of denied users:
isi ftp denied-user-list add unwelcomeUser
Options
<value>
Specifies the name of the user being removed from the denied user list.
Examples
The following command removes approvedUser from the list of denied users:
isi ftp denied-user-list remove approvedUser
Options
<value>
Specifies whether the time displayed in directory listings is in your local time zone.
Valid values are YES and NO. If set to NO, times are displayed in GMT. If set to YES,
times are displayed in your local time zone. If no options are specified, the current
setting is displayed. The default value is NO.
The last-modified times returned by commands issued inside of the FTP shell are also
affected by this parameter.
Examples
To set the time displayed in directory listings to your local time zone, run the following
command:
isi ftp dirlist-localtime YES
Options
<value>
Determines what information is displayed about users and groups in directory
listings. The following values are valid:
hide
All user and group information in directory listings is displayed as ftp.
numeric
Numeric IDs are shown in the user and group fields of directory listings.
textual
Textual names are shown in the user and group fields of directory listings.
If no options are specified, displays the current setting. The default value is hide.
Examples
To show numeric IDs of users and groups in directory listings, run the following
command:
isi ftp dirlist-names numeric
Options
<value>
Specifies the permissions with which uploaded files are created. Valid values are
octal permission numbers. If no options are specified, this command displays the
current file creation permission setting. The default value is 0666.
Note
The value must contain the '0' prefix; otherwise, the value will be treated as a base 10 integer.
Options
{--value | -v} <ifs-directory>
Specifies a directory in /ifs that VSFTPD attempts to change into after a local login.
Valid values are paths in /ifs. If no options are specified, the current root path for
local users is displayed. The default value is the local user home directory.
--reset
Resets to use the local user home directory.
Examples
The following command sets the root path for local users to /ifs/home/newUser:
isi ftp local-root-path --value=/ifs/home/newUser
To set the root path for local users back to the local user home directory, run the
following command:
isi ftp local-root-path --reset
Options
<value>
Specifies the umask for file creation by local users. Valid values are octal umask
numbers. If no options are specified, displays the current local user file creation
umask. The default value is 077.
Note
The value must contain the '0' prefix; otherwise, the value will be treated as a base 10 integer.
Examples
The following command sets the local user file creation umask to 066:
isi ftp local-umask 066
Options
<value>
Specifies whether or not to allow FXP transfers. Valid values are YES and NO. If no
options are specified, displays current setting. The default value is NO.
Examples
To allow FXP transfers, run the following command:
isi ftp server-to-server YES
Options
<value>
Valid values are YES and NO. If set to YES, the command maintains login sessions for
each user through Pluggable Authentication Modules (PAM). If set to NO, the
command prevents automatic home directory creation if that functionality is
otherwise available. If no options are specified, displays whether FTP session support
is enabled. The default value is YES.
Options
<value>
Specifies the maximum time (in seconds) that a remote client may spend between
FTP commands before the remote client is kicked off. Valid values are integers
between 30 and 600. If no options are specified, displays the current idle system
timeout. The default value is 300.
Examples
To set the timeout to one minute, run the following command:
isi ftp session-timeout 60
Options
If no options are specified, displays the current user configuration directory path.
{--value | -v} <directory>
Specifies the directory where user-specific configurations that override global
configurations can be found. The default value is the local user home directory.
--reset
Reset to use the local user home directory.
Examples
The following command sets the user configuration directory to /ifs/home/User/directory:
isi ftp user-config-dir --value=/ifs/home/User/directory
To set the user configuration directory back to the local user home directory, run the
following command:
isi ftp user-config-dir --reset
CHAPTER 10
Home directories
Share permissions are checked when files are accessed, before the underlying file
system permissions are checked. Either of these permissions can prevent access to the
file or directory.
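For example, after you create a home directory share as shown in the procedures that follow, you can review the share-level settings and permissions with a command similar to the following, where HOMEDIR is the share name used in the examples below:
isi smb shares view HOMEDIR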
Home directory share paths must begin with /ifs/ and must be in the root path of the
access zone in which the home directory SMB share is created.
In the following commands, the --allow-variable-expansion option is enabled to
indicate that %U should be expanded to the user name, which is user411 in this
example. The --auto-create-directory option is enabled to create the directory if
it does not exist:
isi smb shares create HOMEDIR --path=/ifs/home/%U \
--allow-variable-expansion=yes --auto-create-directory=yes
isi smb shares permission modify HOMEDIR --wellknown Everyone \
--permission-type allow --permission full
isi smb shares view HOMEDIR
When user411 connects to the share with the net use command, the user's home
directory is created at /ifs/home/user411. On user411's Windows client, the net
use m: command connects /ifs/home/user411 through the HOMEDIR share:
net use m: \\cluster.company.com\HOMEDIR /u:user411
Procedure
1. Run the following commands on the cluster with the --allow-variable-expansion option enabled. The %U expansion variable expands to the user name,
and the --auto-create-directory option is enabled to create the directory if it
does not exist:
isi smb shares create HOMEDIR --path=/ifs/home/%U \
--allow-variable-expansion=yes --auto-create-directory=yes
isi smb shares permission modify HOMEDIR --wellknown Everyone \
--permission-type allow --permission full
If user411 connects to the share with the net use command, user411's home
directory is created at /ifs/home/user411. On user411's Windows client, the net
use m: command connects /ifs/home/user411 through the HOMEDIR share,
mapping the connection similar to the following example:
net use m: \\cluster.company.com\HOMEDIR /u:user411
2. Run a net use command, similar to the following example, on a Windows client to
map the home directory for user411:
net use q: \\cluster.company.com\HOMEDIR_ACL /u:user411
3. Run a command similar to the following example on the cluster to view the inherited
ACL permissions for the user411 share:
cd /ifs/home/user411
ls -lde .
If another SMB share exists that matches the user's name, the user connects to the
explicitly named share rather than to the %U share.
Procedure
1. Run the following command to create a share that matches the authenticated user
login name when the user connects to the share:
isi smb shares create %U /ifs/home/%U \
--allow-variable-expansion=yes --auto-create-directory=yes \
--zone=System
After running this command, user Zachary will see a share named 'zachary' rather
than '%U', and when Zachary tries to connect to the share named 'zachary', he will be
directed to /ifs/home/zachary. On a Windows client, if Zachary runs the
following commands, he sees the contents of his /ifs/home/zachary directory:
net use m: \\cluster.ip\zachary /u:zachary
cd m:
dir
Similarly, if user Claudia runs the following commands on a Windows client, she sees
the directory contents of /ifs/home/claudia:
net use m: \\cluster.ip\claudia /u:claudia
cd m:
dir
Zachary and Claudia cannot access one another's home directory because only the
share 'zachary' exists for Zachary and only the share 'claudia' exists for Claudia.
2. Run the following command to set the default login shell for all Active Directory users
in your domain to /bin/bash:
isi auth ads modify YOUR.DOMAIN.NAME.COM --login-shell /bin/bash
(The command output lists the zone settings for the System zone, including the zone path /ifs, a Home Directory Umask of 0077, a Skeleton Directory of /usr/share/skel, and the audited events create, delete, rename, set_security, and close.)
In the command result, you can see that the default Home Directory Umask setting
for the zone is 0077, so created home directories receive permissions of 0700, which
is equivalent to (0755 & ~(0077)). You can modify the Home Directory Umask setting
for a zone with the --home-directory-umask option, specifying an octal number as
the umask value. This value indicates the permissions that are to be disabled, so
larger mask values indicate fewer permissions. For example, a umask value of 000 or
022 yields created home directory permissions of 0755, whereas a umask value of
077 yields created home directory permissions of 0700.
2. Run a command similar to the following example to allow group and others read
and execute permission in a home directory:
isi zone zones modify System --home-directory-umask=022
In this example, user home directories will be created with mode bits 0755 masked
by the umask field, set to the value of 022. Therefore, user home directories will be
created with mode bits 0755, which is equivalent to (0755 & ~(022)).
2. Run the isi auth ads modify command with the --home-directory-template and --create-home-directory options.
isi auth ads modify YOUR.DOMAIN.NAME.COM \
--home-directory-template=/ifs/home/ADS/%D/%U \
--create-home-directory=yes
3. Run the isi auth ads view command with the --verbose option.
The system displays output similar to the following example:
Name: YOUR.DOMAIN.NAME.COM
NetBIOS Domain: YOUR
...
Create Home Directory: Yes
5. Optional: To verify this information from an external UNIX node, run the ssh command
from an external UNIX node.
For example, the following command would create /ifs/home/ADS/<your-domain>/user_100 if it did not previously exist:
ssh <your-domain>\\user_100@<cluster-ip-address>
2. Run the isi zone zones modify command to modify the default skeleton
directory.
The following command modifies the default skeleton directory, /usr/share/skel,
in an access zone, where System is the value for the <zone> option and /usr/share/
skel2 is the value for the <path> option:
isi zone zones modify System --skeleton-directory=/usr/share/skel2
Authentication provider: Local
Home directory: --home-directory-template=/ifs/home/%U, --create-home-directory=yes, --login-shell=/bin/sh
Home directory creation: Enabled
UNIX login shell: /bin/sh

Authentication provider: File
Home directory: --home-directory-template="", --create-home-directory=no
Home directory creation: Disabled
UNIX login shell: None

Authentication provider: Active Directory
Home directory: --home-directory-template=/ifs/home/%D/%U, --create-home-directory=no, --login-shell=/bin/sh
Note: If available, provider information overrides this value.
Home directory creation: Disabled
UNIX login shell: /bin/sh

Authentication provider: LDAP
Home directory: --home-directory-template="", --create-home-directory=no
Home directory creation: Disabled
UNIX login shell: None

Authentication provider: NIS
Home directory: --home-directory-template="", --create-home-directory=no
Home directory creation: Disabled
UNIX login shell: None
When you create an SMB share through the web administration interface, you must select
the Allow Variable Expansion check box or the string is interpreted literally by the system.
Variable Value
Description
%U
Expands to the user name. This variable is typically included at the end of the path. For
example, for a user named user1, the path /ifs/home/%U is mapped to /ifs/home/user1.
%D
Expands to the NetBIOS domain name of the user.
%Z
Expands to the access zone name.
%L
Expands to the host name of the cluster, normalized to lowercase.
%0
Expands to the first character of the user name.
%1
Expands to the second character of the user name.
%2
Expands to the third character of the user name.
Note
If the user name includes fewer than three characters, the %0, %1, and %2 variables
wrap around. For example, for a user named ab, the variables map to a, b, and a,
respectively. For a user named a, all three variables map to a.
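For example, a command similar to the following uses the %0 and %U variables to distribute user directories across subdirectories named for the first character of each user name; the share name and path layout are examples only:
isi smb shares create UserDocs --path=/ifs/home/%0/%U \
--allow-variable-expansion=yes --auto-create-directory=yes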
CHAPTER 11
Snapshots
Snapshots overview
A OneFS snapshot is a logical pointer to data that is stored on a cluster at a specific point
in time.
A snapshot references a directory on a cluster, including all data stored in the directory
and its subdirectories. If the data referenced by a snapshot is modified, the snapshot
stores a physical copy of the data that was modified. Snapshots are created according to
user specifications or are automatically generated by OneFS to facilitate system
operations.
To create and manage snapshots, you must activate a SnapshotIQ license on the cluster.
Some applications must generate snapshots to function but do not require you to
activate a SnapshotIQ license; by default, these snapshots are automatically deleted
when OneFS no longer needs them. However, if you activate a SnapshotIQ license, you
can retain these snapshots. You can view snapshots generated by other modules without
activating a SnapshotIQ license.
You can identify and locate snapshots by name or ID. A snapshot name is specified by a
user and assigned to the virtual directory that contains the snapshot. A snapshot ID is a
numerical identifier that OneFS automatically assigns to a snapshot.
To reduce disk-space usage, snapshots that reference the same directory reference each
other, with older snapshots referencing newer snapshots. If a file is deleted, and several
snapshots reference the file, a single snapshot stores a copy of the file, and the other
snapshots reference the file from the snapshot that stored the copy. The reported size of
a snapshot reflects only the amount of data stored by the snapshot and does not include
the amount of data referenced by the snapshot.
Because snapshots do not consume a set amount of storage space, there is no available-space requirement for creating a snapshot. The size of a snapshot grows according to
how the data referenced by the snapshot is modified. A cluster cannot contain more than
20,000 snapshots.
Snapshot schedules
You can automatically generate snapshots according to a snapshot schedule.
With snapshot schedules, you can periodically generate snapshots of a directory without
having to manually create a snapshot every time. You can also assign an expiration
period that determines when SnapshotIQ deletes each automatically generated
snapshot.
Snapshot aliases
A snapshot alias is a logical pointer to a snapshot. If you specify an alias for a snapshot
schedule, the alias will always point to the most recent snapshot generated by that
schedule. Assigning a snapshot alias allows you to quickly identify and access the most
recent snapshot generated according to a snapshot schedule.
If you allow clients to access snapshots through an alias, you can reassign the alias to
redirect clients to other snapshots. In addition to assigning snapshot aliases to
snapshots, you can also assign snapshot aliases to the live version of the file system.
This can be useful if clients are accessing snapshots through a snapshot alias, and you
want to redirect the clients to the live version of the file system.
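For example, if clients access snapshots through an alias named latestWeekly, a command of the following form reassigns that alias to the live file system; the alias name is illustrative, and the --target option is described later in this chapter under the snapshot alias commands:
isi snapshot aliases modify latestWeekly --target LIVE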
It is recommended that you do not disable the snapshot delete job. Disabling the
snapshot delete job prevents unused disk space from being freed and can also cause
performance degradation.
Deletion type: Ordered deletion (for mostly static data)
   Snapshot frequency: Every hour
   Snapshot time: Beginning at 12:00 AM, ending at 11:59 PM
   Snapshot expiration: 1 month
   Max snapshots retained: 720

Deletion type: Unordered deletion (for frequently modified data)
   Max snapshots retained: 27
   Snapshot frequency and time:
      Every other hour, beginning at 12:00 AM and ending at 11:59 PM; expires after 1 day
      Every day, at 12:00 AM; expires after 1 week
      Every week, Saturday at 12:00 AM; expires after 1 month
      Every month, Saturday at 12:00 AM; expires after 3 months
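As a sketch of the unordered-deletion configuration above, a command of the following form creates the every-other-hour schedule that is shown later in this chapter; the path and naming pattern come from that example, and the positional argument order <name> <path> <pattern> <schedule> follows the isi snapshot schedules create options described later in this chapter:
isi snapshot schedules create every-other-hour /ifs/data/media \
EveryOtherHourBackup_%m-%d-%Y_%H:%M "Every day every 2 hours" --duration 1D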
File clones
SnapshotIQ enables you to create file clones that share blocks with existing files in order
to save space on the cluster. A file clone usually consumes less space and takes less
time to create than a file copy. Although you can clone files from snapshots, clones are
primarily used internally by OneFS.
The blocks that are shared between a clone and cloned file are contained in a hidden file
called a shadow store. Immediately after a clone is created, all data originally contained
in the cloned file is transferred to a shadow store. Because both files reference all blocks
from the shadow store, the two files consume no more space than the original file; the
clone does not take up any additional space on the cluster. However, if the cloned file or
clone is modified, the file and clone will share only blocks that are common to both of
them, and the modified, unshared blocks will occupy additional space on the cluster.
Over time, the shared blocks contained in the shadow store might become useless if
neither the file nor clone references the blocks. The cluster routinely deletes blocks that
are no longer needed. You can force the cluster to delete unused blocks at any time by
running the ShadowStoreDelete job.
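For example, the following command, which also appears in the snapshot deletion procedure later in this chapter, starts the job manually:
isi job jobs start shadowstoredelete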
Clones cannot contain alternate data streams (ADS). If you clone a file that contains
alternate data streams, the clone will not contain the alternate data streams.
Shadow-store considerations
Shadow stores are hidden files that are referenced by cloned and deduplicated files. Files
that reference shadow stores behave differently than other files.
- When files that reference shadow stores are replicated to another Isilon cluster or
  backed up to a Network Data Management Protocol (NDMP) backup device, the
  shadow stores are not transferred to the target Isilon cluster or backup device. The
  files are transferred as if they contained the data that they reference from shadow
  stores. On the target Isilon cluster or backup device, the files consume the same
  amount of space as if they had not referenced shadow stores.
- When OneFS creates a shadow store, OneFS assigns the shadow store to a storage
  pool of a file that references the shadow store. If you delete the storage pool that a
  shadow store resides on, the shadow store is moved to a pool occupied by another
  file that references the shadow store.
- OneFS does not delete a shadow-store block immediately after the last reference to
  the block is deleted. Instead, OneFS waits until the ShadowStoreDelete job is run to
  delete the unreferenced block. If a large number of unreferenced blocks exist on the
  cluster, OneFS might report a negative deduplication savings until the
  ShadowStoreDelete job is run.
- Shadow stores are protected at least as much as the most protected file that
  references them. For example, if one file that references a shadow store resides in a
  storage pool with +2 protection and another file that references the shadow store
  resides in a storage pool with +3 protection, the shadow store is protected at +3.
- Quotas account for files that reference shadow stores as if the files contained the
  data referenced from shadow stores; from the perspective of a quota, shadow-store
  references do not exist. However, if a quota includes data protection overhead, the
  quota does not account for the data protection overhead of shadow stores.
Snapshot locks
A snapshot lock prevents a snapshot from being deleted. If a snapshot has one or more
locks applied to it, the snapshot cannot be deleted and is referred to as a locked
snapshot. If the duration period of a locked snapshot expires, OneFS will not delete the
snapshot until all locks on the snapshot have been deleted.
OneFS applies snapshot locks to ensure that snapshots generated by OneFS applications
are not deleted prematurely. For this reason, it is recommended that you do not delete
snapshot locks or modify the duration period of snapshot locks.
A limited number of locks can be applied to a snapshot at a time. If you create snapshot
locks, the limit for a snapshot might be reached, and OneFS could be unable to apply a
snapshot lock when necessary. For this reason, it is recommended that you do not create
snapshot locks.
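If you need to check whether locks exist on a snapshot, a command of the following form lists them; the snapshot name is illustrative, and the isi snapshot locks list options are described later in this chapter:
isi snapshot locks list SnapshotApril2014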
Snapshot reserve
The snapshot reserve enables you to set aside a minimum percentage of the cluster
storage capacity specifically for snapshots. If specified, all other OneFS operations are
unable to access the percentage of cluster capacity that is reserved for snapshots.
Note
The snapshot reserve does not limit the amount of space that snapshots can consume on
the cluster. Snapshots can consume more than the percentage of storage capacity specified by
the snapshot reserve. It is recommended that you do not specify a snapshot reserve.
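You can check the currently configured reserve, which appears as a percentage in the output of the isi snapshot settings view command described later in this chapter:
isi snapshot settings view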
The following table describes the snapshot functionality that is available depending on whether a SnapshotIQ license is active on the cluster:

Functionality                    Inactive license   Active license
Create snapshots and schedules   No                 Yes
Configure SnapshotIQ settings    No                 Yes
View snapshot schedules          Yes                Yes
Delete snapshots                 Yes                Yes
Access snapshot data             Yes                Yes
View snapshots                   Yes                Yes
If a SnapshotIQ license becomes inactive, you will no longer be able to create new
snapshots, all snapshot schedules will be disabled, and you will not be able to modify
snapshots or snapshot settings. However, you will still be able to delete snapshots and
access data contained in snapshots.
It is recommended that you create SnapRevert domains for directories while the directories
are empty. Creating a domain for a directory that contains less data takes less time.
Create a snapshot
You can create a snapshot of a directory.
Procedure
1. Run the isi snapshot snapshots create command.
The following command creates a snapshot for /ifs/data/media:
isi snapshot snapshots create /ifs/data/media --name media-snap
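If you want the snapshot to be deleted automatically, you can also set an expiration when you create it; the following is a sketch that assumes the create command accepts the same --expires value format documented for isi snapshot snapshots modify later in this chapter:
isi snapshot snapshots create /ifs/data/media --name media-snap --expires 2014-08-01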
You can include the following variables in a snapshot naming pattern:

%A
The day of the week that the snapshot was created on. For example, if the snapshot is
generated on a Sunday, %A is replaced with Sunday.
%a
The abbreviated day of the week. For example, if the snapshot is generated
on a Sunday, %a is replaced with Sun.
%B
The name of the month that the snapshot was created in. For example, if the snapshot is
generated in September, %B is replaced with September.
%b
The abbreviated name of the month. For example, if the snapshot is generated in
September, %b is replaced with Sep.
%C
The first two digits of the year. For example, if the snapshot is created in
2014, %C is replaced with 20.
%c
The time and date that the snapshot was created. This variable is equivalent to
specifying %a %b %e %T %Y.
%d
The two-digit day of the month that the snapshot was created on.
%e
The day of the month that the snapshot was created on. A single-digit day is preceded
by a blank space.
%F
The date that the snapshot was created. This variable is equivalent to specifying
%Y-%m-%d.
%G
The year that the snapshot was created in, based on weeks that begin on Monday (the
ISO 8601 week-numbering year).
%g
The last two digits of the year that the snapshot was created in, based on weeks that
begin on Monday (the ISO 8601 week-numbering year).
%H
The hour. The hour is represented on the 24-hour clock. Single-digit hours are
preceded by a zero. For example, if a snapshot is created at 1:45 AM, %H is
replaced with 01.
%h
The abbreviated name of the month. This variable is equivalent to specifying %b.
%I
The hour represented on the 12-hour clock. Single-digit hours are preceded
by a zero. For example, if a snapshot is created at 1:45 PM, %I is replaced
with 01.
%j
The numerical day of the year that the snapshot was created on. Numbers range from
001 to 366.
%k
The hour represented on the 24-hour clock. Single-digit hours are preceded
by a blank space.
%l
The hour represented on the 12-hour clock. Single-digit hours are preceded
by a blank space. For example, if a snapshot is created at 1:45 AM, %l is
replaced with 1.
%M
The two-digit minute that the snapshot was created at.
%m
The two-digit month that the snapshot was created in.
%p
AM or PM.
%{PolicyName} The name of the replication policy that the snapshot was created for. This
variable is valid only if you are specifying a snapshot naming pattern for a
replication policy.
%R
The time that the snapshot was created. This variable is equivalent to specifying
%H:%M.
%r
The time that the snapshot was created, including AM or PM. This variable is
equivalent to specifying %I:%M:%S %p.
%S
The two-digit second that the snapshot was created at.
%s
The number of seconds since the epoch (UNIX or POSIX time) at which the snapshot
was created.
%{SrcCluster} The name of the source cluster of the replication policy that the snapshot was
created for. This variable is valid only if you are specifying a snapshot naming
pattern for a replication policy.
%T
The time that the snapshot was created. This variable is equivalent to specifying
%H:%M:%S.
%U
The two-digit numerical week of the year that the snapshot was created in. Numbers
range from 00 to 53. The first day of the week is calculated as Sunday.
%u
The numerical day of the week. Numbers range from 1 to 7. The first day of
the week is calculated as Monday. For example, if a snapshot is created on
Sunday, %u is replaced with 7.
%V
The two-digit numerical week of the year that the snapshot was created in.
Numbers range from 01 to 53. The first day of the week is calculated as
Monday. If the week of January 1 is four or more days in length, then that
week is counted as the first week of the year.
%v
The day that the snapshot was created. This variable is equivalent to
specifying %e-%b-%Y.
%W
The two-digit numerical week of the year that the snapshot was created in.
Numbers range from 00 to 53. The first day of the week is calculated as
Monday.
%w
The numerical day of the week that the snapshot was created on. Numbers
range from 0 to 6. The first day of the week is calculated as Sunday. For
example, if the snapshot was created on Sunday, %w is replaced with 0.
%X
The time that the snapshot was created. This variable is equivalent to
specifying %H:%M:%S.
%Y
The year that the snapshot was created in.
%y
The last two digits of the year that the snapshot was created in. For example,
if the snapshot was created in 2014, %y is replaced with 14.
%Z
The time zone that the snapshot was created in.
%z
The offset from coordinated universal time (UTC) of the time zone that the
snapshot was created in. If preceded by a plus sign, the time zone is east of
UTC. If preceded by a minus sign, the time zone is west of UTC.
%+
The time and date that the snapshot was created. This variable is equivalent
to specifying %a %b %e %X %Z %Y.
%%
Escapes a percent sign. For example, 100%% is replaced with 100%.
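For example, the naming pattern EveryOtherHourBackup_%m-%d-%Y_%H:%M, which is used by the schedule shown later in this chapter, produces the following name for a snapshot generated at 6:00 PM on July 16, 2013:
EveryOtherHourBackup_07-16-2013_18:00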
Managing snapshots
You can delete and view snapshots. You can also modify the name, duration period, and
alias of an existing snapshot. However, you cannot modify the data contained in a
snapshot; the data contained in a snapshot is read-only.
Delete a snapshot
You can delete a snapshot if you no longer want to access the data contained in the
snapshot.
OneFS frees disk space occupied by deleted snapshots when the SnapshotDelete job is
run. Also, if you delete a snapshot that contains clones or cloned files, data in a shadow
store might no longer be referenced by files on the cluster; OneFS deletes unreferenced
data in a shadow store when the ShadowStoreDelete job is run. OneFS routinely runs
both the ShadowStoreDelete and SnapshotDelete jobs. However, you can also manually
run the jobs at any time.
Procedure
1. Delete a snapshot by running the isi snapshot snapshots delete command.
The following command deletes a snapshot named OldSnapshot:
isi snapshot snapshots delete OldSnapshot
2. Optional: To increase the speed at which deleted snapshot data is freed on the
cluster, start the SnapshotDelete job by running the following command:
isi job jobs start snapshotdelete
3. To increase the speed at which deleted data shared between deduplicated and cloned
files is freed on the cluster, start the ShadowStoreDelete job by running the following
command:
isi job jobs start shadowstoredelete
View snapshots
You can view a list of snapshots or detailed information about a specific snapshot.
Procedure
1. View all snapshots by running the following command:
isi snapshot snapshots list
2. Optional: To view detailed information about a specific snapshot, run the isi
snapshot snapshots view command.
The following command displays detailed information about
HourlyBackup_07-15-2013_22:00:
isi snapshot snapshots view HourlyBackup_07-15-2013_22:00
          ID: 14
        Name: HourlyBackup_07-15-2013_22:00
        Path: /ifs/data/media
   Has Locks: No
    Schedule: hourly
     Created: 2013-07-15T22:00:10
     Expires: 2013-08-14T22:00:00
        Size: 0b
Shadow Bytes: 0b
   % Reserve: 0.00%
% Filesystem: 0.00%
       State: active
Snapshot information
You can view information about snapshots through the output of the isi snapshot
snapshots list command.
ID
The ID of the snapshot.
Name
The name of the snapshot.
Path
The path of the directory contained in the snapshot.
Revert a snapshot
You can revert a directory back to the state it was in when a snapshot was taken.
Before you begin
- Create a SnapRevert domain for the directory that you want to revert.
- Create a snapshot of the directory that you want to revert.
Procedure
1. Optional: To identify the ID of the snapshot you want to revert, run the isi
snapshot snapshots view command.
The following command displays the ID of HourlyBackup_07-15-2014_23:00:
isi snapshot snapshots view HourlyBackup_07-15-2014_23:00
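The revert itself is performed by the SnapRevert job, which is started through the job engine. The following is a sketch only; it assumes the job accepts the snapshot ID through a --snapid option, and the ID shown is illustrative, so verify the option name and ID on your cluster:
isi job jobs start snaprevert --snapid 14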
You can access up to 64 snapshots of a directory through Windows explorer, starting with
the most recent snapshot. To access more than 64 snapshots for a directory, access the
cluster through a UNIX command line.
Procedure
1. In Windows Explorer, navigate to the directory that you want to restore or the directory
that contains the file that you want to restore.
2. Right-click the folder, and then click Properties.
3. In the Properties window, click the Previous Versions tab.
4. Select the version of the folder that you want to restore or the version of the folder
that contains the version of the file that you want to restore.
5. Restore the version of the file or directory.
- To copy the selected directory to another location, click Copy, and then specify a
  location to copy the directory to.
- To restore a specific file, click Open, and then copy the file into the original
  directory, replacing the existing copy with the snapshot version.
3. Clone a file from the snapshot by running the cp command with the -c option.
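For example, the following sketch clones a file from a snapshot into the live file system; the snapshot and file names are illustrative, and the example assumes the snapshot contents are reachable through the /ifs/.snapshot directory:
cp -c /ifs/.snapshot/media-snap/data/media/file.txt /ifs/data/media/file_clone.txt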
           ID: 1
         Name: every-other-hour
         Path: /ifs/data/media
      Pattern: EveryOtherHourBackup_%m-%d-%Y_%H:%M
     Schedule: Every day every 2 hours
     Duration: 1D
     Next Run: 2013-07-16T18:00:00
Next Snapshot: EveryOtherHourBackup_07-16-2013_18:00
If a snapshot alias references the live version of the file system, the Target ID is
-1.
2. Optional: View information about a specific snapshot alias by running the isi snapshot
aliases view command.
The following command displays information about latestWeekly:
isi snapshot aliases view latestWeekly
It is recommended that you do not create, delete, or modify snapshot locks unless you
are instructed to do so by Isilon Technical Support.
Deleting a snapshot lock that was created by OneFS might result in data loss. If you
delete a snapshot lock that was created by OneFS, it is possible that the corresponding
snapshot might be deleted while it is still in use by OneFS. If OneFS cannot access a
snapshot that is necessary for an operation, the operation will malfunction and data loss
might result. Modifying the expiration date of a snapshot lock created by OneFS can also
result in data loss because the corresponding snapshot can be deleted prematurely.
It is recommended that you do not modify the expiration dates of snapshot locks.
Procedure
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Run the isi snapshot locks modify command.
The following command sets an expiration date two days from the present date for a
snapshot lock with an ID of 1 that is applied to a snapshot named
SnapshotApril2014:
isi snapshot locks modify SnapshotApril2014 1 --expires 2D
The system prompts you to confirm that you want to delete the snapshot lock.
3. Type yes and then press ENTER.
You can view the current snapshot settings by running the isi snapshot settings view
command. The system displays output similar to the following example:
                Service: Yes
             Autocreate: Yes
             Autodelete: Yes
                Reserve: 0.00%
Global Visible Accessible: Yes
    NFS Root Accessible: Yes
       NFS Root Visible: Yes
  NFS Subdir Accessible: Yes
    SMB Root Accessible: Yes
       SMB Root Visible: Yes
  SMB Subdir Accessible: Yes
  Local Root Accessible: Yes
     Local Root Visible: Yes
Local Subdir Accessible: Yes
SnapshotIQ settings
SnapshotIQ settings determine how snapshots behave and can be accessed.
The following settings are displayed in the output of the isi snapshot settings
view command:
Service
Determines whether SnapshotIQ is enabled on the cluster.
Autocreate
Determines whether snapshots are automatically generated according to snapshot
schedules.
Snapshot commands
You can control and access snapshots through the snapshot commands. Most snapshot
commands apply specifically to the SnapshotIQ tool and are available only if a
SnapshotIQ license is configured on the cluster.
You can include the following variables in a snapshot naming pattern:

%A
The day of the week that the snapshot was created on. For example, if the snapshot is
generated on a Sunday, %A is replaced with Sunday.
%a
The abbreviated day of the week. For example, if the snapshot is generated
on a Sunday, %a is replaced with Sun.
%B
The name of the month that the snapshot was created in. For example, if the snapshot is
generated in September, %B is replaced with September.
%b
The abbreviated name of the month. For example, if the snapshot is generated in
September, %b is replaced with Sep.
%C
The first two digits of the year. For example, if the snapshot is created in
2014, %C is replaced with 20.
%c
The time and date that the snapshot was created. This variable is equivalent to
specifying %a %b %e %T %Y.
%d
The two-digit day of the month that the snapshot was created on.
%e
The day of the month that the snapshot was created on. A single-digit day is preceded
by a blank space.
%F
The date that the snapshot was created. This variable is equivalent to specifying
%Y-%m-%d.
%G
The year that the snapshot was created in, based on weeks that begin on Monday (the
ISO 8601 week-numbering year).
%g
The last two digits of the year that the snapshot was created in, based on weeks that
begin on Monday (the ISO 8601 week-numbering year).
%H
The hour. The hour is represented on the 24-hour clock. Single-digit hours are
preceded by a zero. For example, if a snapshot is created at 1:45 AM, %H is
replaced with 01.
%h
The abbreviated name of the month. This variable is equivalent to specifying %b.
%I
The hour represented on the 12-hour clock. Single-digit hours are preceded
by a zero. For example, if a snapshot is created at 1:45 PM, %I is replaced
with 01.
%j
The numerical day of the year that the snapshot was created on. Numbers range from
001 to 366.
%k
The hour represented on the 24-hour clock. Single-digit hours are preceded
by a blank space.
%l
The hour represented on the 12-hour clock. Single-digit hours are preceded
by a blank space. For example, if a snapshot is created at 1:45 AM, %l is
replaced with 1.
%M
The two-digit minute that the snapshot was created at.
%m
The two-digit month that the snapshot was created in.
%p
AM or PM.
%{PolicyName} The name of the replication policy that the snapshot was created for. This
variable is valid only if you are specifying a snapshot naming pattern for a
replication policy.
%R
The time that the snapshot was created. This variable is equivalent to specifying
%H:%M.
%r
The time that the snapshot was created, including AM or PM. This variable is
equivalent to specifying %I:%M:%S %p.
%S
The two-digit second that the snapshot was created at.
%s
The number of seconds since the epoch (UNIX or POSIX time) at which the snapshot
was created.
%{SrcCluster} The name of the source cluster of the replication policy that the snapshot was
created for. This variable is valid only if you are specifying a snapshot naming
pattern for a replication policy.
%T
The time that the snapshot was created. This variable is equivalent to specifying
%H:%M:%S.
%U
The two-digit numerical week of the year that the snapshot was created in. Numbers
range from 00 to 53. The first day of the week is calculated as Sunday.
%u
The numerical day of the week. Numbers range from 1 to 7. The first day of
the week is calculated as Monday. For example, if a snapshot is created on
Sunday, %u is replaced with 7.
%V
The two-digit numerical week of the year that the snapshot was created in.
Numbers range from 01 to 53. The first day of the week is calculated as
Monday. If the week of January 1 is four or more days in length, then that
week is counted as the first week of the year.
%v
The day that the snapshot was created. This variable is equivalent to
specifying %e-%b-%Y.
%W
The two-digit numerical week of the year that the snapshot was created in.
Numbers range from 00 to 53. The first day of the week is calculated as
Monday.
%w
The numerical day of the week that the snapshot was created on. Numbers
range from 0 to 6. The first day of the week is calculated as Sunday. For
example, if the snapshot was created on Sunday, %w is replaced with 0.
%X
The time that the snapshot was created. This variable is equivalent to
specifying %H:%M:%S.
%Y
The year that the snapshot was created in.
%y
The last two digits of the year that the snapshot was created in. For example,
if the snapshot was created in 2014, %y is replaced with 14.
%Z
The time zone that the snapshot was created in.
%z
The offset from coordinated universal time (UTC) of the time zone that the
snapshot was created in. If preceded by a plus sign, the time zone is east of
UTC. If preceded by a minus sign, the time zone is west of UTC.
%+
The time and date that the snapshot was created. This variable is equivalent
to specifying %a %b %e %X %Z %Y.
%%
Escapes a percent sign. For example, 100%% is replaced with 100%.
Options
<name>
Specifies a name for the snapshot schedule.
<path>
Specifies the path of the directory to include in the snapshots.
<pattern>
Specifies a naming pattern for snapshots created according to the schedule.
<schedule>
Specifies how often snapshots are created.
Specify in the following format:
"<interval> [<frequency>]"
You can optionally append "st", "th", or "rd" to <integer>. For example, you can specify
"Every 1st month"
Specify <day> as any day of the week or a three-letter abbreviation for the day. For
example, both "saturday" and "sat" are valid.
--alias <alias>
Specifies an alias for the latest snapshot generated based on the schedule. The alias
enables you to quickly locate the most recent snapshot that was generated according
to the schedule.
Specify as any string.
{--duration | -x} <duration>
Specifies how long snapshots generated according to the schedule are stored on the
cluster before OneFS automatically deletes them.
Specify in the following format:
<integer><units>
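For example, the following sketch creates a weekly schedule and assigns an alias to its most recent snapshot; the schedule name, path, pattern, and alias are illustrative, the positional argument order is assumed to be <name> <path> <pattern> <schedule> as listed above, and the duration 1M assumes that M denotes months in the <integer><units> format:
isi snapshot schedules create weekly-media /ifs/data/media \
WeeklyBackup_%m-%d-%Y "Every Saturday at 12:00 AM" \
--duration 1M --alias latestWeekly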
Options
<schedule-name>
Modifies the specified snapshot schedule.
Specify as a snapshot schedule name or ID.
--name <name>
Specifies a new name for the schedule.
Specify as any string.
{--alias | -a} <name>
Specifies an alias for the latest snapshot generated based on the schedule. The alias
enables you to quickly locate the most recent snapshot that was generated according
to the schedule. If specified, the specified alias will be applied to the next snapshot
generated by the schedule, and all subsequently generated snapshots.
Specify as any string.
--path <path>
Specifies a new directory path for this snapshot schedule. If specified, snapshots
generated by the schedule will contain only this directory path.
Specify as a directory path.
--pattern <naming-pattern>
Specifies a pattern by which snapshots created according to the schedule are named.
--schedule <schedule>
Specifies how often snapshots are created.
Specify in the following format:
"<interval> [<frequency>]"
You can optionally append "st", "th", or "rd" to <integer>. For example, you can specify
"Every 1st month"
Specify <day> as any day of the week or a three-letter abbreviation for the day. For
example, both "saturday" and "sat" are valid.
{--duration | -x} <duration>
Specifies how long snapshots generated according to the schedule are stored on the
cluster before OneFS automatically deletes them.
Specify in the following format:
<integer><units>
Options
<schedule-name>
Deletes the specified snapshot schedule.
Specify as a snapshot schedule name or ID.
{--force | -f}
Does not prompt you to confirm that you want to delete this snapshot schedule.
{--verbose | -v}
Displays a message confirming that the snapshot schedule was deleted.
[--no-header]
[--no-footer]
[--verbose]
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--sort <attribute>
Sorts output displayed by the specified attribute.
The following values are valid:
id
Sorts output by the ID of a snapshot schedule.
name
Sorts output alphabetically by the name of a snapshot schedule.
path
Sorts output by the absolute path of the directory contained by snapshots
created according to a schedule.
pattern
Sorts output alphabetically by the snapshot naming pattern assigned to
snapshots generated according to a schedule.
schedule
Sorts output alphabetically by the schedule. For example, "Every week" precedes
"Yearly on January 3rd"
duration
Sorts output by the length of time that snapshots created according to the
schedule endure on the cluster before being automatically deleted.
alias
Sorts output alphabetically by the name of the alias assigned to the most recent
snapshot generated according to the schedule.
next_run
Sorts output by the next time that a snapshot will be created according to the
schedule.
next_snapshot
Sorts output alphabetically by the name of the snapshot that is scheduled to be
created next.
{--descending | -d}
Displays output in reverse order.
--format <output-format>
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<schedule-name>
Displays information about the specified snapshot schedule.
Specify as a snapshot schedule name or ID.
Options
{--begin | -b} <timestamp>
Displays only snapshots that are scheduled to be generated after the specified date.
Specify <timestamp> in the following format:
<yyyy>-<mm>-<dd>[T<HH>:<MM>[:<SS>]]
If this option is not specified, the output displays a list of snapshots that are
scheduled to be generated after the current time.
{--end | -e} <time>
Displays only snapshots that are scheduled to be generated before the specified
date.
Specify <time> in the following format:
<yyyy>-<mm>-<dd>[T<HH>:<MM>[:<SS>]]
If this option is not specified, the output displays a list of snapshots that are
scheduled to be generated before 30 days after the begin time.
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format <output-format>
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<path>
Options
<snapshot>
Modifies the specified snapshot or snapshot alias.
Specify as the name or ID of a snapshot or snapshot alias.
--name <name>
Specifies a new name for the snapshot or snapshot alias.
Specify as any string.
{--expires | -x} {<timestamp> | <duration>}
Specifies when OneFS will automatically delete this snapshot.
Specify <timestamp> in the following format:
<yyyy>-<mm>-<dd>[T<HH>:<MM>[:<SS>]]
{--verbose | -v}
Displays a message confirming that the snapshot or snapshot alias was modified.
Options
--all
Deletes all snapshots.
--snapshot <snapshot>
Deletes the specified snapshot.
Specify as a snapshot name or ID.
--schedule <schedule>
Deletes all snapshots created according to the specified schedule.
Specify as a snapshot schedule name or ID.
--type <type>
Deletes all snapshots of the specified type.
The following types are valid:
alias
Deletes all snapshot aliases.
real
Deletes all snapshots.
{--force | -f}
Does not prompt you to confirm that you want to delete the snapshot.
{--verbose | -v}
Displays a message confirming that the snapshot was deleted.
Examples
The following command deletes newSnap1:
isi snapshot snapshots delete --snapshot newSnap1
[--no-footer]
[--verbose]
Options
--state <state>
Displays only snapshots and snapshot aliases that exist in the specified state.
The following states are valid:
all
Displays all snapshots and snapshot aliases that are currently occupying space
on the cluster.
active
Displays only snapshots and snapshot aliases that have not been deleted.
deleting
Displays only snapshots that have been deleted but are still occupying space on
the cluster. The space occupied by deleted snapshots will be freed the next time
the snapshot delete job is run.
{--limit | -l} <integer>
Displays no more than the specified number of items.
--sort <attribute>
Sorts command output by the specified attribute.
The following attributes are valid:
id
Sorts output by the ID of a snapshot.
name
Sorts output alphabetically by the name of a snapshot.
path
Sorts output by the absolute path of the directory contained in a snapshot.
has_locks
Sorts output by whether any snapshot locks have been applied to a snapshot.
schedule
If a snapshot was generated according to a schedule, sorts output alphabetically
by the name of the snapshot schedule.
target_id
If a snapshot is an alias, sorts output by the snapshot ID of the target snapshot
instead of the snapshot ID of the alias.
target_name
If a snapshot is an alias, sorts output by the name of the target snapshot instead
of the name of the alias.
created
Sorts output by the time that a snapshot was created.
expires
Sorts output by the time at which a snapshot is scheduled to be automatically
deleted.
size
Sorts output by the amount of disk space taken up by a snapshot.
shadow_bytes
Sorts output based on the amount of data that a snapshot references from
shadow stores. Snapshots reference shadow store data if a file contained in a
snapshot is cloned or a snapshot is taken of a cloned file.
pct_reserve
Sorts output by the percentage of the snapshot reserve that a snapshot
occupies.
pct_filesystem
Sorts output by the percent of the file system that a snapshot occupies.
state
Sorts output based on the state of snapshots.
{--descending | -d}
Displays output in reverse order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table output without headers.
{--no-footer | -z}
Displays table output without footers. Footers display snapshot totals, such as the
total amount of storage space consumed by snapshots.
{--verbose | -v}
Displays more detailed information.
Options
<snapshot>
Displays information about the specified snapshot.
Specify as a snapshot name or ID.
Options
--service {enable | disable}
Determines whether snapshots can be generated.
--reserve <percentage>
Specifies a minimum percentage of cluster storage capacity to set aside for snapshots.
Note
This option limits only the amount of space available to applications other than
SnapshotIQ. It does not limit the amount of space that snapshots are allowed to
occupy. Snapshots can occupy more than the specified percentage of system storage
space.
--global-visible-accessible {yes | no}
Specifying yes causes snapshot directories and sub-directories to be visible and
accessible through all protocols, overriding all other snapshot visibility and
accessibility settings. Specifying no causes visibility and accessibility settings to be
controlled through the other snapshot visibility and accessibility settings.
--nfs-root-accessible {yes | no}
Determines whether snapshot directories are accessible through NFS.
--nfs-root-visible {yes | no}
Determines whether snapshot directories are visible through NFS.
--nfs-subdir-accessible {yes | no}
Determines whether snapshot subdirectories are accessible through NFS.
--smb-root-accessible {yes | no}
Options
There are no options for this command.
It is recommended that you do not create snapshot locks and do not use this command.
If the maximum number of locks on a snapshot is reached, some applications, such as
SyncIQ, might not function properly.
Syntax
isi snapshot locks create <snapshot>
[--comment <string>]
[--expires {<timestamp> | <duration>}]
[--verbose]
Options
<snapshot>
Specifies the name of the snapshot to apply this lock to.
{--comment | -c} <string>
Specifies a comment to describe the lock.
Specify as any string.
{--expires | -x} {<timestamp> | <duration>}
Specifies when the lock will be automatically deleted by the system.
If this option is not specified, the snapshot lock will exist indefinitely.
Specify <timestamp> in the following format:
<yyyy>-<mm>-<dd>[T<HH>:<MM>[:<SS>]]
It is recommended that you do not modify the expiration date of snapshot locks and do
not run this command. Modifying the expiration date of a snapshot lock that was created
by OneFS might result in data loss.
Syntax
isi snapshot locks modify <snapshot> <id>
{--expires {<timestamp> | <duration>} | --clear-expires}
[--verbose]
Options
<snapshot>
Modifies a snapshot lock that has been applied to the specified snapshot.
Specify as a snapshot name or ID.
<id>
Modifies the snapshot lock of the specified ID.
{--expires | -x} {<timestamp> | <duration>}
Specifies when the lock will be automatically deleted by the system.
If this option is not specified, the snapshot lock will exist indefinitely.
It is recommended that you do not delete snapshot locks and do not run this command.
Deleting a snapshot lock that was created by OneFS might result in data loss.
Syntax
isi snapshot locks delete <snapshot> <id>
[--force]
[--verbose]
Options
<snapshot>
Deletes a snapshot lock that has been applied to the specified snapshot.
Specify as a snapshot name or ID.
<id>
Deletes the snapshot lock of the specified ID.
{--force | -f}
Does not prompt you to confirm that you want to delete this snapshot lock.
{--verbose | -v}
Displays a message confirming that the snapshot lock was deleted.
Options
<snapshot>
Displays all locks belonging to the specified snapshot.
Specify as a snapshot name.
{--limit | -l} <integer>
Displays no more than the specified number of items.
--sort <attribute>
Sorts output displayed by the specified attribute.
The following values are valid:
id
Sorts output by the ID of a snapshot lock.
comment
Sorts output alphabetically by the description of a snapshot lock.
expires
Sorts output by the length of time that a lock endures on the cluster before being
automatically deleted.
count
Sorts output by the number of times that a lock is held.
{--descending | -d}
Displays output in reverse order.
--format <output-format>
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<name>
Specifies the snapshot to view locks for.
Specify as a snapshot name or ID.
<id>
Displays the specified lock.
Specify as a snapshot lock ID.
Options
<name>
Specifies a name for the snapshot alias.
<snapshot>
Assigns the alias to the specified snapshot or to the live version of the file system.
Specify as a snapshot ID or name. To target the live version of the file system, specify
LIVE.
{--verbose | -v}
Displays more detailed information.
Options
<alias>
Modifies the specified snapshot alias.
Specify as a snapshot-alias name or ID.
--name <name>
Specifies a new name for the snapshot alias.
--target <snapshot>
Reassigns the snapshot alias to the specified snapshot or the live version of the file
system.
Specify as a snapshot ID or name. To target the live version of the file system, specify
LIVE.
{--verbose | -v}
Displays more detailed information.
Options
<alias>
Deletes the snapshot alias of the specified name.
Specify as a snapshot-alias name or ID.
--all
Deletes all snapshot aliases.
{--force | -f}
Runs the command without prompting you to confirm that you want to delete the
snapshot alias.
{--verbose | -v}
Displays more detailed information.
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--sort <attribute>
Sorts output displayed by the specified attribute.
The following values are valid:
id
Sorts output by the ID of the snapshot alias.
name
Sorts output by the name of the snapshot alias.
target_id
Sorts output by the ID of the snapshot that the snapshot alias is assigned to.
target_name
Sorts output by the name of the snapshot that the snapshot alias is assigned to.
created
Sorts output by the date the snapshot alias was created.
{--descending | -d}
Displays output in reverse order.
--format <output-format>
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<alias>
Displays detailed information about the specified snapshot alias.
Specify as a snapshot-alias name or ID.
CHAPTER 12
Deduplication with SmartDedupe
Deduplication overview
The SmartDedupe software module enables you to save storage space on your cluster by
reducing redundant data. Deduplication maximizes the efficiency of your cluster by
decreasing the amount of storage required to store multiple files with similar blocks.
SmartDedupe deduplicates data by scanning an Isilon cluster for identical data blocks.
Each block is 8 KB. If SmartDedupe finds duplicate blocks, SmartDedupe moves a single
copy of the blocks to a hidden file called a shadow store. SmartDedupe then deletes the
duplicate blocks from the original files and replaces the blocks with pointers to the
shadow store.
Deduplication is applied at the directory level, targeting all files and directories
underneath one or more root directories. You can first assess a directory for
deduplication and determine the estimated amount of space you can expect to save. You
can then decide whether to deduplicate the directory. After you begin deduplicating a
directory, you can monitor how much space is saved by deduplication in real time.
SmartDedupe does not deduplicate files that are 32 KB and smaller, because doing so
would consume more cluster resources than the storage savings are worth. Each shadow
store can contain up to 255 blocks. Each block in a shadow store can be referenced
32000 times.
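Taken together, these limits mean that a single shadow store can hold at most 255 x 8 KB, or roughly 2 MB, of shared block data, so large deduplicated data sets are spread across many shadow stores.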
Deduplication jobs
Deduplication is performed by maintenance jobs referred to as deduplication jobs. You
can monitor and control deduplication jobs as you would any other maintenance job on
the cluster. Although the overall performance impact of deduplication is minimal, the
deduplication job consumes 256 MB of memory per node.
When a deduplication job is first run on a cluster, SmartDedupe samples blocks from
each file and creates index entries for those blocks. If the index entries of two blocks
match, SmartDedupe scans the blocks adjacent to the matching pair and then
deduplicates all duplicate blocks. After a deduplication job samples a file once, new
deduplication jobs will not sample the file again until the file is modified.
The first deduplication job you run might take significantly longer to complete than
subsequent deduplication jobs. The first deduplication job must scan all files under the
specified directories to generate the initial index. If subsequent deduplication jobs take a
long time to complete, this most likely indicates that a large amount of data is being
deduplicated. However, it can also indicate that clients are creating a large amount of
new data on the cluster. If a deduplication job is interrupted during the deduplication
process, the job will automatically restart the scanning process from where the job was
interrupted.
It is recommended that you run deduplication jobs when clients are not modifying data
on the cluster. If clients are continually modifying files on the cluster, the amount of
space saved by deduplication is minimal because the deduplicated blocks are constantly
removed from the shadow store. For most clusters, it is recommended that you start a
deduplication job every ten days.
The permissions required to modify deduplication settings are not the same as those
needed to run a deduplication job. Although a user must have the maintenance job
permission to run a deduplication job, the user must have the deduplication permission
to modify deduplication settings. By default, the deduplication job is configured to run at
a low priority.
Deduplication considerations
Deduplication can significantly increase the efficiency at which you store data. However,
the effect of deduplication varies depending on the cluster.
You can reduce redundancy on a cluster by running SmartDedupe. Deduplication creates
links that can impact the speed at which you can read from and write to files. In
particular, sequentially reading chunks smaller than 512 KB of a deduplicated file can be
significantly slower than reading the same small, sequential chunks of a non-deduplicated
file. This performance degradation applies only if you are reading non-cached data. For
cached data, the performance for deduplicated files is potentially better than for
non-deduplicated files. If you stream chunks larger than 512 KB, deduplication does
not significantly impact the read performance of the file. If you intend to stream 8 KB
or less of each file at a time, and you do not plan to stream the files concurrently, it is
recommended that you do not deduplicate the files.
Deduplication is most effective when applied to static or archived files and directories.
The less files are modified, the less negative effect deduplication has on the cluster. For
example, virtual machines often contain several copies of identical files that are rarely
modified. Deduplicating a large number of virtual machines can greatly reduce consumed
storage space.
Shadow-store considerations
Shadow stores are hidden files that are referenced by cloned and deduplicated files. Files
that reference shadow stores behave differently than other files.
Managing deduplication
You can manage deduplication on a cluster by first assessing how much space you can
save by deduplicating individual directories. After you determine which directories are
If you assess multiple directories, disk savings will not be differentiated by directory
in the deduplication report.
2. Start the assessment job by running the following command:
isi job jobs start dedupeassessment
4. View prospective space savings by running the isi dedupe reports view
command:
The following command displays the prospective savings recorded in a deduplication
report with an ID of 46:
isi dedupe reports view 46
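The directories to assess are specified with the --assess-paths option of the isi dedupe settings modify command, which is described later in this chapter; the following sketch assesses a single directory, and the path is illustrative:
isi dedupe settings modify --assess-paths /ifs/data/archive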
2. Optional: To modify the settings of the deduplication job, run the isi job types
modify command.
The following command configures the deduplication job to be run every Friday at
10:00 PM:
isi job types modify Dedupe --schedule "Every Friday at 10:00 PM"
Deduplication information
You can view information about how much disk space is being saved by deduplication.
The following information is displayed in the output of the isi dedupe stats
command:
Cluster Physical Size
The total amount of physical disk space on the cluster.
Cluster Used Size
The total amount of disk space currently occupied by data on the cluster.
Logical Size Deduplicated
The amount of disk space that has been deduplicated in terms of reported file sizes.
For example, if you have three identical files that are all 5 GB, the logical size
deduplicated is 15 GB.
Logical Saving
The amount of disk space saved by deduplication in terms of reported file sizes. For
example, if you have three identical files that are all 5 GB, the logical saving is 10
GB.
Estimated Size Deduplicated
The total amount of physical disk space that has been deduplicated, including
protection overhead and metadata. For example, if you have three identical files that
are all 5 GB, the estimated size deduplicated would be greater than 15 GB, because
of the disk space consumed by file metadata and protection overhead.
Deduplication commands
You can control data deduplication through the deduplication commands. Deduplication
commands are available only if you activate a SmartDedupe license.
Options
--paths <path>
Deduplicates files located under the specified root directories.
--clear-paths
Stops deduplication for all previously specified root directories. If you run the isi
dedupe settings modify command with this option, you must run the
command again with either --paths or --add-paths to resume deduplication.
--add-paths <path>
Deduplicates files located under the specified root directory in addition to directories
that are already being deduplicated.
--remove-paths <path>
Stops deduplicating the specified root directory.
--assess-paths <path>
Assesses how much space will be saved if files located under the specified root
directories are deduplicated.
--clear-assess-paths
Stops assessing how much space will be saved if previously specified root directories
are deduplicated. If you run the isi dedupe settings modify command with
this option, you must run the command again with either --paths or --add-paths
to resume deduplication.
--add-assess-paths <path>
Assesses how much space will be saved if the specified root directories are
deduplicated in addition to directories that are already being assessed.
--remove-assess-paths <path>
Stops assessing how much space will be saved if the specified root directories are
deduplicated.
{--verbose | -v}
Displays more detailed information.
Examples
The following command starts deduplicating /ifs/data/active and /ifs/data/
media:
isi dedupe settings modify --add-paths /ifs/data/active,/ifs/data/
media
Options
There are no options for this command.
Options
There are no options for this command.
Examples
To view information about deduplication space savings, run the following command:
isi dedupe stats
      Cluster Physical Size: 17.019G
          Cluster Used Size: 4.994G
  Logical Size Deduplicated: 13.36M
             Logical Saving: 11.13M
Estimated Size Deduplicated: 30.28M
  Estimated Physical Saving: 25.23M
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table output without headers.
{--no-footer | -z}
Displays table output without footers. Footers display snapshot totals, such as the
total amount of storage space consumed by snapshots.
{--verbose | -v}
Displays more detailed information.
Examples
To view a list of deduplication reports, run the following command:
isi dedupe reports list
Options
<job-id>
Displays the deduplication report for the deduplication job of the specified ID.
Examples
The following command displays a deduplication report:
isi dedupe reports view 12
CHAPTER 13
Data replication with SyncIQ
To prevent permissions errors, make sure that ACL policy settings are the same across
source and target clusters.
You can create two types of replication policies: synchronization policies and copy
policies. A synchronization policy maintains an exact replica of the source directory on
the target cluster. If a file or sub-directory is deleted from the source directory, the file or
directory is deleted from the target cluster when the policy is run again.
You can use synchronization policies to fail over and fail back data between source and
target clusters. When a source cluster becomes unavailable, you can fail over data on a
target cluster and make the data available to clients. When the source cluster becomes
available again, you can fail back the data to the source cluster.
A copy policy maintains recent versions of the files that are stored on the source cluster.
However, files that are deleted on the source cluster are not deleted from the target
cluster. Failback is not supported for copy policies. Copy policies are most commonly
used for archival purposes.
Copy policies enable you to remove files from the source cluster without losing those files
on the target cluster. Deleting files on the source cluster improves performance on the
source cluster while maintaining the deleted files on the target cluster. This can be useful
if, for example, your source cluster is being used for production purposes and your target
cluster is being used only for archiving.
After creating a job for a replication policy, SyncIQ must wait until the job completes
before it can create another job for the policy. Any number of replication jobs can exist on
a cluster at a given time; however, only five replication jobs can run on a source cluster at
the same time. If more than five replication jobs exist on a cluster, the first five jobs run
while the others are queued to run. The number of replication jobs that a single target
cluster can support concurrently is dependent on the number of workers available on the
target cluster.
You can replicate any number of files and directories with a single replication job. You
can prevent a large replication job from overwhelming the system by limiting the amount
of cluster resources and network bandwidth that data synchronization is allowed to
consume. Because each node in a cluster is able to send and receive data, the speed at
which data is replicated increases for larger clusters.
- You can accurately predict when modifications will be made to the data.
Configuring a policy to start when changes are made to the source directory can be useful
under the following conditions:
For policies that are configured to start whenever changes are made to the source
directory, SyncIQ checks the source directories every ten seconds. SyncIQ does not
account for excluded files or directories when detecting changes, so policies that exclude
files or directories from replication might be run unnecessarily. For example, assume that
newPolicy replicates /ifs/data/media but excludes /ifs/data/media/temp. If a
modification is made to /ifs/data/media/temp/file.txt, SyncIQ will run
newPolicy, but will not replicate /ifs/data/media/temp/file.txt.
If a policy is configured to start whenever changes are made to its source directory, and a
replication job fails, SyncIQ will wait one minute before attempting to run the policy
again. SyncIQ will increase this delay exponentially for each failure up to a maximum
delay of eight hours. You can override the delay by running the policy manually at any
time. After a job for the policy completes successfully, SyncIQ will resume checking the
source directory every ten seconds.
target cluster, the mark persists on the target cluster. When a replication policy is run,
SyncIQ checks the mark to ensure that data is being replicated to the correct location.
On the target cluster, you can manually break an association between a replication policy
and target directory. Breaking the association between a source and target cluster causes
the mark on the target cluster to be deleted. You might want to manually break a target
association if an association is obsolete. If you break the association of a policy, the
policy is disabled on the source cluster and you cannot run the policy. If you want to run
the disabled policy again, you must reset the replication policy.
Note
Breaking a policy association causes either a full or differential replication to occur the
next time you run the replication policy. During a full or differential replication, SyncIQ
creates a new association between the source and target clusters. Depending on the
amount of data being replicated, a full or differential replication can take a very long time
to complete.
number of workers per node to increase the speed at which data is replicated to the
target cluster.
You can also reduce resource consumption through file-operation rules that limit the rate
at which replication policies are allowed to send files. However, it is recommended that
you only create file-operation rules if the files you intend to replicate are predictably
similar in size and not especially large.
Replication reports
After a replication job completes, SyncIQ generates a report that contains detailed
information about the job, including how long the job ran, how much data was
transferred, and what errors occurred.
If a replication job is interrupted, SyncIQ might create a subreport about the progress
of the job so far. If the job is then restarted, SyncIQ creates another subreport about the
progress of the job until the job either completes or is interrupted again. SyncIQ creates a
subreport each time the job is interrupted until the job completes successfully. If multiple
subreports are created for a job, SyncIQ combines the information from the subreports
into a single report.
SyncIQ routinely deletes replication reports. You can specify the maximum number of
replication reports that SyncIQ retains and the length of time that SyncIQ retains
replication reports. If the maximum number of replication reports is exceeded on a
cluster, SyncIQ deletes the oldest report each time a new report is created.
You cannot customize the content of a replication report.
Note
If you delete a replication policy, SyncIQ automatically deletes any reports that were
generated for that policy.
Replication snapshots
SyncIQ generates snapshots to facilitate replication, failover, and failback between Isilon
clusters. Snapshots generated by SyncIQ can also be used for archival purposes on the
target cluster.
SyncIQ generates source snapshots to ensure that replication jobs do not transfer
unmodified data. When a job is created for a replication policy, SyncIQ checks whether it
is the first job created for the policy. If it is not the first job created for the policy, SyncIQ
compares the snapshot generated for the earlier job with the snapshot generated for the
new job.
SyncIQ replicates only data that has changed since the last time a snapshot was
generated for the replication policy. When a replication job is completed, SyncIQ deletes
the previous source-cluster snapshot and retains the most recent snapshot until the next
job is run.
Data failover
Data failover is the process of preparing data on a secondary cluster to be modified by
clients. After you fail over to a secondary cluster, you can redirect clients to modify their
data on the secondary cluster.
Before failover is performed, you must create and run a replication policy on the primary
cluster. You initiate the failover process on the secondary cluster. Failover is performed
per replication policy; to migrate data that is spread across multiple replication policies,
you must initiate failover for each replication policy.
You can use any replication policy to fail over. However, if the action of the replication
policy is set to copy, any file that was deleted on the primary cluster will be present on
the secondary cluster. When the client connects to the secondary cluster, all files that
were deleted on the primary cluster will be available to the client.
If you initiate failover for a replication policy while an associated replication job is
running, the failover operation completes but the replication job fails. Because data
might be in an inconsistent state, SyncIQ uses the snapshot generated by the last
successful replication job to revert data on the secondary cluster to the last recovery
point.
If a disaster occurs on the primary cluster, any modifications to data that were made after
the last successful replication job started are not reflected on the secondary cluster.
When a client connects to the secondary cluster, their data appears as it was when the
last successful replication job was started.
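Failover is initiated on the secondary cluster with the SyncIQ recovery commands; the following is a sketch only, assuming that this release provides the allow-write subcommand and using the weeklySync policy name from the examples later in this chapter:
isi sync recovery allow-write weeklySync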
Data failback
Data failback is the process of restoring clusters to the roles they occupied before a
failover operation. After data failback is complete, the primary cluster hosts clients and
replicates data to the secondary cluster for backup.
The first step in the failback process is updating the primary cluster with all of the
modifications that were made to the data on the secondary cluster. The next step in the
failback process is preparing the primary cluster to be accessed by clients. The final step
in the failback process is resuming data replication from the primary to the secondary
cluster. At the end of the failback process, you can redirect users to resume accessing
their data on the primary cluster.
You can fail back data with any replication policy that meets all of the following criteria:
The policy does not exclude any files or directories from replication.
the last completed replication job started. The RPO is never greater than the time it takes
for two consecutive replication jobs to run and complete.
If a disaster occurs while a replication job is running, the data on the secondary cluster is
reverted to the state it was in when the last replication job completed. For example,
consider an environment in which a replication policy is scheduled to run every three
hours, and replication jobs take two hours to complete. If a disaster occurs an hour after
a replication job begins, the RPO is four hours, because it has been four hours since a
completed job began replicating data.
RTO is the maximum amount of time required to make backup data available to clients
after a disaster. The RTO is always less than or approximately equal to the RPO,
depending on the rate at which replication jobs are created for a given policy.
If replication jobs run continuously, meaning that another replication job is created for
the policy before the previous replication job completes, the RTO is approximately equal
to the RPO. When the secondary cluster is failed over, the data on the cluster is reset to
the state it was in when the last job completed; resetting the data takes an amount of
time proportional to the time it took users to modify the data.
If replication jobs run on an interval, meaning that there is a period of time after a
replication job completes before the next replication job for the policy starts, the
relationship between RTO and RPO depends on whether a replication job is running when
the disaster occurs. If a job is in progress when a disaster occurs, the RTO is roughly
equal to the RPO. However, if a job is not running when a disaster occurs, the RTO is
negligible because the secondary cluster was not modified since the last replication job
ran, and the failover process is almost instantaneous.
directories under the included directory are replicated to the target cluster; any
directories that are not contained in an included directory are excluded.
If you both include and exclude directories, any excluded directories must be contained
in one of the included directories; otherwise, the excluded-directory setting has no effect.
For example, consider a policy with the following settings:
- The root directory is /ifs/data
- The included directories are /ifs/data/media and /ifs/data/users
- The excluded directories are /ifs/data/archive and /ifs/data/media/music/working
In this example, the setting that excludes the /ifs/data/archive directory has no
effect because the /ifs/data/archive directory is not under either of the included
directories. The /ifs/data/archive directory is not replicated regardless of whether
the directory is explicitly excluded. However, the setting that excludes the /ifs/data/
media/music/working directory does have an effect, because the directory would be
replicated if the setting was not specified.
In addition, if you exclude a directory that contains the source directory, the exclude-directory setting has no effect. For example, if the root directory of a policy is /ifs/
data, explicitly excluding the /ifs directory does not prevent /ifs/data from being
replicated.
Any directories that you explicitly include or exclude must be contained in or under the
specified root directory. For example, consider a policy in which the specified root
directory is /ifs/data. In this example, you could include both the /ifs/data/
media and the /ifs/data/users/ directories because they are under /ifs/data.
Excluding directories from a synchronization policy does not cause the directories to be
deleted on the target cluster. For example, consider a replication policy that
synchronizes /ifs/data on the source cluster to /ifs/data on the target cluster. If
the policy excludes /ifs/data/media from replication, and /ifs/data/media/
file exists on the target cluster, running the policy does not cause /ifs/data/
media/file to be deleted from the target cluster.
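To illustrate these rules, the following sketch creates a policy (the policy name, target cluster, and paths are hypothetical) whose root directory is /ifs/data, includes only the /ifs/data/media tree, and excludes a working subdirectory within it:
isi sync policies create mediaSync sync /ifs/data cluster.domain.name /ifs/data \
--source-include-directories /ifs/data/media \
--source-exclude-directories /ifs/data/media/music/working \
--schedule "Every Saturday at 12:00 AM"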
File name
Includes or excludes files based on the file name. You can specify to include or
exclude full or partial names that contain specific text.
The following wildcard characters are accepted:
Table 17 Replication file matching wildcards
Wildcard  Description
*         Matches any string in place of the asterisk.
[ ]       Matches any characters contained within the brackets, or a range of characters separated by a dash.
?         Matches any single character in place of the question mark.
Note
Alternatively, you can filter file names by using POSIX regular-expression (regex) text.
Isilon clusters support IEEE Std 1003.2 (POSIX.2) regular expressions. For more
information about POSIX regular expressions, see the BSD man pages.
Path
Includes or excludes files based on the file path. This option is available for copy
policies only.
You can specify to include or exclude full or partial paths that contain specified text.
You can also include the wildcard characters *, ?, and [ ].
Size
Includes or excludes files based on their size.
Note
Type
Includes or excludes files based on one of the following file-system object types:
• Soft link
• Regular file
• Directory
• Source directory
• File-criteria statement
• Target directory
Note
SyncIQ does not support dynamically allocated IP address pools. If a replication job
connects to a dynamically allocated IP address, SmartConnect might reassign the
address while a replication job is running, which would disconnect the job and cause it to
fail.
Procedure
1. Run the isi sync policies create command.
The following command creates a policy that replicates /ifs/data/source on the
local cluster to /ifs/data/target on cluster.domain.name every week. The
command also creates archival snapshots on the target cluster:
isi sync policies create weeklySync sync /ifs/data/source \
cluster.domain.name /ifs/data/target \
--schedule "Every Saturday at 12:00 AM" \
--target-snapshot-archive on \
--target-snapshot-pattern \
"%{PolicyName}-%{SrcCluster}-%Y-%m-%d_%H-%M"\
--target-snapshot-expiration 1Y
2. To view the assessment report, run the isi sync reports view command.
The following command displays the assessment report for weeklySync:
isi sync reports view weeklySync 1
number of canceled replication jobs can exist on a cluster. If a replication job remains
paused for more than a week, SyncIQ automatically cancels the job.
The following command replicates the source directory of weeklySync according to the
snapshot HourlyBackup_07-15-2013_23:00:
isi sync jobs start weeklySync \
--source-snapshot HourlyBackup_07-15-2013_23:00
2. To view detailed information about a specific replication job, run the isi sync
jobs view command.
The following command displays detailed information about a replication job created
by weeklySync:
isi sync jobs view weeklySync
weeklySync
3
running
run
5s
2013-07-16T23:12:00
Note
Although you cannot fail over or fail back SmartLock directories, you can recover
SmartLock directories on a target cluster. After you recover SmartLock directories, you can
migrate them back to the source cluster.
2. On the secondary cluster, replicate data to the primary cluster with the mirror policies.
You can replicate data either by manually starting the mirror policies or by modifying
the mirror policies and specifying a schedule.
3. Prevent clients from accessing the secondary cluster and then run each mirror policy
again.
To minimize impact to clients, it is recommended that you wait until client access is
low before preventing client access to the cluster.
4. On the primary cluster, allow writes to the target directories of the mirror policies by
running the isi sync recovery allow-write command.
The following command allows writes to the target directory of weeklySync_mirror:
isi sync recovery allow-write weeklySync_mirror
5. On the secondary cluster, complete the failback process by running the isi sync
recovery resync-prep command for all mirror policies.
The following command completes the failback process for weeklySync:
isi sync recovery resync-prep weeklySync_mirror
If the last replication job completed successfully and a replication job is not
currently running, run the isi sync recovery allow-write command.
For example, the following command enables writes to the target directory of
SmartLockSync:
isi sync recovery allow-write SmartLockSync
If a replication job is currently running, wait until the replication job completes,
and then run the isi sync recovery allow-write command.
For example, the following command enables writes to the target directory of
SmartLockSync:
isi sync recovery allow-write SmartLockSync
If the primary cluster became unavailable while a replication job was running, run
the isi sync target break command.
For example, the following command breaks the association between SmartLockSync
and the local cluster:
isi sync target break SmartLockSync
2. If you ran isi sync target break, restore any files that are left in an
inconsistent state.
a. Delete all files that are not committed to a WORM state from the target directory.
b. Copy all files from the failover snapshot to the target directory.
Failover snapshots are named according to the following naming pattern:
SIQ-Failover-<policy-name>-<year>-<month>-<day>_<hour>-<minute>-<second>
The source directory is the SmartLock directory that you are migrating.
The target directory must be an empty SmartLock directory. The directory must be
of the same SmartLock type as the source directory of a policy you are failing back.
For example, if the target directory is a compliance directory, the source must also
be a compliance directory.
2. Replicate data to the target cluster by running the policies you created.
You can replicate data either by manually starting the policies or by specifying a policy
schedule.
3. Optional: To ensure that SmartLock protection is enforced for all files, commit all files
in the SmartLock directory to a WORM state.
Because autocommit information is not transferred to the target cluster, files that
were scheduled to be committed to a WORM state on the source cluster will not be
scheduled to be committed at the same time on the target cluster. To ensure that all
files are retained for the appropriate time period, you can commit all files in target
SmartLock directories to a WORM state.
For example, the following command automatically commits all files in /ifs/data/
smartlock to a WORM state after one minute:
isi worm domains modify --domain /ifs/data/smartlock \
--autocommit-offset 1m
This step is necessary only if you have configured an autocommit time period for the
SmartLock directory.
4. Prevent clients from accessing the source cluster and run the policy that you created.
To minimize impact to clients, it is recommended that you wait until client access is
low before preventing client access to the cluster.
5. On the target cluster, enable writes to the target directories of the replication policies
by running the isi sync recovery allow-write command.
For example, the following command enables writes to the target directory of
SmartLockSync:
isi sync recovery allow-write SmartLockSync
• Source directory
• File-criteria statement
• Target cluster
  This applies only if you target a different cluster. If you modify the IP or domain name
  of a target cluster, and then modify the replication policy on the source cluster to
  match the new IP or domain name, a full replication is not performed.
• Target directory
Procedure
1. Run the isi sync policies modify command.
Assuming that weeklySync has been reset and has not been run since it was reset, the
following command causes a differential replication to be performed the next time
weeklySync is run:
isi sync policies modify weeklySync \
--target-compare-initial-sync on
If you disable a replication policy while an associated replication job is running, the
running replication job is not interrupted. However, the policy will not create another job
until the policy is enabled.
Procedure
1. Run either the isi sync policies enable or the isi sync policies
disable command.
The following command enables weeklySync:
isi sync policies enable weeklySync
2. Optional: To view detailed information about a specific replication policy, run the isi
sync policies view command.
The following command displays detailed information about weeklySync:
isi sync policies view weeklySync
dd16d277ff995a78e9affbba6f6e2919
weeklySync
/ifs/data/archive
sync
No
localhost
Yes
/ifs/data/sometarget
No
SIQ-%{SrcCluster}-%{PolicyName}-%Y-%m-%d_%H-%M
Never
SIQ-%{SrcCluster}-%{PolicyName}-latest
Yes
No
Never
Manually scheduled
notice
No
3
2Y
2000
No
No
No
No
No
finished
2013-07-17T15:39:49
2013-07-17T15:39:49
No
No
Yes
Target
The IP address or fully qualified domain name of the target cluster.
To cancel a job, specify a replication policy. For example, the following command
cancels a replication job created according to weeklySync:
isi sync target cancel weeklySync
To cancel all jobs targeting the local cluster, run the following command:
isi sync target cancel --all
After a replication policy is reset, SyncIQ performs a full or differential replication the next
time the policy is run. Depending on the amount of data being replicated, a full or
differential replication can take a very long time to complete.
Procedure
1. Run the isi sync target break command.
The following command breaks the association between weeklySync and the local
cluster:
isi sync target break weeklySync
2. To view detailed information about a specific replication policy, run the isi sync
target view command.
The following command displays detailed information about weeklySync:
isi sync target view weeklySync
weeklySync
cluster
/ifs/data/sometarget
finished
writes_disabled
000c295159ae74fcde517c1b85adc03daff9
127.0.0.1
No
2013-07-17T15:39:51
2. Modify a performance rule by running the isi sync rules modify command.
The following command causes a performance rule with an ID of bw-0 to be enforced
only on Saturday and Sunday:
isi sync rules modify bw-0 --days X,S
2. Delete a performance rule by running the isi sync rules delete command.
The following command deletes a performance rule with an ID of bw-0:
isi sync rules delete bw-0
2. Optional: To view detailed information about a specific performance rule, run the isi
sync rules view command.
The following command displays detailed information about a performance rule with
an ID of bw-0:
isi sync rules view --id bw-0
specified limit. Excess reports are periodically deleted by SyncIQ; however, you can
manually delete all excess replication reports at any time. This procedure is available
only through the command-line interface (CLI).
Procedure
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Delete excess replication reports by running the following command:
isi sync reports rotate
2. View a replication report by running the isi sync reports view command.
The following command displays a replication report for weeklySync:
isi sync reports view weeklySync 2
3. Optional: To view a list of subreports for a report, run the isi sync reports
subreports list command.
The following command displays subreports for weeklySync:
isi sync reports subreports list weeklySync 1
4. Optional: To view a subreport, run the isi sync reports subreports view
command.
The following command displays a subreport for weeklySync:
isi sync reports subreports view weeklySync 1 2
weeklySync
1
2
2013-07-17T21:59:10
2013-07-17T21:59:15
run
finished
a358db8b248bf432c71543e0f02df64e
initial
5s
0
0
0
0
0
0
0
0
0
0
0
Files New: 0
Source Files Deleted: 0
Files Changed: 0
Target Files Deleted: 0
Up To Date Files Skipped: 0
User Conflict Files Skipped: 0
Error Io Files Skipped: 0
Error Net Files Skipped: 0
Error Checksum Files Skipped: 0
Bytes Transferred: 245
Total Network Bytes: 245
Total Data Bytes: 20
File Data Bytes: 20
Sparse Data Bytes: 0
Target Snapshots: SIQ-Failover-newPol123-2013-07-17_21-59-15, newPol123-Archive-cluster-17
Total Phases: 2
Phases
Phase : STF_PHASE_IDMAP_SEND
Start Time : 2013-07-17T21:59:11
End Time : 2013-07-17T21:59:13
Sync Type
The action that was performed by the replication job.
Initial Sync
Indicates that either a differential or a full replication was performed.
Incremental Sync
Indicates that only modified files were transferred to the target cluster.
Failover / Failback Allow Writes
Indicates that writes were enabled on a target directory of a replication policy.
Failover / Failback Disallow Writes
Indicates that an allow writes operation was undone.
Failover / Failback Resync Prep
Indicates that an association between files on the source cluster and files on
the target cluster was created. This is the first step in the failback preparation
process.
Failover / Failback Resync Prep Domain Mark
Indicates that a SyncIQ domain was created for the source directory. This is the
second step in the failback preparation process.
Failover / Failback Resync Prep Restore
Indicates that a source directory was restored to the last recovery point. This is
the third step in the failback preparation process.
Failover / Failback Resync Prep Finalize
Indicates that a mirror policy was created on the target cluster. This is the last
step in the failback preparation process.
Upgrade
Indicates that a policy-conversion replication occurred after upgrading the
OneFS operating system or merging policies.
Source
The path of the source directory on the source cluster.
Target
The IP address or fully qualified domain name of the target cluster.
Actions
Displays any report-related actions that you can perform.
you can reset the replication policy. However, resetting the policy causes a full or
differential replication to be performed the next time the policy is run.
Note
Depending on the amount of data being synchronized or copied, full or differential
replications can take a very long time to complete.
Depending on the amount of data being replicated, a full or differential replication can
take a very long time to complete. Reset a replication policy only if you cannot fix the
issue that caused the replication error. If you fix the issue that caused the error, resolve
the policy instead of resetting the policy.
Procedure
1. Run the isi sync policies reset command.
The following command resets weeklySync:
isi sync policies reset weeklySync
3. Run the policy by running the isi sync jobs start command.
For example, the following command runs newPolicy:
isi sync jobs start newPolicy
Managing changelists
You can create and view changelists that describe what data was modified by a
replication job. Changelists are most commonly accessed by applications through the
OneFS Platform API.
To create a changelist, you must enable changelists for a replication policy. If changelists
are enabled for a policy, SyncIQ does not automatically delete the repstate files
generated by the policy; if changelists are not enabled for a policy, SyncIQ automatically
deletes the repstate files after the corresponding replication jobs complete. SyncIQ
generates one repstate file for each replication job. Because a large number of repstate
files can consume a large amount of disk space, it is recommended that you do not
enable changelists for a policy unless it is necessary for your workflow.
If changelists are enabled for a policy, SyncIQ does not automatically delete source
cluster snapshots for the policy. To create a changelist, you must have access to two
consecutive snapshots and an associated repstate generated by a replication policy.
Create a changelist
You can create a changelist to view what data was modified by a replication job.
Before you begin
Enable changelists for a replication policy, and then run the policy at least twice. The
following command enables changelists for newPolicy:
isi sync policies modify newPolicy --changelist true
Procedure
1. Optional: Record the IDs of the snapshots generated by the replication policy.
a. View snapshot IDs by running the following command:
isi snapshot snapshots list
The snapshots must have been generated sequentially for a replication policy. If
source-archival snapshots are not enabled for the policy, snapshots generated for
the policy are named according to the following convention:
SIQ-Changelist-<policy-name>-<date>
If source-archival snapshots are enabled for the policy, snapshots are named
according to the source-snapshot naming convention.
2. Create a changelist by running the isi job jobs start command with the
ChangelistCreate option.
The following command creates a changelist:
isi job jobs start ChangelistCreate --older-snapid 2 --newer-snapid 6
You can also specify the --retain-repstate option to allow you to recreate the
changelist later. If this option is not specified, the repstate file used to generate the
changelist is deleted after the changelist is created.
View a changelist
You can view a changelist that describes what data was modified by a replication job.
This procedure is available only through the command-line interface (CLI).
Procedure
1. View the IDs of changelists by running the following command:
isi_changelist_mod -l
Changelist IDs include the IDs of both snapshots used to create the changelist. If
OneFS is still in the process of creating a changelist, inprog is appended to the
changelist ID.
2. Optional: View all contents of a changelist by running the isi_changelist_mod
command with the -a option.
The following command displays the contents of a changelist named 2_6:
isi_changelist_mod -a 2_6
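If the full listing is too long to review, the isi_changelist_mod syntax also documents a --terse flag for the -a option; the following sketch (assuming the same changelist named 2_6) displays the contents in condensed form:
isi_changelist_mod -a 2_6 --terse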
Changelist information
You can view the information contained in changelists.
size
The size of the item that was modified, in bytes. If an item was removed, this field is
set to 0.
path_size
The total size of the null-terminated UTF-8 string that contains the path, relative to
the root path, of the file or directory that was modified or removed, in bytes.
path_offset
The number of bytes between the start of the changelist entry structure and the path,
relative to the root path, of the file or directory that was modified or removed.
atime
The POSIX timestamp of when the item was last accessed.
atimensec
The number of nanoseconds past the atime that the item was last accessed.
ctime
The POSIX timestamp of when the item was last changed.
ctimensec
The number of nanoseconds past the ctime that the item was last changed.
mtime
The POSIX timestamp of when the item was last modified.
mtimensec
The number of nanoseconds past the mtime that the item was last modified.
[--report-max-age <duration>]
[--report-max-count <integer>]
[--resolve {enable | disable}]
[--restrict-target-network {on | off}]
[--source-subnet <subnet> --source-pool <pool>]
[--target-compare-initial-sync {on | off}]
[--verbose]
Options
<name>
Specifies a name for the replication policy.
Specify as any string.
<action>
Specifies the type of replication policy.
The following types of replication policy are valid:
copy
Creates a copy policy that adds copies of all files from the source to the target.
sync
Creates a synchronization policy that synchronizes data on the source cluster to
the target cluster and deletes all files on the target cluster that are not present
on the source cluster.
<source-root-path>
Specifies the directory on the local cluster that files are replicated from.
Specify as a full directory path.
<target-host>
Specifies the cluster that the policy replicates data to.
Specify as one of the following:
• The fully qualified domain name of any node in the target cluster.
• localhost
SyncIQ does not support dynamically allocated IP address pools. If a replication job
connects to a dynamically allocated IP address, SmartConnect might reassign the
address while a replication job is running, which would disconnect the job and cause
it to fail.
<target-path>
Specifies the directory on the target cluster that files are replicated to.
Specify as a full directory path.
--description <string>
Specifies a description of the replication policy.
--password <password>
Specifies a password to access the target cluster. If the target cluster requires a
password for authentication purposes, you must specify this parameter or --set-password.
--set-password
Prompts you to specify a password for the target cluster after the command is run.
This can be useful if you do not want other users on the cluster to see the password
you specify. If the target cluster requires a password for authentication purposes, you
must specify this parameter or --password.
{--source-include-directories | -i} <path>
Includes only the specified directories in replication.
Specify as any directory path contained in the root directory. You can specify multiple
directories by specifying --source-include-directories multiple times
within a command. For example, if the root directory is /ifs/data, you could
specify the following:
--source-include-directories /ifs/data/music --source-include-directories /ifs/data/movies
--user-name <name>
Selects files based on whether they are owned by the user of the specified name.
--group-id <id>
Selects files based on whether they are owned by the group of the specified ID.
--group-name <name>
Selects files based on whether they are owned by the group of the specified name.
The operator specifies which files are selected in relationship to the attribute (for
example, all files smaller than the given size). Specify operators in the following form:
--operator <value>
The following <value> values are valid:
Operator  Description
eq        Equal
ne        Not equal
lt        Less than
le        Less than or equal to
gt        Greater than
ge        Greater than or equal to
not       Not
The link specifies how the criterion relates to the one that follows it (for example, the
file is selected only if it meets both criteria). The following links are valid:
--and
Selects files that meet the criteria of the options that come before and after this
value.
--or
Selects files that meet either the criterion of the option that comes before this value
or the criterion of the option that follows this value.
{--schedule | -S} {<schedule> | when-source-modified}
Specifies how often data will be replicated. Specifying when-source-modified
causes OneFS to replicate data every time that the source directory of the policy is
modified.
Specify in the following format:
"<interval> [<frequency>]"
You can optionally append "st", "th", or "rd" to <integer>. For example, you can specify
"Every 1st month"
Specify <day> as any day of the week or a three-letter abbreviation for the day. For
example, both "saturday" and "sat" are valid.
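For example, because the same --schedule option is also accepted by isi sync policies modify, the following sketch (hypothetical policy name) switches an existing policy to replicate whenever its source directory is modified:
isi sync policies modify weeklySync --schedule when-source-modified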
--enabled {true | false}
Determines whether the policy is enabled or disabled.
The default value is true.
--check-integrity {true | false}
Specifies whether to perform a checksum on each file data packet that is affected by
the SyncIQ job. If this option is set to true, and the checksum values do not match,
SyncIQ retransmits the file data packet.
The default value is true.
--log-level <level>
Specifies the amount of data recorded in logs.
The following values are valid, organized from least to most information:
• fatal
• error
• notice
• info
• copy
• debug
• trace
--target-snapshot-pattern <naming-pattern>
Specifies the snapshot naming pattern for snapshots that are generated by
replication jobs on the target cluster.
The default naming pattern is the following string:
SIQ-%{SrcCluster}-%{PolicyName}-%Y-%m-%d_%H-%M
--target-snapshot-expiration <duration>
Specifies an expiration period for archival snapshots on the target cluster.
If this option is not specified, archival snapshots will remain indefinitely on the target
cluster.
Specify in the following format:
<integer><units>
--target-detect-modifications {on | off}
Specifying off could result in data loss. It is recommended that you consult Isilon
Technical Support before specifying off.
--source-snapshot-archive {on | off}
Determines whether archival snapshots are retained on the source cluster. If this
option is set to off, SyncIQ will still maintain one snapshot at a time for the policy to
facilitate replication.
--source-snapshot-pattern <naming-pattern>
Specifies a naming pattern for the most recent archival snapshot generated on the
source cluster.
--source-snapshot-expiration <duration>
Specifies an expiration period for archival snapshots retained on the source cluster.
If this option is not specified, archival snapshots will exist indefinitely on the source
cluster.
Specify in the following format:
<integer><units>
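As an illustration of the duration format and the source-snapshot options above, the following sketch (hypothetical policy name) retains archival snapshots on the source cluster and expires them after two weeks; the same options are accepted by isi sync policies modify:
isi sync policies modify weeklySync \
--source-snapshot-archive on \
--source-snapshot-expiration 2W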
--restrict-target-network {on | off}
If you specify on, and you specify the target cluster as a SmartConnect zone,
replication jobs connect only to nodes in the specified zone. If off is specified,
replication jobs are not restricted to specific nodes on the target cluster.
--source-subnet <subnet>
Restricts replication jobs to running only on nodes in the specified subnet on the
local cluster.
--source-pool <pool>
Restricts replication jobs to running only on nodes in the specified pool on the local
cluster.
--target-compare-initial-sync {on | off}
Determines whether full or differential replications are performed for this policy.
Full or differential replications are performed the first time a policy is run and after a
policy has been reset. If set to on, performs a differential replication. If set to off,
performs a full replication.
If differential replication is enabled the first time a replication policy is run, the policy
will run slower without any benefit.
The default value is off.
{--verbose | -v}
Displays a message confirming that the replication policy was created.
| --source-snapshot-pattern <naming-pattern>
| --source-snapshot-expiration <duration>
| --report-max-age <duration>
| --report-max-count <integer>
| --resolve {enable | disable}
| --restrict-target-network {on | off}
| --source-subnet <subnet> --source-pool <pool>
| --clear-source-network
| --target-compare-initial-sync {on | off}}...
[--verbose]
[--force]
Options
<policy>
Modifies the specified replication policy.
Specify as a replication policy name or ID.
{--name | -n} <new-policy-name>
Specifies a new name for this replication policy.
--action <policy-type>
Specifies the type of replication policy.
The following types of replication policy are valid:
copy
Creates a copy policy that adds copies of all files from the source to the target.
sync
Creates a synchronization policy that synchronizes data on the source cluster to
the target cluster and deletes all files on the target cluster that are not present
on the source cluster.
{--target-host | -C} <target-cluster>
Specifies the cluster that the policy replicates data to.
Specify as one of the following:
• The fully qualified domain name of any node in the target cluster.
• localhost
SyncIQ does not support dynamically allocated IP address pools. If a replication job
connects to a dynamically allocated IP address, SmartConnect might reassign the
address while a replication job is running, which would disconnect the job and cause
it to fail.
{--target-path | -p} <target-path>
Specifies the directory on the target cluster that files are replicated to.
Specify as a full directory path.
--source-root-path <root-path>
Specifies the directory on the local cluster that files are replicated from.
Specify as a full directory path.
--description <string>
Specifies a description of this replication policy.
--password <password>
Specifies a password to access the target cluster. If the target cluster requires a
password for authentication purposes, you must specify this parameter or --set-password.
--set-password
Prompts you to specify a password for the target cluster after the command is run.
This can be useful if you do not want other users on the cluster to see the password
you specify. If the target cluster requires a password for authentication purposes, you
must specify this parameter or --password.
{--source-include-directories | -i} <path>
Includes only the specified directories in replication.
Specify as any directory path contained in the root directory. You can specify multiple
directories by specifying --source-include-directories multiple times
within a command. For example, if the root directory is /ifs/data, you could
specify the following:
--source-include-directories /ifs/data/music --source-include-directories /ifs/data/movies
--clear-source-include-directories
Clears the list of included directories.
--add-source-include-directories <path>
Adds the specified directory to the list of included directories.
--remove-source-include-directories <path>
Removes the specified directory from the list of included directories.
{--source-exclude-directories | -e} <path>
Does not include the specified directories in replication.
Specify as any directory path contained in the root directory. If --source-include-directories is specified, --source-exclude-directories
directories must be contained in the included directories. You can specify multiple
directories by specifying --source-exclude-directories multiple times
within a command. For example, you could specify the following:
--source-exclude-directories /ifs/data/music --source-exclude-directories /ifs/data/movies --source-exclude-directories /ifs/data/music/working
--clear-source-exclude-directories
Clears the list of excluded directories.
--add-source-exclude-directories <path>
Adds the specified directory to the list of excluded directories.
--remove-source-exclude-directories <path>
Removes the specified directory from the list of excluded directories.
--begin-filter <predicate> --operator <value> [<predicate> --operator
<operator> <link>]... --end-filter
Specifies the file-matching criteria that determines which files are replicated. Specify
<predicate> as one or more of the following options:
The following options are valid for both copy and synchronization policies:
--size <integer>[{B | KB | MB | GB | TB | PB}]
Selects files according to the specified size.
--file-type <value>
Selects only the specified file-system object type.
The following values are valid:
f
Specifies regular files
d
Specifies directories
l
Specifies soft links
--name <value>
Selects only files whose names match the specified string.
You can include the following wildcards:
• *
• [ ]
--changed-after '{<mm>/<dd>/<yyyy> [<HH>:<mm>] | <integer> {days | weeks |
months | years} ago}'
Selects files that have been modified since the specified time. This predicate is valid
only for copy policies.
--changed-before '{<mm>/<dd>/<yyyy> [<HH>:<mm>] | <integer> {days | weeks |
months | years} ago}'
Selects files that have not been modified since the specified time. This predicate is
valid only for copy policies.
--changed-time '{<mm>/<dd>/<yyyy> [<HH>:<mm>] | <integer> {days | weeks |
months | years} ago}'
Selects files that were modified at the specified time. This predicate is valid only for
copy policies.
--no-group
Selects files based on whether they are owned by a group.
--no-user
Selects files based on whether they are owned by a user.
--posix-regex-name <value>
Selects only files whose names match the specified POSIX regular expression. IEEE
Std 1003.2 (POSIX.2) regular expressions are supported.
--user-id <id>
Selects files based on whether they are owned by the user of the specified ID.
--user-name <name>
Selects files based on whether they are owned by the user of the specified name.
--group-id <id>
Selects files based on whether they are owned by the group of the specified ID.
--group-name <name>
Selects files based on whether they are owned by the group of the specified name.
The following <operator> values are valid:
Operator  Description
eq        Equal
ne        Not equal
lt        Less than
le        Less than or equal to
gt        Greater than
ge        Greater than or equal to
not       Not
You can use the following <link> values to combine and alter the options available for
predicates:
--and
Selects files that meet the criteria of the options that come before and after this
value.
--or
Selects files that meet either the criterion of the option that comes before this value
or the criterion of the option that follows this value.
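As a concrete illustration of the filter syntax, the following sketch (hypothetical policy name; an illustrative example that follows the documented syntax rather than a verified invocation) replicates only files larger than 10 MB:
isi sync policies modify weeklySync \
--begin-filter --size 10MB --operator gt --end-filter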
{--schedule | -S} {<schedule> | when-source-modified}
Specifies how often data will be replicated. Specifying when-source-modified
causes OneFS to replicate data every time that the source directory of the policy is
modified.
Specify <schedule> in the following format:
"<interval> [<frequency>]"
You can optionally append "st", "th", or "rd" to <integer>. For example, you can specify
"Every 1st month"
Specify <day> as any day of the week or a three-letter abbreviation for the day. For
example, both "saturday" and "sat" are valid.
--enabled {true | false}
Determines whether the policy is enabled or disabled.
--check-integrity {true | false}
Specifies whether to perform a checksum on each file data packet that is affected by
the SyncIQ job. If this option is set to true and the checksum values do not match,
SyncIQ retransmits the file data packet.
The default value is true.
--log-level <level>
Specifies the amount of data recorded in logs.
The following values are valid, organized from least to most information:
fatal
error
notice
info
copy
debug
trace
--target-snapshot-expiration <duration>
Specifies an expiration period for archival snapshots on the target cluster.
If this option is not specified, archival snapshots will remain indefinitely on the target
cluster.
Specify in the following format:
<integer><units>
--target-detect-modifications {on | off}
Specifying off could result in data loss. It is recommended that you consult Isilon
Technical Support before specifying off.
--source-snapshot-archive {on | off}
Determines whether archival snapshots are retained on the source cluster. If this
option is set to off, SyncIQ will still maintain one snapshot at a time for the policy to
facilitate replication.
--source-snapshot-pattern <naming-pattern>
Specifies a naming pattern for the most recent archival snapshot generated on the
source cluster.
For example, the following pattern is valid:
SIQ-source-%{PolicyName}-%Y-%m-%d_%H-%M
--source-snapshot-expiration <duration>
Specifies an expiration period for archival snapshots retained on the source cluster.
If this option is not specified, archival snapshots will exist indefinitely on the source
cluster.
Specify in the following format:
<integer><units>
Y
Specifies years
M
Specifies months
W
Specifies weeks
D
Specifies days
H
Specifies hours
--report-max-count <integer>
Specifies the maximum number of reports to retain for the replication policy.
--resolve {enable | disable}
Determines whether users can manually resolve the policy if the policy encounters an
error and becomes unrunnable.
--restrict-target-network {on | off}
If you specify on, and you specify the target cluster as a SmartConnect zone,
replication jobs connect only to nodes in the specified zone. If off is specified, does
not restrict replication jobs to specific nodes on the target cluster.
--source-subnet <subnet>
Restricts replication jobs to running only on nodes in the specified subnet on the
local cluster.
--source-pool <pool>
Restricts replication jobs to running only on nodes in the specified pool on the local
cluster.
--clear-source-network
Runs replication jobs on any nodes in the cluster, instead of restricting the jobs to a
specified subnet.
--target-compare-initial-sync {on | off}
Determines whether full or differential replications are performed for this policy.
Full or differential replications are performed the first time a policy is run and after a
policy has been reset. If set to on, performs a differential replication. If set to off,
performs a full replication.
If differential replication is enabled the first time a replication policy is run, the policy
will run slower without any benefit.
The default value is off.
{--verbose | -v}
Displays a confirmation message.
{--force | -f}
Does not prompt you to confirm modifications.
Options
<policy>
Deletes the specified replication policy.
--all
Deletes all replication policies.
--local-only
Does not delete the policy association on the target cluster. Not deleting a policy
association on the target cluster will cause the target directory to remain in a read-only state.
{--force | -f}
Deletes the policy, even if an associated job is currently running. Also, does not
prompt you to confirm the deletion.
Options
If no options are specified, displays a table of all replication policies.
{--limit | -l} <integer>
Displays no more than the specified number of items.
--sort <attribute>
Sorts output displayed by the specified attribute.
The following values are valid:
name
Sorts output by the name of the replication policy.
target_path
Sorts output by the path of the target directory.
action
Sorts output by the type of replication policy.
description
Sorts output by the policy description.
enabled
Sorts output by whether the policies are enabled or disabled.
target_host
Sorts output by the target cluster.
check_integrity
Sorts output by whether the policy is configured to perform a checksum on each
file data packet that is affected by a replication job.
source_root_path
Sorts output by the path of the source directory.
source_include_directories
Sorts output by directories that have been explicitly included in replication.
source_exclude_directories
Sorts output by directories that have been explicitly excluded in replication.
file_matching_pattern
Sorts output by the predicate that determines which files are replicated.
target_snapshot_archive
Sorts output by whether archival snapshots are generated on the target cluster.
target_snapshot_pattern
Sorts output by the snapshot naming pattern for snapshots that are generated
by replication jobs on the target cluster.
target_snapshot_expiration
Sorts output by the expiration period for archival snapshots on the target cluster.
target_detect_modifications
Sorts output by whether full or differential replications are performed for this
policy.
source_snapshot_archive
Sorts output by whether archival snapshots are retained on the source cluster.
source_snapshot_pattern
Sorts output by the naming pattern for the most recent archival snapshot
generated on the source cluster.
source_snapshot_expiration
Sorts output by the expiration period for archival snapshots retained on the
source cluster.
schedule
Sorts output by the schedule of the policy.
log_level
Sorts output by the amount of data that is recorded in logs.
log_removed_files
Sorts output by whether OneFS retains a log of all files that are deleted when the
replication policy is run.
workers_per_node
Sorts output by the number of workers per node that are generated by OneFS to
perform each replication job for the policy.
report_max_age
Sorts output by how long replication reports are retained before they are
automatically deleted by OneFS.
report_max_count
Sorts output by the maximum number of reports that are retained for the
replication policy.
force_interface
Sorts output by whether data is sent over only the default interface of the subnet
specified by the --source-network option of the isi sync policies
create or isi sync policies modify commands.
restrict_target_network
Sorts output by whether replication jobs are restricted to connecting to nodes in
a specified zone on the target cluster.
target_compare_initial_sync
Sorts output by whether full or differential replications are performed for the
policies.
last_success
Sorts output by the last time that a replication job completed successfully.
password_set
Sorts output by whether the policy specifies a password for the target cluster.
source_network
Sorts output by the subnet on the local cluster that the replication policy is
restricted to.
source_interface
Sorts output by the pool on the local cluster that the replication policy is
restricted to.
{--descending | -d}
Displays output in reverse order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<policy>
Displays information about the specified replication policy.
Specify as a replication policy name or ID.
Options
<policy>
Disables the specified replication policy. Specify as a replication policy name or a
replication policy ID.
--all
Disables all replication policies on the cluster.
--verbose
Displays more detailed information.
Options
<policy>
Enables the specified replication policy. Specify as a replication policy name or a
replication policy ID.
--all
Enables all replication policies on the cluster.
--verbose
Displays more detailed information.
Options
<policy-name>
Starts a replication job for the specified replication policy.
--test
Creates a replication policy report that reflects the number of files and directories that
would be replicated if the specified policy was run. You can test only policies that
have not been run before.
--source-snapshot <snapshot>
Replicates data according to the specified SnapshotIQ snapshot. If specified, a
snapshot is not generated for the replication job. Replicating data according to
snapshots generated by the SyncIQ tool is not supported.
Specify as a snapshot name or ID. The source directory of the policy must be
contained in the specified snapshot. This option is valid only if the last replication job
completed successfully or if you are performing a full or differential replication. If the
last replication job completed successfully, the specified snapshot must be more
recent than the snapshot referenced by the last replication job.
{--verbose | -v}
Displays more detailed information.
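For example, the following sketch creates an assessment report for a policy that has not yet been run (hypothetical policy name); you can then review the results with isi sync reports view:
isi sync jobs start newPolicy --test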
Options
<policy-name>
Pauses a job that was created according to the specified replication policy.
Specify as a replication policy name.
--all
Pauses all currently running replication jobs.
{--verbose | -v}
Displays more detailed information.
Options
<policy-name>
Resumes a paused job that was created by the specified policy.
Specify as a replication policy name.
--all
Resumes all paused replication jobs.
{--verbose | -v}
Displays more detailed information.
Options
<policy-name>
Cancels a job that was created according to the specified replication policy.
Specify as a replication policy name or ID.
--all
Cancels all currently running replication jobs.
--verbose
Displays more detailed information.
Options
If no options are specified, displays information about replication jobs for all policies.
--state <state>
Options
<policy>
Displays information about a running replication job created according to the
specified policy.
Specify as a replication policy name or ID.
[--no-footer]
[--verbose]
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<policy>
Displays information about a replication job created according to the specified
replication policy.
Specify as a replication policy name or ID.
Options
If no options are specified, displays current default replication report settings.
--service {on | off | paused}
Determines the state of the SyncIQ tool.
--source-subnet <subnet>
Restricts replication jobs to running only on nodes in the specified subnet on the
local cluster.
--source-pool <pool>
Restricts replication jobs to running only on nodes in the specified pool on the local
cluster.
--restrict-target-network {on | off}
If you specify on, and you specify the target cluster as a SmartConnect zone,
replication jobs connect only to nodes in the specified zone. If off is specified, does
not restrict replication jobs to specific nodes on the target cluster.
Note
SyncIQ does not support dynamically allocated IP address pools. If a replication job
connects to a dynamically allocated IP address, SmartConnect might reassign the
address while a replication job is running, which would disconnect the job and cause
it to fail.
--report-max-age <duration>
Specifies the default amount of time that SyncIQ retains reports before automatically
deleting them.
Specify in the following format:
<integer><units>
Options
There are no options for this command.
Options
<policy>
Resolves the specified replication policy.
Specify as a replication policy name or ID.
{--force | -f}
Suppresses command-line prompts and messages.
Options
<policy>
Resets the specified replication policy.
Specify as a replication policy name or ID.
--all
Resets all replication policies.
{--verbose | -v}
Displays more detailed information.
Options
<policy>
Cancels a replication job created according to the specified replication policy.
Options
If no options are specified, displays a table of all replication policies currently targeting
the local cluster.
--target-path <path>
Displays information about the replication policy targeting the specified directory.
{--limit | -l} <integer>
Displays no more than the specified number of items.
--sort <attribute>
Sorts output displayed by the specified attribute.
The following values are valid:
name
Sorts output by the name of the replication policy.
source_host
Sorts output by the name of the source cluster.
target_path
Sorts output by the path of the target directory.
last_job_status
Sorts output by the status of the last replication job created according to the
policy.
failover_failback_state
Sorts output by whether the target directory is read only.
{--descending | -d}
Displays output in reverse order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<policy-name>
Displays information about the specified policy.
--target-path <path>
Displays information about the policy targeting the specified directory.
Breaking a source and target association requires you to reset the replication policy
before you can run the policy again. Depending on the amount of data being replicated, a
full or differential replication can take a very long time to complete.
Syntax
isi sync target break {<policy> | --target-path <path>}
[--force]
[--verbose]
Options
<policy>
Removes the association of the specified replication policy targeting this cluster.
Specify as a replication policy name, a replication policy ID, or the path of a target
directory.
--target-path <path>
Removes the association of the replication policy targeting the specified directory
path.
{--force | -f}
Forces the replication policy association to be removed, even if an associated job is
currently running.
CAUTION
Forcing a target break might cause errors if an associated replication job is currently
running.
{--verbose | -v}
Displays more detailed information.
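For example, the following sketch (hypothetical path) breaks the association for whichever policy targets a specific local directory, without naming the policy:
isi sync target break --target-path /ifs/data/target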
Options
If no options are specified, displays basic information about all completed replication
jobs.
--state <state>
Displays information about only replication jobs in the specified state. The following
states are valid:
• scheduled
• running
• paused
• finished
• failed
• canceled
• needs_attention
• unknown
Options
<policy>
Displays a replication report about the specified replication policy.
<job-id>
Displays a replication report about the job with the specified ID.
Options
<policy>
Displays subreports about the specified policy.
<job-id>
Displays subreports about the job of the specified ID.
{--limit | -l} <integer>
Displays no more than the specified number of items.
--sort <attribute>
Sorts output displayed by the specified attribute.
The following values are valid:
start_time
Sorts output by when the replication job started.
end_time
Sorts output by when the replication job ended.
action
Sorts output by the action that the replication job performed.
state
Sorts output by the progress of the replication job.
id
Sorts output by the ID of the replication report.
policy_id
Sorts output by the ID of the replication policy
policy_name
Sorts output by the name of the replication policy.
job_id
Sorts output by the ID of the replication job.
total_files
Sorts output by the total number of files that were modified by the replication job.
files_transferred
Sorts output by the total number of files that were transferred to the target cluster.
bytes_transferred
Sorts output by the total number of bytes that were transferred to the target cluster.
duration
Sorts output by how long the replication job ran.
errors
Sorts output by errors that the replication job encountered.
warnings
Sorts output by warnings that the replication job triggered.
{--descending | -d}
Displays output in reverse order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<policy>
Displays a subreport about the specified replication policy. Specify as a replication
policy name.
<job-id>
Displays a subreport about the specified replication job. Specify as a replication job
ID.
<subreport-id>
Displays the subreport with the specified ID.
[--descending]
[--format {table | json | csv | list}]
[--no-header]
[--no-footer]
[--verbose]
Options
--policy-name <policy>
Displays only replication reports that were created for the specified policy.
--state <state>
Displays only replication reports whose jobs are in the specified state.
{--limit | -l} <integer>
Displays no more than the specified number of items.
--sort <attribute>
Sorts output displayed by the specified attribute.
The following values are valid:
start_time
Sorts output by when the replication job started.
end_time
Sorts output by when the replication job ended.
action
Sorts output by the action that the replication job performed.
state
Sorts output by the progress of the replication job.
id
Sorts output by the ID of the replication subreport.
policy_id
Sorts output by the ID of the replication policy
policy_name
Sorts output by the name of the replication policy.
job_id
Sorts output by the ID of the replication job.
total_files
Sorts output by the total number of files that were modified by the replication job.
files_transferred
Sorts output by the total number of files that were transferred to the target cluster.
bytes_transferred
Sorts output by the total number of bytes that were transferred to the target cluster.
duration
Sorts output by how long the replication job ran.
errors
Sorts output by errors that the replication job encountered.
warnings
Sorts output by warnings that the replication job triggered.
{--descending | -d}
Displays output in reverse order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<policy>
Displays a replication report about the specified replication policy.
<job-id>
Displays a replication report about the job with the specified ID.
Options
{--verbose | -v}
Displays more detailed information.
[--no-header]
[--no-footer]
[--verbose]
Options
<policy>
Displays subreports about the specified policy.
<job-id>
Displays subreports about the job of the specified ID.
{--limit | -l} <integer>
Displays no more than the specified number of items.
--sort <attribute>
Sorts output displayed by the specified attribute.
The following values are valid:
start_time
Sorts output by when the replication job started.
end_time
Sorts output by when the replication job ended.
action
Sorts output by the action that the replication job performed.
state
Sorts output by the progress of the replication job.
id
Sorts output by the ID of the replication report.
policy_id
Sorts output by the ID of the replication policy
policy_name
Sorts output by the name of the replication policy.
job_id
Sorts output by the ID of the replication job.
total_files
Sorts output by the total number of files that were modified by the replication job.
files_transferred
Sorts output by the total number of files that were transferred to the target cluster.
bytes_transferred
Sorts output by the total number of bytes that were transferred to the target cluster.
duration
Sorts output by how long the replication job ran.
errors
Sorts output by errors that the replication job encountered.
warnings
Sorts output by warnings that the replication job triggered.
{--descending | -d}
Options
<policy>
Displays a subreport about the specified replication policy. Specify as a replication
policy name.
<job-id>
Displays a subreport about the specified replication job. Specify as a replication job
ID.
<subreport-id>
Displays the subreport of the specified ID.
Options
<policy-name>
Allows writes for the target directory of the specified replication policy.
Specify as a replication policy name, a replication policy ID, or the path of a target
directory.
--revert
Reverts an allow-writes operation on the local cluster only. This action does not affect
the source cluster of the replication policy.
--log-level <level>
Specifies the amount of data recorded in logs.
The following values are valid, organized from least to most information:
• fatal
• error
• notice
• info
• copy
• debug
• trace
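For example, if you enabled writes only as a failover test, the following sketch (hypothetical policy name) uses the --revert option described above to return the target directory to a read-only state without performing a full failback:
isi sync recovery allow-write SmartLockSync --revert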
Options
<policy-name>
Targets the specified replication policy.
Specify as a replication policy name or ID. The replication policy must be a
synchronization policy.
--verbose
Displays more detailed information.
Options
<type>
Specifies the type of performance rule. The following values are valid:
file_count
Creates a performance rule that limits the number of files that can be sent by
replication jobs per second.
bandwidth
Creates a performance rule that limits the amount of bandwidth that replication jobs
are allowed to consume.
<interval>
Enforces the performance rule on the specified hours of the day. Specify in the
following format:
<hh>:<mm>-<hh>:<mm>
<days>
Enforces the performance rule on the specified days of the week.
The following values are valid:
X
Specifies Sunday
M
Specifies Monday
T
Specifies Tuesday
W
Specifies Wednesday
R
Specifies Thursday
F
Specifies Friday
S
Specifies Saturday
You can include multiple days by specifying multiple values separated by commas.
You can also include a range of days by specifying two values separated by a dash.
<limit>
Specifies the maximum number of files that can be sent or KBs that can be consumed
per second by replication jobs.
--description <string>
Specifies a description of this performance rule.
--verbose
Displays more detailed information.
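Putting these options together, the following sketch (hypothetical values, and assuming the positional arguments follow the order of the options listed above) creates a bandwidth rule that limits replication jobs to 10000 KB per second during weekday business hours:
isi sync rules create bandwidth 08:00-18:00 M-F 10000 \
--description "Business hours bandwidth limit"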
Options
<id>
Modifies the replication performance rule of the specified ID.
{--interval | -i} <interval>
Specifies which hours of the day to enforce the performance rule. Specify in the
following format:
<hh>:<mm>-<hh>:<mm>
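For example, the following sketch narrows the hypothetical bw-0 rule to business hours:
isi sync rules modify bw-0 --interval 09:00-17:00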
Options
<id>
Deletes the performance rule of the specified ID.
--all
Deletes all performance rules.
--type <type>
Deletes all performance rules of the specified type. The following values are valid:
file_count
Deletes all performance rules that limit the number of files that can be sent by
replication jobs per second.
bandwidth
Deletes all performance rules that limit the amount of bandwidth that replication jobs
are allowed to consume.
--force
Does not prompt you to confirm that you want to delete the performance rule.
--verbose
Displays more detailed information.
Options
--type <type>
Displays only performance rules of the specified type. The following values are valid:
file_count
Displays only performance rules that limit the number of files that can be sent by
replication jobs per second.
bandwidth
Displays only performance rules that limit the amount of bandwidth that replication
jobs are allowed to consume.
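For example, the following command displays only bandwidth rules:
isi sync rules list --type bandwidth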
Options
<id>
Displays information about the replication performance rule with the specified ID.
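For example, the following command (using a hypothetical rule ID) displays a single performance rule:
isi sync rules view bw-0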
isi_changelist_mod
Displays, modifies, and creates changelists.
Syntax
isi_changelist_mod
[-l]
[-c <oldsnapshot> <newsnapshot> [{--root <string>
| --jobid <unsigned_integer>}]]
[-f <changelist>]
[-k <changelist>]
[-a <changelist> [--terse]]
[-g <changelist> <lin> [--terse]]
[-r <changelist> <low> <high> --terse]
[-s <changelist> <lin>
[--path <string>]
[--type <integer>]
[--size <integer>]
[--atime <integer>]
[--atimensec <integer>]
[--ctime <integer>]
[--ctimensec <integer>]
[--mtime <integer>]
[--mtimensec <integer>]
]
[-d <changelist> <lin>]
Options
-l
Displays a list of changelists.
-c <oldsnapshot> <newsnapshot>
Creates an empty changelist that you can manually populate for testing purposes.
Specify <oldsnapshot> and <newsnapshot> as snapshot IDs.
--root <string>
Specifies the path of the directory the changelist is for.
--jobid <unsigned_integer>
Specifies the ID of the job that created the changelist.
-f <changelist>
Finalizes a manually-created changelist.
-k <changelist>
Deletes the specified changelist.
-a <changelist>
Displays the contents of the specified changelist.
-g <changelist> <lin>
Displays the specified changelist entry.
-r <changelist> <low> <high>
Displays the specified range of changelist entries.
--terse
Displays less detailed information.
-s <changelist> <lin>
Modifies the specified changelist entry.
--path <string>
Specifies the path of the file or directory that was modified or removed.
--type <integer>
If an item was modified, describes the type of item that was modified. The following
values are valid:
1
Specifies that a regular file was modified.
2
Specifies that a directory was modified.
3
Specifies that a symbolic link was modified.
4
Specifies that a first-in-first-out (FIFO) queue was modified.
5
Specifies that a Unix domain socket was modified.
6
Specifies that a character device was modified.
7
Specifies that a block device was modified.
8
Specifies that an unknown type of file was modified.
To specify that any type of item was removed, specify 0.
--size <integer>
Specifies the size of the item that was modified, in bytes. If an item was removed,
specify 0.
--atime <integer>
Specifies when the item was last accessed.
Specify as a POSIX timestamp.
--atimensec <integer>
Specifies the number of nanoseconds past the atime that the item was last accessed.
--ctime <integer>
Specifies when the item was last changed.
Specify as a POSIX timestamp.
--ctimensec <integer>
Specifies the number of nanoseconds past the ctime that the item was last changed.
--mtime <integer>
Specifies when the item was last modified.
Specify as a POSIX timestamp.
--mtimensec <integer>
Specifies the number of nanoseconds past the mtime that the item was last modified.
-d <changelist> <lin>
Deletes the specified entry from the changelist.
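For example, the following commands (shown with an illustrative changelist name; changelist names are typically derived from the IDs of the two snapshots that were compared) first list the available changelists and then display the contents of one of them:
isi_changelist_mod -l
isi_changelist_mod -a 22_24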
CHAPTER 14
Data layout with FlexProtect
FlexProtect overview............................................................................................602
File striping......................................................................................................... 602
Requested data protection.................................................................................. 602
FlexProtect data recovery.....................................................................................603
Requesting data protection................................................................................. 604
Requested protection settings.............................................................................604
Requested protection disk space usage.............................................................. 605
FlexProtect overview
An Isilon cluster is designed to continuously serve data, even when one or more
components simultaneously fail. OneFS ensures data availability by striping or mirroring
data across the cluster. If a cluster component fails, data stored on the failed component
is available on another component. After a component failure, lost data is restored on
healthy components by the FlexProtect proprietary system.
Data protection is specified at the file level, not the block level, enabling the system to
recover data quickly. Because all data, metadata, and parity information is distributed
across all nodes, the cluster does not require a dedicated parity node or drive. This
ensures that no single node limits the speed of the rebuild process.
File striping
OneFS uses the internal network to automatically allocate and stripe data across nodes
and disks in the cluster. OneFS protects data as the data is being written. No separate
action is necessary to stripe data.
OneFS breaks files into smaller logical chunks called stripes before writing the files to
disk; the size of each file chunk is referred to as the stripe unit size. Each OneFS block is
8 KB, and a stripe unit consists of 16 blocks, for a total of 128 KB per stripe unit. During a
write, OneFS breaks data into stripes and then logically places the data in a stripe unit.
As OneFS stripes data across the cluster, OneFS fills the stripe unit according to the
number of nodes and protection level.
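For example, under these defaults, a 1 MB file consists of eight 128 KB stripe units (1,024 KB / 128 KB = 8), which OneFS distributes across nodes together with the parity or mirror blocks that the requested protection level requires.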
OneFS can continuously reallocate data and make storage space more usable and
efficient. As the cluster size increases, OneFS stores large files more efficiently.
Smartfail
OneFS protects data stored on failing nodes or drives through a process called
smartfailing.
During the smartfail process, OneFS places a device into quarantine. Data stored on
quarantined devices is read only. While a device is quarantined, OneFS reprotects the
data on the device by distributing the data to other devices. After all data migration is
complete, OneFS logically removes the device from the cluster, the cluster logically
changes its width to the new configuration, and the node or drive can be physically
replaced.
OneFS smartfails devices only as a last resort. Although you can manually smartfail
nodes or drives, it is recommended that you first consult Isilon Technical Support.
Occasionally a device might fail before OneFS detects a problem. If a drive fails without
being smartfailed, OneFS automatically starts rebuilding the data to available free space
on the cluster. However, because a node might recover from a failure, if a node fails,
OneFS does not start rebuilding data unless the node is logically removed from the
cluster.
Node failures
Because node loss is often a temporary issue, OneFS does not automatically start
reprotecting data when a node fails or goes offline. If a node reboots, the file system does
not need to be rebuilt because it remains intact during the temporary failure.
If you configure N+1 data protection on a cluster, and one node fails, all of the data is still
accessible from every other node in the cluster. If the node comes back online, the node
rejoins the cluster automatically without requiring a full rebuild.
To ensure that data remains protected, if you physically remove a node from the cluster,
you must also logically remove the node from the cluster. After you logically remove a
node, the node automatically reformats its own drives, and resets itself to the factory
default settings. The reset occurs only after OneFS has confirmed that all data has been
reprotected. You can logically remove a node using the smartfail process. It is important
that you smartfail nodes only when you want to permanently remove a node from the
cluster.
If you remove a failed node before adding a new node, data stored on the failed node
must be rebuilt in the free space in the cluster. After the new node is added, OneFS
distributes the data to the new node. It is more efficient to add a replacement node to the
cluster before failing the old node because OneFS can immediately use the replacement
node to rebuild the data stored on the failed node.
For 4U Isilon IQ X-Series and NL-Series nodes, and IQ 12000X/EX 12000 combination
platforms, the minimum cluster size of three nodes requires a minimum of N+2:1.
Requested protection settings
The requested protection settings are [+1n], [+2d:1n], [+2n], [+3d:1n], [+3d:1n1d], [+3n], [+4d:1n], [+4d:2n], [+4n], and Nx (data mirroring). Each setting requires a minimum number of nodes and defines how many simultaneous drive or node failures the cluster can sustain without data loss.
Nx (data mirroring) requires a minimum of N nodes; for example, 5x requires a minimum of five nodes. The cluster can recover from N - 1 drive or node failures without sustaining data loss; for example, 5x protection means that the cluster can recover from four drive or node failures.
Requested protection disk space usage
The following table shows, for each requested protection setting and cluster size, the ratio of data stripe units to protection stripe units and the approximate protection overhead as a percentage of total capacity. A dash indicates that the setting is not shown for that cluster size.

Number of nodes | [+1n]      | [+2d:1n]     | [+2n]      | [+3d:1n]   | [+3d:1n1d] | [+3n]      | [+4d:1n]   | [+4d:2n]   | [+4n]
3               | 2+1 (33%)  | 4+2 (33%)    | -          | 6+3 (33%)  | 3+3 (50%)  | -          | 8+4 (33%)  | -          | -
4               | 3+1 (25%)  | 6+2 (25%)    | -          | 9+3 (25%)  | 5+3 (38%)  | -          | 12+4 (25%) | 4+4 (50%)  | -
5               | 4+1 (20%)  | 8+2 (20%)    | 3+2 (40%)  | 12+3 (20%) | 7+3 (30%)  | -          | 16+4 (20%) | 6+4 (40%)  | -
6               | 5+1 (17%)  | 10+2 (17%)   | 4+2 (33%)  | 15+3 (17%) | 9+3 (25%)  | -          | 16+4 (20%) | 8+4 (33%)  | -
7               | 6+1 (14%)  | 12+2 (14%)   | 5+2 (29%)  | 15+3 (17%) | 11+3 (21%) | 4+3 (43%)  | 16+4 (20%) | 10+4 (29%) | -
8               | 7+1 (13%)  | 14+2 (12.5%) | 6+2 (25%)  | 15+3 (17%) | 13+3 (19%) | 5+3 (38%)  | 16+4 (20%) | 12+4 (25%) | -
9               | 8+1 (11%)  | 16+2 (11%)   | 7+2 (22%)  | 15+3 (17%) | 15+3 (17%) | 6+3 (33%)  | 16+4 (20%) | 14+4 (22%) | 5+4 (44%)
10              | 9+1 (10%)  | 16+2 (11%)   | 8+2 (20%)  | 15+3 (17%) | 15+3 (17%) | 7+3 (30%)  | 16+4 (20%) | 16+4 (20%) | 6+4 (40%)
12              | 11+1 (8%)  | 16+2 (11%)   | 10+2 (17%) | 15+3 (17%) | 15+3 (17%) | 9+3 (25%)  | 16+4 (20%) | 16+4 (20%) | 8+4 (33%)
14              | 13+1 (7%)  | 16+2 (11%)   | 12+2 (14%) | 15+3 (17%) | 15+3 (17%) | 11+3 (21%) | 16+4 (20%) | 16+4 (20%) | 10+4 (29%)
16              | 15+1 (6%)  | 16+2 (11%)   | 14+2 (13%) | 15+3 (17%) | 15+3 (17%) | 13+3 (19%) | 16+4 (20%) | 16+4 (20%) | 12+4 (25%)
18              | 16+1 (6%)  | 16+2 (11%)   | 16+2 (11%) | 15+3 (17%) | 15+3 (17%) | 15+3 (17%) | 16+4 (20%) | 16+4 (20%) | 14+4 (22%)
20              | 16+1 (6%)  | 16+2 (11%)   | 16+2 (11%) | 16+3 (16%) | 16+3 (16%) | 16+3 (16%) | 16+4 (20%) | 16+4 (20%) | 16+4 (20%)
30              | 16+1 (6%)  | 16+2 (11%)   | 16+2 (11%) | 16+3 (16%) | 16+3 (16%) | 16+3 (16%) | 16+4 (20%) | 16+4 (20%) | 16+4 (20%)
The parity overhead for mirrored data protection is not affected by the number of nodes in
the cluster. The following table describes the parity overhead for requested mirrored
protection.
Requested mirrored protection | Parity overhead
2x | 50%
3x | 67%
4x | 75%
5x | 80%
6x | 83%
7x | 86%
8x | 88%
CHAPTER 15
NDMP backup
then connect the Fibre Channel switch to two Fibre Channel ports, OneFS creates two
entries for the device, one for each path.
Note
If you perform an NDMP two-way backup operation, you must assign static IP addresses
to the Backup Accelerator node. If you connect to the cluster through a data management
application (DMA), you must connect to the IP address of a Backup Accelerator node. If
you perform an NDMP three-way backup, you can connect to any node in the cluster.
DMA
Supported
Symantec NetBackup
Yes
EMC Networker
Yes
EMC Avamar
Yes
Commvault Simpana
No
Yes
Dell NetVault
Yes
ASG-Time Navigator
Yes
In a level 10 NDMP backup, only data changed since the most recent incremental
(level 1-9) backup or the last level 10 backup is copied. By repeating level 10
backups, you can be assured that the latest versions of files in your data set are
backed up without having to run a full backup.
Supported DMAs
NDMP backups are coordinated by a data management application (DMA) that runs on a
backup server.
OneFS supports the following DMAs:
Symantec NetBackup
EMC NetWorker
EMC Avamar
Dell NetVault
CommVault Simpana
ASG-Time Navigator
Note
All supported DMAs can connect to an Isilon cluster through IPv4. CommVault Simpana is
currently the only DMA that also supports connecting to an Isilon cluster through IPv6.
See the Isilon Third-Party Software and Hardware Compatibility Guide for the latest
information about supported DMAs.
OneFS supports backing up data to LTO-3, LTO-4, LTO-5, and LTO-6 tape devices.
OneFS does not back up file system configuration data, such as file protection level
policies and quotas.
OneFS does not support multiple concurrent backups onto the same tape.
OneFS does not support restoring data from a file system other than OneFS. However,
you can migrate data via the NDMP protocol from a NetApp or EMC VNX storage
system to OneFS.
Backup Accelerator nodes cannot interact with more than 1024 device paths,
including the paths of tape and media changer devices. For example, if each device
has four paths, you can connect 256 devices to a Backup Accelerator node. If each
device has two paths, you can connect 512 devices.
OneFS does not support more than 64 concurrent NDMP sessions per Backup
Accelerator node.
Install the latest patches for OneFS and your data management application (DMA).
If you are backing up multiple directories that contain small files, set up a separate
schedule for each directory.
If you are performing three-way NDMP backups, run multiple NDMP sessions on
multiple nodes in your Isilon cluster.
Restore files through Direct Access Restore (DAR), especially if you restore files
frequently. However, it is recommended that you do not use DAR to restore a full
backup or a large number of files, as DAR is better suited to restoring smaller
numbers of files.
Restore files through Directory DAR (DDAR) if you restore large numbers of files
frequently.
Use the largest tape record size available for your version of OneFS. The largest tape
record size for OneFS versions 6.5.5 and later is 256 KB. The largest tape record size
for versions of OneFS earlier than 6.5.5 is 128 KB.
If possible, do not include or exclude files from backup. Including or excluding files
can affect backup performance, due to filtering overhead.
Limit the number of files in a directory. Distribute files across multiple directories
instead of including a large number of files in a single directory.
Networking recommendations
Connect NDMP sessions only through SmartConnect zones that are exclusively used
for NDMP backup.
Configure multiple policies when scheduling backup operations, with each policy
capturing a portion of the file system. Do not attempt to back up the entire file system
through a single policy.
This is recommended only if you are backing up a significant amount of data. Running
four concurrent streams might not be necessary for smaller backups.
Attach more Backup Accelerator nodes to larger clusters. The recommended number
of Backup Accelerator nodes is listed in the following table.
Table 19 Nodes per Backup Accelerator node
X-Series
NL-Series
S-Series
HD-Series
Attach more Backup Accelerator nodes if you are backing up to more tape devices.
The following table lists the recommended number of tape devices per backup
accelerator node:
Table 20 Tape devices per Backup Accelerator node
Tape device type Recommended number of tape devices per Backup Accelerator node
LTO-5, LTO-6
LTO-4
LTO-3
DMA-specific recommendations
Character | Description | Example
archive* | Matches any characters in place of the asterisk | /ifs/data/archive1, /ifs/data/archive42_a/media
[] | Matches any single character from the set or range within the brackets | /ifs/data/data_store_a, /ifs/data/data_store_c, /ifs/data/data_store_8
user_? | Matches any single character in place of the question mark | /ifs/data/user_1, /ifs/data/user_2
user\ 1 | Includes a blank space | /ifs/data/user 1
Unanchored patterns such as home or user1 target a string of text that might belong to
many files or directories. Anchored patterns target specific file pathnames, such as
/ifs/data/home. You can include or exclude either type of pattern.
For example, suppose you want to back up the /ifs/data/home directory, which
contains the following files and directories:
/ifs/data/home/user1/file.txt
/ifs/data/home/user2/user1/file.txt
/ifs/data/home/user3/other/file.txt
/ifs/data/home/user4/emptydirectory
If you simply include the /ifs/data/home directory, all files and directories, including
emptydirectory would be backed up.
If you specify both include and exclude patterns, any excluded files or directories under
the included directories would not be backed up. If the excluded directories are not found
in any of the included directories, the exclude specification would have no effect.
2. Configure NDMP backup by running the isi ndmp settings set command.
The following command configures OneFS to interact with EMC NetWorker:
isi ndmp settings set dma emc
In contrast, when a non-restartable backup fails, you must back up all data from the
beginning, regardless of what was transferred during the initial backup process.
After you enable restartable backups from your DMA, you can manage restartable backup
contexts from OneFS. These contexts are the data that OneFS stores to facilitate
restartable backups. Each context represents a checkpoint that the restartable backup
process can return to if a backup fails.
Restartable backups are supported only for EMC NetWorker 8.1 and later.
2. To view detailed information about a specific backup context, run the isi ndmp
extensions contexts view command.
The following command displays detailed information about a backup context with an
ID of 792eeb8a-8784-11e2-aa70-0025904e91a4:
isi ndmp extensions contexts view 792eeb8a-8784-11e2-aa70-0025904e91a4
When you specify a file list backup, only the listed files and subdirectories in the source
directory are backed up. With a level 0 file list backup, all listed files and directories in
the source directory are backed up.
A backup level other than 0 triggers an incremental file list backup. In an incremental file
list backup, only the listed files that were created or changed in the source directory since
the last incremental backup of the same level are backed up.
To configure a file list backup, you must complete the following tasks:
The file list is an ASCII text file that lists the pathnames of files to be backed up. The
pathnames must be relative to the path specified in the FILESYSTEM environment
variable. Absolute file paths in the file list are not supported. The pathnames of all files
must be included, or they are not backed up. For example, if you include the pathname of
a subdirectory, only the subdirectory, not the files it contains, is backed up.
To specify the full path of the source directory to be backed up, you must specify the
FILESYSTEM environment variable in your DMA. For example:
FILESYSTEM=/ifs/data/projects
To specify the pathname of the file list, you must specify the environment variable,
BACKUP_FILE_LIST in your DMA. The file list must be accessible from the node
performing the backup. For example:
BACKUP_FILE_LIST=/ifs/data/proj_list.txt
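For example, a file list for this source directory might contain relative pathnames such as the following (illustrative entries only):
project1/plan.txt
project1/designs/drawing1.dwg
project2/report.doc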
As shown in the example, the pathnames are relative to the full path of the source
directory, which you specify in the FILESYSTEM environment variable. Absolute file paths
are not supported in the file list.
Parallel restore works for full and selective restore operations. If you specify DAR (direct
access restore), however, the operation reverts to serial processing.
Supported DMAs
Tested configurations
7.1.1
7.1.0.1 (and
later)*
7.0.2.5
6.6.5.26
* The tape drive sharing function is not supported in the OneFS 7.0.1 release.
EMC NetWorker refers to the tape drive sharing capability as DDS (dynamic drive sharing).
Symantec NetBackup uses the term SSO (shared storage option). Consult your DMA
vendor documentation for configuration instructions.
3. Optional: To remove a default NDMP setting for a directory, run the isi ndmp
settings variables delete command:
For example, the following command removes the default file history format
for /ifs/data/media:
isi ndmp settings variables delete /ifs/data/media --name HIST
Environment variable | Valid values | Default | Description

BACKUP_MODE=
Valid values: TIMESTAMP, SNAPSHOT
Default: TIMESTAMP

FILESYSTEM=
Valid values: <file-path>
Default: None

LEVEL=
Valid values: <integer>
Default: 0
Description:
0
Performs a full NDMP backup.
1-9
Performs an incremental backup at the specified level.
10
Performs unlimited incremental backups.

UPDATE=
Valid values: Y, N
Default: Y
Description:
Y
OneFS updates the dump dates file.
N
OneFS does not update the dump dates file.

HIST=
Valid values: <file-history-format>
Description:
D
Specifies dir/node file history.
F
Specifies path-based file history.
Y
Specifies the default file history format determined by your NDMP backup settings.
N
Disables file history.

DIRECT=
Valid values: Y, N
Default: Y
Description:
Y
Enables DAR and DDAR.
N
Disables DAR and DDAR.

FILES=
Valid values: <file-matching-pattern>
Default: None

EXCLUDE=
Valid values: <file-matching-pattern>
Default: None

RESTORE_HARDLINK_BY_TABLE=
Valid values: Y, N

CHECKPOINT_INTERVAL_IN_BYTES=
Valid values: <size>
Default: 5 GB
Description: If a restartable backup fails, OneFS can resume the backup from where the
process failed. The <size> parameter is the space between each checkpoint.
Note that this variable can only be set from the DMA. For example, if you specify 2 GB,
your DMA would create a checkpoint each time 2 GB of data were backed up.
Restartable backups are supported only for EMC NetWorker 8.1 and later.

BACKUP_FILE_LIST=
Valid values: <file-path>
Default: None

RESTORE_OPTIONS=
Valid values: 0, 1
Options
<name>
Specifies a name for the NDMP user account.
<password>
Specifies a password for the NDMP user account.
Examples
The following command creates an NDMP user account with a name of ndmp_user and a
password of 1234:
isi ndmp user create ndmp_user 1234
Options
<name>
Modifies the password of the specified NDMP user account.
<password>
Assigns the specified password to the given NDMP user account.
Examples
The following command sets the password of ndmp_user to newpassword:
isi ndmp user modify ndmp_user newpassword
Options
<name>
Deletes the specified NDMP user account.
Examples
The following example deletes ndmp_user:
isi ndmp user delete ndmp_user
Options
If no options are specified, displays information about all NDMP users.
--name <name>
Displays information about only the specified NDMP user.
Examples
To view information about all NDMP user accounts, run the following command:
isi ndmp user list
Options
If no options are specified, scans all nodes and ports.
--node <lnn>
Scans only the node of the specified logical node number (LNN).
--port <integer>
Scans only the specified port. If you specify --node, scans only the specified port on
the specified node. If you do not specify --node, scans the specified port on all
nodes.
--reconcile
Removes entries for devices or paths that have become inaccessible.
Examples
To scan the entire cluster for NDMP devices, and remove entries for devices and paths
that have become inaccessible, run the following command:
isi tape rescan --reconcile
Options
<devname>
Modifies the name of the specified NDMP device.
<rename>
Specifies a new name for the given NDMP device.
Examples
The following example renames tape003 to tape005:
isi tape rename tape003 tape005
Options
<devname>
Disconnects the cluster from the specified device. Specify as an NDMP device name.
--all
Disconnects the cluster from all devices.
Examples
The following command disconnects tape001 from the cluster:
isi tape delete tape001
isi tape list
Displays tape and media changer devices that are attached to the cluster.
Syntax
isi tape list
[--devname <name>]
[--node <lnn>]
[--tape]
[--mc]
[--verbose]
Options
{--devname | --n} <name>
Displays only the specified device. Specify as a device name.
--node <lnn>
Displays only devices that are attached to the node of the specified logical node
number (LNN).
--tape
Displays only tape devices.
--mc
Displays only media changer devices.
{--verbose | -v}
Displays more detailed information.
Examples
To view a list of all NDMP devices, run the following command:
isi tape list
isi fc set
Configures a Fibre Channel port on a Backup Accelerator node.
This command is valid only if a Backup Accelerator node is attached to the cluster, and
the specified port is disabled.
Syntax
isi fc set <port> {--wwnn <wwnn> | --wwpn <wwpn>
| --topology <topology> | --rate <rate>}
Options
<port>
Configure the specified port.
Specify as a port ID.
--wwnn <wwnn>
Specifies the world wide node name (WWNN) of the port.
Specify as a string of 16 hexadecimal characters.
--wwpn <wwpn>
Specifies the world wide port name (WWPN) of the port.
Specify as a string of 16 hexadecimal characters.
--topology <topology>
Specifies the type of Fibre Channel topology that the port expects.
The following values are valid:
ptp
Causes the port to expect a point-to-point topology, with one backup device or
Fibre Channel switch directly connected to the port.
loop
Causes the port to expect an arbitrated loop topology, with multiple backup
devices connected to a single port in a circular formation.
auto
Causes the port to detect the topology automatically. This is the recommended
setting. If you are using a fabric topology, specify this setting.
--rate <rate>
Specifies the rate that OneFS will attempt to send data through the port.
The following values are valid:
auto
OneFS automatically negotiates with the DMA to determine the rate. This is the
recommended setting.
1
Attempts to send data through the port at a speed of 1 Gb per second.
2
Attempts to send data through the port at a speed of 2 Gb per second.
4
Attempts to send data through the port at a speed of 4 Gb per second.
8
Attempts to send data through the port at a speed of 8 Gb per second.
Examples
The following command causes port 1 on node 5 to expect a point-to-point Fibre Channel
topology:
isi fc set 5:1 --topology ptp
isi fc disable
Disables a Fibre Channel port.
Syntax
isi fc disable <port>
Options
<port>
Disables the specified port.
Specify as a port ID.
Examples
The following command disables port 1 on node 5:
isi fc disable 5:1
isi fc enable
Enables a Fibre Channel port.
Syntax
isi fc enable <port>
Options
<port>
Enables the specified port.
Specify as a port ID.
Examples
The following command enables port 1 on node 5:
isi fc enable 5:1
isi fc list
Displays a list of Fibre Channel ports on Backup Accelerator nodes connected to the
cluster.
Syntax
isi fc list
[{<port> | --node <id>}]
Options
If no options are specified, displays all Fibre Channel ports on Backup Accelerator nodes
connected to the cluster.
<port>
Displays only the specified port. Specify as a port ID.
--node <id>
Displays all ports on the specified node.
Specify as a node ID.
Examples
The following command displays all ports on node 5:
isi fc list --node 5
WWNN              WWPN              State    Topology  Rate
2000001b3214ccc3  2100001b3214ccc3  enabled  auto      auto
2000001b3234ccc3  2101001b3234ccc3  enabled  auto      auto
2000001b3254ccc3  2100001b3254ccc3  enabled  auto      auto
2000001b3234ccc3  2103001b3274ccc3  enabled  auto      auto
Options
<session>
Terminates the specified session. Specify as a session ID.
Examples
The following command terminates a session with an ID of 4.36339:
isi ndmp kill 4.36339
Options
If no options are specified, displays all NDMP sessions.
--session <id>
Displays only the session of the specified ID.
--node <id>
Displays only sessions running on the node of the specified ID.
{--verbose | -v}
Displays detailed information.
Examples
Run the following command to view all NDMP sessions:
isi ndmp list
The following list describes the values for the Data, Mover, and OP columns:
A
Active
I
Idle
P
Paused
R
Recover
B
Backup
Options
<session>
Displays diagnostic information about the specified NDMP session. Specify as a
session ID.
Examples
The following command displays diagnostic information for session 4.34729:
isi ndmp probe 4.34729
Options
<name>
Modifies the specified setting.
The following values are valid:
dma
Configures the cluster to interact with the specified data management application
(DMA) vendor.
port
Specifies the port through which a DMA vendor connects to the cluster.
<value>
Specifies a value for the setting. If you are modifying the port setting, specify a
TCP/IP port.
If you are modifying the DMA setting, the following values are valid:
atempo
Atempo Time Navigator
bakbone
BakBone NetVault
commvault
CommVault Simpana
emc
EMC NetWorker
symantec
Symantec NetBackup
tivoli
IBM Tivoli Storage Manager
symantec-netbackup
Symantec NetBackup
symantec-backupexec
Symantec Backup Exec
generic
Unsupported DMA vendor
Examples
To set the vendor of the current data management application to EMC NetWorker, run the
following command:
isi ndmp settings set --name dma --value emc
The following command sets the port number on which the NDMP daemon listens for
incoming connections to 10001:
isi ndmp settings set port 10001
Options
If no options are specified, all settings are displayed.
--name <setting>
Displays only the value of the specified setting.
The following values are valid:
port
The port through which a Data Management Application (DMA) connects.
dma
The DMA vendor that the cluster is currently configured to interact with.
Examples
To view a list of NDMP settings and values, run the following command:
isi ndmp settings list
For a list of available environment variables, see NDMP environment variables on page
626.
Options
<path>
Applies the default NDMP environment variable value to the specified path.
<name>
Specifies the NDMP environment variable to define.
<value>
Specifies the value to be applied to the NDMP environment variable.
Examples
The following command causes snapshot-based incremental backups to be performed
for /ifs/data/media by default:
isi ndmp settings variables create /ifs/data/media BACKUP_MODE
SNAPSHOT
Options
For a list of available environment variables, see NDMP environment variables on page
626.
<path>
Applies the default NDMP-environment-variable value to the specified path.
<name>
Specifies the NDMP environment variable to be defined.
<value>
Specifies the value to be applied to the NDMP environment variable.
Examples
The following command sets the default file history for backing up /ifs/data/media
to be path-based:
isi ndmp settings variables modify /ifs/data/media HIST F
Options
<path>
Applies the default NDMP-environment-variable value to the specified path.
--name <variable>
Deletes the default value for the specified NDMP environment variable. The following
values are valid:
FILESYSTEM
LEVEL
UPDATE
HIST
DIRECT
FILES
EXCLUDE
ENCODING
RESTORE_HARDLINK_BY_TABLE
If this option is not specified, deletes default values for all NDMP environment
variables for the given directory.
Examples
The following command removes all default NDMP settings for /ifs/data/media:
isi ndmp settings variables delete /ifs/data/media
The following command removes the default file-history setting for backing up /ifs/
data/media:
isi ndmp settings variables delete /ifs/data/media --name HIST
Options
--path <path>
Applies the default NDMP-environment-variable value to the specified path.
Examples
To view default values for NDMP environment variables for directory paths, run the
following command:
isi ndmp settings variables list
Options
<path>
Options
--path <path>
Displays only dumpdate entries that relate to the specified file path.
Examples
To view NDMP dumpdate entries, run the following command:
isi ndmp dumpdates list
Options
<id>
Deletes the specified restartable backup context.
Examples
The following command deletes a restartable backup context:
isi ndmp extensions contexts delete 533089ed-c4c5-11e2-bad5-001b21a2c2dc
Options
--id <id>
Displays only the restartable backup context of the specified ID.
Examples
To view restartable backup contexts, run the following command:
isi ndmp extensions contexts list
Options
--id <id>
Displays information about the restartable backup context of the specified ID.
Examples
The following command displays information about a restartable backup context:
isi ndmp extensions contexts view 2a9a9e11-ac6c-11e2-b40d-0025904b58c6
Options
--bre_max_contexts <integer>
Specifies the maximum number of restartable backup contexts. Specify as an integer
from 1 to 1024.
Examples
The following command sets the maximum number of restartable backup contexts to
500:
isi ndmp extensions settings modify --bre_max_contexts 500
Options
There are no options for this command.
Examples
To view the maximum number of restartable backup contexts, run the following
command:
isi ndmp extensions settings view
CHAPTER 16
File retention with SmartLock
SmartLock overview
You can prevent users from modifying and deleting files on an EMC Isilon cluster with the
SmartLock software module. You must activate a SmartLock license on a cluster to
protect data with SmartLock.
With the SmartLock software module, you can create SmartLock directories and commit
files within those directories to a write once read many (WORM) state. You cannot erase
or re-write a file committed to a WORM state. After a file is removed from a WORM state,
you can delete the file. However, you can never modify a file that has been committed to
a WORM state, even after it is removed from a WORM state.
Compliance mode
SmartLock compliance mode enables you to protect your data in compliance with the
regulations defined by U.S. Securities and Exchange Commission rule 17a-4. You can
upgrade a cluster to compliance mode during the initial cluster configuration process,
before you activate the SmartLock license. To upgrade a cluster to SmartLock compliance
mode after the initial cluster configuration process, contact Isilon Technical Support.
If you upgrade a cluster to compliance mode, you will not be able to log in to that cluster
through the root user account. Instead, you can log in to the cluster through the
compliance administrator account that is configured either during initial cluster
configuration or when the cluster is upgraded to compliance mode. If you are logged in
through the compliance administrator account, you can perform administrative tasks
through the sudo command.
SmartLock directories
In a SmartLock directory, you can commit a file to a WORM state manually or you can
configure SmartLock to automatically commit the file. You can create two types of
SmartLock directories: enterprise and compliance. However, you can create compliance
directories only if the cluster has been upgraded to SmartLock compliance mode. Before
you can create SmartLock directories, you must activate a SmartLock license on the
cluster.
If you commit a file to a WORM state in an enterprise directory, the file can never be
modified and cannot be deleted until the retention period passes. However, if you are
logged in through the root user account, you can delete the file before the retention
period passes through the privileged delete feature. The privileged delete feature is not
available for compliance directories. Enterprise directories reference the system clock to
facilitate time-dependent operations, including file retention.
Compliance directories enable you to protect your data in compliance with the
regulations defined by U.S. Securities and Exchange Commission rule 17a-4. If you
commit a file to a WORM state in a compliance directory, the file cannot be modified or
deleted before the specified retention period has expired. You cannot delete committed
files, even if you are logged in to the compliance administrator account. Compliance
directories reference the compliance clock to facilitate time-dependent operations,
including file retention.
You must set the compliance clock before you can create compliance directories. You can
set the compliance clock only once. After you set the compliance clock, you cannot
modify the compliance clock time. The compliance clock is controlled by the compliance
clock daemon. Because a user can disable the compliance clock daemon, it is possible
for a user to increase the retention period of WORM committed files in compliance mode.
However, it is not possible to decrease the retention period of a WORM committed file.
Do not configure SmartLock settings for a target SmartLock directory unless you are no
longer replicating data to the directory. Configuring an autocommit time period for a
target SmartLock directory can cause replication jobs to fail. If the target SmartLock
directory commits a file to a WORM state, and the file is modified on the source cluster,
the next replication job will fail because it cannot update the file.
Source directory type | Target directory type | Allowed
Non-SmartLock | Non-SmartLock | Yes
Non-SmartLock | SmartLock enterprise | Yes
Non-SmartLock | SmartLock compliance | No
SmartLock enterprise | Non-SmartLock | Yes
SmartLock enterprise | SmartLock enterprise | Yes
SmartLock enterprise | SmartLock compliance | No
SmartLock compliance | Non-SmartLock | No
SmartLock compliance | SmartLock enterprise | No
SmartLock compliance | SmartLock compliance | Yes
If you replicate SmartLock directories to another cluster with SyncIQ, the WORM state of
files is replicated. However, SmartLock directory configuration settings are not transferred
to the target directory.
For example, if you replicate a directory that contains a committed file that is set to expire
on March 4th, the file is still set to expire on March 4th on the target cluster. However, if
the directory on the source cluster is set to prevent files from being committed for more
than a year, the target directory is not automatically set to the same restriction.
If you back up data to an NDMP device, all SmartLock metadata relating to the retention
date and commit status is transferred to the NDMP device. If you restore data to a
SmartLock directory on the cluster, the metadata persists on the cluster. However, if the
directory that you restore to is not a SmartLock directory, the metadata is lost. You can
restore to a SmartLock directory only if the directory is empty.
SmartLock considerations
If a file is owned exclusively by the root user, and the file exists on a cluster that is in
SmartLock compliance mode, the file will be inaccessible, because the root user
account is disabled in compliance mode. For example, this can happen if a file is
assigned root ownership on a cluster that has not been upgraded to compliance
mode, and then the file is replicated to a cluster in compliance mode. This can also
occur if a file is assigned root ownership before a cluster is upgraded to SmartLock
compliance mode or if a root-owned file is restored on a compliance cluster after
being backed up.
It is recommended that you create files outside of SmartLock directories and then
transfer them into a SmartLock directory after you are finished working with the files.
If you are uploading files to a cluster, it is recommended that you upload the files to a
non-SmartLock directory, and then later transfer the files to a SmartLock directory. If a
file is committed to a WORM state while the file is being uploaded, the file will
become trapped in an inconsistent state.
Files can be committed to a WORM state while they are still open. If you specify an
autocommit time period for a directory, the autocommit time period is calculated
according to the length of time since the file was last modified, not when the file was
closed. If you delay writing to an open file for more than the autocommit time period,
the file will be automatically committed to a WORM state, and you will not be able to
write to the file.
In a Microsoft Windows environment, if you commit a file to a WORM state, you can
no longer modify the hidden or archive attributes of the file. Any attempt to modify
the hidden or archive attributes of a WORM committed file will generate an error. This
can prevent third-party applications from modifying the hidden or archive attributes.
Retention periods
A retention period is the length of time that a file remains in a WORM state before being
released from a WORM state. You can configure SmartLock directory settings that enforce
default, maximum, and minimum retention periods for the directory.
If you manually commit a file, you can optionally specify the date that the file is released
from a WORM state. You can configure a minimum and a maximum retention period for a
SmartLock directory to prevent files from being retained for too long or too short a time
period. It is recommended that you specify a minimum retention period for all SmartLock
directories.
For example, assume that you have a SmartLock directory with a minimum retention
period of two days. At 1:00 PM on Monday, you commit a file to a WORM state, and
specify the file to be released from a WORM state on Tuesday at 3:00 PM. The file will be
released from a WORM state two days later on Wednesday at 1:00 PM, because releasing
the file earlier would violate the minimum retention period.
You can also configure a default retention period that is assigned when you commit a file
without specifying a date to release the file from a WORM state.
Procedure
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Run the isi worm domains create command.
If you specify the path of an existing directory, the directory must be empty.
The following command creates a compliance directory with a default retention period
of four years, a minimum retention period of three years, and a maximum retention
period of five years:
sudo isi worm domains create /ifs/data/SmartLock/directory1 \
--compliance --default-retention 4Y --min-retention 3Y \
--max-retention 5Y --mkdir
You can modify SmartLock directory settings only 32 times per directory. It is
recommended that you set SmartLock configuration settings only once and do not modify
the settings after files are added to the SmartLock directory.
Procedure
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Modify SmartLock configuration settings by running the isi worm domains modify command.
The following command sets the default retention period to one year:
isi worm domains modify /ifs/data/SmartLock/directory1 \
--default-retention 1Y
3. Optional: To view detailed information about a specific SmartLock directory, run the
isi worm domains view command.
The following command displays detailed information about /ifs/data/
SmartLock/directory2:
isi worm domains view /ifs/data/SmartLock/directory2
ID: 65537
Path: /ifs/data/SmartLock/directory2
Type: enterprise
LIN: 4295426060
Autocommit Offset: 30m
Privileged Delete: off
Default Retention: 1Y
Min Retention: 3M
Total Modifies: 3/32 Max
Override date
The override retention date for the directory. Files committed to a WORM state are
not released from a WORM state until after the specified date, regardless of the
maximum retention period for the directory or whether a user specifies an earlier
date to release a file from a WORM state.
Privileged delete
Indicates whether files in the directory can be deleted through the privileged delete
functionality.
on
The root user can delete files committed to a WORM state by running the isi
worm files delete command.
off
WORM committed files cannot be deleted, even through the isi worm files
delete command.
disabled
WORM committed files cannot be deleted, even through the isi worm files
delete command. After this setting is applied, it cannot be modified.
Default retention period
The default retention period for the directory. If a user does not specify a date to
release a file from a WORM state, the default retention period is assigned.
Times are expressed in the format "<integer> <time>", where <time> is one of the
following values:
Y
Specifies years
M
Specifies months
W
Specifies weeks
D
Specifies days
H
Specifies hours
m
Specifies minutes
s
Specifies seconds
Forever indicates that WORM committed files are retained permanently by default.
Use Min indicates that the default retention period is equal to the minimum
retention date. Use Max indicates that the default retention period is equal to the
maximum retention date.
Total modifies
The total number of times that SmartLock settings have been modified for the
directory. You can modify SmartLock settings only 32 times per directory.
3. Specify the name of the file you want to set a retention period for by creating an
object.
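For example, the following PowerShell command (shown with a hypothetical cluster name and file path) creates an object for a file in a SmartLock directory that is accessed over SMB:
$file = Get-Item "\\cluster-name\ifs\data\SmartLock\directory1\file.txt"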
4. Specify the retention period by setting the last access time for the file.
The following command sets an expiration date of July 1, 2015 at 1:00 PM:
$file.LastAccessTime = Get-Date "2015/7/1 1:00 pm"
2. Override the retention period expiration date for all WORM committed files in a
SmartLock directory by running the isi worm domains modify command.
For example, the following command overrides the retention period expiration date
of /ifs/data/SmartLock/directory1 to June 1, 2014:
isi worm domains modify /ifs/data/SmartLock/directory1 \
--override-date 2014-06-01
3. Delete the WORM committed file by running the isi worm files delete command.
The following command deletes /ifs/data/SmartLock/directory1/file:
isi worm files delete /ifs/data/SmartLock/directory1/file
65539 /ifs/data/SmartLock/directory1
WORM State: COMMITTED
Expires: 2015-06-01T00:00:00
SmartLock commands
You can control file retention through the WORM commands. WORM commands apply
specifically to the SmartLock tool, and are available only if you have activated a
SmartLock license on the cluster.
Options
<path>
Creates a SmartLock directory at the specified path.
Specify as a directory path.
{--compliance | -C}
Specifies the SmartLock directory as a SmartLock compliance directory. This option is
valid only on clusters running in SmartLock compliance mode.
{--autocommit-offset | -a} <duration>
Specifies an autocommit time period. After a file exists in a SmartLock directory
without being modified for the specified length of time, the file is automatically
committed to a WORM state.
Specify <duration> in the following format:
<integer><units>
D
Specifies days
H
Specifies hours
m
Specifies minutes
s
Specifies seconds
To specify no autocommit time period, specify none. The default value is none.
{--override-date | -o} <timestamp>
Specifies an override retention date for the directory. Files committed to a WORM
state are not released from a WORM state until after the specified date, regardless of
the maximum retention period for the directory or whether a user specifies an earlier
date to release a file from a WORM state.
Specify <timestamp> in the following format:
<YYYY>-<MM>-<DD>[T<hh>:<mm>[:<ss>]]
If you specify this option, you can never enable the privileged delete functionality for
the directory. If a file is then committed to a WORM state in the directory, you will not
be able to delete the file until the retention period has passed.
{--default-retention | -d} {<duration> | forever | use_min | use_max}
Specifies a default retention period. If a user does not explicitly assign a retention
period expiration date, the default retention period is assigned to the file when it is
committed to a WORM state.
Specify <duration> in the following format:
<integer><units>
W
Specifies weeks
D
Specifies days
H
Specifies hours
m
Specifies minutes
s
Specifies seconds
To permanently retain WORM committed files by default, specify forever. To assign
the minimum retention period as the default retention period, specify use_min. To
assign the maximum retention period as the default retention period, specify
use_max.
{--min-retention | -m} {<duration> | forever}
Specifies a minimum retention period. Files are retained in a WORM state for at least
the specified amount of time.
Specify <duration> in the following format:
<integer><units>
Options
<domain>
Modifies the specified SmartLock directory.
Specify as a directory path, ID, or LIN of a SmartLock directory.
{--compliance | -C}
Specifies the SmartLock directory as a SmartLock compliance directory. This option is
valid only on clusters running in SmartLock compliance mode.
{--autocommit-offset | -a} <duration>
Specifies an autocommit time period. After a file exists in a SmartLock directory
without being modified for the specified length of time, the file is automatically
committed to a WORM state.
Specify <duration> in the following format:
<integer><units>
--clear-override-date
Removes the override retention date for the given SmartLock directory.
{--privileged-delete | -p} {true | false}
Determines whether files in the directory can be deleted through the isi worm
files delete command. This option is available only for SmartLock enterprise
directories.
The default value is false.
--disable-privileged-delete
Permanently prevents WORM committed files from being deleted from the SmartLock
directory.
Note
If you specify this option, you can never enable the privileged delete functionality for
the SmartLock directory. If a file is then committed to a WORM state in the directory,
you will not be able to delete the file until the retention period expiration date has
passed.
{--default-retention | -d} {<duration> | forever | use_min | use_max}
Specifies a default retention period. If a user does not explicitly assign a retention
period expiration date, the default retention period is assigned to the file when it is
committed to a WORM state.
Specify <duration> in the following format:
<integer><units>
Y
Specifies years
M
Specifies months
W
Specifies weeks
D
Specifies days
H
Specifies hours
m
Specifies minutes
s
Specifies seconds
To permanently retain all WORM committed files, specify forever.
--clear-min-retention
Removes the minimum retention period for the given SmartLock directory.
{--max-retention | -x} {<duration> | forever}
Specifies a maximum retention period. Files cannot be retained in a WORM state for
more than the specified amount of time, even if a user specifies an expiration date
that results in a longer retention period.
Specify <duration> in the following format:
<integer><units>
{--force | -f}
Does not prompt you to confirm the modification of the SmartLock directory.
{--verbose | -v}
Displays more detailed information.
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--sort <attribute>
Sorts output displayed by the specified attribute.
The following values are valid:
id
Sorts output by the SmartLock directory ID.
path
Sorts output by the path of the SmartLock directory.
type
Sorts output based on whether the SmartLock directory is a compliance
directory.
lin
Sorts output by the inode number of the SmartLock directory.
autocommit_offset
Sorts output by the autocommit time period of the SmartLock directory.
override_date
Sorts output by the override retention date of the SmartLock directory.
privileged_delete
Sorts output based on whether the privileged delete functionality is enabled for
the SmartLock directory.
default_retention
Sorts output by the default retention period of the SmartLock directory.
min_retention
Sorts output by the minimum retention period of the SmartLock directory.
max_retention
Sorts output by the maximum retention period of the SmartLock directory.
total_modifies
Sorts output by the total number of times that the SmartLock directory has been
modified.
{--descending | -d}
Displays output in reverse order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table output without headers.
{--no-footer | -z}
Displays table output without footers. Footers display snapshot totals, such as the
total amount of storage space consumed by snapshots.
{--verbose | -v}
Displays more detailed information.
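For example, the following command (an illustrative invocation) displays SmartLock directories sorted by path:
isi worm domains list --sort path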
Options
<domain>
Displays information about the specified SmartLock directory.
Specify as a directory path, ID, or LIN of a SmartLock directory.
You can set the compliance clock only once. After the compliance clock has been set,
you cannot modify the compliance clock time.
Syntax
isi worm cdate set
Options
There are no options for this command.
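To set the compliance clock, run the following command:
isi worm cdate set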
Options
There are no options for this command.
Options
<path>
Deletes the specified file. The file must exist in a SmartLock enterprise directory with
the privileged delete functionality enabled.
Specify as a file path.
--force
Does not prompt you to confirm that you want to delete the file.
--verbose
Displays more detailed information.
Options
<path>
Displays information about the specified file. The file must be committed to a WORM
state.
Specify as a file path.
--no-symlinks
If <path> refers to a file, and the given file is a symbolic link, displays WORM
information about the symbolic link. If this option is not specified, and the file is a
symbolic link, displays WORM information about the file that the symbolic link refers
to.
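For example, the following command (shown with an illustrative file path) displays WORM information about a committed file:
isi worm files view /ifs/data/SmartLock/directory1/file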
CHAPTER 17
Protection domains
Copying a large number of files into a protection domain might take a very long time
because each file must be marked individually as belonging to the protection
domain.
You cannot move directories in or out of protection domains. However, you can move
a directory contained in a protection domain to another location within the same
protection domain.
Creating a protection domain for a directory that contains a large number of files will
take more time than creating a protection domain for a directory with fewer files.
Because of this, it is recommended that you create protection domains for directories
while the directories are empty, and then add files to the directory.
CHAPTER 18
Data-at-rest-encryption
Self-encrypting drives
Self-encrypting drives store data on an EMC Isilon cluster that is specially designed for
data-at-rest encryption.
Data-at-rest encryption on self-encrypting drives occurs when data that is stored on a
device is encrypted to prevent unauthorized data access. All data written to the storage
device is encrypted when it is stored, and all data read from the storage device is
decrypted when it is read. The stored data is encrypted with a 256-bit AES data-encryption
key and decrypted in the same manner. OneFS controls data access by
combining the drive authentication key with on-disk data-encryption keys.
Note
All nodes in a cluster must be of the self-encrypting drive type. Mixed nodes are not
supported.
When a drive is smartfailed and removed from a node, the encryption key on the drive
is removed. Because the encryption key for reading data from the drive must be the
same key that was used when the data was written, it is impossible to decrypt data
that was previously written to the drive. When you smartfail and then remove a drive,
it is cryptographically erased.
Note
When a self-encrypting drive loses power, the drive locks to prevent unauthorized
access. When power is restored, data is again accessible when the appropriate drive
authentication key is provided.
Before you begin the data-migration process, both clusters must be upgraded to the
same OneFS version.
During data migration, an error is generated that indicates you are running in mixed
mode, which is not supported and is not secure. The data migrated to the self-encrypted
drives is not secure until the smartfail process is completed for the non-encrypted drives.
Description
Interface
HEALTHY
Command-line
interface, web
administration
interface
SMARTFAIL or
Smartfail or
restripe in
progress
Command-line
interface, web
administration
interface
NOT AVAILABLE
Command-line
interface, web
administration
interface
Error
state
Note
SED_ERROR command-line
interface states.
SUSPENDED
Command-line
interface, web
administration
interface
NOT IN USE
Command-line
interface, web
administration
interface
REPLACE
STALLED
NEW
USED
PREPARING
Command-line
interface only
EMPTY
Command-line
interface only
WRONG_TYPE
BOOT_DRIVE
Command-line
interface only
Error
state
SED_ERROR
Command-line
interface, web
administration
interface
Note
available.
ERASE
Command-line
interface only
Note
available.
INSECURE
Command-line
interface only
Web
administration
interface only
Note
SED.
UNENCRYPTED SED
Lnum 11   /dev/da1    [SMARTFAIL]   SN:Z296M8HK
Lnum 10   /dev/da2    [HEALTHY]     SN:Z296M8N5
Lnum 9    /dev/da3    [HEALTHY]     SN:Z296LBP4
Lnum 8    /dev/da4    [HEALTHY]     SN:Z296LCJW
Lnum 7    /dev/da5    [HEALTHY]     SN:Z296M8XB
Lnum 6    /dev/da6    [HEALTHY]     SN:Z295LXT7
Lnum 5    /dev/da7    [HEALTHY]     SN:Z296M8ZF
Lnum 4    /dev/da8    [HEALTHY]     SN:Z296M8SD
Lnum 3    /dev/da9    [HEALTHY]     SN:Z296M8QA
Lnum 2    /dev/da10   [HEALTHY]     SN:Z296M8Q7
Lnum 1    /dev/da11   [HEALTHY]     SN:Z296M8SP
Lnum 0    /dev/da12   [HEALTHY]     SN:Z296M8QZ
If you run the isi dev command after the smartfail completes successfully, the system
displays output similar to the following example, showing the drive state as REPLACE:
Node 1, [ATTN]
  Bay 1        Lnum 11     [REPLACE]     SN:Z296M8HK     000093172YE04 /dev/da1
  Bay 2        Lnum 10     [HEALTHY]     SN:Z296M8N5     00009330EYE03 /dev/da2
  Bay 3        Lnum 9      [HEALTHY]     SN:Z296LBP4     00009330EYE03 /dev/da3
  Bay 4        Lnum 8      [HEALTHY]     SN:Z296LCJW     00009327BYE03 /dev/da4
  Bay 5        Lnum 7      [HEALTHY]     SN:Z296M8XB     00009330KYE03 /dev/da5
  Bay 6        Lnum 6      [HEALTHY]     SN:Z295LXT7     000093172YE03 /dev/da6
  Bay 7        Lnum 5      [HEALTHY]     SN:Z296M8ZF     00009330KYE03 /dev/da7
  Bay 8        Lnum 4      [HEALTHY]     SN:Z296M8SD     00009330EYE03 /dev/da8
  Bay 9        Lnum 3      [HEALTHY]     SN:Z296M8QA     00009330EYE03 /dev/da9
  Bay 10       Lnum 2      [HEALTHY]     SN:Z296M8Q7     00009330EYE03 /dev/da10
  Bay 11       Lnum 1      [HEALTHY]     SN:Z296M8SP     00009330EYE04 /dev/da11
  Bay 12       Lnum 0      [HEALTHY]     SN:Z296M8QZ     00009330JYE03 /dev/da12
If you run the isi dev command while the drive in bay 3 is being smartfailed, the
system displays output similar to the following example:
Node 1, [ATTN]
  Bay 1        Lnum 11     [REPLACE]     SN:Z296M8HK     000093172YE04 /dev/da1
  Bay 2        Lnum 10     [HEALTHY]     SN:Z296M8N5     00009330EYE03 /dev/da2
  Bay 3        Lnum 9      [SMARTFAIL]   SN:Z296LBP4     00009330EYE03 N/A
  Bay 4        Lnum 8      [HEALTHY]     SN:Z296LCJW     00009327BYE03 /dev/da4
  Bay 5        Lnum 7      [HEALTHY]     SN:Z296M8XB     00009330KYE03 /dev/da5
  Bay 6        Lnum 6      [HEALTHY]     SN:Z295LXT7     000093172YE03 /dev/da6
  Bay 7        Lnum 5      [HEALTHY]     SN:Z296M8ZF     00009330KYE03 /dev/da7
  Bay 8        Lnum 4      [HEALTHY]     SN:Z296M8SD     00009330EYE03 /dev/da8
  Bay 9        Lnum 3      [HEALTHY]     SN:Z296M8QA     00009330EYE03 /dev/da9
  Bay 10       Lnum 2      [HEALTHY]     SN:Z296M8Q7     00009330EYE03 /dev/da10
  Bay 11       Lnum 1      [HEALTHY]     SN:Z296M8SP     00009330EYE04 /dev/da11
  Bay 12       Lnum 0      [HEALTHY]     SN:Z296M8QZ     00009330JYE03 /dev/da12
If the smartfail is unsuccessful, OneFS attempts to delete the drive password, because
the drive could not be crypto-erased. If you run the isi dev command after the
smartfail is unsuccessful, the system displays output similar to the following example,
showing the drive state as ERASE:
Node 1, [ATTN]
  Bay 1        Lnum 11     [REPLACE]     SN:Z296M8HK     000093172YE04 /dev/da1
  Bay 2        Lnum 10     [HEALTHY]     SN:Z296M8N5     00009330EYE03 /dev/da2
  Bay 3        Lnum 9      [ERASE]       SN:Z296LBP4     00009330EYE03 /dev/da3
  Bay 4        Lnum 8      [HEALTHY]     SN:Z296LCJW     00009327BYE03 /dev/da4
  Bay 5        Lnum 7      [HEALTHY]     SN:Z296M8XB     00009330KYE03 /dev/da5
  Bay 6        Lnum 6      [HEALTHY]     SN:Z295LXT7     000093172YE03 /dev/da6
  Bay 7        Lnum 5      [HEALTHY]     SN:Z296M8ZF     00009330KYE03 /dev/da7
  Bay 8        Lnum 4      [HEALTHY]     SN:Z296M8SD     00009330EYE03 /dev/da8
  Bay 9        Lnum 3      [HEALTHY]     SN:Z296M8QA     00009330EYE03 /dev/da9
  Bay 10       Lnum 2      [HEALTHY]     SN:Z296M8Q7     00009330EYE03 /dev/da10
  Bay 11       Lnum 1      [HEALTHY]     SN:Z296M8SP     00009330EYE04 /dev/da11
  Bay 12       Lnum 0      [HEALTHY]     SN:Z296M8QZ     00009330JYE03 /dev/da12
CHAPTER 19
SmartQuotas
SmartQuotas overview
The OneFS SmartQuotas module is an optional quota-management tool that monitors
and enforces administrator-defined storage limits. Through the use of accounting and
enforcement quota limits, reporting capabilities, and automated notifications, you can
manage storage utilization, monitor disk storage, and issue alerts when
storage limits are exceeded.
A storage quota defines the boundaries of storage capacity that are allowed for a group, a
user, or a directory on an Isilon cluster. You can configure SmartQuotas to provision,
monitor, and report disk-storage usage and send automated notifications when storage
limits are approached or exceeded. SmartQuotas also provides flexible reporting options
that can help you analyze data usage.
Quota types
OneFS uses the concept of quota types as the fundamental organizational unit of storage
quotas. Storage quotas comprise a set of resources and an accounting of each resource
type for that set. Storage quotas are also called storage domains.
Creating a storage quota requires three identifiers: the directory to monitor, whether
snapshots are tracked against the quota limit, and the quota type (directory, user, or group).
Note
You should not create quotas of any type on the OneFS root (/ifs). A root-level quota
may significantly degrade performance.
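The discussion that follows refers to an example in which a default-user quota is applied
to a directory and a file is then created in that directory by the admin user. A minimal
sketch of such a sequence (the directory, threshold, and prompt are illustrative) is:
my-OneFS-1# isi quota quotas create /ifs/dir-1 default-user --hard-threshold 10G
my-OneFS-1# touch /ifs/dir-1/somefile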
In this example, the default-user type created a new specific-user type automatically
(user:admin) and added the new usage to it. Default-user does not have any usage
because it is used only to generate new quotas automatically. Default-user enforcement
is copied to a specific-user (user:admin), and the inherited quota is called a linked quota.
In this way, each user account gets its own usage accounting.
Defaults can overlap. For example, default-user@/ifs/dir-1 and default-user@/ifs/cs
both may be defined. If the default enforcement changes, OneFS storage quotas
propagate the changes to the linked quotas asynchronously. Because the update is
asynchronous, there is some delay before updates are in effect. If a default type, such as
every user or every group, is deleted, OneFS deletes all children that are marked as
inherited. As an option, you can delete the default without deleting the children, but it is
important to note that this action breaks inheritance on all inherited children.
Continuing with the example, add another file that is owned by the root user. Because the
root type exists, the new usage is added to it.
my-OneFS-1# touch /ifs/dir-1/anotherfile
my-OneFS-1# isi quota ls -v --path=/ifs/dir-1 --format=list
Type: default-user
AppliesTo: DEFAULT
Path: /ifs/dir-1
Snap: No
Thresholds
    Hard : -
    Soft : -
Configuration changes for linked quotas must be made on the parent quota that the
linked quota is inheriting from. Changes to the parent quota are propagated to all
children. To override configuration from the parent quota, you must unlink the quota first.
Accounting limits
Accounting limits track, but do not limit, disk-storage use. Using the accounting option, you can:
- Track the amount of disk space used by various users or groups to bill each user, group, or directory for only the disk space used.
- Review and analyze reports that help you identify storage usage patterns and define storage policies.
- Plan for capacity and other storage needs.
Enforcement limits
Enforcement limits include all of the functionality of the accounting option, plus the
ability to limit disk storage and send notifications. Using enforcement limits, you can
logically partition a cluster to control or restrict how much storage a user, group,
or directory can use. For example, you can set hard- or soft-capacity limits to ensure
that adequate space is always available for key projects and critical applications and
to ensure that users of the cluster do not exceed their allotted storage capacity.
Optionally, you can deliver real-time email quota notifications to users, group
managers, or administrators when they are approaching or have exceeded a quota
limit.
Note
If a quota type uses the accounting-only option, enforcement limits cannot be used for
that quota.
The actions of an administrator logged in as root may push a domain over a quota
threshold. For example, changing the protection level or taking a snapshot has the
potential to exceed quota parameters. System actions such as repairs also may push a
quota domain over the limit.
The system provides three types of administrator-defined enforcement thresholds.
Hard
Limits disk usage to a size that cannot be exceeded. If an operation, such as a file
write, causes a quota target to exceed a hard quota, the write is denied, an alert is
logged to the cluster, and a notification is issued to specified recipients. Writes
resume when the usage falls below the threshold.
Soft
Allows a limit with a grace period that can be exceeded until the grace period
expires. When a soft quota is exceeded, an alert is logged to the cluster and a
notification is issued to specified recipients; however, data writes are permitted
during the grace period. If the soft threshold is still exceeded when the grace period
expires, data writes fail, and a hard-limit notification is issued to the recipients you
have specified. Writes resume when the usage falls below the threshold.
Advisory
An informational limit that can be exceeded. Advisory thresholds are for notification
purposes only and do not enforce limitations on disk write requests.
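As a sketch that combines all three threshold types (using the isi quota quotas create
options documented in the quota commands reference later in this chapter; the path,
user, and sizes are illustrative):
isi quota quotas create /ifs/data/projects user --user jsmith \
    --advisory-threshold 30G --soft-threshold 35G --soft-grace 1W \
    --hard-threshold 40G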
Disk-usage calculations
For each quota that you configure, you can specify whether data-protection overhead is
included in future disk-usage calculations.
If you include data-protection overhead in usage calculations for a quota, future
disk-usage calculations for the quota include the total amount of space that is required to
store files and directories, in addition to any space that is required to accommodate your
data-protection settings, such as parity or mirroring. For example, consider a user who is
restricted by a 40 GB quota that includes data-protection overhead in its disk-usage
calculations. If your cluster is configured with a 2x data-protection level (mirrored) and
the user writes a 10 GB file to the cluster, that file actually consumes 20 GB of space: 10
GB for the file and 10 GB for the data-protection overhead. In this example, the user has
reached 50 percent of the 40 GB quota by writing a 10 GB file to the cluster.
Most quota configurations do not need to include overhead calculations. If you do not
include data-protection overhead in usage calculations for a quota, future disk-usage
calculations for the quota include only the space that is required to store files and
directories. Space that is required for the data-protection setting of the cluster is not
included. Consider the same example user, who is now restricted by a 40 GB quota that
does not include data-protection overhead in its disk-usage calculations. If your cluster is
configured with a 2x data-protection level and the user writes a 10 GB file to the cluster,
that file consumes 20 GB of space but the 10 GB for the data-protection overhead is not
counted in the quota calculation. In this example, the user has reached 25 percent of the
40 GB quota by writing a 10 GB file to the cluster. This method of disk-usage calculation
is recommended for most quota configurations.
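For example, a quota that counts protection overhead against its limit might be created
with the --thresholds-include-overhead option documented later in this chapter (a
minimal sketch; the path and size are illustrative):
isi quota quotas create /ifs/data/jsmith user --user jsmith \
    --hard-threshold 40G --thresholds-include-overhead yes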
Note
Cloned and deduplicated files are treated as ordinary files by quotas. If the quota
includes data protection overhead, the data protection overhead for shared data is not
included in the usage calculation.
You can configure quotas to include the space that is consumed by snapshots. A single
path can have two quotas applied to it: one without snapshot usage, which is the default,
and one with snapshot usage. If you include snapshots in the quota, more files are
included in the calculation than are in the current directory. The actual disk usage is the
sum of the current directory and any snapshots of that directory. You can see which
snapshots are included in the calculation by examining the .snapshot directory for the
quota path.
Note
Only snapshots created after the QuotaScan job finishes are included in the calculation.
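For example, a directory quota that counts snapshot usage against its limit might be
created as follows (a sketch using the --include-snapshots option documented later in
this chapter; the path and size are illustrative):
isi quota quotas create /ifs/data/archive directory \
    --hard-threshold 2T --include-snapshots yes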
Quota notifications
Quota notifications are generated for enforcement quotas, providing users with
information when a quota violation occurs. Reminders are sent periodically while the
condition persists.
Each notification rule defines the condition that is to be enforced and the action that is to
be executed when the condition is true. An enforcement quota can define multiple
notification rules. When thresholds are exceeded, automatic email notifications can be
sent to specified users, or you can monitor notifications as system alerts or receive
emails for these events.
Notifications can be configured globally, to apply to all quota domains, or be configured
for specific quota domains.
Enforcement quotas support the following notification settings. A given quota can use
only one of these settings.
These settings let you turn off notifications for the quota, use the default notification
rules, or use custom notification rules.
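As a sketch of a custom rule (using the notification-rule options documented later in this
chapter; the path, user, and email address are illustrative), the following command
emails a user when a soft threshold is exceeded:
isi quota quotas notifications create --path /ifs/data/projects --type user \
    --user jsmith --threshold soft --condition exceeded \
    --action-email-owner yes --action-email-address jsmith@example.com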
Quota reports
The OneFS SmartQuotas module provides reporting options that enable administrators to
manage cluster resources and analyze usage statistics.
Storage quota reports provide a summarized view of the past or present state of the
quota domains. After raw reporting data is collected by OneFS, you can produce data
summaries by using a set of filtering parameters and sort types. Storage-quota reports
include information about violators, grouped by threshold types. You can generate
reports from a historical data sample or from current data. In either case, the reports are
views of usage data at a given time. OneFS does not provide reports on data aggregated
over time, such as trending reports, but you can use raw data to analyze trends. There is
no configuration limit on the number of reports other than the space needed to store
them.
OneFS provides the following data-collection and reporting methods:
- Scheduled reports are generated and saved on a regular interval.
- Ad hoc reports are generated and saved at the request of the user.
- Live reports are generated for immediate and temporary viewing.
Creating quotas
You can create two types of storage quotas to monitor data: accounting quotas and
enforcement quotas. Storage quota limits and restrictions can apply to specific users,
groups, or directories.
The type of quota that you create depends on your goal.
- Accounting quotas monitor, but do not limit, disk storage.
- Enforcement quotas monitor and limit disk usage. You can create enforcement
  quotas that use any combination of hard limits, soft limits, and advisory limits.
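For example, a quota that only accounts for usage can be created with no thresholds at
all (a minimal sketch; the path is illustrative):
isi quota quotas create /ifs/data/media directory --include-snapshots no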
Note
After you create a new quota, it begins to report data almost immediately, but the data is
not valid until the QuotaScan job completes. Before using quota data for analysis or other
purposes, verify that the QuotaScan job has finished by running the isi job events
list --job-type quotascan command.
Managing quotas
You can modify the configured values of a storage quota, and you can enable or disable a
quota. You can also create quota limits and restrictions that apply to specific users,
groups, or directories.
Quota management in OneFS is simplified by the quota search feature, which helps you
to locate a quota or quotas by using filters. You can unlink quotas that are associated
with a parent quota, and configure custom notifications for quotas. You can also disable
a quota temporarily and then enable it when needed.
Manage quotas
Quotas help you monitor and analyze the current or historic use of disk storage. You can
search for quotas, and modify, delete, and unlink quotas.
Note
An initial QuotaScan job must run for the default or scheduled quotas. Otherwise, the
data displayed may be incomplete.
Before you modify a quota, consider how the changes will affect the file system and end
users.
For information about the parameters and options that you can use for this procedure,
run the isi quota quotas list --help command.
Note
- You can edit or delete a quota report only when the quota is not linked to a default quota.
- You can unlink a quota only when the quota is linked to a default quota.
Procedure
1. Monitor and analyze current disk storage by running the following isi quota
quotas view command.
The following example provides current usage information for the root user on the
specified directory and includes snapshot data. For more information about the
parameters for this command, run the isi quota quotas list --help
command.
isi quota quotas list -v --path=/ifs/data/quota_test_2 \
--include-snapshots="yes"
2. View all information in the quota report by running the isi quota reports list
command:
To view specific information in a quota report, run the isi quota quotas list
--help command to view the filter parameters. The following command lists all
information in the quota report.
isi quota reports list -v
3. Optional: To delete a quota, run the isi quota quotas delete command.
The following command deletes the specified directory-type quota. For information
about parameters for this command, run the isi quota quotas delete --help command.
isi quota quotas delete /ifs/data/quota_test_2 directory
Configuration changes for linked quotas must be made on the parent (default) quota
that the linked quota is inheriting from. Changes to the parent quota are propagated
to all children. If you want to override configuration from the parent quota, you must
first unlink the quota.
2. At the command prompt, run the following command, where <filename> is the name of
an exported configuration file:
isi_classic quota import --from-file <filename>
The system parses the file and imports the quota settings from the configuration file.
Quota settings that you configured before importing the quota configuration file are
retained, and the imported quota settings are effective immediately.
Threshold exceeded
Over-quota reminder
You must be logged in to the web administration interface to perform this task.
Procedure
1. Click File System Management > SmartQuotas > Settings.
2. Optional: In the Email Mapping area, click Create an email mapping rule.
3. From the Provider Type list, select the provider type for this notification rule.
4. From the Current Domain list, select the domain that you want to use for the mapping
rule.
5. In the Map-to-Domain field, type the name of the domain that you want to map email
notifications to.
Repeat this step if you want to map more than one domain.
6. Click Save Rule.
The following example illustrates a custom email template to notify recipients about an
exceeded quota.
Text-file contents with variables
The disk quota on directory <ISI_QUOTA_PATH> owned by
<ISI_QUOTA_OWNER> was exceeded.
The <ISI_QUOTA_TYPE> quota limit is <ISI_QUOTA_THRESHOLD>, and
<ISI_QUOTA_USAGE> is in use. Please free some disk space
by deleting unnecessary files.
For more information, contact Jane Anderson in IT.
Email contents with resolved variables
The disk quota on directory
/ifs/data/sales_tools/collateral owned by jsmith was exceeded.
The hard quota limit is 10 GB, and 11 GB is in use. Please
free some disk space by deleting unnecessary files.
For more information, contact Jane Anderson in IT.
Results
Reports are generated according to your criteria and can be viewed by running the isi
quota reports list command.
Results
You can view the quota report by running the isi quota reports list -v
command.
If quota reports are not in the default directory, you can run the isi quota
settings command to find the directory where they are stored.
2. To view a list of all quota reports in the specified directory, run the following
command:
ls -a *.xml
3. To view a specific quota report in the specified directory, run the following command:
ls <filename>.xml
Quota configuration settings: Directory Path, User Quota, Group Quota, No Usage Limit.
The following settings are available for an advisory limit notification rule. The Exceeded
and Remains exceeded columns indicate the conditions to which each setting applies.

Option                 Exceeded   Remains exceeded
Send email             Yes        Yes
Notify owner           Yes        Yes
Notify another         Yes        Yes
Message template       Yes        Yes
Create cluster event   Yes        Yes
Delay                  Yes        No
Frequency              No         Yes

Message template: Select from the available template types (including Custom) for use in formatting email notifications.
Create cluster event: Select to generate an event notification for the quota when exceeded.
Delay: Specify the length of time (hours, days, weeks) to delay before generating a notification.
Frequency: Specify the notification and alert frequency: daily, weekly, monthly, yearly; depending on selection, specify intervals, day to send, time of day, multiple emails per rule.
The following settings are available for a soft limit notification rule. The columns indicate
the conditions to which each setting applies: Exceeded, Remains exceeded, Grace period
expired, and Write access denied.

Option                 Exceeded   Remains exceeded   Grace period expired   Write access denied
Send email             Yes        Yes                Yes                    Yes
Notify owner           Yes        Yes                Yes                    Yes
Notify another         Yes        Yes                Yes                    Yes
Message template       Yes        Yes                Yes                    Yes
Create cluster event   Yes        Yes                Yes                    Yes
Delay                  Yes        No                 No                     Yes
Frequency              No         Yes                Yes                    No
The following settings are available for a hard limit notification rule. The columns indicate
the conditions to which each setting applies: Write access denied and Exceeded.

Option                 Write access denied   Exceeded
Send email             Yes                   Yes
Notify owner           Yes                   Yes
Notify another         Yes                   Yes
Message template       Yes                   Yes
Create cluster event   Yes                   Yes
Delay                  Yes                   No
Frequency              No                    Yes

Message template: Select from the available template types (including Custom) for use in formatting email notifications.
Create cluster event: Select to generate an event notification for the quota when exceeded.
Delay: Specify the length of time (hours, days, weeks) to delay before generating a notification.
Frequency: Specify the notification and alert frequency: daily, weekly, monthly, yearly; depending on selection, specify intervals, day to send, time of day, multiple emails per rule.

The notification method for a quota can be set to one of the following:
Turn Off Notifications for this Quota: Disables all notifications for the quota.
Use Custom Notification Rules: Enables the use of custom notification rules for the quota.
When the maximum number of reports are stored, the system deletes the oldest
reports to make space for new reports as they are generated.

Scheduled reporting: On. Reports run automatically according to the schedule that you specify.

Report frequency: Specifies the interval for this report to run: daily, weekly, monthly, or yearly. You can use the following options to further refine the report schedule.
- Generate report every. Specify the numeric value for the selected report frequency; for example, every 2 months.
- Generate reports on. Select the day or multiple days to generate reports.
- Select report day by. Specify date or day of the week to generate the report.
- Generate one report per specified by. Set the time of day to generate this report.
- Generate multiple reports per specified day. Set the intervals and times of day to generate the report for that day.

Scheduled report archiving: Determines the maximum number of scheduled reports that are available for viewing on the SmartQuotas Reports page.
- Limit archive size for scheduled reports to a specified number of reports. Type the integer to specify the maximum number of reports to keep.
- Archive Directory. Browse to the directory where you want to store quota reports for archiving.

Manual report archiving: Determines the maximum number of manually generated (on-demand) reports that are available for viewing on the SmartQuotas Reports page.
- Limit archive size for live reports to a specified number of reports. Type the integer to specify the maximum number of reports to keep.
- Archive Directory. Browse to the directory where you want to store quota reports for archiving.
You can use the following variables in custom email notification templates:
ISI_QUOTA_PATH: Path of the quota domain. Example: /ifs/data
ISI_QUOTA_THRESHOLD: Threshold value of the quota. Example: 20 GB
ISI_QUOTA_USAGE: Disk space in use. Example: 10.5 GB
ISI_QUOTA_OWNER: Name of the quota domain owner. Example: jsmith
ISI_QUOTA_TYPE: Threshold type. Example: Advisory
ISI_QUOTA_GRACE: Grace period, in days. Example: 5 days
ISI_QUOTA_NODE: Hostname of the node on which the quota event occurred. Example: someHost-prod-wf-1
Quota commands
You can configure quotas to track, limit, and manage disk usage by directory, user, or
group. Quota commands that create and modify quotas are available only if you activate
a SmartQuotas license on the cluster.
Options
--path <path>
Specifies an absolute path within the /ifs file system.
CAUTION
You should not create quotas of any type on the /ifs directory. A root-level quota
may result in significant performance degradation.
--type
Specifies a quota type. The following values are valid:
directory
Creates a quota for all data in the directory, regardless of owner.
user
Creates a quota for one specific user. Requires specification of the --user, --uid, --sid, or --wellknown option.
group
Creates a quota for one specific group. Requires specification of the --group,
--gid, --sid, or --wellknown option.
default-user
Creates a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Creates a master quota that creates a linked quota for every group that owns
data in the directory.
--user <name>
Specifies a user name.
--group <name>
Specifies a group name.
--gid <id>
Specifies the numeric group identifier (GID).
--uid <id>
Specifies a numeric user identifier (UID).
--sid <sid>
Sets a security identifier (SID). For example, S-1-5-21-13.
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
--hard-threshold <size>
Sets an absolute limit for disk usage. Attempts to write to disk are generally denied if
the request violates the quota limit. Size is a capacity value formatted as <integer>[{b |
K | M | G | T | P}].
--advisory-threshold <size>
Sets the advisory threshold. For notification purposes only. Does not enforce
limitations on disk write requests. Size is a capacity value formatted as <integer>[{b |
K | M | G | T | P}].
--soft-threshold <size>
Specifies the soft threshold, which allows writes to disk above the threshold until the
soft grace period expires. Attempts to write to disk are denied thereafter. Size is a
capacity value formatted as <integer>[{b | K | M | G | T | P}].
--soft-grace <duration>
Specifies the soft threshold grace period, which is the amount of time to wait before
disk write requests are denied.
Specify <duration> in the following format:
<integer><units>
M
Specifies months
W
Specifies weeks
D
Specifies days
H
Specifies hours
--container {yes | no}
Specifies that threshold be shown as the available space on the SMB share, instead
of the whole cluster. The setting applies only to hard thresholds. When setting this
value, you must specify --enforced.
--include-snapshots {yes | no}
Includes snapshots in the quota size.
--thresholds-include-overhead {yes | no}
Includes OneFS storage overhead in the quota threshold when set to yes.
--enforced {yes | no}
Enforces this quota when set to yes. Specifying any threshold automatically sets this
value to yes on create.
--zone <zone>
Specifies an access zone.
{--verbose | -v}
Displays more detailed information.
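For example, a hard directory quota whose limit is reported as the available space on an
SMB share might be created as follows (a minimal sketch using the --container and
--enforced options described above; the path and size are illustrative):
isi quota quotas create /ifs/data/shares/marketing directory \
    --hard-threshold 500G --container yes --enforced yes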
Options
--path <path>
Deletes quotas of the specified type. Argument must be specified with the --path
option. The following values are valid:
directory
Specifies a quota for all data in the directory, regardless of owner.
user
Specifies a quota for one specific user. Requires specification of --user, --uid, or --sid.
group
Specifies a quota for one specific group. Requires specification of the --group,
--gid, or --sid option.
default-user
Specifies a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Specifies a master quota that creates a linked quota for every group that owns
data in the directory.
--all
Deletes all quotas. Flag may not be specified with --type or --path.
--user <name>
Deletes a quota associated with the user identified by name.
--gid <id>
Deletes a quota by the specified numeric group identifier (GID).
--uid <id>
Deletes a quota by the specified numeric user identifier (UID).
--sid <sid>
Specifies a security identifier (SID) for selecting the quota. For example, S-1-5-21-13.
--recurse-path-parents
Searches parent paths for quotas.
--recurse-path-children
Searches child paths for quotas.
--include-snapshots {yes | no}
Deletes quotas that include snapshot data usage.
--zone <zone>
Specifies an access zone.
{--verbose | -v}
Displays more detailed information.
Options
--path <path>
Specifies an absolute path within the /ifs file system.
--type
Specifies a quota type. The following values are valid:
directory
Creates a quota for all data in the directory, regardless of owner.
user
Creates a quota for one specific user. Requires specification of the --user, --uid, or --sid option.
group
Creates a quota for one specific group. Requires specification of the --group,
--gid, or --sid option.
default-user
Creates a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Creates a master quota that creates a linked quota for every group that owns
data in the directory.
--user <name>
Specifies a user name.
--group <name>
Specifies a group name.
--gid <id>
Specifies the numeric group identifier (GID).
--uid <id>
Specifies a numeric user identifier (UID).
--sid <sid>
Specifies a security identifier (SID) for selecting the quota that you want to modify.
For example, S-1-5-21-13.
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
--hard-threshold <size>
Sets an absolute limit for disk usage. Attempts to write to disk are generally denied if
the request violates the quota limit. Size is a capacity value formatted as <integer>[{b |
K | M | G | T | P}].
--advisory-threshold <size>
Sets the advisory threshold. For notification purposes only. Does not enforce
limitations on disk write requests. Size is a capacity value formatted as <integer>[{b |
K | M | G | T | P}].
--soft-threshold <size>
Specifies the soft threshold, which allows writes to disk above the threshold until the
soft grace period expires. Attempts to write to disk are denied thereafter. Size is a
capacity value formatted as <integer>[{b | K | M | G | T | P}].
--soft-grace <duration>
Specifies the soft threshold grace period, which is the amount of time to wait before
disk write requests are denied.
Specify <duration> in the following format:
<integer><units>
Options
--user <name>
Specifies a user name.
--group <name>
Specifies a group name.
--gid <id>
Specifies the numeric group identifier (GID).
--uid <id>
Specifies a numeric user identifier (UID).
--sid <sid>
Specifies a security identifier (SID) for selecting the quota. For example, S-1-5-21-13.
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
<type>
Specifies a quota type. The following values are valid:
directory
Creates a quota for all data in the directory, regardless of owner.
user
Creates a quota for one specific user. Requires specification of the --user, --uid, --sid, or --wellknown option.
group
Creates a quota for one specific group. Requires specification of the --group,
--gid, --sid, or --wellknown option.
default-user
Creates a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Creates a master quota that creates a linked quota for every group that owns
data in the directory.
--path
Specifies quotas on the specified path.
--recurse-path-parents
Specifies parent paths for quotas.
--recurse-path-children
Specifies child paths for quotas.
--include-snapshots {yes | no}
Specifies quotas that include snapshot data usage.
--exceeded
Specifies only quotas that have an exceeded threshold.
--enforced {yes | no}
Specifies quotas that have an enforced threshold.
--zone <zone>
Specifies quotas in the specified zone.
--limit <integer>
Specifies the number of quotas to display.
--format
Displays quotas in the specified format. The following values are valid:
table
json
csv
list
{--no-header | -a}
Suppresses headers in CSV or table formats.
{--no-footer | -z}
isi quota quotas list
Options
--path <path>
Specifies an absolute path within the /ifs file system.
--type
Specifies quotas of the specified type. Argument must be specified with the --path
option. The following values are valid:
directory
Specifies a quota for all data in the directory, regardless of owner.
user
Specifies a quota for one specific user. Requires specification of the --user, --uid,
--sid, or --wellknown option.
group
Specifies a quota for one specific group. Requires specification of the --group,
--gid, --sid, or --wellknown option.
default-user
Specifies a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Specifies a master quota that creates a linked quota for every group that owns
data in the directory.
--user <name>
Specifies a quota associated with the user identified by name.
--group <name>
Specifies a quota associated with the group identified by name.
--gid <id>
Specifies a quota by the numeric group identifier (GID).
--uid <id>
Specifies a quota by the specified numeric user identifier (UID).
--sid <sid>
Specifies a security identifier (SID) for selecting the quota. For example, S-1-5-21-13.
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
--include-snapshots {yes | no}
Specifies quotas that include snapshot data usage.
--zone <zone>
Specifies an access zone.
Use the isi quota quotas notifications disable command to disable all
notifications for a quota.
Syntax
isi quota quotas notifications clear
--path <path>
--type {directory | user | group | default-user | default-group}
[--user <name> | --group <name> | --gid <id> | --uid <id>
| --sid <sid> | --wellknown <name>]
[--include-snapshots {yes | no}]
Options
--path <path>
Specifies an absolute path within the /ifs file system.
--type
Specifies a quota type. The following values are valid:
directory
Creates a quota for all data in the directory, regardless of owner.
user
Creates a quota for one specific user. Requires specification of the --user, --uid, --sid, or --wellknown option.
group
Creates a quota for one specific group. Requires specification of the --group,
--gid, --sid, or --wellknown option.
default-user
Creates a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Creates a master quota that creates a linked quota for every group that owns
data in the directory.
--user <name>
Specifies a user name.
--group <name>
Specifies a group name.
isi quota quotas notifications clear
--gid <id>
Specifies the numeric group identifier (GID).
--uid <id>
Specifies a numeric user identifier (UID).
--sid <sid>
Specifies a security identifier (SID) for selecting the quota. For example, S-1-5-21-13.
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
--include-snapshots {yes | no}
Includes snapshots in the quota size.
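For example, to remove the custom notification rules for a user quota and fall back to the
system default rules, a command following the syntax shown above might look like this
(the path and user are illustrative):
isi quota quotas notifications clear --path /ifs/data/projects --type user --user jsmith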
Options
--path <path>
Specifies an absolute path within the /ifs file system.
--type
Specifies a quota type. The following values are valid:
directory
Creates a quota for all data in the directory, regardless of owner.
user
Creates a quota for one specific user. Requires specification of the --user, --uid, --sid, or --wellknown option.
group
Creates a quota for one specific group. Requires specification of the --group,
--gid, --sid, or --wellknown option.
default-user
Creates a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Creates a master quota that creates a linked quota for every group that owns
data in the directory.
--threshold
Specifies the threshold type. The following values are valid:
hard
Sets an absolute limit for disk usage. Attempts to write to disk are generally
denied if the request violates the quota limit.
soft
Specifies the soft threshold. Allows writes to disk above the threshold until the
soft grace period expires. Attempts to write to disk are denied thereafter.
advisory
Sets the advisory threshold. For notification purposes only. Does not enforce
limitations on disk write requests.
--condition
Specifies the quota condition on which to send a notification. The following values
are valid:
denied
Specifies a notification when a hard threshold or soft threshold outside of its
soft grace period causes a disk write operation to be denied.
exceeded
Specifies a notification when disk usage exceeds the threshold. Applies to only
soft thresholds within the soft-grace period.
violated
Specifies a notification when disk usage exceeds a quota threshold but none of
the other conditions apply.
expired
Specifies a notification when disk usage exceeds the soft threshold and the soft-grace period has expired.
--user <name>
Specifies a user name.
--group <name>
Specifies a group name.
--gid <id>
Specifies the numeric group identifier (GID).
--uid <id>
Specifies a numeric user identifier (UID).
--sid <sid>
Sets a security identifier (SID). For example, S-1-5-21-13.
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
--include-snapshots {yes | no}
isi quota quotas notifications create
Options
--path <path>
Deletes quota notifications set on an absolute path within the /ifs file system.
--type
Deletes a quota notification by specified type. The following values are valid:
directory
Specifies a quota for all data in the directory, regardless of owner.
user
Specifies a quota for one specific user. Requires specification of the --user, --uid, --sid, or --wellknown option.
group
Specifies a quota for one specific group. Requires specification of the --group,
--gid, --sid, or --wellknown option.
default-user
Specifies a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Specifies a master quota that creates a linked quota for every group that owns
data in the directory.
--threshold
Deletes a quota notification by specified threshold. The following values are valid:
hard
Specifies an absolute limit for disk usage.
soft
Specifies the soft threshold.
advisory
Specifies the advisory threshold.
--condition
Deletes a quote notification by the specified condition on which to send a
notification. The following values are valid:
denied
Specifies a notification when a hard threshold or soft threshold outside of its
soft grace period causes a disk write operation to be denied.
exceeded
Specifies a notification when disk usage exceeds the threshold. Applies to only
soft thresholds within the soft-grace period.
violated
Specifies a notification when disk usage exceeds a quota threshold but none of
the other conditions apply.
expired
Specifies a notification when disk usage exceeds the soft threshold and the soft-grace period has expired.
--user <name>
Deletes a quota notification by the specified user name.
--group <name>
Deletes a quota notification by the specified group name.
--gid <id>
Deletes a quota notification by the specified numeric group identifier (GID).
--uid <id>
Deletes a quota notification by the specified numeric user identifier (UID).
--sid <sid>
Deletes a quota notification by the specified security identifier (SID) for selecting the
quota. For example, S-1-5-21-13.
--wellknown <name>
Deletes a quota notification by the specified well-known user, group, machine, or
account name.
--include-snapshots {yes | no}
Deletes a quota notification by the specified settings for Included snapshots in the
quota size.
{--verbose | -v}
Displays more detailed information.
When you disable all quota notifications, system notification behavior is disabled also.
Use the --clear options to remove specific quota notification rules and fall back to the
system default.
Syntax
isi quota quotas notifications disable
--path <path>
--type {directory | user | group | default-user | default-group}
[--user <name> | --group <name> | --gid <id> | --uid <id>
| --sid <sid> | --wellknown <name>]
[--include-snapshots {yes | no}]
Options
--path <path>
Specifies an absolute path within the /ifs file system.
--type
Disables quotas of the specified type. Argument must be specified with the --path
option. The following values are valid:
directory
Specifies a quota for all data in the directory, regardless of owner.
user
Specifies a quota for one specific user. Requires specification of the --user, --uid,
--sid, or --wellknown option.
group
Specifies a quota for one specific group. Requires specification of the --group,
--gid, --sid, or --wellknown option.
default-user
Specifies a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Specifies a master quota that creates a linked quota for every group that owns
data in the directory.
--user <name>
Disables a quota associated with the user identified by name.
--gid <id>
Disables a quota by the specified numeric group identifier (GID).
--uid <id>
Disables a quota by the specified numeric user identifier (UID).
--sid <sid>
Specifies a security identifier (SID) for selecting a quota. For example, S-1-5-21-13.
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
--include-snapshots {yes | no}
Disables quotas that include snapshot data usage.
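For example, to turn off all notifications for a directory quota, a command following the
disable syntax shown above might look like this (the path is illustrative):
isi quota quotas notifications disable --path /ifs/data/scratch --type directory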
Options
--path <path>
Specifies an absolute path within the /ifs file system.
isi quota quotas notifications list
--type
Specifies a quota type. The following values are valid:
directory
Creates a quota for all data in the directory, regardless of owner.
user
Creates a quota for one specific user. Requires specification of the --user, --uid, --sid, or --wellknown option.
group
Creates a quota for one specific group. Requires specification of the --group,
--gid, --sid, or --wellknown option.
default-user
Creates a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Creates a master quota that creates a linked quota for every group that owns
data in the directory.
--threshold
Specifies the threshold type. The following values are valid:
hard
Sets an absolute limit for disk usage. Attempts to write to disk are generally
denied if the request violates the quota limit.
soft
Specifies the soft threshold. Allows writes to disk above the threshold until the
soft grace period expires. Attempts to write to disk are denied thereafter.
advisory
Sets the advisory threshold. For notification purposes only. Does not enforce
limitations on disk write requests.
--condition
Specifies the quota condition on which to send a notification. The following values
are valid:
denied
Specifies a notification when a hard threshold or soft threshold outside of its
soft grace period causes a disk write operation to be denied.
exceeded
Specifies a notification when disk usage exceeds the threshold. Applies to only
soft thresholds within the soft-grace period.
violated
Specifies a notification when disk usage exceeds a quota threshold but none of
the other conditions apply.
expired
Specifies a notification when disk usage exceeds the soft threshold and the soft-grace period has expired.
--user <name>
Specifies a user name.
--group <name>
Specifies a group name.
--gid <id>
Specifies the numeric group identifier (GID).
--uid <id>
Specifies a numeric user identifier (UID).
--sid <sid>
Specifies a security identifier (SID) for selecting the quota. For example, S-1-5-21-13.
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
--include-snapshots {yes | no}
Includes snapshots in the quota size.
{--limit | -l} <integer>
Specifies the number of quota notification rules to display.
--format
Displays quota notification rules in the specified format. The following values are
valid:
table
json
csv
list
{--no-header | -a}
Suppresses headers in CSV or table formats.
{--no-footer | -z}
Suppresses table summary footer information.
{--verbose | -v}
Displays more detailed information.
Options
--path <path>
Specifies an absolute path within the /ifs file system.
--type
Specifies a quota type. The following values are valid:
directory
Creates a quota for all data in the directory, regardless of owner.
user
Creates a quota for one specific user. Requires specification of the --user, --uid, --sid, or --wellknown option.
group
Creates a quota for one specific group. Requires specification of the --group, --gid, --sid, or --wellknown option.
default-user
Creates a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Creates a master quota that creates a linked quota for every group that owns
data in the directory.
--threshold
Specifies the threshold type. The following values are valid:
hard
Sets an absolute limit for disk usage. Attempts to write to disk are generally
denied if the request violates the quota limit.
soft
Specifies the soft threshold. Allows writes to disk above the threshold until the
soft grace period expires. Attempts to write to disk are denied thereafter.
advisory
Sets the advisory threshold. For notification purposes only. Does not enforce
limitations on disk write requests.
--condition
Specifies the quota condition on which to send a notification. The following values
are valid:
denied
Specifies a notification when a hard threshold or soft threshold outside of its
soft grace period causes a disk write operation to be denied.
exceeded
Specifies a notification when disk usage exceeds the threshold. Applies to only
soft thresholds within the soft-grace period.
violated
Specifies a notification when disk usage exceeds a quota threshold but none of
the other conditions apply.
expired
Specifies a notification when disk usage exceeds the soft threshold and the soft-grace period has expired.
--user <name>
Specifies a user name.
--group <name>
Specifies a group name.
--gid <id>
Specifies the numeric group identifier (GID).
--uid <id>
Specifies a numeric user identifier (UID).
--sid <sid>
Sets a security identifier (SID). For example, S-1-5-21-13.
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
--include-snapshots {yes | no}
Includes snapshots in the quota size.
--schedule <name>
Specifies the date pattern at which recurring notifications are made.
--holdoff <duration>
Specifies the length of time to wait before generating a notification.
Specify <duration> in the following format:
<integer><units>
H
Specifies hours
s
Specifies seconds
--clear-holdoff
Clears the value for the --holdoff duration.
--action-alert {yes | no}
Generates an alert when the notification condition is met.
--action-email-owner {yes | no}
Specifies that an email be sent to a user when the threshold is crossed. Requires --action-email-address.
--action-email-address <address>
Specifies the email address of user to be notified.
{--verbose | -v}
Displays more detailed information.
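A sketch of updating an existing rule with these options (assuming the isi quota quotas
notifications modify subcommand; verify the exact command and options with --help,
and note that the path, user, and values are illustrative):
isi quota quotas notifications modify --path /ifs/data/projects --type user \
    --user jsmith --threshold soft --condition exceeded \
    --action-alert yes --holdoff 1H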
Options
--path <path>
Specifies an absolute path within the /ifs file system.
--type
Specifies a quota type. The following values are valid:
directory
Creates a quota for all data in the directory, regardless of owner.
user
Creates a quota for one specific user. Requires specification of the --user, --uid, --sid, or --wellknown option.
group
Creates a quota for one specific group. Requires specification of the --group,
--gid, --sid, or --wellknown option.
default-user
Creates a master quota that creates a linked quota for every user who has data
in the directory.
default-group
Creates a master quota that creates a linked quota for every group that owns
data in the directory.
--threshold
Specifies the threshold type. The following values are valid:
hard
Sets an absolute limit for disk usage. Attempts to write to disk are generally
denied if the request violates the quota limit.
soft
Specifies the soft threshold. Allows writes to disk above the threshold until the
soft grace period expires. Attempts to write to disk are denied thereafter.
advisory
Sets the advisory threshold. For notification purposes only. Does not enforce
limitations on disk write requests.
--condition
Specifies the quota condition on which to send a notification. The following values
are valid:
denied
Specifies a notification when a hard threshold or soft threshold outside of its
soft grace period causes a disk write operation to be denied.
exceeded
Specifies a notification when disk usage exceeds the threshold. Applies to only
soft thresholds within the soft-grace period.
violated
Specifies a notification when disk usage exceeds a quota threshold but none of
the other conditions apply.
expired
Specifies a notification when disk usage exceeds the soft threshold and the soft-grace period has expired.
--user <name>
Specifies a user name.
--group <name>
Specifies a group name.
--gid <id>
Specifies the numeric group identifier (GID).
--uid <id>
Specifies a numeric user identifier (UID).
--sid <sid>
Specifies a security identifier (SID) for selecting the quota. For example, S-1-5-21-13.
isi quota quotas notifications view
--wellknown <name>
Specifies a well-known user, group, machine, or account name.
--include-snapshots {yes | no}
Includes snapshots in the quota size.
Options
{--verbose | -v}
Displays more detailed information.
Options
--time <string>
Specifies the timestamp of the report.
Specify <time-and-date> in the following format:
<YYYY>-<MM>-<DD>[T<hh>:<mm>[:<ss>]]
s
Specifies seconds
--generated
Specifies the method used to generate the report. The following values are valid:
live
scheduled
manual
--type
Specifies a report type. The following values are valid:
summary
detail
{--verbose | -v}
Displays more detailed information.
Options
--limit <integer>
Specifies the number of quotas to display.
--format
Displays quotas in the specified format. The following values are valid:
table
json
csv
list
{--no-header | -a}
Suppresses headers in CSV or table formats.
{--no-footer | -z}
Suppresses table summary footer information.
{--verbose | -v}
Displays more detailed information.
Options
--threshold
Specifies the threshold type. The following values are valid:
hard
Sets an absolute limit for disk usage. Attempts to write to disk are generally
denied if the request violates the quota limit.
soft
Specifies the soft threshold. Allows writes to disk above the threshold until the
soft grace period expires. Attempts to write to disk are denied thereafter.
advisory
Sets the advisory threshold. For notification purposes only. Does not enforce
limitations on disk write requests.
--condition
Specifies the quota condition on which to send a notification. The following values
are valid:
denied
Specifies a notification when a hard threshold or soft threshold outside of its
soft grace period causes a disk write operation to be denied.
exceeded
Specifies a notification when disk usage exceeds the threshold. Applies to only
soft thresholds within the soft-grace period.
violated
Specifies a notification when disk usage exceeds a quota threshold but none of
the other conditions apply.
expired
Specifies a notification when disk usage exceeds the soft threshold and the soft-grace period has expired.
--schedule <string>
Specifies the date pattern at which recurring notifications are made.
--holdoff <duration>
Specifies the length of time to wait before generating a notification.
Specify <duration> in the following format:
<integer><units>
Options
--threshold
Specifies the threshold type. The following values are valid:
hard
Sets an absolute limit for disk usage. Attempts to write to disk are generally
denied if the request violates the quota limit.
soft
Specifies the soft threshold. Allows writes to disk above the threshold until the
soft grace period expires. Attempts to write to disk are denied thereafter.
advisory
Sets the advisory threshold. For notification purposes only. Does not enforce
limitations on disk write requests.
--condition
Specifies the quota condition on which to send a notification. The following values
are valid:
denied
Specifies a notification when a hard threshold or soft threshold outside of its
soft grace period causes a disk write operation to be denied.
exceeded
Specifies a notification when disk usage exceeds the threshold. Applies to only
soft thresholds within the soft-grace period.
violated
Specifies a notification when disk usage exceeds a quota threshold but none of
the other conditions apply.
expired
Specifies a notification when disk usage exceeds the soft threshold and the soft-grace period has expired.
{--verbose | -v}
Displays more detailed information.
Options
{--limit | -l} <integer>
Specifies the number of quota notification rules to display.
--format
Displays quotas in the specified format. The following values are valid:
table
json
csv
list
{--no-header | -a}
Suppresses headers in CSV or table formats.
{--no-footer | -z}
Suppresses table summary footer information.
{--verbose | -v}
Displays more detailed information.
Options
--threshold
Specifies the threshold type. The following values are valid:
hard
Sets an absolute limit for disk usage. Attempts to write to disk are generally
denied if the request violates the quota limit.
soft
Specifies the soft threshold. Allows writes to disk above the threshold until the
soft grace period expires. Attempts to write to disk are denied thereafter.
advisory
Sets the advisory threshold. For notification purposes only. Does not enforce
limitations on disk write requests.
--condition
Specifies the quota condition on which to send a notification. The following values
are valid:
denied
Specifies a notification when a hard threshold or soft threshold outside of its
soft grace period causes a disk write operation to be denied.
exceeded
Specifies a notification when disk usage exceeds the threshold. Applies to only
soft thresholds within the soft-grace period.
violated
Specifies a notification when disk usage exceeds a quota threshold but none of
the other conditions apply.
expired
Specifies a notification when disk usage exceeds the soft threshold and the soft-grace period has expired.
--schedule <string>
Specifies the date pattern at which recurring notifications are made.
--holdoff <duration>
Specifies the length of time to wait before generating a notification.
Specify <duration> in the following format:
<integer><units>
Options
--threshold
Specifies the threshold type. The following values are valid:
hard
Sets an absolute limit for disk usage. Attempts to write to disk are generally
denied if the request violates the quota limit.
soft
Specifies the soft threshold. Allows writes to disk above the threshold until the
soft grace period expires. Attempts to write to disk are denied thereafter.
advisory
Sets the advisory threshold. For notification purposes only. Does not enforce
limitations on disk write requests.
--condition
Specifies the quota condition on which to send a notification. The following values
are valid:
denied
Specifies a notification when a hard threshold or soft threshold outside of its
soft grace period causes a disk write operation to be denied.
exceeded
Specifies a notification when disk usage exceeds the threshold. Applies to only
soft thresholds within the soft-grace period.
violated
Specifies a notification when disk usage exceeds a quota threshold but none of
the other conditions apply.
expired
Specifies a notification when disk usage exceeds the soft threshold and the soft-grace period has expired.
Options
--schedule <schedule>
Specifies the date pattern at which recurring notifications are made. For more
information about date patterns or other schedule parameters, see man isi-schedule.
--revert-schedule
Sets the --schedule value to system default.
--scheduled-dir <dir>
Specifies the location where scheduled quota reports are stored.
--revert-scheduled-dir
Sets the--scheduled-dir value to system default.
--scheduled-retain <integer>
Specifies the maximum number of scheduled reports to keep.
--revert-scheduled-retain
Sets the --scheduled-retain value to system default.
--live-dir <dir>
Specifies the location where live quota reports are stored.
--revert-live-dir
Sets the --live-dir value to system default.
--live-retain <integer>
Specifies the maximum number of live quota reports to keep.
--revert-live-retain
Sets the --live-retain value to system default.
{--verbose | -v}
Displays more detailed information.
Options
There are no options for this command.
CHAPTER 20
Storage Pools
The following functions are available without and with an active SmartPools license:

Function               Without active SmartPools license   With active SmartPools license
SSD strategies         Yes                                 Yes
L3 cache               Yes                                 Yes
Tiers                  Yes                                 Yes
GNA                    Yes                                 Yes
Spillover management   No                                  Yes
Autoprovisioning
When you add a node to your cluster, OneFS automatically assigns the node to a node
pool. With node pools, OneFS can ensure optimal performance, load balancing, and
reliability of the file system. Autoprovisioning reduces the time required for the manual
management tasks associated with resource planning and configuration.
Nodes are not provisioned, meaning they are not associated with each other and are not
writable, until at least three nodes of an equivalence class are added to the cluster. If you
have added only two nodes of an equivalence class to your cluster, no data is stored on
those nodes until you add a third node of the same equivalence class.
Similarly, if a node goes down or is removed from the cluster so that fewer than three
equivalence-class nodes remain, the node pool becomes under-provisioned. However,
the two remaining nodes are still writable. If only one node remains, that node is not
writable, but remains readable.
OneFS offers a compatibility function, also referred to as node equivalency, that enables
certain node types to be provisioned to existing node pools, even when there are fewer
than three equivalence-class nodes. For example, an S210 node could be provisioned
and added to a node pool of S200 nodes. Similarly, an X410 node could be added to a
node pool of X400 nodes. The compatibility function ensures that you can add nodes one
at a time to your cluster and still have them be fully functional peers within a node pool.
Virtual hot spare
The larger of the two factors (minimum number of virtual drives or percentage of total
disk space), rather than their sum, determines the space allocated for the virtual hot
spare.
It is important to understand the following information when configuring VHS settings:
- If you configure both settings, the enforced minimum value satisfies both requirements.
- If you select the option to reduce the amount of available space, free-space
  calculations do not include the space reserved for the virtual hot spare. The reserved
  virtual hot spare free space is used for write operations unless you select the option
  to deny new data writes. If Reduce amount of available space is enabled while Deny
  new data writes is disabled, it is possible for the file system to report utilization as
  more than 100 percent.
Note
Virtual hot spare reservations affect spillover. If the virtual hot spare option Deny writes
is enabled but Reduce amount of available space is disabled, spillover occurs before the
file system reports 100% utilization.
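As an illustration, using the isi storagepool settings modify options documented later in this chapter (the values shown are examples), the following command reserves 10 percent of total disk space as a virtual hot spare, subtracts that space from free-space calculations, and denies new data writes to the reserved space:
isi storagepool settings modify --virtual-hot-spare-limit-percent 10 \
--virtual-hot-spare-hide-spare yes --virtual-hot-spare-deny-writes yes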
Spillover
If you activate a SmartPools license, you can designate a storage pool to receive spill
data when the hardware specified by a file pool policy is not writable. If you do not want
data to spill over to a different location because the specified node pool or tier is full or
not writable, you can disable this feature.
Spillover management is available after you activate a SmartPools license. You can direct
write operations to a specified storage pool in the cluster when there is not enough space
to write a file according to the storage pool policy.
Note
Virtual hot spare reservations affect spillover. If the setting Deny writes is enabled but
Reduce amount of available space is disabled, spillover occurs before the file system
reports 100% utilization.
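For example, using the isi storagepool settings modify options documented later in this chapter (the tier name ARCHIVE_TIER is an example), the following commands direct spillover to a specific tier or disable spillover entirely:
isi storagepool settings modify --spillover-target ARCHIVE_TIER
isi storagepool settings modify --no-spillover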
Node pools
A node pool is a collection of three or more nodes. As you add nodes to an Isilon cluster,
OneFS automatically provisions them into node pools based on characteristics such as
series, drive size, RAM, and SSD-per-node ratio. Nodes with identical characteristics are
called equivalence-class nodes.
If you add fewer than three nodes of a node type, OneFS cannot autoprovision the nodes
to your cluster. In these cases, you can create compatibilities. Compatibilities enable
OneFS to provision nodes that are not equivalence-class to a compatible node pool.
After provisioning, each node in the OneFS cluster is a peer, and any node can handle a
data request. Each provisioned node increases the aggregate disk, cache, CPU, and
network capacity on the cluster.
You can move nodes from an automatically managed node pool into one that you define
manually. This capability is available only through the OneFS command-line interface. If
you attempt to remove nodes from a node pool such that the removal would leave fewer
than three nodes in the pool, the removal fails. When you remove a node from a manually
defined node pool, OneFS attempts to move the node into a node pool of the same
equivalence class, or into a compatible node pool.
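As a sketch, the following command creates a manually managed node pool from three nodes, using the isi storagepool nodepools create syntax documented later in this chapter (the pool name and LNN values are examples):
isi storagepool nodepools create hq_manual_pool --lnns 1-3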
Node compatibilities
OneFS requires that a node pool contain at least three nodes so that the operating
system can write data and perform the necessary load balancing and data protection
operations. You can enable certain nodes to be provisioned to an existing node pool by
defining a compatibility.
The RAM configurations of the node types involved in compatibilities are listed in the following table:

S200 RAM   S210 RAM   X400 RAM   X410 RAM
24 GB      32 GB      24 GB      32 GB
48 GB      64 GB      48 GB      64 GB
96 GB      128 GB     96 GB      128 GB
           256 GB     192 GB     256 GB
Note
After you have added three or more S210 or X410 nodes to your cluster, you should
consider removing the compatibilities that you have created. This step enables OneFS to
autoprovision new S210 or X410 node pools and take advantage of the performance
specifications of the newer node types.
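For example, you can list the active compatibilities to find their ID numbers, and then preview the effect of removing one before deleting it (the ID 1 is an example):
isi storagepool compatibilities active list
isi storagepool compatibilities active delete 1 --assess yes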
Suggested protection
Based on the configuration of your Isilon cluster, OneFS automatically calculates the
amount of protection that is recommended to maintain EMC Isilon's stringent data
protection requirements.
OneFS includes a function to calculate the suggested protection for data to maintain a
theoretical mean-time to data loss (MTTDL) of 5000 years. Suggested protection provides
the optimal balance between data protection and storage efficiency on your cluster.
By configuring file pool policies, you can specify one of multiple requested protection
settings for a single file, for subsets of files called file pools, or for all files on the cluster.
It is recommended that you do not specify a setting below suggested protection. OneFS
periodically checks the protection level on the cluster, and alerts you if data falls below
the recommended protection.
Protection policies
OneFS provides a number of protection policies to choose from when protecting a file or
specifying a file pool policy.
The more nodes you have in your cluster, up to 20 nodes, the more efficiently OneFS can
store and protect data, and the higher levels of requested protection the operating
system can achieve. Depending on the configuration of your cluster and how much data
is stored, OneFS might not be able to achieve the level of protection that you request. For
example, if you have a three-node cluster that is approaching capacity, and you request
+2n protection, OneFS might not be able to deliver the requested protection.
The following table describes the available protection policies in OneFS.
Protection policy    Summary
+1n                  Tolerate the failure of 1 drive or the failure of 1 node.
+2d:1n               Tolerate the failure of 2 drives or the failure of 1 node.
+2n                  Tolerate the failure of 2 drives or 2 nodes.
+3d:1n               Tolerate the failure of 3 drives or the failure of 1 node.
+3d:1n1d             Tolerate the failure of 3 drives or the failure of 1 node and 1 drive.
+3n                  Tolerate the failure of 3 drives or 3 nodes.
+4d:1n               Tolerate the failure of 4 drives or the failure of 1 node.
+4d:2n               Tolerate the failure of 4 drives or 2 nodes.
+4n                  Tolerate the failure of 4 drives or 4 nodes.
2x to 8x (mirrors)   Duplicates, or mirrors, data over the specified number of nodes.
Note
Mirrors can use more data than the other protection policies, but might be an effective way to protect files that are written non-sequentially or to provide faster access to important files.
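For example, the following command sets the requested protection of the default file pool policy to +2n, using the option documented later in this chapter:
isi filepool default-policy modify --set-requested-protection=+2n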
SSD strategies
OneFS clusters can contain nodes that include solid-state drives (SSD). OneFS
autoprovisions equivalence-class nodes with SSDs into one or more node pools. The SSD
strategy defined in the default file pool policy determines how SSDs are used within the
cluster, and can be set to increase performance across a wide range of workflows.
You can configure file pool policies to apply specific SSD strategies as needed. When you
select SSD options during the creation of a file pool policy, you can identify the files in
the OneFS cluster that require faster or slower performance. When the SmartPools job
runs, OneFS uses file pool policies to move this data to the appropriate storage pool and
drive type.
The following SSD strategy options, which you can set in a file pool policy, are listed in order from slowest to fastest:
Avoid SSDs
Writes all associated file data and metadata to HDDs only.
CAUTION
Use this option to free SSD space only after consulting with Isilon Technical Support
personnel. Using this strategy can negatively affect performance.
Metadata read acceleration
Writes both file data and metadata to HDDs. This is the default setting. An extra
mirror of the file metadata is written to SSDs, if available. The SSD mirror is in
addition to the number of mirrors, if any, required to satisfy the requested
protection.
Metadata read/write acceleration
Writes file data to HDDs and metadata to SSDs, when available. This strategy
accelerates metadata writes in addition to reads but requires about four to five times
more SSD storage than the Metadata read acceleration setting. Enabling GNA does
not affect read/write acceleration.
Data on SSDs
Uses SSD node pools for both data and metadata, regardless of whether global
namespace acceleration is enabled. This SSD strategy does not result in the creation
of additional mirrors beyond the normal requested protection but significantly
increases storage requirements compared with the other SSD strategy
options.
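For example, the following command applies the metadata read/write acceleration strategy to files matched by an existing file pool policy (the policy name PERFORM_1 is an example):
isi filepool policies modify PERFORM_1 --data-ssd-strategy metadata-write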
Global namespace acceleration
You can enable GNA only if 20% or more of the nodes in the cluster contain at least one
SSD and 1.5% or more of the total cluster storage is SSD-based. For best results, ensure
that at least 2.0% of the total cluster storage is SSD-based before enabling global
namespace acceleration. If the ratio of SSDs to non-SSDs on the cluster falls below the
1.5% threshold, GNA becomes inactive even if enabled. GNA is reactivated when the ratio
is corrected. When GNA is inactive, existing SSD mirrors are readable but newly written
metadata does not include the extra SSD mirror.
Note
If GNA is enabled for the cluster, file pool policies that direct data to node pools with L3
cache enabled should also set the SSD strategy to Avoid SSDs. Otherwise, additional
SSD mirrors would be created for data that is already accelerated by L3 cache. This is an
inefficient use of SSD storage space and is not recommended.
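For example, the following command enables global namespace acceleration for the cluster, using the isi storagepool settings modify option documented later in this chapter:
isi storagepool settings modify --global-namespace-acceleration-enabled yes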
L3 cache overview
You can configure nodes with solid-state drives (SSDs) to increase cache memory and
speed up file system performance across larger working file sets.
OneFS caches file data and metadata at multiple levels. The following table describes the
types of file system cache available on an Isilon cluster.
Name         Type       Profile       Scope
L1 cache     RAM        Volatile      Local node
L2 cache     RAM        Volatile      Global
SmartCache   Variable   Nonvolatile   Local node
L3 cache     SSD        Nonvolatile   Global
OneFS caches frequently accessed file data and metadata in available random access memory
(RAM). Caching enables OneFS to optimize data protection and file system performance.
When RAM cache reaches capacity, OneFS normally discards the oldest cached data and
processes new data requests by accessing the storage drives. This cycle is repeated each
time RAM cache fills up.
You can deploy SSDs as L3 cache to reduce the cache cycling issue and further improve
file system performance. L3 cache adds significantly to the available cache memory and
provides faster access to data than hard disk drives (HDD).
As L2 cache reaches capacity, OneFS evaluates data to be released and, depending on
your workflow, moves the data to L3 cache. In this way, much more of the most frequently
accessed data is held in cache, and overall file system performance is improved.
For example, consider a cluster with 128GB of RAM. Typically the amount of RAM
available for cache fluctuates, depending on other active processes. If 50 percent of RAM
is available for cache, the cache size would be approximately 64GB. If this same cluster
had three nodes, each with two 200GB SSDs, the amount of L3 cache would be 1.2TB,
approximately 18 times the amount of available L2 cache.
L3 cache is enabled by default for new node pools. A node pool is a collection of nodes
that are all of the same equivalence class, or for which compatibilities have been created.
L3 cache applies only to the nodes where the SSDs reside. For the HD400 node, which is
primarily for archival purposes, L3 cache is on by default and cannot be turned off. On the
HD400, L3 cache is used only for metadata.
If you enable L3 cache on a node pool, OneFS manages all cache levels to provide
optimal data protection, availability, and performance. In addition, in case of a power
failure, the data on L3 cache is retained and still available after power is restored.
Note
Although some benefit from L3 cache is found in workflows with streaming and
concurrent file access, L3 cache provides the most benefit in workflows that involve
random file access.
Migration to L3 cache
L3 cache is enabled by default on new nodes. If you are upgrading your cluster from an
older release (pre-OneFS 7.1.1), you must enable L3 cache manually on node pools that
have SSDs. When you enable L3 cache, OneFS activates a process that migrates SSDs
from storage disks to cache. File data currently on SSDs is moved elsewhere in the
cluster.
You can enable L3 cache as the default for all new node pools or manually for a specific
node pool, either through the command line or from the web administration interface.
You can enable L3 cache only on node pools whose nodes have SSDs.
Depending on the amount of data stored in your SSDs, the migration process can take
some time. OneFS displays a message informing you that the migration is about to begin:
WARNING: Changes to L3 cache configuration can have a long completion
time. If this is a concern, please contact EMC Isilon Support for
more information.
You must confirm whether OneFS should proceed with the migration. After you do, OneFS
handles the migration intelligently as a background process. You can continue to
administer your cluster during the migration.
If you choose to disable L3 cache on a node pool, the migration process is very fast.
Required privileges
You need to have the SmartPools administrative privilege (or higher) to enable or disable
L3 cache.
You can enable or disable L3 cache from the command line or web administration
interface.
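For example, the following command enables L3 cache on a node pool named hq_datastore (the pool name is an example); you can confirm the cluster-wide default for new node pools with isi storagepool settings view:
isi storagepool nodepools modify hq_datastore --l3 yes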
Tiers
A tier is a user-defined collection of node pools that you can specify as a storage pool for
files. A node pool can belong to only one tier.
You can create tiers to assign your data to any of the node pools in the tier. For example,
you can assign a collection of node pools to a tier specifically created to store data that
requires high availability and fast access. In a three-tier system, this classification may
be Tier 1. You can classify data that is used less frequently or that is accessed by fewer
users as Tier 2 data. Tier 3 usually comprises data that is seldom used and can be
archived for historical or regulatory purposes.
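For example, the following commands create a tier and then move an existing node pool into it (the tier and node pool names are examples):
isi storagepool tiers create PERF_TIER --children hq_datastore1
isi storagepool nodepools modify hq_datastore2 --tier PERF_TIER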
File pools
File pools are sets of files that you define to apply policy-based control of the storage
characteristics of your data.
The initial installation of OneFS places all files in the cluster into a single file pool, which
is subject to the default file pool policy. SmartPools enables you to define additional file
pools, and create policies that move files in these pools to specific node pools and tiers.
File pool policies match specific file characteristics (such as file size, type, date of last access, or a combination of these and other factors), and define specific storage operations for files that match them. The following examples demonstrate a few ways you can configure file pool policies (a command sketch follows the list):
You can create a file pool policy for a specific file extension that requires high availability.
You can configure a file pool policy to store that type of data in a storage pool that provides the fastest reads or read/writes.
You can create another file pool policy to evaluate the last-accessed date, allowing you to store older files in a storage pool best suited for archiving for historical or regulatory purposes.
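The following sketch shows such a policy, using only options documented later in this chapter (the policy, path, and tier names are examples):
isi filepool policies create HIGH_AVAIL \
--description "Keep active project data on the performance tier" \
--begin-filter --path=/ifs/data/projects --end-filter \
--data-storage-target PERF_TIER --data-ssd-strategy metadata-write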
The unlicensed OneFS SmartPools technology allows you to configure the default file pool
policy for managing the node pools that are created when the cluster is autoprovisioned.
The default file pool contains all files and is stored in any node pool. Default file pool
operations are defined by settings of the default file pool policy.
You cannot reorder or remove the default file pool policy. The settings in the default file
pool policy apply to all files that are not covered by another file pool policy. For example,
data that is not covered by a file pool policy can be moved to a tier that you identify as a
default for this purpose.
All file pool policy operations are executed when the SmartPools job runs. When new files are created, OneFS chooses a storage pool for them temporarily, using a mechanism based on the file pool policies that were in effect when the last SmartPools job ran. The system may apply new storage settings and move these files again when the next SmartPools job runs, based on a matching file pool policy.
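To review the settings that the default file pool policy currently applies, run the following command:
isi filepool default-policy view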
Create a compatibility
Procedure
1. Run the isi storagepool compatibilities active create command.
The following command creates a compatibility between Isilon S200 and S210 nodes:
isi storagepool compatibilities active create S200 S210
OneFS provides a summary of the results of executing the command, and requires you to confirm the operation.
2. To proceed, type yes, and then press ENTER. To cancel, type no, and then press ENTER.
Results
If you proceeded to create the compatibility, OneFS adds any unprovisioned S210 nodes to the S200 node pool.
Merge compatible node pools
OneFS provides a summary of the results of executing the command, including the node pools that will be merged, and requires you to confirm the operation. To proceed, type yes, and then press ENTER. To cancel, type no, and then press ENTER.
Results
If you allowed the operation to proceed, the compatible node pools are merged into one node pool.
Delete a compatibility
You can delete a compatibility. Any nodes that belong to a node pool only because of that compatibility are removed from the node pool.
CAUTION
Procedure
1. Run the isi storagepool compatibilities active delete command with the ID of the compatibility.
OneFS provides a summary of the results of executing the command, including the node pools that will be affected by the compatibility removal, and requires you to confirm the operation.
2. To proceed, type yes, and then press ENTER. To cancel, type no, and then press ENTER.
Results
If you allowed the operation to proceed, OneFS splits any merged node pools, or unprovisions any previously compatible nodes fewer than three in number.
LNN values can be specified as a range, for example, --lnns=1-3, or in a comma-separated list, for example, --lnns=1,2,5,9.
2. Run the isi storagepool settings view command to confirm that the SSD
L3 Cache Default Enabled attribute is set to Yes.
If the SSDs on the specified node pool previously were used as storage drives, a
message appears asking you to confirm the change.
2. If prompted, type yes, and then press ENTER.
On HD400 node pools, SSDs are used only for L3 cache, which is turned on by default
and cannot be turned off. If you attempt to turn off L3 cache on an HD400 node pool
through the command-line interface, OneFS generates this error message: Disabling
L3 not supported for the given node type.
Procedure
1. Run the isi storagepool nodepools modify command on a specific node
pool.
The following command disables L3 cache on a node pool named hq_datastore:
isi storagepool nodepools modify hq_datastore --l3 false
Managing tiers
You can move node pools into tiers to optimize file and storage management.
Managing tiers requires ISI_PRIV_SMARTPOOLS or higher administrative privileges.
Create a tier
You can create a tier to group together one or more node pools for specific storage
purposes.
Depending on the types of nodes in your cluster, you can create tiers for different
categories of storage, for example, an archive tier, performance tier, or general-use tier.
After creating a tier, you need to add the appropriate node pools to the tier.
Procedure
1. Run the isi storagepool tiers create command.
The following command creates a tier named ARCHIVE_1, and adds node pools
named hq_datastore1 and hq_datastore2 to the tier.
isi storagepool tiers create ARCHIVE_1 --children hq_datastore1 \
--children hq_datastore2
If a node pool that you add to the tier (for example, one named PROJECT-A) already belongs to another tier, that node pool is moved to the ARCHIVE_1 tier.
Rename a tier
A tier name can contain alphanumeric characters and underscores but cannot begin with
a number.
Procedure
1. Run the isi storagepool tiers modify command.
The following command renames a tier from ARCHIVE_1 to ARCHIVE_A:
isi storagepool tiers modify ARCHIVE_1 --set-name ARCHIVE_A
Delete a tier
When you delete a tier, its node pools remain available and can be added to other tiers.
Procedure
1. Run the isi storagepool tiers delete command.
The following command deletes a tier named ARCHIVE_A:
isi storagepool tiers delete ARCHIVE_A
For example, to free up disk space on your performance tier (S-series node pools), you
could create a file pool policy to match all files greater than 25 MB in size, which have not
been accessed or modified for more than a month, and move them to your archive tier
(NL-series node pools).
You can configure and prioritize multiple file pool policies to optimize file storage for your
particular work flows and cluster configuration. When the SmartPools job runs, by default
once a day, it applies file pool policies in priority order. When a file pool matches the
criteria defined in a policy, the actions in that policy are applied, and lower-priority
custom policies are ignored for the file pool.
After the list of custom file pool policies is traversed, if any of the actions are not applied
to a file, the actions in the default file pool policy are applied. In this way, the default file
pool policy ensures that all actions apply to every file.
Note
You can reorder the file pool policy list at any time, but the default file pool policy is
always last in the list of file pool policies.
OneFS also provides customizable template policies that you can copy to make your own
policies. These templates, however, are only available from the OneFS web
administration interface.
If existing file pool policies direct data to a specific storage pool, do not configure other file pool policies that match this data with anywhere for the --data-storage-target option. Because the specified storage pool is included when you use anywhere, you should target specific storage pools to avoid unintentional file storage locations.
Procedure
1. Run the isi filepool policies create command.
The following command creates a file pool policy that archives older files to a specific
storage tier:
isi filepool policies create ARCHIVE_OLD \
--description "Move older files to archive storage" \
--data-storage-target ARCHIVE_TIER --data-ssd-strategy metadata \
--begin-filter --file-type=file --and --birth-time=2013-09-01 \
--operator=lt --and --accessed-time=2013-12-01 --operator=lt \
--end-filter
Results
The file pool policy is applied when the next scheduled SmartPools job runs. By default,
the SmartPools job runs once a day; however, you can also start the SmartPools job
manually.
OneFS supports UNIX shell-style (glob) pattern matching for file name attributes and
paths.
The following table lists the file attributes that you can use to define a file pool policy.
File attribute     Specifies
Name
Path               You can specify whether to include or exclude full or partial paths that
                   contain specified text. You can also include the wildcard characters *, ?,
                   and [ ].
File type          Includes or excludes files based on one of the following file-system
                   object types: File, Directory, or Other.
Size
Created
Metadata changed
Accessed

OneFS supports the following wildcard characters in file name and path patterns:

Wildcard           Description
[a-z]              Matches any characters within the brackets, or a range of characters
                   separated by a hyphen. For example, b[aei]t matches bat, bet, and bit,
                   and 1[4-7]2 matches 142, 152, 162, and 172. You can exclude characters
                   within brackets by following the first bracket with an exclamation mark.
?                  Matches any character in place of the question mark. For example, t?p
                   matches tap, tip, and top.
SmartPools settings
SmartPools settings include directory protection, global namespace acceleration, L3
cache, virtual hot spare, spillover, requested protection management, and I/O
optimization management.
Increase directory protection to a higher level than its contents
This configuration can sustain a failure of two nodes before data loss or inaccessibility. If this setting is enabled, all directories are protected at 4x. If the cluster experiences three node failures, although individual files may be inaccessible, the directory tree is available and provides access to files that are still accessible. In addition, if another file pool policy protects some files at a higher level, these too are accessible in the event of a three-node failure.

Enable global namespace acceleration

Use SSDs as L3 cache by default for new node pools

Virtual hot spare
Reserves a minimum amount of space in the node pool that can be used for data repair in the event of a drive failure. To reserve disk space for use as a virtual hot spare, you select from options such as a minimum number of virtual drives, a minimum percentage of total disk space, denying new data writes, and reducing the amount of available space. If you configure both the minimum number of virtual drives and the minimum percentage of total disk space, the enforced minimum value satisfies both requirements.

Enable global spillover

Spillover data target

Manage I/O optimization settings

Avoid SSDs
Write all associated file data and metadata to HDDs only.

Data on SSDs
Use SSDs for both data and metadata. Regardless of whether global namespace acceleration is enabled, any SSD blocks reside on the storage target if there is room.

Snapshot storage target

Requested protection

SmartCache
Enables or disables SmartCache.

Data access pattern
Note
You can create a file pool policy from a template only in the OneFS web administration
interface.
If existing file pool policies direct data to a specific storage pool, do not configure other
file pool policies with anywhere for the Data storage target option. Because the
specified storage pool is included when you use anywhere, target specific storage
pools to avoid unintentional file storage locations.
Procedure
1. Run the isi filepool policies list command to view a list of available file
pool policies.
A tabular list of policies and their descriptions appears.
2. Run the isi filepool policies view command to view the current settings of
a file pool policy.
The following example displays the settings of a file pool policy named ARCHIVE_OLD.
isi filepool policies view ARCHIVE_OLD
3. Run the isi filepool policies modify command to change a file pool
policy.
The following example modifies the settings of a file pool policy named ARCHIVE_OLD.
isi filepool policies modify ARCHIVE_OLD --description
"Move older files to archive storage" --data-storage-target TIER_A
--data-ssd-strategy metadata-write --begin-filter --file-type=file
--and --birth-time=2013-01-01 --operator=lt --and --accessed-time=
2013-09-01 --operator=lt --end-filter
Results
Changes to the file pool policy are applied when the next SmartPools job runs. However,
you can also manually run the SmartPools job immediately.
Results
OneFS applies your changes to any files managed by the default file pool policy the next
time the SmartPools job runs.
Set Requested Protection: default
Data Access Pattern: random
Enable Coalescer: True
Data Storage Target: anywhere
Data SSD Strategy: metadata
Snapshot Storage Target: anywhere
Snapshot SSD Strategy: metadata
3. Run the isi filepool default-policy view command again to ensure that
default file pool policy settings reflect your intentions.
Results
OneFS implements the new default file pool policy settings when the next scheduled
SmartPools job runs and applies these settings to any files that are not managed by a
custom file pool policy.
2. Run the isi filepool policies modify command to change the priority of a
file pool policy.
The following example changes the priority of a file pool policy named PERFORM_1.
isi filepool policies modify PERFORM_1 --apply-order 1
3. Run the isi filepool policies list command again to ensure that the
policy list displays the correct priority order.
Results
The file pool policy is removed. When you delete a policy, its file pool will be controlled
either by another policy or by the default file pool policy the next time the SmartPools job
runs.
Procedure
1. Run the isi job events list command.
A tabular listing of the most recent system jobs appears. The listing for the SmartPools
job is similar to the following example:
2014-04-28T02:00:29 SmartPools [105] Succeeded
2. Locate the SmartPools job in the listing, and make note of the number in square
brackets.
This is the job ID number.
3. Run the isi job reports view command, using the job ID number.
The following example displays the report for a SmartPools job with the job ID of 105.
isi job reports view 105
Results
The SmartPools report shows the outcome of all of the file pool policies that were run,
including summaries for each policy, and overall job information such as elapsed time,
LINs traversed, files and directories processed, and memory and I/O statistics.
Options
{--path | -p} <path>
Specifies the path to the file to be processed. This parameter is required.
{--dont-restripe | -d}
Changes the per-file policies without restriping the file.
{--nop | -n}
Calculates the specified settings without actually applying them. This option is best
used with --verbose or --stats.
{--stats | -s}
Displays statistics on the files processed.
{--quiet | -q}
Suppresses warning messages.
{--recurse | -r}
Specifies recursion through directories.
{--verbose | -v}
Displays the configuration settings to be applied.
Examples
This example shows the results of running isi filepool apply in verbose mode.
The output shows the results of comparing the path specified with each file pool policy.
The recurse option is set so that all files in the /ifs/data/projects path are
matched against all file pool policies. The first policy listed is always the system default
policy. In this example, the second match is to the file pool policy Technical Data.
isi filepool apply --path=/ifs/data/projects --verbose --recurse
Processing file /ifs/data/projects
Protection Level is DiskPool minimum
Layout policy is concurrent access
coalescer_enabled is true
data_disk_pool_policy_id is any pool group ID
data SSD strategy is metadata
snapshot_disk_pool_policy_id is any pool group ID
snapshot SSD strategy is metadata
cloud provider id is 0
New File Attributes
Protection Level is DiskPool minimum
Layout policy is concurrent access
coalescer_enabled is true
data_disk_pool_policy_id is any pool group ID
data SSD strategy is metadata
snapshot_disk_pool_policy_id is any pool group ID
snapshot SSD strategy is metadata
cloud provider id is 0
{'default' :
{'Policy Number': -2,
'Files matched': {'head':0, 'snapshot': 0},
'Directories matched': {'head':1, 'snapshot': 0},
'ADS containers matched': {'head':0, 'snapshot': 0},
'ADS streams matched': {'head':0, 'snapshot': 0},
'Access changes skipped': 0,
'Protection changes skipped': 0,
'File creation templates matched': 1,
'File data placed on HDDs': {'head':0, 'snapshot': 0},
'File data placed on SSDs': {'head':0, 'snapshot': 0},
},
'system':
'Technical Data':
{'Policy Number': 0,
'Files matched': {'head':0, 'snapshot': 0},
'Directories matched': {'head':0, 'snapshot': 0},
'ADS containers matched': {'head':0, 'snapshot': 0},
'ADS streams matched': {'head':0, 'snapshot': 0},
'Access changes skipped': 0,
'Protection changes skipped': 0,
'File creation templates matched': 0,
'File data placed on HDDs': {'head':0, 'snapshot': 0},
'File data placed on SSDs': {'head':0, 'snapshot': 0},
This example shows the result of using the --nop option to calculate the results that
would be produced by applying the file pool policy.
isi filepool apply --path=/ifs/data/projects --nop --verbose
Processing file /ifs/data/projects
Protection Level is DiskPool minimum
Layout policy is concurrent access
coalescer_enabled is true
data_disk_pool_policy_id is any pool group ID
data SSD strategy is metadata
snapshot_disk_pool_policy_id is any pool group ID
snapshot SSD strategy is metadata
cloud provider id is 0
New File Attributes
Protection Level is DiskPool minimum
Layout policy is concurrent access
coalescer_enabled is true
data_disk_pool_policy_id is any pool group ID
data SSD strategy is metadata
snapshot_disk_pool_policy_id is any pool group ID
snapshot SSD strategy is metadata
cloud provider id is 0
{'default' :
{'Policy Number': -2,
'Files matched': {'head':0, 'snapshot': 0},
'Directories matched': {'head':1, 'snapshot': 0},
'ADS containers matched': {'head':0, 'snapshot': 0},
'ADS streams matched': {'head':0, 'snapshot': 0},
'Access changes skipped': 0,
'Protection changes skipped': 0,
'File creation templates matched': 1,
'File data placed on HDDs': {'head':0, 'snapshot': 0},
'File data placed on SSDs': {'head':0, 'snapshot': 0},
},
'system':
{'Policy Number': -1,
'Files matched': {'head':0, 'snapshot': 0},
'Directories matched': {'head':0, 'snapshot': 0},
'ADS containers matched': {'head':0, 'snapshot': 0},
'ADS streams matched': {'head':0, 'snapshot': 0},
'Access changes skipped': 0,
'Protection changes skipped': 0,
'File creation templates matched': 0,
'File data placed on HDDs': {'head':0, 'snapshot': 0},
'File data placed on SSDs': {'head':0, 'snapshot': 0},
},
[--enable-coalescer <boolean>]
[{--verbose | -v}]
Options
--data-access-pattern <string>
Specifies the preferred data access pattern, one of random, streaming, or
concurrent.
--set-requested-protection <string>
Specifies the requested protection for files that match this filepool policy (for
example, +2:1).
--data-storage-target <string>
Specifies the node pool or tier to which the policy moves files on the local cluster.
--data-ssd-strategy <string>
Specifies how to use SSDs to store local data.
avoid
Writes all associated file data and metadata to HDDs only.
metadata
Writes both file data and metadata to HDDs. This is the default setting. An extra
mirror of the file metadata is written to SSDs, if SSDs are available. The SSD
mirror is in addition to the number required to satisfy the requested protection.
Enabling global namespace acceleration (GNA) makes read acceleration
available to files in node pools that do not contain SSDs.
metadata-write
Writes file data to HDDs and metadata to SSDs, when available. This strategy
accelerates metadata writes in addition to reads but requires about four to five
times more SSD storage than the Metadata setting. Enabling GNA does not
affect read/write acceleration.
data
Uses SSD node pools for both data and metadata, regardless of whether global
namespace acceleration is enabled. This SSD strategy does not result in the
creation of additional mirrors beyond the normal requested protection but
significantly increases storage requirements compared with the other
SSD strategy options.
--snapshot-storage-target <integer>
The ID of the node pool or tier chosen for storage of snapshots.
--snapshot-ssd-strategy <string>
Specifies how to use SSDs to store snapshots. Valid options are metadata,
metadata-write, data, avoid. The default is metadata.
--enable-coalescer <boolean>
Enable or disable the coalescer, also referred to as SmartCache.
--verbose
Displays more detailed information.
Example
The command shown in the following example modifies the default file pool policy in
several ways. The command sets the requested-protection-level to +2:1, sets
the data-storage-target to anywhere (the system default), changes the data-ssd-strategy to metadata-write, and sets the coalescer to off.
isi filepool default-policy modify --set-requested-protection=+2:1 \
--enable-coalescer=false --data-storage-target=anywhere \
--data-ssd-strategy=metadata-write
Set Requested Protection: default
Data Access Pattern: random
Enable Coalescer: True
Data Storage Target: anywhere
Data SSD Strategy: metadata
Snapshot Storage Target: anywhere
Snapshot SSD Strategy: metadata
Options
<name>
Specifies the name of the file pool policy to create.
--begin-filter {<predicate> <operator> <link>}... --end-filter
Specifies the file-matching criteria that determine the files to be managed by the
filepool policy.
Each file matching criterion consists of three parts:
Predicate. Specifies what attribute(s) to filter on. You can filter by path, name, file
type, timestamp, or custom attribute, or use a combination of these attributes.
--birth-time=<timestamp>
Selects files that were created relative to the specified date and time. Timestamp
arguments are formed as YYYY-MM-DDTHH:MM:SS. For example,
2013-09-01T08:00:00 specifies a timestamp of September 1, 2013 at 8:00 A.M.
You can use --operator= with an argument of gt to mean after the timestamp or
lt to mean before the timestamp.
--changed-time=<timestamp>
Selects files that were modified relative to the specified date and time.
--metadata-changed-time=<timestamp>
Selects files whose metadata was modified relative to the specified date and time.
--accessed-time=<timestamp>
Selects files that were accessed relative to the specified date and time.
--custom-attribute=<value>
Selects files based on a custom attribute.
You can use the --operator= option to specify a qualifier for the file-matching
criterion. Specify operators in the following form:
--operator=<value>
ne
Not equal
lt
Less than
le
Less than or equal to
gt
Greater than
ge
Greater than or equal to
not
Not
Link arguments can be used to specify multiple file-matching criteria. The following
links are valid:
--and
Connects two file-matching criteria where files must match both criteria.
--or
Connects two file-matching criteria where files must match one or the other criteria.
--description <string>
Specifies a description of the filepool policy.
--apply-order <integer>
Specifies the order index for execution of this policy.
--data-access-pattern <string>
Specifies the data access pattern: random, streaming, or concurrent.
--set-requested-protection <string>
Specifies a protection level for files that match this filepool policy (e.g., +3, +2:3, 8x).
--data-storage-target <name>
The name of the node pool or tier to which the policy moves files on the local cluster.
--data-ssd-strategy <string>
Specifies how to use SSDs to store local data.
avoid
Writes all associated file data and metadata to HDDs only.
metadata
Writes both file data and metadata to HDDs. This is the default setting. An extra
mirror of the file metadata is written to SSDs, if SSDs are available. The SSD
mirror is in addition to the number required to satisfy the requested protection.
Enabling GNA makes read acceleration available to files in node pools that do
not contain SSDs.
metadata-write
Writes file data to HDDs and metadata to SSDs, when available. This strategy
accelerates metadata writes in addition to reads but requires about four to five
times more SSD storage than the Metadata setting. Enabling GNA does not
affect read/write acceleration.
data
Uses SSD node pools for both data and metadata, regardless of whether global
namespace acceleration is enabled. This SSD strategy does not result in the
creation of additional mirrors beyond the normal requested protection but
significantly increases storage requirements compared with the other
SSD strategy options.
--snapshot-storage-target <name>
The name of the node pool or tier chosen for storage of snapshots.
--snapshot-ssd-strategy <string>
Specifies how to use SSDs to store snapshots. Valid options are metadata,
metadata-write, data, avoid. The default is metadata.
--enable-coalescer <boolean>
Enable the coalescer.
--verbose
Displays more detailed information.
Examples
The following example creates a file pool policy that moves all files in the directory /ifs/data/chemical/arco/finance to the local storage target named Archive_2.
isi filepool policies create Save_Fin_Data --begin-filter
--path=/ifs/data/chemical/arco/finance --end-filter
--data-storage-target Archive_2 --data-ssd-strategy=metadata
The following example matches older files that have not been accessed or modified later
than specified dates, and moves the files to an archival tier of storage.
isi filepool policies create archive_old
--data-storage-target ARCHIVE_1 --data-ssd-strategy avoid
--begin-filter --file-type=file --and --birth-time=2013-09-01
--operator=lt --and --accessed-time=2013-12-01 --operator=lt
--and --changed-time=2013-12-01 --operator=lt --end-filter
Options
<name>
Specifies the name of the file pool policy to delete.
Example
The following command deletes a file pool policy named ARCHIVE_OLD. The --force
option circumvents the requirement to confirm the deletion:
isi filepool policies delete ARCHIVE_OLD --force
Options
--format
Output the list of file pool policies in a variety of formats. The following values are
valid:
table
json
csv
list
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Example
The following example lists custom file pool policies in .csv format and outputs the list to
a file in the OneFS file system.
isi filepool policies list --format csv > /ifs/data/policy.csv
Options
<name>
Specifies the name of the file pool policy to modify.
--begin-filter {<predicate> <operator> <link>}... --end-filter
Specifies the file-matching criteria that determine the files to be managed by the
filepool policy.
Each file matching criterion consists of three parts:
Predicate. Specifies what attribute(s) to filter on. You can filter by path, name, file
type, timestamp, or custom attribute, or use a combination of these attributes.
--birth-time=<timestamp>
Selects files that were created relative to the specified date and time. Timestamp
arguments are formed as YYYY-MM-DDTHH:MM:SS. For example,
2013-09-01T08:00:00 specifies a timestamp of September 1, 2013 at 8:00 A.M.
You can use --operator= with an argument of gt to mean after the timestamp or
lt to mean before the timestamp.
--changed-time=<timestamp>
Selects files that were modified relative to the specified date and time.
--metadata-changed-time=<timestamp>
Selects files whose metadata was modified relative to the specified date and time.
--accessed-time=<timestamp>
Selects files that were accessed relative to the specified date and time.
--custom-attribute=<value>
Selects files based on a custom attribute.
You can use the --operator= option to specify a qualifier for the file-matching
criterion. Specify operators in the following form:
--operator=<value>
ne
Not equal
lt
Less than
le
Less than or equal to
gt
Greater than
ge
Greater than or equal to
not
Not
Link arguments can be used to specify multiple file-matching criteria. The following
links are valid:
--and
Connects two file-matching criteria where files must match both criteria.
--or
Connects two file-matching criteria where files must match one or the other criteria.
--description <string>
Specifies a description of the filepool policy.
--apply-order <integer>
Specifies the order index for execution of this policy.
--data-access-pattern <string>
Specifies the data access pattern: random, streaming, or concurrent.
--set-requested-protection <string>
Specifies a protection level for files that match this filepool policy (for example, +3,
+2:3, 8x).
--data-storage-target <name>
The name of the node pool or tier to which the policy moves files on the local cluster.
--data-ssd-strategy <string>
Specifies how to use SSDs to store local data.
avoid
Writes all associated file data and metadata to HDDs only.
metadata
Writes both file data and metadata to HDDs. This is the default setting. An extra
mirror of the file metadata is written to SSDs, if SSDs are available. The SSD
mirror is in addition to the number required to satisfy the requested protection.
Enabling GNA makes read acceleration available to files in node pools that do
not contain SSDs.
metadata-write
Writes file data to HDDs and metadata to SSDs, when available. This strategy
accelerates metadata writes in addition to reads but requires about four to five
times more SSD storage than the Metadata setting. Enabling GNA does not
affect read/write acceleration.
data
Uses SSD node pools for both data and metadata, regardless of whether global
namespace acceleration is enabled. This SSD strategy does not result in the
creation of additional mirrors beyond the normal requested protection but
significantly increases storage requirements compared with the other
SSD strategy options.
--snapshot-storage-target <name>
The name of the node pool or tier chosen for storage of snapshots.
--snapshot-ssd-strategy <string>
Specifies how to use SSDs to store snapshots. Valid options are metadata,
metadata-write, data, avoid. The default is metadata.
--enable-coalescer <boolean>
Enable the coalescer.
--verbose
Display more detailed information.
Examples
The following example modifies a file pool policy to move matched files to a different
local storage target named Archive_4. The next time the SmartPools job runs, matched
files would be moved to the new storage target.
isi filepool policies modify Save_Fin_Data --begin-filter
--path=/ifs/data/chemical/arco/finance --end-filter
--data-storage-target Archive_4 --data-ssd-strategy=metadata
The following example matches older files that have not been accessed or modified later
than specified dates, and moves the files to an archival tier of storage.
isi filepool policies modify archive_old
--data-storage-target ARCHIVE_1 --data-ssd-strategy avoid
--begin-filter --file-type=file --and --birth-time=2013-06-01
--operator=lt --and --accessed-time=2013-09-01 --operator=lt
--and --changed-time=2013-09-01 --operator=lt --end-filter
Options
<name>
Specifies the name of the file pool policy. Names must begin with a letter or an
underscore and contain only letters, numbers, hyphens, underscores or periods.
Options
{--limit | -l} <integer>
Specifies the number of templates to display.
--sort <string>
Sorts data by the field specified.
{--descending | -d}
Sorts data in descending order.
--format
Displays file pool templates in the specified format. The following values are valid:
table
json
csv
list
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Options
<name>
The name of the template to view.
Options
<class-1>
An existing node pool class, one of S200 or X400.
<class-2>
The node class that is compatible with the existing node pool, one of S210 or X410.
Note that S210 nodes are only compatible with S200 node pools, and X410 nodes
are only compatible with X400 node pools.
{--assess | -a} {yes | no}
Checks whether the compatibility is valid without actually creating the compatibility.
{--verbose | -v}
Displays more detailed information.
{--force | -f}
Performs the action without asking for confirmation.
Examples
The following command creates a compatibility between S200 and S210 nodes without
asking for confirmation:
isi storagepool compatibilities active create S200 S210 --force
Options
<ID>
The ID number of the compatibility. You can use the isi storagepool
compatibilities active list command to view the ID numbers of active
compatibilities.
{--assess | -a} {yes | no}
Checks the results without actually deleting the compatibility.
{--verbose | -v}
Displays more detailed information.
{--force | -f}
Performs the action without asking for confirmation.
Example
The following command provides information about the results of deleting a compatibility
without actually performing the action:
isi storagepool compatibilities active delete 1 --assess yes
Provided that a compatibility with the ID of 1 exists, OneFS displays information similar to
the following example:
Deleting compatibility with id 1 is possible.
This delete will cause these nodepools to split:
1: Nodepool s200_0b_0b will be split. A tier will be created and all
resultant nodepools from this split will be incorporated into it. All
filepool policies targeted at the splitting pool will be redirected
towards this new tier. That tier's name is s200_0b_0b-tier
Options
{--limit | -l} <integer>
Limits the number of active compatibilities that are listed.
Options
<ID>
The ID number of the compatibility to view. You can use the isi storagepool
compatibilities active list command to display the ID numbers of active
compatibilities.
Example
The following command displays information about an active compatibility with ID
number 1:
isi storagepool compatibilities active view 1
Options
{--limit | -l} <integer>
Limits the number of available compatibilities that are listed.
{--format | -f}
Lists available compatibilities in the specified format. The following values are valid:
table
json
csv
list
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Example
The following command lists available compatibilities:
isi storagepool compatibilities available list
Options
{--verbose | -v}
Displays more detailed information.
Options
--format
Displays node pools and tiers in the specified format. The following values are valid:
table
json
csv
list
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<name>
Specifies the name for the node pool. Names must begin with a letter or an
underscore and may contain only letters, numbers, hyphens, underscores, or periods.
{--lnns <lnns> | -n <lnns>}
Specifies the nodes in this pool. Nodes can be a comma-separated list or range of LNNs, for example, 1,4,10,12,14,15 or 1-6.
{--verbose | -v}
Displays more detailed information.
Options
<name>
Specifies the name of the node pool to be deleted.
{--force | -f}
Suppresses any prompts, warnings, or confirmation messages that would otherwise
appear.
{--verbose | -v}
Displays more detailed information.
Options
{--limit | -l} <integer>
Specifies the number of node pools to display.
--format
Displays tiers in the specified format. The following values are valid:
table
json
csv
list
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<string>
+1n
+2d:1n
+2n
+3d:1n
+3d:1n1d
+3n
+4d:1n
+4d:2n
+4n
--add-lnns <integer>
Add nodes for the manually managed node pool. Specify --add-lnns for each additional node to add.
--remove-lnns <integer>
Remove nodes for the manually managed node pool. Specify --remove-lnns for
each additional node to remove.
--tier <string>
Set parent for the node pool. Node pools can be grouped into a tier to service
particular file pools.
--clear-tier
Remove the specified node pool from its parent tier.
--l3 {yes | no}
Use SSDs in the specified node pool as L3 cache. Note that, on Isilon HD400 node
pools, L3 cache is on by default and you cannot disable it. If you try to disable L3
cache on an HD400 node pool, OneFS generates the following error message:
Disabling L3 not supported for the given node type.
--set-name <string>
New name for the manually managed node pool.
Examples
The following command specifies that SSDs in a node pool named hq_datastore are to be
used as L3 cache:
isi storagepool nodepools modify hq_datastore --l3 yes
The following command adds the node pool hq_datastore to an existing tier named
archive-1:
isi storagepool nodepools modify hq_datastore --tier archive-1
Options
<name>
Required Privileges
ISI_PRIV_SMARTPOOLS
Options
--automatically-manage-protection {all | files_at_default |
none}
Specifies whether SmartPools manages files' protection settings.
--automatically-manage-io-optimization {all | files_at_default
| none}
Specifies whether SmartPools manages I/O optimization settings for files.
--protect-directories-one-level-higher {yes | no}
Protects directories at one level higher.
--global-namespace-acceleration-enabled {yes | no}
Enables or disables global namespace acceleration.
--virtual-hot-spare-deny-writes {yes | no}
Denies new data writes to the virtual hot spare.
--virtual-hot-spare-hide-spare {yes | no}
Reduces the amount of available space for the virtual hot spare.
--virtual-hot-spare-limit-drives <integer>
Specifies the maximum number of virtual drives.
--virtual-hot-spare-limit-percent <integer>
Limits the percentage of node resources that is allocated to virtual hot spare.
--spillover-target <string>
Specifies the target for spillover.
--no-spillover
Globally disables spillover.
--spillover-anywhere
Globally sets spillover to anywhere.
The following command specifies that 20 percent of node resources can be used for the
virtual hot spare:
isi storagepool settings modify --virtual-hot-spare-limit-percent 20
Options
There are no options for this command.
Example
The following command displays the global SmartPools settings on your cluster:
isi storagepool settings view
Options
<name>
Specifies the name for the storage pool tier. Specify as any string.
{--verbose | -v}
Displays more detailed information.
Options
{<name> | --all}
Specifies the tier to delete. The acceptable values are the name of the tier or all.
{--verbose | -v}
Displays more detailed information.
Options
--format
Displays tiers in the specified format. The following values are valid:
table
json
csv
list
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Options
<name>
Specifies the tier to be renamed.
{--set-name | -s} <string>
Sets the new name for the tier.
{--verbose | -v}
Displays more detailed information.
Options
<name>
Specifies the name of the tier.
{--verbose | -v}
Displays more detailed information.
CHAPTER 21
System jobs
To initiate any Job Engine tasks, you must have the role of SystemAdmin in the OneFS
system.
OneFS includes the following system jobs:

AutoBalance
Exclusion set: Restripe. Impact policy: Low. Operation: Auto.

AutoBalanceLin
Exclusion set: Restripe. Impact policy: Low. Operation: Auto.

AVScan
Performs an antivirus scan on all files.
Exclusion set: None. Impact policy: Low. Operation: Manual.

Collect
Exclusion set: Mark. Impact policy: Low. Operation: Auto.

Dedupe*
Exclusion set: None. Impact policy: Low. Operation: Manual.

DomainMark
Impact policy: Low. Operation: Manual.

FlexProtect
Impact policy: Medium. Operation: Auto.

FlexProtectLin
Impact policy: Medium. Operation: Auto.

FSAnalyze
Gathers information about the file system.
Exclusion set: None. Impact policy: Low. Operation: Scheduled.

IntegrityScan
Exclusion set: Mark. Impact policy: Medium. Operation: Manual.

MediaScan
Impact policy: Low. Operation: Scheduled.

MultiScan
Exclusion set: Restripe, Mark. Impact policy: Low. Operation: Auto.

PermissionRepair
Exclusion set: None. Impact policy: Low. Operation: Manual.

QuotaScan*
Updates quota accounting for domains created on an existing file tree. Available only if you activate a SmartQuotas license.
Exclusion set: None. Impact policy: Low. Operation: Auto.

SetProtectPlus
Exclusion set: Restripe. Impact policy: Low. Operation: Manual.

SmartPools*
Enforces SmartPools file policies. Available only if you activate a SmartPools license.
Exclusion set: Restripe. Impact policy: Low. Operation: Scheduled.

SnapRevert
Reverts an entire snapshot back to head.
Exclusion set: None. Impact policy: Low. Operation: Manual.

SnapshotDelete
Exclusion set: None. Impact policy: Medium. Operation: Auto.

TreeDelete
Impact policy: Medium. Operation: Manual.
Job operation
OneFS includes system maintenance jobs that run to ensure that your Isilon cluster
performs at peak health. Through the Job Engine, OneFS runs a subset of these jobs
automatically, as needed, to ensure file and data integrity, check for and mitigate drive
and node failures, and optimize free space. For other jobs, for example, Dedupe, you can
use Job Engine to start them manually or schedule them to run automatically at regular
intervals.
The Job Engine runs system maintenance jobs in the background and prevents jobs
within the same classification (exclusion set) from running simultaneously. Two exclusion
sets are enforced: restripe and mark.
Restripe job types are:
AutoBalance
AutoBalanceLin
FlexProtect
FlexProtectLin
MediaScan
MultiScan
SetProtectPlus
SmartPools
Mark job types are:
Collect
IntegrityScan
MultiScan
Note that MultiScan is a member of both the restripe and mark exclusion sets. You
cannot change the exclusion set parameter for a job type.
The Job Engine is also sensitive to job priority, and can run up to three jobs, of any
priority, simultaneously. Job priority is denoted as 1 to 10, with 1 being the highest and 10
being the lowest. The system uses job priority when a conflict among running or queued
jobs arises. For example, if you manually start a job that has a higher priority than three
other jobs that are already running, Job Engine pauses the lowest-priority active job, runs
the new job, then restarts the older job at the point at which it was paused. Similarly, if
you start a job within the restripe exclusion set, and another restripe job is already
running, the system uses priority to determine which job should run (or remain running)
and which job should be paused (or remain paused).
Other job parameters determine whether jobs are enabled, their performance impact, and
schedule. As system administrator, you can accept the job defaults or adjust these
parameters (except for exclusion set) based on your requirements.
When a job starts, the Job Engine distributes job segments (phases and tasks) across
the nodes of your cluster. One node acts as job coordinator and continually works with
the other nodes to load-balance the work. In this way, no one node is overburdened, and
system resources remain available for other administrator and system I/O activities not
originated from the Job Engine.
After completing a task, each node reports task status to the job coordinator. The node
acting as job coordinator saves this task status information to a checkpoint file.
Consequently, in the case of a power outage, or when paused, a job can always be
restarted from the point at which it was interrupted. This is important because some jobs
can take hours to run and can use considerable system resources.
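For example, you can list recent and running jobs and then view the report for a completed job by its ID (the ID 105 matches the example shown elsewhere in this guide):
isi job jobs list
isi job reports view 105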
Impact policy   Allowed to run                                        Resource consumption
LOW             Any time of day.                                      Low
MEDIUM          Any time of day.                                      Medium
HIGH            Any time of day.                                      High
OFF_HOURS       Outside of business hours. Business hours are         Low
                defined as 9 AM to 5 PM, Monday through Friday.
                OFF_HOURS is paused during business hours.
If you want a job to run with an impact policy other than the default, you can create a custom policy with new settings.
Jobs with a low impact policy have the least impact on available CPU and disk I/O
resources. Jobs with a high impact policy have a significantly higher impact. In all cases,
however, the Job Engine uses CPU and disk throttling algorithms to ensure that tasks that
you initiate manually, and other I/O tasks not related to the Job Engine, receive a higher
priority.
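For example, a sketch of creating such a custom policy with the isi job policies create command, documented later in this chapter; the policy name and interval are illustrative:
isi job policies create MY_WEEKEND --impact medium --begin "Saturday 00:00" --end "Sunday 23:59"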
Job priorities
Job priorities determine which job takes precedence when more than three jobs of
different exclusion sets attempt to run simultaneously. The Job Engine assigns a priority
value between 1 and 10 to every job, with 1 being the most important and 10 being the
least important.
The maximum number of jobs that can run simultaneously is three. If a fourth job with a
higher priority is started, either manually or through a system event, the Job Engine
pauses one of the lower-priority jobs that is currently running. The Job Engine places the
paused job into a priority queue, and automatically resumes the paused job when one of
the other jobs is completed.
If two jobs of the same priority level are scheduled to run simultaneously, and two other
higher priority jobs are already running, the job that is placed into the queue first is run
first.
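To see which jobs are currently paused because of priority contention, you can filter the job list by state, for example:
isi job jobs list --state paused_priority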
Start a job
Although OneFS runs several critical system maintenance jobs automatically when
necessary, you can also manually start any job at any time.
The Collect job, used here as an example, reclaims free space that previously could not
be freed because the node or drive was unavailable.
Procedure
1. Run the isi job jobs start command.
The following command runs the Collect job with a stronger impact policy and a
higher priority.
isi job jobs start Collect --policy MEDIUM --priority 2
Results
When the job starts, a message such as Started job [7] appears. In this example, 7
is the job ID number, which you can use to run other commands on the job.
Pause a job
To free up system resources, you can pause a job temporarily.
Before you begin
To pause a job, you need to know the job ID number. If you are unsure of the job ID
number, you can use the isi job jobs list command to see a list of running jobs.
Procedure
1. Run the isi job jobs pause command.
The following command pauses a job with an ID of 7.
isi job jobs pause 7
If there is only one instance of a job type currently active, you can specify the job type
instead of the job ID.
isi job jobs pause Collect
In all instructions that include the isi job jobs command, you can omit the jobs
entry.
isi job pause Collect
Modify a job
You can change the priority and impact policy of an active, waiting, or paused job.
Before you begin
To modify a job, you need to know the job ID number. If you are unsure of the job ID
number, you can use the isi job jobs list command to see a list of running jobs.
When you modify a job, only the current instance of the job runs with the updated
settings. The next instance of the job returns to the default settings for that job type.
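If you want new values to apply to future instances as well, you can instead change the defaults for the job type; a sketch using the isi job types modify command described later in this chapter (values illustrative):
isi job types modify collect --priority 3 --policy medium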
Procedure
1. Run the isi job jobs modify command.
The following command updates the priority and impact policy of an active job (job ID
number 7).
isi job jobs modify 7 --priority 3 --policy medium
If there is only one instance of a job type currently active, you can specify the job type
instead of the job ID.
isi job jobs modify Collect --priority 3 --policy medium
Resume a job
You can resume a paused job.
Before you begin
To resume a job, you need to know the job ID number. If you are unsure of the job ID
number, you can use the isi job jobs list command.
Procedure
1. Run the isi job jobs resume command.
The following command resumes a job with the ID number 7.
isi job jobs resume 7
If there is only one instance of a job type currently active, you can specify the job type
instead of the job ID.
isi job jobs resume Collect
Cancel a job
If you want to free up system resources, or for any reason, you can cancel a running,
paused, or waiting job.
Before you begin
To cancel a job, you need to know the job ID number. If you are unsure of the job ID
number, you can use the isi job jobs list command.
Procedure
1. Run the isi job jobs cancel command.
The following command cancels a job with the ID number 7.
isi job jobs cancel 7
If there is only one instance of a job type currently active, you can specify the job type
instead of the job ID.
isi job jobs cancel Collect
When you run this command, the system prompts you to confirm the change. Type
yes or no, and then press ENTER.
Results
All subsequent iterations of the MediaScan job type run with the new settings. If a
MediaScan job is in progress, it continues to use the old settings.
2. View a list of available impact policies to see if your custom policy was created
successfully.
The following command displays a list of impact policies.
isi job policies list
Procedure
1. Run the isi job policies view command.
The following command displays the impact policy settings of the custom impact
policy MY_POLICY.
isi job policies view MY_POLICY
2. Run the isi job policies modify command to establish new impact level and
interval settings for the custom policy.
The following command defines the new impact level and interval of a custom policy
named MY_POLICY.
isi job policies modify MY_POLICY --impact high --begin
'Saturday 09:00' --end 'Sunday 11:59'
3. Verify that the custom policy has the settings that you intended.
The following command displays the current settings for the custom policy.
isi job policies view MY_POLICY
OneFS displays a message asking you to confirm the deletion of your custom policy.
2. Type yes and press ENTER.
The following command displays the report of a Collect job with an ID of 857:
isi job reports view 857
Options
--job-type <string>
Displays all events of all instances of a specific job type (for example, SmartPools).
--job-id <integer>
Displays all events of a specific job instance.
--begin <timestamp>
Specifies the beginning of the time period for which job events should be listed. For
example: --begin "2013-09-17T00:00". This means that job events beginning at
the first moment of September 17, 2013 should be listed.
--end <timestamp>
Specifies the end of the time period for job events to be listed. For example, --end
"2013-09-17T23:59" means that job events right up to the last minute of
September 17, 2013 should be listed.
--state {failed | running | cancelled_user | succeeded |
paused_user | unknown | paused_priority | cancelled_system |
paused_policy | paused_system}
Specifies that events of the given state or states should be listed.
{--limit | -l} <integer>
Displays no more than the specified number of job events. If no timestamp
parameters are specified, the most recent job events of the specified number are
listed.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information about job events.
Examples
The following command lists all FSAnalyze events that happened in the month of
September.
isi job events list --job-type fsanalyze --begin "2013-09-01" --end
"2013-09-30"
The following command lists all the job events that happened on a specific day.
isi job events list --begin "2013-09-17T00:00" --end
"2013-09-17T23:59"
Options
<job>
Specifies the job to cancel. You can specify the job by job ID or job type. Specify a job
type only if one instance of that job type is active.
Examples
The following command cancels an active MultiScan job.
isi job jobs cancel multiscan
In all instructions that include the isi job jobs command, you can omit the jobs
entry.
isi job cancel 14
Options
--state {running | paused_user | paused_priority | paused_policy |
paused_system}
Controls which jobs are listed according to status.
{--limit | -l} <integer>
Displays no more than the specified number of items. If no other parameters are
specified, displays the most recently activated jobs up to the specified number.
--sort {id | type | state | impact | policy | priority | start_time |
running_time}
Sorts the output by the specified attribute.
--descending
Sorts the output in descending order of activation time.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header}
Displays table and CSV output without headers.
{--no-footer}
Displays table output without footers.
{--verbose}
Displays more detailed information about active jobs.
Examples
The following example lists jobs that have been manually paused.
isi job jobs list --state paused_user
23  SmartPools  Paused by user  Low  6  1/8  40s
----------------------------------------------------------------
Total: 2
The following example outputs a CSV-formatted list of jobs to a file in the /ifs/data
path.
isi job jobs list --format csv > /ifs/data/joblist.csv
In all instructions that include the isi job jobs command, you can omit the jobs
entry.
isi job list --format csv > /ifs/data/joblist.csv
Options
<job>
Specifies the job ID or job type to modify. If you specify job type (for example,
FlexProtect), only one instance of that type can be active.
{--priority | -p} <integer>
Sets the priority level for the specified job.
{--policy | -o} <string>
Sets the impact policy for the specified job.
Examples
The following command changes the impact policy of an active MultiScan job. This
command example, which specifies the job type, works only when a single instance of
MultiScan is active.
isi job jobs modify multiscan --policy high
If more than one instance of a job type is active, you can specify the job ID number
instead of job type. The following command changes the priority of an active job with an
ID of 7.
isi job jobs modify 7 --priority 2
In all instructions that include the isi job jobs command, you can omit the jobs
entry.
isi job modify 7 --priority 2
Options
<job>
Specifies the job to pause. You can specify the job by job type or job ID. If you use job
type, only one instance of the job type can be active.
Examples
The following command pauses an active AutoBalance job.
isi job jobs pause autobalance
In all instructions that include the isi job jobs command, you can omit the jobs
entry.
isi job pause 18
Options
<job>
Specifies the job to resume. You can specify the job by job type or job ID. If you use
the job type parameter, only one instance of this job type can be in the Job Engine
queue.
Examples
The following command resumes a paused AutoBalance job.
isi job jobs resume autobalance
In all instructions that include the isi job jobs command, you can omit the jobs
entry.
isi job resume 16
Options
<type>
Specifies the type of job to add to the job queue (for example, MediaScan).
{--priority} <integer>
Sets the priority level for the specified job, with 1 being the highest priority and 10
being the lowest.
{--policy} <string>
Sets the impact policy for the specified job.
{--no-dup}
Disallows duplicate jobs. If an instance of the specified job is already in the queue,
the new job does not start.
--paths <path>
Specifies the path of the job, which must be within /ifs. This option is valid only for
the TreeDelete and PermissionRepair jobs.
--delete
Valid for the DomainMark job only. Deletes the domain mark.
--root <path>
Valid for the DomainMark job only. Specifies the root path location for the
DomainMark job.
Examples
The following command starts a MultiScan job with a priority of 8 and a high impact
policy.
isi job jobs start multiscan --priority 8 --policy high
The following command starts a TreeDelete job with a priority of 10 and a low impact
policy that deletes the /ifs/data/old directory.
isi job jobs start treedelete --path /ifs/data/old --priority 10 --policy low
In all instructions that include the isi job jobs command, you can omit the jobs
entry.
isi job start autobalance
Options
<job>
Specifies the job to view. You can specify the job by job type or job ID. If you specify a
job type, only one instance of this job can be active.
Examples
The following command displays information about an AutoBalance job with a job ID of
15.
isi job jobs view 15
In all instructions that include the isi job jobs command, you can omit the jobs
entry.
isi job view 15
Options
<name>
Specifies a name for the new impact policy. The following names are reserved and
cannot be used: LOW, MEDIUM, HIGH, and OFF_HOURS.
--description <string>
Describes the job policy.
--impact {Low | Medium | High | Paused}
Specifies an impact level for the policy: Low, Medium, High, or Paused. You can
specify an --impact parameter for each impact interval that you define.
--begin <interval_time>
Specifies the beginning time, on a 24-hour clock, of the period during which a job can
run. For example: --begin "Friday 20:00".
--end <interval_time>
Specifies the ending time, on a 24-hour clock, of the period during which a job can
run. For example: --end "Sunday 11:59".
Examples
The following command creates a new impact policy named HIGH-WKEND.
isi job policies create HIGH-WKEND --impact high --begin "Saturday
00:01" --end "Sunday 23:59"
The following command creates a more complex impact policy named HI-MED-WKEND.
This policy includes multiple impact levels and time intervals. At the end of the specified
intervals, a job running with this policy would automatically return to LOW impact.
isi job policies create HI-MED-WKEND --description "High to medium
impact when run on the weekend" --impact high --begin "Friday 20:00"
--end "Monday 03:00" --impact medium --begin "Monday 03:01" --end
"Monday 08:00"
Options
<id>
Specifies the name of the impact policy to delete. If you are unsure of the name, you
can use the isi job policies list command.
--force
Forces deletion of the impact policy without the system asking for confirmation.
Examples
The following command deletes a custom impact policy named HIGH-MED.
isi job policies delete HIGH-MED
When you press ENTER, OneFS displays a confirmation message: Are you sure you
want to delete the policy HIGH-MED? (yes/[no]):
Type yes, and then press ENTER.
The following command deletes a custom impact policy named HIGH-WKEND without the
confirmation message being displayed.
isi job policies delete HIGH-WKEND --force
Options
{--limit | -l} <integer>
Displays no more than the specified number of items.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Examples
The following command displays a list of available impact policies.
isi job policies list
The system displays verbose output in a list format as shown in the following partial
example:
ID: HIGH
Description: Isilon template: high impact at all times
System: True
Impact Intervals
Impact : High
Begin : Sunday 00:00
End : Sunday 00:00
----------------------------------------------------------
ID: LOW
Description: Isilon template: low impact at all times
System: True
Impact Intervals
Impact : Low
Begin : Sunday 00:00
End : Sunday 00:00
----------------------------------------------------------
Options
<ID>
Specifies the name of the policy to modify.
--description <string>
Specifies a description for the policy. Replaces an older description if one was in
place.
--impact {Low | Medium | High | Paused}
Specifies an impact level for the policy: Low, Medium, High, or Paused. Specify an
--impact parameter for each additional impact interval that you define.
--begin <interval_time>
Specifies the beginning time, on a 24-hour clock, of the period during which a job can
run. For example: --begin "Friday 20:00".
--end <interval_time>
Specifies the ending time, on a 24-hour clock, of the period during which a job can
run. For example: --end "Sunday 11:59".
--reset-intervals
Clears all job policy intervals and restores the defaults.
Examples
The following command clears the custom intervals from a custom policy named
MY_POLICY as the first step to adding new intervals.
isi job policies modify MY_POLICY --reset-intervals
Options
<id>
Specifies the job policy to display by policy ID.
Examples
The following command displays the details for the default job policy, HIGH.
isi job policies view HIGH
Options
--job-type <string>
Options
<id>
Specifies the job ID for the reports you want to view.
Examples
The following command requests reports for an FSAnalyze job with an ID of 7.
isi job reports view 7
The system displays output similar to the following example. Note that when a job has
more than one phase, a report for each phase is provided.
FSAnalyze[7] phase 1 (2013-09-19T22:01:58)
------------------------------------------
FSA JOB QUERY PHASE
Elapsed time:          83 seconds
LINS traversed:        433
Errors:                0
CPU usage:             max 30% (dev 2), min 0% (dev 1), avg 10%
Virtual memory size:   max 111772K (dev 1), min 104444K (dev 2), avg 109423K
Resident memory size:  max 14348K (dev 1), min 9804K (dev 3), avg 12706K
Read:                  9 ops, 73728 bytes (0.1M)
Write:                 3035 ops, 24517120 bytes (23.4M)

FSAnalyze[7] phase 2 (2013-09-19T22:02:47)
------------------------------------------
FSA JOB MERGE PHASE
Elapsed time:          47 seconds
Errors:                0
CPU usage:             max 33% (dev 1), min 0% (dev 1), avg 8%
Virtual memory size:   max 113052K (dev 1), min 110748K (dev 2), avg 111558K
Resident memory size:  max 16412K (dev 1), min 13424K (dev 3), avg 14268K
Read:                  2 ops, 16384 bytes (0.0M)
Write:                 2157 ops, 16871424 bytes (16.1M)
Options
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information about active jobs, including node activity, CPU
and memory usage, and number of workers (processes) involved.
Examples
The following command requests a statistical summary for active jobs.
isi job statistics list
The following command requests more detailed statistics about active jobs.
isi job statistics list --verbose
The system displays output similar to the following example. In the example, PID is the
process ID and CPU indicates CPU utilization by the job. Also indicated are how many
worker threads exist for the job on each node and what the sleep-to-work (STW) ratio is
for each thread. The statistics represent how the system throttles the job based on
impact policies.
Job ID: 16
Phase:  1
Nodes
  Node :    1
  PID :     30977
  CPU :     0.00% (0.00% min, 5.91% max, 2.84% avg)
  Memory
    Virtual :  102.25M (102.12M min, 102.25M max, 102.23M avg)
    Physical : 9.99M (9.93M min, 9.99M max, 9.98M avg)
  I/O
    Read :  5637 ops, 62.23M
    Write : 3601 ops, 23.11M
  Workers : 2 (0.60 STW avg.)

  Node :    2
  PID :     27704
  CPU :     0.00% (0.00% min, 5.91% max, 2.18% avg)
  Memory
    Virtual :  102.25M (102.00M min, 102.25M max, 102.22M avg)
    Physical : 9.57M (9.46M min, 9.57M max, 9.56M avg)
  I/O
    Read :  4814 ops, 53.30M
    Write : 1658 ops, 7.94M
Options
--job-id <integer>
Displays statistics for a specific job ID.
--devid <integer>
Displays statistics for a specific node (device) in the cluster.
{--verbose | -v}
Displays more detailed statistics for an active job or jobs.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
Examples
The following command requests statistics for an AutoBalance job with an ID of 6.
isi job statistics view --job-id 6
The system displays output similar to the following example. In the example, PID is the
process ID, and CPU indicates CPU utilization by the job. Also indicated are how many
worker threads exist for the job on each node and what the sleep-to-work (STW) ratio is
for each thread. The statistics represent how the system throttles the job based on
impact policies.
Job ID: 6
Phase: 2
Nodes
  Node :    1
  PID :     17006
  CPU :     0.00% (0.00% min, 7.91% max, 4.50% avg)
  Memory
    Virtual :  104.62M (104.37M min, 104.62M max, 104.59M avg)
    Physical : 10.08M (10.01M min, 10.11M max, 10.09M avg)
Options
--all
Displays all job types available in the Job Engine.
--sort {id | policy | exclusion_set | priority}
Sorts the output by the specified parameter.
--descending
In conjunction with the --sort option, specifies that output is sorted in descending
order. By default, output is sorted in ascending order.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated values (CSV), or list format.
{--no-header | -a}
Displays table and CSV output without headers.
{--no-footer | -z}
Displays table output without footers.
{--verbose | -v}
Displays more detailed information about a specific job type or all job types.
Examples
The following command provides detailed information about job types.
isi job types list --sort id --verbose
Options
{--verbose | -v}
Displays more detailed job status information, including information about the
cluster and nodes.
Examples
The following command provides basic job status.
isi job status
ID   Type            State             Time
--------------------------------------------------------
1    MultiScan       System Cancelled  2013-09-24T08:23:44
3    MultiScan       Succeeded         2013-09-24T08:26:37
2    SetProtectPlus  Succeeded         2013-09-24T08:27:16
4    FlexProtect     Succeeded         2013-09-24T09:14:27
--------------------------------------------------------
Total: 4
When you include the --verbose option, the system displays additional output that includes cluster and node information.
The job engine is running.
Coordinator: 1
Connected: True
Disconnected Nodes:
Down or Read-Only Nodes: False
Statistics Ready: True
Cluster Is Degraded: False
Run Jobs When Degraded: False
No running or queued jobs.
Recent finished jobs:
ID   Type            State             Time
--------------------------------------------------------
1    MultiScan       System Cancelled  2013-09-24T08:23:44
3    MultiScan       Succeeded         2013-09-24T08:26:37
2    SetProtectPlus  Succeeded         2013-09-24T08:27:16
4    FlexProtect     Succeeded         2013-09-24T09:14:27
--------------------------------------------------------
Total: 4
Options
<id>
Specifies the job type to modify.
--enabled <boolean>
Specifies whether the job type is enabled or disabled.
--policy <string>
Sets the policy for the specified job type.
isi job types modify
819
System jobs
--schedule <string>
Sets a recurring date pattern to run the specified job type.
--priority <integer>
Sets the priority level for the specified job type. Job types have a priority value
between 1 and 10, with 1 being the highest priority and 10 being the lowest.
--clear-schedule
Clears any schedule associated with the specified job type.
--force
Forces the modification without a confirmation message.
Examples
The following command adds a recurring schedule to the MultiScan job type.
isi job types modify multiscan --schedule "Every Friday at 22:00"
When you run this command, the system prompts you to confirm the change. Type yes or
no, and then press ENTER.
Options
<id>
Specifies the job type to view.
Examples
The following command displays the parameters of the job type MultiScan.
isi job types view multiscan
CHAPTER 22
Networking
Networking overview
After you determine the topology of your network, you can set up and manage your
internal and external networks.
There are two types of networks associated with an EMC Isilon cluster:
Internal
Nodes communicate with each other using a high speed low latency InfiniBand
network. You can optionally configure a second InfiniBand network as a failover for
redundancy.
External
Clients connect to the cluster through the external network with Ethernet. The Isilon
cluster supports standard network communication protocols, including NFS, SMB,
HTTP, and FTP. The cluster includes various external Ethernet connections, providing
flexibility for a wide variety of network configurations. External network speeds vary
by product.
During the initial cluster configuration, OneFS performs the following actions:
l Creates a default external network subnet called subnet0, with the specified
netmask, gateway, and SmartConnect service address.
l Creates a default IP address pool called pool0 with the specified IP address range,
the SmartConnect zone name, and the external interface of the first node in the
cluster as the only member.
l Creates a default network provisioning rule called rule0, which automatically assigns
the first external interface for all newly added nodes to pool0.
l Adds pool0 to subnet0 and configures pool0 to use the virtual IP of subnet0 as its
SmartConnect service address.
l Sets the global, outbound DNS settings to the domain name server list and DNS
search list, if provided.
Once the initial external network has been established, you can configure the following
information about your external network:
l Netmask
l IP address range
l Gateway
You can make modifications to the external network through the web administration
interface and the command-line interface.
IP address pools
You can partition EMC Isilon cluster nodes and external network interfaces into logical IP
address pools. IP address pools are also utilized when configuring SmartConnect zones
and IP failover support for protocols such as NFS. Multiple pools for a single subnet are
available only if you activate a SmartConnect Advanced license.
IP address pools:
l The IP address pool of a subnet consists of one or more IP address ranges and a set of
cluster interfaces. All IP address ranges in a pool must be unique.
A default IP address pool is configured during the initial cluster setup through the
command-line configuration wizard. You can modify the default IP address pool at any
time. You can also add, remove, or modify additional IP address pools.
If you add external network subnets to your cluster through the subnet wizard, you must
specify the IP address pools that belong to the subnet.
IP address pools are allocated to external network interfaces either dynamically or
statically. The static allocation method assigns one IP address per pool interface. The IP
addresses remain assigned, regardless of that interface's status, but the method does
not guarantee that all IP addresses are assigned. The dynamic allocation method
distributes all pool IP addresses, and the IP address can be moved depending on the
interface's status and connection policy settings.
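To check which allocation method each pool currently uses, you can list the pools; the Alloc column in the output (shown later in this chapter) reports Static or Dynamic:
isi networks list pools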
IPv6 support
You can configure dual stack support for IPv6.
With dual-stack support in OneFS, you can configure both IPv4 and IPv6 addresses.
However, configuring an EMC Isilon cluster to use IPv6 exclusively is not supported. When
you set up the cluster, the initial subnet must consist of IPv4 addresses.
The following table describes important distinctions between IPv4 and IPv6.
IPv4               IPv6
32-bit addresses   128-bit addresses
Subnet mask        Prefix length
SmartConnect module
SmartConnect is a module that specifies how the DNS server on the EMC Isilon cluster
handles connection requests from clients and the methods used to assign IP addresses
to network interfaces.
Settings and policies configured for SmartConnect are applied per IP address pool. You
can configure basic and advanced SmartConnect settings.
SmartConnect Basic
SmartConnect Basic is included with OneFS as a standard feature and does not require a
license.
SmartConnect Basic supports the following settings:
l You may only assign one IP address pool per external network subnet.
SmartConnect Advanced
SmartConnect Advanced extends the settings available from SmartConnect Basic. It
requires an active license.
SmartConnect Advanced allows you to specify the following IP address pool configuration
options:
l You can define an IP address failover policy for the IP address pool.
l You can define an IP address rebalance policy for the IP address pool.
Connection balancing
The connection balancing policy determines how the DNS server handles client
connections to the EMC Isilon cluster.
You can specify one of the following balancing methods:
Round robin
Selects the next available node on a rotating basis. This is the default method.
Without a SmartConnect license for advanced settings, this is the only method
available for load balancing.
Connection count
Determines the number of open TCP connections on each available node and selects
the node with the fewest client connections.
Network throughput
Determines the average throughput on each available node and selects the node
with the lowest network interface load.
CPU usage
Determines the average CPU utilization on each available node and selects the node
with lightest processor usage.
IP address allocation
The IP address allocation policy ensures that all of the IP addresses in the pool are
assigned to an available network interface.
You can specify whether to use static or dynamic allocation.
Static
Assigns one IP address to each network interface added to the IP address pool, but
does not guarantee that all IP addresses are assigned.
Once assigned, the network interface keeps the IP address indefinitely, even if the
network interface becomes unavailable. To release the IP address, remove the
network interface from the pool or remove it from the cluster.
Without a license for SmartConnect Advanced, static is the only method available for
IP address allocation.
Dynamic
Assigns IP addresses to each network interface added to the IP address pool until all
IP addresses are assigned. This guarantees a response when clients connect to any
IP address in the pool.
If a network interface becomes unavailable, its IP addresses are automatically
moved to other available network interfaces in the pool as determined by the IP
address failover policy.
This method is only available with a license for SmartConnect Advanced.
Suggested IP address allocation method by file sharing protocol:
Static:  SMB, NFSv4, HTTP, FTP, sFTP, FTPS, HDFS, SyncIQ
Dynamic: NFSv2, NFSv3
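For example, a sketch of switching a pool that serves NFSv3 clients to dynamic allocation with the isi networks modify pool command described later in this chapter; the pool name is illustrative, and dynamic allocation requires a SmartConnect Advanced license:
isi networks modify pool --name subnet10:pool5 --dynamic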
IP address failover
The IP address failover policy specifies how to handle the IP addresses of network
interfaces that become unavailable.
To define an IP address failover policy, you must have a license for SmartConnect
Advanced, and the IP address allocation policy must be set to dynamic. Dynamic IP
allocation ensures that all of the IP addresses in the pool are assigned to available
network interfaces.
When a network interface becomes unavailable, the IP addresses that were assigned to it
are redistributed to available network interfaces according to the IP address failover
policy. Subsequent client connections are directed to the new network interfaces.
You can select one of the following connection balancing methods to determine how
the IP address failover policy selects which network interface receives a redistributed IP
address:
l Round robin
l Connection count
l Network throughput
l CPU usage
IP address rebalancing
The IP address rebalance policy specifies when to redistribute IP addresses if one or more
previously unavailable network interfaces become available again.
To define an IP address rebalance policy, you must have a license for SmartConnect
Advanced, and the IP address allocation policy must be set to dynamic. Dynamic IP
address allocation ensures that all of the IP addresses in the pool are assigned to
available network interfaces.
You can set rebalancing to occur manually or automatically:
Manual
Does not redistribute IP addresses until you manually issue a rebalance command
through the command-line interface.
Upon rebalancing, IP addresses will be redistributed according to the connection
balancing method specified by the IP address failover policy defined for the IP
address pool.
Automatic
Automatically redistributes IP addresses according to the connection balancing
method specified by the IP address failover policy defined for the IP address pool.
Automatic rebalance may also be triggered by changes to cluster nodes, network
interfaces, or the configuration of the external network.
Note
Rebalancing can disrupt client connections. Ensure the client workflow on the IP
address pool is appropriate for automatic rebalancing.
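With the manual setting, you trigger the redistribution yourself; for example, the --sc-rebalance-all option of the isi networks command, described later in this chapter, rebalances all dynamic IP address pools on the cluster:
isi networks --sc-rebalance-all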
SmartConnect requires that you add a new name server (NS) record to the existing
authoritative DNS zone that contains the cluster, and you must provide the fully qualified
domain name (FQDN) of the SmartConnect zone.
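For example, a minimal BIND-style delegation sketch; the zone name, host name, and address are illustrative only and must match your own DNS zone and SmartConnect service IP:
; Delegate the SmartConnect zone to the cluster's SmartConnect service IP
cluster.example.com.       IN  NS  sc-service.example.com.
sc-service.example.com.    IN  A   192.168.205.2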
NIC aggregation
Network interface card (NIC) aggregation, also known as link aggregation, is optional, and
enables you to combine the bandwidth of a node's physical network interface cards into
a single logical connection. NIC aggregation provides improved network throughput.
Note
l Some NICs may allow aggregation of ports only on the same network card.
l For LACP and FEC aggregation modes, the switch must support IEEE 802.3ad link
aggregation. Since the trunks on the network switch must also be configured, the
node must be connected with the correct ports on the switch.
Routing options
By default, outgoing client traffic on the EMC Isilon cluster is destination-based; traffic is
routed to a particular gateway based on where the traffic is going. OneFS supports source-based
routing and static routes; these options allow for more granular control of the
direction of outgoing client traffic.
Source-based routing
Source-based routing selects which gateway to direct outgoing client traffic through
based on the source IP address in each packet header.
In the following example, you enable source-based routing on an Isilon cluster that is
connected to SubnetA and SubnetB. Each subnet is configured with a SmartConnect zone
and a gateway, also labeled A and B. When a client on SubnetA makes a request to
SmartConnect ZoneB, the response originates from ZoneB. This results in a ZoneB
address as the source IP in the packet header, and the response is routed through
GatewayB. Without source-based routing, the default route is destination-based, so the
response is routed through GatewayA.
In another example, a client on SubnetC, which is not connected to the Isilon cluster,
makes a request to SmartConnect ZoneA and ZoneB. The response from ZoneA is routed
through GatewayA, and the response from ZoneB is routed through GatewayB. In other
words, the traffic is split between gateways. Without source-based routing, both
responses are routed through the same gateway.
When enabled, source-based routing automatically scans your network configuration to
create client traffic rules. If you make modifications to your network configuration, such
as changing the IP address of a gateway server, source-based routing adjusts the rules.
Source-based routing is applied across the entire cluster and only supports the IPv4
protocol.
Enabling or disabling source-based routing goes into effect immediately. Packets in
transit continue on their original courses, and subsequent traffic is routed based on the
status change. Transactions composed of multiple packets might be disrupted or delayed
if the status of source-based routing changes during transmission.
Source-based routing can conflict with static routes. If a routing conflict occurs, source-based
routing rules are prioritized over the static route.
You might enable source-based routing if you have a large network with a complex
topology. For example, if your network is a multi-tenant environment with several
gateways, traffic is more efficiently distributed with source-based routing.
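Source-based routing is enabled for the entire cluster from the command-line interface; the command, shown later in this chapter, is:
isi networks sbr enable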
Static routing
A static route directs outgoing client traffic to a specified gateway based on the IP
address the client is connected through.
You configure static routes by IP address pool, and each route applies to all network
interfaces that are members of the IP address pool. Static routes only support the IPv4
protocol.
You might configure static routing in order to connect to networks that are unavailable
through the default routes or if you have a small network that only requires one or two
routes.
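For example, a sketch of adding a static route to a pool with the --add-static-routes option described later in this chapter; the pool name, network, and gateway are illustrative:
isi networks modify pool subnet0:pool0 --add-static-routes=10.20.30.0/24-192.168.100.1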
Note
If you have upgraded from a version earlier than OneFS 7.0, existing static routes that
were added through rc scripts will no longer work and must be re-created by running the
isi networks modify pool command with the --add-static-routes option.
VLANs
Virtual LAN (VLAN) tagging is an optional setting that enables an EMC Isilon cluster to
participate in multiple virtual networks.
You can partition a physical network into multiple broadcast domains, or virtual local
area networks (VLANs). You can enable a cluster to participate in a VLAN which allows
multiple cluster subnet support without multiple network switches; one physical switch
enables multiple virtual subnets.
VLAN tagging inserts an ID into packet headers. The switch refers to the ID to identify from
which VLAN the packet originated and to which network interface a packet should be
sent.
The following command deletes an existing IP address range from the int-a internal
network:
deliprange int-a 192.168.206.15-192.168.206.20
3. Run the commit command to complete the configuration changes and exit isi
config.
3. Run the commit command to complete the configuration changes and exit isi
config.
You must reboot the EMC Isilon cluster to apply modifications to internal network failover.
Procedure
1. Run the isi config command.
The command-line prompt changes to indicate that you are in the isi config
subsystem.
2. Set a netmask for the second interface by running the netmask command.
The following command changes the int-b internal network netmask:
netmask int-b 255.255.255.0
3. Set an IP address range for the second interface by running the iprange command.
The following command adds an IP range to the int-b internal network:
iprange int-b 192.168.206.21-192.168.206.30
4. Set an IP address range for the failover interface by running the iprange command.
The following command adds an IP range to the internal failover network:
iprange failover 192.168.206.31-192.168.206.40
6. Run the commit command to complete the configuration changes and exit isi
config.
7. Restart the cluster to apply netmask modifications.
You must reboot the cluster to apply modifications to internal network failover.
Procedure
1. Run the isi config command.
The command-line prompt changes to indicate that you are in the isi config
subsystem.
2. Disable the int-b interface by running the interface command.
The following command specifies the int-b interface and disables it:
interface int-b disable
3. Run the commit command to complete the configuration changes and exit isi
config.
4. Restart the cluster to apply failover modifications.
2. Specify a list of DNS search suffixes by running the isi networks --dns-search
command.
The following command specifies three DNS search suffixes:
isi networks --dns-search
storage.company.com,data.company.com,support.company.com
Create a subnet
You can add a subnet to the external network of a cluster.
When you create a subnet, a netmask is required. The first subnet you create must use
IPv4 protocol; however, subsequent subnets can use IPv6. Your Isilon cluster must
always have at least one IPv4 subnet.
Procedure
1. Run the isi networks create subnet command.
The following command creates a subnet named subnet10 with an associated IPv4
netmask:
isi networks create subnet --name subnet10 --netmask 255.255.255.0
Modify a subnet
You can modify a subnet on the external network.
Note
Modifying an external network subnet that is in use can disable access to the cluster.
Procedure
1. Optional: To identify the name of the external subnet you want to modify, run the
following command:
isi networks list subnets
2. Modify the external subnet by running the isi networks modify subnet
command.
The following command changes the name of the subnet:
isi networks modify subnet --name subnet10 --new-name subnet20
The following command sets MTU to 1500, specifies the gateway address as
198.162.205.10, and sets the gateway priority to 1 on subnet10:
isi networks modify subnet --name subnet10 --mtu 1500 --gateway
198.162.205.10 --gateway-prio 1
Delete a subnet
You can delete an external network subnet that you no longer need.
Note
Deleting an external network subnet also deletes any associated IP address pools.
Deleting a subnet that is in use can prevent access to the cluster.
Procedure
1. Optional: To identify the name of the subnet you want to delete, run the following
command:
isi networks list subnets
View subnets
You can view all subnets on the external network.
You can also view subnets by the following criteria:
l Subnet name
Procedure
1. Run the isi networks list subnets command.
The system displays output similar to the following example:
Name      Subnet            Gateway:Prio     SC Service      Pools
--------- ----------------- ---------------- --------------- -----
subnet0   192.168.205.0/24  192.168.205.2:1  0.0.0.0         1
subnet10  192.158.0.0/16    192.158.0.2:2    192.158.205.40  1
subnet5   192.178.128.0/22  192.178.128.2:3  0.0.0.0         1
The following command displays subnets with a pool containing 192.168.205.5 in its
range:
isi networks list subnets --has-addr 192.168.205.5
2. Enable or disable VLAN tagging on the external subnet by running the isi
networks modify subnet command.
The following command enables VLAN tagging on subnet10 and sets the required
VLAN ID to 256:
isi networks modify subnet --name subnet10 --enable-vlan --vlan-id
256
2. Add or remove DSR addresses on the subnet by running the isi networks
modify subnet command.
The following command adds a DSR address to subnet10:
isi networks modify subnet --name subnet10 --add-dsr-addrs
198.162.205.30
If you have not activated a SmartConnect Advanced license, the cluster is allowed one IP
address pool per subnet. If you activate a SmartConnect Advanced license, the cluster is
allowed unlimited IP address pools per subnet.
When you create an address pool, you must assign it to a subnet.
Procedure
1. Run the isi networks create pool command.
The following command creates a pool named pool5 and assigns it to subnet10:
isi networks create pool --name subnet10:pool5
2. Modify the IP address pool by running the isi networks modify pool
command.
The following command changes the name of the pool:
isi networks modify pool --name subnet0:pool5 --new-name pool15
Pool name
Assigned subnet
Rule name
IP address
Procedure
1. Run the isi networks list pools command.
The system displays output similar to the following example:
Subnet    Pool   SmartConnect Zone    Ranges                  Alloc
-------------------------------------------------------------------
subnet0   pool2  data.company.com     192.168.205.5-192.1...  Static
subnet10  pool5                       192.158.0.5-192.158...  Static
subnet5   pool8  support.company.com  192.178.128.0-192.1...  Static
The following command displays pools that have been assigned the first external
network interface of a node:
isi networks list pools --iface ext-1
2. Configure a pool's IP address range by running the isi networks modify pool
command.
The following command adds an address range to pool5:
isi networks modify pool --name subnet10:pool5 --add-ranges
192.168.205.128-192.168.205.254
Dynamic
Select this IP allocation method to ensure that all IP addresses in the IP address pool
are assigned to network interfaces, which allows clients to connect to any IP
addresses in the pool and be guaranteed a response. If a node or a network interface
becomes unavailable, the IP addresses are automatically moved to other available
network interfaces in the pool.
2. Specify a manual or automatic rebalance policy for a pool by running the isi
networks modify pool command.
a. Include the --manual-failback option to set a manual policy, as in the sketch below.
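A sketch of the command for this step; the pool name is illustrative, and the corresponding automatic policy uses the --auto-failback option instead:
isi networks modify pool --name subnet10:pool5 --manual-failback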
l Round robin
l Connection count
l Network throughput
l CPU usage
Procedure
1. Optional: To identify the name of the IP address pool you want to modify for an IP
failover policy, run the following command:
isi networks list pools
2. Specify a failover policy for a pool by running the isi networks modify pool
command.
The following command specifies IP redistribution by CPU usage upon failback for
pool5:
isi networks modify pool --name subnet10:pool5 --failover-policy
cpu-usage
Basic
If you have not activated a SmartConnect Advanced license, SmartConnect operates
in Basic mode. In Basic mode, client connection balancing is limited to round robin.
Basic mode is limited to static IP address allocation and to one IP address pool per
external network subnet. This mode is included with OneFS as a standard feature.
Advanced
If you activate a SmartConnect Advanced license, SmartConnect operates in
Advanced mode. Advanced mode enables client connection balancing based on
round robin, CPU utilization, connection counting, or network throughput. Advanced
mode supports IP failover and allows IP address pools to support multiple DNS
zones within a single subnet.
Note
SmartConnect requires that a new name server (NS) record is added to the existing
authoritative DNS zone containing the cluster.
2. Add or remove one or more zone aliases in a pool by running the isi networks
modify pool command.
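For example, a sketch of adding an alias with the --add-zone-aliases option documented later in this chapter; the alias name is illustrative:
isi networks modify pool --name subnet10:pool5 --add-zone-aliases alias.company.com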
l Round robin
Note
Round robin is the only connection balancing method available if you have not
activated a SmartConnect Advanced license.
l Connection count
l Network throughput
l CPU usage
Procedure
1. Optional: To identify the name of the IP address pool you want to modify for a
connection balancing policy, run the following command:
isi networks list pools
2. Specify the connection balancing policy for a pool by running the isi networks
modify pool command.
The following command specifies a round robin balancing policy for pool5:
isi networks modify pool --name subnet10:pool5 --connect-policy
round-robin
You must have at least one subnet configured with a SmartConnect service IP in order to
balance DNS requests.
A SmartConnect service IP address can handle DNS requests on behalf of any pool on the
Isilon cluster if it is specified as the pool's SmartConnect service subnet.
Procedure
1. Optional: To identify the name of the external subnet you want to modify for a
SmartConnect service IP address, run the following command:
isi networks list subnets
2. Specify the name of the SmartConnect service subnet that will answer a pool's DNS
requests by running the isi networks modify pool command.
The following command specifies that subnet20 is responsible for the SmartConnect
zone in pool5:
isi networks modify pool --name subnet10:pool5 --sc-subnet subnet20
The following command removes the first network interface on node 3 from pool5:
isi networks modify pool --name subnet10:pool5 --remove-ifaces
3:ext-1
l LACP
l Round robin
l Failover
l FEC
l Legacy
Note
You must enable NIC aggregation on the cluster before you can enable NIC aggregation
on the network switch. If the cluster is configured but the node is not, the cluster can
continue to communicate. If the node is configured but the cluster is not, the cluster
cannot communicate.
Procedure
1. Optional: To identify the name of the IP address pool you want to modify for NIC
aggregation, run the following command:
isi networks list pools
2. Configure an aggregated interface and specify a NIC aggregation method for a pool by
running the isi networks modify pool command.
The following command adds interfaces ext-1 and ext-2 on node 1 to pool5 and
aggregates them underneath ext-agg using the FEC driver:
isi networks modify pool --name subnet10:pool5 --iface 1:ext-agg
--aggregation-mode fec
LNI numbering corresponds to the physical positioning of the NIC ports as found on
the back of the node. LNI mappings are numbered from left to right.
Aggregated LNIs are listed in the order in which they are aggregated at the time they
are created.
LNI       NIC    Aggregated LNI
ext-1     em0    lagg0, fec0
ext-2     em1
ext-1     em2    lagg0, fec0
ext-2     em3    lagg1, fec1
ext-3     em0    lagg2, fec2
ext-4     em1
ext-1     em0    lagg0, fec0
ext-2     em1    lagg1, fec1
10gige-1  cxgb0
10gige-2  cxgb1
Node number
Inactive status
Procedure
1. Run the isi networks list interfaces command.
The system displays output similar to the following example:
Iface    Stat  Membership               Addresses
--------------------------------------------------------
1:ext-1  up    subnet0:pool2            192.168.205.5
1:int-a  up    int-a-subnet:int-a-pool  192.168.206.10
2:ext-1  up
2:int-a  up    int-a-subnet:int-a-pool  192.168.206.11
3:ext-1  up    subnet5:pool8            192.178.128.0
3:int-a  up    int-a-subnet:int-a-pool  192.168.206.12
2. Modify the network interface provisioning rule by running the isi networks
modify rule command.
The following command changes the name of rule3 to rule3accelerator:
isi networks modify rule --name subnet10:pool5:rule3 --new-name rule3accelerator
The following command changes rule3 so that it applies only to new storage node
types added to the cluster:
isi networks modify rule --name subnet10:pool5:rule3 --storage
Rule name
Assigned subnet
Assigned pool
Node type
Procedure
1. Run the isi networks list rules command:
The system displays output similar to the following example:
Name          Pool            Filter       Iface
--------------------------------------------------
rule1accel    subnet0:pool2   Accelerator  ext-1
rule3any      subnet10:pool5  Any          ext-1
rule4accel    subnet10:pool5  Accelerator  ext-1
rule2storage  subnet5:pool8   Storage      ext-1
The following command displays rules in pool5 that apply to new accelerator nodes:
isi networks list rules --pool subnet10:pool5 --accelerator
Source-based routing is enabled or disabled on the entire EMC Isilon cluster and
supports only the IPv4 protocol.
Enabling and disabling source-based routing is only supported through the command-line interface.
Procedure
1. Enable source-based routing by running the following command:
isi networks sbr enable
Configure a static route on a per-pool basis. Static routes support only the IPv4 protocol.
You can only add or remove a static route through the command-line interface.
Procedure
1. Optional: Identify the name of the IP address pool you want to modify for static routes
by running the following command:
isi networks list pools
2. Configure static routes on a pool by running the isi networks modify pool
command.
Specify the route in classless inter-domain routing (CIDR) notation format.
The following command adds a static route to pool5 and assigns the route to all
network interfaces that are members of the pool:
isi networks modify pool subnet10:pool5 --add-static-routes=192.168.205.128/24-192.168.205.2
Networking commands
You can view and configure settings for the internal and external networks on an EMC
Isilon cluster through the networking commands.
isi networks
Manages external network configuration settings.
Syntax
isi networks
[--dns-servers <ip-address-list>]
[--add-dns-servers <ip-address-list>]
[--remove-dns-servers <ip-address-list>]
[--dns-search <dns-domain-list>]
[--add-dns-search <dns-domain-list>]
[--remove-dns-search <dns-domain-list>]
[--dns-options <dns-option-list>]
[--add-dns-option <dns-option-list>]
[--remove-dns-option <dns-option-list>]
[--tcp-ports <number>]
[--add-tcp-port <number>]
[--remove-tcp-port <number>]
[--server-side-dns-search <boolean>]
[--sc-rebalance-all]
[--sc-rebalance-delay <number>]
Options
If no options are specified, displays the current settings for each option and lists all
subnets configured on the EMC Isilon cluster.
--dns-servers <ip-address-list>
Sets a list of DNS IP addresses. Nodes issue DNS requests to these IP addresses. The
list cannot contain more than three IP addresses. This option overwrites the current
list of DNS IP addresses.
--add-dns-servers <ip-address-list>
Adds one or more DNS IP addresses, separated by commas. The list cannot contain
more than three IP addresses.
--remove-dns-servers <ip-address-list>
Removes one or more DNS IP addresses, separated by commas.
--dns-search <dns-domain-list>
Sets the list of DNS search suffixes. Suffixes are appended to domain names that are
not fully qualified. The list cannot contain more than six suffixes. This option
overwrites the current list of DNS search suffixes.
Note
Do not begin suffixes with a leading dot; leading dots are automatically added.
--add-dns-search <dns-domain-list>
Adds one or more DNS search suffixes, separated by commas. The list cannot contain
more than six search suffixes.
--remove-dns-search <dns-domain-list>
Removes one or more DNS search suffixes, separated by commas.
--dns-options <dns-option-list>
Sets the DNS resolver options list. DNS resolver options are described in the /etc/resolv.conf file.
Note
Setting DNS resolver options is not recommended. Most clusters do not need DNS
resolver options and setting them may change how OneFS performs DNS lookups.
--add-dns-option <dns-option-list>
Adds DNS resolver options.
--remove-dns-option <dns-option-list>
Removes DNS resolver options.
--tcp-ports <number>
Sets one or more recognized client TCP ports. This option overwrites the current list of
TCP ports.
--add-tcp-port <number>
Adds one or more recognized client TCP ports, separated by commas.
--remove-tcp-port <number>
Removes one or more recognized client TCP ports, separated by commas.
--server-side-dns-search <boolean>
Specifies whether to append DNS search suffixes to a SmartConnect zone that does
not use a fully qualified domain name when attempting to map incoming DNS
requests. This option is enabled by default.
--sc-rebalance-all
Rebalances all dynamic IP address pools on the EMC Isilon cluster. This option
requires an active license for SmartConnect Advanced.
--sc-rebalance-delay <number>
Specifies a period of time (in seconds) that should pass after a qualifying event
before an automatic rebalance is performed. The default value is 0 seconds.
Examples
Run the isi networks command without any specified options to display the current
settings for each option and list all subnets configured on the cluster.
The system displays output similar to the following example:
Domain Name Server:    10.52.0.1,10.52.0.2
DNS Search List:       company.com,storage.company.com
DNS Resolver Opti...   N/A
Server-side DNS S...   Enabled
DNS Caching:           Enabled
Client TCP ports:      2049, 445, 20, 21, 80
Rebalance delay:       0
Subnet:                (192.168.205.0/24)
The following command sets 10.52.0.1 and 10.52.0.2 as the current DNS IP addresses:
isi networks --dns-servers=10.52.0.1,10.52.0.2
The following command sets 2049, 445, 20, 21, and 80 as recognized client TCP ports:
isi networks --tcp-ports=2049,445,20,21,80
[--dynamic]
[--static]
[--aggregation-mode <mode>]
[--add-static-routes <route>]...
[--remove-static-routes <route>]...
[--ttl <number>]
[--auto-unsuspend-delay <integer>]
[--zone <zone>]
[--add-zone-aliases <aliases>]...
[--remove-zone-aliases <aliases>]...
[--access-zone <zone>]
[--connect-policy <policy>]
[--failover-policy <policy>]
[--manual-failback]
[--auto-failback]
[--sc-suspend-node <node>]
[--sc-resume-node <node>]
[--verbose]
[--force]
Options
<name>
Specifies the name of the new pool that you want to create. The name includes the
name of the subnet and the name of the pool separated by a colon, for example,
subnet1:pool0. The pool name must be unique in the subnet.
Specify the name in the following format:
<subnet>:<pool>
--ranges <ip-address-range-list>...
Specifies one or more IP address ranges for the pool. IP addresses within these
ranges are assigned to the network interfaces that are members of the IP address
pool.
Specify the IP address range in the following format:
<low-ip-address>-<high-ip-address>
--ifaces <node-interface>...
Specifies which network interfaces should be members of the IP address pool. The
interface values specified through this option override any previously set values.
Specify network interfaces in the following format:
<node-number>:<interface>
To specify multiple nodes, separate each node ID with a comma. To specify a range of
nodes, separate the lower and upper node IDs with a dash. To specify multiple
network interfaces, separate each interface name with a comma.
The following example adds interfaces ext-1 and ext-2 on nodes 1, 2, 3 and 5 to the
IP address pool:
--ifaces 1-3,5:ext-1,ext-2
--sc-subnet <subnet>
Specifies the name of the service subnet that is responsible for handling DNS requests
for the SmartConnect zone.
--desc <string>
Specifies a description of the pool.
--dynamic
Specifies that all pool IP addresses must be assigned to a network interface at all
times. Allows multiple IP addresses to be assigned to an interface. If a network
interface becomes unavailable, this option ensures that the assigned IP addresses are
redistributed to another interface.
Note
--remove-static-routes <route>...
Removes a static route from the pool. Multiple routes can be specified in a comma-separated list.
Specify the route in the following CIDR notation format:
<network-address>/<subnet-mask>-<gateway-ip-address>
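For example, the following option value (with hypothetical addresses) removes a route to the 192.168.100.0/24 network through the gateway 192.168.8.1:
--remove-static-routes 192.168.100.0/24-192.168.8.1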
--ttl <integer>
Specifies the time to live value for SmartConnect DNS query responses (in seconds).
DNS responses are only valid for the time specified. The default value is 0 seconds.
--auto-unsuspend-delay <integer>
Specifies the time delay (in seconds) before a node that is automatically
unsuspended resumes SmartConnect DNS query responses for the node. During
certain cluster operations such as rolling upgrades, general node splits, or node
reboots, a node is automatically suspended and then unsuspended by the system.
This setting is only available through the command line interface; you can view the
current setting by listing the current pools in verbose mode.
--zone <zone>
Specifies the SmartConnect zone name for this pool. Pool IP addresses are returned
in response to DNS queries to this zone. The --connect-policy option
determines which pool IP addresses are returned.
--add-zone-aliases <aliases>...
Adds specified DNS names to the pool as SmartConnect zone aliases. Multiple
aliases can be specified in a comma-separated list.
--remove-zone-aliases <aliases>...
Removes specified DNS names from the pool as SmartConnect zone aliases. Multiple
aliases can be specified in a comma-separated list.
--access-zone <zone>
Sets the access zone for connections to the pool.
--connect-policy <connection-policy>
Specifies how incoming client connections are balanced across IP addresses.
The following values are valid:
round-robin
Rotates connections through nodes equally. This value is the default policy.
conn-count
Assigns connections to the node that has the fewest connections.
throughput
Assigns connections to the node with the least throughput.
cpu-usage
Assigns connections to the node with the least CPU usage.
--failover-policy <failover-policy>
Specifies how IP addresses that belong to an unavailable interface are rebalanced
across the remaining network interfaces.
The following values are valid:
round-robin
Assigns IP addresses across nodes equally. This is the default policy.
conn-count
Assigns IP addresses to the node that has the fewest connections.
throughput
Assigns IP addresses to the node with the least throughput.
cpu-usage
Assigns IP addresses to the node with the least CPU usage.
--manual-failback
Requires that connection rebalancing be performed manually after failback.
To manually rebalance a pool, run the following command:
isi networks modify pool --name <subnet>:<pool> --sc-rebalance
--auto-failback
Causes connections to be rebalanced automatically after failback. This is the default
setting.
--sc-suspend-node <node>
Suspends SmartConnect DNS query responses for a node.
--sc-resume-node <node>
Resumes SmartConnect DNS query responses for a node.
{--verbose | -v}
Displays more detailed information.
{--force | -f}
Forces commands without warnings.
Examples
The following command creates a new IP address pool called pool1 under subnet0 that
assigns IP addresses 192.168.8.10-192.168.8.15 to ext-1 network on nodes 1, 2, and 3.
The SmartConnect zone name of this pool is storage.company.com, but it accepts the
alias of storage.company:
isi networks create pool subnet0:pool1 --ifaces=1-3:ext-1 \
--ranges=192.168.8.10-192.168.8.15 --zone=storage.company.com \
--add-zone-aliases=storage.company
The following command creates a new IP address pool named pool2 under subnet0 that
includes interfaces ext-1 and ext-2 on node 1, and aggregates them underneath ext-agg,
alternating equally between them.
isi networks create pool subnet0:pool2 --ifaces=1:ext-agg \
--ranges=192.168.8.10-192.168.8.15 --aggregation-mode=roundrobin
The following command creates a new IP address pool named pool3 under a subnet
named subnet0 and specifies that connection rebalancing must be performed manually:
isi networks create pool subnet0:pool3 --ifaces=1,2:ext-1,ext-2 \
--ranges=192.168.8.10-192.168.8.15 --manual-failback
Options
<name>
Specifies the name and location of the new provisioning rule. Valid names include
the subnet, pool, and a unique rule name, separated by colons. The rule name must
be unique throughout the given pool.
Specify in the following format:
<subnet>:<pool>:<rule>
<iface>
Specifies the interface name the rule applies to. To view a list of interfaces on your
system, run the isi networks list interfaces command.
--desc <description>
Specifies an optional description of the rule.
--any
Sets the provisioning rule to apply to all nodes. This is the default setting.
--storage
Sets the provisioning rule to apply to storage nodes.
--accelerator
Sets the provisioning rule to apply to Accelerator nodes.
--backup-accelerator
Sets the provisioning rule to apply to Backup Accelerator nodes.
{--verbose | -v}
Displays more detailed information.
Options
--name
Specifies the name of the subnet. Must be unique throughout the cluster.
--netmask <ip-address>
Specifies the netmask for an IPv4 subnet. You must specify either a netmask or an
IPv6 subnet prefix length.
--prefixlen <number>
Sets the prefix length of an IPv6 subnet. You must specify either a prefix length or an
IPv4 netmask.
--dsr-addrs <ip-address-list>
Sets the Direct Server Return address(es) for the subnet. If an external hardware load
balancer is used, this parameter is required.
--desc <description>
Sets a description for the subnet.
{--gateway | -g} <ip-address>
Specifies the gateway IP address used by the subnet. If unspecified, the default
gateway is used.
Note
Although OneFS supports both 1500 MTU and 9000 MTU, using a larger frame size for
network traffic permits more efficient communication on the external network
between clients and cluster nodes. For example, if a subnet is connected through a
10 GbE interface and NIC aggregation is configured for IP address pools in the
subnet, it is recommended you set the MTU to 9000. To benefit from using jumbo
frames, all devices in the network path must be configured to use jumbo frames.
--sc-service-addr <ip-address>
Specifies the IP address on which the SmartConnect module listens for domain name
server (DNS) requests on this subnet.
--vlan-id <vlan-identifier>
Specifies the VLAN ID for all interfaces in the subnet.
{--verbose | -v}
Displays more detailed information.
Examples
The following command creates a subnet named example1 with a netmask of
255.255.255.0:
isi networks create subnet --name example1 --netmask 255.255.255.0
Options
--name <subnet>:<pool>
Required. Specifies the name of the IP address pool to be deleted.
{--force | -f}
Suppresses any prompts, warnings, or confirmation messages that would otherwise
appear.
Examples
The following command deletes an IP address pool named pool0 from subnet1:
isi networks delete pool subnet1:pool0
Options
{--name | -n} <subnet>:<pool>:<rule>
Required. Specifies the provisioning rule to delete.
{--force | -f}
Suppresses any prompts, warnings, or confirmation messages that would otherwise
appear.
Examples
The following command deletes a provisioning rule named rule0 from subnet1:pool2:
isi networks delete rule subnet1:pool2:rule0
Options
--name
Required. Specifies the subnet for deletion.
{--force | -f}
Suppresses any prompts, warnings, or confirmation messages that would otherwise
appear.
Examples
The following command deletes a subnet named subnet1:
isi networks delete subnet subnet1
Options
This command has no options.
Options
This command has no options.
Options
This command has no options.
Options
--ttl-max-noerror <integer>
Specifies the upper boundary on ttl for cache hits.
--ttl-min-noerror <integer>
Options
There are no options for this command.
Options
{--verbose | -v}
Displays more detailed information.
{--wide | -w}
Displays entries without enforcing truncation.
--show-inactive
Includes inactive interfaces in the list.
{--nodes | -n} <integer>
Status       Membership       Addresses
-----------  ---------------  -----------
up           subnet0:pool0    11.22.3.45
no carrier
inactive
up           subnet0:pool0    11.22.34.56
no carrier
inactive
Options
If you run this command without options or with only the --verbose option, the system
displays a list of all available IP address pools.
--name <subnet>:<pool>
Displays only pool names that match the specified string, or specifies a full pool
name.
--subnet <string>
Displays only pools within a subnet whose name matches the specified string.
--iface <node-interface>
Displays only pools containing the specified member interface.
--rule <string>
Displays only pools containing a rule name that matches the specified string.
--has-addr <ip-address>
Displays only the pool that contains the specified IP address.
{--verbose | -v}
Displays more detailed information.
Examples
The following command displays a list of all available IP address pools:
isi networks list pools
Pool      SmartConnect Zone   Ranges            Alloc
--------  ------------------  ----------------  -------
pool0                         10.22.136.1-6     Static
pool0                         10.22.136.1-6     Static
pool01                        10.22.136.1-6     Static
pool10                        10.22.136.1-6     Static
example1  example.site.com    10.33.150.20-30   Dynamic
The following command displays a list of all pools whose names contain the string 'pool0':
isi networks list pools --name pool0
Pool      SmartConnect Zone   Ranges            Alloc
--------  ------------------  ----------------  -------
pool0                         10.22.136.1-6     Static
pool0                         10.22.136.1-6     Static
pool01                        10.22.136.1-6     Static
Options
If no options are specified, the command displays a list of all provisioning rules.
--name <subnet>:<pool>:<rule>
Specifies the name of the rule.
--pool <subnet>:<pool>
Name of the pool the provisioning rule applies to.
--iface <node-interface>
Names the interface that the provisioning rule applies to.
--any
Sets the provisioning rule to apply to any type of node.
--storage
The system displays the list of rules in output similar to the following example:
Name    Pool            Node Type   Interface
------  --------------  ----------  ---------
rule0   subnet0:pool0   All         ext-1
Options
If you run this command without options or with only the --verbose option, the system
displays a list of all available subnets.
--name
Displays only subnets that contain the specified string.
--has-addr <ip-address>
Displays only the subnet that contains the specified IP address.
{--verbose | -v}
Displays more detailed information.
Examples
The following command displays a list of all subnets:
isi networks list subnets
Subnet
---------------
11.22.3.0/24
11.22.33.0/24
Options
You must specify at least one IP address pool setting to modify.
<name>
Specifies the name of the pool to modify. Must be unique throughout the subnet.
Specify the pool name in the following format:
<subnet>:<pool>
--new-name <string>
Specifies a new name for the IP address pool.
Specify the new pool name in the following format:
<subnet>:<pool>
--sc-rebalance
Rebalances IP addresses for the pool.
--ranges <ip-address-range-list>...
Specifies one or more IP address ranges for the pool. IP addresses within these
ranges are assigned to the network interfaces that are members of the IP address
pool.
Note
Specifying new ranges with this option will remove any previously entered ranges
from the pool.
Specify the IP address range in the following format:
<low-ip-address>-<high-ip-address>
--add-ranges <ip-address-range-list>...
Adds specified IP address ranges to the pool.
Specify the IP address range in the following format:
<low-ip-address>-<high-ip-address>
--remove-ranges <ip-address-range-list>...
Removes specified IP address ranges from the pool.
Specify the IP address range in the following format:
<low-ip-address>-<high-ip-address>
--ifaces <node-interface>
Specifies which network interfaces should be members of the IP address pool. The
interface values specified through this option override any previously set values.
Specify network interfaces in the following format:
<node-number>:<interface>
To specify multiple nodes, separate each node ID with a comma. To specify a range of
nodes, separate the lower and upper node IDs with a dash. To specify multiple
network interfaces, separate each interface name with a comma.
The following example adds interfaces ext-1 and ext-2 on nodes 1, 2, 3 and 5 to the
IP address pool:
--ifaces 1-3,5:ext-1,ext-2
--add-ifaces <node-interface>
Adds a network interface to the pool.
Specify network interfaces in the following format:
<node-number>:<interface>
--remove-ifaces <node-interface>
Removes a network interface from the pool.
Specify network interfaces in the following format:
<node-number>:<interface>
--sc-subnet <subnet>
Specifies the name of the service subnet that is responsible for handling DNS requests
for the SmartConnect zone.
--desc <string>
Specifies a description of the pool.
--dynamic
Specifies that all pool IP addresses must be assigned to a network interface at all
times. Allows multiple IP addresses to be assigned to an interface. If a network
interface becomes unavailable, this option ensures that the assigned IP addresses are
redistributed to another interface.
Note
--remove-static-routes <route>...
Removes a static route from the pool. Multiple routes can be specified in a comma-separated list.
Specify the route in the following CIDR notation format:
<network-address>/<subnet-mask>-<gateway-ip-address>
--ttl <integer>
Specifies the time to live value for SmartConnect DNS query responses, in seconds.
DNS responses are only valid for the time specified. The default value is 0.
--auto-unsuspend-delay <integer>
Specifies the time delay (in seconds) before a node that is automatically
unsuspended resumes SmartConnect DNS query responses for the node. During
certain cluster operations such as rolling upgrades, general node splits, or node
reboots, a node is automatically suspended and then unsuspended by the system.
This setting is only available through the command line interface; you can view the
current setting by listing the current pools in verbose mode.
--zone <zone>
Specifies the SmartConnect zone name for the pool. DNS queries to this zone return
pool IP addresses. The --connect-policy setting determines which pool IP
addresses are returned.
--add-zone-aliases <aliases>...
Adds specified DNS names to the pool as SmartConnect zone aliases. Multiple
aliases can be specified in a comma-separated list.
--remove-zone-aliases <aliases>...
Removes specified DNS names from the pool as SmartConnect zone aliases. Multiple
aliases can be specified in a comma-separated list.
--access-zone <zone>
Sets the access zone for connections to the pool.
--connect-policy <connection-policy>
Specifies how incoming client connections are balanced across IP addresses.
The following values are valid:
round-robin
Rotates connections through nodes equally. This value is the default policy.
conn-count
Assigns connections to the node that has the fewest connections.
throughput
Assigns connections to the node with the least throughput.
cpu-usage
Assigns connections to the node with the least CPU usage.
--failover-policy <failover-policy>
Specifies how IP addresses that belong to an unavailable interface are rebalanced
across the remaining network interfaces.
The following values are valid:
round-robin
Assigns IP addresses across nodes equally. This is the default policy.
conn-count
Assigns IP addresses to the node that has the fewest connections.
throughput
Assigns IP addresses to the node with the least throughput.
cpu-usage
Assigns IP addresses to the node with the least CPU usage.
--manual-failback
Requires that connection rebalancing be performed manually after failback.
You can manually rebalance a pool by running the following command:
isi networks modify pool --name <subnet>:<pool> --sc-rebalance
--auto-failback
Rebalances connections automatically after failback. This option is the default
setting.
--sc-suspend-node <node>
Suspends SmartConnect DNS query responses for the specified node. While
suspended, SmartConnect does not return IP addresses for this node, but allows
active clients to remain connected.
--sc-resume-node <node>
Resumes SmartConnect DNS query responses for a node.
{--verbose | -v}
Displays the results of running this command.
{--force | -f}
Suppresses warning messages about pool modification.
Examples
The following command removes node 6 from participating in the SmartConnect profile
for subnet0:pool0:
isi networks modify pool subnet0:pool0 --sc-suspend-node=6
You can confirm that a node has been suspended by running the following command:
isi networks list pools --verbose
[--new-name <rule>]
[--pool <subnet>:<pool>]
[--iface <node-interface>]
[--desc <description>]
[--any]
[--storage]
[--accelerator]
[--backup-accelerator]
[--verbose]
Options
You must specify at least one network provisioning rule setting to modify.
--name <subnet>:<pool>:<rule>
Required. Specifies the name and location of the rule being modified. Must be unique
throughout the cluster.
--new-name
Specifies a new name for the rule. This name must be unique throughout the pool.
Note
This option does not include the name of the subnet or the pool.
--pool <subnet>:<pool>
Changes the pool to which the rule belongs. You must specify both the name of the
subnet and the name of the pool.
--iface <node-interface>
Specifies the node interface to which the rule applies.
--desc <description>
Specifies an optional description of the rule.
--any
Applies this rule to all nodes. This is the default setting.
--storage
Sets the provisioning rule to apply to storage nodes.
--accelerator
Sets the provisioning rule to apply to Accelerator nodes.
--backup-accelerator
Sets the provisioning rule to apply to Backup Accelerator nodes.
{--verbose | -v}
Displays more detailed information.
Examples
The following example applies rule3 on subnet0:pool0 only to storage nodes:
isi networks modify rule subnet0:pool0:rule3 --storage
Options
You must specify at least one network subnet setting to modify.
--name <string>
Required. Specifies the name of the subnet to modify.
--new-name <subnet>
Specifies a new name for the subnet. Must be unique throughout the cluster.
--netmask <ip-address>
Sets the netmask of the subnet.
--prefixlen <number>
Sets the prefix length of an IPv6 subnet.
--enable-vlan
Enables all VLAN tagging on the subnet.
--disable-vlan
Disables all VLAN tagging on the subnet.
--dsr-addrs <ip-address-list>
Specifies the Direct Server Return addresses for the subnet.
--add-dsr-addrs <ip-address-list>
Adds one or more Direct Server Return addresses to the subnet.
--remove-dsr-addrs <ip-address-list>
Removes one or more Direct Server Return addresses from the subnet.
--desc <description>
Specifies an optional description for this subnet.
{--gateway | -g} <ip-address>
Specifies the gateway IP address used by the subnet. If not specified, the default
gateway is used.
Note
Options
There are no options for this command.
Options
There are no options for this command.
CHAPTER 23
Hadoop
Hadoop overview
Hadoop is an open-source platform that runs analytics on large sets of data across a
distributed file system.
In a Hadoop implementation on an EMC Isilon cluster, OneFS acts as the distributed file
system and HDFS is supported as a native protocol. Clients from a Hadoop cluster
connect to the Isilon cluster through the HDFS protocol to manage and process data.
Hadoop support on the cluster requires you to activate an HDFS license. To obtain a
license, contact your EMC Isilon sales representative.
Hadoop architecture
Hadoop consists of a compute layer and a storage layer.
In a typical Hadoop implementation, both layers exist on the same cluster.
How Hadoop is implemented on OneFS
l The compute and storage layers are on separate clusters instead of the same cluster.
l Instead of storing data within a Hadoop distributed file system, the storage layer functionality is fulfilled by OneFS on an EMC Isilon cluster. Nodes on the Isilon cluster function as both a NameNode and a DataNode.
l The compute layer is established on a Hadoop compute cluster that is separate from the Isilon cluster. MapReduce and its components are installed on the Hadoop compute cluster only.
l In addition to HDFS, clients from the Hadoop compute cluster can connect to the Isilon cluster over any protocol that OneFS supports, such as NFS, SMB, FTP, and HTTP.
l Hadoop compute clients can connect to any node on the Isilon cluster that functions as a NameNode instead of being routed by a single NameNode.
Hadoop distribution          Versions supported
---------------------------  ------------------------------------------------------
Cloudera Manager             3 (Updates 2-5), 4.2, 5.0, 4.0
Greenplum GPHD               1.1, 1.2
HAWQ                         1.1.0.1
Hortonworks Data Platform    1.1.1-1.3.3 (non-GUI), 2.1
Pivotal HD                   1.0.1, 2.0
Apache Hadoop                0.20.203, 0.20.205, 1.0.0-1.0.3, 1.2.1, 2.0.x, 2.2-2.4
WebHDFS
OneFS supports access to HDFS data through WebHDFS client applications.
WebHDFS is a RESTful programming interface based on HTTP operations such as GET,
PUT, POST, and DELETE that is available for creating client applications. WebHDFS client
applications allow you to access HDFS data and perform HDFS operations through HTTP
and HTTPS.
WebHDFS is supported by OneFS on a per-access zone basis and is enabled by default.
WebHDFS supports simple authentication and Kerberos authentication. If the HDFS
authentication method for an access zone is set to All, OneFS uses simple
authentication by default.
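As an illustration, a WebHDFS client can issue standard HTTP requests against the cluster. The following sketch lists the contents of an HDFS directory; it assumes a SmartConnect zone name of namenode.example.com, the default OneFS WebHDFS port of 8082, simple authentication, and a hypothetical user name, so verify these values for your environment:
curl -i "http://namenode.example.com:8082/webhdfs/v1/tmp?op=LISTSTATUS&user.name=hadoop-user1"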
Note
Secure impersonation
Secure impersonation enables you to create proxy users that can impersonate other
users to run Hadoop jobs.
You might configure secure impersonation if you use applications, such as Apache Oozie,
to automatically schedule, manage, and run Hadoop jobs. For example, you can create an
Oozie proxy user that securely impersonates a user called HadoopAdmin, which allows
the Oozie user to request that Hadoop jobs be performed by the HadoopAdmin user.
You configure proxy users for secure impersonation on a per-zone basis, and users or
groups of users that you assign as members to the proxy user must be from the same
access zone. A member can be one or more of the following identity types:
l A user specified by user name or UNIX UID
l A group of users specified by group name or UNIX GID
l A user, group, machine, or account specified by Windows SID
l A well-known user specified by name
If the proxy user does not present valid credentials or if a proxy user member does not
exist on the cluster, access is denied. The proxy user can only access files and subdirectories located in the HDFS root directory of the access zone. It is recommended that
you limit the members that the proxy user can impersonate to users that have access only
to the data the proxy user needs.
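For example, the Oozie scenario described above could be configured with the isi hdfs proxyusers create command documented later in this chapter; the proxy user, zone, and member names below are hypothetical:
isi hdfs proxyusers create oozie --zone=zone1 --add-user=HadoopAdmin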
Ambari agent
The Ambari client/server framework is a third-party tool that enables you to configure,
manage, and monitor a Hadoop cluster through a browser-based interface. The OneFS
Ambari agent allows you to monitor the status of HDFS services on the EMC Isilon cluster
through the Ambari interface.
The Ambari agent is configured per access zone; you can configure the OneFS Ambari
agent in any access zone that contains HDFS data. To start an Ambari agent in an access
zone, you must specify the address of the external Ambari server and the address of a
NameNode that acts as the point of contact for the access zone.
The external Ambari server receives communications from the OneFS Ambari agent. Once
the Ambari agent assigned to the access zone registers with the Ambari server, the agent
provides a heartbeat status at regular intervals. The OneFS Ambari agent does not
provide metrics or alerts to the Ambari server. The external Ambari server must be
specified by a resolvable hostname, FQDN, or IP address and must be assigned to an
access zone.
The NameNode is the designated point of contact in an access zone that Hadoop services
managed through the Ambari interface should connect through. For example, if you
manage services such as YARN or Oozie through the Ambari interface, the services will
connect to the access zone through the specified NameNode. The Ambari agent
communicates the location of the designated NameNode to the Ambari server, and to the
Ambari interface, the NameNode represents the access zone. If you change the
designated NameNode address, the Ambari agent will inform the Ambari server. The
NameNode must be a resolvable SmartConnect zone name or an IP address from the IP
address pool associated with the access zone.
Note
The specified NameNode value maps to the NameNode, secondary NameNode, and
DataNode components on the Ambari interface.
The OneFS Ambari agent is based on the Apache Ambari framework and is compatible
with Ambari server versions 1.5.1.110 and 1.6.0.
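The following sketch shows how an Ambari agent might be started in an access zone. It assumes that the Ambari server and NameNode addresses are set through the isi zone zones modify command with --hdfs-ambari-server and --hdfs-ambari-namenode options; these option names are not documented in this excerpt, so verify them with isi zone zones modify --help before use. The host names are hypothetical:
isi zone zones modify zone1 --hdfs-ambari-server=ambari.example.com \
--hdfs-ambari-namenode=namenode.example.com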
Each SmartConnect zone represents a specific pool of IP addresses. When you associate
a SmartConnect zone with an access zone, OneFS only allows Hadoop clients connecting
through the IP addresses in the SmartConnect zone to reach the HDFS data in the access
zone. A root HDFS directory is specified for each access zone. This configuration isolates
data within access zones and allows you to restrict client access to the data.
A SmartConnect zone evenly distributes NameNode requests from Hadoop compute
clients across the access zone. When a Hadoop compute client makes an initial DNS
request to connect to the SmartConnect zone, the Hadoop client is routed to an Isilon
node that serves as a NameNode. Subsequent requests from the Hadoop compute client
go to the same node. When a second Hadoop client makes a DNS request for the
SmartConnect zone, SmartConnect balances the traffic and routes the client connection
to a different node than that used by the previous Hadoop compute client.
If you create a SmartConnect zone, you must add a new name server (NS) record as a
delegated domain to the authoritative DNS zone that contains the Isilon cluster. On the
Hadoop compute cluster, you must add the name of the DNS entry of the SmartConnect
zone to the core-site.xml file so that your Hadoop compute clients connect to a
NameNode with the DNS name of the zone.
SmartConnect is discussed in further detail in the Networking section of this guide.
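A minimal sketch of such a pool, using the isi networks create pool command documented in the Networking chapter of this guide, is shown below; the subnet, interface, address, and zone values are hypothetical and must be adapted to your environment:
isi networks create pool subnet0:hdfspool --ifaces=1-3:ext-1 \
--ranges=192.168.10.20-192.168.10.29 --zone=namenode.example.com \
--sc-subnet=subnet0 --access-zone=zone3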
l Activate a license for HDFS. When a license is activated, the HDFS service is enabled by default.
l Create directories on the cluster that will be set as HDFS root directories.
l Create a SmartConnect zone for balancing connections from Hadoop compute clients.
l Create local Hadoop users in access zones that do not have directory services such as Active Directory or LDAP.
l Set the HDFS root directory in each access zone that supports HDFS connections.
l Set an authentication method in each access zone that supports HDFS connections.
Procedure
1. Open a secure shell (SSH) connection to any node in the cluster and then log in.
2. Run the isi hdfs settings modify command.
The following example command sets the block size to 1 GB:
isi hdfs settings modify --default-block-size=1G
Specify the block size in bytes or use the suffixes K, M, and G.
The following example command sets the checksum type to crc32:
isi hdfs settings modify --default-checksum-type=crc32
The following example command sets the number of server threads to 32:
isi hdfs settings modify --server-threads=32
You can configure the following HDFS service settings on the cluster:

Block size
The HDFS block size setting on the EMC Isilon cluster determines how the HDFS service returns data upon read requests from Hadoop compute clients.
You can modify the HDFS block size on the cluster to increase the block size from the default of 64 MB up to 128 MB. Increasing the block size enables the Isilon cluster nodes to read and write HDFS data in larger blocks and optimize performance for most use cases.
The Hadoop cluster maintains a different block size that determines how a Hadoop compute client writes a block of file data to the Isilon cluster. The optimal block size depends on your data, how you process your data, and other factors. You can configure the block size on the Hadoop cluster in the dfs.block.size property of the hdfs-site.xml configuration file.

Checksum type
The HDFS service sends the checksum type to Hadoop compute clients, but it does not send any checksum data, regardless of the checksum type. The default checksum type is set to None. If your Hadoop distribution requires that a checksum type other than None be sent to the client, you can set the checksum type to CRC32 or CRC32C.

Service threads
The HDFS service generates multiple threads to handle HDFS traffic from EMC Isilon nodes.
By default, the service thread value is set to auto, which calculates the thread count by multiplying the number of cores on a node by eight and adding a minimum threshold of thirteen. It generates a maximum of 96 threads on a node.
To support a large system of Hadoop compute clients, you might need to increase the number of threads. If you are distributing HDFS traffic across all of the nodes in an Isilon cluster through a SmartConnect zone, the total number of HDFS service threads should equal at least half of the total number of maps and reduces on the Hadoop compute cluster. The maximum thread count is 256 per node.

Logging level
Error: general errors
Notice: conditions that are not errors, but might require special handling
2. Optional: To identify the name of the access zone that you want to modify for HDFS,
run the following command:
isi zone zones list
3. Set the HDFS authentication method for the access zone by running the isi zone
zones modify <zone> command, where <zone> is the name of the zone.
The following command specifies that Hadoop compute clients connecting to zone3
must be identified through the simple authentication method:
isi zone zones modify zone3 --hdfs-authentication=simple_only
The following example command specifies that Hadoop compute clients connecting to
zone3 must be identified through the Kerberos authentication method:
isi zone zones modify zone3 --hdfs-authentication=kerberos_only
Hadoop user that maps to a user on a Hadoop compute client for that access zone. If
directory services are available, a local user account is not required.
Procedure
1. Open a secure shell (SSH) connection to any node in the cluster.
2. Run the isi auth users create command.
The following example command creates a user named hadoop-user1 assigned to a
local authentication provider within the zone3 access zone:
isi auth users create --name=hadoop-user1 --provider=local --zone=zone3
3. Configure the HDFS root directory for this access zone by running the isi zone
zones modify <zone> command, where <zone> is the name of the zone.
The following command specifies that Hadoop compute clients connecting to zone3
are given access to the /ifs/hadoop/ directory:
isi zone zones modify zone3 --hdfs-root-directory=/ifs/hadoop
3. Enable or disable WebHDFS in the access zone by running the isi zone zones
modify command.
The following command enables WebHDFS in zone3:
isi zone zones modify zone3 --webhdfs-enabled=yes
The following command designates hadoop-user23 in zone1 as a new proxy user and
adds the group hadoop-users to the list of members that the proxy user can
impersonate:
isi hdfs proxyusers create hadoop-user23 --zone=zone1 --add-group=hadoop-users
The following command designates hadoop-user23 in zone1 as a new proxy user and
adds UID 2155 to the list of members that the proxy user can impersonate:
isi hdfs proxyusers create hadoop-user23 --zone=zone1 --add-uid=2155
Name: krb_users
ID: SID:S-1-22-2-1003
-----------------------
Type: wellknown
Name: LOCAL
ID: SID:S-1-2-0
3. To view the configuration details for a specific proxy user, run the isi hdfs
proxyusers view command.
The following command displays the configuration details for the hadoop-user23
proxy user in zone1:
isi hdfs proxyusers view hadoop-user23 --zone=zone1
Procedure
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Create a virtual HDFS rack by running the isi hdfs racks create command.
A rack name begins with a forward slash; for example, /hdfs-rack2.
The following command creates a rack named /hdfs-rack2:
isi hdfs racks create /hdfs-rack2
3. Modify the virtual HDFS rack by running the isi hdfs racks modify command.
A rack name begins with a forward slash; for example, /hdfs-rack2.
The following example command renames a rack named /hdfs-rack2 to /hdfs-rack5:
isi hdfs racks modify /hdfs-rack2 --new-name=/hdfs-rack5
In addition to adding a new range to the list of existing ranges, you can modify the
client IP address ranges by replacing the current ranges, deleting a specific range or
deleting all ranges.
The following example command replaces any existing IP pools with subnet1:pool1
and subnet2:pool2 on the rack named /hdfs-rack2:
isi hdfs racks modify /hdfs-rack2 --ip-pools=subnet1:pool1,subnet2:pool2
In addition to replacing the list of existing pools with new pools, you can modify the IP
pools by adding pools to the list of current pools, deleting a specific pool, or deleting all
pools.
3. Delete a virtual HDFS rack by running the isi hdfs racks delete command.
A rack name begins with a forward slash; for example, /hdfs-rack2.
The following command deletes the virtual HDFS rack named /hdfs-rack2:
isi hdfs racks delete /hdfs-rack2
The following example command displays setting details for all virtual HDFS racks
configured on the cluster:
isi hdfs racks list -v
3. To view the setting details for a specific virtual HDFS rack, run the isi hdfs racks
view command:
Each rack name begins with a forward slash; for example, /hdfs-rack2.
The following example command displays setting details for the virtual HDFS rack
named /hdfs-rack2:
isi hdfs racks view /hdfs-rack2
HDFS commands
You can access and configure the HDFS service through the HDFS commands.
Options
--default-block-size <size>
Specifies the block size (in bytes) reported by the HDFS service. The suffixes K, M, and
G are valid; for example, 64M, 512K, or 1G. The default value is 64M.
--default-checksum-type {none | crc32 | crc32c}
Specifies the checksum type reported by the HDFS service. The default value is none.
--server-log-level {emerg | alert | crit | err | notice | info | debug}
Sets the default logging level for the HDFS service on the cluster. The following values
are valid:
EMERG
A panic condition. This is normally broadcast to all users.
ALERT
A condition that should be corrected immediately, such as a corrupted system
database.
CRIT
Critical conditions, such as hard device errors.
ERR
Errors.
NOTICE
Conditions that are not error conditions, but may need special handling.
INFO
Information messages.
DEBUG
Messages that contain information typically of use only when debugging a
program.
The default value is NOTICE.
--server-threads {<integer> | auto}
Specifies the number of worker threads generated by the HDFS service. The default
value is auto, which enables the HDFS service to determine the number of necessary
worker threads.
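Examples
The following example command raises the HDFS service logging level while troubleshooting. This is a sketch that assumes the options above belong to the isi hdfs settings modify command used in the procedures earlier in this chapter:
isi hdfs settings modify --server-log-level=debug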
Options
There are no options for this command.
Options
<proxyuser-name>
Specifies the user name of a user currently configured on the cluster to be designated
as a proxy user.
--zone <zone-name>
Specifies the access zone the user authenticates through.
--add-group <group-name>...
Adds the group specified by name to the list of proxy user members. The proxy user
can impersonate any user in the group. The users in the group must authenticate to
the same access zone as the proxy user. You can specify multiple group names in a
comma-separated list.
--add-gid <group-identifier>...
Adds the group specified by UNIX GID to the list of proxy user members. The proxy
user can impersonate any user in the group. The users in the group must authenticate
to the same access zone as the proxy user. You can specify multiple UNIX GIDs in a
comma-separated list.
--add-user <user-name>...
Adds the user specified by name to the list of members the proxy user can
impersonate. The user must authenticate to the same access zone as the proxy user.
You can specify multiple user names in a comma-separated list.
--add-uid <user-identifier>...
Adds the user specified by UNIX UID to the list of members the proxy user can
impersonate. The user must authenticate to the same access zone as the proxy user.
You can specify multiple UNIX UIDs in a comma-separated list.
--add-sid <security-identifier>...
Adds the user, group of users, machine or account specified by Windows SID to the
list of proxy user members. The object must authenticate to the same access zone as
the proxy user. You can specify multiple Windows SIDs in a comma-separated list.
--add-wellknown <well-known-name>...
Adds the well-known user specified by name to the list of members the proxy user
can impersonate. The well-known user must authenticate to the same access zone as
the proxy user. You can specify multiple well-known user names in a comma-separated list.
{--verbose | -v}
Displays more detailed information.
Examples
The following command designates hadoop-user23 in zone1 as a new proxy user:
isi hdfs proxyusers create hadoop-user23 --zone=zone1
The following command designates hadoop-user23 in zone1 as a new proxy user and
adds the group of users named hadoop-users to the list of members that the proxy user
can impersonate:
isi hdfs proxyusers create hadoop-user23 --zone=zone1 \
--add-group=hadoop-users
The following command designates hadoop-user23 in zone1 as a new proxy user and
adds UID 2155 to the list of members that the proxy user can impersonate:
isi hdfs proxyusers create hadoop-user23 --zone=zone1 --add-uid=2155
[--remove-wellknown <well-known-name>...]
[--verbose]
Options
<proxyuser-name>
--remove-uid <user-identifier>...
Removes the user specified by UNIX UID from the list of members the proxy user can
impersonate. You can specify multiple UNIX UIDs in a comma-separated list.
--remove-sid <security-identifier>...
Removes the user, group of users, machine or account specified by Windows SID
from the list of proxy user members. You can specify multiple Windows SIDs in a
comma-separated list.
--remove-wellknown <well-known-name>...
Removes the well-known user specified by name from the list of members the proxy
user can impersonate. You can specify multiple well-known user names in a comma-separated list.
{--verbose | -v}
Displays more detailed information.
Examples
The following command adds the well-known local user to, and removes the user whose
UID is 2155 from, the list of members for proxy user hadoop-user23 in zone1:
isi hdfs proxyusers modify hadoop-user23 --zone=zone1 \
--add-wellknown=local --remove-uid=2155
Options
<proxyuser-name>
Options
<proxyuser-name>
Options
--zone <zone-name>
Specifies the name of the access zone.
--format {table | json | csv | list}
Displays output in table (default), JavaScript Object Notation (JSON), comma-separated value (CSV), or list format.
--no-header
Displays table and CSV output without headers.
--no-footer
Displays table output without footers.
{--verbose | -v}
Displays more detailed information.
Examples
The following command displays a list of all proxy users that are configured in zone1:
isi hdfs proxyusers list --zone=zone1
Options
<proxyuser-name>
Options
<rack-name>
Specifies the name of the virtual HDFS rack. The rack name must begin with a forward
slash; for example, /example-name.
--client-ip-ranges <low-ip-address>-<high-ip-address>...
Specifies IP address ranges of external Hadoop compute clients assigned to the
virtual rack.
--ip-pools <subnet>:<pool>...
Assigns a pool of Isilon cluster IP addresses to the virtual rack.
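Examples
The following example command creates a virtual HDFS rack and assigns a client IP address range and an IP address pool to it; the rack name, address range, and pool name are hypothetical:
isi hdfs racks create /hdfs-rack2 --client-ip-ranges=192.168.3.1-192.168.3.254 \
--ip-pools=subnet1:pool1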
Options
<rack-name>
Specifies the virtual HDFS rack to be modified. Each rack name begins with a forward
slash; for example, /example-name.
--new-name <rack-name>
Assigns a new name to the specified virtual rack. The rack name must begin with a
forward slash; for example, /example-name.
--client-ip-ranges <low-ip-address>-<high-ip-address>...
Specifies IP address ranges of external Hadoop compute clients assigned to the
virtual rack. The value assigned through this option overwrites any existing IP address
ranges. You can add a new range through the --add-client-ip-ranges option.
--add-client-ip-ranges <low-ip-address>-<high-ip-address>...
Adds a specified IP address range of external Hadoop compute clients to the virtual
rack.
--remove-client-ip-ranges <low-ip-address>-<high-ip-address>...
Removes a specified IP address range of external Hadoop compute clients from the
virtual rack. You can only remove an entire range; you cannot delete a subset of a
range.
--clear-client-ip-ranges
Removes all IP address ranges of external Hadoop compute clients from the virtual
rack.
--ip-pools <subnet>:<pool>...
Assigns pools of Isilon node IP addresses to the virtual rack. The value assigned
through this option overwrites any existing IP address pools. You can add a new pool
through the --add-ip-pools option.
--add-ip-pools <subnet>:<pool>...
Adds a specified pool of Isilon cluster IP addresses to the virtual rack.
--remove-ip-pools <subnet>:<pool>...
Removes a specified pool of Isilon cluster IP addresses from the virtual rack.
--clear-ip-pools
Removes all pools of Isilon cluster IP addresses from the virtual rack.
Options
<rack-name>
Deletes the specified virtual HDFS rack. Each rack name begins with a forward slash;
for example, /example-name.
Options
{--verbose | -v}
Displays more detailed information.
Options
<rack-name>
Specifies the name of the virtual HDFS rack to view. Each rack name begins with a
forward slash; for example, /example-name.
CHAPTER 24
Antivirus
Antivirus overview
You can scan the files you store on an Isilon cluster for computer viruses and other
security threats by integrating with third-party scanning services through the Internet
Content Adaptation Protocol (ICAP). OneFS sends files through ICAP to a server running
third-party antivirus scanning software. These servers are referred to as ICAP servers.
ICAP servers scan files for viruses.
After an ICAP server scans a file, it informs OneFS of whether the file is a threat. If a threat
is detected, OneFS informs system administrators by creating an event, displaying near
real-time summary information, and documenting the threat in an antivirus scan report.
You can configure OneFS to request that ICAP servers attempt to repair infected files. You
can also configure OneFS to protect users against potentially dangerous files by
truncating or quarantining infected files.
Before OneFS sends a file to be scanned, it ensures that the scan is not redundant. If a
file has already been scanned and has not been modified, OneFS will not send the file to
be scanned unless the virus database on the ICAP server has been updated since the last
scan.
Note
Antivirus scanning is available only if all nodes in the cluster are connected to the
external network.
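For example, an ICAP server is registered with the cluster through the --add-server option of the isi avscan settings command documented later in this chapter. The URL below is hypothetical; the host, port, and service path depend on your antivirus vendor:
isi avscan settings --add-server icap://av-server.example.com:1344/avscan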
On-access scanning
You can configure OneFS to send files to be scanned before they are opened, after they
are closed, or both. Sending files to be scanned after they are closed is faster but less
secure. Sending files to be scanned before they are opened is slower but more secure.
If OneFS is configured to ensure that files are scanned after they are closed, when a user
creates or modifies a file on the cluster, OneFS queues the file to be scanned. OneFS then
sends the file to an ICAP server to be scanned when convenient. In this configuration,
users can always access files without any delay. However, it is possible that after a user
modifies or creates a file, a second user might access the file before the file is scanned. If
a virus was introduced to the file from the first user, the second user will be able to
access the infected file. Also, if an ICAP server is unable to scan a file, the file will still be
accessible to users.
If OneFS ensures that files are scanned before they are opened, when a user attempts to
download a file from the cluster, OneFS first sends the file to an ICAP server to be
scanned. The file is not sent to the user until the scan is complete. Scanning files before
they are opened is more secure than scanning files after they are closed, because users
can access only scanned files. However, scanning files before they are opened requires
users to wait for files to be scanned. You can also configure OneFS to deny access to files
that cannot be scanned by an ICAP server, which can increase the delay. For example, if
no ICAP servers are available, users will not be able to access any files until the ICAP
servers become available again.
If you configure OneFS to ensure that files are scanned before they are opened, it is
recommended that you also configure OneFS to ensure that files are scanned after they
are closed. Scanning files as they are both opened and closed will not necessarily
improve security, but it will usually improve data availability when compared to scanning
files only when they are opened. If a user wants to access a file, the file may have already
been scanned after the file was last modified, and will not need to be scanned again if
the ICAP server database has not been updated since the last scan.
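As a sketch, the following command enables scanning of files both when they are closed and when they are opened, through the scan options documented under the antivirus commands later in this chapter; it assumes those options belong to the isi avscan settings command:
isi avscan settings --scan-on-close true --scan-on-open true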
The name and IP address of the user that triggered the scan.
This information is not included in reports triggered by antivirus scan policies.
ICAP servers
The number of ICAP servers that are required to support an Isilon cluster depends on how
virus scanning is configured, the amount of data a cluster processes, and the processing
power of the ICAP servers.
If you intend to scan files exclusively through antivirus scan policies, it is recommended
that you have a minimum of two ICAP servers per cluster. If you intend to scan files on
access, it is recommended that you have at least one ICAP server for each node in the
cluster.
If you configure more than one ICAP server for a cluster, it is important to ensure that the
processing power of each ICAP server is relatively equal. OneFS distributes files to the
ICAP servers on a rotating basis, regardless of the processing power of the ICAP servers. If
one server is significantly more powerful than another, OneFS does not send more files to
the more powerful server.
McAfee VirusScan Enterprise 8.7 and later with VirusScan Enterprise for Storage 1.0
and later.
You can configure OneFS and ICAP servers to react in one of the following ways when
threats are detected:
Repair or quarantine
Attempts to repair infected files. If an ICAP server fails to repair a file, OneFS
quarantines the file. If the ICAP server repairs the file successfully, OneFS sends the
file to the user. Repair or quarantine can be useful if you want to protect users from
accessing infected files while retaining all data on a cluster.
Repair or truncate
Attempts to repair infected files. If an ICAP server fails to repair a file, OneFS
truncates the file. If the ICAP server repairs the file successfully, OneFS sends the file
to the user. Repair or truncate can be useful if you do not care about retaining all
data on your cluster, and you want to free storage space. However, data in infected
files will be lost.
Alert only
Only generates an event for each infected file. It is recommended that you do not
apply this setting.
Repair only
Attempts to repair infected files. Afterwards, OneFS sends the files to the user,
whether or not the ICAP server repaired the files successfully. It is recommended that
you do not apply this setting. If you only attempt to repair files, users will still be
able to access infected files that cannot be repaired.
Quarantine
Quarantines all infected files. It is recommended that you do not apply this setting. If
you quarantine files without attempting to repair them, you might deny access to
infected files that could have been repaired.
Truncate
Truncates all infected files. It is recommended that you do not apply this setting. If
you truncate files without attempting to repair them, you might delete data
unnecessarily.
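For example, the repair or quarantine behavior corresponds to enabling both the --repair and --quarantine options documented later in this chapter. This is a sketch and assumes those options belong to the isi avscan settings command:
isi avscan settings --repair true --quarantine true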
The system displays the ID of an ICAP server as an integer in the ICAP server field.
2. To disconnect from an ICAP server, run the isi avscan settings command.
The following command temporarily disconnects from an ICAP server with an ID of 1:
isi avscan settings --disable-server 1
The system displays the ID of an ICAP server as an integer in the ICAP server field.
2. To reconnect to an ICAP server, run the isi avscan settings command.
The following command reconnects to an ICAP server with an ID of 1:
isi avscan settings --enable-server 1
The system displays the ID of an ICAP server as an integer in the ICAP server field.
2. To remove an ICAP server, run the isi avscan settings command.
The following command removes an ICAP server with an ID of 1:
isi avscan settings --del-server 1
Scan a file
You can manually scan an individual file for viruses. This procedure is available only
through the command-line interface (CLI).
Procedure
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Run the isi avscan manual command.
For example, the following command scans /ifs/data/virus_file:
isi avscan manual /ifs/data/virus_file
View threats
You can view files that have been identified as threats by an ICAP server.
Procedure
1. Run the isi avscan report threat command.
The following command displays all recently detected threats:
isi avscan report threat
Report ID
The ID of the antivirus report.
File
The path of the potentially infected file.
Time
The time that the threat was detected.
Remediation
How OneFS responded to the file when the threat was detected. If OneFS did not
quarantine or truncate the file, Infected is displayed.
Threat
The name of the detected threat as it is recognized by the ICAP server.
Infected
The name of the potentially infected file.
Policy ID
The ID of the antivirus policy that detected the threat. If the threat was detected as a
result of a manual antivirus scan of an individual file, MANUAL is displayed.
All events related to antivirus scans are classified as warnings. The following events
are related to antivirus activities:
Anti-Virus scan found threats
A threat was detected by an antivirus scan. These events refer to specific reports
on the Antivirus Reports page but do not provide threat details.
No ICAP Servers available
OneFS is unable to communicate with any ICAP servers.
ICAP Server Unresponsive or Invalid
OneFS is unable to communicate with an ICAP server.
Antivirus commands
You can control antivirus scanning activity on an Isilon cluster through the antivirus
commands.
Options
--name <name>
Specifies a name for the policy.
--id <id>
Specifies an ID for the policy. If no ID is specified, one is dynamically generated.
--enable {true | false}
Determines whether the policy is enabled or disabled. If set to true, the policy is
enabled. The default value is false.
--description <string>
Specifies a description for the policy.
--path <path>
Specifies a directory to scan when this policy is run.
--recurse <integer>
Note
This option has been deprecated and will not impact antivirus scans if specified.
Specifies the depth of subdirectories to include in the scan.
--force {true | false}
Determines whether to force policy scans. If a scan is forced, all files are scanned
regardless of whether OneFS has marked files as having been scanned, or if global
settings specify that certain files should not be scanned.
--schedule <schedule>
Specifies when the policy is run.
Specify in the following format:
"<interval> [<frequency>]"
You can optionally append "st", "th", or "rd" to <integer>. For example, you can specify
"Every 1st month"
Specify <day> as any day of the week or a three-letter abbreviation for the day. For
example, both "Saturday" and "sat" are valid.
Options
--id <id>
Modifies the policy of the specified ID.
--name <new-name>
Specifies a new name for this policy.
--enable {true | false}
Determines whether this policy is enabled or disabled. If set to true, the policy is
enabled. The default value is false.
--description <string>
Specifies a new description for this policy.
--path <path>
Specifies a directory to scan when this policy is run.
--recurse <integer>
Note
This option has been deprecated and will not impact antivirus scans if specified.
Specifies the depth of subdirectories to include in the scan.
--force {true | false}
Determines whether to force policy scans. If a scan is forced, all files are scanned
regardless of whether OneFS has marked files as having been scanned, or if global
settings specify that certain files should not be scanned.
--schedule <schedule>
Specifies when the policy is run.
Specify in the following format:
"<interval> [<frequency>]"
You can optionally append "st", "th", or "rd" to <integer>. For example, you can specify
"Every 1st month"
Specify <day> as any day of the week or a three-letter abbreviation for the day. For
example, both "Saturday" and "sat" are valid.
Options
--id <id>
Deletes the policy of the specified ID.
Options
--id <id>
Displays information on only the policy of the specified ID.
Options
--id <policy-id>
Runs the policy of the specified ID.
--report <id>
Assigns the specified ID to the report generated for this run of the avscan policy.
--force {true | false}
Determines whether to force the scan. If the scan is forced, all files are scanned
regardless of whether OneFS has marked files as having been scanned, or if global
settings specify that certain files should not be scanned.
--update {yes | no}
Specifies whether to update the last run time in the policy file. The default value is
yes.
Options
<name>
Scans the specified file. Specify as a file path.
{--policy | -p} <id>
Assigns a policy ID for this scan. The default ID is MANUAL.
{--report | -r} <id>
Assigns the specified report ID to the report that will include information about this
scan. If this option is not specified, the report ID is generated dynamically.
--force {yes | no}
Determines whether to force the scan. If the scan is forced, the scan will complete
regardless of whether OneFS has marked the file as having been scanned, or if global
settings specify that the file should not be scanned.
Options
<name>
Quarantines the specified file. Specify as a file path.
Options
<name>
Removes the specified file from quarantine. Specify as a file path.
Options
{--detail | -d}
Displays detailed information.
{--wide | -w}
Displays output in a wide table without truncations.
{--export | -e} <path>
If specified, exports the output to the specified file path.
{--max-result | -m} <integer>
If specified, displays no more than the specified number of results.
{--all | -a}
If specified, displays all threats, regardless of when the threats were detected.
{--report-id | -r} <id>
Displays only threats included in the report of the specified ID.
{--file | -f} <path>
Displays information about only the specified file.
{--remediation | -R} <action>
Displays information about threats that caused the specified action.
The following values are valid:
l infected
l truncated
l repaired
l quarantined
Options
If no options are specified, displays a summary of recently completed scans.
{--detail | -d}
Displays detailed output.
{--wide | -w}
Displays output in a wide table.
{--export | -e} <path>
If specified, exports the output to the specified file path.
{--max-result | -m} <integer>
If specified, displays no more than the specified number of results.
{--all | -a}
If specified, displays all scans, regardless of when the scans were run.
{--report-id | -r} <id>
Displays only the report of the specified ID.
{--policy-id | -p} <id>
Displays only reports about the policy of the specified ID.
{--running | -R}
Displays only scans that are still in progress.
Options
If no options are specified, deletes reports that are older than the value specified by the
isi avscan config --report-expiry option.
{--expire | -e} <integer> <time>
Sets the minimum age of reports to be deleted.
The following <time> values are valid:
s
Specifies seconds
d
Specifies days
m
Specifies minutes
w
Specifies weeks
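For reference, the related retention setting mentioned above can be adjusted through the isi avscan config command referenced earlier in this section; the two-week value is only an example.
isi avscan config --report-expiry 2 w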
Options
--scan-on-open {true | false}
Determines whether files are scanned before the files are sent to users.
--fail-open {true | false}
If --scan-on-open is set to true, determines whether users can access files that cannot be scanned. If this option is set to false, users cannot access a file until the file is scanned by an ICAP server.
If --scan-on-open is set to false, this option has no effect.
--scan-on-close {true | false}
Determines whether files are scanned after the files are closed.
--max-scan-size <float> [{B | KB | MB | GB}]
If specified, OneFS will not send files larger than the specified size to an ICAP server
to be scanned.
Note
Although the parameter accepts values larger than 2GB, OneFS does not scan files
larger than 2GB.
--repair {true | false}
Determines whether OneFS attempts to repair files in which threats are detected.
--quarantine {true | false}
Determines whether OneFS quarantines files in which threats are detected. If --repair is set to true, OneFS attempts to repair the file before quarantining it.
--truncate {true | false}
Determines whether OneFS truncates files in which threats are detected. If --repair is set to true, OneFS attempts to repair the file before truncating it.
--report-expiry <integer> <time>
Determines how long OneFS will retain antivirus scan reports before deleting them.
--path-prefix <path>
If specified, only files contained in the specified directory path will be scanned. This
option affects only on-access scans. To specify multiple directories, you must include
multiple --path-prefix options within the same command. Specifying this option
will remove any existing path prefixes.
--add-server <url>
Adds an ICAP server of the specified URL.
--del-server <id>
Removes an ICAP server of the specified ID.
--enable-server <id>
Enables an ICAP server of the specified ID.
--disable-server <id>
Disables an ICAP server of the specified ID.
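The following sketch combines several of these options, assuming they belong to the isi avscan config command referenced earlier in this chapter; the ICAP server URL is a placeholder.
isi avscan config --scan-on-close true --repair true --quarantine true
isi avscan config --add-server icap://icap1.example.com:1344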
Options
<name>
Displays information about the file of the specified name. Specify as a file path.
CHAPTER 25
VMware integration
VAAI
OneFS uses VMware vSphere API for Array Integration (VAAI) to support offloading
specific virtual machine storage and management operations from VMware ESXi
hypervisors to an Isilon cluster.
VAAI support enables you to accelerate the process of creating virtual machines and
virtual disks. For OneFS to interact with your vSphere environment through VAAI, your
VMware environment must include ESXi 5.0 or later hypervisors.
If you enable VAAI capabilities for an Isilon cluster, when you clone a virtual machine
residing on the cluster through VMware, OneFS clones the files related to that virtual
machine.
To enable OneFS to use VMware vSphere API for Array Integration (VAAI), you must install
the VAAI NAS plug-in for Isilon on the ESXi server. For more information on the VAAI NAS
plug-in for Isilon, see the VAAI NAS plug-in for Isilon Release Notes.
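As a quick check, you can usually confirm from the ESXi command line that a VAAI NAS plug-in is installed; the package name filter shown here is an assumption and may differ by plug-in version.
esxcli software vib list | grep -i isilon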
VASA
OneFS communicates with VMware vSphere through VMware vSphere API for Storage
Awareness (VASA).
VASA support enables you to view information about Isilon clusters through vSphere,
including Isilon-specific alarms in vCenter. VASA support also enables you to integrate
with VMware profile driven storage by providing storage capabilities for Isilon clusters in
vCenter. For OneFS to communicate with vSphere through VASA, your VMware
environment must include ESXi 5.0 or later hypervisors.
Alarm name
Thin-provisioned LUN capacity exceeded
Description
There is not enough available space on the cluster to allocate space for writing data to thinly provisioned LUNs. If this condition persists, you will not be able to write to the virtual machine on this cluster. To resolve this issue, you must free storage space on the cluster.
If the cluster is composed of i-Series, X-Series, or S-Series nodes, but the cluster does not contain SSDs, the cluster is recognized as a capacity cluster.
Capacity
The Isilon cluster is composed of Isilon X-Series nodes that do not contain SSDs. The cluster is configured for a balance between performance and capacity.
Hybrid
The Isilon cluster is composed of nodes associated with two or more storage capabilities. For example, if the cluster contains both Isilon S-Series and NL-Series nodes, the storage capability of the cluster is displayed as Hybrid.
Enable VASA
To enable an Isilon cluster to communicate with VMware vSphere API for Storage Awareness (VASA), you must enable the VASA daemon.
Procedure
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Enable VASA by running the following command:
isi services isi_vasa_d enable
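To confirm that the daemon is now enabled, you can list services on the node; using isi services -a to include all services is an assumption about the local service list.
isi services -a | grep isi_vasa_d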
Record the location where you saved the certificate. You will need this file path
when adding the vendor provider in vCenter.
3. Disable or enable the VASA daemon by running one of the following commands:
isi services isi_vasa_d disable
isi services isi_vasa_d enable