Nutanix - Command-Ref-AOS-v51
Acropolis 5.1
25-May-2017
Notice
Copyright
Copyright 2017 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions

user@host$ command
The commands are executed as a non-privileged user (such as nutanix) in the system shell.

root@host# command
The commands are executed as the root user in the vSphere or Acropolis host shell.

> command
The commands are executed in the Hyper-V host shell.
Version
Last modified: May 25, 2017 (2017-05-25 8:02:06 GMT-7)
nutanix-guest-tools: Nutanix Guest Tools
progress-monitor: Progress Monitor
protection-domain: Protection domain
pulse-config: Pulse Configuration
rackable-unit: Rackable unit
remote-site: Remote Site
rsyslog-config: RSyslog Configuration
share: Share
smb-server: Nutanix SMB server
snapshot: Snapshot
snmp: SNMP
software: Software
ssl-certificate: SSL Certificate
storagepool: Storage Pool
storagetier: Storage Tier
tag: Tag
task: Tasks
user: User
vdisk: Virtual Disk
virtual-disk: Virtual Disk
virtualmachine: Virtual Machine
volume-group: Volume Groups
vstore: VStore
vzone: vZone
1. Acropolis Command-Line Interface (aCLI)
Acropolis provides a command-line interface for managing hosts, networks, snapshots, and VMs.
To access the Acropolis CLI, log on to a Controller VM in the cluster with SSH and type acli at the shell
prompt.
To exit the Acropolis CLI and return to the shell, type exit at the <acropolis> prompt.
Command arguments take the form keyword=value, where the keyword is a literal string required by the command and the value is the unique value for your environment.
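For example, the net.add_dhcp_pool command described later in this chapter takes start and end keyword arguments:
<acropolis> net.add_dhcp_pool vlan.16 start=192.168.1.16 end=192.168.1.32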
ads
Operations
<acropolis> ads.get
Required arguments
None
core
Operations
<acropolis> core.exit
Required arguments
None
ha
Operations
<acropolis> ha.get
Required arguments
None
host
Operations
This command initiates a transition into maintenance mode. The host will be marked as unschedulable, so
that no new VMs are instantiated on it. Subsequently, an attempt is made to evacuate VMs from the host.
If the evacuation attempt fails (e.g., because there are insufficient resources available elsewhere in the
cluster), the host will remain in the "entering maintenance mode" state, where it is marked unschedulable,
waiting for user remediation. The user may safely run this command again, and may do so with different
options (e.g., by specifying mode=power_off to power off the remaining VMs on the host). A request to
enter maintenance mode may be aborted at any time using the host.exit_maintenance_mode command.
The user should use the host.get command to determine the host's current maintenance mode state.
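As a hedged sketch (the syntax line for this command is not reproduced here), entering maintenance mode and powering off the remaining VMs might look like the following; the mode=power_off option is taken from the description above, and the host name is illustrative:
<acropolis> host.enter_maintenance_mode my-host mode=power_off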
This command may be used to abort a prior attempt to enter maintenance mode, even if the attempt is
ongoing. If the host is no longer in maintenance mode, this command has no effect. The host may not be
removed from maintenance mode synchronously. Use the host.get command to check the host's current
maintenance mode state.
<acropolis> host.exit_maintenance_mode host
<acropolis> host.list
Required arguments
None
image
Operations
Create an image
We support two different modes of creation. A URL to a disk image can be provided with the source_url
keyword argument or an existing vmdisk can be provided with the clone_from_vmdisk keyword argument.
If the image is created from a source_url then a container must also be provided. Otherwise the container
keyword argument should not be specified and the image will reside in the same container as the vmdisk.
In addition to a creation mode, an image type must also be provided. Image types can either be an ISO
(kIsoImage) or a disk image (kDiskImage). Optionally, a checksum may also be specified if we are creating
an image from a source_url in order to verify the correctness of the image.
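A hedged sketch of creating an ISO image from a URL, using the keyword arguments named above (the image name, URL, and container name are illustrative, and the exact syntax line is not reproduced here):
<acropolis> image.create my-iso source_url=nfs://filer/images/install.iso container=default image_type=kIsoImage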
Delete one or more images
<acropolis> image.list
Required arguments
None
Update an image
iscsi_client
Operations
<acropolis> iscsi_client.list
Required arguments
None
net
Operations
A managed network may have zero or more non-overlapping DHCP pools. Each pool must be entirely
contained within the network's managed subnet. In the absence of a DHCP pool, the user must specify
an IPv4 address when creating a virtual network adapter (see vm.nic_create). If the managed network
has a DHCP pool, the user need not provide an address; the NIC will automatically be assigned an IPv4
address from one of the pools at creation time, provided at least one address is available. Addresses in the
DHCP pool are not reserved. That is, a user may manually specify an address belonging to the pool when
creating a virtual adapter.
<acropolis> net.add_dhcp_pool network [ end="end" ][ start="ip_addr" ]
Required arguments
network
Network identifier
Type: network
Optional arguments
end
Last IPv4 address
Type: IPv4 address
start
First IPv4 address
Type: IPv4 address
Examples
1. Auto-assign addresses from the inclusive range 192.168.1.16 - 192.168.1.32.
<acropolis> net.add_dhcp_pool vlan.16 start=192.168.1.16 end=192.168.1.32
A blacklisted IP address can not be assigned to a VM network adapter. This property may be useful for
avoiding conflicts between VMs and other hosts on the physical network.
<acropolis> net.add_to_ip_blacklist network [ ip_list="ip_addr_list" ]
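For example, to keep two addresses used elsewhere on the physical network from being assigned to VM adapters (addresses are illustrative):
<acropolis> net.add_to_ip_blacklist vlan.16 ip_list=192.168.1.1,192.168.1.2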
Each VM network interface is bound to a virtual network (see vm.nic_create). While a virtual network is
in use by a VM, it cannot be modified or deleted. Currently, the only supported L2 type is VLAN. Each
virtual network is bound to a single VLAN, and trunking VLANs to a virtual network is not supported.
A virtual network on VLAN 66 would be named "vlan.66". Each virtual network maps to a hypervisor-
Deletes a network
Note that a network may not be deleted while VMs are still attached to it. To determine which VMs are on a
network, use net.list_vms.
<acropolis> net.delete network
Required arguments
network
Network identifier
Type: network
<acropolis> net.list
Required arguments
None
This command is used to configure the TFTP information that the DHCP server includes in its responses
to clients on the virtual network. In particular, it is used to configure the TFTP server name (option 66)
and boot file name (option 67). The DHCP server's TFTP configuration may be modified while VMs are connected to the network. However, the DHCP server hands out infinite leases, so clients will need to manually renew to pick up the new settings.
<acropolis> net.update_dhcp_tftp network [ bootfile_name="bootfile_name" ][
server_name="server_name" ]
Required arguments
network
Network identifier
Type: network
Optional arguments
bootfile_name
Boot file name
Type: string
server_name
TFTP server name
Type: string
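For example, to point PXE clients on the network at a TFTP server (the server address and boot file name are illustrative):
<acropolis> net.update_dhcp_tftp vlan.16 server_name=192.168.1.2 bootfile_name=pxelinux.0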
nf
Operations
<acropolis> nf.chain_list
Required arguments
None
parcel
Operations
<acropolis> parcel.list
Required arguments
None
policy
Operations
Deletes a policy
<acropolis> policy.list
Required arguments
None
snapshot
Operations
<acropolis> snapshot.list
Required arguments
None
task
Operations
Cancel tasks
If any of the specified tasks finish, the poll returns. The response specifies whether the request timed out without any tasks completing or, if tasks did complete, the exact set of tasks that completed. Invalid task UUIDs are also identified in the response.
<acropolis> task.poll task_list [ timeout="timeout" ]
Required arguments
task_list
Task identifier
Type: list of task identifiers
Optional arguments
timeout
Poll timeout in seconds
Type: int
Default: 30
Examples
1. Poll list of tasks for completion.
<acropolis> task.poll 783d9fed-131e-406d-ae4b-e8ca2726cc02,90a13330-b2cb-4995-82cf-06e2efb51d3d
vg
Operations
The source volume group must be specified through the 'clone_from_vg' argument. If the iSCSI target names for the clones are not specified through the 'iscsi_target_prefix_list' argument, then default values will be used.
<acropolis> vg.clone name_list [ clone_from_vg="clone_from_vg"
][ iscsi_target_prefix_list="iscsi_target_prefix_list" ][
target_secret_list="target_secret_list" ]
Required arguments
name_list
Comma-delimited list of VG names
Type: list of strings with expansion wildcards
Optional arguments
clone_from_vg
VG from which to clone
Type: volume group
iscsi_target_prefix_list
Comma-delimited list of iscsi target prefixes for each of the VGs
Type: list of strings
target_secret_list
Comma-delimited CHAP secrets associated with each of the VGs. To delete a secret, set target_secret="" using vg.update
Type: list of strings
Examples
1. Clone two VGs vg1 and vg2 with iscsi targets vgt1 and vgt2 from source-vg
<acropolis> vg.clone vg1,vg2 clone_from_vg=source-vg iscsi_target_prefix_list=vgt1,vgt2
2. Clone two VGs vg1 and vg2 with target_secrets vg1_target_secret and vg2_target_secret
<acropolis> vg.clone vg1,vg2 clone_from_vg=source-vg
target_secret_list=vg1_target_secret,vg2_target_secret
<acropolis> vg.detach_from_vm vg vm
<acropolis> vg.list
Required arguments
None
vm
Operations
This will unset a VM affinity configuration, including policy, constraint, and binding entities.
<acropolis> vm.affinity_unset vm_list
Required arguments
vm_list
Comma-delimited list of VM identifiers
Type: list of VMs
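For example, to clear the affinity configuration on two VMs:
<acropolis> vm.affinity_unset vm1,vm2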
One of the 'clone_from_*' arguments must be provided. The resulting VMs are cloned from the specified source. When the source has one or more NICs on a managed network, the caller may optionally provide a set of initial IP addresses. The first clone gets the first IP address set for each of its NICs, and subsequent clones are assigned subsequent IP addresses in sequence. If memory size or CPU-related parameters are specified, they override the values allotted to the source VM or snapshot. Memory size must be specified with a multiplicative suffix. The following suffixes are valid: M=2^20, G=2^30.
<acropolis> vm.clone name_list [ clone_affinity="{ true | false }" ][
clone_from_snapshot="clone_from_snapshot" ][ clone_from_vm="clone_from_vm"
][ clone_ip_address="clone_ip_address" ][ memory="memory" ][
num_cores_per_vcpu="num_cores_per_vcpu" ][ num_vcpus="num_vcpus" ]
Required arguments
name_list
Comma-delimited list of VM names
Type: list of strings with expansion wildcards
Optional arguments
clone_affinity
Clone source VM's affinity rules.
Type: boolean
clone_from_snapshot
Snapshot from which to clone
Type: snapshot
clone_from_vm
VM from which to clone
Type: VM
clone_ip_address
IP addresses to assign to clones
Type: list of IPv4 addresses
memory
Memory size
Type: size with MG suffix
num_cores_per_vcpu
Number of cores per vCPU
Type: int
num_vcpus
Number of vCPUs
Type: int
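Examples
1. A hedged illustration (names and values are illustrative): clone two VMs from my_vm, overriding its memory size and vCPU count.
<acropolis> vm.clone clone1,clone2 clone_from_vm=my_vm memory=4G num_vcpus=2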
Memory size must be specified with a multiplicative suffix. The following suffixes are valid: M=2^20,
G=2^30.
<acropolis> vm.create name_list [ agent_vm="{ true | false }" ][
extra_flags="extra_flags" ][ memory="memory" ][ num_cores_per_vcpu="num_cores_per_vcpu"
][ num_vcpus="num_vcpus" ]
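For example, a minimal sketch using the arguments above (the VM name and sizes are illustrative):
<acropolis> vm.create my_vm memory=2G num_vcpus=2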
If the VM is running, the disk is hot-removed from the VM. Note that certain buses, like IDE, are not hot-pluggable.
<acropolis> vm.disk_delete vm disk_addr
Required arguments
vm
VM identifier
Type: VM
disk_addr
Disk address ("bus.index")
Type: VM disk
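For example, a hedged sketch of removing a disk (the "scsi" bus name is an assumption for illustration; use the address reported by vm.disk_list):
<acropolis> vm.disk_delete my_vm scsi.1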
<acropolis> vm.disk_list vm
Required arguments
vm
VM identifier
Type: VM
If a VM's host becomes disconnected from the cluster, and is not expected to return, this command may be
used to force the VM back to the powered off state. Use this command with extreme caution; if the VM is
actually still running on the host after a force off, a subsequent attempt to power on the VM elsewhere may
succeed. The two instances may experience IP conflicts, or corrupt the VM's virtual disks. Therefore, the
user should take adequate precautions to ensure that the old instance is really gone.
<acropolis> vm.force_off vm
Required arguments
vm
VM identifier
Type: VM
Changes to the GPU configuration can only be made while the VM is powered off.
<acropolis> vm.gpu_assign vm [ gpu="gpu" ]
Required arguments
vm
VM identifier
Type: VM
Optional arguments
gpu
GPU
Type: gpu
Examples
1. Add a new GPU in passthrough mode
<acropolis> vm.gpu_assign my_vm gpu=Nvidia_Tesla_M60
Changes to the GPU configuration can only be made while the VM is powered off.
<acropolis> vm.gpu_deassign vm [ gpu="gpu" ]
Required arguments
vm
VM identifier
If no host is specified, the scheduler picks the host with the most available CPU and memory that can support the VM. Note that no such host may be available. The user may abort an in-progress migration with the vm.abort_migrate command. If multiple VMs are specified, it is recommended to also provide the bandwidth_mbps parameter; the limit is applied to each migration individually.
<acropolis> vm.migrate vm_list [ bandwidth_mbps="bandwidth_mbps" ][ host="host" ][
live="{ true | false }" ]
Required arguments
vm_list
Comma-delimited list of VM identifiers
Type: list of VMs
Optional arguments
bandwidth_mbps
Maximum bandwidth in MiB/s
Type: int
Default: 0
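For example, a hedged sketch of a live migration to a named host with a bandwidth cap (the host name and limit are illustrative):
<acropolis> vm.migrate my_vm host=host-2 live=true bandwidth_mbps=200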
A VM NIC must be associated with a virtual network, and it is not possible to change this association; to connect a VM to a different virtual network, create a new NIC. If the virtual network is managed (see net.create), the NIC must be assigned an IPv4 address at creation time. If the network has no DHCP pool, the user must specify the IPv4 address manually. If the VM is running, the NIC is hot-added to the VM.
<acropolis> vm.nic_create vm [ ip="ip" ][ mac="mac" ][ model="model" ][
network="network" ][ network_function_nic_type="network_function_nic_type" ][
request_ip="{ true | false }" ][ trunked_networks="trunked_networks" ][ type="type" ][
vlan_mode="vlan_mode" ]
Required arguments
vm
VM identifier
Type: VM
Optional arguments
ip
IPv4 address
Type: IPv4 address
mac
MAC address
Type: MAC address
model
Virtual hardware model
Type: string
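For example, a hedged sketch of hot-adding a NIC on a managed network with a manually chosen address (values are illustrative):
<acropolis> vm.nic_create my_vm network=vlan.16 ip=192.168.1.50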
If the VM is running, the NIC is hot-removed from the VM. If the NIC to be removed is specified as the boot
device in the boot configuration, the boot device configuration will be cleared as a side effect of removing
the NIC.
<acropolis> vm.nic_delete vm mac_addr
Required arguments
vm
VM identifier
Type: VM
mac_addr
NIC MAC address
Type: NIC address
Updates a network adapter on a VM, specified by the MAC address of the network adapter.
If no host is specified, the scheduler will pick the one with the most available CPU and memory that can
support the VM. Note that no such host may be available.
<acropolis> vm.on vm_list [ host="host" ]
Required arguments
vm_list
Comma-delimited list of VM identifiers
Type: list of VMs
Optional arguments
host
Host on which to power on the VM
Type: host
If the VM is currently running, it will be powered off. Since VM snapshots do not include the VM memory image, the VM remains powered off after the restore is complete. A VM snapshot may no longer be compatible with the current virtual network configuration; in this case, the user may choose not to restore the VM's network adapters using the "restore_network_config" keyword argument.
<acropolis> vm.restore vm snapshot [ restore_network_config="{ true | false }" ]
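For example, to restore a VM from a snapshot without restoring its network adapters (names are illustrative):
<acropolis> vm.restore my_vm my_snapshot restore_network_config=false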
<acropolis> vm.resume_all
Required arguments
None
Changes to the serial port configuration only take effect after a full power cycle.
<acropolis> vm.serial_port_create vm [ index="index" ][ type="type" ]
Required arguments
vm
VM identifier
Type: VM
Optional arguments
index
Serial port index
Type: int
type
Serial port type
Type: serial port type
Changes to the serial port configuration only take effect after a full power cycle.
<acropolis> vm.serial_port_delete vm index
Required arguments
vm
VM identifier
Type: VM
index
Serial port index
Type: int
Examples
1. Remove the serial port at COM2.
<acropolis> vm.serial_port_delete my_vm 1
If multiple VMs are specified, all of their configurations and disks fall into the same consistency group. Since this operation requires the coordination of multiple resources, avoid specifying more than a few VMs at a time. Snapshots are crash-consistent: they do not include the VM's current memory image, only the VM configuration and its disk contents. The snapshot is taken atomically across all of a VM's configuration and disks to ensure consistency. If no snapshot name is provided, the snapshot is named "<vm_name>-<timestamp>", where the timestamp is in ISO 8601 format (YYYY-MM-DDTHH:MM:SS.mmmmmm).
<acropolis> vm.snapshot_create vm_list [ snapshot_name_list="snapshot_name_list" ]
Required arguments
vm_list
Comma-delimited list of VM identifiers
Type: list of VMs
Optional arguments
snapshot_name_list
Comma-delimited list of names for each snapshot
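Examples
1. A hedged illustration: snapshot a VM under an explicit name (names are illustrative).
<acropolis> vm.snapshot_create my_vm snapshot_name_list=my_vm-backup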
<acropolis> vm.snapshot_get_tree vm
Required arguments
vm
VM identifier
Type: VM
<acropolis> vm.snapshot_list vm
Required arguments
vm
VM identifier
Type: VM
Note that some attributes may not be modifiable while the VM is running; at present, the KVM hypervisor supports hot-add only for CPU and memory. Memory size must be specified with a multiplicative suffix. The following suffixes are valid: M=2^20, G=2^30. The hwclock_timezone attribute specifies the VM's hardware clock timezone. Most operating systems assume the system clock is UTC, but some (like Windows) expect the local timezone. Changes to the clock timezone only take effect after a full VM power cycle. The vga_console attribute controls whether the VM has a VGA console device; changes to this attribute only take effect after a full VM power cycle. The agent_vm attribute controls whether the VM is an agent VM. When their host enters maintenance mode, agent VMs are powered off after normal VMs are evacuated; when the host is restored, agent VMs are powered on before normal VMs are restored. Agent VMs cannot be HA-protected.
<acropolis> vm.update vm_list [ agent_vm="{ true | false }" ][ annotation="annotation"
][ cbr_not_capable_reason="cbr_not_capable_reason" ][ cpu_passthrough="{ true
| false }" ][ extra_flags="extra_flags" ][ ha_priority="ha_priority" ][
hwclock_timezone="hwclock_timezone" ][ memory="memory" ][ name="name" ][
num_cores_per_vcpu="num_cores_per_vcpu" ][ num_vcpus="num_vcpus" ][ uefi_boot="{ true |
false }" ][ vga_console="{ true | false }" ]
Required arguments
vm_list
Comma-delimited list of VM identifiers
Type: list of VMs
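For example, a hedged sketch that renames a VM and resizes its memory (names and values are illustrative; memory hot-add is subject to the limitations noted above):
<acropolis> vm.update my_vm name=my_vm_renamed memory=8G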
vm_group
Operations
<acropolis> vm_group.list
Required arguments
None
2. Nutanix Command-Line Interface (nCLI)

1. Verify that your system has Java Runtime Environment (JRE) version 5.0 or higher.
To check which version of Java is installed on your system or to download the latest version, go to
http://www.java.com/en/download/installed.jsp.
b. Click the user icon at the top of the console.
1. On your local system, open a command prompt (such as bash for Linux or CMD for Windows).
2. At the command prompt, start the nCLI by using one of the following commands.
• Replace management_ip_addr with the IP address of any Nutanix Controller VM in the cluster.
• Replace username with the name of the user (if not specified, the default is admin).
• (Optional) Replace user_password with the password of the user.
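The invocation itself is not reproduced above; as a sketch, a typical session against a cluster uses the standard nCLI connection options (-s for the server, -u for the user, and -p for the password), with values substituted as described in the preceding list:
ncli -s management_ip_addr -u 'username' -p 'user_password'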
Troubleshooting

ncli not found or not recognized as a command
The Windows %PATH% or Linux $PATH environment variable is not set.

Error: Bad credentials
The admin user password has been changed from the default and you did not specify the correct password. Type exit and start the nCLI again with the correct password.
Results: The Nutanix CLI is now in interactive mode. To exit this mode, type exit at the ncli> prompt.
Some actions require parameters at the end of the command. For example, when creating an NFS
datastore, you need to provide both the name of the datastore as it will appear to the hypervisor and the
name of the source storage container.
ncli> datastore create name="NTNX-NFS" ctr-name="nfs-ctr"
Parameter-value pairs can be listed in any order, as long as they are preceded by a valid entity and action.
Tip: To avoid syntax errors, surround all string values with double-quotes, as demonstrated in the
preceding example. This is particularly important when specifying parameters that accept a list of
values.
Embedded Help
The nCLI provides assistance on all entities and actions. By typing help at the command line, you can
request additional information at one of three levels of detail.
help
Provides a list of entities and their corresponding actions
entity help
Provides a list of all actions and parameters associated with the entity, as well as which parameters
are required, and which are optional
entity action help
Provides a list of all parameters associated with the action, as well as a description of each parameter
The nCLI provides additional details at each level. To control the scope of the nCLI help output, add the
detailed parameter, which can be set to either true or false.
For example, type the following command to request a detailed list of all actions and parameters for the
cluster entity.
ncli> cluster help detailed=true
You can also type the following command if you prefer to see a list of parameters for the cluster edit-params action without descriptions.
ncli> cluster edit-params help detailed=false
nCLI Entities
alerts: An Alert
authconfig: Configuration information used to authenticate user
cloud: Manage AWS or AZURE Cloud
Command arguments take the form keyword=value, where the keyword is a literal string required by the command and the value is the unique value for your environment.
alerts: Alert
Description An Alert
Alias alert
Acknowledge Alerts
Resolve Alerts
Get the list of entity values for the specified entity type and the directory name.
cloud: Cloud
cluster: Cluster
Clear configuration of SMTP Server used for transmitting alerts and report emails to Nutanix support
Configure discovered node with IP addresses (Hypervisor, CVM and IPMI addresses)
Generates and downloads the CSR from the discovered node based on certification information from the cluster
Get configuration of SMTP Server used for transmitting alerts and report emails to Nutanix support
Join the Nutanix storage cluster to the Windows AD domain specified in the cluster name. This operation is only
valid for clusters having hosts running Hyper-V.
Delete public key with the specified name from the cluster
Set configuration of SMTP Server used for transmitting alert and report emails to Nutanix support
Get the down-migrate times (in minutes) for Storage Tiers in a Storage Container
List of results of the certificate tests that were performed against key management servers
Assigns new passwords to encryption capable disks when cluster is password protected. If disk ids are not given,
rekey will be performed on all disks of the cluster
Test encryption configuration on given hosts and key management servers. If no parameters are specified, test will
be conducted on all nodes and key management servers configured in the cluster
datastore: Datastore
Create a new NFS datastore on the Physical Hosts using the Storage Container (ESX only). The Storage Container name will be used as the datastore name if a datastore name is not specified
List Physical Disks that are not assigned to any Storage Pool
events: Event
Description An Event
Alias event
Acknowledge Events
Add a Share
Delete a Share
Update a Share
Configure discovered node with IP addresses (Hypervisor, CVM and IPMI addresses)
Generates and downloads the CSR from the discovered node based on certification information from the cluster
Join one or more host(s) to a domain. This operation is only valid for hosts running Hyper-V.
Reset to the factory setting the default location used for storing the virtual machine configuration files and the virtual hard disk files. This operation is only valid for hosts running Hyper-V.
license: License
Returns a list of information for management servers which are used for managing the cluster.
multicluster: Multicluster
Add to multicluster
network: Network
Create a new out of band snapshot schedule in a Protection domain to take a snapshot at a specified time
Mark Protection domain as inactive and failover to the specified Remote Site
Mark a Protection domain for removal. Protection domain will be removed from the appliance when all
outstanding operations on it are cancelled
Mark Virtual Machines and NFS files for removal from a given Protection domain. They will be removed when all
outstanding operations on them are completed/cancelled
List networks corresponding to the local cluster or a remote site. If remote-site-name is provided, networks corresponding to that remote site are returned; otherwise the local cluster's networks are returned
Mark a Remote Site for removal. Site will be removed from the appliance when all outstanding operations that are
using the remote site are cancelled
Description Share
Alias
Operations
Disable Kerberos security services in the SMB server. This operation is only valid for clusters having hosts running
Hyper-V.
Enable Kerberos security services in the SMB server. This operation is only valid for clusters having hosts running
Hyper-V.
Get the status of Kerberos for the SMB server. This operation is only valid for clusters having hosts running Hyper-
V.
List Snapshots
Delete a Snapshot
snmp: SNMP
Add a transport to the list of snmp transports. Each transport is a protocol:port pair
Add a trap sink to the list of trap sinks. Each trap sink is a combination of trap sink address, username and
authentication information
Add an snmp user along with its authentication and privacy keys
Edit one of the trap sinks from the list of trap sinks. Editable properties are username, authentication and privacy
settings and protocol
List all the transports specified for the snmp agent. Each transport is a protocol:port pair
List all the configured trap sinks along with their user information.
Lists all the snmp users along with their properties like authentication and privacy information
software: Software
Download a Software
List Software
Delete a Software
Upload a Software
Generates an SSL certificate with a cipher strength of 2048 bits and replaces the existing certificate
Import SSL Certificate, key and CA certificate or chain file. This import replaces the existing certificate
tag: Tag
task: Tasks
Description A Task
Alias
Operations • Inspect Task : get
• List all Tasks : list | ls
• Poll Task to completion : wait-for-task
Inspect Task
user: User
Description A User
Alias
Operations • Change the password of a User : change-password
• Add a new User : create | add
• Delete a User : delete | remove | rm
• Disable a User : disable
• Edit a User : edit | update
• Enable a User : enable
• Get the IP Addresses and browser details of a user who is currently logged in : get-
logged-in-user | get-logged-in-user
• Get a list of all users who are currently logged in to the system along with their IP
Addresses and browser details : get-logged-in-users | get-logged-in-users
• Grant cluster administration role to a User : grant-cluster-admin-role
• Grant user administration role to a User : grant-user-admin-role
• List Users : list | ls
• Reset the password of a User : reset-password
• Revoke cluster administration role from a User : revoke-cluster-admin-role
• Revoke user administration role from a User : revoke-user-admin-role
• Show profile of current User : show-profile
Delete a User
Disable a User
Edit a User
Enable a User
Get the IP Addresses and browser details of a user who is currently logged in
Get a list of all users who are currently logged in to the system along with their IP Addresses and browser details
List Users
List Snapshots
Operations • Attach a disk from file level restore capable snapshot to a VM : attach-flr-disk
• Detach a file level restore disk from a VM : detach-flr-disk
• List Virtual Machine : list | ls
• Get all file level restore capable snapshots attached to a VM : list-attached-flr-
snapshots
• Get file level restore capable snapshots of a VM : list-flr-snapshots | ls-flr-
snaps
• Get snapshots of a VM : list-snapshots | ls-snaps
• Get stats data for Virtual Machine : list-stats | ls-stats
• Update FingerPrintOnWrite on all vdisks of a VM : update-fingerprint-on-write
• Update OnDiskDedup on all vdisks of a VM : update-on-disk-dedup
vstore: VStore
List VStores
Protect a VStore. Files in a protected VStore are replicated to a Remote Site at a defined frequency and these
protected files can be recovered in the event of a disaster
Unprotect a VStore
vzone: vZone
Description A vZone
Alias
List vZones
Delete a vZone
Specifying Credentials
When specifying a password on the command line, always enclose the password in single quotes. For
example: --hypervisor_password='nutanix/4u'
• To display all user name and password options for diagnostics.py, type /home/nutanix/diagnostics/diagnostics.py --help | egrep -A1 'password|user'
--hypervisor_password: Default hypervisor password.
(default: 'nutanix/4u')
--ipmi_password: The password to use when logging into the local IPMI device.
(default: 'ADMIN')
--ipmi_username: The username to use when logging into the local IPMI device.
(default: 'ADMIN')
• You can find all user name and password options for cluster, genesis, and setup_hyperv.py by also
typing --help | egrep -A1 'password|user' as part of the command. For example, setup_hyperv.py
--help | egrep -A1 'password|user'
cluster
commands:
add_public_key
convert_cluster
create
destroy
disable_auto_install
enable_auto_install
firmware_upgrade
foundation_upgrade
host_upgrade
ipconfig
migrate_zeus
pass_shutdown_token
reconfig
remove_all_public_keys
remove_public_key
reset
restart_genesis
start
status
stop
upgrade
upgrade_node
/usr/local/nutanix/cluster/bin/cluster
--add_dependencies
Include Dependencies.
Default: false
--bundle
Bundle for upgrading host in cluster.
--clean_debug_data
If 'clean_debug_data' is True, then when we destroy a cluster we will also remove the logs, binary
logs, cached packages, and core dumps on each node.
Default: false
--cluster_external_ip
Cluster ip to manage the entire cluster.
--cluster_function_list
List of functions of the cluster (use with create). Accepted functions are ['minerva', 'multicluster', 'ndfs',
'extension_store_vm', 'witness_vm', 'cloud_data_gateway']
Default: ndfs
--cluster_name
Name of the cluster (use with create).
--config
Path to the cluster configuration file.
--container_name
Name of the default container on the cluster.
--dns_servers
Comma separated list of one or more DNS servers.
cluster.ce_helper
--ce_version_map_znode_path
Zookeeper node containing the CE version mapping.
Default: /appliance/logical/community_edition/version_map
cluster.cluster_upgrade
--svm_reboot_timeout
Maximum time expected for SVM to reboot/shutdown.
cluster.consts
--allow_hetero_sed_node
Flag that can be set by an SRE to let a node have a mix of sed and non-sed disks.
Default: true
--app_deployment_progress_zknode
Zknode to use for deployment state machine
Default: /appliance/logical/app_deployment_progress
--app_deployment_proto_zknode
Zknode to use for deployment state machine
Default: /appliance/logical/app_deployment_info
--authorized_certs_file
Path to file containing list of permitted SSL certs.
Default: /home/nutanix/ssh_keys/AuthorizedCerts.txt
--azure_cert_dir
Directory in which Azure certificates are stored.
Default: /home/cloud/azure
--cassandra_health_znode
Zookeeper node where each cassandra creates an ephemeral node indicating it is currently available.
Default: /appliance/logical/health-monitor/cassandra
--cloud_credentials_zkpath
Zookeeper node path to the cloud credentials node.
Default: /appliance/logical/cloud_credentials
--command_timeout_secs
Number of seconds to spend retrying an RPC request.
Default: 180
--convert_cluster_foundation_zknode
Holds IP address of CVM which needs to be reimaged using foundation service.
Default: /appliance/logical/genesis/convert_cluster/foundation
--convert_cluster_zknode
Holds information about cluster conversion operations and current status for each node.
Default: /appliance/logical/genesis/convert_cluster
--create_backplane_interface_marker
Backplane interface is created during NOS upgrade if this marker file is present.
Default: /home/nutanix/.create_backplane_interface
--csr_cn_entry
Common name to use instead of <node_uuid>.nutanix.com
Default: None
--csr_cn_suffix
Suffix to use instead of nutanix.com when creating CSR
Default: nutanix.com
cluster.disk_flags
--clean_disk_log_path
Path to the logs from the clean_disks script.
Default: /home/nutanix/data/logs/clean_disks.log
--clean_disk_script_path
Path to the clean_disks script.
Default: /home/nutanix/cluster/bin/clean_disks
--disk_partition_margin
Limit for the number of bytes we will allow to be unpartitioned on a disk.
Default: 2147483648
--disk_size_threshold_percent
Percentage of available disk space to be allocated to stargate
Default: 95
--enable_all_ssds_for_oplog
DEPRECATED: Use all ssds attached to this node for oplog storage.
Default: true
--enable_fio_realtime_scheduling
Use realtime scheduling policy for fusion io driver.
Default: false
--fio_realtime_priority
Priority for fusion io driver, when realtime scheduling policy is being used.
Default: 10
--format_fusion_percent
The percentage of total capacity of fusion-io drives that should be formatted as usable
Default: 60
--max_ssds_for_oplog
Maximum number of ssds used for oplog per node. If value is -1, use all ssds available. If
only_select_nvme_disks_for_oplog gflag is true and NVMe disks are present, only NVMe disks are
used for selecting oplog disks.
Default: 8
--metadata_maxsize_GB
Maximum size of metadata in GB
Default: 30
--only_select_nvme_disks_for_oplog
If true and NVMe disks are present, only use NVMe disks for selecting oplog disks.
Default: true
--path_to_setscheduler_binary
Path to setscheduler binary, which is used to set realtime priority for fusion io driver.
Default: /home/nutanix/bin/setscheduler
--skip_metadata_link_setup
Skip creation of metadata links (use rootfs for storing metadata).
cluster.esx_upgrade_helper
--poweroff_uvms
Power off UVMs during hypervisor upgrade if Vmotion is not enabled or Vcenter is not configured for
cluster.
Default: false
--update_foundation_vibs
Update VIBS which are present in foundation during ESX hypervisor upgrade.
Default: true
cluster.genesis.breakfix.host_bootdisk_graceful
--clone_bootdisk_default_timeout
The default timeout for completion of cloning of bootdisk.
Default: 28800
--restore_bootdisk_default_timeout
The default timeout for completion of restore of bootdisk.
Default: 14400
--wait_for_phoenix_boot_timeout
The maximum amount of time for which the state machine waits, after cloning, for the node to be booted into the phoenix environment.
Default: 36000
cluster.genesis.breakfix.host_bootdisk_utils
--host_boot_timeout
The maximum amount of time for which the state machine waits for host to be up.
Default: 36000
cluster.genesis.breakfix.ssd_breakfix_esx_helper
--svm_regex
Regular expression used to find the SVM vmx name.
Default: ServiceVM
cluster.genesis.cluster_manager
--agave_dir
Identify if agave is running on cluster.
Default: /home/nutanix/agave
cluster.genesis.convert_cluster.esx_helper
--block_forward_conversion_on_dvs
Flag needs explicit reset by an SRE to let an ESX DVS cluster proceed with conversion to AHV with awareness that conversion back to ESX will break networking
cluster.genesis.convert_cluster.utils
--convert_cluster_disable_marker
Marker file to disable hypervisor conversion on node.
Default: /home/nutanix/.convert_cluster_disable
cluster.genesis.expand_cluster.expand_cluster
--node_up_retries
Number of retries for node genesis rpcs to be up after reboot
Default: 40
cluster.genesis.expand_cluster.pre_expand_cluster_checks
--nutanix_installer_size
Size of nutanix installer in KBs
Default: 2500000
cluster.genesis.expand_cluster.utils
--nos_packages_file
File containing packages present in the nos software
Default: install/nutanix-packages.json
--nos_tar_timeout_secs
Timeout in secs for tarring nos package
Default: 3600
cluster.genesis.la_jolla.la_jolla
--add_la_jolla_disk
Flag to add La Jolla disk back
Default: true
--nfs_buf_size
NFS buffer size
Default: 8388608
cluster.genesis.migration_manager
--num_migration_commit_retries
Number of times to retry updating the zeus configuration with new zookeeper ensemble.
Default: 5
--num_migration_rpc_retries
Number of times to retry Rpcs to other nodes during zookeeper migration.
Default: 10
--tcpkill
Path to the tcpkill binary
Default: /usr/sbin/tcpkill
--zookeeper_migration_wal_path
Path to zookeeper write-ahead-log file where migration state is recorded.
Default: /home/nutanix/data/zookeeper_migration.wal
--zookeeper_session_check_time_secs
Number of seconds zookeeper takes to verify and disconnect zookeeper quorum ip addresses that
are no longer valid.
Default: 10
--zookeeper_tcpkill_timeout_secs
Number of seconds to let tcpkill disconnect the tcp connections of zookeeper ensemble member
that is to be removed.
Default: 10
cluster.genesis.network_segmentation_helper
--retry_count_zk_map_publish
Retry count for publishing new zk mapping.
Default: 3
cluster.genesis.node_manager
--auto_discovery_interval_secs
Number of seconds to sleep when local node can't join any discovered cluster.
Default: 5
--dhcp_ntp_conf
dhcp ntp configuration file.
Default: /var/lib/ntp/ntp.conf.dhcp
--download_staging_area
Directory where we will download directories from other SVMs.
Default: /home/nutanix/tmp
cluster.genesis.resource_management.rm_helper
--common_pool_map
Mapping of node with its common pool memory in kb
Default: /appliance/logical/genesis/common_pool_map
cluster.genesis.resource_management.rm_prechecks
--cushion_memory_in_kb
Cushion Memory required in nodes before update
Default: 2097152
--delta_memory_for_nos_upgrades_kb
Amount of CVM memory to be increased during NOS upgrade
Default: 4194304
--host_memory_threshold_in_kb
Min host memory for memory update, set to 62 GB
Default: 65011712
cluster.genesis.resource_management.rm_tasks
--cvm_reconfig_component
Component for CVM reconfig
Default: kGenesis
--cvm_reconfig_operation
Component for CVM reconfig
Default: kCvmreconfig
cluster.genesis_utils
--svm_default_login
User name for logging into SVM.
Default: nutanix
--timeout_zk_operation
Timeout for zk operation like write
Default: 120
cluster.hades.client
--hades_jsonrpc_url
URL of the JSON RPC handler on the Hades HTTP server.
Default: /jsonrpc
--hades_port
Port that Hades listens on.
Default: 2099
--hades_rpc_timeout_secs
Timeout for each Hades RPC.
Default: 30
cluster.hades.disk_diagnostics
--hades_retry_count
Default retry count.
Default: 5
--max_disk_offline_count
Maximum error count for disk after which disk is to be removed.
Default: 3
--max_disk_offline_timeout
Maximum time value for which disk offline events are ignored.
Default: 3600
cluster.hades.disk_manager
--aws_cores_partition
Partition in which core files are stored on AWS.
Default: /dev/xvdb1
--boot_part_size
The size of a regular boot partition in sectors.
Default: 20969472
--device_mapper_name
A name of the device mapper that is to be created in case striped devices are discovered.
Default: dm0
--disk_unmount_retry_count
Number of times to retry unmounting the disk.
Default: 60
--firmware_upgrade_default_wait
Default wait time after issuing firmware upgrade.
Default: 10
cluster.hades.raid_utils
--min_raid_sync_speed
Minimum raid sync speed
Default: 50000
cluster.host_upgrade_common
--host_poweroff_uvm_file
File containing list of UVM which are powered off for host upgrade.
Default: /home/nutanix/config/.host_poweroff_vm_list
--upgrade_delay
Seconds to wait before running upgrade script on ESX host.
Default: 0
cluster.host_upgrade_helper
--host_disable_auto_upgrade_marker
Path to marker file to indicate that automatic host upgrade should not be performed on this node.
Default: /home/nutanix/.host_disable_auto_upgrade
--hypervisor_upgrade_history_file
File path where hypervisor upgrade history is recorded.
Default: /home/nutanix/config/hypervisor_upgrade.history
--hypervisor_upgrade_info_znode
Location in a zookeeper where we keep the Hypervisor upgrade information.
Default: /appliance/logical/upgrade_info/hypervisor
cluster.ipv4config
--end_linklocal_ip
End of the range of link local IP4 addresses.
Default: 169.254.254.255
--esx_cmd_timeout_secs
Default timeout for running a remote command on an ESX host.
Default: 120
--hyperv_cmd_timeout_secs
Default timeout for running a remote command on an hyperv host.
Default: 120
--ipmi_apply_config_retries
Number of times to try applying an IPMI IPv4 configuration before failing.
Default: 6
cluster.kvm_upgrade_helper
--ahv_enter_maintenance_mode_timeout_secs
Seconds to wait before retrying enter maintenance mode operation on failures
Default: 7200
--kvm_reboot_delay
Seconds to delay before reboot KVM host
Default: 30
cluster.license_config
--license_config_file
Zookeeper path where license configuration is stored.
Default: /appliance/logical/license/configuration
--license_config_proto_file
License configuration file shipped with NOS.
Default: configuration.cfg
--license_dir
License feature set files directory shipped with NOS.
Default: /home/nutanix/serviceability/license
--license_public_key
License public key string shipped with NOS.
Default: /appliance/logical/license/public_key
--license_public_key_str
License public key string shipped with NOS.
Default: public_key.pub
--zookeeper_license_root_path
Zookeeper path where license information is stored.
cluster.ncc_upgrade_helper
--ncc_installation_path
Location where NCC is installed on a CVM.
Default: /home/nutanix/ncc
--ncc_num_nodes_to_upload
Number of nodes to upload the NCC installer directory to.
Default: 2
--ncc_uncompress_path
Location for uncompressing nutanix NCC binaries.
Default: /home/nutanix/data/ncc/installer
--ncc_upgrade_info_znode
Location in a zookeeper where we keep the Upgrade node information.
Default: /appliance/logical/upgrade_info/ncc
--ncc_upgrade_params_znode
Zookeeper location to store NCC upgrade parameters.
Default: /appliance/logical/upgrade_info/ncc_upgrade_params
--ncc_upgrade_status
Location in Zookeeper where we store upgrade status of nodes.
Default: /appliance/logical/genesis/ncc_upgrade_status
--ncc_upgrade_timeout_secs
Timeout in seconds for the NCC upgrade module.
Default: 30
--ncc_version_znode
Zookeeper node where we keep the current release version of NCC.
Default: /appliance/logical/genesis/ncc_version
cluster.preupgrade_checks
--arithmos_binary_path
Path to the arithmos binary.
Default: /home/nutanix/bin/arithmos
--min_disk_space_for_upgrade
Minimum space (KB) required on /home/nutanix for upgrade to proceed.
Default: 3600000
--min_replication_factor
Minimum replication factor required per container.
Default: 2
--mountsfile
Path to the mounts file in proc.
Default: /proc/mounts
cluster.preupgrade_checks_ncc_helper
--ncc_temp_location
Location to extract NCC.
Default: /home/nutanix/ncc_preupgrade
cluster.reset_helper
--reset_status_path
Location to store script used to get cluster reset progress.
Default: /tmp/reset_status
--source_reset_status_path
Location of source script used to generate reset status script with IPs injected.
Default: /home/nutanix/cluster/bin/reset_status
cluster.rsyslog_helper
--lock_dir
Default path for nutanix lock files.
Default: /home/nutanix/data/locks/
--log_dir
Default path for nutanix log files.
Default: /home/nutanix/data/logs
--log_state_dir
Default path for syslog state files. This stores the state of rsyslog across restarts.
Default: /home/nutanix/config/syslog
--module_level
Level of syslog used for sending module logs
Default: local0
--rsyslog_conf_file
Default Configuration file for Rsyslog service.
Default: /etc/rsyslog.d/rsyslog-nutanix.conf
--rsyslog_rule_header
Nutanix specified rsyslog rules are appended only below this marker.
Default: # Nutanix remote server rules
--rsyslog_rule_header_end
Nutanix specified rsyslog rules are added above this marker.
Default: # Nutanix remote server rules end
cluster.salt.consts
--iptables_salt_config_path
Path to salt config for iptables state.
Default: /srv/pillar/iptables.sls
--salt_call_command_path
Path to the salt-call command.
Default: /usr/bin/salt-call
--salt_jinja_template
Path to salt ipv4 jinja template
Default: /srv/salt/security/CVM/iptables/iptables4.jinja
--salt_states_templates_dir
Path to dir holding the salt templates.
Default: /home/nutanix/config/salt_templates
cluster.salt.firewall_config_helper
--consider_salt_framework
Whether to consider salt framework or not
Default: false
cluster.service.cluster_config_service
--cluster_config_path
Path to the Cluster Config binary.
Default: /home/nutanix/bin/cluster_config
--cluster_config_server_rss_mem_limit
Maximum amount of resident memory Cluster Config may use on an Svm with 8GB memory
configuration.
Default: 268435456
cluster.service.curator_service
--curator_config_json_file
JSON file with curator configuration
Default: curator_config.json
--curator_data_dir_size
Curator data directory size in MB (80 GB).
Default: 81920
--curator_data_dir_symlink
Path to curator data directory symlink.
Default: /home/nutanix/data/curator
cluster.service.foundation_service
--foundation_path
Path to the foundation service script.
Default: /home/nutanix/foundation/bin/foundation
--foundation_rss_mem_limit
Maximum amount of resident memory Foundation may use on a CVM with 8GB memory
configuration.
Default: 1073741824
cluster.service.ha_service
--def_stargate_stable_interval
Default number of seconds a stargate has to be alive to be considered stable and healthy.
Default: 30
--hyperv_internal_switch_health_timeout
Timeout for how long we should wait before waking up the thread that monitors internal switch health
on HyperV.
Default: 30
--num_worker_threads
The number of worker threads to use for running tasks.
Default: 8
--old_stop_ha_zk_node
When this node is created the old ha should not take any actions on the cluster.
Default: /appliance/logical/genesis/ha_stop
--stargate_aggressive_monitoring_secs
Default number of seconds a stargate is aggressively monitored after it is down.
Default: 3
--stargate_exit_handler_timeout_secs
Default timeout for accessing the Stargate exit handler page.
Default: 10
cluster.service.service_utils
--auto_set_cloud_gflags
If true, recommended gflags will be automatically set for cloud instances.
Default: true
--cgroup_subsystems
Default subsystems used for cgroup creation.
Default: cpu,cpuacct,memory,freezer,net_cls
--enable_service_monitor
If true, C based service_monitor will re-spawn the service process upon exit, else Python based
self_monitor will be used.
Default: true
--memory_limits_base_size_kb
Total memory size of the standard Svm based on which memory limits are derived.
Default: 8388608
--path_to_cgclassify_binary
Path to cgclassify binary, which is used to add a process into cgroup.
Default: /bin/cgclassify
--path_to_cgcreate_binary
Path to cgcreate binary, which is used to create a cgroup.
Default: /bin/cgcreate
--path_to_cgset_binary
Path to cgset binary, which is used to set parameter for cgroup.
Default: /bin/cgset
--path_to_chrt_binary
Path to chrt binary, which is used to set realtime priority for a process.
Default: /usr/bin/chrt
cluster.service.zookeeper_service
--zookeeper_client_port
TCP port number for zookeeper service clients.
Default: 9876
--zookeeper_config_template
Path to zookeeper configuration template.
Default: /home/nutanix/cluster/config/zoo.cfg.template
--zookeeper_cpuset_exclude_cpu_default
Default CPU affinity. This is a comma-separated list of cpus to be excluded for this component. If specified as -1, cpuset exclude cpu is disabled for this component.
Default: -1
--zookeeper_cpuset_exclude_cpu_multicluster
Default CPU affinity for multiclusters. This is a comma-separated list of cpus to be excluded for this component. If specified as -1, cpuset exclude cpu is disabled for this component.
Default: -1
--zookeeper_cpuset_exclude_cpu_ndfs
Default CPU affinity for AOS clusters. This is a comma-separated list of cpus to be excluded for this component. If specified as -1, cpuset exclude cpu is disabled for this component.
Default: 0
--zookeeper_data_dir
Path to zookeeper data directory.
Default: /home/nutanix/data/zookeeper
--zookeeper_env_path
Path to shell script with zookeeper environment.
Default: /etc/profile.d/zookeeper_env.sh
--zookeeper_init
Path to the zookeeper_init tool.
Default: /home/nutanix/bin/zookeeper_init
--zookeeper_leader_election_port
TCP port number for zookeeper service leader election.
cluster.sshkeys_helper
--authorized_keys_file
Path to file containing list of permitted RSA keys
Default: /home/nutanix/.ssh/authorized_keys2
--id_rsa_path
Nutanix default SSH key used for logging into SVM.
Default: /home/nutanix/.ssh/id_rsa
--ssh_client_configuration
Location of ssh client configuration file.
Default: /home/nutanix/.ssh/config
--ssh_config_server_alive_count_max
Max SSH keepalive messages missed before declaring connection dead.
Default: 3
--ssh_config_server_alive_interval
SSH keepalive message interval.
Default: 10
--ssh_path
Location of ssh folder for nutanix.
Default: /home/nutanix/.ssh
cluster.upgrade_helper
--cluster_name_update_timeout
Default timeout for updating the cluster name in zeus.
Default: 5
--num_nodes_to_upload
Number of nodes to upload the installer directory to.
Default: 2
--nutanix_packages_json_basename
Base file name of the JSON file that contains the list of packages to expect in the packages directory.
Default: nutanix-packages.json
--upgrade_genesis_restart
Location in Zookeeper where we store if genesis restart is required or not.
Default: /appliance/logical/upgrade_info/upgrade_genesis_restart
--foundation_ipv6_interface
Ipv6 interface corresponding to eth0
Default: 2
cluster.utils.foundation_utils
--foundation_root_dir
Root directory for foundation in CVM
Default: /home/nutanix
cluster.utils.hyperv_ha_utils
--default_internal_switch_monitor_interval
Default polling period for monitoring internal switch health
Default: 30
--default_ping_success_percent
Default percentage of success which is used to determine switch health
Default: 100
--default_total_ping_count
Default number of pings sent to determine switch health
Default: 10
cluster.utils.new_node_nos_upgrade
--stand_alone_upgrade_log
Log file for stand-alone upgrade.
Default: /home/nutanix/data/logs/stand_alone_upgrade.out
cluster.xen_upgrade_helper
--xen_maintenance_mode_retries
Seconds to delay before reboot Xen host
Default: 5
--xen_reboot_delay
Seconds to delay before reboot Xen host
Default: 30
--xen_uvm_no_migration_counter
Number of retries to wait for UVMs to migrate from xen host after it is put in maintenance mode
Default: 7
--xen_webserver_port
Port for webserver that will serve files required during xen upgrades
Default: 8999
Usage
commands:
cleanup
delete_disks
drain_oplog
list_runtime_test_args
recreate_disks
run
run_iperf
/home/nutanix/diagnostics/diagnostics.py
--add_vms_to_pd
Whether to add Diagnostic VMs to pd.
Default: false
--cluster_external_data_services_ip
Cluster external data services IP
--collect_cassandra_latency_stats
Collect cassandra latency stats for each test.
Default: true
--collect_iostat_info
Collect iostat information (reads and writes to disk).
Default: false
--collect_sched_stats
Collect stats related to Linux scheduling in SVM
Default: false
--collect_stargate_stats
Grab snapshot of 2009 stargate stats page before and after every test.
Default: false
--collect_stargate_stats_interval
Interval in secs to collect stargate stats.
Default: 10
--collect_stargate_stats_timeout
Max timeout in secs for stargate stats collection.
Default: 720
--collect_top_stats
Collect top stats for each test.
Default: false
--collect_uvm_stats
Collects uvm cpu and latency stats.
Default: false
genesis
Usage
/usr/local/nutanix/cluster/bin/genesis
--foreground
Run Genesis in foreground.
Default: false
--genesis_debug_stack
Flag to indicate whether a signal handler needs to be registered for debugging greenlet stacks.
Default: true
cluster.genesis.breakfix.host_bootdisk_graceful
--clone_bootdisk_default_timeout
The default timeout for completion of cloning of bootdisk.
Default: 28800
--restore_bootdisk_default_timeout
The default timeout for completion of restore of bootdisk.
Default: 14400
--wait_for_phoenix_boot_timeout
The maximum amount of time for which the state machine waits, after cloning, for the node to be booted into the phoenix environment.
Default: 36000
cluster.genesis.breakfix.host_bootdisk_utils
--host_boot_timeout
The maximum amount of time for which the state machine waits for host to be up.
Default: 36000
cluster.genesis.breakfix.ssd_breakfix_esx_helper
--svm_regex
Regular expression used to find the SVM vmx name.
Default: ServiceVM
cluster.genesis.cluster_manager
--agave_dir
Identify if agave is running on cluster.
Default: /home/nutanix/agave
cluster.genesis.convert_cluster.esx_helper
--block_forward_conversion_on_dvs
Flag needs explicit reset by an SRE to let an ESX DVS cluster proceed with conversion to AHV with awareness that conversion back to ESX will break networking
Default: true
cluster.genesis.convert_cluster.utils
--convert_cluster_disable_marker
Marker file to disable hypervisor conversion on node.
Default: /home/nutanix/.convert_cluster_disable
cluster.genesis.expand_cluster.expand_cluster
--node_up_retries
Number of retries for node genesis rpcs to be up after reboot
Default: 40
cluster.genesis.expand_cluster.pre_expand_cluster_checks
--nutanix_installer_size
Size of nutanix installer in KBs
Default: 2500000
cluster.genesis.expand_cluster.utils
--nos_packages_file
File containing packages present in the nos software
Default: install/nutanix-packages.json
--nos_tar_timeout_secs
Timeout in secs for tarring nos package
Default: 3600
cluster.genesis.la_jolla.la_jolla
--add_la_jolla_disk
Flag to add La Jolla disk back
Default: true
--nfs_buf_size
NFS buffer size
Default: 8388608
cluster.genesis.migration_manager
--num_migration_commit_retries
Number of times to retry updating the zeus configuration with the new zookeeper ensemble.
Default: 5
--num_migration_rpc_retries
Number of times to retry RPCs to other nodes during zookeeper migration.
Default: 10
--tcpkill
Path to the tcpkill binary
Default: /usr/sbin/tcpkill
--zookeeper_migration_wal_path
Path to zookeeper write-ahead-log file where migration state is recorded.
Default: /home/nutanix/data/zookeeper_migration.wal
--zookeeper_session_check_time_secs
Number of seconds zookeeper takes to verify and disconnect zookeeper quorum IP addresses that
are no longer valid.
Default: 10
--zookeeper_tcpkill_timeout_secs
Number of seconds to let tcpkill disconnect the TCP connections of the zookeeper ensemble member
that is to be removed.
Default: 10
cluster.genesis.network_segmentation_helper
--retry_count_zk_map_publish
Retry count for publishing new zk mapping.
Default: 3
cluster.genesis.node_manager
--auto_discovery_interval_secs
Number of seconds to sleep when the local node can't join any discovered cluster.
Default: 5
--dhcp_ntp_conf
DHCP NTP configuration file.
Default: /var/lib/ntp/ntp.conf.dhcp
--download_staging_area
Directory where we will download directories from other SVMs.
Default: /home/nutanix/tmp
cluster.genesis.resource_management.rm_helper
--common_pool_map
Mapping of each node to its common pool memory in KB
Default: /appliance/logical/genesis/common_pool_map
cluster.genesis.resource_management.rm_prechecks
--cushion_memory_in_kb
Cushion memory required on nodes before an update
Default: 2097152
--delta_memory_for_nos_upgrades_kb
Amount by which CVM memory is increased during a NOS upgrade
Default: 4194304
--host_memory_threshold_in_kb
Minimum host memory for a memory update; set to 62 GB
Default: 65011712
cluster.genesis.resource_management.rm_tasks
--cvm_reconfig_component
Component for CVM reconfig
Default: kGenesis
--cvm_reconfig_operation
Operation for CVM reconfig
Default: kCvmreconfig
cluster.genesis.server
--genesis_document_root
Document root where static files are served from.
Default: /home/nutanix/cluster/www
--genesis_server_timeout_secs
Timeout for RPCs made through the HTTP server.
Default: 30
--svm_default_login
User name for logging in to the SVM.
Default: nutanix
--timeout_zk_operation
Timeout for ZK operations such as writes
Default: 120
--upgrade_fail_marker
Marker to indicate upgrade has failed.
Default: /appliance/logical/genesis/upgrade_failed
ncc
Usage
/home/nutanix/ncc/bin/ncc
--generate_plugin_config_template
Generate plugin config for the plugin
Default: false
--help
show this help
Default: 0
--helpshort
show usage only for this module
Default: 0
--helpxml
like --help, but generates XML output
Default: false
--ncc_logging_dir
Directory where script log files are stored.
Default: /home/nutanix/data/logs
--ncc_run_on_dev_vm
Whether NCC is running on a dev VM.
Default: false
--ncc_version
Show script version.
Default: false
--pre_upgrade_check_enable
Flag to specify if the current NCC run is for pre-upgrade checks
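For example, an illustrative invocation that prints the installed NCC version using the --ncc_version flag above:
user@host$ /home/nutanix/ncc/bin/ncc --ncc_version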
cluster.ncc_upgrade_helper
--ncc_installation_path
Location where NCC is installed on a CVM.
Default: /home/nutanix/ncc
--ncc_num_nodes_to_upload
Number of nodes to upload the NCC installer directory to.
Default: 2
--ncc_uncompress_path
Location for uncompressing nutanix NCC binaries.
Default: /home/nutanix/data/ncc/installer
--ncc_upgrade_info_znode
Location in Zookeeper where we keep the upgrade node information.
Default: /appliance/logical/upgrade_info/ncc
--ncc_upgrade_params_znode
Zookeeper location to store NCC upgrade parameters.
Default: /appliance/logical/upgrade_info/ncc_upgrade_params
--ncc_upgrade_status
Location in Zookeeper where we store upgrade status of nodes.
Default: /appliance/logical/genesis/ncc_upgrade_status
--ncc_upgrade_timeout_secs
Timeout in seconds for the NCC upgrade module.
Default: 30
--ncc_version_znode
Zookeeper node where we keep the current release version of NCC.
Default: /appliance/logical/genesis/ncc_version
ncc.analytics.algorithms
--auto_find_frequency
Whether to let the algorithm automatically find the frequency of the data set
Default: false
ncc.analytics.client
--r_server_wait_time_secs
The wait time for the R server to start.
Default: 0.01
--root_dir
The root directory of the R server.
Default: /home/nutanix
ncc.analytics.model
--model_dir
The directory to store all models obtained from R
Default: /home/nutanix/data/ncc/
ncc.analytics.recommendation
--alert_cpu_check_id
Check id of the alert for cpu runway.
Default: 120089
--alert_disk_check_id
Check id of the alert for storage runway.
Default: 120113
ncc.cluster_checker
--acquire_ncc_lock
If true, run NCC only if the NCC lock is acquired.
Default: true
--anonymize_ncc_output
Flag to specify if ncc output should be anonymized.
Default: false
--factory_test_mode
Flag to specify if NCC is running in factory testing mode.
Default: false
ncc.config_module.config
--min_nos_version_with_impact_type
Minimum NOS version for which we move category_list members to impact_type_list and
classification_list.
Default: 5.0
ncc.data_access.insights_data_access
--batch_entity_cnt
Batch count of entities read from and written to the Entity DB
Default: 1000
--use_ergon_interface
Flag to specify if the code should use ergon as the task interface.
Default: true
ncc.ncc_logger
--enable_plugin_wise_logging
Enabling this flag adds the plugin name to each log record during a plugin run
Default: true
ncc.ncc_utils.arithmos_utils
--cluster_cpu_usage_sampling_interval_sec
Sampling interval for CPU usage for a cluster.
Default: 300
--cluster_memory_usage_sampling_interval_sec
Sampling interval for Memory usage for a cluster.
Default: 300
--cpu_capacity_data_path
The file containing the CPU capacity of a cluster in Hz.
Default: /home/nutanix/data/ncc/cpu_capacity.json
--cpu_usage_processed_data_path
The file containing the CPU usage of a cluster in Hz.
Default: /home/nutanix/data/ncc/cpu_usage.json
--cpu_usage_raw_data_path
The file containing the CPU usage of a cluster in cycles along with timestamps.
Default: /home/nutanix/data/ncc/cpu_usage_raw.json
--memory_capacity_data_path
The file containing the memory capacity of a cluster in bytes.
Default: /home/nutanix/data/ncc/memory_capacity.json
--memory_usage_processed_data_path
The file containing the memory usage of a cluster in bytes.
Default: /home/nutanix/data/ncc/memory_usage.json
--memory_usage_raw_data_path
The file containing the memory usage of a cluster in bytes along with timestamps.
Default: /home/nutanix/data/ncc/memory_usage_raw.json
ncc.ncc_utils.cluster_utils
--cmd_timeout_secs
Timeout seconds for commands run on ESX and CVMs
Default: 30
--copy_timeout_secs
Timeout seconds for file copy operations.
ncc.ncc_utils.config_utils
--ncc_config_dir
Directory path where ncc config files are kept
Default: /home/nutanix/ncc/config
--ncc_node_config_file_name
Config file name for keeping node-specific info
ncc.ncc_utils.esx_utils
--fix_high_perf_policy
Fix high performance policy.
Default: false
--temp_cvm_vmx
Temporary path to store the CVM vmx file.
Default: /tmp/ServiceVM.vmx
ncc.ncc_utils.gflags_definition
--alert_severity_list
Comma-separated list of severity types for alerts to retrieve. The valid values are: ['kInfo', 'kWarning',
'kCritical', 'kAudit', 'kAll']
Default: kAll
--anonymize_output
Flag to specify if the output of log collector should be anonymized.
Default: false
--append_logs
Flag to specify if the logs should be appended to the previously existing logs. E.g.: --append_logs=1
Default: 0
--case_number
The case number for which logs are collected. If specified, logs are stored in this directory on FTP
server.
--collect_activity_traces
Boolean flag to specify whether to collect the activity traces.
Default: 1
--collect_all_hypervisor_logs
Flag to specify if all /var/log/* files should be collected.
Default: true
--collect_binary_logs
Boolean flag to specify whether to collect binary logs.
Default: 0
--collect_boot_log
Flag to specify if the /var/log/boot.log should be collected.
Default: true
--collect_component_page
Boolean flag to specify whether to collect snapshots of the component pages
Default: 1
--collect_cores
Boolean flag to specify whether to collect core files of the given components
Default: 0
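As an illustrative, hypothetical combination of the log-collector flags above, a run that appends to existing logs and files the collection under a support case might look like the following (the log_collector module name and the case number 12345 are assumptions for this example):
user@host$ /home/nutanix/ncc/bin/ncc log_collector --append_logs=1 --case_number=12345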
ncc.ncc_utils.globals
--insights_rpc_server_ip
The IP address of the Insights RPC server.
Default: 127.0.0.1
--insights_rpc_server_port
The port where the Insights RPC server listens.
Default: 2027
ncc.ncc_utils.hyperv_utils
--ncc_server_port
Port that the HTTP server started by NCC listens on.
Default: 2101
ncc.ncc_utils.hypervisor_utils
--nic_link_down_job_state
Path to the job state for nic status.
ncc.ncc_utils.network_utils
--ping_command_timeout_secs
Ping command timeout in secs for checking reachable hosts
Default: 10
ncc.ncc_utils.progress_monitor
--progress_sleep_duration_sec
Time duration for which the progress monitor should sleep between retries.
Default: 2
ncc.ncc_utils.recommendation_util
--cluster_zeus_config_base_path
Path where we keep zeus config for each cluster.
Default: /appliance/physical/zeusconfig
ncc.plugins.base_plugin
--default_cluster_uuid
This flag specifies the default cluster UUID.
Default: Unknown
--waiting_plugin_sleep_time
This flag specifies the amount of time waiting plugins should sleep before checking for idle
threads
Default: 0.05
ncc.plugins.consts
--HDD_latency_threshold_ms
HDD await threshold (ms/command) to determine disk issues.
Default: 500
--SSD_latency_threshold_ms
SSD await threshold (ms/command) to determine disk issues.
Default: 50
ncc.plugins.firstimport
--ncc_base_dir
The base NCC directory
Default: /home/nutanix/ncc
ncc.plugins.log_collector.binary_log_collector
--binary_log_tool
The binary log analyzer tool to retrieve binary logs.
Default: /home/nutanix/bin/binary_log_analyzer
--chunk_time
The number of hours for which binary logs are collected at once. Binary logs are collected in chunks
to avoid overfilling the memory.
Default: 1
ncc.plugins.log_collector.component_data_collector
--chronos_master_port
The port from where chronos master activity traces should be collected.
Default: 2011
--chronos_node_port
The port from where chronos node activity traces should be collected.
Default: 2012
--component_data_cerebro_port
The port on which cerebro activity traces should be collected.
Default: 2020
--component_data_curator_port
The port from where curator activity traces should be collected.
Default: 2010
--component_data_ip
The IP address from where activity traces should be collected.
Default: http://127.0.0.1
--component_data_stargate_port
The port on which stargate activity traces should be collected.
ncc.plugins.log_collector.cvm_kernel_logs_collector
--kernel_logs_subcomponent_list
Comma-separated list of subcomponents for which kernel logs are to be collected. The valid values
are: ['dmesg.old', 'dmesg', 'wtmp']
Default: all
ncc.plugins.log_collector.cvm_logs_collector
--collect_zookeeper_transaction_logs
Flag to specify if zookeeper transaction logs should be collected.
Default: false
ncc.plugins.log_collector.fileserver_logs_collector
--minerva_collect_cores
Whether to collect core files
Default: true
--minerva_collect_sysstats
Whether to collect sysstats logs
Default: false
ncc.plugins.log_collector.log_collector
--curr_logfile_timestamp
The timestamp for the currently generated tarball
Default: 0
--gz_extension
The extension of compressed tarfile.
Default: .tar.gz
--hypervisor_log_input_dir
If this flag is specified, the logs on the hypervisor will be collected from the given directory; otherwise
they will be collected from the default configured directory.
--log_input_dir
If this flag is specified, logs will be collected from the given directory; otherwise they will be collected
from the default configured directory.
--max_used_percent_hard_limit
The maximum used percentage after which we do not write any logs
Default: 95
ncc.plugins.log_collector.log_utils.hyperv_log
--hyperv_cluster_logs
Collect Hyper-V cluster logs
Default: false
setup_hyperv.py
Usage
commands:
register_shares
setup_scvmm
/usr/local/nutanix/bin/setup_hyperv.py
--configure_library_share
Whether a library share should be configured
Default: None
--default_host_group_path
The default SCVMM host group
Default: All Hosts
--help
Print detailed help
Default: false
--library_share_name
The name of the container that will be registered as a library share in SCVMM
--ncli_password
Password to be used when running ncli
--password
Domain account password for the host