Scality RING7 Setup and Installation Guide (v7.3.0)
About Scality
Scality, world leader in object and cloud storage, develops cost-effective Software Defined Storage
(SDS): the RING, which serves over 500 million end-users worldwide with over 800 billion objects in pro-
duction; and the open-source S3 Server. Scality RING software deploys on any industry-standard x86
server, uniquely delivering performance, 100% availability and data durability, while integrating easily in
the datacenter thanks to its native support for directory integration, traditional file applications and over
45 certified applications. Scality’s complete solutions excel at serving the specific storage needs of
Global 2000 Enterprise, Media and Entertainment, Government and Cloud Provider customers while
delivering up to 90% reduction in TCO versus legacy storage. A global company, Scality is
headquartered in San Francisco.
Check the iteration date of the Scality RING7 Setup and Installation Guide (v7.3.0) against the Scal-
ity RING Customer Resources web page to ensure that the latest version of the publication is in
hand.
With connector servers, it can be advantageous to put in place dedicated FrontEnd interfaces (i.e., application facing) and BackEnd interfaces (i.e., Chord interfaces). In this case, the same bonding/teaming recommendations are applicable to both FrontEnd and BackEnd interfaces.
Link Aggregation Control Protocol (LACP) is defined in the IEEE 802.3ad standard. To increase bandwidth and redundancy, combine multiple links into a single logical link. All links participating in a single logical link must have the same settings (e.g., duplex mode, link speed) and interface mode (e.g., access or trunk). It is possible to have up to 16 ports in an LACP EtherChannel; however, only eight can be active at one time.
LACP can be configured in either passive or active mode. In active mode, the port actively tries to bring
up LACP. In passive mode, it does not initiate the negotiation of LACP.
Upon logging into the Cisco switch:
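The exact commands depend on the switch model and IOS version; as a hedged sketch, a two-port LACP EtherChannel is typically configured along the following lines (the interface range, channel-group number, and switchport mode are assumptions):
switch# configure terminal
switch(config)# interface range GigabitEthernet1/0/1 - 2
switch(config-if-range)# channel-group 1 mode active
switch(config-if-range)# exit
switch(config)# interface port-channel 1
switch(config-if)# switchport mode trunk
switch(config-if)# end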
Juniper Switches
The IEEE 802.3ad link aggregation enables Ethernet interfaces to be grouped to form a single link layer
interface, also known as a link aggregation group (LAG) or bundle. Aggregating multiple links between
physical interfaces creates a single logical point-to-point trunk link or a LAG.
LAGs balance traffic across the member links within an aggregated Ethernet bundle and effectively increase the uplink bandwidth. Another advantage of link aggregation is increased availability, because
LAGs are composed of multiple member links. If one member link fails, the LAG will continue to carry
traffic over the remaining links.
Link Aggregation Control Protocol (LACP), a component of IEEE 802.3ad, provides additional functionality for LAGs.
Physical Ethernet ports belonging to different member switches of a Virtual Chassis configuration
can be combined to form a LAG.
After logging in to the switch (virtual chassis, i.e., multiple switches acting as one switch, or a single switch):
{master}[edit]
Save the configuration and exit configure mode:
user@switch01# commit
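For reference, a hedged sketch of the LAG definition committed above (the member interface names and the aggregated interface number are assumptions):
user@switch01# set chassis aggregated-devices ethernet device-count 1
user@switch01# set interfaces xe-0/0/0 ether-options 802.3ad ae0
user@switch01# set interfaces xe-0/0/1 ether-options 802.3ad ae0
user@switch01# set interfaces ae0 aggregated-ether-options lacp active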
Device | Minimum Recommended | Comment
OS disks | 2 | RAID 1
Memory | 16 GB | If the RING Infrastructure is 12 servers or more, consider adding more memory.
Device | Minimum Recommended | Comment
OS disks | 2 in RAID 1 | HDDs (minimum 400GB for logs) in RAID 1, as SSDs typically wear out faster than other disk types when data is written and erased multiple times.
Memory | 32GB | Recommended (unless the Scality Sizing Tool indicates the need for more)
Frontend interface (optional) | 2 x 10 Gb/s | Linux bonding with 802.3ad dynamic link aggregation (LACP).
Chord Interface | 2 x 10 Gb/s | Linux bonding with 802.3ad dynamic link aggregation (LACP).
Admin Interface (optional) | 1 x 1 Gb/s | Required if Supervisor-connector communications must be separated from production traffic
CPU | 8 vCPUs | Connectors are CPU bound (more is better)
Power supply | 2 | Power with redundant power supplies (on the hosts if VMs)
Device | Minimum Recommended | Comment
Metadata disks | 1 (minimum capacity), 2 (medium capacity), 6+ (high capacity) | SSD disks (>600GB) for bizobj metadata (mandatory, except for architectures using REST connectors). Refer to the Scality Sizing Tool.
Memory | 128GB | Recommended, unless the Scality Sizing Tool indicates that more memory is required.
Chord Interface | 2 x 10 Gb/s | Linux bonding with 802.3ad dynamic link aggregation (LACP).
Admin Interface | 1 x 1 Gb/s | Required if Supervisor-connector communications must be separated from production traffic
CPU | 12 cores (24 threads) | Storage nodes are not CPU bound
RAID controller | Mandatory (>1GB of cache) | A RAID controller with a minimum of 1GB cache (refer to Supported RAID Controllers).
Power supply | 2 | Power with redundant power supplies
Supported RAID Controllers
Controller | Cache Size (GB) | Valid Platforms | Automatic Disk Management Support?
P440 | 4 | HPE | Yes
P840ar | 2 | HPE | Yes
Cisco 12Gbps Modular RAID PCIe Gen 3.0 | 2/4 | Cisco | Yes
PERC H730 | 2 | Dell | Yes
LSI 9361-8i | 4 | Dell | Yes
RAID controllers with a minimum of 1GB cache are mandatory for RING Storage
Node Servers. Contact Scality Sales for installations involving devices not recog-
nized above as Supported.
Scality also supports Ubuntu in very specific use cases. Please con-
tact Scality Sales in the event that CentOS/RHEL cannot be deployed
in the target environment.
As the partitioning directives example shows, DHCP acquires an IP address for the installation.
Once the system has been built and is on the network, perform the necessary network con-
figurations (e.g., bonding, teaming etc.) to achieve higher throughput or redundancy.
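As a hedged illustration, an 802.3ad bond on CentOS/RHEL 7 can be defined with network-scripts files similar to the following (interface names and addressing are assumptions to adapt to the environment):
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"
BOOTPROTO=none
IPADDR=10.0.0.11
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for each member interface)
DEVICE=eth0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes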
To ensure that the required Scality Installer packages can be downloaded when connecting to the Inter-
net via a proxy, perform the following procedure.
1. Add the proxy address and port to the yum configuration file, /etc/yum.conf (including authentication settings as required).
proxy=http://yourproxyaddress:proxyport
# If authentication settings are needed:
proxy_username=yum-user-name
proxy_password=yum-user-password
Deploying SSH
The deployment of SSH keys between the Supervisor and the other RING servers facilitates the install-
ation process.
1. Working from the Supervisor, create the private/public key pair.
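A minimal sketch of the key creation and deployment, assuming root access and default key paths ({{nodeIP}} stands for each RING server address):
ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@{{nodeIP}}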
1. Start Scality Installer (refer to "Starting the Scality Installer" on page 23 for detailed information).
2. Indicate centos at the first prompt, asking for the user to connect to the nodes.
Please provide the user to connect to the nodes (leave blank for "root"): centos
Please provide the SSH password to connect to the nodes (leave blank if you have
a private key):
Please provide the private SSH key to use or leave blank to use the SSH agent:
/home/centos/.ssh/id_rsa
Please provide the passphrase for the key /home/centos/.ssh/id_rsa (leave blank
if no passphrase is needed):
Load the platform description file '/home/centos/pdesc.csv'... OK
The Scality Installer installs and starts the NTP daemon only if chrony
or NTP is not previously installed and running.
Scality recommends regularly syncing the hardware clock with the up-to-date system clock to ensure that the boot logs are time-consistent with the network clock.
hwclock --systohc
For more information on installing NTP, refer to the RHEL Network Time Protocol Setup webpage.
SELINUX=disabled
getenforce
l CentOS 7/RedHat 7:
1. Edit the /etc/default/grub file to add the following text to the GRUB_CMDLINE_LINUX_
DEFAULT line:
transparent_hugepage=never
grub2-mkconfig -o /boot/grub2/grub.cfg
3. Reboot.
l CentOS 6/RedHat 6:
1. Add the following text to the appropriate kernel command line in the grub.conf file.
transparent_hugepage=never
2. Reboot.
l CentOS 7/RedHat 7:
Add the following lines (shown after this list) to an rc.local script, which is run as a service from systemctl:
l CentOS 6/RedHat 6:
Add the following lines to an rc script:
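In both cases, a hedged sketch of the lines typically used to disable THP at runtime:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag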
THP is disabled by the Pre-Install Suite, which must be executed prior to RING installation.
Firewall Configuration
If a connection cannot be established between the Supervisor and the nodes or connectors, it is neces-
sary to disable iptables on all servers.
l CentOS 7/RedHat 7:
iptables is controlled via firewalld, acting as a front-end. Use the following command set to deactivate the firewall:
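A hedged sketch of the usual firewalld deactivation commands:
systemctl stop firewalld
systemctl disable firewalld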
l CentOS 6/RedHat 6:
1. Open /etc/sysconfig/iptables for editing.
2. Remove all lines containing "REJECT".
Turning iptables on is not recommended, as this can have a significant negative
impact on performance. Please contact Scality Customer Service for better traffic fil-
tering recommendations.
# /etc/init.d/iptables restart
iptables is disabled by the Pre-Install Suite, which must be executed prior to RING installation.
l RedHat 6
$ yum-config-manager \
--enable rhel-6-server-optional-rpms \
--enable rhel-rs-for-rhel-6-server-rpms \
--enable rhel-ha-for-rhel-6-server-rpms
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
l RedHat 7
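A hedged sketch of the corresponding RedHat 7 commands (the repository IDs are assumptions to verify against the active subscription):
$ yum-config-manager \
--enable rhel-7-server-optional-rpms \
--enable rhel-rs-for-rhel-7-server-rpms \
--enable rhel-ha-for-rhel-7-server-rpms
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm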
Offline packages without S3. Use when internet access is not required. The file size is approximately 2.8GB. Examples:
l scality-ring-offline-7.2.0.0.r170919232505.d6512f5df5_redhat_7.3_201709200618.run
l scality-ring-offline-7.2.0.0.r170919232505.d6512f5df5_centos_7.3_201709200618.run
Offline packages with S3. Examples:
l scality-ring-with-s3-offline-7.2.0.0.r170919232505.d6512f5df5_centos_7.3_201709200618.run
l scality-ring-with-s3-offline-7.2.0.0.r170919232505.d6512f5df5_redhat_7.3_201709200618.run
Utilities
Network Tools
Performance Monitoring
Operational Monitoring
Troubleshooting Tools
Connectors typically require minimal configuration adjustments once they have been installed using
the Scality Installer.
Salt is used by the Scality Installer to deploy and configure the components on the correct servers. The
installation mechanism takes place out of view; however, it can be exposed if more flexibility is required.
The Scality Installer supports all device-mapper-based block devices, including multipath, RAID, LVM,
and dm-crypt encrypted disks.
The exemplified Platform Description File (for RING + NFS) is correct; however, due to PDF constraints it cannot simply be cut and pasted for use as a template.
ring ,,,,,,,,,,,,,,,,,,,,,,,,,,,,
sizing_version,customer_name,#ring,data_ring_name,meta_ring_name,HALO API key,S3 endpoint,cos,arc-data,arc-coding,,,,,,,,,,,,,,,,,,,
14.6,Sample,2,DATA,META,,s3.scality.com,2,9,3,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,
servers,,,,,,,,,,,,,,,,,,,,,,,,,,,,
data_ip,data_iface,mgmt_ip,mgmt_iface,s3_ip,s3_iface,svsd_ip,svsd_iface,ring_membership,role,minion_id,enclosure,site,#cpu,cpu,ram,#nic,nic_size,#os_disk,os_disk_size,#data_disk,data_disk_size,#raid_card,raid_cache,raid_card_type,#ssd,ssd_size,#ssd_for_s3,ssd_for_s3_size
10.0.0.11,eth0,,,,,,,"DATA,META","storage,elastic",storage01,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.12,eth0,,,,,,,"DATA,META","storage,elastic",storage02,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.13,eth0,,,,,,,"DATA,META","storage,elastic",storage03,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.14,eth0,,,,,,,"DATA,META","storage,elastic",storage04,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.15,eth0,,,,,,,"DATA,META","storage,elastic",storage05,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.16,eth0,,,,,,,"DATA,META","storage,elastic",storage06,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.17,eth0,,,,,,,"DATA,META","connector,nfs",connector01,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,0,0,0,0,,0,0,0,0
10.0.0.18,eth0,,,,,,,"DATA,META","connector,nfs",connector02,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,0,0,0,0,,0,0,0,0
10.0.0.19,eth0,,,,,,,,supervisor,supervisor01,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,0,0,0,0,,0,0,0,0
,,,,,,,,,,,,,,,,,,,,,,,,,,,,
Confirm that the .run archive file has root executable permission.
The Scality Installer archive comes with several packages that allow RING installation to proceed without
Internet connection. Located in /srv/scality/repository, these packages include:
The folder "/srv/scality" already exists, overwrite content? [y/n/provide a new path to extract to]
Input | Result
None | Installation is aborted; Scality Installer invites the user to provide the --destination option to determine the extraction location.
n | Installation continues without any extraction, to the existing /srv/scality directory.
y | Installation continues, with archive content extracted and written over the existing /srv/scality directory.
{{newDirectoryPath}} | Installation continues, with archive content extracted to {{newDirectoryPath}}. If a directory already exists at the {{newDirectoryPath}}, the user is prompted again for extraction action. At this point the user can opt to overwrite or not overwrite the content of the {{newDirectoryPath}}, or they can indicate either a different existing directory path or a path to a non-existent directory (which will subsequently be created).
To employ a user other than root (e.g., "centos"), refer to "Using “centos” When root/ssh Login is Disabled" on page 12.
By default, the topmost Scality Installer command is highlighted at initial start up.
Running a command suspends the Scality Installer Menu, replacing it in the window with the command output. Once a selected command completes, review the output and press Enter to return to the Scality Installer Menu.
The command runs various checks to determine the environment's readiness for RING installation.
At completion of the command the environment is ready to continue to the next command step, Run the
Pre-Install Suite. Press the Enter key to return to the Scality Installer Menu or the Ctrl+c keyboard com-
bination to exit the Scality Installer.
The command will run various checks to determine whether the hardware and software in place are compliant with the RING installation requirements.
Results | Description
MATCH | A correspondence was found between the platform description and the actual hardware.
MISSING | An item in the platform description file could not be located on the server. Such items may prevent a successful installation.
UNEXPECTED | An item was located on the server but is not present in the platform description. Such items should not cause harm; however, the situation should be closely monitored.
l A number of items have a dependency relationship (e.g., disks attached to a RAID card),
and thus if the parent item is missing all of its dependencies will also be missing.
l For RAID card matching, a "best fit" algorithm is used since the platform description file is
not detailed enough to give the exact configuration. As such, when scanning hardware the
hardware check attempts to find the closest possible configuration (in terms of the number
of disks and SSDs of the expected size) for each RAID card in the system.
In the event the result set reveals extra or missing hardware items, check for permutations between servers, network configuration, or inaccurate RAID card settings. Corrective action may involve changing hardware, such as adding missing disks or CPUs, or plugging disks into the appropriate RAID card.
OS System Checks
The OS system checks run a batch of tests to confirm that the target Linux system is properly configured for the Scality RING installation. A criticality level is associated with each system check test, which may trigger a repair action by the Pre-Install Suite or may require user intervention in the event of test failure.
By default, the Pre-Install Suite examines various operating system parameters and applies any neces-
sary value corrections.
Initiate the Install Scality RING menu command to install the Scality RING and all needed components on
every node, as described in the Platform Description File (CSV/XLS file provided to the Installer).
S3 Connector installation is handled separately via the Install S3 Service (Optional) Scality Installer
Menu command (refer to "Installing S3 Connector Service (Optional)" on the next page).
Scality Installer will next prompt for the Supervisor password. If the prompt is left blank, a password will
be automatically generated.
RING installation will proceed, with on-screen output displaying the various process steps as they occur.
The Install the S3 Service (Optional) menu command installs the S3 Connector components on the
nodes as described in the Platform Description File (CSV/XLS file provided to the Installer).
If all installation steps return OK, the environment is ready to continue to the next command step, Run the
Post-Install Suite.
{
"env_logger": {
"es_heap_size_gb": 1
}
}
3. Restart the Scality Installer from the command line with the --s3-external-data option along with the path to the extdata file.
4. Select the Install S3 Service (Optional) command from the Scality Installer Menu.
The command will run various checks to determine whether the RING installation is successful. In
sequence, the specific tasks that comprise the Post-Install Suite include:
1. Running script using salt
2. Starting checks
3. Checking if server is handled by salt
4. Checking missing pillars
5. Gathering info from servers
6. Running tests
Unlike the Pre-Install Suite, the Post-Install Suite provides its results in an index.html file that is delivered in the form of a tarball (/root/post-install-checks-results.tgz).
At the completion of the Post-Install Suite, RING installation is complete and the system can be put to use.
lsb_release -a
2. Navigate to the Scality repository page (linked from Scality RING Customer Resources) and
download the offline archive for the identified operating system.
3. Copy the archive to the server that will act as the Supervisor for the RING.
The packages provided for the offline archive for RedHat can be slightly different than
those provided for CentOS, and thus it is necessary to select the offline archive that
exactly correlates with the distribution and release on which the RING will be installed. If
Scality does not provide an Offline Archive for the distribution, one must be generated.
l Scality does not provide an Offline Installer for the Linux distribution in use (e.g., RedHat 6.5)
l A repository is already in use for packages within a customer's existing infrastructure
l A specific set of packages needs to be added to the Offline Installer for later use
The custom generated archive name conforms to the following naming standard:
scality-ring-offline-{{ringVersion}}_{{distributionTargetLocation}}_{{generationDate}}.run
1. Navigate to the Scality repository page (linked from Scality RING Customer Resources) and
download the Scality Offline Archive.
Download the corresponding CentOS version if the plan is to generate an archive for a
RedHat distribution.
2. Set up a server (either a VM or a container) with the desired target distribution. For RedHat it is
necessary to set up the Epel Repository, whereas for CentOS or Ubuntu it is not necessary to
download any additional files.
3. Copy the Scality Offline Archive to the target server.
4. Complete the offline archive generation via the applicable distribution-specific sub-procedure.
1. Run the Scality Installer, leaving blank all authentication questions (not required for Offline
Archive generation).
2. Select the Generate the Offline Archive (Optional) command. The Scality Installer will then prompt for the path where the custom offline archive is to be generated, while also proposing a default path that includes the version of the actual distribution.
3. Press Enter or specify an alternative path to begin downloading the external dependencies
required for a RING installation without internet access.
Registration is required for access to the RedHat repository. If a local mirror of the RedHat repos-
itories is in use, confirm the availability of these repositories.
subscription-manager status
+-------------------------------------------+
System Status Details
+-------------------------------------------+
Overall Status: Current
l RedHat 6:
./scality-ring-7.2.0.0.centos_7.run --extract-only
Extracting archive content to /srv/scality
--extract-only option specified
Run /srv/scality/bin/launcher --description-file <path> manually to install the RING
For an overview of the options that can be run when extracting Scality Installer, refer to
"scality-installer.run Options" on page 40.
4. Generate the Custom Offline Archive using the generate-offline command with the --use-existing-repositories option (by default, the command is available and is run from /srv/scality/bin/generate-offline).
# /srv/scality/bin/generate-offline --use-existing-repositories
The repositories that will be employed in preparing the environment are thus set, and at
this point the Scality Installer can install the RING using the offline dependencies.
Option | Description
-h, --help | Display help content and exit
-d {{centOSVersion}}, --distribution {{centOSVersion}} | Force the download of the specified distribution
-D, --debug | Print debugging information
--http-proxy {{httpProxy}} | URL of form http://{{user}}:{{password}}@{{url}}:{{port}} used during package download
-l {{logFile}}, --log-file {{logFile}} | Specify a log file
--no-install-prereq | Do not automatically install all prerequisites to generate the offline repository, such as createrepo/reprepro
-o {{outputPath}}, --output {{outputPath}} | Path to the new archive to generate
-p {{packagesToAdd}} ..., --packages {{packagesToAdd}} ... | List of packages to add to the default ones
-r {{directoryForRepository}}, --repository {{directoryForRepository}} | Directory where the offline repository will be stored
--use-existing-repositories | Do not use a temporary configuration to generate the offline installer archive. Use this option to generate an offline archive without an online connection when a local repository is already set up.
--skip-offline-generation | Do not generate the offline repository
--skip-offline-mode | Do not set the offline mode as the default one
--skip-repack | Do not repack the installer file. The installer will not use offline mode by default.
The Installer can also be closed by tapping the q key or the Ctrl+c
keyboard combination.
Upon exiting Scality Installer — if the RING is installed successfully — a link for the Supervisor will display.
Enter the provided URL into a web browser to access the Supervisor Web UI.
Web browsers that support the new Supervisor GUI include Chrome,
Firefox, Internet Explorer, and Opera.
3.6.2 --extract-only
Although the --noexec option remains available for the Installer archive extraction, it is now deprec-
ated, and the --extract-only command is recommended in its stead. If either of these options is
used, the Installer will not be run automatically but will simply be extracted.
After the archive extraction, /srv/scality/bin/launcher can be called at a later time to display the Installer
menu and start the installation.
3.6.3 --destination
Although the default extraction directory is /srv/scality, the installer can be extracted to any location with
the --destination option and a directory name argument. The --destination option can be
used with either the --description-file option or the --extract-only option.
l description-file:
l extract-only:
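As a hedged illustration (the archive name and paths are assumptions), the two forms look like:
./scality-ring-7.2.0.0.centos_7.run --destination /opt/scality --description-file /root/pdesc.csv
./scality-ring-7.2.0.0.centos_7.run --destination /opt/scality --extract-only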
/srv/scality/bin/tools/setup-httpd -b -d \
/srv/scality/repository {{supervisorIP}}:{{httpPort}}
The embedded web server will run until the next reboot. To kill the web server, if neces-
sary, run the kill $(cat /var/run/http_pyserver.pid) command.
mkdir -p /etc/salt
3. Build a roster file to /etc/salt/roster for the platform (as exemplified). Refer to the official Saltstack
Roster documentation for more information.
sup:
host: 10.0.0.2
user: root
priv: /root/.ssh/id_rsa
node1:
host: 10.0.0.10
user: root
priv: /root/.ssh/id_rsa
.
.
.
mkdir -p /srv/scality/pillar
base:
'*':
- bootstrap
scality:
repo:
host: {{supervisorIP}}
port: {{httpPort}}
saltstack:
master: {{supervisorIP}}
l Supervisor ID and IP
l Data and management ifaces
l Supervisor credentials
l Type of installation (online or offline)
l Storage node Salt matcher#
l Storage node data and management interfaces
l Type of ARC for the RING named DATA
l Type of replication (COS) for the RING named META
l Connector Salt matcher#
l Connector type(s)
l Connector services (optional)
l Connector interfaces
l SVSD interface (optional)
l Netuitive key (optional)
# The matcher is a word used to match all the storage node or connector minions (e.g.,
*store* for the storage nodes on machine1.store.domain, machine2.store.domain, etc.).
The matchers are assigned in the --nodes and --conns-* options (where * represents
the type of connector, so the matcher for NFS would be --conns-nfs).
generate-pillar.sh generates a main SLS file, named as the first argument on the command line. It also
generates separate pillar files for nodes and conns groups in the same directory.
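A hedged sketch of an invocation, using the first argument and the matcher options described above (the main SLS name and matcher values are assumptions):
./generate-pillar.sh /srv/scality/pillar/main.sls --nodes '*store*' --conns-nfs '*conn*'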
https://packages.scality.com/scality-support/centos/6/x86_64/scality/ring/scality-preinstall
l CentOS7
https://packages.scality.com/scality-support/centos/7/x86_64/scality/ring/scality-preinstall
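For example, on CentOS 7 the binary can be fetched with curl before being made executable (a sketch; adjust the URL to the distribution in use):
curl -O https://packages.scality.com/scality-support/centos/7/x86_64/scality/ring/scality-preinstall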
chmod +x scality-preinstall
storage:
{{ iinclude("default/storage.yaml")|indent(8) }}
resources:
- {{ iinclude("tests/storage.yaml")|indent(14) }}
#login: user
#password: password
ip:
- 10.200.47.226
5. Run the Pre-Install Suite with the modified platform.yaml file as a --file argument.
Employ the --dummy argument to prevent the Pre-Install Suite from automatically applying
any recommended corrections.
/srv/scality/bin/tools/preinstall
./scality-preinstall --file platform.yaml --color --dummy
Some Examples
When an error occurs during the installation process, the script exits immediately. Once the error is resolved, restart the script with a --{{stepCLITag}}-only option to confirm that the error is resolved before moving forward with the installation.
To illustrate, in the following scenario an old repository file confuses the package manager which causes
the installation of the Supervisor to fail.
1. Retry the Supervisor step.
l CentOS/RedHat 6:
{{ringVersion}} is the three-digit RING version number separated by periods (e.g., 7.1.0), while {{ringBuildNumber}} is composed of the letter "r", a 12-digit timestamp, a period, a 10-digit hash, a dash, and an instance number (e.g., r170720075220.506ab05e3b-1).
/usr/share/scality-post-install-checks/install.sh
l psutil 5.1.3
l pytest 2.8
l pytest-html 1.13.0
l salt-ssh (Same version included in the Scality Installer)
In addition, the install.sh script creates a master_pillar directory for pillar data files under
the virtual env /var/lib/scality-post-install-checks/venv/srv directory.
The Post-Install Checks Tool accesses all the machines in the RING. As the tool uses Salt or Salt-SSH to
test the RING installation, it requires the data that defines the Salt minions (addresses and pillar data).
The tool automatically obtains the data if pillar data is available in the /srv directory, otherwise it can be
created manually.
For a RING that was not installed via the Scality Installer, there is typically no Salt pillar data on the Super-
visor, and the RING servers are not declared as Salt minions. As such, using Salt-SSH, it is necessary to
create both a YAML roster file and the pillar data.
The roster file should be stored in the virtual environment directory on the machine that simulates the
Salt master (typically the RING Supervisor server).
The scal_group value of each server in the roster file must match the pillar data included in the
top.sls file under the /var/lib/scality-post-install-checks/venv/srv/master_pillar/ directory.
top.sls | Lists the types and roles of RING servers available, and assigns the state (pillar data files) to be applied to them.
The top.sls file lists the types and roles of RING servers available, and assigns the state (pillar data files)
to be applied. It serves to link the type of server to the pillar data.
As exemplified, the top.sls file lists separate server groups for nodes, sproxyd connectors and SOFS-
FUSE connectors.
base:
# for each server, apply the scality-common state (scality-common.sls will be sent to these servers)
'*':
- scality-common
- order: 1
# for servers whose grain scal_group is storenode, apply the state scality-storenode (scality-storenode.sls will be sent to these servers)
'scal_group:storenode':
- match: grain
- scality-storenode
- order: 2
'scal_group:sofs':
- match: grain
- scality-sofs
- order: 2
'scal_group:sproxyd':
- match: grain
- scality-sproxyd
- order: 2
scality-common.sls File
Note that if the default root/admin credentials are in use, it is not necessary to include the cre-
dentials section (with the internal_password and internal_user parameters).
Specifically, the file must contain all of the fields exemplified (an exception being single-RING environments, in which the name and is_ssd fields of a separate metadata RING are not included).
scality:
credentials:
internal_password: admin
internal_user: root
prod_iface: eth0
ring_details:
DATA:
is_ssd: false
META:
is_ssd: true
supervisor_ip: 192.168.10.10
As exemplified, the scality-common.sls file lists separate data and metadata RINGs. The Post-Install Checks Tool can also work for single-RING environments; in that case only the name of that RING needs to be listed in the scality-common.sls file.
The scality-storenode.sls file offers RING node server information. The required fields for the Post-Install Checks Tool tests, as they apply to pillar data for node servers, are exemplified below:
scality:
mgmt_iface: eth0
mount_prefix: /scality/disk,/scality/ssd
nb_disks: 1,1
prod_iface: eth0
The scality-sproxyd.sls and scality-sofs.sls connector SLS files require only network interface fields for the Post-Install Checks Tool tests.
scality:
mgmt_iface: eth0
prod_iface: eth0
The pillar files for connectors are typically very similar. However, if a specific network configuration is
used for the same type of server (e.g., if the network interface used is different from one sproxyd server
to another), this information must be specified in each of the connector SLS files.
If both /srv/scality/salt and /srv/scality/pillar are valid directories, the post-install-checks tool script tries to
use the local Salt master to launch checks; otherwise it defaults to using salt-ssh. Use the -r option to
force the runner to use salt-ssh.
Tool Options
This test takes time and uses significant network resources, and
is therefore disabled by default.
scalpostinstallchecks
scalpostinstallchecks -r {{pathToRosterFile}}
Use the -e option to exclude a specific server from the list of servers to check.
scalpostinstallchecks -e "connector-1"
scalpostinstallchecks -L
Use the -M option to list all test categories and their associated descriptions.
scalpostinstallchecks -M
Example Details
Additional details (e.g., test name, error message) regarding failed tests, skipped tests and passed tests
are provided for each server.
The first failed test exemplified, check_node.py::test_garbage_collector, concerns a RING
named all_rings0 and its DATA mountpoint, with the failure (an Assertion Error) indicating in red font that
the garbage collector is not enabled. The second failed test, check_sproxyd.py::test_
ROLE_ZK_NODE, defined for use with SOFS Folder Scale-Out, can be assigned to Salt minions; its purpose is to install and configure the minion as an active member of the shared cache.
To determine whether ROLE_ZK_NODE is currently assigned to the Salt minions, run the following command on the Salt master (SOFS Folder Scale-Out uses the same roles grain name as the RING Installer).
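A hedged sketch of the check, assuming the roles are stored under the standard roles grain:
salt '*' grains.get roles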
data_ring: {{dataRINGName}}
metadata_ring: {{metadataRINGName}}
sfused:
dev: {{volumeDeviceId}}
4. Install and enable the SOFS Folder Scale-Out feature using Salt.
a. Set the role on the storage nodes:
b. Install the packages (calling the main state.sls file only installs packages):
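A hedged sketch of these two sub-steps, run from the Salt master (the storage-node matcher and the Scality state name are assumptions; use the matcher and formula names in place on the platform):
# a. Assign the ZooKeeper role to the selected storage nodes
salt '*store*' grains.append roles ROLE_ZK_NODE
# b. Apply the main state file, which only installs the packages
salt '*store*' state.sls {{folderScaleOutState}}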
A spare node ensures an odd number of active ZooKeeper instances, with the spare instance maintained for possible failover events.
6. Uncomment the following line in the fastcgi.conf file (in /etc/httpd/conf.d/ on CentOS/RedHat) to
load the fastcgi module:
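The line in question is typically the mod_fastcgi LoadModule directive (a hedged sketch; the module path may vary by package):
LoadModule fastcgi_module modules/mod_fastcgi.so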
7. Restart Apache on shared cache nodes to enable the fastcgi configuration for sophiad.
8. Update sfused.conf and start sfused on all SOFS connectors.
a. Edit /etc/sfused.conf and assign the following "general" section parameters, replacing
the example IP addresses in the “dir_sophia_hosts_list” by the list of sophia
server IP addresses:
"general": {
"dir_sophia_enable": true,
"dir_sophia_hosts_list":
"192.168.0.170:81,192.168.0.171:81,192.168.0.179:81,192.168.0.17:81,
192.168.0.18:81",
"dir_async_enable": false,
},
scality:
....
data_iface: eth0
svsd:
namespace: smb
count: 1
first_vip: 10.0.0.100
10. Run the 'registered' Salt state to register the shared cache with sagentd.
The Seamless Ingest feature requires ZooKeeper to be installed and running on an odd number of
storage node servers – preferably five, but three at the least. Refer to "Installing Folder Scale-Out
for SOFS Connectors" on page 59 for a ZooKeeper installation method using Salt formulas (this
method can also be used to install ZooKeeper for the Seamless Ingest feature).
"geosync": true,
"geosync_prog": "/usr/bin/sfullsyncaccept",
"geosync_args": "/usr/bin/sfullsyncaccept --v3 --user scality -w /ring_a/journal $FILE",
"geosync_interval": 10,
"geosync_run_cmd": true,
"geosync_tmp_dir": "/var/tmp/geosync"
1. Mount the volume and add it to the /etc/fstab configuration on the main connector.
2. This procedure assumes that the volume is mounted under /ring_a/journal.
3. Confirm that the path where journals are to be stored (e.g., /ring_a/journal) has read and write permissions for owner scality:scality.
4. Install the scality-sfullsyncd-source package, which provides the sfullsyncaccept binary.
5. Add geosync parameter settings to the general section of the sfused.conf file (or dewpoint-sofs.js if Dewpoint is the main connector), similar to the following:
"geosync": true,
"geosync_prog": "/usr/bin/sfullsyncaccept",
"geosync_args": "/usr/bin/sfullsyncaccept --v3 --user scality -w /ring_a/journal $FILE",
"geosync_interval": 10,
"geosync_run_cmd": true,
"geosync_tmp_dir": "/var/tmp/geosync"
6. Confirm that the dev parameter in the sfused.conf file (or dewpoint-sofs.js) has the same setting
as the dev number that will be assigned to the source and target CDMI connectors. Note that
this number is different than the dev number set for the volume used for journal storage.
7. Restart the main connector server to load the modified configuration.
2. Install either the Nginx or Apache web server on both the source and target machines, and con-
figure the installed web server to run Dewpoint as an FCGI backend directly on the root URL.
For Nginx, the server section should contain a configuration block similar to the following:
location / {
fastcgi_pass 127.0.0.1:1039;
include /etc/nginx/fastcgi_params;
}
{
"fcgx": {
"bind_addr": "",
"port": 1039,
"backlog": 1024,
"nresponders": 32
}
}
4. Confirm that the volume ID (the dev parameter in the general section of /etc/dewpoint-sofs.js) is set to the same number on both the source and target connector machines. This is necessary because, in the Full Geosynchronization architecture, RING keys (which contain volume IDs) must be the same on both the source and target volumes.
{
"cdmi_source_url": "http://SOURCE_IP",
"cdmi_target_url": "http://TARGET_IP",
"sfullsyncd_target_url": "http://TARGET_IP:8381",
"log_level": "info",
"journal_dir": "/var/journal",
"ship_interval_secs": 5,
"retention_days": 5
}
3. If a volume for journal resiliency was created at the start, create two shares and mount the
shares under /var/journal/received and /var/journal/replayed, ensuring that
read/write permissions for the shares are scality:scality, then add the shares to /etc/fstab.
4. Install the scality-sfullsyncd-target package.
{
"port": 8381,
"log_level": "info",
"workdir": "/var/journal",
"cdmi_source_url": "http://SOURCE_IP",
"cdmi_target_url": "http://TARGET_IP",
"enterprise_number": 37489,
"sfullsyncd_source_url": "http://SOURCE_IP:8380"
}
Parameter | Description
backpressure_tolerance | Sets a threshold for triggering a backpressure mechanism based on the number of consecutive throughput measurements in the second percentile (i.e., measurements that fall within the slowest 2% since the previous start, with allowance for a large enough sample space). The backpressure mechanism causes the target daemon to delay requests to the source daemon. Set to 0 to disable backpressure (default is 5).
cdmi_source_url | IP:Port address of the CDMI connector on the source machine (ports must match the web server ports of the Dewpoint instances)
cdmi_target_url | IP:Port address of the CDMI connector on the target machine (ports must match the web server ports of the Dewpoint instances)
journal_dir | Directory where the journals of transfer operations are stored (by default: /var/journal/source)
log_level | Uses conventional syslog semantics; valid values are debug, info (default), warning or error.
sfullsyncd_target_url | IP Address and port (8381) of the sfullsyncd-target daemon
ship_interval_secs | Interval, in seconds, between the shipping of journals to the sfullsyncd-target daemon
retention_days | Number of days the journals are kept on the source machine. Journals are never removed if this parameter is set to 0.
Parameter | Description
cdmi_source_url | IP:Port address of the CDMI connector on the source machine (ports must match the web server ports of the Dewpoint instances)
cdmi_target_url | IP:Port address of the CDMI connector on the target machine (ports must match the web server ports of the Dewpoint instances)
chunk_size_bytes | Size of the file data chunks transferred from the source machine to the target machine (default is 4194304)
enterprise_number | Must correspond to the enterprise number configured in the sofs section of the /etc/dewpoint.js file on both the source and target machines
graph_size_threshold | Maximum number of operations in the in-memory dependency graph (default is 10000)
log_level | Uses conventional syslog semantics; valid values are debug, info (default), warning and error
max_idle_time_secs | Sets a maximum idle time, in seconds, that triggers an inactivity warning when no journals arrive within the time configured (default is 3600)
notification_command | Command to be executed when a certain event occurs, such as an RPO violation
Sagentd Configuration
Sagentd provides both SNMP support (via the net-snmpd daemon) and the output of metrics into the Elasticsearch cluster. To enable this feature, the sagentd daemon must be configured on the server hosting the sfullsyncd-target daemon. Also, exporting statuses through SNMP requires configuring the net-snmpd daemon.
To register the sfullsyncd-target daemon with sagentd:
1. Run the sagentd-manageconf add command with the following arguments:
l The name of the sfullsyncd-target daemon
l address (external IP address) of the server hosting the sfullsyncd-target dae-
mon
l port, which must match the value in the sfullsyncd-target configuration
l type, to set the daemon type to sfullsyncd-target
# sagentd-manageconf -c /etc/sagentd.yaml \
add {{nameOfsfullsyncd-targetDaemon}} \
address=CURRENT_IP \
port={{portNumber}} \
type=sfullsyncd-target
{{nameOfsfullsyncd-targetDaemon}}:
address: CURRENT_IP
port: {{portNumber}}
type: sfullsyncd-target
cat /var/lib/scality-sagentd/oidlist.txt
1.3.6.1.4.1.37489.2.1.1.1.6.1.1.2.1 string sfullsync01
1.3.6.1.4.1.37489.2.1.1.1.6.1.1.3.1 string CURRENT_IP:{{portNumber}}
1.3.6.1.4.1.37489.2.1.1.1.6.1.1.4.1 string sfullsyncd-target
1.3.6.1.4.1.37489.2.1.1.1.6.1.1.5.1 string running
If the status changes from running, use snmpd to send a notification to a remote trap host.
Refer to the Scality RING7 Operations Guide (v7.3.0) for more information on the Scality
SNMP architecture and configuration, as well as for MIB Field Definitions.
Attribute Description
message Description of the event
timestamp Timestamp (including the timezone) of the event
level Severity of the event (“INFO”, “WARNING”, “CRITICAL”, or ”ERROR”)
The following sample script illustrates how the event information is passed to a custom HTTP API.
#!/usr/bin/python
# Copyright (c) 2017 Scality
"""
This example script handles events, or alerts, sent by the geosync daemons.
It would be invoked each time an event is emitted. It is passed, through STDIN, a JSON formatted
string which is a serialized `Event` object. This object has three interesting fields `timestamp`,
`level` and `message`.
Although this particular event handler is written in Python, any other general-purpose language would
do; all we need to do is to read from STDIN, de-serialize JSON, and take some action.
Furthermore, the program is free to accept command line arguments if needed.
"""
import json
import sys
import requests
data = sys.stdin.read()
event = json.loads(data)
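# As a hedged sketch only: forward the event to a custom HTTP API.
# The endpoint URL below is hypothetical; replace it with the API that should
# receive the alert.
requests.post(
    "http://monitoring.example.com/api/events",
    json={
        "timestamp": event["timestamp"],  # timestamp of the event, including the timezone
        "level": event["level"],          # "INFO", "WARNING", "CRITICAL" or "ERROR"
        "message": event["message"],      # description of the event
    },
    timeout=10,
)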
Installation
To install Scality Cloud Monitor, the scality-halod package must be installed with the RING. The package consists of a single daemon – halod – and its configuration files.
l When the Metricly API key is included in the Platform Description File the Scality Cloud Monitor
is installed automatically by the Scality Installer.
l Salt pillar files for advanced installations accept entries for the Scality Cloud Monitor API key (scality:halo:api_key: <netuitiveApiKey>) that enable the Scality Cloud Monitor installation. This entry must be set in the pillar file for the advanced installation; however, the scality.halo.configured Salt state must also be run to install Scality Cloud Monitor.
If Scality Cloud Monitor is installed on the Supervisor, only the Metricly API key is necessary. If, though, Scality Cloud Monitor is installed on a different RING server, additional information is required.
Once Scality Cloud Monitor is configured, the monitored metrics are uploaded to the Metricly cloud. To
access Metricly, go to https://app.netuitive.com/#/login and log in using valid credentials
(Email, Password).
5.4.4 Inventory
The Metricly inventory is composed of the elements that make up an environment, such as RING(s), con-
nectors or servers. Use the Metricly Inventory Explorer to view and search the elements within an Invent-
ory (refer to Inventory on the Metricly Online documentation for more information).
The RING Installer is based on SaltStack states. Most states are idempotent, meaning that if the same
state is reapplied, the result will be the same, a property that allows for the fixing and rerunning of failed steps.
Error messages at the beginning of the log file can be safely ignored.
These messages are generated whenever Salt tries to load all of its
modules, which may rely on features not available on the server.
Each tool used by the RING Installer can generate errors that are collected by the Installer itself. For
example, if the Installer reports an error while installing a package on a CentOS 7 system, a recom-
mendation is issued to check the /var/log/yum.log for details.
As the RING Installer relies on the Supervisor to configure several components, such as the nodes or the Elasticsearch cluster, checking the RING component logs is necessary in the event of a failure.
l Hardware (types and the number of CPUs, memory, disks, RAID cards, and network cards)
l Installed software (distribution packages with their versions, and configuration parameters)
l Environment (IP addresses and networks, machine names, such network information as DNS,
domain names, etc.)
l RING configuration (number of nodes, IP addresses, connectors, and configuration of the con-
nectors)
l Scality daemon crashes (including stack traces, references to objects that were being processed at the time of the crash, etc.)
l Geosynchronization configuration files on source (sfullsyncd-source) and target (sfullsyncd-tar-
get) connectors.
Installing sreport
Install the scality-sreport package on all the machines from which diagnostic information will be collected. In addition to being installed on RING node and connector servers, having the sreport package
installed on the Supervisor enables the collection of information from several machines at once using the
--salt option.
Using sreport
1. As root, run sreport on the command line — with or without options — to generate the output
archive, sosreport-*.tar.gz, which is saved by default to the /tmp or the /var/tmp/ directory.
2. The full output archive file name displays on the console (the asterisk (*) in the output archive file
name is replaced by the host name, the date, plus additional information).
The report can be automatically sent to Scality instead of — or in addition to — saving it loc-
ally.
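A sketch of typical invocations (the --salt form assumes the package is also installed on the Supervisor; its exact target syntax may vary):
# Local collection on a single server
sreport
# Collection from several machines at once, run from the Supervisor
sreport --salt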
The Salt Master is unable to apply a Salt state because the environment is not properly configured.
file_recv: True
file_roots:
base:
- /srv/scality/salt/local/
- /srv/scality/salt/formula/
pillar_roots:
base:
- /srv/scality/pillar
extension_modules: /srv/scality/salt/formula
ext_pillar:
- scality_keyspace: []
Pillar render error: Specified SLS 'beats' in environment 'base' is not available on the salt master
# Check consistency
ls /srv/scality/pillar
[...]
cat /srv/scality/pillar/top.sls
[...]
2017-07-27 13:08:26,575 [installer] - ERROR - Unable to sync repo. Missing store04, store05
A manual running of a Salt command such as salt '*' test.ping comes back as successful.
Server-Wide Cleanup
Perform the following procedure on all servers (nodes and connectors).
1. Remove all packages that can cause re-installation failure.
# On the supervisor
salt '*' state.sls scality.node.purged
2. On the supervisor, delete any remaining Scality Apache configuration file or directory (oth-
erwise, re-installation of scality-*-httpd or scality-*-apache2 will fail).
l CentOS/RHEL:
rm -rf /etc/httpd/conf.d/*scality*
l Ubuntu:
rm /etc/apache2/sites-enabled/*scality*
# On each server
shutdown -r
Server reboot closes any active connections and interrupts any services running on the
server.
parted -s /dev/sdX rm 1
blockdev --rereadpt
If the reloading of the partition table does not work, use the Unix utility dd to reset all
disks.
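A hedged sketch of such a reset, which destroys the partition table at the start of a disk (replace /dev/sdX with each affected data disk; this operation is destructive):
dd if=/dev/zero of=/dev/sdX bs=1M count=10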
5. On a VM, set the correct flag for SSD disks before restarting the RING Installer.
2017-07-27 16:36:24,521 [installer] - ERROR - error in apply for store05 : start elasticsearch service : The named service elasticsearch is not available
The error occurs when the Elasticsearch package is installed but the service does not start.
# systemctl daemon-reload
At this point it is possible to restart the service and complete the RING installation successfully.
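A hedged example of restarting the service named in the error:
# systemctl restart elasticsearch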
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 440K 0 rom
vda 253:0 0 160G 0 disk
`-vda1 253:1 0 160G 0 part /
vdb 253:16 0 5G 0 disk
vdc 253:32 0 5G 0 disk
vdd 253:48 0 5G 0 disk
vde 253:64 0 5G 0 disk
If the command set returns True, the error is confirmed. In this case, remove the salt grains and run the
Installer again.
scality:
  mgmt_iface: ethX
  data_iface: ethX
base:
'*':
- <main_pillar>
- order: 1
'scal_group:group1':
- match: grain
- group1
- order: 2
Ignore the last two lines of the package manager error message, which suggest installing the RING using alternative options.
Resolution
The SSH connection must be enabled, according to the type of authentication method in use (password,
agent or SSH keys). Once enablement is complete, the command can be run again.
Resolution
Confirm that the IP addresses provided in the Platform Description File match the machine IPs.
The most likely cause of the Unresponsive Environment issue is that an IP provided in the description file
is erroneous. The server tries to connect with SSH to a wrong host and waits for the SSH timeout.
# export roster={{rosterVariable}} ; \
export user=$(awk '/^ user:/{print $2; exit}' "$roster") ; \
export priv=$(awk '/^ priv:/{print $2; exit}' "$roster"); \
for host in $(awk '/^ host:/{print $2}' "$roster"); do \
ssh -t -i "$priv" "$user@$host" \
"ls -la /tmp/scality-salt-bootstrap/running_data/salt-call.log"; \
done
Alternatively, all the logs can be collected by running the next command, which downloads each file to /tmp/{{IpAddress}}_bootstrap.log.
# export roster={{rosterVariable}} ; \
export user=$(awk '/^ user:/{print $2; exit}' "$roster") ; \
export priv=$(awk '/^ priv:/{print $2; exit}' "$roster"); \
for host in $(awk '/^ host:/{print $2}' "$roster"); do \
scp -i "$priv" "$user@$host":/tmp/scality-salt-bootstrap/running_data/salt-call.log /tmp/"$host"_bootstrap.log; \
done