Scality RING7 Setup and Installation Guide (v7.3.0)

Scality RING7

Setup and Installation Guide


v7.3.0
Contents
1. Introduction 1
2. Requirements and Recommendations 3
2.1. Environment Requirements 4
2.1.1 Rack Layout 4
2.1.2 Power Supply 4
2.1.3 Network Requirements 4
2.2. Hardware Dependencies and Server Recommendations 7
2.2.1 Supervisor Server 7
2.2.2 Connector Servers 8
2.2.3 Storage Node Servers 8
2.3. Operating System Factors 10
2.3.1 Supported Operating Systems 10
2.3.2 Recommended Kickstart File Configuration (CentOS/RHEL) 10
2.3.3 Supported Filesystems 11
2.3.4 Proxy Use 11
2.3.5 root User Access Requirement 12
2.4. Software Considerations 12
2.4.1 Secure Shell (SSH) 12
2.4.2 Network Time Protocol (NTP or chrony) 13
2.4.3 Incompatible Software 13
2.4.4 EPEL Repository 17
2.4.5 Scality Installer 17
2.4.6 Additional Recommended Packages 18
3. Automated RING Installation 21
3.1. The Role of the Platform Description File 22
3.2. Obtaining and Extracting the Scality Installer 23
3.3. Starting the Scality Installer 23
3.3.1 Setting of the root Execution Flag 23
3.3.2 Executing scality-installer.run 23
3.3.3 Establishing the /srv/scality Directory 24
3.3.4 SSH Information Prompt 24
3.4. Using the Scality Installer 25
3.4.1 Set Up the Environment and Bootstrap Salt 26

3.4.2 Running the Pre-Install Suite 27
3.4.3 Installing Scality RING 31
3.4.4 Installing S3 Connector Service (Optional) 32
3.4.5 Running the Post-Install Suite 34
3.4.6 Generating an Offline Archive (Optional) 35
3.5. Exiting the Scality Installer 39
3.6. scality-installer.run Options 40
3.6.1 --description-file (or -d) 40
3.6.2 --extract-only 40
3.6.3 --destination 40
4. Advanced RING Installation 41
4.1. Set Up the Environment and Bootstrap Salt 41
4.1.1 Deploying Salt 41
4.1.2 Completing the RING Architecture 42
4.2. Running the OS System Checks Manually 45
4.3. Installing Scality RING 46
4.3.1 Using the Install Script Command Line 47
4.3.2 Installation Steps Recognized by the scripted-install.sh Script 47
4.4. Running Post-Install Checks 49
4.4.1 Setting Up the Post-Install Checks Tool 49
4.4.2 Post-Install Checks Tool Configuration 49
4.4.3 Post-Install Checks Tool Syntax and Options 53
4.4.4 Running the Post-Install Checks Tool 54
4.4.5 Examples: Server Targeting 54
4.4.6 Examples: Test Category 55
4.4.7 Examples: Network Performance Test 56
4.4.8 Examples: Tool Results 56
5. Individual RING Component Installation 59
5.1. Installing Folder Scale-Out for SOFS Connectors 59
5.2. Installing Seamless Ingest for SMB-CIFS Connectors 62
5.3. Installing Full Geosynchronization Mode for SOFS Connectors 63
5.3.1 Enabling SOFS Connector Access Coordination 63
5.3.2 Setting Up the Volume for Journal Storage 64
5.3.3 Setting Up the Source and Target CDMI Connectors 65
5.3.4 Setting Up the Full Geosynchronization Daemon on the Source Machine 66

5.3.5 Setting Up the Full Geosynchronization Daemon on the Target Machine 67
5.3.6 Daemon Configuration Settings 68
5.3.7 Monitoring the Full Geosynchronization Daemons 69
5.3.8 Custom Alerts 70
5.4. Installing Scality Cloud Monitor 71
5.4.1 Creating a Dashboard 72
5.4.2 Configuring a Dashboard 72
5.4.3 Collecting Data 72
5.4.4 Inventory 72
5.4.5 Configuring Policies 72
6. RING Installation Troubleshooting 73
6.1. Log Locations 74
6.2. sreport Tool 75
6.3. Timeout Installation Failure 75
6.4. Salt Master Unable to Find State 76
6.5. Salt Master Unable to Call Function 77
6.6. Salt Master Unable to Find Pillar 77
6.7. Minion Not Found 78
6.8. Minion Not Responding 78
6.9. Jinja Rendering Errors 79
6.10. Cleaning Installation 79
6.11. Elasticsearch Start Failure During Re-installation 80
6.12. SSD Drives Not Detected 81
6.13. Disks on Nodes Not Detected 81
6.14. Heterogeneous Network Interfaces in Advanced Installation 82
6.15. Package Not Found (CentOS) 83
6.16. Package Not Found (RedHat) 83
6.17. Post-Install Checks Troubleshooting 84
6.17.1 Connection Forbidden to sagentd 84
6.17.2 Connection Reset by SSH Server 85
6.17.3 Salt Client Error on RHEL 6 85
6.18. Cannot Connect to a Server via SSH 86
6.19. Unresponsive Environment 87
6.20. Collecting Error Logs 87

Typographic Conventions
Text that is entered into or that displays within terminal windows is presented in monospace typeface.
Text that displays as a component of a graphical user interface (GUI) – content, menus, menu commands, etc. – is presented in a bold typeface.
Proprietary terms, non-conventional terms, and terms to be emphasized are presented initially in italic text, and occasionally thereafter for purposes of clarity. File names are always presented in italic text.
Variable values are offered in lower camelCase within curly braces (e.g., {{variableValue}}).

Legal Notice
All brands and product names cited in this publication are the property of their respective owners.
The author and publisher have taken care in the preparation of this book but make no expressed or
implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed
for incidental or consequential damages in connection with or arising out of the use of the information or
programs contained therein.
Scality retains the right to make changes at any time without notice.
Scality assumes no liability, including liability for infringement of any patent or copyright, for the license,
sale, or use of its products except as set forth in the Scality licenses.
Scality assumes no obligation to update the information contained in its documentation.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form, or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior written consent from Scality, S.A.
Copyright © 2009-2017 Scality. All rights reserved.
Scality RING7 Setup and Installation Guide (v7.3.0) – (2017/11/30)

About Scality
Scality, world leader in object and cloud storage, develops cost-effective Software Defined Storage
(SDS): the RING, which serves over 500 million end-users worldwide with over 800 billion objects in production; and the open-source S3 Server. Scality RING software deploys on any industry-standard x86
server, uniquely delivering performance, 100% availability and data durability, while integrating easily in
the datacenter thanks to its native support for directory integration, traditional file applications and over
45 certified applications. Scality’s complete solutions excel at serving the specific storage needs of
Global 2000 Enterprise, Media and Entertainment, Government and Cloud Provider customers while
delivering up to 90% reduction in TCO versus legacy storage. A global company, Scality is
headquartered in San Francisco.

Publication History
Iteration Date       Section Affected          Abstract
30-November-2017     2.4, 3.6, 5.3, and 6.20   Application of the command line break convention ('\').
14-November-2017     4.1, 4.3                  Edits introducing a new typographic convention for command line breaks ('\'), to allow for copy-paste operations.
06-November-2017                               First issue

Check the iteration date of the Scality RING7 Setup and Installation Guide (v7.3.0) against the Scality RING Customer Resources web page to ensure that the latest version of the publication is in hand.

1. Introduction
By intent, the Scality RING7 Setup and Installation Guide (v7.3.0) provides Scality customers with the
knowledge needed to prepare for and execute the installation of Scality RING software. Specifically, the
publication describes in detail the RING system prerequisites and configuration tasks, as well as both
Automated and Advanced methods for installing a RING. In addition, the guide offers troubleshooting
instruction to help in fine-tuning the RING, and to assist in the handling of any post-installation issues.
Scality now offers the Scality Installer, which leverages a Platform Description File to automate the RING
installation process. This Platform Description File is generated by the Scality Sizing Tool — available via
Scality Sales — for the purpose of supplying information on the planned RING system architecture to the
Scality Installer.

2. Requirements and Recommendations
Scality RING packages are provided for CentOS or RedHat 6.x and 7.x, for 64-bit
x86_64 architectures only. The base system packages must be available from a
repository, either locally or via the network.

2.1. Environment Requirements 4


2.1.1 Rack Layout 4
2.1.2 Power Supply 4
2.1.3 Network Requirements 4
2.2. Hardware Dependencies and Server Recommendations 7
2.2.1 Supervisor Server 7
2.2.2 Connector Servers 8
2.2.3 Storage Node Servers 8
2.3. Operating System Factors 10
2.3.1 Supported Operating Systems 10
2.3.2 Recommended Kickstart File Configuration (CentOS/RHEL) 10
2.3.3 Supported Filesystems 11
2.3.4 Proxy Use 11
2.3.5 root User Access Requirement 12
2.4. Software Considerations 12
2.4.1 Secure Shell (SSH) 12
2.4.2 Network Time Protocol (NTP or chrony) 13
2.4.3 Incompatible Software 13
2.4.4 EPEL Repository 17
2.4.5 Scality Installer 17
2.4.6 Additional Recommended Packages 18

Code examples running multiple lines are correct only as displayed
on the page, as due to PDF constraints such examples cannot be
accurately copy-pasted.

2.1. Environment Requirements


2.1.1 Rack Layout
Rack layout should be as redundant as possible, with separate network switches employed for each half
of the bonded network card pairs (thus, if a switch fails, the bond will stay up).
Servers should be spread over multiple racks to circumvent outages due to rack failure. The more racks the servers are dispersed over, the more resilient the RING installation.

2.1.2 Power Supply


Dual power supplies are required for all RING server components (e.g., Supervisor, storage nodes, connectors). In addition, dual power circuits are preferred. Given the RING's datacenter environment, power redundancy should be maximized (for instance, if the datacenter has circuits from separate power providers, both should be utilized).
Scality strongly recommends that RINGs not be visible from public networks and that the power be
backed up. Ideally, the RING should be backed up by generators. At the least, local UPS systems should
be employed that are large enough to keep all RING servers running in the event of a power outage until
such time as the system can be shut down.

2.1.3 Network Requirements


All of the storage node servers and connector servers in a RING communicate via Chord, a proprietary peer-to-peer protocol that runs over TCP. Chord is used by the connectors to send IO requests to the RING, and by storage servers to maintain their topology. As Chord is based on TCP, there are no constraints on VLANs or subnetting as long as the network infrastructure (switching and routing) allows for Chord protocol communications.
The minimum recommended bandwidth between servers on the RING is 10 Gb/s; however, Scality recommends bonding/teaming for redundancy purposes. As network cards will already be bonded/teamed, active/active LACP (802.3ad) bonding can be used to double the available bandwidth. As a result, most RING installations will have a minimum of 20 Gb/s between nodes and to the connectors.
In addition, Scality recommends using LACP with a Layer3+4 balancing algorithm (xmit_hash_policy) to ensure that the various Chord communications are evenly balanced among the bond/team physical interfaces. For RedHat 6 and 7 and CentOS 6 and 7, edit /etc/modprobe.conf to add the mode, lacp_rate, and xmit_hash_policy parameters.

alias bond0 bonding


options bond0 miimon=100 mode=4 lacp_rate=1 xmit_hash_policy=layer3+4
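
On RHEL/CentOS 7 systems that manage bonding through network scripts rather than modprobe options, the same parameters can also be expressed in the bond interface file. The following is a minimal sketch only; the interface names, IP address, and prefix are placeholders to adapt to the actual environment.

# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative values)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="miimon=100 mode=4 lacp_rate=1 xmit_hash_policy=layer3+4"
IPADDR=10.0.0.11
PREFIX=24
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (one member interface; repeat for the second member)
DEVICE=eth0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none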

To further increase redundancy, connect each physical interface in the bond/team to separate physical switches. Such a move will guard against switch failures or link failures between rack switches and core switches.
On the switch side, all the bond/team physical interface ports of a given server must be configured as members of the same channel-group of the same port-channel interface.

Bandwidth        Minimum # of Interfaces   Fault Tolerance                   Minimum # of Switches   SPOF
10 GbE/40 GbE    2                         Active/Passive                    1                       Switch
20 GbE/80 GbE    2                         Active/Active with LACP           1                       Switch
20 GbE/80 GbE    2                         Active/Active with LACP and vPC   2                       None

With connector servers, it can be advantageous to put in place dedicated FrontEnd interfaces (i.e., application-facing) and BackEnd interfaces (i.e., Chord interfaces). In this case, the same bonding/teaming recommendations apply to both FrontEnd and BackEnd interfaces.

Recommended Switch Configuration


Any switch that supports link aggregation can be used to interconnect servers running Scality software. For instance, Cisco and Juniper switches can both be used, as both are built on the 802.3ad standard. The setup on each is quite different, though: Cisco uses a more top-down approach in which command order matters, while Juniper builds its configuration from individual set commands.
Cisco Switches

Link Aggregation Control Protocol (LACP) is defined in the 802.3ad standard. To increase bandwidth and redundancy,
combine multiple links into a single logical link. All links participating in a single logical link must have the
same settings (e.g., duplex mode, link speed) and interface mode (e.g., access or trunk). It is possible to
have up to 16 ports in an LACP EtherChannel, however only eight can be active at one time.
LACP can be configured in either passive or active mode. In active mode, the port actively tries to bring
up LACP. In passive mode, it does not initiate the negotiation of LACP.
Upon logging into the Cisco switch:

type command enable


switch01> enable
Enter configuration mode (abbreviated commands such as config t also work)
type command configure terminal
switch01# configure terminal
Create Port Group
switch01(config)# interface port-channel1
Configure Port 0/0/1 to be a part of previously created portgroup
switch01(config)#interface GigabitEthernet0/0/1
switch01(config-int-gig0/0/1)# channel-group 1 mode active
switch01(config-int-gig0/0/1)# switchport access vlan 1
switch01(config-int-gig0/0/1)# switchport mode access
switch01(config-int-gig0/0/1)# spanning-tree portfast
switch01(config-int-gig0/0/1)#
Configure Port 1/0/1 to be a part of previously created portgroup
switch01(config)#interface GigabitEthernet1/0/1
switch01(config-int-gig1/0/1)# channel-group 1 mode active
switch01(config-int-gig1/0/1)# switchport access vlan 1
switch01(config-int-gig1/0/1)# switchport mode access

switch01(config-int-gig1/0/1)# spanning-tree portfast
Configure port-channel options
switch01(config)# interface port-channel1
switch01(config-port-channel1)# description scality lacp port-channel1
switch01(config-port-channel1)# channel-group 1 mode active
switch01(config-port-channel1)# channel-protocol lacp

Juniper Switches

The IEEE 802.3ad link aggregation enables Ethernet interfaces to be grouped to form a single link layer
interface, also known as a link aggregation group (LAG) or bundle. Aggregating multiple links between
physical interfaces creates a single logical point-to-point trunk link or a LAG.
LAGs balance traffic across the member links within an aggregated Ethernet bundle and effectively increase the uplink bandwidth. Another advantage of link aggregation is increased availability, because
LAGs are composed of multiple member links. If one member link fails, the LAG will continue to carry
traffic over the remaining links.
Link Aggregation Control Protocol (LACP), a component of IEEE 802.3ad, provides additional func-
tionality for LAGs.

Physical Ethernet ports belonging to different member switches of a Virtual Chassis configuration
can be combined to form a LAG.

After logging in to the switch (virtual chassis, i.e. multiple switches acting like one switch or single switch):

type command configure


user@switch01> configure
Entering configuration mode
{master}[edit]
type command set chassis aggregated-devices ethernet device-count 10 (it is recommended to
set this value higher than the single LAG being configured now, because more LAGs may
need to be added later)
user@switch01# set chassis aggregated-devices ethernet device-count 10
{master}[edit]
type command delete interface ge-<your switch,pic,port>
user@switch01# delete interface ge-0/0/0 unit 0
{master}[edit]
user@switch01# delete interface ge-1/0/0 unit 0
{master}[edit]
type command set interfaces ge-<your switch,pic,port> ether-options 802.3ad ae0
user@switch01# set interfaces ge-0/0/0 ether-options 802.3ad ae0
{master}[edit]
user@switch01# set interfaces ge-1/0/0 ether-options 802.3ad ae0
type command set interfaces ae0 aggregated-ether-options lacp active
user@switch01# set interfaces ae0 aggregated-ether-options lacp active
{master}[edit]
type command set interfaces ae0 description "DESCRIPTION"
user@switch01# set interfaces ae0 description "ae0 lag for scality"
{master}[edit]

type command set interfaces ae0 mtu 9216
user@switch01# set interfaces ae0 mtu 9216
type command set interfaces ae0 aggregated-ether-options minimum-links 1
user@switch01# set interfaces ae0 aggregated-ether-options minimum-links 1
type command set interfaces ae0 aggregated-ether-options link-speed <your speed>g
user@switch01# set interfaces ae0 aggregated-ether-options link-speed 10g
{master}[edit]
type command set interfaces ae0 unit 0 family ethernet-switching port-mode access
user@switch01# set interfaces ae0 unit 0 family ethernet-switching port-mode access
{master}[edit]
type command set interfaces ae0 unit 0 family ethernet-switching vlan members <vlan name>
user@switch01# set interfaces ae0 unit 0 family ethernet-switching vlan members servers
{master}[edit]
Lastly, check the newly created aggregated interface.
type command run show interfaces ae0
user@switch01# run show interfaces ae0

{master}[edit]
Commit the configuration and exit configuration mode.
user@switch01# commit

2.2. Hardware Dependencies and Server Recommendations


The Platform Description File ensures that the physical RING architecture put in place is the one best suited to address customer needs. Connector server architecture and storage node server architecture both require validation via the Scality Sizing Tool.

2.2.1 Supervisor Server


Scality Supervisor software can run on a virtual machine (VM) or a dedicated server.

Supervisor High Availability is not provided, thus Scality recommends running the Supervisor on a VM with failover capability from one physical host (hypervisor) to another.

Device         Minimum Recommended    Comment
OS disks       2                      RAID 1

Recommended Partitioning - Operating System Disk
  /boot   Partition            1GB as recommended by RedHat
  /       LV or Partition      20GB
  /var    LV or Partition      400GB+
  /tmp    tmpfs or Partition   (2GB if partition)

Memory         16 GB                  If the RING infrastructure is 12 servers or more, consider adding more memory.
Network        1 Gb/s                 Linux bonding with 802.3ad dynamic link aggregation. Consider bonding/teaming at the host level.
CPU            4 vCPUs                The Supervisor is CPU bound (more is better).
Power supply   2                      Power with redundant power supplies (on the hosts if VMs).

2.2.2 Connector Servers


As a best practice, run Scality connector software directly on storage node servers. The software can
also run on virtual machines or dedicated servers.

The use of virtual machines is recommended as the process is single-threaded, and thus it is easy to provision new servers when needed (such as during peak periods).

Connector server architecture requires validation by the Scality Sizing Tool.

Device                          Minimum Recommended   Comment
OS disks                        2 in RAID 1           HDDs (minimum 400GB for logs) in RAID 1, as SSDs typically wear out faster than other disk types when data is written and erased multiple times.

Recommended Partitioning - Operating System Disk
  /boot   Partition            1GB as recommended by RedHat
  /       LV or Partition      20GB
  /var    LV or Partition      400GB+
  /tmp    tmpfs or Partition   (2GB if partition)

Memory                          32GB                  Recommended (unless the Scality Sizing Tool indicates the need for more)
Frontend interface (optional)   2 x 10 Gb/s           Linux bonding with 802.3ad dynamic link aggregation (LACP).
Chord interface                 2 x 10 Gb/s           Linux bonding with 802.3ad dynamic link aggregation (LACP).
Admin interface (optional)      1 x 1 Gb/s            Required if Supervisor-connector communications must be separated from production traffic.
CPU                             8 vCPUs               Connectors are CPU bound (more is better).
Power supply                    2                     Power with redundant power supplies (on the hosts if VMs).

2.2.3 Storage Node Servers


Storage node software must be installed on a physical server (no VMs). A minimum of six different physical servers is required (the minimum requirements for which apply to all deployment environments – minimum, medium, and high capacity – unless otherwise noted).

Storage node server architecture requires validation by the Scality Sizing Tool.

Minimum Required Configuration per Physical Server

Device            Capacity (Minimum/Medium/High)   Comment
OS disks          2 in RAID 1                      HDDs (minimum 400GB for logs) in RAID 1, as SSDs typically wear out quicker when data is written and erased multiple times.

Recommended Partitioning - Operating System Disk
  /boot   Partition            1GB as recommended by RedHat
  /       LV or Partition      20GB
  /var    LV or Partition      400GB+
  /tmp    tmpfs or Partition   (2GB if partition)

Data disks        12 (minimum capacity)            HDDs or SSDs, 4-6-8-10 TB SATA 7200rpm.
                  24 (medium capacity)
                  64+ (high capacity)              Refer to the Scality Sizing Tool.
Metadata disks    1 (minimum capacity)             SSD disks (>600GB) for bizobj metadata (mandatory, except for architectures using REST connectors).
                  2 (medium capacity)
                  6+ (high capacity)               Refer to the Scality Sizing Tool.
Memory            128GB                            Recommended, unless the Scality Sizing Tool indicates that more memory is required.
Chord interface   2 x 10 Gb/s                      Linux bonding with 802.3ad dynamic link aggregation (LACP).
Admin interface   1 x 1 Gb/s                       Required if Supervisor-connector communications must be separated from production traffic.
CPU               12 cores (24 threads)            Storage nodes are not CPU bound.
RAID controller   Mandatory (>1GB of cache)        A RAID controller with a minimum of 1GB cache (refer to Supported RAID Controllers).
Power supply      2                                Power with redundant power supplies.

Supported RAID Controllers
Controller Cache Size (GB) Valid Platforms Automatic Disk Management Support?
P440 4 HPE Yes
P840ar 2 HPE Yes
Cisco 12Gbps Modular RAID PCIe Gen 3.0 2/4 Cisco Yes
PERC H730 2 Dell Yes
LSI 9361-8i 4 Dell Yes

RAID controllers with a minimum of 1GB cache are mandatory for RING Storage Node Servers. Contact Scality Sales for installations involving devices not recognized above as Supported.

2.3. Operating System Factors
All servers hosting RING components must be running the same OS
and OS version (and for new installations, Scality recommends using
the most recent version).

2.3.1 Supported Operating Systems


Scality RING software runs on the CentOS and RedHat families of Linux.

Operating System Version Number Version Name Architecture


CentOS 6.8 and 7.3 (recommended) DVD iso x86-64
RedHat 6.8 and 7.3 (recommended) Red Hat Enterprise Linux® Server, Standard x86-64

Scality also supports Ubuntu in very specific use cases. Please contact Scality Sales in the event that CentOS/RHEL cannot be deployed in the target environment.

2.3.2 Recommended Kickstart File Configuration (CentOS/RHEL)


On CentOS/RHEL it is possible to automate part or all of a RING installation with the appropriate directives for the Anaconda installer.

As the partitioning directives example shows, DHCP acquires an IP address for the installation. Once the system has been built and is on the network, perform the necessary network configurations (e.g., bonding, teaming, etc.) to achieve higher throughput or redundancy.

#version=DEVEL -- This changes to RHEL7 for RedHat


# System authorization information
auth --enableshadow --passalgo=sha512
# Use graphical install
graphical
# Disable the Setup Agent on first boot
firstboot --disable
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8
# Network information
network --bootproto=dhcp --device=<Your_DeviceName_Here> --noipv6 --activate
network --hostname=<Your_Hostname_Here>
# Root password
rootpw scality
# Disable selinux and firewall
selinux --disabled
firewall --disabled
# System services
services --enabled="chronyd"
# System timezone

10 © 2017 Scality. All rights reserved


timezone America/New_York --isUtc
# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
# Partition clearing information
zerombr
clearpart --all --initlabel
# Create the partitions on the boot drive
part / --fstype=ext4 --size=20000 --ondisk=sda
part /boot --asprimary --fstype=ext4 --size=4096 --ondisk=sda
part /var --asprimary --fstype=ext4 --grow --size=1 --ondisk=sda
part swap --size=16384 --ondisk=sda
%packages
@^infrastructure-server-environment
@base
@core
chrony
kexec-tools
%end
%addon com_redhat_kdump --enable --reserve-mb='auto'
%end
%anaconda
pwpolicy root --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=50 --notstrict --nochanges --notempty
pwpolicy luks --minlen=6 --minquality=50 --notstrict --nochanges --notempty
%end

For more detailed information on how to perform a kickstart installation, refer to https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Installation_Guide/sect-kickstart-howto.html.

2.3.3 Supported Filesystems


Scality does not make recommendations regarding the filesystem used by the base operating system.
However, on the storage nodes the only supported filesystem for data storage is ext4. The installer will
automatically create partitions and format them in ext4 on the new disks.

2.3.4 Proxy Use


An offline repository can be used as an alternative to a proxy.

To ensure that the required Scality Installer packages can be downloaded when connecting to the Internet via a proxy, perform the following procedure.
1. Add the proxy address and port to the yum configuration file (including authentication settings as required).

proxy=http://yourproxyaddress:proxyport
# If authentication settings are needed:
proxy_username=yum-user-name
proxy_password=yum-user-password

2. Unset the http_proxy, https_proxy, ftp_proxy, and ftps_proxy environment variables to enable the Scality Installer to communicate with the Supervisor without trying to use a proxy.
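
For example, the variables can be cleared in the shell session that runs the installer. This is a minimal sketch only; any persistent proxy settings (e.g., in /etc/profile.d/ scripts) would need to be adjusted separately.

unset http_proxy https_proxy ftp_proxy ftps_proxy
# Verify that no proxy variables remain set in the current shell
env | grep -i proxy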

2.3.5 root User Access Requirement


Scality software runs under the root user, and the installation of Scality RING requires root credentials as well. As necessary, sudo can be used to grant super-user privileges to a Scality user.
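
As an illustration only, such a user could be granted the necessary privileges through a sudoers drop-in file. The user name scality-admin and the file path below are hypothetical, and a passwordless rule should only be used where site security policy permits it.

# /etc/sudoers.d/scality-admin (hypothetical example; always edit sudoers files with visudo)
scality-admin ALL=(ALL) NOPASSWD: ALL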

2.4. Software Considerations


2.4.1 Secure Shell (SSH)
Secure Shell (SSH), which is provided by default with CentOS and Redhat Linux systems, is used by the
Scality Installer to initialize the cluster. On launch, the Scality Installer prompts for either an SSH password or a private SSH key (protected or not).

Although RING installation can be performed via a standard user running sudo commands or by way of the root user, Scality recommends setting up passwordless key authentication for the root user.

Deploying SSH
The deployment of SSH keys between the Supervisor and the other RING servers facilitates the installation process.
1. Working from the Supervisor, create the private/public key pair.

[root@scality] # ssh-keygen -t rsa

The key pair is created in /root/.ssh/, with the public key located at /root/.ssh/id_rsa.pub

2. Accept the defaults with no passphrase.


3. Deploy the public key on each server of the platform.

[root@scality] # ssh-copy-id {{ipAddressOfServerOrFQDN1}}


[root@scality] # ssh-copy-id {{ipAddressOfServerOrFQDN2}}
[root@scality] # ssh-copy-id {{ipAddressOfServerOrFQDN3}}

Using “centos” When root/ssh Login is Disabled


In order to gain root privileges, ensure that the user centos is part
of the sudoers.

1. Start Scality Installer (refer to "Starting the Scality Installer" on page 23 for detailed information).
2. Indicate centos at the first prompt, asking for the user to connect to the nodes.

3. Indicate the custom private SSH key at the third prompt, which requests the private SSH key.

Please provide the user to connect to the nodes (leave blank for "root"): centos
Please provide the SSH password to connect to the nodes (leave blank if you have
a private key):
Please provide the private SSH key to use or leave blank to use the SSH agent:
/home/centos/.ssh/id_rsa
Please provide the passphrase for the key /home/centos/.ssh/id_rsa (leave blank
if no passphrase is needed):
Load the platform description file '/home/centos/pdesc.csv'... OK

2.4.2 Network Time Protocol (NTP or chrony)


All RING servers (Supervisor, storage nodes, connectors) must be time synchronized.
The standard protocol for time synchronization is NTP, the software for which is provided with many OS distributions (available from www.ntp.org). With the release of RHEL7, Red Hat changed the default time sync protocol to chrony. No structural changes were put in place, however, as chrony uses the standardized NTP protocol.

The Scality Installer installs and starts the NTP daemon only if chrony
or NTP is not previously installed and running.

Scality recommends regularly syncing the hardware clock with the up-to-date system clock to ensure that the boot logs are time consistent with the network clock.

hwclock --systohc

For more information on installing NTP, refer to the RHEL Network Time Protocol Setup webpage.
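
As a quick check that time synchronization is actually working before installation, the relevant client can be queried. This is a minimal sketch, assuming chrony on RHEL/CentOS 7 and ntpd on RHEL/CentOS 6.

# RHEL/CentOS 7 (chrony): list time sources and confirm one is selected (marked with '*')
chronyc sources -v

# RHEL/CentOS 6 (ntpd): list peers and confirm one is selected (marked with '*')
ntpq -p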

2.4.3 Incompatible Software


SELinux Security Module
The SELinux kernel security module is incompatible with RING software and must be disabled.
1. Set the line starting with SELINUX in the /etc/selinux/config file to disabled.

SELINUX=disabled

2. Restart the server to bring the changes into effect.


3. Run the getenforce command to check the current state.

getenforce

In addition, the setenforce command can be used to dynamically change the SELinux state, though it is not possible to use that command to completely disable SELinux. If the current state is enabled, setenforce can change the state to permissive at best.
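
The configuration change can also be applied non-interactively; the following is a minimal sketch, assuming the stock /etc/selinux/config layout.

# Set SELINUX=disabled in the configuration file, keeping a backup of the original
sed -i.bak 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# The change takes effect after the next reboot; getenforce reports the current state
getenforce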

SELinux is disabled by the Pre-Install Suite, which must be executed prior to RING installation.

Transparent Huge Pages


A hugepage option was introduced in Linux kernels (including CentOS and RedHat) to allow for more optimized memory management with certain workload types. Beginning with the 2.6.32 kernel, management of the hugepage option was relocated to the system level, making it transparent to client applications. Setting this transparent_hugepage option allows the kernel to automatically use large pages, either opportunistically (for alignment purposes) or forcibly (for copying and remapping).
With transparent hugepages set at the system level, systems under relatively heavy use can end up with (a) not enough contiguous memory segments to honor memory allocations, or (b) memory that is extremely fragmented, wherein additional CPU usage (caused by the kernel memory manager attempting to rearrange the fragmented memory) will diminish system responsiveness. To avoid such situations, deactivate the transparent_hugepage functionality (it is not needed by Scality RING software) via the GRUB configuration or an rc.local script (in the event that GRUB cannot be modified).
Disabling THP – GRUB Configuration Method

l CentOS 7/RedHat 7:
1. Edit the /etc/default/grub file to add the following text to the GRUB_CMDLINE_LINUX_DEFAULT line:

transparent_hugepage=never

2. Run the following command:

grub2-mkconfig -o /boot/grub2/grub.cfg

3. Reboot.

l CentOS 6/RedHat 6:
1. Add the following text to the appropriate kernel command line in the grub.conf file.

transparent_hugepage=never

2. Reboot.

14 © 2017 Scality. All rights reserved


Disabling THP – Script Method

l CentOS 7/RedHat 7:
Add the following lines to an rc.local script, which is run as a service from systemctl:

cat >> /etc/rc.d/rc.local << END
echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
END
chmod +x /etc/rc.d/rc.local
systemctl start rc-local

l CentOS 6/RedHat 6:
Add the following lines to an rc script:

echo "never" < /sys/kernel/mm/redhat_transparent_hugepage/enabled


echo "never" < /sys/kernel/mm/redhat_transparent_hugepage/defrag

For the rc.local script method, transparent hugepages created while THP was enabled by other initialization scripts or daemons may already exist, depending on when the script is run (the rc.local script is run at the end of the initialization process). The scripts will not remove already existing hugepages.

THP is disabled by the Pre-Install Suite, which must be executed prior to RING installation.
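
Whichever method is used, the current THP state can be verified after a reboot; the brackets in the output mark the active setting.

# Expected output once disabled: always madvise [never]
cat /sys/kernel/mm/transparent_hugepage/enabled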

Non-Uniform Memory Access


Scality recommends disabling Non-Uniform Memory Access (NUMA), as leaving it enabled can cause
performance problems (e.g., high CPU and swap usage). NUMA can be disabled either in the BIOS by
enabling node interleaving (this is vendor- and model-specific), or in the kernel with a grubby command
(which also turns off transparent huge pages):

grubby --update-kernel=ALL --args="numa=off transparent_hugepage=never"
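
To confirm that the arguments were applied to the boot entries, grubby can list the kernel command lines (a reboot is still required for the change to take effect):

# Each kernel entry should now include numa=off and transparent_hugepage=never in its args= line
grubby --info=ALL | grep ^args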

Irqbalance on Multiprocessor Systems


On multiprocessor systems, the irqbalance command distributes hardware interrupts across processors in order to increase performance. On RING servers, irqbalance should be run only once at system startup to ensure that interrupt request lines (IRQs) are balanced across the different cores. However, if the irqbalance daemon does not exit after the initial startup run, it can have an adverse impact on RING processes or cause throttling by the CPU.
To run irqbalance only at startup, confirm that the irqbalance service is deactivated on system servers and add irqbalance --oneshot to the /etc/rc.local script.
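
A minimal sketch of that setup on RHEL/CentOS 7 follows (on RHEL/CentOS 6, chkconfig irqbalance off and service irqbalance stop are the rough equivalent):

# Disable the persistent irqbalance service
systemctl disable irqbalance
systemctl stop irqbalance
# Run a one-shot balancing pass at each boot instead
echo "irqbalance --oneshot" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local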

The irqbalance option is enabled by the Pre-Install Suite, which must be executed prior to RING installation.

Server Swappiness
Server "swappiness" refers to the propensity of a server to swap disk space to free up memory. The
Linux kernel accepts swappiness values between 0 and 100. A low swappiness value reduces the like-
lihood that the kernel will swap out mapped pages to increase the available virtual memory. It is gen-
erally advisable to reduce swappiness on RING servers and to set the free memory minimum to a
reasonable value, such as 2000000 kB (or 2GB). Note that for a production system to function properly,
the minimum value for min_free_kbytes is 358400.
The RING Installer sets the swappiness default value to 1. For all installations with the RING Installer, the
main constraint on memory swapping is the value set for the vm.min_free_ kbytes parameter. Server
swappiness is optimized by the Pre-Install Suite which should be executed prior to RING installation.
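
For reference, these kernel parameters can be set persistently with sysctl. The following is a minimal sketch for RHEL/CentOS 7; the drop-in file name is arbitrary, and the Pre-Install Suite normally applies equivalent tuning itself.

cat > /etc/sysctl.d/99-scality-memory.conf << 'EOF'
# Keep swapping to a minimum
vm.swappiness = 1
# Reserve roughly 2GB of free memory (minimum 358400 kB for production)
vm.min_free_kbytes = 2000000
EOF
sysctl --system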

Firewall Configuration
If a connection cannot be established between the Supervisor and the nodes or connectors, it is necessary to disable iptables on all servers.

Scality does not recommend enabling iptables.

l CentOS 7/RedHat 7:
iptables is controlled via firewalld, which acts as a front end. Use the following commands to deactivate the firewall:

# systemctl stop firewalld


# systemctl mask firewalld

l CentOS 6/RedHat 6:
1. Open /etc/sysconfig/iptables for editing.
2. Remove all lines containing "REJECT".
Turning iptables on is not recommended, as this can have a significant negative impact on performance. Please contact Scality Customer Service for better traffic filtering recommendations.

3. Restart the server iptables.

# /etc/init.d/iptables restart

iptables is disabled by the Pre-Install Suite, which must be executed prior to RING installation.

2.4.4 EPEL Repository
A few packages (supervisor, sindexd) depend on packages that are not part of the base CentOS/RedHat system, and thus the EPEL repository that provides those packages must be enabled.

l RedHat 6

$ yum-config-manager \
--enable rhel-6-server-optional-rpms \
--enable rhel-rs-for-rhel-6-server-rpms \
--enable rhel-ha-for-rhel-6-server-rpms
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm

l RedHat 7

$ yum-config-manager --enable rhel-7-server-optional-rpms \


--enable rhel-7-server-extras-rpms \
--enable rhel-rs-for-rhel-7-server-rpms \
--enable rhel-ha-for-rhel-7-server-rpms
$ rpm -Uvh \
https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

l CentOS6 and CentOS7

$ yum install epel-release

2.4.5 Scality Installer


Offered as a standard executable archive file, the Scality Installer is available from the Scality repository
page, linked from Scality RING Customer Resources.
The Scality Installer archive comes in three package types with which RING installation can proceed: Online packages, Offline packages without S3, and Offline packages with S3.

Online packages
  Use when full internet access is available; internet access is mandatory for this package type. The file size is approximately 300MB.
  Examples
  l scality-ring-7.2.0.0.r170919232505.d6512f5df5_centos_7.run
  l scality-ring-7.2.0.0.r170919232505.d6512f5df5_redhat_7.run

Offline packages without S3
  Use when internet access is not required. The file size is approximately 800MB.
  Examples
  l scality-ring-offline-7.2.0.0.r170919232505.d6512f5df5_redhat_7.3_201709200618.run
  l scality-ring-offline-7.2.0.0.r170919232505.d6512f5df5_centos_7.3_201709200618.run

Offline packages with S3
  Use when internet access is not required. The file size is approximately 2.8GB.
  Examples
  l scality-ring-with-s3-offline-7.2.0.0.r170919232505.d6512f5df5_centos_7.3_201709200618.run
  l scality-ring-with-s3-offline-7.2.0.0.r170919232505.d6512f5df5_redhat_7.3_201709200618.run

2.4.6 Additional Recommended Packages


Various extras and errata packages will be automatically installed on servers running Scality software. These packages are useful for troubleshooting, and for various conventional or routine operations.

Utilities

l bc: GNU's bc (a numeric processing language) and dc (a calculator)


l bind-utils: Utilities for querying DNS name servers
l createrepo: Creates a common metadata repository
l gdb: A GNU source-level debugger for C, C++, Java and other languages
l lsof: A utility which lists open files on a Linux/UNIX system
l parted: The GNU disk partition manipulation program
l pciutils: PCI bus related utilities
l rpmdevtools: RPM Development Tools
l rsync: A program for synchronizing files over a network
l screen: A screen manager that supports multiple logins on one terminal
l vim-enhanced: A version of the VIM editor which includes recent enhancements
l yum-utils: Utilities based around the yum package manager
l tmux: Utility that allows multiple windows and split views to be opened within one terminal

Benchmark Tools

l bonnie++: Filesystem and disk benchmark & burn-in suite


l fio: I/O tool for benchmark and stress/hardware verification
l iozone: IOzone File Benchmark (not available in a repository)
l iperf/iperf3: Measurement tool for TCP/UDP bandwidth performance

Network Tools

l ngrep: Network layer grep tool


l curl: A utility for getting files from remote servers (FTP, HTTP, and others)
l wget: A utility for retrieving files using the HTTP or FTP protocols
l mtr: A network diagnostic tool

Performance Monitoring

l dstat: Versatile resource statistics tool


l htop: Interactive process viewer
l iotop: Shows process I/O performance
l jnettop: Network traffic tracker

Operational Monitoring

l bmon: Bandwidth monitor and rate estimator


l iftop: System monitor listing network connections by bandwidth use
l net-snmp: A collection of SNMP protocol tools and libraries
l net-snmp-perl: The perl NET-SNMP module and the mib2c tool
l net-snmp-utils: Network management utilities using SNMP, from the NET-SNMP project
l smartmontools: Tools for monitoring SMART capable hard disks
l sysstat: Tools for monitoring system performance

Troubleshooting Tools

l ngrep: Network layer grep tool


l strace: Tracks and displays system calls associated with a running process
l tcpdump: A network traffic monitoring tool
l telnet: The client program for the Telnet remote login protocol

3. Automated RING Installation
Scality recommends Automated RING Installation.
The Scality Installer allows for automated installation of the various components of the RING, including the Supervisor, storage nodes, and connectors. The tool uses SaltStack to deploy RING components to designated servers, with the server designated to host the RING Supervisor set up as the Salt Master and the other servers in the RING environment acting as Salt Minions.

3.1. The Role of the Platform Description File 22


3.2. Obtaining and Extracting the Scality Installer 23
3.3. Starting the Scality Installer 23
3.3.1 Setting of the root Execution Flag 23
3.3.2 Executing scality-installer.run 23
3.3.3 Establishing the /srv/scality Directory 24
3.3.4 SSH Information Prompt 24
3.4. Using the Scality Installer 25
3.4.1 Set Up the Environment and Bootstrap Salt 26
3.4.2 Running the Pre-Install Suite 27
3.4.3 Installing Scality RING 31
3.4.4 Installing S3 Connector Service (Optional) 32
3.4.5 Running the Post-Install Suite 34
3.4.6 Generating an Offline Archive (Optional) 35
3.5. Exiting the Scality Installer 39
3.6. scality-installer.run Options 40
3.6.1 --description-file (or -d) 40
3.6.2 --extract-only 40
3.6.3 --destination 40

The Scality Installer leverages a Platform Description File in automating RING installation. This Platform
Description File is generated by the Scality Sizing Tool — available via Scality Sales — for the purpose of
supplying information on the planned RING system architecture to the Scality Installer.

Connectors typically require minimal configuration adjustments once they have been installed using
the Scality Installer.

Salt is used by the Scality Installer to deploy and configure the components on the correct servers. The
installation mechanism takes place out of view, however it can be exposed should more flexibility be
required.
The Scality Installer supports all device-mapper-based block devices, including multipath, RAID, LVM,
and dm-crypt encrypted disks.

3.1. The Role of the Platform Description File


Key to the automated RING installation process, the Platform Description File supplies information to the
Scality Installer concerning the infrastructure on which the RING will be installed. It is generated from the
Scality Sizing Tool, with system hardware information entered by Sales Engineers and technical details
(e.g., minion IDs, IP addresses and RING names) entered by Customer Service Engineers.
The Platform Description File is the only external input required by the RING Installer to run the installation. The Platform Description File is either an XLSX document or a CSV file generated from the XLSX document. Either format can be supplied to the Installer.

The exemplified Platform Description File (for RING + NFS) is correct, however due to PDF constraints it cannot simply be cut and pasted for use as a template.
ring ,,,,,,,,,,,,,,,,,,,,,,,,,,,,
sizing_version,customer_name,#ring,data_ring_name,meta_ring_name,HALO API key,S3 endpoint,cos,arc-data,arc-coding,,,,,,,,,,,,,,,,,,,
14.6,Sample,2,DATA,META,,s3.scality.com,2,9,3,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,
servers,,,,,,,,,,,,,,,,,,,,,,,,,,,,
data_ip,data_iface,mgmt_ip,mgmt_iface,s3_ip,s3_iface,svsd_ip,svsd_iface,ring_membership,role,minion_id,enclosure,site,#cpu,cpu,ram,#nic,nic_size,#os_disk,os_disk_size,#data_disk,data_disk_size,#raid_card,raid_cache,raid_card_type,#ssd,ssd_size,#ssd_for_s3,ssd_for_s3_size
10.0.0.11,eth0,,,,,,,"DATA,META","storage,elastic",storage01,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.12,eth0,,,,,,,"DATA,META","storage,elastic",storage02,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.13,eth0,,,,,,,"DATA,META","storage,elastic",storage03,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.14,eth0,,,,,,,"DATA,META","storage,elastic",storage04,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.15,eth0,,,,,,,"DATA,META","storage,elastic",storage05,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.16,eth0,,,,,,,"DATA,META","storage,elastic",storage06,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,4,20,0,0,,1,50,0,0
10.0.0.17,eth0,,,,,,,"DATA,META","connector,nfs",connector01,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,0,0,0,0,,0,0,0,0
10.0.0.18,eth0,,,,,,,"DATA,META","connector,nfs",connector02,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,0,0,0,0,,0,0,0,0
10.0.0.19,eth0,,,,,,,,supervisor,supervisor01,VIRTUAL MACHINE,site1,8,CPU (2.2GHz/1 cores),16,1,1,1,160,0,0,0,0,,0,0,0,0
,,,,,,,,,,,,,,,,,,,,,,,,,,,,

3.2. Obtaining and Extracting the Scality Installer
Offered as a standard makeself archive file, the Scality Installer is available from the Scality repository
page, linked from the Scality RING Customer Resources.

Confirm that the .run archive file has root executable permission.

The Scality Installer archive comes with several packages that allow RING installation to proceed without
Internet connection. Located in /srv/scality/repository, these packages include:

salt: embedded salt packages mirror
internal Scality packages for the RING7 v10 release
offline_dependencies: a complete offline repository with Scality dependencies, basic system packages and extra tools required for Scality RING usage

3.3. Starting the Scality Installer


Several distinct steps comprise the start-up of the Scality Installer, including the setting of the root execution flag, execution of the scality-installer.run file, the establishment of the /srv/scality directory, and the inputting of all necessary SSH information.

3.3.1 Setting of the root Execution Flag


Ensure the root execution flag is set on the .run file:

[root@scality] # chmod +x scality-ring-offline-{{ringVersion}}_{{distributionTargetLocation}}_{{generationDate}}.run

3.3.2 Executing scality-installer.run


Invoke the scality-installer.run file with the --description-file option and the platform description
file argument.

[root@scality]# scality-installer.run --description-file /root/{{platformDescriptionFile}}

Refer to "scality-installer.run Options" on page 40 for more information on the --description-


file option, as well as for information on other applicable scality-installer.run file options.

3.3.3 Establishing the /srv/scality Directory
On execution of the scality-installer.run file, the Scality Installer prompts for the /srv/scality directory. If the
/srv/scality directory does not yet exist the Scality Installer will create it prior to running the Platform
Description File. Otherwise, Scality Installer will prompt for extraction action.

The folder "/srv/scality" already exists, overwrite content? [y/n/provide a new path to extract to]

Input                  Result
None                   Installation is aborted; the Scality Installer invites the user to provide the --destination option to determine the extraction location.
n                      Installation continues without any extraction, using the existing /srv/scality directory.
y                      Installation continues, with archive content extracted and written over the existing /srv/scality directory.
{{newDirectoryPath}}   Installation continues, with archive content extracted to {{newDirectoryPath}}. If a directory already exists at {{newDirectoryPath}}, the user is prompted again for extraction action.

The folder "/{{foldername}}/{{subfoldername}}" already exists, overwrite content?
[y/n/provide a new path to extract to]

At this point the user can opt to overwrite or not overwrite the content of the {{newDirectoryPath}}, or they can indicate either a different existing directory path or a path to a non-existent directory (which will subsequently be created).

3.3.4 SSH Information Prompt


At this point, the console offers prompts for the SSH information required to deploy the software on all
servers.

Scality recommends setting up passwordless key authentication for the user root (refer to "Secure Shell (SSH)" on page 12).

To employ a user other than root (e.g., "centos"), refer to "Using “centos” When root/ssh Login is Disabled" on page 12.

Credentials, once entered, remain active only until such time as the
user exits the Scality Installer.

By default, the topmost Scality Installer command is highlighted at initial start up.

3.4. Using the Scality Installer


The Scality Installer Menu offers commands that correlate to major RING installation steps (with a command, also, for generating an offline archive). These commands are presented in the sequence in which the installation steps are best followed, though it is possible to run commands out of order as needed.

Running a command suspends the Scality Installer Menu, replacing it in the window instead with
the command output. Once a selected command completes, review the output and press Enter to
return to the Scality Installer Menu.

3.4.1 Set Up the Environment and Bootstrap Salt 26


3.4.2 Running the Pre-Install Suite 27
3.4.3 Installing Scality RING 31
3.4.4 Installing S3 Connector Service (Optional) 32
3.4.5 Running the Post-Install Suite 34
3.4.6 Generating an Offline Archive (Optional) 35

3.4.1 Set Up the Environment and Bootstrap Salt
The first step in an Automated Installation is to set up the environment for RING installation, set up the
Scality repositories to administer Scality, third-party and offline packages, and to deploy SaltStack on
every machine listed in the Platform Description File.
From the Scality Installer Menu, invoke the Set Up the Environment and Bootstrap Salt command.

The command runs various checks to determine the environment's readiness for RING installation.

At completion of the command, the environment is ready to continue to the next command step, Run the Pre-Install Suite. Press the Enter key to return to the Scality Installer Menu or the Ctrl+c keyboard combination to exit the Scality Installer.
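
Once the bootstrap has completed, a quick sanity check is to confirm that every machine from the Platform Description File responds to the Salt master; the following is a minimal sketch, run from the Supervisor (which hosts the Salt master).

# Every minion listed in the platform description should answer True
salt '*' test.ping
# List the accepted minion keys to spot any missing server
salt-key -L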

3.4.2 Running the Pre-Install Suite
Execute the Run the Pre-Install Suite menu command to check the availability and accessibility of hardware servers and components as defined in the Platform Description File. In addition, the Pre-Install Suite checks and updates OS settings per Scality recommendations.

The command will run various checks to determine whether the hardware and software in place is compliant with the RING installation.

Hardware Checks
The hardware checks performed by the Pre-Install Suite use the Platform Description File as input to generate a Salt roster file and use salt-ssh to connect to the system servers to retrieve hardware information. Differences between the actual hardware and the referenced Platform Description File are then computed and displayed, with the results grouped by server and categorized as MATCH, MISSING, or UNEXPECTED.

Results Description
MATCH A correspondence was found between the platform description and the actual hardware.
MISSING An item in the platform description file could not be located on the server. Such items may prevent a
successful installation.
UNEXPECTED An item was located on the server, but is not present in the platform description. Such items should
not cause harm, however the situation should be closely monitored.

l A number of items have a dependency relationship (e.g., disks attached to a RAID card),
and thus if the parent item is missing all of its dependencies will also be missing.
l For RAID card matching, a "best fit" algorithm is used since the platform description file is
not detailed enough to give the exact configuration. As such, when scanning hardware the
hardware check attempts to find the closest possible configuration (in terms of the number
of disks and SSDs of the expected size) for each RAID card in the system.

In the event the result set reveals extra or missing hardware items, check for permutations between servers, network configuration, or inaccurate RAID card settings. Corrective action implies changing hardware, such as the addition of missing disks or CPU, or the plugging of disks into the appropriate RAID card.

OS System Checks
The OS system checks run a batch of tests to confirm that the target Linux system is properly configured for the Scality RING installation. A criticality level is associated with each system check test, which may trigger a repair action by the Pre-Install Suite or require user intervention in the event of test failure.

Test Criticality         Action to be Taken in the Event of Failure


MANDATORY Critical problem, usually corresponding to existing system hardware or the OS installation. The install will fail if
such issues are not fixed prior to running the RING Installer.
OPTIONAL A parameter differs from the recommended value, however no specific action is needed.
SKIPPED A test has not been run, however no specific action is needed.
WARNING A parameter is incorrect or not in suitable range (the Pre-Install Suite will automatically fix the issue).

Resolving Mandatory Errors

All reported MANDATORY issues must be resolved prior to RING installation.

Analyses Run by Pre-Install Suite       Examined by the Pre-Install Suite       Recommended Action in the Event of Failure
bizstorenode Ring ports TCP ports 4244->4255 are free and Stop and uninstall application running on these
available for the bizstorenode pro- ports
cesses
bizstorenode Ringsh Admin ports TCP ports 8084->8095 are free and Stop and uninstall application running on these
available for the bizstorenode pro- ports
cesses
bizstorenode Web Admin ports TCP ports 6444->6455 are free and Stop and uninstall application running on these
available for the bizstorenode pro- ports
cesses
Current UID User ID Run script as user "root"
Duplex for interface: %%val%% Network interface must be in full Check network (network controller, switch)
duplex mode
Elasticsearch TCP ports TCP ports 9200,9300 are free and Stop and uninstall application running on these
available for Elasticsearch ports
FastCGI internal ports TCP ports 10000 and 10002 are Stop and uninstall application running on these
free and available for the sproxy- ports
d/srebuild processes
Filesystem /var That /var has its own filesystem Add a dedicated /var partition
Free space in /var Free space reviewed per server Create a /var partition with at least 200GB for
type storage nodes and 160GB for connectors and
Supervisor
Glibc version Glibc minimum version Upgrade glibc
required: for CentOS/RHEL 6, glibc-
2.12-192 or later; for CentOS/RHEL
7, glibc-2.17-106 patch 2.4 (patch
date 2/16/2016) or later
Grafana interface TCP port TCP port 3000 is free and Stop and uninstall application running on this
available for Grafana port
HDDs cache reads consistency timings Cache reads of all free (unpar- Check the full report, one or more HDD devices
titioned) HDDs seems to be slower than another
HDDs devices reads consistency timings Device reads of all free (unpar- Check the full report, one or more HDD devices
titioned) HDDs seems to be slower than another
Kibana interface TCP port TCP port 5601 is free and Stop and uninstall application running on this
available for Kibana port
NTP peering NTP or chrony clients are syn- Install and configure ntpd or chronyd  
chronized with an NTP or chronyd
server
Operating System bits The CPU uses a 64 bit architecture Substitute another server with a 64-bit server-
/cpu.
Raid controller cache size Raid controllers have at least 1GB Use a Raid controller cache module with a size
of cache of 1GB or more
Rest TCP port TCP port 81 is free and available Stop and uninstall application running on this
for incoming REST requests port



Sagentd port: TCP port 7084 is free and available for the sagentd process. On failure: stop and uninstall the application running on this port.
Salt master TCP ports: TCP ports 4505 and 4506 are free and available for the Salt master. On failure: stop and uninstall the application running on these ports.
Speed for interface: %%val%%: Run speed for the interfaces is at least 10Gb/s. On failure: check the network (network controller, switch).
SSDs cache reads consistency timings: Cache reads of all free (unpartitioned) SSDs. On failure: check the full report; one or more SSD devices seems to be slower than another.
SSDs devices reads consistency timings: Device reads of all free (unpartitioned) SSDs. On failure: check the full report; one or more SSD devices seems to be slower than another.
supervisor TCP ports: TCP ports 80, 443, 3080, 2443, 3443, 4443, 5580, and 12345 are open and available for the Supervisor. On failure: stop and uninstall the application running on these ports.

Parameters Set by the Pre-Install Suite

By default, the Pre-Install Suite examines various operating system parameters and applies any necessary value corrections.

Pre-Install Suite parameters and descriptions:

Cache pressure: vm.vfs_cache_pressure is set to 50 to reduce swapping
Check swappiness current value: Swapping should be avoided; vm.swappiness is set to '1'
ip_local_port_range: Extends the default local port range, set to 20480 -> 65000
iptables: iptables kernel modules should not be loaded and iptables must be disabled
IRQ balancing service: irqbalance should run once at boot and then stop (reboot needed)
Keep "not reserved space": At least 2GB of memory should not be reserved
Localhost definition: localhost should be defined once in the /etc/hosts file
Max open files: Max open files set to '65535' (ulimit -n)
Max stack size: Max stack size set to '10240' (ulimit -s)
Max user processes: Max user processes set to '1031513' (ulimit -u)
NUMA: NUMA has to be disabled (reboot needed)
Number of incoming connections: Number of max socket connections set to 4096
Number of semaphores SEMMNI: Maximum number of semaphore sets in the entire system set to 256
Number of semaphores SEMMNS: Maximum number of semaphores in the entire system set to 32000
Number of semaphores SEMMSL: Maximum number of semaphores per set, set to 256
Number of semaphores SEMOPM: Maximum number of operations for each semaphore call set to 32
Overcommit heuristic activation: Allow heuristic overcommit (vm.overcommit_memory set to 0)
SELinux deactivation: SELinux has to be deactivated (CentOS & RHEL)
Transparent hugepages: Transparent hugepages disabled (reboot needed)
Tuned-adm profile: Run 'tuned-adm profile latency-performance' to optimize the system (RHEL/CentOS)
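
These corrections are applied automatically by the Pre-Install Suite. For reference only, a minimal sketch of how a few of them map to standard Linux kernel settings (the sysctl key names below are standard kernel parameters, not values taken from the Installer itself):

# Applied automatically by the Pre-Install Suite; shown here only for reference.
sysctl -w vm.swappiness=1                             # avoid swapping
sysctl -w vm.vfs_cache_pressure=50                    # reduce cache reclaim pressure
sysctl -w net.ipv4.ip_local_port_range="20480 65000"  # extended local port range
sysctl -w vm.overcommit_memory=0                      # heuristic overcommit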



3.4.3 Installing Scality RING

Prior to installing Scality RING, the installation environment must be prepared, the repository must be set up, and Salt must be deployed on every node (refer to "Set Up the Environment and Bootstrap Salt" on page 26).

Initiate the Install Scality RING menu command to install the Scality RING and all needed components on
every node, as described in the Platform Description File (CSV/XLS file provided to the Installer).

S3 Connector installation is handled separately via the Install S3 Service (Optional) Scality Installer
Menu command (refer to "Installing S3 Connector Service (Optional)" on the next page).

Scality Installer will next prompt for the Supervisor password. If the prompt is left blank, a password will
be automatically generated.

RING installation will proceed, with on-screen output displaying the various process steps as they occur.



More details about the installation progress are available by opening a new console and viewing the contents of the installer log file located at /tmp/scality-installer.log.
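For example, to follow the log from a second console while the installation runs (a minimal illustration; the prompt shown is indicative only):

[root@scality] tail -F /tmp/scality-installer.log
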
If all installation steps return OK, the environment is ready to continue to the next command step, be it
Install S3 Service (optional) or Run the Post-Install Suite.

3.4.4 Installing S3 Connector Service (Optional)

A RING must be in place prior to installing the S3 Connector service. In addition, to use the Scality Installer to install the S3 Connector service, the target RING must have been installed via the Scality Installer.

The Install the S3 Service (Optional) menu command installs the S3 Connector components on the
nodes as described in the Platform Description File (CSV/XLS file provided to the Installer).



The installation of the S3 Connector service will proceed, with the on-screen output displaying the various steps in the process as they occur.

Various elements of the S3 Connector installation process can run for long periods of time without generating any output (typically between 20 and 45 minutes). To view installation progress in detail, open a new console and view the contents of the ansible.log file.

[root@scality] tail -F /srv/scality/s3/s3-offline/federation/ansible.log

If all installation steps return OK, the environment is ready to continue to the next command step, Run the
Post-Install Suite.

Specifying an Override for Heap Size


Using Scality Installer it is possible to override the Elasticsearch instance heap size by specifying a file
with external data.
1. Exit the Scality Installer if it is currently running, either by selecting the Exit command, or by pressing the Ctrl+c keyboard combination or the q key.
2. Create an extdata file with the following content (as exemplified with a 1 gigabyte heap size):

{
"env_logger": {
"es_heap_size_gb": 1
}
}

3. Restart the Scality Installer from the command line with the --s3-external-data option along with the path to the extdata file.

scality-installer.run --description-file /path/to/cluster.csv --s3-external-data /{{filePath}}/extdata

4. Select the Install S3 Service (Optional) command from the Scality Installer Menu.

Installing S3 Connector Service without Scality Installer


Use the generate_playbook tool to install S3 Connector Service without the use of the Scality
Installer. For more information, refer to the S3 Connector Setup and Installation Guide.



3.4.5 Running the Post-Install Suite
Issue the Run the Post-Install Suite menu command to validate the installation.

The command will run various checks to determine whether the RING installation is successful. In
sequence, the specific tasks that comprise the Post-Install Suite include:
1. Running script using salt
2. Starting checks
3. Checking if server is handled by salt
4. Checking missing pillars
5. Gathering info from servers
6. Running tests

Unlike the Pre-Install Suite, the Post-Install Suite delivers its results as an index.html file packaged in a tarball (/root/post-install-checks-results.tgz).
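To review the report, copy the tarball to a workstation and extract it. A minimal example (the extracted directory name assumes the tool's default output prefix, post-install-checks-results):

tar xzf post-install-checks-results.tgz
# then open post-install-checks-results/index.html in a web browser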

At the completion of the Post-Install Suite, RING installation is complete and the system can be put to use.



3.4.6 Generating an Offline Archive (Optional)
Installing Scality RING without Internet access requires the use of an offline archive, either one provided
by Scality or a custom-generated one designed to meet a customer's specific needs.

Acquiring a Scality-Provided Offline Archive


1. From the command line, invoke the lsb_release -a command to identify the Linux distribution/version on which the RING installation will occur (exemplified with a CentOS 7.3 release).

lsb_release -a

LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.3.1611 (Core)
Release:        7.3.1611
Codename:       Core

2. Navigate to the Scality repository page (linked from Scality RING Customer Resources) and
download the offline archive for the identified operating system.
3. Copy the archive to the server that will act as the Supervisor for the RING.
The packages provided in the offline archive for RedHat may differ slightly from those provided for CentOS; it is therefore necessary to select the offline archive that exactly matches the distribution and release on which the RING will be installed. If Scality does not provide an Offline Archive for the distribution, one must be generated.

Generating a Custom Offline Archive


Reasons for generating a custom offline archive rather than using the one provided by Scality include:

l Scality does not provide an Offline Installer for the Linux distribution in use (e.g., RedHat 6.5)
l A repository is already in use for packages within a customer's existing infrastructure
l A specific set of packages needs to be added to the Offline Installer for later use

The custom generated archive name conforms to the following naming standard:
scality-ring-offline-{{ringVersion}}_{{distributionTargetLocation}}_{{generationDate}}.run

1. Navigate to the Scality repository page (linked from Scality RING Customer Resources) and
download the Scality Offline Archive.
Download the corresponding CentOS version if the plan is to generate an archive for a
RedHat distribution.

2. Set up a server (either a VM or a container) with the desired target distribution. For RedHat it is
necessary to set up the Epel Repository, whereas for CentOS or Ubuntu it is not necessary to
download any additional files.
3. Copy the Scality Offline Archive to the target server.
4. Complete the offline archive generation via the applicable distribution-specific sub-procedure.



Generating a Custom Offline Archive for CentOS

1. Run the Scality Installer, leaving blank all authentication questions (not required for Offline
Archive generation).

./scality-ring-7.2.0.0.centos_7.run --description-file /var/tmp/ring-centos7.csv

Extracting archive content to /srv/scality
Running /srv/scality/bin/launcher --description-file /var/tmp/ring-centos7.csv
Please provide the user to connect to the nodes (leave blank for "root"):
Please provide the SSH password to connect to the nodes (leave blank if you have a private key):
Please provide the private SSH key to use or leave blank to use the SSH agent:

The Scality Installer Menu displays.

2. Select the Generate the Offline Archive (Optional) command. The Scality Installer will then prompt for the path where the custom offline archive should be generated, proposing a default path that includes the version of the actual distribution.

[2017-08-19 03:17:11,571] Detailed logs can be found in /var/log/scality/setup/generate_offline/debug.log
Choose a destination for the new Scality Setup Archive
(default to /var/tmp/scality/scality-ring-offline-7.2.0.0.r170819024102.57af35feb6_centos_7.3.1611_201708190317.run)

3. Press Enter or specify an alternative path to begin downloading the external dependencies
required for a RING installation without internet access.



Once the download process is complete, the Scality Installer steps can be run using the generated offline archive.

Generating a Custom Offline Archive for RedHat

Registration is required for access to the RedHat repository. If a local mirror of the RedHat repos-
itories is in use, confirm the availability of these repositories.

1. Confirm that the server is registered.

subscription-manager status
+-------------------------------------------+
System Status Details
+-------------------------------------------+
Overall Status: Current

2. Execute the subscription-manager repos command.


l RedHat 7:

subscription-manager repos --enable=rhel-7-server-optional-rpms \
    --enable=rhel-7-server-extras-rpms --enable=rhel-ha-for-rhel-7-server-rpms \
    --enable=rhel-rs-for-rhel-7-server-rpms
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

l RedHat 6:

subscription-manager repos --enable=rhel-6-server-optional-rpms \
    --enable=rhel-ha-for-rhel-6-server-rpms --enable=rhel-rs-for-rhel-6-server-rpms
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm



3. Extract the Scality Installer using the --extract-only option.

./scality-ring-7.2.0.0.centos_7.run --extract-only
Extracting archive content to /srv/scality
--extract-only option specified
Run /srv/scality/bin/launcher --description-file <path> manually to install the RING

For an overview of the options that can be run when extracting Scality Installer, refer to
"scality-installer.run Options" on page 40.

4. Generate the Custom Offline Archive using the generate-offline command with the --use-existing-repositories option (by default, the command is available and is run from /srv/scality/bin/generate-offline).

# /srv/scality/bin/generate-offline --use-existing-repositories

The repositories that will be employed in preparing the environment are thus set, and at
this point the Scality Installer can install the RING using the offline dependencies.

generate-offline Command Options


In addition to the --use-existing-repositories option used to generate a custom offline archive, several other options can be used with the generate-offline command.

-h, --help: Display help content and exit
-d {{centOSVersion}}, --distribution {{centOSVersion}}: Force the download of the specified distribution
-D, --debug: Print debugging information
--http-proxy {{httpProxy}}: URL of the form http://{{user}}:{{password}}@{{url}}:{{port}} used during package download
-l {{logFile}}, --log-file {{logFile}}: Specify a log file
--no-install-prereq: Do not automatically install the prerequisites needed to generate the offline repository (such as createrepo/reprepro)
-o {{outputPath}}, --output {{outputPath}}: Path to the new archive to generate
-p {{packagesToAdd}} ..., --packages {{packagesToAdd}} ...: List of packages to add to the default ones
-r {{directoryForRepository}}, --repository {{directoryForRepository}}: Directory where the offline repository will be stored
--use-existing-repositories: Do not use a temporary configuration to generate the offline installer archive. Use this option to generate an offline archive without an online connection when a local repository is already set up
--skip-offline-generation: Do not generate the offline repository
--skip-offline-mode: Do not set offline mode as the default
--skip-repack: Do not repack the installer file. The installer will not use offline mode by default.
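
For instance, a sketch of generating an archive for a specific distribution while adding extra packages (the distribution string, package names, and output path below are illustrative assumptions, not values mandated by the tool):

# Hypothetical example values; adjust the distribution, packages, and output path.
/srv/scality/bin/generate-offline -d centos_7.3.1611 \
    -p htop iotop \
    -o /var/tmp/scality-ring-offline-custom.run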



3.5. Exiting the Scality Installer
To exit the Scality Installer, press Exit in the Scality Installer Menu.

The Installer can also be closed by tapping the q key or the Ctrl+c
keyboard combination.

Upon exiting Scality Installer — if the RING is installed successfully — a link for the Supervisor will display.

Enter the provided URL into a web browser to access the Supervisor Web UI.

Web browsers that support the new Supervisor GUI include Chrome,
Firefox, Internet Explorer, and Opera.



3.6. scality-installer.run Options
scality-installer.run requires either the --description-file option or the --extract-only option.

3.6.1 --description-file (or -d)


Use of the --description-file option with the scality-installer.run command requires a Platform Description File that describes the planned RING system architecture (either a CSV file or an XLSX file).
Calling the command with the --description-file option extracts the Installer archive and runs the launcher.

[doc@scality]$ build/scality-installer.run --description-file /home/scality/cluster.csv


Extracting archive content to /srv/scality
Running /srv/scality/bin/launcher --description-file /home/scality/cluster.csv

3.6.2 --extract-only
Although the --noexec option remains available for the Installer archive extraction, it is now deprecated, and the --extract-only option is recommended in its stead. If either of these options is used, the Installer will not be run automatically but will simply be extracted.
After the archive extraction, /srv/scality/bin/launcher can be called at a later time to display the Installer
menu and start the installation.

[doc@scality]$ build/scality-installer.run --extract-only


Extracting archive content to /srv/scality
--extract-only option specified
Run /srv/scality/bin/launcher --description-file <path> manually to install the RING

3.6.3 --destination
Although the default extraction directory is /srv/scality, the installer can be extracted to any location with
the --destination option and a directory name argument. The --destination option can be
used with either the --description-file option or the --extract-only option.

l description-file:

[doc@scality]$ build/scality-installer.run --description-file /home/scality/cluster.csv \


--destination /home/me/scality-ring
Extracting archive content to /home/me/scality-ring
Running /home/me/scality-ring/bin/launcher --description-file /home/scality/cluster.csv

l extract-only:

[doc@scality]$ build/scality-installer.run --extract-only --destination /tmp/scality


Extracting archive content to /tmp/scality
--extract-only option specified
Run /tmp/scality/bin/launcher --description-file <path> manually to install the RING



4. Advanced RING Installation
Unlike an Automated RING Installation, an Advanced
RING Installation does not perform hardware checks, nor does it
include an automated S3 installation routine.

4.1. Set Up the Environment and Bootstrap Salt 41


4.2. Running the OS System Checks Manually 45
4.3. Installing Scality RING 46
4.4. Running Post-Install Checks 49

The Scality Installer is required for an Advanced RING Installation.


Refer to "Obtaining and Extracting the Scality Installer" on page 23.

4.1. Set Up the Environment and Bootstrap Salt


To perform an Advanced RING Installation it is necessary to establish a valid SaltStack deployment and to generate several files that contain crucial RING architecture information. Alternatively, if a working SaltStack installation is already in place, proceed to "Completing the RING Architecture" on the next page.

The deploy-salt binary provides for the manual bootstrapping of SaltStack.

4.1.1 Deploying Salt


1. Start the web server with the salt repository (typically located at /srv/scality/repository).

/srv/scality/bin/tools/setup-httpd -b -d \
/srv/scality/repository {{supervisorIP}}:{{httpPort}}

The embedded web server will run until the next reboot. To kill the web server, if neces-
sary, run the kill $(cat /var/run/http_pyserver.pid) command.



2. Create the /etc/salt directory.

mkdir -p /etc/salt

3. Build a roster file at /etc/salt/roster for the platform (as exemplified). Refer to the official SaltStack Roster documentation for more information.

sup:
  host: 10.0.0.2
  user: root
  priv: /root/.ssh/id_rsa

node1:
  host: 10.0.0.10
  user: root
  priv: /root/.ssh/id_rsa
...

4. Create the /srv/scality/pillar directory.

mkdir -p /srv/scality/pillar

5. Create the /srv/scality/pillar/top.sls file, containing the following content.

base:
  '*':
    - bootstrap

6. Create the /srv/scality/pillar/bootstrap.sls file, containing the following content.

scality:
  repo:
    host: {{supervisorIP}}
    port: {{httpPort}}
  saltstack:
    master: {{supervisorIP}}

7. Install salt on every machine.

/srv/scality/bin/tools/deploy-salt --master {{rosterID}} --all \
    --accept --roster /etc/salt/roster
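
Once the minions are deployed and their keys accepted, a quick way to confirm that every machine answers the Salt master (a standard Salt check, not specific to the Installer):

salt '*' test.ping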

4.1.2 Completing the RING Architecture


A script — generate-pillar.sh — is deployed to complete the RING architecture, through the use of several
files. By default, this script provides a basic RING configuration with two RINGs: DATA (ARC) on HDDs,
and META (replication) on SSDs (though a single RING can be installed by deleting the unwanted RING
entry in the generated configuration). The script is located at /srv/scality/scripts/generate-pillar.sh.

The Role of the generate-pillar.sh Script


The generate-pillar.sh script generates the pillar file used to perform a complete Scality RING installation
with Salt.



To ease the process and simplify the configuration, generate-pillar.sh assumes the following:

l The Supervisor is the Salt master instance


l The Supervisor, storage nodes, and connectors are on dedicated machines
l HDDs are available on storage nodes for a RING named DATA
l SSDs are available on storage nodes for a RING named META
l The RING named DATA is of type ARC
l The RING named META is of type replication

Required generate-pillar.sh Script Details


The generate-pillar.sh script requires details on the planned installation. Once these details are entered, the script can generate a valid pillar file that can be used to perform the installation with the Salt orchestrate runner or with the scripted-install.sh script.

l Supervisor ID and IP
l Data and management ifaces
l Supervisor credentials
l Type of installation (online or offline)
l Storage node Salt matcher#
l Storage node data and management interfaces
l Type of ARC for the RING named DATA
l Type of replication (COS) for the RING named META
l Connector Salt matcher#
l Connector type(s)
l Connector services (optional)
l Connector interfaces
l SVSD interface (optional)
l Netuitive key (optional)
# The matcher is a word used to match all the storage node or connector minions (e.g.,
*store* for the storage nodes on machine1.store.domain, machine2.store.domain, etc.).
The matchers are assigned in the --nodes and --conns-* options (where * represents
the type of connector, so the matcher for NFS would be --conns-nfs).

generate-pillar.sh generates a main SLS file, named as the first argument on the command line. It also
generates separate pillar files for nodes and conns groups in the same directory.

generate-pillar.sh Script Syntax

{{supervisorServer}}# /srv/scality/scripts/generate-pillar.sh [options...] {{filename}}



generate-pillar.sh Execution
Use of the generate-pillar.sh script requires the dialog package to be installed; the script produces the pillar file following a dialog-driven inquiry on the RING topology.

{{supervisorServer}}# yum install -y dialog


{{supervisorServer}}# /srv/scality/scripts/generate-pillar.sh /srv/scality/pillar/scality-common.sls

generate-pillar.sh Script Options


The pillar generation can be automated using the generate-pillar.sh script options.

-h or --help: Shows help for the script on STDOUT
--sup-id ID: Supervisor minion ID
--sup-ip IP: Supervisor IP
--data-iface IFACE: Selects the data interface for the Supervisor
--mgmt-iface IFACE: Selects the management interface for the Supervisor
--password PASSWD: Uses the provided PASSWD as the password for the Supervisor Web UI
--nodes MATCHER: Uses the provided Salt MATCHER to target storage nodes (e.g., --nodes={{store}})
--nodes-data-iface IFACE: Selects the data interface for the storage nodes
--nodes-mgmt-iface IFACE: Selects the management interface for the storage nodes
--data-arc SCHEMA: Uses the provided SCHEMA as the ARC erasure code setting for the RING named DATA (e.g., 4+2*3)
--meta-cos VALUE: Uses the provided VALUE as the COS for the RING named META (e.g., 3)
--repo-http-port PORT: Port used to share package repositories through HTTP
--repo-mode online|offline: Package repositories mode, online or offline
--[no-]repo-use-system: Use system repositories; the option is negated if the "no-" prefix is used (default is true if --repo-mode is online and false, i.e. "no-", if offline)
--[no-]repo-conf-3rd-party: Whether to configure third-party package repositories (default is false)
--[no-]repo-use-embedded: Whether to use embedded package repositories (default is true)
--no-conns: Connectorless pillar generation
--conns-data-iface IFACE: Selects the data interface for connectors
--conns-mgmt-iface IFACE: Selects the management interface for connectors
--conns-svsd-iface IFACE: Selects the SVSD interface for connectors
--conns-sofs MATCHER: Uses the provided Salt MATCHER to target SOFS connectors (e.g., --conns-sofs={{minionsIdentifier}})
--conns-nfs MATCHER: Uses the provided Salt MATCHER to target NFS connectors (e.g., --conns-nfs={{minionsIdentifier}})
--conns-cifs MATCHER: Uses the provided Salt MATCHER to target CIFS connectors (e.g., --conns-cifs={{minionsIdentifier}})
--conns-cdmi MATCHER: Uses the provided Salt MATCHER to target CDMI connectors (e.g., --conns-cdmi={{minionsIdentifier}})



--conns-rs2 MATCHER: Uses the provided Salt MATCHER to target RS2 connectors (e.g., --conns-rs2={{minionsIdentifier}})
--conns-sproxyd MATCHER: Uses the provided Salt MATCHER to target sproxyd connectors (e.g., --conns-sproxyd={{minionsIdentifier}})
--conns-svsd MATCHER: Uses the provided Salt MATCHER to target SVSD connectors (e.g., --conns-svsd={{minionsIdentifier}})
--conns-halo MATCHER: Uses the provided Salt MATCHER to target Halo connectors (e.g., --conns-halo={{minionsIdentifier}})
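
As an illustration, a fully non-interactive invocation might look like the following sketch (the matcher values, interface names, and erasure-coding schema are hypothetical and must be adapted to the actual platform):

# Hypothetical values; adjust matchers, interfaces, and schemas to the platform.
{{supervisorServer}}# /srv/scality/scripts/generate-pillar.sh \
    --sup-id sup --sup-ip 10.0.0.2 \
    --data-iface eth0 --mgmt-iface eth1 \
    --nodes '*store*' --nodes-data-iface eth0 --nodes-mgmt-iface eth1 \
    --data-arc '4+2*3' --meta-cos 3 \
    --conns-nfs '*conn*' --conns-data-iface eth0 --conns-mgmt-iface eth1 \
    /srv/scality/pillar/scality-common.sls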

4.2. Running the OS System Checks Manually


The OS system checks serve to confirm that the target Linux system is properly configured for the Scality
RING installation.
1. Download the packages using the same credentials employed to download RING packages.
l CentOS6

https://packages.scality.com/scality-support/centos/6/x86_64/scality/ring/scality-preinstall

l CentOS7

https://packages.scality.com/scality-support/centos/7/x86_64/scality/ring/scality-preinstall

2. Make the script executable.

chmod +x scality-preinstall

3. Copy or move the downloaded packages to the Supervisor server.


4. Modify the platform.yaml file with the IP addresses of the RING servers (as exemplified).

{% macro iinclude(filename) %}{% include filename %}{% endmacro %}

my_config:
    supervisor:
        {{ iinclude("default/supervisor.yaml")|indent(8) }}
        resources:
            - {{ iinclude("tests/supervisor.yaml")|indent(14) }}
              #login: user
              #password: password
              ip:
                  - 10.200.47.220

    connector:
        {{ iinclude("default/connector.yaml")|indent(8) }}
        resources:
            - {{ iinclude("tests/connector.yaml")|indent(14) }}
              #login: user
              #password: password
              ip:
                  - 10.200.47.224

    storage:
        {{ iinclude("default/storage.yaml")|indent(8) }}
        resources:
            - {{ iinclude("tests/storage.yaml")|indent(14) }}
              #login: user
              #password: password
              ip:
                  - 10.200.47.226

5. Run the Pre-Install Suite with the modified platform.yaml file as a --file argument.
Employ the --dummy argument to prevent the Pre-Install Suite from automatically applying
any recommended corrections.

/srv/scality/bin/tools/preinstall
./scality-preinstall --file platform.yaml --color --dummy

6. After fixing any MANDATORY issues, rerun the Pre-Install Suite.

scality-preinstall --file platform.yaml --color

No MANDATORY issues should display in the command output.

7. Send the compressed result archive to Scality for analysis.

4.3. Installing Scality RING


Silent installation of the RING can be performed using scripts in conjunction with the RING Installer.
Silent RING Installation with Scripts is an advanced operation that may require assistance from Scality
Customer Service.

4.3.1 Using the Install Script Command Line 47


4.3.2 Installation Steps Recognized by the scripted-install.sh Script 47

Code examples running multiple lines are correct only as displayed


on the page, as due to PDF constraints such examples cannot be
accurately copy-pasted.

RING installation via scripts requires that a working Salt installation be in place.



4.3.1 Using the Install Script Command Line
The installation will be performed with a script — scripted-install.sh — that is located at /srv/scality/scripts.
By default, all output from the Salt command lines in the scripted-install.sh script is directed to stderr. To change the output destination, use either the INSTALLER_LOGFILE environment variable or command-line redirection.

{{supervisorName}}# /srv/scality/scripts/scripted-install.sh -l /tmp/installer.log \
    --destination {{saltRootDirectory}}

4.3.2 Installation Steps Recognized by the scripted-install.sh Script


Installation steps, their descriptions, and their CLI tags:

Salt Configuration (CLI tag: salt)
1. Sets up the master configuration file
2. Restarts the master instance
3. Waits for minions to reconnect
4. Clears the cache and synchronizes the components

Pre-Install Setup
1. Configures offline mode setup
2. Sets up the repo on all minions
3. Installs python-scality

Roles Setup (CLI tag: roles)
1. Resets the scal_group and roles grains
2. Sets up the storage nodes group (nodes)
3. Sets up the connectors group (conns)
4. Sets up the storage role
5. Sets up the elastic cluster role
6. Sets up connector roles

Supervisor Installation (CLI tag: sup)
1. Installs and configures the Supervisor packages
2. Installs RingSH

RING Setup (CLI tag: rings)
1. Appends RING(s) to storage nodes groups
2. Configures the RING(s) on the Supervisor

Disk Setup (CLI tag: disks)
1. Partitions and formats disks
2. Mounts all disks

Storage Node Installation (CLI tag: nodes)
1. Advertises the Elasticsearch cluster
2. Advertises the ZooKeeper cluster (if needed)
3. Installs and configures the storage nodes
4. Installs and configures the Elasticsearch cluster
5. Installs the ZooKeeper cluster (if needed)

Keyspace Installation (CLI tag: keyspace)
1. Computes the keyspace
2. Assigns keys to storage nodes
3. Joins the nodes to the RING(s)



Connector Installation (CLI tag: conns)
1. Installs NFS connectors (if needed)
2. Installs SMB/CIFS connectors (if needed)
3. Installs SOFS (FUSE) connectors (if needed)
4. Installs CDMI connectors (if needed)
5. Installs RS2 connectors (if needed)
6. Installs sproxyd connectors (if needed)
7. Installs svsd (if needed)
8. Installs Scality Cloud Monitor (if needed)

SupAPI Setup (CLI tag: supapi)
Configures the SupAPI service.

Installation Step Options


Three options are available for each installation step:

--{{stepCLITag}}-only: Execute only the indicated step

--after-{{stepCLITag}}: Continue the process after the indicated step (not valid for the post step)

--no-{{stepCLITag}}: Completely disable the indicated step

Some Examples
When an error occurs during the installation process, the script exits immediately. Once the error is resolved, restart the script with a --{{stepCLITag}}-only option to confirm that the error is resolved prior to moving forward with the installation.
To illustrate, in the following scenario an old repository file confuses the package manager, which causes the installation of the Supervisor to fail.
1. Retry the Supervisor step.

{{supervisorName}}# /srv/scality/scripts/scripted-install.sh -l /tmp/installer.log \
    --sup-only

2. Resume the installation process after the supervisor step.

{{supervisorName}}# /srv/scality/scripts/scripted-install.sh -l /tmp/installer.log \
    --after-sup

3. Exclude specific steps from being run (useful for skipping the keyspace step).

{{supervisorName}}# /srv/scality/scripts/scripted-install.sh -l /tmp/installer.log \
    --no-keyspace --no-conns



4.4. Running Post-Install Checks
Use the Post-Install Checks Tool with RING installations that are not performed via the Scality Installer.
Available in the Scality repository, the scality-post-install-checks package provides a quick
means for checking the validity of RING installations.

4.4.1 Setting Up the Post-Install Checks Tool


1. Copy the scality-post-install-checks package to the Supervisor server and install the package.
l CentOS/RedHat 7:

rpm -ivh scality-post-install-checks.{{ringVersion}}.{{ringBuildNumber}}.el7.x86_64.rpm

l CentOS/RedHat 6:

rpm -ivh scality-post-install-checks.{{ringVersion}}.{{ringBuildNumber}}.el6.x86_64.rpm

{{ringVersion}} is the three-digit RING version number separated by periods (e.g., 7.1.0), while {{ringBuildNumber}} is composed of the letter "r", a 12-digit timestamp, a period, a 10-digit hash, a dash, and an instance number (e.g., r170720075220.506ab05e3b-1).

2. Run the install.sh script from the /usr/share/scality-post-install-checks directory to install a Python virtual environment (in the /var/lib/scality-post-install-checks directory) and the required support tools in that virtual environment.

/usr/share/scality-post-install-checks/install.sh

The initial version of the scality-post-install-checks package requires an Internet connection so that the install.sh script can download and install various supporting tools, including:

l psutil 5.1.3
l pytest 2.8
l pytest-html 1.13.0
l salt-ssh (Same version included in the Scality Installer)

In addition, the install.sh script creates a master_pillar directory for pillar data files under
the virtual env /var/lib/scality-post-install-checks/venv/srv directory.

4.4.2 Post-Install Checks Tool Configuration


For a RING installed using the Scality Installer, no manual preparation is required. In this scenario, the Salt minions are all known to the Salt master, and the pillar data has been created in the /srv directory by the Scality Installer. The Post-Install Checks Tool makes a local copy of this data and runs automatically using Salt.

The Post-Install Checks Tool accesses all the machines in the RING. As the tool uses Salt or Salt-SSH to
test the RING installation, it requires the data that defines the Salt minions (addresses and pillar data).
The tool automatically obtains the data if pillar data is available in the /srv directory, otherwise it can be
created manually.
For a RING that was not installed via the Scality Installer, there is typically no Salt pillar data on the Supervisor, and the RING servers are not declared as Salt minions. In this case, it is necessary to create both a YAML roster file and the pillar data for use with Salt-SSH.

Creating the YAML Roster File


Salt-SSH uses a YAML roster file to determine the remote machines to access when Salt minions are not
available. This roster file must contain the description of remote servers, their means of access, and the
roles they play in the RING.

To aid in roster file creation, a template roster file — roster_file — is


installed by the Post-Install Checks Tool in the script root directory at
the time the script is run.

The roster file should be stored in the virtual environment directory on the machine that simulates the
Salt master (typically the RING Supervisor server).

The scal_group value of each server in the roster file must match the pillar data included in the
top.sls file under the /var/lib/scality-post-install-checks/venv/srv/master_pillar/ directory.

# Example Roster File
# scal_group must be defined to:
#   + apply some pillar state to the listed servers
#   + group connectors for consistency checks
supervisor:
  host: supervisor
  user: root
  priv: ~/.ssh/id_rsa
  grains:
    roles:
      - ROLE_SUP
node-1:
  host: node-1
  user: root
  priv: ~/.ssh/id_rsa
  grains:
    roles:
      - ROLE_STORE
    scal_group:
      - storenode
connector-1:
  host: connector-1
  user: root
  priv: ~/.ssh/id_rsa
  grains:
    roles:
      - ROLE_CONN_CDMI
      - ROLE_CONN_SOFS
    scal_group:
      - sofs
connector-2:
  host: connector-2
  user: root
  priv: ~/.ssh/id_rsa
  grains:
    roles:
      - ROLE_CONN_SPROXYD
    scal_group:
      - sproxyd
connector-3:
  host: connector-3
  user: root
  priv: ~/.ssh/id_rsa
  grains:
    roles:
      - ROLE_CONN_RS2

Available server roles and descriptions:

ROLE_SUP: Supervisor server
ROLE_STORE: RING node servers (role may include srebuildd)
ROLE_CONN_SPROXYD: sproxyd
ROLE_CONN_SOFS: sfused for FUSE
ROLE_CONN_CDMI: sfused for CDMI (aka Dewpoint)
ROLE_CONN_NFS: sfused for NFS
ROLE_CONN_CIFS: sfused for SMB/CIFS
ROLE_CONN_RS2: role includes sindexd

Refer to https://docs.saltstack.com/en/latest/topics/ssh/roster.html for more information on roster file options.

Creating the Pillar Data


The Post-Install Checks Tool uses data contained in the pillars for tests that are run on each machine in use by the RING. On the RING Supervisor server, create pillar files in the master_pillar directory (/var/lib/scality-post-install-checks/venv/srv/master_pillar/) created by the post-install-checks install.sh script.
The pillar files to create include top.sls, scality-common.sls, scality-storenode.sls, scality-sproxyd.sls, and scality-sofs.sls.

top.sls: Types and roles of RING servers available; assigns the state (pillar data files) to be applied to them

scality-common.sls: General RING information

scality-storenode.sls: RING node server information

scality-sproxyd.sls: RING sproxyd connector information

scality-sofs.sls: RING SOFS connector information



top.sls File

The top.sls file lists the types and roles of RING servers available, and assigns the state (pillar data files)
to be applied. It serves to link the type of server to the pillar data.
As exemplified, the top.sls file lists separate server groups for nodes, sproxyd connectors and SOFS-
FUSE connectors.

base:
  # for each server, apply the scality-common state (scality-common.sls will be sent to these servers)
  '*':
    - scality-common
    - order: 1
  # for servers whose scal_group grain is storenode, apply the scality-storenode state (scality-storenode.sls will be sent to these servers)
  'scal_group:storenode':
    - match: grain
    - scality-storenode
    - order: 2
  'scal_group:sofs':
    - match: grain
    - scality-sofs
    - order: 2
  'scal_group:sproxyd':
    - match: grain
    - scality-sproxyd
    - order: 2

scality-common.sls File

The scality-common.sls file contains information common to all RING servers.


For the Post-Install Checks Tool to function correctly, the scality-common.sls file must provide the RING credentials, the network interface controller, whether the RING(s) use SSL, and the Supervisor IP address.

Note that if the default root/admin credentials are in use, it is not necessary to include the cre-
dentials section (with the internal_password and internal_user parameters).

Specifically, the file must contain all of the fields exemplified below (an exception being single-RING environments, in which the name and is_ssd fields of a separate metadata RING are not included).

scality:
  credentials:
    internal_password: admin
    internal_user: root
  prod_iface: eth0
  ring_details:
    DATA:
      is_ssd: false
    META:
      is_ssd: true
  supervisor_ip: 192.168.10.10

As exemplified, the scality-common.sls file lists separate data and metadata RINGs. The Post-Install Checks Tool also works for single-RING environments; in that case, only the name of that RING needs to be listed in the scality-common.sls file.



scality-storenode.sls File

The scality-storenode.sls file provides RING node server information. The fields required by the Post-Install Checks Tool tests, as they apply to pillar data for node servers, are exemplified below:

scality:
  mgmt_iface: eth0
  mount_prefix: /scality/disk,/scality/ssd
  nb_disks: 1,1
  prod_iface: eth0

Connector SLS Files

The scality-sproxyd.sls and scality-sofs.sls connector SLS files require only network interface fields for the Post-Install Checks Tool tests.

scality:
  mgmt_iface: eth0
  prod_iface: eth0

The pillar files for connectors are typically very similar. However, if a specific network configuration is
used for the same type of server (e.g., if the network interface used is different from one sproxyd server
to another), this information must be specified in each of the connector SLS files.
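
For instance, a sketch of a scality-sproxyd.sls for a hypothetical platform whose sproxyd servers carry production traffic on eth1 rather than eth0 (the interface names are purely illustrative):

# Hypothetical override: sproxyd servers use eth1 for production traffic.
scality:
  mgmt_iface: eth0
  prod_iface: eth1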

4.4.3 Post-Install Checks Tool Syntax and Options


Syntax:

scalpostinstallchecks [-h] [-r [ROSTER_FILE]] [-s [SERVER_NAME]] [-e [EXCLUDE_SERVERS]] [-l]
                      [-o OUTPUT] [-p OUTPUT_PREFIX] [-t INDEX_TEMPLATE] [-m MSG_TEMPLATE]

If both /srv/scality/salt and /srv/scality/pillar are valid directories, the post-install-checks tool script tries to
use the local Salt master to launch checks; otherwise it defaults to using salt-ssh. Use the -r option to
force the runner to use salt-ssh.
Tool Options

-h, --help: Shows tool help and exits
-r, --roster_file ROSTER_FILE: Full file name for the salt-ssh roster file
-s, --server-name SERVER_NAME: Runs the check on a specified server. Multiple servers can be specified in a comma-separated string, for example: -s "node1,node2,node3"
-e, --exclude-servers EXCLUDE_SERVERS: Servers to be excluded from the post-install check
-l, --list-servers: Lists all servers
-o OUTPUT, --output OUTPUT: Specifies the output archive file name (default is post-install-checks-results.tgz)
-p, --output-prefix OUTPUT_PREFIX: Specifies the name of the root directory for the output archive (default is post-install-checks-results)



-t, --index-template INDEX_TEMPLATE: Specifies a file name for an index template
-m, --msg-template MSG_TEMPLATE: Specifies a file name for a message template
-L, --list-checks: Lists all check functions
-k TEST_NAME, --test-name TEST_NAME: Specifies the test name expression
-c TEST_CATEGORY, --test-category TEST_CATEGORY: Specifies the test category
-M, --list-categories: Lists all categories
--network-perf: Uses ping/iperf to test network performance between the server where iperf/iperf3 is installed (the Supervisor server) and all the other servers. The iperf server is started on port 54321 of all servers, after which the iperf client is started on the Supervisor server.

This test takes time and uses significant network resources, and is therefore disabled by default.

4.4.4 Running the Post-Install Checks Tool


1. Depending on whether Salt is configured, from a command line:
l Salt is configured:

scalpostinstallchecks

l Salt is not configured:

scalpostinstallchecks -r {{pathToRosterFile}}

2. Download the result (packaged in post-install-checks-results.tgz) to a local machine and extract the archive to view the results in HTML format.

4.4.5 Examples: Server Targeting


Use the -s option to run checks on a specific list of servers.

scalpostinstallchecks -s "node-1, node-2"

Use the -e option to exclude a specific server from the list of servers to check.

scalpostinstallchecks -e "connector-1"

The -s and the -e options can be used together.
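
For example, the following sketch checks two nodes while excluding a connector (the server names are illustrative):

scalpostinstallchecks -s "node-1,node-2" -e "connector-1"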



Use the -L option to list all test names by modules.

scalpostinstallchecks -L

Running script using salt


Available checks function:
--------------------
check_biziod.py
--------------------
test_biziod_sync(salt_cmd, biziod)
test_biziod_nba(salt_cmd, ssd_exists, biziod)
test_biziod_nvp(salt_cmd, ssd_exists, biziod)
test_biziod_statcheck(salt_cmd, biziod)
test_biziod_sweeper(salt_cmd, biziod)
test_biziod_no_relocation_overlap(biziods_by_ssd)
test_biziod_no_common_sweeper_start_time(biziods_by_ssd)
--------------------
check_consistency.py
--------------------
test_sproxyd_config_consistency(sproxyd_group, all_sproxyd_profiles)
test_sindexd_config_consistency(sindexd_group, all_sindexd_profiles, all_rings)
test_sofs_config_consistency(sofs_group, all_sofs_profiles)
test_node_config_consistency(ring_name, all_ring_nodes)

4.4.6 Examples: Test Category


Use the -c option to launch tests that are tagged with a specific category.

scalpostinstallchecks -s {{serverName}} -c '{{taggedCategoryName}}'

The -c option can be used with expressions.

scalpostinstallchecks -s {{serverName}} -c '{{taggedCategoryName1}} or {{taggedCategoryName2}}'


scalpostinstallchecks -s {{serverName}} -c '{{taggedCategoryName1}} and not {{taggedCategoryName2}}'

Use the -M option to list all test categories and their associated descriptions.

scalpostinstallchecks -M

Running script using salt


Available check categories:
hardware: hardware category.
system: system category.
software: software category.
package: package related checks.
service: service related checks.
node: node related checks.
biziod: biziod related checks.
storage: storage/disk related checks.
risky: risky configuration items
task: task related checks
consistency: consistency configuration checks (run on the supervisor)
supervisor: supervisor checks
sproxyd: sproxyd related checks



sindexd: sindexd related checks
sofs: sofs related checks
srebuildd: srebuildd related checks
restconnector: restconnector related checks

4.4.7 Examples: Network Performance Test


Network performance tests are performed using iperf and ping (which, by default, are disabled to avoid
impacting the production platform).

Tests are run on the Supervisor machine on which iperf/iperf3 is installed.

Use the --network-perf option to enable iperf and ping.

scalpostinstallchecks -s supervisor --network-perf

Use the -k option to run only one of the tests.

scalpostinstallchecks -s supervisor --network-perf -k ping

4.4.8 Examples: Tool Results


If the Post-Install Checks Tool reveals errors, send the resulting .tgz file to Scality Customer Support.

Example Summary Output


Summary output is available from the Post-Install Checks Tool for all servers tested (additional details are available for each server), as exemplified below.

Example Details
Additional details (e.g., test name, error message) regarding failed tests, skipped tests and passed tests
are provided for each server.
The first failed test exemplified, check_node.py::test_garbage_collector, concerns a RING named all_rings0 and its DATA mountpoint, with the failure (an Assertion Error) indicating in red font that the garbage collector is not enabled. The second failed test, check_sproxyd.py::test_bstraplist, concerns a RING named all_rings0 and its -node-0-sproxyd-chord mountpoint, with the failure (an Assertion Error) indicating in red font that the bstraplist should have at least four different node servers and not just the three that are recognized.



5. Individual RING Component Installation
The preferred RING installation method involves the use of the RING Installer or variants using the Salt deployment framework (refer to "Installing the RING" on page 1); however, circumstances may warrant the manual installation of certain components.

5.1. Installing Folder Scale-Out for SOFS Connectors 59


5.2. Installing Seamless Ingest for SMB-CIFS Connectors 62
5.3. Installing Full Geosynchronization Mode for SOFS Connectors 63
5.4. Installing Scality Cloud Monitor 71

Code examples running multiple lines are correct only as displayed


on the page, as due to PDF constraints such examples cannot be
accurately copy-pasted.

5.1. Installing Folder Scale-Out for SOFS Connectors


The installation procedure for SOFS Folder Scale-Out assumes that the RING has been installed using
the RING Installer, and that as a result a Salt master and Salt minions are already configured and the Scal-
ity formula is already available on the master under /srv/scality/salt.

All Salt commands employed for the installation and configuration of


the SOFS Folder Scale-Out feature should be run on the Salt master.

Defined for use with SOFS Folder Scale-Out, the ROLE_ZK_NODE role can be assigned to Salt minions; its purpose is to install and configure the minion as an active member of the shared cache.
To determine whether ROLE_ZK_NODE is currently assigned to the Salt minions, run the following command on the Salt master (SOFS Folder Scale-Out uses the same roles grain name as the RING Installer).

salt '*' grains.get roles



Roles are lists; to add a role use the grains.append function and to remove a role use the grains.remove function (instead of the grains.setval and grains.delval functions).
1. Install and set up RING7 using the RING Installer.
2. Configure an SOFS volume using the Supervisor Volumes tab to enter a volume Device ID,
select names for the Data and Metadata RINGs and create a catalog for the volume.
3. Append the following lines to /srv/scality/pillar/scality-common.sls, replacing the variables with
the volume information entered in the previous step.
The mine_functions and grains.items already exist in the pillar. Entering the parameters
more than once will cause failures in later steps.

data_ring: {{dataRINGName}}
metadata_ring: {{metadataRINGName}}
sfused:
  dev: {{volumeDeviceId}}

4. Install and enable the SOFS Folder Scale-Out feature using Salt.
a. Set the role on the storage nodes:

salt 'store*' grains.append roles ROLE_ZK_NODE

b. Install the packages (calling the main state.sls file only installs packages):

salt -G 'roles:ROLE_ZK_NODE' state.sls scality.zookeeper

c. Advertise the cluster:

salt -G 'roles:ROLE_ZK_NODE' state.sls scality.zookeeper.advertised

A spare node ensures an odd number of active ZooKeeper instances, with the spare instance maintained for eventual failover events.

salt 'store3' grains.setval scal_zookeeper_spare true


salt '*' mine.update

d. Configure the cluster and start the services:

salt -G 'roles:ROLE_ZK_NODE' state.sls scality.zookeeper.configured

e. Register the cluster with sagentd:

salt -G 'roles:ROLE_ZK_NODE' state.sls scality.zookeeper.registered



5. After ZooKeeper is installed for SOFS Folder Scale-Out:
a. Use the predefined role for ZooKeeper cluster to deploy the shared cache:

salt -G 'roles:ROLE_ZK_NODE' state.sls scality.sharedcache

b. Advertise the sophiad endpoint:

salt -G 'roles:ROLE_ZK_NODE' state.sls scality.sharedcache.advertised

c. Configure and enable the services:

salt -G 'roles:ROLE_ZK_NODE' state.sls scality.sharedcache.configured

6. Uncomment the following line in the fastcgi.conf file (in /etc/httpd/conf.d/ on CentOS/RedHat) to
load the fastcgi module:

#LoadModule fastcgi_module modules/mod_fastcgi.so

7. Restart Apache on shared cache nodes to enable the fastcgi configuration for sophiad.
8. Update sfused.conf and start sfused on all SOFS connectors.
a. Edit /etc/sfused.conf and assign the following "general" section parameters, replacing the example IP addresses in the “dir_sophia_hosts_list” with the list of sophia server IP addresses:

"general": {
"dir_sophia_enable": true,
"dir_sophia_hosts_list":
"192.168.0.170:81,192.168.0.171:81,192.168.0.179:81,192.168.0.17:81,
192.168.0.18:81",
"dir_async_enable": false,
},

b. Confirm that “dir_sophia_hosts_list” is configured with the correct shared cache node IPs.
c. Restart sfused

service scality-sfused restart



9. (Optional) If ZooKeeper is also installed for SVSD:
a. Edit the .sls pillar files for the targeted connector machines, changing the values for
data_iface and the svsd variables (namespace, count and first_vip) as required.

scality:
  ....
  data_iface: eth0
  svsd:
    namespace: smb
    count: 1
    first_vip: 10.0.0.100

b. Add the role ROLE_SVSD to targeted machines:

salt -L 'conn1,conn2,conn3' grains.append roles ROLE_SVSD

c. Install the virtual server:

salt -G 'roles:ROLE_SVSD' state.sls scality.svsd

d. Configure and start the service:

salt -G 'roles:ROLE_SVSD' state.sls scality.svsd.configured

e. Provision the service:

salt -G 'roles:ROLE_SVSD' state.sls scality.svsd.provisioned

10. Run the 'registered' Salt state to register the shared cache with sagentd.

salt -G 'roles:ROLE_ZK_NODE' state.sls scality.sharedcache.registered
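
As an optional sanity check, the ZooKeeper ensemble can be queried through Salt (a sketch only; it assumes ZooKeeper listens on its default client port 2181 and that nc is available on the storage nodes):

# Hypothetical check: each ZooKeeper member should answer "imok".
salt -G 'roles:ROLE_ZK_NODE' cmd.run 'echo ruok | nc localhost 2181'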

Currently, changes to the Supervisor GUI volume interface overwrite


sfused.conf, thus necessitating the re-application of step 8.

5.2. Installing Seamless Ingest for SMB-CIFS Connectors


The Seamless Ingest feature for SMB connectors eliminates file ingest disruption and maintains performance during storage node failovers. This feature provides faster detection of a node failure through the implementation of an external heartbeat mechanism that monitors the availability of all the storage nodes.

The Seamless Ingest feature requires ZooKeeper to be installed and running on an odd number of
storage node servers – preferably five, but three at the least. Refer to "Installing Folder Scale-Out
for SOFS Connectors" on page 59 for a ZooKeeper installation method using Salt formulas (this
method can also be used to install ZooKeeper for the Seamless Ingest feature).

Contact Scality Customer Support to install the Seamless Ingest feature.



5.3. Installing Full Geosynchronization Mode for SOFS Connectors
Full Geosynchronization replicates an SOFS volume from a source site to a target site. This requires that the SOFS (FUSE, SMB, NFS or CDMI) volume of a main connector that is accessed by client applications is also exposed by dedicated Dewpoint (CDMI) connectors on both the source and target sites. In addition, separate connectors (typically NFS) should be set up on the source and target sites to host the journals for the replication process.
Journals managed by the source and target daemons are best stored on SOFS volumes separate from the one under replication. This facilitates recovery from the eventual loss of a connector machine, and does not impose storage requirements on the connector servers. It also means, however, that an additional connector (typically NFS) must be configured to host the journals for the data replicated from the source site to the target site.

The installation of Full Geosynchronization consists of five major steps:


1. Enabling File Access Coordination (FACO)
2. Setting Up the Volume for Journal Storage
3. Setting Up the Source and Target CDMI Connectors
4. Setting Up the Full Geosynchronization Daemon on the Source Machine
5. Setting Up the Full Geosynchronization Daemon on the Target Machine
For descriptions of source and target configuration settings, refer to "Daemon Configuration Settings" on page 68. For information on sagentd and SNMP configuration, refer to "Monitoring the Full Geosynchronization Daemons" on page 69.

5.3.1 Enabling SOFS Connector Access Coordination


The SOFS Connector Access Coordination feature must be installed and properly configured on the source site before geosynchronization configuration can commence. This feature enables coordination of concurrent directory and file access from several connectors; in addition, the Full Geosynchronization mode leverages the feature to relieve the main connector of the replication workload and enhance performance.



The SOFS Connector Access Coordination feature is available for production use with files starting with RING 7.1.0.

The SOFS Connector Access Coordination feature is only used by the data replication process in the Full Geosynchronization architecture. SOFS Connector Access Coordination is not supported by Full Geosynchronization mode for client-facing applications using multiple connectors to modify a given volume.

5.3.2 Setting Up the Volume for Journal Storage


The main connector on the source site must have journaling enabled. The journals emitted must be
stored on a dedicated volume that is not under replication (i.e., exposed by a connector other than the
main connector). Although the main purpose of this volume is to host journals and provide availability of
journals in the event that the main connector machine is lost, it also has the added benefit of requiring no
additional local storage space on the main connector machine.

"geosync": true,
"geosync_prog": "/usr/bin/sfullsyncaccept",
"geosync_args": "/usr/bin/sfullsyncaccept --v3 --user scality -w /ring_a/journal $FILE",
"geosync_interval": 10,
"geosync_run_cmd": true,
"geosync_tmp_dir": "/var/tmp/geosync"

Prior to the volume setup it is necessary to create a new volume on


which to store the journal. Typically, this volume will reside on an
NFS server for performance reasons (though it can also reside on an
SMB server).

1. Mount the volume and add it to the /etc/fstab configuration on the main connector.
2. This procedure assumes that the volume is mounted under /ring_a/journal.
3. Confirm that the path where journals are to be stored (e.g., /ring_a/journal) has read and write permissions for owner scality:scality.
4. Install the scality-sfullsyncd-source package, which provides the sfullsyncaccept binary.
5. Add geosync parameter settings to the general section of the sfused.conf file (or dewpoint-sofs.js if Dewpoint is the main connector), similar to the following:

"geosync": true,
"geosync_prog": "/usr/bin/sfullsyncaccept",
"geosync_args": "/usr/bin/sfullsyncaccept --v3 --user scality -w /ring_a/journal $FILE",
"geosync_interval": 10,
"geosync_run_cmd": true,
"geosync_tmp_dir": "/var/tmp/geosync"



• The --v3 flag sets the journal format and is required for the Full Geosynchronization mode to work properly.
• The --user flag in the geosync_args parameter specifies the owner of written journals, and must be the same as the user running the sfullsyncd-source wsgi application (scality by default).
• The working directory set by the -w argument must match the journal_dir in the sfullsyncd-source.conf file.
• The geosync_interval parameter sets the interval, in seconds, between writes of the journal files to disk. It should be tuned for the specific deployment environment.
Refer to SOFS Geosynchronization Variables in the Scality RING7 Operations Guide (v7.3.0) for more information.

6. Confirm that the dev parameter in the sfused.conf file (or dewpoint-sofs.js) has the same setting as the dev number that will be assigned to the source and target CDMI connectors. Note that this number is different from the dev number set for the volume used for journal storage.
7. Restart the main connector server to load the modified configuration.
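The following sketch shows what steps 1 through 3 of this procedure can look like in practice. The NFS server name (nfs-conn) and export path (/journal) are placeholders and must be adapted to the connector that exposes the journal volume.

# Mount the journal volume on the main connector (hostname and export are placeholders)
mkdir -p /ring_a/journal
echo 'nfs-conn:/journal  /ring_a/journal  nfs  defaults,_netdev  0 0' >> /etc/fstab
mount /ring_a/journal
chown scality:scality /ring_a/journal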

5.3.3 Setting Up the Source and Target CDMI Connectors


For Full Geosynchronization, the CDMI connectors must be configured at installation with the same volume (the dev parameter in the general section of dewpoint-sofs.js) as the main connector on the source site. For example, if a volume to be replicated is exposed over NFS, the source and target CDMI connectors must have the same setting for the dev parameter as the one configured in the /etc/sfused.conf file of the NFS connector.
1. Install the scality-dewpoint connector package for the web server.
• For Nginx, install scality-dewpoint-fcgi.
• For Apache, install scality-dewpoint-fcgi-httpd (CentOS).
The Apache packages automatically pull in the main scality-dewpoint-fcgi package as a dependency.

2. Install either the Nginx or Apache web server on both the source and target machines, and con-
figure the installed web server to run Dewpoint as an FCGI backend directly on the root URL.
For Nginx, the server section should contain a configuration block similar to the following:

location / {
    fastcgi_pass 127.0.0.1:1039;
    include /etc/nginx/fastcgi_params;
}



3. Confirm that the CDMI connectors are properly configured and use the same FCGI port as the Nginx or Apache configuration files. For instance, to match the Nginx example, /etc/dewpoint.js should contain an fcgx section similar to the following:

{
    "fcgx": {
        "bind_addr": "",
        "port": 1039,
        "backlog": 1024,
        "nresponders": 32
    }
}

• The nresponders parameter must be set to at least the number of worker_processes configured in Nginx.
• When configuring CDMI connectors, confirm that the sofs section of the /etc/dewpoint.js configuration file contains the enterprise number of the client organization or the default Scality enterprise number (37489). The same number must also be configured in the sfullsyncd-target configuration.

4. Confirm that the volume ID (the dev parameter in the general section of /etc/dewpoint-sofs.js) is set to the same number on both the source and target connector machines. This is necessary because, in the Full Geosynchronization architecture, RING keys (which contain volume IDs) must be the same on both the source and target volumes.
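Once both CDMI connectors are configured, a quick request against each site can confirm that the web server forwards traffic to Dewpoint. This is only a sketch; a plain request without CDMI headers may return an error status, but receiving any HTTP response shows the FCGI wiring is in place.

# Verify the web server / Dewpoint FCGI wiring on the source and target sites
curl -i http://SOURCE_IP/
curl -i http://TARGET_IP/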

5.3.4 Setting Up the Full Geosynchronization Daemon on the Source Machine


1. Mount the journal volume and add it to the /etc/fstab configuration on the CDMI connector (this procedure assumes that it is mounted under /ring_a/journal).
2. Install the scality-sfullsyncd-source package on the CDMI connector. The scality-sfullsyncd-source package pulls in uwsgi as a dependency and creates the scality user.

yum install scality-sfullsyncd-source

3. To configure the source daemon, add an sfullsyncd-source.conf file to the /etc/scality directory (create the directory if necessary) with content similar to the following:

{
    "cdmi_source_url": "http://SOURCE_IP",
    "cdmi_target_url": "http://TARGET_IP",
    "sfullsyncd_target_url": "http://TARGET_IP:8381",
    "log_level": "info",
    "journal_dir": "/var/journal",
    "ship_interval_secs": 5,
    "retention_days": 5
}

Refer to the Daemon Configuration Settings, as necessary.



4. Restart the Dewpoint connector service.

systemctl restart scality-dewpoint-fcgi

5. Start the uwsgi service required by the sfullsyncd-source application.

systemctl start uwsgi

5.3.5 Setting Up the Full Geosynchronization Daemon on the Target Machine


To set up the sfullsyncd-target daemon on the target connector machine:
1. (Optional, recommended for journal resiliency) Create a new volume to store the journal on the
target site. Typically, for performance reasons, this will be on an NFS server (though it could also
be an SMB server).
2. Create the data directory for scality-sfullsyncd-target, e.g., /var/journal (with owner scality:scality). This is where journals and the daemon state are stored.

mkdir /var/journal && chown scality:scality /var/journal

3. If a volume for journal resiliency was created at the start, create two shares, mount them under /var/journal/received and /var/journal/replayed, ensure that the shares are owned by scality:scality with read/write permissions, and add them to /etc/fstab.
4. Install the scality-sfullsyncd-target package.

yum install scality-sfullsyncd-target

5. To configure the target daemon, add an sfullsyncd-target.conf file to the /etc/scality directory (create the directory if it does not already exist) with parameter settings similar to the following:

{
    "port": 8381,
    "log_level": "info",
    "workdir": "/var/journal",
    "cdmi_source_url": "http://SOURCE_IP",
    "cdmi_target_url": "http://TARGET_IP",
    "enterprise_number": 37489,
    "sfullsyncd_source_url": "http://SOURCE_IP:8380"
}

Refer to the Daemon Configuration Settings, as necessary.

6. Restart the Dewpoint connector service (scality-dewpoint-fcgi, scality-dewpoint-httpd or scality-dewpoint-apache2).

systemctl restart scality-dewpoint-fcgi



7. Start the scality-sfullsyncd-target service.

systemctl start scality-sfullsyncd-target

5.3.6 Daemon Configuration Settings


Source Site Daemon Settings
Several parameters can be set via the sfullsyncd-source daemon configuration file (sfullsyncd-source.conf) on the source site.

backpressure_tolerance
    Sets a threshold for triggering a backpressure mechanism based on the number of consecutive throughput measurements in the second percentile (i.e., measurements that fall within the slowest 2% since the previous start, given a large enough sample space). The backpressure mechanism causes the target daemon to delay requests to the source daemon. Set to 0 to disable backpressure (default is 5).
cdmi_source_url
    IP:port address of the CDMI connector on the source machine (the port must match the web server port of the Dewpoint instance).
cdmi_target_url
    IP:port address of the CDMI connector on the target machine (the port must match the web server port of the Dewpoint instance).
journal_dir
    Directory where the journals of transfer operations are stored (by default: /var/journal/source/ctors:source).
log_level
    Uses conventional syslog semantics; valid values are debug, info (default), warning and error.
sfullsyncd_target_url
    IP address and port (8381) of the sfullsyncd-target daemon.
ship_interval_secs
    Interval, in seconds, between shipments of journals to the sfullsyncd-target daemon.
retention_days
    Number of days the journals are kept on the source machine. Journals are never removed if this parameter is set to 0.

Target Site Daemon Settings
Several parameters can be set via the sfullsyncd-target daemon configuration file (sfullsyncd-target.conf) on the target site.

cdmi_source_url
    IP:port address of the CDMI connector on the source machine (the port must match the web server port of the Dewpoint instance).
cdmi_target_url
    IP:port address of the CDMI connector on the target machine (the port must match the web server port of the Dewpoint instance).
chunk_size_bytes
    Size of the file data chunks transferred from the source machine to the target machine (default is 4194304).
enterprise_number
    Must correspond to the enterprise number configured in the sofs section of the /etc/dewpoint.js file on both the source and target machines.
graph_size_threshold
    Maximum number of operations in the in-memory dependency graph (default is 10000).
log_level
    Uses conventional syslog semantics; valid values are debug, info (default), warning and error.
max_idle_time_secs
    Maximum idle time, in seconds, that triggers an inactivity warning when no journals arrive within the configured time (default is 3600).
notification_command
    Command to be executed when a certain event occurs, such as an RPO violation.
port
    Port for the target machine (default is 8381).
progress_check_secs
    Expected threshold time, in seconds, for a file transfer from the source machine. If the configured value is exceeded, the source daemon is queried to determine whether the file is still in flight; if it is not, it is prompted to restart the transfer (default is 1800).
replay_threads
    Number of operations to replay in parallel (default is 10).
rpo_secs
    Expected recovery point objective (RPO) of the system (reports whether the RPO is satisfied).
sfullsyncd_source_url
    IP address and port (8380) of the sfullsyncd-source daemon.

5.3.7 Monitoring the Full Geosynchronization Daemons


The Full Geosynchronization feature supports integration with the Supervisor through sagentd. It is also possible to have a tailored command executed when certain events of interest happen, such as an unsatisfied RPO or no journals having been received during the last hour. Furthermore, statistics exported by the Full Geosynchronization feature are sent to the Elasticsearch cluster by sagentd if this option is activated.

Sagentd Configuration
sagentd provides both SNMP support (via the net-snmpd daemon) and the export of metrics to the Elasticsearch cluster. To use these features, the sagentd daemon must be configured on the server hosting the sfullsyncd-target daemon. Exporting statuses through SNMP also requires configuring the net-snmpd daemon.
To register the sfullsyncd-target daemon with sagentd:
1. Run the sagentd-manageconf add command with the following arguments:
• The name of the sfullsyncd-target daemon
• address, the external IP address of the server hosting the sfullsyncd-target daemon
• port, which must match the value in the sfullsyncd-target configuration
• type, to set the daemon type to sfullsyncd-target

# sagentd-manageconf -c /etc/sagentd.yaml \
add {{nameOfsfullsyncd-targetDaemon}} \
address=CURRENT_IP \
port={{portNumber}} \
type=sfullsyncd-target

The change will be reflected in the sagentd.yaml file.

{{nameOfsfullsyncd-targetDaemon}}:
  address: CURRENT_IP
  port: {{portNumber}}
  type: sfullsyncd-target

2. Restart sagentd after any modifications.

service scality-sagentd restart



3. Verify that the sfullsyncd-target is monitored and running.

cat /var/lib/scality-sagentd/oidlist.txt
1.3.6.1.4.1.37489.2.1.1.1.6.1.1.2.1 string sfullsync01
1.3.6.1.4.1.37489.2.1.1.1.6.1.1.3.1 string CURRENT_IP:{{portNumber}}
1.3.6.1.4.1.37489.2.1.1.1.6.1.1.4.1 string sfullsyncd-target
1.3.6.1.4.1.37489.2.1.1.1.6.1.1.5.1 string running

If the status changes from running, snmpd can be used to send a notification to a remote trap host.

Refer to the Scality RING7 Operations Guide (v7.3.0) for more information on the Scality
SNMP architecture and configuration, as well as for MIB Field Definitions.
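To spot-check the exported status over SNMP, the Scality status OIDs shown above can be walked with snmpwalk. This is a sketch only; the SNMP version and community string (public) are assumptions that depend on the local net-snmpd configuration.

# Walk the sfullsyncd-target status entries exposed through net-snmpd
snmpwalk -v2c -c public CURRENT_IP 1.3.6.1.4.1.37489.2.1.1.1.6.1.1.5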

5.3.8 Custom Alerts


An executable can be set in the sfullsyncd-target configuration to be invoked each time an event is emitted. This executable receives a JSON document on standard input with the following attributes:

Attribute    Description
message      Description of the event
timestamp    Timestamp (including the timezone) of the event
level        Severity of the event ("INFO", "WARNING", "CRITICAL", or "ERROR")

The following sample script illustrates how the event information is passed to a custom HTTP API.

#!/usr/bin/python
# Copyright (c) 2017 Scality
"""
This example script handles events, or alerts, sent by the geosync daemons.

It would be invoked each time an event is emitted. It is passed, through STDIN, a JSON formatted
string which is a serialized `Event` object. This object has three interesting fields: `timestamp`,
`level` and `message`.

Although this particular event handler is written in Python, any other general-purpose language would
do; all we need to do is read from STDIN, de-serialize the JSON, and take some action.
Furthermore, the program is free to accept command line arguments if needed.
"""
import json
import sys

import requests

data = sys.stdin.read()
event = json.loads(data)

# Possible levels are "INFO", "WARNING", "ERROR" and "CRITICAL"
print("I got an event with level: %s" % event['level'])

# The timestamp includes the timezone information (e.g. 2016-04-22T12:37:01+0200)
print("It was emitted on %s" % event['timestamp'])

# Be careful, the message could be a unicode object.
print("The message is %r" % event['message'])

if len(sys.argv) > 1:
    print("I got some optional arguments: %s" % sys.argv[1:])
    url_to_alerting = sys.argv[1]
    # Forward the parsed event to the custom HTTP API as a JSON document
    requests.post(url_to_alerting, json=event)

5.4. Installing Scality Cloud Monitor


Installation of the Scality Cloud Monitor must be performed by Scality
Customer Support.

Installation
To install Scality Cloud Monitor, the scality-halod package must be installed with the RING. The package consists of a single daemon (halod) and its configuration files.

• When the Metricly API key is included in the Platform Description File, the Scality Cloud Monitor is installed automatically by the Scality Installer.
• Salt pillar files for advanced installations accept an entry for the Scality Cloud Monitor API key (scality:halo:api_key: <netuitiveApiKey>) that enables the Scality Cloud Monitor installation. This entry must be set in the pillar file for the advanced installation; however, the scality.halo.configured Salt state must also be run to install Scality Cloud Monitor.
If Scality Cloud Monitor is installed on the Supervisor, only the Metricly API key is necessary. If Scality Cloud Monitor is installed on a different RING server, additional information is required.

Metricly API key         scality:halo:api_key
Supervisor IP address    scality:supervisor_ip
RING username            scality:credentials:internal_user
RING password            scality:credentials:internal_password
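A minimal pillar sketch for an advanced installation is shown below. The key names follow the colon-separated paths listed above; the values are placeholders to be replaced with site-specific ones.

scality:
  halo:
    api_key: <netuitiveApiKey>
  supervisor_ip: <supervisorIpAddress>
  credentials:
    internal_user: <ringUsername>
    internal_password: <ringPassword>

The scality.halo.configured Salt state can then be applied to the target server, for example with salt '<target>' state.sls scality.halo.configured.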

Customers using the Dedicated Case Service (DCS) version of Scality Cloud Monitor require an additional upgrade, to be performed by Scality Customer Support.

Once Scality Cloud Monitor is configured, the monitored metrics are uploaded to the Metricly cloud. To
access Metricly, go to https://app.netuitive.com/#/login and log in using valid credentials
(Email, Password).



5.4.1 Creating a Dashboard
Use Metricly dashboards to create custom data visualizations, by adding widgets and setting display preferences (refer to Creating a New Dashboard in the Metricly Online Documentation for more information).

5.4.2 Configuring a Dashboard


All available Metricly dashboards are accessible from the dashboard management screen, and for each it is possible to manage the layout, change the dashboard settings, or configure the time frame setting/refresh interval (refer to Editing dashboards in the Metricly Online Documentation for more information).

5.4.3 Collecting Data


Quickly visualize metric charts from any element in the Metricly environment via the Metrics page.
A metric is a quantifiable measurement whose values are monitored by Metricly and used to assess the
performance of an element. Metrics are always associated with one or more elements. Examples include
CPU utilization, network bytes per second, and Response Time.
Any metric chart can be added to a Metricly dashboard.

5.4.4 Inventory
The Metricly inventory is composed of the elements that make up an environment, such as RING(s), connectors or servers. Use the Metricly Inventory Explorer to view and search the elements within an Inventory (refer to Inventory in the Metricly Online Documentation for more information).

5.4.5 Configuring Policies


A policy is a set of conditional tests used to set custom rules for when Metricly will generate an event or other notifications. Policies allow users to define various types of abnormal element behavior and to issue notifications when such behavior occurs (refer to Policies in the Metricly Online Documentation for more information).



6. RING Installation Troubleshooting
6.1. Log Locations 74
6.2. sreport Tool 75
6.3. Timeout Installation Failure 75
6.4. Salt Master Unable to Find State 76
6.5. Salt Master Unable to Call Function 77
6.6. Salt Master Unable to Find Pillar 77
6.7. Minion Not Found 78
6.8. Minion Not Responding 78
6.9. Jinja Rendering Errors 79
6.10. Cleaning Installation 79
6.11. Elasticsearch Start Failure During Re-installation 80
6.12. SSD Drives Not Detected 81
6.13. Disks on Nodes Not Detected 81
6.14. Heterogeneous Network Interfaces in Advanced Installation 82
6.15. Package Not Found (CentOS) 83
6.16. Package Not Found (RedHat) 83
6.17. Post-Install Checks Troubleshooting 84
6.18. Cannot Connect to a Server via SSH 86
6.19. Unresponsive Environment 87
6.20. Collecting Error Logs 87

Code examples running multiple lines are correct only as displayed on the page, as due to PDF constraints such examples cannot be accurately copy-pasted.

The RING Installer is based on SaltStack states. Most states are idempotent, meaning that if the same state is reapplied, the result is the same; this property allows failed states to be fixed and rerun while the installation is completed manually. Conversely, states that are not idempotent cannot be reapplied when they fail, in which case the corresponding commands must be launched manually.
The scality.supervisor.installed state, for example, is idempotent: if the packages are not already installed, they are installed and the state succeeds; if the packages are already installed, nothing changes and the state still succeeds. The scality.node.configured state, on the other hand, is not idempotent, so if the node is already configured the disks cannot be formatted and the installation fails.
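For example, a failed idempotent state can simply be reapplied from the Supervisor once the underlying problem has been fixed (a sketch; the minion target depends on the deployment):

# Reapply an idempotent state; rerunning it is safe
salt 'supervisor*' state.sls scality.supervisor.installed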
The RING Installer attempts to apply Salt states as soon as the configuration information is available. By contrast, advanced installations rely on a configuration file that is used to apply all the Salt states.
The Installer relies on the Supervisor to configure several components. For instance, the Elasticsearch cluster is registered through the Supervisor, which means that if any errors occur, the Supervisor logs must be checked first. Once those components are installed, sagentd is configured to allow incoming commands from the Supervisor, after which custom SaltStack modules are used to trigger commands on the Supervisor to configure the components via sagentd or other methods, depending on the component.

6.1. Log Locations


The interactive Installer and a number of custom Salt modules generate logs on each server on which the installation steps are run. The log files are located in the /tmp directory.

scality-installer.log          Log file featuring only the error messages
scality-installer-debug.log    Log file featuring detailed debug information

Whenever an issue occurs, check the /tmp directory first.


To check logs generated by Salt, change the value of the log_level variable to debug for the Salt minion and the Salt master. Setting the variable to debug increases the verbosity, and thus the level of detail. The Salt master log files are located under /var/log/salt/master, and the Salt minion log files are located under /var/log/salt/minion.
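A sketch of how to raise the Salt log level, assuming the default configuration paths and systemd-managed Salt services:

# On the Supervisor (Salt master)
echo 'log_level: debug' > /etc/salt/master.d/debug.conf
systemctl restart salt-master
# On a storage node or connector (Salt minion)
echo 'log_level: debug' > /etc/salt/minion.d/debug.conf
systemctl restart salt-minion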

Error messages at the beginning of the log file can be safely ignored.
These messages are generated whenever Salt tries to load all of its
modules, which may rely on features not available on the server.

Each tool used by the RING Installer can generate errors that are collected by the Installer itself. For example, if the Installer reports an error while installing a package on a CentOS 7 system, a recommendation is issued to check /var/log/yum.log for details.
As the RING Installer relies on the Supervisor to configure several components, such as the nodes or the Elasticsearch cluster, checking the RING component logs is also necessary in the event of a failure.



6.2. sreport Tool
If the installation has progressed to the point where the Scality repository is available on all servers, the
sreport tool can be installed to collect relevant information for troubleshooting.
The sreport tool provides a quick way to assemble information for Scality support purposes. It is run on
demand only, does not make changes to any component installed on the system, and can be safely run
on live systems (as all commands issued by the tool are read-only).
The output archive generated by sreport offers such system information as:

• Hardware (types and number of CPUs, memory, disks, RAID cards, and network cards)
• Installed software (distribution packages with their versions, and configuration parameters)
• Environment (IP addresses and networks, machine names, and network information such as DNS, domain names, etc.)
• RING configuration (number of nodes, IP addresses, connectors, and configuration of the connectors)
• Scality daemon crashes (including stack traces, references to objects that were processed at the time of the crash, etc.)
• Geosynchronization configuration files on the source (sfullsyncd-source) and target (sfullsyncd-target) connectors

Installing sreport
Install the scality-sreport package on all the machines from which diagnostic information will be collected. In addition to being installed on RING node and connector servers, having the sreport package installed on the Supervisor enables the collection of information from several machines at once using the --salt option.

salt '*' pkg.install scality-sreport

Using sreport
1. As root, run sreport on the command line, with or without options, to generate the output archive, sosreport-*.tar.gz, which is saved by default to the /tmp or /var/tmp/ directory (see the sketch after this procedure).
2. The full output archive file name displays on the console (the asterisk (*) in the output archive file name is replaced by the host name, the date, and additional information).
The report can be automatically sent to Scality instead of, or in addition to, saving it locally.

3. Explore the output archive to determine the information collected by sreport.
4. Send the output archive to Scality support or use FTP to upload it to a designated Scality support server.
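A minimal sketch of a local run follows; the archive name printed on the console is authoritative, and the ls pattern below is only for illustration.

# Run as root; all commands issued by the tool are read-only
sreport
ls /tmp/sosreport-*.tar.gz /var/tmp/sosreport-*.tar.gz 2>/dev/null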

6.3. Timeout Installation Failure


Problem Description
The installation fails with a message about a timeout while trying to register srebuildd.



Resolving the Problem
Check that the firewall is disabled. The network connections created as a result of registering RING components must not be blocked.
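A quick check on CentOS/RHEL 7 is sketched below; site policy may require opening the relevant ports instead of disabling the firewall entirely.

# Check whether firewalld is active
systemctl is-active firewalld
# If it is, either open the required ports per site policy or disable it
systemctl stop firewalld && systemctl disable firewalld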

6.4. Salt Master Unable to Find State


Problem Description
When applying Salt state, the commands fail and Salt reports that the state cannot be found.

# salt '*' state.sls scality.req.sreport


centos6-sup:
Data failed to compile:
----------
No matching sls found for 'scality.req.sreport' in env 'base'
centos6-conn1:
Data failed to compile:
----------
No matching sls found for 'scality.req.sreport' in env 'base'
centos6-store1:
Data failed to compile:
----------
No matching sls found for 'scality.req.sreport' in env 'base'
centos6-store2:
Data failed to compile:
----------
No matching sls found for 'scality.req.sreport' in env 'base'
ERROR: Minions returned with non-zero exit code

The Salt Master is unable to apply a Salt state because the environment is not properly configured.

Resolving the Problem


Salt states for RING installation are located in a non-standard directory and require specific configuration
for the Salt master. Verify that the configuration is correct in the /etc/salt/master and /etc/salt/master.d/*
configuration files.

# Salt master configuration file
[...]

file_recv: True

file_roots:
  base:
    - /srv/scality/salt/local/
    - /srv/scality/salt/formula/

pillar_roots:
  base:
    - /srv/scality/pillar

extension_modules: /srv/scality/salt/formula
ext_pillar:
  - scality_keyspace: []
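After correcting the master configuration, restart the Salt master and retry the state in test mode (a sketch; test=True performs a dry run without applying changes):

systemctl restart salt-master
salt '*' state.sls scality.req.sreport test=True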



6.5. Salt Master Unable to Call Function
Problem Description
When calling a Scality function for Salt, the Salt master reports that the function cannot be found.

# salt '*store*' scality.repo.configured


store04:
'scality.repo.configured' is not available.
store03:
'scality.repo.configured' is not available.
[...]

Resolving the Problem


Salt states for RING installation are located in a non-standard directory and require specific configuration
for the Salt master. Verify that the configuration is correct in the /etc/salt/master and /etc/salt/master.d/*
configuration files.

# Salt master configuration file
[...]

file_recv: True

file_roots:
  base:
    - /srv/scality/salt/local/
    - /srv/scality/salt/formula/

pillar_roots:
  base:
    - /srv/scality/pillar

extension_modules: /srv/scality/salt/formula
ext_pillar:
  - scality_keyspace: []
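Because the scality.* functions are custom modules, they must also be synchronized to the minions once the master configuration is fixed (a sketch):

systemctl restart salt-master
salt '*' saltutil.sync_all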

6.6. Salt Master Unable to Find Pillar


Problem Description
The installation fails with an error message, which can occur when reusing a pillar configuration from
another version of the RING Installer.

Pillar render error: Specified SLS 'beats' in environment 'base' is not available on the salt master



Resolving the Problem
Confirm that the top pillar configuration is consistent with the files present on the disk.

# Check consistency
ls /srv/scality/pillar
[...]
cat /srv/scality/pillar/top.sls
[...]
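Once top.sls and the pillar files are consistent, refresh the pillar on the minions and verify that it renders (a sketch):

salt '*' saltutil.refresh_pillar
salt '*' pillar.items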

6.7. Minion Not Found


Problem Description
When running an advanced installation, Salt reports that it cannot find minions.

[INFO ] Loading fresh modules for state activity


[INFO ] Running state [install the supervisor] at time 19:59:02.279200
[INFO ] Executing state salt.state for install the supervisor
No minions matched the target. No command was sent, no jid was assigned.

Resolving the Problem


The minions targeted by Salt commands are listed in the installation pillar. Make sure that:
• The Supervisor identifier is properly configured in scality:supervisor_id
• Storage nodes and connectors are properly configured in scality:selector
To be properly configured means that if the Salt minion ID contains the domain name, the configuration entries in the pillar must contain the domain name as well.
When using wildcards, quotes are also required in the pillar to avoid YAML parsing issues (i.e., scality:selector:storage: '*store*'). Use the command "salt '*' test.ping" to view all the Salt minion IDs and make sure the pillar has the right configuration.

6.8. Minion Not Responding


Problem Description
The installation fails and the logs show that some Salt minions failed to respond.

2017-07-27 13:08:26,575 [installer] - ERROR - Unable to sync repo. Missing store04, store05

A manual running of a Salt command such as salt '*' test.ping comes back as successful.

Resolving the Problem


Although the basic command seems to complete successfully, it is likely that a network issue is causing Salt commands to fail randomly. Check whether there is packet loss on the links of the Salt minion using a command such as ifconfig.
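For example, interface error and drop counters can be inspected on the affected minion; ip -s link is the modern equivalent of ifconfig for this purpose.

# Look for RX/TX errors and dropped packets on the minion's interfaces
ip -s link show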



6.9. Jinja Rendering Errors
Problem Description
The installation fails with an error message from Salt indicating that it is unable to render a state due to Jinja errors, or that a dict object is missing an attribute.
Salt errors related to Jinja rendering are often caused by errors in the configuration files. Salt mine functions returning empty values may also be a sign of an issue in the pillar.

Resolving the Problem


Check the /srv/scality/pillar directory for:
• Bad YAML syntax
• Bad configuration values
• Missing configuration variables (if reusing configuration from a previous RING release)
It is also possible that Jinja rendering issues are related to the Salt mine.
1. Confirm that the Salt mine functions are properly defined in /srv/scality/pillar.
2. Confirm that the mine is up to date when running an advanced installation (see the sketch below).
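The mine can be refreshed from the Supervisor before rerunning the advanced installation (a sketch):

salt '*' mine.update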

6.10. Cleaning Installation


Server-wide or node-specific cleanup may be required for the RING Installer to work correctly following a
failure.

Server-Wide Cleanup
Perform the following procedure on all servers (nodes and connectors).
1. Remove all packages that can cause re-installation failure.

# On the supervisor
salt '*' state.sls scality.node.purged

2. On the supervisor, delete any remaining Scality Apache configuration file or directory (otherwise, re-installation of scality-*-httpd or scality-*-apache2 will fail).
• CentOS/RHEL:

rm -rf /etc/httpd/conf.d/*scality*

• Ubuntu:

rm /etc/apache2/sites-enabled/*scality*



Node-Specific Cleanup
1. Edit the /etc/fstab file and remove any entry preceded by Added with scaldiskadd (including the comment).
2. Reboot servers to release disks.

# On each server
shutdown -r

Server reboot closes any active connections and interrupts any services running on the
server.

3. Wipe the partition of a disk using parted.

parted -s /dev/sdX rm 1

4. Reload the partition table.

blockdev --rereadpt /dev/sdX

If the reloading of the partition table does not work, use the Unix utility dd to reset all
disks.

dd if=/dev/zero of=/dev/sdX bs=1M count=100

5. On a VM, set the correct flag for SSD disks before restarting the RING Installer.

echo 0 > /sys/block/sdX/queue/rotational

6.11. Elasticsearch Start Failure During Re-installation


Problem Description
When trying to re-install the RING using the installer, an error message indicates that the Elasticsearch
service is not available:

2017-07-27 16:36:24,521 [installer] - ERROR - error in apply for store05 : start elasticsearch service : The named service elasticsearch is not available

The error occurs when the Elasticsearch package is installed but the service does not start.

# rpm -qa | grep elastic


python-elasticsearch-2.3.0-1.el7.noarch
elasticsearch-2.3.4-1.noarch
# systemctl restart elasticsearch
Failed to restart elasticsearch.service: Unit not found.



Resolving the Problem
A bug is present in the service description of the Elasticsearch package, the workaround for which is to
force Systemd to reload the list of services once the Elasticsearch package has been uninstalled.

# systemctl daemon-reload

At this point the service can be restarted and the RING re-installation completed successfully.

6.12. SSD Drives Not Detected


Problem Description
The Installer does not detect SSD drives (which can happen on a virtual machine).

Resolving the Problem


When installation on an SSD fails, the Installer can use the Linux virtual file system sysfs to detect drives. In this case, if the rotational flag has been set (automatically by the kernel or manually by an administrator), the Installer identifies the drive as an HDD (spinner); if the flag is unset, the drive is identified as an SSD.
The administrator can change the flag value and restart the installation.

# Print current settings
lsblk -o NAME,ROTA
# Unset the ROTA flag of all SSD disks sdX
echo 0 > /sys/block/sdX/queue/rotational
# Check changes
lsblk -o NAME,ROTA
# Start the install with disk type detection based on the rotational file
scality-installer-*.run -- -- --ssd-detection=sysfs

6.13. Disks on Nodes Not Detected


Problem Description
During installation, the Salt group containing the nodes presents no HDD, no SSD,, even though the
disks are physically present. For instance, running the command lsblk on a storage node produces the
following output:

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 440K 0 rom
vda 253:0 0 160G 0 disk
`-vda1 253:1 0 160G 0 part /
vdb 253:16 0 5G 0 disk
vdc 253:32 0 5G 0 disk
vdd 253:48 0 5G 0 disk
vde 253:64 0 5G 0 disk



Resolving the Problem
If the RING Installer has been run previously with success, it will have set two salt grains the first time
through (specifically to ensure that it does not redo disks should it need to be rerun).
Check the salt grains:

salt '*' grains.item scal_format
salt '*' grains.item scal_mount

If the command set returns True, the error is confirmed. In this case, remove the salt grains and run the
Installer again.

salt '*' grains.delval scal_format
salt '*' grains.delval scal_mount
rm -rf /tmp/scality*

6.14. Heterogeneous Network Interfaces in Advanced Installation


Problem Description
When the management and data interfaces are different on each host, pillars need to be properly pre-
pared.

Resolving the Problem


The easiest solution is to create multiple groups within which the settings are homogeneous.
For example, define a group for some hosts.

salt 'group1*' grains.append scal_group group1

Next, prepare the network interfaces definition in a pillar file /srv/scality/pillar/group1.sls.

scality:
  mgtm_iface: ethX
  data_iface: ethX

Next, include the settings in the top pillar file /srv/scality/pillar/top.sls

base:
  '*':
    - <main_pillar>
    - order: 1
  'scal_group:group1':
    - match: grain
    - group1
    - order: 2



6.15. Package Not Found (CentOS)
Problem Description
The package manager reports that a dependency could not be found.

--> Processing Dependency: python-dnslib >= 0.8.3 for package: scality-svsd-7.3.0.r{{buildNumber.hashedBuildID}}.el7.x86_64
--> Processing Dependency: daemonize for package: scality-svsd-7.3.0.r{{buildNumber.hashedBuildID}}.el7.x86_64
--> Running transaction check
---> Package python-dnslib.noarch 0:0.8.3-1.el7 will be installed
---> Package python-pyroute2.noarch 0:0.4.13-1.el7 will be installed
---> Package scality-common.x86_64 0:7.3.0.r{{buildNumber.hashedBuildID}}.el7 will be installed
---> Package scality-svsd.x86_64 0:7.3.0.r{{buildNumber.hashedBuildID}}.el7 will be installed
--> Processing Dependency: daemonize for package: scality-svsd-7.3.0.r{{buildNumber.hashedBuildID}}.el7.x86_64
---> Package scality-upgrade-tools.x86_64 0:7.3.0.r{{buildNumber.hashedBuildID}}.el7 will be installed
--> Finished Dependency Resolution
Error: Package: scality-svsd-7.3.0.r{{buildNumber.hashedBuildID}}.el7.x86_64 (scality)
Requires: daemonize
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

Resolving the Problem


RING installation on CentOS requires several third-party packages that are not available from the base repositories. These packages are already included when using the Scality Installer; however, they must be downloaded when performing an advanced online installation.
In addition to the base installation media, the Installer requires access to the EPEL repository. Confirm that the file /etc/yum.repos.d/epel.repo exists (it is installed by the epel-release package). It is possible that the EPEL server is under maintenance or synchronizing, in which case the only options are to (1) wait, (2) edit the configuration file to point to another mirror, or (3) switch to the Scality Installer.
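If the EPEL repository has not been configured at all, it can usually be enabled from the CentOS extras repository (a sketch, assuming internet access):

yum install -y epel-release
ls /etc/yum.repos.d/epel.repo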

Ignore the last two lines of the package manager error message, which suggest installing the RING using alternative options.

6.16. Package Not Found (RedHat)


Problem Description
The Installer log reveals that a package cannot be downloaded on a system running RedHat.

Error on minion 'store01' :
While installing python-gevent
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Resolving Dependencies
--> Running transaction check
---> Package python-gevent.x86_64 0:1.1.2-1.el7 will be installed
--> Processing Dependency: python-greenlet for package: python-gevent-1.1.2-1.el7.x86_64
--> Finished Dependency Resolution
Error: Package: python-gevent-1.1.2-1.el7.x86_64 (scality)
Requires: python-greenlet
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

Resolving the Problem


RING installation on RedHat requires several third-party packages that are not available from the base repositories. These packages are already included when using the Scality Installer; however, they must be downloaded when performing an advanced online installation.
In addition to the base installation media and the RedHat extras repository, the Scality Installer requires access to the EPEL repository. Confirm that the file /etc/yum.repos.d/epel.repo exists. It is possible that the EPEL server is under maintenance or synchronizing, in which case the only options are to (1) wait, (2) edit the configuration file to point to another mirror, or (3) switch to the Scality Installer.
It is also possible that the RedHat subscription has expired or is not available. Use the subscription-manager list command to verify the RedHat subscription. The example below shows a system on which the subscription has not been enabled.

[root@ip-172-31-0-100 ~]# subscription-manager list


-------------------------------------------
Installed Product Status
-------------------------------------------
Product Name: Red Hat Enterprise Linux Server
Product ID: 69
Version: 7.3
Arch: x86_64
Status: Unknown
Status Details:
Starts:
Ends:

Ignore the last two lines of the package manager error message, which suggest installing the RING using alternative options.

6.17. Post-Install Checks Troubleshooting


The following issues may be encountered before or after running the Post-Install Checks script, together with their resolutions.

6.17.1 Connection Forbidden to sagentd


Problem Description
Before running checks, the Post- Install Checks script connects to sagentd to gather information. If
sagentd is bound only to a public network interface, the connection will be forbidden from localhost
because the source IP address of the connection is not whitelisted in /etc/sagentd.yaml.



Resolving the Problem
Add the public IP address of the host to the list of whitelisted hosts in /etc/sagentd.yaml and restart
sagentd.

6.17.2 Connection Reset by SSH Server


Problem Description
If salt-ssh is in use and the test duration is longer than the idle timeout of the SSH server, the connection
between the Supervisor and each machine may be reset, presenting an error such as the following:

[root @supervisor scality-post-install-checks]# /usr/bin/scality-post-install-checks -r /root/post-


install-checks/roster --iperf -s supervisor -k 'iperf'
Running script using salt-ssh with roster file /root/post-install-checks/roster
Starting checks on supervisor
Checking if server is in roster or handled by salt
Checking missing pillars
Gathering info from servers (salt mine.send) for consistency check later
Running tests
Traceback (most recent call last):
File "/root/.pex/install/postinstallchecks-7.2.0.0-py2.py3-none-any-
.whl.da68ba54764e6b153d157b24fc716a0f8c885661/postinstallchecks-7.2.0.0-py2.py3-none-any-
.whl/postinstallchecks/main.py" , line 619 , in <module>
cr.parse_checks_results(minion_id, actual_result)
File "/root/.pex/install/postinstallchecks-7.2.0.0-py2.py3-none-any-
.whl.da68ba54764e6b153d157b24fc716a0f8c885661/postinstallchecks-7.2.0.0-py2.py3-none-any-
.whl/postinstallchecks/main.py" , line 102 , in parse_checks_results
if not state_result[ 'result' ] and state_result[ '__run_num__' ] < first_error_index:
TypeError: string indices must be integers, not str

Resolving the Problem


Add the following parameter to /etc/ssh/ssh_config on the Supervisor to keep the SSH connection alive
during the test:
ServerAliveInterval 60
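For example, applied as a global setting (the Host * scope is an assumption; a block restricted to the RING servers works equally well):

# /etc/ssh/ssh_config on the Supervisor
Host *
    ServerAliveInterval 60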

6.17.3 Salt Client Error on RHEL 6


Problem Description
The following Salt client error can be seen on RHEL 6 platforms with incompatible versions of Salt. (This
error is the result of a known Salt issue https://github.com/saltstack/salt/issues/28403.)

[root @munashe bin]# /usr/bin/scality-post-install-checks -s store-1


Running script using salt
Traceback (most recent call last):
raise SaltClientError(general_exception)
salt.exceptions.SaltClientError: 'IOLoop' object has no attribute 'make_current'



Resolving the Problem
Update Salt to version 2016.11.x or 2016.3.x. Also check the PyZMQ and Tornado versions, as known
compatibility issues exist between PyZMQ <= 13.0 and Tornado >= 3.0.
Compatibility should not be an issue if a similar version set of PyZMQ and Tornado is in use (e.g., either
PyZMQ 2.x with Tornado 2.x, or else PyZMQ >= 13 with Tornado >= 3).
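The versions actually in use can be listed with Salt itself (a sketch):

# Shows the Salt release plus the PyZMQ, ZMQ and Tornado versions in use
salt --versions-report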

6.18. Cannot Connect to a Server via SSH


Problem Description
After invoking the scality-installer.run file with the --description-file option and the platform
description file argument, the console prompts for the SSH information required to deploy the
software on all servers.
In the example below, the connection to a server via SSH is failing with an ERROR: Cannot connect
with SSH on {{RINGName}} error message. (The reason for the failure is a misconfiguration of the
SSH connection between the Supervisor and another server.)

[root@ring-ring72-support cloud-user]# ./scality-ring-offline.run --description-file pdesc.csv


The folder "/srv/scality" already exists, overwrite content? [y/N] y
Extracting archive content
Running /srv/scality/bin/launcher --description-file /home/cloud-user/pdesc.csv
Please provide the user to connect to the nodes (leave blank for "root"):
Please provide the SSH password to connect to the nodes (leave blank if you have a private key):
Please provide the private SSH key to use or leave blank to use the SSH agent:
Load the platform description file '/home/cloud-user/pdesc.csv'... OK
Extract platform description data... OK
Generate the salt roster file... OK
Generate the bootstrap configuration... OK
Install scality-setup-httpd on '10.100.1.203'... KO
ERROR: Cannot connect with SSH on ring-ring72-support: Warning: Permanently added '10.100.1.203'
(ECDSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
error: Command '[u'/srv/scality/bin/bootstrap', '--offline']' returned non-zero exit status 5
[2017-09-20 06:28:35-04:00] The bootstrap step failed

Resolution
The SSH connection must be enabled according to the authentication method in use (password, agent, or SSH keys). Once this is done, the command can be run again.
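For key-based authentication, for example, the Supervisor's public key can be deployed to the failing server and the connection verified (a sketch using the IP address from the example above, and assuming the key pair already exists under ~/.ssh):

# Deploy the Supervisor's public key and verify passwordless access
ssh-copy-id -i ~/.ssh/id_rsa.pub root@10.100.1.203
ssh root@10.100.1.203 true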



6.19. Unresponsive Environment
Problem Description
After invoking the scality-installer.run file with the --description-file option and the platform
description file argument, the output seems to freeze after the Install salt-master step.

[root@ring-ring72-support cloud-user]# ./scality-ring-offline.run --description-file pdesc.csv


The folder "/srv/scality" already exists, overwrite content? [y/N] y
Extracting archive content
Running /srv/scality/bin/launcher --description-file /home/cloud-user/pdesc.csv
Please provide the user to connect to the nodes (leave blank for "root"):
Please provide the SSH password to connect to the nodes (leave blank if you have a private key):
Please provide the private SSH key to use or leave blank to use the SSH agent:
Load the platform description file '/home/cloud-user/pdesc.csv'... OK
Extract platform description data... OK
Generate the salt roster file... OK
Generate the bootstrap configuration... OK
Install scality-setup-httpd on '10.100.1.203'... OK
Install salt-master on 'supervisor'...

Resolution
Confirm that the IP addresses provided in the Platform Description File match the machine IPs.
The most likely cause of an unresponsive environment is that an IP address provided in the description file is erroneous: the installer tries to connect over SSH to the wrong host and waits for the SSH timeout.
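A quick reachability check against each IP address listed in the description file can confirm the problem (a sketch; the IP shown is taken from the example output above):

# Should return the remote host name promptly; a long hang points to a wrong IP
ssh -o ConnectTimeout=5 root@10.100.1.203 hostname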

6.20. Collecting Error Logs


Problem Description
When an error occurs during the Setup the Environment and Bootstrap Salt Scality Installer step, the logs
displayed on the console are insufficient to diagnose the issue.

[root@ring-ring72-support cloud-user]# ./scality-ring-offline.run --description-file pdesc.csv


The folder "/srv/scality" already exists, overwrite content? [y/N] y
Extracting archive content
Running /srv/scality/bin/launcher --description-file /home/cloud-user/pdesc.csv
Please provide the user to connect to the nodes (leave blank for "root"):
Please provide the SSH password to connect to the nodes (leave blank if you have a private key):
Please provide the private SSH key to use or leave blank to use the SSH agent:
Load the platform description file '/home/cloud-user/pdesc.csv'... OK
Extract platform description data... OK
Generate the salt roster file... OK
Generate the bootstrap configuration... OK
Install scality-setup-httpd on '10.100.2.73'... OK
Install salt-master on 'supervisor'... OK
Install salt-minion on every machines... KO
You can debug this operation with this command: /srv/scality/bin/tools/deploy-salt -r {{roster-
Variable}} -d -a
ERROR: malformed step result
error: Command '[u'/srv/scality/bin/bootstrap', '--offline']' returned non-zero exit status 5
[2017-10-09 06:54:08+00:00] The bootstrap step failed



Resolution
Run the following command on the Supervisor machine to locate the bootstrap log (salt-call.log) on each server.

# export roster={{rosterVariable}} ; \
export user=$(awk '/^ user:/{print $2; exit}' "$roster") ; \
export priv=$(awk '/^ priv:/{print $2; exit}' "$roster"); \
for host in $(awk '/^ host:/{print $2}' "$roster"); do \
ssh -t -i "$priv" "$user@$host" \
"ls -la /tmp/scality-salt-bootstrap/running_data/salt-call.log"; \
done

Alternatively, all of the logs can be collected by running the next command, which downloads each file to /tmp/{{IpAddress}}_bootstrap.log.

# export roster={{rosterVariable}} ; \
export user=$(awk '/^ user:/{print $2; exit}' "$roster") ; \
export priv=$(awk '/^ priv:/{print $2; exit}' "$roster"); \
for host in $(awk '/^ host:/{print $2}' "$roster"); do \
scp -i "$priv" "$user@$host":/tmp/scality-salt-bootstrap/running_data/salt-call.log /tmp/"$host"_
bootstrap.log; \
done

