Setup Guide DA240-SD630
Before using this information and the product it supports, be sure to read and understand the safety
information and the safety instructions, which are available at:
https://pubs.lenovo.com/safety_documentation/
In addition, be sure that you are familiar with the terms and conditions of the Lenovo warranty for your
solution, which can be found at:
http://datacentersupport.lenovo.com/warrantylookup
Chapter 1. Introduction
The ThinkSystem DA240 Enclosure Type 7D1J with the ThinkSystem SD630 V2 Compute Node Type 7D1K is a 2U solution designed for high-performance computing. The solution includes a single enclosure that can contain up to four SD630 V2 compute nodes, which are designed to deliver a dense, scalable platform for distributed enterprise and hyper-converged solutions.
The solution comes with a limited warranty. For details about the warranty, see:
https://support.lenovo.com/us/en/solutions/ht503310
Solution package contents
The solution package includes the following items:
Note: Some of the items listed are available on selected models only.
• Compute node(s)
• Enclosure
• Rail installation kit (optional). Detailed instructions for installing the rail installation kit are provided in the
package with the rail installation kit.
• Material box, including items such as power cords, rack installation template, and accessory kit.
Features
Performance, ease of use, reliability, and expansion capabilities were key considerations in the design of
your solution. These design features make it possible for you to customize the system hardware to meet your
needs today and provide flexible expansion capabilities for the future.
Enclosure:
• Redundant cooling and optional power capabilities
The enclosure supports two hot-swap AC power supplies (1800-watt or 2400-watt) and a maximum of three 8080 dual-rotor hot-swap fans, which provide redundancy for a typical configuration. Redundant cooling by the fans in the enclosure enables continued operation if one of the fans fails.
Note: You cannot mix 1800-watt and 2400-watt power supplies in the enclosure.
• Integrated network support
The enclosure comes with two RJ45 Ethernet ports located on the System Management Module 2 (SMM2), which support connection to a 10 Mbps, 100 Mbps, or 1000 Mbps network.
Compute node:
• Multi-core processing
Each compute node supports two 3rd Gen Intel® Xeon® Scalable processors of up to 250 W each.
• Flexible data-storage capacity and hot-swap capability
The solution supports up to two 7mm 2.5-inch hot-swap SATA/NVMe solid-state drives or one 15mm 2.5-
inch hot-swap NVMe solid-state drive per compute node.
• Active Memory
The Active Memory feature improves the reliability of memory through memory mirroring. Memory
mirroring mode replicates and stores data on two pairs of DIMMs within two channels simultaneously. If a
failure occurs, the memory controller switches from the primary pair of memory DIMMs to the backup pair
of DIMMs.
• Large system-memory capacity
The solution supports up to a maximum of 1024 GB of system memory (with 16 x 64 GB RDIMMs). Industry-
standard double-data-rate 4 (DDR4), dynamic random-access memory (DRAM) registered dual in-line
memory modules (RDIMMs) with error-correcting code (ECC) are supported. For more information about
the specific types and maximum amount of memory, see “Compute node specifications” on page 6.
• PCI adapter capabilities
The solution supports one 1U PCIe Gen4 x16 adapter per compute node.
• ThinkSystem RAID support
Each compute node supports RAID levels 0 and 1 for SATA storage including 7mm 2.5-inch solid-state
drives and M.2 drives.
• Features on Demand
If a Features on Demand feature is integrated in the solution or in an optional device that is installed in the
solution, you can purchase an activation key to activate the feature. For information about Features on
Demand, see: https://fod.lenovo.com/lkms
• Integrated network support
Each compute node comes with an integrated 1-port 1 Gb Ethernet controller with an RJ45 connector and a 1-port 25 Gb Ethernet controller with an SFP28 connector for Lenovo XClarity Controller.
• Redundant networking connection
The Lenovo XClarity Controller provides failover capability to a redundant Ethernet connection with the
applicable application installed. If a problem occurs with the primary Ethernet connection, all Ethernet
traffic that is associated with the primary connection is automatically switched to the optional redundant
Ethernet connection. If the applicable device drivers are installed, this switching occurs without data loss
and without user intervention.
• Integrated Trusted Platform Module (TPM)
This integrated security chip performs cryptographic functions and stores private and public secure keys.
It provides the hardware support for the Trusted Computing Group (TCG) specification. You can
download the software to support the TCG specification.
Note: For customers in Chinese Mainland, integrated TPM is not supported. However, customers in
Chinese Mainland can install a Lenovo-qualified TPM adapter (sometimes called a daughter card).
• Lenovo XClarity Controller (XCC)
The Lenovo XClarity Controller is the common management controller for Lenovo ThinkSystem solution
hardware. The Lenovo XClarity Controller consolidates multiple management functions in a single chip on
the solution system board.
Some of the features that are unique to the Lenovo XClarity Controller are enhanced performance, higher-
resolution remote video, and expanded security options. For additional information about the Lenovo
XClarity Controller, refer to the XCC documentation compatible with your solution at:
https://pubs.lenovo.com/lxcc-overview/
Important: Lenovo XClarity Controller (XCC) supported version varies by product. All versions of Lenovo
XClarity Controller are referred to as Lenovo XClarity Controller and XCC in this document, unless
specified otherwise. To see the XCC version supported by your server, go to https://pubs.lenovo.com/lxcc-
overview/.
• UEFI-compliant solution firmware
Lenovo ThinkSystem firmware is Unified Extensible Firmware Interface (UEFI) compliant. UEFI replaces
BIOS and defines a standard interface between the operating system, platform firmware, and external
devices.
Lenovo ThinkSystem solutions are capable of booting UEFI-compliant operating systems, BIOS-based
operating systems, and BIOS-based adapters as well as UEFI-compliant adapters.
Note: The solution does not support Disk Operating System (DOS).
• Light path diagnostics
Light path diagnostics provides LEDs to help you diagnose problems. For more information about the light
path diagnostics, see Light path diagnostics panel and Light path diagnostics LEDs.
• Mobile access to Lenovo Service Information website
Each compute node provides a QR code on the system service label which is located on the top of the
compute node. You can scan the QR code using a QR code reader and scanner with a mobile device and
get quick access to the Lenovo Service Information website. The Lenovo Service Information website
provides additional information for parts installation and replacement videos, and error codes for solution
support.
Specifications
The following information is a summary of the features and specifications of the solution. Depending on the
model, some features might not be available, or some specifications might not apply.
Enclosure specifications
Features and specifications of the enclosure.
Table 1. Enclosure specifications

Dimension: 2U enclosure
• Height: 87.0 mm (3.4 inches)
• Depth: 936.9 mm (36.8 inches)
• Width: 488.0 mm (19.2 inches)

Weight (depending on the configuration):
• Minimum configuration (with one minimal configuration node): 24.3 kg (53.5 lbs)
• Maximum configuration (with four maximal configuration nodes): 44.2 kg (97.4 lbs)

System Management Module 2 (SMM2): Hot-swappable (see “System Management Module 2 (SMM2)” on page 25 and the System Management Module 2 User Guide for more information).

Acoustical noise emissions: The solution has the following acoustic noise emissions declaration:
• Sound power level (LWAd):
– Idling: Typical configuration: 6.1 Bel; Max configuration: 6.1 Bel
– Operating: Typical configuration: 7.6 Bel; Max configuration: 8.9 Bel
• Sound pressure level (LpAm):
– Idling: Typical configuration: 45 dBA; Max configuration: 61 dBA
– Operating: Typical configuration: 61 dBA; Max configuration: 74 dBA
Notes:
• These sound levels were measured in controlled acoustical environments according to procedures specified by ISO 7779 and are reported in accordance with ISO 9296.
• The declared acoustic sound levels are based on the specified configurations and may change slightly depending on configuration and conditions.
– Typical configuration: two 205 W processors, 16x 16 GB DIMMs, 1x S4510 240 GB SSD, 1x Mellanox HDR200 ConnectX-6 adapter, 25Gb SFP+ LOM, TPM 2.0, two 2400 W power supply units
– Max configuration: two 250 W processors, 16x 64 GB DIMMs, 1x S4510 240 GB SSD, 1x Mellanox HDR200 ConnectX-6 adapter, 25Gb SFP+ LOM, TPM 2.0, two 2400 W power supply units
• Government regulations (such as those prescribed by OSHA or European Community Directives) may govern noise level exposure in the workplace and may apply to you and your solution installation. The actual sound pressure levels in your installation depend upon a variety of factors, including the number of racks in the installation; the size, materials, and configuration of the room; the noise levels from other equipment; the room ambient temperature; and the employees' location in relation to the equipment. Further, compliance with such government regulations depends on a variety of additional factors, including the duration of employees' exposure and whether employees wear hearing protection. Lenovo recommends that you consult with qualified experts in this field to determine whether you are in compliance with the applicable regulations.
Compute node specifications
Features and specifications of the compute node.
Table 2. Compute node specifications

Processor (depending on the model): Supports two 3rd Gen Intel® Xeon® Scalable processors of up to 250 W per compute node.
Notes:
1. Use the Setup utility to determine the type and speed of the processors in the node.
2. For a list of supported processors, see https://serverproven.lenovo.com/.

Memory: See “Memory module installation rules and order” on page 43 for detailed information about memory configuration and setup.
• Minimum: 16 GB (single DDR4 DRAM RDIMM with one processor)
• Maximum: 2048 GB with 16 x 128 GB RDIMMs
• Memory module types: Industry-standard double-data-rate 4 (DDR4), dynamic random-access memory (DRAM) registered dual in-line memory modules (RDIMMs) with error-correcting code (ECC)
• Capacity (depending on the model): 16 GB, 32 GB, 64 GB, and 128 GB RDIMM
• Slots: Supports up to 16 DIMM slots

Drive bays: Supports up to two 7mm 2.5-inch hot-swap SATA/NVMe solid-state drive bays or one 15mm 2.5-inch hot-swap NVMe solid-state drive bay per compute node.

M.2 drive/backplane: The ThinkSystem M.2 backplane supports up to two identical M.2 drives.

RAID:
• Supports RAID levels 0 and 1 for SATA storage, including 7mm 2.5-inch solid-state drives and M.2 drives.
• Supports RAID levels 0 and 1 for NVMe storage (Intel VROC NVMe RAID).
Notes:
– VROC Intel-SSD-Only supports Intel NVMe drives.
– VROC Premium requires an activation key for non-Intel NVMe drives. For more information about acquiring and installing the activation key, see https://fod.lenovo.com/lkms.

Video controller (integrated into Lenovo XClarity Controller):
• ASPEED
• SVGA compatible video controller
• Avocent Digital Video Compression
• Video memory is not expandable
Note: Maximum video resolution is 1920 x 1200 at 60 Hz.

Input/Output (I/O) features:
• Node operator panel
• USB 3.0 Console Breakout Cable connector
• External LCD diagnostics handset connector
• One 1 Gb RJ45 Ethernet port with share-NIC feature for Lenovo XClarity Controller
• One 25 Gb SFP28 Ethernet port with share-NIC feature for Lenovo XClarity Controller
Note: Lenovo XClarity Controller can be accessed by either the RJ45 Ethernet port or the SFP28 Ethernet port.
Ambient temperature management: Adjust the ambient temperature when specific components are installed:
• Keep the ambient temperature to 30°C or lower when one or more of the following processors are installed:
– Processors with TDP (thermal design power) of 165 watts or lower
• Keep the ambient temperature to 25°C or lower when one or more of the following processors are installed:
– Processors with TDP (thermal design power) higher than 165 watts
– Intel(R) Xeon(R) Gold 6334 (165 watts, 8 cores)
(See the worked example after the Environment specification below.)

Environment: The ThinkSystem SD630 V2 Compute Node complies with ASHRAE Class A2 specifications. The ThinkSystem SD630 V2 Compute Node is supported in the following environment:
• Air temperature:
– Operating: 10°C - 35°C (50°F - 95°F); the maximum ambient temperature decreases by 1°C for every 300 m (984 ft) increase in altitude above 900 m (2,953 ft)
– Solution powered off: 5°C to 45°C (41°F to 113°F)
• Maximum altitude: 3,050 m (10,000 ft)
• Relative humidity (non-condensing):
– Operating: 20% - 80%, maximum dew point: 21°C (70°F)
– Solution powered off: 8% - 80%, maximum dew point: 27°C (81°F)
• Particulate contamination: See “Particulate contamination” below.
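The ambient-temperature rules above (25°C or 30°C depending on processor TDP, plus the 1°C-per-300 m derating above 900 m of altitude) can be combined into a simple planning check. The following Python sketch is illustrative only, not an official Lenovo calculator: applying the altitude derating to the TDP-based limit is an assumption made here, and the Intel Xeon Gold 6334 exception listed above is not modeled.

```python
def max_ambient_c(processor_tdp_w: float, altitude_m: float) -> float:
    """Estimate the maximum supported ambient temperature (deg C) for a node.

    Rules taken from the specifications above:
      - 30 deg C for processors with TDP of 165 W or lower,
        25 deg C for processors with TDP higher than 165 W.
      - The operating limit decreases by 1 deg C for every 300 m
        of altitude above 900 m.
    Combining the two rules this way is an assumption for this sketch.
    """
    base_limit = 30.0 if processor_tdp_w <= 165 else 25.0
    derating = max(0.0, (altitude_m - 900.0) / 300.0)
    return base_limit - derating

# Example: 250 W processors in a data center at 1,800 m altitude -> 25 - 3 = 22 deg C.
print(max_ambient_c(250, 1800))
```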
Particulate contamination
Attention: Airborne particulates (including metal flakes or particles) and reactive gases acting alone or in
combination with other environmental factors such as humidity or temperature might pose a risk to the
device that is described in this document.
Risks that are posed by the presence of excessive particulate levels or concentrations of harmful gases
include damage that might cause the device to malfunction or cease functioning altogether. This
specification sets forth limits for particulates and gases that are intended to avoid such damage. The limits
must not be viewed or used as definitive limits, because numerous other factors, such as temperature or
moisture content of the air, can influence the impact of particulates or environmental corrosives and gaseous
contaminant transfer. In the absence of specific limits that are set forth in this document, you must
implement practices that maintain particulate and gas levels that are consistent with the protection of human
health and safety. If Lenovo determines that the levels of particulates or gases in your environment have
caused damage to the device, Lenovo may condition provision of repair or replacement of devices or parts
on implementation of appropriate remedial measures to mitigate such environmental contamination.
Implementation of such remedial measures is a customer responsibility.
Reactive gases: Severity level G1 as per ANSI/ISA 71.04-1985¹:
• The copper reactivity level shall be less than 200 Angstroms per month (Å/month ≈ 0.0035 μg/cm²-hour weight gain).²
• The silver reactivity level shall be less than 200 Angstroms per month (Å/month ≈ 0.0035 μg/cm²-hour weight gain).³
• The reactive monitoring of gaseous corrosivity must be conducted approximately 5 cm (2 in.) in front of the rack on the air inlet side at one-quarter and three-quarter frame height off the floor or where the air velocity is much higher.

Airborne particulates: Data centers must meet the cleanliness level of ISO 14644-1 class 8.
For data centers without airside economizer, the ISO 14644-1 class 8 cleanliness might be met by choosing one of the following filtration methods:
• The room air might be continuously filtered with MERV 8 filters.
• Air entering a data center might be filtered with MERV 11 or preferably MERV 13 filters.
For data centers with airside economizers, the choice of filters to achieve ISO class 8 cleanliness depends on the specific conditions present at that data center.
• The deliquescent relative humidity of the particulate contamination should be more than 60% RH.⁴
• Data centers must be free of zinc whiskers.⁵
¹ ANSI/ISA-71.04-1985. Environmental conditions for process measurement and control systems: Airborne contaminants. Instrument Society of America, Research Triangle Park, North Carolina, U.S.A.
² The derivation of the equivalence between the rate of copper corrosion growth in the thickness of the corrosion product in Å/month and the rate of weight gain assumes that Cu2S and Cu2O grow in equal proportions.
³ The derivation of the equivalence between the rate of silver corrosion growth in the thickness of the corrosion product in Å/month and the rate of weight gain assumes that Ag2S is the only corrosion product.
⁴ The deliquescent relative humidity of particulate contamination is the relative humidity at which the dust absorbs enough water to become wet and promote ionic conduction.
⁵ Surface debris is randomly collected from 10 areas of the data center on a 1.5 cm diameter disk of sticky electrically conductive tape on a metal stub. If examination of the sticky tape in a scanning electron microscope reveals no zinc whiskers, the data center is considered free of zinc whiskers.
Management options
The XClarity portfolio and other system management options described in this section are available to help
you manage the server/solutions more conveniently and efficiently.
Overview

Lenovo XClarity Controller
Consolidates the service processor functionality, Super I/O, video controller, and remote presence capabilities into a single chip on the server/solution system board.
Interface:
• CLI application
• Web GUI interface
• Mobile application
• REST API (an example query appears after this overview)
https://pubs.lenovo.com/lxcc-overview/

Lenovo XClarity Administrator
Interface:
• Web GUI interface
• Mobile application
• REST API
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/aug_product_page.html

Lenovo XClarity Essentials toolset
Portable and light toolset for server/solution configuration, data collection, and firmware updates. Suitable both for single-server/solution and multi-server/solution management contexts.
https://pubs.lenovo.com/lxce-overview/

Lenovo XClarity Provisioning Manager
Interface:
• Web interface (BMC remote access)
• GUI application
Important: Lenovo XClarity Provisioning Manager (LXPM) supported version varies by product. All versions of Lenovo XClarity Provisioning Manager are referred to as Lenovo XClarity Provisioning Manager and LXPM in this document, unless specified otherwise. To see the LXPM version supported by your server, go to https://pubs.lenovo.com/lxpm-overview/.

Lenovo XClarity Integrator
Interface:
• GUI application
https://pubs.lenovo.com/lxci-overview/

Lenovo XClarity Energy Manager
Application that can manage and monitor server/solution power and temperature.
Interface:
• Web GUI interface
Usage and downloads
https://datacentersupport.lenovo.com/solutions/lnvo-lxem

Lenovo Capacity Planner
https://datacentersupport.lenovo.com/solutions/lnvo-lcp
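Because the Lenovo XClarity Controller lists a REST API among its interfaces, basic system information can be pulled over HTTPS. The following Python sketch uses the DMTF Redfish service that XCC implements; the address and credentials are placeholders, the `requests` library is a third-party dependency, and exact property availability depends on your XCC firmware level, so treat it as an illustrative starting point rather than a definitive reference.

```python
import requests  # third-party library: pip install requests

XCC_HOST = "https://192.0.2.10"      # placeholder management address
AUTH = ("USERID", "PASSW0RD")        # replace with your own XCC account

def get(path: str) -> dict:
    """GET a Redfish resource from the XCC and return the parsed JSON body."""
    resp = requests.get(f"{XCC_HOST}{path}", auth=AUTH, verify=False, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Enumerate the Systems collection rather than assuming a fixed member ID.
systems = get("/redfish/v1/Systems")
system = get(systems["Members"][0]["@odata.id"])

# Common Redfish ComputerSystem properties; some may be absent on older firmware.
print("Model:        ", system.get("Model"))
print("Serial number:", system.get("SerialNumber"))
print("Power state:  ", system.get("PowerState"))
print("Health:       ", system.get("Status", {}).get("Health"))
```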
Functions

The functions covered by the options above include multi-system management, OS deployment, system configuration, firmware updates¹, events/alert monitoring, inventory/logs, power management, and power planning. The functions supported by each option are as follows:

Lenovo XClarity Essentials toolset:
• OneCLI: multi-system management, system configuration, firmware updates², events/alert monitoring, inventory/logs⁴
• Bootable Media Creator: multi-system management, firmware updates², inventory/logs⁴
• UpdateXpress: multi-system management, firmware updates²
Lenovo XClarity Provisioning Manager: OS deployment, system configuration, firmware updates³, inventory/logs⁵
Notes:
1. Most options can be updated through the Lenovo tools. Some options, such as GPU firmware or Omni-Path firmware, require the use of supplier tools.
2. The server/solution UEFI settings for option ROM must be set to Auto or UEFI to update firmware using
Lenovo XClarity Administrator, Lenovo XClarity Essentials, or Lenovo XClarity Controller.
3. Firmware updates are limited to Lenovo XClarity Provisioning Manager, Lenovo XClarity Controller, and
UEFI updates only. Firmware updates for optional devices, such as adapters, are not supported.
4. The server/solution UEFI settings for option ROM must be set to Auto or UEFI for detailed adapter card
information, such as model name and firmware levels, to be displayed in Lenovo XClarity Administrator,
Lenovo XClarity Controller, or Lenovo XClarity Essentials.
5. Limited inventory.
6. The Lenovo XClarity Integrator deployment check for System Center Configuration Manager (SCCM)
supports Windows operating system deployment.
7. Power management function is supported only by Lenovo XClarity Integrator for VMware vCenter.
8. It is highly recommended that you check the power summary data for your server/solution using Lenovo
Capacity Planner before purchasing any new parts.
Chapter 2. Solution components
Use the information in this section to learn about each of the components associated with your solution.
Note: The illustrations in this document might differ slightly from your model.
The enclosure machine type, model number and serial number are on the ID label that can be found on the
front of the enclosure, as shown in the following illustration.
The compute node model number and serial number are on the ID label that can be found on the front of the
compute node (on the underside of the network access tag), as shown in the following illustration.
QR code
The system service label, which is on the top of the compute node, provides a QR code for mobile access to
service information. You can scan the QR code using a QR code reader and scanner with a mobile device
and get quick access to the Lenovo Service Information website. The Lenovo Service Information website
provides additional information for parts installation and replacement videos, and error codes for solution
support.
Front view
Enclosure
Notes:
1. The illustrations in this document might differ slightly from your hardware.
2. For proper cooling, each compute node bay must be installed with either a compute node or a node filler
before the solution is powered on.
The following illustration shows the front view of the enclosure and respective node bays in the enclosure.
Figure 5. Enclosure front view with compute nodes and node bay numbering
Compute node
The following illustration shows the controls, LEDs, and connectors on the front of the compute node.
Configuration with two 7mm 2.5-inch SATA/NVMe SSDs and PCIe riser
See the following illustration for components, connectors, drive bay and PCIe slot numbering in the
configuration with two 7mm 2.5-inch SATA/NVMe SSDs and a PCIe riser.
Figure 6. Configuration with two 7mm 2.5-inch SATA/NVMe SSDs and PCIe riser
Table 5. Components in configuration with two 7mm 2.5-inch SATA/NVMe SSDs and PCIe riser
1 7mm 2.5-inch SATA/NVMe SSD bay 1
4 External diagnostics connector
5 USB 3.0 Console Breakout Cable connector
8 25 Gb SFP28 Ethernet port with share-NIC feature for Lenovo XClarity Controller *
Figure 7. Configuration with 15mm 2.5-inch NVMe SSD and PCIe riser
Table 6. Components in configuration with 15mm 2.5-inch NVMe SSD and PCIe riser
3 External diagnostics connector
7 25 Gb SFP28 Ethernet port with share-NIC feature for Lenovo XClarity Controller *
Note: * Lenovo XClarity Controller can be accessed by either RJ45 Ethernet port or SFP28 Ethernet port.
Table 7. Front LEDs and buttons
1 25 Gb SFP28 Ethernet port link and activity LED (green)
2 1 Gb RJ45 Ethernet port link LED (green)
3 1 Gb RJ45 Ethernet port activity LED (green)
5 M.2 drive activity LED (green)
6 Identification button / LED (blue)
7 System error LED (yellow)
1 25 Gb SFP28 Ethernet port link and activity LED (green): Use this green LED to distinguish the network
status.
Off: The network is disconnected.
Blinking: The network is connected and active.
On: The network is established.
2 1 Gb RJ45 Ethernet port link LED (green): Use this green LED to distinguish the network status.
Off: The network link is disconnected.
On: The network link is established.
3 1 Gb RJ45 Ethernet port activity LED (green): Use this green LED to distinguish the network status.
Off: The node is disconnected from a LAN.
Blinking: The network is connected and active.
4 Node power button / LED (green): When this LED is lit (green), it indicates that the compute node has
power. This green LED indicates the power status of the compute node:
Off: Power is not present; the power supply or the LED itself has failed.
Flashing rapidly (4 times per second): The compute node is turned off and is not ready to be turned on.
The power button is disabled. This will last approximately 5 to 10 seconds.
Flashing slowly (once per second): The compute node is turned off, connected to power through the
enclosure and ready to be turned on. You can press the power button to turn on the node.
On: The compute node is turned on and connected to power through the enclosure.
6 Identification button / LED (blue): Use this blue LED to visually locate the compute node among others.
This LED is also used as a presence detection button. You can use Lenovo XClarity Administrator to light this
LED remotely.
Note: The behavior of this LED is determined by the SMM2 ID LED when SMM2 ID LED is turned on or
blinking. For the exact location of SMM2 ID LED, see “System Management Module 2 (SMM2)” on page 25.
On: All of the node ID LEDs will be on except the blinking ones, which will remain blinking.
Blink: All of the node ID LEDs will be blinking regardless of previous status.
7 System error LED (yellow): When this LED (yellow) is lit, it indicates that at least one system error has
occurred. Check the event log for additional information.
8 NMI pinhole: Insert the tip of a straightened paper clip into this pinhole to force a non-maskable interrupt (NMI) on the node; a memory dump will then take place. Only use this function when advised to do so by a Lenovo support representative.
Note: When unplugging the external handset, see the following instructions:
1. Press and hold the latch on top of the connector.
2. Pull to disconnect the cable from the compute node.
Option flow diagram
The LCD panel displays various system information. Navigate through the options with the scroll keys.
Depending on the model, the options and entries on the LCD display might be different.
1 System name
2 System status
4 Temperature
5 Power consumption
6 Checkpoint code
Active Alerts
Home screen: Active error quantity
Note: The “Active Alerts” menu displays only the quantity of active errors. If no errors occur, the “Active Alerts” menu will not be available during navigation.
For example:
1 Active Alerts
Details screen:
• Error message ID (Type: Error/Warning/Information)
• Occurrence time
• Possible sources of the error
For example:
Active Alerts: 1
Press ▼ to view alert details
FQXSPPU009N(Error)
04/07/2020 02:37:39 PM
CPU 1 Status:
Configuration Error
System Firmware
The LCD shows the firmware level (status), build ID, version number, and release date for each of the following:
• UEFI — for example:
UEFI (Inactive)
Build: D0E101P
Version: 1.00
Date: 2019-12-26
• XCC Primary — for example:
XCC Primary (Active)
Build: DVI399T
Version: 4.07
Date: 2020-04-07
• XCC Backup — for example:
XCC Backup (Active)
Build: D8BT05I
Version: 1.00
Date: 2019-12-30

XCC Network Information
The LCD shows the following network information:
• XCC hostname
• MAC address
• IPv4 Network Mask
• IPv4 DNS
• IPv6 Link Local IP
• Stateless IPv6 IP
• Static IPv6 IP
• Current IPv6 Gateway
• IPv6 DNS
Note: Only the MAC address that is currently in use is displayed (extension or shared).
For example:
XCC Hostname: XCC-xxxx-SN
MAC Address: xx:xx:xx:xx:xx:xx
IPv4 IP: xx.xx.xx.xx
IPv4 Network Mask: x.x.x.x
IPv4 Default Gateway: x.x.x.x

The LCD also shows the following environmental information:
• Ambient temperature
• Exhaust temperature
• PSU status
• Spinning speed of fans by RPM
For example:
Ambient Temp: 24 C
Exhaust Temp: 30 C
PSU1: Vin= 213 w
Inlet= 26 C
FAN1 Front: 21000 RPM
FAN2 Front: 21000 RPM
FAN3 Front: 21000 RPM
FAN4 Front: 21000 RPM
Active Sessions
Actions
Rear view
The following illustration shows the connectors and LEDs on the rear of the solution.
Enclosure
The following illustration shows the components on the rear of the enclosure.
The following illustration shows the rear view of the entire system.
Figure 9. Enclosure rear view
Notes:
• The illustrations in this document might differ slightly from your hardware.
• Make sure the power cord is properly connected to every power supply unit installed.
2 Ethernet port 1 and port 2 activity (RJ-45) LED (green)
8 Identification LED (blue)
2 Ethernet port 1 and port 2 activity (RJ-45) LED (green): When this LED is flashing (green), it indicates that there is activity through the remote management and console (Ethernet) port 1 and port 2 over the management network.
3 USB connector: Insert the USB storage device to this connector and then press the USB port service
mode button to collect FFDC logs.
4 Reset pinhole: Press the button for one to four seconds to reboot the SMM2. Press it for more than four seconds to reboot the SMM2 and load the default settings.
5 Power LED: When this LED is lit (green), it indicates that the SMM2 has power.
6 Status LED: This LED (green) indicates the operating status of the SMM2.
• Continuously on: The SMM2 has encountered one or more problems.
• Off: When the enclosure power is on, it indicates the SMM2 has encountered one or more problems.
• Flashing: The SMM2 is working.
– During the pre-boot process, the LED flashes rapidly.
– Ten times per second: The SMM2 hardware is working and the firmware is ready to initialize.
– Two times per second: The firmware is initializing.
– When the pre-boot process is completed and the SMM2 is working correctly, the LED flashes at a
slower speed (about once every two seconds).
7 Check log LED: When this LED is lit (yellow), it indicates that a system error has occurred. Check the
SMM2 event log for additional information.
8 Identification LED: When this LED is lit (blue), it indicates the enclosure location in a rack.
9 USB port service mode button: Press this button to collect FFDC logs after inserting the USB storage
device to the USB connector.
Power supply
The ThinkSystem DA240 Enclosure Type 7D1J supports two autoranging power supplies.
The power supplies get electrical power from a 200 - 240 Vac power source and convert the AC input into
12.2 V outputs. The power supplies are capable of autoranging within the input voltage range. There is one
common power domain for the enclosure that distributes power to each of the compute nodes through the
system power distribution boards.
Each power supply has internal fans and a controller. The power supply controller can be powered by any
installed power supply that is providing power through the power distribution boards.
Attention: The power supplies contain internal cooling fans. Do not obstruct the fan exhaust vents.
It is required to install two power supplies regardless of the type of power supply, the enclosure power load,
or selected enclosure power policy.
The ThinkSystem DA240 Enclosure Type 7D1J does not support mixing of low input voltage power supplies
with high input voltage power supplies. For example, if you install a power supply with an input voltage of
100 - 127 Vac in an enclosure that is powered by 200 - 240 Vac power supplies, the 100 - 127 Vac power
supply will not power on. A configuration error will be flagged to indicate that this power supply configuration
is not supported.
Note: The power supplies for your solution might look slightly different from that shown in the illustration.
1 Input (AC) power LED (green)
2 DC power LED (green)
3 Power supply error LED (yellow)
1 AC power LED (green): When this LED is lit (green), it indicates that AC power is being supplied to the
power supply.
2 DC power LED (green): When this LED is lit (green), it indicates that DC power is being supplied from the
power supply to the power distribution boards in the enclosure.
3 Power supply error LED (yellow): When this LED is lit (yellow), it indicates that there is a fault with the
power supply.
Note: Before unplugging the AC power cord from one power supply or removing one power supply from the
enclosure, verify that the capacity of the other power supply is sufficient to meet the minimum power
requirements for all components in the enclosure.
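The note above asks you to confirm that the remaining power supply can carry the whole enclosure before removing the other one. The following Python sketch shows one way to frame that check; the per-node power figures are hypothetical placeholders (use the values reported by Lenovo Capacity Planner or the SMM2 for your configuration), and only the rules stated in this section (two identical 1800 W or 2400 W supplies, no mixing) are taken from the document.

```python
def remaining_psu_is_sufficient(psu_rating_w: float, component_loads_w: list[float]) -> bool:
    """Return True if a single power supply of the given rating can carry
    the summed load of all components in the enclosure.

    The DA240 enclosure requires two identical power supplies (1800 W or
    2400 W); before removing one, the other must cover the full load.
    """
    total_load_w = sum(component_loads_w)
    return total_load_w <= psu_rating_w

# Hypothetical example: four compute nodes estimated at 450 W each,
# plus roughly 100 W for fans and the SMM2 (placeholder figures only).
loads = [450.0] * 4 + [100.0]
print(remaining_psu_is_sufficient(2400.0, loads))   # True  (1900 W <= 2400 W)
print(remaining_psu_is_sufficient(1800.0, loads))   # False (1900 W > 1800 W)
```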
The following illustration shows the location of the DIMM connectors on the system board.
Figure 13. The location of the DIMM connectors on the system board
System-board switches
The following illustration shows the location and description of the switches.
Important:
1. If there is a clear protective sticker on the switch blocks, you must remove and discard it to access the
switches.
2. Any system-board switch or jumper block that is not shown in the illustrations in this document is reserved.
Figure 14. Location of the switches on the system board
2 SW3, switch 3: CMOS clear jumper — clears the real-time clock (RTC) registry. Default position: Normal.
Important:
1. Before you change any switch settings or move any jumpers, turn off the solution. Then, disconnect all
power cords and external cables. Review the information in https://pubs.lenovo.com/safety_
documentation/, “Installation guidelines” on page 40, and “Power off the compute node” on page 84.
2. Any system-board switches or jumper blocks that are not shown in the illustrations in this document are
reserved.
Notes:
• Disengage all latches, release tabs, or locks on cable connectors when you disconnect cables from the
system board. Failing to release them before removing the cables will damage the cable sockets on the
system board, which are fragile. Any damage to the cable sockets might require replacing the system
board.
Figure 15. Cable routing for two 7mm 2.5-inch SATA/NVMe drive backplanes
Cable 1: Y cable for the two 7mm 2.5-inch SATA/NVMe drive backplanes — from the SATA/NVMe connector on the system board to the Slimline x8 connectors on the two backplanes.
Figure 16. Cable routing for 15mm 2.5-inch NVMe drive backplane
Cable 1: Cable for the 15mm 2.5-inch NVMe drive backplane — from the NVMe connector on the system board to the Slimline x8 connector on the backplane.
Figure 17. Cable routing for the power distribution boards and fans
Cable 1: Three fan cables — from the fan connectors on the system fans to the fan connectors on the lower power distribution board.
Cable 2: Two interconnect cables (bundled together as illustrated) — from the interconnect cable connectors on the lower power distribution board to the interconnect cable connectors on the upper power distribution board.
Note: Before installing the upper power distribution board and connecting the two interconnect cables
between the lower and upper power distribution boards, make sure that the enclosure air baffles have been
installed into the enclosure (see “Install the enclosure air baffles” in Maintenance Manual).
Use the USB 3.0 Console Breakout Cable to connect external I/O devices to a compute node. The USB 3.0
Console Breakout Cable connects through the USB 3.0 Console Breakout Cable connector on the front of
each compute node (see “Compute node” on page 15). The USB 3.0 Console Breakout Cable has
connectors for a display device (video), a USB 3.2 Gen 1 connector for a USB keyboard or mouse, and a
serial port connector.
The following illustration identifies the connectors and components on the USB 3.0 Console Breakout Cable.
Table 13. Connectors and components on the USB 3.0 Console Breakout Cable
2 USB 3.2 Gen 1 connector
3 VGA connector
5 To USB 3.0 Console Breakout Cable connector
Parts list
Use the parts list to identify each of the components that are available for your solution.
Note: Depending on the model, your solution might look slightly different from that in the following
illustrations.
Enclosure components
This section includes the components that come with the enclosure.
The parts listed in the following table are identified as one of the following:
• Tier 1 customer replaceable unit (CRU): Replacement of Tier 1 CRUs is your responsibility. If Lenovo
installs a Tier 1 CRU at your request with no service agreement, you will be charged for the installation.
• Tier 2 customer replaceable unit (CRU): You may install a Tier 2 CRU yourself or request Lenovo to
install it, at no additional charge, under the type of warranty service that is designated for your solution.
• Field replaceable unit (FRU): FRUs must be installed only by trained service technicians.
• Consumable and Structural parts: Purchase and replacement of consumable and structural parts
(components, such as a cover or bezel) is your responsibility. If Lenovo acquires or installs a structural
component at your request, you will be charged for the service.
For more information about ordering the parts shown in Figure 19 “Enclosure components” on page 33:
https://datacentersupport.lenovo.com/us/en/products/servers/thinksystem/da240-enclosure/7d1j/parts
It is highly recommended that you check the power summary data for your server/solution using Lenovo Capacity
Planner before purchasing any new parts.
1 Enclosure
2 Upper power distribution board
3 Lower power distribution board
4 Enclosure air baffle for power supply 2 (marked with )
5 Enclosure air baffle for power supply 1 (marked with )
6 Fan filler panel (for the configuration with two system fans)
7 8080 dual-rotor fan
8 Power supply filler panel
9 Power supply
10 System Management Module 2 (SMM2)
11 CMOS battery (CR2032) for SMM2
12 Compute node bay filler
13 External LCD diagnostics handset
14 USB 3.0 Console Breakout Cable
15 Shipping bracket
Compute node components
This section includes the components that come with the compute node.
The parts listed in the following table are identified as one of the following:
• Tier 1 customer replaceable unit (CRU): Replacement of Tier 1 CRUs is your responsibility. If Lenovo
installs a Tier 1 CRU at your request with no service agreement, you will be charged for the installation.
• Tier 2 customer replaceable unit (CRU): You may install a Tier 2 CRU yourself or request Lenovo to
install it, at no additional charge, under the type of warranty service that is designated for your solution.
• Field replaceable unit (FRU): FRUs must be installed only by trained service technicians.
• Consumable and Structural parts: Purchase and replacement of consumable and structural parts
(components, such as a cover or bezel) is your responsibility. If Lenovo acquires or installs a structural
component at your request, you will be charged for the service.
For more information about ordering the parts shown in Figure 20 “Compute node components” on page 35:
https://datacentersupport.lenovo.com/us/en/products/servers/thinksystem/sd630v2/7d1k/parts
It is highly recommended that you check the power summary data for your server/solution using Lenovo Capacity
Planner before purchasing any new parts.
Chapter 3. Solution hardware setup
The solution setup procedure varies depending on the configuration of the solution when it was delivered. In
some cases, the solution is fully configured and you just need to connect the solution to the network and an
ac power source, and then you can power on the solution. In other cases, the solution needs to have
hardware options installed, requires hardware and firmware configuration, and requires an operating system
to be installed.
The following steps describe the general procedure for setting up a solution:
1. Unpack the solution package. See “Solution package contents” on page 1.
2. Set up the solution hardware.
a. Install any required hardware or solution options. See the related topics in “Install solution hardware
options” on page 47.
b. If necessary, install the solution into a standard rack cabinet by using the rail kit shipped with the
solution. See the Rack Installation Instructions that come with the optional rail kit.
c. Connect the Ethernet cables and power cords to the solution. See “Rear view” on page 24 to locate
the connectors. See “Cable the solution” on page 83 for cabling best practices.
d. Power on the solution. See “Power on the compute node” on page 83.
Note: You can access the management processor interface to configure the system without
powering on the solution. Whenever the solution is connected to power, the management processor
interface is available. For details about accessing the management processor, see:
https://pubs.lenovo.com/lxcc-overview/
e. Validate that the solution hardware was set up successfully. See “Validate solution setup” on page
84.
3. Configure the system.
a. Connect the Lenovo XClarity Controller to the management network. See “Set the network connection for the Lenovo XClarity Controller” on page 85. (An example connectivity check follows this procedure.)
b. Update the firmware for the solution, if necessary. See “Update the firmware” on page 86.
c. Configure the firmware for the solution. See “Configure the firmware” on page 89.
The following information is available for RAID configuration:
• https://lenovopress.com/lp0578-lenovo-raid-introduction
• https://lenovopress.com/lp0579-lenovo-raid-management-tools-and-resources
d. Install the operating system. See “Deploy the operating system” on page 92.
e. Back up the solution configuration. See “Back up the solution configuration” on page 93.
f. Install the applications and programs for which the solution is intended to be used.
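Steps 2.d and 3.a assume that the Lenovo XClarity Controller is reachable on the management network. The following Python sketch shows a minimal connectivity check using the unauthenticated Redfish service root that XCC exposes; the IP address is a placeholder, and this is illustrative scaffolding rather than a Lenovo-provided tool.

```python
import json
import socket
import ssl
import urllib.request

XCC_ADDRESS = "192.0.2.10"   # placeholder management IP or hostname for the XCC

def xcc_port_is_open(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Check that the XCC HTTPS management port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def redfish_service_root(host: str) -> dict:
    """Fetch the unauthenticated Redfish service root exposed by the XCC."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # XCC ships with a self-signed certificate by default
    with urllib.request.urlopen(f"https://{host}/redfish/v1/", context=ctx, timeout=10) as resp:
        return json.load(resp)

if xcc_port_is_open(XCC_ADDRESS):
    root = redfish_service_root(XCC_ADDRESS)
    print("Redfish version reported by the management controller:", root.get("RedfishVersion"))
else:
    print("XCC management port is not reachable; check cabling and network settings.")
```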
Attention: Prevent exposure to static electricity, which might lead to system halt and loss of data, by
keeping static-sensitive components in their static-protective packages until installation, and handling these
devices with an electrostatic-discharge wrist strap or other grounding system.
• Read the safety information and guidelines to ensure your safety at work:
– A complete list of safety information for all products is available at:
https://pubs.lenovo.com/safety_documentation/
– The following guidelines are available as well: “Handling static-sensitive devices” on page 42 and
“Working inside the solution with the power on” on page 42.
• Make sure the components you are installing are supported by your server. For a list of supported optional
components for the server, see https://serverproven.lenovo.com/.
• When you install a new server, download and apply the latest firmware. This will help ensure that any
known issues are addressed, and that your server is ready to work with optimal performance. Go to
ThinkSystem DA240 Enclosure and ThinkSystem SD630 V2 Compute Node Drivers and Software to download
firmware updates for your server.
Important: Some cluster solutions require specific code levels or coordinated code updates. If the
component is part of a cluster solution, verify the latest Best Recipe code level menu for cluster supported
firmware and driver before you update the code.
• It is good practice to make sure that the server is working correctly before you install an optional
component.
• Keep the working area clean, and place removed components on a flat and smooth surface that does not
shake or tilt.
• Do not attempt to lift an object that might be too heavy for you. If you have to lift a heavy object, read the
following precautions carefully:
– Make sure that you can stand steadily without slipping.
– Distribute the weight of the object equally between your feet.
– Use a slow lifting force. Never move suddenly or twist when you lift a heavy object.
– To avoid straining the muscles in your back, lift by standing or by pushing up with your leg muscles.
• Make sure that you have an adequate number of properly grounded electrical outlets for the server,
monitor, and other devices.
• Back up all important data before you make changes related to the disk drives.
• Have a small flat-blade screwdriver, a small Phillips screwdriver, and a T8 torx screwdriver available.
• To view the error LEDs on the system board and internal components, leave the power on.
• You do not have to turn off the server to remove or install hot-swap power supplies, hot-swap fans, or hot-
plug USB devices. However, you must turn off the server before you perform any steps that involve
removing or installing adapter cables, and you must disconnect the power source from the server before
you perform any steps that involve removing or installing a riser card.
• Blue on a component indicates touch points, where you can grip to remove a component from or install it
in the server, open or close a latch, and so on.
• Orange on a component or an orange label on or near a component indicates that the component can be
hot-swapped if the server and operating system support hot-swap capability, which means that you can
remove or install the component while the server is still running. (Orange can also indicate touch points on
hot-swap components.) See the instructions for removing or installing a specific hot-swap component for
any additional procedures that you might have to perform before you remove or install the component.
• The red strip on the drives, adjacent to the release latch, indicates that the drive can be hot-swapped if
the server and operating system support hot-swap capability. This means that you can remove or install
the drive while the server is still running.
Note: See the system specific instructions for removing or installing a hot-swap drive for any additional
procedures that you might need to perform before you remove or install the drive.
• After finishing working on the server, make sure you reinstall all safety shields, guards, labels, and ground
wires.
Notes:
• The product is not suitable for use at visual display workplaces according to §2 of the Workplace
Regulations.
• The set-up of the server is made in the server room only.
CAUTION:
This equipment must be installed or serviced by trained personnel, as defined by the NEC, IEC 62368-1 and IEC 60950-1, the standards for Safety of Electronic Equipment within the Field of Audio/Video, Information Technology and Communication Technology. Lenovo assumes you are qualified in the servicing of equipment and trained in recognizing hazardous energy levels in products. Access to the equipment is by the use of a tool, lock and key, or other means of security, and is controlled by the authority responsible for the location.
Important: Electrical grounding of the solution is required for operator safety and correct system function.
Proper grounding of the electrical outlet can be verified by a certified electrician.
Use the following checklist to verify that there are no potentially unsafe conditions:
1. Make sure that the power is off and the power cord is disconnected.
2. Check the power cord.
• Make sure that the third-wire ground connector is in good condition. Use a meter to measure third-
wire ground continuity for 0.1 ohm or less between the external ground pin and the frame ground.
• Make sure that the power cord is the correct type.
To view the power cords that are available for the solution:
a. Go to:
http://dcsc.lenovo.com/#/
b. Click Preconfigured Model or Configure to order.
c. Enter the machine type and model for your server to display the configurator page.
d. Click Power ➙ Power Cables to see all line cords.
• Make sure that the insulation is not frayed or worn.
3. Check for any obvious non-Lenovo alterations. Use good judgment as to the safety of any non-Lenovo
alterations.
Attention: The solution might stop and data loss might occur when internal solution components are
exposed to static electricity. To avoid this potential problem, always use an electrostatic-discharge wrist
strap or other grounding systems when working inside the solution with the power on.
• Avoid loose-fitting clothing, particularly around your forearms. Button or roll up long sleeves before
working inside the solution.
• Prevent your necktie, scarf, badge rope, or hair from dangling into the solution.
• Remove jewelry, such as bracelets, necklaces, rings, cuff links, and wrist watches.
• Remove items from your shirt pocket, such as pens and pencils, in case they fall into the solution as you
lean over it.
• Avoid dropping any metallic objects, such as paper clips, hairpins, and screws, into the solution.
Attention: Prevent exposure to static electricity, which might lead to system halt and loss of data, by
keeping static-sensitive components in their static-protective packages until installation, and handling these
devices with an electrostatic-discharge wrist strap or other grounding system.
The following memory configurations and population sequences are supported for the ThinkSystem DA240
Enclosure and ThinkSystem SD630 V2 Compute Node:
The DIMM (memory) population sequences in this section show all of the memory population combinations
that are supported by your compute node. Each processor in your compute node has four memory
controllers, each memory controller has two memory channels, and each memory channel has one DIMM
slot. To populate balanced memory configurations for the best memory performance, observe the following
guidelines:
• The operating speed of the compute node is determined by the operating mode, memory speed, memory
ranks, memory population and processor.
• The compute node only supports industry-standard double-data-rate 4 (DDR4), dynamic random-access
memory (DRAM) registered dual in-line memory modules (RDIMMs) with error-correcting code (ECC).
Confirm that the compute node supports the DIMM that you are installing (see https://
serverproven.lenovo.com/).
• Do not mix RDIMMs and 3DS RDIMMs in the same compute node.
• Install an equal number of DIMMs for each DIMM type.
For more detailed installation guidelines and DIMM population sequences in the independent and memory
mirroring mode, refer to
• “Independent memory mode: Installation guidelines and sequence” on page 44
• “Memory mirroring mode: Installation guidelines and sequence” on page 45
The following illustration shows the location of the DIMM connectors on the system board.
Figure 21. The location of the DIMM connectors on the system board
See the following table for memory channels and DIMM slots information around a processor.
Table 16. Memory channels and DIMM slots information around a processor
Integrated Memory Controller (iMC): Controller 0, Controller 1, Controller 2, Controller 3
DIMM connectors (Processor 2): Controller 0: 15, 16; Controller 1: 13, 14; Controller 2: 10, 9; Controller 3: 12, 11
• Install the DIMMs for processor 1 before working on processor 2. Balance the DIMMs across the two
processors so that all processors have the same memory capacity.
• Populate memory controller 0 first for each processor. Balance the DIMMs across the processor memory
controllers so all of the memory controllers have exactly the same DIMM population and memory
capacity.
• The DA240 Enclosure and ThinkSystem SD630 V2 support up to two types of memory capacity.
• Of all the memory controllers, make sure that memory channel 0 and memory channel 1 are configured with the same total memory capacity respectively: install DIMMs of the same capacity across memory channel 0. However, the total memory capacity of memory channel 0 is allowed to be different from that of memory channel 1.
• A maximum of eight logical ranks per memory channel are allowed.
• Populate all memory channels for optimal performance. For memory configurations that do not require or
allow use of all memory channels, all memory channels that are populated should have the same number
of DIMMs, the same total memory capacity, and the same total number of memory ranks.
• Memory configurations with DIMM quantities of 4, 8, 12, and 16 are supported.
• Installing an equal number of DIMMs for each processor is recommended. A minimum of two DDR4 DIMMs is required for each processor.
For the independent mode, install the DIMMs (of the same capacity) in the following order: 2, 15, 4, 13, 7, 10, 5, 12, 1, 16, 8, 9, 3, 14, 6, 11. (A population-lookup sketch follows the notes below.)
Note: When adding one or more DIMMs during a memory upgrade, you might need to move some DIMMs that are already installed to new locations.
Notes:
1. Follow this population sequence if all of the DIMMs have the same memory capacity.
2. Follow this population sequence if the DIMMs in memory channel 0 have a different memory capacity
from those in memory channel 1.
3. DIMM configurations that support Sub NUMA Clustering (SNC), which can be enabled via UEFI.
4. DIMM configurations that support Software Guard Extensions (SGX), see “Enable Software Guard
Extensions (SGX)” on page 91 to enable this feature.
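The population order and the supported DIMM quantities (4, 8, 12, and 16) stated above can be turned into a quick lookup, as referenced earlier. The Python sketch below simply takes the first N slots from the documented order; it is an illustration, not an official Lenovo configurator, and it assumes all DIMMs have the same capacity (see note 1 above) — the different-capacity sequence is not modeled.

```python
# Independent-mode DIMM population order stated above (same-capacity DIMMs).
INDEPENDENT_MODE_ORDER = [2, 15, 4, 13, 7, 10, 5, 12, 1, 16, 8, 9, 3, 14, 6, 11]
SUPPORTED_DIMM_COUNTS = {4, 8, 12, 16}   # per the guidelines above

def independent_mode_slots(dimm_count: int) -> list[int]:
    """Return the DIMM slots to populate for the given quantity of
    identical DIMMs in independent memory mode."""
    if dimm_count not in SUPPORTED_DIMM_COUNTS:
        raise ValueError(f"Unsupported DIMM quantity: {dimm_count}")
    return sorted(INDEPENDENT_MODE_ORDER[:dimm_count])

print(independent_mode_slots(8))   # [2, 4, 5, 7, 10, 12, 13, 15]
```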
The following illustration shows the location of the DIMM connectors on the system board.
See the following table for memory channels and DIMM slots information around a processor.
Table 18. Memory channels and DIMM slots information around a processor
Integrated Memory Controller (iMC): Controller 0, Controller 1, Controller 2, Controller 3
DIMM connectors (Processor 2): Controller 0: 15, 16; Controller 1: 13, 14; Controller 2: 10, 9; Controller 3: 12, 11
• Memory mirroring can be configured across memory channel 0 and memory channel 1.
• The total memory capacity of memory channel 0 must be equal to that of memory channel 1.
• Populate both memory channel 0 and memory channel 1 of each memory controller.
• For the memory-mirroring mode, ThinkSystem SD630 V2 Compute Node only supports the memory
configuration with 16 DIMMs. Populate all of the DIMM slots with DIMMs that are identical in capacity and
architecture.
Note: When adding one or more DIMMs during a memory upgrade, you might need to move some DIMMs that are already installed to new locations. (A configuration-check sketch follows the notes below.)
Notes:
1. DIMM configurations that support Sub NUMA Clustering (SNC), which can be enabled via UEFI.
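A proposed mirroring configuration can be sanity-checked against the rules above (all 16 slots populated with identical DIMMs, and channel 0 capacity equal to channel 1 capacity on every controller). The Python sketch below covers the processor 2 slots listed in Table 18; it is illustrative only, and the same check should be repeated for the processor 1 slots shown in the DIMM-connector figure for your system board.

```python
# Processor 2 slot map from Table 18: per controller, (channel 0 slot, channel 1 slot).
PROC2_CONTROLLERS = {
    "iMC0": (15, 16),
    "iMC1": (13, 14),
    "iMC2": (10, 9),
    "iMC3": (12, 11),
}

def mirroring_config_is_valid(dimm_capacity_gb: dict[int, int]) -> bool:
    """Check a {slot: capacity_GB} map for processor 2 against the mirroring
    rules above: both channels of every controller populated, identical
    capacities, and channel 0 capacity equal to channel 1 capacity."""
    if len(set(dimm_capacity_gb.values())) != 1:
        return False                      # all DIMMs must be identical
    for ch0_slot, ch1_slot in PROC2_CONTROLLERS.values():
        if ch0_slot not in dimm_capacity_gb or ch1_slot not in dimm_capacity_gb:
            return False                  # both channels of each controller must be populated
        if dimm_capacity_gb[ch0_slot] != dimm_capacity_gb[ch1_slot]:
            return False                  # channel 0 capacity must equal channel 1 capacity
    return True

# Example: eight identical 32 GB DIMMs on processor 2 (repeat the check for processor 1).
config = {slot: 32 for pair in PROC2_CONTROLLERS.values() for slot in pair}
print(mirroring_config_is_valid(config))  # True
```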
Install solution hardware options
This section includes instructions for performing initial installation of optional hardware. Each component
installation procedure references any tasks that need to be performed to gain access to the component
being replaced.
Attention: To ensure the components you install work correctly without problems, read the following
precautions carefully.
• Make sure the components you are installing are supported by your server. For a list of supported optional
components for the server, see https://serverproven.lenovo.com/.
• Always download and apply the latest firmware. This will help ensure that any known issues are
addressed, and that your server is ready to work with optimal performance. Go to ThinkSystem DA240
Enclosure and ThinkSystem SD630 V2 Compute Node Drivers and Software to download firmware updates
for your server.
• It is good practice to make sure that the server is working correctly before you install an optional
component.
• Follow the installation procedures in this section and use appropriate tools. Incorrectly installed
components can cause system failure from damaged pins, damaged connectors, loose cabling, or loose
components.
To avoid possible danger, read and follow the following safety statement.
• S017
CAUTION:
Hazardous moving fan blades nearby. Keep fingers and other body parts away.
• S033
CAUTION:
Hazardous energy present. Voltages with hazardous energy might cause heating when shorted
with metal, which might result in spattered metal, burns, or both.
Attention:
• Read the “Installation guidelines” on page 40 to ensure that you work safely.
Procedure
Step 1. Align the fan with the fan socket on the enclosure.
Step 2. Press and hold the orange latch; then, slide the fan into the socket until it clicks into place.
Attention:
• When replacing a fan with power on, complete the replacement within 30 seconds to ensure
proper operation.
• To maintain proper system cooling, do not operate the DA240 Enclosure without a fan or a fan
filler panel installed in each socket.
• For the configuration with two system fans, make sure that a fan filler panel has been installed in
fan socket 2 before operating the DA240 Enclosure. For the location of the fan socket, refer to
the rear view of “Enclosure” on page 24.
Demo video
CAUTION:
The power-control button on the device and the power switch on the power supply do not turn off
the electrical current supplied to the device. The device also might have more than one power cord.
To remove all electrical current from the device, ensure that all power cords are disconnected from
the power source.
Attention: Read the “Installation guidelines” on page 40 to ensure that you work safely.
Procedure
Step 1. Turn off the corresponding compute node on which you are going to perform the task.
Step 2. Remove the compute node from the enclosure.
a. Release and rotate the front handle on the compute node as shown in the illustration.
b. Slide the compute node out about 10 inches (25.4 cm); then, grip the node with both hands
and carefully pull it out of the enclosure.
Attention:
• When you remove the compute node, note the node bay number. Reinstalling a compute node
into a different node bay from the one it was removed from could lead to unintended
consequences. Certain configuration information and update options are established based on
respective node bay numbers. If you reinstall the compute node into a different node bay, you
might have to reconfigure the compute node.
• Install either a node bay filler or another compute node in the node bay within one minute.
• To maintain proper system cooling, do not operate the DA240 Enclosure without a compute
node or node bay filler installed in each node bay.
Demo video
CAUTION:
Hazardous voltage, current, and energy levels might be present. Only a qualified service technician
is authorized to remove the covers where the label is attached.
• S033
CAUTION:
Hazardous energy present. Voltages with hazardous energy might cause heating when shorted
with metal, which might result in spattered metal, burns, or both.
Attention: Read the “Installation guidelines” on page 40 to ensure that you work safely.
Procedure
Step 1. Remove the node front cover from the compute node.
a. Loosen the screw on the node front cover.
b. Press on the two push points on the node front cover to slide the cover toward the rear of
the compute node until it has disengaged from the node. Then, lift the cover away from the
node.
Demo video
Watch the procedure on YouTube
Procedure
Step 1. Remove the front air baffle from the compute node.
a. Slightly push the right and left release latches.
b. Lift the front air baffle out of the compute node.
Attention: For proper cooling and airflow, replace the front air baffle before you turn on the
compute node. Operating the node without the front air baffle might damage node components.
Demo video
Procedure
Step 1. Remove the middle air baffle from the compute node.
a. Slightly push the right and left release latches.
b. Lift the middle air baffle out of the compute node.
Attention: For proper cooling and airflow, replace the middle air baffle before you turn on the
compute node. Operating the node without the middle air baffle might damage node components.
Demo video
To avoid potential danger, read and follow the following safety statements.
• S001
DANGER
Attention: Read the “Installation guidelines” on page 40 to ensure that you work safely.
Procedure
Step 1. Remove the M.2 backplane from the system board by pulling straight up on both ends of the
backplane at the same time.
Demo video
Procedure
Step 1. Make sure that you have saved the data on your drive before removing it from the compute node.
Step 2. Based on your configuration, follow the corresponding procedures to remove a 7mm 2.5-inch
SATA/NVMe or 15mm 2.5-inch NVMe solid-state drive.
Step 3. Place the drive(s) on the static protective surface to be installed back into the drive cage assembly
if needed.
Demo video
Procedure
Step 1. Based on your existing configuration, follow the corresponding procedures to remove a 7mm or
15mm drive cage assembly.
Demo video
Procedure
Step 1. Remove the PCIe riser assembly from the compute node.
a. Loosen the captive screw on the PCIe riser assembly.
b. Carefully grasp the PCIe riser assembly by its edges and lift it out of the compute node tray.
Note: The PCIe riser assembly is located on the left side of the compute node as illustrated
while the drive cage assembly is on the right.
Figure 33. PCIe riser assembly removal
Demo video
Procedure
Step 1. Remove the screw; then, grasp the adapter by its edges and carefully pull it out of the PCIe riser-
cage.
Procedure
Notes: Make sure the following components have been removed from the compute node before replacing a
spacer:
• Front air baffle (see “Remove the front air baffle” on page 51).
• Node front cover (see “Remove the node front cover” on page 49).
• Drive cage assembly (see “Remove the drive cage assembly” on page 55).
• PCIe riser assembly (see “Remove the PCIe riser assembly” on page 56).
Step 1. Remove the existing spacer from the compute node by removing the screw located inside of the
spacer as illustrated.
Step 2. Install the SATA or NVMe spacer based on the type of drive(s) you will install into the drive cage
assembly.
Table 20. List of spacers
a. Align the opening on top of the spacer with the slot on the compute node tray as illustrated.
b. Insert the screw into the slot through the opening on top of the spacer and tighten the screw to
secure the spacer to the compute node tray.
To avoid potential danger, read and follow the following safety statements.
• S001
DANGER
Attention: Read the “Installation guidelines” on page 40 to ensure that you work safely.
Procedure
Note: Some M.2 backplanes support two identical M.2 drives. When two drives are installed,
align and support both drives when sliding the retainer forward to secure the drives.
Attention: When sliding the retainer forward, make sure the two nubs on the retainer enter the
small holes on the M.2 backplane. Once they enter the holes, you will hear a soft “click” sound.
Demo video
Procedure
Figure 38. M.2 backplane retainer adjustment
To avoid potential danger, read and follow the following safety statements.
• S001
DANGER
Attention: Read the “Installation guidelines” on page 40 to ensure that you work safely.
Step 1. Align the openings located at the bottom of the blue plastic supports at each end of the M.2
backplane with the guide pins on the system board; then, insert the backplane into the system
board connector. Press down on the M.2 backplane to fully seat it.
Demo video
Procedure
Notes: Make sure the following components are in place before installing the drive cage assembly:
• Two 7mm 2.5-inch SATA/NVMe backplanes or one 15mm 2.5-inch NVMe solid-state drive backplane
• The correct SATA or NVMe spacer based on the type of drive(s) you will install into the drive cage
assembly (see “Replace SATA and NVMe spacers” in ThinkSystem DA240 Enclosure Type 7D1J and
ThinkSystem SD630 V2 Compute Node Type 7D1K Setup Guide).
Step 1. Follow the corresponding procedures to install a 7mm or 15mm drive cage assembly.
Important: During normal operation, the drive bay must contain either a drive cage assembly
or drive bay filler for proper cooling.
b. Fasten the screw on the left and tighten the captive screw on the right to secure the drive cage
assembly to the compute node tray.
c. Reconnect the cable connecting the system board and drive backplanes (see “7mm 2.5-inch
SATA/NVMe drive backplane cable” on page 30).
Important: During normal operation, the drive bay must contain either a drive cage assembly
or drive cage filler panel for proper cooling.
b. Fasten the screw on the left and tighten the captive screw on the right to secure the drive cage
assembly to the compute node tray.
c. Reconnect the cable connecting the system board and drive backplane (see “15mm 2.5-inch
NVMe drive backplane cable” on page 30).
Demo video
The following notes describe the type of drives that the compute node supports and other information that
you must consider when you install a drive. For a list of supported drives, see https://
serverproven.lenovo.com/.
• Locate the documentation that comes with the drive and follow those instructions in addition to the
instructions in this chapter.
• You can install up to two 7mm 2.5-inch SATA/NVMe solid-state drives or one 15mm 2.5-inch NVMe solid-
state drive into each drive cage.
• The electromagnetic interference (EMI) integrity and cooling of the solution are protected by having all
bays and PCI and PCIe slots covered or occupied. When you install a drive, PCI, or PCIe adapter, save the
EMC shield and filler panel from the bay or PCI or PCIe adapter slot cover in the event that you later
remove the device.
• For a complete list of supported optional devices for the node, see https://serverproven.lenovo.com/.
Procedure
Step 1. If a drive filler has been installed in the drive bay, remove it.
Figure 42. 7mm / 15mm solid-state drive fillers removal
Step 2. Based on your configuration, follow the corresponding procedures to install a 7mm 2.5-inch SATA/
NVMe or 15mm 2.5-inch NVMe solid-state drive.
Demo video
Procedure
Step 1. Remove the screw; then, remove the filler from the PCIe riser-cage.
Step 2. Align the adapter with the PCIe connector on the riser-cage; then, carefully press the adapter
straight into the connector until it is securely seated.
Step 3. Fasten the screw to secure the adapter.
Demo video
Watch the procedure on YouTube
Procedure
Note: Make sure a PCIe adapter has been installed into the PCIe riser assembly.
Step 1. Install the PCIe riser assembly into the compute node.
a. Align the guide pin and hook on the rear end of the PCIe riser assembly with the notches on
the spacer and on the compute node tray as illustrated. Then, insert the PCIe riser assembly
into the connector located in the middle of the system board.
b. Tighten the captive screw to secure the PCIe riser assembly to the compute node tray.
Important: During normal operation, the PCIe bay must contain either a PCIe riser assembly
or PCIe riser filler panel for proper cooling.
Note: The PCIe riser assembly is located on the left side of the compute node as illustrated
while the drive cage assembly is on the right.
Demo video
See “Memory module installation rules and order” on page 43 for detailed information about memory
configuration and setup.
Attention:
• Read the “Installation guidelines” on page 40 to ensure that you work safely.
• Memory modules are sensitive to static discharge and require special handling. Refer to the standard
guidelines for “Handling static-sensitive devices” on page 42.
– Always wear an electrostatic-discharge strap when removing or installing memory modules.
Electrostatic-discharge gloves can also be used.
– Never hold two or more memory modules together, and do not let them touch each other. Do not stack
memory modules directly on top of each other during storage.
– Never touch the gold memory module connector contacts or allow these contacts to touch the outside
of the memory module connector housing.
– Handle memory modules with care: never bend, twist, or drop a memory module.
– Do not use any metal tools (such as jigs or clamps) to handle the memory modules, because the rigid
metal may damage the memory modules.
– Do not insert memory modules while holding their packages or passive components; the high insertion
force can crack the packages or detach the passive components.
The following illustration shows the location of the DIMM connectors on the system board.
Figure 47. The location of the DIMM connectors on the system board
Procedure
Step 1. Determine which DIMM you want to install in the compute node and locate its corresponding
connector on the system board.
Step 2. Open the retaining clips on each end of the DIMM connector. If necessary, you can use a pointed
tool to open the retaining clips due to space constraints. Pencils are not recommended as a tool as
they may not be strong enough.
a. Place the tip of the tool in the recess on the top of the retaining clip.
b. Carefully rotate the retaining clip away from the DIMM connector.
Attention: To avoid breaking the retaining clips or damaging the DIMM connectors, open and
close the retaining clips gently.
Step 3. Touch the static-protective package that contains the DIMM to any unpainted surface on the
outside of the compute node. Then, remove the DIMM from the package and place it on a static-
protective surface.
Step 4. Install the DIMM.
a. Make sure that the retaining clips are in the fully open position.
b. Align the DIMM with its connector and gently place the DIMM on the connector with both
hands.
c. Firmly press both ends of the DIMM straight down into the connector until the retaining clips
snap into the locked position.
Note: If there is a gap between the DIMM and the retaining clips, the DIMM has not been correctly
inserted. In this case, open the retaining clips, remove the DIMM, and then reinsert it.
Demo video
• Do not allow the thermal grease on the processor or heat sink to come in contact with anything. Contact
with any surface can compromise the thermal grease, rendering it ineffective. Thermal grease can damage
components, such as the electrical connectors in the processor socket.
• Remove and install only one PHM at a time. If the system board supports multiple processors, install the
PHMs starting with the first processor socket.
Notes:
• The heat sink, processor, and processor carrier for your system might be different from those shown in the
illustrations.
• PHMs are keyed for the socket where they can be installed and for their orientation in the socket.
• See https://serverproven.lenovo.com/ for a list of processors supported for your server. All processors on
the system board must have the same speed, number of cores, and frequency.
• Before you install a new PHM or replacement processor, update your system firmware to the latest level.
See “Update the firmware” on page 86.
• For ThinkSystem DA240 Enclosure and ThinkSystem SD630 V2 Compute Node, T-shaped heat sink is
only applicable to processor socket 2.
• Installing an additional PHM can change the memory requirements for your system. See “Install a memory
module” on page 68 for a list of processor-to-memory relationships.
• The following types of heat sinks are applicable to SD630 V2:
Processors with TDP (thermal design power) ≤ 165W:
– 113x124x23.5mm heat sink (aluminum fins) is applicable to both processor socket 1 and 2.
Processors with TDP (thermal design power) ≥ 185W:
– 113x124x23.5mm heat sink (copper fins) is only applicable to processor socket 1.
– T-shaped heat sink is only applicable to processor socket 2.
• Make sure to install the correct number of fans based on your configuration.
– Two fans:
– Processors with TDP (thermal design power) ≤ 165W
– Three fans:
– Processors with TDP (thermal design power) ≥ 185W
– Intel(R) Xeon(R) Gold 6334 (165W, 8 core)
Procedure
Step 1. Remove the processor socket cover, if one is installed on the processor socket, by placing your
fingers in the half-circles at each end of the cover and lifting it from the system board.
Step 2. Install the processor-heat-sink module into the system board socket.
Figure 52. PHM installation
Note: For reference, the torque required to fully tighten or loosen the fasteners is 1.1 newton-
meters, 10 inch-pounds.
Attention: To prevent damage to components, make sure that you follow the indicated
installation sequence.
Step 3. Based on your configuration, follow the corresponding procedures to install the T-shaped heat sink
that comes with processor 2.
a. For the four Torx T30 nuts, rotate the anti-tilt wire bails inward.
b. Align the triangular mark and four Torx T30 nuts on the T-shaped heat sink with the
triangular mark and threaded posts of the processor socket; then, insert the T-shaped heat
sink into the processor socket.
c. Rotate the anti-tilt wire bails outward until they engage with the hooks in the socket.
d. Fully tighten the four Torx T30 nuts and three captive screws in the installation sequence
shown on the T-shaped heat sink label as illustrated below. Then, visually inspect to make sure
that there is no gap between the screw shoulder beneath the heat sink and the processor
socket.
Note: For reference, the torque required to fully tighten or loosen the fasteners is 1.1 newton-
meters, 10 inch-pounds.
Attention: To prevent damage to components, make sure that you follow the indicated
installation sequence.
Installation sequence: 1, 2, 3, 4, 5, 6, 7.
Figure 54. Torx T30 nuts and screws numbering on T-shaped heat sink label
1. Reinstall the node air baffles (see “Install the front air baffle” on page 76 and “Install the middle air baffle”
on page 75).
2. Reinstall the compute node into the enclosure (see “Install a compute node in the enclosure” on page
78).
3. Check the power LED on each node to make sure it changes from fast blink to slow blink to indicate the
node is ready to be powered on.
Demo video
Procedure
Step 1. Align the middle air baffle tabs with the baffle slots located on both sides of the compute node in
between DIMM connectors 1 to 8 and 9 to 16.
Step 2. Lower the middle air baffle into the compute node and press it down until it is securely seated.
Attention:
• For proper cooling and airflow, reinstall the middle air baffle before you turn on the compute
node. Operating the node without the middle air baffle might damage node components.
• Pay attention to the cables routed along the sidewalls of the compute node as they may catch
under the middle air baffle.
Demo video
Procedure
Figure 56. Front air baffle installation
Step 1. Align the front air baffle tabs with the baffle slots located on both sides of the compute node in
between the drive cage assembly/PCIe riser assembly and DIMM connectors 1 to 8.
Step 2. Lower the front air baffle into the compute node and press it down until it is securely seated.
Attention:
• For proper cooling and airflow, reinstall the front air baffle before you turn on the compute node.
Operating the node without the front air baffle might damage node components.
• Ensure that the M.2 adapter pull tab is tucked under the front air baffle.
• Pay attention to the cables routed along the sidewalls of the compute node as they may catch
under the front air baffle.
Demo video
CAUTION:
Hazardous voltage, current, and energy levels might be present. Only a qualified service technician
is authorized to remove the covers where the label is attached.
• S033
Attention: Read the “Installation guidelines” on page 40 to ensure that you work safely.
Procedure
Step 1. Install the node front cover into the compute node.
a. Align the pins on the inside of the node front cover with the notches in the side walls of the
compute node. Then, position the cover on top of the node and slide the cover forward until it
latches in place.
b. Fasten the screw on the node front cover.
Demo video
To avoid possible danger, read and follow the following safety statement.
• S002
CAUTION:
The power-control button on the device and the power switch on the power supply do not turn off
the electrical current supplied to the device. The device also might have more than one power cord.
To remove all electrical current from the device, ensure that all power cords are disconnected from
the power source.
Attention:
• Read the “Installation guidelines” on page 40 to ensure that you work safely.
• Be careful when you are removing or installing the compute node to avoid damaging the node connector
(s).
Procedure
Attention:
• If you are reinstalling a compute node that you removed, you must install it in the same node bay
from which you removed it. Certain configuration information and update options are
established based on respective node bay numbers. Reinstalling a compute node into a different
node bay could lead to unintended consequences. If you reinstall the compute node into a
different node bay, you might have to reconfigure the compute node.
• The time required for a compute node to initialize varies by system configurations. The power
LED flashes rapidly; the power button on the compute node will not respond until the power LED
flashes slowly, indicating that the initialization process is complete.
• To maintain proper system cooling, do not operate the DA240 Enclosure without a compute
node or node bay filler installed in each node bay.
Step 2. If you have other compute nodes to install, do so now.
Demo video
The USB 3.0 Console Breakout Cable connects through the USB 3.0 Console Breakout Cable connector on
the front of each compute node (see “Compute node” on page 15).
To avoid possible danger, read and follow the following safety statement.
• S014
CAUTION:
Hazardous voltage, current, and energy levels might be present. Only a qualified service technician
is authorized to remove the covers where the label is attached.
• S017
CAUTION:
Hazardous moving fan blades nearby. Keep fingers and other body parts away.
• S033
CAUTION:
Hazardous energy present. Voltages with hazardous energy might cause heating when shorted
with metal, which might result in spattered metal, burns, or both.
Attention:
• Read the “Installation guidelines” on page 40 to ensure that you work safely.
• Touch the static-protective package that contains the component to any unpainted metal surface on the
solution; then, remove it from the package and place it on a static-protective surface.
Procedure
Step 1. Align the connector on the cable with that on the compute node and push it in.
Step 2. Attach the external LCD diagnostics handset to a metal surface with the magnetic bottom.
Check the power LED on each node to make sure it changes from fast blink to slow blink to indicate the node
is ready to be powered on.
Demo video
Step 1. Remove all of the compute nodes, power supplies, fans, and SMM2 from the enclosure to reduce
its weight.
Step 2. Align and place the enclosure onto the rails; then, slide it into the rack along the rails.
Figure 62. Installing the enclosure into the rails
CAUTION:
Use safe practices when lifting the enclosure.
Step 3. Secure the captive screws.
Step 4. Reinstall all of the solution components that you previously removed.
For information about powering off the compute node, see “Power off the compute node” on page 84.
To place the compute node in a standby state (power status LED flashes once per second):
Note: The Lenovo XClarity Controller can place the compute node in a standby state as an automatic
response to a critical system failure.
• Start an orderly shutdown using the operating system (if supported by your operating system).
• Press the power button to start an orderly shutdown (if supported by your operating system).
• Press and hold the power button for more than 4 seconds to force a shutdown.
When in a standby state, the compute node can respond to remote power-on requests sent to the Lenovo
XClarity Controller. For information about powering on the compute node, see “Power on the compute node”
on page 83.
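If the Lenovo XClarity Controller of a node is reachable over the management network, you can also place that node in a standby state or query its power state remotely with a standard IPMI client. The following commands are an illustrative sketch only; they assume that IPMI over LAN is enabled on the XClarity Controller and that the IP address and credentials are replaced with your own values:
ipmitool -I lanplus -H <xcc_external_ip> -U USERID -P <xcc_password> chassis power soft
ipmitool -I lanplus -H <xcc_external_ip> -U USERID -P <xcc_password> chassis power status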
Chapter 4. System configuration
Complete these procedures to configure your system.
The following methods are available to set the network connection for the Lenovo XClarity Controller if you
are not using DHCP:
• If a monitor is attached to the solution, you can use Lenovo XClarity Provisioning Manager to set the
network connection.
Complete the following steps to connect the Lenovo XClarity Controller to the network using the Lenovo
XClarity Provisioning Manager.
1. Start the solution.
2. Press the key specified in the on-screen instructions to display the Lenovo XClarity Provisioning
Manager interface. (For more information, see the “Startup” section in the LXPM documentation
compatible with your server at https://pubs.lenovo.com/lxpm-overview/.)
3. Go to LXPM ➙ UEFI Setup ➙ BMC Settings to specify how the Lenovo XClarity Controller will
connect to the network.
– If you choose a static IP connection, make sure that you specify an IPv4 or IPv6 address that is
available on the network.
– If you choose a DHCP connection, make sure that the MAC address for the server has been
configured in the DHCP server.
4. Click OK to apply the setting and wait for two to three minutes.
5. Use an IPv4 or IPv6 address to connect Lenovo XClarity Controller.
Important: The Lenovo XClarity Controller is set initially with a user name of USERID and password
of PASSW0RD (with a zero, not the letter O). This default user setting has Supervisor access. It is
required to change this user name and password during your initial configuration for enhanced
security.
• If no monitor is attached to the solution, you can set the network connection through the System
Management Module 2 interface. Connect an Ethernet cable from your laptop to the Ethernet port on the
System Management Module 2, which is located at the rear of the solution.
Note: Make sure that you modify the IP settings on the laptop so that it is on the same network as the
solution default settings.
To access the System Management Module 2 interface, the System Management Module 2 network must
be enabled. For more information, see System Management Module 2 User Guide.
The default IPv4 address and the IPv6 Link Local Address (LLA) are provided on the Lenovo XClarity
Controller Network Access label that is affixed to the Pull Out Information Tab.
• If you are using the Lenovo XClarity Administrator Mobile app from a mobile device, you can connect to
the Lenovo XClarity Controller through the Lenovo XClarity Controller USB connector or USB 3.0 Console
Breakout Cable. For the location of Lenovo XClarity Controller USB connector and USB 3.0 Console
Breakout Cable connector, see “Compute node” on page 15.
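In addition to the methods above, if an operating system is already running on a compute node, you can read the current Lenovo XClarity Controller network settings in-band with a standard IPMI client. This is an illustrative sketch only; it assumes a Linux operating system with the IPMI device driver loaded and the ipmitool package installed, and the LAN channel number might differ on your system:
ipmitool lan print 1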
You can use the tools listed here to update the most current firmware for your server and the devices that are
installed in the server.
• Off-Target update. The installation or update is initiated from a computing device interacting directly with
the server’s Lenovo XClarity Controller.
• UpdateXpress System Packs (UXSPs). UXSPs are bundled updates designed and tested to provide
interdependent levels of functionality, performance, and compatibility. UXSPs are server machine-type
specific and are built (with firmware and device driver updates) to support specific Windows Server, Red
Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) operating system distributions.
Machine-type-specific firmware-only UXSPs are also available.
(The table in the original guide lists, for each firmware update tool, whether updates are performed On-Target or Off-Target; its footnotes distinguish I/O firmware updates from BMC and UEFI firmware updates.)
• Lenovo XClarity Provisioning Manager
Note: By default, the Lenovo XClarity Provisioning Manager Graphical User Interface is displayed when
you start the server and press the key specified in the on-screen instructions. If you have changed that
default to be the text-based system setup, you can bring up the Graphical User Interface from the text-
based system setup interface.
For additional information about using Lenovo XClarity Provisioning Manager to update firmware, see:
“Firmware Update” section in the LXPM documentation compatible with your server at https://
pubs.lenovo.com/lxpm-overview/
• Lenovo XClarity Controller
If you need to install a specific update, you can use the Lenovo XClarity Controller interface for a specific
server.
Notes:
– To perform an in-band update through Windows or Linux, the operating system driver must be installed
and the Ethernet-over-USB (sometimes called LAN over USB) interface must be enabled.
For additional information about configuring Ethernet over USB, see:
“Configuring Ethernet over USB” section in the XCC documentation version compatible with your
server at https://pubs.lenovo.com/lxcc-overview/
– If you update firmware through the Lenovo XClarity Controller, make sure that you have downloaded
and installed the latest device drivers for the operating system that is running on the server.
For additional information about using Lenovo XClarity Controller to update firmware, see:
“Updating Server Firmware” section in the XCC documentation compatible with your server at https://
pubs.lenovo.com/lxcc-overview/
• Lenovo XClarity Essentials OneCLI
Lenovo XClarity Essentials OneCLI is a collection of command line applications that can be used to
manage Lenovo servers. Its update application can be used to update firmware and device drivers for
your servers. The update can be performed within the host operating system of the server (in-band) or
remotely through the BMC of the server (out-of-band).
For additional information about using Lenovo XClarity Essentials OneCLI to update firmware, see:
https://pubs.lenovo.com/lxce-onecli/onecli_c_update
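As an illustration only, an out-of-band firmware update with OneCLI might look similar to the following command. The exact options vary by OneCLI release, so verify them against the OneCLI documentation before use, and replace the placeholders with your own update package directory and XClarity Controller credentials:
onecli update flash --dir <update_package_directory> --bmc <xcc_user_id>:<xcc_password>@<xcc_external_ip>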
• Lenovo XClarity Essentials UpdateXpress
Lenovo XClarity Essentials UpdateXpress provides most of OneCLI update functions through a graphical
user interface (GUI). It can be used to acquire and deploy UpdateXpress System Pack (UXSP) update
packages and individual updates. UpdateXpress System Packs contain firmware and device driver
updates for Microsoft Windows and for Linux.
You can obtain Lenovo XClarity Essentials UpdateXpress from the following location:
https://datacentersupport.lenovo.com/solutions/lnvo-xpress
• Lenovo XClarity Essentials Bootable Media Creator
You can use Lenovo XClarity Essentials Bootable Media Creator to create bootable media that is suitable
for firmware updates, VPD updates, inventory and FFDC collection, advanced system configuration, FoD
Keys management, secure erase, RAID configuration, and diagnostics on supported servers.
You can obtain Lenovo XClarity Essentials BoMC from the following location:
https://datacentersupport.lenovo.com/solutions/lnvo-bomc
• Lenovo XClarity Administrator
If you are managing multiple servers using the Lenovo XClarity Administrator, you can update firmware for
all managed servers through that interface. Firmware management is simplified by assigning firmware-
compliance policies to managed endpoints. When you create and assign a compliance policy to managed
endpoints, Lenovo XClarity Administrator monitors changes to the inventory for those endpoints and flags
any endpoints that are out of compliance.
For additional information about using Lenovo XClarity Administrator to update firmware, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/update_fw.html
• Lenovo XClarity Integrator offerings
Lenovo XClarity Integrator offerings can integrate management features of Lenovo XClarity Administrator
and your server with software used in a certain deployment infrastructure, such as VMware vCenter,
Microsoft Admin Center, or Microsoft System Center.
For additional information about using Lenovo XClarity Integrator to update firmware, see:
https://pubs.lenovo.com/lxci-overview/
Important: Do not configure option ROMs to be set to Legacy unless directed to do so by Lenovo Support.
This setting prevents UEFI drivers for the slot devices from loading, which can cause negative side effects for
Lenovo software, such as Lenovo XClarity Administrator and Lenovo XClarity Essentials OneCLI, and to the
Lenovo XClarity Controller. The side effects include the inability to determine adapter card details, such as
model name and firmware levels. When adapter card information is not available, generic information is
displayed for the model name, such as "Adapter 06:00:00", instead of the actual model name, such as
"ThinkSystem RAID 930-16i 4GB Flash." In some cases, the UEFI boot process might also hang.
Notes: The Lenovo XClarity Provisioning Manager provides a Graphical User Interface to configure a
server. The text-based interface to system configuration (the Setup Utility) is also available. From Lenovo
XClarity Provisioning Manager, you can choose to restart the server and access the text-based interface.
In addition, you can choose to make the text-based interface the default interface that is displayed when
you start LXPM. To do this, go to Lenovo XClarity Provisioning Manager ➙ UEFI Setup ➙ System
Settings ➙ <F1>Start Control ➙ Text Setup. To start the server with the Graphical User Interface, select Auto
or Tool Suite.
Memory configuration
Memory performance depends on several variables, such as memory mode, memory speed, memory ranks,
memory population and processor.
More information about optimizing memory performance and configuring memory is available at the Lenovo
Press website:
https://lenovopress.com/servers/options/memory
In addition, you can take advantage of a memory configurator, which is available at the following site:
http://1config.lenovo.com/#/memory_configuration
For specific information about the required installation order of memory modules in your compute node(s)
based on the system configuration and memory mode that you are implementing, see “Memory module
installation guidelines” on page 43.
Step 1. Make sure you follow the memory module population sequence for SGX configurations in
“Independent memory mode: Installation guidelines and sequence” on page 44. (DIMM
configuration must be at least 8 DIMMs per socket to support SGX).
Step 2. Restart the system. Before the operating system starts up, press the key specified in the on-screen
instructions to enter the Setup Utility. (For more information, see the “Startup” section in the LXPM
documentation compatible with your server at https://pubs.lenovo.com/lxpm-overview/.)
Step 3. Go to System settings ➙ Processors ➙ UMA-Based Clustering and disable the option.
Step 4. Go to System settings ➙ Processors ➙ Total Memory Encryption (TME) and enable the option.
Step 5. Save the changes, then go to System settings ➙ Processors ➙ SW Guard Extension (SGX) and
enable the option.
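After the operating system starts, you can optionally confirm that SGX is exposed to the OS. The following check is a sketch that assumes a Linux distribution whose kernel includes the in-tree SGX driver (version 5.11 or later); other operating systems provide their own verification tools:
grep -qm1 sgx /proc/cpuinfo && echo "SGX CPU flag present"
ls /dev/sgx_enclave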
RAID configuration
Using a Redundant Array of Independent Disks (RAID) to store data remains one of the most common and
cost-efficient methods to increase the storage performance, availability, and capacity of your compute node
(s).
RAID increases performance by allowing multiple drives to process I/O requests simultaneously. RAID can
also prevent data loss in case of a drive failure by reconstructing (or rebuilding) the missing data from the
failed drive using the data from the remaining drives.
A RAID array (also known as a RAID drive group) is a group of multiple physical drives that uses a certain
common method to distribute data across the drives. A virtual drive (also known as a virtual disk or logical
drive) is a partition in the drive group that is made up of contiguous data segments on the drives. The virtual
drive is presented to the host operating system as a physical disk that can be partitioned to create OS logical
drives or volumes.
Detailed information about RAID concepts, management tools, and resources is available at the following
Lenovo Press website:
https://lenovopress.com/lp0578-lenovo-raid-introduction
Notes:
• Before setting up RAID for NVMe drives, follow the steps below to enable VROC:
1. Restart the system. Before the operating system starts up, press the key specified in the on-screen
instructions to enter the Setup Utility.
2. Go to System settings ➙ Devices and I/O Ports ➙ Intel VMD and enable the option.
3. Save the changes and reboot the system.
• VROC Intel-SSD-Only supports RAID levels 0 and 1 with Intel NVMe drives.
• VROC Premium requires an activation key and supports RAID levels 0 and 1 with non-Intel NVMe drives.
For more information about acquiring and installing the activation key, see https://fod.lenovo.com/lkms
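After Intel VMD is enabled and a Linux operating system is installed, VROC NVMe arrays can also be created from within the OS using mdadm with IMSM metadata. The following commands are a minimal sketch only; the NVMe device names are examples that must be replaced with the devices in your node, and the same arrays can instead be created from the Setup Utility before the operating system is installed:
mdadm --detail-platform
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mdadm --create /dev/md/Volume0 --level=1 --raid-devices=2 /dev/md/imsm0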
Tool-based deployment
• Multi-server
Available tools:
– Lenovo XClarity Administrator
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/compute_node_image_deployment.html
– Lenovo XClarity Essentials OneCLI
https://pubs.lenovo.com/lxce-onecli/onecli_r_uxspi_proxy_tool
– Lenovo XClarity Integrator deployment pack for SCCM (for Windows operating system only)
https://pubs.lenovo.com/lxci-deploypack-sccm/dpsccm_c_endtoend_deploy_scenario
• Single-server
Available tools:
– Lenovo XClarity Provisioning Manager
“OS Installation” section in the LXPM documentation compatible with your server at https://
pubs.lenovo.com/lxpm-overview/
– Lenovo XClarity Essentials OneCLI
https://pubs.lenovo.com/lxce-onecli/onecli_r_uxspi_proxy_tool
– Lenovo XClarity Integrator deployment pack for SCCM (for Windows operating system only)
https://pubs.lenovo.com/lxci-deploypack-sccm/dpsccm_c_endtoend_deploy_scenario
Manual deployment
If you cannot access the above tools, follow the instructions below, download the corresponding OS
Installation Guide, and deploy the operating system manually by referring to the guide.
1. Go to https://datacentersupport.lenovo.com/solutions/server-os.
2. Select an operating system from the navigation pane and click Resources.
3. Locate the “OS Install Guides” area and click the installation instructions. Then, follow the instructions to
complete the operating system deployment task.
Make sure that you create backups for the following server components:
• Management processor
You can back up the management processor configuration through the Lenovo XClarity Controller
interface. For details about backing up the management processor configuration, see:
“Backing up the BMC configuration” section in the XCC documentation compatible with your solution at
https://pubs.lenovo.com/lxcc-overview/.
Alternatively, you can use the save command from Lenovo XClarity Essentials OneCLI to create a backup
of all configuration settings. For more information about the save command, see:
https://pubs.lenovo.com/lxce-onecli/onecli_r_save_command
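As an illustration only, a full out-of-band backup of the XClarity Controller settings to a local file might look similar to the following command; the file name is just an example, the option names should be verified against your OneCLI release, and the placeholders must be replaced with your own values:
onecli config save --file xcc_backup_config.txt --bmc <xcc_user_id>:<xcc_password>@<xcc_external_ip>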
• Operating system
Use your backup methods to back up the operating system and user data for the solution.
Where:
[access_method]
The access method that you select to use from the following methods:
Where:
xcc_user_id
The BMC/IMM/XCC account name (1 of 12 accounts). The default value is USERID.
xcc_password
The BMC/IMM/XCC account password (1 of 12 accounts).
Example command is as follows:
onecli config createuuid SYSTEM_PROD_DATA.SysInfoUUID --bmc-username <xcc_user_id> --bmc-password <xcc_password>
– Online KCS access (unauthenticated and user restricted):
You do not need to specify a value for access_method when you use this access method.
Example command is as follows:
onecli config createuuid SYSTEM_PROD_DATA.SysInfoUUID
Note: The KCS access method uses the IPMI/KCS interface, which requires that the IPMI
driver be installed.
– Remote LAN access, type the command:
[--bmc <xcc_user_id>:<xcc_password>@<xcc_external_ip>]
Where:
xcc_external_ip
The BMC/IMM/XCC external IP address. There is no default value. This parameter is
required.
xcc_user_id
The BMC/IMM/XCC account name (1 of 12 accounts). The default value is USERID.
xcc_password
The BMC/IMM/XCC account password (1 of 12 accounts).
Note: BMC, IMM, or XCC external IP address, account name, and password are all valid for
this command.
Example command is as follows:
onecli config createuuid SYSTEM_PROD_DATA.SysInfoUUID --bmc <xcc_user_id>:<xcc_password>@<xcc_external_ip>
4. Restart the Lenovo XClarity Controller.
5. Restart the server.
Where:
<asset_tag>
The server asset tag number. Type aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, where
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa is the asset tag number.
[access_method]
The access method that you select to use from the following methods:
xcc_user_id
The BMC/IMM/XCC account name (1 of 12 accounts). The default value is USERID.
xcc_password
The BMC/IMM/XCC account password (1 of 12 accounts).
Example command is as follows:
onecli config set SYSTEM_PROD_DATA.SysEncloseAssetTag <asset_tag> --bmc-username <xcc_user_id> --bmc-password <xcc_password>
Note: The KCS access method uses the IPMI/KCS interface, which requires that the IPMI
driver be installed.
– Remote LAN access, type the command:
[--bmc <xcc_user_id>:<xcc_password>@<xcc_external_ip>]
Where:
xcc_external_ip
The BMC/IMM/XCC IP address. There is no default value. This parameter is required.
xcc_user_id
The BMC/IMM/XCC account (1 of 12 accounts). The default value is USERID.
xcc_password
The BMC/IMM/XCC account password (1 of 12 accounts).
Note: BMC, IMM, or XCC internal LAN/USB IP address, account name, and password are all
valid for this command.
Example command is as follows:
onecli config set SYSTEM_PROD_DATA.SysEncloseAssetTag <asset_tag> --bmc <xcc_user_id>:<xcc_password>@<xcc_external_ip>
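To verify the result, you can optionally list the system product data afterward. This is a sketch only; it assumes that your OneCLI release supports showing the SYSTEM_PROD_DATA group:
onecli config show SYSTEM_PROD_DATA --bmc <xcc_user_id>:<xcc_password>@<xcc_external_ip>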
Chapter 5. Resolving installation issues
Use this information to resolve issues that you might have when setting up your system.
Use the information in this section to diagnose and resolve problems that you might encounter during the
initial installation and setup of your server.
• “Solution does not power on” on page 97
• “The solution immediately displays the POST Event Viewer when it is turned on” on page 98
• “Embedded hypervisor is not in the boot list” on page 98
• “Solution cannot recognize a drive” on page 98
• “Displayed system memory is less than installed physical memory” on page 99
• “A Lenovo optional device that was just installed does not work.” on page 100
• “Voltage planar fault is displayed in the event log” on page 100
Solution does not power on
Note: The power-control button will not function until approximately 5 to 10 seconds after the solution has
been connected to power.
1. Make sure that the power-control button is working correctly:
a. Disconnect the solution power cords.
b. Reconnect the power cords.
c. (Trained technician only) Reseat the operator information panel cable, and then repeat steps 1a and
1b.
• (Trained technician only) If the solution starts, reseat the operator information panel. If the problem
remains, replace the operator information panel.
• If the solution does not start, bypass the power-control button by using the force power-on
jumper. If the solution starts, reseat the operator information panel. If the problem remains,
replace the operator information panel.
2. Make sure that the reset button is working correctly:
a. Disconnect the solution power cords.
b. Reconnect the power cords.
c. (Trained technician only) Reseat the operator information panel cable, and then repeat steps 2a and
2b.
• (Trained technician only) If the solution starts, replace the operator information panel.
• If the solution does not start, go to step 3.
3. Make sure that both power supplies installed in the solution are of the same type. Mixing different power
supplies in the solution will cause a system error (the system-error LED on the front panel turns on).
4. Make sure that:
• The power cords are correctly connected to the solution and to a working electrical outlet.
• The type of memory that is installed is correct.
• The DIMMs are fully seated.
• The LEDs on the power supply do not indicate a problem.
• The processors are installed in the correct sequence.
Notes: To collect FFDC logs, you can perform one of the following actions:
• Insert a USB storage device to the USB connector on SMM2 and then press the USB port service
mode button to collect FFDC logs. See “System Management Module 2 (SMM2)” on page 25 for the
location of the connector and button.
• Log in to the SMM2 WebGUI and click the Capture button of FFDC in the Management Module
section under Enclosure Rear Overview (see “Enclosure Rear Overview” in the System Management
Module 2 User Guide at https://thinksystem.lenovofiles.com/help/topic/mgt_tools_smm2/c_chassis_rear_
overview.html?cp=3_4_2_2_0_1).
The solution immediately displays the POST Event Viewer when it is turned on
Complete the following procedure to solve the problem.
1. Correct any errors that are indicated by the light path diagnostics LEDs.
2. Make sure that the solution supports all the processors and that the processors match in speed and
cache size.
You can view processor details from system setup.
To determine if the processor is supported for the solution, see https://serverproven.lenovo.com/.
3. (Trained technician only) Make sure that processor 1 is seated correctly.
4. (Trained technician only) Remove processor 2 and restart the solution.
5. Replace the following components one at a time, in the order shown, restarting the solution each time:
a. (Trained technician only) Processor
b. (Trained technician only) System board
Solution cannot recognize a drive
1. Observe the associated yellow drive status LED. If the LED is lit, it indicates a drive fault.
2. If the LED is lit, remove the drive from the bay, wait 45 seconds, and reinsert the drive, making sure that
the drive assembly connects to the drive backplane.
3. Observe the associated green drive activity LED and the yellow status LED:
• If the green activity LED is flashing and the yellow status LED is not lit, the drive is recognized by the
controller and is working correctly. Run the diagnostics tests for the hard disk drives. When you start
a solution and press the key specified in the on-screen instructions, the Lenovo XClarity Provisioning
Manager interface is displayed by default. You can perform hard drive diagnostics from this interface.
From the Diagnostic page, click Run Diagnostic ➙ Disk Drive Test.
• If the green activity LED is flashing and the yellow status LED is flashing slowly, the drive is
recognized by the controller and is rebuilding.
• If neither LED is lit or flashing, check the drive backplane.
• If the green activity LED is flashing and the yellow status LED is lit, replace the drive. If the activity of
the LEDs remains the same, continue with the drive problem troubleshooting steps; if the activity of the
LEDs changes, return to step 1.
4. Make sure that the drive backplane is correctly seated. When it is correctly seated, the drive assemblies
correctly connect to the backplane without bowing or causing movement of the backplane.
5. Reseat the backplane power cable and repeat steps 1 through 3.
6. Reseat the backplane signal cable and repeat steps 1 through 3.
7. Suspect the backplane signal cable or the backplane:
• Replace the affected backplane signal cable.
• Replace the affected backplane.
8. Run the diagnostics tests for the SAS/SATA adapter and hard disk drives. When you start a solution and
press the key according to the on-screen instructions, the LXPM interface is displayed by default. (For
more information, see the “Startup” section in the LXPM documentation compatible with your solution at
https://pubs.lenovo.com/lxpm-overview/.) You can perform hard drive diagnostics from this interface.
From the Diagnostic page, click Run Diagnostic ➙ HDD test/Disk Drive Test.
Depending on the LXPM version, you may see HDD test or Disk Drive Test.
Based on those tests:
• If the adapter passes the test but the drives are not recognized, replace the backplane signal cable
and run the tests again.
• Replace the backplane.
• If the adapter fails the test, disconnect the backplane signal cable from the adapter and run the tests
again.
• If the adapter fails the test, replace the adapter.
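If an operating system is already installed, you can additionally confirm whether the OS itself sees the drive. The following commands are a sketch that assumes a Linux operating system with the nvme-cli package installed; note that NVMe drives behind Intel VMD appear only after the VMD driver has loaded:
lsblk
nvme list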
Displayed system memory is less than installed physical memory
Note: Each time you install or remove a memory module, you must disconnect the solution from the power
source; then, wait 10 seconds before restarting the solution.
1. Make sure that:
• No error LEDs are lit on the operator information panel.
• No memory module error LEDs are lit on the system board.
• Memory mirrored channel does not account for the discrepancy.
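If an operating system is available, you can also compare the memory reported by the firmware with what the OS sees. This is a sketch that assumes a Linux operating system with the dmidecode utility installed and root privileges:
dmidecode -t memory | grep -E "Size|Locator"
free -h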
A Lenovo optional device that was just installed does not work.
1. Check the Lenovo XClarity Controller event log for any events associated with the device.
2. Make sure that the following conditions are met:
• The device is installed in the correct port.
• The device is designed for the solution (see https://serverproven.lenovo.com/).
• You followed the installation instructions that came with the device, and the device is installed
correctly.
• You have not loosened any other installed devices or cables.
• You updated the configuration information in the Setup utility. Whenever memory or any other device
is changed, you must update the configuration.
3. Reseat the device that you just installed.
4. Replace the device that you just installed.
Voltage planar fault is displayed in the event log
1. Revert the system to the minimum configuration. See “Enclosure specifications” on page 3 for the
minimally required number of processors and DIMMs.
2. Restart the system.
• If the system restarts, add each of the items that you removed one at a time, restarting the system
each time, until the error occurs. Replace the item for which the error occurs.
• If the system does not restart, suspect the system board.
Appendix A. Getting help and technical assistance
If you need help, service, or technical assistance or just want more information about Lenovo products, you
will find a wide variety of sources available from Lenovo to assist you.
On the World Wide Web, up-to-date information about Lenovo systems, optional devices, services, and
support are available at:
http://datacentersupport.lenovo.com
You can find the product documentation for your ThinkSystem products at https://pubs.lenovo.com/
You can take these steps to try to solve the problem yourself:
• Check all cables to make sure that they are connected.
• Check the power switches to make sure that the system and any optional devices are turned on.
• Check for updated software, firmware, and operating-system device drivers for your Lenovo product. The
Lenovo Warranty terms and conditions state that you, the owner of the Lenovo product, are responsible
for maintaining and updating all software and firmware for the product (unless it is covered by an
additional maintenance contract). Your service technician will request that you upgrade your software and
firmware if the problem has a documented solution within a software upgrade.
• If you have installed new hardware or software in your environment, check https://
serverproven.lenovo.com/ to make sure that the hardware and software are supported by your product.
• Go to http://datacentersupport.lenovo.com and check for information to help you solve the problem.
– Check the Lenovo forums at https://forums.lenovo.com/t5/Datacenter-Systems/ct-p/sv_eg to see if
someone else has encountered a similar problem.
Gather the following information to provide to the service technician. This data will help the service
technician quickly provide a solution to your problem and ensure that you receive the level of service for
which you might have contracted.
Contacting Support
You can contact Support to obtain help for your issue.
You can receive hardware service through a Lenovo Authorized Service Provider. To locate a service
provider authorized by Lenovo to provide warranty service, go to https://datacentersupport.lenovo.com/
serviceprovider and use filter searching for different countries. For Lenovo support telephone numbers, see
https://datacentersupport.lenovo.com/supportphonelist for your region support details.
C
cable routing
  15mm 2.5-inch NVMe drive backplane 30
  7mm 2.5-inch SATA/NVMe drive backplane 30
  power distribution boards and fans 31
cable routing, 15mm 2.5-inch NVMe drive backplane 30
cable routing, 7mm 2.5-inch SATA/NVMe drive backplane 30
cable the solution 83
check log LED 15
collecting service data 102
Common installation issues 97
compute node 6, 35
  installing 78
  removing 48
Configuration - ThinkSystem DA240 Enclosure and ThinkSystem SD630 V2 Compute Node 85
configure the firmware 89
connector
  USB 15
connectors
  Ethernet 24
  front of enclosure 15
  front of solution 15
  internal 27
  on the rear of the enclosure 24
  power supply 24
  rear 24
  rear of power supply 26
  rear of SMM2 25
  rear of solution 24
  USB 24
  video 24
connectors, internal system board 27
contamination, particulate and gaseous 7
controls and LEDs
  on the node operator panel 16
CPU
  option install 70
creating a personalized support web page 101
custom support web page 101

E
enclosure 3, 33
  install rail 82
enclosure rear view 24
enclosure, front view 15
Ethernet 24
  link status LED 24
Ethernet activity
  LED 16, 24
Ethernet connector 24
External
  LCD diagnostics handset 18
external LCD diagnostics handset
  install 80

F
features 1
filler, node bay 78
front
  air baffle 76
  installing 76
front view
  connectors 15
  LED location 15
front view of the enclosure 15
front view of the solution 15

G
gaseous contamination 7
Getting help 101
guidelines
  options installation 40
  system reliability 42

P
power on the compute node 83
power supply, rear view 26
power-control button 15
power-on LED 16
presence detection button 16
processor
  option install 70
processor-heat-sink module
  option install 70

R
rear view 24
  connectors 24–26
  LED location 24–26
  of the enclosure 24
rear view of the power supply 26
rear view of the SMM2 25
rear view of the solution 24
remove
  drive cage assembly 55
  hot-swap drive 53
  M.2 backplane 52
remove node front cover 49
removing
  air baffle 51
  compute node 48
  front air baffle 51
  middle air baffle 51
  PCIe adapter 57
  PCIe riser assembly 56
replacing
  air baffles 51, 75
  node 51, 75
reset button 15
retainer on M.2 backplane
  adjustment 60

S
safety inspection checklist 41
SD630 V2 compute node 6
service and support
  before you call 101
  hardware 103
  software 103
service data 102
SMM2, rear view 25
software 13
software service and support telephone numbers 103
solution setup 39
solution setup checklist 39
solution, front view 15
solution, rear view 24
specifications 3, 6
static-sensitive devices
  handling 42
support web page, custom 101
system
  error LED front 16
  information LED 16
  locator LED, front 16
system board
  internal connectors 27
  layout 27
system board internal connectors 27
system board layout 27
System configuration - ThinkSystem DA240 Enclosure and ThinkSystem SD630 V2 Compute Node 85
system reliability guidelines 42
System-board
  switches 28
system-error LED 15

T
telephone numbers 103
ThinkSystem DA240 Enclosure and ThinkSystem SD630 V2 Compute Node
  Type 7D1J and 7D1K 1

U
update the firmware 86
updating
  asset tag 95
  Universal Unique Identifier (UUID) 93
Update the Vital Product Data (VPD) 93
USB
  connector 15, 24
USB 3.0 Console Breakout Cable 31, 80

V
validate solution setup 84
video connector
  rear 24

W
working inside the solution
  power on 42