ESXi Arm Fling
2. Supported platforms
The Fling launches with support for platforms across a wide range of footprints and use cases, spanning from servers and datacenters to single-board
computers at the far edge.
3. I/O options
3.1. Supported storage
iSCSI LUNs, NVMe and SATA drives are supported, as is USB storage. On some platforms, like the Raspberry Pi 4B, USB storage and iSCSI are the only
options.
If you insist on using a disk that is 128 GiB or smaller, be sure to pass autoPartitionOSDataSize when the installer boot screen prompts for
options (Shift+O).
E.g. autoPartitionOSDataSize=8192 creates an 8 GB VMFS-L partition, and the remaining space is used to create the VMFS datastore.
For more details on changing the default OSData volume, please see this blog post.
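As a sketch, the boot line after pressing Shift+O might look like the following. The exact default boot line varies by release, so treat this as illustrative; only the appended autoPartitionOSDataSize option is the part you type:

```shell
# At the installer boot prompt (Shift+O), append the option to the existing line.
# 8192 MiB gives an 8 GB VMFS-L OSData partition; the rest becomes a VMFS datastore.
> runweasel autoPartitionOSDataSize=8192
```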
3.3.1. Storage
154b f009  PNY Elite 240GB USB 3.0 Portable Solid State Drive  https://smile.amazon.com/gp/product/B01GQPXBQC
152d 0583  JMicron NVMe Bridge/Enclosure  https://www.amazon.com/gp/product/B07HCPCMKN  Must be used with a powered USB hub on the Raspberry Pi 4B
090c 1000  Samsung MUF-256AB  https://www.amazon.com/gp/product/B07D7Q41PM  (So far) only tested with a powered USB hub
0930 6545  Toshiba TransMemory  (So far) only tested with a powered USB hub
13fe 5700  Kingston Technology Company Inc. USB stick from Microcenter  DO NOT USE, I/O errors when formatting
3.3.2. Networking
Raspberry Pi 4B note: For performance reasons, we recommend using the on-board gigabit ethernet port for ESXi host networking.
0bda 8153  Realtek RTL8153 Gigabit Ethernet Adapter  https://www.amazon.com/gp/product/B01J6583NK, https://www.amazon.com/gp/product/B00BBD7NFU, https://www.amazon.com/gp/product/B01KA0UR3O, https://www.amazon.com/gp/product/B01M7PL2WP  Not recommended with a stock power supply on the Raspberry Pi 4B
Note: USB RTL8153 NICs incorrectly show up in ESXi-Arm as 100Mbit. Actual speeds can be faster, but may be constrained further when used with the
Raspberry Pi due to hardware/software limitations.
3.3.3. Keyboards
Generally, any USB keyboard should work.
4. Preparation
Follow the hardware-specific guides around configuring the system.
4.1. iSCSI
This is a non-exhaustive guide to using iSCSI and installing ESXi to an iSCSI LUN.
4.1.1.1. QNAP
Note: QNAP’s LUNs start at 0.
The most important item here is the IQN. For the sake of this example, authentication is left disabled.
4.1.1.2. Synology
Synology tells you what number to use for your created volumes. If you have multiple LUNs in the same target, you need to match the value in the Number
column.
Start at the main UEFI setup page. On the Pi, you can reach this screen by mashing the ESC key. Use arrow keys to select Device Manager.
Select the NIC that you will use for the iSCSI boot attempt. On the Pi, there’s only one on-board NIC, so just press ENTER.
Now, navigate to the iSCSI Mode field, press ENTER, and select Enabled, pressing ENTER to complete selection. For the sake of this example, use IPv4.
Navigate to Enable DHCP and press SPACEBAR to enable.
This is the most important info: enter the IQN correctly under Target Name. If you mistype it here, don't correct it in place, as the correction won't
"stick"; exit out and re-enter the entire form (it's a TianoCore bug). You also need the server connection info and the LUN ID.
Press ENTER. You should see your added iSCSI boot attempt listed.
Exit out (ESC) all the way to the main setup screen and select Reset.
Press ENTER. Your Pi will reboot. Hit ESC to enter UEFI setup again. It will take a bit of time (it's connecting to the iSCSI target). Select Boot Manager.
Press ENTER. You should see the iSCSI target listed here.
Of course, now you can boot the ESXi installer (e.g. from a USB drive).
Important: if you made any errors in the config, delete and re-create the attempt. There’s a UEFI bug where the attempt configuration won’t be updated.
As soon as the problem is fixed (i.e. NIC cable is back), the boot option will re-appear, even at the same spot/ordering as it was before.
Note: if you got the wrong LUN number, the entry may appear but be non-functional.
Also, if you configure iSCSI with DHCP, note that the boot entry will change if the DHCP offer (IP address) changes. This means that the boot order will
change too: the old entry is removed and the new entry is added at the bottom of the list. Beware!
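After installation, additional iSCSI targets can also be configured from the ESXi Shell with esxcli. A hedged sketch; the adapter name vmhba64 and the target address are placeholders for your environment:

```shell
# Enable the software iSCSI initiator (no-op if already enabled)
esxcli iscsi software set --enabled=true
# List iSCSI adapters to find the software initiator's name (e.g. vmhba64)
esxcli iscsi adapter list
# Add a dynamic discovery (Send Targets) address, then rescan for LUNs
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.10:3260
esxcli storage core adapter rescan --adapter=vmhba64
```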
5. Create ESXi Installer USB Key
For this you'll need the ESXi-Arm Fling ISO, of course.
You can ignore the warning about the missing partition table, it's an ISO.
5.2. On macOS
Identify the disk using the following command and make note of the disk path (e.g. /dev/diskX), and make sure any existing partitions are unmounted.
$ diskutil list
Raw-write the ISO file to the drive, using the disk identified above. Note the use of the raw device (/dev/rdisk4, not /dev/disk4).
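For example, assuming the ISO is named ESXi-Arm.iso and the target is disk4 (substitute your own identifiers; this destroys all data on that disk):

```shell
# Unmount any mounted partitions, write the ISO to the raw device, then eject
diskutil unmountDisk /dev/disk4
sudo dd if=ESXi-Arm.iso of=/dev/rdisk4 bs=1m
diskutil eject /dev/disk4
```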
6. Installing ESXi-Arm
Make sure to also follow the notes in the hardware-specific guides for installation and post-installation steps.
Fling on Raspberry Pi
Fling on Ampere eMAG 8180-based Servers
Fling on Ampere Altra-based Servers
Fling on Arm Neoverse N1 SDP
Fling on SolidRun HoneyComb LX2K
Fling on NXP LS1046A FRWY
Fling on NXP LS1046A RDB
Fling on Jetson Xavier AGX Developer Kit
Fling on Jetson Xavier NX Developer Kit
After accepting the EULA, the installer will list the available storage media for installation. Use the arrow keys to select the drive to install to.
Select your keyboard layout:
Choose a password:
Note: If you're using the Raspberry Pi USB keyboard, F11 is the combination of Fn and F1.
Installation should be complete. Press ENTER to reboot.
Like a system with a video console, ESXi will boot up to a DCUI (console UI) screen, although this will look a bit different:
ESXi actually supports several different "roles" for the serial port. These roles are like virtual terminals and can be switched between using key combos:
There are a few settings you can change directly from DCUI (console UI).
Press ENTER.
You can toggle ESXi Shell or SSH support by selecting the entry and pressing ENTER.
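Equivalently, once you have shell access, SSH and the ESXi Shell can be toggled with vim-cmd; a sketch:

```shell
# Enable and start the SSH service
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
# Enable and start the local ESXi Shell
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
```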
4. Select storage Standard: <choose from the available datastores> (Note: If you're using the Pi, installing the ESXi-Arm bits on a USB stick
smaller than 128GB, and you do not see an available datastore at this step, you may have neglected to pass autoPartitionOSDataSize
during the initial boot of the installer. Unfortunately, the only way to correct this is to redo the installation, this time appending the option
along with the size. See the Pi installation guide Section 4 for details.)
5. Customize settings
a. CPU: <choose from available list>
b. Memory: <within available limit>
c. Hard disk: <within available limit>
d. USB controller: <default> (USB 3.1)
e. Network Adapter: <default> (E1000e)
f. CD/DVD Drive: Choose "Datastore ISO file", then browse the datastore to upload or find the required ISO
g. Video Card: <default>
Additional hardware can be added with "Add hard disk", "Add network adapter" and "Add other device" options.
Note: The USB controller is required to use the keyboard and mouse to interact with the Virtual Machine.
6. Review settings and click Finish
Virtual Hardware
CPU 4
Memory 4 GB
Hard disk 16 GB
SATA Controller
These operating systems support both the UEFI firmware in the virtual machine and the DT (device tree) method of describing virtual machine hardware.
Many guest operating systems will also support booting with ACPI.
Raspberry Pi Yes
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-D19EA1CB-5222-49F9-A002-4F8692B92D63.html
Note: Do not mix systems in the same cluster. E.g. do not mix eMAGs and Pis in the same cluster. Also, do not mix x86 and Arm systems in the same
cluster.
9.2. Pre-requisites
1. It is recommended that a separate NIC be configured for vMotion and FT logging to ensure that sufficient bandwidth is available.
a. From the vSphere Web Client, navigate to the host > Configure > Networking > VMkernel adapters
b. Choose the vmkernel port group to be configured for vMotion and select Edit
c. Enable vMotion from the list of available services
2. When you migrate virtual machines with vMotion and choose to change only the compute host, the VM needs to be on shared storage
so that it is accessible to both source and target hosts. Shared storage can be configured with a SAN, or implemented using iSCSI or
NAS.
10.2. Pre-requisites
1. Download the specific FDM VIB from the ESXi-Arm Fling site for your version of vCenter Server. At launch of the Fling, vCenter Server 7.0d
(Build 16749653) and vCenter Server 7.0c (Build 16620007) are supported.
Step 1 - Upload the FDM VIB to ESXi host via SCP or vSphere Datastore Browser
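The uploaded VIB can then be installed from the ESXi Shell; a sketch, assuming the VIB was uploaded to /vmfs/volumes/datastore1 (adjust the path and filename to match your upload):

```shell
# Install the FDM VIB on the ESXi-Arm host
esxcli software vib install -v /vmfs/volumes/datastore1/vmware-fdm.vib
```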
Step 4 - If you wish to get rid of the "The number of vSphere HA heartbeat datastores for this host is 1, which is less than required: 2" message, add
the Advanced Setting das.ignoreInsufficientHbDatastore = true, then right-click one of the ESXi hosts and select the "Reconfigure for
vSphere HA" operation for the message to go away
11. Enabling vSphere Fault Tolerance
11.1. Tested platforms
Platform Supported
NXP FRWY No
11.2. Pre-requisites
0. Suggested reading - General guide to FT on virtual machines: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.avail.doc/GUID-7525F8DD-9B8F-4089-B020-BAA4AC6509D2.html
1. It is recommended that a separate NIC be configured for vMotion and FT logging to ensure that sufficient bandwidth is available.
a. From the vSphere Web Client, navigate to the host > Configure > Networking > VMkernel adapters
b. Choose the vmkernel port group to be configured for FT and select Edit
c. Enable vMotion and Fault Tolerance logging from the list of available services (Yes, you need both)
2. vSphere HA should be enabled for the cluster. See the Enabling vSphere HA section for detailed instructions
Step 1 - Browse to the VM in the vSphere Web Client, right-click, and select Fault Tolerance > Turn On Fault Tolerance
Here is an example of compiling the latest Open VM Tools 11.1.5 for Ubuntu 20.04 AARCH64. For VMware Photon OS AARCH64, you can refer to this blog
post for instructions.
apt update
apt install -y automake-1.15 pkg-config libtool libmspack-dev libglib2.0-dev \
libpam0g-dev libssl-dev libxml2-dev libxmlsec1-dev libx11-dev libxext-dev \
libxinerama-dev libxi-dev libxrender-dev libxrandr-dev libgtk2.0-dev \
libgtk-3-dev libgtkmm-3.0-dev
Step 3 - Run the following commands to build and install Open VM Tools:
autoreconf -i
./configure
sudo make
sudo make install
sudo ldconfig
Step 4 - We need to create a systemd unit file so we can enable and start the Open VM Tools daemon upon startup. Run the following command to create the
vmtoolsd.service file:
cat > /etc/systemd/system/vmtoolsd.service << EOF
[Unit]
Description=Open VM Tools Daemon

[Service]
ExecStart=/usr/local/bin/vmtoolsd
Restart=always
RestartSec=1sec

[Install]
WantedBy=multi-user.target
EOF
Step 5 - Enable and Start Open VM Tools Daemon and verify using either the ESXi Host Client UI or vSphere UI that Open VM Tools is now running
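On the guest, that amounts to something like:

```shell
# Reload systemd, then enable the unit and start it immediately
sudo systemctl daemon-reload
sudo systemctl enable --now vmtoolsd
# Confirm the daemon is running
systemctl status vmtoolsd --no-pager
```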
14. Troubleshooting
14.1. Support
If you are running into installation or setup issues with the ESXi-Arm Fling, please use the Comments and/or Bug section of the ESXi-Arm Fling website.
In addition, you can also engage with the ESXi-Arm team and community on Slack at #esxi-arm-fling on VMware {code}
There are two methods to generate a support bundle: using the ESXi Host Client UI or the ESXi Shell.
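From the ESXi Shell, a bundle can be generated with the vm-support utility; for example, writing the bundle to a datastore (the path is illustrative):

```shell
# Write the support bundle to a datastore with enough free space
vm-support -w /vmfs/volumes/datastore1
```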
15.1.2. Virtualization
Workaround: Only use virtual USB3 controller (which is the default when creating a Virtual Machine).
15.2. vCenter
15.2.1. A general system error occurred: Unable to push signed certificate to host
The warning message is shown in the vSphere UI when adding an ESXi-Arm host to vCenter Server. This occurs when there is a time skew between the ESXi-Arm
host and vCenter Server, and is exacerbated by some systems (e.g. the Raspberry Pi) not having a battery-backed RTC.
Workaround: Ensure all systems sync their time from the same source. For detailed instructions on configuring NTP for ESXi-Arm host, please refer to the
"VMware ESXi Host Client" section.
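As a quick check from the ESXi Shell, you can compare the host clock against your time source and confirm the NTP daemon is running (commands assume a standard ESXi shell):

```shell
# Show the host's current UTC time
esxcli system time get
# Check whether the NTP daemon is running
/etc/init.d/ntpd status
```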