VMware vSphere: Install, Configure, Manage: Lecture Manual ESXi 7 and vCenter Server 7
www.vmware.com/education
CONTENTS
2-12 About the Software-Defined Data Center ............................................................27
2-13 vSphere and Cloud Computing .............................................................................29
2-14 About VMware Skyline .........................................................................................31
2-15 VMware Skyline Family ........................................................................................32
2-16 Review of Learner Objectives ...............................................................................34
2-17 Lesson 2: vSphere Virtualization of Resources .....................................................35
2-18 Learner Objectives................................................................................................36
2-19 Virtual Machine: Guest and Consumer of ESXi Host ............................................37
2-20 Physical and Virtual Architecture .........................................................................38
2-21 Physical Resource Sharing ....................................................................................39
2-22 CPU Virtualization.................................................................................................41
2-23 Physical and Virtualized Host Memory Usage ......................................................42
2-24 Physical and Virtual Networking ..........................................................................43
2-25 Physical File Systems and Datastores ...................................................................45
2-26 GPU Virtualization ................................................................................................47
2-27 Review of Learner Objectives ...............................................................................48
2-28 Lesson 3: vSphere User Interfaces .......................................................................49
2-29 Learner Objectives................................................................................................50
2-30 vSphere User Interfaces .......................................................................................51
2-31 About VMware Host Client ...................................................................................52
2-32 About vSphere Client............................................................................................53
2-33 About PowerCLI and ESXCLI .................................................................................54
2-34 Lab 1: Accessing the Lab Environment .................................................................55
2-35 Review of Learner Objectives ...............................................................................56
2-36 Lesson 4: Overview of ESXi ...................................................................................57
2-37 Learner Objectives................................................................................................58
2-38 About ESXi ............................................................................................................59
2-39 Configuring an ESXi Host ......................................................................................61
2-40 Configuring an ESXi Host: Root Access .................................................................62
2-41 Configuring an ESXi Host: Management Network................................................63
2-42 Configuring an ESXi Host: Other Settings .............................................................64
2-43 Controlling Remote Access to an ESXi Host .........................................................65
2-44 Managing User Accounts: Best Practices .............................................................66
2-45 ESXi Host as an NTP Client ....................................................................................67
2-46 Demonstration: Installing and Configuring ESXi Hosts .........................................68
2-47 Lab 2: Configuring an ESXi Host ............................................................................69
2-48 Review of Learner Objectives ...............................................................................70
2-49 Virtual Beans: Data Center ...................................................................................71
2-50 Key Points .............................................................................................................72
3-23 About Virtual Machine Files .................................................................................98
3-24 About VM Virtual Hardware ...............................................................................100
3-25 Virtual Hardware Versions .................................................................................102
3-26 About CPU and Memory.....................................................................................103
3-27 About Virtual Storage .........................................................................................105
3-28 About Thick-Provisioned Virtual Disks ...............................................................107
3-29 About Thin-Provisioned Virtual Disks .................................................................108
3-30 Thick-Provisioned and Thin-Provisioned Disks ...................................................109
3-31 About Virtual Networks ......................................................................................110
3-32 About Virtual Network Adapters ........................................................................111
3-33 Other Virtual Devices .........................................................................................114
3-34 About the Virtual Machine Console ...................................................................115
3-35 Lab 5: Adding Virtual Hardware .........................................................................116
3-36 Review of Learner Objectives .............................................................................117
3-37 Lesson 3: Introduction to Containers .................................................................118
3-38 Learner Objectives..............................................................................................119
3-39 Traditional Application Development ................................................................120
3-40 Modern Application Development .....................................................................122
3-41 Benefits of Microservices and Containerization ................................................123
3-42 Container Terminology .......................................................................................124
3-43 About Containers................................................................................................125
3-44 Rise of Containers...............................................................................................126
3-45 About Container Hosts .......................................................................................127
3-46 Containers at Runtime........................................................................................128
3-47 About Container Engines ....................................................................................129
3-48 Virtual Machines and Containers (1) ..................................................................130
3-49 Virtual Machines and Containers (2) ..................................................................131
3-50 About Kubernetes ..............................................................................................132
3-51 Challenges of Running Kubernetes in Production ..............................................134
3-52 Architecting with Common Application Requirements......................................135
3-53 Review of Learner Objectives .............................................................................136
3-54 Virtual Beans: Virtualizing Workloads ................................................................137
3-55 Key Points ...........................................................................................................138
4-29 Lesson 3: vSphere Licensing ...............................................................................168
4-30 Learner Objectives..............................................................................................169
4-31 vSphere Licensing Overview ...............................................................................170
4-32 vSphere License Service .....................................................................................171
4-33 Adding License Keys to vCenter Server ..............................................................172
4-34 Assigning a License to a vSphere Component ....................................................173
4-35 Viewing Licensed Features .................................................................................174
4-36 Lab 6: Adding vSphere Licenses..........................................................................175
4-37 Review of Learner Objectives .............................................................................176
4-38 Lesson 4: Managing the vCenter Server Inventory ............................................177
4-39 Learner Objectives..............................................................................................178
4-40 vSphere Client Shortcuts Page ...........................................................................179
4-41 Using the Navigation Pane .................................................................................180
4-42 vCenter Server Views for Hosts, Clusters, VMs, and Templates ........................181
4-43 vCenter Server Views for Storage and Networks ...............................................182
4-44 Viewing Object Information ...............................................................................183
4-45 About Data Center Objects.................................................................................184
4-46 Organizing Inventory Objects into Folders .........................................................185
4-47 Adding a Data Center and Organizational Objects to vCenter Server................187
4-48 Adding ESXi Hosts to vCenter Server ..................................................................188
4-49 Creating Custom Tags for Inventory Objects......................................................189
4-50 Labs .....................................................................................................................190
4-51 Lab 7: Creating and Managing the vCenter Server Inventory ............................191
4-52 Lab 8: Configuring Active Directory: Joining a Domain ......................................192
4-53 Review of Learner Objectives .............................................................................193
4-54 Lesson 5: vCenter Server Roles and Permissions ...............................................194
4-55 Learner Objectives..............................................................................................195
4-56 About vCenter Server Permissions .....................................................................196
4-57 About Roles ........................................................................................................197
4-58 About Objects .....................................................................................................199
4-59 Adding Permissions to the vCenter Server Inventory ........................................200
4-60 Viewing Roles and User Assignments.................................................................201
4-61 Applying Permissions: Scenario 1 .......................................................................202
4-62 Applying Permissions: Scenario 2 .......................................................................203
4-63 Activity: Applying Group Permissions (1) ...........................................................204
4-64 Activity: Applying Group Permissions (2) ...........................................................205
4-65 Applying Permissions: Scenario 3 .......................................................................206
4-66 Applying Permissions: Scenario 4 .......................................................................207
4-67 Creating a Role ...................................................................................................208
4-68 About Global Permissions ..................................................................................209
4-69 Labs .....................................................................................................................210
4-70 Lab 9: Configuring Active Directory: Adding an Identity Source ........................211
4-71 Lab 10: Users, Groups, and Permissions ............................................................212
4-72 Review of Learner Objectives .............................................................................213
4-73 Lesson 6: Backing Up and Restoring vCenter Server Appliance .........................214
4-74 Learner Objectives..............................................................................................215
4-75 Virtual Beans: vCenter Server Operations..........................................................216
4-76 About vCenter Server Backup and Restore ........................................................217
4-77 Methods for vCenter Server Appliance Backup and Restore .............................218
4-78 File-Based Backup of vCenter Server Appliance .................................................219
4-79 File-Based Restore of vCenter Server Appliance ................................................220
4-80 Scheduling Backups ............................................................................................221
4-81 Viewing the Backup Schedule ............................................................................222
4-82 Demonstration: Backing Up and Restoring a vCenter Server Appliance Instance ..............................................................223
4-83 Review of Learner Objectives .............................................................................224
4-84 Lesson 7: Monitoring vCenter Server and Its Inventory.....................................225
4-85 Learner Objectives..............................................................................................226
4-86 vCenter Server Events ........................................................................................227
4-87 About Log Levels.................................................................................................228
4-88 Setting Log Levels ...............................................................................................229
4-89 Forwarding vCenter Server Appliance Log Files to a Remote Host ....................230
4-90 vCenter Server Database Health ........................................................................231
4-91 Monitoring vCenter Server Appliance ................................................................232
4-92 Monitoring vCenter Server Appliance Services ..................................................233
4-93 Monthly Patch Updates for vCenter Server Appliance ......................................234
4-94 Review of Learner Objectives .............................................................................235
4-95 Lesson 8: vCenter Server High Availability .........................................................236
4-96 Learner Objectives..............................................................................................237
4-97 Importance of Keeping vCenter Server Highly Available ...................................238
4-98 About vCenter Server High Availability ..............................................................239
4-99 Scenario: Active Node Failure ............................................................................240
4-100 Scenario: Passive Node Failure ...........................................................................241
4-101 Scenario: Witness Node Failure .........................................................................242
4-102 Benefits of vCenter Server High Availability .......................................................243
4-103 vCenter Server High Availability Requirements .................................................244
4-104 Demonstration: Configuring vCenter Server High Availability ...........................245
4-105 Review of Learner Objectives .............................................................................246
4-106 Virtual Beans: vCenter Server Maintenance and Operations ............................247
4-107 Key Points ...........................................................................................................248
5-13 Viewing the Configuration of Standard Switches ...............................................262
5-14 Network Adapter Properties ..............................................................................263
5-15 Distributed Switch Architecture .........................................................................264
5-16 Standard and Distributed Switches: Shared Features ........................................265
5-17 Additional Features of Distributed Switches ......................................................266
5-18 Lab 11: Using Standard Switches........................................................................267
5-19 Review of Learner Objectives .............................................................................268
5-20 Lesson 2: Configuring Standard Switch Policies .................................................269
5-21 Learner Objectives..............................................................................................270
5-22 Network Switch and Port Policies ......................................................................271
5-23 Configuring Security Policies ..............................................................................272
5-24 Traffic-Shaping Policies.......................................................................................274
5-25 Configuring Traffic Shaping ................................................................................275
5-26 NIC Teaming and Failover Policies......................................................................277
5-27 Load-Balancing Method: Originating Virtual Port ID ..........................................279
5-28 Load-Balancing Method: Source MAC Hash .......................................................281
5-29 Load-Balancing Method: Source and Destination IP Hash .................................283
5-30 Detecting and Handling Network Failure ...........................................................285
5-31 Physical Network Considerations .......................................................................287
5-32 Review of Learner Objectives .............................................................................288
5-33 Virtual Beans: Networking Requirements ..........................................................289
5-34 Key Points ...........................................................................................................290
6-9 Storage Protocol Overview .................................................................................300
6-10 About VMFS ........................................................................................................302
6-11 About NFS ...........................................................................................................304
6-12 About vSAN.........................................................................................................305
6-13 About vSphere Virtual Volumes .........................................................................306
6-14 About Raw Device Mapping ...............................................................................307
6-15 Physical Storage Considerations.........................................................................308
6-16 Review of Learner Objectives .............................................................................309
6-17 Lesson 2: Fibre Channel Storage ........................................................................310
6-18 Learner Objectives..............................................................................................311
6-19 About Fibre Channel ...........................................................................................312
6-20 Fibre Channel SAN Components ........................................................................313
6-21 Fibre Channel Addressing and Access Control ...................................................315
6-22 Multipathing with Fibre Channel........................................................................317
6-23 FCoE Adapters ....................................................................................................319
6-24 Configuring Software FCoE: Creating VMkernel Ports .......................................320
6-25 Configuring Software FCoE: Activating Software FCoE Adapters .......................321
6-26 Review of Learner Objectives .............................................................................322
6-27 Lesson 3: iSCSI Storage .......................................................................................323
6-28 Learner Objectives..............................................................................................324
6-29 iSCSI Components...............................................................................................325
6-30 iSCSI Addressing .................................................................................................327
6-31 Storage Device Naming Conventions .................................................................329
6-32 iSCSI Adapters.....................................................................................................330
6-33 ESXi Network Configuration for IP Storage ........................................................332
6-34 Activating the Software iSCSI Adapter ...............................................................333
6-35 Discovering iSCSI Targets....................................................................................334
6-36 iSCSI Security: CHAP ...........................................................................................335
6-37 Multipathing with iSCSI Storage .........................................................................337
6-38 Binding VMkernel Ports with the iSCSI Initiator .................................................338
6-39 Lab 12: Accessing iSCSI Storage ..........................................................................339
6-40 Review of Learner Objectives .............................................................................340
6-41 Lesson 4: VMFS Datastores ................................................................................341
6-42 Learner Objectives..............................................................................................342
6-43 Creating a VMFS Datastore ................................................................................343
6-44 Browsing Datastore Contents.............................................................................344
6-45 About VMFS Datastores .....................................................................................345
6-46 Managing Overcommitted Datastores ...............................................................346
6-47 Increasing the Size of VMFS Datastores .............................................................347
6-48 Datastore Maintenance Mode ...........................................................................348
6-49 Deleting or Unmounting a VMFS Datastore .......................................................349
6-50 Multipathing Algorithms ....................................................................................351
6-51 Configuring Storage Load Balancing ...................................................................352
6-52 Lab 13: Managing VMFS Datastores...................................................................354
6-53 Review of Learner Objectives .............................................................................355
6-54 Lesson 5: NFS Datastores ...................................................................................356
6-55 Learner Objectives..............................................................................................357
6-56 NFS Components ................................................................................................358
6-57 NFS 3 and NFS 4.1...............................................................................................359
6-58 NFS Version Compatibility with Other vSphere Technologies ...........................360
6-59 Configuring NFS Datastores................................................................................362
6-60 Configuring ESXi Host Authentication and NFS Kerberos Credentials ...............363
6-61 Configuring the NFS Datastore to Use Kerberos ................................................365
6-62 Unmounting an NFS Datastore...........................................................................366
6-63 Multipathing and NFS Storage ...........................................................................367
6-64 Enabling Multipathing for NFS 4.1......................................................................369
6-65 Lab 14: Accessing NFS Storage ...........................................................................370
6-66 Review of Learner Objectives .............................................................................371
6-67 Lesson 6: vSAN Datastores .................................................................................372
6-68 Learner Objectives..............................................................................................373
6-69 About vSAN Datastores ......................................................................................374
6-70 Disk Groups.........................................................................................................375
6-71 vSAN Hardware Requirements ...........................................................................376
6-72 Viewing the vSAN Datastore Summary ..............................................................378
6-73 Objects in vSAN Datastores ................................................................................379
6-74 VM Storage Policies ............................................................................................380
6-75 Viewing VM Settings for vSAN Information .......................................................381
6-76 Lab 15: Using a vSAN Datastore .........................................................................382
6-77 Review of Learner Objectives .............................................................................383
6-78 Virtual Beans: Storage ........................................................................................384
6-79 Activity: Using vSAN Storage at Virtual Beans (1) ..............................................385
6-80 Activity: Using vSAN Storage at Virtual Beans (2) ..............................................386
6-81 Key Points ...........................................................................................................387
7-20 Review of Learner Objectives .............................................................................408
7-21 Lesson 2: Working with Content Libraries..........................................................409
7-22 Learner Objectives..............................................................................................410
7-23 About Content Libraries .....................................................................................411
7-24 Benefits of Content Libraries ..............................................................................412
7-25 Types of Content Libraries..................................................................................413
7-26 Adding VM Templates to a Content Library .......................................................415
7-27 Deploying VMs from Templates in a Content Library ........................................416
7-28 Lab 17: Using Content Libraries ..........................................................................417
7-29 Review of Learner Objectives .............................................................................418
7-30 Lesson 3: Modifying Virtual Machines ...............................................................419
7-31 Learner Objectives..............................................................................................420
7-32 Modifying Virtual Machine Settings ...................................................................421
7-33 Hot-Pluggable Devices ........................................................................................423
7-34 Dynamically Increasing Virtual Disk Size ............................................................425
7-35 Inflating Thin-Provisioned Disks .........................................................................426
7-36 VM Options: General Settings ............................................................................427
7-37 VM Options: VMware Tools Settings .................................................................428
7-38 VM Options: VM Boot Settings...........................................................................429
7-39 Removing VMs....................................................................................................431
7-40 Lab 18: Modifying Virtual Machines...................................................................432
7-41 Review of Learner Objectives .............................................................................433
7-42 Lesson 4: Migrating VMs with vSphere vMotion ...............................................434
7-43 Learner Objectives..............................................................................................435
7-44 About VM Migration...........................................................................................436
7-45 About vSphere vMotion .....................................................................................437
7-46 Enabling vSphere vMotion .................................................................................438
7-47 vSphere vMotion Migration Workflow ..............................................................439
7-48 VM Requirements for vSphere vMotion Migration ...........................................441
7-49 Host Requirements for vSphere vMotion Migration (1) ....................................442
7-50 Host Requirements for vSphere vMotion Migration (2) ....................................443
7-51 Checking vSphere vMotion Errors ......................................................................444
7-52 Encrypted vSphere vMotion ...............................................................................445
7-53 Cross vCenter Migrations ...................................................................................446
7-54 Cross vCenter Migration Requirements .............................................................447
7-55 Network Checks for Cross vCenter Migrations ..................................................448
7-56 VMkernel Networking Layer and TCP/IP Stacks .................................................449
7-57 vSphere vMotion TCP/IP Stacks .........................................................................451
7-58 Long-Distance vSphere vMotion Migration .......................................................452
7-59 Networking Prerequisites for Long-Distance vSphere vMotion.........................453
7-60 Lab 19: vSphere vMotion Migrations .................................................................454
7-61 Review of Learner Objectives .............................................................................455
7-62 Lesson 5: Enhanced vMotion Compatibility .......................................................456
7-63 Learner Objectives..............................................................................................457
7-64 CPU Constraints on vSphere vMotion Migration ...............................................458
7-65 About Enhanced vMotion Compatibility ............................................................459
7-66 Enhanced vMotion Compatibility Cluster Requirements ...................................461
7-67 Enabling EVC Mode on an Existing Cluster .........................................................462
7-68 Changing the EVC Mode for a Cluster ................................................................463
7-69 Virtual Machine EVC Mode ................................................................................464
7-70 Review of Learner Objectives .............................................................................465
7-71 Lesson 6: Migrating VMs with vSphere Storage vMotion ..................................466
7-72 Learner Objectives..............................................................................................467
7-73 About vSphere Storage vMotion ........................................................................468
7-74 vSphere Storage vMotion In Action ...................................................................469
7-75 Identifying Storage Arrays That Support vSphere Storage APIs - Array Integration ..........................................................471
7-76 vSphere Storage vMotion Guidelines and Limitations .......................................472
7-77 Changing Both Compute Resource and Storage During Migration (1) ..............473
7-78 Changing Both Compute Resource and Storage During Migration (2) ..............474
7-79 Lab 20: vSphere Storage vMotion Migrations ....................................................475
7-80 Review of Learner Objectives .............................................................................476
7-81 Lesson 7: Creating Virtual Machine Snapshots ..................................................477
7-82 Learner Objectives..............................................................................................478
7-83 VM Snapshots .....................................................................................................479
7-84 Taking Snapshots ................................................................................................480
7-85 Types of Snapshots .............................................................................................481
7-86 VM Snapshot Files ..............................................................................................483
7-87 VM Snapshot Files Example (1) ..........................................................................485
7-88 VM Snapshot Files Example (2) ..........................................................................486
7-89 VM Snapshot Files Example (3) ..........................................................................487
7-90 Managing Snapshots ..........................................................................................488
7-91 Deleting VM Snapshots (1) .................................................................................490
7-92 Deleting VM Snapshots (2) .................................................................................491
7-93 Deleting VM Snapshots (3) .................................................................................492
7-94 Deleting All VM Snapshots .................................................................................493
7-95 About Snapshot Consolidation ...........................................................................494
7-96 Discovering When to Consolidate Snapshots .....................................................495
7-97 Consolidating Snapshots ....................................................................................496
7-98 Lab 21: Working with Snapshots ........................................................................497
7-99 Review of Learner Objectives .............................................................................498
7-100 Lesson 8: vSphere Replication and Backup ........................................................499
7-101 Learner Objectives..............................................................................................500
7-102 About vSphere Replication .................................................................................501
7-103 About the vSphere Replication Appliance ..........................................................502
7-104 Replication Functions .........................................................................................504
7-105 Deploying the vSphere Replication Appliance ...................................................505
7-106 Configuring vSphere Replication for a Single VM...............................................506
7-107 Configuring Recovery Point Objective and Point in Time Instances ..................507
7-108 Recovering Replicated VMs ................................................................................508
7-109 Backup and Restore Solution for VMs ................................................................510
7-110 vSphere Storage APIs - Data Protection: Offloaded Backup Processing ............511
7-111 vSphere Storage APIs - Data Protection: Changed-Block Tracking ....................513
7-112 Review of Learner Objectives .............................................................................514
7-113 Activity: Virtual Beans VM Management (1) ......................................................515
7-114 Activity: Virtual Beans VM Management (2) ......................................................516
7-115 Activity: Virtual Beans VM Management (3) ......................................................517
7-116 Key Points ...........................................................................................................518
8-26 Viewing VM Resource Allocation Settings..........................................................547
8-27 Lab 22: Controlling VM Resources .....................................................................548
8-28 Review of Learner Objectives .............................................................................549
8-29 Lesson 3: Resource Monitoring Tools ................................................................550
8-30 Learner Objectives..............................................................................................551
8-31 Performance-Tuning Methodology ....................................................................552
8-32 Resource-Monitoring Tools ................................................................................553
8-33 Guest Operating System Monitoring Tools ........................................................554
8-34 Using Perfmon to Monitor VM Resources .........................................................555
8-35 Using esxtop to Monitor VM Resources .............................................................556
8-36 Monitoring Inventory Objects with Performance Charts ...................................557
8-37 Working with Overview Performance Charts.....................................................558
8-38 Working with Advanced Performance Charts ....................................................559
8-39 Chart Options: Real-Time and Historical ............................................................560
8-40 Chart Types: Bar and Pie ....................................................................................562
8-41 Chart Types: Line ................................................................................................563
8-42 Chart Types: Stacked ..........................................................................................564
8-43 Chart Types: Stacked Per VM .............................................................................565
8-44 Saving Charts ......................................................................................................566
8-45 About Objects and Counters ..............................................................................567
8-46 About Statistics Types ........................................................................................568
8-47 About Rollup .......................................................................................................569
8-48 Review of Learner Objectives .............................................................................571
8-49 Lesson 4: Monitoring Resource Use ...................................................................572
8-50 Learner Objectives..............................................................................................573
8-51 Interpreting Data from Tools..............................................................................574
8-52 CPU-Constrained VMs (1) ...................................................................................575
8-53 CPU-Constrained VMs (2) ...................................................................................577
8-54 Memory-Constrained VMs (1) ............................................................................578
8-55 Memory-Constrained VMs (2) ............................................................................579
8-56 Memory-Constrained Hosts ...............................................................................580
8-57 Disk-Constrained VMs ........................................................................................581
8-58 Monitoring Disk Latency .....................................................................................582
8-59 Network-Constrained VMs .................................................................................583
8-60 Lab 23: Monitoring Virtual Machine Performance ............................................584
8-61 Review of Learner Objectives .............................................................................585
8-62 Lesson 5: Using Alarms .......................................................................................586
8-63 Learner Objectives..............................................................................................587
8-64 About Alarms ......................................................................................................588
8-65 Predefined Alarms (1).........................................................................................589
8-66 Predefined Alarms (2).........................................................................................590
8-67 Creating a Custom Alarm....................................................................................591
8-68 Defining the Alarm Target Type .........................................................................592
8-69 Defining the Alarm Rule: Trigger (1) ...................................................................593
8-70 Defining the Alarm Rule: Trigger (2) ...................................................................594
8-71 Defining the Alarm Rule: Setting the Notification ..............................................595
8-72 Defining the Alarm Reset Rules ..........................................................................596
8-73 Enabling the Alarm .............................................................................................597
8-74 Configuring vCenter Server Notifications ...........................................................598
8-75 Lab 24: Using Alarms ..........................................................................................599
8-76 Review of Learner Objectives .............................................................................600
8-77 Activity: Virtual Beans Resource Monitoring (1) ................................................601
8-78 Activity: Virtual Beans Resource Management and Monitoring (2) ..................602
8-79 Key Points ...........................................................................................................603
9-8 Creating a vSphere Cluster and Enabling Cluster Features ................................612
9-9 Configuring the Cluster Using Quickstart ...........................................................613
9-10 Configuring the Cluster Manually .......................................................................615
9-11 Adding a Host to a Cluster ..................................................................................616
9-12 Viewing Cluster Summary Information ..............................................................617
9-13 Monitoring Cluster Resources ............................................................................618
9-14 Review of Learner Objectives .............................................................................619
9-15 Lesson 2: vSphere DRS........................................................................................620
9-16 Learner Objectives..............................................................................................621
9-17 About vSphere DRS.............................................................................................622
9-18 vSphere DRS: VM Focused..................................................................................623
9-19 About the VM DRS Score ....................................................................................624
9-20 VM DRS Score List...............................................................................................625
9-21 Viewing VM DRS Scores Using Performance Charts (1) .....................................626
9-22 Viewing VM DRS Scores Using Performance Charts (2) .....................................627
9-23 Viewing vSphere DRS Settings ............................................................................628
9-24 vSphere DRS Settings: Automation Level ...........................................................629
9-25 vSphere DRS Settings: Migration Threshold.......................................................630
9-26 vSphere DRS Settings: Predictive DRS ................................................................632
9-27 vSphere DRS Settings: VM Swap File Location ...................................................633
9-28 vSphere DRS Settings: VM Affinity .....................................................................634
9-29 vSphere DRS Settings: DRS Groups.....................................................................635
9-30 vSphere DRS Settings: VM-Host Affinity Rules ...................................................636
9-31 VM-Host Affinity Preferential Rules ...................................................................637
9-32 VM-Host Affinity Required Rules........................................................................638
9-33 vSphere DRS Settings: VM-Level Automation ....................................................639
9-34 vSphere DRS Cluster Requirements ...................................................................640
9-35 Viewing vSphere DRS Cluster Resource Utilization ............................................641
9-36 Viewing vSphere DRS Recommendations ..........................................................642
9-37 Maintenance Mode and Standby Mode ............................................................643
9-38 Removing a Host from the vSphere DRS Cluster ................................................644
9-39 vSphere DRS and Dynamic DirectPath I/O .........................................................645
9-40 Adding a Dynamic DirectPath I/O Device to a VM .............................................646
9-41 Lab 25: Implementing vSphere DRS Clusters .....................................................647
9-42 Review of Learner Objectives .............................................................................648
9-43 Lesson 3: Introduction to vSphere HA ................................................................649
9-44 Learner Objectives..............................................................................................650
9-45 Protection at Every Level ....................................................................................651
9-46 About vSphere HA ..............................................................................................653
9-47 vSphere HA Scenario: ESXi Host Failure .............................................................654
9-48 vSphere HA Scenario: Guest Operating System Failure .....................................655
9-49 vSphere HA Scenario: Application Failure ..........................................................656
9-50 vSphere HA Scenario: Datastore Accessibility Failures ......................................657
9-51 vSphere HA Scenario: Protecting VMs Against Network Isolation .....................659
9-52 Importance of Redundant Heartbeat Networks ................................................660
9-53 Redundancy Using NIC Teaming.........................................................................661
9-54 Redundancy Using Additional Networks ............................................................662
9-55 Review of Learner Objectives .............................................................................663
9-56 Lesson 4: vSphere HA Architecture ....................................................................664
9-57 Learner Objectives..............................................................................................665
9-58 vSphere HA Architecture: Agent Communication ..............................................666
9-59 vSphere HA Architecture: Network Heartbeats .................................................669
9-60 vSphere HA Architecture: Datastore Heartbeats ...............................................670
9-61 vSphere HA Failure Scenarios .............................................................................671
9-62 Failed Subordinate Hosts....................................................................................672
9-63 Failed Master Hosts ............................................................................................674
9-64 Isolated Hosts .....................................................................................................675
9-65 VM Storage Failures ...........................................................................................676
9-66 Protecting Against Storage Failures with VMCP.................................................677
9-67 vSphere HA Design Considerations ....................................................................678
9-68 Review of Learner Objectives .............................................................................679
9-69 Lesson 5: Configuring vSphere HA......................................................................680
9-70 Learner Objectives..............................................................................................681
9-71 vSphere HA Prerequisites ...................................................................................682
9-72 Configuring vSphere HA Settings........................................................................683
9-73 vSphere HA Settings: Failures and Responses....................................................684
9-74 vSphere HA Settings: VM Monitoring ................................................................686
9-75 vSphere HA Settings: Heartbeat Datastores ......................................................687
9-76 vSphere HA Settings: Admission Control............................................................688
9-77 Example: Admission Control Using Cluster Resources Percentage ....................690
9-78 Example: Admission Control Using Slots (1).......................................................691
9-79 Example: Admission Control Using Slots (2).......................................................692
9-80 vSphere HA Settings: Performance Degradation VMs Tolerate .........................693
9-81 vSphere HA Setting: Default VM Restart Priority ...............................................695
9-82 vSphere HA Settings: Advanced Options............................................................696
9-83 vSphere HA Settings: VM-Level Settings ............................................................697
9-84 About vSphere HA Orchestrated Restart ...........................................................698
9-85 VM Dependencies in Orchestrated Restart (1) ..................................................699
9-86 VM Dependencies in Orchestrated Restart (2) ..................................................700
9-87 Network Configuration and Maintenance .........................................................701
9-88 Monitoring vSphere HA Cluster Status...............................................................702
9-89 Using vSphere HA with vSphere DRS..................................................................703
9-90 Lab 26: Using vSphere HA...................................................................................704
9-91 Review of Learner Objectives .............................................................................705
9-92 Lesson 6: Introduction to vSphere Fault Tolerance............................................706
9-93 Learner Objectives..............................................................................................707
9-94 About vSphere Fault Tolerance ..........................................................................708
9-95 vSphere Fault Tolerance Features ......................................................................709
9-96 vSphere Fault Tolerance with vSphere HA and vSphere DRS.............................710
9-97 Redundant VMDK Files .......................................................................................711
9-98 vSphere Fault Tolerance Checkpoint ..................................................................712
9-99 vSphere Fault Tolerance: Precopy ......................................................................713
9-100 vSphere Fault Tolerance Fast Checkpointing .....................................................714
9-101 vSphere Fault Tolerance Shared Files.................................................................715
9-102 Enabling vSphere Fault Tolerance on a VM........................................................716
9-103 Review of Learner Objectives .............................................................................717
9-104 Activity: Virtual Beans Clusters (1) .....................................................................718
9-105 Activity: Virtual Beans Clusters (2) .....................................................................719
9-106 Key Points ...........................................................................................................720
10-25 Creating and Editing Patch or Extension Baselines ............................................745
10-26 Creating a Baseline .............................................................................................746
10-27 Creating a Baseline: Name and Description .......................................................747
10-28 Creating a Baseline: Select Patches Automatically.............................................748
10-29 Creating a Baseline: Select Patches Manually ....................................................749
10-30 Updating Your Host or Cluster with Baselines ...................................................750
10-31 Remediation Precheck........................................................................................751
10-32 Remediating Hosts..............................................................................................752
10-33 Review of Learner Objectives .............................................................................753
10-34 Lesson 4: Working with Images ..........................................................................754
10-35 Learner Objectives..............................................................................................755
10-36 Elements of ESXi Images.....................................................................................756
10-37 Image Depots......................................................................................................758
10-38 Importing Updates .............................................................................................759
10-39 Using Images to Perform ESXi Host Life Cycle Operations .................................760
10-40 Creating an ESXi Image for a New Cluster ..........................................................761
10-41 Checking Image Compliance...............................................................................762
10-42 Running a Remediation Precheck.......................................................................763
10-43 Hardware Compatibility .....................................................................................764
10-44 Standalone VIBs ..................................................................................................765
10-45 Remediating a Cluster Against an Image ............................................................766
10-46 Reviewing Remediation Impact..........................................................................767
10-47 Recommended Images .......................................................................................768
10-48 Viewing Recommended Images .........................................................................769
10-49 Selecting a Recommended Image ......................................................................771
10-50 Customizing Cluster Images ...............................................................................772
10-51 Lab 27: Using vSphere Lifecycle Manager ..........................................................773
10-52 Review of Learner Objectives .............................................................................774
10-53 Lesson 5: Managing the Life Cycle of VMware Tools and VM Hardware ...........775
10-54 Learner Objectives..............................................................................................776
10-55 Keeping VMware Tools Up To Date....................................................................777
10-56 Upgrading VMware Tools (1)..............................................................................778
10-57 Upgrading VMware Tools (2)..............................................................................779
10-58 Keeping VM Hardware Up To Date ....................................................................780
10-59 Upgrading VM Hardware (1) ..............................................................................781
10-60 Upgrading VM Hardware (2) ..............................................................................782
10-61 Review of Learner Objectives .............................................................................783
10-62 Virtual Beans: Conclusion ...................................................................................784
10-63 Key Points ...........................................................................................................785
Module 1
Course Introduction
VMware certification sets the standards for IT professionals who work with VMware technology.
Certifications are grouped into technology tracks. Each track offers one or more levels of
certification (up to five levels).
For the complete list of certifications and details about how to attain these certifications, see
https://vmware.com/certification.
Easy to share in social media (LinkedIn, Twitter, Facebook, blogs, and so on)
A virtual machine (VM) includes a set of specification and configuration files and is supported by
the physical resources of a host. Every VM has virtual devices that provide the same functionality
as physical hardware but are more portable, more secure, and easier to manage.
VMs typically include an operating system, applications, VMware Tools, and both virtual
resources and hardware that you manage in much the same way as you manage a physical
computer.
VMware Tools is a bundle of drivers. Using these drivers, the guest operating system can interact
efficiently with the VM's virtual hardware. VMware Tools adds extra functionality so that ESXi can
better manage the VM's use of physical hardware.
In a physical machine, the operating system (for example, Windows or Linux) is installed directly
on the hardware. The operating system requires specific device drivers to support specific
hardware. If the computer is upgraded with new hardware, new device drivers are required.
If applications interface directly with hardware drivers, an upgrade to the hardware, drivers, or
both can have significant repercussions if incompatibilities exist. Because of these potential
repercussions, hands-on technical support personnel must test hardware upgrades against a wide
variety of application suites and operating systems. Such testing costs time and money.
Virtualizing these systems saves on such costs because VMs are 100 percent software.
Multiple VMs are isolated from one another. You can have a database server and an email server
running on the same physical computer. The isolation between the VMs means that software-
dependency conflicts are not a problem. Even users with system administrator privileges on a
VM’s guest operating system cannot breach this layer of isolation to access another VM. These
users must explicitly be granted access by the ESXi system administrator. As a result of VM
isolation, a failure in one VM does not affect the other VMs running on the same host. The
operational VMs can continue to access the resources that they need.
With VMs, you can consolidate your physical servers and make more efficient use of your
hardware. Because a VM is a set of files, features that are not available or not as efficient on
physical architectures are available to you, for example:
With VMs, you can use live migration, fault tolerance, high availability, and disaster recovery
scenarios to increase uptime and reduce recovery time from failures.
You can use multitenancy to mix VMs into specialized configurations, such as a DMZ.
With VMs, you can support legacy applications and operating systems on newer hardware when
maintenance contracts on the existing hardware expire.
A software-defined data center (SDDC) is deployed with isolated computing, storage, networking,
and security resources that can be provisioned faster than in a traditional, hardware-based data
center.
All the resources (CPU, memory, disk, and network) of a software-defined data center are
abstracted into files. This abstraction brings the benefits of virtualization at all levels of the
infrastructure, independent of the physical infrastructure.
An SDDC can include the following components:
Service management and automation: Use service management and automation to track and
analyze the operation of multiple data sources in the multiregion SDDC. Deploy vRealize
Operations Manager and vRealize Log Insight across multiple nodes for continued availability
and increased log ingestion rates.
Cloud management layer: This layer includes the service catalog, which houses the facilities
to be deployed. The cloud management layer also includes orchestration, which provides the
workflows that deploy the items in the service catalog.
Virtual infrastructure layer: This layer establishes a robust virtualized environment that all
other solutions integrate with. The virtual infrastructure layer includes the virtualization
platform for the hypervisor, pools of resources, and virtualization control. Additional
processes and technologies build on the infrastructure to support Infrastructure as a Service
(IaaS) and Platform as a Service (PaaS).
Physical layer: The lowest layer of the solution includes compute, storage, and network
components.
Security: Customers use this layer of the platform to meet demanding compliance
requirements for virtualized workloads and to manage business risk.
As defined by the National Institute of Standards and Technology (NIST), cloud computing is a
model for the ubiquitous, convenient, and on-demand network access to a shared pool of
configurable computing resources.
For example, networks, servers, storage, applications, and services can be rapidly provisioned and
released with minimal management effort or little service provider interaction.
vSphere is the foundation for the technology that supports shared and configurable resource pools.
vSphere abstracts the physical resources of the data center to separate the workload from the
physical hardware. A software user interface can provide the framework for managing and
maintaining this abstraction and allocation.
VMware Cloud Foundation is the unified SDDC platform that bundles vSphere (ESXi and
vCenter Server), vSAN, and NSX into a natively integrated stack to deliver enterprise-ready cloud
infrastructure. VMware Cloud Foundation discovers the hardware, installs the VMware stack
(ESXi, vCenter Server, vSAN, and NSX), manages updates, and performs lifecycle management.
VMware Cloud Foundation can be self-deployed on compatible hardware or preloaded by partners.
Typical use cases include the following:
Cloud infrastructure: Exploit the high performance, availability, and scalability of the SDDC
to run mission-critical applications such as databases, web applications, and virtual desktop
infrastructure (VDI).
VDI: Provide a complete solution for VDI deployment at scale. It simplifies the planning and
design with standardized and tested solutions fully optimized for VDI workloads.
Hybrid cloud: Build a hybrid cloud with a common infrastructure and a consistent operational
model, connecting your on-premises and off-premises data centers in a compatible, stretched,
and distributed environment.
VMware Skyline shortens the time it takes to resolve a problem so that you can get back to
business quickly. VMware Technical Support engineers can use VMware Skyline to view your
environment's configuration and the specific, data-driven analytics to help speed up problem
resolution.
With Basic Support, you can access Skyline findings and recommendations for vSphere and
vSAN by using Skyline Health in the vSphere Client (version 6.7 and later).
With Production or Premier Support, you use Skyline Advisor to access the full functionality of
Skyline (including Log Assist).
Scheduled and custom operational summary reports that provide an overview of the proactive
findings and recommendations
Skyline supports vSphere, NSX for vSphere, vSAN, VMware Horizon, and vRealize Operations
Manager. A Skyline management pack for vRealize Operations Manager is also available. If you
install this management pack, you can see Skyline proactive findings and recommendations within
the vRealize Operations Manager client.
The identification and tagging of VxRail and VMware Validated Design deployments help you
and VMware Technical Support to better understand and support multiproduct solutions.
Skyline identifies all ESXi 5.5 objects within a vCenter Server instance and provides additional
information in VMware knowledge base article 51491 at https://kb.vmware.com/kb/51491. This
article details the end of general support for vSphere 5.5.
For versions of vSphere, vSAN, NSX for vSphere, VMware Horizon, and vRealize Operations
Manager that are supported by Skyline, see the Skyline Collector Release Notes at
https://docs.vmware.com.
You can use virtualization to consolidate and run multiple workloads as VMs on a single
computer.
The slide shows the differences between a virtualized and a nonvirtualized host.
In traditional architectures, the operating system interacts directly with the installed hardware. The
operating system schedules processes to run, allocates memory to applications, sends and receives
data on network interfaces, and both reads from and writes to attached storage devices.
In comparison, a virtualized host interacts with the installed hardware through a thin layer of
software called the virtualization layer or hypervisor.
The hypervisor provides physical hardware resources dynamically to VMs as needed to support
the operation of the VMs. With the hypervisor, VMs can operate with a degree of independence
from the underlying physical hardware. For example, a VM can be moved from one physical host
to another. In addition, its virtual disks can be moved from one type of storage to another without
affecting the functioning of the VM.
With virtualization, you can run multiple VMs on a single physical host, with each VM sharing
the resources of one physical computer across multiple environments. VMs share access to CPUs
and are scheduled to run by the hypervisor.
In addition, VMs are assigned their own region of memory to use and share access to the physical
network cards and disk controllers. Different VMs can run different operating systems and
applications on the same physical computer.
When multiple VMs run on an ESXi host, each VM is allocated a portion of the physical
resources. The hypervisor schedules VMs like a traditional operating system allocates memory
and schedules applications. These VMs run on various CPUs. The ESXi hypervisor can also
overcommit memory. Memory is overcommitted when your VMs can use more virtual RAM than
the physical RAM that is available on the host.
VMs, like applications, use network and disk bandwidth. However, access to this bandwidth is
governed by control mechanisms that determine how much of each resource is available to each VM.
The virtualization layer runs instructions only when needed to make VMs operate as if they were
running directly on a physical machine. CPU virtualization is not emulation. With a software
emulator, programs can run on a computer system other than the one for which they were
originally written.
Emulation provides portability but might negatively affect performance. CPU virtualization is not
emulation because the supported guest operating systems are designed for x64 processors. Using
the hypervisor, the operating systems can run natively on the hosts' physical x64 processors.
When many VMs are running on an ESXi host, those VMs might compete for CPU
resources. When CPU contention occurs, the ESXi host time slices the physical processors across
all virtual machines so that each VM runs as if it had a specified number of virtual processors.
When an application starts, it uses the interfaces provided by the operating system to allocate or
release virtual memory pages during the execution. Virtual memory is a decades-old technique
used in most general-purpose operating systems. Operating systems use virtual memory to present
more memory to applications than they physically have access to. Almost all modern processors
have hardware to support virtual memory.
Virtual memory creates a uniform virtual address space for applications. With the operating
system and hardware, virtual memory can handle the address translation between the virtual
address space and the physical address space. This technique adapts the execution environment to
support large address spaces, process protection, file mapping, and swapping in modern computer
systems.
In a virtualized environment, the VMware virtualization layer creates a contiguous addressable
memory space for the VM when it is started. The allocated memory space is configured when the
VM is created and has the same properties as the virtual address space. With this configuration,
the hypervisor can run multiple VMs simultaneously while protecting the memory of each VM
from being accessed by others.
A VM can be configured with one or more virtual Ethernet adapters. VMs use virtual switches on
the same ESXi host to communicate with one another by using the same protocols that are used
over physical switches, without the need for additional hardware.
Virtual switches also support VLANs that are compatible with standard VLAN implementations
from other networking equipment vendors. With VMware virtual networking, you can link local
VMs together and link local VMs to the external network through a virtual switch.
A virtual switch, like a physical Ethernet switch, forwards frames at the data link layer. An ESXi
host might contain multiple virtual switches. The virtual switch connects to the external network
through outbound Ethernet adapters, called vmnics. The virtual switch can bind multiple vmnics
together, like NIC teaming on a traditional server, offering greater availability and bandwidth to
the VMs using the virtual switch.
Virtual switches are similar to modern physical Ethernet switches in many ways. Like a physical
switch, each virtual switch is isolated and has its own forwarding table, so every destination that
the switch looks up can match only ports on the same virtual switch where the frame originated.
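As an illustration only, the following PowerCLI sketch (PowerCLI is introduced later in this
course) lists the standard virtual switches on a host and the physical vmnics available to them. It
assumes an existing Connect-VIServer session, and the host name sa-esxi-01 is a placeholder.

    # List standard virtual switches on a host and the vmnics bound to each.
    Get-VirtualSwitch -VMHost "sa-esxi-01" |
        Select-Object Name, Nic, NumPorts

    # List the physical network adapters (vmnics) on the same host.
    Get-VMHostNetworkAdapter -VMHost "sa-esxi-01" -Physical |
        Select-Object Name, Mac, BitRatePerSec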
To store virtual disks, ESXi uses datastores, which are logical containers that hide the specifics of
physical storage from VMs and provide a uniform model for storing VM files. Datastores that you
deploy on block storage devices use the VMFS format, a special high-performance file system
format that is optimized for storing virtual machines.
VMFS is designed, constructed, and optimized for a virtualized environment. It is a high-
performance cluster file system designed for virtual machines. It functions in the following ways:
Uses distributed journaling of its file system metadata changes for fast and resilient recovery
if a hardware failure occurs
Increases resource usage by providing multiple VMs with shared access to a consolidated
pool of clustered storage
Is the foundation of distributed infrastructure services, such as live migration of VMs and VM
files, dynamically balanced workloads across available compute resources, automated restart
of VMs, and fault tolerance
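As a quick, hedged illustration, a PowerCLI one-liner such as the following lists the datastores
visible to the environment with their file system type, capacity, and free space. It assumes an
existing connection to a vCenter Server system.

    # List datastores with their type (VMFS, NFS, and so on), capacity, and free space.
    Get-Datastore | Select-Object Name, Type, CapacityGB, FreeSpaceGB | Sort-Object Name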
GPUs can be used by developers of server applications. Although servers do not usually have
monitors, GPU support is important and relevant to server virtualization.
VMware Host Client provides direct management of individual ESXi hosts. VMware Host Client
is generally used only when management through vCenter Server is not possible.
With the vSphere Client, an HTML5-based client, you can manage vCenter Server Appliance and
the vCenter Server object inventory.
VMware Host Client and the vSphere Client provide the following benefits:
Clean, modern UI
VMware ESXi in the upper-left corner of the banner on the VMware Host Client interface helps
you to differentiate VMware Host Client from other clients.
Similarly, vSphere Client in the upper-left corner of the banner on the vSphere Client interface
helps you to differentiate the vSphere Client from other clients.
When you use https://vCenter_Server_Appliance_FQDN_or_IP_Address/ui to access the vSphere
Client, the URL internally redirects to port 9443 on your vCenter Server system.
With the vSphere Client, you can manage vCenter Server Appliance through a web browser, and
Adobe Flex does not have to be enabled in the browser.
You can install ESXCLI on a Windows or Linux system. You can run ESXCLI commands from
the Windows or Linux system to manage ESXi systems.
For more information about ESXCLI, see https://code.vmware.com/web/tool/7.0/esxcli.
For more information about PowerCLI, see https://code.vmware.com/web/tool/12.0.0/vmware-
powercli.
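For orientation, a minimal example of each interface follows. The vCenter Server and host names
are placeholders, and the ESXCLI command is run in an ESXi Shell or SSH session on the host.

    # PowerCLI: connect to a vCenter Server system and list the managed ESXi hosts.
    Connect-VIServer -Server "sa-vcsa-01.vclass.local"
    Get-VMHost | Select-Object Name, ConnectionState, Version

    # ESXCLI: show the ESXi version of the host you are logged in to.
    esxcli system version get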
To ensure that your physical servers are supported by ESXi 7.0, check VMware Compatibility
Guide at https://www.vmware.com/resources/compatibility.
You can obtain a free version of ESXi, called vSphere Hypervisor, or you can purchase a licensed
version with vSphere. ESXi can be installed on a hard disk, a USB device, or an SD card. ESXi
can also be installed on diskless hosts (directly into memory) with vSphere Auto Deploy.
ESXi has a small disk footprint for added security and reliability. ESXi provides additional
protection with the following features:
Host-based firewall: To minimize the risk of an attack through the management interface,
ESXi includes a firewall between the management interface and the network.
Memory hardening: The ESXi kernel, user-mode applications, and executable components,
such as drivers and libraries, are located at random, nonpredictable memory addresses.
Combined with the nonexecutable memory protections made available by microprocessors,
memory hardening provides protection that makes it difficult for malicious code to use
memory exploits to take advantage of vulnerabilities.
Trusted Platform Module: TPM is a hardware element that creates a trusted platform. This
element affirms that the boot process and all drivers loaded are genuine.
UEFI secure boot: This feature is for systems that support UEFI secure boot firmware, which
contains a digital certificate that the vSphere installation bundles (VIBs) chain to. At boot
time, a verifier is started before other processes to check that the VIBs chain to the certificate
in the firmware.
Lockdown modes: This vSphere feature disables login and API functions from being executed
directly on an ESXi host.
ESXi Quick Boot: With this feature, ESXi can reboot without reinitializing the physical
server BIOS. Quick Boot reduces remediation time during host patch or host upgrade
operations. Quick Boot is enabled by default on supported hardware.
You use the Direct Console User Interface (DCUI) to configure certain settings for ESXi hosts.
The DCUI is a low-level configuration and management interface, accessible through the console
of the server, that is used primarily for initial basic configuration. You press F2 to start
customizing system settings.
The administrative user name for the ESXi host is root. The root password must be configured
during the ESXi installation process.
You must set up your IP address before your ESXi host is operational. By default, a DHCP-
assigned address is configured for the ESXi host. To change or configure basic network settings,
you use the DCUI.
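If you later need to review or adjust these settings from the command line instead of the DCUI, an
ESXCLI sketch such as the following can be used. The interface name, addresses, and options shown
are examples only, and the exact options can differ between ESXi versions.

    # Show the current IPv4 configuration of the VMkernel management interface (vmk0).
    esxcli network ip interface ipv4 get

    # Assign a static IPv4 address to vmk0 (example values).
    esxcli network ip interface ipv4 set --interface-name=vmk0 --ipv4=172.20.10.51 --netmask=255.255.255.0 --type=static

    # Set the default gateway (example value).
    esxcli network ip route ipv4 add --gateway=172.20.10.1 --network=default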
In addition to changing IP settings, you perform the following tasks from the DCUI:
From the DCUI, you can change the keyboard layout, view support information, such as the host’s
license serial number, and view system logs. The default keyboard layout is U.S. English.
You can use the troubleshooting options, which are disabled by default, to enable or disable
troubleshooting services:
SSH: For troubleshooting issues remotely by using an SSH client, for example, PuTTY
The best practice is to keep troubleshooting services disabled until they are necessary, for
example, when you are working with VMware technical support to resolve a problem.
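When you do need to toggle these services remotely, a PowerCLI sketch such as the following enables
SSH temporarily and disables it again afterward. The host name is a placeholder, and the service key
TSM-SSH identifies the SSH service.

    # Start the SSH service on a host for a troubleshooting session.
    Get-VMHostService -VMHost "sa-esxi-01" |
        Where-Object { $_.Key -eq "TSM-SSH" } |
        Start-VMHostService

    # Stop the service again when troubleshooting is finished (best practice).
    Get-VMHostService -VMHost "sa-esxi-01" |
        Where-Object { $_.Key -eq "TSM-SSH" } |
        Stop-VMHostService -Confirm:$false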
By selecting the Reset System Configuration option, you can reset the system configuration to its
software defaults and remove custom extensions or packages that you added to the host.
An ESXi host includes a firewall as part of the default installation. On ESXi hosts, remote clients
are typically prevented from accessing services on the host. Similarly, local clients are typically
prevented from accessing services on remote hosts.
To ensure the integrity of the host, few ports are open by default. To provide or prevent access to
certain services or clients, you must modify the properties of the firewall.
You can configure firewall settings for incoming and outgoing connections for a service or a
management agent. For some services, you can manage service details.
For example, you can use the Start, Stop, or Restart buttons to change the status of a service
temporarily. Alternatively, you can change the startup policy so that the service starts with the host
or with port use. For some services, you can explicitly specify IP addresses from which
connections are allowed.
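A small PowerCLI example of inspecting and adjusting these firewall settings follows. The host name
is a placeholder, and the rule set name used here is illustrative (rule set names vary by ESXi
version).

    # List the firewall rule sets (services) that are currently enabled on the host.
    Get-VMHostFirewallException -VMHost "sa-esxi-01" |
        Where-Object { $_.Enabled } |
        Select-Object Name, Enabled, ServiceRunning

    # Enable a specific exception, for example an NTP client rule set.
    Get-VMHostFirewallException -VMHost "sa-esxi-01" -Name "NTP Client" |
        Set-VMHostFirewallException -Enabled $true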
On an ESXi host, the root user account is the most powerful user account on the system. The user
root can access all files and all commands. Securing this account is the most important step that
you can take to secure an ESXi host.
Whenever possible, use the vSphere Client to log in to the vCenter Server system and manage
your ESXi hosts. In some unusual circumstances, for example, when the vCenter Server system is
down, you use VMware Host Client to connect directly to the ESXi host.
Although you can log in to your ESXi host through the vSphere CLI or through vSphere ESXi
Shell, these access methods should be reserved for troubleshooting or configuration that cannot be
accomplished by using VMware Host Client.
If a host must be managed directly, avoid creating local users on the host. If possible, join the host
to a Windows domain and log in with domain credentials instead.
Network Time Protocol (NTP) is an Internet standard protocol that is used to synchronize
computer clock times in a network. The benefits of synchronizing an ESXi host’s time include:
Accurate time stamps appear in log messages, which make audit logs meaningful.
VMs can synchronize their time with the ESXi host. Time synchronization is beneficial to
applications, such as database applications, running on VMs.
NTP is a client-server protocol. When you configure the ESXi host to be an NTP client, the host
synchronizes its time with an NTP server, which can be a server on the Internet or your corporate
NTP server.
For information about NTP, see http://www.ntp.org.
For more information about timekeeping, see VMware knowledge base article 1318 at
http://kb.vmware.com/kb/1318.
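As a sketch of how you might configure NTP with PowerCLI rather than the vSphere Client, assuming a
reachable NTP server (ntp.vclass.local is a placeholder) and a connected PowerCLI session:

    # Point the host at an NTP server, then set the ntpd service to start with the host and start it.
    $esx = Get-VMHost -Name "sa-esxi-01"
    Add-VMHostNtpServer -VMHost $esx -NtpServer "ntp.vclass.local"

    Get-VMHostService -VMHost $esx |
        Where-Object { $_.Key -eq "ntpd" } |
        ForEach-Object {
            Set-VMHostService -HostService $_ -Policy "on"
            Start-VMHostService -HostService $_
        }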
The optimal method for provisioning VMs for your environment depends on factors such as the
size and type of your infrastructure and the goals that you want to achieve.
You can use the New Virtual Machine wizard to create a single VM if no other VMs in your
environment meet your requirements, such as a particular operating system or hardware
configuration. For example, you might need a VM that is configured only for testing purposes.
You can also create a single VM, install an operating system on it, and use that VM as a template
from which to clone other VMs.
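A minimal PowerCLI sketch of creating such a single VM follows. All names, sizes, and the port group
are placeholder values, and the command assumes an existing connection to vCenter Server.

    # Create a small test VM on a specific host and datastore (placeholder values throughout).
    New-VM -Name "Test01" `
           -VMHost "sa-esxi-01" `
           -Datastore "Class-Datastore" `
           -NumCpu 2 -MemoryGB 4 -DiskGB 40 `
           -NetworkName "VM Network"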
Deploy VMs, virtual appliances, and vApps stored in Open Virtual Machine Format (OVF) to use
a preconfigured VM. A virtual appliance is a VM that typically has an operating system and other
software preinstalled. You can deploy VMs from OVF templates that are on local file systems (for
example, local disks such as C:), removable media (for example, CDs or USB keychain drives),
shared network drives, or URLs.
In addition to using the vSphere Client, you can also use VMware Host Client to create a VM by
using OVF files. However, several limitations apply when you use VMware Host Client for this purpose.
The New Virtual Machine wizard prompts you for standard information:
The VM name
If using the vSphere Client, you can also specify the folder in which to place the VM.
To install the guest operating system, you interact with the VM through the VM console. Using
the vSphere Client, you can attach a CD, DVD, or ISO image containing the installation image to
the virtual CD/DVD drive.
On the slide, the Windows Server 2008 guest operating system is being installed. You can use the
vSphere Client to install a guest operating system. You can also install a guest operating system
from an ISO image or a CD. Installing from an ISO image is typically faster and more convenient
than a CD installation.
For more information about installing guest operating systems, see vSphere Virtual Machine
Administration at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-55238059-912E-411F-A0E9-
A7A536972A91.html.
For more about the supported guest operating systems, see VMware Compatibility Guide at
https://www.vmware.com/resources/compatibility.
VMware Tools improves management of the VM by replacing generic operating system drivers
with VMware drivers tuned for virtual hardware. You install VMware Tools into the guest
operating system. When you install VMware Tools, you install these items:
The VMware Tools service: This service synchronizes the time in the guest operating system
with the time in the host operating system.
A set of scripts that helps you automate guest operating system operations: You can configure
the scripts to run when the VM's power state changes.
VMware Tools enhances the performance of a VM and makes many of the ease-of-use features in
VMware products possible:
Faster graphics performance and Windows Aero on operating systems that support Aero
Although the guest operating system can run without VMware Tools, many VMware features are
not available until you install VMware Tools. For example, if VMware Tools is not installed in
your VM, you cannot use the shutdown or restart options from the toolbar. You can use only the
power options.
For more information about using Open VM tools, see VMware Tools User Guide at
https://docs.vmware.com/en/VMware-Tools/index.html.
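If you prefer to script a VMware Tools upgrade for a guest that already has VMware Tools installed, a
hedged PowerCLI sketch follows. The VM name is a placeholder, and the VM must be powered on.

    # Upgrade VMware Tools in a powered-on Windows VM without rebooting it immediately.
    Get-VM -Name "Test01" | Update-Tools -NoReboot

    # Alternatively, mount the VMware Tools installer in the guest and run it interactively.
    Mount-Tools -VM "Test01"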
vSphere encapsulates each VM into a few files or objects, making VMs easier to manage and
migrate. The files and objects for each VM are stored in a separate folder on a datastore.
The slide lists some of the files that make up a VM. Except for the log files, the name of each file
starts with the VM's name <VM_name>. A VM consists of the following files:
A VM's current log file (.log) and a set of files used to archive old log entries (-#.log).
In addition to the current log file, vmware.log, up to six archive log files are maintained at
one time. For example, -1.log to -6.log might exist at first.
The next time an archive log file is created, for example, when the VM is powered off and
powered back on, the following actions occur: The -6.log file is deleted, the -5.log file is
renamed to -6.log, and so on down the sequence, and a new -1.log file is created.
One or more virtual disk files. The first virtual disk has files VM_name.vmdk and
VM_name-flat.vmdk.
If the VM has more than one disk file, the file pair for the subsequent disk files is called
VM_name_#.vmdk and VM_name_#-flat.vmdk. # is the next number in the sequence,
starting with 1. For example, if the VM called Test01 has two virtual disks, this VM has the
Test01.vmdk, Test01-flat.vmdk, Test01_1.vmdk, and Test01_1-
flat.vmdk files.
The list of files shown on the slide is not comprehensive. For a complete list of all the types of
VM files, see vSphere Virtual Machine Administration at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-55238059-912E-411F-A0E9-
A7A536972A91.html.
Each guest OS sees ordinary hardware devices. The guest OS does not know that these devices are
virtual. All VMs have uniform hardware, except for a few variations that the system administrator
can apply. Uniform hardware makes VMs portable across VMware virtualization platforms.
You can configure VM memory and CPU settings. vSphere supports many of the latest CPU
features, including virtual CPU performance counters. You can add virtual hard disks and NICs.
You can also add and configure virtual hardware, such as CD/DVD drives, and SCSI devices. Not
all devices are available to add and configure. For example, you cannot add video devices, but you
can configure available video devices and video cards.
You can add multiple USB devices, such as security dongles and mass storage devices, to a VM
that resides on an ESXi host to which the devices are physically attached. When you attach a USB
device to a physical host, the device is available only to VMs that reside on that host. Those VMs
cannot connect to a device on another host in the data center. A USB device is available to only
one VM at a time. When you remove a device from a VM, it becomes available to other VMs that
reside on the host.
VMCI provides socket APIs that are similar to APIs that are used for TCP/UDP applications. IP
addresses are replaced with VMCI ID numbers. For example, you can port netperf to use VMCI
sockets instead of TCP/UDP. VMCI is disabled by default.
For more information about virtual hardware, see vSphere Virtual Machine Administration at
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-
55238059-912E-411F-A0E9-A7A536972A91.html.
Each release of a VMware product has a corresponding VM hardware version included. The table
shows the latest hardware version that each ESXi version supports. Each VM compatibility level
supports at least five major or minor vSphere releases.
For a complete list of virtual machine configuration maximums, see VMware Configuration
Maximums at https://configmax.vmware.com.
You size the VM's CPU and memory according to the applications and the guest operating system.
You can use the multicore vCPU feature to control the number of cores per virtual socket in a VM.
With this capability, operating systems with socket restrictions can use more of the host CPU’s
cores, increasing overall performance.
A VM cannot have more virtual CPUs than the number of logical CPUs on the host. The number
of logical CPUs is the number of physical processor cores, or twice that number if hyperthreading
is enabled. For example, if a host has 128 logical CPUs, you can configure the VM for 128
vCPUs.
You can set most of the memory parameters during VM creation or after the guest operating
system is installed. Some actions require that you power off the VM before changing the settings.
The memory resource settings for a VM determine how much of the host’s memory is allocated to
the VM.
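For example, you might resize a VM with PowerCLI as sketched below. The VM name and sizes are
placeholders, and the VM should be powered off first unless CPU and memory hot add are enabled.

    # With the VM powered off (or hot add enabled), resize it to 4 vCPUs and 8 GB of RAM.
    Get-VM -Name "Test01" | Set-VM -NumCpu 4 -MemoryGB 8 -Confirm:$false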
Storage adapters provide connectivity for your ESXi host to a specific storage unit or network.
ESXi supports different classes of adapters, including SCSI, iSCSI, RAID, Fibre Channel, Fibre
Channel over Ethernet (FCoE), and Ethernet. ESXi accesses the adapters directly through device
drivers in the VMkernel:
BusLogic Parallel: The latest Mylex (BusLogic) BT/KT-958 compatible host bus adapter.
LSI Logic Parallel: The LSI Logic LSI53C10xx Ultra320 SCSI I/O controller is supported.
LSI Logic SAS: The LSI Logic SAS adapter has a serial interface.
VMware Paravirtual SCSI: A high-performance storage adapter that can provide greater
throughput and lower CPU use.
Virtual NVMe: NVMe is an industry specification for attaching and accessing flash storage
devices through the PCI Express bus. NVMe is an alternative to existing block-based server storage
I/O access protocols.
In a lazy-zeroed thick-provisioned disk, space required for the virtual disk is allocated during
creation. Data remaining on the physical device is not erased during creation. Later, the data is
zeroed out on demand on first write from the VM. This disk type is the default.
In an eager-zeroed thick-provisioned disk, the space required for the virtual disk is allocated
during creation. Data remaining on the physical device is zeroed out when the disk is created.
A thin-provisioned disk uses only as much datastore space as the disk initially needs. If the thin
disk needs more space later, it can expand to the maximum capacity allocated to it.
Thin provisioning is often used with storage array deduplication to improve storage use and to
back up VMs.
Thin provisioning provides alarms and reports that track allocation versus current use of storage
capacity. Storage administrators can use thin provisioning to optimize the allocation of storage for
virtual environments. With thin provisioning, users can optimally but safely use available storage
space through overallocation.
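A short PowerCLI sketch of adding disks in different provisioning formats follows. The VM name and
capacities are placeholder values.

    # Add a 100 GB thin-provisioned disk to an existing VM.
    New-HardDisk -VM "Test01" -CapacityGB 100 -StorageFormat Thin

    # Add a 20 GB eager-zeroed thick disk for workloads that need fully preallocated, pre-zeroed space.
    New-HardDisk -VM "Test01" -CapacityGB 20 -StorageFormat EagerZeroedThick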
The types of network adapters that are available depend on the following factors:
VM compatibility level (or hardware version), which depends on the host that created or most
recently updated it. For example, the VMXNET3 virtual NIC requires hardware version 7
(ESX/ESXi 4.0 or later).
Whether the VM compatibility is updated to the latest version for the current host.
E1000E: Emulated version of the Intel 82574 Gigabit Ethernet NIC. E1000E is the default
adapter for Windows 8 and Windows Server 2012.
E1000: Emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available
in most newer guest operating systems, including Windows XP and later and Linux versions
2.4.19 and later.
Flexible: Identifies itself as a Vlance adapter when a VM starts, but initializes itself and
functions as either a Vlance or a VMXNET adapter, depending on which driver initializes it.
With VMware Tools installed, the VMXNET driver changes the Vlance adapter to the higher
performance VMXNET adapter.
Vlance: Emulated version of the AMD 79C970 PCnet32 LANCE NIC, an older 10 Mbps NIC
with drivers available in 32-bit legacy guest operating systems. A VM configured with this
network adapter can use its network immediately.
VMXNET3: A paravirtualized NIC designed for performance. VMXNET3 offers all the
features available in VMXNET2 and adds several new features, such as multiqueue support
(also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt
delivery.
– High availability
– vSphere DRS: Limited availability
The VM can be part of a cluster but cannot migrate across hosts.
– Snapshots.
With PVRDMA, multiple guests can access the RDMA device by using verbs API, an
industry-standard interface. A set of these verbs was implemented to expose an RDMA-
capable guest device (PVRDMA) to applications. The applications can use the PVRDMA
guest driver to communicate with the underlying physical device. PVRDMA supports
RDMA, providing the following functions:
– OS bypass
– Zero-copy
– Low latency and high bandwidth
– Less power use and faster data access
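As an illustrative PowerCLI sketch, you can check and change a VM's network adapter type as shown
below. The VM and adapter names are placeholders, changing the type typically requires the VM to be
powered off, and the guest needs the appropriate VMware Tools driver afterward.

    # Show the network adapters in a VM and their current types.
    Get-NetworkAdapter -VM "Test01" | Select-Object Name, Type, NetworkName

    # Change an adapter to VMXNET3.
    Get-NetworkAdapter -VM "Test01" -Name "Network adapter 1" |
        Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false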
Virtual CPU (vCPU) and virtual memory are the minimum required virtual hardware. Having a
virtual hard disk, virtual NICs, and other virtual devices make the VM more useful.
For information about adding virtual devices to a VM, see vSphere Virtual Machine
Administration at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-55238059-912E-411F-A0E9-
A7A536972A91.html.
Applications that run on cloud-based environments are designed with failure in mind. They are
built to be resilient, to tolerate network or database outages, and to degrade gracefully.
Typically, cloud-native applications use microservice-based architectures. The term micro does
not correlate to lines of code. It refers to functionality and responsibility.
Each microservice should be responsible for specific parts of the system.
In the example, the application is broken into multiple services, including a UI and user, order, and
product services. Each service has its own database. With this architecture, each service can be
scaled independently. For example, during busy times, the order service might need to be scaled to
handle high throughput.
The Twelve-Factor App principles describe characteristics of microservice and cloud-native
applications.
Containers are a new format of virtualized workload. They require CPU, memory, network,
security, and storage.
Containers satisfy developers’ need for speed by removing dependencies on underlying operating
systems:
Change the paradigm on security by using a discard and restart approach to patching and
upgrades.
Use structured tooling to fully automate updates of application logic running inside.
Provide an easy user experience for developers that is infrastructure-agnostic (meaning that it
can run on any cloud).
The opportunities that containers present are many, but so are the infrastructure and operational
complexities that they can introduce.
Administrators provide container hosts, which are the base structure that developers use to run
their containers. A robust microservices system includes more deliverables, many of which are
built using containers.
For developers to focus on providing services to customers, operations must provide a reliable
container host infrastructure.
In vSphere with Kubernetes, the container hosts are Photon-based VMs.
With virtualization, multiple physical machines can be consolidated into a single physical machine
that runs multiple VMs. Each VM provides virtual hardware that the guest OS uses to run
applications. Multiple applications run on a single VM but these applications are still logically
separated and isolated.
A concern about VMs is that they are hundreds of megabytes to gigabytes in size and contain
many binaries and libraries that are not relevant to the main application running on them.
With containers, developers take a streamlined base OS file system and layer on only the required
binaries and libraries that the application depends on. When a container is run as a process on the
container host OS, the container can see its dependencies and base OS packages. The container is
isolated from all other processes on the container host OS. The container processes are the only
processes that run on a minimal system.
From the container host OS perspective, the container is another process that is running, but it has
a restricted view of the file system and potentially restricted CPU and memory.
Containers are the ideal technology for microservices because the goals of containers (lightweight,
easily packaged, can run anywhere) align with the goals and benefits of the microservices
architecture.
Operators get modularized application components that are small and can fit into existing
resources.
Developers can focus on the logic of modularized application components, knowing that the
infrastructure is reliable and supports the scalability of modules.
Kubernetes automates many key operational responsibilities, providing the developer with a
reliable environment.
Kubernetes performs the following functions:
Groups containers that make up an application into logical units for easy management and
discovery
Restarts failed containers, replaces and reschedules containers when hosts fail, and stops
containers that do not respond to your user-defined health check
Progressively rolls out changes to your application, ensuring that it does not stop all your
instances at the same time and enabling zero downtime
Allocates IP addresses, mounts the storage system of your choice, load balances, and
generally looks after the containers
Kubernetes orchestrates containers that support the application. However, running Kubernetes in
production is not easy, especially for operations teams. The top challenges of running Kubernetes
are related to reliability, security, networking, scaling, logging, and complexity. How do you
monitor Kubernetes and the underlying infrastructure? How do you build a reliable platform to
deploy your applications? How do you handle the complexity that this layer of abstraction
introduces?
For years, VMware has helped to solve these types of problems for IT. VMware can offer its
expertise and solutions in this area.
Application developers prefer using Kubernetes rather than programming to the infrastructure. For
example, an application developer must build an ELK stack. The developer prefers to deal with
the Kubernetes API. The developer wants to use the resources, load balancer, and all the
primitives that Kubernetes constructs, rather than worry about the underlying infrastructure.
But the infrastructure is still there. It must be mapped for Kubernetes to use it. Usually, that
mapping is done by a platform operator so the developer can use the Kubernetes constructs.
The slide shows how the mapping is done with the VMware software-defined data center (SDDC).
The resources and availability zones map to vSphere clusters, security policy and load-balancing
map to NSX, persistent volumes map to vSphere datastores, and metrics map to Wavefront. Each
of these items provides value.
With vCenter Server, you can pool and manage the resources of multiple hosts.
You can deploy vCenter Server Appliance on an ESXi host in your infrastructure. vCenter Server
Appliance is a preconfigured Linux-based virtual machine that is optimized for running vCenter
Server and the vCenter Server components.
vCenter Server Appliance provides advanced features, such as vSphere DRS, vSphere HA,
vSphere Fault Tolerance, vSphere vMotion, and vSphere Storage vMotion.
vCenter Server is a service that runs in vCenter Server Appliance. vCenter Server acts as a central
administrator for ESXi hosts that are connected in a network.
Although installation of vCenter Server services is not optional, administrators can choose
whether to use their functionalities.
vSphere Client: You use this client to connect to vCenter Server so that you can manage your
ESXi hosts centrally. When an ESXi host is managed by vCenter Server, you should always
use vCenter Server and the vSphere Client to manage that host.
vCenter Server database: The vCenter Server database is the most important component. The
database stores inventory items, security roles, resource pools, performance data, and other
critical information for vCenter Server.
Managed hosts: You can use vCenter Server to manage ESXi hosts and the VMs that run on
them.
You cannot create an Enhanced Linked Mode group after you deploy vCenter Server Appliance.
Enhanced Linked Mode provides the following features:
You can log in to all linked vCenter Server instances simultaneously with a single user name
and password.
You can view and search the inventories of all linked vCenter Server instances in the vSphere
Client.
Roles, permission, licenses, tags, and policies are replicated across linked vCenter Server
instances.
To join vCenter Server instances in Enhanced Linked Mode, connect the vCenter Server instances
to the same vCenter Single Sign-On domain.
Enhanced Linked Mode requires the vCenter Server Standard licensing level. This mode is not
supported with vCenter Server Foundation or vCenter Server for Essentials.
vCenter Server provides direct access to the ESXi host through a vCenter Server agent called
virtual provisioning X agent (vpxa). The vpxa process is automatically installed on the host and
started when the host is added to the vCenter Server inventory. The vCenter Server service (vpxd)
communicates with the ESXi host daemon (hostd) through the vCenter Server agent (vpxa).
Clients that communicate directly with the host, and bypass vCenter Server, converse with hostd.
The hostd process runs directly on the ESXi host and manages most of the operations on the ESXi
host. The hostd process is aware of all VMs that are registered on the ESXi host, the storage
volumes visible to the ESXi host, and the status of all VMs.
Most commands or operations come from vCenter Server through vpxa. Examples include
creating, migrating, and powering on virtual machines. Acting as an intermediary between the
vpxd process, which runs on vCenter Server, and the hostd process, vpxa relays the tasks to
perform on the host.
When you are logged in to the vCenter Server system through the vSphere Client, vCenter Server
passes commands to the ESXi host through the vpxa.
The GUI installer performs validations and prechecks during the deployment phase to ensure that
no mistakes are made and that a compatible environment is created.
In stage 2, you configure whether to use the ESXi host or NTP servers as the time synchronization
source. You can also enable SSH access. SSH access is disabled by default.
To access the vCenter Server system settings by using the vSphere Client, select the vCenter
Server system in the navigation pane, click the Configure tab, and expand Settings.
The vCenter Server Appliance Management Interface is an HTML client designed to configure
and monitor vCenter Server Appliance.
The vCenter Server Appliance Management Interface connects directly to port 5480. Use the URL
https://FQDN_or_IP_address:5480.
A maximum of four NICs are supported for multihoming. All four multihoming-supported NIC
configurations are preserved during upgrade, backup, and restore processes.
The License Service manages the license assignments for ESXi hosts, vCenter Server systems, and
clusters with vSAN enabled.
You can monitor the health and status of the License Service by using the vCenter Appliance
Management Interface.
In the vSphere environment, license reporting and management are centralized. All product and
feature licenses are encapsulated in 25-character license keys that you can manage and monitor
from vCenter Server.
You can view license information by product, license key, or asset:
Product: A license to use a vSphere software component or feature, for example, evaluation
mode or vSphere Enterprise Plus.
Asset: A machine on which a product is installed. For an asset to run certain software legally,
the asset must be licensed.
Before purchasing and activating licenses for ESXi and vCenter Server, you can install the
software and run it in evaluation mode. Evaluation mode is intended for demonstrating the
software or evaluating its features. During the evaluation period, the software is operational.
The evaluation period is 60 days from the time of installation. During this period, the software
notifies you of the time remaining until expiration. The 60-day evaluation period cannot be paused
or restarted. After the evaluation period expires, you can no longer perform some operations in
vCenter Server and ESXi. For example, you cannot power on or reset your virtual machines. In
addition, all hosts are disconnected from the vCenter Server system. To continue to have full use
of ESXi and vCenter Server operations, you must acquire license keys.
Select Menu > Shortcuts. The Shortcuts page has a navigation pane on the left and Inventories,
Monitoring, and Administration panes on the right.
The Hosts and Clusters inventory view shows all host and cluster objects in a data center. You can
further organize the hosts and clusters into folders.
The VMs and Templates inventory view shows all VM and template objects in a data center. You
can also organize the VMs and templates into folders.
As with the other inventory views, you can organize your datastore and network objects into
folders.
You might create a data center object for each data center geographical location. Or, you might
create a data center object for each organizational unit in your enterprise.
You might create some data centers for high-performance environments and other data centers for
less demanding VMs.
You plan the setup of your virtual environment depending on your requirements.
A large vSphere implementation might contain several virtual data centers with a complex
arrangement of hosts, clusters, resource pools, and networks. It might include multiple vCenter
Server systems.
Smaller implementations might require a single virtual data center with a less complex topology.
Regardless of the scale of your virtual environment, consider how the VMs that it supports are
used and administered.
Populating and organizing your inventory involves the following tasks:
Configuring storage systems and creating datastore inventory objects to provide logical
containers for storage devices in your inventory
The authorization to perform tasks in vCenter Server is governed by an access control system.
Through this system, the vCenter Server administrator can specify in detail which users or groups
can perform which tasks on which objects.
A permission is set on an object in the vCenter Server object hierarchy. Each permission
associates the object with a group or user and the group or user access roles. For example, you can
select a VM object, add one permission that gives the Read-only role to group 1, and add a second
permission that gives the Administrator role to user 2.
By assigning a different role to a group of users on different objects, you control the tasks that
those users can perform in your vSphere environment. For example, to allow a group to configure
memory for the host, select that host and add a permission that grants a role to that group that
includes the Host.Configuration.Memory Configuration privilege.
A role is a set of one or more privileges. For example, the Virtual Machine Power User sample
role consists of several privileges in categories such as Datastore and Global. A role is assigned to
a user or group and determines the level of access of that user or group.
You cannot change the privileges associated with the system roles:
Administrator role: Users with this role for an object may view and perform all actions on the
object.
Read-only role: Users with this role for an object may view the state of the object and details
about the object.
No cryptography administrator role: Users with this role for an object have the same
privileges as users with the Administrator role, except for privileges in the Cryptographic
operations category.
All roles are independent of each other. Hierarchy or inheritance between roles does not apply.
You can assign permissions to objects at different levels of the hierarchy. For example, you can
assign permissions to a host object or to a folder object that includes all host objects. You can also
assign permissions to a global root object to apply the permissions to all objects in all solutions.
For information about hierarchical inheritance of permissions and global permissions, see vSphere
Security at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.security.doc/GUID-52188148-C579-4F6A-8335-
CFBCE0DD2167.html
You can view all the objects to which a role is assigned and all the users or groups who are
granted the role.
To view information about a role, click Usage in the Roles pane and select a role from the Roles
list. The information provided to the right shows each object to which the role is assigned and the
users and groups who were granted the role.
In addition to specifying whether permissions propagate downward, you can override permissions
set at a higher level by explicitly setting different permissions for a lower-level object.
On the slide, user Greg is given Read-only access in the Training data center. This role is
propagated to all child objects except one, the Prod03-2 VM. For this VM, Greg is an
administrator.
On the slide, Group1 is assigned the VM_Power_On role, a custom role that contains only one
privilege: the ability to power on a VM. Group2 is assigned the Take_Snapshots role, another
custom role that contains the privileges to create and remove snapshots. Both roles propagate to
the child objects.
Because Greg belongs to both Group1 and Group2, he gets both VM_Power_On and
Take_Snapshots privileges for all objects in the Training data center.
You can override permissions set for a higher-level object by explicitly setting different
permissions for a lower-level object.
On the slide, Group1 is assigned the Administrator role at the Training data center and Group2 is
assigned the Read-only role on the VM object, Prod03-1. The permission granted to Group1 is
propagated to child objects.
Because Greg is a member of both Group1 and Group2, he gets administrator privileges on the
entire Training data center (the higher-level object), except for the VM called Prod03-1 (the
lower-level object). For this VM, he gets read-only access.
On the slide, three permissions are assigned to the Training data center:
Greg is a member of both Group1 and Group2. Assume that propagation to child objects is
enabled on all roles. Although Greg is a member of both Group1 and Group2, he gets the No
Access privilege to the Training data center and all objects under it. Greg gets the No Access
privilege because explicit user permissions on an object take precedence over all group
permissions on that same object.
The Virtual Beans VM Provisioning role is one of many examples of roles that can be created.
Define a role using the smallest number of privileges possible to maximize security and control
over your environment. Give the roles names that explicitly indicate what each role allows, to
make its purpose clear.
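A hedged PowerCLI sketch of creating such a narrowly scoped role and assigning it follows. The role
name, privilege list, data center, and principal are examples only.

    # Create a role that contains only the privileges needed to power VMs on and off.
    $privs = Get-VIPrivilege -Id VirtualMachine.Interact.PowerOn, VirtualMachine.Interact.PowerOff
    New-VIRole -Name "Virtual Beans VM Power" -Privilege $privs

    # Grant the role to a group on the Training data center and propagate it to child objects.
    New-VIPermission -Entity (Get-Datacenter -Name "Training") `
                     -Principal "VCLASS\VMPowerOperators" `
                     -Role "Virtual Beans VM Power" `
                     -Propagate:$true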
Often, you apply a permission to a vCenter Server inventory object such as an ESXi host or a VM.
When you apply a permission, you specify that a user or group has a set of privileges, called a
role, on the object.
Global permissions give a user or group privileges to view or manage all objects in each of the
inventory hierarchies in your deployment. The example on the slide shows that the global root
object has permissions over all vCenter Server objects, including content libraries, vCenter Server
instances, and tags. Global permissions allow access across vCenter Server instances. vCenter
Server permissions, however, are effective only on objects in a particular vCenter Server instance.
The vCenter Server Appliance Management Interface supports backing up key parts of the
appliance. You can protect vCenter Server data and minimize the time required to restore data
center operations.
The backup process collects key files into a tar bundle and compresses the bundle to reduce the
network load. To minimize the storage impact, the transmission is streamed without caching in the
appliance. To reduce the total time required to complete the backup operation, the backup process
handles the different components in parallel.
You can encrypt the compressed file before transmission to the backup storage location. When
you choose encryption, you must supply a password that can be used to decrypt the file during
restoration.
The backup operation always includes the vCenter Server database and system configuration files,
so that a restore operation has all the data to recreate an operational appliance. Optionally, you can
specify that a backup operation should include Statistics, Events, and Tasks from the current state
of the data center. Current alarms are always included in a backup.
You use the vCenter Server Appliance Management Interface to perform a file-based backup of
the vCenter Server core configuration, inventory, and historical data of your choice. The backed-
up data is streamed over the selected protocol to a remote system. The backup is not stored on
vCenter Server Appliance.
When specifying the backup location, use the following syntax:
protocol://<server-address>[:port-number]/<folder>/<subfolder>. For example, an SFTP location might look like sftp://backup.vclass.local:22/vcsa-backups/daily.
You can perform a file-based restore only for a vCenter Server Appliance instance that you
previously backed up by using the vCenter Server Appliance Management Interface. You can
perform the restore operation by using the GUI installer of vCenter Server Appliance. The process
consists of deploying a new vCenter Server Appliance instance and copying the data from the file-
based backup to the new appliance.
You can also perform a restore operation by deploying a new vCenter Server Appliance instance
and using the vCenter Server Appliance Management Interface to copy the data from the file-
based backup to the new appliance.
The schedule can be set up with information about the backup location, recurrence, and
retention for the backups.
Changes to the logging settings take effect immediately. You do not have to restart the vCenter
Server system.
The CPU and Memory views provide a historical view of CPU and memory use.
Using the Disks view, you can monitor the available disk space.
If a vCenter Server patch or update occurs in the same time period as the monthly security patch,
the monthly security patch is rolled into the vCenter Server patch or update.
vSphere is a virtualization platform that forms the foundation for building and managing an
organization's virtual, public, and private cloud infrastructures. vCenter Server Appliance sits at
the heart of vSphere and provides services to manage various components of a virtual
infrastructure, such as ESXi hosts, virtual machines, and storage and networking resources. As
large virtual infrastructures are built using vSphere, vCenter Server becomes an important element
in ensuring the business continuity of an organization. vCenter Server must protect itself from a
set of hardware and software failures in an environment and must recover transparently from such
failures.
With vCenter Server High Availability, you can recover quickly from a vCenter Server failure.
Using automated failover, vCenter Server failover occurs with minimal downtime.
The animation demonstrates what happens if an active node fails. To play the animation, go to
https://vmware.bravais.com/s/PlUBZn2zCO7HE5qN2fm4.
The active node runs the active instance of vCenter Server Appliance. The node uses an IP address
on the Management network for the vSphere Client to connect to.
If the active node fails (because of a hardware, software, or network failure), the passive node
takes over the role of the active node. The IP address to which the vSphere Client was connected
is switched from the failed node to the new active node. The new active node starts serving client
requests. Meanwhile, the user must log back in to the vSphere Client for continued access to
vCenter Server.
Because only two nodes are up and running, the vCenter Server High Availability cluster is
considered to be running in a degraded state and subsequent failover cannot occur. A subsequent
failure in a degraded cluster means vCenter Server services are no longer available. A passive
node is required to return the cluster to a healthy state.
If the passive node fails, the active node continues to operate as normal. Because no disruption in
service occurs, users can continue to access the active node using the vSphere Client.
Because the passive node is down, the active node is no longer protected. The cluster is considered
to be running in a degraded state because only two nodes are up and running. A subsequent failure
in a degraded cluster means vCenter Server services are no longer available. A passive node is
required to return the cluster to a healthy state.
For more information about the vCenter Server High Availability requirements, see vSphere
Availability at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.avail.doc/GUID-63F459B7-8884-4818-8872-
C9753B2E0215.html.
The ESXi management network port is a VMkernel port that connects to network or remote
services, including vpxd on vCenter Server and VMware Host Client.
Each ESXi management network port and each VMkernel port must be configured with its own IP
address, netmask, and gateway.
To help configure virtual switches, you can create port groups. A port group is a template that
stores configuration information to create virtual switch ports on a virtual switch. VM port groups
connect VMs to one another with common networking properties.
VM port groups and VMkernel ports connect to the outside world through the physical Ethernet
adapters that are connected to the virtual switch uplink ports.
When you design your networking environment, you can team all your networks on a single
virtual switch. Alternatively, you can opt for multiple virtual switches, each with a separate
network. The decision partly depends on the layout of your physical networks.
For example, you might not have enough network adapters to create a separate virtual switch for
each network. Instead, you might place your network adapters in a single virtual switch and isolate
the networks by using VLANs.
Because physical NICs are assigned at the virtual switch level, all ports and port groups that are
defined for a particular switch share the same hardware.
VLANs provide for logical groupings of switch ports. All virtual machines or ports in a VLAN
communicate as if they are on the same physical LAN segment. A VLAN is a software-configured
broadcast domain. Using a VLAN provides the following benefits:
Creation of logical networks that are not based on the physical topology
Cost savings by partitioning the network without the overhead of deploying new routers
VLANs can be configured at the port group level. The ESXi host provides VLAN support through
virtual switch tagging, which is enabled by assigning a VLAN ID to a port group. Specifying a VLAN ID is optional. The VMkernel takes care of all tagging and untagging as the packets pass through the virtual switch.
The port on a physical switch to which an ESXi host is connected must be defined as a static trunk port. A trunk port is a port on a physical Ethernet switch that is configured to send and receive frames tagged with the VLAN IDs of multiple VLANs.
The slide shows the standard switch vSwitch0 on the sa-esxi-01.vclass.local ESXi host. By
default, the ESXi installation creates a virtual machine port group named VM Network and a
VMkernel port named Management Network. You can create additional port groups such as the
Production port group, which you can use for the production virtual machine network.
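As an illustration, a port group such as Production can be created on vSwitch0 with a VLAN ID through the host networking API. The following pyVmomi sketch uses placeholder names and an assumed VLAN ID of 100.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in hosts.view if h.name == 'sa-esxi-01.vclass.local')

# Build a port group specification: name, VLAN ID, parent standard switch, and default policy.
pg_spec = vim.host.PortGroup.Specification(name='Production', vlanId=100,
                                           vswitchName='vSwitch0',
                                           policy=vim.host.NetworkPolicy())
host.configManager.networkSystem.AddPortGroup(portgrp=pg_spec)
Disconnect(si)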
For performance and security, you should remove the VM Network virtual machine port group
and keep VM networks and management networks separated.
You can change the connection speed and duplex of a physical adapter to transfer data in
compliance with the traffic rate.
If the physical adapter supports SR-IOV, you can enable it and configure the number of virtual
functions to use for virtual machine networking.
vCenter Server owns the configuration of the distributed switch. The configuration is consistent
across all hosts that use the distributed switch.
During a vSphere vMotion migration, a distributed switch tracks the virtual networking state (for
example, counters and port statistics) as the virtual machine moves from host to host. The tracking
provides a consistent view of a virtual network interface, regardless of the virtual machine location
or vSphere vMotion migration history. Tracking simplifies network monitoring and
troubleshooting activities where vSphere vMotion is used to migrate virtual machines between
hosts.
Networking security policy provides protection against MAC address impersonation and
unwanted port scanning.
Traffic shaping is useful when you want to limit the amount of traffic to a VM or a group of VMs.
Use the teaming and failover policy to determine the following information:
How the network traffic of VMs and VMkernel adapters that are connected to the switch is distributed between physical adapters
How the traffic is rerouted if a physical adapter fails
Promiscuous mode: Promiscuous mode allows a virtual switch or port group to forward all traffic regardless of destination. The default is Reject.
MAC address changes: The default is Reject. If this option is set to Reject and the guest
attempts to change the MAC address assigned to the virtual NIC, it stops receiving frames.
Forged transmits: A frame’s source address field might be altered by the guest and contain a
MAC address other than the assigned virtual NIC MAC address. You can set the Forged
Transmits parameter to accept or reject such frames. The default is Reject.
A virtual machine’s network bandwidth can be controlled by enabling the network traffic shaper.
The network traffic shaper, when used on a standard switch, shapes only outbound network traffic.
To control inbound traffic, use a load-balancing system or turn on rate-limiting features on your
physical router.
The ESXi host shapes only outbound traffic by establishing parameters for the following traffic
characteristics:
Average bandwidth (Kbps): Establishes the number of kilobits per second to allow across a
port, averaged over time. The average bandwidth is the allowed average load.
Peak bandwidth (Kbps): The maximum number of kilobits per second to allow across a port when it is sending a burst of traffic. This value caps the bandwidth that a port uses while it draws on its burst bonus, which is configured with the Burst size parameter.
Burst size (KB): The maximum number of kilobytes to allow in a burst. If this parameter is set, a port might gain a burst bonus if it does not use all its allocated bandwidth. Whenever the port needs more bandwidth than specified in the Average bandwidth field, the port might be allowed to transmit data temporarily at a higher speed if a burst bonus is available. This parameter caps the number of kilobytes that can accumulate in the burst bonus and therefore be transferred at the higher speed.
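These values can also be set programmatically. The following pyVmomi sketch applies an outbound traffic shaping policy to an existing standard switch port group; the host name, port group name, and bandwidth values are placeholders. Note that the API expects average and peak bandwidth in bits per second and burst size in bytes, whereas the vSphere Client fields use Kbps and KB.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in hosts.view if h.name == 'sa-esxi-01.vclass.local')
netsys = host.configManager.networkSystem

# Fetch the current specification of the Production port group and enable traffic shaping.
pg = next(p for p in netsys.networkInfo.portgroup if p.spec.name == 'Production')
spec = pg.spec
spec.policy.shapingPolicy = vim.host.NetworkPolicy.TrafficShapingPolicy(
    enabled=True,
    averageBandwidth=100000 * 1000,   # 100,000 Kbps expressed in bits per second
    peakBandwidth=200000 * 1000,      # 200,000 Kbps expressed in bits per second
    burstSize=102400 * 1024)          # 102,400 KB expressed in bytes
netsys.UpdatePortGroup(pgName='Production', portgrp=spec)
Disconnect(si)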
NIC teaming increases the network bandwidth of the switch and provides redundancy. To
determine how the traffic is rerouted when an adapter fails, you include physical NICs in a
failover order.
To determine how the virtual switch distributes the network traffic between the physical NICs in a
team, you select load-balancing algorithms depending on the needs and capabilities of your
environment:
Load-balancing policy: This policy determines how network traffic is distributed between the
network adapters in a NIC team. Virtual switches load balance only the outgoing traffic.
Incoming traffic is controlled by the load-balancing policy on the physical switch.
Failback policy: By default, a failback policy is enabled on a NIC team. If a failed physical
NIC returns online, the virtual switch sets the NIC back to active by replacing the standby
NIC that took over its slot.
Notify switches policy: With this policy, you can determine how the ESXi host communicates
failover events. When a physical NIC connects to the virtual switch or when traffic is rerouted
to a different physical NIC in the team, the virtual switch sends notifications over the network
to update the lookup tables on physical switches. Notifying the physical switch offers the
lowest latency when a failover or a migration with vSphere vMotion occurs.
Default NIC teaming and failover policies are set for the entire standard switch. These default
settings can be overridden at the port group level. The policies show what is inherited from the
settings at the switch level.
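As a sketch of such an override, the following pyVmomi example sets an explicit NIC teaming and failover policy on one port group of a standard switch. The policy string, NIC names, and other values are placeholders; loadbalance_srcid corresponds to the route based on the originating virtual port.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in hosts.view if h.name == 'sa-esxi-01.vclass.local')
netsys = host.configManager.networkSystem

pg = next(p for p in netsys.networkInfo.portgroup if p.spec.name == 'Production')
spec = pg.spec
spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy(
    policy='loadbalance_srcid',            # route based on the originating virtual port
    notifySwitches=True,                   # notify physical switches on failover events
    rollingOrder=False,                    # False roughly corresponds to Failback: Yes (assumption)
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=['vmnic1'],
                                                   standbyNic=['vmnic2']))
netsys.UpdatePortGroup(pgName='Production', portgrp=spec)
Disconnect(si)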
Route based on the originating virtual port has the following characteristics:
Traffic is evenly distributed if the number of virtual NICs is greater than the number of physical NICs in the team.
Resource consumption is low because, in most cases, the virtual switch calculates uplinks for the VM only once.
The virtual switch is not aware of the traffic load on the uplinks, and it does not load balance the traffic to uplinks that are less used.
The bandwidth that is available to a VM is limited to the speed of the uplink that is associated with the relevant port ID, unless the VM has more than one virtual NIC.
Route based on source MAC hash has the following characteristics:
VMs use the same uplink because the MAC address is static. Powering a VM on or off does not change the uplink that the VM uses.
The bandwidth that is available to a VM is limited to the speed of the uplink that is associated with the relevant port ID, unless the VM uses multiple source MAC addresses.
Resource consumption is higher than with a route based on the originating virtual port because the virtual switch calculates an uplink for every packet.
The virtual switch is not aware of the load of the uplinks, so uplinks might become overloaded.
Route based on IP hash has the following characteristics:
The load is more evenly distributed compared to the route based on the originating virtual port and the route based on source MAC hash because the virtual switch calculates the uplink for every packet.
VMs that communicate with multiple IP addresses have a potentially higher throughput.
The virtual switch is not aware of the actual load of the uplinks.
Monitoring the link status that is provided by the network adapter detects failures such as cable
pulls and physical switch power failures. This monitoring does not detect configuration errors,
such as a physical switch port being blocked by the Spanning Tree Protocol or misconfigured
VLAN membership. This method cannot detect upstream, nondirectly connected physical switch
or cable failures.
Beaconing introduces a 62-byte packet load approximately every 1 second per physical NIC.
When beaconing is activated, the VMkernel sends out and listens for probe packets on all NICs
that are configured as part of the team. This technique can detect failures that link-status
monitoring alone cannot. Consult your switch manufacturer to verify the support of beaconing in
your environment. For information on beacon probing, see VMware knowledge base article
1005577 at http://kb.vmware.com/kb/1005577.
A physical switch can be notified by the VMkernel whenever a virtual NIC is connected to a
virtual switch. A physical switch can also be notified whenever a failover event causes a virtual
NIC’s traffic to be routed over a different physical NIC. The notification is sent over the network
to update the lookup tables on physical switches. In most cases, this notification process is desirable because it minimizes the latency of failover events and of migrations with vSphere vMotion.
If Failback is set to Yes, the failed adapter is returned to active duty immediately on
recovery, displacing the standby adapter that took its place at the time of failure.
If Failback is set to No, a failed adapter is left inactive even after recovery, until another
currently active adapter fails, requiring its replacement.
A datastore is a generic term for a container that holds files and objects. Datastores are logical
containers, analogous to file systems, that hide the specifics of each storage device and provide a
uniform model for storing virtual machine files. A VM is stored as a set of files in its own
directory or as a group of objects in a datastore.
You can display all datastores that are available to your hosts and analyze their properties.
Depending on the type of storage that you use, datastores can be formatted with VMFS or NFS.
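A quick way to review those properties across the inventory is through the API. The following pyVmomi sketch lists every datastore with its type, capacity, and free space; the connection details are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)

# Print name, type (VMFS, NFS, vsan, and so on), capacity, and free space for each datastore.
for ds in view.view:
    s = ds.summary
    print(f'{s.name:<25} {s.type:<6} '
          f'capacity={s.capacity / 2**30:8.1f} GiB  free={s.freeSpace / 2**30:8.1f} GiB')
Disconnect(si)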
In the vSphere environment, ESXi hosts support several storage technologies:
Direct-attached storage: Internal or external storage disks or arrays attached to the host
through a direct connection instead of a network connection.
Fibre Channel (FC): A high-speed transport protocol used for SANs. Fibre Channel
encapsulates SCSI commands, which are transmitted between Fibre Channel nodes. In
general, a Fibre Channel node is a server, a storage system, or a tape drive. A Fibre Channel
switch interconnects multiple nodes, forming the fabric in a Fibre Channel network.
FCoE: The Fibre Channel traffic is encapsulated into Fibre Channel over Ethernet (FCoE)
frames. These FCoE frames are converged with other types of traffic on the Ethernet network.
iSCSI: A SCSI transport protocol, providing access to storage devices and cabling over
standard TCP/IP networks. iSCSI maps SCSI block-oriented storage over TCP/IP. Initiators, such as iSCSI host bus adapters or the software iSCSI initiator in ESXi hosts, transmit SCSI commands to targets located in iSCSI storage systems.
NAS: Storage shared over standard TCP/IP networks at the file system level. NAS storage is
used to hold NFS datastores. The NFS protocol does not support SCSI commands.
iSCSI, network-attached storage (NAS), and FCoE can run over high-speed networks, providing increased storage performance levels and ensuring sufficient bandwidth. With sufficient bandwidth, multiple types of high-bandwidth protocol traffic can coexist on the same network. For more information about physical NIC support and maximum ports supported, see VMware Configuration Maximums at https://configmax.vmware.com.
Direct-attached storage (DAS) supports vSphere vMotion when combined with vSphere Storage vMotion.
Direct-attached storage, as opposed to SAN storage, is where many administrators install ESXi.
Direct-attached storage is also ideal for small environments because of the cost savings associated
with purchasing and managing a SAN. The drawback is that you lose many of the features that make virtualization a worthwhile investment, for example, vSphere vMotion, vSphere HA, and vSphere DRS, which rely on shared storage to balance and protect workloads across ESXi hosts. Direct-attached storage can also be used to store noncritical data:
Decommissioned VMs
VM templates
ESXi supports different methods of booting from the SAN to avoid handling the maintenance of
additional direct-attached storage or if you have diskless hardware configurations, such as blade
systems. If you set up your host to boot from a SAN, your host’s boot image is stored on one or
more LUNs in the SAN storage system. When the host starts, it boots from the LUN on the SAN
rather than from its direct-attached disk.
For ESXi hosts, you can boot from software iSCSI, a supported independent hardware iSCSI adapter, or a supported dependent hardware iSCSI adapter. The network adapter must support the iSCSI Boot Firmware Table (iBFT) format, which is a method of communicating parameters about the iSCSI boot device to an operating system.
VMFS is a clustered file system where multiple ESXi hosts can read and write to the same storage
device simultaneously. The clustered file system provides unique, virtualization-based services:
Migration of running VMs from one ESXi host to another without downtime
Using VMFS, IT organizations can simplify VM provisioning by efficiently storing the entire VM
state in a central location. Multiple ESXi hosts can access shared VM storage concurrently.
The size of a VMFS datastore can be increased dynamically when VMs residing on the VMFS
datastore are powered on and running. A VMFS datastore efficiently stores both large and small
files belonging to a VM. A VMFS datastore can support virtual disk files. A virtual disk file has a
maximum of 62 TB. A VMFS datastore uses subblock addressing to make efficient use of storage
for small files.
VMFS datastores can be created on several types of SCSI-based storage, including:
Direct-attached storage
iSCSI storage
A virtual disk stored on a VMFS datastore always appears to the VM as a mounted SCSI device.
The virtual disk hides the physical storage layer from the VM's operating system.
For the operating system in the VM, VMFS preserves the internal file system semantics. As a
result, the operating system running in the VM sees a native file system, not VMFS. These
semantics ensure correct behavior and data integrity for applications running on the VMs.
NAS is a specialized storage device that connects to a network and can provide file access services
to ESXi hosts.
NFS datastores are treated like VMFS datastores because they can hold VM files, templates, and
ISO images. In addition, like a VMFS datastore, an NFS volume allows the vSphere vMotion
migration of VMs whose files reside on an NFS datastore. The NFS client built in to ESXi uses
NFS protocol versions 3 and 4.1 to communicate with the NAS or NFS servers.
ESXi hosts do not use the Network Lock Manager protocol, which is a standard protocol that is
used to support the file locking of NFS-mounted files. VMware has its own locking protocol. NFS
3 locks are implemented by creating lock files on the NFS server. NFS 4.1 uses server-side file
locking.
Because NFS 3 and NFS 4.1 clients do not use the same locking protocol, you cannot use different
NFS versions to mount the same datastore on multiple hosts. Accessing the same virtual disks
from two incompatible clients might result in incorrect behavior and cause data corruption.
When vSAN is enabled on a cluster, a single vSAN datastore is created. This datastore uses the
storage components of each host in the cluster.
vSAN can be configured as hybrid or all-flash storage.
In a hybrid storage architecture, vSAN pools server-attached HDDs and SSDs to create a
distributed shared datastore. This datastore abstracts the storage hardware to provide a software-
defined storage tier for VMs. Flash is used as a read cache/write buffer to accelerate performance,
and magnetic disks provide capacity and persistent data storage.
Alternately, vSAN can be deployed as an all-flash storage architecture in which flash devices are
used as a write cache. SSDs provide capacity, data persistence, and consistent, fast response times.
In the all-flash architecture, the tiering of SSDs results in a cost-effective implementation: a write-
intensive, enterprise-grade SSD cache tier and a read-intensive, lower-cost SSD capacity tier.
vSphere Virtual Volumes virtualizes SAN and NAS devices by abstracting physical hardware
resources into logical pools of capacity.
vSphere Virtual Volumes provides the following benefits:
Greater scalability
Raw device mapping (RDM) is a file stored in a VMFS volume that acts as a proxy for a raw
physical device.
Instead of storing VM data in a virtual disk file that is stored on a VMFS datastore, you can store
the guest operating system data directly on a raw LUN. Storing the data is useful if you run
applications in your VMs that must know the physical characteristics of the storage device. By
mapping a raw LUN, you can use existing SAN commands to manage storage for the disk.
Use RDM when a VM must interact with a real disk on the SAN. This condition occurs when you
make disk array snapshots or have a large amount of data that you do not want to move onto a
virtual disk as a part of a physical-to-virtual conversion.
For information to help you plan for your storage needs, see vSphere Storage at
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-
8AE88758-20C1-4873-99C7-181EF9ACFA70.html.
Another good source of information is the vSphere Storage page at https://storagehub.vmware.com/.
To connect to the Fibre Channel SAN, your host should be equipped with Fibre Channel host bus
adapters (HBAs).
Unless you use Fibre Channel direct connect storage, you need Fibre Channel switches to route
storage traffic. If your host contains FCoE adapters, you can connect to your shared Fibre Channel
devices by using an Ethernet network.
In this configuration, a host connects to a SAN fabric, which consists of Fibre Channel switches
and storage arrays, using a Fibre Channel adapter. LUNs from a storage array become available to
the host. You can access the LUNs and create datastores for your storage needs. These datastores
use the VMFS format.
Alternatively, you can access a storage array that supports vSphere Virtual Volumes and create
vSphere Virtual Volumes datastores on the array’s storage containers.
Each SAN server might host numerous applications that require dedicated storage for application processing.
SAN switches: SAN switches connect various elements of the SAN. SAN switches might
connect hosts to storage arrays. Using SAN switches, you can set up path redundancy to
address any path failures from host server to switch, or from storage array to switch.
Fabric: The SAN fabric is the network portion of the SAN. When one or more SAN switches
are connected, a fabric is created. The Fibre Channel (FC) protocol is used to communicate
over the entire network. A SAN can consist of multiple interconnected fabrics. Even a simple
SAN often consists of two fabrics for redundancy.
Connections (HBAs and storage processors): Host servers and storage systems are connected
to the SAN fabric through ports in the fabric:
A port is the connection from a device into the SAN. Each node in the SAN, such as a host, a storage device, or a fabric component (router or switch), has one or more ports that connect it to the SAN. Ports can be identified in the following ways:
World Wide Port Name (WWPN): A globally unique identifier for a port that allows certain
applications to access the port. The Fibre Channel switches discover the WWPN of a device
or host and assign a port address to the device.
Port_ID: Within the SAN, each port has a unique port ID that serves as the Fibre Channel address
for that port. The Fibre Channel switches assign the port ID when the device logs in to the
fabric. The port ID is valid only while the device is logged on.
You can use zoning and LUN masking to segregate SAN activity and restrict access to storage
devices.
You can protect access to storage in your vSphere environment by using zoning and LUN masking with your SAN resources. For example, you might manage zones defined for testing independently of production zones so that test activity does not interfere with production traffic.
By default, ESXi hosts use only one path from a host to a given LUN at any one time. If the path
actively being used by the ESXi host fails, the server selects another available path.
The process of detecting a failed path and switching to another is called path failover. A path fails
if any of the components along the path (HBA, cable, switch port, or storage processor) fail.
An active-active disk array allows access to the LUNs simultaneously through the available
storage processors without significant performance degradation. All the paths are active at all
times (unless a path fails).
In an active-passive disk array, one storage processor is actively servicing a given LUN. The
other storage processor acts as a backup for the LUN and might be actively servicing other
LUN I/O.
I/O can be sent only to an active processor. If the primary storage processor fails, one of the
secondary storage processors becomes active, either automatically or through administrative
intervention.
The Fibre Channel traffic is encapsulated into FCoE frames. These FCoE frames are converged
with other types of traffic on the Ethernet network.
When both Ethernet and Fibre Channel traffic are carried on the same Ethernet link, use of the
physical infrastructure increases. FCoE also reduces the total number of network ports and
cabling.
You add the software FCoE adapter by selecting the host, clicking the Configure tab, selecting
Storage Adapters, and clicking Add Software Adapter.
An iSCSI SAN consists of an iSCSI storage system, which contains one or more LUNs and one or
more storage processors. Communication between the host and the storage array occurs over a
TCP/IP network.
The ESXi host is configured with an iSCSI initiator. An initiator can be hardware-based, where
the initiator is an iSCSI HBA. Or the initiator can be software-based, known as the iSCSI software
initiator.
An initiator transmits SCSI commands over the IP network. A target receives SCSI commands
from the IP network. Your iSCSI network can include multiple initiators and targets. iSCSI is
SAN-oriented for the following reasons:
The main addressable, discoverable entity is an iSCSI node. An iSCSI node can be an initiator or a
target. An iSCSI node requires a name so that storage can be managed regardless of address.
The iSCSI name can use one of the following formats: the iSCSI qualified name (IQN) or the extended unique identifier (EUI).
The IQN can be up to 255 characters long and is built from the following components:
The prefix iqn.
A date code specifying the year and month in which the organization registered the domain or subdomain name that is used as the naming authority string
(Optional) A colon (:), followed by a string of the assigning organization's choosing, which must make each assigned iSCSI name unique
The EUI format uses the following components:
The prefix eui.
A name that includes 24 bits for a company name that is assigned by the IEEE and 40 bits for a unique ID, such as a serial number
On ESXi hosts, SCSI storage devices use various identifiers. Each identifier serves a specific
purpose. For example, the VMkernel requires an identifier, generated by the storage device, which
is guaranteed to be unique to each LUN. If the storage device cannot provide a unique identifier,
the VMkernel must generate a unique identifier to represent each LUN or disk.
The following SCSI storage device identifiers are available:
Runtime name: The name of the first path to the device. The runtime name is a user-friendly
name that is created by the host after each reboot. It is not a reliable identifier for the disk
device because it is not persistent. The runtime name might change if you add HBAs to the
ESXi host. However, you can use this name when you use command-line utilities to interact
with storage that an ESXi host recognizes.
iSCSI name: A worldwide unique name for identifying the node. iSCSI uses the IQN and
EUI. IQN uses the format iqn.yyyy-mm.naming-authority:unique name.
The iSCSI initiators transport SCSI requests and responses, encapsulated in the iSCSI protocol,
between the host and the iSCSI target. Your host supports two types of initiators: software iSCSI
and hardware iSCSI.
A software iSCSI initiator is VMware code built in to the VMkernel. Using the initiator, your host
can connect to the iSCSI storage device through standard network adapters. The software iSCSI
initiator handles iSCSI processing while communicating with the network adapter. With the
software iSCSI initiator, you can use iSCSI technology without purchasing specialized hardware.
A hardware iSCSI initiator is a specialized third-party adapter capable of accessing iSCSI storage
over TCP/IP. Hardware iSCSI initiators are divided into two categories: dependent hardware
iSCSI and independent hardware iSCSI.
A dependent hardware iSCSI initiator, also known as an iSCSI host bus adapter, is a standard
network adapter that includes the iSCSI offload function. To use this type of adapter, you must
configure networking for the iSCSI traffic and bind the adapter to an appropriate VMkernel iSCSI
port.
Networking configuration for software iSCSI involves creating a VMkernel port on a virtual
switch to handle your iSCSI traffic.
Depending on the number of physical adapters that you want to use for the iSCSI traffic, the
networking setup can be different:
If you have one physical network adapter, you need a VMkernel port on a virtual switch.
If you have two or more physical network adapters for iSCSI, you can use these adapters for
host-based multipathing.
For performance and security, isolate your iSCSI network from other networks. Physically
separate the networks. If physically separating the networks is impossible, logically separate the
networks from one another on a single virtual switch by configuring a separate VLAN for each
network.
You must activate your software iSCSI adapter so that your host can use it to access iSCSI
storage.
You can activate only one software iSCSI adapter.
NOTE
If you boot from iSCSI using the software iSCSI adapter, the adapter is enabled, and
the network configuration is created at the first boot. If you disable the adapter, it is
reenabled each time you boot the host.
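Activation can also be performed through the host storage API. The following pyVmomi sketch enables the software iSCSI adapter on a host and lists the iSCSI adapters so that you can find the new device name; the host name and credentials are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in hosts.view if h.name == 'sa-esxi-01.vclass.local')
storage = host.configManager.storageSystem

# Enable the software iSCSI initiator (a vmhba device is created on the host).
storage.UpdateSoftwareInternetScsiEnabled(enabled=True)

# List iSCSI adapters to find the software adapter's device name (for example, vmhba65).
for hba in storage.storageDeviceInfo.hostBusAdapter:
    if isinstance(hba, vim.host.InternetScsiHba):
        print(hba.device, hba.iScsiName)
Disconnect(si)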
The iSCSI adapter discovers storage targets by using one of the following methods:
Static discovery: The initiator does not have to perform discovery. The initiator knows in advance all the targets that it will contact. It uses their IP addresses and domain names to communicate with them.
Dynamic discovery or SendTargets discovery: Each time the initiator contacts a specified
iSCSI server, it sends the SendTargets request to the server. The server responds by supplying
a list of available targets to the initiator.
The names and IP addresses of these targets appear as static targets in the vSphere Client. You
can remove a static target that is added by dynamic discovery. If you remove the target, the
target might be returned to the list during the next rescan operation. The target might also be
returned to the list if the HBA is reset or the host is rebooted.
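The sketch below adds a SendTargets (dynamic discovery) address to a software iSCSI adapter and rescans it, using pyVmomi. The adapter device name, target address, and other values are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in hosts.view if h.name == 'sa-esxi-01.vclass.local')
storage = host.configManager.storageSystem

# Add a dynamic discovery (SendTargets) address to the software iSCSI adapter.
target = vim.host.InternetScsiHba.SendTarget(address='172.20.10.50', port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice='vmhba65', targets=[target])

# Rescan the adapter so that discovered targets and LUNs become visible.
storage.RescanHba(hbaDevice='vmhba65')
Disconnect(si)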
You can implement CHAP to provide authentication between iSCSI initiators and targets.
ESXi supports the following CHAP authentication methods:
Unidirectional or one-way CHAP: The target authenticates the initiator, but the initiator does
not authenticate the target. You must specify the CHAP secret so that your initiators can
access the target.
Bidirectional or mutual CHAP: With an extra level of security, the initiator can authenticate
the target. You must specify different target and initiator secrets.
CHAP uses a three-way handshake algorithm to verify the identity of your host and, if applicable,
of the iSCSI target when the host and target establish a connection. The verification is based on a
predefined private value, or CHAP secret, that the initiator and target share. ESXi implements
CHAP as defined in RFC 1994.
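A hedged sketch of configuring unidirectional CHAP on a software iSCSI adapter with pyVmomi follows. The adapter name, CHAP name, and secret are placeholders, and the object and property names reflect the vSphere API authentication properties data object.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in hosts.view if h.name == 'sa-esxi-01.vclass.local')
storage = host.configManager.storageSystem

# Require one-way CHAP: the target authenticates the initiator by using this name and secret.
chap = vim.host.InternetScsiHba.AuthenticationProperties(
    chapAuthEnabled=True,
    chapName='sa-esxi-01',
    chapSecret='iSCSI-CHAP-secret',
    chapAuthenticationType='chapRequired')
storage.UpdateInternetScsiAuthenticationProperties(iScsiHbaDevice='vmhba65',
                                                   authenticationProperties=chap)
Disconnect(si)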
When setting up your ESXi host for multipathing and failover, you can use multiple hardware
iSCSI adapters or multiple NICs. The choice depends on the type of iSCSI initiators on your host.
With software iSCSI and dependent hardware iSCSI, you can use multiple NICs that provide
failover for iSCSI connections between your host and iSCSI storage systems.
With independent hardware iSCSI, the host typically has two or more available hardware iSCSI
adapters, from which the storage system can be reached by using one or more switches.
Alternatively, the setup might include one adapter and two storage processors so that the adapter
can use a different path to reach the storage system.
After iSCSI multipathing is set up, each port on the ESXi system has its own IP address, but the
ports share the same iSCSI initiator IQN. When iSCSI multipathing is configured, the VMkernel
routing table is not consulted for identifying the outbound NIC to use. Instead, iSCSI multipathing
is managed using vSphere multipathing modules. Because of the latency that can be incurred,
routing iSCSI traffic is not recommended.
With software iSCSI and dependent hardware iSCSI, multipathing plug-ins do not have direct
access to physical NICs on your host. For this reason, you must first connect each physical NIC to
a separate VMkernel port. Then you use a port-binding technique to associate all VMkernel ports
with the iSCSI initiator.
For dependent hardware iSCSI, you must correctly install the physical network card, which should
appear on the host's Configure tab in the Virtual Switches view.
The Datastores pane lists all datastores currently configured for all managed ESXi hosts.
The example shows the contents of the VMFS datastore named Class-Datastore. The contents of
the datastore are folders that contain the files for virtual machines or templates.
Using thin-provisioned virtual disks for your VMs is a way to make the most of your datastore
capacity. But if your datastore is not sized properly, it can become overcommitted. A datastore
becomes overcommitted when the full capacity of its thin-provisioned virtual disks is greater than
the datastore’s capacity.
When a datastore runs out of space, all VM I/O on the datastore is paused, and the vSphere Client prompts you to provide more space on the underlying VMFS datastore.
Monitor your datastore capacity by setting alarms that alert you when a given percentage of a datastore's capacity is allocated or when a VM uses more than a given amount of disk space.
Manage your datastore capacity by dynamically increasing the size of your datastore when
necessary. You can also use vSphere Storage vMotion to mitigate space use issues.
For example, with vSphere Storage vMotion, you can migrate a VM off a datastore. During the migration, you can also convert the VM's virtual disks from thick format to thin format at the target datastore.
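A minimal pyVmomi sketch of such a migration follows: it relocates a VM's files to another datastore with vSphere Storage vMotion. The VM and datastore names are placeholders, and per-disk format conversion (through disk locators in the relocation spec) is omitted for brevity.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
dss = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
vm = next(v for v in vms.view if v.name == 'Prod03-1')
target_ds = next(d for d in dss.view if d.summary.name == 'Class-Datastore')

# Migrate the VM's storage to the target datastore; the task runs asynchronously in vCenter Server.
task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds))
Disconnect(si)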
An example of the unique identifier of a volume is the NAA ID. You require this information to
identify the VMFS datastore that must be increased.
You can dynamically increase the capacity of a VMFS datastore if the datastore has insufficient
disk space. You discover whether insufficient disk space is an issue when you create a VM or you
try to add more disk space to a VM.
Use one of the following methods:
Add an extent to the VMFS datastore: An extent is a partition on a LUN. You can add an
extent to any VMFS datastore. The datastore can stretch over multiple extents, up to 32.
Expand the VMFS datastore: You expand the size of the VMFS datastore by expanding its
underlying extent first.
By selecting the Let me migrate storage for all virtual machines and continue entering
maintenance mode after migration check box, all VMs and templates on the datastore are
automatically migrated to the datastore of your choice. The datastore enters maintenance mode
after all VMs and templates are moved off the datastore.
Datastore maintenance mode is a function of the vSphere Storage DRS feature, but you can use
maintenance mode without enabling vSphere Storage DRS. For more information on vSphere
Storage DRS, see vSphere Resource Management at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-98BD5A8A-260A-494F-BAAE-
74781F5C4B87.html.
Unmounting a VMFS datastore preserves the files on the datastore but makes the datastore
inaccessible to the ESXi host.
Do not perform any configuration operations that might result in I/O to the datastore while the
unmounting is in progress.
You can delete any type of VMFS datastore, including copies that you mounted without
resignaturing. Although you can delete the datastore without unmounting, you should unmount the
datastore first. Deleting a VMFS datastore destroys the pointers to the files on the datastore, so the
files disappear from all hosts that have access to the datastore.
To keep your data, back up the contents of your VMFS datastore before you delete the datastore.
The Pluggable Storage Architecture is a VMkernel layer responsible for managing multiple
storage paths and providing load balancing. An ESXi host can be attached to storage arrays with
either active-active or active-passive storage processor configurations.
VMware offers native load-balancing and failover mechanisms. VMware path selection policies
include the following examples:
Round Robin
Fixed
Third-party vendors can design their own load-balancing techniques and failover mechanisms for
particular storage array types to add support for new arrays. Third-party vendors do not need to
provide internal information or intellectual property about the array to VMware.
Fixed: The host always uses the preferred path to the disk when that path is available. If the
host cannot access the disk through the preferred path, it tries the alternative paths. This
policy is the default policy for active-active storage devices.
Most Recently Used: The host selects the first working path discovered at system boot time.
When the path becomes unavailable, the host selects an alternative path. The host does not
revert to the original path when that path becomes available. The Most Recently Used policy
does not use the preferred path setting. This policy is the default policy for active-passive
storage devices and is required for those devices.
Round Robin: The host uses a path selection algorithm that rotates through all available paths. In addition to path failover, the Round Robin multipathing policy supports load balancing across the available paths.
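The path selection policy of a device can be inspected and changed through the host storage API. The following pyVmomi sketch prints the current policy for each multipathed device and switches one of them to Round Robin. The host name and device selection are placeholders, and the policy strings (VMW_PSP_FIXED, VMW_PSP_MRU, VMW_PSP_RR) are the native path selection plug-in names.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in hosts.view if h.name == 'sa-esxi-01.vclass.local')
storage = host.configManager.storageSystem

# Show the current policy and path count for each multipathed logical unit.
for lun in storage.storageDeviceInfo.multipathInfo.lun:
    print(lun.id, lun.policy.policy, len(lun.path), 'paths')

# Switch the first device to the Round Robin path selection policy.
first = storage.storageDeviceInfo.multipathInfo.lun[0]
storage.SetMultipathLunPolicy(
    lunId=first.id,
    policy=vim.host.MultipathInfo.LogicalUnitPolicy(policy='VMW_PSP_RR'))
Disconnect(si)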
The NFS server contains one or more directories that are shared with the ESXi host over a TCP/IP
network. An ESXi host accesses the NFS server through a VMkernel port that is defined on a
virtual switch.
Compatibility issues between the two NFS versions prevent access to datastores using both
protocols at the same time from different hosts. If a datastore is configured as NFS 4.1, all hosts
that access that datastore must mount the share as NFS 4.1. Data corruption can occur if hosts
access a datastore with the wrong NFS version.
Native multipathing and session trunking: NFS 4.1 provides multipathing for servers that
support session trunking. When trunking is available, you can use multiple IP addresses to
access a single NFS volume. Client ID trunking is not supported.
Enhanced error recovery using server-side tracking of open files and delegations
Many general efficiency improvements, including session leases and less protocol overhead
Protocol integration: a side-band (auxiliary) protocol is no longer required for locking and mounting
Trunking (true NFS multipathing), where multiple paths (sessions) to the NAS array can be created and the load is distributed across those sessions
For each ESXi host that accesses an NFS datastore over the network, a VMkernel port must be
configured on a virtual switch. The name of this port can be anything that you want.
For performance and security reasons, isolate your NFS networks from the other networks, such as
your iSCSI network and your virtual machine networks.
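After the VMkernel port is in place, mounting an NFS export as a datastore can be scripted. The following pyVmomi sketch mounts an NFS 3 export; the NFS server address, export path, and datastore name are placeholders (use type 'NFS41' for NFS 4.1).

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in hosts.view if h.name == 'sa-esxi-01.vclass.local')

# Mount the NFS export as a datastore on this host.
nas_spec = vim.host.NasVolume.Specification(remoteHost='172.20.12.10',
                                            remotePath='/exports/nfs-datastore01',
                                            localPath='NFS-Datastore01',
                                            accessMode='readWrite',
                                            type='NFS')
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec=nas_spec)
print('Mounted datastore:', ds.summary.name)
Disconnect(si)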
You must take several configuration steps to prepare each ESXi host to use Kerberos
authentication.
Kerberos authentication requires that all nodes involved (the Active Directory server, the NFS
servers, and the ESXi hosts) be synchronized so that little to no time drift exists. Kerberos
authentication fails if any significant drift exists between the nodes.
To prepare your ESXi host to use Kerberos authentication, configure the NTP client settings to
reference a common NTP server (or the domain controller, if applicable).
When planning to use NFS Kerberos, consider the following points:
NFS 3 and 4.1 use different authentication credentials, resulting in incompatible UID and GID
on files.
Using different Active Directory users on different hosts that access the same NFS share can
cause the vSphere vMotion migration to fail.
After performing the initial configuration steps, you can configure the datastore to use Kerberos
authentication.
The screenshot shows a choice of Kerberos authentication only (krb5) or authentication with data
integrity (krb5i). The difference is whether only the header or the header and the body of each
NFS operation is signed using a secure checksum.
For more information about how to configure the ESXi hosts for Kerberos authentication, see
vSphere Storage at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-8AE88758-20C1-4873-99C7-
181EF9ACFA70.html.
Examples of a single point of failure in the NAS architecture include the NIC card in an ESXi
host, and the cable between the NIC card and the switch. To avoid single points of failure and to
create a highly available NAS architecture, configure the ESXi host with redundant NIC cards and
redundant physical switches.
The best approach is to install multiple NICs on an ESXi host and configure them in NIC teams.
NIC teams should be configured on separate external switches, with each NIC pair configured as a
team on the respective external switch.
In addition, you might apply a load-balancing algorithm, based on the link aggregation protocol
type supported on the external switch, such as 802.3ad or EtherChannel.
An even higher level of performance and high availability can be achieved with cross-stack,
EtherChannel-capable switches. With certain network switches, you can team ports across two or
more separate physical switches that are managed as one logical switch.
The slide compares two ways of using multiple links to the NFS server.
First configuration:
Configure NIC teaming by using adapters attached to separate physical switches.
Configure the NFS server with multiple IP addresses. IP addresses can be on the same subnet.
To use multiple links, configure NIC teams with the IP hash load-balancing policy.
Second configuration:
Configure NIC teaming with adapters attached to the same physical switch.
Configure the NFS server with multiple IP addresses. IP addresses can be on the same subnet.
To use multiple links, allow the VMkernel routing table to decide which link to send packets on (requires multiple datastores).
NFS 4.1 provides multipathing for servers that support the session trunking. When trunking is
available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is
not supported.
vSAN datastores help administrators use software-defined storage in the following ways:
Storage policy per VM architecture: With multiple policies per datastore, each VM can have
different storage.
vSphere and vCenter Server integration: vSAN capability is built in and requires no appliance. You enable vSAN on a cluster in the same way that you enable vSphere HA or vSphere DRS.
Scale-out storage: Up to 64 ESXi hosts can be in a cluster. Scale out by populating new nodes
in the cluster.
Built-in resiliency: The default vSAN storage policy establishes RAID 1 redundancy for all
VMs.
vSAN uses the concept of disk groups to pool together cache devices and capacity devices as
single management constructs. A disk group is a pool of one cache device and one to seven
capacity devices.
vSAN requires several hardware components that hosts do not normally have:
One Serial Attached SCSI (SAS), SATA solid-state drive (SSD), or PCIe flash device and one
to seven magnetic drives for each hybrid disk group.
One SAS, SATA SSD, or PCIe flash device and one to seven flash disks with flash capacity
enabled for all-flash disk groups.
Dedicated 1 Gbps network (10 Gbps is recommended) for hybrid disk groups.
The vSAN network must be configured for IPv4 or IPv6 and support unicast.
A vSAN cluster stores and manages data as flexible data containers called objects. When you
provision a VM on a vSAN datastore, a set of objects is created:
VM swap: Virtual machine swap file, which is created when the VM is powered on
VM storage policies are a set of rules that you configure for VMs. Each storage policy reflects a
set of capabilities that meet the availability, performance, and storage requirements of the
application or service-level agreement for that VM.
You should create storage policies before deploying the VMs that require these storage policies.
You can apply and update storage policies after deployment.
A vSphere administrator who is responsible for the deployment of VMs can select policies that are
created based on storage capabilities.
Based on the policy that is selected for the object VM, these capabilities are pushed back to the
vSAN datastore. The object is created across ESXi hosts and disk groups to satisfy these policies.
Creating templates makes the provisioning of virtual machines much faster and less error-prone than provisioning physical machines or creating each VM individually by using the New Virtual Machine wizard.
Templates coexist with VMs in the inventory. You can organize collections of VMs and templates into arbitrary folders and apply permissions to VMs and templates. You can change VMs into templates without having to make a full copy of the VM files or create a separate object.
You can deploy a VM from a template. The deployed VM is added to the folder that you selected
when creating the template.
The Clone to Template option offers you a choice of format for storing the VM's virtual disks:
Same format as the source
Thin-provisioned format
Thick-provisioned lazy-zeroed format
Thick-provisioned eager-zeroed format
The Convert to Template option does not offer a choice of format and leaves the VM’s disk file
intact.
To update your template to include new patches or software, you do not need to create a new template. Instead, you convert the existing template to a VM. You can then power on the VM.
For added security, you might want to prevent users from accessing the VM while you update it.
To prevent access, either disconnect the VM from the network or place it on an isolated network.
Log in to the VM’s guest operating system and apply the patch or install the software. When you
finish, power off the VM and convert it to a template again.
When you place ISO files in a content library, the ISO files are available only to VMs that are
registered on an ESXi host that can access the datastore where the content library is located. These
ISO files are not available to VMs on hosts that cannot see the datastore on which the content
library is located.
To clone a VM, you must be connected to vCenter Server. You cannot clone VMs if you use
VMware Host Client to manage a host directly.
When you clone a VM that is powered on, services and applications are not automatically
quiesced when the VM is cloned.
When deciding whether to clone a VM or deploy a VM from a template, consider the following
points:
VM templates use storage space, so you must plan your storage space requirements
accordingly.
Deploying a VM from a template is quicker than cloning a running VM, especially when you
must deploy many VMs at a time.
When you deploy many VMs from a template, all the VMs start with the same base image.
Cloning many VMs from a running VM might not create identical VMs, depending on the
activity happening within the VM when the VM is cloned.
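A minimal pyVmomi sketch of deploying a VM from a template follows. The template, cluster, folder, and VM names are placeholders.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
clusters = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.ClusterComputeResource], True)
template = next(v for v in vms.view if v.name == 'Web-Template' and v.config.template)
cluster = next(c for c in clusters.view if c.name == 'SA-Compute-01')
datacenter = next(e for e in content.rootFolder.childEntity if isinstance(e, vim.Datacenter))

# Deploy a new VM from the template into the data center's VM folder.
relocate = vim.vm.RelocateSpec(pool=cluster.resourcePool)
clone_spec = vim.vm.CloneSpec(location=relocate, powerOn=True, template=False)
task = template.CloneVM_Task(folder=datacenter.vmFolder, name='web-01', spec=clone_spec)
Disconnect(si)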
Customizing the guest operating system prevents conflicts that might occur when you deploy a
VM and a clone with identical guest OS settings simultaneously.
To manage customization specifications, select Policies and Profiles from the Menu drop-down
menu.
On the VM Customization Specifications pane, you can create specifications or manage existing
ones.
You can define the customization settings by using an existing customization specification during
cloning or deployment. You create the specification ahead of time. During cloning or deployment,
you can select the customization specification to apply to the new VM.
VMware Tools must be installed on the guest operating system that you want to customize.
The guest operating system must be installed on a disk attached to SCSI node 0:0 in the VM
configuration.
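A hedged sketch of applying a saved customization specification during deployment follows. The specification, template, and cluster names are placeholders, and the clone spec construction mirrors the earlier template example.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
clusters = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.ClusterComputeResource], True)
template = next(v for v in vms.view if v.name == 'Web-Template' and v.config.template)
cluster = next(c for c in clusters.view if c.name == 'SA-Compute-01')
datacenter = next(e for e in content.rootFolder.childEntity if isinstance(e, vim.Datacenter))

# Look up the saved customization specification and attach it to the clone operation.
spec_mgr = content.customizationSpecManager
if spec_mgr.DoesCustomizationSpecExist(name='Linux-Web-Spec'):
    item = spec_mgr.GetCustomizationSpec(name='Linux-Web-Spec')
    clone_spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(pool=cluster.resourcePool),
                                  customization=item.spec,  # guest OS settings applied on first boot
                                  powerOn=True)
    template.CloneVM_Task(folder=datacenter.vmFolder, name='web-02', spec=clone_spec)
Disconnect(si)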
For more about guest operating system customization, see vSphere Virtual Machine
Administration at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-55238059-912E-411F-A0E9-
A7A536972A91.html.
Through instant cloning, the source (parent) VM does not lose its state because of the cloning
process. You can move to just-in-time provisioning, given the speed and state-persisting nature of
this operation.
During an instant clone operation, the source VM is stunned for a short time, less than 1 second.
While the source VM is stunned, a new writable delta disk is generated for each virtual disk, and a
checkpoint is taken and transferred to the destination VM.
The destination VM powers on by using the source’s checkpoint.
After the destination VM is fully powered on, the source VM resumes running.
Instant clone VMs are fully independent vCenter Server inventory objects. You can manage
instant clone VMs like regular VMs, without any restrictions.
Instant cloning is convenient for large-scale application deployments because it ensures memory
efficiency, and you can create many VMs on a single host.
To avoid network conflicts, you can customize the virtual hardware of the destination VM during
the instant cloning operation. For example, you can customize the MAC addresses of the virtual
NICs or the serial and parallel port configurations of the destination VM.
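A hedged pyVmomi sketch of an instant clone operation follows. The source VM must be powered on, the destination name is a placeholder, and virtual hardware customization (for example, NIC changes passed through the relocation spec's device changes) is omitted here.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab with self-signed certificates.
si = SmartConnect(host='sa-vcsa-01.vclass.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
source = next(v for v in vms.view if v.name == 'Prod03-1')   # running source (parent) VM

# Create an instant clone of the running VM; the destination starts from the source's checkpoint.
spec = vim.vm.InstantCloneSpec(name='Prod03-1-ic01', location=vim.vm.RelocateSpec())
task = source.InstantClone_Task(spec=spec)
Disconnect(si)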
Starting with vSphere 7, you can customize the guest operating system for Linux VMs only. You
can customize networking settings such as IP address, DNS server, and the gateway. You can
change these settings without having to power off or restart the VM.
Organizations might have multiple vCenter Server instances in data centers around the globe. On
these vCenter Server instances, organizations might have a collection of templates, ISO images,
and so on. The challenge is that all these items are independent of one another, with different
versions of these files and templates on various vCenter Server instances.
The content library is the solution to this challenge. IT can store OVF templates, ISO images, or
any other file types in a central location. The templates, images, and files can be published, and
other content libraries can subscribe to and download content. The content library keeps content
up to date by periodically synchronizing with the publisher, ensuring that the latest version is
available.
Sharing content and ensuring that the content is kept up to date are major tasks.
For example, for a main vCenter Server instance, you create a central content library to store the
master copies of OVF templates, ISO images, and other file types. When you publish this content
library, other libraries, which might be located anywhere in the world, can subscribe and
download an exact copy of the data.
When an OVF template is added, modified, or deleted from the published catalog, the subscriber
synchronizes with the publisher, and the libraries are updated with the latest content.
Starting with vSphere 7, you can update a template while simultaneously deploying VMs from the
template. In addition, the content library keeps two copies of the VM template, the previous and
current versions. You can roll back the template to reverse changes made to the template.
You can create a local library as the source for content that you want to save or share. You create
the local library on a single vCenter Server instance. You can then add or remove items to and
from the local library.
You can publish a local library, and this content library service endpoint can be accessed by other
vCenter Server instances in your virtual environment. When you publish a library, you can
configure the authentication method, which a subscribed library must use to authenticate to it.
You can create a subscribed library and populate its content by synchronizing it to a published
library. A subscribed library contains copies of the published library files or only the metadata of
the library items.
The published library can be on the same vCenter Server instance as the subscribed library, or the
subscribed library can reference a published library on a different vCenter Server instance.
You cannot add library items to a subscribed library. You can add items only to a local or
published library.
VMs and vApps have several files, such as log files, disk files, memory files, and snapshot files
that are part of a single library item. You can create library items in a specific local library or
remove items from a local library. You can also upload files to an item in a local library so that the
libraries subscribed to it can download the files to their NFS or SMB server, or datastore.
You might have to modify a VM’s configuration, for example, to add a network adapter or a
virtual disk. You can make all VM changes while the VM is powered off. Some VM hardware
changes can be made while the VM is powered on.
vSphere 7.0 makes the following virtual devices available:
Watchdog timer: Virtual device used to detect and recover from operating system problems. If
a failure occurs, the watchdog timer attempts to reset or power off the VM. This feature is
based on Microsoft specifications: Watchdog Resource Table (WDRT) and Watchdog Action
Table (WDAT).
The watchdog timer is useful with high availability solutions such as Red Hat High
Availability and the MS SQL failover cluster. This device is also useful on VMware Cloud
and in hosted environments for implementing custom failover logic to reset or power off
VMs.
Virtual SGX: Virtual device that exposes Intel's SGX technology to VMs. Intel’s SGX
technology prevents unauthorized programs or processes from accessing certain regions in
memory. Intel SGX meets the needs of the Trusted Computing Industry.
Virtual SGX is useful for applications that must conceal proprietary algorithms and
encryption keys from unauthorized users. For example, cloud service providers cannot inspect
a client’s code and data in a virtual SGX-protected environment.
Adding devices to a physical server or removing devices from a physical server requires that you
physically interact with the server in the data center. When you use VMs, resources can be added
dynamically without a disruption in service. You must shut down a VM to remove hardware, but
you can reconfigure the VM without entering the data center.
You can add CPU and memory while the VM is powered on. These features, called CPU Hot Add and Memory Hot Plug, are supported only on guest operating systems that provide hot-pluggable functionality. They are disabled by default. To use these hot-plug features, the following requirements must be satisfied (see the example after this list):
The guest operating system in the VM must support CPU and memory hot-plug features.
The hot-plug features must be enabled in the CPU or Memory settings on the Virtual
Hardware tab.
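A minimal PowerCLI sketch of enabling these settings on a powered-off VM, assuming you are already connected to vCenter Server with Connect-VIServer; the VM name is hypothetical:

# Enable CPU Hot Add and Memory Hot Plug on a powered-off VM (VM name is an example).
$vm = Get-VM -Name "App01"
# Build a reconfiguration spec that turns on both hot-plug features.
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.CpuHotAddEnabled    = $true
$spec.MemoryHotAddEnabled = $true
# Apply the spec through the vSphere API object behind the PowerCLI VM object.
$vm.ExtensionData.ReconfigVM($spec)

The guest operating system must still support hot add for any CPU or memory added later to become usable.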
When you increase the size of a virtual disk, the VM must not have snapshots attached.
After you increase the size of a virtual disk, you might need to increase the size of the file system
on this disk. Use the appropriate tool in the guest OS to enable the file system to use the newly
allocated disk space.
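For example, a hedged PowerCLI sketch of growing a virtual disk; the VM name, disk selection, and new capacity are assumptions, and the guest file system must still be extended afterward with a guest OS tool:

# Grow the first virtual disk of a VM to 100 GB (names and size are examples).
$vm   = Get-VM -Name "App01"
$disk = Get-HardDisk -VM $vm | Select-Object -First 1
# The new capacity must be larger than the current capacity, and the VM must have no snapshots.
Set-HardDisk -HardDisk $disk -CapacityGB 100 -Confirm:$false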
When you inflate a thin-provisioned disk, the inflated virtual disk occupies the entire datastore
space originally provisioned to it. Inflating a thin-provisioned disk converts a thin disk to a virtual
disk in thick-provisioned format.
Under General Options, you can view the location and name of the configuration file (with the
.vmx extension) and the location of the VM’s directory.
You can select the text for the configuration file and the working location to copy and paste them
into a document. However, only the display name and the guest operating system type can be
modified.
Changing the display name does not change the names of all the VM files or the directory that the
VM is stored in. When a VM is created, the filenames and the directory name associated with the
VM are based on its display name. But changing the display name later does not modify the
filename and the directory name.
When you use the VMware Tools controls to customize the power buttons on the VM, the VM
must be powered off.
You can select the Check and upgrade VMware Tools before each power on check box to
check for a newer version of VMware Tools. If a newer version is found, VMware Tools is
upgraded when the VM is power cycled.
When you select the Synchronize guest time with host check box, the guest operating system’s
clock synchronizes with the host.
For information about timekeeping best practices for the guest operating systems that you use, see VMware knowledge base articles 1318 at http://kb.vmware.com/kb/1318 and 1006427 at http://kb.vmware.com/kb/1006427.
When you build a VM and select a guest operating system, BIOS or EFI is selected automatically,
depending on the firmware supported by the operating system. Mac OS X Server guest operating
systems support only Extensible Firmware Interface (EFI). If the operating system supports BIOS
and EFI, you can change the boot option as needed. However, you must change the option before
installing the guest OS.
UEFI Secure Boot is a security standard that helps ensure that your PC boots using only software that is trusted by the PC manufacturer. In an OS that supports UEFI Secure Boot, each piece of
boot software is signed, including the bootloader, the operating system kernel, and operating
system drivers. If you enable Secure Boot for a VM, you can load only signed drivers into that
VM.
With the Boot Delay value, you can set a delay between the time when a VM is turned on and the
guest OS starts to boot. A delayed boot can help stagger VM startups when several VMs are powered on.
When a VM is removed from the inventory, its files remain at the same storage location, and the VM can be re-registered by using the datastore browser.
A deciding factor for using a particular migration technique is the purpose of performing the
migration. For example, you might need to stop a host for maintenance but keep the VMs running.
You use vSphere vMotion to migrate the VMs instead of performing a cold or suspended VM
migration. If you must move a VM’s files to another datastore to better balance the disk load or
transition to another storage array, you use vSphere Storage vMotion.
Some migration techniques, such as vSphere vMotion migration, have special hardware
requirements that must be met to function properly. Other techniques, such as a cold migration, do
not have special hardware requirements to function properly.
You can perform the different types of migration on either powered-off (cold) or powered-on (hot)
VMs.
Using vSphere vMotion, you can migrate running VMs from one ESXi host to another ESXi host
with no disruption or downtime. With vSphere vMotion, vSphere DRS can migrate running VMs
from one host to another to ensure that the VMs have the resources that they require.
With vSphere vMotion, the entire state of the VM is moved from one host to another, but the data
storage remains in the same datastore.
The state information includes the current memory content and all the information that defines and
identifies the VM. The memory content includes transaction data and whatever bits of the
operating system and applications are in memory. The definition and identification information
stored in the state includes all the data that maps to the VM hardware elements, such as the BIOS,
devices, CPU, and MAC addresses for the Ethernet cards.
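A hedged PowerCLI sketch of a compute-only migration; the VM and host names are hypothetical, and the source and destination hosts must meet the vSphere vMotion requirements:

# Hot-migrate a running VM to another ESXi host; its files stay on the same shared datastore.
$vm = Get-VM -Name "Web01"
Move-VM -VM $vm -Destination (Get-VMHost -Name "esxi02.example.com") -Confirm:$false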
For the complete list of vSphere vMotion migration requirements, see vCenter Server and Host
Management at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-3B5AF2B1-C534-4426-B97A-
D14019A8010F.html.
You cannot migrate a VM from a host that is registered to vCenter Server with an IPv4 address to
a host that is registered with an IPv6 address.
Copying a swap file to a new location can result in slower migrations. If the destination host
cannot access the specified swap file location, it stores the swap file with the VM configuration
file.
Using 1 GbE network adapters for the vSphere vMotion network might result in migration failure if you migrate VMs with large vGPU profiles.
If validation succeeds, you can continue in the wizard. If validation does not succeed, a list of
vSphere vMotion errors and warnings displays in the Compatibility pane.
With warnings, you can still perform a vSphere vMotion migration. But with errors, you cannot
continue. You must exit the wizard and fix all errors before retrying the migration.
If a failure occurs during the vSphere vMotion migration, the VM is not migrated and continues to
run on the source host.
Encrypted vSphere vMotion secures confidentiality, integrity, and authenticity of data that is
transferred with vSphere vMotion. Encrypted vSphere vMotion supports all variants of vSphere
vMotion, including migration across vCenter Server systems. Encrypted vSphere Storage vMotion
is not supported.
You cannot turn off encrypted vSphere vMotion for encrypted VMs.
You can perform cross vCenter migrations between vCenter Server instances of different versions.
For information on the supported versions, see VMware knowledge base article 2106952 at
http://kb.vmware.com/kb/2106952.
Consider the following key points about TCP/IP stacks at the VMkernel level:
Default TCP/IP stack: Provides networking support for the management traffic between
vCenter Server and ESXi hosts and for system traffic such as vSphere vMotion, IP storage,
and vSphere Fault Tolerance.
vSphere vMotion TCP/IP stack: Supports the traffic for hot migrations of VMs.
Provisioning TCP/IP stack: Supports the traffic for VM cold migration, cloning, and snapshot creation. You can use the provisioning TCP/IP stack to handle NFC traffic during long-distance vSphere vMotion migration. VMkernel adapters configured with the provisioning TCP/IP stack handle the traffic from cloning the virtual disks of the migrated VMs in long-distance vSphere vMotion.
By using the provisioning TCP/IP stack, you can isolate the traffic from the cloning operations on a separate gateway. After you configure a VMkernel adapter with the provisioning TCP/IP stack, all adapters on the default TCP/IP stack are disabled for the provisioning traffic.
Custom TCP/IP stacks: You can create a custom TCP/IP stack on a host to forward networking traffic through a custom application. To create the stack, open an SSH connection to the host and run the appropriate vSphere CLI command (a hedged sketch follows this list).
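As one sketch of that step, run through PowerCLI's Get-EsxCli interface instead of an SSH session; the stack name is hypothetical, and the esxcli namespace shown (network ip netstack add) should be verified against your ESXi version:

# Create a custom TCP/IP stack on a host (equivalent to: esxcli network ip netstack add -N "myCustomStack").
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.example.com") -V2
$esxcli.network.ip.netstack.add.Invoke(@{ netstack = "myCustomStack" })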
Take appropriate security measures to prevent unauthorized access to the management and system
traffic in your vSphere environment. For example, isolate the vSphere vMotion traffic in a
separate network that includes only the ESXi hosts that participate in the migration. Isolate the
management traffic in a network that only network and security administrators can access.
vSphere vMotion TCP/IP stacks support the traffic for hot migrations of VMs. Use the vSphere
vMotion TCP/IP stack to provide better isolation for the vSphere vMotion traffic. After you create
a VMkernel adapter on the vSphere vMotion TCP/IP stack, you can use only this stack for
vSphere vMotion migration on this host.
The VMkernel adapters on the default TCP/IP stack are disabled for the vSphere vMotion service
after you create a VMkernel adapter on the vSphere vMotion TCP/IP stack. If a hot migration uses
the default TCP/IP stack while you configure VMkernel adapters with the vMotion TCP/IP stack,
the migration completes successfully. However, these VMkernel adapters on the default TCP/IP
stack are disabled for future vSphere vMotion sessions.
In the follow-the-sun scenario, a global support team might support a certain set of VMs. As one support team ends its workday, another support team in a different time zone takes over support duty. The VMs being supported can be moved from one geographical location to another so that the support team on duty can access those VMs locally rather than over a long-distance connection.
Depending on the CPU characteristic, an exact match between the source and target host might or
might not be required.
For example, if hyperthreading is enabled on the source host and disabled on the destination host,
the vSphere vMotion migration continues because the VMkernel handles this difference in
characteristics.
But, if the source host processor supports SSE4.1 instructions and the destination host processor
does not support them, the hosts are considered incompatible and the vSphere vMotion migration
fails.
SSE4.1 instructions are application-level instructions that bypass the virtualization layer and might
cause application instability if mismatched after a migration with vSphere vMotion.
Enhanced vMotion Compatibility ensures that all hosts in a cluster present the same CPU feature
set to VMs, even if the CPUs on the hosts differ.
Enhanced vMotion Compatibility facilitates safe vSphere vMotion migration across a range of
CPU generations. With Enhanced vMotion Compatibility, you can use vSphere vMotion to
migrate VMs among CPUs that otherwise are considered incompatible.
Enhanced vMotion Compatibility allows vCenter Server to enforce vSphere vMotion
compatibility among all hosts in a cluster by forcing hosts to expose a common set of CPU
features (baseline) to VMs. A baseline is a set of CPU features that are supported by every host in
the cluster. When you configure Enhanced vMotion Compatibility, you set all host processors in
the cluster to present the features of a baseline processor. After the features are enabled for a
cluster, hosts that are added to the cluster are automatically configured to the CPU baseline.
Hosts that cannot be configured to the baseline are not permitted to join the cluster. VMs in the
cluster always see an identical CPU feature set, no matter which host they happen to run on.
Before you create an Enhanced vMotion Compatibility cluster, ensure that the hosts that you
intend to add to the cluster meet the requirements.
Enhanced vMotion Compatibility automatically configures hosts whose CPUs have Intel
FlexMigration and AMD-V Extended Migration technologies to be compatible with vSphere
vMotion with hosts that use older CPUs.
For Enhanced vMotion Compatibility to function properly, the applications on the VMs must be
written to use the CPU ID machine instruction for discovering CPU features as recommended by
the CPU vendors. vSphere cannot support Enhanced vMotion Compatibility with applications that
do not follow the CPU vendor recommendations for discovering CPU features.
To determine which EVC modes are compatible with your CPU, search the VMware
Compatibility Guide at http://www.vmware.com/resources/compatibility. Search for the server
model or CPU family, and click the entry in the CPU Series column to display the compatible
EVC modes.
You can use one of the following methods to create an Enhanced vMotion Compatibility cluster (a configuration sketch follows):
Create an empty cluster with EVC mode enabled and move hosts into the cluster.
Enable EVC mode on an existing cluster.
For information about Enhanced vMotion Compatibility processor support, see VMware
knowledge base article 1003212 at http://kb.vmware.com/kb/1003212.
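A hedged PowerCLI sketch of enabling EVC on a cluster; the cluster name and the EVC mode key (for example, intel-broadwell) are assumptions that should be checked against the baselines your hosts support:

# Enable an Intel Broadwell EVC baseline on an existing cluster (names are examples).
Set-Cluster -Cluster (Get-Cluster -Name "Prod-Cluster") -EVCMode "intel-broadwell" -Confirm:$false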
With per-VM EVC mode, the EVC mode becomes an attribute of the VM rather than of the specific processor generation that the VM happens to be booted on in the cluster. This feature supports seamless migration between two data centers that have different processors. The EVC mode is persisted per VM and is not lost during migrations across clusters or during power cycles.
In this diagram, EVC mode is not enabled on the cluster. The cluster consists of differing CPU
models with different feature sets. The VMs with per-VM EVC mode can run on any ESXi host
that can satisfy the defined EVC mode.
vSphere Storage vMotion provides flexibility to optimize disks for performance or transform disk
types, which you can use to reclaim space.
You can place the VM and all its disks in a single location, or you can select separate locations for
the VM configuration file and each virtual disk. During a migration with vSphere Storage
vMotion, the VM does not change the host that it runs on.
With vSphere Storage vMotion, you can rename a VM's files on the destination datastore. The
migration renames all virtual disk, configuration, snapshot, and .nvram files.
The storage migration process does a single pass of the disk, copying all the blocks to the
destination disk. If blocks are changed after they are copied, the blocks are synchronized from the
source to the destination through the mirror driver, with no need for recursive passes.
A VM and its host must meet certain resource and configuration requirements for the virtual
machine disks (VMDKs) to be migrated with vSphere Storage vMotion. One of the requirements
is that the host on which the VM runs must have access both to the source datastore and to the
target datastore.
During a migration with vSphere Storage vMotion, you can change the disk provisioning type.
Migration with vSphere Storage vMotion changes VM files on the destination datastore to match
the inventory name of the VM. The migration renames all virtual disk, configuration, snapshot,
and .nvram-extension files. If the new names exceed the maximum filename length, the migration
does not succeed.
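A hedged PowerCLI sketch of a storage-only migration that also changes the provisioning type; the VM name, datastore name, and disk format are assumptions:

# Move a VM's files to another datastore and convert the disks to thin provisioning.
$vm = Get-VM -Name "Web01"
Move-VM -VM $vm -Datastore (Get-Datastore -Name "Shared-DS02") -DiskStorageFormat Thin -Confirm:$false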
You can migrate VMs beyond storage accessibility boundaries and between hosts, within and
across clusters, data centers, and vCenter Server instances.
This type of migration is useful for performing cross-cluster migrations, when the target cluster
VMs might not have access to the source cluster’s storage. Processes on the VM continue to run
during the migration with vSphere vMotion.
Snapshots are useful when you want to revert repeatedly to the same state but do not want to
create multiple VMs. Examples include patching or upgrading the guest operating system in a
VM.
The relationship between snapshots is like the relationship between a parent and a child. Snapshots
are organized in a snapshot tree. In a snapshot tree, each snapshot has one parent and one or more
children, except for the last snapshot, which has no children.
A snapshot captures the entire state of the VM at the time that you take the snapshot, including the following states:
Memory state: The contents of the VM’s memory. The memory state is captured only if the VM is powered on and if you select the Snapshot the virtual machine’s memory check box (selected by default).
Settings state: The VM settings.
Disk state: The state of all the VM’s virtual disks.
At the time that you take the snapshot, you can also quiesce the guest operating system. This
action quiesces the file system of the guest operating system. This option is available only if you
do not capture the memory state as part of the snapshot.
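For example, a hedged PowerCLI sketch of taking a snapshot; the VM and snapshot names are assumptions, and the -Memory and -Quiesce options reflect the mutually exclusive choices described above:

# Snapshot a powered-on VM, including its memory state (names are examples).
$vm = Get-VM -Name "App01"
New-Snapshot -VM $vm -Name "Before-Patch" -Description "Pre-patch checkpoint" -Memory
# Alternatively, quiesce the guest file system instead of capturing memory:
# New-Snapshot -VM $vm -Name "Before-Patch-Quiesced" -Quiesce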
Delta disks use different sparse formats depending on the type of datastore.
VMFSsparse: VMFS5 uses the VMFSsparse format for virtual disks smaller than 2 TB.
VMFSsparse is implemented on top of VMFS. The VMFSsparse layer processes I/O
operations issued to a snapshot VM. Technically, VMFSsparse is a redo log that starts empty,
immediately after a VM snapshot is taken. The redo log expands to the size of its base
VMDK, when the entire VMDK is rewritten with new data after the VM snapshot. This redo
log is a file in the VMFS datastore. On snapshot creation, the base VMDK attached to the VM
is changed to the newly created sparse VMDK.
SEsparse: SEsparse is a default format for all delta disks on the VMFS6 datastores. On
VMFS5, SEsparse is used for virtual disks of the size 2 TB and larger. SEsparse is a format
that is like VMFSsparse with some enhancements. This format is space efficient and supports
the space-reclamation technique. With space reclamation, blocks that the guest OS deletes are
marked. The system sends commands to the SEsparse layer in the hypervisor to unmap those blocks.
A VM can have one or more snapshots. For each snapshot, the following files are created:
Snapshot delta file: This file contains the changes to the virtual disk’s data since the snapshot
was taken. When you take a snapshot of a VM, the state of each virtual disk is preserved. The
VM stops writing to its -flat.vmdk file. Writes are redirected to a -######-delta.vmdk (or -######-sesparse.vmdk) file instead, where ###### is the next number in the sequence. You can exclude one or more virtual disks from a snapshot by
designating them as independent disks. Configuring a virtual disk as independent is typically
done when the virtual disk is created, but this option can be changed whenever the VM is
powered off.
Disk descriptor file: -00000#.vmdk. This file is a small text file that contains information
about the snapshot.
Configuration state file: -Snapshot#.vmsn, where # is the next number in the sequence, starting with 1. This file holds the active state of the VM at the point that the snapshot was taken, including virtual hardware, power state, and hardware version.
Snapshot active memory file: -Snapshot#.vmem. This file contains the contents of the VM memory if the option to include memory is selected during the creation of the snapshot.
The .vmsd file is the snapshot list file and is created at the time that the VM is created. It
maintains snapshot information for a VM so that it can create a snapshot list in the vSphere
Client. This information includes the name of the snapshot .vmsn file and the name of the
virtual disk file.
The snapshot state file has a .vmsn extension and is used to store the state of a VM when a
snapshot is taken. A new .vmsn file is created for every snapshot that is created on a VM and
is deleted when the snapshot is deleted. The size of this file varies, based on the options
selected when the snapshot is created. For example, including the memory state of the VM in
the snapshot increases the size of the .vmsn file.
You can exclude one or more of the VMDKs from a snapshot by designating a virtual disk in the
VM as an independent disk. Placing a virtual disk in independent mode is typically done when the
virtual disk is created. If the virtual disk was created without enabling independent mode, you
must power off the VM to enable it.
Other files might also exist, depending on the VM hardware version. For example, each snapshot
of a VM that is powered on has an associated _.vmem file, which contains the guest operating
system main memory, saved as part of the snapshot.
This example shows the snapshot and virtual disk files that are created when a VM has no
snapshots, one snapshot, and two snapshots.
You can perform the following actions from the Manage Snapshots window:
Delete the snapshot: Remove the snapshot from the Snapshot Manager, consolidate the
snapshot files to the parent snapshot disk, and merge with the VM base disk.
Delete all snapshots: Commit all the intermediate snapshots before the current-state icon (You
are here) to the VM and remove all snapshots for that VM.
Revert to a snapshot: Restore, or revert to, a particular snapshot. The snapshot that you restore
becomes the current snapshot.
When you revert to a snapshot, you return all these items to the state that they were in at the time
that you took the snapshot. If you want the VM to be suspended, powered on, or powered off
when you start it, ensure that the VM is in the correct state when you take the snapshot.
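A hedged PowerCLI sketch of the revert and delete actions; the VM and snapshot names are assumptions:

# Revert a VM to a snapshot, then delete the snapshot when it is no longer needed.
$vm   = Get-VM -Name "App01"
$snap = Get-Snapshot -VM $vm -Name "Before-Patch"
Set-VM -VM $vm -Snapshot $snap -Confirm:$false                    # revert to the snapshot
Remove-Snapshot -Snapshot $snap -RemoveChildren -Confirm:$false   # delete it and any child snapshots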
Snapshot consolidation is a way to clean unneeded delta disk files from a datastore. If no
snapshots are registered for a VM, but delta disk files exist, snapshot consolidation commits the
chain of the delta disk files and removes them.
If consolidation is not performed, the delta disk files might expand until they consume all the remaining space on the VM’s datastore, or until a delta disk file reaches its configured size. The delta disk cannot be larger than the size configured for the base disk.
With snapshot consolidation, vCenter Server displays a warning when the descriptor and the
snapshot files do not match. After the warning displays, you can use the vSphere Client to commit
the snapshots.
For a list of best practices for using snapshots in a vSphere environment, see VMware knowledge
base article 1025279 at http://kb.vmware.com/kb/1025279.
A replication solution that supports flexibility in storage vendor selection at the source and
target sites
A vSphere Replication server that provides the core of the vSphere Replication infrastructure
A plug-in to the vSphere Client that provides a user interface for vSphere Replication
You can use vSphere Replication immediately after you deploy the appliance. The vSphere Replication appliance provides the virtual appliance management interface (VAMI) that is used to reconfigure the appliance after deployment.
You can replicate a VM between two sites. vSphere Replication is installed on both source and
target sites. Only one vSphere Replication appliance is deployed on each vCenter Server. The
vSphere Replication (VR) appliance contains an embedded vSphere Replication server that
manages the replication process. To meet the load-balancing needs of your environment, you
might need to deploy additional vSphere Replication servers at each site.
When you configure a VM for replication, the vSphere Replication agent sends changed blocks in
the VM disks from the source site to the target site. The changed blocks are applied to the copy of
the VM. This process occurs independently of the storage layer. vSphere Replication performs an
initial full synchronization of the source VM and its replica copy. You can use replication seeds to
reduce the network traffic that is generated by data transfer during the initial full synchronization.
You can deploy vSphere Replication with either an IPv4 or IPv6 address. Mixing IP addresses, for
example having a single appliance with an IPv4 and an IPv6 address, is not supported.
After you deploy the vSphere Replication appliance, you use the VAMI to register the endpoint
and the certificate of the vSphere Replication management server with the vCenter Lookup
Service. You also use the VAMI to register the vSphere Replication solution user with the vCenter
Single Sign-On administration server.
For more details on deploying the vSphere Replication appliance, see VMware vSphere
Replication Documentation at https://docs.vmware.com/en/vSphere-Replication/index.html.
vSphere Replication can protect individual VMs and their virtual disks by replicating them to
another location.
The value that you set for the recovery point objective (RPO) affects replication scheduling.
When you configure replication, you set an RPO to determine the time between replications. For
example, an RPO of 1 hour aims to ensure that a VM loses no more than 1 hour of data during the
recovery. For smaller RPOs, less data is lost in a recovery, but more network bandwidth is
consumed to keep the replica up to date.
For a discussion about how the RPO affects replication scheduling, see vSphere Replication
Administration at https://docs.vmware.com/en/vSphere-
Replication/8.3/com.vmware.vsphere.replication-admin.doc/GUID-35C0A355-C57B-430B-876E-
9D2E6BE4DDBA.html.
To perform the recovery, you use the Recover virtual machine wizard in the vSphere Client at the
target site.
You are asked to select either to recover the VM with all the latest data or to recover the VM with
the most recent data available on the target site:
If you select Recover with recent changes to avoid data loss, vSphere Replication performs
a full synchronization of the VM from the source site to the target site before recovering the
VM. This option requires that the data of the source VM be accessible. You can select this
option only if the VM is powered off.
If you select Recover with latest available data, vSphere Replication recovers the VM by
using the data from the most recent replication on the target site, without performing
synchronization. Selecting this option results in the loss of any data that changed since the
most recent replication. Select this option if the source VM is inaccessible or if its disks are
corrupted.
vSphere Storage APIs – Data Protection is VMware’s data protection framework, which was
introduced in vSphere 4.0. A backup product that uses this API can back up VMs from a central
backup system (physical or virtual system). The backup does not require backup agents or any
backup processing to be done inside the guest operating system.
Backup processing is offloaded from the ESXi host. In addition, vSphere snapshot capabilities are
used to support backups across the SAN without requiring downtime for VMs. As a result,
backups can be performed nondisruptively at any time of the day without requiring extended
backup windows.
For frequently asked questions about vSphere Storage APIs - Data Protection, see VMware
knowledge base article 1021175 at https://kb.vmware.com/s/article/1021175.
One of the biggest bottlenecks that limits backup performance is the backup server that is handling
all the backup coordination tasks. One of these backup tasks is copying data from point A to point
B. Other backup tasks require significant CPU processing. For example, tasks are performed to determine
what data to back up and what not to back up. Other tasks are performed to deduplicate data and
compress data that is written to the target.
A server with insufficient CPU resources can greatly reduce backup performance. Provide enough
resources for your backup server. A physical server or VM with an ample amount of memory and
CPU capacity is necessary for the best backup performance possible.
The motivation to use LAN-free backups is to reduce the stress on the physical resources of the
ESXi host when VMs are backed up. LAN-free backups reduce the stress by offloading backup
processing from the ESXi host to a backup proxy server.
Changed-block tracking (CBT) is a VMkernel feature that tracks the storage blocks of VMs as
they change over time. The VMkernel tracks block changes on VMs, enhancing the backup
process for applications that are developed to exploit vSphere Storage APIs - Data Protection.
By using CBT during restores, vSphere Data Protection offers fast and efficient recoveries of VMs
to their original location. During a restore process, the backup solution uses CBT to determine
which blocks changed since the last backup. The use of CBT reduces data transfer within the
vSphere environment during a recovery operation and, more important, reduces the recovery time.
When running a virtual machine, the VMkernel creates a contiguous addressable memory space
for the VM. This memory space has the same properties as the virtual memory address space
presented to applications by the guest operating system. This memory space allows the VMkernel
to run multiple VMs simultaneously while protecting the memory of each VM from being
accessed by others. From the perspective of an application running in the VM, the VMkernel adds
an extra level of address translation that maps the guest physical address to the host physical
address.
The total configured memory sizes of all VMs might exceed the amount of available physical
memory on the host. However, this condition does not necessarily mean that memory is
overcommitted. Memory is overcommitted when the working memory size of all VMs exceeds
that of the ESXi host’s physical memory size.
Because of the memory management techniques used by the ESXi host, your VMs can use more
virtual RAM than the available physical RAM on the host. For example, you can have a host with
32 GB of memory and run four VMs with 10 GB of memory each. In that case, the memory is
overcommitted. If all four VMs are idle, the combined consumed memory is below 32 GB.
However, if all VMs are actively consuming memory, then their memory footprint might exceed
32 GB and the ESXi host becomes overcommitted. An ESXi host can run out of memory if VMs
consume all reservable memory in an overcommitted-memory environment. Although the
powered-on VMs are not affected, a new VM might fail to power on because of lack of memory.
Overcommitment makes sense because, typically, some VMs are lightly loaded whereas others are
more heavily loaded, and relative activity levels vary over time.
The VMkernel uses various techniques to dynamically reduce the amount of physical RAM that is
required for each VM. Each technique is described in the order that the VMkernel uses it:
Page sharing: ESXi can use a proprietary technique to transparently share memory pages
between VMs, eliminating redundant copies of memory pages. Although pages are shared by
default within VMs, as of vSphere 6.0, pages are no longer shared by default among VMs.
Ballooning: If the host memory begins to get low and the VM's memory use approaches its
memory target, ESXi uses ballooning to reduce that VM's memory demands. Using the
VMware-supplied vmmemctl module installed in the guest operating system as part of
VMware Tools, ESXi can cause the guest operating system to relinquish the memory pages it
considers least valuable. Ballooning provides performance closely matching that of a native
system under similar memory constraints. To use ballooning, the guest operating system must
be configured with sufficient swap space.
Swap to host cache: Host swap cache is an optional memory reclamation technique that uses
local flash storage to cache a virtual machine’s memory pages. By using local flash storage,
the virtual machine avoids the latency associated with a storage network that might be used if
it swapped memory pages to the virtual swap (.vswp) file.
Regular host-level swapping: When memory pressure is severe and the hypervisor must swap
memory pages to disk, the hypervisor swaps to a host swap cache rather than to a .vswp file.
When a host runs out of space on the host cache, a virtual machine’s cached memory is
migrated to a virtual machine’s regular .vswp file. Each host must have its own host swap
cache configured.
You can configure a VM with up to 256 virtual CPUs (vCPUs). The VMkernel includes a CPU
scheduler that dynamically schedules vCPUs on the physical CPUs of the host system.
The VMkernel scheduler considers socket-core-thread topology when making scheduling
decisions. Intel and AMD processors combine multiple processor cores into a single integrated
circuit, called a socket in this discussion.
A socket is a single package with one or more physical CPUs. Each core has one or more logical
CPUs (LCPU in the diagram) or threads. Through each logical CPU, the core can schedule one thread of execution.
On the slide, the first system is a single-core, dual-socket system with two cores and, therefore,
two logical CPUs.
When a vCPU of a single-vCPU or multi-vCPU VM must be scheduled, the VMkernel maps the
vCPU to an available logical processor.
If hyperthreading is enabled, ESXi can schedule two threads at the same time on each processor
core (physical CPU). Hyperthreading provides more scheduler throughput. That is, hyperthreading
provides more logical CPUs on which vCPUs can be scheduled.
The drawback of hyperthreading is that it does not double the power of a core. So, if both threads
of execution need the same on-chip resources at the same time, one thread has to wait. Still, on
systems that use hyperthreading technology, performance is improved.
An ESXi host that is enabled for hyperthreading should behave almost exactly like a standard
system. Logical processors on the same core have adjacent CPU numbers. Logical processors 0
and 1 are on the first core, logical processors 2 and 3 are on the second core, and so on.
Consult the host system hardware documentation to verify whether the BIOS includes support for
hyperthreading. Then, enable hyperthreading in the system BIOS. Some manufacturers call this
option Logical Processor and others call it Enable Hyperthreading.
Use the vSphere Client to ensure that hyperthreading for your host is turned on. To access the
hyperthreading option, go to the host’s Summary tab and select CPUs under Hardware.
The CPU scheduler can use each logical processor independently to execute VMs, providing
capabilities that are similar to traditional symmetric multiprocessing (SMP) systems. The
VMkernel intelligently manages processor time to guarantee that the load is spread smoothly
across processor cores in the system. Every 2 milliseconds to 40 milliseconds (depending on the
socket-core-thread topology), the VMkernel seeks to migrate vCPUs from one logical processor to
another to keep the load balanced.
The VMkernel does its best to schedule VMs with multiple vCPUs on two different cores rather
than on two logical processors on the same core. But, if necessary, the VMkernel can map two
vCPUs from the same VM to threads on the same core.
If a logical processor has no work, it is put into a halted state. This action frees its execution
resources, and the VM running on the other logical processor on the same core can use the full
execution resources of the core. Because the VMkernel scheduler accounts for this halt time, a
VM running with the full resources of a core is charged more than a VM running on a half core.
This approach to processor management ensures that the server does not violate the ESXi resource
allocation rules.
Because VMs simultaneously use the resources of an ESXi host, resource contention can occur.
To manage resources efficiently, vSphere provides mechanisms to allow less, more, or an equal
amount of access to a defined resource. vSphere also prevents a VM from consuming large
amounts of a resource. vSphere grants a guaranteed amount of a resource to a VM whose
performance is not adequate or that requires a certain amount of a resource to run properly.
When host memory or CPU is overcommitted, a VM’s allocation target is somewhere between its
specified reservation and specified limit, depending on the VM’s shares and the system load.
vSphere uses a share-based allocation algorithm to achieve efficient resource use for all VMs and
to guarantee a given resource to the VMs that need it most.
When configuring a memory reservation for a VM, you can set the reservation equal to the VM's configured amount of memory to reserve all of the VM's memory. For example, if a VM is configured with 4 GB of memory, you can set a memory reservation of 4 GB for the VM. You might configure such a
memory reservation for a critical VM that must maintain a high level of performance.
Alternatively, you can select the Reserve All Guest Memory (All locked) check box. Selecting
this check box ensures that all of the VM's memory gets reserved even if you change the total
amount of memory for the VM. The memory reservation is immediately readjusted when the VM's
memory configuration changes.
Benefits: Assigning a limit is useful if you start with a few VMs and want to manage user expectations. Performance deteriorates as you add more VMs, so you can simulate having fewer resources available by specifying a limit.
Drawbacks: You might waste idle resources if you specify a limit. The system does not allow
VMs to use more resources than the limit, even when the system is underused and idle
resources are available. Specify the limit only if you have good reasons for doing so.
High, normal, and low settings represent share values with a 4:2:1 ratio, respectively. A custom
value of shares assigns a specific number of shares (which expresses a proportional weight) to
each VM.
The proportional share mechanism applies to CPU, memory, storage I/O, and network I/O
allocation. The mechanism operates only when VMs contend for the same resource.
You can add shares to a VM while it is running, and the VM gets more access to that resource
(assuming competition for the resource). When you add a VM, it gets shares too. The VM’s share
amount factors into the total number of shares, but existing VMs are guaranteed not to be starved
for the resource.
Shares guarantee that a VM is given a certain amount of a resource (CPU, RAM, storage I/O, or
network I/O).
For example, consider the third row of VMs on the slide:
Before VM D was powered on, a total of 5,000 shares were available, but VM D’s addition
increases the total shares to 6,000.
The result is that the other VMs' shares decline in value. But each VM’s share value still
represents a minimum guarantee. VM A is still guaranteed one-sixth of the resource because it
owns one-sixth of the shares.
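A hedged PowerCLI sketch of adjusting a VM's shares, reservation, and limit; the VM name, values, and parameter names are assumptions to verify against your PowerCLI version:

# Give a VM high CPU shares, reserve 4 GB of memory, and cap memory at 8 GB (values are examples).
$vm = Get-VM -Name "DB01"
Get-VMResourceConfiguration -VM $vm |
    Set-VMResourceConfiguration -CpuSharesLevel High -MemReservationGB 4 -MemLimitGB 8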
The best practice for performance tuning is to take a logical step-by-step approach:
For a complete view of the performance situation of a VM, use monitoring tools in the guest
operating system and in vCenter Server.
Identify the resource that the VM relies on the most. This resource is most likely to affect the
VM’s performance if the VM is constrained by it.
After making more of the limiting resource available to the VM, take another benchmark and
record changes.
Be cautious when making changes to production systems because a change might negatively affect
the performance of the VMs.
Resource-monitoring tools that run in the guest operating system are available from sources external to VMware and are used with various VMware applications. Many tools that are used outside of the guest OS are made available by VMware for use with vSphere and other applications.
A partial list of these resource-monitoring tools is shown on the slide.
Windows Task Manager helps you measure CPU and memory use in the guest operating system.
The measurements that you take with tools in the guest operating system reflect resource usage of
the guest operating system, not necessarily of the VM itself.
VMware Tools includes a library of functions called the Perfmon DLL. With Perfmon, you can
access key host statistics in a guest VM. Using the Perfmon performance objects (VM Processor
and VM Memory), you can view actual CPU and memory usage and observed CPU and memory
usage of the guest operating system.
For example, you can use the VM Processor object to view the % Processor Time counter, which
monitors the VM’s current virtual processor load. Likewise, you can use the Processor object and
view the % Processor Time counter (not shown), which monitors the total use of the processor by
all running processes.
You can run the esxtop utility by using vSphere ESXi Shell to communicate with the
management interface of the ESXi host. You must have root user privileges.
Data on a wide range of metrics is collected at frequent intervals, processed, and archived in the
vCenter Server database. You can access statistical information through command-line monitoring
utilities or by viewing performance charts in the vSphere Client.
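For example, a hedged PowerCLI sketch of pulling real-time statistics from vCenter Server; the VM name and counter are assumptions:

# Retrieve the last ten real-time CPU usage samples (20-second interval) for a VM.
Get-Stat -Entity (Get-VM -Name "Web01") -Stat "cpu.usage.average" -Realtime -MaxSamples 10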
You can access overview and advanced performance charts in the vSphere Client.
Overview performance charts show the performance statistics that VMware considers most useful
for monitoring performance and diagnosing problems.
Depending on the object that you select in the inventory, the performance charts provide a quick
visual representation of how your host or VM is performing.
In the vSphere Client, you can customize the appearance of advanced performance charts.
Advanced charts have the following features:
More information than overview charts: Point to a data point in a chart to display details about
that specific data point.
Customizable charts: Change chart settings. Save custom settings to create your own charts.
To customize advanced performance charts, select Advanced under Performance. Click the Chart
Options link in the Advanced Performance pane.
Real-time information is information that is generated for the past hour at 20-second intervals. Historical information is generated for the past day, week, month, or year, at varying levels of granularity.
By default, vCenter Server has four archiving intervals: day, week, month, and year. Each interval
specifies a length of time that statistics are archived in the vCenter Server database.
You can configure which intervals are used and for what period of time. You can also configure
the number of data counters that are used during a collection interval by setting the collection
level.
Together, the collection interval and the collection level determine how much statistical data is
collected and stored in your vCenter Server database.
For example, using the table, past-day statistics show one data point every 5 minutes, for a total of
288 samples. Past-year statistics show 1 data point per day, or 365 samples.
Real-time statistics are not stored in the database. They are stored in a flat file on ESXi hosts and in memory on vCenter Server instances. ESXi hosts collect real-time statistics only for the host or the VMs that are available on that host.
Bar charts display storage metrics for datastores in a selected data center. Each datastore is
represented as a bar in the chart. Each bar displays metrics based on the file type: virtual disks,
other VM files, snapshots, swap files, and other files.
Pie charts display storage metrics for a single object, based on the file types or VMs. For example,
a pie chart for a datastore can display the amount of storage space occupied by the VMs that take
up the largest space.
In a line chart, the data for each performance counter is plotted on a separate line in the chart. For
example, a CPU chart for a host can contain a line for each of the host's CPUs. Each line plots the
CPU's usage over time.
Stacked charts display metrics for the child objects that have the highest statistical values. All
other objects are aggregated, and the sum value is displayed with the term Other. For example, a
host’s stacked CPU usage chart displays CPU usage metrics for the five VMs on the host that are
consuming the most CPU resources. The Other amount contains the total CPU usage of the
remaining VMs. The metrics for the host itself are displayed in separate line charts. By default, the
10 child objects with the highest data counter values appear.
In the vSphere Client, you can save data from the advanced performance charts to a file in various
graphics formats or in Microsoft Excel format. When you save a chart, you select the file type and
save the chart to the location of your choice.
In vCenter Server, you can determine how much or how little information about a specific device
type is displayed. You can control the amount of information a chart displays by selecting one or
more objects and counters.
An object refers to an instance for which a statistic is collected. For example, you might collect
statistics for an individual CPU, all CPUs, a host, or a specific network device.
A counter represents the actual statistic that you are collecting. An example is the amount of CPU
used or the number of network packets per second for a given device.
The statistics type refers to the measurement that is used during the statistics interval and is related
to the unit of measurement.
The statistics type is one of the following:
Rate: Value over the current statistics interval
Delta: Change from the previous statistics interval
Absolute: Absolute value, independent of the statistics interval
For example, CPU usage is a rate, CPU ready time is a delta, and memory active is an absolute value.
Data is displayed at different levels of granularity according to the historical interval. Past-hour statistics are shown at a 20-second granularity, and past-day statistics are shown at a 5-minute granularity.
The averaging that is done to convert from one time interval to another is called rollup.
Different rollup types are available. The rollup type determines the type of statistical values
returned for the counter:
Average: The data collected during the interval is aggregated and averaged.
Minimum and Maximum: The minimum and maximum values are collected and displayed only in collection level 4. Minimum and maximum rollup types are used to capture peaks in data during the interval. For real-time data, the value is the current minimum or current maximum. For historical data, the value is the average minimum or average maximum.
Summation: The collected data is summed. The measurement displayed in the performance
chart represents the sum of data collected during the interval.
Latest: The data that is collected during the interval is a set value. The value displayed in the
performance chart represents the current value.
For example, if you look at the CPU Used counter in a CPU performance chart, the rollup type is
summation. So, for a given 5-minute interval, the sum of all the 20-second samples in that interval
is represented.
The key to interpreting performance data is to observe the range of data from the perspective of
the guest operating system, the VM, and the host.
The CPU usage statistics in Task Manager, for example, do not give you the complete picture.
View CPU usage for the VM and the host on which the VM is located.
Use the performance charts in the vSphere Client to view this data.
If CPU use is high, check the VM's CPU usage statistics. Use either the overview charts or the
advanced charts to view CPU usage. The slide displays an advanced chart tracking a VM’s CPU
usage.
If a VM’s CPU use remains high over a period of time, the VM is constrained by CPU. Other
VMs on the host might have enough CPU resources to satisfy their needs.
If more than one VM is constrained by CPU, the key indicator is CPU ready time. Ready time
refers to the interval when a VM is ready to execute instructions but cannot because it cannot get
scheduled onto a CPU. Several factors affect the amount of ready time:
Overall CPU use: You are more likely to see ready time when use is high because the CPU is
more likely to be busy when another VM becomes ready to run.
Number of resource consumers (in this case, guest operating systems): When a host is running
a larger number of VMs, the scheduler is more likely to queue a VM behind VMs that are
already running or queued.
To determine whether a VM is being constrained by CPU resources, view CPU usage in the guest
operating system using, for example, Task Manager.
If more than one VM is constrained by CPU, the key indicator is CPU readiness. CPU readiness is
the percent of time that the VM cannot run because it is contending for access to the physical
CPUs.
You are more likely to see readiness values when use is high because the CPU is more likely to be
busy when another VM becomes ready to run. You are also more likely to see readiness values
when a host is running many VMs. In this case, the scheduler is more likely to queue a VM behind
VMs that are already running or queued.
A good readiness value varies from workload to workload.
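As an illustration of how the readiness percentage relates to the raw ready-time counter, a small sketch assuming the commonly used conversion for a real-time (20-second) sample; the sample values are hypothetical:

# Convert a real-time cpu.ready summation value (milliseconds) into a readiness percentage.
$readySumMs = 400      # hypothetical cpu.ready sample for one 20-second interval
$intervalMs = 20000    # real-time sample interval in milliseconds
$numVcpus   = 2        # vCPUs in the VM
$readinessPct = ($readySumMs / ($intervalMs * $numVcpus)) * 100
$readinessPct          # 1 (percent) in this hypothetical example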
You might see VMs with high ballooning activity and VMs being swapped in and out by the
VMkernel. This serious situation indicates that the host memory is overcommitted and must be
increased.
Disk performance problems are commonly caused by saturating the underlying physical storage
hardware. You can use the vCenter Server advanced performance charts to measure storage
performance at different levels. These charts provide insight about a VM performance. You can
monitor everything from the VM's datastore to a specific storage path.
If you select a host object, you can view throughput and latency for a datastore, a storage adapter,
or a storage path. The storage adapter charts are available only for Fibre Channel storage. The
storage path charts are available for Fibre Channel and iSCSI storage, not for NFS.
If you select a VM object, you can view throughput and latency for the VM’s datastore or specific
virtual disk.
To monitor throughput, view the Read rate and Write rate counters. To monitor latency, view the
Read latency and Write latency counters.
To determine whether your vSphere environment is experiencing disk problems, monitor the disk latency data counters. Use the advanced performance charts to view these statistics. In particular, monitor the following counters (a query sketch follows the list):
Kernel command latency: This data counter measures the average amount of time, in
milliseconds, that the VMkernel spends processing each SCSI command. For best
performance, the value should be 0 through 1 millisecond. If the value is greater than 4
milliseconds, the VMs on the ESXi host are trying to send more throughput to the storage
system than the configuration supports.
Physical device command latency: This data counter measures the average amount of time, in
milliseconds, for the physical device to complete a SCSI command.
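A hedged PowerCLI sketch of querying these latency counters for a host; the host name and counter identifiers are assumptions to verify against your environment:

# Real-time kernel and device latency samples for an ESXi host (counter names are assumptions).
Get-Stat -Entity (Get-VMHost -Name "esxi01.example.com") -Stat "disk.kernelLatency.average","disk.deviceLatency.average" -Realtime -MaxSamples 10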
Like disk performance problems, network performance problems are commonly caused by
saturating a network link between client and server. Use a tool such as Iometer, or a large file
transfer, to measure the effective bandwidth.
Network performance depends on application workload and network configuration. Dropped
network packets indicate a bottleneck in the network. To determine whether packets are being
dropped, use the advanced performance charts to examine the droppedTx and droppedRx network
counter values of a VM.
In general, the larger the network packets, the faster the network speed. When the packet size is
large, fewer packets are transferred, which reduces the amount of CPU that is required to process
the data. In some instances, large packets can result in high network latency. When network
packets are small, more packets are transferred, but the network speed is slower because more
CPU is required to process the data.
You can acknowledge an alarm to let other users know that you are taking ownership of the issue. For example, a VM has an alarm set to monitor CPU use. The alarm is configured to send an email to an administrator when the alarm is triggered. The VM CPU use spikes, triggering the alarm, which sends an email to the administrator. The administrator acknowledges the triggered alarm to let other administrators know that the problem is being addressed.
After you acknowledge an alarm, the alarm actions are discontinued, but the alarm does not get
cleared or reset when acknowledged. You reset the alarm manually in the vSphere Client to return
the alarm to a normal state.
If the predefined alarms do not address the event, state, or condition that you want to monitor,
define custom alarm definitions instead of modifying predefined alarms.
You can create custom alarms for the following target types:
Virtual machines
vCenter Server
You configure the alarm trigger to show as a warning or critical event when the specified criteria
are met:
You can monitor the current condition or state of virtual machines, hosts, and datastores.
Conditions or states include power states, connection states, and performance metrics such as
CPU and disk use.
You can monitor events that occur in response to operations occurring with a managed object
in the inventory or vCenter Server itself. For example, an event is recorded each time a VM
(which is a managed object) is cloned, created, deleted, deployed, and migrated.
You must create a separate alarm definition for each trigger. The OR operator is not supported in
the vSphere Client. However, you can combine more than one condition trigger with the AND
operator.
To configure email, specify the mail server FQDN or IP address and the email address of the
sender account.
You can configure up to four receivers of SNMP traps. They must be configured in numerical
order. Each SNMP trap requires a corresponding host name, port, and community.
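A hedged PowerCLI sketch of setting the mail server and sender as vCenter Server advanced settings; the setting names (mail.smtp.server, mail.sender) and values are assumptions to verify for your vCenter Server version:

# Point vCenter Server alarm email actions at an SMTP server and sender address (values are examples).
Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "mail.smtp.server" |
    Set-AdvancedSetting -Value "smtp.example.com" -Confirm:$false
Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "mail.sender" |
    Set-AdvancedSetting -Value "vcenter-alarms@example.com" -Confirm:$false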
You can also manage host updates using images. With vSphere Lifecycle Manager, you can
update all hosts in the cluster collectively, using a specified ESXi image.
The Cluster Quickstart workflow guides you through the deployment process for clusters. It
covers every aspect of the initial configuration, such as host, network, and vSphere settings. With
Cluster Quickstart, you can also add hosts to a cluster as part of the ongoing expansion of clusters.
Cluster Quickstart reduces the time it takes to configure a cluster.
The workflow includes the following tasks, which are presented as workflow cards on the Cluster quickstart page for configuring your new cluster:
Cluster basics: Lists the services that you have already enabled and provides an option for
editing the cluster's name.
Add hosts: Adds ESXi hosts to the cluster. These hosts must already be present in the
inventory. After hosts are added, the workflow shows the total number of hosts that are
present in the cluster and provides health check validation for those hosts. At the start, this
workflow is empty.
Configure cluster: Informs you about what can be automatically configured, provides details
on configuration mismatch, and reports cluster health results through the vSAN health service
even after the cluster is configured.
For more information about creating clusters, see vCenter Server and Host Management at
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-
3B5AF2B1-C534-4426-B97A-D14019A8010F.html.
vCenter Server uses vSphere HA admission control to ensure that sufficient resources are
available in a cluster to provide failover protection and to ensure that VM resource reservations
are respected.
When you power on a VM in the cluster for the first time, vSphere DRS either places the VM on a
particular host or makes a recommendation.
DRS attempts to improve resource use across the cluster by performing automatic migrations of
VMs (vSphere vMotion) or by providing a recommendation for VM migrations.
Before an ESXi host enters maintenance mode, VMs running on the host must be migrated to
another host (either manually or automatically by DRS) or shut down.
The DRS algorithm recommends where individual VMs should be moved for maximum
efficiency. If the cluster is in fully automated mode, DRS executes the recommendations and
migrates VMs to their optimal host based on the underlying calculations performed every minute.
A VM DRS score is computed from an individual VM's CPU, memory, and network metrics. DRS
uses these metrics to gauge the goodness or wellness of the VM.
In vSphere 7, the DRS algorithm runs every minute. The Cluster DRS Score is the last result of
DRS running and is filed into one of five buckets. These buckets are simply 20 percent ranges: 0-
20, 20-40, 40-60, 60-80 and 80-100 percent over the sample period.
The VM DRS Score page shows the following values for VMs that are powered on:
DRS Score
Active CPU
Used CPU
CPU Readiness
Granted Memory
Swapped Memory
Ballooned Memory
The automation level determines whether vSphere DRS makes migration recommendations or
automatically places VMs on hosts. vSphere DRS makes placement decisions when a VM powers
on and when VMs must be rebalanced across hosts in the cluster.
The following automation levels are available (a configuration sketch follows the list):
Manual: When you power on a VM, vSphere DRS displays a list of recommended hosts on
which to place the VM. When the cluster becomes imbalanced, vSphere DRS displays
recommendations for VM migration.
Partially automated: When you power on a VM, vSphere DRS places it on the best-suited
host. When the cluster becomes imbalanced, vSphere DRS displays recommendations for
manual VM migration.
Fully automated: When you power on a VM, vSphere DRS places it on the best-suited host.
When the cluster becomes imbalanced, vSphere DRS migrates VMs from overused hosts to
underused hosts to ensure balanced use of cluster resources.
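A hedged PowerCLI sketch of enabling DRS and setting the automation level on a cluster; the cluster name is an assumption:

# Enable DRS on a cluster and let it migrate VMs automatically.
Set-Cluster -Cluster (Get-Cluster -Name "Prod-Cluster") -DrsEnabled:$true -DrsAutomationLevel FullyAutomated -Confirm:$false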
A VM's files can be on a VMFS datastore, an NFS datastore, a vSAN datastore, or a vSphere
Virtual Volumes datastore. On a vSAN datastore or a vSphere Virtual Volumes datastore, the
swap file is created as a separate vSAN or vSphere Virtual Volumes object.
A swap file is created by the ESXi host when a VM is powered on. If this file cannot be created,
the VM cannot power on. Instead of accepting the default location, you can use the following options:
Use per-VM configuration options to change the datastore to another shared storage location.
Use host-local swap, which allows you to specify a locally attached datastore on the host as the swap file location. Swap files are then placed at a per-host level. However, host-local swap can lead to a slight degradation in performance for vSphere vMotion because pages swapped to a local swap file on the source host must be transferred across the network to the destination host. Currently, vSAN and vSphere Virtual Volumes datastores cannot be specified for host-local swap.
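As an illustration, the swap file location can also be changed with PowerCLI. This minimal sketch assumes a cluster named Lab-Cluster, a host named esxi-01.example.com, and a local datastore named Local-DS-01 (all hypothetical names):
# Store swap files in a host-specified datastore instead of in the VM's directory
Set-Cluster -Cluster 'Lab-Cluster' -VMSwapfilePolicy InHostDatastore -Confirm:$false
# Point one host at its local swap datastore
$esx = Get-VMHost -Name 'esxi-01.example.com'
Set-VMHost -VMHost $esx -VMSwapfileDatastore (Get-Datastore -Name 'Local-DS-01')
Keep in mind the vSphere vMotion performance impact described above when choosing host-local swap.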
After a vSphere DRS cluster is created, you can edit its properties to create rules that specify
affinity. The following types of rules can be created:
Affinity rules: vSphere DRS keeps certain VMs together on the same host (for example, for
performance reasons).
Anti-affinity rules: vSphere DRS ensures that certain VMs are not together (for example, for
availability reasons).
For ease of administration, virtual machines can be placed in VM or host groups. You can create
one or more VM groups in a vSphere DRS cluster, each consisting of one or more VMs. A host
group consists of one or more ESXi hosts.
The main use of VM groups and host groups is to help in defining the VM-Host affinity rules.
A VM-Host affinity or anti-affinity rule specifies whether the members of a selected VM group
can run on the members of a specific host group.
Unlike an affinity rule for VMs, which specifies affinity (or anti-affinity) between individual
VMs, a VM-Host affinity rule specifies an affinity relationship between a group of VMs and a
group of hosts.
Because VM-Host affinity rules are cluster-based, the VMs and hosts that are included in a rule must all reside in the same cluster. If a VM is removed from the cluster, it loses its membership in all VM groups, even if it is later returned to the cluster.
Preferential rules can be violated to allow the proper functioning of vSphere DRS, vSphere HA,
and VMware vSphere DPM.
On the slide, Group A and Group B are VM groups. Blade Chassis A and Blade Chassis B are host
groups. The goal is to force the VMs in Group A to run on the hosts in Blade Chassis A and to
force the VMs in Group B to run on the hosts in Blade Chassis B. If the hosts fail, vSphere HA
restarts the VMs on the other hosts in the cluster. If the hosts are put into maintenance mode or
become overused, vSphere DRS moves the VMs to the other hosts in the cluster.
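These rules and groups can also be defined programmatically. The following PowerCLI lines are a minimal sketch that mirrors the Group A and Blade Chassis A example; all cluster, VM, and host names are hypothetical:
# Keep two database VMs apart (VM-VM anti-affinity)
New-DrsRule -Cluster 'Lab-Cluster' -Name 'Separate-DB-Nodes' -KeepTogether:$false -VM (Get-VM 'db-01','db-02')
# Create a VM group and a host group, then link them with a preferential (should-run) rule
$vmGroup = New-DrsClusterGroup -Cluster 'Lab-Cluster' -Name 'GroupA-VMs' -VM (Get-VM 'app-01','app-02')
$hostGroup = New-DrsClusterGroup -Cluster 'Lab-Cluster' -Name 'ChassisA-Hosts' -VMHost (Get-VMHost 'esxi-01.example.com','esxi-02.example.com')
New-DrsVMHostRule -Cluster 'Lab-Cluster' -Name 'GroupA-on-ChassisA' -VMGroup $vmGroup -VMHostGroup $hostGroup -Type ShouldRunOn
Changing -Type to MustRunOn creates a required rule of the kind described in the licensing example that follows.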
A VM-Host affinity rule that is required, rather than preferential, can be used when the software running in your VMs has licensing restrictions. You can place such VMs in a VM group and then create a rule that requires them to run on a host group that contains hosts with the required licenses.
When you create a VM-Host affinity rule that is based on the licensing or hardware requirements
of the software running in your VMs, you are responsible for ensuring that the groups are properly
set up. The rule does not monitor the software running in the VMs. Nor does it know which third-
party licenses are in place on which ESXi hosts.
On the slide, Group A is a VM group. You can force Group A to run on hosts in the ISV-Licensed
group to ensure that the VMs in Group A run on hosts that have the required licenses. But if the
hosts in the ISV-Licensed group fail, vSphere HA cannot restart the VMs in Group A on hosts that
are not in the group. If the hosts in the ISV-Licensed group are put into maintenance mode or
become overused, vSphere DRS cannot move the VMs in Group A to hosts that are not in the
group.
By setting the automation level for individual VMs, you can fine-tune automation to suit your
needs. For example, you might have a VM that is especially critical to your business. You want
more control over its placement so you set its automation level to Manual.
If a VM’s automation level is set to disabled, vCenter Server does not migrate that VM or provide
migration recommendations for it.
As a best practice, enable automation. Select the automation level based on your environment and
level of comfort.
For example, if you are new to vSphere DRS clusters, you might select Partially Automated
because you want control over the movement of VMs.
When you are comfortable with what vSphere DRS does and how it works, you might set the
automation level to Fully Automated.
You can set the automation level to Manual on VMs over which you want more control, such as
your business-critical VMs.
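A per-VM override can also be applied with PowerCLI, as in this minimal sketch (the VM name is hypothetical):
# Keep cluster-level DRS fully automated but require manual approval for a critical VM
Set-VM -VM (Get-VM 'erp-db-01') -DrsAutomationLevel Manual -Confirm:$false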
You can create vSphere DRS clusters, or you can enable vSphere DRS for existing vSphere HA or
vSAN clusters.
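For example, a cluster can be created with vSphere DRS enabled, or DRS can be enabled later on an existing cluster. This PowerCLI sketch uses hypothetical datacenter and cluster names:
# Create a new cluster with DRS enabled in partially automated mode
New-Cluster -Name 'Lab-Cluster' -Location (Get-Datacenter 'SA-Datacenter') -DrsEnabled -DrsAutomationLevel PartiallyAutomated
# Enable DRS on an existing vSphere HA or vSAN cluster
Set-Cluster -Cluster 'Existing-Cluster' -DrsEnabled:$true -Confirm:$false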
The CPU Utilization and Memory Utilization charts show all the hosts in the cluster and how their
CPU and memory resources are allocated to each VM.
For CPU usage, the VM information is represented by a colored box. If you point to the
colored box, the VM’s CPU usage information appears. If the VM is receiving the resources
that it is entitled to, the box is green. Green means that 100 percent of the VM’s entitled
resources are delivered. If the box is not green (for example, entitled resources are 80 percent
or less) for an extended time, you might want to investigate what is causing this shortfall (for
example, unapplied recommendations).
For memory usage, the VM boxes are not color-coded because the relationship between
consumed memory and entitlement is often not easily categorized.
In the Network Utilization chart, the displayed network data reflects all traffic across physical
network interfaces on the host.
In the DRS Recommendations pane, you can see the current set of recommendations that are
generated for optimizing resource use in the cluster through either migrations or power
management. Only manual recommendations awaiting user confirmation appear in the list.
To refresh the recommendations, click RUN DRS NOW.
To apply all recommendations, click APPLY RECOMMENDATIONS.
To apply a subset of the recommendations, select the Override DRS recommendations check
box. Select the check box next to each desired recommendation and click APPLY
RECOMMENDATIONS.
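The same recommendations can be retrieved and applied from PowerCLI. A minimal sketch, assuming a hypothetical cluster name:
# List the current DRS recommendations for the cluster
Get-DrsRecommendation -Cluster 'Lab-Cluster'
# Apply only the priority 1 and priority 2 recommendations
Get-DrsRecommendation -Cluster 'Lab-Cluster' -Priority 1,2 | Invoke-DrsRecommendation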
A host enters or leaves maintenance mode as the result of a user request. While in maintenance
mode, the host does not allow you to deploy or power on a VM.
VMs that are running on a host entering maintenance mode must be shut down or migrated to
another host, either manually (by a user) or automatically (by vSphere DRS). The host continues
to run the Enter Maintenance Mode task until all VMs are powered down or moved away.
When no more running VMs are on the host, the host’s icon indicates that it has entered
maintenance mode. The host’s Summary tab indicates the new state.
Place a host in maintenance mode before servicing the host, for example, when installing more
memory or removing a host from a cluster.
You can place a host in standby mode manually. However, the next time that vSphere DRS runs, it
might undo your change or recommend that you undo the changes. If you want a host to remain
powered off, place it in maintenance mode and turn it off.
When a host is put into maintenance mode, all its running VMs must be shut down, suspended, or
migrated to other hosts by using vSphere vMotion. VMs with disks on local storage must be
powered off, suspended, or migrated to another host and datastore.
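Maintenance mode can also be entered and exited from PowerCLI. A minimal sketch with a hypothetical host name (in a fully automated DRS cluster, running VMs are migrated automatically):
# Place the host in maintenance mode, evacuating powered-off and suspended VMs
Set-VMHost -VMHost 'esxi-01.example.com' -State Maintenance -Evacuate
# ...perform the maintenance work...
# Return the host to normal operation
Set-VMHost -VMHost 'esxi-01.example.com' -State Connected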
When you remove the host from the cluster, the VMs that are currently associated with the host
are also removed from the cluster. If the cluster still has enough resources to satisfy the
reservations of all VMs in the cluster, the cluster adjusts resource allocation to reflect the reduced
amount of resources.
Dynamic DirectPath I/O is useful on hosts that have PCI passthrough devices and for virtual devices that require a directly assigned hardware device to back them.
Dynamic DirectPath I/O is also called assignable hardware. The following devices can use
assignable hardware:
For New PCI device, click Dynamic DirectPath IO. Clicking SELECT HARDWARE displays
a list of devices that can be attached to the VM. You can select one or more devices from the list.
In the image, the VM can use either an Intel NIC with the RED hardware label or a vmxnet3 NIC with the RED hardware label.
Whether planned or unplanned, downtime brings with it considerable costs. However, solutions to
ensure higher levels of availability are traditionally costly, hard to implement, and difficult to
manage.
VMware software makes it simpler and less expensive to provide higher levels of availability for
important applications. With vSphere, organizations can easily increase the baseline level of
availability provided for all applications and provide higher levels of availability more easily and
cost effectively.
vSphere HA provides a base level of protection for your VMs by restarting them if a host fails.
vSphere Fault Tolerance provides a higher level of availability, allowing users to protect any VM against host failures with no loss of data, transactions, or connections.
Unlike other clustering solutions, vSphere HA protects all workloads by using the infrastructure
itself. After you configure vSphere HA, no actions are required to protect new VMs. All
workloads are automatically protected by vSphere HA.
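Enabling vSphere HA is a single cluster-level setting. A minimal PowerCLI sketch with a hypothetical cluster name:
# Turn on vSphere HA for the cluster
Set-Cluster -Cluster 'Lab-Cluster' -HAEnabled:$true -Confirm:$false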
Power off and restart VMs - Conservative restart policy: vSphere HA does not attempt to restart
the affected VMs unless vSphere HA determines that another host can restart the VMs. The host
experiencing the all paths down (APD) condition communicates with the vSphere HA master host to
determine whether the cluster has sufficient capacity to power on the affected VMs. If the master
host determines that sufficient capacity is available, the host experiencing the APD stops the VMs
so that the VMs can be restarted on a healthy host. If the host experiencing the APD cannot
communicate with the master host, no action is taken.
Power off and restart VMs - Aggressive restart policy: vSphere HA stops the affected VMs even if
it cannot determine that another host can restart the VMs. The host experiencing the APD attempts
to communicate with the master host to determine whether the cluster has sufficient capacity to
power on the affected VMs. If the master host is not reachable, sufficient capacity to restart the
VMs is unknown. In this scenario, the host takes the risk and stops the VMs so they can be
restarted on the remaining healthy hosts. However, if sufficient capacity is not available, vSphere
HA might not be able to recover all the affected VMs. This result is common in a network partition scenario, where a host cannot communicate with the master host to get a definitive response about the likelihood of a successful recovery.
If you ensure that the network infrastructure is sufficiently redundant and that at least one network
path is always available, host network isolation is less likely to occur.
Redundant heartbeat networking is the best approach for your vSphere HA cluster. When a master
host’s connection fails, a second connection is still available to send heartbeats to other hosts. If
you do not provide redundancy, your failover setup has a single point of failure.
In this example, vmnic0 and vmnic1 form a NIC team in the Management network. The vmk0
VMkernel port is marked for management.
The vSphere HA cluster is managed by a master host. All other hosts are called subordinate hosts.
Fault Domain Manager (FDM) services on subordinate hosts all communicate with FDM on the
master host. Hosts cannot participate in a vSphere HA cluster if they are in maintenance mode, in
standby mode, or disconnected from vCenter Server.
To determine which host is the master host, an election process takes place. The host that can
access the greatest number of datastores is elected the master host. If more than one host sees the
same number of datastores, the election process determines the master host by using the host
Managed Object ID (MOID) assigned by vCenter Server.
A new master host election takes place in the following cases:
vSphere HA is enabled.
The master host encounters a system failure.
The subordinate hosts cannot communicate with the master host because of a network problem.
During the election process, the candidate vSphere HA agents communicate with each other over
the management network, or the vSAN network in a vSAN cluster, by using User Datagram
Protocol (UDP). All network connections are point-to-point. After the master host is determined,
the master host and subordinate hosts communicate using secure TCP. When vSphere HA is
started, vCenter Server contacts the master host and sends a list of hosts with membership in the
cluster with the cluster configuration. That information is saved to local storage on the master host
and then pushed out to the subordinate hosts in the cluster. If additional hosts are added to the
cluster during normal operation, the master host sends an update to all hosts in the cluster.
The master host provides an interface for vCenter Server to query the state of and report on the
health of the fault domain and VM availability. vCenter Server tells the vSphere HA agent which
VMs to protect with their VM-to-host compatibility list. The agent learns about state changes
through hostd and vCenter Server learns through vpxa. The master host monitors the health of the
subordinate hosts and takes responsibility for VMs that were running on a failed subordinate host.
A subordinate host monitors the health of VMs running locally and sends state changes to the
master host. A subordinate host also monitors the health of the master host.
vSphere HA is configured, managed, and monitored through vCenter Server. The vpxd process,
which runs on the vCenter Server system, maintains the cluster configuration data. The vpxd
process reports cluster configuration changes to the master host. The master host advertises a new
copy of the cluster configuration information and each subordinate host fetches an updated copy.
Each subordinate host writes the updated configuration information to local storage. A list of
protected VMs is stored on each datastore. The VM list is updated after vCenter Server observes each user-initiated power-on (protected) or power-off (unprotected) operation.
Heartbeats are sent to each subordinate host from the master host over all configured management
networks. However, subordinate hosts use only one management network to communicate with
the master host. If the management network used to communicate with the master host fails, the
subordinate host switches to another management interface to communicate with the master host.
If the subordinate host does not respond within the predefined timeout period, the master host
declares the subordinate host as agent unreachable. When a subordinate host is not responding, the
master host attempts to determine the cause of the subordinate host’s inability to respond. The
master host must determine whether the subordinate host crashed, is not responding because of a
network failure, or the vSphere HA agent is in an unreachable state.
Using datastore heartbeating, the master host determines whether a host has failed or a network
isolation has occurred. If datastore heartbeating from the host stops, the host is considered failed.
In this case, the failed host’s VMs are started on another host in the vSphere HA cluster.
vSphere HA can also determine whether an ESXi host is isolated or has failed. Isolation refers to
when an ESXi host cannot see traffic coming from the other hosts in the cluster and cannot ping
its configured isolation address. If an ESXi host fails, vSphere HA attempts to restart the VMs that
were running on the failed host on one of the remaining hosts in the cluster. If the ESXi host is
isolated because it cannot ping its configured isolation address and sees no management network
traffic, the host executes the Host Isolation Response.
The master host must determine whether the subordinate host is isolated or has failed, for
example, because of a misconfigured firewall rule or component failure. The type of failure
dictates how vSphere HA responds.
When the master host cannot communicate with a subordinate host over the heartbeat network, the
master host uses datastore heartbeating to determine whether the subordinate host failed, is in a
network partition, or is network-isolated. If the subordinate host stops datastore heartbeating, the
subordinate host is considered to have failed, and its virtual machines are restarted elsewhere.
For VMFS, a heartbeat region on the datastore is read to find out if the host is still heartbeating to
it. For NFS datastores, vSphere HA reads the host--hb file, which is locked by the ESXi host
accessing the datastore. The lock on this file guarantees that the VMkernel is heartbeating to the datastore, and the host periodically updates the lock file.
The lock file time stamp is used by the master host to determine whether the subordinate host is
isolated or has failed.
To determine which host is the master host, an election process takes place. The host that can
access the greatest number of datastores is elected the master host. If more than one host sees the
same number of datastores, the election process determines the master host by using the host
Managed Object ID (MOID) assigned by vCenter Server. If the master host fails, is shut down, or
is removed from the cluster, a new election is held.
The slide illustrates one of several scenarios that might result in host isolation. If a host loses
connectivity to both the primary heartbeat network and the alternate heartbeat network, the host no
longer receives network heartbeats from the other hosts in the vSphere HA cluster. Furthermore,
the slide depicts that this same host can no longer ping its isolation address.
If a host becomes isolated, the master host must determine if that host is still alive, and merely
isolated, by checking for datastore heartbeats. Datastore heartbeats are used by vSphere HA only
when a host becomes isolated or partitioned.
When a datastore accessibility failure occurs, the affected host can no longer access the storage
path for a specific datastore. You can determine the response that vSphere HA gives to such a
failure, ranging from the creation of event alarms to VM restarts on other hosts.
If a datastore is based on Fibre Channel, a network failure does not disrupt datastore access. When
using datastores based on IP storage (for example, NFS, iSCSI, or Fibre Channel over Ethernet),
you must physically separate the IP storage network and the management network (the heartbeat
network). If physical separation is not possible, you can logically separate the networks.
To determine the maximum number of hosts per cluster, see VMware Configuration Maximums at
https://configmax.vmware.com.
In the vSphere Client, you can configure the following vSphere HA settings:
Availability failure conditions and responses: Provides settings for host failure responses, host isolation, VM monitoring, and VMCP.
Admission control: Enable or disable admission control for the vSphere HA cluster and select
a policy for how it is enforced.
Heartbeat datastores: Specify preferences for the datastores that vSphere HA uses for
datastore heartbeating.
Using the Failures and Responses pane, you can configure how your cluster should function when
problems are encountered. You can specify the vSphere HA cluster’s response for host failures
and isolation. You can also configure VMCP actions when permanent device loss and all paths
down situations occur and enable VM monitoring.
If a datastore encounters an All Paths Down (APD) condition, the device state is unknown and
might only be temporarily available. You can select the following options for a response to a
datastore APD:
Issue events: No action is taken against the affected VMs; however, the administrator is notified when an APD event occurs.
Power off and restart VMs - Conservative restart policy: vSphere HA does not attempt to
restart the affected VMs unless vSphere HA determines that another host can restart the VMs.
Power off and restart VMs - Aggressive restart policy: vSphere HA stops the affected
VMs even if it cannot determine that another host can restart the VMs.
The host experiencing the APD attempts to communicate with the master host to determine if
sufficient capacity exists in the cluster to power on the affected VMs. If the master host is not
reachable, sufficient capacity for restarting the VMs is unknown. In this scenario, the host
takes the risk and stops the VMs so that they can be restarted on the remaining healthy hosts.
However, if sufficient capacity is not available, vSphere HA might not be able to recover all
the affected VMs. This result is common in a network partition scenario where a host cannot
communicate with the master host to get a definitive response to the likelihood of a successful
recovery.
The VM monitoring service determines that the VM has failed if one of the following events
occurs:
The guest operating system has not issued an I/O for the last 2 minutes (by default).
If the VM has failed, the VM monitoring service resets the VM to restore services.
You can configure the level of monitoring sensitivity. Highly sensitive monitoring results in a
more rapid conclusion that a failure has occurred. Although unlikely, highly sensitive monitoring
might lead to falsely identifying failures when the VM or application is still working but
heartbeats have not been received because of factors like resource constraints. Low-sensitivity
monitoring results in longer interruptions in service between actual failures and VMs being reset.
Select an option that is an effective compromise for your needs.
You can select VM and Application Monitoring to enable application monitoring.
Datastore heartbeating takes checking the health of a host to another level by checking more than
the management network to determine a host’s health. You can configure a list of datastores to
monitor for a particular host, or you can allow vSphere HA to decide. You can also combine both
methods.
After you create a cluster, you can use admission control to specify whether VMs can be started if
they violate availability constraints. The cluster reserves resources to allow failover for all running
VMs for a specified number of host failures.
The admission control settings include:
Disabled: (Not recommended) This option disables admission control, allowing VMs that violate availability constraints to power on.
Slot Policy: A slot is a logical representation of memory and CPU resources. With the slot
policy option, vSphere HA calculates the slot size, determines how many slots each host in
the cluster can hold, and therefore determines the current failover capacity of the cluster.
Dedicated failover hosts: This option selects hosts to use for failover actions. If a default
failover host does not have enough resources, failovers can still occur to other hosts in the
cluster.
Cluster resource percentage is the default admission control policy. Recalculations occur
automatically as the cluster's resources change, for example, when a host is added to or removed
from the cluster.
Admission control can also be configured to offer warnings when the actual use exceeds the
failover capacity percentage. The resource reduction calculation takes into account a VM's
reserved memory and memory overhead.
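Admission control can also be configured with PowerCLI. This sketch uses the slot-policy style setting exposed by Set-Cluster and a hypothetical cluster name:
# Enable admission control and reserve capacity for one host failure
Set-Cluster -Cluster 'Lab-Cluster' -HAAdmissionControlEnabled:$true -HAFailoverLevel 1 -Confirm:$false
Settings such as the cluster resource percentage policy are configured in the vSphere Client or through the vSphere API.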
By setting the Performance degradation VMs tolerate threshold, you can specify when a
configuration issue should generate a warning or notice. For example:
If you reduce the threshold to 0 percent, a warning is generated when cluster use exceeds the
available capacity.
If you reduce the threshold to 20 percent, the performance reduction that can be tolerated is
calculated as performance reduction = current use x 20 percent.
Optionally, you can configure a delay when a certain restart condition is met.
You can set advanced options that affect the behavior of your vSphere HA cluster. For more
details, see vSphere Availability at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.avail.doc/GUID-63F459B7-8884-4818-8872-
C9753B2E0215.html.
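For illustration, HA advanced options can also be set with PowerCLI through New-AdvancedSetting. The option name below is a documented vSphere HA advanced option; the cluster name and address are hypothetical:
# Add a second isolation address for the vSphere HA agents to ping
$cluster = Get-Cluster -Name 'Lab-Cluster'
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name 'das.isolationaddress1' -Value '172.20.10.254' -Confirm:$false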
After a host failure, VMs are assigned to other hosts with unreserved capacity, with the highest
priority VMs placed first. The process continues to those VMs with lower priority until all have
been placed or no more cluster capacity is available to meet the reservations or memory overhead
of the VMs. A host then restarts the VMs assigned to it in priority order.
If insufficient resources exist, vSphere HA waits for more unreserved capacity to become
available, for example, because of a host coming back online, and then retries the placement of
these VMs. To reduce the chance of this situation occurring, configure vSphere HA admission
control to reserve more resources for failures. With admission control, you can control the amount
of cluster capacity that is reserved by VMs, which is unavailable to meet the reservations and
memory overhead of other VMs if a failure occurs.
The following network maintenance suggestions can help you avoid the false detection of host
failure and network isolation because of dropped vSphere HA heartbeats:
Changing your network hardware or networking settings can interrupt the heartbeats used by
vSphere HA to detect host failures, and might result in unwanted attempts to fail over VMs.
When changing the management or vSAN networks of the hosts in the vSphere HA-enabled
cluster, suspend host monitoring and place the host in maintenance mode.
Disabling host monitoring is required only when modifying virtual networking components
and properties that involve the VMkernel ports configured for the Management or vSAN
traffic, which are used by the vSphere HA networking heartbeat service.
After you change the networking configuration on ESXi hosts, for example, adding port
groups, removing virtual switches, or suspending host monitoring, you must reconfigure
vSphere HA on all hosts in the cluster. This reconfiguration causes the network information to
be reinspected. Then, you must reenable host monitoring.
Your cluster or its hosts can experience configuration issues and other errors that adversely affect
proper vSphere HA operation. You can monitor these errors on the Configuration Issues page.
When vSphere HA performs failover and restarts VMs on different hosts, its first priority is the
immediate availability of all VMs. After the VMs are restarted, the hosts in which they were
powered on are usually heavily loaded, and other hosts are comparatively lightly loaded. vSphere
DRS helps to balance the load across hosts in the cluster.
You can use vSphere Fault Tolerance for most mission-critical VMs. vSphere Fault Tolerance is
built on the ESXi host platform.
The protected VM is called the primary VM. The duplicate VM is called the secondary VM. The
secondary VM is created and runs on a different host from the primary VM. The secondary VM’s
execution is identical to that of the primary VM. The secondary VM can take over at any point
without interruption and provide fault-tolerant protection.
The primary VM and the secondary VM continuously monitor the status of each other to ensure
that fault tolerance is maintained. A transparent failover occurs if the host running the primary VM
fails, in which case the secondary VM is immediately activated to replace the primary VM. A new
secondary VM is created and started, and fault tolerance redundancy is reestablished
automatically. If the host running the secondary VM fails, the secondary VM is also immediately
replaced. In either case, users experience no interruption in service and no loss of data.
You can use vSphere Fault Tolerance with vSphere DRS only when the Enhanced vMotion
Compatibility feature is enabled.
When you enable EVC mode on a cluster, vSphere DRS makes the initial placement
recommendations for fault-tolerant VMs, and you can assign a vSphere DRS automation level to
primary VMs. The secondary VM always assumes the same setting as its associated primary VM.
When vSphere Fault Tolerance is used for VMs in a cluster that has EVC mode disabled, the fault-
tolerant VMs are given the disabled vSphere DRS automation level. In such a cluster, each
primary VM is powered on only on its registered host, and its secondary VM is automatically
placed.
A fault-tolerant VM and its secondary copy are not allowed to run on the same host. This
restriction ensures that a host failure cannot result in the loss of both VMs.
vSphere Fault Tolerance provides failover redundancy by creating two full VM copies. The VM
files can be placed on the same datastore. However, VMware recommends placing these files on separate datastores to provide recovery from datastore failures.
After you take all the required steps for enabling vSphere Fault Tolerance for your cluster, you can
use the feature by turning it on for individual VMs.
Before vSphere Fault Tolerance can be turned on, validation checks are performed on a VM.
After these checks are passed, and you turn on vSphere Fault Tolerance for a VM, new options are
added to the Fault Tolerance section of the VM's context menu. These options include turning off
or disabling vSphere Fault Tolerance, migrating the secondary VM, testing failover, and testing
restart of the secondary VM.
When vSphere Fault Tolerance is turned on, vCenter Server resets the VM’s memory limit to the
default (unlimited memory) and sets the memory reservation to the memory size of the VM. While
vSphere Fault Tolerance is turned on, you cannot change the memory reservation, size, limit,
number of virtual CPUs, or shares. You also cannot add or remove disks for the VM. When
vSphere Fault Tolerance is turned off, any parameters that were changed are not reverted to their
original values.
When generating reports, if the Customer Experience Improvement Program (CEIP) is not yet
accepted, a prompt describing CEIP appears. Reports are not generated if you do not join CEIP.
When new vCenter Server updates are released, the vSphere Client shows a notification in the
Summary tab. Clicking the notification directs you to the Updates tab.
The Updates tab has an Update Planner page. This page shows a list of vCenter Server versions
that you can select.
Details include release date, version, build, and other information about each vCenter Server
version available.
The Type column tells you if the release item is an update, an upgrade, or a patch.
If multiple versions appear, the recommended version is preselected.
After selecting a vCenter Server version from the list, you can generate product interoperability
reports and preupdate reports.
In the vSphere Client, the Interoperability page appears on the Monitor tab of vCenter Server.
This page displays VMware products currently registered with vCenter Server.
Columns show the name, current version, compatible version, and release notes of each detected
product.
If you do not see your registered VMware products, you can manually modify the list and add the
appropriate names and versions.
You do not require special privileges to access the vSphere Lifecycle Manager home view.
In the Lifecycle Manager pane, you can access the following tabs: Image Depot, Updates,
Imported ISOs, Baselines, and Settings.
A dynamic baseline is a set of patches that meet certain criteria. The content of a dynamic baseline
changes as the available patches change. You can manually exclude or add specific patches to the
baseline.
A fixed baseline is a set of patches that does not change as patch availability changes.
The ESXi base image is a complete ESXi installation package and is enough to start an ESXi host.
Only VMware creates and releases ESXi base images.
The ESXi base image is a grouping of components. You must select at least the base image or
vSphere version when creating a cluster image.
Starting with vSphere 7, the component is the smallest unit that is used by vSphere Lifecycle
Manager to install VMware and third-party software on ESXi hosts. Components are the basic
packaging for VIBs and metadata. The metadata provides the name and version of the component.
On installation, a component provides you with a visible feature. For example, vSphere HA is
provided as a component. Components are optional elements to add to a cluster image.
Vendor add-ons are custom OEM images. Each add-on is a collection of components customized
for a family of servers. OEMs can add, update, or remove components from a base image to create
an add-on. Selecting an add-on is optional.
When you select a downloaded file, the details appear to the right:
When you select an ESXi version, the details include the version name, build number,
category, and description, and the list of components that make up the base image.
When you select a vendor add-on, the details include the add-on name, version, vendor name,
release date, category, and the list of added or removed components.
When you select a component, the details include the component name, version, publisher,
release date, category, severity, and contents (VIBs).
The status of a host can be unknown, compliant, out of compliance, or not compatible with the
image.
A compliant host is one that runs the same ESXi image that is defined for the cluster and has no standalone VIBs or differing components.
If the host is out of compliance, a message about the impact of remediation appears. In the
example, the host must be rebooted as part of the remediation. Another impact that might be
reported is the requirement that the host enters maintenance mode.
A host is not compatible if it runs an image version that is later than the desired cluster image
version, or if the host does not meet the installation requirements for the vSphere build.
Hardware compatibility is checked only for vSAN storage controllers and not with the full
VMware Compatibility Guide.
A warning about a standalone VIB does not block the process of converting the cluster to use
vSphere Lifecycle Manager. If you continue to update ESXi, the VIB is uninstalled from the host
as part of the process.
You cannot include standalone VIBs in a cluster image.
The Review Remediation Impact dialog box shows the impact summary, applicable remediation
settings, End User License Agreement, and impact on specific hosts.
vSphere Lifecycle Manager performs a precheck on every remediation call. When the precheck is
complete, vSphere Lifecycle Manager applies the latest saved cluster image to the hosts.
During each step of a remediation process, vSphere Lifecycle Manager determines the readiness of
the host to enter or exit maintenance mode or be rebooted.
You can also click RUN PRE-CHECK to precheck hosts without updating them.
You check for image recommendations on demand and per cluster. You can check for
recommendations for different clusters at the same time. When recommendation checks run
concurrently with other checks, with compatibility scans, and with remediation operations, the
checks are queued to run one at a time.
If you have never checked recommendations for the cluster, the View recommended images
option is dimmed.
After you select Check for recommended images, the results for that cluster are generated.
The Checking for recommended images task is visible to all user sessions and cannot be canceled.
When the check completes, you can select View recommended images.
When you view recommended images, vSphere shows the following types of images:
CURRENT IMAGE: The image specification that is being used to manage the cluster.
LATEST IN CURRENT SERIES: If available, a later version within the same release series
appears. For example, if the cluster is running vSphere 7.0 and vSphere 7.1 is released, an
image based on vSphere 7.1 appears.
LATEST AND GREATEST: If available, a later version in a later major release. For
example, if the cluster is running vSphere 7.0 or 7.1 and vSphere 8.0 is released, an image
based on vSphere 8.0 appears.
If the latest release within the current series is the same as the latest major version released,
only one recommendation appears.
If the current image is the same as the latest release, no recommendations appear.
You can use a recommended image as a starting point to customize the cluster image. When you
select a recommended image, the Edit Image workflow appears.
The VMware Tools status of a VM can be one of the following:
Upgrade Available: You can upgrade VMware Tools to match the current version available
for your ESXi hosts.
Guest Managed: Your VM is running the Linux open-vm-tools package. Use native Linux package management tools to upgrade VMware Tools.
Unknown: vSphere Lifecycle Manager has not yet checked the status of VMware Tools.
Ensure that the VM is powered on before clicking the CHECK STATUS link.
Up to Date: The version of VMware Tools running in the VM matches the latest available
version for the ESXi host.
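As an illustration, VMware Tools status can also be checked and upgraded with PowerCLI. This sketch reads the status through the vSphere API property exposed as ExtensionData and uses a hypothetical VM name:
# Report the Tools status for all VMs
Get-VM | Select-Object Name, @{Name='ToolsStatus';Expression={$_.ExtensionData.Guest.ToolsVersionStatus}}
# Upgrade VMware Tools in a VM without forcing an immediate reboot
Get-VM -Name 'web-01' | Update-Tools -NoReboot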