ESXi 5.1 PowerCLI Host Install Script

I’ve been working on vCenter a lot this week. Here’s the script I use to configure hosts. It’s not the most awesome thing ever, but it might get you pretty far if you’re just starting out with PowerCLI, vSphereCLI, or esxcli.

It’s named Host-Install-Script.ps1 and it’s in my GitHub repo: John Puskar’s Github Repo.

I’m pasting the content here for reference. I hope this helps you out.

# ESXi-Install-Script
# 02/02/2013

# Download -
# Reference -

#vSphere CLI (for snmp)
# Download -

#VDSPowerCLI (no longer used)
# cmdlets download -
# not compatible with powercli 5.1!

#== Getting Started! ==

#== Variables ==
# Generic
$vCenterServer = ""
$vmHostName = ""
$vSwitchName = "SAN-Switch"
$ntpHostname = ""
$snmpTrapReceiver = ""
$snmpTrapCommunity = "public"
$omsaPath = "/vmfs/volumes/san-esx0-lun0/VIBs/OM-SrvAdmin-Dell-Web-7.1.0-5304.VIB-ESX50i_A00/"
# Port Groups
$arrPGsToCreate = @()
$arrPGsToCreate += New-Object -TypeName PSObject -Prop (@{"Name" = "100_san1";"VLAN" = "100"})
$arrPGsToCreate += New-Object -TypeName PSObject -Prop (@{"Name" = "200_san2";"VLAN" = "200"})
$arrPGsToCreate += New-Object -TypeName PSObject -Prop (@{"Name" = "100_san1-vmk1";"VLAN" = "100"})
$arrPGsToCreate += New-Object -TypeName PSObject -Prop (@{"Name" = "200_san2-vmk1";"VLAN" = "200"})
$arrPGsToCreate += New-Object -TypeName PSObject -Prop (@{"Name" = "100_san1-vmk2";"VLAN" = "100"})
$arrPGsToCreate += New-Object -TypeName PSObject -Prop (@{"Name" = "200_san2-vmk2";"VLAN" = "200"})
$arrPGsToCreate += New-Object -TypeName PSObject -Prop (@{"Name" = "300_nfs";"VLAN" = "300"})
$arrPGsToCreate += New-Object -TypeName PSObject -Prop (@{"Name" = "400_vmotion";"VLAN" = "400"})
$arrPGsToCreate += New-Object -TypeName PSObject -Prop (@{"Name" = "401_ft";"VLAN" = "401"})
# VMKernels
$arrVMKsToCreate = @()
$arrVMKsToCreate += New-Object -TypeName PSObject -Prop (@{"PGName" = "100_san1-vmk1";"IP" = "x.x.x.x";"subnet" = ""})
$arrVMKsToCreate += New-Object -TypeName PSObject -Prop (@{"PGName" = "200_san2-vmk1";"IP" = "x.x.x.x";"subnet" = ""})
$arrVMKsToCreate += New-Object -TypeName PSObject -Prop (@{"PGName" = "300_pan";"IP" = "x.x.x.x";"subnet" = ""})
$arrVMKsToCreate += New-Object -TypeName PSObject -Prop (@{"PGName" = "400_vmotion";"IP" = "x.x.x.x";"subnet" = ""})
$arrVMKsToCreate += New-Object -TypeName PSObject -Prop (@{"PGName" = "401_ft";"IP" = "x.x.x.x";"subnet" = ""})
# iSCSI Targets
$arrIScsiTargetsInfo = @()
$arrIScsiTargetsInfo += New-Object -TypeName PSObject -Prop (@{"Address" = "x.x.x.x";"Type" = "send"})
$arrIScsiTargetsInfo += New-Object -TypeName PSObject -Prop (@{"Address" = "x.x.x.x";"Type" = "send"})
$arrIScsiTargetsInfo += New-Object -TypeName PSObject -Prop (@{"Address" = "x.x.x.x";"Type" = "send"})
$arrIScsiTargetsInfo += New-Object -TypeName PSObject -Prop (@{"Address" = "x.x.x.x";"Type" = "send"})
#NFS Targets
$arrNfsDatastores = @()
$arrNfsDatastores += New-Object -TypeName PSObject -Prop (@{"Name" = "vdr-backups"; "Path" = "/mnt/dataon1/vdrbackups/vdrbackups/"; "Host" = ""})

#==== Do the Work ====
#Get the host password (for SNMP)
$rootPass = Read-Host -Prompt "Enter host root password" -AsSecureString

#Connect to vCenter Server
$VCUserCredentials = Get-Credential
Connect-VIServer -Server $vCenterServer -Protocol "https" -Credential $VCUserCredentials

$vmHost = Get-VMHost -Name $vmHostName
$oCLI = Get-ESXCli -vmhost $vmHost

#Put the host in maintenance mode
Set-VMHost -VMhost $vmHost -State "Maintenance"

#Create the SAN virtual switch
$vs = New-VirtualSwitch -VMHost $vmHost -Name $vSwitchName

#Create the Port Groups
$arrPGsToCreate | % {New-VirtualPortGroup -VirtualSwitch $vs -Name $_.Name -VLanId $_.VLAN}

#Create SAN, vMotion, FT, and NFS vmkernels
$arrVMKsToCreate | % {New-VMHostNetworkAdapter -VMHost $vmHost -PortGroup $_.PGName -VirtualSwitch $vs -IP $_.IP -SubnetMask $_.subnet}

#Enable SSH
$vmHost | Get-VMHostService | where {$_.Key -eq "TSM-SSH"} | Set-VMHostService -Policy "On"
$vmHost | Get-VMHostFirewallException | where {$_.Name -eq "SSH Server"} | Set-VMHostFirewallException -Enabled:$true
$vmHost | Get-VMHostService | where {$_.Key -eq "TSM-SSH"} | Start-VMHostService

#Enable ESXi Service Console
$vmHost | Get-VMHostService | where {$_.Key -eq "TSM"} | Set-VMHostService -Policy "On"
$vmHost | Get-VMHostService | where {$_.Key -eq "TSM"} | Start-VMHostService

#Disable SSH Warnings
Set-VmHostAdvancedConfiguration -vmhost $vmhost -Name UserVars.SuppressShellWarning -Value ( [system.int32] 1 )

#Set NTP Server and Enable
Add-VmHostNtpServer -NtpServer $ntpHostname -VMHost $vmHost
$vmHost | Get-VMHostService | where {$_.Key -eq "ntpd"} | Set-VMHostService -Policy "On"
$vmHost | Get-VMHostFirewallException | where {$_.Name -eq "NTP client"} | Set-VMHostFirewallException -Enabled:$true
$vmHost | Get-VMHostService | where {$_.Key -eq "ntpd"} | Start-VMHostService

# Enable software iSCSI HBA, then give the host a moment to create it
Get-VMHostStorage -VMHost $vmHost | Set-VMHostStorage -SoftwareIScsiEnabled $true
Sleep -s 10

# Add iSCSI Targets
$IScsiHba = Get-VMHostHba -vmhost $vmHost -Type "iscsi"
$arrIScsiTargetsInfo | % {$IScsiHba | New-IScsiHbaTarget -Address $_.Address -type $_.Type}

#Add NFS Datastore
$arrNfsDatastores | % {New-Datastore -Nfs -VMHost $vmHost -Name $_.Name -Path $_.Path -NfsHost $_.Host}

#Install Dell OMSA
Install-VMHostPatch -vmhost $vmHost -HostPath $omsaPath

#Configure SNMP (requires vSphere CLI; vicfg-snmp.pl lives in its bin folder)
#The password was read as a SecureString, so convert it back to plain text first
$bstr = [Runtime.InteropServices.Marshal]::SecureStringToBSTR($rootPass)
$rootPassPlain = [Runtime.InteropServices.Marshal]::PtrToStringAuto($bstr)
$snmpScript = """C:\Program Files (x86)\VMware\VMware vSphere CLI\bin\vicfg-snmp.pl"""
$expression = "perl " + $snmpScript + " --server " + $vmHost.Name + " --username root --password " + $rootPassPlain + " -t " + $snmpTrapReceiver + "@162/" + $snmpTrapCommunity
Invoke-Expression $expression
$expression = "perl " + $snmpScript + " --server " + $vmHost.Name + " --username root --password " + $rootPassPlain + " --enable"
Invoke-Expression $expression
$expression = "perl " + $snmpScript + " --server " + $vmHost.Name + " --username root --password " + $rootPassPlain + " --test"
Invoke-Expression $expression

#warn user of manual steps needed next
$msgs = @()
$msgs += " * Add vmnics to the vSwitches and Port Groups, and then test with vmkping."
$msgs += " * Bind vmk's to software iSCSI HBA."
$msgs += " * Give host's initiator access to LUNs on necessary iSCSI targets."
$msgs += " * Add host to VDS and configure dvUplinks"
$msgs += " * Migrate appropriate vmkernels to the VDS"
$msgs += " * Assign FT to the ft vmkernel"
$msgs += " * Assign mgmt traffic to PAN vmkernel"
$msgs += " * Assign vmotion to vmotion vmkernel"
$msgs | % {write-host -f yellow $_}

$go = $false
While ($go -eq $false)
	{$text = Read-Host "Type 'continue' when the steps are complete."; If($text -eq "continue"){$go = $true}}

# Configure round-robin multipathing policy on all iscsi paths
$oCLI.storage.core.path.list() | Group-Object -Property Device | Where {$_.Name -like "naa*"} | % {$oCLI.storage.nmp.device.set($null, $_.Name, "VMW_PSP_RR")}

#Reboot host
Restart-VMHost -VMHost $vmHost -Confirm:$false

#Wait for the host to come back up before exiting maintenance mode
Do {Sleep -s 30; $vmHost = Get-VMHost -Name $vmHostName} While ($vmHost.ConnectionState -ne "Maintenance")
Set-VMHost -VMHost $vmHost -State "Connected"

# Attach update baselines
# Scan for updates
# Remediate updates

New Dell MD32xx and MD36xx Firmware with VAAI and Dynamic Disk Pools

For those not in the know, Dell recently released a new firmware version for the MD32xx and MD36xx series. The two big features are VAAI support and Dynamic Disk Pools.


VAAI is VMware’s hardware-acceleration API. The MDs support hardware-assisted locking, which is a huge freakin’ deal: locking operations touch only the blocks needed for a VM operation instead of the whole LUN. That means you can create and manage a few big LUNs with a lot of VMs per datastore, instead of a bunch of tiny LUNs.

Here’s a great article on VAAI: Why VAAI?
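If you want to spot-check whether a LUN is actually reporting ATS (the hardware-assisted locking primitive), Get-EsxCli can show you. This is just a sketch, assuming an already-connected PowerCLI session against a 5.x host; the hostname is a placeholder, and the property names follow the usual Get-EsxCli habit of stripping the spaces you see in raw esxcli output:

```powershell
# Sketch: list VAAI primitive support per device ("esx01.example.com" is hypothetical).
# ATSStatus is the hardware-assisted locking primitive.
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.example.com")
$esxcli.storage.core.device.vaai.status.get() |
    Select-Object Device, ATSStatus, CloneStatus, ZeroStatus |
    Format-Table -AutoSize
```

This needs a live vCenter connection, so run it from the same session as the install script above.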

Dynamic Disk Pools

DDP is an interesting concept. It’s kinda like RAID 6, but instead of choosing a specific hot spare, you choose how many physical disk failures you want to tolerate. The system pools all the disks together, and uses all the spindles, but reserves a small amount of space on each disk in the pool to tolerate failures.

The benefits

  • You get to use all your spindles instead of reserving a hot spare.
  • You can create larger disk pools than would be safe with RAID6.
  • You can tolerate n disk failures, where n scales with the amount of disk space you’re willing to reserve.
  • Rebuilds are about 4 times faster.

The downside

  • DDPs are not as fast as RAID6 for fully-sequential writes.
  • It’s new, and that might freak people out.
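The capacity trade-off is easy to reason about with rough numbers. Here’s a back-of-the-napkin sketch; the 8+2 stripe width (roughly 80% data efficiency) and the pool figures are my illustrative assumptions, not published specs for any particular array:

```powershell
# Hypothetical DDP pool: 20 x 2 TB drives, tolerating 2 drive failures.
# Assumes ~80% data efficiency from 8+2 parity stripes.
$diskCount         = 20
$diskTB            = 2
$failuresTolerated = 2   # whole-drive equivalents held in reserve
$usableTB = ($diskCount - $failuresTolerated) * $diskTB * 0.8
"Usable: $usableTB TB"   # 28.8 TB in this example
```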

Here’s a video from Dell on the new feature:

I’m using both features now; they’re sweet.

SCCM Package – vSphere 5 Client

vSphere is pretty awesome, though I hope their prices come down in light of Hyper-V 3.0. In any case, here’s how to package vSphere Client with VDR and Update Manager.



  1. Download the vSphere client install executable from any of your vmware hosts by visiting http://vm-hostname then clicking “download client”. Place the client in C:\temp\viclient.
  2. Download the VMWare Data Recovery ISO from the vSphere 5 Downloads link above, then extract the file “VMwareDataRecoveryPlugin.msi” to C:\temp\viclient.
  3. Run the vSphere Update Manager client installer named “VMware-UMClient.exe” from the updateManager folder of the vCenter Server 5 installation media. Once the installer is running, navigate to the folder %temp% and obtain the file “VMware vSphere Update Manager Client 5.0.msi” from one of the subfolders. Place the msi file in C:\temp\viclient.
  4. Download the Visual J# x64 Redistributable from the above site.
  5. Run the following command to extract the Visual J# install files:
    vjredist64.exe /c /t:C:\temp\viclient\vjredist64
  6. Create a batch file with the following contents in C:\temp\viclient named “Install-viclient.cmd”.
    ECHO Installing vSphere 5 Client, VDR Client, and UpdateMgr Client
    ECHO Do not close this window. It will close when the install is finished.
    REM Main Install
    msiexec /i vjredist64\jsredist.msi /qb ADDEPLOY=1
    VMware-viclient.exe /q /s /w /L1033 /v" /qr /L*v \"%TEMP%\vmvcc.log\""
    msiexec /i "VMware vSphere Update Manager Client 5.0.msi" /qb
    msiexec /i VMwareDataRecoveryPlugin.msi /qb
  7. Assemble the files into a single source folder then create a SCCM package and program. The program’s command line action should be “install-viclient.cmd”. The following is a screen shot of my final source folder. My folder contains Make-Shortcuts_vSphere.ps1, which I use to move the vi client shortcuts around.

Keep on truckin’ space cowboy.

ESXi Errors – Failed write command to write-quiesced partition

I’ve been getting the following emails from all of my ESXi hosts since I upgraded to 4.1 about 9 months ago. I’d get 3-8 emails a day and see large latency spikes on the corresponding datastore whenever one was sent.

Stateless event alarm
Alarm Definition:
([Event alarm expression: Host error] OR [Event alarm expression: Host warning])
Event details:
Issue detected on in Chemistry Datacenter: ScsiDeviceIO: 2352: Failed write command to write-quiesced partition naa.6002219000a17f3d00003dcb4e0ccad3:1
(5:03:26:49.543 cpu1:5362)

I engaged support from both VMware (for ESXi) and Dell (for my MD3000i arrays). I tried Jumbo Frames, Flow Control, different VLAN trunk configurations, etc. After many support calls and sessions, we found the answer on page 40 of the iSCSI SAN Configuration Guide. In any situation where an iSCSI VMkernel can send data down a group of NICs, whether because of a Virtual Distributed Switch or because multiple NICs are assigned to an iSCSI port group, it’s mandatory to lock things down so that each VMkernel port group sends data through only a single active uplink. Essentially, this forces multipathing from the network level up to the protocol level.


My VMKernels were on a VDS, so I had to perform the following operations:

  1. Open vSphere client, then navigate to Inventory -> Networking.
  2. Right click your first SAN\VMKernel Port Group -> Edit Settings.
  3. Click “Teaming and Failover”, and limit your active dvUplinks to only a single uplink. The rest should be placed under ‘unused’.
  4. Repeat this for every SAN\VMKernel port group.
Next, you must “bind” the iSCSI Software Adapter to the VMKernels:
  1. In vSphere client, find the name of your first host’s iSCSI adapter by choosing a host then clicking Configuration -> Storage Adapters. It’s typically vmhba34.
  2. Enable remote tech support mode and SSH to your first ESXi host.
  3. Run the following commands. After the first command, write down any vmk#’s that correspond with your iSCSI VMKernels.
    esxcfg-vmknic -l
    esxcli swiscsi nic list -d vmhba34
  4. If the ‘nic list’ command didn’t show any vmkernels, then you need to bind each iSCSI VMKernel with the following command:
    esxcli swiscsi nic add -n vmk# -d vmhba34
  5. When finished, run the following command to verify the work:
    esxcli swiscsi nic list -d vmhba34
  6. Repeat this for all hosts in your inventory.
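If you have more than a couple of iSCSI VMkernels per host, the binding step is easy to script. This sketch only echoes the esxcli commands so you can review them first; the vmk numbers and vmhba34 are placeholders for your environment (drop the echo to actually run them on the host):

```shell
#!/bin/sh
# Dry run: print the swiscsi bind command for each iSCSI vmkernel.
# vmk1/vmk2 and vmhba34 are placeholders; adjust to match esxcfg-vmknic -l output.
for vmk in vmk1 vmk2; do
  echo esxcli swiscsi nic add -n "$vmk" -d vmhba34
done
```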

Other Notes

After running these commands, it’s recommended that any unused dynamic and static iSCSI targets be removed. The add/remove delay is also shorter once the iSCSI bindings are in place. For more info, see page 40 of the iSCSI SAN Configuration Guide 4.1.
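The target cleanup can also be done from PowerCLI instead of the console. A sketch, assuming an already-connected session; the hostname and target address are placeholders:

```powershell
# Sketch: remove a stale iSCSI target from the software HBA.
# "esx01.example.com" and "10.0.0.99" are hypothetical placeholders.
$hba = Get-VMHostHba -VMHost (Get-VMHost -Name "esx01.example.com") -Type iScsi
Get-IScsiHbaTarget -IScsiHba $hba |
    Where-Object {$_.Address -eq "10.0.0.99"} |
    Remove-IScsiHbaTarget -Confirm:$false
```

Run it once per host, the same way the manual esxcli steps repeat per host.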