ESXi – How to list OS discrepancies

As VMs get upgraded to newer OSes, the guest OS configured for the VM on the ESXi host might no longer match the OS the VM is actually running (e.g. a VM got upgraded from Windows 7 to 10, but ESXi still thinks it is running 7).

Here is a one-liner that shows what is configured and what is actually running:
Get-View -ViewType "VirtualMachine" -Property @("Name", "Config.GuestFullName", "Guest.GuestFullName") | Where-Object {($_.Config.GuestFullName -ne $_.Guest.GuestFullName) -and ($_.Guest.GuestFullName -ne $null)} | Select-Object -Property Name, @{N="Configured OS";E={$_.Config.GuestFullName}}, @{N="Running OS";E={$_.Guest.GuestFullName}} | Format-Table -AutoSize

The output looks something like this:

Name                                  Configured OS                       Running OS
----                                  -------------                       ----------
win2K                                 Microsoft Windows 2000 Server       Microsoft Windows 2000 Professional
RH3                                   Red Hat Enterprise Linux 3 (32-bit) Red Hat Enterprise Linux 3 (64-bit)
suse51                                SUSE Linux Enterprise 11 (64-bit)   SUSE Linux Enterprise 12 (64-bit)
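If you want to bring the two back in line, the configured guest OS can be changed with Set-VM while the VM is powered off. A minimal sketch, using the win2K VM from the output above (the GuestId value is an example – look up the right identifier for your OS):

# Align the configured guest OS with what is actually running.
# The VM must be powered off; "win2000ProGuest" is an example GuestId.
Get-VM "win2K" | Set-VM -GuestId "win2000ProGuest" -Confirm:$false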

ESXi – How to power on/off a VM from SSH

Working from Linux I miss the luxury of having the vSphere Client on hand to power VMs on and off as I need them. The simplest solution is to do it via SSH.

Of course, for this to work SSH needs to be enabled on the ESXi host first.

First list all VMs:

[root@esxi:~] vim-cmd vmsvc/getallvms
Vmid      Name            File                  Guest OS       Version                  Annotation                   
13   CentOS2     [SSD] CentOS2/CentOS.vmx      centos64Guest      vmx-11 Linked-Clone

Then verify that the VM is powered down:

[root@esxi:~] vim-cmd vmsvc/power.getstate 13
Retrieved runtime info
Powered off

Now power it on and check the status:

[root@esxi:~] vim-cmd vmsvc/power.on 13
Powering on VM:
[root@esxi:~] vim-cmd vmsvc/power.getstate 13
Retrieved runtime info
Powered on

Next, to find out the IP address of the VM we have just powered on, run:

[root@esxi:~] vim-cmd vmsvc/get.guest 13 | grep ipAddress
   ipAddress = "10.10.1.109", 
         ipAddress = (string) [
            ipAddress = (vim.net.IpConfigInfo.IpAddress) [
                  ipAddress = "10.10.1.109", 
                  ipAddress = "fe80::20c:29ff:fec8:db22", 
            ipAddress = (string) [
                     ipAddress = "10.10.1.254", 
                     ipAddress = , 
                     ipAddress = , 
                     ipAddress = "fe80::221:55ff:fefb:da4", 
                     ipAddress = , 
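If you only need the first address (usually the primary IPv4), trim the output down – assuming the busybox grep and head that ship with ESXi:

[root@esxi:~] vim-cmd vmsvc/get.guest 13 | grep ipAddress | head -n 1
   ipAddress = "10.10.1.109", 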

Now we can RDP or SSH into that box, or power it off by running:

[root@esxi:~] vim-cmd vmsvc/power.off 13
Powering off VM: 
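If VMware Tools is running in the guest, a graceful guest shutdown is usually kinder than a hard power-off:

[root@esxi:~] vim-cmd vmsvc/power.shutdown 13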

Intel NUC NUC6i3SYH

For some time now I had been looking for a replacement for my UDOO home server. After long research I decided to go for the Intel NUC i3 version. The Intel NUC Kit NUC6i3SYH is built on Intel's newest architecture, the 6th generation Intel® Core™ i3-6100U processor.

It has truly impressive features such as support for up to 32GB of DDR4 RAM, an M.2 SSD slot and a SATA slot – see the Intel website for more info.

Shopping List

NUC6i3SYH
Samsung 250GB 850 EVO M.2 SSD
8GB USB Flash Drive
2x 16GB of RAM – CT16G4SFD8213

The NUC itself is not supported by VMware and is not listed on the HCL. However, some essential components are listed, so when installing the latest ESXi 6.0 with patch ESXi600-201601001 (Build 3380124), released in January 2016, you will not have any issues.

Linked Clones on ESXi 5.5

1. Run sysprep and reseal the VM for cloning. Shut it down when finished and do not power it back on.

2. Install PowerCLI

3. Get the script: http://poshcode.org/1549 :

param (
 [parameter(Mandatory=$true)][string]$SourceName,
 [parameter(Mandatory=$true)][string]$CloneName
)
$vm = Get-VM $SourceName
# Create new snapshot for clone
$cloneSnap = $vm | New-Snapshot -Name "Clone Snapshot"
# Get managed object view
$vmView = $vm | Get-View
# Get folder managed object reference
$cloneFolder = $vmView.parent
# Build clone specification
$cloneSpec = new-object Vmware.Vim.VirtualMachineCloneSpec
$cloneSpec.Snapshot = $vmView.Snapshot.CurrentSnapshot
# Make linked disk specification
$cloneSpec.Location = new-object Vmware.Vim.VirtualMachineRelocateSpec
$cloneSpec.Location.DiskMoveType = [Vmware.Vim.VirtualMachineRelocateDiskMoveOptions]::createNewChildDiskBacking
# Create clone
$vmView.CloneVM( $cloneFolder, $cloneName, $cloneSpec )
# Write newly created VM to stdout as confirmation
Get-VM $cloneName

I have it saved as Create-LinkedClone.ps1.

4. Start PowerCLI, connect to your vCenter server and run the script:
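A minimal sketch of that session – the server name and VM names below are placeholders:

Connect-VIServer -Server vcenter.local
.\Create-LinkedClone.ps1 -SourceName "Win7-Gold" -CloneName "Win7-Clone1"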

You can also run it in a loop to create several clones at once – a sketch, with hypothetical names:
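# Hypothetical: stamp out three linked clones from the same sysprepped source
1..3 | ForEach-Object {
    .\Create-LinkedClone.ps1 -SourceName "Win7-Gold" -CloneName "Win7-Clone$_"
}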

The newly created clone now shows up in my vCenter server next to the source VM.

Now let’s start up the clone.

Sysprep will take us through the setup.

Creating an iSCSI multipath IO network in vSphere v5 with a ReadyNAS as the storage

The objective is to set up an MPIO iSCSI connection between a VM host running ESXi v5 and a ReadyNAS 3200 storage appliance.

ReadyNAS NIC Setup

Make sure teaming is disabled on all interfaces and set the IP addresses.

Next step – set up the iSCSI targets:

1. Open your ReadyNAS Frontview.
2. Select Volume | Volume Settings | Volume C | iSCSI.
3. Enable iSCSI support and click Apply.
4. Select Create iSCSI Target.
5. Enter the target name and capacity. I believe 2000 GB is the maximum for the ReadyNAS. Select Create.
6. OK through the confirmation windows. We will set access control once we get the iSCSI initiator configured on the ESXi host.

Switch setup

None required.

VM host setup

Verify your NICs in vSphere:
Select the host | Configuration | Network Adapters

Create a vSwitch for the NICs.

We will be associating each vmnic with a VMkernel port:

Select your ESXi host | Configuration | Networking and choose Add Networking:

Select your NICs (vmnic2 and vmnic3 in my case) and click Next. Then enter a name in the Network Label box (vSwitch2), click Next and then Finish. Now go into the Properties of the vSwitch:

Now we need to assign one NIC to each of the VMkernel ports. Select the first VMkernel port and click Edit:
Select NIC Teaming and check Override switch failover order. We must have only one vmnic active.
Do this for the remaining vmnic(s).
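For reference, the same vSwitch, port group and VMkernel setup can be scripted from the ESXi shell. This is only a sketch – vSwitch2, the vmnic/vmk numbers and the 10.10.2.x addresses are assumptions, so adjust them to your environment:

esxcli network vswitch standard add -v vSwitch2
esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic2
esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic3
esxcli network vswitch standard portgroup add -v vSwitch2 -p iSCSI1
esxcli network vswitch standard portgroup add -v vSwitch2 -p iSCSI2
esxcli network ip interface add -i vmk1 -p iSCSI1
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.10.2.11 -N 255.255.255.0
esxcli network ip interface add -i vmk2 -p iSCSI2
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.10.2.12 -N 255.255.255.0
# pin one active vmnic per port group (the failover override described above)
esxcli network vswitch standard portgroup policy failover set -p iSCSI1 -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p iSCSI2 -a vmnic3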

Testing

SSH into the ESXi host and attempt to ping all the ReadyNAS IP addresses using the vmkping command.
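For example, assuming the ReadyNAS interfaces sit at 10.10.2.1 and 10.10.2.2 (hypothetical addresses) and the VMkernel ports created above:

# -I selects which VMkernel interface the ping is sourced from
vmkping -I vmk1 10.10.2.1
vmkping -I vmk2 10.10.2.2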

ESXi 5 iSCSI initiator

By default ESXi 5 does not have the iSCSI Software Adapter loaded.
Select the ESXi host | Configuration | Storage Adapters | Add

Now select the iSCSI storage adapter and open its Properties.
Copy the initiator name (IQN) to your clipboard so we can paste it into the access control list on the ReadyNAS LUN.
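If you prefer the shell, the software initiator can also be enabled and its IQN read over SSH – a sketch; the vmhba number will differ on your host:

esxcli iscsi software set --enabled=true
esxcli iscsi adapter list
# shows the adapter details, including the initiator name (IQN)
esxcli iscsi adapter get -A vmhba33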

Open your ReadyNAS Frontview and select Volumes | Volume Settings | Volume C | iSCSI.
Select your target and click the configure “gear” for the LUN, then paste in the initiator name.

Now back to the ESXi host.
In the iSCSI initiator properties select Network Configuration and add both VMkernel port groups, iSCSI1 and iSCSI2.

Select Dynamic Discovery and add your target. Only add one IP address for your ReadyNAS.
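The equivalent shell commands, again with hypothetical adapter and address values:

# bind both iSCSI VMkernel ports to the software adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
# add the ReadyNAS as a dynamic (send targets) discovery address
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.10.2.1:3260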

Select the Static Discovery tab and both paths should be shown.

Go to Configuration | Storage and you should see your LUN.

Right click on the LUN and choose properties.

Select Manage Paths.

By default ESXi is set to Most Recently Used (VMware), so although we have two NICs only one will be used. We need to change the path selection policy to Round Robin.

Select Round Robin from the drop-down and press Change. Now you will see that both adapters are set to Active (I/O). Press Close.
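The same policy change can also be made from the SSH console – naa.devicename below is a placeholder for your device:

esxcli storage nmp device set -d naa.devicename -P VMW_PSP_RR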

Final tweaking

By default, VMware sends 1000 I/Os down each path before switching to the next path. By lowering that default to 1 I/O before switching paths we can sometimes achieve much greater throughput, because we utilize our links more effectively.

Enable the SSH console on your ESXi 5.0 host.

The command for ESXi 5 is:
esxcli storage nmp psp roundrobin deviceconfig set -d naa.devicename --iops 1 --type iops

and to get the device name you could use:

esxcli storage nmp device list | grep naa.600

To verify, enter the following command:
esxcli storage nmp device list 

You should see the following output showing IOPS now equal to one!
t10.F405E46494C45425F4D614162357D2354745A4D295162773
Device Display Name: OPNFILER iSCSI Disk (t10.F405E46494C45425F4D614162357D2354745A4D295162773)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=iops,iops=1,bytes=10485760,useANO=0;lastPathIndex=2: NumIOsPending=0,numBytesPending=0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba33:C0:T3:L4, vmhba33:C1:T3:L4, vmhba33:C2:T3:L4, vmhba33:C3:T3:L4