Nutanix Move — Manual VM Preparation

Jon Dustin
Oct 30, 2020

Nutanix Move is a great utility for migrating VMs, and I have used it for thousands of VM migrations over the past two years. Although Move can perform preparation steps within the VM automatically, this has some onerous requirements (credentials, connections from Move to each VM). I decided to reverse-engineer these steps to ensure successful migrations in our environment.

A few notes about Move:

  • latest version finally works correctly for scheduling data seeding — this is quite useful for limiting the amount of time VMs spend in the snapshot cycle
  • I place the Move VM on the same subnet as vCenter, ESXi hosts, and the Nutanix AHV hosts/CVMs. Technically you could open firewall ports, but being on the same subnet should give better performance and eliminates firewall issues
  • Make sure to actually read the Move documentation for new releases. Somehow I missed the CPU/RAM requirements and starved the VM of resources, causing all sorts of trouble with migrations. I now use 8 vCPU and 8 GB RAM

The environment at $job is quite restrictive, with firewalls between everything and administrative credentials locked down. Technically I could have obtained a security exception for Move to log on to VMs, but I did not like the idea.

Hence my reverse engineering:

  • set all VM actions to manual
  • tell Move to retain MAC addresses (this helps Linux guests, whose network configuration is often tied to the MAC address)
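
Retaining MAC addresses matters because RHEL-style interface configs often pin the MAC with an HWADDR line. A quick way to spot those pins before migrating — this is a sketch, and the `check_hwaddr` helper name is just for illustration:

```shell
# List interface configs that pin a MAC address via HWADDR.
# If Move retains MAC addresses these files keep working unchanged;
# otherwise each HWADDR line must be updated or removed after migration.
check_hwaddr() {
    dir="$1"    # e.g. /etc/sysconfig/network-scripts
    found=0
    for f in "$dir"/ifcfg-*; do
        [ -f "$f" ] || continue
        if grep -q '^HWADDR=' "$f"; then
            echo "MAC pinned in: $f"
            found=1
        fi
    done
    if [ "$found" -eq 0 ]; then
        echo "no pinned MACs found"
    fi
}

check_hwaddr /etc/sysconfig/network-scripts
```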

The following snippet for Linux systems will tell you whether the VirtIO drivers have been installed into the kernel ramdisk, and the last line will install them if they are not found.

# RHEL6/RHEL7 - make sure you have virtio drivers in initrd
# our RHEL6 normally has these included, RHEL7 occasionally needs them added
if lsinitrd /boot/initramfs-$(uname -r).img | grep -q virtio; then echo READY FOR MIGRATION; else echo FAILURE VIRTIO DRIVERS NOT IN RAMDISK TRY AGAIN; fi
# success on command prints the following:
READY FOR MIGRATION
# failure:
FAILURE VIRTIO DRIVERS NOT IN RAMDISK TRY AGAIN
# add VIRTIO drivers to initramfs
dracut --add-drivers "virtio_pci virtio_blk virtio_scsi virtio_net" -f -v /boot/initramfs-$(uname -r).img $(uname -r)
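
After rebuilding the initramfs, it is worth confirming each driver individually, since a single blanket `grep virtio` can be satisfied by one module while others are still missing. A minimal sketch — the driver list mirrors the dracut command above, and `check_drivers` is a hypothetical helper:

```shell
# Check an initramfs listing (as produced by lsinitrd) for each
# required VirtIO module individually, not just any virtio match.
check_drivers() {
    listing="$1"
    missing=0
    for drv in virtio_pci virtio_blk virtio_scsi virtio_net; do
        if echo "$listing" | grep -q "$drv"; then
            echo "OK      $drv"
        else
            echo "MISSING $drv"
            missing=1
        fi
    done
    return "$missing"
}

check_drivers "$(lsinitrd /boot/initramfs-$(uname -r).img)" \
    && echo READY FOR MIGRATION \
    || echo "FAILURE: rerun dracut with --add-drivers"
```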

Windows VMs require a bit more help to start properly after migrating to AHV. All of these steps are non-disruptive and can be performed at any time.

Install the latest version of the Nutanix VirtIO drivers, or neither the disks nor the network card will be found

Update the local administrator account to a known password, so that if you have any problems you can log on and troubleshoot

The following PowerShell script will perform these tasks:

  • create a scheduled task that runs at next reboot and sets the same IP address on the new network card (after an AHV migration the NIC appears as new hardware)
  • set disk policy to online all disks
  • add the Windows KMS activation key (needed again because the underlying hardware changes)
  • scheduled task will delete itself after setting IP address
  • uses Event Log to save status messages
  • you can copy this script to the clipboard and paste it into a PowerShell window on the system; it will automatically do everything required
# DNS servers to use
$dns1 = '192.168.1.10'
$dns2 = '192.168.1.11'

# enable PowerShell scripts
Set-ExecutionPolicy RemoteSigned -force

# register event log messages
New-EventLog -LogName Application -Source 'move-messages' -ErrorAction SilentlyContinue

# automatically bring all disks online after migration
echo "san policy=OnlineAll" | diskpart

# change Windows activation key - for when hardware changes on AHV
# remove activation key
slmgr -upk
$osversion = (Get-WmiObject -class Win32_OperatingSystem).Caption
if ( $osversion -match 'server 2008 r2 standard' ) {
slmgr -ipk FILLINYOURKEY
} elseif ( $osversion -match 'server 2008 r2 enterprise' ) {
slmgr -ipk FILLINYOURKEY
} elseif ( $osversion -match 'server 2012 standard' ) {
slmgr -ipk FILLINYOURKEY
} elseif ( $osversion -match 'server 2012 r2' ) {
slmgr -ipk FILLINYOURKEY
} elseif ( $osversion -match 'server 2016 standard' ) {
slmgr -ipk FILLINYOURKEY
} elseif ( $osversion -match 'server 2019 standard' ) {
slmgr -ipk FILLINYOURKEY
}
# and register again - now that we have set key
slmgr -ato
# find primary network card
# IF you have multiple NICs this routine may be confused
$nicname = (get-wmiobject win32_networkadapter -filter "netconnectionstatus = 2" | select -expand netconnectionid)
$lines = netsh interface ip show config $nicname

foreach ( $line in $lines ) {
if ( $line -match 'IP address:\s+(\d.+?)$' ) { $ip = $matches[1] }
if ( $line -match '\(mask (.+?)\)' ) { $mask = $matches[1] }
if ( $line -match 'Default Gateway:\s+(\d.+?)$' ) { $gw = $matches[1] }
}
# now create a script to be run at next boot
$outtext = @"
# pause to let NIC be detected
sleep 120
# get current network adapter name
`$nicname = (get-wmiobject win32_networkadapter -filter "netconnectionstatus = 2" | select -expand netconnectionid)
# set IP parameters on new nic
netsh interface ip set address `$nicname static $ip $mask $gw 1
netsh interface ipv4 add dns `$nicname address=$dns1 index=1
netsh interface ipv4 add dns `$nicname address=$dns2 index=2
Write-EventLog -Message "FixIp: on reboot with nic='`$nicname' ip=$ip mask=$mask gw=$gw" -LogName Application -Source move-messages -EventId 1
sleep 5
# delete our scheduled task
`$cmdout = schtasks.exe /delete /f /tn FixIp 2>&1
Write-EventLog -Message "FixIp: delete task output: `$cmdout" -LogName Application -Source move-messages -EventId 1
"@
# script has ended, save to destination file
$outfile = 'c:\users\public\documents\fixip.ps1'
$outtext | out-file $outfile
# and create scheduled task to run at next reboot
schtasks.exe /create /f /tn FixIp /ru SYSTEM /sc ONSTART /tr "powershell.exe -file $outfile"
#

We have been using this routine for many months and have successfully migrated thousands of systems from VMware to AHV. I hope this might help your environment, good luck!


Jon Dustin

Manager of Infrastructure Engineering — boating enthusiast — travel fan