Hi Guys,
I have a migration scenario which I'd like to run past the knowledgeable people on this forum, to see if you can spot any flaws in conception or execution.
I have a 2-node management cluster that currently hosts a vCenter (which manages some DMZ clusters) and the VSMs for the Nexus 1000v used in that vCenter. The VSMs currently use L2 connectivity to talk to their VEMs.
For a variety of reasons, this vCenter has to move into a secure network in a separate vDC. The idea is to build a new vCenter and migrate the VSMs and hosts across.
My overview steps are as follows:
1. Change the VSM communication to L3 (mgmt0 to vmk0), so it can be routed and firewalled as required (this step will actually be done last, after all the moves, once everything is confirmed working).
2. Disconnect the VSMs from vCenter
3. Apply the VEM extension to the new vCenter
4. Move the VSM VMs to the new vCenter, and assign new mgmt0 IP addresses.
5. Reconnect the VSM to the new vCenter
6. Move the hosts to the new vCenter.
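For reference, the VSM-side commands I have in mind for steps 1, 2 and 5 look roughly like this (a sketch from memory of the 1000v CLI, so please correct me if the syntax is off; the IP and datacenter name are placeholders):

```
! Step 1 - switch VSM-to-VEM control traffic to L3 over mgmt0 (run on the VSM)
svs-domain
  no control vlan
  no packet vlan
  svs mode L3 interface mgmt0

! Steps 2 and 5 - break the link to the old vCenter, then re-point and reconnect
svs connection vcenter
  no connect
  remote ip address 10.0.0.50        ! new vCenter IP (placeholder)
  vmware dvs datacenter-name NewDC   ! new datacenter name (placeholder)
  connect
```

My understanding is the L3 change also needs the vmk0 port-profile on the VEM side marked for L3 control (capability l3control), which is part of why I'm leaving that step until everything else is settled.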
Assuming the VSMs only lose contact for a short period (during the move of the VM to the new cluster, since it has to move storage and networks) and can contact the VEMs when they come back up, is this workable? Is there some host-related problem with moving the VSM that I'm unaware of?
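For what it's worth, these are the checks I'd plan to run after each move to confirm the VEMs have re-registered (again, from memory, so treat as a sketch):

```
! On the VSM - each attached VEM should appear as a module in "ok" state,
! and the vCenter connection should show as connected
show module
show svs connections

! On an ESXi host - confirm the VEM agent is running and sees the VSM
vem status
vemcmd show card
```

If anyone knows of a better way to verify VSM/VEM connectivity during the window, I'm all ears.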
I will be running this past VMware support as well, but I thought this seemed a good place to start :-)
Cheers,
Glynn.