These are my notes on the white paper: VMware vSphere 5.1 vMotion Architecture, Performance and Best Practices
As vMotion in 5.1 differs slightly from 5.0 due to enhancements, I wanted to go over everything to get a good handle on the new features
There are some diagrams in the pdf that would be good to check out
My own comments are italicized. These notes are for my own understanding of how vMotion in 5.1 works vs. standard vMotion in 5.0 and Storage vMotion (svMotion).
Migrates live virtual machines, including their memory and storage, between vSphere hosts without any requirement for shared storage.
During storage migration, vSphere 5.1 vMotion maintains the same performance as Storage vMotion, even when using the network to migrate.
Before 5.1:
the live-migration solution was limited to hosts that shared a common set of datastores. In addition, migrating an entire virtual machine
required two separate operations, for instance vMotion followed by Storage vMotion, or vice versa.
With 5.1:
live migration of an entire virtual machine across vSphere hosts is possible without any requirement for shared storage.
vMotion Architecture
The execution state primarily consists of the following components:
The virtual machine’s virtual disks
The virtual machine’s physical memory
The virtual device state, including the state of the CPU, network and disk adapters, SVGA, and so on
External network connections
How Storage vMotion migrates storage
uses a synchronous mirroring approach to migrate a virtual disk from one datastore to another datastore on the same physical host.
Uses two concurrent processes.
First, a bulk copy (also known as a clone) process copies the virtual disk to the destination datastore.
Concurrently, an I/O mirroring process transports any additional changes that occur to the virtual disk because of the guest's ongoing modifications.
I/O mirroring:
The I/O mirroring process accomplishes this by mirroring the ongoing modifications to the virtual disk on both the source and the destination datastores.
Storage vMotion mirrors I/O only to the disk region that has already been copied by the bulk copy process.
Guest writes to a disk region that the bulk copy process has not yet copied are not mirrored, because changes to this disk region will be copied by the bulk copy process eventually.
How are guest writes copied then?
A synchronization mechanism is implemented that prevents the guest write I/Os from conflicting with the bulk copy process read I/Os when the guest write I/Os are issued to the disk region currently being copied by the bulk copy process.
No synchronization is needed for guest read I/Os, which are issued only to the source virtual disk.
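To make the mechanics concrete, here is a minimal Python sketch of the idea (my own illustration, not VMware code; the src/dst disk objects and their read/write methods are hypothetical): writes to already-copied blocks are mirrored to both datastores, writes ahead of the bulk copy go only to the source, and a lock keeps guest writes from racing with the chunk currently being cloned.

```python
# Minimal sketch of Storage vMotion-style bulk copy + I/O mirroring.
# Illustrative only -- src/dst are hypothetical disk objects with
# read(start, end) and write(offset, data) methods.
import threading

class MirroredMigration:
    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        self.copied_up_to = 0                 # blocks [0, copied_up_to) are already on the destination
        self.region_lock = threading.Lock()   # serializes guest writes vs. the chunk being cloned

    def bulk_copy(self, src, dst, chunk=64):
        """Single-pass clone of the virtual disk, chunk by chunk."""
        while self.copied_up_to < self.num_blocks:
            start = self.copied_up_to
            end = min(start + chunk, self.num_blocks)
            with self.region_lock:            # guest writes to this region wait here
                dst.write(start, src.read(start, end))
                self.copied_up_to = end

    def guest_write(self, src, dst, block, data):
        """Guest write: always applied to the source; mirrored only if already copied."""
        with self.region_lock:                # don't conflict with the in-flight bulk-copy read
            src.write(block, data)
            if block < self.copied_up_to:
                dst.write(block, data)        # mirror to the destination (already-copied region)
            # else: not mirrored -- the bulk copy will carry this change over later

    def guest_read(self, src, block):
        """Guest reads always go to the source disk; no synchronization needed."""
        return src.read(block, block + 1)
```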
vMotion 5.1 storage migration
vSphere 5.1 vMotion uses a network transport for migrating the data.
In contrast to Storage vMotion, vSphere 5.1 vMotion cannot rely on synchronous storage mirroring because the source and destination datastores might be separated by longer physical distances.
It relies on an asynchronous transport mechanism for both the bulk copy data and the I/O mirroring data.
Does vMotion 5.1 ever use synchronous mirroring?
vSphere 5.1 vMotion switches from asynchronous mode to synchronous mirror mode whenever the guest write I/O rate is faster than the network transfer rate (due to network limitations) or I/O throughput at the destination datastore (due to destination limitations).
vMotion 5.1 typically transfers the disk content over the vMotion network. However, it optimizes the disk copy by leveraging the mechanisms of Storage vMotion whenever possible.
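A toy decision function for that mode switch (my sketch; the rates and names are made up): if the guest dirties disk blocks faster than the network or the destination datastore can drain them, fall back to synchronous mirroring so writes are throttled to the drain rate.

```python
# Toy decision for the mirror mode switch (illustrative; rates and names are made up).
def choose_mirror_mode(guest_write_mbps, network_mbps, dest_datastore_mbps):
    """Go synchronous when guest writes outpace what we can ship and apply remotely."""
    drain_rate = min(network_mbps, dest_datastore_mbps)
    return "sync" if guest_write_mbps > drain_rate else "async"

print(choose_mirror_mode(200, 120, 400))   # 'sync'  -- network is the bottleneck
print(choose_mirror_mode(80, 120, 400))    # 'async' -- writes can be drained asynchronously
```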
vMotion 5.1 when the host has access to both the source and destination datastores
if the source host has access to the destination datastore, vSphere 5.1 vMotion will use the source host’s storage interface to transfer the disk content, thus reducing vMotion network utilization and host CPU utilization.
if both the source and destination datastores are on the same array that is capable of VAAI, vMotion 5.1 will offload the task of copying the disk content to the array using VAAI.
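So the disk-copy transport is picked roughly in this order of preference (illustrative sketch, the function and names are mine):

```python
# Illustrative preference order for the disk-copy transport (names are mine).
def pick_disk_copy_path(source_sees_dest_datastore, same_array_with_vaai):
    if same_array_with_vaai:
        return "vaai_offload"                # array copies the blocks itself
    if source_sees_dest_datastore:
        return "source_storage_interface"    # spares the vMotion network and host CPU
    return "vmotion_network"                 # default: stream disk content over the network

print(pick_disk_copy_path(True, True))       # vaai_offload
print(pick_disk_copy_path(True, False))      # source_storage_interface
print(pick_disk_copy_path(False, False))     # vmotion_network
```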
Migration of Virtual Machine’s Memory
[Phase 1] Guest trace phase
guest memory is staged for migration
Traces are placed on the guest memory pages to track any modifications by the guest during the migration.
[Phase 2] Pre-copy phase
memory contents of the virtual machine are copied from the source ESXi host to the destination ESXi host in an iterative process.
Because the virtual machine continues to run and actively modify its memory state on the source host during this phase, each iteration copies only the memory pages that were modified during the previous iteration. [except for the first iteration, probably]
[Phase 3] Switch-over phase
the virtual machine is momentarily quiesced on the source ESXi host,
the last set of memory changes are copied to the target ESXi host,
then the virtual machine is resumed on the target ESXi host.
vSphere 5.1 vMotion begins the memory copy process only after the bulk copy process completes the copy of the disk contents. The memory copy process
runs concurrently with the I/O mirroring process (which mirrors the changes made to the disk after the clone).
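The iterative pre-copy can be sketched like this (my own illustration of the loop, not ESXi code; the page-tracking callbacks are hypothetical):

```python
# Sketch of the iterative pre-copy loop (my illustration; the callbacks are hypothetical).
def precopy_memory(all_pages, get_dirty_pages, send_pages, switchover_threshold=1024):
    """Re-send dirtied pages each iteration until the remainder fits in the switch-over."""
    to_send = set(all_pages)                 # first iteration: the whole memory image
    while True:
        send_pages(to_send)                  # ship this iteration's pages to the destination
        to_send = get_dirty_pages()          # pages the guest modified while we were sending
        if len(to_send) <= switchover_threshold:
            return to_send                   # small enough: quiesce, copy these, resume on target
```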
Stun During Page-Send (SDPS)
In most cases, each pre-copy iteration (memcopy) should take less time to complete than the previous iteration.
[Sometimes the VM] modifies memory contents faster than they can be transferred
SDPS will kick in and ensure the memory modification rate is slower than the pre-copy transfer rate.
This technique avoids any possible vMotion failures.
Upon activation, SDPS injects microsecond delays into the virtual machine execution and throttles its page dirty rate to a preferred rate, guaranteeing pre-copy convergence
[in effect it retards the VM, slowing it down]
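A toy model of the SDPS idea (assumed names and numbers, not the real mechanism):

```python
# Toy model of SDPS (assumed names/numbers, not the real mechanism).
import time

def sdps_throttle(dirty_rate, transfer_rate, delay_us=50):
    """Inject microsecond-scale delays until the guest dirties pages slower than we can send."""
    while dirty_rate > transfer_rate:
        time.sleep(delay_us / 1_000_000)     # briefly stall guest vCPU execution
        dirty_rate *= 0.9                    # stand-in for the reduced dirty rate after the stall
    return dirty_rate                        # now below the pre-copy transfer rate -> convergence
```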
Migration of External Network Connections
[No interruptions] as long as both the source and destination hosts are on the same subnet.
After the virtual machine is migrated, the destination ESXi host sends out a RARP packet to the physical network switch thereby ensuring that the switch updates its tables with the new switch port location of the migrated virtual machine.
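For reference, this is roughly what such a RARP announcement looks like on the wire; the sketch below (my own, illustrative only, ESXi does this itself) just assembles the frame bytes so it's clear why the switch relearns the VM's MAC on the new port:

```python
# Illustrative only: assembling a RARP announcement frame (ESXi does this itself).
import struct

def build_rarp_frame(vm_mac: bytes) -> bytes:
    broadcast = b"\xff" * 6
    ethertype_rarp = struct.pack("!H", 0x8035)
    # hw type=Ethernet(1), proto=IPv4(0x0800), hw len=6, proto len=4, opcode=3 (RARP request)
    header = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 3)
    body = vm_mac + b"\x00" * 4 + vm_mac + b"\x00" * 4   # sender/target MAC, IPs unknown
    return broadcast + vm_mac + ethertype_rarp + header + body

frame = build_rarp_frame(bytes.fromhex("005056aabbcc"))  # example VMware-style MAC
print(frame.hex())
# The source MAC in the Ethernet header is the VM's MAC, so the physical switch
# updates its MAC table to point at the destination host's uplink port.
```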
Using Multiple NICs for vSphere 5.1 vMotion
Ensure that all the vMotion vmknics on the source host can freely communicate with all the vMotion vmknics on the destination host. It is recommended to use the same IP subnet for all the vMotion vmknics.
vSphere 5.1 vMotion over Metro Area Networks
vSphere 5.1 vMotion is latency aware and provides support on high-latency networks with round-trip latencies of up to 10 milliseconds.
update:
***you have to have supported hardware from Cisco or F5
Maximum latency of 5 milliseconds (ms) RTT (round trip time) between hosts participating in vMotion, or 10ms RTT between hosts with Enterprise Plus (the Metro vMotion feature, with certified hardware from Cisco or F5)
http://communities.vmware.com/message/2152545
5.1 vMotion Performance in Mixed VMFS Environments
5.1 vMotion has performance optimizations that depend on a 1MB file system block size
VMFS 5 with a 1MB block size vs VMFS 3 with a 2MB block size: the 1MB-block VMFS 5 datastore cut migration time by 35%
Other tips
Switch to VMFS-5 with a 1MB block size. VMFS file systems that don't use 1MB block sizes cannot take advantage of these performance optimizations.
provision at least a 1Gbps management network.
This is advised because vSphere 5.1 vMotion uses the Network File Copy (NFC) service to transmit the virtual machine's base disk and inactive snapshot points to the destination datastore. Because NFC traffic traverses the management network, the performance of your management network will determine the speed at which such snapshot content can be moved during migration.
If there are no snapshots, and if the source host has access to the destination datastore, vSphere 5.1 vMotion will preferentially use the source host's storage interface to make the file copies, rather than using the management network.
When using the multiple–network adaptor feature:
configure all the vMotion vmnics under one vSwitch
create one vMotion vmknic for each vmnic.
In the vmknic properties, configure each vmknic to leverage a different vmnic as its active vmnic, with the rest marked as standby. This way, if any of the vMotion vmnics become disconnected or fail, vSphere 5.1 vMotion will transparently switch over to one of the standby vmnics. However, when all your vmnics are functional, each vmknic will route traffic over its assigned, dedicated vmnic.
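A small sanity-check sketch of that layout (hypothetical data, not a vSphere API call): each vmknic owns a different active vmnic, with the other vmnic(s) kept as standby.

```python
# Hypothetical layout check (not a vSphere API call) for the multi-NIC setup above.
vmotion_teaming = {
    "vmk1": {"active": "vmnic2", "standby": ["vmnic3"]},
    "vmk2": {"active": "vmnic3", "standby": ["vmnic2"]},
}

def validate_teaming(teaming):
    actives = [cfg["active"] for cfg in teaming.values()]
    assert len(actives) == len(set(actives)), "each vmknic should own a different active vmnic"
    for name, cfg in teaming.items():
        assert cfg["active"] not in cfg["standby"], f"{name}: active uplink repeated as standby"
    return "ok"

print(validate_teaming(vmotion_teaming))    # 'ok' -- each vmknic has its own dedicated active uplink
```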