Angels Technology


Wednesday, November 28, 2012

VMware DRS anti-affinity rules won't let you enter maintenance mode for an ESXi host

Posted on 2:36 PM by Unknown

    You have a DRS rule that specifies that two VMs need to be kept apart.
    In this case: 250-FT and 250sql3





    For larger clusters with multiple hosts it may not make a difference, but for a small cluster of two hosts it can keep you from upgrading, rebooting, etc.

    You can choose Yes and ignore the first popup message.


    But the VM won't move over via DRS:
    This operation would violate a virtual machine affinity/anti-affinity rule. Migrate SERVER from HOST1 to HOST2
    Could not enter maintenance mode.





    Solution:
    Manually move the VM over with vMotion.
    Now the host enters maintenance mode.
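
    For reference, here is a hedged PowerCLI sketch of the same workaround (the vCenter address, cluster, and host names are examples, not from this post): list the anti-affinity rules, manually vMotion the blocked VM to the other host, then retry maintenance mode.

    # Connect to vCenter (address is an example)
    Connect-VIServer vcenter.example.local

    # See which anti-affinity rules exist in the cluster
    Get-DrsRule -Cluster (Get-Cluster "250Cluster") | Select-Object Name, KeepTogether, VMIds

    # Manually move the VM the rule is holding back (here 250sql3) to the other host
    Move-VM -VM "250sql3" -Destination (Get-VMHost "host2.example.local")

    # Now the host you want to patch can enter maintenance mode
    Set-VMHost -VMHost "host1.example.local" -State Maintenance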




Posted in esx, esxi, host, vm, vmware, vsphere | No comments

Thursday, November 22, 2012

Hot add of CPU and memory to a virtual machine in vCenter

Posted on 9:40 PM by Unknown

    Why can't you hot add CPU or memory to a VM on your ESXi host?
    Is the option grayed out?

    Reason: in vSphere, a virtual machine has its hot add options turned off by default.







    How to enable hot add (a hedged PowerCLI sketch of the same change follows these steps):
  1. You have to power off the VM; there is no way around this if the option is not enabled.
  2. Edit VM Settings -> Options -> Memory/CPU Hotplug.
  3. Click Enable for both options. You can see that hot remove is not selectable; that is because the guest OS of the VM doesn't support it.




  4. Test the hot add of CPU and RAM.
    *Your guest operating system has to support hot add.
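
    Here is the hedged PowerCLI sketch of steps 1-3 (the VM name is an example): power the VM off, flip the two hot add flags through the vSphere API, and power it back on.

    $vm = Get-VM "250sql3"                         # example VM name

    # Step 1: the VM must be powered off for the change to take effect
    Stop-VMGuest -VM $vm -Confirm:$false           # then wait until it reports PoweredOff

    # Steps 2-3: enable CPU and memory hot add in the VM's configuration
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.CpuHotAddEnabled    = $true
    $spec.MemoryHotAddEnabled = $true
    $vm.ExtensionData.ReconfigVM($spec)

    Start-VM -VM $vm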


    Before the hot add: 1 CPU and 2 GB RAM




    Hot add CPU
    You can add sockets, NOT cores.













    Hot add RAM
    As you can see, I cannot go below the current 2 GB.







    The RAM changed to 8 GB.
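
    With hot add enabled, the same change can also be made from PowerCLI while the VM is running; a minimal sketch (the VM name and sizes are examples):

    # Hot add a second CPU socket and grow memory to 8 GB on a powered-on VM
    Set-VM -VM "250sql3" -NumCpu 2 -MemoryGB 8 -Confirm:$false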
Posted in cpu, esx, esxi, memory, vcenter, vm, vsphere | No comments

Wednesday, November 21, 2012

Installing an FTP server on your ESXi host itself

Posted on 9:48 AM by Unknown

    Andreas Peetz has put together a way to install an FTP server on your ESXi host itself:
    <http://www.v-front.de/2012/11/release-proftpd-ftp-server-for-vmware.html>


    Why is there no FTP service on an ESXi host?
    It isn't advisable to run an FTP service on your actual host;
    it leaves another security hole open on your host.
    As Peetz says, if you want to upload files you have plenty of other options, like the datastore browser or WinSCP.


    So why install an FTP service on the host itself?
    It is just a proof of concept that it can be done, if for no other reason than it being a learning experience in adding offline bundles to your host.

    For Peetz's instructions and updates go to his site:  http://www.v-front.de/2012/11/release-proftpd-ftp-server-for-vmware.html


  1. Download ProFTPD for ESXi
  2. The package is ProFTPD 1.3.3 for VMware ESXi 5.x


  3. Upload it to your datastore




  4. SSH into your host
  5. Run this command:
  6. esxcli software vib install --no-sig-check -d /vmfs/volumes/datastore1/ProFTPD-1.3.3-8-offline_bundle.zip
    What the options mean:
     --no-sig-check    Bypasses acceptance level verification.
    -d    Specifies the full remote URL of the depot index.xml, or a server file path pointing to an offline bundle. In our case I uploaded the file to datastore1, but it could be any datastore.
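
    If you would rather not SSH in, the same install can be run remotely through PowerCLI's Get-EsxCli interface. A hedged sketch (the host name is an example, and the argument names assume the V2 esxcli argument set; check CreateArgs() on your PowerCLI version):

    $esxcli = Get-EsxCli -VMHost (Get-VMHost "host1.example.local") -V2

    # Equivalent of: esxcli software vib install --no-sig-check -d <offline bundle>
    $esxcli.software.vib.install.Invoke(@{
        depot      = "/vmfs/volumes/datastore1/ProFTPD-1.3.3-8-offline_bundle.zip"
        nosigcheck = $true
    })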




    Check the host's Security Profile after the ProFTPD install:
    you will see it listed under Services as well as in the firewall rules.




    Not working?
    Do the PORT commands keep failing?


    Answer:
    Use a program like FileZilla.
    ProFTPD's default TLS handling requires SSL sessions to be reused; the Windows command-line ftp client doesn't do that.
    <http://www.ateamsystems.com/blog/FireFTP-ProFTPd-Unable-to-build-data-connection-Operation-not-permitted-TLS-negotiation>





    Final verdict: it works, but I wouldn't use it in production. It's a good exercise, though!

Posted in esx, esxi, vcenter, vsphere | No comments

Friday, November 16, 2012

"No bootable device" after successful ESXi 5 installation

Posted on 2:40 PM by Unknown

    Did you just install ESXi 5 successfully and then couldn't boot into your host?
    It looks like some people on this thread were able to get it working:
    http://communities.vmware.com/message/2149004

    From ldaprat
    "With ESXi 5, ESX no longer uses MBR for boot, it has gone to GPT-based partitions instead.    Which is fine, if your BIOS (and mine) supported it correctly.      More details here:  http://communities.vmware.com/message/1822854?tstart=0

    There are two ways past this:
    1) try changing your BIOS to enable UEFI support - then re-install completely.     This did not work for me, I got purple screen of death when trying to do the ESXi install.    (I chalk this up to problems with UEFI implementation in the BIOS revision on my older motherboard.)

    2) Re-install ESXi 5, and during initial install press Shift-O when prompted.   Add formatwithmbr to the options, proceed with install."


    *Note: see my test further down for what happens if you delete the runweasel option.

    Why does this happen?
    By default ESXi 5.0 uses GPT-based partition tables and not MBR like it did previously.  This allows you to have disks which are greater than 2TB in size, but some systems are incompatible with it (possibly including yours) or need to have the correct BIOS setting.

    You can try changing your BIOS to see if it's compatible with EFI or will work in "legacy/compatibility mode".  Alternatively you can reinstall and use the "formatwithmbr" setting at the boot prompt which will cause the partitions to be set up the way they were in ESX 4.x.  -PatrickD
    Pasted from <http://communities.vmware.com/message/1822854>




    It looks like it works on various Intel motherboards:
    Intel DG35EC
    Intel DQ35J0
    Intel DP35DP
    Intel D965WH
    Intel DB75EN
    Those are the ones reported above, probably works with more.

    My own test with runweasel formatwithmbr (this is the good way):



    This is what comes up when you press Shift+O


    Now add formatwithmbr


    Now install normally.













    **********************************************************************************
    My own test with runweasel removed and ONLY formatwithmbr. DON'T DO THIS



    Typing in formatwithmbr after removing runweasel
    Hit enter





    It installed itself without asking for any input. I did not get the interactive install prompts that I did get when I left the runweasel command in.



    When I install with only formatwithmbr, I don't have a local datastore, though the installer does load right into ESXi.
    But upon reboot the host doesn't see a boot device and goes to PXE.







    ******************










Posted in esx, esxi, vmware, vsphere | No comments

Thursday, November 15, 2012

vSphere 5.1 vMotion Best Practices

Posted on 2:58 PM by Unknown

These are my notes on the white paper VMware vSphere 5.1 vMotion: Architecture, Performance and Best Practices
http://www.vmware.com/files/pdf/techpaper/VMware-vSphere51-vMotion-Perf.pdf
As vMotion is slightly different in 5.1 vs 5.0 due to enhancements, I wanted to go over everything to get a good handle on the new features.
There are some diagrams in the PDF that are worth checking out.
My own comments are in italics. This is for my own understanding of how vMotion in 5.1 works vs standard vMotion in 5.0 and Storage vMotion.




Migrates live virtual machines, including their memory and storage, between vSphere hosts without any requirement for shared  storage.
During storage migration, vSphere 5.1 vMotion maintains the same performance  as Storage vMotion, even when using the network to migrate.

Before 5.1:
The live-migration solution was limited to hosts that shared a common set of datastores. In addition, migration of an entire virtual machine
required two separate operations, for instance vMotion followed by Storage vMotion, or vice versa.
With 5.1:
Live migration of an entire virtual machine across vSphere hosts is possible without any requirement for shared storage.
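
A hedged PowerCLI sketch of such a combined host plus datastore migration (the names are examples; depending on your PowerCLI release, Move-VM may not support doing both at once for a powered-on VM, in which case the web client or the RelocateVM_Task API is the way to go):

    # Shared-nothing live migration: change host and datastore in one operation
    Move-VM -VM "250sql3" `
            -Destination (Get-VMHost "host2.example.local") `
            -Datastore   (Get-Datastore "datastore2")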


vMotion Architecture
The execution state primarily consists of the following components:
  • The virtual machine's virtual disks
  • The virtual machine's physical memory
  • The virtual device state, including the state of the CPU, network and disk adapters, SVGA, and so on
  • External network connections


How Storage vMotion migrates storage
Storage vMotion uses a synchronous mirroring approach to migrate a virtual disk from one datastore to another datastore on the same physical host.
It uses two concurrent processes:
First, a bulk copy (also known as a clone).
Concurrently, an I/O mirroring process transports any additional changes that occur to the virtual disk because of the guest's ongoing modifications.

I/O mirroring:
The I/O mirroring process accomplishes that by mirroring the ongoing modifications to the virtual disk on both the source and the destination datastores.
Storage vMotion mirrors I/O only to the disk region that has already been copied by the bulk copy process.
Guest writes to a disk region that the bulk copy process has not yet copied are not mirrored, because changes to this disk region will be copied by the bulk copy process eventually.

How are guest writes copied then?
A synchronization mechanism prevents the guest write I/Os from conflicting with the bulk copy process read I/Os when the guest write I/Os are issued to the disk region currently being copied by the bulk copy process.
No synchronization is needed for guest read I/Os, which are issued only to the source virtual disk.

vMotion 5.1 storage migration
vSphere 5.1 vMotion uses a network transport for migrating the data.
In contrast to Storage vMotion, vSphere 5.1 vMotion cannot rely on synchronous storage mirroring because the source and destination datastores might be separated by longer physical distances.
It relies on an asynchronous transport mechanism for migrating both the bulk copy process and I/O mirroring process data.

Does vMotion 5.1 ever use synchronous mirroring?
vSphere 5.1 vMotion switches from asynchronous mode to synchronous mirror mode whenever the guest write I/O rate is faster than the network transfer rate (due to network limitations) or the I/O throughput at the destination datastore (due to destination limitations).
vMotion 5.1 typically transfers the disk content over the vMotion network. However, it optimizes the disk copy by leveraging the mechanisms of Storage vMotion whenever possible.

vMotion 5.1 when it has access to both source and destination datastores
If the source host has access to the destination datastore, vSphere 5.1 vMotion will use the source host's storage interface to transfer the disk content, thus reducing vMotion network utilization and host CPU utilization.
If both the source and destination datastores are on the same array and the array is capable of VAAI, vMotion 5.1 will offload the task of copying the disk content to the array using VAAI.

Migration of Virtual Machine’s Memory
[Phase  1] Guest trace phase
guest memory is staged for migration
Traces are placed on the guest memory pages to track any modifications by the guest during the migration.

[Phase 2] Pre-copy phase
The memory contents of the virtual machine are copied from the source ESXi host to the destination ESXi host in an iterative process,
because the virtual machine continues to run and actively modify its memory state on the source host during this phase.
Each iteration copies only the memory pages that were modified during the previous iteration. [except for the first iteration, probably]

[Phase 3] Switch-over phase
The virtual machine is momentarily quiesced on the source ESXi host,
the last set of memory changes is copied to the target ESXi host,
and the virtual machine is then resumed on the target ESXi host.

vSphere 5.1 vMotion begins the memory copy process only after the bulk copy process completes the copy of the disk contents. The memory copy process
runs concurrently with the I/O mirroring process (which transfers the changes made to the disk after the clone).

Stun During Page-Send (SDPS)
In most cases, each pre-copy iteration (memory copy) should take less time to complete than the previous iteration.
[Sometimes a VM] modifies memory contents faster than they can be transferred.
SDPS then kicks in and ensures the memory modification rate is slower than the pre-copy transfer rate.
This technique avoids any possible vMotion failures.
Upon activation, SDPS injects microsecond delays into the virtual machine execution and throttles its page dirty rate to a preferred rate, guaranteeing pre-copy convergence.
[in effect it retards the VM, slowing it down]



Migration of External Network Connections
[No interruptions] as long as both the source and destination hosts are on the same subnet.
After the virtual machine is migrated, the destination ESXi host sends out a RARP packet to the physical network switch, thereby ensuring that the switch updates its tables with the new switch port location of the migrated virtual machine.


Using Multiple NICs for vSphere 5.1 vMotion
Ensure that all the vMotion vmknics on the source host can freely communicate with all the vMotion vmknics on the destination host. It is recommended to use the same IP subnet for all the vMotion vmknics.

vSphere 5.1 vMotion over Metro Area Networks
vSphere 5.1 vMotion is latency aware and provides support on high-latency networks with round-trip latencies of up to 10 milliseconds.
Update:
***you have to have supported hardware from Cisco or F5

Maximum latency of 5 milliseconds (ms) RTT (round trip time) between hosts participating in vMotion, or 10ms RTT between hosts participating with Enterprise Plus (Metro vMotion feature with certified hardware from Cisco or F5).

// Linjo
http://communities.vmware.com/message/2152545

5.1 vMotion Performance in Mixed VMFS Environments
5.1 vMotion has performance optimizations that depend on a 1MB file system block size.
VMFS-5 with a 1MB block vs VMFS-3 with a 2MB block: the 1MB-block VMFS-5 datastore cut migration time by 35%.
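
A hedged PowerCLI sketch to spot datastores that are not on a 1MB block size (the property names come from the vSphere API; the output is read-only):

    # List VMFS datastores with their VMFS version and block size
    Get-Datastore | Where-Object { $_.Type -eq "VMFS" } | ForEach-Object {
        [pscustomobject]@{
            Name        = $_.Name
            VmfsVersion = $_.FileSystemVersion
            BlockSizeMB = $_.ExtensionData.Info.Vmfs.BlockSizeMb
        }
    }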

Other tips
Switch to VMFS-5 with a 1MB block size. VMFS file systems that don't use 1MB block sizes cannot take advantage of these performance optimizations.

Provision at least a 1Gbps management network.
This is advised because vSphere 5.1 vMotion uses the Network File Copy (NFC) service to transmit the virtual machine's base disk and inactive snapshot points to the destination datastore. Because NFC traffic traverses the management network, the performance of your management network will determine the speed at which such snapshot content can be moved during migration.

If there are no snapshots, and if the source host has access to the destination datastore, vSphere 5.1 vMotion will preferentially use the source host's storage interface to make the file copies, rather than using the management network.

When using the multiple-network-adapter feature:
Configure all the vMotion vmnics under one vSwitch.
Create one vMotion vmknic for each vmnic.
In the vmknic properties, configure each vmknic to leverage a different vmnic as its active vmnic, with the rest marked as standby. This way, if any of the vMotion vmnics becomes disconnected or fails, vSphere 5.1 vMotion will transparently switch over to one of the standby vmnics. However, when all your vmnics are functional, each vmknic will route traffic over its assigned, dedicated vmnic.
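
A hedged PowerCLI sketch of that layout (the vSwitch, port group, uplink, and IP values are examples, not from the paper):

    $vmhost  = Get-VMHost "host1.example.local"
    $vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch1"

    # One vMotion-enabled vmknic per uplink
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vMotion-1" `
        -IP 10.10.10.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vMotion-2" `
        -IP 10.10.10.12 -SubnetMask 255.255.255.0 -VMotionEnabled:$true

    # Pin each port group to a different active uplink, with the other uplink as standby
    Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-1" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive vmnic2 -MakeNicStandby vmnic3
    Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-2" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive vmnic3 -MakeNicStandby vmnic2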

Posted in 5.1, vmotion, vmware, vsphere | No comments

Wednesday, November 14, 2012

Move ESXi hosts between vCenter datacenters without losing vCenter history

Posted on 11:43 AM by Unknown

    The standard way to move a host between vCenter datacenters would be to remove the host and re-add it. JDLangdon came up with an interesting question about trying to keep the historical data:
    <http://communities.vmware.com/message/2147267#2147267>

    *Note: this does not apply to moving between different vCenter instances.

    So I have this host, 10.250.14.1, under datacenter 250DC.
    Suppose we want to move it to 0DC.



    Here is the historical data before moving it.
    It's only been up for a few days, but you may want to move a host that's been up for years.
    That's a lot of data to lose.


    Place the host in maintenance mode.




    Move the host out of its cluster.




    I was able to drag it into the new datacenter.

    *It did fail the first time with "The operation is not supported on the object."
    But then I tried again and it worked.
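
    For reference, a hedged PowerCLI sketch of the same move (the host IP and datacenter names come from this post; whether the stats history survives is exactly what this test is about, so verify on a test host first):

    $vmhost = Get-VMHost "10.250.14.1"
    Set-VMHost -VMHost $vmhost -State Maintenance

    # Drag-and-drop equivalent: pull the host out of its cluster to the datacenter root,
    # then move it into the other datacenter
    Move-VMHost -VMHost $vmhost -Destination (Get-Datacenter "250DC")
    Move-VMHost -VMHost $vmhost -Destination (Get-Datacenter "0DC")

    Set-VMHost -VMHost $vmhost -State Connected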










Posted in datacenter, esx, esxi, host, vcenter, vsphere | No comments

Thursday, November 8, 2012

Log errors in hostd.log : Unable to parse X value

Posted on 7:54 AM by Unknown

I was looking at someone else's hostd.log and found that they had a lot of errors like this:

2012-11-08T13:34:34.377Z [76155B90 error 'Default' opID=HB-host-14@46810-6f9cf212-65] Unable to parse MaxRam value:
2012-11-08T13:34:34.377Z [76155B90 error 'Default' opID=HB-host-14@46810-6f9cf212-65] Unable to parse MaxRamPerCpu value:
2012-11-08T13:34:34.377Z [76155B90 error 'Default' opID=HB-host-14@46810-6f9cf212-65] Unable to parse MinRamPerCpu value:


Looking here: http://communities.vmware.com/message/2053791
it seems the answer support gave was that
" This is a known cosmetic issue and can be safely  ignored as there is no underlying issue with the license being used, it could be resolved in the next release update. "
Posted in esx, esxi | No comments

Network problems with HP DL380p Gen8 and esxi 5

Posted on 7:53 AM by Unknown

An interesting thread by Daza on the HP DL380p Gen8 and ESXi 5,
here: http://communities.vmware.com/thread/408890
What is interesting is that I thought it would have been fixed by now, since it's been a few months and the server is HCL certified, but it looks like people still have the issue.



"Basically since the upgrade (or I should say, fresh installation of ESXi 5) there's been 2 networking based issues that have occured.
 1. Randomly a vmnic will lose connectivity to the physical network.
2. The physical network can no longer talk to the VM network through a vSwitch
"


The current workaround for the issue is to disable NetQ on the adapters.
Looks like Broadcom is working on releasing a patch/fix for the same.
-nshetty
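
If you need to script the NetQueue workaround, here is a hedged PowerCLI sketch (it assumes the VMkernel.Boot.netNetqueueEnabled advanced option is the right knob for your build and driver; check the HP/Broadcom guidance for your exact configuration, and note that the host needs a reboot afterwards):

    # Disable NetQueue host-wide via the boot-time advanced option (assumed setting name)
    $vmhost = Get-VMHost "host1.example.local"        # example host name
    Get-AdvancedSetting -Entity $vmhost -Name "VMkernel.Boot.netNetqueueEnabled" |
        Set-AdvancedSetting -Value $false -Confirm:$false
    # Reboot the host for the change to take effect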


Posted in esx, esxi, HP, vmware, vsphere | No comments

Wednesday, November 7, 2012

Windows 8 installation on ESXi 5.0 is doable

Posted on 11:02 AM by Unknown

    Let's find out.
    This should also work for ESXi 4.1 and VMware Workstation 8-6, though I haven't tried them.


    Virtual machine settings
    Unless specifically mentioned, I am using the defaults.

    Create the VM for windows 8



    Using custom




    Name it




    Using virtual machine hardware version 8



    Guest Operating System: select Other (64-bit)



    Giving it the e1000 adapter



    Choose LSI-SAS for your controller





    Get the bios440 ROM
    from here: http://communities.vmware.com/message/2142390
    Upload it to the folder where your VM's .vmx file is.





    Edit the config file
    Add the following to your config (thanks Jmattson!):

    bios440.filename = "<full path to rom image>"
    mce.enable = TRUE
    cpuid.hypervisor.v0 = FALSE
    vmGenCounter.enable = FALSE

    Edit your VM -> select Configuration Parameters
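
    If you prefer to add these entries from PowerCLI instead of clicking through Configuration Parameters, a hedged sketch (the VM name and ROM file name are examples; use the actual name of the ROM file you uploaded next to the .vmx):

    $vm = Get-VM "win8-test"                            # example VM name
    New-AdvancedSetting -Entity $vm -Name "bios440.filename"    -Value "bios.440.rom" -Confirm:$false
    New-AdvancedSetting -Entity $vm -Name "mce.enable"          -Value "TRUE"  -Confirm:$false
    New-AdvancedSetting -Entity $vm -Name "cpuid.hypervisor.v0" -Value "FALSE" -Confirm:$false
    New-AdvancedSetting -Entity $vm -Name "vmGenCounter.enable" -Value "FALSE" -Confirm:$false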










    Install of win 8
    If you cannot boot your iso try this:
     http://sparrowangelstechnology.blogspot.com/2012/07/boot-virtual-machine-from-iso-tips-on.html








    Win 8 installed!









    *Note:
    You may encounter "DPC watchdog timeouts" in your Windows 8 guest with this approach, so it's not the ideal solution. -jmattson

    Pasted from <http://communities.vmware.com/message/2142610#2142610>



Posted in esxi, vmware, vsphere, windows | No comments

Tuesday, November 6, 2012

NetApp storage best practices for VMware vSphere

Posted on 1:28 PM by Unknown

Here are my notes on the white paper from NetApp:
http://media.netapp.com/documents/tr-3749.pdf
These are items that I felt I should review or that I felt are important to know.
Read the white paper for the full picture.


The recommendations and best practices presented in this document should be considered as deployment requirements.

The 80/20 rule:
80% of all systems virtualized are for consolidation efforts. The remaining 20% of the systems are classified as business-critical applications.
80%: consolidated datasets; 20%: isolated datasets.

CONSOLIDATION DATASETS
VMs do not require application-specific backup and restore agents.
Individually, each VM might not address a large dataset or have demanding IOP requirements; however, the collective whole might be considerable.
served by large, shared, policy-driven storage pools (or datastores).
Consolidated datasets work well with Network File System (NFS) datastores because this design provides greater flexibility in terms of capacity than SAN datastores

Isolated datasets
require application-specific backup and restore agents.
Each individual VM might address a large amount of storage and/or have high I/O requirements.
ideally served by individual, high-performing, nonshared datastores.

SPANNED VMFS DATASTORES
Spanned datastores can overcome the 2TB LUN limit.
They are most commonly used to overcome scaling limits imposed by storage arrays that use per-LUN I/O queues.
NetApp does not recommend the use of spanned VMFS datastores.

RAW DEVICE MAPPINGS
RDM: ESX acts as a connection proxy between the VM and the storage array.

Thin virtual disks
The process of allocating blocks on a shared VMFS datastore is considered a metadata operation and as such executes SCSI locks on the datastore while the allocation operation is executed.
Although this process is very brief, it does suspend the write operations of the VMs on the datastore.
Data that needs to be written must pause while the blocks required to store the data are zeroed out.

STORAGE ARRAY THIN PROVISIONING
The value of thin-provisioned storage is that storage is treated as a shared resource pool and is consumed only as each individual VM requires it.

FAS data deduplication
With NetApp FAS deduplication, VMware deployments can eliminate the duplicate data in their environment, enabling greater storage use. Deduplication virtualization technology enables multiple VMs to share the same physical blocks in a NetApp FAS system in the same manner that VMs share system memory.
Deduplication runs on the NetApp FAS system at scheduled intervals and does not consume any CPU cycles on the ESX server. (but it does on the storage device)
For the largest storage savings, NetApp recommends grouping similar operating systems and similar applications into datastores.

To recognize the storage savings of deduplication with LUNs, you must enable NetApp LUN thin provisioning.
Although deduplication reduces the amount of consumed storage, the VMware administrative team does not see this benefit directly, because its view of the storage is at the LUN layer, and LUNs always represent their provisioned capacity. (I've seen this in my NetApp, where even though there is high dedup, the amount used as far as ESXi is concerned is high.)

DEDUPLICATION ADVANTAGES WITH NFS
when deduplication is enabled with NFS, the storage savings are immediately available and recognized by the VMware administrative team.

VAAI
vStorage APIs for array integration :: mechanism for the acceleration of certain functions typically performed at the hypervisor by offloading these operations to the storage array.
To use VAAI, the NetApp array must be running NetApp Data ONTAP version 8.0.1. VAAI is enabled by default in Data ONTAP and in ESX/ESXi.

Storage I/O Control (SIOC)
Enables quality of service control for storage using the concepts of shares and limits.
Allows the administrator to make sure that certain VMs are given priority access to storage compared to other VMs, based on the allocation of resource shares, maximum IOPS
limits, and whether or not the datastore has reached a specified congestion threshold.
SIOC is currently only supported on FC or iSCSI VMFS datastores.
SIOC does not take action to limit storage throughput of a VM based on the value of its resource shares until the datastore congestion threshold is met.
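
A hedged PowerCLI sketch for turning SIOC on per datastore (the datastore name and threshold are examples; 30 ms is the vSphere 5.x default congestion threshold):

    Get-Datastore "datastore1" |
        Set-Datastore -StorageIOControlEnabled:$true -CongestionThresholdMillisecond 30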

STORAGE NETWORK DESIGN AND SETUP
Converged networks :: the current industry trend is solely focused on multipurpose Ethernet networks that provide storage, voice, and user access.
FC storage networks provide a single service; these single-purpose networks are simpler to design and deploy.
The primary difference between SAN and NAS is in the area of multipathing. In the current versions of ESX/ESXi, NFS requires manual static path configuration.
VLANs and VLAN tagging play a simple but important role in securing an IP storage network: access is restricted to a range of IP addresses that are available only on the IP storage VLAN.
Flow control :: a low-level process for managing the rate of data transmission between two nodes to prevent a fast sender from overrunning a slow receiver.
It can be configured on ESX/ESXi servers, FAS storage arrays, and network switches.
NetApp recommends turning off flow control for modern network equipment, especially 10GbE equipment, and allowing congestion management to be performed higher in the network stack.
For older equipment, typically GbE, NetApp recommends configuring the endpoints, ESX servers, and NetApp arrays with the flow control set to "send."
Spanning Tree Protocol (STP) :: a network protocol that provides a loop-free topology for any bridged LAN.
It allows a network design to include spare (redundant) links to provide automatic backup paths if an active link fails, without the danger of bridge loops or the need for manual enabling or disabling of these backup links.
Bridge loops must be avoided because they result in flooding the network.

ROUTING AND IP STORAGE NETWORKS
NetApp recommends configuring storage networks as a single network that does not route. This model helps to provide good performance and a layer of data security.

SEPARATE ETHERNET STORAGE NETWORK
NetApp recommends separating IP-based storage traffic from public IP network traffic by implementing separate physical network segments or VLAN segments.
The VMware best practice for HA clusters is to define a second service console port for each ESX server. The network used for IP-based storage is a convenient network on which to add this second SC port.
NetApp recommends not allowing routing of data between the storage or VMkernel and other networks. In other words, do not define a default gateway for the VMkernel storage network. With this model, NFS deployments require defining a second service console port on the VMkernel storage virtual switch within each ESX server.

virtual network interface (VIF) or EtherChannel
mechanism that supports aggregation of network interfaces into one logical interface unit. Once created, a VIF is indistinguishable from a  physical network interface. VIFs are used to provide fault tolerance of the network connection and in some cases higher throughput to the storage device.

Cisco Nexus 1000V
Be aware that it is composed of two components, the Virtual Supervisor Module (VSM) and the Virtual Ethernet Module (VEM).
The VSM runs as a VM and is the brains of the operation with a 1000V.
Traffic continues if the VSM fails; however, management of the vDS is suspended.
VSMs should be deployed in an active-passive pair. These two VSMs should never reside on the same physical ESX server. This configuration can be controlled by DRS policies.
The VEM is embedded in all ESX/ESXi hosts, and it is managed by the VSM. One exists for each host in the cluster that is participating in the Cisco Nexus 1000V vDS.
NetApp and Cisco recommend making sure that all service console and VMkernel interfaces (vswif and vmknic) reside on a system VLAN. System VLANs are
defined by an optional parameter that can be added in a port profile.
Do not configure a VM network as a system VLAN.

Storage availability
A deployed storage design that meets all of these criteria can eliminate all single points of failure:
Purchasing physical servers with multiple storage interconnects or HBAs.
Deploying redundant storage networking and network paths.
Leveraging storage arrays with redundant controllers.
The data protection requirements in a virtual infrastructure are greater than those in a traditional physical server infrastructure.

NetApp RAID-DP
An advanced RAID technology that is provided as the default RAID level on all FAS systems.
It protects against the simultaneous loss of two drives in a single RAID group.
The overhead with default RAID groups is a mere 12.5%.
It is safer than data stored on RAID 5 and more cost effective than RAID 10.

Aggregate
The aggregate is the NetApp virtualization layer,
which abstracts physical disks from logical datasets that are referred to as flexible volumes.
Aggregates are the means by which the total IOPS available to all of the physical disks are pooled as a resource.
This design is well suited to meet the needs of an unpredictable and mixed workload.
NetApp recommends using a dedicated two-disk aggregate.
By default, the root aggregate is composed of three disks due to the overhead of RAID-DP. To reduce the disk total from three to two, you must
modify the RAID type from RAID-DP to RAID 4.
With a small number of disk drives, a dedicated aggregate is not recommended.


VMkernel swap
ESX servers create a VMkernel swap or vswap file for every running VM (i.e., 4 GB RAM = 4-8 GB of swap).
NetApp recommends relocating the VMkernel swap file for every VM from the VM home directory to a datastore on a separate NetApp volume dedicated to storing VMkernel swap.
Reason: this data is transient in nature and is not required in the case of recovering a VM either from a backup copy or by using Site Recovery Manager (SRM).
NetApp recommends creating either a large thin-provisioned LUN or a FlexVol volume with the autogrow feature enabled.
This design removes the need to micromanage the swap space or to reduce the usage rate of the storage.
If you undersize the swap space, the VMs fail to start.
The datastore that stores the vswap file is a single datastore for an entire cluster. (single point of failure here)
NetApp does not recommend implementing local datastores on each ESX/ESXi host to store vswap, because this configuration has a negative impact on vMotion migration times.
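
A hedged PowerCLI sketch of pointing a host at such a dedicated swap datastore (the host and datastore names are examples; the cluster swapfile policy also has to allow host-specified swap datastores):

    $vmhost = Get-VMHost "host1.example.local"
    Set-VMHost -VMHost $vmhost -VMSwapfilePolicy InHostDatastore `
               -VMSwapfileDatastore (Get-Datastore "vswap_datastore")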

OPTIMIZING WINDOWS FILE SYSTEM FOR OPTIMAL I/O PERFORMANCE
If your VM is not acting as a file server, do this in the Windows VM:
Disable the access time update process in NTFS. This change reduces the amount of IOPS occurring within the file system.
At a command prompt :: fsutil behavior set disablelastaccess 1

Defrag
VMs stored on NetApp storage arrays should not use disk defragmentation utilities,
because the WAFL file system is designed to optimally place and access data at a level below the guest operating system file system.
If a software vendor advises you to run disk defragmentation utilities inside of a VM, contact the NetApp Global Support Center before initiating this activity.

OPTIMAL STORAGE PERFORMANCE
Guest OS: align the partitions of the virtual disks to the block boundaries of VMFS and the block boundaries of the storage array.
When aligning the partitions of virtual disks for use with NetApp FAS systems, the starting partition offset must be divisible by 4,096.
Windows 2008, Windows 7, and Windows Vista don't need alignment; Windows 2000, 2003, and XP do.
Datastore: NetApp systems automate the alignment of VMFS when you select the LUN type "VMware" for the LUN.

Snapshots
NetApp Snapshot technology can easily be integrated into VMware environments.
It is the only Snapshot technology that does not have a negative impact on system performance.
VMware states that for optimum performance and scalability, hardware-based Snapshot technology is preferred over software-based solutions.
The shortcoming of this solution is that it is not managed within vCenter Server, requiring external scripting and/or scheduling to manage the process.

vSphere Installation Bundle (VIB)
Support for adding partner and other third-party software components.

Host profiles
To avoid conflicts where VSC sets a parameter one way, rendering the ESXi host noncompliant with its host profile, it is important that the reference host be configured with NetApp best practices, using the VSC Monitoring and Host Configuration panel, before creating the profile from the reference host.

Storage DRS
Provides smart VM placement across storage by making load-balancing decisions based upon the current I/O latency and space usage, and by moving VMDKs
nondisruptively between the datastores in the datastore cluster (POD).
Note: Snapshot copies cannot be migrated with the VM.

Datastore cluster (POD)
A collection of like datastores aggregated into a single unit of consumption from an administrator's perspective.
It enables smart and rapid placement of new virtual machines and VMDKs and load balancing of existing workloads.

Best Practices storage DRS and Datastore clusters: (NETAPP)
Set SDRS to manual mode and to review the recommendations before accepting them.
All datastores in the cluster should use the same type of storage (SAS, SATA, and so on) and have the same replication and protection settings.
SDRS will move VMDKs between datastores, and any space savings from NetApp cloning or deduplication will be lost when the VMDK is moved. Rerun deduplication to regain the savings.
Do not use SDRS on thinly provisioned VMFS datastores due to the risk of reaching an out-of-space situation.
Do not mix replicated and nonreplicated datastores in a datastore cluster.
All datastores in an SDRS cluster must either be all VMFS or all NFS datastores.

Storage I/O Control (SIOC)
Provides I/O prioritization of virtual machines running on a cluster of VMware ESX servers.
It prioritizes VMs' access to shared I/O resources based on the disk shares assigned to them.
*Storage DRS works to avoid I/O bottlenecks.
*SIOC manages unavoidable I/O bottlenecks.

Posted in esx, esxi, storage, vmware, vsphere | No comments



