Angels Technology


Thursday, June 27, 2013

White paper summary :: Deploying 10 Gigabit Ethernet on VMware vSphere 4.0 with Cisco Nexus 1000V and VMware vNetwork Standard and Distributed Switches

Posted on 4:08 PM by Unknown

    http://www.vmware.com/files/pdf/techpaper/WP-VMW-Nexus1000v-vSphere4-Deployment.pdf

    My notes on the white paper above: I am using it for review and jotting down the things I find important.
    Please read the white paper itself for concepts I may gloss over, since this is a review for myself.
    Even though vSphere 5.x is out, I am going over this because the concepts are probably still relevant.


    The paper provides design guidance for implementing 10 Gigabit Ethernet networking with VMware vSphere 4.0 (including VMware ESXi 4.0 and ESX 4.0 and associated updates) in a Cisco network environment.

    Design Goals
  1. Availability: The design should be capable of recovering from any single point of failure in the network outside the VMware ESX or ESXi server. Traffic should continue to flow if a single access or distribution switch, cable, or network interface fails.
  2. Isolation: Each traffic type should be logically isolated from every other traffic type.
  3. Performance: The design should provide the capability to impose limits on some traffic types to reduce the effects on other traffic types.

    VMware ESX and ESXi Network Adapter Configurations
    The most common configurations:
  1. Two 10 Gigabit Ethernet interfaces (converged network adapter [CNA], network interface card [NIC], or LAN on motherboard [LOM]).
  2. Two 10 Gigabit Ethernet interfaces (CNA or NIC) plus two Gigabit Ethernet LOM ports (used for management).
    In the most common design scenario, all traffic is converged to two 10 Gigabit Ethernet interfaces.


    Traffic Types in VMware vSphere 4.0
  1. Management: very low network utilization, but it should always be available and isolated from other traffic types through a management VLAN (see the example command sketch after this list).
  2. VMotion: separate VLAN specific to VMware VMotion; a single VMotion migration can use up to approximately 2.6 Gbps of bandwidth, with two allowed to run at the same time.
  3. Fault-tolerant logging: latency less than 1 ms, separate VLAN.
  4. iSCSI: two iSCSI vmkernel ports can be bonded to allow iSCSI traffic over both physical network interfaces; typically an iSCSI-specific VLAN, although targets may reside on another VLAN accessible through a Layer 3 gateway.
  5. NFS: typically an NFS-specific VLAN, although targets may reside on another VLAN accessible through a Layer 3 gateway.
  6. Virtual machines: can be distributed over more than one VLAN and be subject to different policies defined in port profiles and distributed virtual port groups.
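    As a rough sketch of how that per-VLAN isolation might be plumbed on a standard vSwitch from the ESX service console (the VLAN IDs, addresses, and port group names below are assumptions for illustration, not values from the white paper):

      # hypothetical VLAN layout: 100 = management, 101 = VMotion, 102 = iSCSI
      esxcfg-vswitch -A "Management" vSwitch0
      esxcfg-vswitch -v 100 -p "Management" vSwitch0
      esxcfg-vswitch -A "VMotion" vSwitch0
      esxcfg-vswitch -v 101 -p "VMotion" vSwitch0
      esxcfg-vmknic -a -i 10.0.101.11 -n 255.255.255.0 "VMotion"
      esxcfg-vswitch -A "iSCSI" vSwitch0
      esxcfg-vswitch -v 102 -p "iSCSI" vSwitch0
      esxcfg-vmknic -a -i 10.0.102.11 -n 255.255.255.0 "iSCSI"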

    Cisco Nexus 1000V 10 Gigabit Ethernet Network Design
    Network architects can use two different approaches for incorporating the Cisco Nexus 1000V into the data center network environment: virtual PortChannel (vPC) and MAC pinning.
    Both design approaches provide protection against single-link and physical-switch failures;
    they differ in the way that the virtual and physical switches are coupled and the way that the VMware ESX or ESXi server traffic is distributed over the 10 Gigabit Ethernet links.
    vPC is recommended when vPC or clustered physical switches are available at the physical access layer. MAC pinning should be chosen when these options are not available.

    vPC
    Allows the aggregation of two or more physical server ports connecting to a pair of Cisco Nexus 5000 or 7000 switches, making the connection look like one logical upstream switch.
    Provides better bandwidth utilization and redundancy.
    The 10 Gigabit Ethernet uplinks from the Cisco Nexus 1000V are aggregated in a single logical link (PortChannel) to the two adjacent physical switches.
    The adjacent physical switches require vPC; they appear as a single logical switch distributed over two physical chassis.
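    A minimal sketch of what the Nexus 1000V uplink port profile could look like for the vPC/LACP design; the profile name, VLAN ranges, and system VLANs are assumptions, not taken from the paper:

      ! uplink profile carrying all trunked VLANs toward the vPC pair (names/VLANs assumed)
      port-profile type ethernet system-uplink
        vmware port-group
        switchport mode trunk
        switchport trunk allowed vlan 100-105
        ! LACP PortChannel toward the clustered upstream switches
        channel-group auto mode active
        no shutdown
        ! keep management and IP storage VLANs forwarding even before the VSM is reachable
        system vlan 100,102
        state enabled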

    MAC Pinning
    Uplinks from the Cisco Nexus 1000V are treated as stand-alone links.
    Each 10 Gigabit Ethernet interface is connected to a separate physical switch, with Layer 2 continuity on all IEEE 802.1Q trunked VLANs between the two switches.
    Virtual Ethernet ports supporting virtual machines and vmkernel ports are allocated in a round-robin fashion over the available 10 Gigabit Ethernet uplinks.
    Each MAC address is pinned to one of the uplinks until a failover event occurs.
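    The uplink port profile for the MAC pinning design differs mainly in the channel-group mode; a rough sketch with the same assumed names and VLANs:

      ! same trunk, but pinned per MAC instead of forming a PortChannel (no upstream PortChannel needed)
      port-profile type ethernet system-uplink-macpin
        vmware port-group
        switchport mode trunk
        switchport trunk allowed vlan 100-105
        channel-group auto mode on mac-pinning
        no shutdown
        system vlan 100,102
        state enabled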

    Traffic Isolation and Prioritization
    The Nexus 1000V can provide consistent traffic isolation for the various VMware traffic types using port profiles.
    Port profiles map to distributed virtual port groups (DVPort groups).
    Within port profiles, parameters can be set that apply to a specific traffic type such as management, IP storage, VMware VMotion, or virtual machine traffic.
    Parameters cover such details as port security, VLAN, and ACLs.
    Policy maps for QoS treatment can be set on a per-port-profile basis to enable policing and prioritization.
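    For example, a vEthernet port profile for the VMotion vmkernel ports could carry both its VLAN and a QoS service policy; this is a sketch with assumed names, and the limit-vmotion policy it references is sketched after the next section:

      ! vEthernet profile for VMotion vmkernel ports: access VLAN plus a QoS policy
      port-profile type vethernet vmotion
        vmware port-group
        switchport mode access
        switchport access vlan 101
        service-policy type qos input limit-vmotion
        no shutdown
        state enabled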

    Limit Traffic
    It is critical that any one type of traffic does not overconsume the bandwidth.
    The Nexus 1000V can limit the ingress or egress bandwidth down to the virtual Ethernet port level.
    Limits can be applied as part of a port profile for a particular type of traffic (e.g., VMotion).
    Limits can also be applied on a per-virtual-Ethernet-interface basis.
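    A minimal sketch of the kind of policing policy the notes describe, capping ports that use it at an assumed 3 Gbps (the rate, burst, and policy name are illustrative, not from the paper):

      ! police everything on ports using this policy to roughly 3 Gbps
      policy-map type qos limit-vmotion
        class class-default
          police cir 3 gbps bc 200 ms conform transmit violate drop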


    PortChannel Technology
    A multichassis EtherChannel (MEC) capability such as vPC, VSS, or VBS is required on the adjacent physical switches to enable the PortChannel to span both physical switches and still maintain availability.
    When PortChannels are spread across more than one physical switch, the switches are deemed to be clustered.
    Clustering is transparent to the Cisco Nexus 1000V Series Switch.
    When the upstream switches are clustered, the Cisco Nexus 1000V Series Switch should be configured to use an LACP PortChannel with the two 10 Gigabit Ethernet uplinks defined by one port profile.
    Traffic is distributed over the available links (two 10 Gigabit Ethernet links in this case) according to the load-balancing algorithm configured at each end of the PortChannel.
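    On the upstream side, the clustered pair carries a matching vPC/LACP configuration; a rough sketch for one of the two Nexus 5000 peers, with the domain, IDs, interface, and peer-keepalive address all assumed:

      ! one of the two clustered upstream switches (vPC peer-link configuration omitted)
      feature vpc
      feature lacp
      vpc domain 10
        peer-keepalive destination 192.168.1.2
      interface port-channel 10
        switchport mode trunk
        switchport trunk allowed vlan 100-105
        vpc 10
      interface Ethernet1/1
        switchport mode trunk
        switchport trunk allowed vlan 100-105
        channel-group 10 mode active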


    VMware vSS and vDS Configuration
    The configuration for 10 Gigabit Ethernet with both VMware vSS and vDS is similar.

    Teaming Policy Options
  1. Originating virtual port ID :: uplinks in the same Layer 2 domain on all trunked VLANs :: best-practice recommendation.
  2. IP hash :: static IEEE 802.3ad PortChannel required on the uplinks (no LACP) :: traffic is distributed according to a SRC-IP or DST-IP hash :: requires the uplinks to be aggregated into a static PortChannel (see the upstream switch sketch after this list).
  3. Source MAC hash :: uplinks in the same Layer 2 domain on all trunked VLANs :: should be used only if you have multiple MAC addresses assigned to a vNIC and you require additional load distribution over the available uplinks.
  4. Explicit failover order :: uplinks in the same Layer 2 domain on all trunked VLANs :: uses the highest-order uplink from the list of active adapters that passes failover detection. If one link fails, the next link from the list of standby adapters is activated.
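    Since IP hash needs a static (mode on) PortChannel upstream rather than LACP, the upstream member interfaces would look roughly like this (interface and channel-group numbers are assumptions):

      ! upstream member interface for an IP-hash team: static PortChannel, no LACP negotiation
      interface Ethernet1/1
        switchport mode trunk
        switchport trunk allowed vlan 100-105
        channel-group 20 mode on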


    Teaming Policy for Two 10 Gigabit Ethernet Interfaces
    This gives a deterministic way of directing traffic on a per-port-group or per-distributed-virtual-port-group basis to a particular 10 Gigabit Ethernet uplink.
    Virtual switch trunking (VST) mode: trunk the required VLANs into the VMware ESX or ESXi hosts over both 10 Gigabit Ethernet interfaces and make sure that there is Layer 2 continuity between the two switches on each of those VLANs.
    Virtual machine port groups or distributed virtual port groups: make these active on one vmnic and standby on the other.
    vmkernel port groups or distributed virtual port groups: make these active on one vmnic and standby on the other, in reverse of the virtual machine port groups.
    With both NICs active, all virtual machine traffic will use vmnic1 and all the vmkernel ports will use vmnic0. If a switch, link, or NIC failure affects one uplink, all traffic converges onto the remaining vmnic.
    Another variation spreads the virtual machine traffic over both uplinks through the originating virtual port ID policy, with both 10 Gigabit Ethernet uplinks active in that port group or distributed virtual port group.
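    On a standard vSwitch, the VST-mode plumbing might look like the following sketch from the ESX service console; vSwitch, vmnic, and VLAN values are assumptions, and the per-port-group active/standby ordering itself is set in the vSphere Client rather than with these commands:

      # attach both 10 GbE uplinks and a VM port group on its own VLAN (names/IDs assumed)
      esxcfg-vswitch -L vmnic0 vSwitch0
      esxcfg-vswitch -L vmnic1 vSwitch0
      esxcfg-vswitch -A "VM-Network-105" vSwitch0
      esxcfg-vswitch -v 105 -p "VM-Network-105" vSwitch0
      # vmkernel port groups are created as in the earlier sketch; the active/standby NIC order
      # per port group (VMs on vmnic1, vmkernel on vmnic0) is set under NIC Teaming in the vSphere Client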

    Using Traffic Shaping to Control and Limit Traffic
    Useful if you have concerns about one traffic type dominating through oversubscription.
    The traffic shaper controls and limits traffic on a virtual port.
    VMware VMotion, management traffic, and fault-tolerant logging are effectively capped already,
    so this process really concerns only iSCSI and NFS.
    The traffic shaper is configured on the port group (or distributed virtual port group).
    On the vSS, the shaper applies only to ingress traffic;
    the vDS supports bidirectional traffic shaping.
    Do not specify a value greater than 4 Gbps: the configured value wraps modulo 4 Gbps (for example, specifying 5 Gbps results in a 1 Gbps limit).


