Angels Technology


Thursday, June 27, 2013

White paper summary :: Deploying 10 Gigabit Ethernet on VMware vSphere 4.0 with Cisco Nexus 1000V and VMware vNetwork Standard and Distributed Switches

Posted on 4:08 PM by Unknown

    http://www.vmware.com/files/pdf/techpaper/WP-VMW-Nexus1000v-vSphere4-Deployment.pdf

    My notes on the white paper above: I'm using it for review and jotting down the things I find important.
    Please read the white paper itself for concepts I may gloss over, since this is a review for myself.
    The reason I am going over this is that even though vSphere 5.x is out, the concepts are probably still relevant.


    Design guidance for implementing 10 Gigabit Ethernet networking with VMware vSphere 4.0 (including VMware ESXi 4.0 and ESX 4.0 and associated updates) in a Cisco network environment.

    Design Goals
  1. Availability: The design should be capable of recovering from any single point of failure in the network outside the VMware ESX or ESXi server. Traffic should continue to flow if a single access or distribution switch, cable, or network interface fails.
  2. Isolation: Each traffic type should be logically isolated from every other traffic type.
  3. Performance: The design should provide the capability to impose limits on some traffic types to reduce the effects on other traffic types.

  4. VMware ESX and ESXi Network Adapter Configurations
  5. The most common configurations:
  6. Two 10 Gigabit Ethernet interfaces (converged network adapter [CNA], network interface card [NIC], or LAN on motherboard [LOM]).
  7. Two 10 Gigabit Ethernet interfaces (CNA or NIC) plus two Gigabit Ethernet LOM ports (used for management).
  8. In the most common design scenario, all traffic is converged onto the two 10 Gigabit Ethernet interfaces.


  9. Traffic Types in a VMware vSphere 4.0 Environment
  10. Management: very low network utilization, but it should always be available and isolated from other traffic types through a management VLAN.
  11. VMotion :: separate VLAN specific to VMware VMotion :: a single VMotion migration can use up to approximately 2.6 Gbps of bandwidth, with two able to run at the same time.
  12. Fault-tolerant logging: latency less than 1 ms, separate VLAN.
  13. iSCSI :: two iSCSI vmkernel ports can be bonded to allow iSCSI traffic over both physical network interfaces :: typically an iSCSI-specific VLAN, although targets may reside on another VLAN accessible through a Layer 3 gateway.
  14. NFS: typically an NFS-specific VLAN, although targets may reside on another VLAN accessible through a Layer 3 gateway.
  15. Virtual Machines: distributed over more than one VLAN and subject to different policies defined in port profiles and distributed virtual port groups.

  16. Cisco Nexus 1000V 10 Gigabit Ethernet Network Design
    Network architects can use two different approaches for incorporating the Cisco Nexus 1000V into the data center network environment: virtual PortChannel (vPC) and MAC pinning.
    Both design approaches provide protection against single-link and physical-switch failures;
    they differ in the way that the virtual and physical switches are coupled and the way that the VMware ESX or ESXi server traffic is distributed over the 10 Gigabit Ethernet links.
    vPC is recommended when vPC or clustered physical switches are available at the physical access layer. MAC pinning should be chosen when these options are not available.

    vPC
    allows the aggregation of two or more physical server ports to connect to a pair of Cisco Nexus 5000 or 7000 switches, making the connection look like one logical upstream switch.
    provides better bandwidth utilization and redundancy.
    10 Gigabit Ethernet uplinks from the Cisco Nexus 1000V are aggregated in a single logical link (PortChannel) to the two adjacent physical switches.
    The adjacent physical switches require vPC so that they appear as a single logical switch distributed over two physical chassis.

    MAC Pinning
    uplinks from the Cisco Nexus 1000V are treated as stand-alone links.
    each 10 Gigabit Ethernet interface is connected to a separate physical switch with Layer 2 continuity on all IEEE 802.1Q trunked VLANs between the two switches.
    Virtual Ethernet ports supporting virtual machines, and vmkernel ports, are allocated in a round-robin fashion over the available 10 Gigabit Ethernet uplinks.
    Each MAC address is pinned to one of the uplinks until a failover event occurs (sketched below).
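    A minimal Python sketch of that pinning behavior, assuming a simple round-robin assignment (my own illustration; this is not Cisco code and the class and field names are made up):

    # Illustration only: round-robin MAC pinning with re-pinning on uplink failure.
    class MacPinningVem:
        def __init__(self, uplinks):
            self.uplinks = list(uplinks)   # e.g. two 10 Gigabit Ethernet uplinks
            self.failed = set()
            self.pinning = {}              # MAC address -> uplink it is pinned to
            self._rr = 0

        def attach(self, mac):
            """Pin a vEth port's MAC to a healthy uplink in round-robin fashion."""
            healthy = [u for u in self.uplinks if u not in self.failed]
            uplink = healthy[self._rr % len(healthy)]
            self._rr += 1
            self.pinning[mac] = uplink
            return uplink

        def fail_uplink(self, uplink):
            """On a failover event, re-pin only the MACs that used the failed uplink."""
            self.failed.add(uplink)
            for mac, current in list(self.pinning.items()):
                if current == uplink:
                    self.attach(mac)

    vem = MacPinningVem(["eth1", "eth2"])
    vem.attach("00:50:56:aa:00:01")        # pinned to eth1
    vem.attach("00:50:56:aa:00:02")        # pinned to eth2
    vem.fail_uplink("eth1")                # both MACs now ride the surviving uplink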

    Traffic Isolation and Prioritization
    The 1000V can provide consistent traffic isolation for the various VMware traffic types using port profiles.
    Port profiles map to distributed virtual port groups (dvPortGroups).
    Within port profiles, parameters can be set that apply to a specific traffic type such as management, IP storage, VMware VMotion, or virtual machine traffic.
    The parameters cover such details as port security, VLAN, and ACLs.
    Policy maps for QoS treatment can be set on a per-port-profile basis to enable policing and prioritization.

    Limit traffic
    It is critical that any one type of traffic does not overconsume the bandwidth.
    Ingress or egress bandwidth can be limited down to the virtual Ethernet port level.
    A limit can be applied as part of a port profile for a particular type of traffic (e.g., VMotion),
    and can also be applied on a per-virtual-Ethernet-interface basis (see the sketch below).
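    A small Python sketch of the port-profile idea: one reusable bundle of VLAN, ACL, and rate-limit settings applied to many virtual Ethernet ports. This is my own conceptual model with made-up names, not the Nexus 1000V configuration syntax:

    # Illustration only: a port profile as a reusable bundle of settings.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class PortProfile:
        name: str
        vlan: int
        acl: list = field(default_factory=list)    # e.g. ["permit ip 10.1.1.0/24 any"]
        rate_limit_mbps: Optional[int] = None      # cap for this traffic type

    @dataclass
    class VethPort:
        veth_id: int
        profile: PortProfile

    # One profile, many ports: edit the profile once and every port inherits it.
    vmotion_profile = PortProfile("vmotion", vlan=20, rate_limit_mbps=3000)
    ports = [VethPort(i, vmotion_profile) for i in range(1, 5)]
    assert all(p.profile.rate_limit_mbps == 3000 for p in ports)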


    PortChannel Technology
    A multichassis EtherChannel (MEC) capability such as vPC, VSS, or VBS is required on the adjacent physical switches to enable the PortChannel to span both physical switches and still maintain availability.
    When PortChannels are spread across more than one physical switch, the switches are deemed to be clustered.
    Clustering is transparent to the Cisco Nexus 1000V Switch.
    When the upstream switches are clustered, the Cisco Nexus 1000V Series Switch should be configured to use an LACP PortChannel with the two 10 Gigabit Ethernet uplinks defined by one port profile.
    Traffic is distributed over the available links (two 10 Gigabit Ethernet links in this case) according to the load-balancing algorithm configured at each end of the PortChannel.


    VMware vSS and vDS Configuration
    The configuration for 10 Gigabit Ethernet with both VMware vSS and vDS is similar.

    Teaming Policy Options
  17. Originating virtual port ID :: Uplinks in same Layer 2 domain on all trunked VLANs :: best practice recommendation.
  18. IP hash :: Static IEEE 802.3ad PortChannel required on uplinks (no LACP) :: Traffic distributed according to SRC-IP or DST-IP hash ::
  19. requires the uplinks to be aggregated into a static PortChannel.
  20. Source MAC hash :: Uplinks in same Layer 2 domain on all trunked VLANs :: should be used only if you have multiple MAC addresses assigned to a vNIC and you require additional load distribution over the available uplinks.
  21. Explicit failover order :: Uplinks in same Layer 2 domain on all trunked VLANs :: uses the highest-order uplink from the list of active adapters that pass failover detection. If one link fails, the next link from the list of standby adapters is activated.


  22. Teaming Policy for Two 10 Gigabit Ethernet Interfaces
    This gives a deterministic way of directing traffic on a per-port-group or per-distributed-virtual-port-group basis to a particular 10 Gigabit Ethernet uplink.
    Virtual switch trunking (VST) mode: Trunk the required VLANs into the VMware ESX or ESXi hosts over both 10 Gigabit Ethernet interfaces and make sure that there is Layer 2 continuity between the two switches on each of those VLANs.
    Virtual machine port groups or distributed virtual port groups: Make these active on one vmnic and standby on the other.
    vmkernel port groups or distributed virtual port groups: Make these active on one vmnic and standby on the other, in reverse of the virtual machine port groups.
    When both NICs are healthy, all virtual machine traffic will use vmnic1 and all the vmkernel ports will use vmnic0. If a switch, link, or NIC failure affects one uplink, then all traffic converges to the remaining vmnic (sketched below).
    Another variation spreads the virtual machine traffic over both uplinks through the originating virtual port ID policy, with both 10 Gigabit Ethernet uplinks active in that port group or distributed virtual port group.
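    A tiny Python sketch of that active/standby split and the failure convergence (the vmnic0/vmnic1 names follow the text above; the helper function is my own illustration, not a vSphere API):

    # Illustration only: explicit active/standby teaming per port group.
    teaming = {
        "vm-portgroup":       {"active": "vmnic1", "standby": "vmnic0"},
        "vmkernel-portgroup": {"active": "vmnic0", "standby": "vmnic1"},
    }

    def uplink_for(portgroup, failed=frozenset()):
        """Pick the active vmnic, falling back to the standby if the active one is down."""
        for vmnic in (teaming[portgroup]["active"], teaming[portgroup]["standby"]):
            if vmnic not in failed:
                return vmnic
        raise RuntimeError("no healthy uplinks")

    print(uplink_for("vm-portgroup"))                     # vmnic1 in normal operation
    print(uplink_for("vm-portgroup", failed={"vmnic1"}))  # converges to vmnic0 on failure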

    Using Traffic Shaping to Control and Limit Traffic
    If you have concerns about one traffic type dominating through oversubscription,
    the traffic shaper controls and limits traffic on a virtual port.
    VMware VMotion, management traffic, and fault-tolerant logging are effectively capped,
    so in practice this really concerns only iSCSI and NFS.
    The traffic shaper is configured on the port group (or distributed virtual port group).
    On vSS, the shaper applies only to ingress traffic;
    vDS supports bidirectional traffic shaping.
    Do not specify a value greater than 4 Gbps :: reason :: values wrap modulo 4 Gbps, e.g., a configured 5 Gbps becomes an effective 1 Gbps.
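    The arithmetic behind that last note, as I read it (the 4 Gbps wrap boundary is taken from the paper's warning; the helper is just my illustration):

    # Illustration of the wrap-around: shaper values past ~4 Gbps fold back.
    WRAP_GBPS = 4   # boundary quoted in the white paper's warning

    def effective_shaper_gbps(configured_gbps):
        return configured_gbps if configured_gbps < WRAP_GBPS else configured_gbps % WRAP_GBPS

    print(effective_shaper_gbps(3))   # 3 Gbps, applied as intended
    print(effective_shaper_gbps(5))   # 1 Gbps, not the 5 Gbps that was entered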




Wednesday, June 26, 2013

White paper summary : DMZ Virtualization Using VMware vSphere 4 and the Cisco Nexus 1000V Virtual Switch

Posted on 12:29 PM by Unknown

    http://www.vmware.com/files/pdf/dmz-vsphere-nexus-wp.pdf




    My notes on the white paper above: I'm using it for review and jotting down the things I find important.
    Please read the white paper itself for concepts I may gloss over, since this is a review for myself.
    The reason I am going over this is that even though vSphere 5.x is out, the concepts are probably still relevant.


    This document discusses DMZ virtualization and security.

    DMZ Virtualization
  1. The virtualized DMZ takes advantage of virtualization technologies to reduce the DMZ footprint (fig. 1 vs. fig. 2).
  2. Security requirements for the physical DMZ design remain applicable in the virtual design.
  3. *Some virtualization-specific considerations need to be taken into account.
  4. In a virtualized environment, apps run on VMs, and multiple VMs may reside within the same physical server, so traffic may not need to leave the physical server.
  5. In this environment, a virtual network is created within each server. Multiple VLANs, IP subnets, and access ports can all reside within the server as part of a virtual network (vSwitches).

  6. vSwitch:
  7. Traditional methods for gaining visibility into server and application traffic flows may not function for inter-virtual-machine traffic that resides within a physical server, and enforcement of network policies can become difficult.
  8. Cisco Nexus 1000V Series Switches address these concerns by allowing network and server teams to maintain their traditional roles and responsibilities in a virtual networking environment through features and functions comparable to those in today’s physical network switches.
  9. The virtual switch provides connectivity between a VM's vNIC and the physical NICs of the server.

  10. 1000V
  11. The 1000V is a virtual network distributed switch (vDS) that consists of two components:
  12. the virtual supervisor module (VSM) and the virtual Ethernet module (VEM).
  13. The VSM acts in a similar fashion to a traditional Cisco® supervisor module. The networking and policy configurations are performed on the VSM and applied to the ports on each VEM.
  14. The VEM is similar to a traditional Cisco line card and provides the ports for host (virtual machine) connectivity. The VEM resides in the physical server as the virtual switching component.
  15. The physical NICs are configured as uplink ports on the Cisco Nexus 1000V Series.


  16. The following sections describe some of the Cisco Nexus 1000V Series features.

    Port Profiles and Port Groups
  17. Some of the network functions now reside in the virtual server platform: VLAN assignment, port mapping, and inter-virtual-machine communication.
  18. This brings some contention as to who is responsible for the networking and security policies at this virtualized layer.
  19. Server teams would rather apply a predefined network policy to their servers.
  20. When a network policy is defined on the Cisco Nexus 1000V Series, it is updated in VMware vCenter and displayed as an option on the Port Group drop-down list.
  21. 1000V Series policies are defined through a feature called port profiles.
  22. Port profiles allow you to configure network and security features in a single profile, which can be applied to multiple switch interfaces.
  23. Apply that profile, and any settings defined in it, to one or more interfaces. Multiple profiles can be defined and assigned to individual interfaces.


  24. Isolation and Protection
  25. VLANs, Private VLANs, ACLs, Anti-Spoofing
  26. VLANs with applied ACLs can be used to control traffic to different virtual machines and applications.
  27. VLANs provide a reliable and proven method for segmenting traffic flows in the network.
  28. Private VLANs provide a means of isolating machines within the same VLAN.
  29. Private VLANs were originally developed for service providers as a means of scaling IP addresses in a hosting environment.
  30. Two types of VLANs are used in private VLANs: primary and secondary.
  31. The primary VLAN is usually the current VLAN being used for access and is the VLAN carried throughout the infrastructure.
  32. The secondary VLAN is known only within the physical or virtual switch in which it is configured. Each secondary VLAN is associated with a primary
  33. Three types of ports are available when configuring private VLANs: promiscuous, isolated, and community
  34. Isolated ports can communicate only with the promiscuous port and cannot communicate directly with other isolated ports on the switch
  35. promiscuous port is the aggregation point for access to and from each of the secondary VLANs. The promiscuous port is usually the uplink port for the switch and carries the primary VLAN.
  36. Community ports can communicate with other ports in the same community and with the promiscuous port (a small sketch of these port rules follows this list).
  37. If direct virtual machine-to-virtual machine communication is required or if server clustering is being used, a community VLAN can be a valuable feature.
  38. Anti-spoofing features on the Cisco Catalyst switching platform :: Dynamic Address Resolution Protocol (ARP) inspection, IP source guard, and Dynamic Host Configuration Protocol (DHCP) snooping.
  39. ARP inspection is used to map each default gateway to the associated MAC address. This mapping helps ensure that the default gateway IP address is always associated with the correct MAC address.
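    A short Python sketch of the private VLAN reachability rules described above (promiscuous, isolated, community); this is just my own model of the rules, not switch code:

    # Illustration only: who may talk to whom under private VLAN port types.
    def pvlan_can_talk(src_type, dst_type, src_community=None, dst_community=None):
        """Return True if traffic from src may be forwarded to dst within the private VLAN."""
        if "promiscuous" in (src_type, dst_type):
            return True                             # the promiscuous (uplink) port reaches everything
        if src_type == dst_type == "community":
            return src_community == dst_community   # same community only
        return False                                # isolated ports never talk to each other

    assert pvlan_can_talk("isolated", "promiscuous")
    assert not pvlan_can_talk("isolated", "isolated")
    assert pvlan_can_talk("community", "community", "web", "web")
    assert not pvlan_can_talk("community", "community", "web", "db")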


  40. Increasing Visibility
    SPAN and ERSPAN are very useful tools for gaining visibility into network traffic flows.
    Traffic flows can now occur within the server between virtual machines without needing to traverse a physical access switch. Administrators may have a more difficult time identifying a virtual machine that is infected or compromised.
    NetFlow defines flows as records and exports these records to collection devices. NetFlow provides information about the use of the applications in the data center network.
    Host-based IPS is one of the most effective ways to protect an endpoint against exploitation attempts and malicious software.
    By looking at the behavioral aspects of an attack, Cisco Security Agent (IPS) can detect and stop new attacks without first needing a signature installed to identify the particular attack.

    Consolidated DMZ Architecture
    Traditionally, DMZ designs make use of a separate infrastructure :: this requires the use of dedicated servers to host DMZ-based applications.
    Consolidation of a mix of internal and DMZ virtual machines on the same physical server does allow better use of resources, but a strict security policy must be followed to maintain proper isolation.




Wednesday, June 19, 2013

Personal : What's next in IT for me

Posted on 11:43 PM by Unknown
Interesting question. With the scope of work ever changing and tech rapidly racing ahead, it's time to kick-start some new ideas to play with to get a foot in the future trends that are coming. I think I'm downloading PuppetLabs' demo this weekend and will play around with it. I've heard some great things, and it seems like it's really a force for automation in terms of VMware as well as other things.
Let's see where it heads in the next couple of years.

White paper summary: SAN Conceptual and Design Basics from VMware

Posted on 1:54 PM by Unknown

    These are my summary notes on the paper SAN Conceptual and Design Basics from VMware.
    http://www.vmware.com/pdf/esx_san_cfg_technote.pdf
    These are things that I wanted to write down for quick review.




  1. A SAN (storage area network) is a specialized high‐speed network that connects computer systems to high-performance storage subsystems.
  2. SAN provides extra storage for consolidation, improves reliability, and helps with disaster recovery.
  3. SAN is a specialized high‐speed network of storage devices and switches connected to computer systems.
  4. A SAN presents shared pools of storage devices to multiple servers. Each server can access the storage as if it were directly attached to that server. A SAN supports centralized storage management.
  5. The physical components of a SAN can be grouped in a single rack or data center or connected over long distances. [think metro]
  6. In its simplest form, a SAN consists of one or more servers (1) attached to a storage array (2) using one or more SAN switches

  7. Fabric (4) — The SAN fabric is the actual network portion of the SAN. When one or more SAN switches are connected, a fabric is created. The FC protocol is used to
    communicate over the entire network. A SAN can consist of multiple interconnected fabrics. Even a simple SAN often consists of two fabrics for redundancy.
    HBA: the NIC-equivalent on the server side.
    Storage Processor (SP): the NIC-equivalent on the storage array side.

    How a SAN Works
  8. When a host wants to access a storage device, it sends a block‐based access request.
  9. The SCSI commands are encapsulated into FC.
  10. The HBA transmits the request to the SAN.
  11. The SAN switches receive the request and send it to the storage processor, which sends it on to the storage device (sketched below).
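    A conceptual Python sketch of that path, just to make the encapsulation step concrete (these are not real driver structures; the WWPNs and field names are made up):

    # Illustration only: a block-based SCSI request encapsulated in FC and
    # forwarded through the fabric toward the storage processor.
    def scsi_read(lun, lba, blocks):
        return {"op": "READ", "lun": lun, "lba": lba, "blocks": blocks}

    def fc_encapsulate(scsi_cmd, src_wwpn, dst_wwpn):
        return {"s_id": src_wwpn, "d_id": dst_wwpn, "payload": scsi_cmd}

    def fabric_forward(frame, port_table):
        return port_table[frame["d_id"]]          # the switch looks up the destination port

    hba_wwpn = "10:00:00:00:c9:aa:00:01"          # made-up server-side port
    sp_wwpn = "50:06:01:60:00:00:00:10"           # made-up storage-processor port
    frame = fc_encapsulate(scsi_read(lun=5, lba=2048, blocks=16), hba_wwpn, sp_wwpn)
    print(fabric_forward(frame, {sp_wwpn: "switch1/port7"}))   # SP port servicing the request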

  12. Host Components
    HBAs and HBA drivers

    Fabric Components
  13. SAN Switches
  14. Data Routers :: intelligent bridges between SCSI devices and FC devices in the SAN. Servers in the SAN can access SCSI disk or tape devices in the SAN through the data routers in the fabric layer.
  15. Cables
  16. Communications Protocol :: fabric components communicate using the FC communications protocol.

  17. Storage Components
  18. Storage arrays.
  19. Storage processors (SPs). The SPs are the front end of the storage array. SPs communicate with the disk array (which includes all the disks in the storage array) and provide the
  20. RAID/LUN functionality.
  21. Data is stored on disk arrays or tape devices (or both).
  22. Disk arrays are groups of multiple disk devices.
  23. Storage arrays rarely provide hosts direct access to individual drives.
  24. RAID algorithms :: commonly known as RAID levels; using specialized algorithms, several drives are grouped to provide common pooled storage.

  25. A RAID group is equivalent to a single LUN.
    LUN: a LUN is a single unit of storage.
    Depending on the host system environment, a LUN is also known as a volume or a logical drive.

    Advanced storage arrays:
  26. RAID groups can have one or more LUNs.
  27. The ability to create more than one LUN from a single RAID group provides finer granularity in the storage creation process (see the sketch after this list).
  28. Most storage arrays provide additional data protection and replication features such as snapshots, internal copies, and remote mirroring.
  29. A snapshot is a point‐in‐time copy of a LUN.
  30. Internal copies allow data movement from one LUN to another, providing an additional copy for testing.
  31. Remote mirroring provides constant synchronization between LUNs on one storage array and a second, independent (usually remote) storage array for disaster recovery.
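    A quick Python sketch of carving several LUNs out of one RAID group; the RAID 5 capacity formula is the textbook one and the sizes are made up for illustration:

    # Illustration only: one RAID group, multiple LUNs carved from it.
    def raid5_usable_gb(disk_count, disk_gb):
        return (disk_count - 1) * disk_gb        # one disk's worth of capacity goes to parity

    def carve_luns(usable_gb, lun_sizes_gb):
        assert sum(lun_sizes_gb) <= usable_gb, "RAID group too small for the requested LUNs"
        return {f"LUN{i}": size for i, size in enumerate(lun_sizes_gb)}

    usable = raid5_usable_gb(disk_count=5, disk_gb=600)   # 2400 GB usable
    print(carve_luns(usable, [800, 800, 500]))            # three LUNs from one RAID group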

  32. SAN Ports and Port Naming
  33. A port is the connection from a device into the SAN :: each node in the SAN (each host, storage device, and fabric component, i.e., router or switch) has one or more ports.
  34. WWPN: World Wide Port Name. A globally unique identifier for a port (aka WWN).
  35. Port_ID (or port address) :: Within the SAN, each port has a unique port ID that serves as the FC address for the port. This enables routing of data through the SAN to that port. The FC switches assign the port ID when the device logs into the fabric. The port ID is valid only while the device is logged on.

  36. An FC path describes a route:
  37.  from a specific HBA port in the host,
  38.  through the switches in the fabric, and
  39.  into a specific storage port on the storage array.
  40. Multipathing :: having more than one path from a host to a LUN.
  41. By default, VMware ESX Server systems use only one path from the host to a given LUN at any time. [This might be outdated.]
  42. Path failover :: the process of detecting a failed path and switching to another.


  43. An active/active disk array allows access to the LUNs simultaneously through all the storage processors that are available, without significant performance degradation. All the paths are active at all times (unless a path fails).
  44. In an active/passive disk array, one SP is actively servicing a given LUN. The other SP acts as backup for the LUN and may be actively servicing other LUN I/O. I/O can be sent only to an active processor (sketched below).
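    A minimal Python sketch of path selection and failover over those two array types (my own illustration; the path states and SP names are invented, and real ESX path policies are more involved):

    # Illustration only: pick a usable path to a LUN, preferring the owning SP
    # on an active/passive array, and fail over when the current path dies.
    paths = [
        {"hba": "vmhba1", "fabric": "A", "sp": "SP-A", "state": "ok"},
        {"hba": "vmhba2", "fabric": "B", "sp": "SP-B", "state": "ok"},
    ]

    def select_path(paths, array_type="active/passive", owning_sp="SP-A"):
        candidates = [p for p in paths if p["state"] == "ok"]
        if array_type == "active/passive":
            preferred = [p for p in candidates if p["sp"] == owning_sp]
            candidates = preferred or candidates   # falling back implies the LUN moves to the other SP
        if not candidates:
            raise RuntimeError("all paths dead")
        return candidates[0]

    print(select_path(paths))        # vmhba1 via SP-A while everything is healthy
    paths[0]["state"] = "dead"       # fabric A (or SP-A) fails
    print(select_path(paths))        # path failover to vmhba2 via SP-B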

  45. Zoning provides access control in the SAN topology.
  46. It defines which HBAs can connect to which SPs.
  47. You can zone multiple ports to the same SP in different zones to reduce the number of presented paths.
  48. Devices outside a zone are not visible to the devices inside the zone. SAN traffic within each zone is isolated from the other zones.
  49. You can use zoning in several ways.
  50. Zoning for security and isolation.
  51. Zoning for shared services :: allows common server access for backups. A backup server with tape services may require SAN‐wide access to host servers individually for backup and recovery; these backup servers need to be able to access the servers they back up. A SAN zone might be defined for the backup server to access a particular host, and the zone is then redefined for access to another host when the backup server is ready to perform backup or recovery processes on that host.
  52. Multiple storage arrays :: zones are also useful when there are multiple storage arrays. Through the use of separate zones, each storage array is managed separately from the others, with no concern for access conflicts between servers.


  53. LUN Masking: used for permission management. LUN masking is also referred to as selective storage presentation or access control.
    It is performed at the SP or server level; a masked LUN is invisible when a target is scanned (zoning and masking are sketched below).
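    A small Python sketch of how zoning and LUN masking layer on each other, using made-up WWPNs and zone names (illustration only, not any array vendor's tooling):

    # Illustration only: zoning decides which SP ports an HBA can even reach;
    # LUN masking then hides individual LUNs from hosts that do reach the SP.
    zones = {
        "zone_esx01_spa":  {"hba": "10:00:00:00:c9:aa:00:01", "sp_port": "50:06:01:60:00:00:00:01"},
        "zone_backup_spb": {"hba": "10:00:00:00:c9:bb:00:02", "sp_port": "50:06:01:61:00:00:00:01"},
    }
    lun_masks = {   # SP-level masking: host WWPN -> LUNs it is allowed to see
        "10:00:00:00:c9:aa:00:01": {0, 1, 2},
        "10:00:00:00:c9:bb:00:02": {7},
    }

    def visible_luns(hba_wwpn, sp_port):
        zoned = any(z["hba"] == hba_wwpn and z["sp_port"] == sp_port for z in zones.values())
        if not zoned:
            return set()                          # outside the zone the target is not even visible
        return lun_masks.get(hba_wwpn, set())     # zoned, but still subject to masking

    print(visible_luns("10:00:00:00:c9:aa:00:01", "50:06:01:60:00:00:00:01"))   # {0, 1, 2}
    print(visible_luns("10:00:00:00:c9:aa:00:01", "50:06:01:61:00:00:00:01"))   # set(): not zoned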


    SAN Design Basics
    When designing a SAN for multiple applications and servers, you must balance the performance, reliability, and capacity attributes of the SAN.
    Defining Application Needs: the SAN should support fast response times consistently for each application, even though the requirements made by applications vary over peak periods for both I/O per second and bandwidth (in megabytes per second).

    The first step in designing an optimal SAN is to define the storage requirements
    for each application in terms of the following (captured in the sketch after this list):
     I/O performance (I/O per second)
  54.  Bandwidth (megabytes per second)
  55.  Capacity (number of LUNs and capacity of each LUN)
  56.  Redundancy level (RAID‐level)
  57.  Response times (average time per I/O)
  58.  Overall processing priority
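    A tiny Python sketch that just captures those per-application requirements in one place so they can later be mapped onto RAID groups and LUNs (the field names and sample numbers are my own):

    # Illustration only: per-application storage requirements from the list above.
    from dataclasses import dataclass

    @dataclass
    class AppStorageRequirement:
        name: str
        iops: int                  # I/O per second at peak
        bandwidth_mbps: int        # megabytes per second at peak
        lun_count: int
        lun_capacity_gb: int
        raid_level: str            # redundancy level, e.g. "RAID 5"
        response_time_ms: float    # average time per I/O
        priority: int              # overall processing priority (1 = highest)

    reqs = [
        AppStorageRequirement("oltp-db", 8000, 120, 4, 500, "RAID 10", 5.0, 1),
        AppStorageRequirement("file-share", 800, 40, 2, 2000, "RAID 5", 20.0, 3),
    ]
    # Size for the worst case, in line with basing the design on peak-period activity.
    print(sum(r.iops for r in reqs))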

  59. Storage array design
  60. Mapping the defined storage requirements to the resources of the storage array.
  61.  Each RAID group provides a specific level of I/O performance, capacity, and redundancy.
  62.  LUNs are assigned to RAID groups based on these requirements.
  63.  The storage arrays need to distribute the RAID groups across all internal channels and access paths. This results in load balancing.

  64. Base the SAN design on peak‐period activity

    Caching
  65. The cache could be saturated by sufficiently intense I/O, which reduces the cache’s effectiveness.
  66. A read‐ahead cache may be effective for sequential I/O, such as during certain types of backup activities, and for template repositories.
  67.  A read cache is often ineffective when applied to a VMFS‐based LUN because multiple virtual machines are accessed concurrently.


  68. HA
    Make sure that redundancy is built into the design at all levels. Build in additional switches, HBAs, and storage processors, creating, in effect, a redundant access path.
    Redundant I/O Paths :: paths from the server to the storage array must be redundant and dynamically switchable.
    Mirroring :: protection against LUN failure allows applications to survive storage access faults.
    Mirroring designates a second, non‐addressable LUN that captures all write operations to the primary LUN.
    Duplication of SAN Environment :: replication of the entire SAN environment.


Wednesday, June 12, 2013

Vmware commercial : virtualize the datacenter

Posted on 12:53 PM by Unknown



Virtualizing servers, storage, networking, and security is something that I find interesting and very relevant to my situation. I believe that understanding and bringing forth these changes should be a task that those involved with infrastructure should look at.

This is a little commercial VMware is showing to present its position on the DC.


Check the video:
http://www.vmware.com/getthefacts/microsoft/entire-data-center.html


Tuesday, June 11, 2013

video : vmware commercial : Total Cost of Ownership of hyper-v

Posted on 8:52 AM by Unknown
http://vimeo.com/channels/polygraphtest/67456994
One interesting point brought up is that you have to remember Patch Tuesday when considering hosting VMs.




Total Cost of Ownership from Polygraph Test on Vimeo.

Video : vmware commercial VMware’s “Built for the Future” video

Posted on 8:49 AM by Unknown
Another video of someone grilling an actor playing an MS employee about their company's product.

http://vimeo.com/channels/polygraphtest/67457048



Built for the Future from Polygraph Test on Vimeo.

Monday, June 10, 2013

video : vmware commercial Maximum Uptime vs Microsoft

Posted on 11:46 PM by Unknown

This video isn't really highlighting anything that a sysadmin doesn't know, but VMware would like to remind everyone of certain flaws with its competitor's system.


http://vimeo.com/channels/polygraphtest/67457047

Maximum Uptime from Polygraph Test on Vimeo.


video : VMware’s “Virtualize Everything” video

Posted on 11:36 PM by Unknown

"How do you virutalize the entire datacenter?"
That’s a question that I think a lot of people have, and though the material is out there, vmware is going to need ot push the software defined data center further ( not that they aren't doing that)
The video is a mildly entertaining one, almost Neo vs agent smith scene.

 http://vimeo.com/channels/polygraphtest/67457046

Virtualize Everything from Polygraph Test on Vimeo.


white paper review : Hyper-V vs. vSphere: Understanding the differences by Scott Lowe

Posted on 12:28 PM by Unknown

http://solarwinds-marketing.s3.amazonaws.com/solarwinds/PDFs/1204_sw_vmvspherehyperv_whitepaper.pdf


*Excuse the grammatical mistakes; this is more of a cluster of notes I wrote for myself. Feel free to use it as a digestible review of the paper, but you should check out the original as it's quite interesting and deserves a read.



Another paper on Hyper-V vs. vSphere, this time written by Scott Lowe.


Although that superiority remains largely in place today, the feature gap will ultimately shrink and Hyper-V will become a “good enough” solution for more and more organizations.

The statement above is something some VMware end users have been discussing. With Microsoft closing the gap in the hypervisor market, there are certain features that are needed before many smaller shops will just put their hands up and say, "We are with you, Microsoft." Can't blame them, as having a solution based on a vendor you already know vs. one you may not, even one as big as VMware, is always a case for acceptance. It doesn't need to be better, just good enough.

One thing to note: while everyone talks about how Hyper-V is free, VMware also offers a free edition of its hypervisor, which people tend to forget. Cost aside, in this comparison of hypervisors the VMware hypervisor is still more scalable while providing a smaller footprint.

One distinction that the article has clarified is that Hyper-V really is a type 1 hypervisor. In type 1, the "virtualization software sits directly atop the hardware, managing access to said hardware". In type 2, the "hypervisor software simply operates like any other host-based program." As Hyper-V is a role of Server 2008 and not stand-alone software that is installed like Virtual PC, it is architecturally a type 1.

Another thing that Scott hits upon is that the HCL for Windows is much larger than the HCL for VMware. I noticed that in other materials this isn't emphasized and gets glossed over. This article so far seems very fair to both VMware's and Windows's hypervisors. So far memory management is still the area where VMware shines.
With better oversubscription/overcommit, transparent page sharing (dedupe of RAM), ballooning, and memory compression, VMware wins in the memory category.

One area where VMware is behind MS is in linked clones, called Differencing Disks on the Windows side. Hyper-V seems to natively support this model, whereas ESXi/vSphere doesn't natively support it. Without a method of dedupe on the backend, such as NetApp's, there can be a lot of storage dedicated to essentially identical files.

Another category where VMware is doing well is workload migration, with vMotion/Storage vMotion, but Hyper-V is no slouch either with its Live Migration tool. However, its Quick Storage Migration can place a VM in a saved state "for up to a minute", which on a production system may not be acceptable, whereas a Storage vMotion can be done whenever you want.

In conclusion, with the release of Hyper-V 3 the gap may close even more. But at the moment the features that VMware has over Hyper-V still mark it as the winner. The main flaws that MS has to get past are the memory management features as well as workload migration and availability.



white paper review: VMware vSphere Vs. Microsoft Hyper-V: A Technical Analysis

Posted on 10:11 AM by Unknown

Here is a review of the paper I did while doing some cloud credits.
It's worth reading the original, as the fundamentals are always strong even if the technology might be a little dated here.


VMware vSphere Vs. Microsoft Hyper-V:
http://www.cptech.com/emailimages/cloud-formations/vmware_wp.pdf

The article is a little dated, in that it focuses on vSphere 4 vs. Microsoft Hyper-V R2. Given it's 2 years old, it's a bit of a time machine, but it's good to know where the groundwork is in the hypervisor battle, as these are both still used in DCs today. Also, the technical jargon is definitely relevant even if the described software may not necessarily be.

"The host operating system runs the software that implements virtualization. Generically this is
known as the virtual machine monitor (VMM) aka hypervisor." Because the host is interpreting io for the guest rather than allowing the guest to do so this is "type 2 virtualization" . This is in contrast to type 1 where the VMM (virtual amchine monitor) "provides device drivers that the guest operating systems use to directly access the underlying hardware" As noted even a type 1 hypervisor is an os, but not to the same degree a full blown OS would be, as the capabilities are severly limited in that regard . ". It tends not to provide user environments, the execution of general-purpose programs, or a user interface" One note is that even vmware though assocaited with a type 1 hypervisor does offer a type 2 in regards to vmware workstation/fushion/player as well as the now unsupported vmware server.

One important thing the article notes, which most people forget to consider, is that the role of a hypervisor isn't only to provide resources such as memory and CPU, but also to LIMIT and contain VMs from utilizing excessive amounts. With a platform that is shared, it's important that VMs on the same host play along well.

WHY VIRTUALIZE? Why not? It is an interesting argument to go over now that everyone, or at least a large majority of folks, leans more towards virtualization. "The obvious and common driver of virtualization is consolidation of systems with low utilization onto fewer systems with increased utilization". What's spot on in the article is the fact that most apps cannot scale past a certain number of cores, while the density of a CPU is increasing in terms of cores. As rewriting the architecture of software to take advantage of this newfound panacea of computing is unlikely any time soon, the ability to allocate compute cycles to different apps, or in this case VMs, is a blessing. While this paper is on VMware vs. MS, there exist other solutions such as Xen and KVM, which are mentioned but not discussed.

One thing that sticks out is this: "Hyper-V R1 has a “Quick Motion” feature that allows a VM to be moved
between cluster hosts, but it lacks the vMotion ability to do the move “instantly” (in less than a second)."
I don't know about you, but I don't recall a vMotion ever taking less than a second, unless they are talking about the slight blip that takes place when the actual move occurs. The staging of a vMotion can still take a while depending on the memory size.

I think there is a place where the Hyper-V system is showing promise: the unlimited guests that can be hosted by the 2008 Datacenter edition, see "Virtual Image Use Rights". For a large system that might be able to show some savings, at the expense of other capabilities. Though is the sacrifice worth it in terms of the manageability and redundant availability offered by VMware? I don't know if it would be, since the savings could be wiped out by just one outage. One example is "merging snapshots only is possible when a virtual machine is halted." That is a big negative, given that in VMware snaps can be made and deleted on a whim without ever needing to touch the active VM unless we need to revert.

An important topic one must note is virtual server sprawl. With the ease of creation, VMs are made without thought, but there needs to be a process in place for deletions or removals if there aren't chargebacks to business units. This applies to both MS and VMware.

Conclusion: the paper, though out of date on certain aspects of the ESXi vs. Hyper-V debate, touches on some good topics and is a great review of terminology. A lot of the predictions it had, such as the push to cloud-based solutions already being on the horizon, along with physical servers becoming the exception rather than the rule, are spot on. It's a good overview of virtualization and is still an excellent article.

Friday, June 7, 2013

vcloud : thoughts on the vcloud service

Posted on 1:16 AM by Unknown
What VMware is offering
VMware seems to be offering a good service that should be able to leverage its existing end-user customer base. With EC2 and Azure already in the market, VMware coming in and deploying their own public solution was expected. What is interesting is their ability to supposedly seamlessly migrate between your own DC and their public DC. The best offer from their point of view is that anything that works in your vSphere DC should work in the cloud; you don't even need to change your IP or MAC with the stretched migration, from what I have learned so far.

From the videos it seems like it's very similar to vSphere, and you should be able to add a few plugins and do whatever you like. I need to get some hands-on time, so I'll continue to explore further. An interesting thing is that it might make for a good personal test environment too, though I need to work out the pricing further. If you already have a test setup it may not be worth switching over just for a new one, but I think it's good to do just to get the hands-on experience. Hmm, let's see if I can bring that up with my team.

I think before anyone jumps deep into anything, some time should be dedicated to testing the solution. VMware isn't a slouch and we already let them handle the most critical pieces of our DC, so it isn't any stretch to let them host it too, but many orgs may not want to relinquish control of their VMs. If you don't hold it, there is always a chance you might lose it; that is true with any cloud provider. However, with the push to reduce capex already underway, this seems like an option that can get a green light.

tl;dr: looks good, need to get hands-on experience with it.



vcloud : thoughts on the vcloud and SDDC

Posted on 12:53 AM by Unknown
So I am partaking in VMware's cloud credibility site.
It's pretty cool; the vCloud offerings are an interesting take on taking on the other cloud vendors.
One thing I noticed being pushed was the software-defined data center (SDDC), and vCloud is an extension of that, in theory creating a seamless integration between your DC and the cloud DC.
I'm still getting a good handle on it and exploring more. It seems like a good way to get into the cloud while leveraging your current VMware infrastructure.

Check out this blog post by Mathew Lodge:
"Delivering on the Promise of Hybrid Cloud"


also check
Bill Fathers' blog, "Introducing vCloud Hybrid Service" 

VMUG NYC was a great show

Posted on 12:47 AM by Unknown
Learned about several new things and met some cool guys.
It was great to meet Aaron again, and the rest of the NYNJ VMUG guys.
I definitely have to check out PuppetLabs.

