Angels Technology


Wednesday, June 19, 2013

whitepaper summary: SAN Conceptual and Design Basics from VMware

Posted on 1:54 PM by Unknown

    These are my summary notes of the paper SAN Conceptual and Design Basics from VMware:
    http://www.vmware.com/pdf/esx_san_cfg_technote.pdf
    These are things that I wanted to write down for quick review.




  1. SAN (storage area network), a specialized high‐speed network that connects computer systems to high performance storage subsystems.
  2. SAN provides extra storage for consolidation, improves reliability, and helps with disaster recovery.
  3. SAN is a specialized high‐speed network of storage devices and switches connected to computer systems.
  4. A SAN presents shared pools of storage devices to multiple servers. Each server can access the storage as if it were directly attached to that server. A SAN supports centralized storage management.
  5. The physical components of a SAN can be grouped in a single rack or data center or connected over long distances. [think metro]
  6. In its simplest form, a SAN consists of one or more servers (1) attached to a storage array (2) using one or more SAN switches.

  7. Fabric (4) — The SAN fabric is the actual network portion of the SAN. When one or more SAN switches are connected, a fabric is created. The FC protocol is used to
    communicate over the entire network. A SAN can consist of multiple interconnected fabrics. Even a simple SAN often consists of two fabrics for redundancy.
    HBA :: the NIC-equivalent on the server side
    Storage Processor (SP) :: the NIC-equivalent on the storage array side

    How a SAN Works
  8. A host that wants to access storage sends a block-based access request.
  9. The SCSI commands are encapsulated into FC frames.
  10. The HBA transmits the request to the SAN.
  11. The SAN switches receive the request and send it to the storage processor, which sends it on to the storage device.
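The four steps above can be sketched as a tiny pass-through pipeline. This is purely illustrative (not VMware code); every name here is invented for the example.

```python
def encapsulate(scsi_command):
    """Step 2: wrap a block-based SCSI command in an FC 'frame' (here, a dict)."""
    return {"protocol": "FC", "payload": scsi_command}

def san_request(scsi_command):
    """Steps 1-4: host -> HBA -> SAN switch -> storage processor -> device."""
    frame = encapsulate(scsi_command)  # the HBA encapsulates SCSI into FC
    # the switches forward the frame toward the SP, which hands it to the device
    frame["route"] = ["HBA", "SAN switch", "storage processor", "storage device"]
    return frame

req = san_request({"op": "READ", "lba": 2048, "blocks": 8})
print(req["protocol"], "->", " -> ".join(req["route"]))
```

The point of the sketch is the layering: the host only ever issues block-based SCSI commands; FC is just the transport wrapped around them.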

  12. Host Components
    HBAs and HBA drivers

    Fabric Components
  13. SAN Switches
  14. Data Routers :: intelligent bridges between SCSI devices and FC devices in the SAN. Servers in the SAN can access SCSI disk or tape devices through the data routers in the fabric layer.
  15. Cables
  16. Communications Protocol :: Fabric components communicate using the FC communications protocol.

  17. Storage Components
  18. storage arrays.
  19. Storage processors (SPs) are the front end of the storage array.
  20. SPs communicate with the disk array (which includes all the disks in the storage array) and provide the RAID/LUN functionality.
  21. Data is stored on disk arrays or tape devices (or both).
  22. Disk arrays are groups of multiple disk devices
  23. Storage arrays rarely provide hosts direct access to individual drives.
  24. RAID algorithms :: commonly known as RAID levels. Using specialized algorithms, several drives are grouped to provide common pooled storage.

  25. A RAID group is equivalent to a single LUN.
    LUN: A LUN is a single unit of storage.
    Depending on the host system environment, a LUN is also known as a volume or a logical drive.

    In advanced storage arrays:
  26. RAID groups can have one or more LUNs.
  27. The ability to create more than one LUN from a single RAID group provides fine granularity to the storage creation process.
  28. Most storage arrays provide additional data protection and replication features such as snapshots, internal copies, and remote mirroring.
  29. A snapshot is a point-in-time copy of a LUN.
  30. Internal copies allow data movement from one LUN to another for an additional copy for testing.
  31. Remote mirroring provides constant synchronization between LUNs on one storage array and a second, independent (usually remote) storage array for disaster recovery.
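The "one RAID group, many LUNs" idea in items 26-27 can be sketched as carving a RAID group's usable capacity into several LUNs. The function, sizes, and LUN names below are all invented for illustration.

```python
def carve_luns(raid_group_gb, lun_sizes_gb):
    """Split a RAID group's capacity into LUNs, refusing to oversubscribe it."""
    if sum(lun_sizes_gb) > raid_group_gb:
        raise ValueError("requested LUNs exceed RAID group capacity")
    # each LUN is a single unit of storage cut out of the shared RAID group
    return [(f"LUN{n}", size) for n, size in enumerate(lun_sizes_gb)]

# A 1200 GB RAID group split at fine granularity into three LUNs:
luns = carve_luns(1200, [500, 400, 300])
print(luns)  # [('LUN0', 500), ('LUN1', 400), ('LUN2', 300)]
```

A one-LUN-per-RAID-group array is just the degenerate case: `carve_luns(1200, [1200])`.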

  32. SAN Ports and Port Naming
  33. A port is the connection from a device into the SAN. Each node in the SAN — each host, storage device, and fabric component (router or switch) — has one or more ports.
  34. WWPN :: World Wide Port Name. A globally unique identifier for a port (aka WWN).
  35. Port_ID (or port address) :: Within the SAN, each port has a unique port ID that serves as the FC address for the port. This enables routing of data through the SAN to that port. The FC switches assign the port ID when the device logs into the fabric. The port ID is valid only while the device is logged on.
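The WWPN/Port_ID distinction in items 34-35 is easy to lose, so here is a sketch of it: the WWPN is a fixed, globally unique name that travels with the port, while the fabric hands out a Port_ID at login that is only valid while the device stays logged in. The class and the ID scheme are invented for illustration.

```python
class Fabric:
    def __init__(self):
        self._next_id = 0x010000   # illustrative address pool, not a real FC scheme
        self.logged_in = {}        # WWPN -> Port_ID, valid only while logged in

    def login(self, wwpn):
        """On fabric login, the switch assigns this port its FC address."""
        self.logged_in[wwpn] = self._next_id
        self._next_id += 1
        return self.logged_in[wwpn]

    def logout(self, wwpn):
        del self.logged_in[wwpn]   # the Port_ID stops being valid here

fabric = Fabric()
wwpn = "21:00:00:e0:8b:05:05:04"   # made-up WWPN; it never changes
port_id = fabric.login(wwpn)       # fabric-assigned, session-scoped address
fabric.logout(wwpn)                # same WWPN would get a fresh Port_ID next login
```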

  36. An FC path describes a route:
  37.  From a specific HBA port in the host,
  38.  Through the switches in the fabric, and
  39.  Into a specific storage port on the storage array.
  40. Multipathing :: having more than one path from a host to a LUN.
  41. By default, VMware ESX Server systems use only one path from the host to a given LUN at any time. [this might be outdated]
  42. Path failover :: the process of detecting a failed path and switching to another.


  43. An active/active disk array allows access to the LUNs simultaneously through all the storage processors that are available, without significant performance degradation. All the paths are active at all times (unless a path fails).
  44. In an active/passive disk array, one SP is actively servicing a given LUN. The other SP acts as backup for the LUN and may be actively servicing other LUN I/O. I/O can be sent only to an active processor.
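Items 40-44 together describe path failover: I/O uses one path to the LUN, and when that path fails, it switches to a surviving path (on an active/passive array, that means the other SP takes over the LUN). A minimal sketch, with invented path names:

```python
class MultipathLUN:
    def __init__(self, paths):
        self.paths = list(paths)   # preferred path first, e.g. via SP-A then SP-B
        self.failed = set()

    def active_path(self):
        """Return the first surviving path; failover is just 'skip failed ones'."""
        for path in self.paths:
            if path not in self.failed:
                return path
        raise IOError("all paths to LUN are down")

    def fail_path(self, path):
        self.failed.add(path)      # detection of the failure triggers failover

lun = MultipathLUN(["hba0->SP-A", "hba1->SP-B"])
lun.fail_path("hba0->SP-A")        # path or SP-A fails
print(lun.active_path())           # I/O now flows via hba1->SP-B
```

This mirrors the note in item 41: only one path carries I/O at a time, and the redundant path exists purely for failover.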

  45. Zoning provides access control in the SAN topology.
  46. It defines which HBAs can connect to which SPs.
  47. You can put multiple ports to the same SP in different zones to reduce the number of presented paths.
  48. Devices outside a zone are not visible to the devices inside the zone. SAN traffic within each zone is isolated from the other zones.
  49. You can use zoning in several ways.
  50. Zoning for security and isolation
  51. Zoning for shared services :: allows common server access, for example for backups. A backup server with tape services requires SAN-wide access to host servers individually for backup and recovery. These backup servers need to be able to access the servers they back up. A SAN zone might be defined for the backup server to access a particular host. The zone is then redefined for access to another host when the backup server is ready to perform backup or recovery processes on that host.
  52. Multiple storage arrays — Zones are also useful when there are multiple storage arrays. Through the use of separate zones, each storage array is managed separately from the others, with no concern for access conflicts between servers.
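The zoning rules above reduce to one visibility test: two ports can see each other only if they share a zone. A sketch with made-up zone and member names:

```python
# Illustrative zone table: each zone is a set of member ports (WWPN aliases).
zones = {
    "zone_esx1":   {"esx1_hba0", "array_SPA_p0"},     # host-to-array access
    "zone_backup": {"backup_hba0", "esx1_hba0"},      # shared-services zone (item 51)
}

def can_see(port_a, port_b):
    """Devices are mutually visible only if some zone contains both (item 48)."""
    return any(port_a in members and port_b in members
               for members in zones.values())

print(can_see("esx1_hba0", "array_SPA_p0"))    # True: same zone
print(can_see("backup_hba0", "array_SPA_p0"))  # False: no shared zone
```

Redefining `zone_backup` to contain a different host's HBA is exactly the "zone is then redefined" step from item 51.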


  53. LUN Masking :: used for permission management. LUN masking is also referred to as selective storage presentation or access control.
    It is performed at the SP or server level; a masked LUN is invisible when a target is scanned.
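A sketch of masking at the SP level: the array keeps a table mapping each initiator's WWPN to the LUNs it may see, and a target scan returns only those. The mask table and WWPNs are invented for illustration.

```python
# Illustrative mask table held by the storage processor: WWPN -> visible LUNs.
mask = {
    "21:00:00:e0:8b:05:05:04": {"LUN0", "LUN1"},
    "21:00:00:e0:8b:05:05:05": {"LUN1"},
}

def scan_target(initiator_wwpn, all_luns):
    """Return only the LUNs presented to this initiator; the rest stay invisible."""
    allowed = mask.get(initiator_wwpn, set())
    return sorted(lun for lun in all_luns if lun in allowed)

print(scan_target("21:00:00:e0:8b:05:05:05", ["LUN0", "LUN1", "LUN2"]))
# ['LUN1'] — LUN0 and LUN2 exist but are masked from this host
```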


    SAN Design Basics
    When designing a SAN for multiple applications and servers, you must balance the performance, reliability, and capacity attributes of the SAN.
    Defining Application Needs :: A SAN must consistently support fast response times for each application, even though the requirements made by applications vary over peak periods for both I/O per second and bandwidth (in megabytes per second).

    The first step in designing an optimal SAN is to define the storage requirements for each application in terms of:
     I/O performance (I/O per second)
  54.  Bandwidth (megabytes per second)
  55.  Capacity (number of LUNs and capacity of each LUN)
  56.  Redundancy level (RAID‐level)
  57.  Response times (average time per I/O)
  58.  Overall processing priority
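The requirements list above can be captured as plain per-application records and summed to size the SAN for its peak load. All the application names and numbers below are made up for illustration.

```python
# One record per application, fields matching the list above (items 54-57).
apps = [
    {"name": "db",   "iops": 4000, "mbps": 80, "luns": 4, "lun_gb": 200},
    {"name": "mail", "iops": 1500, "mbps": 30, "luns": 2, "lun_gb": 500},
]

def peak_totals(apps):
    """Aggregate the per-application requirements into SAN-wide peak figures."""
    return {
        "iops":        sum(a["iops"] for a in apps),
        "mbps":        sum(a["mbps"] for a in apps),
        "capacity_gb": sum(a["luns"] * a["lun_gb"] for a in apps),
    }

print(peak_totals(apps))
# {'iops': 5500, 'mbps': 110, 'capacity_gb': 1800}
```

Summing peaks is deliberately conservative; it matches the later note about basing the SAN design on peak-period activity rather than averages.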

  59. Storage array design
  60. Mapping the defined storage requirements to the resources of the storage array.
  61.  Each RAID group provides a specific level of I/O performance, capacity, and redundancy.
  62.  LUNs are assigned to RAID groups based on these requirements.
  63.  The storage arrays need to distribute the RAID groups across all internal channels and access paths. This results in load balancing.

  64. Base the SAN design on peak-period activity.

    Caching
  65. The cache can be saturated by sufficiently intense I/O, which reduces the cache's effectiveness.
  66. A read-ahead cache may be effective for sequential I/O, such as during certain types of backup activities, and for template repositories.
  67.  A read cache is often ineffective when applied to a VMFS-based LUN because multiple virtual machines are accessed concurrently.
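Items 66-67 can be demonstrated with a toy read-ahead cache: prefetching the next few blocks after each miss pays off when reads really are sequential, and buys nothing when several VMs interleave reads to far-apart regions of the same LUN. The model is a deliberate simplification, invented for illustration.

```python
def hit_rate(accesses, readahead=8):
    """Toy read-ahead cache: on a miss, prefetch the next `readahead` blocks."""
    cache, hits = set(), 0
    for block in accesses:
        if block in cache:
            hits += 1
        else:
            cache.update(range(block, block + readahead))  # prefetch on miss
    return hits / len(accesses)

sequential = list(range(64))                    # one backup-style stream
interleaved = [b for pair in zip(range(0, 64, 9), range(1000, 1064, 9))
               for b in pair]                   # two "VMs" far apart on the LUN

print(hit_rate(sequential))    # high: most reads were already prefetched
print(hit_rate(interleaved))   # low: the prefetched blocks are never read
```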


  68. HA
    Make sure that redundancy is built into the design at all levels. Build in additional switches, HBAs, and storage processors, creating, in effect, a redundant access path.
    Redundant I/O Paths :: paths from the server to the storage array must be redundant and dynamically switchable.
    Mirroring :: protection against LUN failure allows applications to survive storage access faults.
    Mirroring designates a second non-addressable LUN that captures all write operations to the primary LUN.
    Duplication of SAN Environment :: replication of the entire SAN environment.
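The mirroring note above can be sketched as a LUN pair where every write lands on both the primary and a non-addressable mirror, so reads survive a primary failure. The class is invented for illustration; real arrays do this in firmware.

```python
class MirroredLUN:
    def __init__(self):
        self.primary = {}   # addressable LUN (lba -> data)
        self.mirror = {}    # non-addressable mirror LUN

    def write(self, lba, data):
        self.primary[lba] = data
        self.mirror[lba] = data   # every write is captured by the mirror too

    def read(self, lba):
        # if the primary LUN has failed, serve the read from the mirror
        source = self.primary if self.primary is not None else self.mirror
        return source[lba]

lun = MirroredLUN()
lun.write(100, b"payload")
lun.primary = None           # simulate primary LUN failure
print(lun.read(100))         # the application still sees its data
```

Remote mirroring is the same idea with the second LUN on an independent (usually remote) array, which is what turns it into a disaster-recovery feature.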


