VMware Support Day: Wisconsin VMware User's Group (content summary)
Asymmetrical vs. Symmetrical Active/Active Arrays

Asymmetrical Active/Active:
- There is still a concept of a preferred controller, so the preferred path on every ESX host needs to be to a target port on that controller.
- If an IO is issued to the non-owning controller, performance will be impacted.

Symmetrical Active/Active:
- IO can be issued to any target port and be processed without a performance penalty, so all ESX hosts can be balanced across all target ports.
- (A small sketch of the resulting path choices appears after the esxtop examples below.)

Configuration "Gotchas" (continued)

Multi-initiator zoning:
- In smaller fabric environments it is possible to get away with multi-initiator zoning, but to ensure there are no outages caused by a host rebooting, VMware recommends Single Initiator Multi Target (SIMT) zoning at a minimum.
- Single Initiator Single Target (SIST) zoning is the strictest level and is fully supported as well.

Performance vs. Capacity

Performance vs. capacity comes into play at two main levels:
- Physical drive size: hard disk performance does not scale with drive size; in most cases, the larger the drive, the lower the performance.
- LUN size: larger LUNs hold more VMs, which can lead to contention on that particular LUN. LUN size is often related to physical drive size, which can compound performance problems.

Performance Monitoring

- Historical performance tracking: Update 4 of VirtualCenter adds advanced performance charts; if you are familiar with ESX, they are similar to vmkusage.
- Detailed real-time tracking: use esxtop from the command line.

esxtop

- DAVG = raw response time from the device
- KAVG = amount of time spent in the VMkernel, i.e. virtualization overhead
- GAVG = response time as perceived by the virtual machine
- DAVG + KAVG = GAVG

esxtop (continued)

- What are correct values for these response times? As with all things revolving around performance, it is subjective; obviously, the lower these numbers are, the better.
- ESX will continue to function with nearly any response time; how well it functions is another issue.
- Any command that is not acknowledged by the SAN within 5000 ms (5 seconds) will be aborted. This is where perceived disk performance takes a sharp dive.
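The DAVG/KAVG/GAVG relationship can be watched over time using esxtop's batch mode (esxtop -b writes CSV samples). Below is a minimal sketch that flags high latencies in such a capture. The counter-name substrings, the file name, and the 25 ms threshold are assumptions for illustration (the slides stress that acceptable latency is subjective); verify the column headers against your own capture.

```python
import csv

ALERT_MS = 25.0  # illustrative threshold only; "correct" latency is subjective

# Substrings of the per-adapter latency counters expected in esxtop batch
# output (assumed names; check the header row of your own esxtop -b CSV).
COUNTERS = {
    "DAVG": "Average Driver MilliSec/Command",
    "KAVG": "Average Kernel MilliSec/Command",
    "GAVG": "Average Guest MilliSec/Command",
}

def scan(csv_path):
    with open(csv_path, newline="") as fh:
        reader = csv.reader(fh)
        header = next(reader)
        # Map each latency counter to every column whose header mentions it.
        cols = {k: [i for i, h in enumerate(header) if s in h]
                for k, s in COUNTERS.items()}
        for row in reader:  # one row per sample interval
            for name, idxs in cols.items():
                for i in idxs:
                    try:
                        val = float(row[i])
                    except (ValueError, IndexError):
                        continue
                    if val > ALERT_MS:
                        print(f"{name} {val:.1f} ms on '{header[i]}'")

if __name__ == "__main__":
    # e.g. captured with: esxtop -b -d 2 -n 30 > esxtop-batch.csv
    scan("esxtop-batch.csv")
```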
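Returning to the Active/Active discussion at the top of this section (as flagged there), the sketch below models why the distinction matters for path selection: an asymmetrical array leaves only the owning controller's ports as fast paths, while a symmetrical array lets hosts rotate across every target port. The two-controller layout and all names are illustrative assumptions, not an array or ESX API.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class TargetPort:
    name: str
    controller: str

# Hypothetical array: two controllers, two target ports each.
PORTS = [TargetPort("tp0", "ctrlA"), TargetPort("tp1", "ctrlA"),
         TargetPort("tp2", "ctrlB"), TargetPort("tp3", "ctrlB")]

def asymmetrical_paths(owning_controller: str):
    # Asymmetrical A/A: only ports on the LUN's preferred (owning)
    # controller are fast paths; IO elsewhere is penalized.
    return [p for p in PORTS if p.controller == owning_controller]

def symmetrical_paths():
    # Symmetrical A/A: any port services IO at full speed, so hosts
    # can simply be balanced across all target ports.
    return cycle(PORTS)

print([p.name for p in asymmetrical_paths("ctrlA")])   # ['tp0', 'tp1']
rr = symmetrical_paths()
print([next(rr).name for _ in range(5)])               # wraps across all ports
```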
Networking Best Practices

Virtual Networking Introduced: Physical to Virtual
(Diagram: virtual switches on the virtual side mirroring physical switches on the physical side.)
- Conventional access/distribution/core design; design with redundancy for enhanced availability.
- Under the covers, virtual networks work the same as physical ones; the access layer is implemented as virtual switches.
- vSwitches allow additional flexibility: instant provisioning and increased control.

Server Connection to Physical Network
- Use NIC teaming to leverage multiple physical links for better use of bandwidth and physical NICs, and for enhanced availability via redundancy.
(Diagram: a single access uplink A1 to the core. What is wrong with this design?)

Physical Meets Virtual: Redundancy / Load Balancing
- Aggregating multiple ESX hosts… and multiple connections for switch redundancy…
(Diagram: access uplinks A1 and A2 to the core, with NIC teaming.)

ESX Virtual Networking Capabilities
- L2 Ethernet switching (managed by VirtualCenter).
- VLAN segmentation: partition traffic without physical network and NIC restrictions.
- Rate limiting: restrict traffic generated by a VM.
- Server NIC port aggregation (VMware NIC teaming): load balancing for better use of the physical network, redundancy for enhanced availability.
- Layer 2 functionality only; no L3 routing.
- No MAC learning required: MAC addresses are known by registration rather than learned, so MAC spoofing can be controlled.

VLAN Tagging Options
(Diagram: three vnic/vSwitch/physical-switch stacks, one per tagging mode.)
- VST (Virtual Switch Tagging): VLAN tags applied in the vSwitch; port groups are assigned to a VLAN.
- VGT (Virtual Guest Tagging): VLAN tags applied in the guest; the port group is set to VLAN 4095.
- EST (External Switch Tagging): the external physical switch applies the VLAN tags.
- VST is the preferred and most common method. (A sketch of what VST does to a frame follows after the teaming sections below.)

Teaming: Redundancy and Load Balancing
- Teaming means bundling multiple physical NICs.
- "Originating Virtual Port ID" or "Source MAC" based teaming: the NIC is chosen based on the originating virtual switch port ID or the source MAC. Traffic from the same vNIC is sent via the same physical NIC (vmnic) until failover (i.e., the guest MAC address appears on the same physical switch port until failover). Considerations: simplicity (no link aggregation required, nor supported); load sharing rather than true load balancing.
- "IP Hash" teaming: the NIC is chosen based on a hash of the source and destination IP. Considerations: link aggregation (e.g. EtherChannel) is required on the physical switch; teaming is limited to a single switch except where multi-switch aggregation is explicitly supported (e.g. Cisco Catalyst 6500 VSS, Nortel Split MLT, and some stacked switches such as the Catalyst 3750 using cross-stack EtherChannel); balancing is better when a guest has a large number of IP peers.
- Recommendation: choose Originating Virtual Port ID based teaming for simplicity and multi-switch redundancy (the default today).

vSwitch Originating Virtual Port ID Based Teaming
- Default mode; distributes load on a per-vNIC basis.
- Allows teaming across multiple physical switches; the physical switches are not aware or involved.
(Diagram: virtual NICs on VM ports, uplink ports to teamed physical NICs.)

vSwitch MAC Based Teaming
- Distributes load on a mostly per-vNIC basis.
- Allows teaming across multiple physical switches; the physical switches are not aware or involved.
(Diagram: virtual NICs on VM ports, uplink ports to teamed physical NICs.)

vSwitch IP Hash Based Teaming
- Distributes load on a per SRC-IP/DST-IP basis (hash).
- Requires PortChannel/EtherChannel on the physical switches.
- Single-switch adjacency only unless Multi-chassis EtherChannel is supported (e.g. VSS).
(Diagram: sources SRC IP "A", "B", "C" hashed across teamed uplinks PM0/PM1/PM2 toward DST IP "D", "E", "F".)
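To make the difference between the teaming policies concrete, here is a minimal sketch of how an uplink could be chosen under each one. The modulo-over-uplinks scheme mirrors the behavior described above (a given vNIC/port always maps to the same vmnic until failover, while IP hash spreads traffic per SRC/DST pair); the exact hash VMware uses is not given in these slides, so the arithmetic is illustrative only.

```python
import ipaddress

UPLINKS = ["vmnic0", "vmnic1", "vmnic2"]  # teamed physical NICs

def by_port_id(virtual_port_id: int) -> str:
    # Originating Virtual Port ID: stable per vNIC, so a guest's MAC
    # stays on one physical switch port until failover.
    return UPLINKS[virtual_port_id % len(UPLINKS)]

def by_ip_hash(src: str, dst: str) -> str:
    # IP Hash: uplink chosen per SRC/DST IP pair; requires EtherChannel
    # upstream. XORing the low address bytes is an illustrative hash,
    # not necessarily VMware's exact algorithm.
    s = int(ipaddress.ip_address(src)) & 0xFF
    d = int(ipaddress.ip_address(dst)) & 0xFF
    return UPLINKS[(s ^ d) % len(UPLINKS)]

# A VM on virtual port 7 always uses the same uplink:
print(by_port_id(7))                      # -> vmnic1, every time
# The same guest talking to three peers spreads across uplinks:
for dst in ("10.0.0.4", "10.0.0.5", "10.0.0.6"):
    print(by_ip_hash("10.0.0.1", dst))
```

This also shows why IP hash only pays off when a guest has many IP peers: with a single peer, every frame hashes to the same uplink anyway.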
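As promised in the VLAN Tagging Options section above, here is a small sketch of what VST means on the wire: the vSwitch inserts an 802.1Q tag, carrying the port group's VLAN ID, between the source MAC and the EtherType, so the guest never sees or sets a tag. The frame layout follows IEEE 802.1Q; the port-group VLAN value 105 and the helper name are arbitrary examples, not VMware code.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks an 802.1Q tag

def vst_tag(frame: bytes, port_group_vlan: int) -> bytes:
    """Insert an 802.1Q tag after the dst/src MACs (bytes 0-11),
    the way a vSwitch does for a VST port group (illustrative)."""
    tci = port_group_vlan & 0x0FFF            # priority/CFI zero, 12-bit VLAN ID
    tag = struct.pack("!HH", TPID_8021Q, tci)
    return frame[:12] + tag + frame[12:]

# Untagged frame from the guest: dst MAC, src MAC, EtherType IPv4, payload.
untagged = (bytes.fromhex("ffffffffffff") + bytes.fromhex("005056000001")
            + struct.pack("!H", 0x0800) + b"payload")
tagged = vst_tag(untagged, port_group_vlan=105)
assert tagged[12:14] == b"\x81\x00"           # tag now sits where EtherType was
```

Under VGT the guest would build `tagged` itself (hence the 4095 "pass everything" port group), and under EST the frame would leave the host untagged and the physical switch would do the insertion.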