LAN Switching: Translated Foreign-Language Literature (Chinese-English Reference for a Graduation Thesis, Edited Draft). Abstract:

One of the benefits of port buffering is the use of flexible buffer sizes. Catalyst 5000 Ethernet line card port buffer memory is flexible and can create frame buffers for any frame size, making the most of the available buffer memory. Catalyst 5000 Ethernet cards that use the SAINT ASIC contain 192 KB of buffer memory per port: 24 KB for receive (input) buffers and 168 KB for transmit (output) buffers. Using the 168 KB of transmit buffers, each port can create as many as 2500 64-byte buffers. With most of the buffers in use as an output queue, the Catalyst 5000 family has eliminated head-of-line blocking issues. (You learn more about head-of-line blocking later in this chapter, in the section Congestion and Head-of-Line Blocking.) In normal operation, the input queue is never used for more than one frame, because the switching bus runs at a high speed. Figure 2-5 illustrates port buffered memory.

Figure 2-5. Port Buffered Memory

Shared Memory

Some of the earliest Cisco switches use a shared memory design for port buffering. Switches using a shared memory architecture provide all ports access to that memory at the same time in the form of shared frame or packet buffers. All ingress frames are stored in a shared memory pool until the egress ports are ready to transmit. Switches dynamically allocate the shared memory in the form of buffers, accommodating ports with high amounts of ingress traffic without allocating unnecessary buffers for idle ports. The Catalyst 1200 series switch is an early example of a shared memory switch. The Catalyst 1200 supports both Ethernet and FDDI and has 4 MB of shared packet dynamic random-access memory (DRAM). Packets are handled first in, first out (FIFO). More recent examples of switches using shared memory architectures are the Catalyst 4000 and 4500 series switches. The Catalyst 4000 with a Supervisor I utilizes 8 MB of static RAM (SRAM) as dynamic frame buffers.
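The flexible-buffer idea above can be sketched as a simple model: a fixed per-port transmit pool is carved into buffers sized to each frame, so small frames do not waste fixed-size slots. This is an illustrative toy, not Cisco's actual ASIC logic; the class and method names are invented for the sketch.

```python
# Toy model (not Cisco firmware) of flexible buffer carving on one port:
# a fixed transmit pool is divided into buffers exactly as large as each
# buffered frame, maximizing how many frames fit.

class PortBufferPool:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.buffers = []          # one entry (its size) per buffered frame

    def buffer_frame(self, frame_size):
        """Allocate a buffer exactly as large as the frame, if room remains."""
        if self.used + frame_size > self.capacity:
            return False           # pool exhausted: frame must wait or drop
        self.buffers.append(frame_size)
        self.used += frame_size
        return True

# 168 KB transmit pool, as on a SAINT-based Catalyst 5000 port
pool = PortBufferPool(168 * 1024)
count = 0
while pool.buffer_frame(64):       # fill with minimum-size 64-byte frames
    count += 1
print(count)                       # 2688 by raw division; the text's figure
                                   # of ~2500 presumably reflects per-buffer
                                   # bookkeeping overhead in real hardware
```

With 1518-byte maximum-size frames the same pool holds only about 113 buffers, which is why sizing buffers to the frame rather than to the worst case makes the most of the memory.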
All frames are switched using a central processor or ASIC and are stored in packet buffers until switched. The Catalyst 4000 Supervisor I can create approximately 4000 shared packet buffers. The Catalyst 4500 Supervisor IV, for example, utilizes 16 MB of SRAM for packet buffers. Shared memory buffer sizes may vary depending on the platform, but are most often allocated in increments ranging from 64 to 256 bytes. Figure 2-6 illustrates how incoming frames are stored in 64-byte increments in shared memory until switched by the switching engine.

Figure 2-6. Shared Memory Architecture

4. Oversubscribing the Switch Fabric

Switch manufacturers use the term nonblocking to indicate that some or all of the switched ports have connections to the switch fabric equal to their line speed. For example, an 8-port Gigabit Ethernet module would require 8 Gbps of bandwidth into the switch fabric for the ports to be considered nonblocking. All but the highest-end switching platforms and configurations have the potential of oversubscribing access to the switching fabric. Depending on the application, oversubscribing ports may or may not be an issue. For example, a 48-port 10/100/1000 Gigabit Ethernet module with all ports running at 1 Gbps would require 48 Gbps of bandwidth into the switch fabric. If many or all ports were connected to high-speed file servers capable of generating consistent streams of traffic, this single module could outstrip the bandwidth of the entire switching fabric. If the module is connected entirely to end-user workstations with lower bandwidth requirements, a card that oversubscribes the switch fabric may not significantly impact performance. Cisco offers both nonblocking and blocking configurations on various platforms, depending on bandwidth requirements. Check the specifications of each platform and the available line cards to determine the aggregate bandwidth of the connection into the switch fabric.
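The oversubscription check described above is simple arithmetic: divide the aggregate port bandwidth by the bandwidth of the card's connection into the fabric. The sketch below uses the document's 8-port and 48-port examples; the 6 Gbps fabric connection in the second call is a hypothetical figure chosen for illustration, not a specification of any particular line card.

```python
# Back-of-the-envelope oversubscription check for a line card's
# connection into the switch fabric. A ratio above 1.0 means the card
# can offer more traffic than the fabric connection can carry (blocking).

def oversubscription_ratio(ports, port_speed_gbps, fabric_gbps):
    """Aggregate port bandwidth divided by fabric-connection bandwidth."""
    return (ports * port_speed_gbps) / fabric_gbps

# 8-port Gigabit Ethernet module with a full 8 Gbps into the fabric:
print(oversubscription_ratio(8, 1, 8))    # 1.0 -> nonblocking

# 48-port 10/100/1000 module behind a hypothetical 6 Gbps fabric link:
print(oversubscription_ratio(48, 1, 6))   # 8.0 -> 8:1 oversubscribed
```

Whether an 8:1 ratio matters depends on the traffic mix, as the text notes: sustained server traffic will saturate the fabric link, while bursty end-user traffic usually will not.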
5. Congestion and Head-of-Line Blocking

Head-of-line blocking occurs whenever traffic waiting to be transmitted prevents or blocks traffic destined elsewhere from being transmitted. It occurs most often when multiple high-speed data sources are sending to the same destination. In the earlier shared-bus example, the central arbiter used a round-robin service approach to move traffic from one line card to another. Ports on each line card request access to transmit via a local arbiter. In turn, each line card's local arbiter waits its turn for the central arbiter to grant access to the switching bus. Once access is granted to the transmitting line card, the central arbiter must wait for the receiving line card to fully receive the frames before servicing the next request in line. The situation is much like needing to make a simple deposit at a bank with one teller and many lines, while the person currently being helped is conducting a lengthy transaction: everyone queued behind that customer waits, even though their own requests could be served quickly.
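The blocking behavior described above can be sketched with a single FIFO input queue: the frame at the head waits for a busy output port, and every frame behind it waits too, even when their own output ports are idle. This is a minimal assumed model for illustration, not the design of any specific Catalyst platform, and the function name is invented for the sketch.

```python
# Minimal sketch of head-of-line blocking: one FIFO per input port.
# Service stops at the first frame whose destination output is busy,
# even if frames behind it are destined for idle outputs.

from collections import deque

def drain(queue, busy_outputs):
    """Serve frames in FIFO order until the head frame's output is busy."""
    sent = []
    while queue and queue[0] not in busy_outputs:
        sent.append(queue.popleft())
    return sent

q = deque(["port1", "port2", "port3"])     # destination of each queued frame
sent = drain(q, busy_outputs={"port1"})
print(sent)                                # [] -- nothing is sent: the head
print(list(q))                             # frame for busy port1 blocks the
                                           # frames for idle port2 and port3
```

Per-destination output queues, as in the Catalyst 5000's large transmit buffers described earlier, avoid this: a busy destination then delays only its own queue.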