High Speed Networking at Cray Research
Andy Nicholson, Joe Golio, David A. Borman, Jeff Young, Wayne Roiger
Cray Research, Inc.
Networking and Communications Software Division
Cray Research Park, 655F Lone Oak Drive
Eagan, MN 55121
For many years, Ethernet has been the mainstay for TCP/IP and local area networking, and issues specific to wide area and long haul networks have not been adequately addressed. The advent of the FDDI and HIPPI standards, which are, respectively, one and two orders of magnitude faster than Ethernet, and of high speed cross country links, is causing what used to be experimental issues to become everyday problems. This paper covers some of these issues as they relate to the TCP/IP protocols, and the work that has been done at Cray Research in the development of the UNICOS† operating system to address them.

† The UNICOS operating system is derived from the AT&T UNIX System V operating system. UNICOS is also based in part on the Fourth Berkeley Software Distribution under license from The Regents of the University of California. Cray and UNICOS are registered trademarks, and CRAY-2, CRAY X-MP, CRAY X-MP EA, CRAY Y-MP and HSX are trademarks, of Cray Research, Inc. UNIX is a trademark of AT&T. HYPERchannel is a trademark of Network Systems Corporation. Sun is a trademark of Sun Microsystems, Inc. IBM is a trademark of International Business Machines Corporation.
Overview

This paper covers three areas of work at Cray Research in the area of high speed networking. We have been working with many vendors to support the Fiber Distributed Data Interface (FDDI), and the first section describes some of our experiences. This leads to the second section, on using a switched T3 circuit to connect geographically separated FDDI rings. The third section then describes extensions to TCP that are necessary to support these types of high-speed networking environments at full capacity.
FDDI

Cray Research, Inc. is committed to conforming to industry standards. One key emerging standard is the Fiber Distributed Data Interface (FDDI), which is being developed by the ANSI X3T9.5 group. The FDDI standard is a 100 Mb/s token ring network intended to be used as a medium speed backbone for slower LAN technology such as Ethernet. Equally important, FDDI will also be used to interconnect computer systems and associated peripheral equipment. One particularly exciting use for FDDI is linking high speed workstations to Cray Research computers. Since Cray Research computers do not have direct attachments to the FDDI
media, a Network Systems Corp. HYPERchannel-DX DX4130 router or a Sun workstation fitted with an FEI-3 can be used to interconnect the Cray Research computer system to an FDDI ring. Both forms of attachment allow the Cray Research computer to communicate with any node reachable from the FDDI ring, whether directly attached or indirectly attached via other gateways. The DX4130 acts as an IP router for the Cray Research computer: the Cray Research computer builds IP datagrams in HYPERchannel format and sends them to the router across the low speed channel. The router de-encapsulates the IP datagram from the HYPERchannel message and encapsulates it into an FDDI frame for transmission on the medium. The FEI-3/Sun gateway works in much the same way, but its performance is limited to about 25-30% of the NSC connection's capability.

Cray Research has been, and will continue to be, on the leading edge of FDDI technology, by affiliating itself with many workstation and internetworking equipment vendors developing FDDI interfaces. To date, we have evaluated FDDI interfaces from Network Systems Corp., Cisco Systems, Sun Microsystems, HP/Apollo, Evans & Sutherland, Digital Equipment Corporation, Network Peripherals Inc. and Silicon Graphics. We have seen TCP/IP memory to memory transfer rates between Cray Research computers and the workstations averaging between 12 and 16 Mb/s. In house testing and customer beta tests have shown that two Cray Research computers connected across FDDI can achieve between 24 and 37 Mb/s. Work is underway at Cray Research to improve this performance.

Because there is an IP router between the Cray Research computer and the rest of the FDDI network, the Cray Research computer appears to be on a different network than the stations directly on the FDDI ring. Because of this, implementations that don't support the 'subnets are local' feature can severely impact the performance of a TCP/IP connection to the Cray Research computer over FDDI; the poor performance comes from the fact that 576 byte packets are used over the connection. Cray Research's in house network uses Class 'B' subnets, and the Cray Research computer supports the idea of local subnets, which gets around the performance problem.
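To make the effect concrete, the following is a minimal sketch of the classic BSD-style MSS selection rule; the names and the class B test are illustrative, not the actual UNICOS source. A peer judged non-local gets the conservative 536 byte MSS, while the 'subnets are local' policy lets a host on another subnet of the same class B network negotiate FDDI-sized segments.

#include <stdint.h>
#include <stdio.h>

#define TCP_MSS_DEFAULT 536   /* 576-byte IP datagram minus 40 bytes of headers */

struct netaddr { uint32_t addr; uint32_t netmask; };

static int subnets_are_local = 1;   /* site policy switch */

/* "local" means reachable without crossing into a foreign network; with
 * subnets-are-local, any subnet of our own class B network also counts. */
int is_local(const struct netaddr *us, uint32_t peer)
{
    if (subnets_are_local)
        return (peer & 0xffff0000) == (us->addr & 0xffff0000);
    return (peer & us->netmask) == (us->addr & us->netmask);
}

int choose_mss(const struct netaddr *us, uint32_t peer, int if_mtu)
{
    if (is_local(us, peer))
        return if_mtu - 40;      /* fill FDDI-sized frames */
    return TCP_MSS_DEFAULT;      /* conservative 576-byte datagrams */
}

int main(void)
{
    struct netaddr us = { 0x80100101 /* 128.16.1.1 */, 0xffffff00 };
    /* peer 128.16.2.9 is on a different subnet of the same class B net */
    printf("MSS over FDDI: %d\n", choose_mss(&us, 0x80100209, 4352)); /* 4312 */
    return 0;
}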
During our testing and evaluation of the FDDI media, we have been able to experience many problems first hand. Most of these problems were due to the lack of maturity of the interfaces being tested. Early interfaces from different vendors had some interoperability problems because of the rapid changing of the ANSI specification, and differing interpretations of the same. The level of troubleshooting aids available to us in the early going was strictly dependent on what tools the vendors built into the products themselves. This made for some very interesting work, but it did show the necessity for an FDDI monitor of some type. Lately, however, the interfaces we have been receiving work very well and are very stable. Much work has gone into stabilizing the ANSI specification, as well as into the establishment of multi-vendor interoperability test labs.

Another area to be cautious of is bridging. There are two common methods for interconnecting heterogeneous network media types: bridging and routing. Neither method should be considered correct or incorrect without first considering the network as a whole. Depending on the constraints and/or requirements of the network, one method may be more effective (in both price and performance) than the other. Cray Research's intent is to support both methods of interconnection; however, there are some basic facts one should be aware of before deciding to use bridges or routers in a network topology. If these facts are not well understood up front, serious problems can occur when the network is actually implemented. For example, some nodes on the network may not be able to communicate with other nodes on the same network.

There are two types of bridging technologies being implemented for FDDI: transparent bridging and source route bridging. Source route bridges are mainly used on IBM Token Ring (802.5) networks. These bridges rely on information contained in the MAC header of a packet in order to route the packet across the proper bridges. A field in the header called the routing information field (RIF) holds this information. Because the RIF is filled in by the originating host, this method of bridging is not at all transparent to the hosts on the network. These bridges will probably not play a major role in any network to which a Cray Research computer is connected.
The second type of bridge is a transparent bridge. This is the more common type of bridge for most network topologies and will be the type found in most Cray Research computer environments where bridging is used. As the name implies, these bridges can be placed in a network and take an active role in its operation without any of the hosts on the network knowing that the bridges exist. Transparent bridges use one of two mechanisms. The first is encapsulation: consider two Ethernets attached to an FDDI ring, one through Bridge A and one through Bridge C, with station 123 on the first Ethernet and station 456 on the second. When station 123 sends a packet to station 456, Bridge A wraps the entire Ethernet frame inside an FDDI frame using a vendor-proprietary format, so all of the bridges on the ring must be from the same manufacturer. In this environment, no stations on either of the two Ethernets can communicate with any of the directly attached FDDI stations if they are not made by the same vendor. People should be aware of this problem when designing their FDDI networks.

The other mechanism for doing transparent bridging is called address translation. Using the same example as above, station 123 sends a packet to station 456. Bridge A receives the packet and knows that station 456 is not on the same network as station 123. Bridge A generates an FDDI frame with a destination MAC address of 456 and a source MAC address of 123 (the same MAC addresses that appeared in the Ethernet frame), and only the data portion of the Ethernet frame is copied into the data portion of the FDDI frame. The frame traverses the ring until Bridge C recognizes address 456 as one of the addresses on its local sub-network; it then generates an Ethernet packet to station 456. This method of bridging works between bridges from any vendor implementing Open Systems FDDI. Therefore, any station on any sub-network can communicate with any other station on any other sub-network, as well as with directly attached FDDI stations. Cray Research encourages its customers to use bridges which use the address translation method when bridges are to be part of a network over which a Cray Research computer must communicate.
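The sketch below illustrates what an address translation bridge does to a forwarded frame. The structure layouts are simplified for illustration; real FDDI frames also carry frame control, LLC/SNAP headers, and a trailing FCS.

#include <stdint.h>
#include <string.h>

struct ether_frame {
    uint8_t  dst[6], src[6];
    uint16_t type;
    uint8_t  data[1500];
    int      data_len;
};

struct fddi_frame {
    uint8_t dst[6], src[6];
    uint8_t data[4478];
    int     data_len;
};

/* The original source and destination MAC addresses are preserved
 * verbatim and only the payload is re-framed.  Because no vendor-private
 * encapsulation is involved, any conforming bridge on the far side can
 * perform the reverse translation. */
void translate_to_fddi(const struct ether_frame *in, struct fddi_frame *out)
{
    memcpy(out->dst, in->dst, 6);     /* station 456's own address */
    memcpy(out->src, in->src, 6);     /* station 123's own address */
    memcpy(out->data, in->data, in->data_len);
    out->data_len = in->data_len;
}

int main(void)
{
    struct ether_frame in = { {0,0,0,0,4,56}, {0,0,0,0,1,23}, 0x0800,
                              "hello", 6 };
    struct fddi_frame out;
    translate_to_fddi(&in, &out);
    return out.data_len == 6 ? 0 : 1;
}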
Circuit Switched Networks

Many researchers can benefit from access to a Cray Research computer; however, not all research sites can afford the cost of a local Cray Research computer. Cost-effective long distance access to a remote Cray Research computer would make supercomputer resources available to more users. A circuit switched network can provide a high bandwidth connection at low cost by allocating the network only when it is actually in use.

A combination of a low cost connection and a fast circuit switched network can be used effectively. For a visualization application, a researcher could communicate with the Cray Research computer over the low cost connection; when ready to run the displays from the application, the high speed circuit switched network can be used to transfer the graphics data. This scheme allows cost effective use of a high bandwidth (and high cost) resource. The resource is allocated when actually in use, rather than being dedicated and underutilized. This lets users running applications which benefit from high performance connections (such as visualization) have access to them without paying the high cost of a dedicated link. Try to imagine updating a Mandelbrot set at 56 kb/s!

There are at least two issues to address in using circuit switched networks as the network layer for internet protocols: there must be a method for the end to end protocol to activate the circuit switched network, and the user must be able to choose to use it. Support for this facility has been developed, and it suggests a framework for the support of any network connection which requires special handling when transport protocols establish a connection.
Switch Control
The switch is controlled through a dial-up line using a subset of the CCITT Q.295 protocol. However, control of the switch should be more tightly integrated into the internetwork; furthermore, the switch control protocol is too complicated. The requesting host should be able to activate the switch by sending a single message through the internet. A single response message then informs the requester of the success or failure of the request. This greatly simplifies the state information which must be stored in hosts requesting use of circuit switched network connections.
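The paper does not give the message layout, but a hypothetical encoding of this one-request, one-response exchange might look like the following; the point is that the requester needs to track only a single outstanding request per circuit.

#include <stdint.h>

/* Hypothetical layout, for illustration only; the actual Cray Research
 * message format is not specified here. */
enum sw_op     { SW_CONNECT = 1, SW_RELEASE = 2 };
enum sw_status { SW_OK = 0, SW_FAILED = 1, SW_BUSY = 2 };

struct sw_request {
    uint32_t op;          /* SW_CONNECT or SW_RELEASE */
    uint32_t circuit_id;  /* which switched line to operate on */
    uint32_t requester;   /* IP address of the requesting host */
};

struct sw_response {
    uint32_t status;      /* SW_OK on success */
    uint32_t circuit_id;  /* echoed from the request */
};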
Network Selection
When there are multiple connections providing differing levels of service to a single host, there needs to be a way to select one as the connection of choice. One possible method is to access different connections through different interfaces. Another approach is to associate different connections with different routes. The first method requires the host to have multiple interfaces (adding to hardware cost), while using the routing table requires only extensions to the routing software.
Route Selection
If the routing table is the point of selection for multiple network connections, then there must be a way to select one of many routes to a host. Since the different network connections provide different throughput rates, the IP Type of Service field seemed one natural way to perform this selection. An application can request a type of service for its network connection, and the route lookup can attempt to satisfy that request.

Another idea, called "route aliasing", was born of the fact that existing applications would also need to access the high speed connections without changing the binaries. Route aliasing would allow the creation of another route to a host, but under a name (internetwork address) different from the host's well known name. For example, a user wishing to access host "jumbo" with an application would type

% application jumbo

to use the normal connection, and

% application jumbo-fast

to access the higher speed connection. The routing code on the requesting host would recognize "jumbo-fast" as an alias for "jumbo" and send packets to jumbo's IP address, but would also know to arrange for the packets to travel over a faster (possibly circuit switched) connection. This requires using more IP addresses, but the cost is limited to the communicating peers: it is not necessary that everyone in the internet be able to resolve "jumbo-fast" to a different internet address than "jumbo". There are also variations on this scheme, such as allowing a host to respond to multiple internet addresses on the same network interface.

The question of how to select between connections also raises the issue of partitioning network services amongst users. A site may wish to create a "privileged" class which always gets the better of two services, and a "regular" class which does not. For example, the privileged class always uses a faster switched line, while regular users only use a cheaper dedicated line, regardless of type of service or other selection mechanisms. This appears to be a potentially valuable area of research.
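For the type of service approach, the application side is just the standard socket interface. Assuming a BSD-style stack with the TOS-aware route lookup described above, a program could mark its connection for the fast path as follows; IPTOS_THROUGHPUT is the standard BSD constant for the high throughput bit.

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>

/* Ask for the high throughput type of service so that a TOS-aware
 * route lookup can steer this connection onto the fast line. */
int request_fast_path(int sock)
{
    int tos = IPTOS_THROUGHPUT;    /* 0x08: high throughput */
    return setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof tos);
}

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (request_fast_path(s) < 0)
        perror("IP_TOS");
    /* ... connect() and transfer as usual; outgoing packets now carry
     * the throughput bit in their IP headers. */
    return 0;
}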
The Prototype

A hardware platform (Figure 2) was offered to implement a prototype circuit switched network. The hardware consisted of two NSC 703s (FDDI/T1/T3 bridges), a DSC DS3 T3 switch, a dedicated T1 (1.544 Mb/s) land line, a switched T3 (44.5 Mb/s) land line, and a Sun 4/370 workstation. We also used a Sun SPARCstation IPC as the switch controller.

We arranged to demonstrate the prototype at the Telecommunications Association show in San Diego in late September 1990. The CRAY Y-MP/8 computer was located in Eagan, MN, the DSC switch was located in Dallas, TX, and the remote workstation was on the tradeshow floor. The switch controller was also in Eagan, MN. The demonstration consisted of running two applications on the CRAY Y-MP/8 computer and displaying graphical images as output on the workstation.

For the demonstration there were two connections to the destination host, a dedicated T1 line and a circuit switched T3 line. Applications could select which connection to use by requesting the IP high throughput type of service. The routing code would select a route based on type of service, and IP would transmit the packets with the appropriate TOS bits. The NSC 703 routers were programmed to send packets with the high throughput bit set on the T3 line and all others on the T1 line.

The Sun IPC was set up as a switch controller which could be accessed through the internetwork. A special switch control daemon was written and run on the Sun IPC. When the switch controller received a message to activate the connection, it would attempt to activate the switch, speaking the switch's Q.295 protocol. After the switch activated, the switch controller would send a message to the requesting host that the connection was active and communications could commence. When the switch controller received a message to release the connection, it would deactivate the switch. Failure responses could be returned if it was not possible to perform the requested operation on the switch.

Two routing entries were made for the destination Sun workstation. The first was a normal entry with no special attributes. The second entry was marked as a switch control route and included the address of the switch controller. The second route was also marked with an IP type of service value (high throughput).
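A stripped-down model of such a switch control daemon is sketched below. The UDP transport, port number, message layout, and the q295_connect()/q295_release() helpers are all assumptions for illustration; the real daemon spoke the DSC switch's Q.295 dialect over a dial-up line.

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

struct sw_request  { unsigned op, circuit_id, requester; };
struct sw_response { unsigned status, circuit_id; };

/* Stand-ins for the code that actually speaks Q.295 to the switch. */
static int q295_connect(unsigned circuit) { (void)circuit; return 0; }
static int q295_release(unsigned circuit) { (void)circuit; return 0; }

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in me = { .sin_family = AF_INET,
                              .sin_port = htons(7401) };  /* arbitrary port */
    bind(s, (struct sockaddr *)&me, sizeof me);

    for (;;) {
        struct sw_request req;
        struct sockaddr_in peer;
        socklen_t plen = sizeof peer;
        if (recvfrom(s, &req, sizeof req, 0,
                     (struct sockaddr *)&peer, &plen) != (ssize_t)sizeof req)
            continue;                      /* ignore malformed datagrams */

        /* op 1 = connect, op 2 = release (see the message sketch above) */
        int rc = (req.op == 1) ? q295_connect(req.circuit_id)
                               : q295_release(req.circuit_id);

        /* the single response message: 0 = success, nonzero = failure */
        struct sw_response resp = { rc ? 1u : 0u, req.circuit_id };
        sendto(s, &resp, sizeof resp, 0, (struct sockaddr *)&peer, plen);
    }
}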
The IP type of service mechanism, however, will not be sufficient to select between the possibilities of 56 kb/s, T1, T3, and other future connections when they become available. A more robust method of selecting between network choices will be required.
An important aspect of activating a switched route is that there is a delay between the time the route is allocated and when the network is actually ready to transfer data. The prototype was simplified by allowing only TCP applications to gain the benefit of switched connection routes. When TCP attempts to send data, it checks to see if the route is switched and whether the connection is ready. If the connection is not ready, TCP waits for the connection to complete before attempting to send any data. This is simple to implement for TCP, since TCP is a connection-oriented protocol: data is buffered for transmission until transmission is possible.

UDP, however, is not connection-oriented. Most implementations do not buffer data, so if an error occurs when sending (such as a router dropping a packet because the connection is not yet active), the data is lost. Some buffering must be added, and the UDP code must be kept from sending the data until the connection is ready. Other changes would need to be made for circuit switched networks to work with UDP. An important issue is whether it is reasonable to turn the switch on and off for each UDP packet sent (probably not, or it would then be a packet switched network), or whether an application using UDP must be required to specify that it will send many datagrams to a particular host (to connect and disconnect).

One final change had to be made to the kernel in the destination Sun workstation. Since the routers selected between the T1 and T3 lines using the type of service field in the IP header, acks and such from the Sun workstation needed the type of service bits set. A simple change was made so that connections on the Sun would adopt the type of service of incoming packets. This way the originator of a connection determines the type of service in both directions. This seemed reasonable, since it was assumed that the originator was being billed for the cost of the connection.

It seems reasonable that this framework could be extended to other situations where special processing of the network media is required when transport protocols establish connections and/or transfer data.
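As a user-space model of the TCP gating described above (the names are illustrative, not UNICOS kernel source), the change amounts to a few lines in the output path:

#include <stdbool.h>
#include <stddef.h>

struct route {
    bool is_switched;    /* marked as a switch control route */
    bool circuit_up;     /* set once the controller reports success */
};

struct tcpcb {
    struct route *rt;    /* route selected for this connection */
    size_t queued;       /* bytes buffered and awaiting transmission */
};

/* Stand-in for handing finished packets to IP. */
static size_t ip_transmit(struct tcpcb *tp, size_t len)
{
    (void)tp;
    return len;
}

/* TCP already queues unsent data, so the only change needed is a gate:
 * while a switched circuit is still coming up, leave the data buffered;
 * tcp_output() is simply called again when the controller's reply
 * arrives and circuit_up becomes true. */
size_t tcp_output(struct tcpcb *tp)
{
    if (tp->rt->is_switched && !tp->rt->circuit_up)
        return 0;        /* circuit not ready: keep the data queued */
    size_t sent = ip_transmit(tp, tp->queued);
    tp->queued -= sent;
    return sent;
}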
Performance issues

An interesting problem appeared while setting up the demo. Keep in mind that the T3 line is land based, not a satellite link. Initial tests showed TCP performance over a cross-country link from Eagan to Dallas (CRAY Y-MP/8 computer to a Sun 4/370 SPARC) at 0.5 Mbps. This was disturbing, given a 44.5 Mbps link. Other tests showed that UDP could get 19.5 Mbps throughput at the receiver. So why the difference? To make a long story short, the round trip time on the link was 49 ms, and the Cray Research computer was filling the Sun workstation's TCP receive window (default of 4K bytes) very quickly, well before a window update could return. The Sun kernel was changed to allow a 48K byte receive window, and performance increased to almost 5 Mbps. Multiple streams yielded an aggregate bandwidth of over 12 Mbps. Calculations showed that a window of at least 119K bytes would be required to achieve the full 19.5 Mbps achieved with UDP. Clearly, the TCP expanded window option would have helped. It became even more desirable when the round trip time for the real demo connection in California turned out to be nearly 100 ms!
TCP and Large Delay-Bandwidth Products

As has already been noted, when running TCP over a long delay, high speed link, the full network bandwidth cannot be consumed by a single TCP connection. This is due to the 64K byte TCP window limit. To allow TCP to run faster, this limit must be expanded. In addition, there are also potential problems with the 32 bit TCP sequence space wrapping around.
Limitations of the 64K byte TCP window

To keep a high-speed network at full utilization, the sending TCP has to be able to send at least one roundtrip delay-bandwidth product before receiving any ACKs from the remote side of the connection [Jacobson88]. If you assume a 30 ms roundtrip delay for a cross country link, at Ethernet speeds (10 Mbits/second) you will need over 36K bytes of TCP window, well within the 64K byte limit imposed by the TCP specification [Postel81]. However, at DS3 speeds (45 Mbits/sec), the window required goes up to over 164K bytes; at FDDI (100 Mbits/sec) and HIPPI (800 Mbits/sec) speeds, that amount goes up to over 366K bytes and 2929K bytes, respectively. If the TCP window is left at 64K bytes, a single TCP connection will only be able to use cross country DS3, FDDI and HIPPI speed connections at 39%, 17%, and 2.2% utilization, respectively. In fact, with a 64K byte window and a 30 ms round trip delay, a single TCP connection will never run faster than about 17 Mbits/sec (64K bytes every 30 ms).
Turning the calculations around, for a speed-of-light point to point connection, a single TCP stream limited to a 64K byte window would be able to drive a DS3 line at full speed only at a distance of just over 1000 miles. A full speed FDDI connection would be limited to less than 490 miles, and a HIPPI connection could be no more than 60 miles in length. If routers and/or switches are inserted into the path, additional delay is introduced, and the maximum distance over which a single full speed TCP connection with a 64K byte window can be maintained decreases further.
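These figures are easy to reproduce. The short program below computes the window each link speed needs at a 30 ms round trip, and the farthest distance at which a 64K byte window can keep each line full, assuming signals propagate at the vacuum speed of light (about 186,000 miles per second), which matches the numbers above.

#include <stdio.h>

#define LIGHT_MILES_PER_SEC 186000.0
#define MAX_WIN             65535.0    /* unscaled TCP window limit */

int main(void)
{
    double rates[] = { 10e6, 45e6, 100e6, 800e6 };
    const char *names[] = { "Ethernet", "DS3", "FDDI", "HIPPI" };
    double rtt = 0.030;                /* 30 ms cross country round trip */

    for (int i = 0; i < 4; i++) {
        double win_needed = rates[i] * rtt / 8.0;        /* bytes */
        double max_rtt    = MAX_WIN * 8.0 / rates[i];    /* seconds */
        double max_miles  = (max_rtt / 2.0) * LIGHT_MILES_PER_SEC;
        printf("%-8s window %7.1fK  full-speed limit %5.0f miles\n",
               names[i], win_needed / 1024.0, max_miles);
    }
    /* e.g. DS3 needs ~164.8K of window at 30 ms, and a 64K window can
     * keep a DS3 line full only out to roughly 1080 miles. */
    return 0;
}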
The 64K byte maximum TCP window also causes problems if the path supports arbitrarily large packets and maximum size IP packets (65535 bytes) are being generated. In this situation, only one packet can be sent, and then the sender has to wait for an ACK. All the advantages of the TCP sliding window are lost, and the connection becomes a stop-and-wait protocol.
To solve the 64K byte TCP window limitation, the TCP window needs to be expanded. V. Jacobson and R. Braden proposed a new TCP option, the "WINDOW SCALE" option [Jacobson88], to expand the TCP window. The concept is simple: at the beginning of a TCP connection, the WINDOW SCALE option is sent in the SYN packet. The option contains a shift value to be applied to the WINDOW field of all future TCP packets. With a maximum value of 14, this allows the TCP window to be expanded to just over 1 gigabyte. This would allow a single TCP connection to the moon to run at over 3 gigabits/sec, and a TCP connection to Mars would be able to run anywhere from 3 to 16 Mbits/sec, depending on planetary alignment†. (Of course, there would have to be over a gigabyte of data to send, and over a gigabyte of buffering on the sender side, but these are not TCP protocol issues.)

† Calculations were made assuming that Earth and Mars have mean distances from the sun of 93 and 142 million miles respectively, and are therefore anywhere from 49 to 235 million miles apart. The moon was assumed to have a mean distance of about 239 thousand miles from the earth.
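The option itself is tiny. The sketch below follows the encoding in RFC 1072 (option kind 3, length 3, one shift byte, sent only on SYN segments); padding and negotiation details are omitted.

#include <stdint.h>

#define TCPOPT_WSCALE     3     /* option kind from RFC 1072 */
#define TCPOLEN_WSCALE    3
#define TCP_MAX_WINSHIFT  14    /* largest shift the option allows */

/* Append a WINDOW SCALE option to the options area of a SYN segment;
 * returns the number of bytes written.  (A real implementation also
 * pads the options to a 4-byte boundary.) */
int tcp_add_wscale(uint8_t *opt, uint8_t shift)
{
    opt[0] = TCPOPT_WSCALE;
    opt[1] = TCPOLEN_WSCALE;
    opt[2] = shift > TCP_MAX_WINSHIFT ? TCP_MAX_WINSHIFT : shift;
    return TCPOLEN_WSCALE;
}

/* Once both sides have offered the option on their SYNs, every later
 * segment's 16-bit window field is interpreted shifted: */
uint32_t tcp_real_window(uint16_t hdr_window, uint8_t rcv_shift)
{
    return (uint32_t)hdr_window << rcv_shift;   /* up to 65535 << 14 */
}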
TCP sequence wraparound problems
The issue of TCP sequence wraparound is one of TCP providing protection for itself from old packets floating around in the network. The TCP specification [Postel81] sets a 2 minute maximum segment lifetime. As long as the TCP sequence number does not wrap within the maximum segment lifetime, old out-of-sequence packets can be recognized and discarded. If the sequence space wraps in less than the maximum segment lifetime, there is the possibility that an old packet will arrive and be incorrectly accepted as valid. Calculations show that any connection running faster than 286 Mbits/sec can wrap the TCP sequence space in less than the 2 minute maximum segment lifetime. If a 2^30 byte TCP window is taken into account, any connection running faster than 215 Mbits/sec runs the danger of having old packets show up in the current window and be accepted as valid. With or without the TCP WINDOW SCALE option, there exists today the danger of old packets being recognized as valid over HIPPI connections, and at FDDI speeds the threat is a bit close for comfort.

To deal with this problem, V. Jacobson, R. Braden, and L. Zhang propose a TCP ECHO option [Jacobson88] that is to be used as a timestamp [Jacobson90a]. The basic idea is that if the sending TCP puts a timestamp on each outgoing packet, then the receiving TCP, knowing that the data in ECHO options represents a monotonically increasing sequence, can discard old TCP segments by comparing the received ECHO value with the ECHO value of the last TCP packet that was received in sequence. If it is older, the packet is discarded.
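The receive-side test reduces to a modular comparison. A minimal sketch follows; the names are illustrative, and a real TCP would advance ts_recent only for segments at the left edge of the window.

#include <stdint.h>
#include <stdbool.h>

/* ts_recent holds the timestamp carried by the last segment accepted
 * in sequence.  A newly arrived segment carrying an older timestamp
 * must be a leftover from an earlier trip through the sequence space
 * and is dropped.  The subtraction is done modulo 2^32, so the test
 * still works when the timestamp clock itself wraps. */
static uint32_t ts_recent;

bool tcp_segment_is_fresh(uint32_t seg_tsval)
{
    if ((int32_t)(seg_tsval - ts_recent) < 0)
        return false;          /* older than last in-sequence: drop */
    ts_recent = seg_tsval;     /* advance the high-water mark */
    return true;
}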
Packet loss with Big Windows
V. Jacobson and R. Braden also discuss the problems and impact of packet loss on connections with large delay-bandwidth products [Jacobson88]. When packets are lost, the sending side has no information about how many packets were lost, and may wind up resending packets that successfully reached the destination the first time. They propose to address this issue by adding a Selective Ack (SACK) option to TCP, with which the receiver can report exactly which blocks of data have arrived, so that the sender need retransmit only the segments that were actually lost.
Implementation experience

Initial testing of the TCP WINDOW SCALE implementation was done at NASA Ames, between two Cray Research computers. One machine is an eight processor CRAY Y-MP computer, and the other is a four processor CRAY-2 computer. The machines are connected by an HSX channel (HSX is a proprietary high speed channel interconnect, rated at 800 Mbits/second). Both machines were running release 5.1.9 of the UNICOS operating system, with the addition of kernel code to support the TCP WINDOW SCALE option. A problem with the TCP resequencing queue had not yet been identified, and it placed limitations on the size of the kernel buffers (if the buffers were small enough, the problem would not appear). The test run was a memory to memory transfer between the two machines. The user level process did 100 writes and reads of 512K bytes each. The kernel buffer was set at 180K bytes. The MTU of the HSX interface was 61552 bytes. User to kernel copies were done in 32K byte chunks. With a zero window scale, data was transferred at 252 Mbits/second. When the window scale option was set to one, the performance increased to 363 Mbits/second, an improvement of 44%! The software loopback driver on the CRAY Y-MP computer was used to time the maximum throughput of the TCP/IP code; with the software loopback driver set with an MTU of 64K bytes, a throughput rate of 631 Mbits/second was measured (as compared to previous results of 550 Mbits/second, when an MTU of 32K bytes was used [Borman89]). Using the WINDOW SCALE option gave a barely noticeable change in these results.

After these encouraging preliminary numbers, development continued on SNQ1, a single processor prototype CRAY-2 computer. On this machine, the same code that was running at NASA Ames ran at about 350 Mbits/second through the software loopback driver, with an MTU of 65535, kernel buffers set at 256K bytes, and a WINDOW SCALE option of two. After fixing the TCP resequencing problems and optimizing the TCP option processing code, with user reads and writes of 1024K bytes, kernel buffering at 350K bytes, and a WINDOW SCALE of four, the code was running at 394 Mbits/second. Adding the TCP header prediction code, the throughput jumped to 423 Mbits/second. Changing the kernel buffering to 370K bytes increased the throughput to 430 Mbits/second. Optimizing the code that generated the TCP options bought a 1% improvement, to 434 Mbits/second. Since the TCP output routine rounds the amount of user data down to the nearest 1K boundary, changing the kernel buffer to 378K bytes allowed exactly six full packets to be on the input and output queues. The user read and write size was also changed, to 1512K bytes, a multiple of the kernel buffer size, and throughput shot up to 444 Mbits/second. The final change was to increase the size of the data being copied from user to kernel from 32K bytes to 63K bytes, and the throughput shot up again, to 461 Mbits/second. From the original 350 Mbits/second to the final 461 Mbits/second, the throughput was improved by over 30%. SNQ1 is an older and slower machine, so this same code was run on a CRAY Y-MP computer; on that machine, throughput was measured at a peak of 795 Mbits/second.
Comments                                 Mbits per second
Starting point                                       350
TCP resequencing fix
Optimize TCP option input processing                 394
TCP header prediction                                423
370K kernel buffers                                  430
Optimize TCP option output processing                434
378K kernel buffers                                  444
User to kernel copy of 63K                           461
Same code on a CRAY Y-MP                             795
Back of the envelope work

The current TCP code in the UNICOS operating system has a copy of the data from the user to the kernel, another copy of the data in the kernel to generate the IP packet, and another read of the data to calculate the TCP checksum. On the input side, there is one read of the data to calculate the checksum, and a copy of the data from the kernel to the user. Once again doing calculations on SNQ1, the copy and checksum operations were timed, giving a theoretical throughput in software loopback of 750 Mbits/second.
Calculations show that to get the observed rates of about 460 Mbits/second, and assuming that the input and output processing costs are about the same, about .22 msec of per packet processing has to be added on each side. Taking that information, and looking at just the sending side of the TCP connection, the IP layer on SNQ1 should be able to generate a stream of 63K byte TCP packets at a rate of over 800 Mbits/second; if one of the data copies on output were eliminated, the potential speed would exceed 1 gigabit/second. Even allowing an additional .2 msec/packet of overhead for the hardware driver, near-HIPPI speeds should still be attainable on SNQ1. On a CRAY Y-MP, even allowing up to .35 msec/packet for the hardware drivers, near-HIPPI speeds should still be attainable.
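The arithmetic can be checked in a few lines. Using the measured 750 Mbits/second for copies and checksums plus .22 msec of per packet processing on each side, the predicted loopback rate for 63K byte packets comes out at about 458 Mbits/second, in line with the observed 460.

#include <stdio.h>

int main(void)
{
    double pkt_bytes   = 63.0 * 1024;  /* 63K byte TCP packets */
    double copy_rate   = 750e6;        /* copies+checksums alone, bits/s */
    double per_side_ms = 0.22;         /* per packet processing, each side */

    double t_copy = pkt_bytes * 8.0 / copy_rate;          /* seconds */
    double t_pkt  = t_copy + 2.0 * per_side_ms / 1000.0;  /* + in & out */
    printf("effective loopback rate: %.0f Mbits/second\n",
           pkt_bytes * 8.0 / t_pkt / 1e6);
    /* prints ~458 Mbits/second, matching the ~460 observed on SNQ1 */
    return 0;
}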
[Table: projected throughput for the SNQ1 and CRAY Y-MP computers under reduced copy and checksum costs. Columns: copy:checksum operations on output and input (2:1 or 1:1 on output; 1:1 or none on input), per packet overhead in microseconds, and projected Mbits per second at 32K byte and 63K byte packet sizes. Projected rates range from a couple hundred Mbits/second with today's copy and checksum costs and high per packet overheads, to over 1700 Mbits/second on a CRAY Y-MP with reduced copies, no input checksum, and 63K byte packets.]
Conclusion

The future of high speed networking is very exciting. FDDI is here today, and HIPPI speeds are just around the corner. High speed switched circuits provide new opportunities and new challenges, but challenges that are not insurmountable. Simple extensions to the TCP protocol, as described in RFC 1072 and RFC 1185, address the limitations imposed by the 64K byte TCP window and the 32 bit sequence space, and will allow TCP to run efficiently over high speed, long delay networks.