Answer Sheet
Suggested Answers
1. Suppose the following IP datagram fragments (Figure 1) pass through another router onto a link with MTU 380 bytes (not counting the link header). Show the fragments produced. If the packet were originally fragmented for this MTU, how many fragments would be produced?

Figure 1: IP fragments

Answer: 6 fragments, vs. 4 originally.

2. What is the maximum bandwidth at which an IP host can send 576-byte packets without having the Ident field wrap around within 60 seconds? Suppose IP's maximum segment lifetime (MSL) is 60 seconds; that is, delayed packets can arrive up to 60 seconds late but no later. What might happen if this bandwidth were exceeded?

Answer: The Ident field is 16 bits, so we can send 576 × 2^16 bytes per 60 seconds, or about 5 Mbps. If we send more than this, then fragments of one packet could conceivably have the same Ident value as fragments of another packet.
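A quick check of the arithmetic (a sketch; it assumes no Ident value may be reused within the 60-second MSL):

    # 16-bit Ident field: 2^16 distinct values per 60-second window,
    # each labelling one 576-byte packet.
    bytes_per_window = 576 * 2**16           # bytes we may send per 60 s
    rate_bps = bytes_per_window * 8 / 60     # bits per second
    print(rate_bps / 1e6)                    # about 5.03 Mbps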
3. If a UDP datagram is sent from host A, port P to host B, port Q, but at host B there is no process listening to port Q, then B is to send back an ICMP Port Unreachable message to A. Like all ICMP messages, this is addressed to A as a whole, not to port P on A.

(a) Give an example of when an application might want to receive such ICMP messages.
(b) Find out what an application has to do, on the operating system of your choice, to receive such messages.
(c) Why might it not be a good idea to send such messages directly back to the originating port P on A?

Answer:
(a) An application such as TFTP, when sending initial connection requests, might want to know that the server isn't accepting connections.
(b) On typical Unix systems, one needs to open a raw socket (SOCK_RAW, traditionally requiring special privileges) and receive all ICMP traffic (see the sketch below).
(c) A receiving application would have no way to identify ICMP messages as such, or to distinguish between these messages and protocol-specific data.
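For (b), a minimal sketch of what this looks like on a Unix-like system (assumes Linux raw-socket semantics and root privileges; details vary by OS):

    import socket

    # A raw ICMP socket delivers all inbound ICMP, including Destination
    # Unreachable / Port Unreachable (type 3, code 3); the application has to
    # filter and match the messages to its own traffic itself.
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    packet, addr = s.recvfrom(4096)   # IP header + ICMP header + offending datagram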
4. Read the man page (or Windows equivalent) for the Unix/Windows utility netstat. Use netstat to see the state of the local TCP connections. Find out how long closing connections spend in TIME WAIT. Why do you think the TIME WAIT state is necessary?

Answer: To cleanly close down the connection and avoid hacks where attackers pick up a closing connection after the original client has departed.

5. Suppose a router has built up the following table:

    SubnetNumber     SubnetMask        NextHop
    128.96.39.0      255.255.255.128   Interface 0
    128.96.39.128    255.255.255.128   Interface 1
    128.96.40.0      255.255.255.128   R2
    192.4.153.0      255.255.255.192   R3
    (default)                          R4

The router can deliver packets directly over interfaces 0 and 1, or it can forward packets to routers R2, R3 and R4. Describe what the router does with a packet addressed to the following destinations:

(a) 128.96.39.10
(b) 128.96.40.12
(c) 128.96.40.151
(d) 192.4.153.17
(e) 192.4.153.90

Answer: Apply each subnet mask, and if the resulting subnet number matches the SubnetNumber column, use the corresponding NextHop entry. (In these tables there is always a unique match.)
(a) Applying the subnet mask 255.255.255.128, we get 128.96.39.0. Use interface 0 as the next hop.
(b) Applying subnet mask 255.255.255.128, we get 128.96.40.0. Use R2 as the next hop.
(c) All subnet masks give 128.96.40.128 as the subnet number. Since there is no match, use the default entry. The next hop is R4.
(d) The next hop is R3.
(e) None of the subnet number entries match, hence use the default router R4.
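The lookup procedure in the answer can be written out directly; a small sketch (the dotted-quad helper and the entries simply transcribe the table above):

    # Forwarding-table lookup: apply each entry's mask to the destination and
    # compare with its SubnetNumber; fall back to the default entry (R4).
    def to_int(addr):
        a, b, c, d = (int(x) for x in addr.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    table = [
        ("128.96.39.0",   "255.255.255.128", "interface 0"),
        ("128.96.39.128", "255.255.255.128", "interface 1"),
        ("128.96.40.0",   "255.255.255.128", "R2"),
        ("192.4.153.0",   "255.255.255.192", "R3"),
    ]

    def next_hop(dest):
        for subnet, mask, hop in table:
            if to_int(dest) & to_int(mask) == to_int(subnet):
                return hop
        return "R4"                          # default entry

    print(next_hop("128.96.40.151"))         # no entry matches, so R4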
6. Give the steps the forward search algorithm takes as it builds the routing database for node A in the network shown in Figure 2.

Figure 2: Network for exercise 1

Answer:

    Step  Confirmed                                  Tentative
    1     (A,0,-)
    2     (A,0,-)                                    (D,2,D) (B,5,B)
    3     (A,0,-) (D,2,D)                            (B,4,D) (E,7,D)
    4     (A,0,-) (D,2,D) (B,4,D)                    (E,6,D) (C,8,D)
    5     (A,0,-) (D,2,D) (B,4,D) (E,6,D)            (C,8,D)
    6     (A,0,-) (D,2,D) (B,4,D) (E,6,D) (C,7,D)
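The table traces Dijkstra's forward search. As a cross-check, a small sketch of the algorithm follows; since Figure 2 is not reproduced here, the link costs are an assumption reverse-engineered from the table (A-D 2, A-B 5, D-B 2, D-E 5, B-E 2, B-C 4, E-C 1), so treat it as illustrating the confirmed/tentative bookkeeping rather than the official topology.

    import heapq

    # Link costs inferred from the table above (an assumption; Figure 2 itself
    # is not shown here).
    graph = {
        "A": {"B": 5, "D": 2},
        "B": {"A": 5, "C": 4, "D": 2, "E": 2},
        "C": {"B": 4, "E": 1},
        "D": {"A": 2, "B": 2, "E": 5},
        "E": {"B": 2, "C": 1, "D": 5},
    }

    def forward_search(source):
        confirmed = {}                        # node -> (cost, first hop)
        tentative = [(0, source, "-")]        # (cost, node, first hop)
        while tentative:
            cost, node, hop = heapq.heappop(tentative)
            if node in confirmed:
                continue                      # a stale, more expensive entry
            confirmed[node] = (cost, hop)
            for neighbour, link_cost in graph[node].items():
                if neighbour not in confirmed:
                    first_hop = neighbour if node == source else hop
                    heapq.heappush(tentative, (cost + link_cost, neighbour, first_hop))
        return confirmed

    print(forward_search("A"))
    # {'A': (0, '-'), 'D': (2, 'D'), 'B': (4, 'D'), 'E': (6, 'D'), 'C': (7, 'D')}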
7. Suppose that nodes in the network shown in Figure 3 participate in link-state routing, and C receives contradictory LSPs: One from A arrives claiming the AB link is down, but one from B arrives claiming the AB link is up.
(a) How could this happen?
(b) What should C do? What can C expect?

Do not assume that LSPs contain any synchronised timestamp.

Answer:
(a) A necessary and sufficient condition for the routing loop to form is that B reports to A the networks B believes it can currently reach, after A discovers the problem with the AE link, but before A has communicated to B that A can no longer reach E.
(b) At the instant that A discovers the AE failure, there is a 50% chance that the next report will be B's and a 50% chance that the next report will be A's. If it is A's, the loop will not form; if it is B's, it will.
(c) At the instant A discovers the AE failure, let t be the time until B's next broadcast. t is equally likely to occur anywhere in the interval 0 <= t <= 60. The event of a loop forming is the same as the event that B

8. Consider the network shown in Figure 4, in which horizontal lines represent transit providers and numbered vertical lines are interprovider links.
Figure 4: Network for exercise

(a) How many routes to P could provider Q's BGP speakers receive?
(b) Suppose Q and P adopt the policy that outbound traffic is routed to the closest link to the destination's provider, thus minimising their own cost. What paths will traffic from host A to host B and from host B to host A take?
(c) What could Q do to have the B-to-A traffic use the closer link 1?
(d) What could Q do to have the B-to-A traffic pass through R?

Answer:
(a) Q will receive three routes to P, along links 1, 2, and 3.
(b) A-to-B traffic will take link 1. B-to-A traffic will take link 2. Note that this strategy minimises cost to the source of the traffic.
(c) To have B-to-A traffic take link 1, Q could simply be configured to prefer link 1 in all cases. The only general solution, though, is for Q to accept into its routing tables some of the internal structure of P, so that Q, for example, knows where A is relative to links 1 and 2.
(d) If Q were configured to prefer AS paths through R, or to avoid AS paths involving links 1 and 2, then Q might route to P via R.

9. In HTTP version 1.0, a server marked the end of a transfer by closing the connection. Explain why, in terms of the TCP layer, this was a problem for servers. Find out how HTTP version 1.1 avoids this. How might a general-purpose request/reply protocol address this?

Answer: When the server initiates the close, it is the server that must enter the TIME WAIT state. This requires the server to keep extra records; a server that averaged 100 connections per second would need to maintain about 6000 TIME WAIT records at any one moment. HTTP 1.1 has a variable-sized message transfer mechanism; the size and endpoint of a message can be inferred from the headers. The server can thus transfer a file and wait for the client to detect the end and close the connection. Any request/reply protocol that could be adapted to support arbitrarily large messages would also suffice here.
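A minimal sketch of the HTTP 1.1 idea from the client's side (assumes the server replies with a Content-Length header rather than chunked encoding; example.com is just a placeholder host): the client finds the end of the reply from the headers and is therefore the side that closes, so the TIME WAIT burden moves off the server.

    import socket

    s = socket.create_connection(("example.com", 80))
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")

    data = b""
    while b"\r\n\r\n" not in data:            # read up to the end of the headers
        data += s.recv(4096)
    headers, body = data.split(b"\r\n\r\n", 1)

    length = next(int(line.split(b":", 1)[1])
                  for line in headers.split(b"\r\n")
                  if line.lower().startswith(b"content-length"))
    while len(body) < length:                 # read exactly the advertised body
        body += s.recv(4096)

    s.close()                                 # the client, not the server,
                                              # initiates the close and holds TIME WAIT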
10. Why does the HTTP GET command on page 654 of Peterson & Davie,

(a) For what SMTP commands does the client need to pay attention to the server's responses?
(b) Assume the server reads each client message with gets() or the equivalent, which reads in a string up to an <LF>. What would it have to do even to detect that a client had used command pipelining?
(c) Pipelining is nonetheless known to break with some servers; find out how a client can negotiate its use.

Answer: Further information on command pipelining can be found in RFC 2197.
(a) We could send the HELO, FROM, and TO all together, as these messages are all small and the cost of unnecessary transmission is low, but it would seem appropriate to examine the response for error indications before bothering to send the DATA.
(b) The idea here is that a server reading with gets() in this manner would be unable to tell whether two lines arrived together or separately. However, a TCP buffer flush immediately after the first line was processed could wipe out the second; one way this might occur is if the connection were handed off at that point to a child process. Another possibility is that the server busy-reads after reading the first line but before sending back its response; a server that willfully refused to accept pipelining might demand that this busy-read return 0 bytes. This is arguably beyond the scope of gets(), however.
(c) When the client sends its initial EHLO command (itself an extension of HELO), a pipeline-safe server is supposed to respond with 250 PIPELINING, included in its list of supported SMTP extensions.

13. The sequence number field in the TCP header is 32 bits long, which is big enough to cover over 4 billion bytes of data. Even if this many bytes were never transferred over a single connection, why might the sequence number still wrap around from 2^32 - 1 to 0?

Answer: The sequence number doesn't always begin at 0 for a transfer, but is randomly or clock generated.

14. Suppose TCP operates over a 1 Gbps link. Assuming TCP could utilise the full bandwidth continuously, how long would it take the sequence numbers to wrap around completely?

Answer: This is 125 MB/sec; the sequence numbers wrap around when we send 2^32 B = 4 GB. This would take 4 GB / (125 MB/sec) = about 32 seconds.
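A quick check of the arithmetic (whether this comes to about 32 or about 34 seconds depends on whether the giga/mega prefixes are read as powers of two or of ten; either way it is roughly half a minute):

    # 1 Gbps = 10^9 / 8 bytes per second; the sequence space covers 2^32 bytes.
    seconds_to_wrap = 2**32 / (1e9 / 8)
    print(seconds_to_wrap)       # about 34.4 s (about 32 s with binary megabytes)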
15. (Advanced) Chapter 5 of Peterson & Davie explains three sequences of state transitions during TCP connection teardown. There is a fourth possible sequence, which traverses an additional arc (not shown in Figure 5.7) from FIN WAIT 1 to TIME WAIT and labelled FIN + ACK/ACK. Explain the circumstances that result in this fourth teardown sequence.

Answer: Host A has sent a FIN segment to host B, and has moved from ESTABLISHED to FIN WAIT 1. Host A then receives a segment from B that contains both the ACK of this FIN and B's own FIN segment. This could happen if the application on host B closed its end of the connection immediately when host A's FIN segment arrived, and was thus able to send its own FIN along with the ACK. Normally, because the host B application must be scheduled to run before it can close the connection and thus have the FIN sent, the ACK is sent before the FIN. While delayed ACKs are a standard part of TCP, traditionally only ACKs of DATA, not of FIN, are delayed. See RFC 813 for further details.

16. ARP and DNS both depend on caches; ARP cache entry lifetimes are typically 10 minutes, while DNS cache lifetimes are on the order of days. Justify this difference. What undesirable consequences might there be in having too long a DNS cache entry lifetime?

Answer: ARP traffic is always local, so ARP retransmissions are confined to a small area. Subnet broadcasts every few minutes are not a major issue either in terms of bandwidth or CPU, so a small cache lifetime does not create an undue burden. Much of DNS traffic is nonlocal; limiting such traffic becomes more important for congestion reasons alone. There is also a sizable total CPU-time burden on the root nameservers. And an active web session can easily generate many more DNS queries than ARP queries. Finally, DNS provides a method of including the cache lifetime in the DNS zone files. This allows a short cache lifetime to be used when necessary, and a longer lifetime to be used more commonly. If the DNS cache-entry lifetime is too long, however, then when a host's IP address changes the host is effectively unavailable for a prolonged interval.

17. What is the relationship between a domain name (e.g., comp.lancs.ac.uk) and an IP subnet number (e.g., 194.80.35.0)? Do all hosts on the subnet have to be identified by the same name server?

Answer: There is little if any relationship, formally, between a domain and an IP network, although it is nonetheless fairly common for an organization (or department) to have its DNS server resolve names for all the hosts in its network (or subnet), and no others. The DNS server for comp.lancs.ac.uk could, however, be on a different network entirely (or even on a different continent) from the hosts whose names it resolves. Alternatively, each x.comp.lancs.ac.uk host could be on a different network, and each host that is on the same network as the comp.lancs.ac.uk nameserver could be in a different DNS domain.

18. Having ARP table entries time out after 10-15 minutes is an attempt at a reasonable compromise. Describe the problems that can occur if the timeout value is too small or too large.

Answer: If the timeout value is too small, we clutter the network with unnecessary re-requests, and halt transmission until the re-request is answered. When a host's Ethernet address changes, e.g. because of a card replacement, then
that host is unreachable to others that still have the old Ethernet address in their ARP cache. 10-15 minutes is a plausible minimal amount of time required to shut down a host, swap its Ethernet card, and reboot.

19. Suppose an IP implementation adheres literally to the following algorithm on receipt of a packet, P, destined for IP address D:
    if (<Ethernet address for D is in ARP cache>)
        <send P>
    else
        <send out ARP query for D>
        <put P into queue until response comes back>
(a) If the IP layer receives a burst of packets destined for D, how might this algorithm waste resources unnecessarily?
(b) Sketch an improved version.
(c) Suppose we simply drop P after sending out a query when cache lookup fails. How would this behave? (Some early ARP implementations allegedly did this.)

Answer:
(a) If multiple packets arrive at the IP layer for outbound delivery after the first one, but before the first ARP response comes back, then we send out multiple unnecessary ARP packets. Not only do these consume bandwidth, but, because they are broadcast, they interrupt every host and propagate across bridges.
(b) We should maintain a list of currently outstanding ARP queries. Before sending a query, we first check this list. We might also retransmit queries on the list after a suitable timeout. (A sketch of this appears below.)
(c) This might, among other things, lead to frequent and excessive packet loss at the beginning of new connections.
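A sketch of the improved version described in (b): keep a record of addresses with a query already outstanding, so a burst of packets for D produces one broadcast query, and release the queued packets when the reply arrives. The helper functions (send_frame, send_arp_query, start_retransmit_timer) are assumed here, not real APIs.

    # Sketch of the improved algorithm from answer (b); helper functions are assumed.
    arp_cache = {}      # IP address -> Ethernet address
    pending = {}        # IP address -> packets queued while a query is outstanding

    def ip_output(P, D):
        if D in arp_cache:
            send_frame(arp_cache[D], P)
        elif D in pending:
            pending[D].append(P)           # a query is already out: just queue P
        else:
            pending[D] = [P]
            send_arp_query(D)              # one broadcast per unresolved address
            start_retransmit_timer(D)      # re-query after a suitable timeout

    def arp_reply(D, ethernet_address):
        arp_cache[D] = ethernet_address
        for P in pending.pop(D, []):       # release everything queued for D
            send_frame(ethernet_address, P)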
20. Suppose the following sequence of bits arrives over a link:

    011010111110101001111111011001111110
Show the resulting frame after any stuffed bits have been removed. Indicate any errors that might have been introduced into the frame.

Answer: The answer is in the book.

21. Suppose you want to send some data using the BISYNC framing protocol (P&D pg. 80), and the last 2 bytes of your data are DLE and ETX. What sequence of bytes would be transmitted immediately prior to the CRC?

Answer:
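Not an official answer, but the escaping rule P&D describe (a DLE or ETX occurring in the body is preceded by an extra DLE, and the real ETX then terminates the frame) is enough to work out the byte sequence; a sketch:

    # BISYNC-style character stuffing (a sketch of the rule, not the book's answer).
    DLE, ETX = 0x10, 0x03

    def stuff_body(body: bytes) -> bytes:
        out = bytearray()
        for b in body:
            if b in (DLE, ETX):
                out.append(DLE)              # escape sentinel bytes inside the data
            out.append(b)
        return bytes(out)

    # The data's last two bytes, followed by the real frame-terminating ETX:
    tail = stuff_body(bytes([DLE, ETX])) + bytes([ETX])
    print(tail.hex(" "))                     # 10 10 10 03 03, i.e. DLE DLE DLE ETX ETX,
                                             # with the CRC following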