uniquely identifies a TCP connection.
9. The IP header checksum is not an end-to-end checksum in the sense
of : The time-to-live update forces the IP checksum to be
recomputed at each hop. The author has had unpleasant personal
experience with the consequences of violating the end-to-end argument in
 and this protocol is careful to pass the end-to-end TCP checksum
through unmodified. See sec. 4.
10. This is approximately Thinwire-I from . A slight modification
is to do a `delta encoding' where the sender subtracts the previous
packet from the current packet (treating each packet as an array of 16
bit integers), then sends a 20-bit mask indicating the non-zero
differences followed by those differences. If distinct conversations
are separated, this is a fairly effective compression scheme (e.g.,
typically 12-16 byte headers) that doesn't involve the compressor
knowing any details of the packet structure. Variations on this theme
have been used, successfully, for a number of years (e.g., the Proteon
router's serial link protocol).
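As a concrete illustration of the delta scheme sketched above (this is not part of the protocol; the names, the carriage of the 20-bit mask in two 16-bit words, and byte order are all our own simplifications):

```c
#include <stdint.h>

#define HDR_WORDS 20  /* a 40-byte TCP/IP header viewed as 20 16-bit words */

/* Encode: emit a 20-bit mask flagging the changed words, followed by
 * the two's-complement differences.  Returns 16-bit words written. */
int delta_encode(const uint16_t *prev, const uint16_t *cur, uint16_t *out)
{
    uint32_t mask = 0;
    int n = 2;                        /* out[0..1] carry the mask */

    for (int i = 0; i < HDR_WORDS; i++) {
        if (cur[i] != prev[i]) {
            mask |= 1u << i;
            out[n++] = (uint16_t)(cur[i] - prev[i]);
        }
    }
    out[0] = (uint16_t)(mask & 0xffff);
    out[1] = (uint16_t)(mask >> 16);  /* top 4 of the 20 mask bits */
    return n;
}

/* Decode: rebuild the current header from the previous one. */
int delta_decode(const uint16_t *prev, const uint16_t *in, uint16_t *cur)
{
    uint32_t mask = in[0] | ((uint32_t)in[1] << 16);
    int n = 2;

    for (int i = 0; i < HDR_WORDS; i++)
        cur[i] = (mask & (1u << i)) ? (uint16_t)(prev[i] + in[n++])
                                    : prev[i];
    return n;
}
```

Both directions run in one pass over the 20 words and the decoder needs no knowledge of TCP/IP field boundaries, which is the scheme's main attraction.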
11. Link level framing is outside the scope of this document. Any
framing that provides the facilities listed in this paragraph should be
adequate for the compression protocol. However, the author encourages
potential implementors to see  for a proposed, standard, SLIP framing.
12. The bit `P' in the figure is different from the others: It is a
copy of the `PUSH' bit from the TCP header. `PUSH' is a curious
anachronism considered indispensable by certain members of the Internet
community. Since PUSH can (and does) change in any datagram, an
information preserving compression scheme must pass it explicitly.
13. All differences are computed using two's complement arithmetic.
14. The connection number is limited to one byte, i.e., 256
simultaneously active TCP connections. In almost two years of
operation, the author has never seen a case where more than sixteen
connection states would be useful (even in one case where the SLIP link
was used as a gateway behind a very busy, 64-port terminal multiplexor).
Thus this does not seem to be a significant restriction and allows the
protocol field in UNCOMPRESSED_TCP packets to be used for the connection
number, simplifying the processing of those packets.
15. It's also obvious that the change mask changes infrequently and
could often be elided. In fact, one can do slightly better by saving
the last compressed packet (it can be at most 16 bytes so this isn't
much additional state) and checking to see if any of it (except the TCP
checksum) has changed. If not, send a packet type that means
`compressed TCP, same as last time' and a packet containing only the
checksum and data. But, since the improvement is at most 25%, the added
complexity and state doesn't seem justified. See appendix C.
16. It is not necessary (or desirable) to actually duplicate the input
packet for any of the three output types. Note that the compressor
cannot increase the size of a datagram. As the code in appendix A
shows, the protocol can be implemented so all header modifications are
made `in place'.
17. Only the first fragment contains the TCP header so the fragment
offset check is necessary. The first fragment might contain a complete
TCP header and, thus, could be compressed. However the check for a
complete TCP header adds quite a lot of code and, given the arguments in
, it seems reasonable to send all IP fragments uncompressed.
18. The ACK test is redundant since a standard conforming
implementation must set ACK in all packets except for the initial SYN
packet. However, the test costs nothing and avoids turning a bogus
packet into a valid one.
SYN packets are not compressed because only half of them contain a valid
ACK field and they usually contain a TCP option (the max. segment size)
which the following packets don't. Thus the next packet would be sent
uncompressed because the TCP header length changed and sending the SYN
as UNCOMPRESSED_TCP instead of TYPE_IP would buy nothing.
The decision to not compress FIN packets is questionable. Discounting
the trick in appendix B.1, there is a free bit in the header that could
be used to communicate the FIN flag. However, since connections tend to
last for many packets, it seemed unreasonable to dedicate an entire bit
to a flag that would only appear once in the lifetime of the connection.
19. The two tests can be combined into a single test of the most
significant 16 bits of the difference being non-zero.
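In C, for a 32-bit two's-complement difference, the combined test is a single mask (an illustrative sketch; the function name is ours): a negative difference has its sign bits in the upper half, and a difference of 2^16 or more has a non-zero upper half, so one test covers both cases.

```c
#include <stdint.h>

/* A difference is representable as an unsigned 16-bit delta exactly
 * when its upper 16 bits are all zero: this one test replaces the
 * separate "negative?" and ">= 2^16?" checks. */
static int delta_fits(int32_t diff)
{
    return ((uint32_t)diff & 0xffff0000u) == 0;
}
```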
20. A negative sequence number change probably indicates a
retransmission. Since this may be due to the decompressor having
dropped a packet, an uncompressed packet is sent to re-sync the
decompressor (see sec. 4).
21. Note that the test here is against one, not zero. The packet ID is
typically incremented by one for each packet sent so a change of zero is
very unlikely. A change of one is likely: It occurs during any period
when the originating system has activity on only one connection.
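The resulting compressor logic might look like this sketch (names are illustrative, not taken from appendix A): the ID delta is sent explicitly only when the change differs from the expected increment of one.

```c
#include <stdint.h>

struct idstate { uint16_t last_id; };  /* last IP ID seen */

/* Returns 1 and sets *delta if an explicit ID delta must be sent,
 * 0 if the default change of one applies and the field is elided
 * (the receiver adds one itself). */
static int ip_id_delta(struct idstate *s, uint16_t id, uint16_t *delta)
{
    uint16_t d = id - s->last_id;  /* two's-complement, mod 2^16 */

    s->last_id = id;
    if (d == 1)
        return 0;
    *delta = d;
    return 1;
}
```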
22. It's assumed that link-level framing has been removed by this point
and the packet and length do not include type or framing bytes.
23. No data need be associated with a TYPE_ERROR packet. It exists so
the receive framer can tell the decompressor that there may be a gap in
the data stream. The decompressor uses this as a signal that packets
should be tossed until one arrives with an explicit connection number (C
bit set). See the last part of sec. 4.1 for a discussion of why this
is necessary.
24. State indices follow the C language convention and run from 0 to N
- 1, where 0 < N <= 256 is the number of available state slots.
25. As with the compressor, the code can be structured so no copies are
done and all modifications are done in-place. However, since the output
packet can be larger than the input packet, 128 bytes of free space must
be left at the front of the input packet buffer to allow room to prepend
the TCP/IP header.
26. Modulo the TCP checksum.
27. While appropriate error detection is link dependent, the CCITT CRC
used in  strikes an excellent balance between ease of computation and
robust error detection for a large variety of links, particularly at the
relatively small packet sizes needed for good interactive response.
Thus, for the sake of interoperability, the framing in  should be
used unless there is a truly compelling reason to do otherwise.
28. This is an example of a generic problem with differential or delta
encodings known as `losing DC'.
29. Many system managers claim that holes in an NNTP stream are more
valuable than the data.
30. With worst-case traffic, this probability translates to one
undetected error every three hours over a 9600 baud line with a 30%
loss rate.
31. The packet could be a zero-window probe rather than a retransmitted
ack but window probes should be infrequent and it does no harm to send
them uncompressed.
32. The PPP protocol in  allows the end points to negotiate
compression so there is no interoperability problem. However, there
should still be a provision for the system manager at each end to
control whether compression is negotiated on or off. And, obviously,
compression should default to `off' until it has been negotiated `on'.
33. Strictly speaking, there's no reason why the connection id should
be treated as an array index. If the decompressor's states were kept in
a hash table or other associative structure, the connection id would be
a key, not an index, and performance with too few decompressor slots
would only degrade enormously rather than failing altogether. However,
an associative structure is substantially more costly in code and cpu
time and, given the small per-slot cost (128 bytes of memory), it seems
reasonable to design for slot arrays at the decompressor and some
(possibly implicit) communication of the array size.
34. The maximum header length, fixed by the protocol design, is 64
bytes of IP and 64 bytes of TCP.
35. Allowing only one slot may make the compressor code more complex.
Implementations should avoid offering only one slot if possible and
compressor implementations may disable compression if only one slot is
available.
36. The vertical axis is in percent of line speed. E.g., `95' means
that 95% of the line bandwidth is going to user data or, in other words,
the user would see a data transfer rate of 9120 bps on a 9600 bps line.
Four bytes of link-level (framer) encapsulation in addition to the
TCP/IP or compressed header were included when calculating the relative
throughput. The 200 ms packet times were computed assuming an
asynchronous line using 10 bits per character (8 data bits, 1 start, 1
stop, no parity).
37. However, the 40 byte TCP MSS required for a 2400 bps line might
stress-test your TCP implementation.
38. For a typical async line, a 200 ms MTU is simply .02 times the line
speed in bits per second.
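The arithmetic, spelled out (the helper function is ours): 10 bits per character means bytes/sec = bps/10, and a 200 ms window is 0.2 s of that, so MTU = bps/10 * 0.2 = .02 * bps.

```c
/* MTU for a 200 ms packet time on an async line (10 bits/char):
 * .02 * bps, i.e., bps/50 in whole bytes. */
static int mtu_200ms(int bps)
{
    return bps / 50;
}
```

E.g., 192 bytes at 9600 bps and 48 bytes at 2400 bps, consistent with the MSS mentioned in the previous note.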
39. The answers, for those who wish to skip the remainder of this
section, are `yes', `no' and `either', respectively.
40. The data volume from the user side of a telnet is too small to
benefit
from data compression and can be adversely affected by the delay most
compression algorithms (necessarily) add. The statistics and volume of
the computer side of a telnet are similar to an (ASCII) FTP so these
results should apply to either.
41. The ten experiments described were each done on ten ASCII files
(four long e-mail messages, three Unix C source files and three Unix
manual entries). The results were remarkably similar for different
files and the general conclusions reached below apply to all ten files.
42. This is what would be expected from the relative header sizes:
256/216 for TCP/IP and 219/216 for header compression.
43. The differences are due to the wildly different byte patterns of
TCP/IP datagrams and ASCII text. Any compression scheme with an
underlying, Markov source model, such as Lempel-Ziv, will do worse when
radically different sources are interleaved. If the relative
proportions of the two sources are changed, i.e., the MTU is increased,
the performance difference between the two compressor locations
decreases. However, the rate of decrease is very slow --- increasing
the MTU by 400% (256 to 1024) only changed the difference between the
data and modem L-Z choices from 2.5% to 1.3%.
44. There are other good reasons to compress at the source: Far fewer
packets have to be encapsulated and far fewer characters have to be sent
to the modem. The author suspects that the `compress data in the modem'
alternative should be avoided except when faced with an intractable,
vendor proprietary operating system.
45. The time choice wasn't completely arbitrary: Decompression is
often done during the inter-frame `flag' character time so, on systems
where the decompression is done at the same priority level as the serial
line input interrupt, times much longer than a character time would
result in receiver overruns. And, with the current average of five byte
frames (on the wire, including both compressed header and framing), a
compression/decompression that takes one byte time can use at most 20%
of the available time. This seems like a comfortable budget.
46. Both the test program and timer program are included in the
ftp-able package described in appendix A as files tester.c and timer.c.
47. The two most common operations on the connection list are a `find'
that terminates at the first entry (a new packet for the most recently
used connection) and moving the last entry on the list to the head of
the list (the first packet from a new connection). A circular list
efficiently handles these two operations.
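A sketch of such a list (illustrative; appendix A's actual code differs in details): keeping a pointer to the list tail makes the most recently used entry reachable as tail->next, a hit on that entry terminates the find immediately, and promoting the tail (the oldest state) to the head is just backing up the tail pointer.

```c
#include <stddef.h>

struct cstate {
    struct cstate *next;  /* circular, in most-recently-used order */
    int id;               /* connection number */
    /* ... saved TCP/IP header would live here ... */
};

/* Find the state for `id' and make it the most recently used entry.
 * `*tail' points at the least recently used entry, so the MRU head
 * is (*tail)->next.  Returns NULL if `id' is not on the list. */
static struct cstate *cs_find(struct cstate **tail, int id)
{
    struct cstate *head = (*tail)->next;
    struct cstate *prev = *tail, *cs = head;

    do {
        if (cs->id == id) {
            if (cs == *tail)
                *tail = prev;           /* LRU hit: back up the tail */
            else if (cs != head) {
                prev->next = cs->next;  /* unlink ... */
                cs->next = head;        /* ... splice in at the head */
                (*tail)->next = cs;
            }
            return cs;
        }
        prev = cs;
        cs = cs->next;
    } while (cs != head);
    return NULL;
}
```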
48. In the event they are not obvious, the header files (and all the
Berkeley networking code) can be anonymous ftp'd from host
ucbarpa.berkeley.edu, files pub/4.3/tcp.tar and pub/4.3/inet.tar.
49. Since  framing doesn't include error detection, one should be
careful not to `false trigger' compression on the server. The
UNCOMPRESSED_TCP packet should be checked for consistency (e.g., IP
checksum correctness) before compression is enabled. Arrival of
COMPRESSED_TCP packets should not be used to enable compression.
50. Tests run with several million packets from a mixed traffic load
(i.e., statistics kept on a year's traffic from my home to work) show
that 80% of packets use one of the two special encodings and, thus, the
only header is the change mask.
51. If someone tries to sell you a scheme that compresses the TCP
checksum, `Just say no'. Some poor fool has yet to have the sad
experience that reveals the end-to-end argument is gospel truth. Worse,
since the fool is subverting your end-to-end error check, you may pay
the price for this education and they will be none the wiser. What does
it profit a man to gain two byte times of delay and lose peace of mind?
52. Note again that we must be concerned about interactive delay to be
making this argument: Bulk data transfer performance will be dominated
by the time to send the data and the difference between three and four
byte headers on a datagram containing tens or hundreds of data bytes is,
practically, no difference.
53. The checksum was re-generated in the decompressor and, of course,
the `toss' logic was made considerably more aggressive to prevent
error propagation.
54. We have heard the suggestion that `real-time' needs require
abandoning TCP/IP in favor of a `light-weight' protocol with smaller
headers. It is difficult to envision a protocol that averages less than
one header bit per packet.