Chapter 6 - VINES Transport Layer Protocols
VINES transport layer protocols coordinate the end-to-end movement of data between any two processes, such as a client and a service, in the network. Transport layer protocols support a variety of classes of service, which are ways in which data messages can be transferred.
Transport Layer Services Provided
In the VINES architecture, the transport layer provides the following set of services:
- Unreliable datagram service
- Reliable (acknowledged) message service
- Data stream service
An unreliable datagram is a discrete unit of data, limited in size, that one process either transfers to another or broadcasts to a set of processes of the same type. The transport layer does not acknowledge unreliable datagrams or guarantee their delivery. Unreliable datagrams can also be duplicated, or arrive in a different order than they were sent.
A reliable message is an atomic unit of data, limited in size. It is transferred from one process to another. The differences between reliable messages and unreliable datagrams are described in Table 6-1.
A data stream is a controlled flow of data between two processes. Unlike reliable message exchange, a data stream supports the transfer of messages of unlimited size.
The reliable message service and the data stream service support virtual connections, which are associations between any two processes or two IPC entities in the network. The association acts like a two-way data pipe over which data exchange takes place.
A virtual connection guarantees that one process or protocol entity is notified if the other closes the connection or terminates. A network layer or data link layer failure, such as the loss of a transmission medium, terminates the virtual connection. Both processes or protocol entities are informed of the connection loss.
In the VINES architecture, IPC and SPP provide transport layer services, as follows:
- IPC provides support for the unreliable datagram service and the reliable message service.
- SPP provides data stream service support.
Both IPC and SPP use the VINES IP routing services to send and receive data.
VINES Transport Layer Addressing
The VINES architecture defines a two-level transport layer addressing scheme. Processes on the network, such as services, are referenced by a port, which is a message queue.
Each port has a unique port address. The port address is 64 bits and consists of these two fields:
- The high-order 48 bits of the port address are equal to the VINES internet address of the node where the transport layer protocol entity resides.
- The low-order 16 bits make up the local port number. The local port number identifies the specific port among the set of ports that the transport layer protocol entity supports.
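The two-level scheme above can be sketched with a pair of helper functions. This is an illustrative sketch, not part of the VINES specification; the function names are invented here.

```python
# Sketch of the 64-bit VINES transport layer port address:
# high-order 48 bits = VINES internet address of the node,
# low-order 16 bits = local port number.

def make_port_address(internet_address: int, local_port: int) -> int:
    """Combine a 48-bit VINES internet address and a 16-bit local
    port number into the 64-bit transport layer port address."""
    assert internet_address < (1 << 48)
    assert local_port < (1 << 16)
    return (internet_address << 16) | local_port

def split_port_address(port_address: int) -> tuple[int, int]:
    """Recover the internet address (high 48 bits) and the local
    port number (low 16 bits)."""
    return port_address >> 16, port_address & 0xFFFF
```

Because the node's internet address occupies the high-order bits, all ports on one node sort together when port addresses are compared numerically.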
Local port numbers identify either well-known ports or transient ports.
Well-known port numbers are reserved for certain upper-layer VINES service entities such as the StreetTalk Service and third-party applications. Well-known ports are associated with a service type. This means that a specific well-known port number is reserved for all instances of a particular type of service. Only one instance of any well-known port number can exist in a single node at any point in time.
Well-known local port numbers range in value from 0x1 through 0x1ff. The well-known port number range 0x1 to 0xC7 is reserved for use by Banyan. The ranges 0xC8 to 0xF9 and 0x100 to 0x1FF are reserved for third-party developers, and require registration with Banyan. The range 0xFA to 0xFF is reserved for temporary use, and does not require registration with Banyan.
Transient port numbers are assigned by the transport layer protocol entity on a rotating basis. Transient local port numbers range from 0x200 through 0xFFFE.
The transport layer provides the same type or quality of service for well-known ports and transient ports. Well-known ports just simplify how certain important upper-layer services are located on the network.
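The port-number ranges described above can be summarized in a small classifier. This is an illustrative sketch; the function name and return strings are invented here, not part of the VINES specification.

```python
# Sketch classifying a local port number into the ranges listed above:
# well-known ports 0x1-0x1FF (with Banyan, third-party, and temporary
# subranges) and transient ports 0x200-0xFFFE.

def classify_port(port: int) -> str:
    if 0x1 <= port <= 0xC7:
        return "well-known (reserved for Banyan)"
    if 0xC8 <= port <= 0xF9 or 0x100 <= port <= 0x1FF:
        return "well-known (third-party, registration required)"
    if 0xFA <= port <= 0xFF:
        return "well-known (temporary use, no registration)"
    if 0x200 <= port <= 0xFFFE:
        return "transient"
    return "invalid"
```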
VINES IPC
IPC moves unreliable datagrams or reliable messages between transport layer ports in the network. A single IPC entity implements the IPC protocol on each VINES node.
Unreliable datagrams differ from reliable messages as follows:
- An unreliable datagram consists of a single data packet that is transferred from a single source port to one or more destination ports.
- A reliable message consists of a set of data packets (from one to four) that is transferred from one source port to one destination port.
The IPC entity reassembles the multiple packets of a reliable message and presents them as a single unit to the process that owns the socket within the transport layer port. The IPC entity acknowledges reliable messages and guarantees that the messages are placed in the destination port input queue only once. Otherwise, IPC informs the sending process of an error condition.
A single IPC virtual connection between the participating IPC entities enhances the delivery service for reliable messages, as follows:
- This connection handles all reliable messages that need to be transferred between transport layer ports on two different nodes.
- This connection is not associated with any individual port or set of ports, but acts as a common resource for the IPC entities.
IPC entities establish virtual connections on an "as needed" basis and terminate them when they are unused for a period of time that is based on the total path metric. The duration of this period depends on the specific connection. The establishment or termination of IPC virtual connections is transparent to processes requesting reliable message services.
See "IPC Half-Connection Initialization Implementation" later in this chapter for more information on the total path metric.
IPC headers differ for unreliable datagrams and reliable messages. An unreliable datagram uses a short IPC header that is 6 bytes. Figure 6-1 shows the short IPC header.
IPC provides virtual connection support through a long IPC header, which is 16 bytes. The long IPC header includes the first three fields in the short IPC header and other fields, as shown in Figure 6-2.
The fields in the short and long IPC headers are as follows.
Source Port Field The source port field contains the 16-bit port number of the port where the datagram or message originated.
Destination Port Field The destination port field contains the 16-bit port number of the port where the datagram or message is destined. The source and destination port numbers are concatenated to the source and destination VINES internet addresses from the VINES IP packet header, forming the full 64-bit transport layer port address.
Packet Type Field The packet type field specifies the IPC packet type. Table 6-2 shows the values.
IPC uses a packet type value of 0x0 for unreliable datagrams. This value is valid with the short IPC packet header format only. IPC uses all other packet type values for reliable message support. These values require the long IPC header.
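The two header sizes can be sketched with Python's `struct` module. This is a minimal sketch only: it assumes network (big-endian) byte order and the field order described in the text, since Figures 6-1 and 6-2 are not reproduced here.

```python
import struct

# Short IPC header (6 bytes): source port, destination port,
# packet type, padding byte.
SHORT_HDR = struct.Struct(">HHBB")

# Long IPC header (16 bytes): the first three fields above plus the
# control byte, local CID, remote CID, sequence number,
# acknowledgment number, and error/length field.
LONG_HDR = struct.Struct(">HHBBHHHHH")

def pack_datagram_header(src_port: int, dst_port: int) -> bytes:
    # Packet type 0x0 (unreliable datagram) is valid only with the
    # short header; the padding byte may hold any value.
    return SHORT_HDR.pack(src_port, dst_port, 0x0, 0)

assert SHORT_HDR.size == 6 and LONG_HDR.size == 16
```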
Control Byte Field The control byte field appears in long IPC headers only. This field contains bit settings, which are listed in Table 6-3.
IPC aborts the current message during the processing of multipacket messages only. The abort happens when IPC sends one or more packets in the message, but fails to send one or more of the remaining packets. This failure occurs during the processing of a user send request. Examples of failures include an address fault or process termination.
The beginning-of-message and end-of-message bits delimit the user data message.
For single-packet messages, both the beginning-of-message and end-of-message bits in the IPC header are set to 1.
For multipacket messages, the beginning-of-message field is set to 1 in the IPC header attached to the first packet, and the end-of-message field is set to 1 in the IPC header attached to the last packet. Neither bit is set on packets in between.
For IPC, the 0x80 bit may be set (but is not always set) in data packets.
Padding Byte Field The padding byte field appears in short IPC headers only. This field keeps the short IPC packet on an even byte boundary. The byte contains a random value that is determined by IPC and has no special meaning.
CID Fields IPC entities assign connection IDs (CIDs) to connections to identify them. Communicating IPC entities choose the values of the local CID and remote CID fields to identify a particular IPC virtual connection.
Local CID - The value in the local CID field is the CID that the sending IPC entity uses to identify the connection.
Remote CID - The value that the sending IPC entity places in the remote CID field is the CID that the receiving IPC entity uses to identify the connection.
A CID is unique for all connections that the IPC entity supports at one time. IPC reserves a CID of 0 (zero) to indicate a null (unknown) CID.
Local CIDs and remote CIDs are not necessarily the same. The local and remote IPC entities may use different IDs to identify the same connection.
When communicating IPC entities establish a connection, each entity assigns a local CID to the connection. The entities then exchange local CIDs. Each entity uses the other's local CID as the remote CID.
Example CIDs
A mail client program establishes an IPC virtual connection with the Intelligent Messaging mail service. The client program assigns a local CID of 0x3 to the connection, and the service assigns a local CID of 0x5 to the connection.
When the client program sends a message to the service, it uses 0x3 for the local CID and 0x5 for the remote CID. When the service responds to the client, it uses 0x5 for the local CID and 0x3 for the remote CID.
Sequence Number Field The sequence number field indicates the ordering of the packet relative to other packets sent on the IPC virtual connection. This field contains a value from 0x0 through 0xFFFF.
The first sequenced packet transferred on a connection must have a sequence number of 1. Subsequent packets have sequence numbers 1 greater (modulo 0x10000) than the previous packet. The sequence numbering that an IPC entity uses in one direction on the IPC virtual connection is independent of the sequencing that the other entity uses in the opposite direction.
Acknowledgment Number Field The acknowledgment number field has two purposes:
- In IPC error packets, this field indicates the sequence number field value of the data packet with which the error is associated.
- In data, probe, and acknowledgment packets, this field indicates the sequence number value of the last packet that the receiving IPC entity accepted on the connection.
Error Field The error field appears in error packets only. The field returns an indication of a service failure. Communication error codes that are returned are described in Table 6-4.
Length Field The length field appears in data packets only. The length field contains the total number of bytes that follow the long IPC header in a data packet.
VINES IPC Implementation Notes
An IPC virtual connection is composed of two half-connections, one for each connected IPC entity. A half-connection is a point of reference that IPC and SPP entities use to manage a virtual connection. Each half-connection can be in one of four states:
- Connection in progress
- Connected
- Connection-idle
- Disconnected
IPC Half-Connection Initialization Implementation
When a process requests an IPC entity to establish a virtual connection or accept an incoming connection, the IPC entity attempts to create a half-connection. In order to initialize the half-connection, the IPC entity must:
- Calculate the total path metric
- Initialize the half-connection
Calculating the Total Path Metric - On servers, the IPC entity searches both the network table and the neighbor table for the metric value used to calculate the total path metric. The total path metric gives the routing cost of reaching the IPC entity on the destination node. If the table lookup fails, the half-connection is not created.
If the destination node is a neighbor, the IPC entity uses the neighbor metric for the node from the neighbor table.
If the destination node is not a neighbor, the IPC entity uses the routing metric for the node's network from the network table.
The IPC entity uses this formula to calculate the total path metric:
known cost + last hop cost + bias = total path metric
The elements of the formula are described as follows:
Known Cost - Either the metric from the network table or the metric from the neighbor table. If the destination is a neighbor, the metric from the neighbor table is used. Otherwise, the metric from the network table is used. See Chapter 4 and Chapter 5 for more information on these tables.
Last Hop Cost - The neighbor cost for routing packets from the last hop to the final destination client node. If the destination client node is a neighbor, this value is 0 (zero), since the "known cost" in this case is all that is needed. If the destination client node is not a neighbor, this value is the metric for the data link connecting the last router that forwards packets to the destination and the client node itself.
Bias - A floating factor used to adjust the total path metric based on congestion factors. This factor is especially useful for handling congestion on serial lines. Like other routing costs, bias values are in 200-millisecond intervals.
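The lookup rules and the formula above can be sketched as follows. This is an illustrative sketch; the function names and table representations (plain dictionaries) are assumptions, not part of the VINES specification.

```python
# Sketch of the total path metric calculation. All costs, including
# the bias, are in 200-millisecond units.

def known_cost(dest_node, dest_network, neighbor_table, network_table):
    """Pick the base metric: the neighbor table entry if the
    destination node is a neighbor, otherwise the network table entry
    for the destination's network. None means the lookup failed, in
    which case the half-connection is not created."""
    if dest_node in neighbor_table:
        return neighbor_table[dest_node]
    return network_table.get(dest_network)

def total_path_metric(known: int, last_hop: int, bias: int) -> int:
    """known cost + last hop cost + bias = total path metric."""
    return known + last_hop + bias
</```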
On client nodes, the search for metrics is the same as on servers. However, client nodes store only one neighbor path entry, since they have only one active interface. Also, client nodes store only the network entries for networks with which they are currently communicating.
The IPC entity obtains the known cost from the neighbor table or the network table. However, the IPC entity must:
- Learn the last hop cost.
- Calculate the bias.
To learn the last hop cost when the destination client node is not a neighbor, the IPC entity performs the following actions:
1. Sets the metric subfield in the VINES IP header of a packet to 1
2. Sends this packet to the destination client node
The ICP entity on the last hop (that is, the last router) to the destination client node responds with an ICP metric notification packet with the appropriate cost information. See "VINES ICP" in Chapter 3 for more information on ICP.
To calculate the bias, the IPC entity factors in the total path metric and the round-trip time between the local IPC entity and the remote IPC entity. Each time the local IPC entity sends a packet on the connection, the entity times the packet, awaiting its acknowledgment. The time between the transmission of the packet and its acknowledgment is the round-trip time, in 200-millisecond ticks.
Each time a packet is acknowledged, the round-trip time is compared to the current total path metric. When the connection is initially established, the bias is 0 (zero). The bias remains at 0 (zero) until the round-trip time exceeds the current total path metric. At that point, the local IPC entity adjusts the bias upward using this formula:
(known cost + last hop cost) / 2
Remember that the last hop cost applies only when the remote IPC entity is on a client node. Otherwise, just the known cost is used.
The connection metric is recalculated with the new bias. Each time the round-trip time exceeds the total path metric, the same calculation is performed. The bias is capped at 300, or 4 times the sum of the known cost and the last hop cost, whichever comes first.
The local IPC entity resets the bias to 0 (zero) when both of these conditions are met:
- The known cost changes.
- The current bias is 4 times greater than the new known cost.
When the bias is set back to 0 (zero), the IPC entity recalculates it using the new known cost.
The local IPC entity decreases the bias whenever the round-trip time is less than one-quarter of the current total path metric. Whenever this happens, the local IPC entity divides the current bias by 2, and recalculates the total path metric using the new bias.
In summary, when the network is congested or has other kinds of problems, the bias increases by one-half of the sum of the known cost and the last hop cost. When congestion or other problems begin to relax, the round-trip time has to improve to where it is less than one-quarter of the current total path metric before the local IPC entity decreases the bias.
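The increase, cap, and decrease rules above can be sketched in one update function. This is a sketch under stated assumptions: all values are in 200-millisecond ticks, the cap is the smaller of 300 and 4 times the sum of known cost and last hop cost, and the separate reset rule (when the known cost changes) is not modeled. The function name is invented here.

```python
# Sketch of the per-acknowledgment bias adjustment described above.

def adjust_bias(bias: int, known: int, last_hop: int, round_trip: int) -> int:
    metric = known + last_hop + bias           # current total path metric
    if round_trip > metric:
        # Congestion: raise the bias by half the base cost, then cap it.
        bias += (known + last_hop) // 2
        bias = min(bias, 300, 4 * (known + last_hop))
    elif round_trip < metric / 4:
        # Congestion easing: halve the bias.
        bias //= 2
    return bias
```

The connection metric is then recalculated with the returned bias before the next packet is timed.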
Initializing the Connection - The IPC entity initializes the half-connection to the connection-in-progress state, if a packet is to be sent. When receiving a packet, the IPC entity initializes the half-connection to the connected state.
Figure 6-3 illustrates the half-connection initialization algorithm that IPC entities on servers use when a local process requests an outgoing connection.
If a destination client node is not a neighbor, the total path metric cannot be calculated when the half-connection is initialized because the last hop cost is not known. At this time, only the last hop cost of reaching neighbor client nodes is known. The services of VINES IP and ICP must be used to learn the last hop cost. See "VINES IP" and "VINES ICP" in Chapter 3 for more information on cost factors.
Figure 6-4 is a simplified illustration of the half-connection initialization algorithm that IPC entities on servers use when an incoming connection is established.
Each half-connection has an associated connection record, which contains information about the connection. Table 6-5 lists the fields in each half-connection in the connection record for an IPC connection.
When the IPC entity creates a half-connection, the entity performs these actions:
- Initializes ccb_state to connection-in-progress or connected, depending on whether this node is initializing the connection or the remote node is initializing the connection.
- Initializes ccb_nsseq, ccb_lrack, ccb_lrseq, and ccb_lsack to 0 (zero).
- Sets the local CID field to the next assignable value not currently in use.
- Sets the remote CID field to 0 (zero), indicating that it is currently unknown. The remote IPC entity must eventually supply the remote CID.
- Sets ccb_tcount to 30 seconds plus the path metric (ccb_metric).
- Sets the acknowledgment timer (ccb_acount) to 0 (zero).
The local and destination port numbers are not stored in the IPC connection record because IPC connections are public connections. When two communicating processes no longer need the connection, IPC maintains the connection record for 30 seconds plus the total path metric before deleting it. This allows two other communicating processes to use the same connection.
Example Public Connections
Suppose a client program on Workstation A and a service on Server B use an IPC connection to communicate. When the client program and service finish communicating, the IPC entity on each node maintains the connection record. Within several seconds, another client program on Workstation A communicates with another service on Server B. The same connection is used.
Table 6-6 shows the status flags that can be set on IPC connections.
IPC Send Implementation
Every time a process asks the IPC entity to send a reliable IPC message to a port at a remote node, the entity performs a connection record lookup. The lookup is successful if the IPC entity finds a half-connection to the remote node.
The half-connection must be in either the connection-in-progress state or the connected state. If the connection is not in these states, the IPC entity creates a new half-connection. If IPC does not find a half-connection, IPC creates one and initializes it to the connection-in-progress state.
Once the IPC entity finds the connection record, the entity creates an IPC data packet (or packets for a multipacket message). The IPC entity initializes the IPC header as follows:
- Sets the local CID field to the local connection value (ccb_lid).
- Sets the remote CID field to the remote connection value (ccb_rid). If the entity has not received a packet from the other node, the remote CID field is set to 0 (zero). Setting this field to 0 (zero) indicates that the local node does not know the CID that the remote node has assigned to the connection.
- Initializes the local and destination port numbers in the long IPC header.
- Sets the sequence number field to the value of the next sequence number to send (ccb_nsseq). The entity increments the next sequence number to send by one.
- Sets the acknowledgment number field to the last received sequence number from the remote node (ccb_lrseq). See "IPC Acknowledgment Implementation" later in this chapter for more information.
- Sets the length field to the total length of the message.
- Sets the beginning-of-message and end-of-message subfields accordingly.
- Initializes the VINES IP header. For reliable messages, the IPC entity always enables the error bit in the transport control field.
Once the IPC entity completes this procedure, it passes the data packet to the VINES IP entity and places a copy of the packet on the acknowledgment queue. A retransmission timer and a retry count are associated with each data packet on the acknowledgment queue.
The IPC entity initializes the retransmission timer to the total path metric (ccb_metric) and initializes the retry count to 10. Each time the timer expires, the IPC entity sends the data packet again if the retry count is not equal to 0 (zero) and decrements the retry count by one (10, 9, 8, and so on). The connection terminates when the retry count equals 0 (zero). For more information on the total path metric, see "IPC Half-Connection Initialization Implementation" earlier in this chapter.
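The retransmission rule above can be sketched per queued packet. This is an illustrative sketch; the class and parameter names are invented here, and the timer is modeled as a simple tick count rather than a real clock.

```python
# Sketch of one acknowledgment-queue entry: the retransmission timer
# starts at the total path metric and the retry count at 10; each
# expiry resends the packet and decrements the count, and the
# connection terminates when the count reaches 0.

class PendingPacket:
    def __init__(self, packet, path_metric: int):
        self.packet = packet
        self.timer = path_metric   # reloaded after each retransmission
        self.retries = 10

    def on_timer_expired(self, resend, terminate, path_metric: int):
        if self.retries == 0:
            terminate()            # retries exhausted: drop the connection
        else:
            resend(self.packet)
            self.retries -= 1
            self.timer = path_metric
```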
For a multipacket reliable message, the IPC entity must transfer all of the packets that make up the message on the connection in sequential order. In the Banyan IPC implementation, IPC messages cannot be intermingled on the same connection. The receiving IPC entity does not reassemble packets received out-of-order.
The flowchart in Figure 6-5 summarizes the IPC send algorithm.
IPC Receive Implementation
When the IPC entity receives a data packet, the entity performs a connection record lookup. The lookup is successful if the entity finds a half-connection to the node that was the source of the data packet and if the half-connection has these characteristics:
- The half-connection is in the connection-in-progress, connected, or connection-idle state.
- The remote CID field in the received packet is not 0 (zero) and matches the local CID field (ccb_lid) in the connection record. Remember that the sender uses the receiver's local CID as the remote CID.
- The local CID field in the received packet matches the remote CID field in the connection record, and both fields are not 0 (zero). If the remote CID field in the connection record is 0 (zero) and the local CID field in the received packet is non-zero, the IPC entity assigns the value of the local CID field in the packet to the remote CID field in the connection record.
The IPC entity creates a half-connection if these conditions are met:
- A half-connection is not found.
- The remote CID in the IPC header of the received packet is 0 (zero).
If the entity does not find the half-connection and the remote CID field is not equal to 0 (zero), the entity sends a disconnect packet.
Once it creates the half-connection, the IPC entity performs these actions:
- Initializes the half-connection to the connected state
- Sets the remote CID field (ccb_rid) to the local CID from the IPC header in the received packet
If the IPC entity finds the connection record and the half-connection is in the connection-in-progress state, the entity does the following:
- Places the half-connection in the connected state
- Sets the remote CID (ccb_rid) in the record
If the record indicates that the half-connection is in the connection-idle state, the entity resets the half-connection to the connected state.
Every time the IPC entity receives a data packet, the entity compares the sequence number in the IPC header of the received packet to the sequence number from the last received packet (ccb_lrseq). If the IPC entity receives the packet in proper order (sequence number in header is greater than ccb_lrseq by one), it performs these actions:
- Sets the relevant connection record variables.
- Releases all data packets on the acknowledgment queue with sequence numbers lower than or equal to the acknowledgment number. See "IPC Acknowledgment Implementation" later in this chapter for more information.
- Resets the idle receive timer (ccb_tcount) for the connection to 30 seconds plus the total path metric.
- Sets the acknowledgment timer (ccb_acount) to 400 milliseconds.
The IPC entity resets the idle and acknowledgment timers when a data, probe, or acknowledgment packet is received on the connection.
The flowchart in Figure 6-6 summarizes the IPC receive algorithm for data packets. The figure assumes that no errors were found in the packet, and the packet was received in the proper order.
IPC Error Recovery Implementation
The IPC entity returns an error packet on the connection if the entity encounters an error when processing a received data packet. For example, the destination port may not exist or may be full.
The entity returns a probe packet on the connection if the sequence number of the received packet is not in proper order. If the entity received the data packet out of order, the entity makes sure that:
- Values contained in the packet header do not modify variables in the connection record.
- The acknowledgment queue remains intact.
See "IPC Sequencing Implementation" later in this chapter for more information on probe packets.
Like a data packet, an error packet requires a sequence number and an acknowledgment. The IPC entity sets the acknowledgment number field in the header of an error packet to the sequence number of the packet that was received in error.
When the entity receives an error packet in the correct order, the entity releases all data packets on the acknowledgment queue with sequence numbers less than or equal to the value of the acknowledgment number in the error packet.
Example Error Packets
A server receives a packet with a sequence number of 2 from a client node, and cannot process the packet due to insufficient communication buffers. The server creates an error packet and initializes the acknowledgment number and error fields in the long IPC header, as follows:
Acknowledgment Number 2
Error 162
The server notifies the client node of the error by sending the error packet. Upon receiving the packet, the client node releases all packets with a sequence number of 2 or less from its acknowledgment queue.
IPC Sequencing Implementation
Sequencing of packets helps guarantee their reliable delivery on an IPC connection. In their connection records, the IPC entities at each end of the connection maintain the following information:
- The next sequence number to send (ccb_nsseq)
- The last sequence number received (ccb_lrseq)
One IPC entity sends a packet to another as follows:
1. The sending IPC entity copies the value of ccb_nsseq into the sequence number field of the long IPC header before sending the packet.
2. The sending IPC entity increments the value of ccb_nsseq by one.
3. The receiving IPC entity compares the sequence number in the IPC header to the value of ccb_lrseq.
4. The receiving IPC entity accepts the packet if the sequence number in the IPC header is one greater than the value of ccb_lrseq. The receiving IPC entity increments ccb_lrseq by one so that it equals the sequence number in the header.
If the values are equal, the packet is a duplicate of one previously received, and it is dropped.
If the sequence number in the IPC header is less than the value of ccb_lrseq, the packet is dropped.
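The acceptance rule above can be sketched as a single check. This is an illustrative sketch, assuming the 16-bit wraparound (modulo 0x10000) described earlier; the function name and return strings are invented here.

```python
# Sketch of the receive-side sequence check: a packet is accepted only
# when its sequence number is exactly one greater (modulo 0x10000)
# than the last sequence number received (ccb_lrseq).

def on_data_packet(seq: int, ccb_lrseq: int):
    """Return the disposition and the updated ccb_lrseq."""
    if seq == (ccb_lrseq + 1) % 0x10000:
        return "accept", seq
    if seq == ccb_lrseq:
        return "duplicate", ccb_lrseq      # previously received; drop
    return "out-of-sequence", ccb_lrseq    # drop; a probe may be sent
```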
An IPC entity sends a probe packet to inform the IPC entity on the other end of the connection that it received a data or error packet out of sequence. The IPC entity sends the probe packet immediately after the sequence error is detected. A data or error packet is out-of-sequence when one of the following conditions is met:
- The sequence number is less than ccb_lrseq.
- The sequence number equals ccb_lrseq.
- The sequence number is greater than ccb_lrseq by more than one.
The sending IPC entity prepares the probe packet for sending as follows:
- Sets the sequence number field in the probe packet header to the value of ccb_nsseq minus one
- Sets the acknowledgment number field to the value of ccb_lrseq
When an IPC entity receives a probe packet, it can update the acknowledgment queue if the sequence number in the probe packet header equals the value of ccb_lrseq. The IPC entity resends all packets in the acknowledgment queue with sequence numbers greater than the acknowledgment number field of the received probe packet. In addition, the IPC entity sends a probe packet of its own if the received probe packet is out of order.
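The probe response can be sketched as follows. This is an illustrative sketch; the names are invented here, and sequence-number wraparound is ignored for brevity.

```python
# Sketch of probe handling: if the probe is in order (its sequence
# number equals ccb_lrseq), resend every acknowledgment-queue entry
# with a sequence number greater than the probe's acknowledgment
# number.

class QueuedPacket:
    def __init__(self, seq: int):
        self.seq = seq

def on_probe(probe_seq, probe_ack, ccb_lrseq, ack_queue, resend):
    if probe_seq != ccb_lrseq:
        return False              # probe itself out of order
    for pkt in ack_queue:
        if pkt.seq > probe_ack:
            resend(pkt)
    return True
```

In the Figure 6-7 scenario, a probe with an acknowledgment number of 3 would cause the queued packets 4 and 5 to be resent.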
Example Sequence Numbers and Probe Packets
Figure 6-7 shows IPC entities on a client node and a server communicating over an IPC connection. The communication goes well until the IPC entity on the client node receives an out-of-order packet. The IPC entity on the client node sends a probe packet to the IPC entity on the server, requesting a retransmission of all of the packets on the acknowledgment queue with sequence numbers greater than the number in the acknowledgment number field of the received probe packet.
The IPC entity on the server sends an out-of-sequence packet. The IPC entity on the client node was expecting a packet with a sequence number of 4, but instead received one with a sequence number of 5. The IPC entity on the client node replies with a probe packet. This probe packet tells the IPC entity on the server that the last packet received in sequence was packet number 3. The IPC entity on the server resends packets 4 and 5.
IPC Acknowledgment Implementation
IPC acknowledges receipt of packets both implicitly and explicitly.
Implicit Acknowledgment - To provide for efficient packet transmission, IPC entities can implicitly acknowledge received packets with data packets. IPC entities use the acknowledgment number field in data packets to acknowledge previously received packets.
When an IPC entity initializes the long IPC header in data packets, it sets the acknowledgment number field to the last sequence number received (ccb_lrseq) from the IPC entity on the other end of the connection. When the IPC entity on the other end of the connection receives the data packet, the acknowledgment number field indicates that all packets previously sent with sequence numbers equal to and less than the acknowledgment number arrived successfully. The IPC entity on the other end of the connection can delete those packets from its acknowledgment queue.
The acknowledgment timer (ccb_acount) gives an implicit acknowledgment enough time to take place. When a packet is received on a connection, this timer is set to 200 milliseconds, which allows time for the IPC entity to send a data packet. The data packet provides the IPC entity on the other end of the connection with an implicit acknowledgment. When the timer expires, an explicit acknowledgment takes place.
Example Implicit Acknowledgment
In Figure 6-8, a client node sends two packets with sequence numbers of 1 and 2, respectively, to a server on an IPC connection. The IPC entity on the server receives the packets in proper order and is able to send a data packet back to the client node before the connection's acknowledgment timer expires.
The data packet contains an acknowledgment number field with a value of 2. The value was copied in the field from the last received sequence number (ccb_lrseq) in the connection record on the server. When the client node receives the data packet, it deletes the packets that it previously sent from its acknowledgment queue.
Explicit Acknowledgment - IPC entities can also explicitly acknowledge previously received packets by sending an acknowledgment packet. The entity generates an acknowledgment packet if both of these conditions are met:
- The acknowledgment timer expires.
- The last acknowledgment number sent variable (ccb_lsack) is not equal to the last received sequence number variable (ccb_lrseq).
When an IPC entity creates an acknowledgment packet, it copies the last received sequence number into the acknowledgment number field. When the IPC entity on the other end of the connection receives the acknowledgment packet, the acknowledgment number field indicates that all packets previously sent with sequence numbers less than or equal to the acknowledgment number arrived successfully. The entity can delete those packets from its acknowledgment queue.
IPC does not increment the sequence number in acknowledgment packets. In acknowledgment packets, the sequence number is equal to the sequence number in the last packet sent. For example, if the IPC entity sends a data packet with a sequence number of 20, and then sends an acknowledgment packet, the acknowledgment packet will also have a sequence number of 20.
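A minimal C sketch of the explicit acknowledgment rules, using the connection record field names from the text (ccb_lrseq, ccb_lsack, ccb_nsseq); the helper names and the assumption that ccb_nsseq holds the next sequence number to send are mine, not the VINES sources.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical slice of the connection record (field names from the text). */
struct ipc_conn {
    uint16_t ccb_lrseq;  /* last received sequence number   */
    uint16_t ccb_lsack;  /* last acknowledgment number sent */
    uint16_t ccb_nsseq;  /* next sequence number to send    */
};

/* Called when the acknowledgment timer (ccb_acount) expires: an explicit
 * acknowledgment is needed only if something received has not yet been
 * acknowledged, implicitly or explicitly. */
bool ack_needed(const struct ipc_conn *c)
{
    return c->ccb_lsack != c->ccb_lrseq;
}

/* Fill in the two numbers an acknowledgment packet carries.  Note the
 * sequence number is NOT incremented: it repeats the sequence number of
 * the last packet sent. */
void build_ack(struct ipc_conn *c, uint16_t *seq, uint16_t *ack)
{
    *seq = (uint16_t)(c->ccb_nsseq - 1); /* sequence of last packet sent  */
    *ack = c->ccb_lrseq;                 /* acknowledge all through here  */
    c->ccb_lsack = c->ccb_lrseq;         /* remember what we acknowledged */
}
```

This reproduces the example in the text: after sending a data packet with sequence number 20, the acknowledgment packet also carries sequence number 20.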
IPC Idle Connection Timeout Implementation
The idle receive timer (ccb_tcount) in the connection record eliminates idle connections. The timer runs on every IPC connection, regardless of whether the connection is sending or receiving packets. When the idle receive timer expires, the IPC entity puts the connection into the connection-idle state if the connection was previously in the connected or connection-in-progress state. Otherwise, the entity terminates the connection.
The entity resets the idle receive timer when it places the connection in connection-idle state. Except for a disconnect packet, any packet received on the connection causes the connection to re-enter connected state. The reception of a disconnect packet frees the connection immediately.
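The state transitions above can be summarized in a small C sketch; the enum and function names are hypothetical, not part of the VINES sources.

```c
#include <assert.h>

enum conn_state {
    CONN_IN_PROGRESS,
    CONN_CONNECTED,
    CONN_IDLE,
    CONN_TERMINATED
};

/* When the idle receive timer (ccb_tcount) expires: a live connection is
 * parked in the connection-idle state (and the timer is reset); any other
 * connection is terminated. */
enum conn_state on_idle_receive_expiry(enum conn_state s)
{
    if (s == CONN_CONNECTED || s == CONN_IN_PROGRESS)
        return CONN_IDLE;
    return CONN_TERMINATED;
}

/* While idle: any packet except a disconnect re-enters the connected
 * state; a disconnect packet frees the connection immediately. */
enum conn_state on_packet_while_idle(int is_disconnect)
{
    return is_disconnect ? CONN_TERMINATED : CONN_CONNECTED;
}
```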
Sequenced Packet Protocol (SPP)

SPP provides transport layer support for virtual connections. An SPP virtual connection is an association between two transport layer ports anywhere in the network. Any single transport layer port can support multiple SPP virtual connections simultaneously. Data messages of unlimited size can be transferred on an SPP virtual connection. SPP guarantees that data messages are delivered only once and in proper order. A single SPP entity exists on each node.
The SPP and IPC protocols use a similar packet header format. The differences between IPC and SPP virtual connections are described in Table 6-7.
SPP formats packets with a common 16-byte header, as shown in Figure 6-9.
The fields in the SPP header are the same as the fields in the long IPC header with the following exceptions:
- The window field does not appear in the long IPC header.
- SPP connections do not support the error packet type.
- The 0x80 bit is set in these conditions:
  - Data packets, when the connection is in single buffer mode.
  - All acknowledgment packets, to indicate that the local SPP entity can send an immediate acknowledgment if the remote SPP entity requests it.

See "SPP Pacing Implementation" later in this chapter for more information on single buffer mode.
SPP entities use the window field for flow control. An SPP entity sets this field to the highest sequence number that it can currently accept from the SPP entity on the other end of the connection. The SPP entity on the other end of the connection cannot send a sequence number that exceeds the value of the current window.
When an SPP entity establishes a connection, it restricts itself to sending one packet until the SPP entity on the other end of the connection increases the window, indicating that additional packets can be sent.
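The send-side check implied by the window field can be sketched in C. This is a sketch under assumptions: the field names come from the connection record described later (ccb_nsseq, ccb_lrwnd), while the structure and function names are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical view of the sending side of an SPP half-connection. */
struct spp_send_state {
    uint16_t ccb_nsseq;  /* next sequence number to send             */
    uint16_t ccb_lrwnd;  /* last window value received from the peer */
};

/* The peer's window is the highest sequence number it will currently
 * accept, so a packet may go out only while the next sequence number
 * does not exceed that window. */
bool may_send(const struct spp_send_state *s)
{
    return s->ccb_nsseq <= s->ccb_lrwnd;
}
```

A sender whose next sequence number is 25 against a received window of 27 may transmit; once the next sequence number reaches 28, it must wait for the peer to raise the window.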
VINES SPP Implementation Notes
An SPP virtual connection consists of two half-connections, one associated with each transport layer port handling information flow on the connection. Each half-connection can be in one of the following states:
- Connection in progress
- Connected
- Disconnected
SPP Half-Connection Initialization Implementation
When a process requests the SPP entity to establish a virtual connection or accept an incoming connection, the SPP entity attempts to create a half-connection. The SPP entity creates half-connections using the same algorithm as IPC entities. See "VINES IPC Implementation Notes" earlier in this chapter for more information.
The rest of this section describes:
- SPP connection record
- SPP status flags
Connection Record - Each SPP half-connection has an associated connection record that is similar to the IPC connection record. Table 6-8 lists the fields in the connection record for an SPP connection.
While creating the connection, the SPP entity initializes several of the fields in Table 6-8, as follows:
- Initializes ccb_nsseq, ccb_lrseq, ccb_lrack, and ccb_lsack to 0 (zero).
- Initializes the local CID to the next assignable value not currently in use and the remote CID to 0 (zero), indicating that the remote CID is currently unknown.
- Initializes ccb_nswnd to 4 and ccb_lrwnd to 1. This means that the entity can send only one packet and will allow the SPP entity on the other end of the connection to send up to four packets. Because the SPP entity on the other end of the connection initializes these fields in the same way, each side initially sends only one packet while allowing the other side to send up to four packets.
- Initializes the idle transmit timer (ccb_tidle) to 30 seconds and the idle receive timer for the connection (ccb_tcount) to 120 seconds plus the total path metric value (ccb_metric).
- Initializes the acknowledgment timer (ccb_acount) to 0 (zero).
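The initialization rules in the list above might look like this in C. The structure is a guess at the relevant slice of the Table 6-8 connection record; the field names come from the text, the layout and function name do not.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical slice of an SPP connection record (field names from the
 * text; layout assumed). */
struct spp_conn {
    uint16_t ccb_lid;     /* local connection ID                */
    uint16_t ccb_rid;     /* remote connection ID (0 = unknown) */
    uint16_t ccb_nsseq, ccb_lrseq, ccb_lrack, ccb_lsack;
    uint16_t ccb_nswnd;   /* next window value to send          */
    uint16_t ccb_lrwnd;   /* last window value received         */
    uint32_t ccb_tidle;   /* idle transmit timer, seconds       */
    uint32_t ccb_tcount;  /* idle receive timer, seconds        */
    uint32_t ccb_acount;  /* acknowledgment timer               */
    uint32_t ccb_metric;  /* total path metric, seconds         */
};

/* Initialize a half-connection as described in the text; local_cid is
 * assumed to be the next assignable CID not currently in use. */
void spp_conn_init(struct spp_conn *c, uint16_t local_cid, uint32_t metric)
{
    c->ccb_nsseq = c->ccb_lrseq = c->ccb_lrack = c->ccb_lsack = 0;
    c->ccb_lid    = local_cid;
    c->ccb_rid    = 0;             /* remote CID not yet known         */
    c->ccb_nswnd  = 4;             /* let the peer send four packets   */
    c->ccb_lrwnd  = 1;             /* we may send only one packet      */
    c->ccb_metric = metric;
    c->ccb_tidle  = 30;            /* idle transmit timer              */
    c->ccb_tcount = 120 + metric;  /* idle receive timer               */
    c->ccb_acount = 0;             /* acknowledgment timer             */
}
```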
SPP Send Implementation
Every time a process sends SPP data to a port at a remote node, the process looks up the local CID, which is assigned by the SPP entity. The process uses this ID to pass SPP data to the SPP entity.
The entity initializes the data packet header similarly to IPC, with the following differences:
- The SPP entity sets the window field to the value of ccb_nswnd.
- The SPP entity does not initialize the length field.
See "IPC Send Implementation" earlier in this chapter for more information.
SPP entities deliver packets of up to 1450 bytes in length to VINES IP for transmission.
SPP Receive Implementation
When the SPP entity receives a data packet, the entity looks up the connection record for the connection on which the packet arrived. The lookup succeeds if the entity finds a half-connection to the source node of the data packet and the half-connection has one of these characteristics:
- The half-connection is in the connection-in-progress state, the remote CID field in the received packet is not 0 (zero), and the remote CID field in the packet is the same as the local CID field (ccb_lid) in the connection record. Remember, the sending SPP entity places the value of the receiving entity's local CID (ccb_lid) in the remote CID field.
- The half-connection is in the connected state, the remote CID field in the received packet matches the local CID field (ccb_lid), and the local CID field in the received packet matches the remote CID field in the connection record (ccb_rid). If the remote CID field in the connection record is 0 (zero) and the local CID field in the received packet is non-zero, the SPP entity assigns the value of the local CID field in the packet to the remote CID field in the connection record (ccb_rid).
If the SPP entity does not find the connection record, the entity sends a disconnect packet if one of the following conditions is true:
- The remote CID field in the SPP header is non-zero.
- The destination port does not exist.
If the SPP entity does not find a connection record, the entity creates one if both of the following conditions are true:
- The remote CID field in the SPP header is 0 (zero).
- The destination port exists.
The SPP entity initializes the connection record to the connected state and sets the remote CID variable to the local CID from the packet header received from the remote node. The entity also informs the process that owns the destination port that the connection is established.
If the SPP entity finds the connection record and the half-connection is in the connection-in-progress state, the entity performs these actions:
- Places the half-connection in the connected state
- Sets the remote CID (ccb_rid) in the connection record
- Returns the pended connection establishment request to the application
When the entity receives a data packet, it compares the sequence number in the SPP header of the received packet to the sequence number of the last received packet. If the packet arrived in proper order, the entity performs these actions:

- Sets the relevant connection record fields
- Releases all data packets on the acknowledgment queue with sequence numbers less than or equal to the acknowledgment number
- Resets the idle receive timer (ccb_tcount) to 120 seconds plus the total path metric
- Sets the acknowledgment timer to 400 milliseconds
The SPP entity resets the idle and acknowledgment timers when a data, probe, or acknowledgment packet is received on the connection.
The flowchart in Figure 6-10 summarizes the SPP receive algorithm for data packets. The figure assumes that no errors were found in the packet.
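As a rough companion to the flowchart, the lookup-and-classify decisions above might be sketched in C as follows. The enum values, structure layout, and function name are hypothetical; the sketch only decides what to do with an incoming packet and does not perform the state changes themselves.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum spp_action { ACCEPT, CREATE_CONN, SEND_DISCONNECT, DROP };

enum spp_state { IN_PROGRESS, CONNECTED, DISCONNECTED };

/* Hypothetical slice of the half-connection record. */
struct spp_conn {
    enum spp_state state;
    uint16_t ccb_lid;   /* our local CID               */
    uint16_t ccb_rid;   /* peer's CID, 0 while unknown */
};

/* pkt_src_cid is the sender's local CID (the packet's local CID field);
 * pkt_dst_cid is the CID the sender believes we hold (the packet's
 * remote CID field).  conn is the half-connection found for the source
 * node, or NULL if the lookup failed. */
enum spp_action classify_packet(const struct spp_conn *conn,
                                uint16_t pkt_src_cid, uint16_t pkt_dst_cid,
                                int dest_port_exists)
{
    if (conn == NULL) {
        /* No record: create one only for a fresh open (remote CID zero)
         * to a port that exists; otherwise answer with a disconnect. */
        if (pkt_dst_cid == 0 && dest_port_exists)
            return CREATE_CONN;
        return SEND_DISCONNECT;
    }
    if (conn->state == IN_PROGRESS &&
        pkt_dst_cid != 0 && pkt_dst_cid == conn->ccb_lid)
        return ACCEPT;  /* completes connection establishment */
    if (conn->state == CONNECTED && pkt_dst_cid == conn->ccb_lid &&
        (conn->ccb_rid == pkt_src_cid || conn->ccb_rid == 0))
        return ACCEPT;  /* ccb_rid is learned from the first such packet */
    return DROP;
}
```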
SPP Sequencing and Acknowledgment Implementations
How SPP implements sequencing and acknowledgments is identical to how IPC performs these functions, with one exception. When SPP sends probe and acknowledgment packets, it sets the window field in the packet header to the value of ccb_nswnd in the connection record. See "IPC Sequencing Implementation" and "IPC Acknowledgment Implementation" earlier in this chapter for more information.
SPP Pacing Implementation
Because SPP supports messages of unlimited size, it needs to ensure that one side of the connection does not overrun the other. SPP uses the window method to control the flow of data on a connection. This method restricts the number of packets that one side can send to another at any one time on a connection. In effect, the number of packets acts as the transmission window.
Each side of the SPP connection maintains this information in the connection record:
Last Window Value Sent (ccb_lswnd) - The current window for the SPP entity on the other end of the connection. This field specifies the highest sequence number, and therefore the number of packets, that the SPP entity on the other end of the connection can send.
Next Window Value to Send (ccb_nswnd) - This field is the next window for the SPP entity on the other end of the connection. When the window changes, this value will be sent to the SPP entity on the other end of the connection in the first SPP packet that is available to be sent.
Last Window Value Received (ccb_lrwnd) - The current window for this side of the connection. This field determines the number of packets that this side is allowed to send.
Figure 6-11 illustrates how SPP pacing works. The figure shows a snapshot from an SPP packet exchange between a client node and a server.
The rest of this section describes how SPP:
- Increases the window
- Enters single buffer mode
Increasing the Window - How much the SPP entity increases the window value depends on the number of packets in its receive queue. When the SPP entity has two or fewer packets in its receive queue, the SPP entity increases the window size by adding 4 to the last received sequence number, as follows:
window = ccb_lrseq + 4
Example Increasing the Window by 4
Suppose an SPP entity establishes a window value of 24, and then the SPP entity on the other end of the connection sends three SPP packets with sequence numbers of 21, 22, and 23, respectively. The receiving SPP entity processes the incoming packets fast enough that no more than two are in the receive queue at any one time, so it increases the window value to 27 (23 + 4). Notice that the SPP entity does not wait for the packet with the sequence number of 24 to arrive before increasing the window to 27.
When the SPP entity has more than two packets in its receive queue, the SPP entity increases the window size by subtracting the number of packets in the receive queue from the last received sequence number, and adding 4 to the result, as follows:
window = (ccb_lrseq - ccb_norpkt) + 4
Example Increasing the Window Using the Formula
Suppose an SPP entity establishes a window value of 32, and then the SPP entity on the other end of the connection sends three SPP packets with sequence numbers of 30, 31, and 32, respectively. Because of insufficient communication buffers, the SPP entity has all three packets in its receive queue. The SPP entity applies the formula above as follows:

window = (ccb_lrseq - ccb_norpkt) + 4
       = (32 - 3) + 4
       = 33
The SPP entity will never decrease its window value.
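The two window formulas and the never-decrease rule combine into one small function. This is a sketch: ccb_lrseq and ccb_norpkt are the field names used in the formulas above, while next_window and prev_window are hypothetical names.

```c
#include <assert.h>
#include <stdint.h>

/* Compute the window value to advertise from the last received sequence
 * number (ccb_lrseq), the number of packets currently in the receive
 * queue (ccb_norpkt), and the previously advertised window. */
uint16_t next_window(uint16_t ccb_lrseq, uint16_t ccb_norpkt,
                     uint16_t prev_window)
{
    uint16_t w;
    if (ccb_norpkt <= 2)
        w = ccb_lrseq + 4;                 /* receiver is keeping up   */
    else
        w = (ccb_lrseq - ccb_norpkt) + 4;  /* receiver is falling back */
    return w > prev_window ? w : prev_window;  /* never decrease */
}
```

The function reproduces both worked examples: next_window(23, 2, 24) yields 27, and next_window(32, 3, 32) yields 33.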
When SPP entities first establish a connection, each side initializes the next window value to send (ccb_nswnd) to 4 and the last window value received (ccb_lrwnd) to 1. This means that each entity can only send one packet, and allows the SPP entity on the other end of the connection to send up to four packets. Because both sides initialize these fields in the same way, each side only sends one packet initially, and then allows the SPP entity on the other end of the connection to send up to 4 packets.
Entering Single Buffer Mode - SPP connections enter single buffer mode when an SPP entity does either of the following:

- Sends the same packet more than 7 times in a 10-second period
- Receives more than 7 probe packets in a 10-second period

SPP entities also enter single buffer mode when the sum of retransmitted packets and received probe packets within a 10-second period exceeds 7.
When a connection is in single buffer mode, the SPP entity sets the "send immediate acknowledgment" bit in the control byte of every data packet that it sends on the connection. This setting forces every data packet that the SPP entity sends to be acknowledged before the entity can send the next data packet. See Table 6-3 for more information on the control byte.
The SPP connection remains in single buffer mode until the connection is idle for 30 seconds (that is, until the next time that an acknowledgment packet is sent to keep the connection alive). See "SPP Idle Connection Timeout Implementation" below for more information on sending acknowledgment packets to keep connections alive.
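The entry conditions can be captured in a few lines of C, assuming some hypothetical per-connection bookkeeping that counts retransmissions and received probes over the 10-second period (the text specifies only the thresholds, not how the counts are kept).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical rolling counters for one connection over the last 10 s. */
struct sbm_stats {
    int retransmits;      /* times the same packet was resent */
    int probes_received;  /* probe packets received           */
};

/* A connection drops into single buffer mode when either count, or the
 * sum of the two, exceeds 7 within the 10-second period.  The first two
 * tests are implied by the third; they are kept to mirror the text. */
bool enters_single_buffer_mode(const struct sbm_stats *s)
{
    return s->retransmits > 7 ||
           s->probes_received > 7 ||
           s->retransmits + s->probes_received > 7;
}
```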
SPP Idle Connection Timeout Implementation
The idle receive timer in the connection record eliminates connections to nodes that no longer exist or that are no longer reachable. If the idle receive timer expires, the SPP entity puts the connection record into disconnected state and issues a disconnect packet.
Any time the entity receives a data, probe, or acknowledgment packet on the connection, the entity resets the idle receive timer to 120 seconds plus the routing metric cost.
The idle transmit timer guarantees a minimum flow of packet traffic on a live connection. The SPP entity resets the idle transmit timer to 30 seconds every time it sends a data, probe, or acknowledgment packet on the connection. If the timer expires, the SPP entity issues an acknowledgment packet on the connection and resets the idle transmit timer.
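The two timer-rearming rules can be sketched in C using the field names from the text (ccb_tcount, ccb_tidle, ccb_metric); the structure layout and function names are hypothetical.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical timer slice of an SPP connection record. */
struct spp_timers {
    uint32_t ccb_tcount;  /* idle receive timer, seconds  */
    uint32_t ccb_tidle;   /* idle transmit timer, seconds */
    uint32_t ccb_metric;  /* routing metric cost, seconds */
};

/* Receiving a data, probe, or acknowledgment packet rearms the idle
 * receive timer to 120 seconds plus the routing metric cost. */
void on_receive(struct spp_timers *t)
{
    t->ccb_tcount = 120 + t->ccb_metric;
}

/* Sending a data, probe, or acknowledgment packet rearms the idle
 * transmit timer to 30 seconds; its expiry triggers a keep-alive
 * acknowledgment, which rearms it again through this same path. */
void on_send(struct spp_timers *t)
{
    t->ccb_tidle = 30;
}
```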