4 Flow Control and Buffering
Having considered the establishment and release of connections in some detail, let us now look at how connections are managed while they are in use. One key issue is flow control.
In some ways the flow control problem in the transport layer is the same as in the data link layer, but in other ways it is different.
The basic similarity is that both layers require a sliding window or other scheme on each connection to keep a fast sender from overrunning a slow receiver.
The main difference is that a router usually has relatively few lines, whereas a host may have numerous connections.
This difference makes it impractical to implement the data link layer's buffering strategy in the transport layer.
In the data link protocols, frames are buffered at both the sending and the receiving router. Protocol 6, for example, requires both sender and receiver to dedicate MAX_SEQ + 1 buffers to each line, half for input and half for output.
For a host with a maximum of, say, 64 connections and a 4-bit sequence number, this protocol would require 1024 buffers.
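The arithmetic behind this figure can be checked with a short sketch (the connection count and sequence-number width are the values assumed above):

```python
# Buffers needed if the data-link buffering strategy were carried over
# to transport connections: each connection dedicates MAX_SEQ + 1 buffers.
SEQ_BITS = 4                    # 4-bit sequence numbers
MAX_SEQ = 2**SEQ_BITS - 1      # largest sequence number: 15
CONNECTIONS = 64               # maximum simultaneous connections

buffers_per_connection = MAX_SEQ + 1   # 16: half for input, half for output
total = CONNECTIONS * buffers_per_connection
print(total)  # 1024
```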
In the data link layer, the sending side must buffer outgoing frames because they might have to be retransmitted.
If the subnet provides datagram service, the sending transport entity must also buffer, for the same reason.
If the receiver knows that the sender buffers all TPDUs until they are acknowledged, the receiver may or may not dedicate specific buffers to specific connections, as it sees fit.
The receiver may, for example, maintain a single buffer pool shared by all connections.
When a TPDU comes in, an attempt is made to acquire a buffer dynamically. If one is available, the TPDU is accepted; otherwise, it is discarded.
Since the sender is prepared to retransmit TPDUs lost by the subnet, no harm is done by having the receiver discard TPDUs, although some resources are wasted.
The sender simply keeps trying until it receives an acknowledgment.
In summary, if the network service is unreliable, the sender must buffer all TPDUs sent, just as in the data link layer.
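This sender-side rule can be sketched as follows; the class and method names are illustrative, not taken from any real protocol stack:

```python
# Sketch: over an unreliable network, the sender keeps a copy of every
# TPDU until it is acknowledged, retransmitting on timeout.
class BufferingSender:
    def __init__(self):
        self.unacked = {}          # seq -> TPDU data awaiting acknowledgement

    def send(self, seq, data, transmit):
        self.unacked[seq] = data   # retain a copy for possible retransmission
        transmit(seq, data)

    def on_ack(self, seq):
        self.unacked.pop(seq, None)    # safe to release the buffer now

    def on_timeout(self, seq, transmit):
        if seq in self.unacked:        # still unacknowledged: try again
            transmit(seq, self.unacked[seq])
```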
However, with a reliable network service, other arrangements are possible. In particular, if the sender knows that the receiver always has buffer space available, it need not retain copies of the TPDUs it sends.
However, if the receiver cannot guarantee that every arriving TPDU will be accepted, the sender will have to buffer anyway.
In the latter case, the sender cannot rely on the network layer's acknowledgement, because that acknowledgement means only that the TPDU arrived, not that it was accepted.
Even if the receiver has agreed to do the buffering, there still remains the question of buffer size.
If most TPDUs are about the same size, it is natural to organize the buffers as a pool of identically sized buffers, with one TPDU per buffer.
However, if there is wide variation in TPDU size, from a few characters typed at a terminal to thousands of characters from file transfers, a pool of fixed-sized buffers presents problems.
If the buffer size is chosen equal to the largest possible TPDU, space will be wasted whenever a short TPDU arrives.
If the buffer size is chosen smaller than the maximum TPDU size, multiple buffers will be needed for long TPDUs, with the attendant complexity.
Another approach to the buffer size problem is to use variable-sized buffers.
The advantage here is better memory utilization, at the price of more complicated buffer management.
A third possibility is to dedicate a single large circular buffer to each connection.
This scheme also makes good use of memory when all connections are heavily loaded, but it is poor if some connections are lightly loaded.
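The third option can be sketched as a minimal per-connection circular buffer (a simplified illustration; a real transport entity would also track TPDU boundaries):

```python
# Sketch: one large circular buffer dedicated to a connection.
# Incoming TPDU bytes are appended at the tail; the application
# consumes bytes from the head. Capacity is fixed per connection.
class CircularBuffer:
    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.head = 0            # index of next byte to read
        self.size = 0            # bytes currently stored

    def put(self, data):
        if self.size + len(data) > self.capacity:
            return False         # no room: the TPDU must be refused
        for b in data:
            self.buf[(self.head + self.size) % self.capacity] = b
            self.size += 1
        return True

    def get(self, n):
        n = min(n, self.size)    # read up to n buffered bytes
        out = bytes(self.buf[(self.head + i) % self.capacity] for i in range(n))
        self.head = (self.head + n) % self.capacity
        self.size -= n
        return out
```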
The optimal balance between buffering at the source and destination depends on the type of traffic carried by the connection.
For low-bandwidth bursty traffic, such as that produced by an interactive terminal, it is better not to dedicate any buffers, but rather to acquire them dynamically at both ends.
Since the sender cannot be sure the receiver will be able to acquire a buffer, the sender must retain a copy of each TPDU until it is acknowledged.
On the other hand, for file transfer and other high-bandwidth traffic, it is better if the receiver dedicates a full window of buffers, to allow the data to flow at maximum speed.
Thus, for low-bandwidth bursty traffic, it is better to buffer at the sender, and for high-bandwidth smooth traffic, it is better to buffer at the receiver.
As connections are opened and closed, and as the traffic pattern changes, the sender and receiver need to adjust their buffer allocations dynamically.
Consequently, the transport protocol should allow a sending host to request buffer space at the other end.
Buffers could be allocated per connection, or collectively, for all the connections running between the two hosts.
Alternatively, the receiver, knowing its buffer capacity (but not knowing the offered traffic), could tell the sender "I have reserved X buffers for you."
If the number of open connections increases, it may become necessary to reduce an allocation, so the protocol should provide for this possibility.
A reasonably general way to manage dynamic buffer allocation is to decouple the buffering from the acknowledgements.
Dynamic buffer management means, in effect, a variable-sized window. Initially, the sender requests a certain number of buffers, based on its perceived needs.
The receiver then grants as many of these as it can afford. Every time the sender transmits a TPDU, it must decrement its allocation, stopping altogether when the allocation reaches zero.
The receiver then piggybacks both acknowledgements and buffer allocations onto the reverse traffic.
Suppose, however, that the buffer allocation information travels in separate TPDUs and is not piggybacked onto reverse traffic. Initially, A wants eight buffers, but it is granted only four. It then sends three TPDUs, of which the third is lost.
TPDU 6 acknowledges receipt of all TPDUs up to and including sequence number 1, thus allowing A to release those buffers, and furthermore informs A that it has permission to send three more TPDUs starting beyond 1 (i.e., TPDUs 2, 3, and 4).
A knows that it has already sent number 2, so it thinks that it may send TPDUs 3 and 4, which it proceeds to do.
At this point it is blocked and must wait for more buffer allocation. Timeout-induced retransmissions (line 9), however, may occur while it is blocked, since they use buffers that have already been allocated.
In line 10, B acknowledges receipt of all TPDUs up to and including 4 but refuses to allow A to continue.
Such a situation is impossible with fixed-window protocols.
The next TPDU from B allocates another buffer and allows A to continue.
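The sender's side of such a credit scheme can be sketched as follows (names are illustrative; the point is that acknowledgements release retransmission copies, while a separate allocation field governs how much may be sent):

```python
# Sketch of credit-based (decoupled) flow control at the sender:
# acks free the sender's buffered copies, but only an explicit buffer
# allocation ("credit") from the receiver permits further sending.
class CreditSender:
    def __init__(self):
        self.next_seq = 0
        self.credits = 0           # buffers granted by the receiver
        self.unacked = {}          # copies kept until acknowledged

    def grant(self, n):
        self.credits += n          # receiver allocated n more buffers

    def can_send(self):
        return self.credits > 0

    def send(self, data, transmit):
        if not self.can_send():
            return False           # blocked until a new allocation arrives
        seq = self.next_seq
        self.next_seq += 1
        self.credits -= 1          # each TPDU consumes one credit
        self.unacked[seq] = data
        transmit(seq, data)
        return True

    def on_ack(self, upto):
        for s in list(self.unacked):
            if s <= upto:          # ack releases copies but grants no credit
                del self.unacked[s]
```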
Potential problems with buffer allocation schemes of this kind can arise in datagram networks if control TPDUs can be lost.
Suppose B has now allocated more buffers to A, but the allocation TPDU was lost. Since control TPDUs are not sequenced or timed out, A is now deadlocked.
To prevent this situation, each host should periodically send control TPDUs giving the acknowledgement and buffer status on each connection. That way, the deadlock will be broken sooner or later.
Until now we have tacitly assumed that the only limit on the sender's data rate is the amount of buffer space available at the receiver.
As memory prices continue to fall dramatically, it may become feasible to equip hosts with so much memory that lack of buffers is rarely a problem.
When buffer space no longer limits the maximum flow, another bottleneck appears: the carrying capacity of the subnet.
If adjacent routers can exchange at most x packets/sec and there are k disjoint paths between a pair of hosts, there is no way for those hosts to exchange more than kx TPDUs/sec, no matter how much buffer space is available at each end.
If the sender pushes too hard (i.e., sends more than kx TPDUs/sec), the subnet will become congested, because it will be unable to deliver TPDUs as fast as they come in.
What is needed is a mechanism based on the subnet's carrying capacity rather than on the receiver's buffering capacity.
Clearly, the flow control mechanism must be applied at the sender to prevent it from having too many unacknowledged TPDUs outstanding at once.
Belsnes (1975) proposed using a sliding window flow control scheme in which the sender dynamically adjusts the window size to match the network's carrying capacity.
If the network can handle c TPDUs/sec and the cycle time (including transmission, propagation, queueing, processing at the receiver, and return of the acknowledgement) is r, then the sender's window should be cr.
With a window of this size, the sender normally operates with the pipeline full.
Any small decrease in network performance will cause it to block.
To adjust the window size periodically, the sender could monitor both parameters and then compute the desired window size.
The carrying capacity can be determined by simply counting the number of TPDUs acknowledged during some time period and then dividing by that period.
During the measurement, the sender should send as fast as it can, to make sure that the network's carrying capacity, and not a low input rate, is the factor limiting the acknowledgement rate.
The time required for a transmitted TPDU to be acknowledged can be measured exactly and a running mean maintained.
Since the network's capacity depends on the amount of traffic in it, the window size should be adjusted frequently to track changes in the carrying capacity. As we will see later, the Internet uses a similar scheme.
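This window-sizing computation can be sketched as follows, assuming the sender samples both quantities as described (the smoothing factor for the running mean is an arbitrary illustrative choice, not from Belsnes):

```python
# Sketch of Belsnes-style window sizing: measure the carrying capacity c
# (acked TPDUs per second) and the cycle time r (a running mean of the
# per-TPDU acknowledgement delay), then set the window to c * r.
def carrying_capacity(acked_count, period_seconds):
    return acked_count / period_seconds        # c, in TPDUs/sec

def running_mean(old_mean, sample, alpha=0.125):
    # exponentially weighted moving average of the measured cycle time
    return (1 - alpha) * old_mean + alpha * sample

def window_size(c, r):
    return max(1, round(c * r))                # cr TPDUs keep the pipeline full

# Example: 200 TPDUs acknowledged in 0.5 s, cycle time 40 ms
c = carrying_capacity(200, 0.5)                # 400 TPDUs/sec
w = window_size(c, 0.040)                      # 400 * 0.040 = 16 TPDUs
print(w)  # 16
```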