NOTICE: This document is a work in progress.  The semantics are likely to
change, and the model itself will grow much more complex as more of what is
described here is implemented.  Nothing contained within this document should
be interpreted as final and unchanging.

This document describes the operation of the pdtpd transfer manager.  The
transfer manager is an event-driven state machine which regulates the flow
of file pieces between the source server and all clients connected to a
server.  This document assumes a basic familiarity with the underlying
protocol; a full specification may be found at http://pdtp.org/protocol.php

The primary structure operated on by the transfer manager is the client
manager.  The client manager tracks the following values for each client
(a data-structure sketch follows the lists below):

* Which transfer mode applies to this client, either "active" or
  "passive".  In "active" mode a client can both make outgoing
  connections and accept incoming ones.  In "passive" mode a client
  may only make outgoing connections.

* A boolean value indicating whether the client, if in active mode,
  has ever successfully accepted a connection from one of its peers on its
  listener port.  The server detects this by waiting to see if it receives
  a transfer authorization transaction from this client before an error
  notification is received from the connecting peer.  This is used for 
  automatic detection of clients behind firewalls.

* A counter of the number of failed attempts by other clients to connect
  to this client.  This is also used for automatic firewall detection.
  If the above boolean value is false and this counter surpasses a given
  threshold (currently set to 4), the client is forced into passive mode.

* A counter of the number of bytes this client has downloaded.  This is
  incremented whenever the server receives confirmation from both clients
  involved that a piece has been successfully transferred.

* A counter of the number of bytes this client has successfully uploaded.
  This is incremented whenever the server receives confirmation from
  both clients involved that a piece has been successfully uploaded.

Then for each file being transferred by the given client, the server stores
the following:

* A set of pieces in the file which have been successfully transferred.

* A list of pieces which the client is currently downloading, as well
  as which client they are being downloaded from.

* A queue of failed piece transfers that need to be resumed, ordered by
  the time at which that piece was selected for transfer.
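
As a rough sketch, the state described in the two lists above might be
modeled as follows.  This is illustrative Python; none of these names
come from the protocol itself:

    from dataclasses import dataclass, field

    @dataclass
    class FileState:
        # Per-file state the server keeps for each client.
        transferred: set = field(default_factory=set)    # pieces received OK
        downloading: dict = field(default_factory=dict)  # piece -> uploader
        failed: list = field(default_factory=list)       # (selection_time,
                                                         #  piece), oldest first

    @dataclass
    class ClientState:
        # Per-client state, keyed by the client's 32-bit IPv4 address.
        active: bool = True                # "active" vs. "passive" mode
        accepted_connection: bool = False  # ever accepted a peer connection?
        failed_connections: int = 0        # failed peer connection attempts
        bytes_downloaded: int = 0
        bytes_uploaded: int = 0
        files: dict = field(default_factory=dict)  # file id -> FileState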

When a client begins a transfer, its existing transfers are checked to see
whether it already possesses pieces of any file on the server.  If the
client has not yet transferred any pieces of any file, it is given one
"free" piece.

To prevent clients from connecting to a server, transferring a "free" piece,
then disconnecting and attempting to get another "free" piece, data in
the client manager is persistent and keyed by the client's 32-bit IPv4
address.  (IPv6 support will not be implemented until later versions of the
protocol.)  This data is purged periodically on a configurable schedule,
with a default of 1800 seconds.  A client can still circumvent the need to
upload pieces to other clients, but it must wait the full duration of the
client data purge interval to receive another "free" piece.  This should be
sufficient to discourage attempts to "leech" pieces from the server.
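
A minimal sketch of the purge cycle, assuming the manager is a dictionary
keyed by IPv4 address and each record carries a last_seen timestamp (a
field not listed above; all names here are hypothetical):

    import time

    PURGE_INTERVAL = 1800  # seconds; configurable

    def purge_stale_clients(clients, now=None):
        # clients: dict mapping IPv4 address -> client record
        now = time.time() if now is None else now
        stale = [addr for addr, c in clients.items()
                 if now - c.last_seen > PURGE_INTERVAL]
        for addr in stale:
            del clients[addr]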

If only one client is transferring a given file, there is obviously no way
to distribute the load between clients.  In that situation the client is
simply instructed to transfer the file directly from the source server or
from piece proxies.

As soon as more than one client is transferring a given file, a configurable
ratio will be used to regulate the number of piece uploads required to
"earn" another piece download.  For the time being we will simply use a
pieces downloaded:pieces uploaded ratio of 2:1; however, to achieve optimum
transfer rates within a network of clients, this ratio will likely need to
be calculated dynamically and continually adjusted according to the client
behaviors currently being exhibited.
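
Under the fixed 2:1 ratio, the "earned a download" test reduces to a
one-line check; the +1 term accounts for the single "free" piece described
earlier (a sketch, not protocol-mandated arithmetic):

    DOWNLOAD_UPLOAD_RATIO = 2  # configurable; downloads earned per upload

    def may_download(pieces_downloaded, pieces_uploaded):
        # True while the client remains within its ratio, counting the
        # one "free" piece every new client is granted.
        return pieces_downloaded < DOWNLOAD_UPLOAD_RATIO * pieces_uploaded + 1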

Clients will be allowed to upload and download multiple pieces concurrently,
provided the download:upload ratio is not exceeded.  The protocol specifies
two transactions which allow clients to tune their concurrency settings by
setting the maximum number of concurrent transactions allowed.  The server
will not exceed the client-specified values (a configurable default is used
in their place if the client elects not to specify them), but the server
should algorithmically determine, by tracking statistics, what concurrency
value yields the best transfer rate.  If the client-specified values exceed
a configurable server-side hard limit on concurrency, they will be
disregarded and an error response issued to the client.
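
The server's handling of client-specified concurrency might look like the
following; the default and hard limit values are illustrative, not taken
from the protocol:

    DEFAULT_CONCURRENCY = 4       # used when a client specifies no value
    CONCURRENCY_HARD_LIMIT = 16   # configurable server-side ceiling

    def effective_concurrency(requested):
        # Returns the concurrency ceiling the server will honor for a
        # client, or None if an error response should be issued instead.
        if requested is None:
            return DEFAULT_CONCURRENCY
        if requested > CONCURRENCY_HARD_LIMIT:
            return None  # disregard the value; send an error response
        return requested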

When a given client has earned the right to download a piece of a given
file, which occurs either when it is the first piece of the only file the
client is transferring on a given server or when the client has just
completed a successful piece upload, selection criteria must be used to
determine both which piece should be downloaded next and which client it
should be downloaded from.  For clarity, we will refer to a client which
has earned the right to download a piece as the "downloader".  For each
piece being transferred by the "downloader" there will be a client from
which the piece is being downloaded, which we will refer to as the
"uploader".

If the downloader's list of failed piece downloads is not empty, the first
entry is taken from that list.  The criteria listed below for selecting a
new piece to transfer should then be used to select a new client to
transfer from, except that wherever those criteria intersect an uploader's
pieces with the downloader's set of remaining pieces, a set containing only
the piece from the failed transfer is used in its place.  This is done to
prevent clients from gaming the state model by feigning a failed piece
transfer.
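
In other words, a retry reuses the normal uploader-selection criteria but
pins the candidate piece set to the failed piece, as in this sketch:

    def candidate_pieces(failed_queue, remaining):
        # failed_queue: list of (selection_time, piece), oldest first
        # remaining: set of pieces the downloader still needs
        if failed_queue:
            _, piece = failed_queue.pop(0)
            return {piece}      # retry is pinned to the failed piece only
        return set(remaining)   # normal case: anything still needed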

The first criterion we examine when selecting a piece for the downloader is
whether the downloader is in the "active" or "passive" state.  In the
active state, the downloader is capable of accepting incoming connections
as well as making outgoing ones.  So, we first attempt to select an
uploader from the set of clients in the passive state, as these clients are
incapable of receiving incoming connections.  If our selection criteria
match no clients in the passive state, we then apply the same selection
criteria to all clients in the active state.  If the selection criteria
again fail to match a client, then we must download a random piece from the
piece server.  Since our selection criteria failed to match any client on
the network, we know that the pieces yet to be transferred by the
downloader are not present on the peer network at all and are only
available from the source's piece server or piece proxies, so we can simply
select a random piece from the downloader's set of pieces remaining to be
transferred.  If the downloader is in the passive state, our selection
criteria are only applied to clients in the active state.

As for determining whether a client is active or passive, all clients
default to the active mode.  Clients may force themselves into passive mode
by sending a listener port reassignment transaction (0x4) which does not
contain an object for the port number.  Otherwise, we keep a separate
counter of failed connection attempts to a given client.  If the client
making a connection reports a transfer failure before the server receives a
connection attempt notification from the client being connected to, this
counter is incremented.  A separate boolean value is kept to indicate that
the given client has successfully received a connection in active mode.  If
this boolean value is false and the counter of failed connection attempts
passes a certain threshold value (currently 4), then the client is set to
passive mode and the counter of failed connection attempts is reset to
zero.  The client may force itself back into active mode by sending a
listener port reassignment transaction.
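
The demotion rule reduces to a few lines of per-client bookkeeping, as in
this sketch, which reuses the hypothetical ClientState fields from the
earlier sketch:

    FAILURE_THRESHOLD = 4  # failed connections before demotion to passive

    def record_connection_attempt(client, succeeded):
        # Called when a peer's attempt to reach this client's listener
        # port is resolved one way or the other.
        if succeeded:
            client.accepted_connection = True
            return
        client.failed_connections += 1
        if (not client.accepted_connection
                and client.failed_connections >= FAILURE_THRESHOLD):
            client.active = False          # demote to passive mode
            client.failed_connections = 0  # reset, per the rule above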

To begin selecting a piece for the downloader, we must first look at the
set of pieces which the downloader still needs to transfer.  The transfer
manager must also track which clients are currently "stalled" (i.e. have
exceeded the transfer ratio and need to upload a piece before they can be
allowed to download again).  The intersection of the set of pieces which
each stalled client has successfully transferred with the set of pieces the
downloader still needs is taken to determine which clients are eligible to
upload a piece to the downloader.  Selecting a specific client from this
result set is addressed below.  If there are no matches, the same
intersection is applied instead to all clients which have not yet exceeded
their download:upload ratio.  If there are still no matches, this stage of
piece selection has failed, and depending on whether the downloading client
is in the active or passive state, we either fall back on selection from
the set of active clients or transfer a piece from the source's piece
server, as outlined in the paragraphs above.
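
The cascade above might be sketched as follows, with each pool tried in
order and the intersection test deciding eligibility (hypothetical names):

    def eligible_uploaders(needed, stalled, unstalled):
        # needed: set of pieces the downloader still requires
        # stalled / unstalled: dicts mapping client -> set of pieces that
        # client has successfully transferred
        for pool in (stalled, unstalled):
            matches = {client: have & needed
                       for client, have in pool.items() if have & needed}
            if matches:
                return matches
        return None  # stage failed: fall back as described above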

If the pieces are available on the network, either from the set of stalled
clients or from the ones which have not yet exceeded their transfer ratio,
additional criteria must be applied to select from that set a single client
to handle a given transfer.  Separate criteria are applied to each set in
order to properly select a client.

If we are selecting an uploader from the set of stalled clients, the
criterion used to select a client will be the total amount of time the client
has spent in a stalled state.  Clients which have been stalled longer are
favored, in order to keep clients from remaining in a stalled state for a
prolonged period of time.  If the maximum amount of time spent in a stalled
state is equal for more than one client, then the client with the fewest
failed piece transfers is selected as the uploader.  If multiple clients
have the same minimum number of failed transfers, then a client is selected
randomly from that set.

Otherwise we are forced to select a client from those which are actively
downloading pieces.  In order to optimize the performance of the network,
we order clients by their average transfer rate and select the fastest of
the group.  If there are multiple clients with the same maximum average
transfer rate, then we select from those with the lowest failure count, and
if there is more than one client with the same minimum failure count, a
client is selected randomly from this result set.
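
Both tie-breaking chains follow the same pattern: take the extreme value,
narrow to the tied clients, narrow again by fewest failures, then pick at
random.  A sketch with hypothetical attribute names:

    import random

    def pick_stalled_uploader(candidates):
        # candidates have .stalled_time and .failures attributes
        longest = max(c.stalled_time for c in candidates)
        tied = [c for c in candidates if c.stalled_time == longest]
        fewest = min(c.failures for c in tied)
        return random.choice([c for c in tied if c.failures == fewest])

    def pick_active_uploader(candidates):
        # candidates have .transfer_rate and .failures attributes
        fastest = max(c.transfer_rate for c in candidates)
        tied = [c for c in candidates if c.transfer_rate == fastest]
        fewest = min(c.failures for c in tied)
        return random.choice([c for c in tied if c.failures == fewest])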

If the intersection of the uploader's set of successfully transferred
pieces with the downloader's set of pieces yet to be transferred contains
more than one piece, a piece is randomly selected from this intersection.
The transfer manager will then proceed according to the active mode versus
passive mode settings of each respective client.
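
The random selection just described is straightforward, for example:

    import random

    def pick_piece(uploader_has, downloader_needs):
        # A uniform pick avoids biasing which pieces spread through
        # the network first.
        return random.choice(sorted(uploader_has & downloader_needs))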

The uploader's count of successfully uploaded pieces will not be
incremented until both the uploader and the downloader report success.
After an uploader has reported success, the downloader will not be
assigned another piece, even if its download:upload ratio allows for it,
until the downloader reports either success or failure for the transfer.
If the downloader reports failure for the given transfer, a new client
will be selected to transfer the same piece using the simplified retry
criteria described above, and the uploader's failure count will be
incremented.  An uploader may report success multiple times: for example,
if a downloaded piece fails its checksum but has been resumed from an
offset, the downloader should request the beginning portion of the piece,
up to the offset from which the resume began, since another, possibly
malicious, uploader may have sent the downloader invalid data, which is
not the fault of the current uploader.
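
The bookkeeping in this paragraph might be sketched as follows, again with
hypothetical field names; piece_bytes is the size of the transferred piece:

    def settle_transfer(uploader, downloader, piece_bytes,
                        uploader_ok, downloader_ok):
        # Counters move only once both sides have reported success.
        if uploader_ok and downloader_ok:
            uploader.bytes_uploaded += piece_bytes
            downloader.bytes_downloaded += piece_bytes
        elif uploader_ok and not downloader_ok:
            # Downloader rejected the piece: charge the uploader a failure;
            # the piece is then retried with a newly selected uploader.
            uploader.failures += 1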
