An example follows. To keep things simple, only two upload slots exist and requests arrive in equal numbers for each file.
Initially no uploads are happening:
Queued: <empty>
Queued: <empty>
Slot 1: <empty>
Slot 2: <empty>
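(Throughout this walkthrough a rough Python sketch of each step is included. None of it comes from a real client - slot counts, names and helpers are all illustrative assumptions.) The starting state could be modelled as two empty slots plus a first-in-first-out queue:

    from collections import deque

    UPLOAD_SLOTS = 2                  # "only two upload slots exist"

    slots = [None] * UPLOAD_SLOTS     # active uploads; None means the slot is free
    queue = deque()                   # waiting requests, served first in, first out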
Requests come in for the two files being shared and uploading starts:
Queued: <empty>
Queued: <empty>
Slot 1: SmallFile(1)
Slot 2: BigFile(1)
More requests arrive and are queued:
Queued: SmallFile(2)
Queued: BigFile(2)
Slot 1: SmallFile(1)
Slot 2: BigFile(1)
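These first two steps - take a free slot if one exists, otherwise join the queue - might be handled along these lines (a sketch only; the arrival order of the queued requests is an assumption drawn from the later states):

    from collections import deque

    def request(filename, slots, queue):
        """Give an incoming request a free upload slot if one exists, else queue it."""
        for i, active in enumerate(slots):
            if active is None:
                slots[i] = filename     # a slot is free, so uploading starts at once
                return
        queue.append(filename)          # all slots busy, so wait at the tail of the queue

    slots, queue = [None, None], deque()
    # The four requests from the example; BigFile(2) is assumed to have been requested
    # just before SmallFile(2), which is what the later states imply:
    for name in ["SmallFile(1)", "BigFile(1)", "BigFile(2)", "SmallFile(2)"]:
        request(name, slots, queue)
    # slots -> ['SmallFile(1)', 'BigFile(1)']
    # queue -> deque(['BigFile(2)', 'SmallFile(2)'])   (head of the queue first)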
SmallFile(1) finishes uploading quickly and the next item in the queue, BigFile(2), starts uploading:
Queued: <empty>
Queued: <empty>
Queued: SmallFile(2)
Slot 1: BigFile(2)
Slot 2: BigFile(1)
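The promotion that produced this state might look like the following, continuing the sketch above:

    def finish(slot_index, slots, queue):
        """An upload completed: the head of the queue takes over the freed slot."""
        slots[slot_index] = queue.popleft() if queue else None

    finish(0, slots, queue)      # SmallFile(1) (Slot 1, index 0 here) is done
    # slots -> ['BigFile(2)', 'BigFile(1)']
    # queue -> deque(['SmallFile(2)'])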
Now the queued upload of SmallFile(2) would, under the current scheme, have to wait for the two active BigFile uploads to complete, even though they might take hours.
To share upload bandwidth more equally, people downloading big files get moved to the tail of the queue after they've downloaded a certain amount - say 5MB. So, after a certain amount of time the downloader of BigFile(1) hits this limit and gets requeued:
Queued: <empty>
Queued: <empty>
Queued: BigFile(1)
Slot 1: BigFile(2)
Slot 2: SmallFile(2)
Note that a free queue position always exists, because one is vacated when the item at the head of the queue moves into the newly freed upload slot. SmallFile(2) now gets a chance to upload, and BigFile(1) resumes uploading when SmallFile(2) is done (or when BigFile(2) has uploaded a certain amount and is itself moved back to the tail of the queue).
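The requeue rule itself might be sketched as follows, continuing from the previous state (the 5MB figure comes from the text above; how sent bytes are tracked is an assumption):

    REQUEUE_AFTER = 5 * 1024 * 1024     # "say 5MB" uploaded since the slot was taken

    def maybe_requeue(slot_index, bytes_sent, slots, queue):
        """Once an active upload has sent enough, push it to the tail of the queue
        and give its slot to whatever is at the head of the queue."""
        if bytes_sent >= REQUEUE_AFTER:
            queue.append(slots[slot_index])       # back to the tail of the queue...
            slots[slot_index] = queue.popleft()   # ...and the head takes the slot

    maybe_requeue(1, REQUEUE_AFTER, slots, queue)   # BigFile(1) (Slot 2) hits the limit
    # slots -> ['BigFile(2)', 'SmallFile(2)']
    # queue -> deque(['BigFile(1)'])

If nothing else happens to be waiting when the limit is hit, the append-then-pop order means the same upload simply gets its slot straight back.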
To within a certain degree of tolerance, this method allows someone to download 20 x 5MB files in the same time it takes someone to download a single 100MB file: resources are shared equally. The downside is that an oversubscribed server sharing only large files will favour giving everyone some upload time over finishing a single upload (which would let the finished downloader share the file and so spread the upload demand to another server). Partial file sharing sidesteps this problem in theory; alas, in practice the gnet is far from ideal.
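A back-of-the-envelope check of that claim, assuming the server strictly alternates 5MB turns between a downloader wanting twenty 5MB files and one wanting a single 100MB file:

    quantum = 5                                        # MB uploaded per turn
    remaining = {"20 x 5MB": 100, "1 x 100MB": 100}    # MB still owed to each downloader
    served, finished_at = 0, {}
    while remaining:
        for who in list(remaining):                    # strict alternation between the two
            remaining[who] -= quantum
            served += quantum
            if remaining[who] <= 0:
                finished_at[who] = served              # total MB served when this one finished
                del remaining[who]
    print(finished_at)   # {'20 x 5MB': 195, '1 x 100MB': 200} - both done at about the same time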