After requesting a chunk, there is no way to abort the download without dropping the TCP connection. So if you keep the chunk sizes small, slow remote hosts don't reduce the overall download speed as much, because all the other chunks can easily be requested from faster nodes.
If you have a 4 MB file and download it in 1 MB chunks, you can swarm from 4 hosts. If one of them is slow (say 1 KB/s), you would have to wait for that slow host to finish its whole chunk. With 100 KB chunks, one slow remote host matters much less.
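A rough back-of-the-envelope sketch of that effect (the transfer rates and host counts below are made-up figures for illustration, not anything measured): because an in-flight chunk cannot be reassigned, the finish time is dominated by whatever the slow host is still holding.

    # Illustrative estimate: one slow source keeps whatever chunk it was handed,
    # since the request can't be aborted without dropping the connection.
    MB, KB = 1024 * 1024, 1024

    def completion_time(file_size, chunk_size, fast_speed, slow_speed, fast_hosts):
        chunks = file_size // chunk_size
        slow_time = chunk_size / slow_speed                      # slow host must finish its chunk
        fast_time = (chunks - 1) * chunk_size / (fast_speed * fast_hosts)
        return max(slow_time, fast_time)

    # 4 MB file, 3 fast hosts at 50 KB/s each, 1 slow host at 1 KB/s (assumed figures).
    for chunk in (1 * MB, 100 * KB):
        print(chunk // KB, "KB chunks ->", round(completion_time(4 * MB, chunk, 50 * KB, KB, 3)), "s")
    # prints roughly: 1024 KB chunks -> 1024 s,  100 KB chunks -> 100 s

So with 1 MB chunks the slow host sits on a whole megabyte (about 1024 s at 1 KB/s), while with 100 KB chunks it only delays the last ~100 KB.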
In addition, alternate locations propagate much faster, because they are exchanged with every chunk request, and smaller chunks mean more frequent requests.
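Roughly, assuming alternate locations are piggybacked on each chunk request/response (as in PFSP-style swarming), the number of exchange opportunities grows in proportion to the number of chunks:

    # Each chunk request is one chance to exchange alternate locations
    # (assumption: they ride along with the request/response).
    MB, KB = 1024 * 1024, 1024
    file_size = 4 * MB
    for chunk in (1 * MB, 100 * KB):
        requests = -(-file_size // chunk)            # ceiling division
        print(chunk // KB, "KB chunks ->", requests, "requests per full download")
    # 1 MB chunks -> 4 requests, 100 KB chunks -> 41 requests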