Quote:
Originally posted by trap_jaw I believe it's not really intended, but I believe it's not really dangerous.
when you consider that each request is really around 200 bytes with the request/response overhead, and you multiply that by how many times it happens, it becomes a bandwidth eater.
Quote:
Originally posted by trap_jaw Those additional 20 bytes are used to verify the file content.
that only makes sense if it were getting the extra 20 bytes from another source, not the source it just got the chunk from.
and if you're going to download a "check piece", don't just request 20 bytes. request chunksize + 20 bytes starting from chunkposition - 20; if the extra 20 bytes match, you're happy and haven't wasted much bandwidth. if they don't match, then start fetching small pieces to decide whether the 20 bytes you got before or the 20 bytes you just got are correct (i.e., whether the previous chunk was bad or the one you just got was bad). with fewer than two sources, you can't make that decision anyway. with more than two, you could do a "voting" kind of thing.
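a rough sketch of the overlap idea, in case it helps (all names here are hypothetical, not from any actual client's code): fetch the new chunk with 20 extra leading bytes, compare that overlap against the tail of the previous chunk, and fall back to majority voting when you have three or more copies of the overlap from different sources.

```python
from collections import Counter

OVERLAP = 20  # the 20 "check" bytes discussed above

def verify_overlap(prev_chunk: bytes, new_data: bytes) -> bool:
    """new_data was requested starting OVERLAP bytes before the new
    chunk's position; it's consistent if its first OVERLAP bytes equal
    the last OVERLAP bytes of the previously downloaded chunk."""
    return prev_chunk[-OVERLAP:] == new_data[:OVERLAP]

def vote(overlap_copies: list[bytes]) -> bytes:
    """With three or more sources, pick the overlap value seen most
    often -- the 'voting' approach for deciding which copy is good."""
    value, _count = Counter(overlap_copies).most_common(1)[0]
    return value

# toy demo with fake data
prev = b"A" * 100                       # previously downloaded chunk
good = prev[-OVERLAP:] + b"B" * 100     # honest source: overlap matches
bad = b"X" * OVERLAP + b"B" * 100       # corrupt source: overlap differs

print(verify_overlap(prev, good))   # True
print(verify_overlap(prev, bad))    # False
print(vote([good[:OVERLAP], bad[:OVERLAP], good[:OVERLAP]]) == prev[-OVERLAP:])  # True
```

the voting only breaks ties meaningfully with an odd number of independent sources; with exactly two disagreeing copies you'd still have to download more data to settle it, which is the same problem as before.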