Gnutella Forums

Gnutella Forums (https://www.gnutellaforums.com/)
-   Download/Upload Problems (https://www.gnutellaforums.com/download-upload-problems/)
-   -   limewire downloading from bearshare (https://www.gnutellaforums.com/download-upload-problems/20017-limewire-downloading-bearshare.html)

Scott Drysdale April 26th, 2003 01:00 PM

Quote:

Originally posted by trap_jaw
LimeWire does not redownload any chunks. Even if it detects a corruption, it currently does not discard downloaded chunks and try to get them from someplace else. It just notifies the user that there has been a corruption.

What you are probably seeing is that LimeWire requests chunks multiple times. That will happen if a request was not successful because your BearShare client was busy.

well guess what, kids - it's redownloading for some reason!

the log entries i'm pulling out indicate that bearshare actually sent that chunk of the file, not just that limewire requested it.

and in the case i analyzed, all log entries were checked to confirm they were for that filename being downloaded by that IP/port for that version of limewire, and they were all part of the same "session" as far as i could tell.

Scott Drysdale April 26th, 2003 01:18 PM

Quote:

Originally posted by stief
btw--do you get flooded with spurious results from GNUC and MRPH vendors? I gather vendor names can be spoofed, so don't know how reliable they are.
well, i do, but bearshare comes with a "hostiles" file that lists Evil Servent IP/port sources, so i don't see them :)

most of these Evil Servents are designed to make your life miserable. for example, they return a valid filename/size but supply garbage data, causing non-hashing servents to download corrupt MP3s.
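just to illustrate the idea, here's a rough Python sketch of that kind of filter: drop query hits whose source shows up on a hostiles list of IP:port pairs. the file format and names here are made up for the example, not bearshare's actual code.

Code:

# hypothetical sketch: drop query hits whose source is on a hostiles list.
# the one-"ip:port"-per-line file format is an assumption for the example.

def load_hostiles(path):
    hostiles = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                hostiles.add(line)          # e.g. "10.0.0.1:6346"
    return hostiles

def filter_hits(hits, hostiles):
    # hits: list of (ip, port, filename, size) tuples from query responses
    return [h for h in hits if f"{h[0]}:{h[1]}" not in hostiles]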

trap_jaw April 27th, 2003 04:15 AM

Quote:

Originally posted by Scott Drysdale
well guess what, kids - it's redownloading for some reason!

the log entries i'm pulling out indicate that bearshare actually sent that chunk of the file, not just that limewire requested it.

and in the case i analyzed, all log entries were checked to confirm they were for that filename being downloaded by that IP/port for that version of limewire, and they were all part of the same "session" as far as i could tell.

LimeWire does not download a chunk twice. Your logs indicate that the user tried to download one and the same file multiple times simultaneously.

Range: bytes=0000000-0099999
Range: bytes=0000000-0099999
Range: bytes=0099980-0199989
Range: bytes=0099990-0199999

As you can see, the requested chunks are not the same. Even if LimeWire were downloading one chunk twice, it could never modify the end-byte of a chunk after it has been requested once.
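To make the distinction concrete, here is a small Python sketch (just an illustration, not LimeWire or BearShare code) that parses such request lines and reports which ranges are exact duplicates and which merely overlap:

Code:

import re

def parse_range(line):
    # parse an HTTP partial-content request like "Range: bytes=0000000-0099999"
    m = re.search(r"bytes=(\d+)-(\d+)", line)
    return int(m.group(1)), int(m.group(2))

requests = [
    "Range: bytes=0000000-0099999",
    "Range: bytes=0000000-0099999",
    "Range: bytes=0099980-0199989",
    "Range: bytes=0099990-0199999",
]

ranges = [parse_range(r) for r in requests]
for i in range(len(ranges)):
    for j in range(i + 1, len(ranges)):
        a, b = ranges[i], ranges[j]
        if a == b:
            print("exact duplicate:", a)
        elif a[0] <= b[1] and b[0] <= a[1]:
            print("overlapping but different:", a, b)

On these four lines the sketch flags the first two as an exact duplicate, which is what two simultaneous downloads of the same file would produce.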

Scott Drysdale April 27th, 2003 06:01 AM

Quote:

Originally posted by trap_jaw
LimeWire does not download a chunk twice. Your logs indicate that the user tried to download one and the same file multiple times simultaneously.

i've got bearshare set to allow only one download at a time from a given IP/port, so i don't think that was it.

trap_jaw April 27th, 2003 06:44 AM

There is no other explanation for your logs.

LimeWire does not re-download chunks.

Scott Drysdale April 27th, 2003 06:50 AM

2.9.8 behavior
 
things to note in this one:

- lime started the download at a funny place (not the beginning of the file). this could be because the downloader was trying (and failing on) enough other sources to cover 0-599990 before it got to me. not a problem, except bearshare's progress indicator gets confused on the rewind-to-0.

- when it started getting the earlier part of the file (starting at byte 0), it began to do extra 20-byte downloads of the last 20 bytes of the chunk it had just downloaded. is that intentional, or a bug?

i haven't yet captured a really egregiously bad 2.9.8 download lately, but will keep looking. it's possible i had my head up my *** when i thought i saw 2.9.8 behaving like 2.8.6.

also note that my previous log duplicated at least one line because i didn't realize bearshare outputs a "warning" line if you download a chunk that comes before the previous chunk. that's why i included the times on these.

in time order:

bytes=0599990-0699999 4/27/2003 8:57:15 AM
bytes=0699990-0799999 4/27/2003 8:57:30 AM
bytes=0799990-0899999 4/27/2003 8:57:47 AM
bytes=0899990-0999999 4/27/2003 8:58:04 AM
bytes=0999990-1099999 4/27/2003 8:58:19 AM
bytes=1099990-1199999 4/27/2003 8:58:34 AM
bytes=1199990-1299999 4/27/2003 8:58:47 AM
bytes=1299990-1399999 4/27/2003 8:59:02 AM
bytes=1399990-1499999 4/27/2003 8:59:20 AM
bytes=1499990-1599999 4/27/2003 8:59:33 AM
bytes=1599990-1699999 4/27/2003 8:59:47 AM
bytes=1699990-1799999 4/27/2003 9:00:01 AM
bytes=1799990-1899999 4/27/2003 9:00:15 AM
bytes=1899990-1999999 4/27/2003 9:00:32 AM
bytes=1999990-2099999 4/27/2003 9:00:47 AM
bytes=2099990-2199999 4/27/2003 9:01:03 AM
bytes=2199990-2299999 4/27/2003 9:01:18 AM
bytes=2299990-2399999 4/27/2003 9:01:33 AM
bytes=2399990-2499999 4/27/2003 9:01:50 AM
bytes=2499990-2599999 4/27/2003 9:02:05 AM
bytes=2599990-2699999 4/27/2003 9:02:20 AM
bytes=2699990-2799999 4/27/2003 9:02:35 AM
bytes=2799990-2899999 4/27/2003 9:02:50 AM
bytes=2899990-2999999 4/27/2003 9:03:03 AM
bytes=2999990-3099999 4/27/2003 9:03:18 AM
bytes=3099990-3199999 4/27/2003 9:03:32 AM
bytes=3199990-3299999 4/27/2003 9:03:47 AM
bytes=3299990-3399999 4/27/2003 9:04:02 AM
bytes=3399990-3499999 4/27/2003 9:04:18 AM
bytes=3499990-3599999 4/27/2003 9:04:32 AM
bytes=3599990-3699999 4/27/2003 9:04:46 AM
bytes=3699990-3799999 4/27/2003 9:04:52 AM
bytes=3799990-3887103 4/27/2003 9:04:59 AM
bytes=0000000-0099999 4/27/2003 9:05:06 AM
bytes=0099980-0199989 4/27/2003 9:05:13 AM
bytes=0199980-0199999 4/27/2003 9:05:20 AM
bytes=0199980-0299989 4/27/2003 9:05:21 AM
bytes=0299980-0299999 4/27/2003 9:05:29 AM
bytes=0299980-0399989 4/27/2003 9:05:30 AM
bytes=0399980-0399999 4/27/2003 9:05:37 AM
bytes=0399980-0499989 4/27/2003 9:05:38 AM
bytes=0499980-0499999 4/27/2003 9:05:46 AM
bytes=0499980-0599989 4/27/2003 9:05:46 AM
bytes=0599980-0599999 4/27/2003 9:05:54 AM

in file position order:
bytes=0000000-0099999 4/27/2003 9:05:06 AM
bytes=0099980-0199989 4/27/2003 9:05:13 AM
bytes=0199980-0199999 4/27/2003 9:05:20 AM
bytes=0199980-0299989 4/27/2003 9:05:21 AM
bytes=0299980-0299999 4/27/2003 9:05:29 AM
bytes=0299980-0399989 4/27/2003 9:05:30 AM
bytes=0399980-0399999 4/27/2003 9:05:37 AM
bytes=0399980-0499989 4/27/2003 9:05:38 AM
bytes=0499980-0499999 4/27/2003 9:05:46 AM
bytes=0499980-0599989 4/27/2003 9:05:46 AM
bytes=0599980-0599999 4/27/2003 9:05:54 AM
bytes=0599990-0699999 4/27/2003 8:57:15 AM
bytes=0699990-0799999 4/27/2003 8:57:30 AM
bytes=0799990-0899999 4/27/2003 8:57:47 AM
bytes=0899990-0999999 4/27/2003 8:58:04 AM
bytes=0999990-1099999 4/27/2003 8:58:19 AM
bytes=1099990-1199999 4/27/2003 8:58:34 AM
bytes=1199990-1299999 4/27/2003 8:58:47 AM
bytes=1299990-1399999 4/27/2003 8:59:02 AM
bytes=1399990-1499999 4/27/2003 8:59:20 AM
bytes=1499990-1599999 4/27/2003 8:59:33 AM
bytes=1599990-1699999 4/27/2003 8:59:47 AM
bytes=1699990-1799999 4/27/2003 9:00:01 AM
bytes=1799990-1899999 4/27/2003 9:00:15 AM
bytes=1899990-1999999 4/27/2003 9:00:32 AM
bytes=1999990-2099999 4/27/2003 9:00:47 AM
bytes=2099990-2199999 4/27/2003 9:01:03 AM
bytes=2199990-2299999 4/27/2003 9:01:18 AM
bytes=2299990-2399999 4/27/2003 9:01:33 AM
bytes=2399990-2499999 4/27/2003 9:01:50 AM
bytes=2499990-2599999 4/27/2003 9:02:05 AM
bytes=2599990-2699999 4/27/2003 9:02:20 AM
bytes=2699990-2799999 4/27/2003 9:02:35 AM
bytes=2799990-2899999 4/27/2003 9:02:50 AM
bytes=2899990-2999999 4/27/2003 9:03:03 AM
bytes=2999990-3099999 4/27/2003 9:03:18 AM
bytes=3099990-3199999 4/27/2003 9:03:32 AM
bytes=3199990-3299999 4/27/2003 9:03:47 AM
bytes=3299990-3399999 4/27/2003 9:04:02 AM
bytes=3399990-3499999 4/27/2003 9:04:18 AM
bytes=3499990-3599999 4/27/2003 9:04:32 AM
bytes=3599990-3699999 4/27/2003 9:04:46 AM
bytes=3699990-3799999 4/27/2003 9:04:52 AM
bytes=3799990-3887103 4/27/2003 9:04:59 AM

David91 April 27th, 2003 11:29 AM

Still in pit digging
 
It's probably another of my misunderstandings, and I'm sorry to rejoin this debate, but I think I'm missing something.

Is it not the case in the swarming system of downloading from multiple hosts, that a file is created for each potential host? Each file holds the metadata description of the whole file to be downloaded and will keep track of which packets of data are missing in the downloads to date. Each file also has an imperative to attempt to fill all those identified holes. Hence, until one of the metadata records shows a complete download, I do not see why Limewire will not potentially request as many copies of individual data packets as there are hosts in the given grouping.
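As a rough illustration of the bookkeeping being described (hypothetical names, not LimeWire's actual classes), one could picture a single shared record of the chunks still needed for the file, with each chunk handed to at most one host at a time:

Code:

class SwarmDownload:
    # hypothetical bookkeeping for one file fetched from several hosts;
    # not LimeWire's actual design, just the idea of a shared "needed" set
    def __init__(self, file_size, chunk_size=100_000):
        self.needed = {(s, min(s + chunk_size, file_size) - 1)
                       for s in range(0, file_size, chunk_size)}
        self.in_progress = {}   # (start, end) -> host currently fetching it
        self.done = set()

    def assign_chunk(self, host):
        # hand out a chunk only if nobody else is already fetching it,
        # so the same bytes are never requested twice in one session
        if not self.needed:
            return None
        chunk = self.needed.pop()
        self.in_progress[chunk] = host
        return chunk

    def complete(self, chunk):
        self.in_progress.pop(chunk, None)
        self.done.add(chunk)

    def failed(self, chunk):
        # a failed or corrupt chunk goes back into the pool for another host
        self.in_progress.pop(chunk, None)
        self.needed.add(chunk)

With a single shared set like this, no chunk is requested twice from anyone; duplicate requests could only appear if each host kept its own copy of the bookkeeping, which is exactly the question at hand.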

Scott Drysdale April 27th, 2003 02:31 PM

Re: Still in pit digging
 
Quote:

Originally posted by David91
Is it not the case in the swarming system of downloading from multiple hosts, that a file is created for each potential host?
limewire is downloading from bearshare. i'm posting logs from bearshare. in other words, i'm only one of possibly many sources for limewire to download from.

no matter how limewire organizes downloads from multiple sources, it should never request the same data from the same source more than once.

trap_jaw April 27th, 2003 03:07 PM

Quote:

Originally posted by Scott Drysdale
when it started getting the earlier part of the file (starting at byte 0), it began to do extra 20-byte downloads of the last 20 bytes of the chunk it had just downloaded. is that intentional, or a bug?
I believe it's not really intended, but it's not really dangerous either. Those additional 20 bytes are used to verify the file content.

I will submit a fix for that.
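For what it's worth, a minimal Python sketch of what such an overlap check amounts to (a guess at the intent, not LimeWire's actual code): re-fetch the last few bytes of the previous chunk and compare them with what is already on disk.

Code:

OVERLAP = 20  # bytes of the previous chunk re-requested as a consistency check

def check_overlap(saved_chunk, refetched_tail):
    # saved_chunk: bytes already written for the previous chunk
    # refetched_tail: the OVERLAP bytes just downloaded again for the same offsets
    # a mismatch means one of the two transfers was corrupt, although with a
    # single source there is no way to tell which copy is the bad one
    return saved_chunk[-OVERLAP:] == refetched_tail

# example: a clean re-fetch passes, a corrupted one does not
chunk = bytes(range(200)) * 500                       # a 100,000-byte chunk
assert check_overlap(chunk, chunk[-OVERLAP:])         # matches
assert not check_overlap(chunk, b"\x00" * OVERLAP)    # corrupt tail detected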

Scott Drysdale April 28th, 2003 04:51 AM

Quote:

Originally posted by trap_jaw
I believe it's not really intended, but it's not really dangerous either.
when you consider it's really around 200 bytes with the request/response overhead and you multiply that by how many times it happens, it's a bandwidth eater.
Quote:

Originally posted by trap_jaw
Those additional 20 bytes are used to verify the file content.
that makes sense only if it was getting the extra bytes from another source, not the source it just got the chunk from.

and if you're going to download a "check piece", don't just request 20 bytes. request chunksize + 20 bytes starting from chunkposition - 20; if the extra 20 bytes match, you're happy and haven't wasted a lot of bandwidth. if they don't match, then start getting little pieces to see whether the 20 bytes you got before are correct or the 20 bytes you just got are correct (ie, decide whether the previous chunk was bad or the one you just got was bad). with fewer than two sources, i don't think you can make that decision anyway. with more than two, you could do a "voting" kind of thing.
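roughly what i mean, as a hypothetical Python sketch (the numbers and names are just for illustration, this isn't anybody's real code): extend each request 20 bytes backwards so the check piece rides along with the chunk instead of costing its own round trip.

Code:

OVERLAP = 20
CHUNK = 100_000

def next_request(chunk_start, file_size):
    # start 20 bytes early so the overlap comes back with the chunk itself
    start = max(0, chunk_start - OVERLAP)
    end = min(chunk_start + CHUNK, file_size) - 1
    return f"Range: bytes={start:07d}-{end:07d}"

def accept(previous_tail, body, chunk_start):
    # previous_tail: the OVERLAP bytes already on disk just before chunk_start
    # body: response body of the extended request
    overlap = body[:OVERLAP] if chunk_start >= OVERLAP else b""
    if overlap and overlap != previous_tail:
        return None                  # mismatch: one of the two transfers is suspect
    return body[len(overlap):]       # the new chunk proper

print(next_request(600_000, 3_887_104))   # Range: bytes=0599980-0699999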

