LimeWire downloading from BearShare
I run BearShare (whatever the beta du jour is), and I've often noticed that LimeWire seems to download parts of files more than once. Does LimeWire:
1) download in strict start-to-finish order, except near EOF where it picks up chunks that had been queued to nonresponsive or slow sources, or can the chunk download order be pretty much random (as in Shareaza)?
2) download overlapping pieces, perform checks, and redownload the parts that don't match? If so, does it have logic to give up eventually?
I have observed LimeWire downloads that proceed to well over 2x the file size, measured by BearShare's upload window ("response #x" times the observed chunk size). I also frequently observe a "download <chunksize> bytes, download 20 bytes, repeat" pattern near EOF.
I can't answer your questions, but I've noticed that downloads from a BearShare beta 5 to LW 2.9.8.2 seem to stall for a long time (I usually have to kill the download manually).
Your download window
Hi. You raise an interesting question about what you see in your download window. It is, at best, an approximation of reality.

Because downloading is often shared between a group of hosts, every system has to be able to "pick and mix" data packets to optimise downloading shared between hosts with differing amounts of available bandwidth (consolidation and internal verification of the whole file being completed during the hashing process). If one host goes offline or runs short of bandwidth, the download cascades down the list of grouped hosts.

Suppose you are the first such host and the downloader runs down to the end of the group without completing the download. The downloader will then restart, and your window may show sequential connections that, in total, add up to more than 100%. This should not be unique to LimeWire.
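To make the arithmetic concrete: if an upload window counts every byte requested rather than every unique byte covered, restarts push the apparent total past 100%. This is only a sketch of that bookkeeping, not how LimeWire or BearShare actually account for progress; the file size matches the log later in this thread, and the handful of ranges are invented for illustration.

FILE_SIZE = 4_927_616  # example file size (last byte 4927615 in the log below)

# (start, end) inclusive byte ranges; the repeated first range mimics a restart
requests = [(0, 99_999), (99_990, 199_999), (199_990, 299_999), (0, 99_999)]

total_requested = sum(end - start + 1 for start, end in requests)

# Count unique coverage by merging the ranges first
merged = []
for start, end in sorted(requests):
    if merged and start <= merged[-1][1] + 1:
        merged[-1] = (merged[-1][0], max(merged[-1][1], end))
    else:
        merged.append((start, end))
unique_bytes = sum(end - start + 1 for start, end in merged)

print(f"requested: {total_requested} bytes ({100 * total_requested / FILE_SIZE:.1f}% of file)")
print(f"unique:    {unique_bytes} bytes ({100 * unique_bytes / FILE_SIZE:.1f}% of file)")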
Sample from a LimeWire 2.8.6 download
BearShare logs activity to "console.txt" in a funny format, so:

fgrep FileName console.txt >x
<manually inspect x to be sure it's all the same downloader>
<run editor macro on x to convert \r\n to a newline>
<run editor macro on x to find all "<newline>Range:" lines, output to file y>
<run editor macro on y to put leading 0's on the numbers>
sort <y >z

Result:
- y contains the Range: lines in the order they happened.
- z contains the Range: lines sorted by file position.

Looking at z (the second list below), it's obvious that LimeWire repeatedly downloads several parts of the file. Whether that's LimeWire's fault or BearShare's, I don't know. (A script that automates this extraction is sketched after the lists.)

Here's y:

Range: bytes=0000000-0099999
Range: bytes=0099990-0199999
Range: bytes=0199990-0299999
Range: bytes=0299990-0399999
Range: bytes=0399990-0499999
Range: bytes=0499990-0599999
Range: bytes=0599990-0699999
Range: bytes=0699990-0799999
Range: bytes=0799990-0899999
Range: bytes=0899990-0999999
Range: bytes=0999990-1099999
Range: bytes=1099990-1199999
Range: bytes=1199990-1299999
Range: bytes=1299990-1399999
Range: bytes=1399990-1499999
Range: bytes=1499990-1599999
Range: bytes=1599990-1699999
Range: bytes=1699990-1799999
Range: bytes=1799990-1899999
Range: bytes=1899990-1999999
Range: bytes=1999990-2099999
Range: bytes=2099990-2199999
Range: bytes=2199990-2299999
Range: bytes=2299990-2399999
Range: bytes=2399990-2499999
Range: bytes=2499990-2599999
Range: bytes=2599990-2699999
Range: bytes=2699990-2799999
Range: bytes=2799990-2899999
Range: bytes=2899990-2999999
Range: bytes=2999990-3099999
Range: bytes=3099990-3199999
Range: bytes=3199990-3299999
Range: bytes=3299990-3399999
Range: bytes=3399990-3499999
Range: bytes=3499990-3599999
Range: bytes=3599990-3699999
Range: bytes=3699990-3799999
Range: bytes=3799990-3899999
Range: bytes=3899990-3999999
Range: bytes=3999990-4099999
Range: bytes=4099990-4199999
Range: bytes=4199990-4299999
Range: bytes=4299990-4399999
Range: bytes=4399990-4499999
Range: bytes=4499990-4599999
Range: bytes=4599990-4699999
Range: bytes=4699990-4799999
Range: bytes=4799990-4899999
Range: bytes=4899990-4927615
Range: bytes=0000000-0099999
Range: bytes=0000000-0099999
Range: bytes=0099980-0199989
Range: bytes=0199980-0199999
Range: bytes=0199980-0299989
Range: bytes=0299980-0299999
Range: bytes=0299980-0399989
Range: bytes=0399980-0399999
Range: bytes=0399980-0499989
Range: bytes=0499980-0499999
Range: bytes=0499980-0599989
Range: bytes=0599980-0599999
Range: bytes=0599980-0699989
Range: bytes=0699980-0699999
Range: bytes=0699980-0799989
Range: bytes=0799980-0799999
Range: bytes=0799980-0899989
Range: bytes=0899980-0899999
Range: bytes=0899980-0999989
Range: bytes=0999980-0999999
Range: bytes=0999980-1099989
Range: bytes=1099980-1099999
Range: bytes=1099980-1199989

Here's z:

Range: bytes=0000000-0099999
Range: bytes=0000000-0099999
Range: bytes=0000000-0099999
Range: bytes=0099980-0199989
Range: bytes=0099990-0199999
Range: bytes=0199980-0199999
Range: bytes=0199980-0299989
Range: bytes=0199990-0299999
Range: bytes=0299980-0299999
Range: bytes=0299980-0399989
Range: bytes=0299990-0399999
Range: bytes=0399980-0399999
Range: bytes=0399980-0499989
Range: bytes=0399990-0499999
Range: bytes=0499980-0499999
Range: bytes=0499980-0599989
Range: bytes=0499990-0599999
Range: bytes=0599980-0599999
Range: bytes=0599980-0699989
Range: bytes=0599990-0699999
Range: bytes=0699980-0699999
Range: bytes=0699980-0799989
Range: bytes=0699990-0799999
Range: bytes=0799980-0799999
Range: bytes=0799980-0899989
Range: bytes=0799990-0899999
Range: bytes=0899980-0899999
Range: bytes=0899980-0999989
Range: bytes=0899990-0999999
Range: bytes=0999980-0999999
Range: bytes=0999980-1099989
Range: bytes=0999990-1099999
Range: bytes=1099980-1099999
Range: bytes=1099980-1199989
Range: bytes=1099990-1199999
Range: bytes=1199990-1299999
Range: bytes=1299990-1399999
Range: bytes=1399990-1499999
Range: bytes=1499990-1599999
Range: bytes=1599990-1699999
Range: bytes=1699990-1799999
Range: bytes=1799990-1899999
Range: bytes=1899990-1999999
Range: bytes=1999990-2099999
Range: bytes=2099990-2199999
Range: bytes=2199990-2299999
Range: bytes=2299990-2399999
Range: bytes=2399990-2499999
Range: bytes=2499990-2599999
Range: bytes=2599990-2699999
Range: bytes=2699990-2799999
Range: bytes=2799990-2899999
Range: bytes=2899990-2999999
Range: bytes=2999990-3099999
Range: bytes=3099990-3199999
Range: bytes=3199990-3299999
Range: bytes=3299990-3399999
Range: bytes=3399990-3499999
Range: bytes=3499990-3599999
Range: bytes=3599990-3699999
Range: bytes=3699990-3799999
Range: bytes=3799990-3899999
Range: bytes=3899990-3999999
Range: bytes=3999990-4099999
Range: bytes=4099990-4199999
Range: bytes=4199990-4299999
Range: bytes=4299990-4399999
Range: bytes=4399990-4499999
Range: bytes=4499990-4599999
Range: bytes=4599990-4699999
Range: bytes=4699990-4799999
Range: bytes=4799990-4899999
Range: bytes=4899990-4927615
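For anyone who wants to repeat this without editor macros, here is a rough Python equivalent of the steps above. It is a sketch only: it assumes console.txt keeps each record (filename plus Range: header) on one physical line, as mine did, and "FileName" is a placeholder filter you'd replace with the actual filename.

import re

LOG_PATH = "console.txt"
NAME_FILTER = "FileName"        # plays the role of `fgrep FileName console.txt`

range_re = re.compile(r"Range:\s*bytes=(\d+)-(\d+)")

requests = []
with open(LOG_PATH, errors="replace") as log:
    for line in log:
        if NAME_FILTER not in line:
            continue
        for start, end in range_re.findall(line):
            requests.append((int(start), int(end)))

print("# y: in the order they happened")
for start, end in requests:
    print(f"Range: bytes={start:07d}-{end:07d}")

print("# z: sorted by file position")
for start, end in sorted(requests):
    print(f"Range: bytes={start:07d}-{end:07d}")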
So?
The thoroughness of your investigation is to be applauded, but it does little more than demonstrate what I said in my earlier post. You are always likely to see sequential downloads of some sections of the file when grouped downloads are stopped and restarted. You will also see some packets resent when internal verification at the receiving end shows that corruption occurred during sending, and you may see sending restarted, duplicating different packets of data, when the connection is interrupted for some other reason.

The only point of interest is your suggestion that this is a phenomenon particular to LimeWire. I find this unlikely but, to be honest, I can't get excited enough about it to investigate. While there may be a certain degree of redundancy in the traffic between two linked hosts, it is unlikely to represent a serious traffic-management problem for the network as a whole; there are far more serious issues in that arena to resolve. All I will venture is that this is not a malfunction on the part of either LimeWire or BearShare.
Re: So?
Does LimeWire have an option to generate a similar activity log? Specifically, does it log "I'm getting this chunk again because I smell corruption"?
Sorry
Sincerest apologies. My elders and betters have corrected me: there is a problem with LW 2.8.6. The ever-wise Trap-jaw found it was incorrectly parsing the HTTP headers, causing all kinds of grief. My current mentor (the one who keeps telling me to shut up before I make too big a fool of myself) says that you are observing further evidence of this. The Committee of the Good and True therefore thanks you for identifying this and offers your work as one more good reason for LW users to upgrade and not use 2.8.6. I'll shut up now, before I get my foot even further into my mouth.
Re: Sorry
Thanks for pointing that out, Scott. I can't see a quick way to log similar data to the console, but if some OS/2.9.8 combination is to be avoided, I'll wait for the next release before trying to share. I'd avoided connections to 2.8.6 and earlier, but it looks like that wasn't enough. I've read that the developers were concentrating on the search code, so maybe the download work is the next project; perhaps the work on partial file sharing will flush out the past problems.
By the way, do you get flooded with spurious results from GNUC and MRPH vendors? I gather vendor names can be spoofed, so I don't know how reliable they are.
LimeWire does not redownload any chunks. Even if it detects corruption, it currently does not discard the downloaded chunks and try to get them somewhere else; it just notifies the user that a corruption has occurred. What you are probably seeing is LimeWire requesting chunks multiple times: that will happen if a request was not successful because your BearShare client was busy.
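As an illustration of that retry behaviour (a sketch under assumptions, not LimeWire's actual code): when an uploader answers "busy", typically HTTP 503 on Gnutella servents, the downloader asks for the same range again later, so the uploader's log shows repeated requests even though no bytes were served twice. The host and filename in the usage comment are made up.

import time
import urllib.request
from urllib.error import HTTPError

def fetch_range(url, start, end, retries=5, wait_seconds=30):
    """Sketch: re-request the same byte range while the uploader reports busy."""
    for _ in range(retries):
        request = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
        try:
            with urllib.request.urlopen(request) as response:
                return response.read()          # 206 Partial Content
        except HTTPError as error:
            if error.code == 503:               # uploader busy; try again later
                time.sleep(wait_seconds)
                continue
            raise
    return None                                  # gave up; a real client would try another source

# Example with a hypothetical host and file:
# data = fetch_range("http://10.0.0.5:6346/get/1/song.mp3", 0, 99_999)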
The log entries I'm pulling out indicate that BearShare actually sent that chunk of the file, not just that LimeWire requested it. And in the case I analyzed, I checked that all the log entries were for that filename being downloaded by that IP/port with that version of LimeWire, and they were all part of the same "session" as far as I could tell.
Most of these evil servents are designed to make your life miserable, for example by causing non-hashing servents to download corrupt MP3s: they return a valid filename and size but supply garbage data.
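That is exactly what hashing defends against: a hashing servent compares the completed download with the SHA-1 it expected (Gnutella advertises these as Base32 urn:sha1 values) and throws the file away on a mismatch. A minimal sketch of that check, with a made-up filename and hash:

import base64
import hashlib

def sha1_urn(path):
    """Hash a file and format the digest the way Gnutella advertises it."""
    digest = hashlib.sha1()
    with open(path, "rb") as file:
        for block in iter(lambda: file.read(65536), b""):
            digest.update(block)
    return "urn:sha1:" + base64.b32encode(digest.digest()).decode("ascii")

expected = "urn:sha1:PLSTHIPQGSSZTS5FJUPAKUZWUGYQYPFB"   # hypothetical value
if sha1_urn("downloaded.mp3") != expected:               # hypothetical filename
    print("hash mismatch: the source supplied garbage data")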
Range: bytes=0000000-0099999
Range: bytes=0000000-0099999
Range: bytes=0099980-0199989
Range: bytes=0099990-0199999

As you can see, the requested chunks are not the same. Even if LimeWire were downloading one chunk twice, it could never modify the end byte of a chunk after it has been requested once.
There is no other explanation for your logs. LimeWire does not re-download chunks.
2.9.8 behavior
Things to note in this one:
- LimeWire started the download at a funny place (not the beginning of the file). This could be because the downloader was trying (and failing on) enough other sources to cover 0-599990 before it got to me. Not a problem, except that BearShare's progress indicator gets confused by the rewind to 0.
- When it started getting the earlier part of the file (starting at byte 0), it began doing extra 20-byte downloads of the last 20 bytes of the chunk it had just downloaded. Is that intentional, or a bug?

I haven't captured a really egregiously bad 2.9.8 download lately, but I will keep looking; it's possible I had my head up my *** when I thought I saw 2.9.8 behaving like 2.8.6. Also note that my previous log duplicated at least one line because I didn't realize BearShare outputs a "warning" line if you download a chunk that comes before the previous chunk. That's why I included the times on these. (A small script that flags the overlapping re-requests is sketched after the lists.)

In time order:

bytes=0599990-0699999 4/27/2003 8:57:15 AM
bytes=0699990-0799999 4/27/2003 8:57:30 AM
bytes=0799990-0899999 4/27/2003 8:57:47 AM
bytes=0899990-0999999 4/27/2003 8:58:04 AM
bytes=0999990-1099999 4/27/2003 8:58:19 AM
bytes=1099990-1199999 4/27/2003 8:58:34 AM
bytes=1199990-1299999 4/27/2003 8:58:47 AM
bytes=1299990-1399999 4/27/2003 8:59:02 AM
bytes=1399990-1499999 4/27/2003 8:59:20 AM
bytes=1499990-1599999 4/27/2003 8:59:33 AM
bytes=1599990-1699999 4/27/2003 8:59:47 AM
bytes=1699990-1799999 4/27/2003 9:00:01 AM
bytes=1799990-1899999 4/27/2003 9:00:15 AM
bytes=1899990-1999999 4/27/2003 9:00:32 AM
bytes=1999990-2099999 4/27/2003 9:00:47 AM
bytes=2099990-2199999 4/27/2003 9:01:03 AM
bytes=2199990-2299999 4/27/2003 9:01:18 AM
bytes=2299990-2399999 4/27/2003 9:01:33 AM
bytes=2399990-2499999 4/27/2003 9:01:50 AM
bytes=2499990-2599999 4/27/2003 9:02:05 AM
bytes=2599990-2699999 4/27/2003 9:02:20 AM
bytes=2699990-2799999 4/27/2003 9:02:35 AM
bytes=2799990-2899999 4/27/2003 9:02:50 AM
bytes=2899990-2999999 4/27/2003 9:03:03 AM
bytes=2999990-3099999 4/27/2003 9:03:18 AM
bytes=3099990-3199999 4/27/2003 9:03:32 AM
bytes=3199990-3299999 4/27/2003 9:03:47 AM
bytes=3299990-3399999 4/27/2003 9:04:02 AM
bytes=3399990-3499999 4/27/2003 9:04:18 AM
bytes=3499990-3599999 4/27/2003 9:04:32 AM
bytes=3599990-3699999 4/27/2003 9:04:46 AM
bytes=3699990-3799999 4/27/2003 9:04:52 AM
bytes=3799990-3887103 4/27/2003 9:04:59 AM
bytes=0000000-0099999 4/27/2003 9:05:06 AM
bytes=0099980-0199989 4/27/2003 9:05:13 AM
bytes=0199980-0199999 4/27/2003 9:05:20 AM
bytes=0199980-0299989 4/27/2003 9:05:21 AM
bytes=0299980-0299999 4/27/2003 9:05:29 AM
bytes=0299980-0399989 4/27/2003 9:05:30 AM
bytes=0399980-0399999 4/27/2003 9:05:37 AM
bytes=0399980-0499989 4/27/2003 9:05:38 AM
bytes=0499980-0499999 4/27/2003 9:05:46 AM
bytes=0499980-0599989 4/27/2003 9:05:46 AM
bytes=0599980-0599999 4/27/2003 9:05:54 AM

In file position order:

bytes=0000000-0099999 4/27/2003 9:05:06 AM
bytes=0099980-0199989 4/27/2003 9:05:13 AM
bytes=0199980-0199999 4/27/2003 9:05:20 AM
bytes=0199980-0299989 4/27/2003 9:05:21 AM
bytes=0299980-0299999 4/27/2003 9:05:29 AM
bytes=0299980-0399989 4/27/2003 9:05:30 AM
bytes=0399980-0399999 4/27/2003 9:05:37 AM
bytes=0399980-0499989 4/27/2003 9:05:38 AM
bytes=0499980-0499999 4/27/2003 9:05:46 AM
bytes=0499980-0599989 4/27/2003 9:05:46 AM
bytes=0599980-0599999 4/27/2003 9:05:54 AM
bytes=0599990-0699999 4/27/2003 8:57:15 AM
bytes=0699990-0799999 4/27/2003 8:57:30 AM
bytes=0799990-0899999 4/27/2003 8:57:47 AM
bytes=0899990-0999999 4/27/2003 8:58:04 AM
bytes=0999990-1099999 4/27/2003 8:58:19 AM
bytes=1099990-1199999 4/27/2003 8:58:34 AM
bytes=1199990-1299999 4/27/2003 8:58:47 AM
bytes=1299990-1399999 4/27/2003 8:59:02 AM
bytes=1399990-1499999 4/27/2003 8:59:20 AM
bytes=1499990-1599999 4/27/2003 8:59:33 AM
bytes=1599990-1699999 4/27/2003 8:59:47 AM
bytes=1699990-1799999 4/27/2003 9:00:01 AM
bytes=1799990-1899999 4/27/2003 9:00:15 AM
bytes=1899990-1999999 4/27/2003 9:00:32 AM
bytes=1999990-2099999 4/27/2003 9:00:47 AM
bytes=2099990-2199999 4/27/2003 9:01:03 AM
bytes=2199990-2299999 4/27/2003 9:01:18 AM
bytes=2299990-2399999 4/27/2003 9:01:33 AM
bytes=2399990-2499999 4/27/2003 9:01:50 AM
bytes=2499990-2599999 4/27/2003 9:02:05 AM
bytes=2599990-2699999 4/27/2003 9:02:20 AM
bytes=2699990-2799999 4/27/2003 9:02:35 AM
bytes=2799990-2899999 4/27/2003 9:02:50 AM
bytes=2899990-2999999 4/27/2003 9:03:03 AM
bytes=2999990-3099999 4/27/2003 9:03:18 AM
bytes=3099990-3199999 4/27/2003 9:03:32 AM
bytes=3199990-3299999 4/27/2003 9:03:47 AM
bytes=3299990-3399999 4/27/2003 9:04:02 AM
bytes=3399990-3499999 4/27/2003 9:04:18 AM
bytes=3499990-3599999 4/27/2003 9:04:32 AM
bytes=3599990-3699999 4/27/2003 9:04:46 AM
bytes=3699990-3799999 4/27/2003 9:04:52 AM
bytes=3799990-3887103 4/27/2003 9:04:59 AM
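For anyone who wants to repeat the check, here's a rough sketch of how the overlap flagging could be scripted. Nothing about it is specific to LimeWire or BearShare; only a few (start, end, time) tuples from the list above are shown, and you'd paste in the full set.

# Flag requests that overlap bytes an earlier request already covered.
requests = [
    (599_990, 699_999, "8:57:15"),
    (699_990, 799_999, "8:57:30"),
    (0, 99_999, "9:05:06"),
    (99_980, 199_989, "9:05:13"),
    (199_980, 199_999, "9:05:20"),
]

covered = []    # (start, end) ranges seen so far; kept unmerged, good enough for a report
for start, end, when in requests:
    overlap = sum(min(end, c_end) - max(start, c_start) + 1
                  for c_start, c_end in covered
                  if start <= c_end and end >= c_start)
    if overlap:
        print(f"{when}: bytes={start}-{end} re-requests {overlap} byte(s)")
    covered.append((start, end))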
Still in pit digging
It's probably another of my misunderstandings, and I'm sorry to rejoin this debate, but I think I'm missing something. Isn't it the case, in the swarming system of downloading from multiple hosts, that a file is created for each potential host? Each of these files holds the metadata description of the whole file to be downloaded, keeps track of which packets of data are still missing from the downloads to date, and has an imperative to attempt to fill all those identified holes. Hence, until one of the metadata records shows a complete download, I do not see why LimeWire would not potentially request as many copies of an individual data packet as there are hosts in the given grouping.
Re: Still in pit digging
No matter how LimeWire organizes downloads from multiple sources, it should never request the same data from the same source more than once.
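As a hedged illustration of that rule (not LimeWire's actual data structures): however the work is split across hosts, the downloader only needs to remember which ranges it has already requested from each source and refuse to ask that source for them again.

from collections import defaultdict

class RequestTracker:
    """Sketch: per-source bookkeeping so no range is requested twice from one host."""
    def __init__(self):
        self.asked = defaultdict(list)        # source -> [(start, end), ...]

    def already_asked(self, source, start, end):
        return any(start <= e and end >= s for s, e in self.asked[source])

    def record(self, source, start, end):
        self.asked[source].append((start, end))

tracker = RequestTracker()
source = "10.0.0.5:6346"                      # hypothetical BearShare host
tracker.record(source, 0, 99_999)
print(tracker.already_asked(source, 0, 99_999))         # True: would be a duplicate
print(tracker.already_asked(source, 100_000, 199_999))  # False: new range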
I will submit a fix for that.
And if you're going to download a "check piece", don't just request 20 bytes. Request chunksize + 20 bytes starting from chunkposition - 20; if the extra 20 bytes match what you already have, you're happy and haven't wasted much bandwidth. If they don't match, then start getting little pieces to figure out whether the 20 bytes you got before are correct or the 20 bytes you just got are correct (i.e., decide whether the previous chunk was bad or the one you just got was bad). With fewer than two sources I don't think you can make that decision anyway; with more than two, you could do a "voting" kind of thing.
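Roughly what I mean, as a sketch with a made-up chunk size and a stand-in fetch() (this is nobody's real code, just the shape of the idea):

OVERLAP = 20          # bytes of the previous chunk to re-request as a check
CHUNK = 100_000       # example chunk size

def fetch(start, end):
    """Stand-in for a real ranged download from one source."""
    raise NotImplementedError

def download_with_check(file_bytes, position):
    """Ask for OVERLAP extra bytes in front of the chunk instead of a separate 20-byte request."""
    start = max(0, position - OVERLAP)
    data = fetch(start, position + CHUNK - 1)
    check_len = position - start
    if file_bytes[start:position] == data[:check_len]:
        # Overlap matches what we already have: keep the new chunk, no extra round trip.
        file_bytes[position:position + CHUNK] = data[check_len:]
        return True
    # Mismatch: either the previous chunk or this one is bad; fall back to small
    # pieces (or a third copy for "voting") to decide which one to throw away.
    return False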
Well, let it be 200 bytes for 10, 20, or a hundred chunks; that's still less than 20K and nothing to worry about. It can easily be fixed, and the next release will probably fix it.