Forum: Download/Upload Problems - problems with downloading or uploading files through the Gnutella network.
no more auto-requeries?!

Do I understand correctly that the new "Could Not Download; Awaiting Sources" message means that LW is not automatically re-doing the search for that file? If so, that's just ridiculous! The only reason I leave a failed download in my download window is because I'm still hoping to find it. The whole point of computers is to do things for us, i.e., labor-saving. If I have to manually re-do the search, that's no benefit to me. I can't stay home all day and click Repeat Search...

I understand the need to save bandwidth, but disabling one of the main features is just totally misguided. Here are some alternative suggestions:

1. Group and compress communications between Ultrapeers, e.g. wait 1.0 seconds before passing along requests and then use an open-source zip function to reduce bandwidth.

2. "Expose" the Repeat Search parameters of each incomplete download, i.e., the ability to right-click a file and see what known source IPs exist(ed) and what search term(s) generated the original download. If you've been searching for a file for a long time, you'll probably have developed a list of source IPs that have the file but aren't online most of the time. If you could add those IPs to the file's "Source IPs" list, the computer could automatically check them every 5 minutes (or whatever).

Finally, were auto-requeries (before they were cancelled!) doing a search for the exact filename, or were they doing a search on the original search term? If the former, DUH! There's your problem! Every failed download should maintain a copy of the original search term and use that instead of the particular filename. The reasons: a) it'll be shorter and save bandwidth; b) it'll find more potential hits!
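To make suggestion 1 concrete, here's a rough sketch of how an Ultrapeer might buffer outgoing queries for about a second and forward them as one zlib-compressed batch. The names (QueryBatcher, forward) are made up for illustration; this is not how LimeWire or any real Ultrapeer is actually implemented.

```python
import time
import zlib

def forward(blob: bytes) -> None:
    """Placeholder for the peer connection's send routine (assumption)."""
    pass

class QueryBatcher:
    """Buffer queries briefly, then forward them as one compressed frame."""

    def __init__(self, flush_interval: float = 1.0):
        self.flush_interval = flush_interval
        self.pending = []
        self.last_flush = time.monotonic()

    def add(self, query: str) -> None:
        self.pending.append(query)
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self) -> None:
        if not self.pending:
            return
        payload = "\n".join(self.pending).encode("utf-8")
        compressed = zlib.compress(payload)   # the "open-source zip function"
        forward(compressed)                   # one frame instead of many small ones
        print(f"batched {len(self.pending)} queries: "
              f"{len(payload)} -> {len(compressed)} bytes")
        self.pending.clear()
        self.last_flush = time.monotonic()
```

Batching matters here because deflate compresses much better across several queries than on each short query sent separately, on top of saving per-message framing overhead.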
sdsalsero, FWIW, I have been able to leave LW 2.9.8.2 unattended for 6-8 hours and some files have completed. I do have to spend a lot of time setting up a blocked-host list (to filter the spam results that mask good sources) and trying to build up a list of good alternates by using narrow searches.

Yeah, I'd like to see automatic requeries back, if they couldn't be abused by spammers who flood the network with spurious results. Maybe requeries for narrow searches (fewer than 10 results), as a decreasing average of previous manual searches? I think Acq .84 has temporarily brought back limited automatic requeries, but I think newer Ultrapeers have some logic to block clients that use automatic requeries.
Thanks for the replies, guys. I understand the need to reduce bandwidth waste but, really, I hope there is a way to improve the requery. For instance, maybe maintain a copy of the original query and then requery along with a size or file-hash? That way, the only hits would be the same file.

As for blocking requeries, are they labeled as such? They must be, or else the Ultrapeers couldn't differentiate and block them.

I've been a paying supporter of LW since day 1 (I'm on my 3rd subscription now) but this loss of functionality is really, really pissing me off...
___________________
stief, what's this 2.9.8.2 version? I'm running LW 2.9.8-Pro on W2K, and it's a copy I downloaded within 24-48 hrs of its availability.
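A minimal sketch of that idea, assuming hypothetical placeholders (SearchResult, send_query) rather than anything in LimeWire's real code: reissue the original query, then keep only hits whose size or SHA-1 matches the stalled download.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SearchResult:          # assumed shape of a hit, for illustration only
    filename: str
    size: int
    sha1: str
    host: str

def requery_for_download(original_query: str,
                         expected_size: int,
                         expected_sha1: str,
                         send_query: Callable[[str], List[SearchResult]]):
    """Repeat the original (short) query and filter hits down to the exact file."""
    results = send_query(original_query)
    return [r for r in results
            if r.size == expected_size or r.sha1 == expected_sha1]
```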
Re 2.9.8.2: an anonymous poster, "thebigname", posted it late last week. I'm not sure what improvements it has, but I think it's related to some of the tweaks trap_jaw mentioned. Here's the link (no Pro available): http://www9.limewire.com:82/download/ I'll look up the original post and edit the link back here. http://www.gnutellaforums.com/showth...=&postid=68802 [sorry about the editing hassle -- couldn't format the link properly] If you try this one, be prepared for a lot of "could not move to lib" errors.

Last edited by stief; April 21st, 2003 at 08:27 PM.
But it's not just that requeries are inefficient; they were also abused by other vendors. The network load was unacceptable, so they had to be removed.
__________________ Morgens ess ich Cornflakes und abends ess ich Brot Und wenn ich lang genug gelebt hab, dann sterb ich und bin tot --Fischmob |
Baby? Bathwater?!

Trap_jaw, I know you're not one of the developers, but please post the following suggestions on the developer mailing-list:

1. The ability to continue searching automatically for files is a BASIC feature of any P2P app. If they're unable to design an intelligent requery function, they're going to lose their audience.

2. For every file left in requery mode, keep a record of every non-firewalled IP that has ever been reported with the file and periodically request the file anew from those IPs. This will not waste any Gnutella bandwidth since it's a direct request! Also, allow the end user to right-click each file and edit the IP list, i.e., if you've been manually keeping a list. No, I don't think the existence of dynamic IPs invalidates this function!

3. Keep track of leaf-nodes' requeries and throttle them. This could become part of the G2 protocol, e.g. "Do not requery more often than every 120 seconds."

4. If you're seeing a lot of unsuccessful requeries, why not try to improve them? If you assume that requeries are a required function, then making them more effective will reduce bandwidth. Requerying for the original search term plus the filesize would be more likely to 'hit' than searching for the exact filename. Again, this could become part of the G2 protocol, e.g. "no 'filename' over 20 chars and no filehashes in requeries."

5. Don't throttle or block manual requeries! (right-click on search tab, select Repeat Search)

(Thank you...)

Last edited by sdsalsero; April 22nd, 2003 at 08:53 AM.
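To illustrate suggestion 2 under stated assumptions: Gnutella file transfers run over HTTP, so a client could in principle re-check a previously seen source with a direct HEAD request, which generates no Gnutella query traffic at all. KnownSource, source_is_alive and poll_sources are invented names for this sketch, not LimeWire internals.

```python
import http.client
import time
from dataclasses import dataclass

@dataclass
class KnownSource:
    ip: str
    port: int
    uri: str      # e.g. the /get/<index>/<filename> path previously reported

def source_is_alive(src: KnownSource, timeout: float = 5.0) -> bool:
    """Ask a previously seen host directly whether the file is still available."""
    conn = http.client.HTTPConnection(src.ip, src.port, timeout=timeout)
    try:
        conn.request("HEAD", src.uri)
        return conn.getresponse().status in (200, 206)
    except OSError:
        return False
    finally:
        conn.close()

def poll_sources(sources, interval: float = 300.0) -> None:
    """Re-check every known source every `interval` seconds (5 minutes by default)."""
    while True:
        for src in sources:
            if source_is_alive(src):
                print(f"{src.ip}:{src.port} is reachable again - resume download")
        time.sleep(interval)
```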
This discussion already happened on the development mailing list. Until/unless some sort of distributed hash lookup table is created, there will be no more requeries with LimeWire.
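To unpack the term for anyone following along: a distributed hash lookup table would let a stalled download ask for fresh sources by the file's hash instead of broadcasting keyword requeries. The SourceTable below is only a toy, in-memory stand-in for the lookup side of that idea; it says nothing about how such a table would actually be spread across peers.

```python
class SourceTable:
    """Toy hash -> sources index; a real DHT would distribute this across peers."""

    def __init__(self):
        self._sources = {}                        # sha1 -> set of "ip:port" strings

    def publish(self, sha1: str, addr: str) -> None:
        self._sources.setdefault(sha1, set()).add(addr)

    def lookup(self, sha1: str):
        """One targeted lookup replaces a broadcast requery."""
        return sorted(self._sources.get(sha1, ()))

# hypothetical usage with made-up values
table = SourceTable()
table.publish("EXAMPLE-SHA1-HASH", "68.10.20.30:6346")
print(table.lookup("EXAMPLE-SHA1-HASH"))
```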
__________________ Morgens ess ich Cornflakes und abends ess ich Brot Und wenn ich lang genug gelebt hab, dann sterb ich und bin tot --Fischmob |
In addition to the problems I've described elsewhere, I'm very disappointed that in 2.9.8 a half-done file never resumes. After uninstalling and reinstalling 2.9.8 (in order to troubleshoot), I'm finding no improvements to my problems, and the 10.2.5/2.9.8 combo is generally very inefficient. If I restore a previous LMP version, I expect the same awesome experience as before, but if that's going to 'hurt others', I won't do it. Thoughts? -r