Old April 21st, 2003
Joakim Agren
Trouble Shooter
 
Join Date: June 4th, 2002
Location: Örebro Sweden
Posts: 366
Re: no more auto-requeries?!

Quote:
Originally posted by sdsalsero
Do I understand correctly, that the new "Could Not Download; Awaiting Sources" message means that LW is not automatically re-doing the search for that file? If so, that's just ridiculous! The only reason I leave a failed download in my download window is because I'm still hoping to find it. The whole point of computers is to do things for us, i.e., labor-saving. If I have to manually re-do the search, that's no benefit to me. I can't stay home all day and click Repeat Search...

I understand the need to save bandwidth but disabling one of the main features is just totally misguided.


Yes, it means you have to search manually for new sources for the file. The requeries were considered by the LW developers to be a harmful feature for Gnutella, since they increased traffic but never really worked as advertised. LW used to send out a requery once every 50 minutes to find alternate locations for the file, but most of the time it found none, and the "requery sent" status could last for days. Clearly a waste of bandwidth for nothing. The new 2.9X versions will drop all requery messages, in an attempt by the LW team to stop other vendors from using them in future versions. Requeries are no good for Gnutella! This was clearly not one of the main features of LimeWire, as you suggest.
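To make the old behavior concrete, here is a rough sketch (not LimeWire's actual code; every name and the retry cap are my own assumptions) of what an auto-requery loop like the one described above amounts to: re-issue the original query on a 50-minute timer, usually finding nothing:

```python
import time

REQUERY_INTERVAL = 50 * 60  # the old LW requery period: once every 50 minutes

def requery_loop(search, send_query, max_attempts=5, interval=REQUERY_INTERVAL):
    """Hypothetical sketch of the old auto-requery: periodically
    re-send the original query until alternate sources turn up.
    In practice most attempts returned nothing, so the download
    sat in the "requery sent" state while bandwidth was wasted."""
    for _ in range(max_attempts):
        sources = send_query(search)
        if sources:
            return sources        # alternate locations found
        time.sleep(interval)      # wait before the next requery
    return []                     # still "Awaiting Sources"
```

Dropping requeries, as 2.9X does, simply means this loop never runs and peers discard any such repeat queries they receive.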

Quote:

Here's some alternative suggestions:

1. Group and compress communications between Ultrapeers, e.g. wait 1.0 seconds before passing-along requests and then use an open-source zip function to reduce bandwidth.


I think that some kind of message compression technique is currently being worked on by the LimeWire team.
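A minimal sketch of what the batch-and-zip idea from suggestion 1 might look like (illustrative only; the function names and the newline delimiter are my own assumptions, not LimeWire's protocol):

```python
import zlib

def batch_and_compress(messages, encoding="utf-8"):
    """Hypothetical sketch: concatenate messages queued during a short
    window (e.g. 1 second) and zlib-compress the batch before
    forwarding it to the next ultrapeer. Assumes messages contain
    no newline characters, since newline is used as the delimiter."""
    blob = b"\n".join(m.encode(encoding) for m in messages)
    return zlib.compress(blob)

def decompress_batch(payload, encoding="utf-8"):
    """Reverse step on the receiving ultrapeer."""
    return zlib.decompress(payload).decode(encoding).split("\n")
```

The trade-off is latency for bandwidth: each query waits up to the batching window before being relayed, but similar queries compress well together.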

Quote:

2. "Expose" the Repeat Search parameters of each incomplete download, i.e., ability to right-click a file and see what known source IPs exist(ed) and what search term(s) generated the original download. If you've been searching for a file for a long time, you'll probably have developed a list of source IPs that have the file but who aren't on-line most of the time. If you can add those IPs to the file's "Source IPs" list, the computer can automatically check them every 5 minutes (or whatever).


Since the vast majority of users on Gnutella use dynamic IPs, this would not work effectively. It would also consume bandwidth.
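For what it's worth, suggestion 2 would boil down to something like the sketch below (all names are hypothetical; `probe` stands in for whatever connection attempt would be made). The dynamic-IP objection shows up directly: a remembered address may now belong to a different host, so probes fail or hit the wrong machine:

```python
import time

def check_saved_sources(saved_ips, probe, interval=5 * 60, rounds=1):
    """Hypothetical sketch of suggestion 2: every `interval` seconds
    (the post suggests 5 minutes), probe each remembered source IP
    for the incomplete file. With dynamic IPs, the saved address
    often no longer points at the original host."""
    live = []
    for _ in range(rounds):
        live = [ip for ip in saved_ips if probe(ip)]
        if live:
            break                 # at least one source answered
        time.sleep(interval)      # wait before the next sweep
    return live
```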
Quote:

Finally, were auto-requeries (before they were cancelled!) doing a search for the exact filename or were they doing a search on the original search term? If the former- DUH! There's your problem! Every failed download should maintain a copy of the original search term and use that instead of the particular filename. The reasons: a) it'll be shorter and save bandwidth; b) it'll find more potential hits!


This could work if everyone on Gnutella used specific search terms, such as the name of a particular song. But since the original search term is often just an artist name, for instance, the result list would be somewhat larger than for that particular song alone. So I think this function would have increased traffic on Gnutella slightly, due to the extra query hits, if requeries with this change had ever been implemented. I believe the requeries used the specific filename, not the original search term.

I think requeries are a thing of the past now!
__________________
Sincerely Joakim Agren!