An ultrapeer can only handle so many queries per second. Once that limit is reached, further queries are dropped, meaning they won't return any results. Even worse, it means that search results themselves have to be dropped.
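To make this concrete, here is a minimal sketch of how an ultrapeer might enforce such a per-second query budget. This is not LimeWire's actual code; the class name and limit are made up for illustration.
Code:
// Hypothetical per-second query throttle at an ultrapeer (illustration only).
public class QueryThrottle {
    private final int maxQueriesPerSecond;
    private long windowStart = System.currentTimeMillis();
    private int queriesThisWindow = 0;

    public QueryThrottle(int maxQueriesPerSecond) {
        this.maxQueriesPerSecond = maxQueriesPerSecond;
    }

    // Returns true if the query may be routed, false if it must be dropped.
    public synchronized boolean allowQuery() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= 1000) { // start a new one-second window
            windowStart = now;
            queriesThisWindow = 0;
        }
        if (queriesThisWindow < maxQueriesPerSecond) {
            queriesThisWindow++;
            return true;  // within budget: route the query
        }
        return false;     // over budget: drop it, no results for this query
    }
}
|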
This discussion already took place on the development mailing list. Until/unless some sort of distributed hash table (DHT) lookup is created, there will be no more requeries in LimeWire.
Quote:
1. The ability to continue searching automatically for files is a BASIC feature of any P2P app. If they're unable to design an intelligent requery function, they're going to lose their audience.
|
Napster didn't have it, and Kazaa doesn't have it (you have to click 'find more sources' to do a requery, although some Kazaa hacks do have it). Audiogalaxy didn't have it either.
Quote:
2. For every file left in requery mode, keep a record of every non-firewalled IP that has ever been reported with the file and periodically request the file anew from those IPs. This will not waste any Gnutella bandwidth since it's a direct request! Also, allow the enduser to right-click each file and edit the IP list, i.e., if you've been manually keeping a list. No, I don't think that the existence of dynamic IPs invalidates this function!
|
LimeWire already does this: it keeps track of all clients that ever downloaded the file and tells every other client requesting that file who else to ask for it. This is called the download mesh. It will work even better with partial-file sharing, a feature that will be implemented in the near future (probably not in 3.0, but maybe in v3.1). Of course LimeWire could be more aggressive, e.g. retrying hosts that did not respond, but that does not increase performance very much.
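Here is a rough sketch of the download-mesh idea: every host known to have the file (keyed by its SHA-1 urn) is remembered, and the list is handed to the next requester as alternate locations. The names are hypothetical, not LimeWire's actual classes.
Code:
import java.net.InetSocketAddress;
import java.util.*;

// Illustration of a download mesh: remember all known sources per file.
public class DownloadMesh {
    private final Map<String, Set<InetSocketAddress>> sourcesByUrn = new HashMap<>();

    // Record a host that served the file (or downloaded it and now shares it).
    public synchronized void addSource(String sha1Urn, InetSocketAddress host) {
        sourcesByUrn.computeIfAbsent(sha1Urn, k -> new HashSet<>()).add(host);
    }

    // Alternate locations to send along with the next response for this file.
    public synchronized List<InetSocketAddress> alternatesFor(String sha1Urn) {
        return new ArrayList<>(sourcesByUrn.getOrDefault(sha1Urn, Collections.emptySet()));
    }
}
|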
Quote:
3. Keep track of leaf-node's requeries and throttle them. This could become part of the G2 protocol, e.g. "Do not requery more often than every 120 seconds."
|
LimeWire does not plan on implementing G2. Besides, any requery frequency higher than 1-2 requeries per hour has proven to be harmful (and even at that rate, requerying ten downloads takes five to ten hours). There were also clients (QTrax, although it is not available anymore) that would frequently drop their ultrapeer connections to defeat any such protection, as the sketch below illustrates. It's much safer to drop all requeries.
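To see why such a throttle is easy to defeat, consider this hypothetical sketch: the timestamp that enforces the minimum interval lives with the leaf's connection, so a client that drops and re-opens its ultrapeer connection simply starts with a clean slate. The names and the 120-second figure are illustrative only.
Code:
// Hypothetical per-leaf requery throttle. The state is per connection,
// so dropping and re-opening the connection resets it - which is exactly
// how a misbehaving client defeats the limit.
public class LeafConnection {
    private static final long MIN_REQUERY_INTERVAL_MS = 120_000; // 120 s, per the proposal
    private long lastRequeryTime = 0; // starts at 0 on every new connection

    // Returns true if the requery may pass, false if it must be dropped.
    public boolean allowRequery() {
        long now = System.currentTimeMillis();
        if (now - lastRequeryTime >= MIN_REQUERY_INTERVAL_MS) {
            lastRequeryTime = now;
            return true;
        }
        return false;
    }
}
|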
Quote:
4. If you're seeing a lot of unsuccessful requeries, why not try to improve them? If you assume that requeries are a required function, then making them more effective will reduce bandwidth. Requerying for the original search-term plus the filesize would be more likely to 'hit' than searching for the exact filename. Again, this could become part of the G2 protocol, e.g. "no 'filename' over 20 char and no filehashes in requeries."
|
LimeWire never requeried for the exact filename. However, the fuzzier the search, the more unwanted results, and the more wasted bandwidth. Searching for the SHA-1 hash alone was about as exact as a broadcast search can get: it returned all results that could be used to resume a download, and only those.
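For reference, such a hash query identifies the file by a urn derived from its SHA-1 digest; Gnutella's HUGE convention encodes the 20-byte digest in Base32, which yields exactly 32 characters. The sketch below is illustrative, not LimeWire's code.
Code:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Build a "urn:sha1:..." identifier for a file (illustration only).
public class Sha1Urn {
    private static final String BASE32 = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

    public static String forFile(Path file)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(Files.readAllBytes(file));
        return "urn:sha1:" + base32(digest);
    }

    // A 20-byte SHA-1 digest encodes to exactly 32 Base32 characters.
    private static String base32(byte[] data) {
        StringBuilder sb = new StringBuilder();
        int buffer = 0, bitsLeft = 0;
        for (byte b : data) {
            buffer = (buffer << 8) | (b & 0xFF);
            bitsLeft += 8;
            while (bitsLeft >= 5) {
                sb.append(BASE32.charAt((buffer >> (bitsLeft - 5)) & 0x1F));
                bitsLeft -= 5;
            }
        }
        return sb.toString();
    }
}
|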
Quote:
5. Don't throttle or block manual requeries! (right-click on search tab, select Repeat Search)
|
It's impossible to distinguish between manual and automatic requeries. Besides, 'Repeat Search' is de facto not a requery at all, but a completely new query for the same keywords.