
Unregistered December 31st, 2001 06:55 PM

solving the re-query problem
 
Getting more results after the initial query has already been broadcast is an ongoing problem. Some developers have set their clients to send out more broadcast searches, which can burden the network with traffic. Some suggestions I have:

1) Begin to structure the network more, grouping people by the file types they are sharing and the types they are looking for. This would be good to do under the ultrapeers.

2) Re-query with a non-broadcast query that is sent from one machine to the next in a directed manner, toward areas of the network rich in that media type.

3) Have ultrapeers re-query for their peers. But instead of simply sending out a query for every file for every client, an ultrapeer would wait until a few of its clients are looking for the same file (or a range of file hashes).

4) Better yet, allow packets to contain multiple queries. Then the ultrapeer can simply group many of the queries together into one search. Re-querying the network could take 1/10-1/100 or less of the bandwidth by grouping the searches. This could even be done by individual clients if multiple specific file-hash queries could be grouped together. A client may be looking for 100 files; instead of sending out 100 different searches (one for each file), it could send out one search with the 100 file hashes it is looking for.
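A minimal sketch of the batching idea in suggestion 4, assuming raw 20-byte SHA-1 hashes and a made-up payload layout (a two-byte count followed by the hashes); this is not the real Gnutella Query format, only an illustration of how one message could carry many file-hash queries.

import struct

HASH_LEN = 20  # raw SHA-1 digest length (assumption for this sketch)

def pack_multi_query(hashes):
    # Hypothetical layout: 2-byte big-endian count, then the raw hashes.
    payload = struct.pack("!H", len(hashes))
    for h in hashes:
        if len(h) != HASH_LEN:
            raise ValueError("expected a raw 20-byte hash")
        payload += h
    return payload

def unpack_multi_query(payload):
    # Recover the individual hashes so a servent can match them locally.
    (count,) = struct.unpack_from("!H", payload, 0)
    return [payload[2 + i * HASH_LEN : 2 + (i + 1) * HASH_LEN]
            for i in range(count)]

The point of the batching is that the fixed per-message cost (descriptor header, routing state, TCP/IP framing) is paid once per batch instead of once per file.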

cultiv8r January 1st, 2002 11:02 AM

I like your ideas about this. My client actually implements your 2nd suggestion already. One thing I don't really agree with is your first suggestion (grouping).

Grouping has its benefits, but you need to be able to select which groups you prefer, per search request or session. Say I am sharing nothing but MPx files and am grouped with others sharing MPx files, but what I need is a DOC file. How likely am I to actually find that DOC file in this group?

Your last suggestion is something I proposed quite a while ago on the GDF ("piggyback riding"), but it didn't really catch on then. However, a new proposal called GGEP, a generic extension block that can be attached to Gnutella messages, will allow us to do this much better.

Cool stuff. Keep it coming :)

-- Mike

gnutellafan January 1st, 2002 06:16 PM

grouping
 
I agree that there are problems with grouping. However, if it is dynamic, based both on what you are sharing and what you are searching for, and you are able to belong to multiple groups, it should offer benefits.
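As a rough sketch of what dynamic membership in multiple groups could look like (the media-type table and threshold below are assumptions, not part of any existing client): a node joins every group that covers either what it shares or what it has recently searched for, so its membership shifts as its behaviour changes.

from collections import Counter

MEDIA_TYPES = {
    ".mp3": "audio", ".ogg": "audio",
    ".avi": "video", ".mpg": "video",
    ".doc": "docs",  ".pdf": "docs",
}

def group_membership(shared_files, searched_types, min_hits=1):
    # Count evidence for each group from shared files and recent searches.
    tags = Counter()
    for name in shared_files:
        for ext, group in MEDIA_TYPES.items():
            if name.lower().endswith(ext):
                tags[group] += 1
    for media_type in searched_types:   # e.g. ["docs"]
        tags[media_type] += 1
    return {group for group, hits in tags.items() if hits >= min_hits}

# A node sharing only MP3s but hunting for a DOC file belongs to both
# groups at once:
print(group_membership(["song.mp3", "live.ogg"], ["docs"]))  # {'audio', 'docs'}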

As for packets containing multiple queries: is bandwidth more an issue of packet size or packet count?
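A back-of-the-envelope comparison suggests that for small payloads it is mostly the number of messages that matters. The sketch below assumes raw 20-byte hashes, the 23-byte Gnutella descriptor header, and roughly 40 bytes of TCP/IP headers if each query ends up in its own packet; in practice messages on a persistent connection can share packets, so treat the per-message cost as an upper bound.

GNUTELLA_HEADER = 23   # 16-byte ID + payload type + TTL + hops + 4-byte length
TCP_IP_OVERHEAD = 40   # 20-byte IPv4 header + 20-byte TCP header, no options
HASH_LEN = 20          # one raw SHA-1 hash per wanted file
N_FILES = 100

separate = N_FILES * (GNUTELLA_HEADER + TCP_IP_OVERHEAD + HASH_LEN)
batched = GNUTELLA_HEADER + TCP_IP_OVERHEAD + 2 + N_FILES * HASH_LEN

print(separate, batched, round(separate / batched, 1))  # 8300 2065 4.0

The saving grows as the per-query payload shrinks relative to the headers, and the absolute saving is multiplied by every hop the query is flooded over; the 1/10-1/100 figure above presumably also counts savings from routing and duplicate suppression.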

Is there data on how much of the query bandwidth is taken up by the top 10 or 20 searches? I imagine it is quite large. If ultrapeers could cache the results and give extra TTL to others, results might be better.
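A sketch of the caching side, assuming an ultrapeer keeps recent results for popular searches and answers repeats locally; the five-minute freshness window and the names here are made up, not taken from any existing client.

import time

CACHE_TTL = 300.0  # seconds a cached result set is treated as fresh (assumption)

class QueryCache:
    def __init__(self):
        self._entries = {}   # normalized query -> (timestamp, results)

    @staticmethod
    def _key(query):
        return " ".join(query.lower().split())

    def lookup(self, query):
        entry = self._entries.get(self._key(query))
        if entry and time.time() - entry[0] < CACHE_TTL:
            return entry[1]   # popular repeat: answer without re-broadcasting
        return None

    def store(self, query, results):
        self._entries[self._key(query)] = (time.time(), list(results))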

BTW, when will we see a release of cultiv8r? I would love to try your client out. Email me at gnutellafan@hotmail.com

Moak January 3rd, 2002 09:37 AM

specialized horizon
 
Hi, I like ideas 1+2+4! This could also help to find rare files and to group similar files, and their traffic, together.

I didn't get the advantages of number 3, sorry. Hmm, a superpeer could throttle routing searches instead of sending them immediately, then group searches from various clients and send them out together to avoid querying for the same file multiple times... however, a search cache in servents and superpeers does the same: it reduces traffic and makes re-querying not that unhealthy.
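For what it's worth, the throttling half of number 3 could look like the sketch below: the superpeer holds queries from its leaves for a few seconds, collapses duplicates, and forwards one combined batch. The interval and data structures are assumptions, not anyone's actual design.

import time

FLUSH_INTERVAL = 5.0  # seconds to hold queries before forwarding (assumption)

class QueryBatcher:
    def __init__(self, forward_batch):
        self._pending = {}              # wanted item -> leaves that asked for it
        self._last_flush = time.time()
        self._forward_batch = forward_batch

    def add(self, leaf_id, wanted):
        self._pending.setdefault(wanted, set()).add(leaf_id)
        if time.time() - self._last_flush >= FLUSH_INTERVAL:
            self.flush()

    def flush(self):
        if self._pending:
            # One entry per distinct item, no matter how many leaves asked.
            self._forward_batch(list(self._pending))
        self._pending.clear()
        self._last_flush = time.time()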

Moak January 14th, 2002 06:16 PM

hmm
 
Analyzing Gnutella traffic more... I especially like idea 4: grouping Queries or QueryHits together, and so avoiding protocol overhead and also using routing tables (inside the servent) and TCP/IP more efficiently.

While I really like this idea... wouldn't it break v0.4 backwards compatibility? Or do you think we don't care, because v0.4 clients will soon be outdated and rare to find? A possible solution could be: when you're connected to a v0.6 client, group Queries/QueryHits together where possible; if you're connected to a v0.4 client, split them up into single messages again.
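A small sketch of that compatibility fallback; the capability flag and the two send callbacks are placeholders, not a real client API.

def forward_queries(peer, queries, send_single, send_batched):
    # Peers that negotiated batching (e.g. during the v0.6 handshake) get
    # one combined message; legacy v0.4 peers get one query per message.
    if peer.get("supports_batching"):
        send_batched(peer, queries)
    else:
        for q in queries:
            send_single(peer, q)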

hermaf January 15th, 2002 01:33 AM

Quote:

1) Begin to structure the network more, grouping people by the file types they are sharing and the types they are looking for. This would be good to do under the ultrapeers.
I think grouping can be easily done:
1) If you download a file from a client, you try to connect to that servent directly. So over time you will have many servents in your horizon that have the file types you search for.

2) Another option would be to connect to the client(s) you get the most query hits from.

I think something like that.

If you don't want to have too many connections open, you then have to kick one, of course. How to do that? I mean, one problem would be that it leads to an even more dynamic network structure that could rapidly change/break paths in the net, so messages are lost because the path back is broken.
One criterion could be to disconnect from a client you did not get many hits back from, or that does not share a file / many files. This may also be a way to make life harder for freeriders. On the other hand, you tend to block users that sit on a short path with not many clients in their branch, and those that do not share many files. And can you "punish" someone by disconnecting if he does not offer enough files?
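One way to make that criterion concrete, as a rough sketch (the weights and connection limit are invented, not from any real client): score each neighbour by the hits it has returned and the files it shares, and only drop the lowest-scoring one when the connection limit forces a choice.

MAX_CONNECTIONS = 8   # assumption, not a value from any real client

def score(peer):
    # peer is a dict like {"hits_returned": 12, "files_shared": 340}
    return 2 * peer["hits_returned"] + peer["files_shared"]

def maybe_replace(connections, candidate):
    # Accept a new neighbour, kicking the weakest one if the slots are full.
    if len(connections) < MAX_CONNECTIONS:
        connections.append(candidate)
        return None
    worst = min(connections, key=score)
    if score(candidate) > score(worst):
        connections.remove(worst)
        connections.append(candidate)
        return worst          # the peer that got disconnected
    return None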

