solving the re-query problem

How to get more results after the initial query has already been broadcast is an ongoing problem. Some developers have set their clients to send out more broadcast searches, which can burden the network with traffic. Some suggestions I have might be to:

1) Begin to structure the network more, grouping people by the file types they are sharing and the types they are looking for. This would be a good job for the ultrapeers.

2) Re-query with a non-broadcast query that is sent from one machine to the next in a directed manner, toward areas of the network rich in that media type.

3) Have ultrapeers re-query for their peers. But instead of simply sending out a query for every file for every client, wait until a few of its clients are looking for the same file (or a range of file hashes).

4) Better yet, allow packets to contain multiple queries. Then the ultrapeer can simply group many of the queries together in one search. Re-querying the network could take 1/10 to 1/100 or less of the bandwidth by grouping the searches. This could even be done by individual clients if multiple specific file-hash queries could be grouped together: a client may be looking for 100 files, and instead of sending out 100 different searches (one for each file) it could send out one search with the 100 file hashes it is looking for (see the sketch after this post).
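Here is a minimal sketch of what suggestion 4 could look like, assuming a batch of wanted SHA-1 URNs is simply joined into one payload. The class and method names (BatchedQuery, addHash, toPayload) are illustrative, not taken from any real servent:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of suggestion 4: bundling many file-hash queries into
// one search message instead of broadcasting one query per file.
public class BatchedQuery {
    private final List<String> fileHashes = new ArrayList<>();

    public void addHash(String sha1Urn) {
        fileHashes.add(sha1Urn);
    }

    // Serialize all hashes into a single newline-separated payload so the
    // whole batch rides in one broadcast instead of N separate queries.
    public byte[] toPayload() {
        return String.join("\n", fileHashes).getBytes(StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        BatchedQuery batch = new BatchedQuery();
        // 100 wanted files become one message rather than 100 broadcasts.
        for (int i = 0; i < 100; i++) {
            batch.addHash("urn:sha1:EXAMPLEHASH" + i);
        }
        System.out.println("one payload of " + batch.toPayload().length
                + " bytes instead of 100 separate query packets");
    }
}
```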
| |||
I like your ideas about this. My client actually implements your 2nd suggestion already. One thing I don't really agree with is your first suggestion (grouping). Grouping has its benefits, but you need to be able to select which groups you prefer, per search request or session. Say that I am sharing nothing but MPx files and am grouped with others sharing MPx files, but what I need is a DOC file. How large is the chance I will actually find that DOC file in this group? Your last suggestion is something I proposed quite a while ago on the GDF ("piggyback riding"), but it didn't really catch on then. However, a new proposal called GGEP, a generic extension block that can be attached to all Gnutella messages, will allow us to do this much better. Cool stuff. Keep it coming -- Mike
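To make the piggyback idea concrete, here is a rough sketch of attaching a block of extra queries to a message as a named extension. This is not the actual GGEP wire encoding; the extension ID "MQ" and the simple length-prefixed layout are assumptions made purely for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Simplified stand-in for a message extension carrying piggybacked queries.
// Layout (assumed, not GGEP): [id length][id bytes][2-byte payload length][payload].
public class PiggybackExtension {
    public static byte[] encode(String extensionId, List<String> extraQueries) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] id = extensionId.getBytes(StandardCharsets.US_ASCII);
        out.write(id.length);            // 1-byte ID length (illustrative)
        out.write(id);                   // extension identifier
        byte[] body = String.join("\n", extraQueries).getBytes(StandardCharsets.US_ASCII);
        out.write(body.length >> 8);     // 2-byte big-endian payload length
        out.write(body.length & 0xFF);   // (assumes the payload stays under 64 KB)
        out.write(body);                 // the piggybacked queries themselves
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] ext = encode("MQ", List.of("urn:sha1:AAAA", "urn:sha1:BBBB"));
        System.out.println("extension block of " + ext.length + " bytes carrying 2 extra queries");
    }
}
```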
| |||
grouping

I agree that there are problems with grouping. However, if it is dynamic, based both on what you are sharing and what you are searching for, and you are able to belong to multiple groups, it should offer benefits. As for packets containing multiple queries: is bandwidth more an issue of packet size or packet count? Is there data on how much of the query bandwidth is taken up by the top 10 or 20 searches? I imagine it is quite large. If ultrapeers could cache the results and give extra TTL to others, results might be better. BTW, when will we see a release of cultiv8r? I would love to try your client out. Email me at gnutellafan@hotmail.com
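As a hedged sketch of the caching idea: if the top 10-20 search strings really do account for a large share of query traffic, an ultrapeer could answer them from a local cache instead of re-broadcasting them. The class name, capacity, and FIFO eviction below are assumptions, not a description of any existing client:

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Locale;
import java.util.Map;

// Hypothetical ultrapeer-side cache of query hits for popular searches.
public class QueryResultCache {
    private static final int MAX_ENTRIES = 1000;               // arbitrary cap
    private final Map<String, List<String>> cache = new HashMap<>();
    private final LinkedList<String> order = new LinkedList<>(); // oldest first

    private String normalize(String query) {
        return query.trim().toLowerCase(Locale.ROOT);
    }

    // Store hits seen for a query so later identical searches can be answered locally.
    public void store(String query, List<String> hits) {
        String key = normalize(query);
        if (!cache.containsKey(key)) {
            order.addLast(key);
            if (order.size() > MAX_ENTRIES) {
                cache.remove(order.removeFirst());              // simple FIFO eviction
            }
        }
        cache.put(key, hits);
    }

    // Returns cached hits, or null if the query must still be forwarded.
    public List<String> lookup(String query) {
        return cache.get(normalize(query));
    }
}
```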
| ||||
hmm

Analyzing Gnutella traffic more... I especially like idea 4: grouping Queries or QueryHits together to avoid protocol overhead and to use routing tables (inside the servent) and TCP/IP more efficiently. While I really like this idea... wouldn't it break v0.4 backwards compatibility? Or do you think we don't care, because v0.4 clients will soon be outdated and rare to find? A possible solution: when you're connected to a v0.6 client, group Queries/QueryHits together where possible; if you're connected to a v0.4 client, split them back up into single messages.
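A small sketch of that compatibility split, assuming a hypothetical forward() helper that either keeps the batch as one combined message (for a peer that advertises batching) or re-expands it into single legacy queries (for a v0.4-style peer):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: message and connection types are placeholders.
public class BatchSplitter {

    public static List<String> forward(List<String> batchedQueries, boolean peerSupportsBatching) {
        if (peerSupportsBatching) {
            // One combined message: join and send as a single payload.
            List<String> one = new ArrayList<>();
            one.add(String.join("\n", batchedQueries));
            return one;
        }
        // Legacy (v0.4-style) peer: re-expand into one message per query.
        return new ArrayList<>(batchedQueries);
    }

    public static void main(String[] args) {
        List<String> batch = List.of("linux iso", "urn:sha1:ABCD", "vacation.jpg");
        System.out.println("to v0.6 peer: " + forward(batch, true).size() + " message(s)");
        System.out.println("to v0.4 peer: " + forward(batch, false).size() + " message(s)");
    }
}
```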
1) If you download a file from a client, you try to connect to that servent directly. Over time you will have many servents in your horizon that have the file types you search for.

2) Another option would be to connect to the client(s) you get the most query hits from.

I think something like that. If you don't want to have too many connections open, you then have to kick one, of course. How to do that? One problem is that it leads to an even more dynamic network structure that could rapidly change or break paths in the net, so messages are lost because the path back is broken. One criterion could be to disconnect from a client that did not return many hits or that does not share a file (or many files). This may also be a way to make life harder for freeriders. On the other hand, you tend to block users that sit on a short path with not many clients in their branch, and those that do not share many files. And can you "punish" someone by disconnecting if he does not offer enough files?
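A hedged sketch of that connection-preference idea: track hits returned and files shared per neighbour, and pick the lowest-scoring one to drop when a slot is needed. The NeighbourScorer name and the scoring weights are pure assumptions for illustration:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative neighbour scoring; weights and policy are assumptions.
public class NeighbourScorer {
    private final Set<String> peers = new HashSet<>();
    private final Map<String, Integer> hitsReturned = new HashMap<>();
    private final Map<String, Integer> filesShared = new HashMap<>();

    public void recordHit(String peer) {
        peers.add(peer);
        hitsReturned.merge(peer, 1, Integer::sum);
    }

    public void recordShareCount(String peer, int count) {
        peers.add(peer);
        filesShared.put(peer, count);
    }

    private int score(String peer) {
        // Weight returned hits more heavily than raw share count (arbitrary choice).
        return 3 * hitsReturned.getOrDefault(peer, 0) + filesShared.getOrDefault(peer, 0);
    }

    // Candidate to disconnect when a new, more promising peer wants the slot.
    public String worstPeer() {
        return peers.stream()
                .min(Comparator.comparingInt(this::score))
                .orElse(null);
    }
}
```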