General Gnutella Development Discussion For general discussion about Gnutella development. |
Now I feel like I have to make a comment. I haven't been to the GDF for a while, but when I was, I was really unimpressed by the amount of mess over there. It was nearly impossible to find what you are looking for, or to understand what those people finally agreed on. I have no time to participate in the forums. Really, I develop Mutella part-time, and that's ALL the time I can actually devote to the topic. It's a shame there's no place where I can go and check the latest updates to the protocol, etc. Clip2 made a major contribution by releasing the Gnutella Protocol Specification v0.4.x, and it's a shame nobody has repeated that for v0.6 and later. Btw, the first functional version of Mutella was developed in about a month. Well, I didn't do it from scratch, which I regret :-) Maksik
New direction

Well, to the programmers here who are not working on a current client, let me suggest that you get involved and take Gnutella in a new direction. Use the ideas, optimize the protocol and code, and add some great new features.

I think the most important thing Gnutella is lacking is security in the form of anonymity. The best way to do this, and I have expressed it here, at the GDF, etc., is to add encryption to file transfers and to cache a percentage of the file transfers. The cached files would be encrypted too. Users would be required to provide at least 100 MB of HD space for encrypted files (or partial files) and could choose to share as much as they want of their own files. They would not be able to decrypt those cached files and therefore would not know what they had. The network should not be able to tell whether the files being transferred are encrypted cache files or regular shared files, so no one could say who is sharing what.

This also provides a huge benefit: now all users are sharing something, even if it is only the 100 MB of encrypted files. If there were encryption, I would also have no problem with the program requiring that the download and partial folders be shared, providing even more resources to the net.

It turns out that we have some very talented people here, and it would be wonderful to see them greatly advance Gnutella to Gnutella2. I guess I am the only one here that doesn't know anything
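The cache-and-encrypt idea above can be sketched in a few lines. This is a toy illustration of the principle only, not any real Gnutella proposal or client code: the `keystream` construction (SHA-256 in counter mode) stands in for a properly vetted stream cipher, and all function names are invented for this example.

```python
import hashlib
import os


def keystream(key: bytes, nbytes: int) -> bytes:
    """Toy stream cipher keystream: SHA-256 in counter mode.
    Illustrative only; a real client would use a vetted cipher."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])


def encrypt_for_cache(plaintext: bytes):
    """Encrypt a chunk before handing it to a caching peer.
    The key stays with the endpoints and never travels with the
    ciphertext, so the cache host cannot read what it stores."""
    key = os.urandom(32)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    return key, ct


def decrypt_from_cache(key: bytes, ct: bytes) -> bytes:
    """XOR with the same keystream recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ct, keystream(key, len(ct))))
```

The point of the design is exactly what the post argues: the caching host holds only ciphertext and no key, so it cannot know, and cannot be said to know, what it is sharing.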
@ maksik is this what you're looking for? http://rfc-gnutella.sourceforge.net/ |
Re: Back to the TOPIC Quote:
Re: Re: Back to the TOPIC Quote:
As for the number of atoms in the universe.... I don't think that number is even close. Whatever scientist came up with that number is on drugs. |
The number of atoms in the universe is about 10^78. You can estimate this number by counting galaxies, measuring how bright they are, and estimating their mass. You don't need any drugs for that.
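The estimate described above can be checked with back-of-envelope arithmetic. The inputs below are rough, commonly cited round numbers (not measurements), and the calculation ignores gas, dust, and dark matter:

```python
# Rough estimate of the number of atoms in the observable universe.
galaxies = 1e11           # order-of-magnitude count of galaxies
stars_per_galaxy = 1e11   # stars in a typical galaxy
sun_mass_kg = 2e30        # mass of a typical (Sun-like) star
proton_mass_kg = 1.67e-27 # stars are mostly hydrogen, i.e. protons

atoms_per_star = sun_mass_kg / proton_mass_kg   # ~1e57
total_atoms = galaxies * stars_per_galaxy * atoms_per_star

print(f"{total_atoms:.0e}")  # on the order of 1e79
```

The result lands within a factor of ten of the 10^78 figure, which is as good as an estimate like this gets.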
Re: Re: Re: Back to the TOPIC

I just had a conversation on IRC .. someone had a good idea, maybe some of you have heard it before. Anyway, the idea is this: hash the first meg of the file as well as the whole file. That way you can tell that 'buffy vampire.divx' 20M is the same file as 'buffy vampyre.divx' 80M, and get at least the first 20M. Then you repeat the search later for files with first-meg hash = x.

To implement this most reliably and sensibly, instead of the HUGE proposal's technique of always and only hashing the whole file, the best implementation would be to have a query 'please hash the file from range x-y'. This shouldn't be totally automated .. because someone might have a text file which includes a smaller text file that should be considered complete .. e.g. they may have tacked some personal notes onto the end of some classic document. You probably don't want the extended version, so a user control button is needed: 'Find bigger files which start off the same', or not.

In fact a really good implementation (not necessary for each client to implement for it to work, as long as clients support the 'hash this part of the file please' extension) would be the one suggested below:

<Justin_> bigger or smaller
<Justin_> or have a slider hehe
<Justin_> the way md5sum works having 100 sums is not that intensive to make right?
<Justin_> cause its incremental no?
<Justin_> so if you had a slider in the program, that starts at 100%, that you can lower by 10% increments to find more files
<Justin_> as in, the default is files that match 100%, or files that match at the 90% mark, well not % it would have to be 10M intervals

Having the ability to request hashes for arbitrary portions of files would additionally make their use for verifying contents reliable - if someone could generate two files with the same hash (or when this happens randomly), simply checking the hash for a given subportion would detect the difference.

Nos
----------
Quote:
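Justin's point in the IRC quote is that incremental hash APIs make checkpoint digests nearly free: MD5's streaming state can be copied mid-stream, so one pass over the file yields digests at several prefix lengths plus the full-file digest. A minimal sketch, assuming checkpoints fall on chunk boundaries; the function and parameter names are invented for this example and are not part of HUGE or any client:

```python
import hashlib


def prefix_hashes(path, chunk_size=1024 * 1024, checkpoints_mb=(1, 10, 20)):
    """Hash a file in one pass, recording the MD5 digest at each
    megabyte checkpoint as well as for the whole file.

    Assumes chunk_size evenly divides each checkpoint offset so
    checkpoints land exactly on read boundaries.
    Returns {byte_offset: hexdigest, ..., "full": hexdigest}.
    """
    h = hashlib.md5()
    digests = {}
    bytes_read = 0
    targets = sorted(mb * 1024 * 1024 for mb in checkpoints_mb)
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
            bytes_read += len(chunk)
            while targets and bytes_read >= targets[0]:
                # Copying the hash state checkpoints the digest
                # without restarting the hash from byte zero.
                digests[targets.pop(0)] = h.copy().hexdigest()
    digests["full"] = h.hexdigest()
    return digests
```

A downloader could then match a candidate source by comparing its 1 MB checkpoint digest against its own, exactly the "files which start off the same" check described above, without hashing either file twice.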
Certainly we are not going to generate each possible version of a 1G file .. ever (well, unless some pr!ck sits down in the far future and does it on purpose as a programming exercise using some newfangled superdupercomputer we can't even imagine yet .. but I stray from the topic). We do need a hash with enough possible values that <i>most probably</i> each individual file we generate will have a unique value .. but it can't be known for sure unless you actually generate the hash for each file (i.e. generate each file). Hashes are funny things. (I'm still searching for a good reference to back that statement up .. but don't have time to find one right now .. see later posting.) I think if you look at the file size and the hash, you have enough certainty to call it a definite match when searching for alternate download sources. Better technique described above in the first portion of this post. Quote:
This will do as a reference: http://groups.google.com/groups?q=number+atoms+universe&hl=en&scoring=r&selm=4kc1fu%24gej%40agate.berkeley.edu&rnum=1 - at least the guy has the word 'physics' in his email, as well as the word 'berkeley'. I couldn't be bothered checking any more thoroughly than that.

Nos

[Edited 14-04-2000 to add URL reference for atom count left out of initial post]

Last edited by Nosferatu; April 14th, 2002 at 12:01 AM.