flooding, selfishness and other acts of cruelty are not good.
That I understand and respect. The question is: does it really bother me?
I have a 1 Mbit internet hookup, which is maybe somewhat faster than the average user's, but even when I get a peak of queries passing through my node, I can't say it feels like too high a price to pay for getting more and better results.
Distributed nets are kinda like how the brain works, except bidirectional. The brain has no HLT instruction; it's always working. The same goes for the gnutella net: there will always be some kind of activity on it. The more activity, the more information gets passed. The more information, the more files.
Do we really want people sitting on modem connections to slow us down? LimeWire fixes this by adjusting its gnutella connections to a proper level (as opposed to, e.g., Qtella), but why let them on in the first place?
When the plot thickens, the plot thickens. If you can't stand a 40-50 KB/s load just to be connected, you shouldn't use the gnutella network. I, for one, do not like people with modem connections leeching files from me. It's not the leeching I disapprove of, but the slow connections. Maybe it's time the gnutella community got segregated into two different nets:
One for limewire users on dial-ups,
and one for users on hard-lines. Über-nodes.
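Just to put some back-of-envelope numbers on the split (the 40-50 KB/s overhead figure is mine from above; the link rates are common nominal speeds, and the helper name is made up for illustration):

```python
# Rough sketch: can a given link absorb the gnutella query overhead
# and still have headroom left for actual file transfers?
# All rates are in KB/s (kilobytes, not kilobits). Illustrative numbers only.

QUERY_OVERHEAD_KBPS = 45  # middle of the ~40-50 KB/s load cited above

def headroom_kbps(link_kbps: float, overhead_kbps: float = QUERY_OVERHEAD_KBPS) -> float:
    """Bandwidth left after protocol overhead; negative means the link drowns."""
    return link_kbps - overhead_kbps

# Nominal downstream rates, converted from kilobits to kilobytes per second
links = {
    "56k modem": 56 / 8,       # ~7 KB/s
    "1 Mbit line": 1000 / 8,   # ~125 KB/s
}

for name, rate in links.items():
    print(f"{name}: {headroom_kbps(rate):+.0f} KB/s headroom")
```

A modem comes out deep in the red before a single file moves, while a 1 Mbit line barely notices, which is the whole argument for keeping the two on separate nets.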
As for the Java approach, as I said before: I like the concept in theory. In practice it just won't add up. Even though an OS would be ultra-portable if run in a Java VM, nobody bothers. Even though an entire office solution would be pretty darn cool in Java, it won't add up. I could go on and on, but my point is:
YOU ALREADY SERVE DIFFERENT PACKETS TO DIFFERENT OSes ANYWAY! Why the hassle?! If you can't port your own code, it's a fat chance in h*ll that you'll end up with a viable Java client.
At least, that's my point of view.