On Multisegmented Downloading

Hi folks, in this message (and the thread) I am mainly addressing people who happen to know a thing or two about Gnutella, networking and programming... The question I have in mind may sound rather simple, but I know there is no easy answer to it. The question is the following: what could be the optimal strategy (in the realm of the Gnutella network) to:

1. split a file into segments,
2. distribute the load between peers,
3. glue the segments back together?

For example, we start downloading from host A, and in a short time we find out that host B happens to have the same file. Now let's assume host A is fast (what is fast?). Where shall we start the second download segment? We cannot measure the speed of host B before we try it. In my experience, the fastest downloads come from hosts declaring themselves as 14.4 modems. One alternative would be to maintain a database of fast hosts; that would at least help with the ones having permanent IPs. What else can be done?

OK, let's assume we have made a decision about where to start the new segment, and we created, let's say, 10 segments. Now imagine that when all these segments are downloaded, it turns out that 3 of the 10 do not match for whatever reason. What shall the client do? That is, how can one maintain the tree of alternatives, for example given unlimited local storage space for partial files? Or if that space is limited?

There could be a good point here: make use of the footprints that some clients already send in the optional part of the query reply. According to my experience I only see them in ~30% of replies at the moment, and I want segmented downloads to work now, not next year. A second objection concerning the use of footprints: suppose that, for whatever reason, one bit (or byte) of a particular mp3 file was modified during transmission, and after a certain time the network gets populated with both the original and the 'broken' version. For me, as an end user, the chances of getting either of them are pretty equal, and in fact I don't care, because it's just one bit. And it would be fine if one part were taken from the 'damaged' file and another from the original version. But obviously, these files will have different footprints...

Maybe I've forgotten some of my other questions... but this should be enough already to initiate a discussion.

--Max
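For the mechanical part of points 1 and 3, here is a minimal sketch (Python, with made-up function names, not taken from any existing servant) of splitting a file into one byte range per host and checking the reassembled result against a whole-file footprint such as a SHA-1 advertised in the query reply:

Code:
import hashlib

def plan_segments(file_size, num_hosts):
    """Split a file of file_size bytes into one contiguous range per host."""
    base = file_size // num_hosts
    ranges, start = [], 0
    for i in range(num_hosts):
        # The last segment also takes the remainder bytes.
        end = file_size if i == num_hosts - 1 else start + base
        ranges.append((start, end))   # half-open range [start, end)
        start = end
    return ranges

def glue_segments(segments, expected_sha1=None):
    """Concatenate the downloaded segments in order; optionally verify the
    whole-file footprint reported in the query reply."""
    data = b"".join(segments)
    if expected_sha1 and hashlib.sha1(data).hexdigest() != expected_sha1:
        raise ValueError("reassembled file does not match the advertised footprint")
    return data

Note that a whole-file footprint only tells you that *some* segment is bad, not which one, which is exactly the "3 of 10 do not match" problem raised above.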
Re: On Multisegmented Downloading

You could keep track of the average download speed seen across previous transfers and use it as the estimate for host B until you have actually measured it.

Example: average speed from previous downloads: 50 K/s; measured speed of host A: 100 K/s. Have host B start 67% into the file (host A is twice the average, so have it cover the first two thirds, twice what host B needs to cover, 33%).
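A sketch of that arithmetic (a hypothetical helper, assuming the not-yet-measured host is credited with the historical average speed):

Code:
def split_point(file_size, speed_a, assumed_speed_b):
    """Byte offset where host B should start so that both hosts finish at
    roughly the same time, if the speed estimates hold.  Host A covers
    [0, offset), host B covers [offset, file_size)."""
    fraction_a = speed_a / (speed_a + assumed_speed_b)
    return int(file_size * fraction_a)

# Host A measured at 100 K/s, historical average 50 K/s:
offset = split_point(file_size=3_000_000, speed_a=100, assumed_speed_b=50)
print(offset)   # 2000000 -- host B starts 67% into the file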
Just one question: this is a computer, right? You can program it to do what you want, right? Why don't you just do the second segment download... and download 'backwards'? Meaning you start with the END of the file and download towards its start. Backwards, man.

Jammet
Thanks, efield, keeping track of the average speed seems to be a good idea. This approach may still fail if the statistics are thin (only a few downloads have been made so far), or if, for example, the average speed of the service providers is very different for different types of files... but still, it looks like this is the way to go.

--Max
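As a sketch of how a client might hedge against thin statistics (names and thresholds are made up for illustration), it could fall back to a conservative default speed until enough downloads have been observed:

Code:
class SpeedStats:
    """Running estimate of download speed, with a conservative fallback
    while there are too few samples to trust the average."""
    def __init__(self, default_kbps=10.0, min_samples=5):
        self.default_kbps = default_kbps
        self.min_samples = min_samples
        self.samples = []

    def record(self, kbps):
        self.samples.append(kbps)

    def estimate(self):
        if len(self.samples) < self.min_samples:
            return self.default_kbps   # too little data: assume a slow host
        return sum(self.samples) / len(self.samples)

One could also keep separate statistics per file type or per host, which addresses the objection that averages differ between kinds of files.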
First segment: start from the beginning, forward. Second: start from the end, backwards. And the third, in the middle, but which way? Well, seriously, there is no support for downloading files backwards, neither in the Gnutella protocol nor in HTTP itself.

--Max
So what about if you have that? Two segments? It sounds pretty good to me, this "downloading backwards" thing. I mean, I am not convinced it makes sense, but it's at least an idea. Please don't trash the whole idea from the very beginning, let's talk a bit about it.

Jammet
(Sorry for my grammar earlier.) =) I'll be curious to see if such a funny idea ever gets used... it's good for a fit of giggles right now, maybe more in the future, that'd be nifty... =) Thank you for the comment, Moak. Let's wait and see if anyone else finds it about as interesting as it is funny...

Jammet
It is a very interesting issue you are raising. An important thing every Gnutella servant should do first of all is support Keep-Alive. Keep-Alive means the client can ask the HTTP server not to close the connection when a transfer is finished, so you do not have to open another connection to request another range of the file. Then every servant maker can choose which segmenting scheme to use.

There are different solutions to your question. BearShare requests a 64k segment at a time (2.5% of the file in the latest beta) and always requests the next segment needed. That is a simple system that does not require any statistics, but perhaps not the most effective one.

Backwards downloading might be a way in some distant future. It would require changes in Gnutella, HTTP, operating systems and just about everything.
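To make the Keep-Alive plus Range mechanics concrete, here is a minimal sketch (Python; the host, port and path are placeholders, and this is not BearShare's actual code) of fetching a file in consecutive 64k ranges over one persistent connection:

Code:
import http.client

SEGMENT = 64 * 1024   # 64 KiB per request

def fetch_sequential(host, path, file_size, port=6346):
    conn = http.client.HTTPConnection(host, port)
    data = bytearray()
    offset = 0
    while offset < file_size:
        end = min(offset + SEGMENT, file_size) - 1
        conn.request("GET", path, headers={
            "Range": "bytes=%d-%d" % (offset, end),
            "Connection": "Keep-Alive",
        })
        resp = conn.getresponse()
        body = resp.read()            # read fully before reusing the connection
        if resp.status == 200:        # server ignored Range and sent the whole file
            return body
        if resp.status != 206:
            raise IOError("unexpected status %d" % resp.status)
        data += body
        offset = end + 1
    conn.close()
    return bytes(data)

Requesting the next needed segment over the same connection avoids a new TCP handshake for every 64k, which is the whole point of Keep-Alive here.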
The only thing here is that this "ease of segmented downloads" only works for two nodes. If you add an additional node, you have to do the calculations again (how far along are nodes 1 and 2, and where in the middle do I start this third one?). But overall, since download speeds can fluctuate a lot during the transfer, this is an easy approach if the developer doesn't want to do much calculating.
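One simple way to place an additional node without redoing all of the arithmetic, sketched with a made-up helper: start the new host halfway through the largest not-yet-downloaded gap, so the existing assignments do not have to change:

Code:
def start_offset_for_new_host(gaps):
    """gaps: list of (start, end) byte ranges no host is covering yet.
    Start the new host halfway through the largest remaining gap.
    Purely illustrative; a real servant would also weigh measured speeds."""
    start, end = max(gaps, key=lambda g: g[1] - g[0])
    return start + (end - start) // 2

# Example: two gaps remain in a 2 MB file.
gaps = [(400_000, 1_000_000), (1_600_000, 2_000_000)]
print(start_offset_for_new_host(gaps))   # 700000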