Gnutella Forums  


General Gnutella Development Discussion: for general discussion about Gnutella development.


#1 - November 13th, 2002
Thad (Apprentice; Join Date: November 13th, 2002; Posts: 7)
97.7k chunk size

[Apologies in advance if this isn't the right place to mention this, but... ]

Acquisition 0.67, a Mac OS X Gnutella client, implements the new "chunk" download method, where files are downloaded in 97.7k segments. I understand this is a new Gnutella feature (or possibly just a LimeWire feature?) designed to improve the robustness of the download system. Unfortunately, it does not work very well yet. Most of the time, it will result in the client downloading 97.7k worth of data, disconnecting, and then attempting to reconnect to continue the download. Needless to say, this prevents the connection from ramping up to full speed, and it also frequently results in someone else taking the "free" download slot before your client can reconnect.

What is the justification for breaking downloads into these small chunks, and are there any plans to address the serious problems caused by this method of handling downloads?

Last edited by Thad; November 13th, 2002 at 11:49 AM.

#2 - November 13th, 2002
linuxrocks (BearShare Mod; Location: Chicago; Join Date: April 12th, 2002; Posts: 29)
Re: 97.7k chunk size

This is also used in BearShare, where it's called "Partial Content". The advantages are that the chunks are smaller for the client and the download will complete faster. It also rotates the connections a download uses: say you're downloading a file that has 100 sources but your client can only download from 16 sources at a time; this lets the client rotate the hosts it is downloading from. Hopefully this isn't too wordy!
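
For reference, "Partial Content" is HTTP's 206 status: the client asks for one byte range at a time and the host replies with just that slice. Here is a minimal sketch in Python of what a single chunk fetch boils down to; the host, port, and path are made-up placeholders, and the 100,000-byte chunk size is only an approximation of the 97.7k discussed in this thread, not a value taken from any particular client.

Code:
# Sketch of one chunked ("Partial Content") fetch. Peer address, path,
# and chunk size are illustrative assumptions.
import http.client

CHUNK = 100_000  # roughly 97.7k

def fetch_chunk(host, port, path, offset):
    conn = http.client.HTTPConnection(host, port, timeout=30)
    # Request a single chunk via an HTTP/1.1 Range header.
    conn.request("GET", path, headers={
        "Range": f"bytes={offset}-{offset + CHUNK - 1}",
        "Connection": "Keep-Alive",  # ask the host not to drop us between chunks
    })
    resp = conn.getresponse()
    if resp.status != 206:           # 206 = Partial Content
        raise RuntimeError(f"host did not honor the range request: {resp.status}")
    data = resp.read()
    conn.close()
    return data

# Hypothetical usage: grab the first chunk of a file from a peer.
# chunk = fetch_chunk("10.0.0.5", 6346, "/get/1/song.mp3", 0)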

#3 - November 13th, 2002
Thad (Apprentice; Join Date: November 13th, 2002; Posts: 7)

Thanks for the reply, linuxrocks. However, in my experience, downloads do not complete faster with the "partial content" feature. (Like I said, most of them do not complete at all; they simply disconnect at 97.7k, never to resume.) In fact, I'm not sure how it could be faster, as I have a DSL connection that needs a solid, continuous connection in order to ramp up to full speed, and that just doesn't happen in the space of 97.7k.

As for rotating downloads from multiple hosts, that might be great if you are downloading Eminem's latest, but most of the time the file I'm trying to download is only hosted by one source. When I've finally managed to track down a rare file, the last thing I want is to have my connection to the host severed mere seconds after it began.

I'm not sure if the problem is that the client is actually sending a "disconnect" message after each 97.7k chunk, or if some hosts incorrectly interpret the end of a chunk as their cue to close the connection and offer up the download slot to someone else. Regardless, at this point this "feature" is causing far more harm than good, and I hope these problems will be addressed soon.

#4 - November 13th, 2002
linuxrocks (BearShare Mod; Location: Chicago; Join Date: April 12th, 2002; Posts: 29)

Quote:
Originally posted by Thad
I'm not sure if the problem is that the client is actually sending a "disconnect" message after each 97.7k chunk, or if some hosts incorrectly interpret the end of a chunk as their cue to close the connection and offer up the download slot to someone else.

Look in your 'Console' or equivalent. The information you need should be there.

#5 - November 13th, 2002
trap_jaw (Distinguished Member; Location: Aachen; Join Date: September 21st, 2002; Posts: 733)

The smaller chunk size does not reduce download speed as much as the LimeWire/Acquisition UI suggests. (Try a network monitor or something and look at the overall downstream.) Downloads have been a lot more robust since the change, and LimeWire/Acquisition no longer creates large files containing mostly empty data. Corruption should be identified earlier, and the overall downloads should be more robust. This new feature might also come in very handy if tree hashes and partial file sharing are implemented one day, although partial file sharing is generally overestimated (except that it can be used to force users to share; in my experience, though, freeloaders are becoming rarer as connection speeds increase).
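
The "corruption identified earlier" point is easiest to see with a sketch. Assuming per-chunk hashes are available (for example from the tree hashes trap_jaw mentions, which were not yet deployed at the time), a bad download can be narrowed to individual 97.7k chunks instead of throwing away the whole file. The hash algorithm and chunk size below are stand-ins for illustration, written in Python:

Code:
# Sketch only: with a list of expected per-chunk hashes, corruption can be
# localized to the chunks that need re-downloading. SHA-1 and the 100,000-byte
# chunk size stand in for whatever a real client would use.
import hashlib

CHUNK = 100_000

def find_bad_chunks(path, expected_chunk_hashes):
    """Return the indices of chunks whose hash does not match."""
    bad = []
    with open(path, "rb") as f:
        for i, expected in enumerate(expected_chunk_hashes):
            data = f.read(CHUNK)
            if hashlib.sha1(data).hexdigest() != expected:
                bad.append(i)  # only these byte ranges need re-downloading
    return bad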

#6 - November 13th, 2002
Thad (Apprentice; Join Date: November 13th, 2002; Posts: 7)

trap_jaw,

Hmm... I'll grant you I may be deceived about the download speed by the UI. So perhaps you are right that the small chunk size will eventually be a good thing overall. But for the moment, it is a serious hindrance, since it doesn't maintain a steady connection -- it downloads 97.7k and then disconnects, leaving your download aborted practically as soon as it has begun. Yes, it tries to reconnect, but 99% of the time that doesn't happen. So while this feature may be good for the network overall, it's terrible for the end users who are stuck with it (at least until the problems with the implementation are resolved).

#7 - November 14th, 2002
trap_jaw (Distinguished Member; Location: Aachen; Join Date: September 21st, 2002; Posts: 733)

This can only happen if you are downloading from a buggy client that claims to support HTTP 1.1 but does not really implement it.

I've seen that happen, too.
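
If you can watch the connection yourself, there is a rough way to check for this: after a 206 response, an honest HTTP/1.1 host keeps the connection open unless it explicitly says otherwise. The Python probe below is only an illustration with a hypothetical host, not something any of these clients expose:

Code:
# Probe whether a host really keeps the connection alive between chunks.
# Host, port, and path are hypothetical; a host that answers with HTTP/1.0
# or "Connection: close" forces a full reconnect for every 97.7k chunk.
import http.client

def peer_keeps_connection_open(host, port, path):
    conn = http.client.HTTPConnection(host, port, timeout=30)
    conn.request("GET", path, headers={"Range": "bytes=0-99999",
                                       "Connection": "Keep-Alive"})
    resp = conn.getresponse()
    resp.read()  # drain the chunk so the connection could be reused
    closing = (resp.version < 11 or
               (resp.getheader("Connection") or "").lower() == "close")
    conn.close()
    return not closing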

#8 - November 14th, 2002
Thad (Apprentice; Join Date: November 13th, 2002; Posts: 7)

That's what I thought, but there are apparently an awful lot of buggy clients out there. It's difficult to know which ones are the culprits since that information is not available to Acquisition users, but I know for a fact that Shareaza doesn't support this (i.e., it boots Acq users after the first 97.7k). There are surely others, given the howls of outrage from users when this "feature" was introduced.

Anyway, Dave Watanabe, the developer of Acquisition, has taken steps to try to fix this on his end, and it seems to be working better now -- so far.

#9 - November 16th, 2002
sdsalsero (Gnutella Muse; Join Date: December 19th, 2001; Posts: 173)
chunks emulate Ethernet

More specifically, it's trying to copy the success of the CS/MA nature of Ethernet: instead of trying to establish a constant connection (with guaranteed throughput), experience has shown that it's better to have a "collision domain" where everybody competes for temporary access. If two requests collide, they both back off for a random amount of time and try again. As long as the signaling speed is fast and you don't have more than about 25 requestors, you get consistent transfer rates (60-70% of the theoretical maximum).

P.S. I know I've got the acronym for this technique wrong but I'm tired and don't want to try and look it up tonight...
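
(The acronym being reached for here is CSMA/CD, carrier sense multiple access with collision detection.) The retry rule it uses is binary exponential backoff, which maps naturally onto retrying a busy download slot. A toy sketch of that rule in Python, not code from any Gnutella client:

Code:
# Toy illustration of CSMA/CD-style binary exponential backoff applied to
# retrying a busy host: after each failed attempt, wait a random number of
# "slots", with the range doubling every time. Parameters are made up.
import random
import time

def retry_with_backoff(try_download, max_attempts=10, slot_seconds=1.0):
    for attempt in range(max_attempts):
        if try_download():                       # e.g. the host had a free upload slot
            return True
        slots = random.randint(0, 2 ** attempt)  # 0..2^attempt slots, as on Ethernet
        time.sleep(min(slots * slot_seconds, 60.0))
    return False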

#10 - November 16th, 2002
Thad (Apprentice; Join Date: November 13th, 2002; Posts: 7)

I think it's obvious from the experience of many users (not just me -- really, check the Acquisition forums) that this is probably okay for local networks, and maybe even okay for files that are commonly hosted by, well, a host of clients, but it is extremely bad for that most common of Gnutella situations, i.e., when many clients are after a file hosted by a tiny minority of hosts. Especially when the clients play by different rules.

If you have one client that implements the chunk downloads (handing over its download spot to someone else after only a 97.7k transfer) versus another client that hangs on to the connection like a bulldog until it is complete, who do you think is going to end up with the successful download?
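
To put rough numbers on that point (the file size and per-reconnect risk below are assumed values, not measurements): a 5 MB file needs over fifty 97.7k requests, and even a modest chance of losing the slot at each reconnect compounds quickly.

Code:
# Back-of-the-envelope illustration; the 5 MB file size and the per-reconnect
# slot-loss probabilities are assumptions, not measurements.
CHUNK = 100_000                      # roughly 97.7k per request
file_size = 5 * 1024 * 1024          # a 5 MB file
chunks = -(-file_size // CHUNK)      # ceiling division: 53 requests

for p_lose_slot in (0.05, 0.10, 0.20):
    p_finish = (1 - p_lose_slot) ** (chunks - 1)  # must regain the slot after every chunk
    print(f"lose slot {p_lose_slot:.0%} per reconnect -> "
          f"{p_finish:.1%} chance of an uninterrupted finish")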