|
New Feature Requests Your idea for a cool new feature. Or, a LimeWire annoyance that has to get changed. |
| |||
Two-tiered upload slots? For some strange reason I like to monitor my uploads. I have a cable modem with asymmetric I/O. It really bothers me to see all my upload slots taken by people downloading enormous files, while poor souls wait in the queue for hours to download tiny files. Is there some way I could dedicate x upload slots to big files (say 10MB and up) and y upload slots to small files (under 10MB, or maybe less; maybe configurable by the user)? Regards, Berli |
| |||
What you're really after is a method of sharing your upload resources such that everyone gets an equal number of bytes, not an equal number of files. Is there any scope in giving those grabbing large files perhaps one minute of upload time before popping them back on the tail of the upload queue? This might be simpler to manage than multiple queues, although it could slow the propagation of large files through the gnet. Of course, what one really wants to do is favour those you're confident will in turn make the file available to others. That's a trust mechanism though, leading down the route of certificates and blacklists. Urgh. |
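The round-robin idea could look something like this rough sketch. Everything here is hypothetical (the function, the bookkeeping, and the one-minute quantum are mine, not anything LimeWire actually does): each active upload keeps its slot until it has held it for the quantum, then goes to the tail of the queue while the head of the queue takes over.

```python
from collections import deque
import time

UPLOAD_QUANTUM = 60.0  # seconds of slot time before requeueing (hypothetical)

def maybe_requeue(slots, queue, started_at, now=None):
    """Requeue any upload that has held its slot longer than the quantum."""
    now = time.monotonic() if now is None else now
    for i, host in enumerate(slots):
        if host is not None and now - started_at[host] >= UPLOAD_QUANTUM:
            queue.append(host)          # back on the tail of the queue
            slots[i] = queue.popleft()  # head of the queue takes the slot
            started_at[slots[i]] = now  # its turn starts now
```

Note that if nobody else is queued, the requeued host is immediately popped back into its own slot with a fresh quantum, which is the behaviour you'd want anyway.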
| |||
I think an easy solution would be to alter the network so that the uploader could check a box or set an option that says "Don't upload files larger than x MB unless the downloader has y sources". So when someone decides to download a 600MB file from just one source, thus monopolizing that source, the uploader could force the downloader to have, say, 3 sources before the file will upload. More sources decrease the amount of time any one source is monopolized. Such an option should definitely have a maximum so it could not be abused. Alternatively, the two-tiered system above would also work well. |
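As a sketch of that check (the 10MB threshold and minimum of 3 sources are just the numbers from this thread, and the function is hypothetical, not a real LimeWire API):

```python
MAX_SOLO_SIZE = 10 * 1024 * 1024   # hypothetical cutoff, ideally user-configurable
MIN_SOURCES = 3                    # hypothetical minimum swarm size for big files

def allow_upload(file_size, reported_sources):
    """Small files always pass; big files require the downloader to report
    at least MIN_SOURCES sources before we grant a slot."""
    if file_size <= MAX_SOLO_SIZE:
        return True
    return reported_sources >= MIN_SOURCES
```

With this check, the 600MB-from-one-source case is refused until the downloader finds more sources, while small-file requests are never affected.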
| |||
I don't want to bump people from the queue. Some of them are on modems or otherwise flaky connections, after all. What I want is to keep 2 upload slots open for small files, say 3MB and smaller. As long as I'm sharing large files, I can't effectively share the smaller files I have. That's a shame. With 2 slots I could go through hundreds of small file shares in an hour. Meanwhile the 300MB file shares could chug along for hours or days . . . and everyone would be happy. |
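Reserving slots by tier could be as simple as this sketch (the 3MB cutoff and 2 reserved slots are the numbers from this post; the function itself is hypothetical):

```python
SMALL_FILE_LIMIT = 3 * 1024 * 1024   # "small" cutoff from the post
RESERVED_SMALL_SLOTS = 2             # slots kept free for small files

def pick_tier(file_size, small_in_use, big_in_use, total_slots):
    """Return the tier a new upload should occupy, or None to queue it."""
    if file_size <= SMALL_FILE_LIMIT:
        if small_in_use < RESERVED_SMALL_SLOTS:
            return "small"
    elif big_in_use < total_slots - RESERVED_SMALL_SLOTS:
        return "big"
    return None  # all slots in this tier are busy; wait in the queue
```

A big file can never take one of the two reserved slots, so small files always move through quickly, while the big transfers queue among themselves.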
| |||
Example below. To keep things simple only two upload slots exist, and requests come in in equal numbers for each file. (The queue is drawn with its tail at the top and its head at the bottom.) Initially no uploads are happening:

Queued: <empty>
Queued: <empty>
Slot 1: <empty>
Slot 2: <empty>

Requests come in for the two files being shared and uploading starts:

Queued: <empty>
Queued: <empty>
Slot 1: SmallFile(1)
Slot 2: BigFile(1)

More requests arrive and are queued:

Queued: SmallFile(2)
Queued: BigFile(2)
Slot 1: SmallFile(1)
Slot 2: BigFile(1)

SmallFile(1) finishes uploading quickly and the next item at the head of the queue starts uploading:

Queued: <empty>
Queued: <empty>
Queued: SmallFile(2)
Slot 1: BigFile(2)
Slot 2: BigFile(1)

Now the queued download of SmallFile(2) would, under the current scheme, have to wait for the two active uploads of BigFile to complete, even though they might take hours. To give a more equal sharing of upload bandwidth, people downloading big files get moved to the tail of the queue after they've downloaded a certain amount (say 5MB). So, after a while the downloader of BigFile(1) hits this limit and gets requeued:

Queued: <empty>
Queued: <empty>
Queued: BigFile(1)
Slot 1: BigFile(2)
Slot 2: SmallFile(2)

Note that a queue slot will always exist, as one is freed by the move from the queue to the now-available upload slot. SmallFile(2) now gets a chance to upload, and BigFile(1) will resume uploading when it's done (or when BigFile(2) has uploaded its allowance and gets moved back to the tail of the queue). To within a certain tolerance, this method lets someone download 20 x 5MB files in the same time it takes someone else to download a single 100MB file: resources are shared equally. The downside is that an oversubscribed server sharing only large files will favour giving everyone some upload time rather than finishing a single upload (and thereby potentially handing the upload demand off to another server). 
Partial file sharing solves this problem in theory; alas, in practice the gnet's far from ideal. |
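The requeue step in the walkthrough above can be sketched in a few lines (the 5MB allowance is from the post; the data structures and names are hypothetical):

```python
from collections import deque

ALLOWANCE = 5 * 1024 * 1024   # bytes an upload may send before yielding its slot

def scheduling_pass(slots, queue, sent):
    """Move any upload that has exhausted its allowance to the queue tail,
    and promote the head of the queue into the freed slot."""
    for i, name in enumerate(slots):
        if name is not None and sent.get(name, 0) >= ALLOWANCE:
            queue.append(name)          # tail of the queue; keeps its file offset
            sent[name] = 0              # fresh allowance for its next turn
            slots[i] = queue.popleft()  # head of the queue starts uploading
```

Running one pass over the state from the walkthrough, with BigFile(1) over its allowance and SmallFile(2) at the head of the queue, swaps the two exactly as shown.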
| |||
Take my vote on that too... Amazing! I woke up with the same idea and told myself 'this I have to post in the forum'. The next thing I see is this topic.

Before getting to the point, I want to say that I am impressed beyond words with the current LimeWire. I remember being on Gnutella from the early days of Gnucleus (maybe before); there was LimeWire too, and I thought to myself 'well, nice... and pretty, but Java, you know... plus, it's slow.' Today I returned to Gnutella after an exile in closed networks (no, not FastTrack) and I find all those features previously unthinkable for Gnutella implemented here. I almost don't believe it. Believe me...

Well, as we know, "large" file downloaders often unintentionally harm "small" file downloaders. Then again, what is "small"? What is "large"? 10MB? Maybe that is a large file for a music (or e-book, plain ASCII anyway) uploader, or a small file for a movie uploader. What if the LimeWire user uploads music + movies? Harder to say what is small and what is large... harder still if you consider what is currently being uploaded. Confused? Fear not...

What we need here is for LimeWire to analyze the sizes of the files being uploaded versus the files requested in the queue, and consequently assign a slot: the larger the file being uploaded (relative to what you have shared), the smaller the file chosen to upload next (there is already an upload range, so no problem). This way when a large file is being uploaded, a small file goes with it, the "medium" files are uploaded together, and we clean the queue up a little. I know this theory sounds a little wacky, but I do not recommend setting a fixed figure as asked above: the result would probably be extinct files... and I always thought that balance is the key to file sharing. What do you think about this? |
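One way to read the "pair a large upload with a small one" idea as code. This is only a sketch: comparing against the median shared-file size is my own guess at what "in relation to what you have shared" would mean, and every name here is invented.

```python
def pick_next(queued, active_sizes, shared_median):
    """Choose the next queued request (name, size) to balance active uploads.

    If the uploads in progress are large relative to the typical shared
    file, prefer the smallest queued request, and vice versa.
    """
    if not queued:
        return None
    avg_active = (sum(active_sizes) / len(active_sizes)
                  if active_sizes else shared_median)
    if avg_active > shared_median:
        return min(queued, key=lambda req: req[1])   # pair big with small
    return max(queued, key=lambda req: req[1])       # pair small with big
```

Because the choice adapts to whatever is uploading right now, no fixed size cutoff is needed, which matches the poster's worry that a fixed figure would starve ("extinct") some files.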
| |||
Let's analyse this.
To clarify this theory further: if two files match this judgement and one finishes before the other, we keep replacing that piece of the puzzle till the big one finishes or the download is cancelled. I don't know how well this works in practice, though I think it should make only a small, but perceptible enough, difference... Let's pray. |
| |||
Forgot to mention something... What I meant initially is that the two files in the example are to be uploaded simultaneously.
Before you ask, I know that if there is no upload slot free this isn't going to work, but it is probably better anyway than the way it is now: worst case, it will be the same. Four slots are needed: the first two slots would be used for the big-small pairing, and the other two for the "mediums go together" pairing, thus drawing a "cross" of file dependencies. |
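The "cross" can be written down as a tier function plus a pairing rule. The 5MB/50MB tier boundaries below are invented purely for illustration; the post doesn't give any.

```python
SMALL_LIMIT = 5 * 1024 * 1024     # hypothetical boundary: below this is "small"
LARGE_LIMIT = 50 * 1024 * 1024    # hypothetical boundary: above this is "large"

def tier(size):
    """Classify a file into one of the three tiers from the post."""
    if size < SMALL_LIMIT:
        return "small"
    if size > LARGE_LIMIT:
        return "large"
    return "medium"

def partner_tier(t):
    """The 'cross': slots 1-2 pair a large upload with a small one,
    while slots 3-4 put the two mediums together."""
    return {"large": "small", "small": "large", "medium": "medium"}[t]
```

Given whatever is occupying one slot of a pair, `partner_tier` says which tier the scheduler should prefer for the other slot of that pair.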
| |