Gnutella Forums  

New Feature Requests: your idea for a cool new feature, or a LimeWire annoyance that has to get changed.


#121 - March 6th, 2005
Queued (Guest)
or Awaiting Sources

I would like one change that I think would be most helpful: let us see whether we are in someone's queue. I mean, I click on Resume, it then says Queued, then Awaiting Sources, so what am I? Am I in someone's queue, or am I still in limbo awaiting sources? I'd really like to know what position I am in someone's queue and how many are before me. I think that little bit of info would be most useful.
#122 - March 7th, 2005
GooRoo (Gnutella Admirer; joined February 3rd, 2005; south Puget Sound, Washington, USA; 52 posts)
Re: or Awaiting Sources

Quote:
Originally posted by Queued
I would like one change that I think would be most helpful: let us see whether we are in someone's queue. I mean, I click on Resume, it then says Queued, then Awaiting Sources, so what am I? Am I in someone's queue, or am I still in limbo awaiting sources? I'd really like to know what position I am in someone's queue and how many are before me. I think that little bit of info would be most useful.
My experience has been that "awaiting sources" means that the 'source' one attempted to download a file from has gone off-line. Otherwise, one may get "waiting for busy host" (not in the queue), or if lucky "... position ##" if in the queue, with ##-1 being the number of users ahead of you in the queue.
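To make those states concrete, here is a minimal sketch in Java, with hypothetical names rather than LimeWire's actual classes, of how a client could distinguish a remote queue position from the other waiting states:

```java
// A minimal sketch (hypothetical names, not LimeWire's actual API) of the
// distinction being asked about: remote queue position vs. "awaiting sources".
public class DownloadStatus {

    enum State { AWAITING_SOURCES, WAITING_FOR_BUSY_HOST, REMOTELY_QUEUED, DOWNLOADING }

    private final State state;
    private final int queuePosition; // 1-based; only meaningful when REMOTELY_QUEUED

    DownloadStatus(State state, int queuePosition) {
        this.state = state;
        this.queuePosition = queuePosition;
    }

    /** Renders the status string the UI would show. */
    String display() {
        switch (state) {
            case REMOTELY_QUEUED:
                // position N means N-1 users are ahead of you, per the description above
                return "Queued, position " + queuePosition
                     + " (" + (queuePosition - 1) + " ahead of you)";
            case WAITING_FOR_BUSY_HOST:
                return "Waiting for busy host (not yet in its queue)";
            case AWAITING_SOURCES:
                return "Awaiting sources (no known host is online)";
            default:
                return "Downloading";
        }
    }
}
```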
#123 - March 8th, 2005
Lord of the Rings (ContraBanned; joined June 30th, 2004; Middle of the ocean apparently (middle earth); 664 posts)

(Just in response to the last few posts.)

It is my understanding that the Resume button was designed to attempt to connect to the original sources, and that the Find Sources button was designed to try to re-connect to files that had been downloading earlier in that same session.

However, it's my belief (right/wrong) that too many people have an easy plug & play attitude, where they just want to open LW, resume all their files, & run away & play another game. I believe that's abusing the abilities of LW. It puts a lot of pressure on LW, the computer's resources, & also the connection, & probably also the Gnutella network, with too much simultaneous/constant traffic. Then on the forum people wonder why their computers freeze or lose connection, or get poor results.

I find Resume is best used when a file had been downloading that session but stopped, or else when the search results are still present & valid. Or the function is used well after LW has settled down & attempted to make its own connections with all incomplete files, and then choosing to resume individual files. Well, that's my experience & IMHO.

A search for the topic is the best way to find sources IMO, so LW sometimes needs some baby-sitting. Just letting LW look after itself seems to work for me most times, perhaps b/c I very quickly seem to have people uploading from me - and I have a variety of shared material.
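As a rough sketch of the two buttons' semantics as described above (assumed behavior and hypothetical types, not LimeWire's actual code): Resume re-attempts hosts already known for the file, while a fresh keyword search is what actually discovers new ones.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical network abstraction, only for illustration.
interface Network {
    void tryConnect(String host, IncompleteFile file);
    void search(String keywords, Consumer<List<String>> onResults);
}

class IncompleteFile {
    final List<String> knownSources = new ArrayList<>(); // hosts from the original search
    final String keywords;                               // the query that found the file

    IncompleteFile(String keywords) { this.keywords = keywords; }

    void resume(Network net) {
        // Resume: only re-attempt hosts we already know about.
        for (String host : new ArrayList<>(knownSources)) {
            net.tryConnect(host, this);
        }
    }

    void findNewSources(Network net) {
        // A fresh keyword search is what actually discovers new sources.
        net.search(keywords, results -> knownSources.addAll(results));
    }
}
```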
#124 - March 10th, 2005
Queued_1 (Guest)
sorry if I mention something I shouldn't

I don't know if I'm allowed to mention other P2P programs here, so delete this post if I shouldn't have.

And I'm sorry if this sounds like an attack; it's not intentional and I don't mean it that way, but -

LOTR - resuming a download from the original sources? Sorry, but any other P2P app will seek out new sources during the course of a download session. For example, WinMX will seek out new sources every 10 minutes, and if it finds any you'll get multiple streams, just like Limewire, so I don't know why Limewire would get stuck on a hump trying to resume from the original source host, do you? Multiple-point download is just that: if you're downloading from one individual and then he logs off, you shouldn't get stuck on a hump for days waiting for that individual to hook up again, should you? The program should just say it needs more sources, not "Awaiting sources - queued - connecting - Awaiting sources" after you hit Resume.

What it should say is "Need more sources - searching - Need more sources", that is, if it can't find that file on the network, not just from the original source.

Or when you go back to your computer the next day to see how your downloads are doing: if one of your downloads has stopped, the status pane should just be blank, and the Resume button should say Search when you click on the partially downloaded file that stopped.

Have you tried WinMX? I'm not saying it's better, but it will get results from multiple hosts for the same file. Even when one host disconnects you from their files, if that file's available elsewhere, it will pick up where the previous host left off and you'll continue to download.

I like Limewire; I think it's one of the best P2P file-sharing programs available. I just thought I would mention something that might make it even better.

I know if a file isn't available, well then, it isn't available; just don't say I'm in queue and then put me back into "Awaiting sources" limbo again, because searching and being in queue are two different things.
#125 - March 11th, 2005
Lord of the Rings (ContraBanned; joined June 30th, 2004; Middle of the ocean apparently (middle earth); 664 posts)

I don't know about WinMX b/c I use Mac & there's no Mac version of it. I use Windows LW purely for trouble-shooting. Sure, some p2p apps (particularly those that use other networks) are query-frenzied! They send out query after query & repeat.

But on the Gnutella network there's an agreement b/w the developers of the respectable p2p apps that they will develop their apps to run in a healthy manner on the network & not clog speeds with traffic, ie: not sending out too many search queries. That's why it's very difficult to get earlier versions of LW; to keep them available would be irresponsible & not good for the network. There are some p2p apps out there that stick their thumbs (or finger) up at everybody & don't design responsible programs.

So which would you prefer: a program that sends out query after query after query, where you wait half an hour or longer for it to finish & find your downloads suffer from heavy-traffic speeds (which is what could potentially happen if the above isn't done), or better download speeds with a compromise in regards to finding sources? I'm sure the other members could word this better, for they know a damned sight more than me. By the way, I did use a Mac program that works in a similar fashion to the other networks, & this one also ran on another network.
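A hedged illustration of that "responsible client" agreement: a simple minimum-interval throttle on outgoing search queries. The interval value is illustrative, not a Gnutella-mandated constant.

```java
// A sketch of capping outgoing search queries with a minimum-interval throttle.
class QueryThrottle {
    private final long minIntervalMillis;
    private long lastQueryAt = 0;

    QueryThrottle(long minIntervalMillis) { this.minIntervalMillis = minIntervalMillis; }

    /** Returns true if a query may be sent now; callers drop or delay otherwise. */
    synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - lastQueryAt >= minIntervalMillis) {
            lastQueryAt = now;
            return true;
        }
        return false;
    }
}
```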
#126 - March 11th, 2005
gbildson (Gnutella Muse; joined February 18th, 2001; 207 posts)

Requerying every 10 minutes would destroy many positive features of the network. That type of activity would cause 90% or more of all query traffic to be requeries and reduce user-initiated queries to 10% or less. Given that this would attempt to jam too many queries through the network, all queries would die off more quickly (from flow control) and the overall experience would get worse.
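A back-of-the-envelope check of that ratio, using assumed numbers rather than measurements:

```java
// Illustrative numbers only: even a modest number of stalled downloads
// requerying every 10 minutes swamps hand-typed searches.
public class RequeryMath {
    public static void main(String[] args) {
        double userQueriesPerHour = 2.0;   // assumed manual search rate
        int stalledDownloads = 15;         // assumed stalled downloads per user
        double requeriesPerHour = stalledDownloads * (60.0 / 10.0); // one per 10 min

        double total = userQueriesPerHour + requeriesPerHour;
        System.out.printf("Requery share of traffic: %.0f%%%n",
                100.0 * requeriesPerHour / total);
        // With these assumptions: 90 requeries vs. 2 user queries,
        // so 90 / 92, i.e. roughly 98% of queries would be requeries.
    }
}
```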

We have download meshes which connect sources for a file together directly. If new sources for the file become known, then these sources will show up in the download mesh and get propagated to new downloaders. This is a low cost way to get new sources other than through requerying.
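A simplified sketch of that download-mesh idea, with hypothetical structure (in real Gnutella clients the mesh is exchanged during transfers via HTTP headers such as X-Alt):

```java
import java.util.HashSet;
import java.util.Set;

// Hosts trade alternate sources for a file while transferring it,
// so new sources spread without any requerying.
class DownloadMesh {
    private final Set<String> altLocations = new HashSet<>(); // "host:port" strings

    /** Merge the alternate locations a source advertised during a transfer. */
    void learn(Set<String> advertised) {
        altLocations.addAll(advertised);
    }

    /** Advertise up to n alternates back to a peer, propagating the mesh. */
    Set<String> advertise(int n) {
        Set<String> out = new HashSet<>();
        for (String loc : altLocations) {
            if (out.size() >= n) break;
            out.add(loc);
        }
        return out;
    }
}
```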

If your download dies, it requires user intervention to restart. This is the best way to prevent excessive requerying. Any automated requerying is too much.
#127 - March 11th, 2005
halo flow (Guest)

There should be an option to allow a third party app to scan downloaded files for viruses.

And please create a community-related interface (e.g. Soulseek).
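One way such an option could work is to hand each completed download to a user-configured command-line scanner. A minimal sketch, with the scanner path and the exit-code convention as assumptions:

```java
import java.io.File;
import java.io.IOException;

// Hand each completed download to an external scanner executable.
class VirusScanHook {
    private final String scannerPath; // assumed: a user-configured AV executable

    VirusScanHook(String scannerPath) { this.scannerPath = scannerPath; }

    /** Returns true if the scanner's exit code reports the file clean. */
    boolean scan(File downloaded) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(scannerPath, downloaded.getAbsolutePath())
                .inheritIO()
                .start();
        return p.waitFor() == 0; // assumed convention: nonzero exit = suspect file
    }
}
```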
#128 - March 11th, 2005
gbildson (Gnutella Muse; joined February 18th, 2001; 207 posts)

Many virus checkers will detect that a file is being added to the disk and scan it without any integration. At least I see that more and more.
#129 - March 13th, 2005
massillonmarine (Guest)
Multiple Media Players

I was hoping we could have a way to pick which player is used for which file. In my case, if a file doesn't play in WMP, I switch over to DivX to see if it plays. A person might also have a third media player. So I'm asking for a way to pick between as many as 3 media players to play a file.
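A small sketch of this idea, with hypothetical player paths: let the user register a few players per file type and pick one at play time.

```java
import java.io.File;
import java.io.IOException;
import java.util.List;
import java.util.Map;

class PlayerPicker {
    // User-configured: file extension -> ordered list of player executables.
    private final Map<String, List<String>> playersByExt;

    PlayerPicker(Map<String, List<String>> playersByExt) {
        this.playersByExt = playersByExt;
    }

    /** Launch the user's nth-choice player for this file (0 = first choice). */
    void play(File media, int choice) throws IOException {
        String name = media.getName();
        String ext = name.substring(name.lastIndexOf('.') + 1).toLowerCase();
        List<String> players = playersByExt.getOrDefault(ext, List.of());
        if (choice < players.size()) {
            new ProcessBuilder(players.get(choice), media.getAbsolutePath()).start();
        }
    }
}
```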
#130 - March 14th, 2005
Unreg4645uytg (Guest)

  • Ultrapeers should drop search results with -4 or worse rating on Bitzi, as determined by file hash.
  • There should be a way to rate files from right inside Limewire, right next to Block Host.
  • More general Bitzi integration?
  • Block host should (perhaps optionally) block every known source for a file, not just a random one of them.
  • Small files with only one source are harder to get than they probably should be. Shortest-job-first queueing might be nice (see the queue sketch after this list). Also, files that can fit in a single network packet should be sent in lieu of busy signals or other just-as-big responses that just mean the file will be re-requested shortly.
  • It should be possible to save off a chunk of your pending-downloads list. It might be nice to have separate lists for results from separate searches, and to also have separate download directories for separate searches. Currently the only way to have that effect is to make multiple installs, one for each query, and switch among them. Nasty workaround, and a huge waste of disk space.
  • Distributed hashing?
  • BitTorrent capability? Large files you shared would automatically create and share a .torrent as well; compatible clients (i.e. with both gnutella and bittorrent capability) would display the two search results as one item and try to get the file from bittorrent.
  • Caching of small files on ultrapeers?
  • Cascading of small file downloads -- many jpegs on the same host tarred, sent as one transfer, and untarred transparently at the other end? Downside -- interrupted transfer might not be easily resumed, since only the exact same host is likely to have all of the same files.
  • Performance is (still) poor on 1.5GHz CPU, 1GB RAM, cable inet system.
  • What is with files of only 10-20K getting 3% done and then aborting spontaneously? Are there clients out there that can be put in a "leech without seeming to leech" mode, in which they share thousands of files but cap uploads at 0.01kb/s and randomly drop connections? If so, can these be punished somehow?
  • 4.8.1 exhibits frequent hangs. The UI won't redraw and there is 100% CPU use. Even if its priority is lowered and there's a lot of physical RAM free, other apps are slow and unresponsive. Left alone, it eventually recovers, only to hang again some time later. 4.6 did not do this, but it did have some sluggishness issues all the same.
  • Duplicate file database (see the hash-database sketch after this list). Every file that is downloaded successfully has its SHA1 added. Files with the same SHA1 are filtered from future search results. Downside: if you download a file and find it's been substituted with garbage (an ad, typically), you won't be able to try to get it again and hope for the real McCoy. The file, when completely downloaded, must be rehashed and that hash put in the duplicate file database, so the original will still show up until you actually get it. This way you can also tell if the file has a misleading name, or if the file should have been something else but a bad node substituted junk when you downloaded it.
  • Integrated previewer. Newly downloaded files are moved to a "New files" list for review. You preview them and can delete, move to library, etc. Turned off by default; advanced users can turn it on. When turned off, files are moved automatically, i.e. the current behavior.
  • Library organizer. Library tab should make it easy to create subfolders and move/rename files, to categorize stuff.
  • Any time a search turns up a file that is in your library, the search query is remembered in a database associated with that file, and Limewire will return that file as a hit for any search for that query done by someone else. When you first download the file, and it is moved to the library, the search query that led to your downloading the file is associated with it in this way, and others get added later on. (These same duplicate hits get weeded out of your search results.) Thus if your search for "trees" produced an opaquely-named file like "DSC00156.jpg" it will be returned by your machine as a hit for "trees". If you later search for "plants" and this same file (same SHA1) is a hit, you won't see the search result, but that file will now match both "trees" and "plants" when your node is searched.
  • More efficient sharing of a large library. Limewire gets slow and cranky with thousands of files shared, so how about having it share only one shared folder at a time, but change which one every so often; or making this an option. (A shared folder with only subdirectories won't count; one with files and subdirectories would have the files shared; the subdirectories would get rotated to eventually.)
  • Ultrapeers should not drop search results unless they are actually overloaded, no matter how many hits for one query there are. (How widely supported is OOB result returning now, anyway? There's really no need for results to be limited at all, or for them to go through the ultrapeers; they should be sent straight to the querying node, save if they must go through one intermediate host to get past firewalls, just like a download.)
  • Ultrapeers with dialup leaves should cache the smaller files on those leaves, and return (OOB) the cached copy of such a file as a hit where possible. This will take some of the burden off modem users, and make some of the files they host more consistently available. When a query matches a small (<500K) file on a modem leaf as determined by an ultrapeer, and that ultrapeer has cache space available, it should download the file itself, and pass on the file as it receives it to the requesting node. Subsequent requests for the file that reach that ultrapeer can return the cached file without the dialup host being bothered at all. The file expires eventually when not requested for more than some period of time.
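A hedged sketch of the shortest-job-first queueing mentioned in the list above: order the upload queue by remaining bytes so tiny single-source files clear quickly. Hypothetical types, not LimeWire's upload code.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

class UploadRequest {
    final String host;
    final long bytesRemaining;
    UploadRequest(String host, long bytesRemaining) {
        this.host = host;
        this.bytesRemaining = bytesRemaining;
    }
}

class SjfUploadQueue {
    // Smallest remaining transfer first.
    private final PriorityQueue<UploadRequest> queue =
            new PriorityQueue<>(Comparator.comparingLong((UploadRequest r) -> r.bytesRemaining));

    void enqueue(UploadRequest r) { queue.add(r); }

    /** The next upload slot goes to the smallest remaining transfer. */
    UploadRequest next() { return queue.poll(); }
}
```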
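And a sketch of the duplicate-file-database item: remember the SHA-1 of every completed download and filter matching hashes from future search results. The structure is hypothetical, though Gnutella clients do identify files by SHA-1 URNs.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashSet;
import java.util.Set;

class DuplicateFileDatabase {
    private final Set<String> seenHashes = new HashSet<>();

    /** Hash a completed download and record it. */
    void recordCompleted(String path) throws IOException, NoSuchAlgorithmException {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        try (InputStream in = new FileInputStream(path)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) sha1.update(buf, 0, n);
        }
        seenHashes.add(toHex(sha1.digest()));
    }

    /** True if a search result's hash matches something already downloaded. */
    boolean isDuplicate(String resultSha1Hex) {
        return seenHashes.contains(resultSha1Hex.toLowerCase());
    }

    private static String toHex(byte[] digest) {
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```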