Slow, hangs, internal errors, data loss... version 4 sucks! What is going on with Limewire? More and more features -- worse and worse performance. I get "out of memory" errors quite regularly on my Windows XP machine -- with a GIGABYTE OF RAM! No way should it need anywhere close to that much, let alone *more*. Simply leaving it on overnight is enough to guarantee at least three of the buggers waiting for me in the morning, like Christmas presents wrapped by the devil, and probably a totally dead, hung UI on top of that. It was bad in 3.8.5, improved gradually until 3.8.10 was almost tolerable, and then 4.0 arrived. 4.0 apparently demands as much RAM as high-end CAD and video editing tools, since it regularly reports out of memory on a 1GB machine, hangs a lot, and throws every other error imaginable short of outright closing itself down. Also, what's with it refusing to resume old downloads more often than not? I thought keeping your downloads across sessions was added as a feature way back in 2.something. I'm still waiting for it.

It won't run for more than maybe an hour before becoming slow and unstable. Long freezes set in during which it drops half its connections, the UI won't respond, and Task Manager shows it gobbling up CPU, hardly touching the NIC, spawning dozens of new threads, and consuming 100MB+ of RAM (though nowhere near 1GB). These hangs start if it's left running for any length of time, and become long and frequent enough to cause transfer failures and poor search results after maybe two hours. Left running overnight, forget about it -- it's either hung, or at least in a really wacky, unstable, not properly functional state. (I personally love it when the tables start flickering wildly like a Commodore 64 game during a brownout circa 1985, and the "auto-sort table" setting is ignored even though it's checked.)

Honestly, there's no reason it should need that much RAM, still complain there's not enough memory on a 1GB machine, hang and misbehave constantly, and become unstable just from being left running with a fixed workload (the same number of shared files, upload slots, and queue slots). It's obviously got a severe object leak inside, one that got abruptly much worse with 4.0's creeping features adding yet more object churn.
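For what it's worth, Limewire runs on Java, and a Java program only gets as much heap as the JVM it runs in was started with (the -Xmx setting), not whatever the machine has -- the default cap on this era's JVMs is, as far as I know, on the order of 64MB. So "out of memory" on a 1GB box really means Limewire blew through its own heap ceiling, which is exactly what a leak churning through objects would do. A trivial bit of plain Java (nothing to do with Limewire's code, just an illustration) shows the ceiling the JVM actually enforces:

    // Prints the JVM's real memory ceiling -- the number that matters for
    // "out of memory" errors, regardless of how much RAM the machine has.
    public class HeapCeiling {
        public static void main(String[] args) {
            long max = Runtime.getRuntime().maxMemory();     // heap cap set by -Xmx (or the default)
            long total = Runtime.getRuntime().totalMemory(); // heap currently reserved
            long free = Runtime.getRuntime().freeMemory();   // unused portion of the reserved heap
            System.out.println("Max heap:  " + (max / (1024 * 1024)) + " MB");
            System.out.println("Committed: " + (total / (1024 * 1024)) + " MB");
            System.out.println("Free:      " + (free / (1024 * 1024)) + " MB");
        }
    }

Whatever that first number is, it's the one Limewire is hitting when it complains about memory, no matter how much RAM Task Manager says is free.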
Relevant system specs: Windows XP Home SP1 with all critical updates applied; up-to-date network and video drivers; cable Internet connection; correct firewall configuration (I can download and upload successfully -- for about an hour, that is, and then Limewire becomes virtually unusable); 1.5GHz Athlon XP 1800+; 1GB RAM; 3MB/s bandwidth up and down (i.e., I may as well have a dual T1 running into this thing, except that would cost more); and several GB of free disk space. Throw in swap and there's effectively 1.6GB of memory available, and Task Manager shows the system hardly ever getting above the high 700s (MB), which rules out swapping as the cause of the slowdowns, hangs, and errors. I'm sharing about 10,000 files -- too bad nobody can get the larger ones, since Limewire has to be shut down long before the transfer completes even at cable speeds -- so this isn't a freeloader complaining here. Current version: Limewire (free) 4.0.4, and I'm wishing I had stuck with 3.8.10, which once actually ran four hours straight without a restart and once or twice actually remembered my downloads across sessions.
This user isn't buying Pro anytime soon, and isn't buying it ever unless there's an update Real Soon Now that addresses the worst of these performance problems. No filesharing application should need more than 1GB of RAM to run without memory allocation errors. I'm astounded that the listed minimum system requirements include "64MB RAM" when my own machine, with sixteen times that much, clearly does not meet Limewire's real minimum requirements -- and Limewire makes sure to remind me of that fact hourly!
If you want people, the vast majority of whom are running Windows XP, to leave it on overnight, it has to actually work when left running that long, and not regularly lose people's data unless it's restarted very frequently. As near as I can tell, the rules for keeping the dreaded internal errors and "could not resume old downloads" at bay, not to mention the slowdowns and UI hangs, are as follows: don't queue many downloads at once (100 is enough to cause problems right away); don't let it run for very long; perform only one search per session (forget about five search tabs at once, though that was quite fine under 3.8.10); close it the minute it gets slow, even if that means interrupting an incomplete download; and don't share thousands of files. In short, it's better to be a freeloader who hops on to do a quick search and grab a few files than to try to dedicatedly share a load of stuff. That isn't the message I think you want to be sending. And bad experiences with hanging, crashing, data loss, and the like in the free version do not make for a very good advertisement for the Pro version.
On a side note: why does it seem to be harder to get small files than large ones? It's easier to get a 100K chunk of a 100M file than to get a 100K file, it seems -- if I queue a bunch of multi-meg MP3s, I'll see a ton of them downloading, often from 5 or 6 hosts apiece, and few to none stuck needing more sources or left as partial downloads. If I queue a bunch of itty-bitty JPEGs, I'll get two, the rest will fail, and some 20K file will die at 98% complete. What the hell? How can it fail there? The packet used to send the stupid busy signal could have carried the other 2% of the damn file, it's that small. I get the feeling there's an economy of scale that cuts both ways, with overhead dominating actual file data for files under 1M. People do share things other than MP3s, you know -- photo albums, for instance. I suggest some small-file optimizations are needed in the system as a whole. For instance, when many small files are requested from a host that has them all, they could be sent as one big lump -- tar and untar them on the fly, perhaps (see the sketch below)? Small files often occur in batches of related files on a single host. Or some sort of chain-downloading, or simply not dividing files smaller than some size into chunks at all -- what's the max size of a network packet, 64K? So break small files into 32K chunks, and send files up to 32K in a single packet. An express line would be useful too, limited to small files and maybe to high-speed (DSL or better) uploaders. Seeing (or being!) a cable user stuck in line behind five slow modem users all requesting multi-meg files blows chunks -- and wastes my upload bandwidth.
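To make the "one big lump" idea concrete, here's a rough sketch of what the uploader's side could look like -- purely hypothetical, not anything in Limewire's code, and using Java's built-in zip classes rather than tar, since that's what the standard library provides:

    import java.io.*;
    import java.util.zip.*;

    // Hypothetical sketch: stream a batch of small requested files to one
    // downloader as a single zip "lump", so each tiny file doesn't pay its
    // own connection and queue overhead.
    public class SmallFileBundler {
        // 'out' would be the upload connection's output stream in a real client.
        public static void sendBundle(File[] smallFiles, OutputStream out) throws IOException {
            ZipOutputStream zip = new ZipOutputStream(out);
            byte[] buf = new byte[32 * 1024];               // 32K buffer, per the chunk-size idea above
            for (int i = 0; i < smallFiles.length; i++) {
                File f = smallFiles[i];
                zip.putNextEntry(new ZipEntry(f.getName())); // one entry per requested file
                InputStream in = new FileInputStream(f);
                int n;
                while ((n = in.read(buf)) != -1) {
                    zip.write(buf, 0, n);
                }
                in.close();
                zip.closeEntry();
            }
            zip.finish();                                    // flush the bundle to the downloader
        }
    }

The downloader would unpack the stream as it arrives and drop each file into the save directory, so to the user it still looks like fifty separate finished downloads, minus forty-nine rounds of per-file overhead.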
Of course, one pain in the network is all the Shareaza (and other, but mostly Shareaza) hosts configured for 0.000001KB/s upload bandwidth by clueless people in the mistaken belief that this helps their download speeds -- on most setups you get separate bandwidth pools for upstream and downstream, and on 56K dialup your connections to ultrapeers are probably already cutting your download speed to 28K anyway. Besides, dialup speeds suck too much to suck noticeably more just because you actually let people get files from you. Sharing files at 0.000001KB/s is just a way of being a freeloader without showing up as one on clients that detect freeloaders purely by counting shared files.
And can we please have a "No to All" button on the download overwrite prompt? I'm sick of clicking the plain "No" button 50-odd times when trying to suck up a whole category of search results, such as a big batch of JPEGs, in order to speed their propagation through the network -- having more high-speed sources for files helps everyone, so making it a pain to be such a source does not.

Also, I thought I saw a new option in 4.0.2 (it seems to be gone in 4.0.4) to ban anyone who downloads more than some number of files from you (I think the default was 5). Needless to say, I turned this off -- I want high-speed hosts to be able to snarf up large numbers of files, because then those files become much more available over the network. Preferencing high-speed uploaders makes sense, though, especially for a file your host knows has few high-speed sources via the mesh. I hear the term "greedy client" a fair bit lately -- IMO the only "greedy client" is a client that isn't sharing files. A high-speed client sharing large numbers of files should be preferenced and never banned, since uploading a file to such a client greatly aids its propagation through the network. Besides, I don't like the implications such a feature has for rare content: if six rare files are only to be found on one host, and that host only ever allows a given person to get five files, then nobody can EVER HAVE all six unless someone else gets one and then shares it -- or they cheat and use a free AOL disk to get a temporary account with a different IP address. Speaking of which, banning specific hosts is nearly impossible anyway, since nearly everyone has a dynamic IP address.

(The downside is that banning specific *files* -- mutilated files plastered with ads, for instance -- is impossible too. Even if the host polluting us with such files is a dot-com with a fixed IP, plenty of people will have left their download directory shared by default and will also proffer these crooked files, while conscientious sharers like me, who pick up such files in the process of trying to build a comprehensive library of one type of content, review all new files before sharing them and delete such tripe.) I think maybe there should be a Gnutella layer for voting on files: files get a reputation, and clients let you preview new files before either sharing them or deleting them. If a file sucks because it's been mutilated with ads, is damaged, etc., you vote it down by deleting it from this preview window, and that information propagates through the mesh and is stored by clients as metadata. Search results would indicate the file's quality with a star rating, along with somehow indicating its availability and speed, and the search-result filter options would include filtering by speed, availability, and reputation. If most people who downloaded a file deleted it, it's probably damaged, ad-spammed, or named in a way that misrepresents the content.
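To sketch the voting idea a little further (again purely hypothetical -- nothing like this exists in Limewire today): files on the network are already identified by SHA-1 urns for the download mesh, so votes could simply be keep/delete counts keyed by that hash, folded into a star rating that clients exchange as metadata and show next to search results. Something along these lines:

    // Hypothetical per-file reputation record, keyed by the file's SHA-1 urn.
    // Clients would merge counts learned from peers and show the stars in search results.
    public class FileReputation {
        private final String sha1Urn;  // e.g. "urn:sha1:..." -- already used to identify files
        private int keepVotes;         // user previewed the file and kept/shared it
        private int deleteVotes;       // user previewed it and deleted it (ad-spammed, damaged, mislabeled)

        public FileReputation(String sha1Urn) {
            this.sha1Urn = sha1Urn;
        }

        public void addKeep()   { keepVotes++; }
        public void addDelete() { deleteVotes++; }

        // Merge counts learned from another host (propagated through the mesh).
        public void merge(FileReputation other) {
            keepVotes += other.keepVotes;
            deleteVotes += other.deleteVotes;
        }

        // 0-5 star rating: fraction of voters who kept the file, rounded to the nearest star.
        public int stars() {
            int total = keepVotes + deleteVotes;
            if (total == 0) return 3;  // no votes yet: show a neutral middle rating
            return (int) Math.round(5.0 * keepVotes / total);
        }

        public String getSha1Urn() { return sha1Urn; }
    }

The search filters could then just drop anything below, say, two stars, and the ad-mutilated junk would stop spreading even off hosts whose owners never look at what's sitting in their shared folder.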
By the way, what doofus registered the name "Unregistered"?