Slow, hangs, internal errors, data loss... version 4 sucks! What is going on with LimeWire? More and more features -- worse and worse performance. I get "out of memory" errors quite regularly on my Windows XP machine -- with a GIGABYTE OF RAM! No way should it need close to that much, let alone *more*. Simply leaving it on overnight suffices to ensure at least 3 of the buggers waiting for me in the morning, like Christmas presents wrapped by the devil, and probably a totally dead, hung UI besides. It was bad in 3.8.5, got a bit better until 3.8.10 was almost tolerable, and then came 4.0. 4.0 apparently demands as much RAM as high-end CAD and video editing tools, since it reports out of memory on a 1GB machine regularly, hangs a lot, and throws every other error imaginable short of outright closing itself down. Also, what's with it refusing to resume old downloads more often than not? I thought keeping your downloads across sessions was added as a feature way back in 2.something. I'm still waiting for it.

It won't run for more than maybe an hour without becoming slow and unstable. Long freezes occur during which it drops half its connections, the UI won't respond, and Task Manager shows it gobbling up CPU, hardly touching the NIC, and spawning dozens of new threads, not to mention consuming 100MB+ of RAM (but nowhere near 1GB). These hangs begin if it's left running for long at all, and become long and frequent -- causing transfer failures and poor search results -- if it runs for more than maybe two hours. If it's left overnight, forget about it: it's either hung, or at least in a really wacky, unstable, not properly functional state. (I personally love it when the tables start flickering wildly like a Commodore 64 game during a brownout in the days of yore, around 1985 or so, and the "auto-sort table" setting is ignored even when checked.) Honestly, there's no reason it should need that much RAM, still complain there's not enough memory on a 1GB machine, hang and misbehave often, and become unstable just from being left running with a fixed workload (the same number of shared files, upload slots, and queue slots). It's obviously got a severe object leak inside, one that got abruptly much, much worse with 4.0's creeping features adding yet more object churn.

Relevant system specs: Windows XP Home SP1 with all critical updates current; up-to-date network and video drivers; cable Internet connection; correct firewall configuration (I can download and upload successfully -- for about one hour, and then LimeWire becomes virtually unusable, that is); 1.5GHz Athlon XP 1800+, 1GB RAM, and 3Mb/s bandwidth up and down (i.e., may as well have a dual T1 running into this thing, except they'd cost more); and several GB of free disk space. Throw in swap, and there's effectively 1.6GB of memory available, and the system hardly ever gets above the high 700s (MB) as indicated by Task Manager, which eliminates swapping as the cause of the slowdowns, hangs, and errors. Sharing about 10,000 files -- too bad nobody can get the larger ones, since LimeWire will have to be shut down long before the transfer completes even at cable speeds, so this isn't a freeloader complaining here.

Current version: LimeWire (free) 4.0.4, and wishing I had stuck with 3.8.10, which actually worked for four hours straight once without being restarted, and once or twice actually remembered my downloads across sessions.
This user isn't buying Pro anytime soon, and isn't buying it ever unless there's an update Real Soon Now that addresses the worst of these performance problems. No filesharing application should need >1GB RAM to function without memory allocation errors. I'm astounded that the minimum system requirements listed include "64MB RAM" when my own machine, with 16 times that much, clearly does not meet LimeWire's minimum system requirements -- and LimeWire makes sure to remind me hourly of this fact! If you want people, the vast majority of whom are running Windows XP, to leave it on overnight, it has to actually work when left running that long, and not lose people's data regularly unless restarted very frequently.

As near as I can tell, the rules for keeping the dreaded internal errors and "could not resume old downloads" at bay, not to mention slowdowns and hangs of the UI, are as follows: Don't queue many downloads at once -- 100 is enough to give it problems right away. Don't let it run for very long. Only perform one search per session -- forget about five search tabs at once, though that was quite fine under 3.8.10. Close it the minute it gets slow, even if that means interrupting an incomplete download. Don't share thousands of files. In short, it's better to be a freeloader who hops on to do a quick search and grab a few files than to try to dedicatedly share a load of stuff. This isn't the message I think you want to be sending. Also, bad experiences with hanging, crashing, data loss, and the like in the free version do not make for a very good advertisement for the Pro version.

On a side note: why does it seem to be more difficult to get small files than large ones? It's easier to get a 100K chunk of a 100M file than to get a 100K file, it seems -- if I queue a bunch of multi-meg MP3s, I'll see a ton of them downloading, often from 5 or 6 hosts apiece, and few to no "need more sources" or partial downloads. If I queue a bunch of itty-bitty JPEGs, I'll get two, and the rest will fail -- except some 20K file will die at 98% complete. What the hell? How can it fail there? The packet used to send the stupid busy signal could have been used to send the other 2% of the damn file, it's that small. I get the feeling there's an economy of scale that cuts both ways, with overhead dominating actual file data for files under 1M. People do share things other than MP3s, you know. Photo albums, for instance. I suggest some small-file optimizations are needed in the system as a whole. For instance, when many small files are requested from a host that has them all, they could be sent as one big lump -- tar and untar them on the fly, perhaps? Small files do often occur in batches of related files on a single host. Or some sort of chain-downloading, or just not dividing files smaller than some size into chunks at all -- what's the max size of a network packet? 64K? So break small files into chunks 32K in size, sending files up to 32K in a single packet. An express line would be useful too, limited to small files and maybe to high-speed (DSL or better) uploaders. Seeing (or being!) a cable user stuck in line behind five slow modem users all requesting multi-meg files blows chunks -- and wastes my upload bandwidth.
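(For illustration, a minimal sketch of the "send them as one lump" idea, in Java since that's what LimeWire is written in. The class and method names here are hypothetical, not LimeWire's actual internals, and zip stands in for tar-on-the-fly:)

[code]
import java.io.*;
import java.util.zip.*;

// Hypothetical sketch: serve a batch of requested small files through
// one upload slot as a single compressed stream, so the requester pays
// one queue wait for the whole batch instead of one per file.
public class SmallFileBundler {
    public static void sendBatch(File[] requested, OutputStream socketOut)
            throws IOException {
        ZipOutputStream lump = new ZipOutputStream(socketOut);
        byte[] buf = new byte[32 * 1024]; // 32K chunks, per the proposal above
        for (int i = 0; i < requested.length; i++) {
            lump.putNextEntry(new ZipEntry(requested[i].getName()));
            InputStream in = new FileInputStream(requested[i]);
            try {
                int n;
                while ((n = in.read(buf)) != -1) {
                    lump.write(buf, 0, n); // receiver can unzip on the fly
                }
            } finally {
                in.close();
            }
            lump.closeEntry();
        }
        lump.finish(); // flush the archive without closing the socket
    }
}
[/code]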
Of course, one pain in the network is all the Shareaza (and others, but mostly Shareaza) hosts configured to 0.00001KB/s upload bandwidth by clueless people in the mistaken belief that this helps their download speeds -- on most setups, you get separate bandwidth pools for upstream and downstream flow, and on 56K dialup, your connections to ultrapeers are probably already reducing your download speed to 28K. Besides, dialup speeds suck too much to suck noticeably more just because you actually let people get files from you. Sharing files at 0.00001KB/s is just a way of being a freeloader without showing up as a freeloader on clients that detect freeloaders purely by counting files.

And can we please have a "No to all" button on the download overwrite prompt? I'm sick of clicking the plain "No" button 50-odd times when trying to suck up a whole category of search results, such as a big bunch of JPEGs, in order to speed their propagation through the network -- having more high-speed sources for files helps everyone, so making it a pain to be such a source does not.

Also, I thought I saw a new option in 4.0.2 (though it seems to be gone in 4.0.4) to ban anyone who downloads more than some number (I think the default was 5) of files from you! Needless to say I turned this off -- I want high-speed hosts to be able to snarf up large numbers of files, since then those files become much more available over the network. Preferencing high-speed uploaders makes sense though, especially for a file your host knows of few high-speed sources for via the mesh. I hear the term "greedy client" a fair bit lately -- IMO the only "greedy client" is a client that isn't sharing files. A high-speed client with large numbers of files should be preferenced and never banned, as uploading a file to such a client aids its propagation through the network greatly. Besides, I don't like the implications such a feature has for rare content: if there are six rare files only to be found on one host, and that host only ever allows a given person to get five files, then they can NEVER HAVE all six rare files, unless someone else gets one and then shares it. Or they cheat and use a free AOL disk to get a temporary account with a different IP address. Speaking of which, banning specific hosts is nearly impossible anyway, since nearly everyone has a dynamic IP address.

The downside is that banning specific *files* -- mutilated files plastered with ads, for instance -- is impossible too. Even if the host polluting us with such files is a dot-com with a fixed IP, and even though conscientious sharers like me, who get such files in the process of trying to build a comprehensive library of one type of content, review all new files before sharing them and delete such tripe, plenty of people will have left their download directory shared by default and will also proffer these crooked files. I think maybe there should be a Gnutella layer for voting on files: files get a reputation, and clients let you preview new files before sharing them, and delete instead. If the file sucks due to being mutilated with ads, damaged, etc., you can vote it down by deleting it from this preview window, and the information propagates through the mesh and is stored by clients as metadata. Search results would indicate the file's quality with a star rating, as well as somehow indicating the file's availability and its speed. The "show search results" filter options would include filtering by speed, availability, and reputation.
If most people who downloaded the file deleted it, it's probably damaged, ad-spammed, or the filename misrepresents the content in some way. By the way, what doofus registered the name "Unregistered"? |
That's 100+ queued; I only set it to allow 20 download connections at once. |
A good read -- lots to think about. Thanks. Re the crashing -- FWIW, I reorganized my shares (4.5GB; ~1400 files) and then started having start-up problems. Bit the bullet, trashed fileurns.cache and the .bak, and waited the 40 minutes for everything to rehash (thought maybe it would be faster than the 1GB/10 min figure it used to be). The CPU stayed ~35-40%, which is MUCH better than before, and everything's been much quicker since. No data loss, GUI slowdowns, etc. Can't say I've seen any error messages though, and it ran for 50 hours this weekend (OSX). So -- looks like a new fileurns.cache is good here. Re the 100+ downloads, I can't offer anything similar. Did try 50-plus (small) ones at a time last week -- I guess habits develop, because it seemed abusive -- but couldn't see any problems other than firewalled (?) hosts that only let one or two at a time through. Btw -- since the search results now show already-downloaded files, I'd forgotten about the old request for the "remember" checkbox. Sure like your idea of making a block of lots of small files equivalent to a single large file as far as assigning upload slots goes, and of preferencing those who request unique files. What about allowing "folder" downloads? The Library can show paths... and it looks like the Library is on the agenda for the next batch of attention. Yeah! Directory uploads! Should cut the messaging chatter considerably too! I don't buy your "no Pro" argument though. Sounds more like it should be "get Pro" so there's more push to get the program developed and those bugs fixed -- many of which remind me of when Java and OSX were (are) learning to play nicely together. Aren't all of you on XP due for a Java update soon -- 1.5, isn't it? Cheers |
As long as there are serious bugs that I keep waiting -- one more version, one more version -- to see fixed, I want to stay on the "upgrading is free" side of the divide, thank you very much. :) I might consider paying for a genuinely stable product. Current versions don't approach that by a country mile, especially the 4.0 series... As for a new version of Java, I hadn't heard. 1.5 sounds like an unstable version though -- wouldn't it likely make things worse? Might be better to wait for 1.6. Also, given that LimeWire's performance took a steep nosedive going from late 3.8 to 4.0.2, it seems that major version jumps and the accompanying new feeping creatures tend to be accompanied by sharp drops in performance, only made up gradually over succeeding minor version increments (e.g., the improvement from 3.8.5 to 3.8.10) -- so going from Java 1.4.2 to 1.5.0 will probably mean going from a fairly well-tuned VM to one that runs badly. Running an app with performance problems inside a VM with performance problems would compound problems massively. And none of this addresses the disturbing fact that it regularly complains of being out of memory when run on a 1GB machine... |
Well, it's using as much RAM as it ever did, around 120M, but it's not complaining or slowing down quite as much. What happens to the index file that causes this (or at least worsens it)? Why isn't it listed as a Known Problem with a Known Workaround in the appropriate section of the user's guide on the Web site? Is there a way to avoid the index file being damaged in this way? Does upgrading cause it, maybe due to a drift in the format of this file that gets markedly worse with a major version number increase (e.g., 3.x to 4.0)? If so, why isn't it converted when an upgrade is run for the first time, or at least rebuilt? (ICQ does this with its database files sometimes when you upgrade. Otherwise it's no example to follow: slow, bloated, somewhat buggy, though not as bad as MSN for taking matters into its own hands and deciding you wanted it to disconnect when you told it no such thing. And that silly mouse-buttons dialog! Why not just make either mouse button bring up the menu, or make clicking select a contact, double-clicking launch a message window, and right-clicking bring up the menu? The latter would fit standard Windows UI behavior -- and although currently double-clicking is supposed to open a message window, half the time it opens the menu instead, and sometimes it seems to simply do nothing, or the app hangs...) |
Bah. I just found out that LimeWire got 32% of a file and then choked. The file is 12K (yes, you read that right, 12K) long. Why the foo didn't the remote host send the whole damn thing in one chunk? The maximum size of an IP packet is 64K, easily big enough to hold the whole thing plus overhead, for god's sake. I can think of no rational reason for this, save that someone wasn't thinking of anything but MP3s when they optimized this sucker. Except I am unsure that qualifies as a rational reason.

On a side note, I see a lot of downloads broken off at suspiciously round numbers: 32 and 33 and 67 and 25 and 50 and 75%, for instance. I can't see why random network outages, or people just happening to go offline, would disproportionately often strike when a transfer's fraction completed could be expressed in small integers. But human beings have such preferences. Is there a client out there that deliberately serves a fixed, small-integer fraction of a file (e.g., a third or a quarter), then boots a host to the end of the queue, and then forgets them, so they end up with "Blah blah ... 32% complete ... Awaiting sources"? If so, can it be detected and banned with some undocumented config option? (Filtering by vendor/version regex matching seems to be a curiously lacking but desirable feature, given that all client programs are emphatically NOT created equal!) Especially as it obviously cluelessly applies this policy to files large and small -- even ones small enough that the whole remainder of the file could have been sent in place of the damned busy signal used to terminate the download! If the intent is to let people requesting small files get them quickly without being stuck interminably behind several multi-meg uploaders, isn't an express lane a superior solution? Especially as interrupting a transfer, in my experience, carries a measurable risk of damaging the file that isn't incurred if it's continued to completion.

Also, IMO when an upload fails, the uploader should for some time hold the slot open for resuming the same file. There should be a good-faith effort to follow through and complete the transfer "through rain or snow or sleet or gloom of night" -- or routing glitches, or one host's dynamic IP changing, or one's dialup connection dying and needing redialing. As it stands, it's common for a download to fail partway through and uncommon to ever resume it successfully, at least not without extensive searching and requerying. And LimeWire makes you restart the client to requery a file if it doesn't find it the first time... |
ARGH. The internal errors, slowdowns, hangs, etc. are back as bad as ever after letting it run a couple hours and selecting a hundred or so files to queue for downloading eventually. What is wrong with this software? How can it possibly be out of memory when Windows Task Manager shows [b]less than half[/b] the total available virtual memory in use? If it's not, why say that it is and then get unstable? Who designed this thing anyway? :P |
Actually, given the stability and slowdown problems -- which lead to dropped connections when network events aren't serviced fast enough, because it's gone and created dozens of threads that take CPU priority over the event-handling thread for some insane reason -- maybe a useful idea would be reserve connections. These would be reserved spaces on ultrapeers, but without much actually happening in the way of traffic; they wouldn't be used to route searches, for instance. Like the upload queues, they'd be the next in line, but leaves would have them too -- one at least, and maybe a full four, ready to go live the moment an existing connection dropped. Thing is, I'm sick of seeing at least one red bar most of the time on my connection meter. It's not just that it drops connections frequently; it's also slow to reestablish them. (It was slow in 3.8.10; it was really slow in 4.0.2 with locale preferencing, until I turned that off; now it's merely slow again.) If it held one in reserve to switch in as active at a moment's notice, it could recover nearly instantly from any single dropped connection. Unlike an active connection to an ultrapeer, a reserve connection would not need to consume much bandwidth or resources at either end prior to becoming active. Basically, the idea is to do the work of finding a replacement connection for one that fails *before* it fails, rather than after, but sit on this information until at least one new connection is needed, and use it only then. A form of precomputing in order to optimize recovery from network hiccups. I suppose that still wouldn't help with search results being poor when a connection or two bomb right after you start a search, which is all too common -- leading me to suspect the existence of "fair weather friend" ultrapeers that are quite happy to route searches your way but drop you like a hot potato the minute you send off so much as one query yourself... Anything that treats uploading as somehow "better" than downloading hurts the network, since equal amounts of uploads and downloads are what will actually happen barring some kind of multicasting extension, which means making it a pain to download reduces uploading in equal measure anyway. |
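(A sketch of what that reserve pool might look like in Java, with hypothetical types standing in for LimeWire's real connection machinery -- the point being that the handshake happens ahead of need, and promotion is instant:)

[code]
import java.util.LinkedList;

// Hypothetical interfaces standing in for the real connection code.
interface Connection { /* a handshaken but quiet link to an ultrapeer */ }
interface UltrapeerConnector { Connection handshakeOnly(); }

// Sketch of the "reserve connections" idea: keep a few pre-negotiated
// ultrapeer links idle (no query routing), and promote one the instant
// a live connection drops.
public class ReserveConnectionPool {
    private final LinkedList reserves = new LinkedList();
    private final int target;

    public ReserveConnectionPool(int target) { this.target = target; }

    // Run in the background: top the pool up *before* anything fails.
    public synchronized void replenish(UltrapeerConnector connector) {
        while (reserves.size() < target) {
            Connection c = connector.handshakeOnly();
            if (c == null) break; // no candidate host available right now
            reserves.add(c);
        }
    }

    // Called when a live connection drops: recovery is one dequeue,
    // not a fresh host-cache lookup plus a full handshake.
    public synchronized Connection promote() {
        return reserves.isEmpty() ? null : (Connection) reserves.removeFirst();
    }
}
[/code]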
And the f*cking thing just hung AGAIN. I switched to it after that last post and got nothing but a blank grey window. Minimizing and unminimizing didn't change anything; maximizing and unmaximizing either. Trying to reduce it to the tray via the close box had no effect whatsoever; LimeWire simply ignored me, which is ABSOLUTELY NOT CORRECT BEHAVIOR FOR AN INTERACTIVE APPLICATION -- UNDER --ANY-- CIRCUMSTANCES WHATSOEVER!!!!!

I am fed up. I want these bugs fixed. I expect a genuinely STABLE "stable branch" version on the website by next Wednesday evening or ****ing else! I don't like having to regularly kill something via Task Manager. I don't like it losing my download list when I do. I don't like the way it brainlessly loses the list even though, while riffling through its directories earlier, I saw that it DOES save a ****** backup -- which it either doesn't use, or stupidly corrupts at the same time as the main copy instead of always leaving the backup file as the last known-good version. I don't like applications that gobble up RAM and then complain that a full gigabyte is still not enough for them, and I don't like anything that regularly hangs or loses data for whatever reason. There's no excuse for showstopper-magnitude problems like this outside the odd-numbered branch, for Christ's sake; no excuse at all. Nor for posting "minimum system requirements" of a few measly megs of memory (64, was it?) and a couple hundred MHz of CPU when I regularly see it max out the CPU on a 1.5GHz Athlon and complain of low memory on a machine with sixteen! times! the minimum recommended memory. And I don't need anomalously heavy usage to experience these issues, either; simply starting it with a blank download list and leaving it on overnight suffices to hang the 4.0 series on my machine -- which ought logically to be able to run ten instances concurrently with nothing worse than a little bandwidth contention, given that I never see it exceed around 120M actual memory usage or a third of the total bandwidth I have.

I don't know if it's because it's written in Java, or because the developers are clueless, or because it just doesn't play nice with Windows XP. The fact of the matter is it just plain does not work acceptably on a run-of-the-mill middle-to-high-end system with the commonest end-user OS, up to date on service packs and drivers, with a broadband connection, sharing a reasonably large library of files. That's not tolerable. The speed and stability of 3.3.5 must be brought back, or else the features have to start being scaled back as too expensive in terms of performance. That's the bottom line. You have till next Wednesday, close of business. If I don't see a version on the site by then that actually works on my (by no means low-end) machine, then it's the end of any chance I'll ever spend a dime on Pro, and the start of a very high likelihood I'll report these issues rather more widely so as to facilitate informed consumer choice. |
If there's any remaining doubt that these problems occur with a fairly beefy machine and usage that is neither excessive nor abnormal, the following just happened: Started it up again -- predictably, it refused to resume old downloads. Sharing 10,681 files. Hash index recently rebuilt. I did a single search; after five minutes I switched back to LimeWire and found 192 results (some duplicates, and mostly files I already have). Selected them all and hit download, then sorted through the overwrite dialogs hitting "No" for each. (This, rather than just sorting by "quality" and selecting only the ones with stars, because I've found the paper-with-check icon often fails to appear for files I have and sometimes appears for files I don't. Until it works, I'm stuck with this method for making groups of files complete.) About 3/4 of the way through the list, LimeWire hung, and Task Manager showed greatly decreased network activity and over 100 threads. About five minutes later, the thread count dropped to around 50 and the UI became responsive again.

At this point the commit charge was 798M/1622M, actually on the high side of usual; note that 1GB of the 1622 is *physical* RAM, so the system is about two more LimeWire instances away from swapping, let alone running out of memory, based on its current process size of 103,664K (that's the VM process size listed in Task Manager -- both virtual machine and virtual memory, i.e., the true size of the process excluding DLLs concurrently in use by other processes). Nonetheless, there's an out-of-memory error dialog displayed prominently in the LimeWire UI, mocking me and my gigabyte of RAM. I dismiss it, and note that my connection quality is merely good -- one red bar. Within seconds, though, it becomes two and then three! LimeWire has apparently taken it upon itself to close some connections without any input from me on the matter -- and it must be happening locally, since independent failures at three remote hosts (or along the routes to three remote hosts) are very unlikely indeed. Astronomically so. Either my network connection is malfunctioning or LimeWire is, and the network connection checks out fine, since browsing to this "submit a post" page in Firefox gives no trouble or so much as a hint of delay. If www.gnutellaforums.com is reachable, it's unlikely that three specific independent hosts in geographically random locations (thanks to turning off locale preferencing!) all became unreachable within seconds of one another. Oh no, LimeWire is surely to blame here -- taking the initiative to trim bandwidth usage, I suppose, while Task Manager's network monitor shows a full 2/3 of my bandwidth available. (This is with normal activity; it recovered about when the bogus out-of-memory error appeared.) Shortly the connection quality recovers.

In the download list: seven files. Miraculously, one of them actually ****ing downloads successfully in seconds, as I watch in astonishment. Seven files, up for about ten minutes, one search done which returned a typical number of results -- that's all it takes to get unacceptable crashes and misbehavior from LimeWire on a 1.5GHz, 1GB Athlon machine. The developers must test this on 4GB quad-Xeon boxes or something, if their test suites don't reveal problems like this prerelease. Nothing, however, excuses releasing into the stable branch code that must have had everyone betatesting 3.9.x pulling out their hair at the error messages, the frequent hangs, and the data loss... What were they THINKING?! |
Addendum: it failed to find the other six files on requery, and it also failed, on several further occasions, to allocate memory despite some 800 additional MB being available -- each failure announced with great fanfare by a huge, focus-stealing dialog box. (Exactly how big a chunk it requested each time is a mystery, but it would have to have exceeded 800M for the system not to find it somewhere in the bowels of 1600M of virtual memory with only 700-odd in use.) I closed it down and waited the interminable time it takes for the process to disappear from Task Manager (over 2 minutes this time; I clocked it, and sometimes it's over ten) before launching a new instance. Predictable as cloudless skies in the weather forecast for Cairo: "Sorry, but Limewire was unable to resume your old downloads!" There's no excuse for the code that displays that dialog even being in the source tree, let alone actually being executed. And for a list of only six files? How ****ing hard can it be to load six old downloads? Especially when I've seen older versions (3.3.5 comes to mind) load over 2000(!) without difficulty -- and that arguably *is* excessive.

This software sucks. Version 4 just plain doesn't work. It sort of goes through the motions of vaguely resembling something that's pretending to work, while actually being a cardboard cutout that does no work at all -- a big, fat, lardy 100MB cardboard cutout, that is. And it sure uses a lot of CPU doing nearly nothing except failing to present a UI, composing the latest out-of-memory error dialog of doom to present to me like a birthday gift from Satan (the clocks must all be wrong down there, since it's not my birthday for some months yet, but since they can't ever see the ****ing sun or moon from there I can't really blame them for failing to keep an accurate calendar), and seeing how many concurrent threads it can spawn before Windows XP chokes (answer: over 700, since I've seen it grow to that many threads more than once, in some of the poorer 3.8.x versions and in all the 4.0.x ones, before I killed the process so as not to find out how many it takes to crash XP and lose all my unsaved work in other applications -- the loss of unsaved work in LimeWire being by then a foregone conclusion)... Maybe I can't blame Satan for not keeping an accurate calendar, but I sure as hell can blame LimeWire for not keeping my download list across sessions, despite professing to have done so since version 2.x. My official position is that I'm still waiting for this feature to actually be implemented. |
Preliminary indications are that you have failed. 4.0.5 is slow, the UI intermittently hangs, and I've already seen three "out of memory" errors on my 1GB system. During this time there were never more than five items in the download list pane -- and the only likely culprit for any kind of "bad interaction", firewall software, was disabled the whole while. Commit charge 481/1622M. If it genuinely failed a memory allocation, it wanted over 1100M! (A smaller allocation from a fragmented heap might explain it, except the machine was rebooted less than an hour ago and there hasn't been time for its memory to get badly fragmented yet. I've not done much since the reboot save a bit of surfing and launching LimeWire and attempting to use it.) |
If you mean things like spyware, viruses, and the like, the system's clean -- I scan new executables, I run Spybot S&D, and so forth. If there's any malware lurking inside my machine I'll eat my dusty, sputtery case fan for a crunchy after-dinner treat. :P About the only thing I can think of that could possibly be the problem, if LimeWire itself isn't -- especially now that firewall software is eliminated as a possible cause -- is the VM itself. But it's Sun's most recent version (1.4.2), so it's neither out of date nor a weird, dubious 3rd-party VM that may not implement the full spec. In short, LimeWire performs poorly on a commonplace OS with the reference-implementation VM and a decent hunk of hardware resources to draw upon. I hate to think how horribly 4.0.5 must run, if it can even get out of the starting gate, on the many older P2-266 128MB Windows 98SE machines still floating around out there! |
Only an idea, but do you share many MP3s with ID3v2 tags? If yes, remove them temporarily from the shared directory and see if it helps!? |
I'll try it, but why on earth would it? Limewire performance shouldn't be affected at all by the content of the files, and it should scale reasonably with the quantity of shared files. |
MP3 thing: As a test, I removed "mp3" from the list of shared file extensions. It didn't run any better for the rest of that session, but when I closed and restarted LimeWire, it seemed to be running more smoothly, with a 95M process size instead of 105-120M. Still sharing 9433 files. Any clue what's going on here? It does look as if sharing one MP3 file is somehow more "expensive" than sharing one JPEG or even a large MPEG, in terms of performance and memory demand. And why would having lots of MP3s shared make it either think it needed more than 1GB of RAM, or think I no longer had anywhere near 1GB of RAM?

Peerless: what "utilities" package? For what software? What's the detection procedure for whatever it was that you had? URL to read up on it? Any new kind of malware that existing detector software doesn't catch, I need to know more about. |
Maybe Limewire doesn't have a problem with mp3 files. Maybe the RIAA does, and when they detect someone sharing files with that extension they try to flood them off the net? Limewire might be freaking out due to huge amounts of malformed traffic intended to bring it down or degrade its performance. Even if you have just legitimate mp3s, maybe the RIAA still takes a preemptive "shoot first and ask questions later" approach... it wouldn't surprise me, at least, to find them playing dirty in such a way! |
Interesting RIAA DoS idea, Zarcuthra, but it doesn't seem to jibe with my seeing reduced network activity during episodes of particular slowness/unresponsiveness. Also, it's bloated back up to 105MB, and I just had the UI lock up for a couple of minutes, with about 30 items currently in the download list and still sharing 9433 files. No complaints yet this session that 1GB is not enough, though. I think we have three problems here:

1. It's inefficient -- sharing 9433 files makes it need 95MB? Assuming the memory usage of a clean install is around 30M (which, though high, is in line with most of the major browsers, such as Firefox), that's about 6500 bytes more for each shared file. How much of that really has to be resident all the time, and how much could be retrieved when the file's requested? It seems to me only the filename, maybe the metadata for matching, and maybe the hash need to remain in RAM at all times to match against incoming queries -- and for leaves shielded by ultrapeers (including my node), not even that. The info needs to be sent to ultrapeers upon connection (and reconnection), and then the ultrapeers manage routing queries to me only if I have matches on my system. So, I'd say zero bytes need to be resident per shared file, on average! Add to that that adding about 30 downloads made it bloat to 105M, or 20 more megs, suggesting an astounding 600K(!) of resident data per item. That's assuming, of course, that it doesn't leak -- which, since it's Java, it shouldn't. Why does it need 600K (not bytes, K!) per download-list entry? Earlier versions didn't -- 3.3.5 could have 2000(!) download items and run OK on a half-gig machine. That means, even assuming it used all available RAM for just resident download information, it used at most 256K per download item. I find it unlikely it used even that much. In fact I think:

2. It leaks. Despite running in a garbage-collected environment, it leaks. I guess never-to-be-used-again objects remain reachable. Does it create and then discard circular data structures? Some GC implementations (reference counting, for one) won't discover that two objects are garbage if they each refer to the other but nothing else refers to either. I'd hoped Java's wasn't among them, but the fact is LimeWire seems to leak memory. And recent versions (4.0 and above) seem to hemorrhage memory.

3. Like some archaic, long-forgotten MS-DOS applications, it doesn't recognize all of the installed memory. Where those DOS applications recognized only 640K (which "ought to be enough for anybody", according to Bill Gates), even if you had several megs of XMS or EMS memory, LimeWire seems to recognize 128M. That's an improvement, but it's not enough. Obviously. This, I think, is the only logical explanation for its reporting "out of memory" errors when its process size is around 120M and the system still has gobs of free RAM. It's that, or it's trying to allocate hundreds of megs in one chunk, or it's lying altogether. Anyway, there's no excuse for not using as much RAM as needed, out of all that's available, on a modern 32-bit platform with protected-mode flat virtual memory addressing. This isn't MS-DOS, or the ancient hoary versions of MacOS where every app had a fixed allotment of RAM and hairy near and far pointers, segments, and offsets were commonplace programming vocabulary. The only case now where per-application memory restrictions make sense is in some circumstances involving multi-user time-sharing systems.
And since my box isn't one of those -- it's a single-user home machine -- I should be able to turn any such restriction off. If there is such a setting buried in the bowels of LimeWire, I've been unable to find it: not in the options dialog, not with a hunt for LimeWire-related registry keys in regedit, and not by opening every ASCII file in either LimeWire's program directory or its ".limewire" settings directory in a text editor! This suggests that if such a "feature" exists, it was, in a true display of archaism, hard-coded without any way at all for end-users to alter it to suit their particular circumstances. That's really stupid. Hard-coded limits can't be outgrown without changing and recompiling the source, which most end-users can't be bothered to do even with open-source stuff and can't do at all with closed-source, and Moore's Law makes the outgrowing of such limits inevitable. Not foreseeing the need to revise them upward on high-end machines in the near future is like not foreseeing, back in the 1990s, that using a two-digit field to store dates was going to be a problem in a few years' time -- and yet there was software written in that decade that needed Y2K fixes. Of course, people had to pay for those fixes. Could this be built-in obsolescence? A way to force Pro purchasers to buy upgrades to keep their LimeWire functioning as machines get more and more RAM and user tasks demand more and more? I hope not. That would indicate very underhanded motives on the part of the development team. And why put a built-in obsolescence restriction in the free version? To motivate people to buy Pro, I suppose? It won't work -- all it does is make people try LimeWire Basic, think "This runs like a pig and crashes all the time with memory errors on my gigabyte machine! I'm not paying for crap like this!" and switch to Gnucleus. Which doesn't work with ZoneAlarm, unfortunately, or I'd have switched already! |
It found and downloaded some items. As soon as it did, the size GREW to 106M and the thread count started climbing: from 80 to 105 or more. Meanwhile, network activity dropped to a quarter of its previous level and the UI hung. After a minute, it recovered, and the stats were: 105M, 79 threads. So it temporarily created 15 threads and allocated 1MB to do some lengthy computation that had nothing to do with handling UI events and nothing to do with handling network events -- in other words, nothing to do with its job. Frivolous extra activity "on the side" that doesn't obviously relate to the stated software function makes me smell the distinct odour of spyware -- or at least some kind of embedded, unnecessary garbage that steals resources away from actually doing the work the user wants done. No bundled software, eh? I suppose if the secondary, hidden functions of the software are built right in, rather than shipped as separate apps that get co-installed, then there's technically "no bundled software" -- but there still seems to be crudware embedded in LimeWire. Say it ain't so, or I'm ditching LimeWire and ZoneAlarm and using Gnucleus.

Subsequently, it spat an "internal error" box at me after getting a couple more files. After another download completed, it hung again, grew to 106M and over 90 threads, and then reverted to normal after a few seconds. Same after another, only it went up to 105 threads that time. At that point everything left was "Awaiting sources" or busy, with 22 items in the download list, so I shut down and restarted LimeWire: closed the window, right-clicked the tray icon, chose Exit. It took over two minutes for the "javaw" process to disappear from Task Manager. Miraculously, the next session inherited the old download list. When it showed "Sharing 9433 files" and all green bars: process size 100M, 88 threads. Not 105M. So 5 of the megs at the end of the last session were the result of leaked memory, after an hour or so online. Also, 22 download items add about 5 megs. So I can say this about LimeWire's memory usage now, assuming the base application size is around 30M, in keeping with other large network apps -- the process size is:

* 30M base application size, plus
* 6500 bytes per shared file, plus
* 256K per pending download, plus
* 5M per hour left running with typical activity.

Also, I can say that it starts to slow down and experience UI freezes and network performance issues once the process size reaches about 100M, begins to display internal errors starting around 105M, and just plain hangs, chokes, quits functioning correctly, or otherwise blows up when it gets to around 120M. All without regard for how much memory the system as a whole has -- it seems to confine itself to 120 or so MB even if that means it crashes or otherwise malfunctions! It incorrectly detects available memory, or something. This is clearly a bug, and I'd call it a showstopper. Sharing thousands of files will eat up most of that 120M quickly, at 6500 resident bytes per shared file, and letting it run for hours will do the rest. Running it overnight for, say, nine hours means 45 megs leaked, at the rate determined above; add that to, say, 5000 shared files and you can bet it's hung or otherwise gone on the blink come the morn. Basically, the only way to get acceptable performance out of LimeWire is to share few files, pop on to get files rather than leave it running, and in short basically be a freeloader.
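(For illustration, the model above as a back-of-envelope calculator -- the constants are this post's rough empirical estimates, nothing published by LimeWire:)

[code]
// Back-of-envelope calculator for the process-size model estimated above.
public class LimeWireFootprint {
    static final double BASE_MB      = 30.0;                   // bare application
    static final double PER_FILE_MB  = 6500.0 / (1024 * 1024); // ~6500 bytes/shared file
    static final double PER_DL_MB    = 0.25;                   // ~256K/pending download
    static final double LEAK_MB_HOUR = 5.0;                    // observed leak rate

    static double estimateMB(int sharedFiles, int pendingDownloads, double hoursUp) {
        return BASE_MB + sharedFiles * PER_FILE_MB
             + pendingDownloads * PER_DL_MB + hoursUp * LEAK_MB_HOUR;
    }

    public static void main(String[] args) {
        // 9433 shared files, 22 pending downloads, an hour online:
        // 30 + ~58.5 + 5.5 + 5 = ~99MB -- right where the trouble starts.
        System.out.println(estimateMB(9433, 22, 1.0) + " MB");
    }
}
[/code]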
The bug of not correctly detecting the available memory may be avoidable, but the means of avoiding it carry too high a price. Yet not avoiding it does too, since a host that crashes, keeps having to be restarted, and is generally slow and wonky is not a good contribution to the p2p network either! It's for this reason that I'm characterizing this bug as a showstopper.

As a side note, if it really does think there's only 128MB in this ****ing box, then the strange hangs and spasms of thread creation actually could be the garbage collector -- the GC may think it's genuinely running out of memory and do some kind of slow, lengthy compacting operation, even with hundreds of physical megs free! In that case, I'd suspect the particular syndrome of LW hanging right after a download could mean that completing downloads call System.gc(). Also, the priority overriding of event handling would then be a VM problem, unless there's a way to change the GC behavior with VM settings... A quick Google reveals that Sun's VM does have command-line options to affect GC behavior and the initial amount of memory it uses -- the problem being that LimeWire on Windows comes as its own executable, with no (documented) command-line options and no clear way to change the options passed to the VM. So we're stuck with whatever options LimeWire's installer configures by default, since they're not in any user-editable file I've been able to find. (That's after searching for all registry keys with "Limewire" in the name somewhere, and all files with "Limewire" in the path, using regedit and Agent Ransack. And I'm not sure I'd call a registry key "user-editable" anyway.)

If the GC is the cause of the app locking up and dropping connections, it needs to be made less aggressive; and the problems of LimeWire not correctly detecting the amount of installed memory (underestimating mine by around a factor of eight), leaking, and using way too much memory per shared file (6500 bytes) and pending download (256K!) must be addressed in v4.0.6. If the three memory problems indicated above are not fixed in a publicly released upgrade by close of business next Tuesday (1700h Eastern time), I'm switching to Gnucleus, and to hell with ever buying LimeWire Pro. |
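(For what it's worth, Sun's 1.4 VM does take heap and GC flags. Assuming the LimeWire install directory contains a runnable LimeWire.jar -- true on some platforms, though an assumption for the Windows build -- bypassing the .exe stub would look something like this, with -Xmx raising the heap ceiling and -Xincgc requesting the incremental collector:)

[code]
javaw -Xms64m -Xmx512m -Xincgc -jar LimeWire.jar
[/code]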
Gnucleus :D !!! Seriously, the QRP system for routing queries is not made to handle more than 1000 shared files per user. More than 9000 is definitely too much, as you kill the ultrapeers' bandwidth. For the sake of the community, share the rare files (hard to find or download) and the ones you want to promote: that's it. That's what I do, and everybody should do the same. The ideal is for everyone on the Gnet to share 500 files or 5 gigs' worth of files; that's what will help the network the most. Ciao |
Gnutella must scale or it will die. p2p won't die, but the revenues from LimeWire Pro will sure dry up. Make the software perform well at higher scales. In the short term, make LimeWire actually share only 1000 files at a time; if you have more shared, it rotates among them, changing which thousand files it offers every hour or so, with a preference for small files. (Any given shared KB should be shared about as often as any other.) Of course, uploads in progress aren't interrupted as the shared group changes, but newly queued uploads would be for the new group. In the longer run, LimeWire should be able to offer whatever you're sharing, all of the time, without hassle -- and by 2006, this means it should handle sharing up to 100,000 files on a high-end 32-bit desktop box (4GB RAM, etc.) or a low-end 64-bit machine with maybe 4 or 8GB RAM, and a cable connection. (These are the figures suggested by extrapolating current trends in memory, bandwidth, and file storage, all of which follow Moore curves.) As for preferring to share rare files: most of the files I am sharing ARE rare; indeed, most of them are JPEGs, and JPEGs are for some reason difficult to get, often only available from one or two hosts at any given time.

Bunching small files into related groups might help things too, by reducing overhead. Efficiency seems to get low for files under 1M, based on observation. Chain-downloading groups of related files, especially small ones, should carry the same costs as downloading one huge file. Sharing them should carry the same costs as sharing one huge file. Currently, LimeWire scales poorly. It scales poorly handling large numbers of downloads -- queued, not concurrent -- and large numbers of shared files. This must change or LimeWire will die. Not a threat, a prediction.

Also, a new layer is needed between ultrapeers and leaves: "middlepeers", which connect leaves to ultrapeers. These would have relatively few leaves and offload much work to ultrapeers, but would not have as many connections to ultrapeers or to leaves as ultrapeers do, nor any to other middlepeers. Middlepeers would run OK on bandwidth (cable) that is iffy for ultrapeers. They'd help the network scale by taking burden off ultrapeers; less steep requirements would allow there to be more of them and avoid the disincentives that keep the number of ultrapeers inadequate. Ultrapeers would send a connecting middlepeer the most common query-string words they see, and the middlepeer would filter that list against the content shared by its handful of leaves. The middlepeer would send the filtered list back to the ultrapeer, and the ultrapeer would broadcast a query to it only if the query either used an uncommon word or used a common one contained in the filtered list. Thus, broad swathes of queries for commonly desired content these leaves happened not to have would never go to the middlepeer. It would also only have to index the files of a few leaves, not of hundreds.

One more thing I want to see fixed is the icons in search results. The green checked-paper icon should always appear by a file you already have, whether in the download directory or a shared directory, and it should never appear by a file you don't have. Currently both of these conditions are violated -- the first frequently, the second rarely. And fix this forum so the default "Unregistered" doesn't produce a message that the username is taken! In fact, any string starting with "Unregistered" seems to do that. |
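(A sketch of the short-term rotation proposal, again in Java with made-up names -- uniform random windows mean each file, and hence each shared KB, gets advertised about equally often over a day:)

[code]
import java.util.*;

// Sketch: cap the advertised share at 1000 files and swap in a different
// random window every hour. In-progress uploads from the old window are
// left alone; only newly queued uploads see the new group.
public class RotatingShare extends TimerTask {
    static final int WINDOW = 1000;
    private final List allShared;             // everything the user shares
    private List currentWindow = Collections.EMPTY_LIST;

    public RotatingShare(List allShared) { this.allShared = allShared; }

    public void run() {
        List shuffled = new ArrayList(allShared);
        Collections.shuffle(shuffled);
        currentWindow = shuffled.subList(0, Math.min(WINDOW, shuffled.size()));
        // ...then re-send the QRP table for just these files to our ultrapeers.
    }

    public List current() { return currentWindow; }

    public static RotatingShare schedule(List allShared) {
        RotatingShare r = new RotatingShare(allShared);
        new Timer(true).schedule(r, 0, 60 * 60 * 1000); // rotate hourly
        return r;
    }
}
[/code]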
Another thing. What's with LimeWire sometimes being confused about whether a BearShare or (especially) Shareaza host is nonexistent or merely busy? Very often I see a bunch of busy-host files go "Connecting ... Awaiting sources", as though it retried them only to find the host had gone offline. But often, selecting them and hitting "Resume" produces a bunch of busy signals again -- so the host is not offline. So which was it, offline or not? Is this LimeWire incorrectly distinguishing busy signals from timeouts? Shouldn't it give up on a source only if a connection times out? It clearly gives up under other circumstances, though -- sometimes I see it give up in a fraction of a second after resuming one, not the 60 it takes to time out. Or maybe the problem's with some other clients. It seems to happen mainly with Shareaza and, to a lesser extent, BearShare sources, suggesting these clients might be sending spurious 404s as a way of saying "I'm REALLY busy" or some such rot. (Yet it sometimes does that with one file, gives another a straight busy signal, and queues a third while actually uploading a fourth in a group of files -- it's weirdly inconsistent.) My theory on this is as follows:

* Busy is busy. Don't send something else to mean busy, and don't let the connection time out when you're there.
* 404 means "I don't have this file", not "I do, but I'm busy". See above.
* Queued means queued. Unless the source goes offline, I want to see that position number decrease steadily and then the file download. All too often I see it increase, or change abruptly to busy or even "Awaiting sources". Commonly, selecting it and hitting Resume puts it immediately back into the queue at some high-numbered position. So the host hadn't gone offline; it just decided to kick me back to the end of the line for no reason at all. That's incorrect behavior and probably violates a few RFCs.
* Downloading means downloading. I don't want to see any more downloads abruptly turn into busy signals at suspiciously round fractions like 1/3 or 3/4 of the file sent. Once you start uploading to a host, the only excuse for not finishing is going offline, IMO.
* And none of this fractional-KB/s bullshit either. Shareaza is especially bad for being stingy with upload bandwidth, but many hosts upload very slowly after advertising cable or better speeds.

Then there are the ones where the download progresses reasonably at first, but after a while the KB/s starts going down and the time stops counting down and starts counting up. Resume often makes the slowing download return to normal speed, but left unchecked it commonly goes all the way to zero. I guess these hosts want to make sure you really, really want the file, or they'll give the slot to another host? Usually, if you don't keep hitting Resume every so often, it slows, eventually stops, and then you get a busy signal. I want this kind of crud to stop happening as well. Shareaza is bad for it, but I've seen downloads from other LimeWires exhibit the same "slows down and eventually stops if you don't keep hitting Resume" effect. Any developers listening: if that "feature" isn't gone in current LimeWires, it had better be gone in 4.0.6 and subsequent versions. Anyway, besides seeing BearShare and especially Shareaza fix their shoddy software to stop doing some of the above bogus things they seem to be doing, I'd like to see LimeWire's developers overhaul the downloading code.
Firstly, they need to make sure the status determinations are accurate -- that it doesn't show "Awaiting sources" when it gets a busy signal from a source, and that it doesn't do anything itself to slow or halt downloads, such as failing to ack arriving chunks in a timely fashion. Removing the "it spawns dozens of threads and the event-handling thread hangs for ten minutes" bug should fix most of the problems with downloading from other LimeWires, as well as some problems downloading from any source at all. Maybe there's even some way to cover for other clients' misrepresentations of busy/nonexistent status, and make up for those clients' deficiencies. Then there's clarifying the distinction between two of the states: "Need more sources" and "Awaiting sources". As far as I can tell, the only difference is that the latter can be resumed while the former can be requeried. The actual state of the file is exactly the same: LimeWire knows of no hosts for the file that don't time out when attempted. In this case, requerying makes more sense than resuming as the user-available action; if the sources it knows of have all gone offline or become overloaded to the point of timing out, new sources are what's needed to get the file, rather than retries of the existing ones. Therefore, the "Need more sources" state and its user-available options make perfect sense. "Awaiting sources" just seems to mean "Need more sources, and requeried already since the last time LimeWire was restarted", and there's no logical reason to distinguish this from "Need more sources" in general. In fact, since requerying is the only likely route to acquiring the file, there's a good reason to stop distinguishing the two and provide "Find more sources" rather than "Resume" as an option. Better yet, provide both -- since some clients sometimes lie about file availability, you can end up with "Awaiting sources" showing by a file that clicking "Resume" will cause to download immediately; until those buggy clients stop lying and the older versions of them fall into disuse, providing both options makes a lot of sense. |
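(For what it's worth, the collapsed state set being argued for here, written as an old-style typesafe enum since this predates Java 1.5 -- "Awaiting sources" simply disappears into "Need more sources", which offers both "Resume" and "Find more sources":)

[code]
// Sketch of the download states left after the collapse argued for above.
public final class DownloadState {
    public static final DownloadState CONNECTING        = new DownloadState("Connecting");
    public static final DownloadState DOWNLOADING       = new DownloadState("Downloading");
    public static final DownloadState REMOTELY_QUEUED   = new DownloadState("Queued");
    public static final DownloadState BUSY              = new DownloadState("Busy");
    // One state instead of two; the UI offers both Resume and Find More Sources.
    public static final DownloadState NEED_MORE_SOURCES = new DownloadState("Need more sources");

    private final String label;
    private DownloadState(String label) { this.label = label; }
    public String toString() { return label; }
}
[/code]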
I can now confirm that 4.0.5 still has the "Sorry, Limewire was unable to resume your old downloads" bug. I want this dialog box removed from the code in 4.0.6 -- there is simply no excuse for displaying it. Ever. |
I ***** keep up, cap'n! Any chance you can summarize your feature requests (or "imperious demands", if you prefer ;) ) as in http://www.gnutellaforums.com/showth...threadid=25656 ? Anyway -- I've quite enjoyed your insights and metaphors, and hope to reread this thread in a few days. Btw -- if you think it will help, I could try to set up a similar situation here on OSX. If I were to unstuff a few files, they'd expand to ~13K .jpgs. They used to share just fine, so no problem to try again this weekend. Might that help rule out RAM or the OS? And -- get registered: it's just a handle easily looped to a throwaway email account, and it makes quoting, searching, and messaging so much easier. |
MP3 thing: late LimeWire 3.9 betas and 4.0 introduced support for ID3v2.3 tags. The tags are kept in memory, and everything is fine as long as the tags are not broken, etc. etc. Sorry, I cannot go into details here. Feel free to register and write me a PM. I'll forward this to the LW devs. One thing though: how many MP3s did you share? Hundreds? Thousands? You could experiment with the files a bit. For example, remove all files from the shared directory, share only MP3s, and so on (important: you must restart LimeWire every time). |
MP3s -- since it went down from 10,000-odd to 9,000-odd files when I removed "mp3" from shared extensions, I'm guessing around 1000. Registering -- signing up for a bunch of spam so I can have the dubious pleasure of keeping track of yet another login/password pair? Thanks, but no thanks. How large is an ID-somethingorother tag? And why does it keep at least 6500 bytes in RAM for each shared file, including non-MP3s?

Summarized requests:

* Fix the memory leaks. It leaks roughly 5M/hour under WinXP with moderate user activity.

* Fix the failure to correctly detect available memory. My semi-scientific observations with Task Manager seem to indicate it thinks it's running out of memory when the process size gets into the 100-120M range. Since the nice round (in binary) number 128M is just above that, I'm guessing it doesn't detect RAM above that amount, and thinks it's running out at this point even on a 1GB machine with an additional 512M of swap to spill over into. Software that doesn't grow to use as much space as it needs, even when that space is available, is software that doesn't scale; software that doesn't scale becomes obsolete in 5 years or less, thanks to Moore's exponential growth curves.

* Fix the memory-use inefficiencies. On my system, when the GUI comes up the process size is around or a bit below 30M; as it loads the shared library it shoots up to 95M, from which I calculated it keeps 6500 bytes resident per shared file. When it loads 20-odd pending downloads it grows another 5M, meaning that each pending download adds a whopping 256K! Ludicrous, when the files queued for downloading are themselves all under 100K each...

* Get rid of the "unable to resume old downloads" dialog. There is never a valid reason for such behavior. I've seen that the downloads.dat file has a downloads.bak backup; the latter should always be the most recent known-good version of the file, so it can be loaded if downloads.dat is corrupted. Instead, some of the crashes I've seen seem to corrupt them both at once!

* There's currently little or no incentive to be an ultrapeer, and the recurring theme around here is that ultrapeers are short on bandwidth, short on connection slots, etc. -- more ultrapeers would lower the burden on existing ones. Possibly add "middlepeers" that work reasonably on typical desktop PCs with broadband and benefit from more connections with better search results. Alternatively, create an incentive to be an ultrapeer -- offer free copies of Pro or something. :)

* Something sometimes spawns dozens of threads that assume priority over the event-handling thread. This leads to failed transfers and dropped connections as well as a nonresponsive UI. After a while the extra threads go away and the system returns to normal -- minus a few connections and with lots of interrupted up/downloads, that is. During these seizures there's a lot more CPU use than normal. Fix this.

* There seems to be no logical reason for the "Awaiting sources" status -- it just seems to mean "Need more sources, and you've already searched for more once since the last time you restarted LimeWire", and its sole effect on the user experience is to force a restart of LimeWire to try again to find sources for the file. Anything that encourages people not to keep LW running is bad for the network. (All the memory leaks and related problems also discourage lengthy stays online!)

* It scales poorly. Thousands of shared files, hundreds of queued downloads, and more than about one hour continuously online all mean trouble.
Software that doesn't scale becomes quickly obsolete; see above. * Gnutella in general is inefficient at sharing small files. It's harder to get a typical 100K file than to get a 100K chunk of a typical 10M file. Zipping batches of related small files up seems like a way around that, except: everyone has to do it for it to work; the preferred archive format is system dependent, zip on Windows, sit on the Mac, gz or bz2 on *nix, etc.; and archiving renders metadata and file types invisible to gnutella clients so you can't find zips of jpgs with a search for images for instance. Therefore, handling of small files needs to change. I suggested that if someone requests a large number of small files from a host, the host should serve them all machinegun-style in one upload slot and one conceptual upload event, rather than back to the queue after each one. "Chain-uploading" of small files would make them closer to equivalency with large files for the (not uncommon) case of grabbing numerous small files from the same source. Someone else suggested whole-directory sharing. Another small-file issue is that I regularly see even files of only a few K upload quite gradually in terms of percentages. Apparently they can be split up quite small. This is inefficient -- you end up sending potentially hundreds of K of packet headers for a 10K file that way! Since the max size of a network packet is 64K, I suggest sending small files (32K and below?) as one single network packet and never sending a busy signal for a request for such a file. The file itself can be sent in the same packet that would be used to send a busy signal, so why not send the file? This logic makes the smallest files able to not take up queue slots too. For larger files, they should be sent in 32K chunks per packet, aside from the final chunk which obviously may be smaller. The only exception I can think of as making any sense would be serving files over dialup. Furthermore, ultrapeers might cache small files hosted by leaves connected to them via dialup. Queries they'd route to their dialup-using leaves would first be checked against the cache, and they'd return a cached file as a result instead if possible. Then you'd get the file from a broadband-connected ultrapeer instead of a dialup-connected machine. (If faced with a request for a file flushed from the cache, the ultrapeer would have to forward the request somehow or reupload the file.) The twin advantages being to keep the file available through the leaf's inevitable dialup dropouts and to reduce the burden on the dialup-connected leaf. * Fix search result icons for accuracy. I currently see stars or torn paper by results I already have, and sometimes the checked icon by files I don't. I suspect this may be due to my download directory not being shared, as I screen all new files before sharing them. * Make one upload slot by default (configurable as an "advanced option") an "express lane" which preferences broadband connections and serves only files under 1M from a separate queue -- all configurable under "advanced options". * Make (and/or document) commandline options to limewire.exe to allow people with Java know-how to adjust VM parameters. I assume it's a stub of some kind, so simply passing-thru any command line parameters to the VM executable ought to do it, except for Mac Classic users, where there is no such thing as a command line. :P I think that about does it for now. |
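To make the "chain-uploading" suggestion concrete, here is a rough sketch of what the serving side might look like, assuming (hypothetically) a servent batched a set of requested small files into a single zip stream over one upload slot. Zip is used only because java.util.zip ships with Java; sendBatch and its arguments are invented for illustration, and nothing like this exists in LimeWire today:

    import java.io.*;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    // Hypothetical "chain-upload": stream a batch of small files as one
    // zip archive over a single connection, so each file does not pay a
    // full queue/handshake round trip of its own.
    public class BatchUpload {
        public static void sendBatch(File[] files, OutputStream rawOut) throws IOException {
            ZipOutputStream zip = new ZipOutputStream(rawOut);
            byte[] buf = new byte[32 * 1024]; // 32K chunks, per the suggestion above
            for (int i = 0; i < files.length; i++) {
                zip.putNextEntry(new ZipEntry(files[i].getName()));
                InputStream in = new FileInputStream(files[i]);
                try {
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        zip.write(buf, 0, n);
                    }
                } finally {
                    in.close();
                }
                zip.closeEntry();
            }
            zip.finish(); // flush the archive without closing the underlying stream
        }
    }

Since the archive would only ever exist on the wire, with both ends packing and unpacking on the fly, the metadata objection above wouldn't apply: clients would still index and search the individual files, not an archive.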
- Title, Artist, Album, Year, Track, Comments, and Genre, with up to 1KB each (+ other optional fields).
- Shared files come along with a hash, a TigerTree, a list of alternate locations, statistics, etc. It adds up!
- Memory leaks are impossible with Java!
- You should also know that each download in the queue allocates at least 2+2n threads (n = number of additional sources).
- The JVM has an initial heap size limit. You can raise it with the -Xms and -Xmx arguments; Google will tell you how...
- Files smaller than 1MB are always sent in one piece (they're not split up into smaller pieces, aka PFS).
- and so on...

You seem to have a good technical background, but you speculate too much. For beginners: http://www.gnufu.net For experts: http://groups.yahoo.com/group/the_gdf And here you can see how LimeWire works: http://www.limewire.org |
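The -Xms/-Xmx point is easy to verify from the user's side. A tiny sketch using the standard Runtime API (available since Java 1.4) reports the running VM's heap ceiling; if the Windows launcher pins -Xmx at or near 128m -- an assumption, not something verified here -- that would line up with the ~120M ceiling reported earlier in this thread:

    // Prints what the JVM believes its heap limits are. Run it with
    // different flags to compare, e.g.: java -Xms64m -Xmx256m HeapCheck
    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024L * 1024L;
            System.out.println("max heap:   " + (rt.maxMemory() / mb) + " MB");
            System.out.println("total heap: " + (rt.totalMemory() / mb) + " MB");
            System.out.println("free heap:  " + (rt.freeMemory() / mb) + " MB");
        }
    }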
You have some good suggestions and ideas for improvement, <unregistered person with many guest names>. You've written some very long posts, so it's going to take a while to go through them all. It would make it easier (and we probably would have responded more quickly) if you posted the ideas in the 'feature requests' area, perhaps with small summaries in the sticky 'Feature Requests' thread. I, personally, tend to ignore threads whose title ends in the word 'sucks'. Thanks. |
Well Reggie (it's easier than <unreg239u7m-98 2348yjftidopf>), I tried to confirm your situation, and no joy. I'm no expert, but my family and I are very happy with LW 4.0.5, and anticipating even more improvements. Looks like something in your setup sux (or else it's the Pro difference). You should change the thread title to something more realistic: LW 4.0.5 is very good at downloading small files.

Here's what I tried:
- unzipped about 1 GB of files (mostly .jpg) to bring my shares up to 13K
- upped the simultaneous-downloads pref to 100
- searched for "wallpaper .jpg" (found about 350)
- tried to download most of them at once (~200?)
- 1/2 hr later did a "Repeat Search"
- ~1/2 hr later selected ~100 "awaiting sources" and hit the Resume button
- waited another 1/2 hour, killed the dl's and resumed them from the Library->Incomplete folder (100 'blacks'; 60 'reds')

Here's what happened:
- 100 downloaded right off the bat (within a 1/2 hour) with no intervention.
- the gui 'froze' (mouse clicks didn't immediately change panes, etc.), but there were no hangs or other problems: I was able to just leave it be, switch to another user account to check mail, browse, and twiddle my thumbs. Switched back, and all was good: uploads were still proceeding, the connections pane was happily recording that Ultrapeer mode wasn't affected, and downloads showed quite a few dl's awaiting sources.
- after repeating the search once, later resuming the "Awaitings," and finally killing inactive downloads and resuming from the Library --
- final count: 159 files

Notes:
- CPU usage went over 50% only a couple of times, and usually stayed around 20-38%
- the most threads I saw recorded were 272
- real memory size peaked at 172 MB (VM 580 MB)

btw -- I tried a "What's New", and saw a bunch of ethernet results. Huh? Turned out the kids were running LW Free downstairs the whole time. :eek: A quick check showed they weren't sharing anything really offensive (Gad! they like "funny ads"). Whew. :cool:

So:
- Looks like you have some good suggestions, but your system has other problems. LW 4.0.5 Pro handles small files quite well.
- I look forward to further optimizations, but right now your comments say more about your setup than about LW, IMHO. Maybe a slow old Mac with limited RAM and processor will help? :p

Specs: 18-month-old 700 MHz PowerPC G3 iBook, 384 MB SDRAM; Linksys router (port 20300 opened and forwarded) -> DOCSIS cable connection (5 Mbps down, 1 Mbps up). Sharing 13,152 files, 5.15 GB (5,501,037,010 bytes) -- mostly text and .jpg's, but also about 50 .mp3's and various other stuff. LimeWire version 4.0.5 Pro, Java version 1.4.2_03 from Apple Computer, Inc., Mac OS X v. 10.3.4 on ppc.

Free/total memory: 8949904/73793536

-- listing session information --
Current thread: AWT-EventQueue-0
Active Threads: 218
Uptime: 1:44:54
Is Connected: true
Number of Ultrapeer -> Ultrapeer Connections: 30
Number of Ultrapeer -> Leaf Connections: 30
Number of Leaf -> Ultrapeer Connections: 0
Number of Old Connections: 0
Acting as Ultrapeer: true
Acting as Shielded Leaf: false
Number of Active Uploads: 3
Number of Queued Uploads: 0
Number of Active Managed Downloads: 2
Number of Active HTTP Downloaders: 1
Number of Waiting Downloads: 72
Received incoming this session: true
Number of Shared Files: 13152
Guess Capable: true

-- listing threads --
Acceptor: 1
AWT-Shutdown: 1
AWT-EventQueue-0: 1
pinger thread: 1
QRPPropagator: 1
MessageLoopingThread: 60
QueryDispatcher: 1
MulticastService: 1
HttpClient-ReferenceQueueThread: 1
TimerQueue: 1
UDPService-Receiver: 1
TimerRunner: 1
DownloadWorker: 1
OutputRunner: 60
ConnectionDispatchRunner: 4
QueryUnicaster: 1
ConnectionWatchdog: 1
DestroyJavaVM: 1
Java2D Disposer: 1
HttpClient-IdleConnectionThread: 1
Thread-3: 1
Thread-0: 1
HTTPAcceptor: 1
ManagedDownload: 74

-- listing properties --
WINDOW_Y=34
FORCE_IP_ADDRESS=true
WINDOW_X=68
PORT=20300
CLEAR_UPLOAD=false
FILTER_HTML=true
RUN_ON_STARTUP=false
SEARCH_MAGNETMIX_BUTTON=true
INSTALLED=true
FORCED_IP_ADDRESS_STRING=24.72.*.*
HARD_MAX_UPLOADS=10
AVERAGE_UPTIME=20385
TOTAL_UPTIME=978497
MAX_UPLOAD_BYTES_PER_SEC=295
COUNTRY=
LAST_SHUTDOWN_TIME=1085791178151
APP_WIDTH=809
SESSIONS=48
SHOW_TOTD=false
UPLOAD_SPEED=28
UPLOADS_PER_PERSON=1
FORCED_PORT=20300 |
This says that it works OK on a Mac, and recognizes more than 128MB there. So the problem I have is Windows-specific, which may indicate a VM problem. That would be odd, since it's Sun's very own VM, and not some unstable beta version either. If the makers of Java couldn't get it right, it's unlikely Apple could, which makes me think Windows-specific code in Limewire may be the problem. There are two things to look at there:

* The installer for the Windows binary. Maybe it configures the Java VM poorly? Or something.

* Any platform-dependent code in Limewire. I'm not using the Windows native L&F, so it's *probably* not in there. Minimizing to the tray must be Windows-specific, but I see problems without (and before) doing so, so that platform-dependent code is *probably* not the problem either.

Otherwise I'm not sure. The fact that you had the UI freeze once is telling, though -- the problem at least potentially exists in the Mac port too, but it takes a helluva lot more to provoke it. It looks like all the numbers I gave in an earlier post have to be multiplied by 5 or more on the Mac. That it performs vastly differently on different brands of hardware with similar CPU speeds and RAM is disturbing. A supposedly cross-platform app that behaves far better under one OS than another is not really cross-platform, is it? Now I start to wonder if the dev team tests the code on a LAN of Mac G4s rather than quad Xeons... :P Anyway, shoddy performance under WinXP on a fairly beefy piece of hardware is not good, and most of your users and prospective buyers of Pro are running Windows. Given how crummy Windows is, I'd accept a minor performance hit compared to the Mac -- and since my CPU is somewhat faster than his, I'd expect about the same performance under the same conditions. Instead, mine gets very cranky as its memory usage approaches 120M, even with 500-odd unused megs of *physical* RAM still free. Anyone here got inside knowledge of how the Mac port differs from the Windows port? Maybe you can shed some light on why the Windows one only "sees" a tenth of the system's memory and is slow and cranky, while the Mac port performs at least five times better... |
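One low-tech way to narrow down the installer-versus-port question above: run the same trivial class under the Windows and Mac JVMs and compare the output. Identical heap ceilings would point toward LimeWire's platform-specific code; different ones would point toward how each launcher starts the VM. The class name is made up for illustration; the system properties and Runtime call are standard Java:

    // Reports platform and VM identity alongside the heap ceiling, so
    // results from different machines can be compared side by side.
    public class PlatformReport {
        public static void main(String[] args) {
            System.out.println("os.name:      " + System.getProperty("os.name"));
            System.out.println("os.arch:      " + System.getProperty("os.arch"));
            System.out.println("java.version: " + System.getProperty("java.version"));
            System.out.println("java.vendor:  " + System.getProperty("java.vendor"));
            System.out.println("max heap MB:  "
                    + (Runtime.getRuntime().maxMemory() / (1024L * 1024L)));
        }
    }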
One business day left. If you can't identify and fix these performance issues under WinXP with all the data I've provided, you deserve to go out of business. :P |
YOU FAILED. Sorry. As of now, somewhat after the close-of-business deadline, there is no version available from your Web site more recent than 4.0.5, which is known to contain the same severe performance issues under WinXP that plagued earlier 4.0.x versions. Your seven days are up, and you have not met the terms and conditions I set. Therefore, I hereby publicly swear an oath never to purchase Pro. Sorry. It is too late now. You did not respond to a potential paying customer's concerns in a timely manner, and therefore you are losing business. Now it's time for me to spread the word that the 4.0 series of Limewire does not work at all acceptably on WinXP machines, vote it down on download.com, and the like... You had your chance to fix these problems (which, I note again for the record, have been around in varying degrees of severity since early 3.x versions or even earlier) and you chose instead to ignore them and focus on creeping featurism. Your choice; now you face the consequences. Goodbye. |
LOL! Hey Reggie -- don't forget to link back here and say how others didn't have your problems on XP, and how it worked better than anything else. Good luck finding out about your RAM, and I hope you'll do a similar comparison with Morpheus. Sorry if you thought this forum was the way to contact LW developers. Just us l'il ole users here (no one important enough to be able to issue ultimatums). Cheers -- do let us know what program you are trolling for, and where you regularly post. |
I do recall responding to this about a week ago... and scrolling back... yup, there's my post. Sorry if you wanted something more official. How's about this:

*** OFFICIAL RESPONSE ***
Please provide more concise posts.
*** END OFFICIAL RESPONSE ***

Thanks. |
Here's a concise post: Limewire loses track of partial downloads and restarts them all from 0%. It hangs, is slow, and loses preference settings -- and I bought the Pro version. I'm running a 2.8 GHz P4 with 1 GB RAM on a 512 Kbps ADSL connection. |