Gnutella Forums  


General Windows Support — for questions about Windows issues regarding LimeWire or WireShare, or related questions


#1 — May 27th, 2004 — Unre565635757 (Guest)

It found and downloaded some items. As soon as it did that, the size GREW to 106M and the thread count started climbing: from 80 to 105 or more. Meanwhile, network activity dropped to a quarter of its previous level and the UI hung. After a minute, it recovered and the stats were: 105M, 79 threads.

So it temporarily created 15 threads and allocated 1MB to do some lengthy computation that had nothing to do with handling UI events and nothing to do with handling network events -- in other words, had nothing to do with its job. Frivolous extra activity "on the side" that doesn't obviously relate to stated software function makes me smell the distinct odour of spyware -- or at least some kind of embedded, unnecessary garbage that steals resources away from actually doing the work the user wants done.

No bundled software, eh? I suppose if the secondary, hidden functions of the software are built right in rather than separate apps that get co-installed, then there's technically "no bundled software", but there still seems to be crudware embedded in Limewire. Say it ain't so, or I'm ditching Limewire and Zonealarm and using Gnucleus.

Subsequently, it spat an "internal error" box at me after getting a couple more files.

After another download completed, it hung again, grew to 106M and over 90 threads, and then reverted to normal after a few seconds.

Same after another, only it went up to 105 threads that time.

At this point, everything left was "Awaiting sources" or busy, with 22 items in the download list, so I shut down and restarted Limewire. I closed the window, right-clicked the tray icon, and chose Exit. It took over two minutes for the "javaw" process to disappear from Task Manager.

Miraculously, the next session inherited the old download list. Once it showed "Sharing 9433 files" and all green bars, the process size was 100M with 88 threads. Not 105M. So, 5 of the megs at the end of the last session were the result of it leaking memory. That was after an hour or so online. Also, 22 download items add 5 megs, so I can say this about Limewire's memory usage now, assuming the base application size is around 30M, in keeping with other large network apps -- the process size is:

* 30M base application size, plus
* 6500 bytes per shared file, plus
* 256K per pending download, plus
* 5M per hour left running with typical activity.
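To make the back-of-the-envelope model above concrete, here's a small sketch of it in Java. The class and method names are mine, and the constants are estimates from Task Manager observations in this thread, not anything from LimeWire's code:

```java
// Rough process-size model inferred from Task Manager observations.
// All constants are estimates from this thread, not LimeWire internals.
public class LimewireMemoryModel {
    static long estimateBytes(int sharedFiles, int pendingDownloads, double hoursOnline) {
        long base = 30L * 1024 * 1024;                       // ~30M base application size
        long perFile = 6500L * sharedFiles;                  // ~6500 bytes per shared file
        long perDownload = 256L * 1024 * pendingDownloads;   // ~256K per pending download
        long leak = (long) (5L * 1024 * 1024 * hoursOnline); // ~5M leaked per hour online
        return base + perFile + perDownload + leak;
    }

    public static void main(String[] args) {
        // This session's numbers: 9433 shared files, 22 pending downloads, ~1 hour online.
        System.out.println(estimateBytes(9433, 22, 1.0) / (1024 * 1024) + "M"); // prints 98M
    }
}
```

Plugging in this session's numbers gives about 98M -- in the right neighborhood of the ~100-105M actually observed, given how rough the constants are.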

Also, I can say that it starts to slow down and experience UI freezes and network performance issues once the process size reaches about 100M, begins to display internal errors around 105M, and just plain hangs, chokes, quits functioning correctly, or otherwise blows up when it gets to around 120M. All without regard for how much memory the system as a whole has -- it seems to confine itself to 120 or so MB even if that means it crashes or otherwise malfunctions! It incorrectly detects available memory or something. This is clearly a bug, and I'd call it a showstopper. Sharing thousands of files will eat up most of that 120M quickly, at 6500 resident bytes per shared file, and letting it run for hours will do the rest. Running it overnight for, say, nine hours means 45 megs leaked, at the rate determined above; add that to, say, 5000 shared files and you can bet it's hung or otherwise gone on the blink come the morn. Basically, the only way to get acceptable performance out of Limewire is to share few files, pop on to get files rather than leave Limewire running, and in short basically be a freeloader.

The bug of incorrectly detecting available memory may be avoidable, but the means of avoiding it carry too high a price. Yet not avoiding it does too, since a host that crashes, keeps having to be restarted, and is generally slow and wonky is not a good contribution to the p2p network either! It's for this reason that I'm characterizing this bug as a showstopper.

As a side note, if it really does think there's only 128MB in this ****ing box, then the strange hangs and spasms of thread creation actually could be the garbage collector -- the GC may think it's genuinely running out of memory and do some kind of slow lengthy compacting operation, even with hundreds of physical megs free! In that case, I'd suspect the particular syndrome of LW hanging right after a download could mean that downloads completing call System.gc(). Also, the priority overriding event-handling is then a VM problem, unless there's a way to change the GC behavior with VM settings...

A quick Google reveals that Sun's VM does have command-line options to affect GC behavior and the initial amount of memory it uses -- the problem being that Limewire on Windows comes as its own executable, with no (documented) command-line options and no clear way to change the options passed to the VM. So, we're stuck with whatever options Limewire's installer configures it to use by default, since they're not in any user-editable file I've been able to find. (That's after searching for all registry keys with "Limewire" in the name and all files with "Limewire" in the path, using regedit and Agent Ransack. And I'm not sure I'd call a registry key "user-editable" anyway.) If the GC is the cause of the app locking up and dropping connections, it needs to be made less aggressive, and the problems of Limewire incorrectly detecting the amount of installed memory (underestimating mine by around a factor of eight), leaking, and using way too much memory per shared file (6500 bytes) and per pending download (256K!) must be addressed in v4.0.6. If the 3 memory problems indicated above are not fixed in a publicly-released upgrade by close of business next Tuesday (1700h Eastern time), I'm switching to Gnucleus, and to hell with ever buying Limewire Pro.
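For reference, these are the sorts of flags in question. This is a hypothetical invocation -- "LimeWire.jar" and the flag values are illustrative, and as noted above there is no documented way to pass any of this through limewire.exe:

```shell
# Hypothetical direct launch of the app's jar with heap and GC tuning.
# -Xms sets the initial heap, -Xmx the maximum; -Xincgc asks the
# 1.4-era Sun VM for incremental (lower-pause) garbage collection.
java -Xms64m -Xmx512m -Xincgc -jar LimeWire.jar
```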
#2 — May 27th, 2004 — et voilà (+Modérateur à ses heures+, Le Québec)

Gnucleus!!! Seriously, the QRP system for routing queries is not made to handle more than 1000 shared files per user. More than 9000 is definitely too much, as you kill the Ultrapeers' bandwidth. For the sake of the community, share rare files (hard to find or download) and the ones you want to promote: that's it. That's what I do, and it's what everybody should do. The ideal is that everyone on the Gnet shares 500 files or 5 gigs' worth of files; that's what will help the network the most.

Ciao
#3 — May 27th, 2004 — Unregdfhd (Guest)

Quote:
Originally posted by et voilà
the QRP system for routing queries is not made to handle more than 1000 files shared per user
More hard-coded limits? Does every generation need to learn the lesson that no, 640K will not be enough for everybody or for very long? I thought the whole point of this thing called "recorded history" is that these mistakes, such as assuming zero growth in a computer/Internet-related area, don't get made twice. Instead they get made over and over and over...

Gnutella must scale or it will die. p2p won't die, but the revenues from Limewire Pro will sure dry up. Make the software perform well at higher scales. In the short term, make Limewire actually share only 1000 files at a time; if you have more shared, it will rotate among them, changing what files it offers to a different thousand every hour or so, with a preference for small files. (Any given shared kb should be shared about as often as any other.) Of course, uploads aren't interrupted as the shared group changes; but newly queued uploads will then be for the new group.
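A minimal sketch of the rotation idea above, assuming a simple round-robin over the full file list (the small-file preference and the don't-interrupt-active-uploads rule are left out for brevity, and all names are mine, not LimeWire's):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "share only 1000 files at a time, rotating hourly".
// Illustrative only; policy details (small-file preference, upload
// handling) from the post are omitted.
public class SharedFileRotator {
    private final List<String> allFiles;
    private final int windowSize;
    private int offset = 0;

    public SharedFileRotator(List<String> allFiles, int windowSize) {
        this.allFiles = new ArrayList<>(allFiles);
        this.windowSize = windowSize;
    }

    // Called e.g. once per hour: return the next window of files to offer.
    public List<String> rotate() {
        int n = allFiles.size();
        if (n <= windowSize) return new ArrayList<>(allFiles);
        List<String> window = new ArrayList<>(windowSize);
        for (int i = 0; i < windowSize; i++) {
            window.add(allFiles.get((offset + i) % n)); // wrap around the list
        }
        offset = (offset + windowSize) % n;
        return window;
    }
}
```

Over enough rotations every file gets offered equally often, which is the "any given shared kb shared about as often as any other" property asked for above.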

In the longer run Limewire should be able to offer whatever you're sharing all of the time without hassle -- and by 2006, this means it should handle sharing up to 100,000 files on a high-end 32 bit desktop box (4GB RAM, etc.) or a low-end 64-bit machine with maybe 4 or 8GB RAM, and a cable connection.
(These are the figures suggested by extrapolating current trends in memory, bandwidth, and file storage, all of which follow Moore curves.)

As for preferring to share rare files: most of the files I am sharing are rare; indeed most of them are jpgs and jpgs are for some reason difficult to get, often only available from one or two hosts at any given time.

Bunching small files into related groups might help things too, by reducing overhead. Efficiency seems to get low for files under 1M, based on observations. Chain-downloading groups of related files, especially small ones, should carry the same costs as downloading one huge file. Sharing them should carry the same costs as sharing one huge file.

Currently, Limewire scales poorly. It scales poorly handling large numbers of downloads -- queued, not concurrent -- and large numbers of shared files. This must change or Limewire will die. Not a threat, a prediction.

Also, a new layer is needed between ultrapeers and leaves: "middlepeers" which connect leaves to ultrapeers. These would have relatively few leaves and offload much work to ultrapeers, but not have as many connections to ultrapeers or to leaves as ultrapeers do, nor any to other middlepeers. Middlepeers would run OK on bandwidth (cable) that is iffy for ultrapeers. They'd help the network scale by taking burden off ultrapeers; less steep requirements would allow there to be more of them and avoid the disincentives that keep the number of ultrapeers inadequate. Ultrapeers would send a connecting middlepeer the most common query string words they see, and the middlepeer would filter that list against the content shared by its handful of leaves. The middlepeer would send the filtered list back to the ultrapeer, and the ultrapeer would broadcast a query to it only if it either used an uncommon query string word or used a common one contained in the filtered list. Thus, broad swathes of queries for commonly desired content these leaves happened not to have would not go to the middlepeer. It would also only have to index the files of a few leaves, not of hundreds.
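Here's a rough sketch of that filtering handshake. Everything here is illustrative -- the class, method names, and word-set representation are assumptions about how the idea could look, not existing Gnutella code:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the proposed middlepeer keyword filter: the ultrapeer sends
// its most common query words, the middlepeer keeps only those its leaves
// can actually answer, and the ultrapeer forwards a common-word query
// only if it survived the filter. All names are illustrative.
public class MiddlepeerFilter {
    // Words common in queries, as reported by the ultrapeer.
    private final Set<String> commonWords;
    // Common words that match content shared by this middlepeer's leaves.
    private final Set<String> answerable;

    public MiddlepeerFilter(Set<String> commonWords, Set<String> leafContentWords) {
        this.commonWords = new HashSet<>(commonWords);
        this.answerable = new HashSet<>(commonWords);
        this.answerable.retainAll(leafContentWords); // intersection
    }

    // Ultrapeer-side decision: forward this query word to the middlepeer?
    public boolean shouldForward(String queryWord) {
        // Uncommon words are always forwarded; common ones only if answerable.
        return !commonWords.contains(queryWord) || answerable.contains(queryWord);
    }
}
```

The effect is the one described above: broad swathes of common queries the leaves can't answer never reach the middlepeer, while rare-word queries still flow through normally.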

One more thing I want to see fixed is the icons in search results. The green checked paper icon should always appear by a file you already have, either in the download directory or a shared directory. It should never appear by a file you don't have. Currently, both of these conditions are violated, the first frequently, the second rarely.

And fix this forum so the default "Unregistered" doesn't produce a message that the username is taken! In fact, any string starting with "Unregistered" seems to do that.
#4 — May 27th, 2004 — Un7657457 (Guest)

Another thing. What's with Limewire sometimes being confused about whether a Bearshare or (especially) Shareaza host is nonexistent or merely busy? Very often I see a bunch of busy hosts' files go "Connecting ... Awaiting sources" as though it retried them only to find the host had gone offline. But often, selecting them and hitting "Resume" produces a bunch of busy hosts again -- so the host is not offline. So which was it, offline or not? Is this Limewire incorrectly distinguishing busy signals from timeouts? Shouldn't it give up on a source only if a connection times out? It clearly gives up under other circumstances, though -- sometimes I see it give up in a fraction of a second, not the 60 it takes to time out, after resuming one.

Or maybe the problem's with some other clients. It seems to happen mainly with Shareaza and, to a lesser extent, Bearshare sources, suggesting these clients might be sending spurious 404s as a way of saying "I'm REALLY busy" or some such rot. (Yet, it sometimes does that with one file, gives another a straight busy signal, and queues a third while actually uploading a fourth in a group of files -- it's weirdly inconsistent.)

My theory on this is as follows:
* Busy is busy. Don't send something else to mean busy, or let the connection time out when you're there.
* 404 means "I don't have this file", not "I do but I'm busy". See above.
* Queued means queued. Unless the source goes offline I want to see that position number decrease steadily and then the file download. All too often I see it increase, or change abruptly to busy or even awaiting sources. Commonly, selecting it and hitting resume puts it immediately back into the queue again at some high-numbered position. So the host hadn't gone offline. It just decided to kick me back to the end of the line for no reason at all. That's incorrect behavior and probably violates a few RFCs.
* Downloading means downloading. I don't want to see any more downloads that abruptly turn into busy signals at suspiciously round fractions like 1/3 or 3/4 of the file sent. Once you start uploading to a host, the only excuse for not finishing is going offline IMO. And none of this fractional-KB/s bullshit either. Shareaza is especially bad for being stingy with upload bandwidth, but many hosts upload very slowly after advertising cable or better speeds. Then there are the ones where the download progresses reasonably at first, but after a while the kb/s starts going down and the time stops counting down and starts counting up. Resume often makes the slowing-down download return to normal speed, but left unchecked it commonly goes all the way to zero. I guess these hosts want to make sure you really, really want the file or they'll give the slot to another host? Since usually, if you don't keep hitting resume every so often, it slows, eventually stops, and then you get a busy signal. I want this kind of crud to stop happening as well. Shareaza is bad for it, but I've seen downloads from other Limewires exhibit the "slows down and eventually stops if you don't keep hitting resume" effect. Any developers listening: if that "feature" isn't gone in current Limewires, it better be gone in 4.0.6 and subsequent versions.
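The "busy is busy, 404 is 404" rules above amount to a small decision table. Here's a sketch of how a client could map a source's response to a download status, under the assumption that sources answer with HTTP-style 503 (busy or queued) and 404 (not found) codes and that only a timeout means the host is gone; the enum and mapping are illustrative, not LimeWire's actual code:

```java
// Sketch of the status interpretation argued for above: never confuse
// "busy" with "gone", and treat 404 strictly as "file not here".
public class SourceStatus {
    enum State { DOWNLOADING, BUSY, QUEUED, GONE }

    static State interpret(int httpCode, boolean timedOut, boolean queuedHeader) {
        if (timedOut) return State.GONE;          // only a timeout means offline
        if (httpCode == 404) return State.GONE;   // 404 = "I don't have this file"
        if (httpCode == 503) {
            return queuedHeader ? State.QUEUED : State.BUSY; // 503 = busy, or queued if flagged
        }
        return State.DOWNLOADING;                 // 200/206: data is coming
    }
}
```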

Anyway, besides seeing Bearshare and especially Shareaza fix their shoddy software to stop doing some of the above bogus things they seem to be doing, I'd like to see Limewire's developers overhaul the downloading code. Firstly they need to make sure the status determinations are accurate -- that it doesn't show "Awaiting sources" when it gets a busy signal from a source, that it doesn't do anything itself to slow or halt downloads such as not ack arriving chunks in a timely fashion. Removing the "it spawns dozens of threads and the event handling thread hangs for ten minutes" bug should fix most of the problems with downloading from other Limewires as well as some problems downloading from any source at all.
Maybe there's even some way to cover other clients' misrepresentations of busy/nonexistent status somehow, and make up for these clients' deficiencies.

Then there's clarifying the distinction between two of the states: "Need more sources" and "Awaiting sources". As far as I can tell, the only difference is that the latter can be resumed while the former can be requeried. The actual state of the file is exactly the same: Limewire knows of no hosts for the file that don't time out when attempted. In this case, requerying makes more sense than resuming as the user-available action; if the sources it knows of have all gone offline or become overloaded to the point of timing out, new sources are needed to get the file, rather than retrying the existing ones. Therefore, the "Need more sources" state and its user-available options make perfect sense. "Awaiting sources" just seems to mean "Need more sources, and requeried already since the last time Limewire was restarted", and there's no logical reason to distinguish this from "Need more sources" in general. In fact, since requerying is the only likely route to acquiring the file, there's a good reason to stop distinguishing them, and to provide "Find more sources" rather than "Resume" as an option. Better yet, provide both -- since some clients sometimes lie about file availability and you end up with "Awaiting sources" showing by a file that clicking "Resume" will cause to download immediately, providing both options makes a lot of sense until those buggy clients stop lying and the older versions that lie fall into disuse.
#5 — May 27th, 2004 — Unrerfgt45yfc (Guest)

I can now confirm that 4.0.5 still has the "Sorry, Limewire was unable to resume your old downloads" bug. I want this dialog box removed from the code in 4.0.6 -- there is simply no excuse for displaying it. Ever.
#6 — May 27th, 2004 — stief (A reader, not an expert; Canada)

I ***** keep up cap'n!

Any chance you can summarize your feature requests (or "imperious demands" if you prefer) as in http://www.gnutellaforums.com/showth...threadid=25656 ?

Anyway--I've quite enjoyed your insights and metaphors, and hope to reread this thread in a few days.

btw--if you think it will help, I could try to set up a similar situation here on OSX. If I were to unstuff a few files, they'd expand to ~13K .jpgs. They used to share just fine, so no problem to try again this weekend. Might that help rule out RAM or OS?

and--get registered: it's just a handle easily looped to a throwaway email account, and it makes quoting, searching, and messaging so much easier.
#7 — May 28th, 2004 — rkapsi (Valued Member contributor)

MP3 thing: late LimeWire 3.9 betas and 4.0 introduced support for ID3v2.3 tags. The tags are kept in memory, and everything is fine as long as the tags are not broken, etc. Sorry, I cannot go into details here. Feel free to register and write me a PM. I'll forward this to the LW devs.

One thing though: how many MP3s did you share? Hundreds? Thousands?

You could experiment with the files a bit. For example, remove all files from the shared directory, share only MP3s, and so on (important: you must restart LimeWire every time).
#8 — May 28th, 2004 — Unre5746868 (Guest)

MP3s -- since it went down from 10000-odd to 9000-odd files when I removed "mp3" from the shared extensions, I'm guessing around 1000.

Registering -- signing up for a bunch of spam so I can have the dubious pleasure of having to keep track of yet another login/password pair? Thanks, but no thanks.

How large is an id-somethingorother tag? Why does it keep at least 6500 bytes in RAM for each shared file (including non-MP3s)?

Summarized requests:
* Fix memory leaks. It leaks roughly 5M/hour under WinXP and moderate user activity.
* Fix not correctly detecting available memory. My semi-scientific observations with Task Manager seem to indicate it thinks it's running out of memory when the process size gets into the 100-120M range. Since the nice round (in binary) number 128M is just above that, I'm guessing it doesn't detect RAM above that amount, and thinks it's running out at this point even on a 1GB machine with an additional 512M of swap for it to spill over into. Software that doesn't grow to use as much space as it needs even when that space is available is software that doesn't scale. Software that doesn't scale becomes obsolete in 5 years or less due to Moore's exponential growth curves.
* Fix memory-use inefficiencies. On my system, when the GUI comes up the process size is around or a bit below 30M; as it loads the shared library it shoots up to 95M, from which I calculated it keeps 6500 bytes resident per file. When it loads 20-odd pending downloads it grows another 5M, meaning that each pending download adds a whopping 256K! Ludicrous when the files queued for downloading are themselves all under 100K each...
* Get rid of the "unable to resume old downloads" dialog. There is never a valid reason for such behavior. I've seen that the downloads.dat file has a downloads.bak backup; the latter should always be the most recent known-good version of the file, so it can be loaded if downloads.dat is corrupted. Instead, some of the crashes I've seen seem to corrupt them both at once!
* There's currently little or no incentive to be an ultrapeer, and the recurring theme around here is ultrapeers are short on bandwidth, ultrapeers are short on connection slots, etc. -- more ultrapeers lowers the burden on existing ultrapeers. Possibly add "middlepeers" that work reasonably on typical desktop PCs with broadband and benefit from more connections with better search results. Alternatively, create incentive to be an ultrapeer -- offer free copies of pro or something.
* Something sometimes spawns dozens of threads that assume priority over the event handling thread. This leads to failed transfers and dropped connections as well as a nonresponsive UI. After a while the extra threads go away and the system returns to normal -- minus a few connections and with lots of interrupted up/downloads, that is. During these seizures there's a lot more CPU use than normal. Fix this.
* There seems no logical reason for the "Awaiting sources" status -- it just seems to mean "Need more sources, and you've already searched for more once since the last time you restarted Limewire", and its sole effect on the user experience is to force them to restart Limewire to try again to find sources for the file. Anything that encourages people not to keep LW running is bad for the network. (All the memory leaks and related problems also discourage lengthy stays online!)
* It scales poorly. Thousands of shared files, hundreds of queued downloads, and more than about one hour continuously online all mean trouble. Software that doesn't scale becomes quickly obsolete; see above.
* Gnutella in general is inefficient at sharing small files. It's harder to get a typical 100K file than to get a 100K chunk of a typical 10M file. Zipping batches of related small files up seems like a way around that, except: everyone has to do it for it to work; the preferred archive format is system-dependent (zip on Windows, sit on the Mac, gz or bz2 on *nix, etc.); and archiving renders metadata and file types invisible to gnutella clients, so you can't find zips of jpgs with a search for images, for instance. Therefore, handling of small files needs to change.

I suggested that if someone requests a large number of small files from a host, the host should serve them all machinegun-style in one upload slot and one conceptual upload event, rather than going back to the queue after each one. "Chain-uploading" of small files would make them closer to equivalent with large files for the (not uncommon) case of grabbing numerous small files from the same source. Someone else suggested whole-directory sharing.

Another small-file issue is that I regularly see even files of only a few K upload quite gradually in terms of percentages. Apparently they can be split up quite small. This is inefficient -- you end up sending potentially hundreds of K of packet headers for a 10K file that way! Since the max size of a network packet is 64K, I suggest sending small files (32K and below?) as one single network packet and never sending a busy signal for a request for such a file. The file itself can be sent in the same packet that would be used to send a busy signal, so why not send the file? This logic makes the smallest files able to not take up queue slots too. Larger files should be sent in 32K chunks per packet, aside from the final chunk, which obviously may be smaller. The only exception I can think of as making any sense would be serving files over dialup.

Furthermore, ultrapeers might cache small files hosted by leaves connected to them via dialup. Queries they'd route to their dialup-using leaves would first be checked against the cache, and they'd return a cached file as a result instead if possible. Then you'd get the file from a broadband-connected ultrapeer instead of a dialup-connected machine. (If faced with a request for a file flushed from the cache, the ultrapeer would have to forward the request somehow or reupload the file.) The twin advantages are keeping the file available through the leaf's inevitable dialup dropouts and reducing the burden on the dialup-connected leaf.
* Fix search result icons for accuracy. I currently see stars or torn paper by results I already have, and sometimes the checked icon by files I don't. I suspect this may be due to my download directory not being shared, as I screen all new files before sharing them.
* Make one upload slot an "express lane" that prefers broadband connections and serves only files under 1M from a separate queue -- all configurable under "advanced options".
* Make (and/or document) command-line options to limewire.exe to allow people with Java know-how to adjust VM parameters. I assume it's a stub of some kind, so simply passing through any command-line parameters to the VM executable ought to do it -- except for Mac Classic users, where there is no such thing as a command line. :P
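The 32K-chunk proposal from the small-files item above can be pinned down in a few lines. This is just my reading of the suggestion (files up to 32K go out whole; larger files in 32K chunks with a smaller final chunk), not any client's actual behavior:

```java
import java.util.Arrays;

// Sketch of the proposed chunking rule for uploads. Illustrative only.
public class ChunkPlanner {
    static final int CHUNK = 32 * 1024; // 32K per packet, as proposed

    static int[] chunkSizes(int fileSize) {
        if (fileSize <= CHUNK) return new int[] { fileSize }; // one packet, never busy
        int full = fileSize / CHUNK;
        int rest = fileSize % CHUNK;
        int[] sizes = new int[rest == 0 ? full : full + 1];
        Arrays.fill(sizes, 0, full, CHUNK);
        if (rest != 0) sizes[full] = rest; // final, possibly smaller chunk
        return sizes;
    }
}
```

A 10K file becomes a single chunk with no queue slot needed; a 100K file becomes three 32K chunks plus a 4K remainder, instead of dozens of tiny pieces.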

I think that about does it for now.
#9 — May 28th, 2004 — rkapsi (Valued Member contributor)

- Title, Artist, Album, Year, Track, Comments and Genre, with up to 1KB each (+ other optional fields).

- Shared files come along with a hash, a TigerTree, a list of alternate locations, statistics, etc. That adds up!

- Memory leaks are impossible with Java!

- You should also know that each download in the queue allocates at least 2+2n threads (n = number of additional sources).

- The JVM has an initial heap size limit. You can raise it with the -Xms and -Xmx arguments. Google will tell you how...

- Files smaller than 1MB are always sent in one piece (they're not split up into smaller pieces, aka PFS).

- and so on...
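On the heap-limit point: a Java program can ask the VM directly what cap it's running under, which would settle the "thinks there's only 128M" theory empirically. A quick check, runnable anywhere a JDK is installed (values are machine- and flag-dependent):

```java
// Print the heap cap the JVM is actually running with. maxMemory() is
// the -Xmx ceiling; totalMemory() is what the heap has grown to so far.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap:  " + rt.maxMemory() / (1024 * 1024) + "M");
        System.out.println("total now: " + rt.totalMemory() / (1024 * 1024) + "M");
    }
}
```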

You seem to have a good tech background, but you speculate too much.

For beginners:
http://www.gnufu.net

For experts:
http://groups.yahoo.com/group/the_gdf

And here you can see how LimeWire works:
http://www.limewire.org
#10 — May 28th, 2004 — 44645y7hf (Guest)

Quote:
Originally posted by rkapsi
- Memory leaks are impossible with Java!
Not if objects are created and referenced but never used. Fact is, the process size grows by about 5M each hour if it's just left alone!

Quote:
- The JVM has a initial Heap size limit. You can rise it with the -Xms and -Xmx arguments. Google will tell how...
Too bad there's no way to pass these options when running Limewire that I know of.

Quote:
- Files smaller than 1MB are always sent in one piece (they're not split up into smaller pieces, aka PFS).
Then why do files smaller than 1MB not just jump from 0% to 100% in one leap? Obviously they are sent piecemeal. In fact, files above about 60K must be sent in multiple packets.