MP3s -- since it went down from 10000-odd to 9000-odd files when I removed "mp3" from shared extensions, I'm guessing around 1000.
Registering -- signing up for a bunch of spam so I can have the dubious pleasure of having to keep track of yet another login/password pair? Thanks, but no thanks.
How large is an ID3 tag, anyway? Why does it keep at least 6500 bytes in RAM for each shared file (including non-MP3s)?
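For reference, an ID3v1 tag is a fixed 128 bytes tacked onto the end of the file (ID3v2 tags vary, but are usually only a few K unless they embed album art). A minimal sketch of how little data is actually involved:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class Id3v1Peek {
    // An ID3v1 tag is exactly the last 128 bytes of the file,
    // beginning with the marker "TAG".
    public static boolean hasId3v1(String path) throws IOException {
        RandomAccessFile f = new RandomAccessFile(path, "r");
        try {
            if (f.length() < 128) return false;
            f.seek(f.length() - 128);
            byte[] tag = new byte[3];
            f.readFully(tag);
            return tag[0] == 'T' && tag[1] == 'A' && tag[2] == 'G';
        } finally {
            f.close();
        }
    }
}
```

So even keeping an entire ID3v1 tag resident would account for 128 bytes per file, not 6500.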
Summarized requests:
* Fix memory leaks. It leaks roughly 5M/hour under WinXP and moderate user activity.
* Fix the detection of available memory. My semi-scientific observations with Task Manager suggest it thinks it's running out of memory when the process size gets into the 100-120M range. Since the nice round (in binary) number 128M is just above that, I'm guessing it doesn't detect RAM above that amount, and thinks it's running out at that point even on a 1GB machine with an additional 512M of swap to spill over into. Software that doesn't grow to use as much space as it needs, even when that space is available, is software that doesn't scale. Software that doesn't scale becomes obsolete in 5 years or less thanks to Moore's exponential growth curves.
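My guess at the mechanism -- an assumption, since I haven't read the launcher code: a Java process can't grow past the VM's -Xmx heap ceiling no matter how much RAM is installed, and the stub probably hard-codes something around 128M. The VM will happily report its own ceiling:

```java
public class HeapCeiling {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() is the -Xmx ceiling; the VM throws OutOfMemoryError
        // at that point regardless of how much physical RAM the machine has.
        System.out.println("max heap:  " + rt.maxMemory() / (1024 * 1024) + "M");
        System.out.println("committed: " + rt.totalMemory() / (1024 * 1024) + "M");
        System.out.println("free:      " + rt.freeMemory() / (1024 * 1024) + "M");
    }
}
```

If that's the cause, the fix is simply launching with a larger -Xmx (see the command-line request near the end of this list).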
* Fix memory use inefficiencies. On my system the process size is around or a bit below 30M when the GUI comes up; as it loads the shared library it shoots up to 95M, which works out to about 6500 bytes resident per file (65M across roughly 10,000 files). When it loads 20-odd pending downloads it grows another 5M, meaning each pending download adds a whopping 256K! Ludicrous when the files queued for downloading are themselves all under 100K each...
* Get rid of the "unable to resume old downloads" dialog. There is never a valid reason for such behavior. I've seen that the downloads.dat file has a downloads.bak backup; the latter should always be the most recent known-good version of the file, so it can be loaded if downloads.dat is corrupted. Instead, some of the crashes I've seen seem to corrupt them both at once!
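A sketch of the save discipline that would make simultaneous corruption impossible -- writeDownloads() here is a hypothetical stand-in for whatever serializer LimeWire actually uses. Write to a temp file first, and only rotate .dat into .bak once the new copy is safely on disk, so a crash at any single point leaves at least one good file:

```java
import java.io.File;
import java.io.IOException;

public class SafeSave {
    // Hypothetical hook: serialize the current download list to this file.
    static void writeDownloads(File f) throws IOException { /* ... */ }

    public static void save(File dat, File bak) throws IOException {
        File tmp = new File(dat.getPath() + ".tmp");
        writeDownloads(tmp);                        // crash here: dat and bak untouched
        if (bak.exists() && !bak.delete())
            throw new IOException("cannot remove old backup");
        if (dat.exists() && !dat.renameTo(bak))     // crash here: bak is the last good dat
            throw new IOException("cannot rotate " + dat);
        if (!tmp.renameTo(dat))                     // crash here: bak still intact
            throw new IOException("cannot install " + tmp);
    }
}
```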
* There's currently little or no incentive to be an ultrapeer, and the recurring theme around here is that ultrapeers are short on bandwidth, short on connection slots, etc. -- more ultrapeers would lower the burden on the existing ones. Possibly add "middlepeers" that run reasonably on typical broadband desktop PCs and benefit from more connections and better search results. Alternatively, create an incentive to be an ultrapeer -- offer free copies of Pro or something.

* Something sometimes spawns dozens of threads that take priority over the event-handling thread. This leads to failed transfers and dropped connections as well as a nonresponsive UI. After a while the extra threads go away and the system returns to normal -- minus a few connections and with lots of interrupted uploads and downloads, that is. During these seizures CPU use is far higher than normal. Fix this.
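The obvious fix, sketched without assuming anything about LimeWire's internals: run background work from a bounded pool at minimum priority, so the worker count is capped and the workers can never starve the event thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class BackgroundPool {
    // A fixed-size pool caps the thread count; MIN_PRIORITY keeps the
    // workers from ever preempting the event-dispatch thread.
    public static final ExecutorService POOL =
        Executors.newFixedThreadPool(4, new ThreadFactory() {
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r, "background-worker");
                t.setPriority(Thread.MIN_PRIORITY);
                t.setDaemon(true);
                return t;
            }
        });
}
```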
* There seems to be no logical reason for the "Awaiting sources" status -- it just seems to mean "Needs more sources, and you've already searched for more once since the last time you restarted LimeWire," and its sole effect on the user experience is to force a restart of LimeWire to try again to find sources for the file. Anything that encourages people not to keep LW running is bad for the network. (All the memory leaks and related problems also discourage lengthy stays online!)
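What I'd rather see, as a sketch -- requery() is a hypothetical hook standing in for whatever LimeWire's source-search routine really is -- is the search simply retried on a timer instead of parking the download until the next restart:

```java
import java.util.Timer;
import java.util.TimerTask;

public class AutoRequery {
    // Hypothetical hook: re-issue the source search for a stalled download.
    interface Download { void requery(); }

    // Retry hourly on a daemon timer rather than waiting for a restart.
    public static void schedule(final Download d) {
        long hour = 60L * 60L * 1000L;
        new Timer(true).schedule(new TimerTask() {
            public void run() { d.requery(); }
        }, hour, hour);
    }
}
```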
* It scales poorly. Thousands of shared files, hundreds of queued downloads, and more than about one hour continuously online all mean trouble. Software that doesn't scale becomes quickly obsolete; see above.
* Gnutella in general is inefficient at sharing small files. It's harder to get a typical 100K file than to get a 100K chunk of a typical 10M file. Zipping batches of related small files up seems like a way around that, except: everyone has to do it for it to work; the preferred archive format is system-dependent (zip on Windows, sit on the Mac, gz or bz2 on *nix, etc.); and archiving renders metadata and file types invisible to gnutella clients, so you can't find zips of jpgs with a search for images, for instance. Therefore, the handling of small files needs to change.

I suggested that if someone requests a large number of small files from a host, the host should serve them all machinegun-style in one upload slot as one conceptual upload event, rather than sending the requester back to the queue after each one. "Chain-uploading" of small files would bring them close to parity with large files in the (not uncommon) case of grabbing numerous small files from the same source. Someone else suggested whole-directory sharing.

Another small-file issue is that I regularly see even files of only a few K upload quite gradually in terms of percentages, so apparently they can be split into quite small pieces. This is inefficient -- you end up sending potentially hundreds of K of packet headers for a 10K file that way! Since the maximum size of a network packet is 64K, I suggest sending small files (32K and below?) as a single packet, and never sending a busy signal in response to a request for such a file: the file itself can be sent in the same packet that would have carried the busy signal, so why not send the file? This also keeps the smallest files from taking up queue slots at all. Larger files should be sent in 32K chunks per packet, aside from the final chunk, which may obviously be smaller (see the sketch below). The only exception I can think of that makes any sense is serving files over dialup.
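The proposed chunking policy, as a sketch (the 32K figures are my suggested values from above, not anything LimeWire does today):

```java
public class ChunkPolicy {
    static final int CHUNK = 32 * 1024;   // proposed per-packet chunk size
    static final int SMALL = 32 * 1024;   // "send whole, never busy-signal" threshold

    // Number of transfer units for a file under the proposed scheme:
    // files at or under 32K go out as a single unit (in the slot where a
    // busy signal would otherwise be sent); larger files go in 32K chunks,
    // with only the final chunk allowed to be smaller.
    public static int chunkCount(long fileSize) {
        if (fileSize <= SMALL) return 1;
        return (int) ((fileSize + CHUNK - 1) / CHUNK);   // ceiling division
    }
}
```

Under this scheme a 10K file costs exactly one unit of header overhead instead of potentially hundreds.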
Furthermore, ultrapeers might cache small files hosted by leaves connected to them via dialup. Queries routed to their dialup leaves would first be checked against the cache, and a cached file would be returned as a result instead where possible. Then you'd get the file from a broadband-connected ultrapeer rather than a dialup-connected machine. (If faced with a request for a file flushed from the cache, the ultrapeer would have to forward the request somehow, or have the leaf reupload the file.) The twin advantages are keeping the file available through the leaf's inevitable dialup dropouts and reducing the burden on the dialup-connected leaf.
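As a sketch, the cache itself could be as simple as an LRU map keyed by the file's URN -- the size budget and eviction policy here are my assumptions, not a worked-out design:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Proposed ultrapeer-side cache of small files hosted by dialup leaves,
// keyed by the file's URN; least-recently-used entries are evicted once
// the cache exceeds its entry budget.
public class DialupLeafCache extends LinkedHashMap<String, byte[]> {
    private static final int MAX_ENTRIES = 512;

    public DialupLeafCache() {
        super(16, 0.75f, true);   // access-order gives LRU behavior
    }

    protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
        return size() > MAX_ENTRIES;
    }
}
```

A query hitting this cache gets answered from the ultrapeer's broadband connection; a miss falls through to the normal routing to the leaf.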
* Fix search result icons for accuracy. I currently see stars or torn paper by results I already have, and sometimes the checked icon by files I don't. I suspect this may be due to my download directory not being shared, as I screen all new files before sharing them.
* Make one upload slot an "express lane" that preferences broadband connections and serves only files under 1M from a separate queue -- on by default, with all of it configurable under "advanced options".
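Roughly this, as a sketch -- Request and its broadband flag are illustrative stand-ins, not LimeWire types:

```java
import java.util.LinkedList;

public class ExpressLane {
    static final long SMALL = 1024 * 1024;   // the 1M cutoff from the proposal

    // Hypothetical request: the file's size plus whether the
    // requester looks like a broadband host.
    static class Request {
        long fileSize;
        boolean broadband;
    }

    private final LinkedList<Request> express = new LinkedList<Request>();
    private final LinkedList<Request> normal  = new LinkedList<Request>();

    // Small files from broadband peers queue separately, so they never
    // wait behind multi-megabyte transfers.
    void enqueue(Request r) {
        if (r.fileSize < SMALL && r.broadband) express.add(r);
        else normal.add(r);
    }

    // The express slot drains its own queue first, falling back to the
    // normal queue when no small-file requests are waiting.
    Request nextForExpressSlot() {
        return express.isEmpty() ? normal.poll() : express.poll();
    }
}
```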
* Make (and/or document) command-line options to limewire.exe so people with Java know-how can adjust VM parameters. I assume it's a stub of some kind, so simply passing through any command-line parameters to the VM executable ought to do it -- except for Mac Classic users, for whom there is no such thing as a command line. :P
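If the stub really is that simple, the pass-through is only a few lines -- LimeWire.jar and the exact invocation are guesses on my part, not the actual launcher internals:

```java
import java.io.IOException;

// Sketch of the requested stub behavior: forward whatever arguments the
// user typed straight into the real VM invocation. VM flags must come
// before -jar, so user args are spliced in right after "java".
public class Launcher {
    public static void main(String[] args) throws IOException {
        String[] cmd = new String[args.length + 3];
        cmd[0] = "java";                                 // or javaw on Windows
        System.arraycopy(args, 0, cmd, 1, args.length);  // e.g. -Xmx256m
        cmd[args.length + 1] = "-jar";
        cmd[args.length + 2] = "LimeWire.jar";
        Runtime.getRuntime().exec(cmd);
    }
}
```

So "limewire.exe -Xmx256m" would end up running "java -Xmx256m -jar LimeWire.jar", which would incidentally fix the 128M ceiling complained about above.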
I think that about does it for now.