Heavy disk load

Hi, compared to other file sharing clients, Phex produces a lot of disk access even if there are only a few downloads with slow connections (<10 kB/s). It looks like internal buffers are flushed too often. The log is filled with:

Code:
[DownloadDataWriter] DEBUG phex.download.DownloadDataWriter - Trigger buffer write for phex.download.swarming.SWDownloadFile@abcdef, amount: 0

How can I reduce the disk load? Should I increase the buffer sizes in phex.io.BufferSize.java and the classes which use these entries of BufferSize.java (ManagedFile, MemoryFile, ...)?
If I remember correctly, the configuration values to tune are:

Download.MaxWriteBufferPerDownload = 262144
Download.MaxTotalDownloadWriteBuffer = 1048576

Values are in bytes; the defaults are 256 KB and 1 MB. You should be able to set them in phexCorePrefs.properties. Check phex.prefs.core.DownloadPrefs for documentation of the values.
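In phexCorePrefs.properties that would look something like this (standard Java properties syntax; the raised values here are only examples, not recommendations):

Code:
# phexCorePrefs.properties - raise the per-download write buffer to 1 MB
# and the total write buffer to 8 MB (values in bytes)
Download.MaxWriteBufferPerDownload=1048576
Download.MaxTotalDownloadWriteBuffer=8388608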
Thank you. |
Hmm, I increased both values:

Download.MaxTotalDownloadWriteBuffer=41943040
Download.MaxWriteBufferPerDownload=10485760

Still, writes are initiated every few seconds; the buffers can never be filled in such a short time when I have almost no download traffic (<1 kB/s). Log for DownloadDataWriter:

Code:
/foo/bar/phex_3.4.2.116$ ./startphex.sh

There is even disk activity, and I don't know what is written here all the time. Empty buffers? iotop always shows around 100-200 kB of write IO, in the same period in which the threads appear in the log. If I shut down Phex it takes forever, since every thread writes its big but empty buffer for each file (800 files in the download list). Most of them are at the 0% level because they are orphaned in the Gnutella network, but they are activated for downloading, not stopped.
I'm pretty sure 0 bytes are not written to disk; I would rather call this a flaw in the logging. But of course it may well be that things are not optimal. There are multiple causes that trigger a write, and it could also be a programming problem or a race condition that is causing excessive writes, so it's not easy to say why it happens. If you want to dig into this, I would suggest you also log the variable values found in the if conditions in DownloadDataWriter.writeDownloadData(): lines 137/138, lines 158/159 and line 181. You might also try multiplying DateUtils.MILLIS_PER_MINUTE in line 138 by maybe 5 and see if it makes a difference.
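Something like this, roughly (a minimal sketch only; the field names and the SLF4J logging are assumptions, not the actual Phex source around lines 137/138):

Code:
import org.apache.commons.lang.time.DateUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical instrumentation of the forced-write check in
// DownloadDataWriter.writeDownloadData(); names are assumptions.
class ForcedWriteCheck
{
    private static final Logger logger =
        LoggerFactory.getLogger( ForcedWriteCheck.class );
    private long lastWriteTime = System.currentTimeMillis();

    boolean shouldForceWrite()
    {
        long elapsed = System.currentTimeMillis() - lastWriteTime;
        // Log the values feeding the if condition, as suggested above:
        logger.debug( "forced-write check: elapsed={} ms, threshold={} ms",
            elapsed, 5 * DateUtils.MILLIS_PER_MINUTE );
        // Multiplying MILLIS_PER_MINUTE by 5 forces a write every
        // 5 minutes instead of every minute:
        return elapsed > 5 * DateUtils.MILLIS_PER_MINUTE;
    }
}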
Thank you for your helpful answer. Maybe I am wrong here and overlooked something, but when I looked at the logs, a write happens every 5 seconds, and most of them are "0 bytes written" messages. So I searched for 5000 (ms) in the code (DownloadDataWriter), and there you go, line 111:

Code:
wait( 5000 );

Every thread waits the same time, which means they all wake up at the same time; on top of that comes the forced write in lines 137/138. A much higher buffer size is therefore effectively ignored with this 5000 ms wait if you have little download traffic: the buffer is never filled to the expected level because it is flushed far too soon.

You could use a random value here to spread the threads out, and tune the forced-write time, but the inefficient buffer usage would remain, because wait time and download speed have to match for the buffer to be used efficiently. You could use different wait values for different speeds to improve efficiency, but that is an even worse hack. That's why this time-based implementation is, IMO, not a good solution.

My idea: the buffer itself should write its data when it reaches the 90% level, independent of any timer. When a buffer reaches 90%, it enters a data-writing queue and downloading for that file is paused until the buffer is flushed. This is easy to implement and avoids having to deal with the different thread situations; see the sketch below.
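To illustrate the idea (a sketch only, built on hypothetical buffer and queue types; none of these names exist in Phex):

Code:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Threshold-based flushing: a buffer enqueues itself at 90% full and the
// download is paused until the writer thread has flushed it.
class ThresholdFlushQueue
{
    private static final double FLUSH_THRESHOLD = 0.9;
    private final BlockingQueue<DownloadBuffer> writeQueue =
        new LinkedBlockingQueue<>();

    /** Called whenever new data is appended to a download buffer. */
    void onDataBuffered( DownloadBuffer buffer )
    {
        if ( buffer.usedBytes() >= FLUSH_THRESHOLD * buffer.capacity() )
        {
            buffer.pauseDownload();      // stop filling until flushed
            writeQueue.offer( buffer );  // hand off to the writer thread
        }
    }

    /** Writer thread loop: blocks until a buffer crosses the threshold. */
    void writerLoop() throws InterruptedException
    {
        while ( true )
        {
            DownloadBuffer buffer = writeQueue.take();
            buffer.flushToDisk();
            buffer.resumeDownload();
        }
    }

    /** Hypothetical buffer interface, for the sketch only. */
    interface DownloadBuffer
    {
        long usedBytes();
        long capacity();
        void pauseDownload();
        void resumeDownload();
        void flushToDisk();
    }
}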
Hmm... So there is a single DownloadDataWriter thread that should wake every 5 seconds to check whether any buffer is full. It's a single thread, so I don't see why a random value should change anything. This check should not cause writing if buffers are not full (L158/L181), so something else must be going wrong there too. Additionally, at least every minute a write is forced (L137/138), possibly earlier if buffers run full (L120). I did some changes to DownloadDataWriter to have more correct logging. Hope this might help: https://sourceforge.net/p/phex/code/4572/ The line numbers above are from before my changes.
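For reference, the cycle described above boils down to something like this (a sketch with assumed names, not the actual DownloadDataWriter code):

Code:
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Single writer thread: wakes every 5 s, flushes full buffers, and forces
// a flush of all buffers at least once a minute.
class WriterLoopSketch implements Runnable
{
    interface Buffer
    {
        boolean isFull();
        void flushToDisk();
    }

    private final List<Buffer> buffers = new CopyOnWriteArrayList<>();
    private volatile boolean shutdown = false;
    private long lastForcedWrite = System.currentTimeMillis();

    @Override
    public void run()
    {
        while ( !shutdown )
        {
            synchronized ( this )
            {
                try { wait( 5000 ); }                  // the 5 s wake-up
                catch ( InterruptedException e ) { return; }
            }
            long now = System.currentTimeMillis();
            boolean force = now - lastForcedWrite >= 60000; // forced write
            for ( Buffer buf : buffers )
            {
                if ( force || buf.isFull() )
                {
                    buf.flushToDisk();
                }
            }
            if ( force )
            {
                lastForcedWrite = now;
            }
        }
    }
}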
This is how it looks for me now:

Code:
Yes, you are right, it's only one thread; I thought there were several, one for each file. I interpreted these kinds of lines wrong ...

Code: ...

I tested your new version. Now my settings for the buffer size are used correctly, and/or the logger now displays it right. Buffers are flushed at about the 90% level, and it is more or less identical with the output from iotop. Great!

Code:
230608 15:15:22.334(T817253)(156) DEBUG/phex.download.DownloadDataWriter [DownloadDataWriter]:: Total buffered data was: 5242880 bytes.
This log line

Code:
230608 15:13:09.477(T684396)(156) DEBUG/phex.download.DownloadDataWriter [SWDownloadWorker-55a0eb24-PhexPool-thread-15]:: Triggering write cycle.

shows the write cycle being triggered from a SWDownloadWorker thread, not from the DownloadDataWriter thread itself.
Yes, you are right; there are other threads in the older logs, from when more than one download was active. The log above is from when only one download was activated; I stopped all the other downloads for a better overview in the logs.

But why is it called so often? Is one call not enough? It's only called in shutdown(), where flushing makes sense, and in the BufferVolumeTracker, whose purpose I don't know. Ah, there's this code in the inner class Sync, extended from AbstractQueuedSynchronizer:

Code:
@Override

It seems the flushing of the buffer is never done in one step (or it cannot be sure it was flushed 100%), hence these repeated calls until the remaining amount is 0, to be sure everything is flushed. That at least explains these massive log entries, and maybe the strange disk access noise while writing (short access patterns). Increasing the buffer sizes helped a little, but it is never smooth enough.

The download scopes probably play a role in this "fragmented writing noise": since the file parts are downloaded out of order, the file size is increased from time to time up to the highest fragment's position. The file is not created at its full size in the first place, so you get real, heavy fragmentation on disk; see the preallocation sketch below.
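If the file were preallocated to its final size up front, out-of-order fragment writes would no longer grow it piecemeal. A minimal sketch (generic Java, not how Phex actually creates its files; the path and size are made up):

Code:
import java.io.IOException;
import java.io.RandomAccessFile;

// Preallocate the destination file once, so later out-of-order writes
// land inside an already-sized file instead of growing it step by step.
public class Preallocate
{
    public static void main( String[] args ) throws IOException
    {
        long finalSize = 700L * 1024 * 1024;  // assumed total download size
        try ( RandomAccessFile raf =
                  new RandomAccessFile( "download.part", "rw" ) )
        {
            raf.setLength( finalSize );  // reserve the full logical extent
        }
    }
}

Whether this actually reserves contiguous blocks depends on the filesystem; some create a sparse file instead, so this is a sketch of the idea rather than a guaranteed fix for fragmentation.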
Hello, everyone. It was useful to read. Now I understand why the buffer is cleared so often. |