Hi, I am testing Phex for file distribution in a cluster right now, and Phex seems to use only about half the available network speed. Do you have an idea what could be causing this?
__________________
-> put this banner into your own signature! <-
--
Only in play is man fully alive. (Erst im Spiel lebt der Mensch. / Nur ludantaj homoj vivas.)
GnuFU.net - Gnutella For Users
Draketo.de - Shortstories, Poems, Music and strange Ideas.
I use scp to push a file over to another computer, then let Phex transfer the same file. Phex gets only half the speed through the line.
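A quick way to separate Phex from the network itself is to measure raw TCP throughput with a minimal sender and sink, and put that number next to the scp and Phex results. The sketch below is plain Java and runs over loopback by default; the payload size is arbitrary, and for a real test you would run the sink on the remote machine and point the sender at it:

```java
import java.io.*;
import java.net.*;

public class TcpBaseline {
    // Push `bytes` through a TCP connection to a local sink thread and
    // return the measured throughput in MiB/s.
    static double loopbackRun(long bytes) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            Thread sink = new Thread(() -> {
                try (Socket s = server.accept();
                     InputStream in = s.getInputStream()) {
                    byte[] buf = new byte[64 * 1024];
                    while (in.read(buf) != -1) { /* discard */ }
                } catch (IOException e) { throw new UncheckedIOException(e); }
            });
            sink.start();

            byte[] chunk = new byte[64 * 1024];
            long start = System.nanoTime();
            try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                 OutputStream out = s.getOutputStream()) {
                for (long sent = 0; sent < bytes; sent += chunk.length) {
                    out.write(chunk);
                }
            }
            sink.join();
            double secs = (System.nanoTime() - start) / 1e9;
            return bytes / (1024.0 * 1024) / secs;
        }
    }

    public static void main(String[] args) throws Exception {
        // 64 MiB over loopback; replace loopback with a remote sink
        // to measure the actual link.
        System.out.printf("raw TCP: %.0f MiB/s%n", loopbackRun(64L * 1024 * 1024));
    }
}
```

If raw TCP also falls short of line speed, the limit is below the application layer (TCP tuning, NIC, switch); if raw TCP is fast and only Phex is slow, the overhead is in Phex itself.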
You could try playing around with various kinds of buffers. It could be that throughput is limited by buffer sizes on the upload or download side. It should only make a big difference on very fast lines, though. Bigger buffers cause more data to be transferred at once, but raise memory and system load. Additionally, Phex is not really optimized for a single high-speed end-to-end transfer like SCP is. There is quite a big overhead to manage segments, sync transfers and handle multiple connections.
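The per-chunk overhead argument can be seen with a toy benchmark. This is an in-memory copy loop, not Phex's actual transfer path, so it only illustrates how the chunk size changes the number of read/write calls needed to move the same data:

```java
import java.io.*;

public class BufferSizeDemo {
    // Copy a byte array through streams using the given chunk size;
    // returns elapsed nanoseconds. Smaller chunks mean more loop
    // iterations (and, on real sockets, more syscalls) per byte moved.
    static long timedCopy(byte[] data, int bufferSize) throws IOException {
        InputStream in = new ByteArrayInputStream(data);
        OutputStream out = new ByteArrayOutputStream(data.length);
        byte[] buf = new byte[bufferSize];
        long start = System.nanoTime();
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[16 * 1024 * 1024]; // 16 MiB test payload
        for (int size : new int[] {4 * 1024, 16 * 1024, 64 * 1024, 256 * 1024}) {
            System.out.printf("buffer %7d B: %6.2f ms%n",
                    size, timedCopy(data, size) / 1e6);
        }
    }
}
```

On a gigabit link the sweet spot is usually in the tens of kilobytes; beyond that, the extra memory and copying cost outweighs the saved calls.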
I tried increasing the download and upload buffers, even to 500 MiB, but it didn't get the speeds up. I knew it might not work as I hoped, but I tried anyway... The network uses 1 Gbit Ethernet, and the bottlenecks are the disk and the CPU, not the network. So I hoped Phex could spread the load among several computers.
Increasing buffers to a crazy high value will not necessarily give better performance, unless your system is able to deal with a few gigabytes of requested memory... Buffers are used on multiple layers, so it won't help much to increase buffers on one layer while they are still small on other layers. This also means the buffers will use a multiple of the configured size: if you set all buffers to 500 MB, your system could easily request 4 GB of memory. Also, more is not always better: larger buffers cause larger parts of the data to be transferred into memory, and if your system can't handle this, it could be slower than using smaller data chunks. If the file is not too large, you set all buffers large enough, and your system can handle the load, Phex could read the whole file into memory before sending it over the net, and the other side could download the complete file into memory before writing it to disk - but that's very theoretical.

Here are a few buffers and other settings that I found that have an influence. There might be some more hidden somewhere...

DownloadPrefs.MaxTotalDownloadWriteBuffer
DownloadPrefs.MaxWriteBufferPerDownload
BufferSize._16K
BufferSize._64K
HttpFileDownload.BUFFER_LENGTH
BandwidthPrefs.MaxTotalBandwidth
BandwidthPrefs.MaxDownloadBandwidth
BandwidthPrefs.MaxUploadBandwidth
FileUtils.BUFFER_LENGTH
GnutellaInputStream.READ_BUFFER_LENGTH
DownloadPrefs.SegmentTransferTargetTime
DownloadPrefs.SegmentMultiple
DownloadPrefs.SegmentMaximumSize
DownloadPrefs.SegmentInitialSize
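As a back-of-the-envelope check of why the configured size gets multiplied: if each buffering layer allocates the full configured size, the totals add up fast. The layer count below is a rough guess based on the settings listed above, not an exact count of Phex's allocation paths:

```java
public class BufferBudget {
    // Total memory if every buffering layer allocates the same amount.
    static long totalBytes(long perLayerBytes, int layers) {
        return perLayerBytes * layers;
    }

    public static void main(String[] args) {
        long perLayer = 500L * 1024 * 1024; // every buffer set to 500 MiB
        int layers = 8;                     // hypothetical layer count
        double gib = totalBytes(perLayer, layers) / (1024.0 * 1024 * 1024);
        System.out.printf("~%.1f GiB requested%n", gib); // ~3.9 GiB
    }
}
```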