Hi. I still have many problems with the latest version of Phex that I didn't have before 0.6.3. Currently I get too few successful connections (in and out), and they rarely hold for longer than 15 minutes. Usually I don't see any message before they just disappear from my 'Connections' tab. There should be some message printed in the 'Status' column when a connection is lost, and the line should stay there for a few seconds so I can read it. Most of the time I'm not connected, so I cannot search for new candidates.

Anyway, I did download some files already (I hadn't been successful with that since 0.6.2). What I miss in the 'Download' tab is a column showing the current number of candidates for each download, so I don't have to select each one to know whether there are any candidates (or perhaps some other way to show that). The 'Queued' status doesn't say much.

The next thing is about candidates themselves. When Phex cannot connect to and download from a particular host that is supposed to have the requested file, it removes that host under some conditions. I'd like to suggest that it always keeps at least one host (more would be better) and, for example, increases the time interval to wait before retrying if it fails to connect for the same reason several times. This is good for rare files that only a few people have and that are hard to find with a normal search. That way I can retry the connection when the other person has just started his client and is not yet connected to the Gnutella network. Chances of a successful download are higher then. Currently, when I have a successful search with, say, 5 hits for the file I want to download and all candidates disappear from the 'Candidates' tab, I manually re-add all found hits from the 'Search' tab. It's not too handy.

I'd write more, but I think that's enough for one post anyway.

Thanx for Phex.
Javier
I will try to get some of the suggested improvements into the next release. But I already explained my concerns about holding hosts longer here:

I have often thought about holding hosts a little longer, but this could really cause problems for users who are not connected. Please read this article about it: http://www.gnutellaforums.com/showth...threadid=11543

That research affects only connected users. This would affect even unconnected users, and maybe even users who have never used Gnutella before (some ISPs assign IPs dynamically). Even if you wait 60 minutes between retries, it could cause trouble for rare but popular files.

Gregor
OK, I see the problem with keeping hosts too long, but what about keeping them just a little bit longer, and only if it's the last host for the file? (You can ignore this question :-))

Do you have any idea about my problem with connections not being kept? Btw., I just realized that the 'Abort Upload' button doesn't work for me.

Thnx.
Javier
About the removal of possible candidates in case of errors: offer more than one option, and make one of them the following exponential retry scheme.

In this scheme, you first retry after one time unit. If that doesn't result in anything, then instead of deleting the candidate (which generates a lot of extra search traffic!), increase its exponent to 2, meaning retry in two time units; then 4, 8, 16, 32, and so on. Very quickly it would be days before the next retry. If any attempt results in a connection again, reset the exponent to zero.

Give the user the option to configure at which exponent to eliminate (forget) the host, and how many hosts with at most a given exponent have to be known to avoid triggering a new search.

How about that? The reason is that I see it all the time now: I successfully download a fragment and then I lose the wonderful high-speed host because it ran into a limit, and with the next manual search I get the same host and a good connection again. Think about all the useless search traffic that could be saved! Think about the efficiency of having full file-host lists again, like in 4.6 or so! Admit it, folks, you love this idea too!

Only one or two digits are needed to store the state of a host, and 32 tests at most cover about 4 gigaseconds (roughly 130 years). Actually, make the base adjustable too, because I just realized that with base 2 the last few retries would happen only every 65, 32, 16, 8, 4, 2, 1 years... that's not too efficient. With base 1.5, every retry pause is 50 percent longer than the last, so with a unit size of 1 second the 32 tests fit into about 5 days. Anyhow, you see how with 3 or 4 parameters you get a world of options and efficiency?

Cheers

P.S.: take a spreadsheet to think about it...
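A minimal sketch of the scheme in Java (nothing here is actual Phex code; the class name, fields, and constants are made up for illustration):

// Sketch of the suggested exponential retry scheme; names and values are illustrative only.
public class RetryState
{
    private static final long TIME_UNIT_MILLIS = 1000;  // 1 second base unit
    private static final double BASE = 2.0;             // doubling; could be 1.5 etc.
    private static final int GIVE_UP_EXPONENT = 16;     // forget the host after this many failures

    private int exponent = 0;
    private long nextRetryTime = 0;

    // Call after a failed connection attempt: wait BASE^exponent time units before the next try.
    // Returns false once the host should be forgotten.
    public boolean connectionFailed(long now)
    {
        exponent++;
        if (exponent > GIVE_UP_EXPONENT)
        {
            return false;
        }
        long wait = (long) (TIME_UNIT_MILLIS * Math.pow(BASE, exponent));
        nextRetryTime = now + wait;
        return true;
    }

    // Call after a successful connection: reset the exponent so retries become frequent again.
    public void connectionSucceeded()
    {
        exponent = 0;
        nextRetryTime = 0;
    }

    public boolean mayRetry(long now)
    {
        return now >= nextRetryTime;
    }
}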
I took a spreadsheet... I thought about it... and it will be in the next release... Thanks for the good idea.

In the next release, waiting times will be doubled after each failed connection. The candidate will be dropped once the number of failed connections in a row reaches a configurable threshold. There are still other cases where candidates get dropped, though.

If you'd like to take a more detailed look at the old and new download activity diagrams, check out these very technical images: Swarming 0.7, Swarming 0.7.2

Gregor
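As a rough illustration of the behaviour Gregor describes (doubling the wait after every failed connection and dropping the candidate once a configurable number of failures in a row is reached), here is a tiny self-contained simulation; the initial wait and threshold are assumptions, not Phex's real defaults:

// Tiny simulation: the wait doubles after every failed connection, and the candidate
// is dropped after a configurable number of failures in a row.
public class BackoffDemo
{
    public static void main(String[] args)
    {
        long waitSeconds = 30;          // assumed initial wait
        int maxFailuresInARow = 8;      // assumed configurable threshold

        for (int failures = 1; failures <= maxFailuresInARow; failures++)
        {
            System.out.println("failure " + failures + ": retry in " + waitSeconds + " s");
            waitSeconds *= 2;           // doubling after each failed connection
        }
        System.out.println("threshold reached -> candidate dropped");
    }
}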
Many thanks!

Another thing that I would like to see in the future is the ability to watch and control more download-specific behaviours in a pure object-oriented fashion: for example, being able to switch to a download's own control page (or window) and change the settings for each download. I believe it would be desirable to be able to reject the contents of any segment if it is small, or maybe even (introducing some plugin architecture) to preview/visualize any segment and, if its contents are not 'al gusto', eliminate it manually... I have had it a few times now that I believe the fragmented download caused file corruption, and I am wishing for a future where I could set up Phex to replace a faulty segment manually or automatically!

About the exponential retry scheme: people will sometimes want to reset the exponent manually. I still think the base should also be made adjustable, or a progressive factor could even be used instead (start out multiplying by 2, but in consecutive multiplications decrease the factor towards 1.x within 'n' iterations), this way avoiding the very inefficient behaviour of checking again only after double the last time interval...

Essentially the parameters could be:

float basicTimeUnit
float baseFactor
float factorProgressTarget
int exponentLimit
long nextConnectTime

and the method would be:

public void setNextConnect()
public float basicTimeUnit;        // length of the first wait interval
public float baseFactor;           // initial multiplier, e.g. 2.0
public float factorProgressTarget; // multiplier the factor shrinks towards, e.g. 1.1
public int currentExponent;        // failed attempts in a row so far
public int exponentLimit;          // give up once this many attempts have failed
public long nextConnectTime;
public double waitTime;            // starts out as basicTimeUnit

then the method would be:

public void setNextConnect() throws TimeExceededException
{
    if (currentExponent >= exponentLimit)
    {
        // limit reached, forget this candidate
        throw new TimeExceededException();
    }
    // the multiplier shrinks linearly from baseFactor towards factorProgressTarget
    float progressEachIteration = (baseFactor - factorProgressTarget) / exponentLimit;
    float progressedFactor = baseFactor - (progressEachIteration * currentExponent);
    currentExponent++;
    waitTime *= progressedFactor;
    nextConnectTime += (long) waitTime;
}
If baseFactor and factorProgressTarget are set to the same value, it's a plain exponential scheme. factorProgressTarget should be a value smaller than baseFactor. (However, be aware that if factorProgressTarget were allowed to be smaller than one, the wait interval would start becoming shorter again after a certain number of iterations.)
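For illustration, here is a small self-contained demo of that progressive-factor idea, showing how the wait grows when the multiplier starts at 2.0 and shrinks towards 1.2 over the allowed attempts (the class name and all values are my own assumptions, not Phex code):

// Demo of the progressive factor: the multiplier starts at baseFactor and shrinks
// linearly towards factorProgressTarget over exponentLimit steps.
public class ProgressiveFactorDemo
{
    public static void main(String[] args)
    {
        float basicTimeUnit = 1.0f;        // seconds
        float baseFactor = 2.0f;
        float factorProgressTarget = 1.2f; // must stay >= 1 and below baseFactor
        int exponentLimit = 12;

        double waitTime = basicTimeUnit;
        float progressEachIteration = (baseFactor - factorProgressTarget) / exponentLimit;

        for (int exponent = 0; exponent < exponentLimit; exponent++)
        {
            float progressedFactor = baseFactor - progressEachIteration * exponent;
            waitTime *= progressedFactor;
            System.out.printf("attempt %2d: factor %.2f, next retry in %.1f s%n",
                    exponent + 1, progressedFactor, waitTime);
        }
    }
}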