Message boards : Number crunching : upload problem
Joined: 14 Sep 17 · Posts: 10 · Credit: 8,845,670 · RAC: 3,552
I canĀ“t upload wuĀ“s ! Transient http problem -- Peer certificate cannot be authenticated with given CA certificates Anyone else, or is it a local problem here ? |
Joined: 14 Apr 18 · Posts: 5 · Credit: 8,065,904 · RAC: 140
Yup, same here.
Joined: 11 Aug 17 · Posts: 644 · Credit: 22,389,832 · RAC: 12,498
Thank you! As far as I can see, the source of the problem has been found and fixed. Can you confirm that tasks are being sent and reported normally?
Joined: 31 Dec 17 · Posts: 7 · Credit: 2,604,123 · RAC: 9
> Can you confirm that tasks are being sent and reported normally?

Confirmed here.
Joined: 14 Sep 17 · Posts: 10 · Credit: 8,845,670 · RAC: 3,552
ThankĀ“s for quickfix :-) |
Joined: 14 Apr 18 · Posts: 5 · Credit: 8,065,904 · RAC: 140
+1
Joined: 26 Mar 18 · Posts: 8 · Credit: 247,395,533 · RAC: 0
The problem seems to be back. Uploads got to 100% and then backed off ... I had to retry several times to get them through. The website is also very sluggish to access.
Joined: 11 Aug 17 · Posts: 644 · Credit: 22,389,832 · RAC: 12,498
> The problem seems to be back. Uploads got to 100% and then backed off ... I had to retry several times to get them through. The website is also very sluggish to access.

Yes, during heavy operations like data archiving we see a similar problem from time to time.
Joined: 27 Oct 17 · Posts: 6 · Credit: 1,434,259 · RAC: 321
> Yes, during heavy operations like data archiving we see a similar problem from time to time.

Hi,

Maybe you should consider doing what SETI@home does: switch the website and upload/download servers off at the same day and time each week (Tuesday in SETI's case), so that any back-office data archiving can be done with minimal disruption to users. I'm not a fan of this option, but it would mean that for 160-164 of the 168 hours per week the project is 100% functional, and for 4-8 hours it is offline. A simple compromise, but it might help, and it would save people complaining all the time about an inaccessible server.

Regards,
Tim
Joined: 11 Sep 17 · Posts: 51 · Credit: 194,388,032 · RAC: 3,439
It's because of Formula BOINC. Everything worked fine before the "sprint" for this project started; something should be done about these stupid challenges, not about the RakeSearch server. Why on earth should the project have to increase its server capacity just because some unknown private person on the internet created Formula or some other "challenge"? The project needs, first of all, STABLE support from its users, not a shock overload. This has nothing to do with "science", or with brain cells for that matter. An upload problem because of a backup? I don't see it... but someone here starts panicking as soon as they see a few finished tasks stuck. Now the whole server doesn't respond at all, zero downloads and uploads, because of this stupid Formula... very "scientific".
Joined: 1 Jan 19 · Posts: 4 · Credit: 32,381,006 · RAC: 12,838
Well, from a purely scientific point of view, it doesn't really matter who crunches the tasks and when. It would only be a problem if the project server went down completely for such extended periods that the net effect on the amount of work done became negative. Yes, those who were crunching along here before the sprint started (like you and me) are doing less work now than they used to, and those who (re-)joined the project because of the sprint are not doing as much work as they potentially could. But this is not a net loss of computing power, and yesterday was in fact the best day in the history of RakeSearch.

You might argue that these server problems are bad in the long term, as regular participants might decide to quit the project. But I don't think there has ever been any evidence of this happening to a significant extent (just look at how many stability issues SETI@home has even during normal operation, without any "stupid challenges", and they still have a very solid user base). On the other hand, the "stupid challenges" attract new participants who would not have joined otherwise, and some of them might stick with the project after the competition to finish a milestone, collect run time on another app for WUProp, earn a badge, or simply because they like the project. I don't have proof that this leads to a significant long-term increase in the user base either (although I have seen it cause higher overall throughput for at least a few days after a competition), so my best guess is that the long-term effects of the "stupid challenges" are negligible.

So all in all, the impact of the "stupid challenges" is more work done for a short period of time and nothing significant in the long run, which means there is no real downside. This probably explains why the project administrators do not share your view and rather see them as a chance to identify problems and improve the stability of the project under very high-load conditions as well (see hoarfrost's latest comment; I also know from private communication with other project administrators that they have a similar opinion).
Joined: 16 Apr 18 · Posts: 2 · Credit: 313,873 · RAC: 0
What exactly is the problem with the server? Is it simply too small to handle all the requests once more than X people are participating in the project? The amount of data should not be a problem; we are talking about a maximum of roughly 2.5 kB per work unit.
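(For a sense of scale, here is a back-of-the-envelope sketch; the daily result count below is a purely illustrative assumption, not an official project figure.)

```python
# Purely illustrative estimate of daily result traffic at ~2.5 kB per result.
results_per_day = 100_000      # assumed result rate, for illustration only
kib_per_result = 2.5           # upper bound mentioned above

total_mib_per_day = results_per_day * kib_per_result / 1024
print(f"~{total_mib_per_day:.0f} MiB of result payload per day")  # ~244 MiB
```

Even under that assumption the raw payload is tiny, which suggests the bottleneck would be request handling and back-end load rather than bandwidth.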