Posts by KAMasud

1) Message boards : Number crunching : WAH2 sam50 spinups (Message 62761)
Posted 16 days ago by KAMasud
Post:
Sad. Missed them :) Well, good luck.
2) Message boards : Number crunching : Stopped sending trickles 5 days ago, but still running. Should I let it run or abort? (Message 62704)
Posted 10 Sep 2020 by KAMasud
Post:
My trickles are also going somewhere, but after the ones on the first of this month, none are showing up. Well, if they are going somewhere, then it is a problem on the CPDN site.
3) Message boards : Number crunching : New work Discussion (Message 62565)
Posted 8 Jun 2020 by KAMasud
Post:
Another advantage for machines whose CPUs are dedicated to Climate: those machines hammer at the server every hour and a half or so, and their requests, I suppose, get lined up; maybe that is justice. Machines which have not dedicated their CPUs to BOINC will not ask the server for work even if work is available (their work quota might already be full of, let us say, Collatz WUs). If you check the server status now and then, there are about 22 or 24 machines dedicated. The rest are busy.
4) Message boards : Number crunching : New work Discussion (Message 62563)
Posted 8 Jun 2020 by KAMasud
Post:
You can of course control the size of your buffer. Setting it to the default of 0.1+0.5 days will get you fewer work units than setting it to 10 days, for example. But that is rather crude, and sometimes they won't send you anything at all.

Unfortunately, the BOINC scheduler is not aware of the "app_config.xml" file, so if you set it to run only two work units, it will still download as if you were running on all 12 cores. So until BOINC gets smarter (a long process), the best way would be for CPDN to provide you a user-selectable limit of how many work units you want downloaded at a time. Several projects do that now: WCG, LHC and Cosmology that I can think of at the moment.

But CPDN has been going in the direction of fewer options. They need to reverse course.



Better would be for CPDN to slash the due-back date drastically and then reissue the WUs. The current due-back date dates from the time of the Pentium 1. I know, it used to take ages back then; now, with faster RAM, SSD drives and other modernisation, maybe five days to complete.
As to the buffer size, my GPU is busy on another project whose WUs it completes in four minutes flat. I earn my credits with my GPU, and my CPUs are dedicated full time to Climate.
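For anyone who wants to try the app_config.xml limit discussed above: it is a plain XML file placed in the project's folder inside the BOINC data directory (for CPDN, something like projects/climateprediction.net/). A minimal sketch, with a placeholder application name; take the real short name from client_state.xml or the event log before using it:

    <app_config>
      <app>
        <name>wah2</name>                                   <!-- placeholder app name; use the real short name -->
        <max_concurrent>2</max_concurrent>                  <!-- run at most two of these tasks at once -->
      </app>
      <project_max_concurrent>2</project_max_concurrent>    <!-- optional cap on all running tasks from this project -->
    </app_config>

As the post above notes, this only limits what runs, not necessarily what gets downloaded; the work-buffer settings still govern how many WUs the client fetches.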
5) Message boards : Number crunching : New work Discussion (Message 62562)
Posted 8 Jun 2020 by KAMasud
Post:
"The queued stacks of WUs tend to be systems where they have, say, 12 cores, download 24 WUs but then only allow 2 WUs to run concurrently alongside their other projects."


As to this statement, yes, it is true. But it is not about allowing two WUs to run "concurrently" alongside other projects. It is all about how many cores you can run at an ambient temperature of 45 °C without melting the machine down. In the afternoon I can run only two cores (nothing to do with projects); at night, maybe four. Factor in GPU computing as well.
Another factor is the base clock speed: with all twelve cores running simultaneously, maybe 2.2 GHz; with a single core, 4.04 GHz. I am still thinking about the advantages of so many cores. Maybe getting more Climate WUs. Should I get a forty-eight-core machine? Naaa, that is a bit silly.
6) Message boards : Number crunching : New work Discussion (Message 62560)
Posted 8 Jun 2020 by KAMasud
Post:
Yes, as long as they get "their results"... Of course, if the 3000 WUs were spread 2 per system across the available Windows systems, the researchers would get their 3000 WUs back faster than waiting on fewer systems with huge queued stacks of WUs waiting on long due dates.

IMHO

Bill F



I do not agree with the statement "huge queued stacks of WUs waiting on long due dates". Long due dates are beside the point, and how many cores a machine has is not much of a deciding factor either. "Store at least ___ days of work" is set at ten, and "store additional work" is also set at ten days of work maximum, so how many WUs a machine gets is still self-limiting. I have a twelve-thread machine which gets twenty-four WUs at most, and they report back pretty much at the expected time, within one month.

So, where exactly are these "huge queued stacks of WUs waiting on long due dates" sitting? Sitting somewhere they are. In the old days we used to squirrel away WUs on floppies or other media. Is this still going on?
Then there are crashed hard drives which take WUs with them to the grave, but those WUs still get reported as in progress.


So you have 12 WUs you're working on and 12 queued, and the results will be back in a month. How much better if you have 12 WUs and someone else has 12 WUs, and the results get back in a fortnight.

I had nothing to do with how many WUs I have, and neither can I put a stop to how many I get. Fight it out with the server.

The queued stacks of WUs tend to be systems where they have, say, 12 cores, download 24 WUs but then only allow 2 WUs to run concurrently alongside their other projects.
7) Message boards : Number crunching : New work Discussion (Message 62460)
Posted 22 May 2020 by KAMasud
Post:
Yes, as long as they get "their results"... Of course, if the 3000 WUs were spread 2 per system across the available Windows systems, the researchers would get their 3000 WUs back faster than waiting on fewer systems with huge queued stacks of WUs waiting on long due dates.

IMHO

Bill F



I do not agree with the statement "huge queued stacks of WUs waiting on long due dates". Long due dates are beside the point, and how many cores a machine has is not much of a deciding factor either. "Store at least ___ days of work" is set at ten, and "store additional work" is also set at ten days of work maximum, so how many WUs a machine gets is still self-limiting. I have a twelve-thread machine which gets twenty-four WUs at most, and they report back pretty much at the expected time, within one month.

So, where exactly are these "huge queued stacks of WUs waiting on long due dates" sitting? Sitting somewhere they are. In the old days we used to squirrel away WUs on floppies or other media. Is this still going on?
Then there are crashed hard drives which take WUs with them to the grave, but those WUs still get reported as in progress.
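The two "store ... days of work" caps mentioned in the post above are BOINC's work-buffer preferences. A minimal sketch of how they look in a local global_prefs_override.xml in the BOINC data directory, using the ten-day values quoted above purely as an example:

    <global_preferences>
      <work_buf_min_days>10.0</work_buf_min_days>                  <!-- "Store at least X days of work" -->
      <work_buf_additional_days>10.0</work_buf_additional_days>    <!-- "Store up to an additional X days of work" -->
    </global_preferences>

The same values can also be set from BOINC Manager's computing preferences or on the project website; a local override file simply takes precedence.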
8) Message boards : Number crunching : New work Discussion (Message 62455)
Posted 22 May 2020 by KAMasud
Post:
Just grand. All the ancients can now rest in peace in their forgotten niches. The Catacombs Of Climate Prediction. Who is the Chief Ancient?
9) Questions and Answers : Wish list : One task for all processor cores / threads (Message 62341)
Posted 26 Apr 2020 by KAMasud
Post:
1) You can also use Google Translate if you want to convert from English to whichever language is required. Just copy and paste the text in to translate it. It may not be 100% correct, but it is legible.


2) Les, there are lots of Work Units that have been out in the bright blue yonder for years. Their completion dates have come and gone, but they are still shown as work in progress. There should be a cut-off date after which they are reissued. "UK Met Office Coupled Model Full Resolution Ocean": 927 WUs in progress with no response for ages. "UK Met Office HadCM3 short": 1399 WUs in progress, ditto. Then we have "Weather At Home 2 (wah2) (region independent)": 4759 WUs in progress, ditto.
As to how many are in limbo: "Weather At Home 2 (wah2)" shows 28017 in progress with only 17 reporting; God knows, or maybe the server does.
It seems housekeeping, or housecleaning, is badly needed.

Meanwhile we are sitting at home, twiddling our thumbs and looking towards Climateprediction.net.
I only do GPU computing for other projects, while all my CPUs are dedicated to Climate and sitting idle. Maybe I should wire-brush them so they do not rust.
Regards.


Help us poor souls, someone.
10) Message boards : Number crunching : "No tasks sent" (Message 62250)
Posted 22 Mar 2020 by KAMasud
Post:
Some problems can be solved if there is a will to solve them, and some cannot be solved if there is a lack of interest. Fast machines, slow machines; Work Units issued and lost.
Some machines are outdated and creeping slow; some machines are fast and their turnaround times are good.
Some WUs are out there but lost in limbo. For example, my account shows a WU issued in 2013, due back in 2014, and it is still listed on my account. Well, that computer burnt out and I cannot retrieve that WU.
As far as I can tell, there seem to be some Pentium 1s out there hogging quite a few WUs. How long will the administration wait for the first to blink? Reissue them to newer, faster machines, that is, if the admins care.
It is just a matter of good housekeeping, if there is any interest left.
I am talking about Windows WUs, but the same trend may be visible on other OSes. Come on, admins, have a heart; please do not wait until our time is past and we are computing for the Creator.
Regards.
11) Message boards : Cafe CPDN : the say hello thread :) (Message 61596)
Posted 23 Nov 2019 by KAMasud
Post:
Hello from Pakistan, South Asia.
12) Message boards : Number crunching : New work Discussion (Message 61592)
Posted 21 Nov 2019 by KAMasud
Post:
Thank you. After a while, I got all twelve. It caught me napping, because even in winter I cannot crunch all twelve simultaneously; the machine heats up.
Any idea if there are Windows tasks to be released? After eight years, I was surprised at how fast they went.
I seem to be missing Mo.V?
Regards.
13) Message boards : Number crunching : Validation pending for 9 years... (Message 61583)
Posted 20 Nov 2019 by KAMasud
Post:
I have seven validation-pending WUs going back nine years, but I am not worried; I am here for the science. I think I got the credits, who knows. Maybe the server was annoyed by our absence.
14) Message boards : Number crunching : Why do tasks crash on some machines but not others? (Message 61549)
Posted 16 Nov 2019 by KAMasud
Post:
Do you have a GPU which is also crunching for BOINC, and have you given over any CPU to it, or are you running 100% of your CPUs? I had this problem, but now I only run tasks on 50% of my CPUs, which at least for me solved it. I think the GPU also needs its space. Anyway, even with BOINC running on 50% of the CPUs, the load in Task Manager sometimes reaches 70%; it is the GPU consuming the rest of the CPU time. Another benefit is that my CPU clock speed has increased.
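The 50% limit described above is BOINC's "Use at most X % of the CPUs" computing preference. A minimal sketch of the same setting in a local global_prefs_override.xml, with 50 used only because that is the value in the post:

    <global_preferences>
      <max_ncpus_pct>50.0</max_ncpus_pct>   <!-- use at most half of the logical CPUs -->
    </global_preferences>

The client picks this up after re-reading local preferences or restarting; setting it from BOINC Manager's computing preferences has the same effect.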
15) Message boards : Number crunching : New work Discussion (Message 61471)
Posted 6 Nov 2019 by KAMasud
Post:
Got one, but eleven CPUs are idle. I can see that work is available. Some kind of quota system?
16) Message boards : Number crunching : HADCM3N Tasks not showing on tasks list (Message 43498)
Posted 2 Dec 2011 by KAMasud
Post:
Not normal for me. I have two HADCM3N in limbo somewhere.
17) Message boards : Number crunching : Message no work sent (Message 37913)
Posted 27 Aug 2009 by KAMasud
Post:

Good news as far as the slab model is concerned. I have been wrestling with my last one for what seems like eons now.
Regards
Masud.

we will have more "variety" of models (i.e. an easier-to-run slab model) in a week or so

18) Message boards : Number crunching : Lost credits (Message 37780)
Posted 15 Aug 2009 by KAMasud
Post:

Please do not get too excited. I am active and have not lost much :) even less than 5 percent. As a matter of fact, I might have gained. Yes, before this incident I had finished my HADAM3P and not downloaded any further; right now I am glued to a HADCM3.
Regards.
Masud.

It appears that the loss has something to do with current activity...

The higher the value of recent trickles compared to total accumulated credits, the higher the percentage of loss.

Individuals that were basically inactive lost no credits at all (but were still awarded the 5% compensation).

19) Message boards : Number crunching : Loss of Project Total Credits (Message 37751)
Posted 14 Aug 2009 by KAMasud
Post:
It seems that credits ARE a very touchy subject :)
Regards
Masud.
20) Message boards : Number crunching : Simultaneous projects download problem (Message 37548)
Posted 25 Jul 2009 by KAMasud
Post:

Play around with the percentages allocated to both projects. Start with 50/50, then adjust.
Regards
Masud.

It's possible that your last CPDN Model(s) ran in priority mode to meet the "deadline", keeping other Projects from getting time.

Yes, that could be the case of course, but both my present CPDN work units have their "report deadline" only in early July 2010. I now have one of them running, the other one suspended, and a SETI job is being computed by the other processor. So basically this is as I want it to be, except that a bit more monitoring is required than usual. I may have to make sure personally that new work units can be downloaded from time to time.

A different thing is that, after the latest updates, my BOINC application seems to overestimate running times remarkably. It thinks, e.g., that a new CPDN HADAM3P 6.06 task would take about 1800 hours, when about 500 is nearer the final outcome.

Regards

PK


