New work discussion - 2

Profile Dave Jackson
Volunteer moderator

Send message
Joined: 15 May 09
Posts: 4347
Credit: 16,541,921
RAC: 6,087
Message 69545 - Posted: 31 Aug 2023, 5:23:12 UTC

Jean-David and I could both run four 28 GB tasks at once. It would be a pity to limit the number. Can the server alter the limit based on the host's RAM? Also, I take it this limit is per host? I have 10 machines....
The limit is per host. Sadly, the BOINC code currently does not allow the server to restrict the number of tasks a particular host gets according to its RAM; it can only check whether the host has enough memory for a single task. I don't know if this feature has been requested of the devs over on GitHub.
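To make the distinction concrete, here is a minimal sketch (plain Python, purely illustrative; this is not BOINC scheduler code and the names are invented) of the difference between the check the server can do today and a RAM-based cap:

    # Illustrative only -- not actual BOINC scheduler code.
    def tasks_to_send(host_ram_gb, task_mem_gb, per_host_limit):
        # Today's check: a yes/no test that the host can fit a single task at all.
        if host_ram_gb < task_mem_gb:
            return 0
        # Hypothetical RAM-aware cap: also limit the count by how many tasks fit in RAM.
        fits_in_ram = host_ram_gb // task_mem_gb
        return min(per_host_limit, fits_in_ram)

    # A 64 GB host and 28 GB tasks: at most 2, even if the per-host limit were higher.
    print(tasks_to_send(64, 28, per_host_limit=4))    # -> 2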
ID: 69545 · Report as offensive
Mr. P Hucker

Send message
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 69546 - Posted: 31 Aug 2023, 5:32:17 UTC - in response to Message 69545.  
Last modified: 31 Aug 2023, 5:34:37 UTC

Jean-David and I could both run four 28 GB tasks at once. It would be a pity to limit the number. Can the server alter the limit based on the host's RAM? Also, I take it this limit is per host? I have 10 machines....
The limit is per host. Sadly, the BOINC code currently does not allow the server to restrict the number of tasks a particular host gets according to its RAM; it can only check whether the host has enough memory for a single task. I don't know if this feature has been requested of the devs over on GitHub.
The BOINC client itself should consider whether to fetch more work when RAM is low, but it probably doesn't. That's another thing BOINC could implement. For example (a rough sketch of this check follows the list below):

CPDN is set to let a host get only 1 task at a time.
Host with 64GB downloads 1 task and runs it.
Host is using 28/64GB.
Host downloads 1 more task and runs it.
Host is using 56/64GB.
Host realises tasks from CPDN are too large and it mustn't download another, so it does nothing or downloads from another project.
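A minimal sketch of the client-side check being described, assuming the numbers from the list above; none of these function or variable names exist in the real BOINC client:

    # Hypothetical client-side logic, for illustration only -- not real BOINC client code.
    def should_fetch_another(total_ram_gb, running_task_mem_gb, next_task_mem_gb):
        committed = sum(running_task_mem_gb)          # RAM already claimed by running tasks
        return committed + next_task_mem_gb <= total_ram_gb

    # 64 GB host, two 28 GB tasks already running, next task also 28 GB:
    print(should_fetch_another(64, [28, 28], 28))     # -> False, so ask another project instead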

Or....

The work request includes RAM available.
Host: I want tasks to run on 12 cores for 2 days and I have 64GB RAM.
Server: Tasks are 28GB each, you only get 2.
Host to another project: I want tasks to run on 10 cores for 2 days and I have 8GB RAM left.
Other project server: Here you go.

I'm tired of making umpteen suggestions and bug fixes on GitHub; your turn. And no, I haven't been banned :-)
ID: 69546 · Report as offensive
Glenn Carver

Send message
Joined: 29 Oct 17
Posts: 809
Credit: 13,604,352
RAC: 5,068
Message 69547 - Posted: 31 Aug 2023, 14:33:33 UTC - in response to Message 69545.  

Jean-David and I could both run four 28 GB tasks at once. It would be a pity to limit the number. Can the server alter the limit based on the host's RAM? Also, I take it this limit is per host? I have 10 machines....
The limit is per host. Sadly, the BOINC code currently does not allow the server to restrict the number of tasks a particular host gets according to its RAM; it can only check whether the host has enough memory for a single task. I don't know if this feature has been requested of the devs over on GitHub.
Until we get more experience with volunteers running these high-memory apps, I think it makes sense to restrict it to a single task for now. We can change it later in light of experience.

No other projects I know of run tasks with such high memory requirements, so it's not obvious how they will be received. Let's walk before we run with this.
---
CPDN Visiting Scientist
ID: 69547 · Report as offensive
Mr. P Hucker

Send message
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 69550 - Posted: 31 Aug 2023, 15:48:44 UTC - in response to Message 69547.  

Until we get more experience with volunteers running these high-memory apps, I think it makes sense to restrict it to a single task for now. We can change it later in light of experience.

No other projects I know of run tasks with such high memory requirements, so it's not obvious how they will be received. Let's walk before we run with this.
LHC's ATLAS tasks at 10 GB are the biggest I know of. But they use 8 threads, so you don't get people trying to run huge numbers of them. Are yours going to be single-threaded?
ID: 69550 · Report as offensive
Profile Dave Jackson
Volunteer moderator

Send message
Joined: 15 May 09
Posts: 4347
Credit: 16,541,921
RAC: 6,087
Message 69551 - Posted: 31 Aug 2023, 15:57:37 UTC - in response to Message 69550.  

Are yours going to be single-threaded?
At least initially. Long term, there may be multithreaded ones.
ID: 69551 · Report as offensive
klepel

Send message
Joined: 9 Oct 04
Posts: 77
Credit: 68,053,474
RAC: 10,350
Message 69552 - Posted: 31 Aug 2023, 18:31:50 UTC - in response to Message 69551.  

I still think I will buy the 128 GB of RAM; RAM speed didn't matter, did it?

Then I will be able to equip a third computer with 64 GB RAM (4×16 GB) - a virtual box…

I remember the new OpenIFS is supposed to be multithreaded, not single-threaded…
ID: 69552 · Report as offensive
Profile Dave Jackson
Volunteer moderator

Send message
Joined: 15 May 09
Posts: 4347
Credit: 16,541,921
RAC: 6,087
Message 69553 - Posted: 31 Aug 2023, 19:40:46 UTC - in response to Message 69552.  

I remember the new OpenIFS is supposed to be multithreaded, not single-threaded…
If and when the multithreaded tasks come, they will have the same memory requirements as single-threaded tasks, though there is currently quite a wide range of memory requirements across OIFS tasks. I don't know whether all the different configurations will stay or not.
ID: 69553 · Report as offensive
Glenn Carver

Send message
Joined: 29 Oct 17
Posts: 809
Credit: 13,604,352
RAC: 5,068
Message 69554 - Posted: 31 Aug 2023, 21:27:18 UTC - in response to Message 69552.  

If you want to work around the '1 task per host' rule for the higher-resolution OIFS tasks, there's no need to use a VM. Create another instance of the BOINC client on the host, using a different data directory and a different gui_rpc_port number, then connect that instance to the CPDN project and nothing else. That's how I do it.
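For anyone wanting to try this, here is a sketch of launching a second client instance, using Python's subprocess purely for illustration; the data directory, port number and client flags are assumptions to verify against your own BOINC version (see boinc --help):

    # Illustrative sketch: start a second BOINC client with its own data directory and RPC port.
    # The flags (--allow_multiple_clients, --dir, --gui_rpc_port), path and port are assumptions.
    import subprocess
    from pathlib import Path

    data_dir = Path.home() / "boinc2"                 # hypothetical second data directory
    data_dir.mkdir(parents=True, exist_ok=True)

    subprocess.Popen([
        "boinc",
        "--allow_multiple_clients",                   # allow it to coexist with the main client
        "--dir", str(data_dir),                       # keep its state separate
        "--gui_rpc_port", "31418",                    # the default is 31416, so pick another port
    ])
    # Then attach only CPDN to this instance, e.g.:
    #   boinccmd --host localhost:31418 --project_attach <CPDN URL> <account key>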

Speed: for the fastest execution time, single-core speed is most important, followed by memory bandwidth. I've just finished building a new machine with DDR5 and was going to do some benchmarking with OIFS. I can post results if there's interest.

Yes, OIFS is multithreaded; that's how I run it standalone. I know how to implement it in BOINC, it's just a matter of time and more testing. It depends how long it all takes, but it would be nice to have the multithreaded app ready for these hi-res configs.

Please don't get too focussed on OpenIFS though. There are currently no external projects in the works that use OpenIFS, only the testing & development work I want to do.

I still think I will buy the 128 GB of RAM; RAM speed didn't matter, did it?

Then I will be able to equip a third computer with 64 GB RAM (4×16 GB) - a virtual box…

I remember the new OpenIFS is supposed to be multithreaded, not single-threaded…

---
CPDN Visiting Scientist
ID: 69554 · Report as offensive
Mr. P Hucker

Send message
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 69555 - Posted: 1 Sep 2023, 2:01:05 UTC - in response to Message 69553.  

If and when the multithreaded tasks come, they will have the same memory requirements as single-threaded tasks, though there is currently quite a wide range of memory requirements across OIFS tasks. I don't know whether all the different configurations will stay or not.
I assume you mean the same per task and not the same per core, so we can fill the CPU with them without needing colossal amounts of memory.
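To put rough numbers on that (using the 28 GB per-task figure quoted earlier in the thread, purely as an illustration): filling 12 cores with twelve single-threaded 28 GB tasks would need about 12 × 28 = 336 GB of RAM, whereas three 4-thread tasks of 28 GB each would keep the same 12 cores busy in roughly 3 × 28 = 84 GB.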
ID: 69555 · Report as offensive
Mr. P Hucker

Send message
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 69556 - Posted: 1 Sep 2023, 2:03:23 UTC - in response to Message 69552.  
Last modified: 1 Sep 2023, 2:03:57 UTC

I still think I will buy the 128 GB of RAM; RAM speed didn't matter, did it?
Speed certainly does matter. I have two almost identical CPUs, a Ryzen 9 3900X and a Ryzen 9 3900XT, and one has faster memory. The speed difference on CPDN and a few other projects is huge; on most projects there is no difference at all.
ID: 69556 · Report as offensive
Profile Conan

Send message
Joined: 6 Jul 06
Posts: 141
Credit: 3,511,752
RAC: 144,072
Message 69557 - Posted: 2 Sep 2023, 10:17:19 UTC - in response to Message 69550.  

Until we get more experience with volunteers running these high-memory apps, I think it makes sense to restrict it to a single task for now. We can change it later in light of experience.

No other projects I know of run tasks with such high memory requirements, so it's not obvious how they will be received. Let's walk before we run with this.
LHC's ATLAS tasks at 10 GB are the biggest I know of. But they use 8 threads, so you don't get people trying to run huge numbers of them. Are yours going to be single-threaded?


YOYO@home ECM/P2 tasks take at least 11 GB per task, single-threaded, which is why I stopped running them on my 32 GB machine and limit them to just 3 at a time on my 64 GB machine; they are real memory hogs.

Conan
ID: 69557 · Report as offensive
Glenn Carver

Send message
Joined: 29 Oct 17
Posts: 809
Credit: 13,604,352
RAC: 5,068
Message 69558 - Posted: 2 Sep 2023, 11:55:24 UTC - in response to Message 69557.  

Would it make any difference if they were multicore?
ID: 69558 · Report as offensive
Ingleside

Send message
Joined: 5 Aug 04
Posts: 108
Credit: 19,318,990
RAC: 33,312
Message 69559 - Posted: 2 Sep 2023, 12:36:43 UTC - in response to Message 69558.  

Would it make any difference if they were multicore?

I would much rather run a single, let's say, 10 GB model for one week using 4 cores than run the same model on a single core for 4 weeks.
Given how bad CPDN is at handling reboots, waiting a few days to reboot until the 4-core task finishes is possible, but waiting maybe 3 weeks for a single-core one to finish isn't really practical.
ID: 69559 · Report as offensive
Glenn Carver

Send message
Joined: 29 Oct 17
Posts: 809
Credit: 13,604,352
RAC: 5,068
Message 69560 - Posted: 2 Sep 2023, 12:45:56 UTC - in response to Message 69559.  

That's my preference.

Only OpenIFS will be multicore and it doesn't have the restart problem the Hadley models do.
ID: 69560 · Report as offensive
Mr. P Hucker

Send message
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 69561 - Posted: 2 Sep 2023, 16:29:40 UTC - in response to Message 69557.  

YOYO@home ECM/P2 tasks take at least 11 GB per task, single-threaded, which is why I stopped running them on my 32 GB machine and limit them to just 3 at a time on my 64 GB machine; they are real memory hogs.
Amicable Numbers requires 8 GB RAM and 1 GB VRAM per GPU task. That didn't quite work out on my 6-GPU, 32 GB machine! I could add more RAM, but I'm supposed to be saving up for a forest to build a log cabin in.

YAFU has another requirement: up to 128 threads per task!
ID: 69561 · Report as offensive
kotenok2000

Send message
Joined: 22 Feb 11
Posts: 31
Credit: 226,546
RAC: 4,080
Message 69562 - Posted: 2 Sep 2023, 16:32:15 UTC

Fortunately, YAFU has tasks for lower numbers of cores.
ID: 69562 · Report as offensive
Mr. P Hucker

Send message
Joined: 9 Oct 20
Posts: 690
Credit: 4,391,754
RAC: 6,918
Message 69563 - Posted: 2 Sep 2023, 16:41:51 UTC - in response to Message 69562.  

Fortunately, YAFU has tasks for lower numbers of cores.
Does kotenok mean kitten or gun?
ID: 69563 · Report as offensive
kotenok2000

Send message
Joined: 22 Feb 11
Posts: 31
Credit: 226,546
RAC: 4,080
Message 69564 - Posted: 2 Sep 2023, 16:43:17 UTC

kitten
ID: 69564 · Report as offensive
Jean-David Beyer

Send message
Joined: 5 Aug 04
Posts: 1063
Credit: 16,546,621
RAC: 2,321
Message 69565 - Posted: 2 Sep 2023, 21:01:28 UTC - in response to Message 69547.  

Until we get more experience with volunteers running these high-memory apps, I think it makes sense to restrict it to a single task for now. We can change it later in light of experience.


One way to get more experience with volunteers running these high-memory apps would be to send more of them to us volunteers.
ID: 69565 · Report as offensive
Profile Dave Jackson
Volunteer moderator

Send message
Joined: 15 May 09
Posts: 4347
Credit: 16,541,921
RAC: 6,087
Message 69566 - Posted: 2 Sep 2023, 21:10:36 UTC - in response to Message 69565.  

One way to get more experience with volunteers running these high-memory apps would be to send more of them to us volunteers.
With the number of computers chasing work as soon as tasks appear, most tasks will likely go pretty quickly, and restricting machines to one task at a time means more machines will run tasks and stand a better chance of showing up any problems that occur in the single-task scenario. Issues with running more than one task at a time can be investigated later.
ID: 69566 · Report as offensive