Message boards :
Number crunching :
New work discussion - 2
Joined: 15 May 09 Posts: 4531 Credit: 18,703,882 RAC: 16,510
> Jean-David and I could both run four 28 GB tasks at once. It would be a pity to limit the number. Can the server alter the limit based on the host's RAM? Also, I take it this limit is per host? I have 10 machines....

Limit is per host. Sadly, the BOINC code currently does not allow the server to restrict the number of tasks a particular host gets according to RAM. It can only check whether the host has enough memory for a single task. I don't know if this feature has been requested of the devs over at GitHub.
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
> Jean-David and I could both run four 28 GB tasks at once. It would be a pity to limit the number. Can the server alter the limit based on the host's RAM? Also, I take it this limit is per host? I have 10 machines....
>
> Limit is per host. Sadly, the BOINC code currently does not allow the server to restrict the number of tasks a particular host gets according to RAM. It can only check whether the host has enough memory for a single task. I don't know if this feature has been requested of the devs over at GitHub.

The BOINC client should consider whether it should fetch more work when RAM is low, but it probably doesn't. That's another thing BOINC could implement. For example, CPDN is set to let a host get only 1 task at a time:

- Host with 64 GB downloads 1 task and runs it. Host is using 28/64 GB.
- Host downloads 1 more task and runs it. Host is using 56/64 GB.
- Host realises tasks from CPDN are too large and it mustn't download another, so it does nothing or fetches work from another project.

Or the work request could include the RAM available:

- Host: I want tasks to run on 12 cores for 2 days and I have 64 GB of RAM.
- Server: Tasks are 28 GB each, so you only get 2.
- Host to another project: I want tasks to run on 10 cores for 2 days and I have 8 GB of RAM left.
- Other project server: Here you go.

I'm tired of making umpteen suggestions and bug fixes on GitHub; your turn. And no, I haven't been banned :-)
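The second scheme above can be sketched in a few lines. This is only an illustration of the idea, not BOINC code: the function name and the 28 GB-per-task figure (taken from the thread) are assumptions.

```python
# Hypothetical sketch of a RAM-aware scheduler: the work request carries the
# host's free RAM, and the server caps the allocation to what fits.
GIB = 1024 ** 3

def tasks_to_send(requested_tasks, free_ram_bytes, ram_per_task_bytes):
    """Cap the number of tasks by how many fit in the host's free RAM."""
    fit_by_ram = free_ram_bytes // ram_per_task_bytes
    return min(requested_tasks, fit_by_ram)

# Host asks for 12 tasks and has 64 GiB free; tasks need 28 GiB each.
print(tasks_to_send(12, 64 * GIB, 28 * GIB))  # 2
```

With the same logic, a host with only 8 GiB left would be sent nothing at all, rather than a task it cannot hold in memory.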
Joined: 29 Oct 17 Posts: 1044 Credit: 16,211,222 RAC: 11,923
Until we get more experience with volunteers running these high-memory apps, I think it makes sense to restrict it to a single task for now. We can change it later in light of experience.

> Jean-David and I could both run four 28 GB tasks at once. It would be a pity to limit the number. Can the server alter the limit based on the host's RAM? Also, I take it this limit is per host? I have 10 machines....
>
> Limit is per host. Sadly, the BOINC code currently does not allow the server to restrict the number of tasks a particular host gets according to RAM. It can only check whether the host has enough memory for a single task. I don't know if this feature has been requested of the devs over at GitHub.

No other project I know of runs tasks with such high memory requirements, so it's not obvious how they will be received. Let's walk before we run with this.

--- CPDN Visiting Scientist
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
> Until we get more experience with volunteers running these high-memory apps, I think it makes sense to restrict it to a single task for now. We can change it later in light of experience.

LHC's ATLAS tasks at 10 GB are the biggest I know of. But those are 8-thread tasks, so you don't get people trying to run huge numbers of them. Are yours going to be single-threaded?
Joined: 15 May 09 Posts: 4531 Credit: 18,703,882 RAC: 16,510
> Are yours going to be single-threaded?

At least initially. In the long term there may be multithreaded ones.
Joined: 9 Oct 04 Posts: 82 Credit: 69,818,270 RAC: 8,262
I still think I will buy that 128 GB of RAM; RAM speed didn't matter, did it? Then I will be able to equip a third computer with 64 GB of RAM (4×16 GB) - a virtual box… I remember the new OpenIFS is supposed to be multithreaded, not single-threaded…
Joined: 15 May 09 Posts: 4531 Credit: 18,703,882 RAC: 16,510
> I remember the new OpenIFS is supposed to be multithreaded, not single-threaded…

If and when the multithreaded tasks come, they will have the same memory requirements as single-threaded tasks, though there is currently quite a wide range of memory requirements across OIFS tasks. I don't know whether all the different ones will stay.
Joined: 29 Oct 17 Posts: 1044 Credit: 16,211,222 RAC: 11,923
If you want to work around the "1 task per host" rule for the higher-resolution OIFS tasks, there's no need to use a VM. Create another instance of the BOINC client on the host using a different data directory and a different gui_rpc_port number, then attach that instance to the CPDN project and nothing else. That's how I do it.

Speed: for the fastest execution time, single-core speed matters most, followed by memory bandwidth. I've just finished building a new machine with DDR5 and was going to do some benchmarking with OIFS. I can post results if people are interested.

Yes, OpenIFS is multithreaded; that's how I run it standalone. I know how to implement it in BOINC, it's just time and more testing. It depends how long it all takes, but it would be nice to have the multithreaded app ready for these hi-res configurations.

Please don't get too focussed on OpenIFS, though. There are currently no future external projects in the works to use OpenIFS, only the testing & development work I want to do.

> I still think I will buy that 128 GB of RAM; RAM speed didn't matter, did it?

--- CPDN Visiting Scientist
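The second-instance setup described above can be sketched as the command line it amounts to. `--dir`, `--gui_rpc_port` and `--allow_multiple_clients` are standard BOINC client options; the data directory path and port number below are arbitrary examples, not required values.

```python
# Sketch of the command line for a second BOINC client instance dedicated
# to CPDN: its own data directory, its own RPC port, and permission to run
# alongside the main client. Path and port are illustrative only.
def second_instance_cmd(data_dir="/var/lib/boinc-cpdn", rpc_port=31417):
    """Build the argument list for an extra client with its own data dir."""
    return [
        "boinc",
        "--dir", data_dir,                # separate data directory
        "--gui_rpc_port", str(rpc_port),  # lets managers address this instance
        "--allow_multiple_clients",       # permit a second client on the host
        "--daemon",                       # run in the background
    ]

print(" ".join(second_instance_cmd()))
```

You would then attach only that instance to CPDN (for example with boinccmd pointed at the chosen RPC port), leaving the main client for other projects.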
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
> If and when the multithreaded tasks come, they will have the same memory requirements as single-threaded tasks, though there is currently quite a wide range of memory requirements across OIFS tasks.

I assume you mean the same per task, not the same per core, so we can fill the CPU with them without colossal amounts of memory.
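The per-task versus per-core distinction above is worth making concrete. A rough illustration, using the 28 GB figure from the thread and a hypothetical 12-core host:

```python
# RAM needed to keep every core busy with 28 GB tasks, comparing
# single-threaded tasks against 4-thread tasks (same RAM per *task*).
def ram_to_fill(cores, threads_per_task, ram_per_task_gb=28):
    tasks = cores // threads_per_task   # concurrent tasks occupying all cores
    return tasks * ram_per_task_gb

print(ram_to_fill(12, 1))  # 12 single-threaded tasks -> 336 GB
print(ram_to_fill(12, 4))  # 3 four-thread tasks      -> 84 GB
```

If memory scaled per core instead, the two cases would need the same RAM, and multithreading would bring no memory relief at all.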
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
> I still think I will buy that 128 GB of RAM; RAM speed didn't matter, did it?

Speed certainly does matter. I have two almost identical CPUs, a Ryzen 9 3900X and a Ryzen 9 3900XT; one has faster memory. The speed difference on climateprediction and a few other projects is huge. On most projects there is no difference at all.
Joined: 6 Jul 06 Posts: 147 Credit: 3,615,496 RAC: 420
> LHC's ATLAS tasks at 10 GB are the biggest I know of. But those are 8-thread tasks, so you don't get people trying to run huge numbers of them. Are yours going to be single-threaded?

YOYO@home ECM/P2 tasks take at least 11 GB per task, single-threaded. That's why I stopped running them on my 32 GB machine and limit them to just 3 at a time on my 64 GB machine; they are real memory hogs.

Conan
Joined: 29 Oct 17 Posts: 1044 Credit: 16,211,222 RAC: 11,923
Would it make any difference if they were multicore?
Joined: 5 Aug 04 Posts: 126 Credit: 24,157,448 RAC: 27,078
> Would it make any difference if they were multicore?

I would much rather run a single, say, 10 GB model for one week using 4 cores than run the same model on a single core for 4 weeks. Given how badly CPDN handles reboots, waiting a few days to reboot while the 4-core task finishes is possible; waiting maybe 3 weeks for the single-core task to finish isn't really practical.
Joined: 29 Oct 17 Posts: 1044 Credit: 16,211,222 RAC: 11,923
That's my preference. Only OpenIFS will be multicore, and it doesn't have the restart problem the Hadley models do.
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
> YOYO@home ECM/P2 tasks take at least 11 GB per task, single-threaded. That's why I stopped running them on my 32 GB machine and limit them to just 3 at a time on my 64 GB machine; they are real memory hogs.

Amicable Numbers requires 8 GB of RAM and 1 GB of VRAM per GPU task. That didn't quite work out on my 6-GPU, 32 GB machine! I could add more RAM, but I'm supposed to be saving up for a forest to build a log cabin in. Yafu has another requirement: up to 128 threads per task!
Joined: 22 Feb 11 Posts: 32 Credit: 226,546 RAC: 4,080
Fortunately yafu has tasks for lower core counts.
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
> Fortunately yafu has tasks for lower core counts.

Does kotenok mean kitten or gun?
Joined: 22 Feb 11 Posts: 32 Credit: 226,546 RAC: 4,080
kitten
Joined: 5 Aug 04 Posts: 1120 Credit: 17,195,460 RAC: 2,697
> Until we get more experience with volunteers running these high-memory apps, I think it makes sense to restrict it to a single task for now. We can change it later in light of experience.

One way to get more experience with volunteers running these high-memory apps would be to send more of them to us volunteers.
Joined: 15 May 09 Posts: 4531 Credit: 18,703,882 RAC: 16,510
> One way to get more experience with volunteers running these high-memory apps would be to send more of them to us volunteers.

With the number of computers chasing work as soon as tasks appear, most will likely go pretty quickly, and restricting machines to one task at a time means more machines will run tasks, which stands a better chance of showing up any problems that occur in the single-task scenario. Issues with running more than one at a time can be investigated later.
©2024 cpdn.org