climateprediction.net home page
Is more tasks better?

Questions and Answers : Getting started : Is more tasks better?
Profile Johan

Joined: 28 Jul 17
Posts: 1
Credit: 524,650
RAC: 0
Message 56719 - Posted: 21 Aug 2017, 20:31:45 UTC

What is better: running numerous tasks simultaneously, or only one at a time?
If numerous is better, how many are recommended?

Warm Regards
Newbie Johan Swanepoel
ID: 56719
Profile JIM

Joined: 31 Dec 07
Posts: 1152
Credit: 22,053,321
RAC: 4,417
Message 56720 - Posted: 21 Aug 2017, 21:37:55 UTC - in response to Message 56719.  

That’s a complex question; the answer depends on several factors.
How many cores do you have (and how fast are they), and how much RAM? 2 GB of RAM per model (simulation) is recommended.
Running a model on each core will slow down your machine to some extent. How much performance loss are you willing to accept? Many people leave at least one core free for their own use.
Also, running your machine flat out can cause it to run hot, which stresses the processor. If the cooling fan goes into high gear as soon as you start BOINC, you may want to cut down on the number of models being run at once to prolong the life of your machine.
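
If you do decide to limit how many models run at once, BOINC provides two common knobs: the computing preference "Use at most X% of the CPUs", or an app_config.xml placed in the project's data directory. A minimal sketch (the limit of 3 is just an example, not a recommendation):

```xml
<!-- app_config.xml in the climateprediction.net project directory -->
<!-- Caps the number of CPDN tasks running at once, regardless of core count -->
<app_config>
   <project_max_concurrent>3</project_max_concurrent>
</app_config>
```

After saving the file, choose "Read config files" in the BOINC Manager for it to take effect.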
ID: 56720
Profile Dave Jackson
Volunteer moderator

Joined: 15 May 09
Posts: 4314
Credit: 16,379,331
RAC: 3,596
Message 56721 - Posted: 22 Aug 2017, 7:08:57 UTC

Also, I don't have experience of this myself, not having CPUs with hyperthreading, but someone did report a while ago that total throughput of models on a four-core machine with hyperthreading was greatest using six or seven virtual cores rather than eight.

Over the years there have also been several posts asking why we don't have tasks that can use more than one core at a time. There are two aspects to take into account.
Firstly, the Fortran programme, which is owned by the Met Office, would have to be re-written, and CPDN doesn't have the necessary permission to modify this code. Secondly, the serial nature of climate modelling means each result is contingent on the previous one, so probably little would be gained compared to some other computing tasks, such as rendering 3D images, where parallelism makes a massive difference.
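
That serial dependency can be sketched abstractly (this is a toy step function for illustration, not CPDN's actual model code):

```python
def run_model(initial_state, n_steps, step):
    """Advance a climate-style simulation one timestep at a time.

    Each new state depends on the previous one, so the loop is
    inherently sequential: unlike rendering independent frames,
    the iterations cannot be handed out to separate cores.
    """
    state = initial_state
    for _ in range(n_steps):
        state = step(state)  # step N+1 needs the result of step N
    return state

# Toy example: "state" is a number, each step halves the gap to 1.0
final = run_model(0.0, 10, lambda s: (s + 1.0) / 2.0)
print(final)  # 0.9990234375
```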
ID: 56721
Profile astroWX
Volunteer moderator

Joined: 5 Aug 04
Posts: 1496
Credit: 95,522,203
RAC: 0
Message 56724 - Posted: 23 Aug 2017, 19:56:13 UTC - in response to Message 56719.  

Johan,

For what it's worth, my i7-4790, running four real cores plus one of its four hyper-threads, runs similar CPDN tasks slower than my slower-rated i5-4670.

When the i7 was new, it was tested by adding one more hyper-thread "cpu" per test. Each additional "cpu" took a larger bite out of the processing rate than the one before -- an obvious non-linear curve when plotted.
"We have met the enemy and he is us." -- Pogo
Greetings from coastal Washington state, the scenic US Pacific Northwest.
ID: 56724
Profile geophi
Volunteer moderator

Joined: 7 Aug 04
Posts: 2167
Credit: 64,403,322
RAC: 5,085
Message 56725 - Posted: 25 Aug 2017, 20:07:00 UTC
Last modified: 25 Aug 2017, 20:07:59 UTC

For most situations, hyperthreading adds 10 to 15% additional work accomplished compared to the same processor with hyperthreading turned off. Individual tasks will take quite a bit longer to complete, but the total models completed over some long period of time, and the total credits per day/week etc. will be 10 to 15% greater with hyperthreading.

That said, this can be less true for larger, more complex models that take more memory and therefore hit the cache and memory bandwidth harder. On the other hand, models with a small memory footprint (like the old FAMOUS) don't tax the cache and memory bandwidth as much and can reach the upper end of efficiency, near 20%. Also, with the various energy-saving and heat-protection settings for the processor in the BIOS, the PC may throttle down the CPU speed when you max out the number of logical cores running at once, which further complicates throughput expectations.
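
As a back-of-the-envelope illustration of what a 10-15% aggregate gain means for individual tasks (all numbers here are made up for the example):

```python
# Hypothetical figures to illustrate the hyperthreading trade-off.
cores = 4                  # physical cores
task_hours_no_ht = 100.0   # hours per model with hyperthreading off

# Without HT: 4 models finish every 100 hours.
throughput_no_ht = cores / task_hours_no_ht           # models per hour

# With HT: 8 models run at once, but total throughput only rises
# ~12%, so each individual model takes much longer to complete.
ht_gain = 1.12
throughput_ht = throughput_no_ht * ht_gain
task_hours_ht = 8 / throughput_ht  # hours per model with HT on

print(f"{task_hours_ht:.0f} h per model with HT")  # prints "179 h per model with HT"
```

So total credit per week goes up, while each task's wall-clock time nearly doubles, which matters for batches with tight deadlines.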

It's harder to do such a test nowadays as we often have numerous model types (batches) at different resolutions and grid sizes and different physics taking more or less memory.
ID: 56725

©2024 climateprediction.net