Message boards :
Number crunching :
New work Discussion
Joined: 6 Jul 06 · Posts: 147 · Credit: 3,615,496 · RAC: 420
Do you know if they run faster as 64 bit, or are they the same? If they are the same, what is the benefit? Is there any reason (that you know of) why they need so much memory? More expansive models, more parameters, or something else?

Still keen to try some OpenIFS work units. I have 64 GB of RAM on my AMD 5900X (12 cores/24 threads), and as only 4 work units seem to be downloaded at any particular time (in my last two attempts to get work), I should be OK. (I run a lot of other projects as well, so this limits how much work can be downloaded.)

Conan
Joined: 15 May 09 · Posts: 4529 · Credit: 18,606,817 · RAC: 12,194
> Do you know if they run faster as 64 bit or are they the same?

I'm afraid I don't know enough about the modelling process to answer that. I can say that many of these tasks from the testing site have peaked at 6 GB of RAM on my computers. Running more than one at a time on my now-dead laptop, which only had 8 GB of RAM, really slowed them down. Some have peaked at around 9 GB per task, but as they don't all peak at the same time, staggering them keeps it from being a major problem: it doesn't cause crashes.

The OpenIFS code is open source and produced by a pan-European consortium, which I see as an advantage over the Met Office code. It was the amount of memory they used that alerted one of my fellow mods to the fact that they must be 64 bit.
Joined: 5 Aug 04 · Posts: 1119 · Credit: 17,185,520 · RAC: 2,698
Make sure that you have LOTS of RAM before trying any!!!

When I heard about the OpenIFS memory requirements, I doubled my RAM. Right now almost all of it is unused. If I ever get any more CPDN work, any Rosetta work, or WCG work, that usage may go up some, but OpenIFS should use more.

$ free -hw
          total    used    free   shared  buffers   cache  available
Mem:       62Gi   3.4Gi   1.1Gi     87Mi    166Mi    57Gi       58Gi
Swap:      15Gi   1.0Mi    15Gi

Computer 1511241
CPU type: GenuineIntel Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz [Family 6 Model 85 Stepping 7]
Number of processors: 16
Operating system: Linux Red Hat Enterprise Linux 8.6 (Ootpa) [4.18.0-372.19.1.el8_6.x86_64 | libc 2.28 (GNU libc)]
BOINC version: 7.16.11
Memory: 62.28 GB
Cache: 16896 KB
Swap space: 15.62 GB
Total disk space: 488.04 GB
Free disk space: 482.76 GB
Measured floating point speed: 6.58 billion ops/sec
Measured integer speed: 30.58 billion ops/sec
Joined: 15 May 09 · Posts: 4529 · Credit: 18,606,817 · RAC: 12,194
Just for information, hoping these do come to the main site soon. One of my OpenIFS tasks from testing had the following on the task page:

Peak working set size: 8.77 GB
Peak swap size: 9.38 GB

Oh, and some of them have had final uploads in the region of 1 GB too!
Joined: 7 Sep 16 · Posts: 262 · Credit: 34,843,347 · RAC: 15,688
Woah. Is there any way to make those opt-in? That's massive, even for CPDN, and I would wager not many people have enough RAM to run more than one of those at a time. One of my dedicated compute boxes could only run one of those, and the other might manage three...
Joined: 9 Oct 20 · Posts: 690 · Credit: 4,391,754 · RAC: 6,918
> Woah.

10 GB is not massive. I have three computers that will take 128 GB and currently have 40, 50, and 64. RAM is cheap; upgrade your dinosaurs. Trouble is I run Windows, not geeky Linux, so I can't do them.
Joined: 15 May 09 · Posts: 4529 · Credit: 18,606,817 · RAC: 12,194
> Woah.

As long as you have the minimum amount of RAM to run one, you can run more than that if you have sufficient swap space. The first ones wouldn't download if you had less than 5 GB of RAM, so the project won't let you run them unless you have at least the minimum required for one task. I was able to run four of them on a laptop with 8 GB of RAM, but it did slow them down a lot. Running two was relatively OK: they spend much of their time not using that much, and as long as they are out of sync and not all peaking at once, you can get away with it.
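The arithmetic above can be sketched as a quick back-of-envelope check. The ~9 GB peak working set and the ~5 GB server-side minimum are figures quoted in this thread; the function name and structure are purely illustrative:

```python
# Back-of-envelope check: how many OpenIFS tasks fit in RAM at the
# ~9 GB peak working set quoted in this thread, given that the server
# reportedly refuses to send work to hosts with under ~5 GB of RAM.
PEAK_GB = 9.0      # peak working set per task (figure from this thread)
MIN_RAM_GB = 5.0   # server-side minimum to receive work (from this thread)

def max_concurrent_tasks(ram_gb: float) -> int:
    """Tasks that fit without relying on swap or staggered peaks."""
    if ram_gb < MIN_RAM_GB:
        return 0                           # no work would be sent at all
    return max(1, int(ram_gb // PEAK_GB))  # at least one, since the host qualifies

print(max_concurrent_tasks(64))  # 7: a 64 GB host fits seven tasks at full peak
print(max_concurrent_tasks(8))   # 1: more is possible only with swap and staggering
```

Running above this count is exactly what the post describes: feasible with swap, at the cost of speed, provided the tasks' memory peaks stay out of sync.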
Joined: 15 May 09 · Posts: 4529 · Credit: 18,606,817 · RAC: 12,194
On the Windows front, I have 2 NZ WAH2 tasks from the testing branch running under WINE. There is something to do with the ancillary files that they want to get sorted before moving to bigger batches and main-site work.
Joined: 9 Oct 20 · Posts: 690 · Credit: 4,391,754 · RAC: 6,918
> On the Windows front, I have 2 NZ WAH2 tasks running under WINE from the testing branch.

Thanks, 6 computers waiting.
Joined: 7 Sep 16 · Posts: 262 · Credit: 34,843,347 · RAC: 15,688
> 10GB is not massive. I have three computers that will take 128GB and currently have 40, 50, and 64. RAM is cheap, upgrade your dinosaurs.

It is an order of magnitude larger than any other distributed-computing tasks I've run. Even the 1.5 GB tasks are fairly rare, and CPDN-only. Most of my boards support more RAM, but only at significantly slower timings: they won't run 4 sticks at DDR4-3200, which is why I'm running two sticks in the compute boards right now, because bandwidth is useful with the big stuff. Though I'd hardly call a 3900X a dinosaur.

If any of those show up in bulk, I'll certainly try them out, and I can add swap space easily enough (though not enough to load up all the cores, not that I do that anyway), but it's just a major shift in RAM-per-task sizes.
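Adding swap as mentioned above is straightforward on Linux. This is a minimal configuration sketch only: the path /swapfile and the 16 GiB size are arbitrary examples, not anything recommended by CPDN, and every step needs root:

```shell
# Illustrative sketch: create and enable a 16 GiB swap file on Linux.
# Path and size are arbitrary examples.
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile    # swap must not be readable by other users
sudo mkswap /swapfile       # write swap-area metadata
sudo swapon /swapfile       # enable it immediately
# To persist across reboots, add it to /etc/fstab:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```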
Joined: 9 Oct 20 · Posts: 690 · Credit: 4,391,754 · RAC: 6,918
> It is "an order of magnitude larger than any other distributed compute tasks I've run." Even the 1.5GB tasks are fairly rare, and CPDN only.

If you've ever done Rosetta or LHC, or some of the maths projects, you'll be used to big tasks. I have two 10-year-old machines that take 128 GB. I never compromise on the motherboard or RAM.

> Most of my boards support larger RAM, but only at significantly slower timings. They won't run 4 sticks at DDR4-3200, which is why I'm running two sticks in the compute boards right now. Though I'd hardly call a 3900X a dinosaur.

That's odd, because I have zero out of 7 boards that change timings with more sticks. They will all in fact run all sticks faster than the CPU supports.

> If any of those show up in bulk, I'll certainly try them out, and I can add swap space easily enough, but it's just a major shift in RAM-per-task sizes.

eBay is full of all sorts of RAM.
Joined: 7 Sep 16 · Posts: 262 · Credit: 34,843,347 · RAC: 15,688
Ah, Intel side of things? Both of my boards drop from 3200 down to 2667 or so going from 2 to 4 sticks. I mean, I've got a rig with 72 GB of RAM, but it's an old Westmere-era dual-Xeon box with power consumption to match; even "asleep" it still pulls 150 W, so I'm not using it much.
Joined: 9 Oct 20 · Posts: 690 · Credit: 4,391,754 · RAC: 6,918
> Ah, Intel side of things?

My motherboards are:
Ryzen: https://uk.msi.com/Motherboard/X470-GAMING-PLUS-MAX/
i5: https://www.gigabyte.com/Motherboard/Z370-HD3P-rev-10#kf
Two dual-Xeon servers: https://www.dell.com/learn/us/en/05/shared-content~data-sheets~en/documents~r410-specsheet.pdf and https://www.dell.com/learn/us/en/16/shared-content~data-sheets~en/documents~r510-specsheet.pdf
Ancient quad core: https://www.asus.com/uk/supportonly/P5N-D/HelpDesk_Knowledge/
Plus these laptop-sorta things it's impossible to get documentation on: https://icecat.biz/p/packard+bell/dt.ua3eh.001/imedia-pcs-workstations-s2984-32917145.html and https://www.expertreviews.co.uk/laptops/49686/acer-aspire-5741-review

I can find no indication that the first 4 (the decent ones that take loadsa RAM) will slow down with more RAM. Anyway, who needs fast RAM? I changed my Ryzen from single to dual channel and can hardly tell the difference in most games and most BOINC projects. But more RAM is always very helpful; what you aren't using becomes a massive disk cache.

As for power, I've put all but my gaming Ryzen in the garage. They provide a tropical environment for 2 of my parrots. Who cares if I'm using 4 kW 24/7? We have fossil fuels to burn, baby!
Joined: 5 Sep 04 · Posts: 7629 · Credit: 24,240,330 · RAC: 0
This is getting off topic.
Joined: 9 Oct 20 · Posts: 690 · Credit: 4,391,754 · RAC: 6,918
> This is getting off topic.

Welcome to the art of conversation. Anyway, it isn't off topic at all; we're discussing memory size and memory speed, relevant to the new, larger work units.
Joined: 5 Sep 04 · Posts: 7629 · Credit: 24,240,330 · RAC: 0
Hardware discussion in the new thread, please.
Joined: 4 Oct 15 · Posts: 34 · Credit: 9,075,151 · RAC: 374
> All batches are being processed; are there any new work developments on the horizon?

> Another 63 HADCM3S tasks on the testing server at the moment. Pretty sure all that testing will lead to main-site work at some point, but no hints in discussions in other places about how long this will take or what they need to know before sending them out.

Hold them back, I still have more than enough work downloaded ^^
https://www.cpdn.org/results.php?hostid=1521318&offset=0&show_names=0&state=1&appid=
One day I looked into the VM, and it was more than full :)

Greets, Felix
Joined: 31 Dec 07 · Posts: 1152 · Credit: 22,363,583 · RAC: 5,022
What OS will these HADCM3S tasks be for?
Joined: 5 Sep 04 · Posts: 7629 · Credit: 24,240,330 · RAC: 0
Mac OS
Joined: 6 Jul 06 · Posts: 147 · Credit: 3,615,496 · RAC: 420
> Just for information, hoping these do come to the main site soon. One of my OpenIFS tasks from testing had the following on the task page.

The application has been placed on the Application Page, just awaiting the actual work units to go with it. So something is moving.

Conan
©2024 cpdn.org