Message boards : Number crunching : Hardware for new models.
Author | Message |
---|---|
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918 |
My machine came with two of these, and I added two more. I use CPU-Z to find what motherboard/RAM I have, whether I can add more, and what type. |
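On Linux, a rough equivalent of CPU-Z's memory tab is dmidecode, which lists every DIMM slot including the empty ones. A minimal sketch; the sample text here is hypothetical output, and on a real machine you would pipe `sudo dmidecode -t memory` into the same filter:

```shell
# On a real machine, run:  sudo dmidecode -t memory | awk -F': ' '...'
# Here we filter hypothetical sample output to count DIMM slots.
sample='Size: 16 GB
Size: No Module Installed
Size: 16 GB
Size: No Module Installed'

printf '%s\n' "$sample" |
  awk -F': ' '/Size:/ { if ($2 ~ /No Module/) empty++; else used++ }
              END { print used " populated, " empty " empty" }'
# prints: 2 populated, 2 empty
```

Empty slots show up as "No Module Installed", which tells you how much headroom you have before buying anything.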
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918 |
There are seven more, but four say no module installed. So I seem to be using 4 memory modules and I could add up to four more (in pairs) if I were rich enough and needed more RAM.
Agreed, sort of. Last time I checked, £100 for 32GB. I can't see what motherboard you have, but if it's taking 16GB modules, it should take a 16GB in every slot, at least. Maybe a 32GB in every slot. Since you have a W-2245 CPU, I assume you have a Dell Precision 5820. If so, they can take 512GB! So that would be 8 x 64GB modules! But... it's *4*-channel, not 2-channel. So you're better off buying 4 modules of whatever you need / can afford (unless you want to leave free sockets for another upgrade). |
Joined: 5 Aug 04 Posts: 1120 Credit: 17,202,915 RAC: 2,154 |
There are seven more, but four say no module installed. So I seem to be using 4 memory modules and I could add up to four more (in pairs) if I were rich enough and needed more RAM.
Yes, my machine is a Dell 5820 desktop workstation. Those modules cost me more than that when I bought them in late 2020. I suppose they are less now.
Dell Memory Upgrade - 16GB - 2RX8 DDR4 RDIMM 2933MHz
QUANTITY: 2 | UNIT PRICE: $293.40
Dell have this to say about memory for this machine:
Memory specifications
Type:
● DDR4 ECC RDIMMs - supported only with Xeon W Series CPUs <---<<< what I have
● DDR4 Non-ECC UDIMMs - supported with Core X Series CPUs
Speed:
● 2666 MHz (discontinued on system configurations purchased after October 2020)
● 2933 MHz <---<<< what is in there now
● 3200 MHz
NOTE: 2933 MHz RDIMMs are not offered with Xeon W Skylake Series CPUs.
NOTE: Computer configurations offered with 2933 MHz RDIMMs operating with Skylake processors will operate at 2666 MHz.
NOTE: Computer configurations offered with 3200 MHz RDIMMs operating with Cascade Lake processors will operate at 2933 MHz.
Connectors: 8 DIMM slots
DIMM capacities:
● 32 GB per slot, 2666 MHz DDR4
● 64 GB per slot, 2933 MHz DDR4
● 64 GB per slot, 3200 MHz DDR4
So I would have to replace the current modules with 64 GByte modules to max out my box:
Dell Memory Upgrade - 64GB - 2RX4 DDR4 RDIMM 3200MHz (Not Compatible with Skylake CPU)
Random Access Memory (RAM) is a type of hardware that your computer uses to store information. Adding memory is one of the most cost-effective ways to improve your computer's performance. Dell™ Branded memory offered in the Memory Selector has gone ...
Estimated Value $3,579.00 | Dell Price $1,789.50 | You Save $1,789.50 (50%) | Temporarily Out of Stock <---<<<
Right now, with my machine up 11 days since the most recent reboot, running 11 Boinc tasks, mainly WCG, I use very little RAM, so I hardly think I will be needing much more unless the new OpenIFS tasks become absolute RAM hogs. I am more worried about running out of processor cache.
$ free -hw
       total   used   free   shared  buffers  cache  available
Mem:    62Gi  6.7Gi  802Mi    148Mi    601Mi   54Gi       54Gi
Swap:   15Gi  137Mi   15Gi |
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918 |
Those modules cost me more than that when I bought them in late 2020. I suppose they are less now.
$140 apiece now, so the same price. Or $45 apiece second-hand. I always buy second-hand and run a thorough memtest on it. 2933MHz or above would be perfect (that's the maximum speed of the memory; it will just run slower if need be), or go down a notch to 2666 if it saves a lot of money, so you could afford larger modules.
Right now with my machine up 11 days since most recent reboot, running 11 Boinc tasks, mainly WCG, I use very little RAM, so I hardly think I will be needing much more unless the new OpenIFS become absolute RAM hogs. I am more worried about running out of processor cache.
Since you already have 4-channel memory in use, I guess you can't improve on that. I run big stuff like LHC. And OpenIFS looks like it will be big too, especially if you want to use all your cores on it. |
Joined: 5 Aug 04 Posts: 1120 Credit: 17,202,915 RAC: 2,154 |
Since you already have 4 channel memory in use, I guess you can't improve on that.
I cannot really run all cores on Boinc tasks. I do not have air conditioning. I can control the fan speeds via BIOS settings, but the maximum I can run the fans at before their noise gets really annoying lets me run 8 or so cores in summer; I am currently running 11. The maximum number of cores for Boinc tasks is easily set in the Boinc manager, and I can set the number of concurrent tasks for a given project in the app_config.xml file in the appropriate directory. For me, they are here:
$ locate app_config
/var/lib/boinc/projects/boinc.bakerlab.org_rosetta/app_config.xml
/var/lib/boinc/projects/climateprediction.net/app_config.xml
/var/lib/boinc/projects/milkyway.cs.rpi.edu_milkyway/app_config.xml
/var/lib/boinc/projects/universeathome.pl_universe/app_config.xml
/var/lib/boinc/projects/www.worldcommunitygrid.org/app_config.xml |
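For reference, a minimal app_config.xml along those lines might look like this. This is a sketch only: the app name below is a placeholder, not a real CPDN app name; look up the real one in client_state.xml or the project's applications page.

```xml
<app_config>
  <!-- Cap one application; <name> here is hypothetical -->
  <app>
    <name>some_app_name</name>
    <max_concurrent>4</max_concurrent>
  </app>
  <!-- Or cap all of this project's tasks at once -->
  <project_max_concurrent>6</project_max_concurrent>
</app_config>
```

The client re-reads the file via Options → Read config files in the Boinc manager, or on restart.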
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918 |
I cannot really run all cores on Boinc tasks. I do not have air conditioning. I can control the fan speeds by BIOS settings. But the maximum I can run the fans at, before their noise gets really annoying, lets me run 8 or so cores in summer and I am currently running 11.
I'm surprised. CPUs generate so little heat compared with GPUs. I have 12 GPUs and 100 CPU cores. Most are in my garage, and I just leave windows open. No, the glass ones, not the MS ones! One day I'll join the garage to the house so I can let the heat drift through in winter. As for noise, just install a bigger heatsink and fan (or even water cooling). My 130W Ryzen 9 CPUs make very little noise running flat out: two 6-inch fans on a 6x6x6 inch heatsink, hence slow, quiet fans. |
Joined: 15 May 09 Posts: 4541 Credit: 19,039,635 RAC: 18,944 |
I'm surprised. CPUs generate so little heat compared with GPUs. I have 12 GPUs and 100 CPU cores. Most are in my garage, and I just leave windows open.
Some crunchers live in hotter climes than the UK. Even in Cambridge we are getting days above 40C now. |
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918 |
Some crunchers live in hotter climes than UK. Even in Cambridge we are getting days above 40C now.
Even so, about 100W is nothing. A fridge gives that off. Open the window. |
Joined: 6 Aug 04 Posts: 195 Credit: 28,485,103 RAC: 9,827 |
Even so, about 100W is nothing. A fridge gives that off. Open the window.
Are you sure? An E-rated American-style fridge-freezer is about 350 kWh a year, that's 40W. A B-rated fridge is about 137 kWh a year, or 15W. Our i7-based PCs pull over 100W without the monitors, about 900 kWh a year. The waste heat from two PCs is warming our home office today, very slightly. |
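The kWh-per-year figures above convert to a continuous average draw by dividing by the 8,760 hours in a year; for example:

```shell
# 350 kWh over a year, expressed as a continuous average draw in watts
awk 'BEGIN { printf "%.0f W\n", 350 * 1000 / 8760 }'
# prints: 40 W
```

The same division turns the PCs' 900 kWh/year back into roughly the 100W continuous draw quoted.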
Joined: 1 Jan 07 Posts: 1061 Credit: 36,748,059 RAC: 5,647 |
A fridge is much bigger than a CPU, so the energy density is lower. |
Joined: 5 Aug 04 Posts: 1120 Credit: 17,202,915 RAC: 2,154 |
As for noise, just install a bigger heatsink and fan
My heat sink is about 5 inches high, 5 inches deep, and 4 inches wide, with half the RAM chips on each side. No room for a bigger fan. There is a temperature-controlled fan built onto one end of that heat sink. There are three fans at the front of the computer blowing air in, and two fans in the power supply.
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +79.0°C (high = +88.0°C, crit = +98.0°C)
Core 1: +65.0°C (high = +88.0°C, crit = +98.0°C)
Core 2: +63.0°C (high = +88.0°C, crit = +98.0°C)
Core 3: +66.0°C (high = +88.0°C, crit = +98.0°C)
Core 5: +73.0°C (high = +88.0°C, crit = +98.0°C)
Core 8: +74.0°C (high = +88.0°C, crit = +98.0°C)
Core 9: +65.0°C (high = +88.0°C, crit = +98.0°C)
Core 11: +79.0°C (high = +88.0°C, crit = +98.0°C)
Core 12: +61.0°C (high = +88.0°C, crit = +98.0°C)
amdgpu-pci-6500
Adapter: PCI adapter
vddgfx: +0.76 V
fan1: 2052 RPM (min = 1800 RPM, max = 6000 RPM)
edge: +32.0°C (crit = +97.0°C, hyst = -273.1°C)
power1: 4.25 W (cap = 25.00 W)
dell_smm-virtual-0
Adapter: Virtual device
fan1: 4252 RPM
fan2: 1135 RPM
fan3: 3869 RPM |
Joined: 15 May 09 Posts: 4541 Credit: 19,039,635 RAC: 18,944 |
I think we need a new thread dedicated to discussing hardware.. ;) |
Joined: 27 Mar 21 Posts: 79 Credit: 78,322,658 RAC: 1,085 |
On RAM upgrades: Unless obliged by a service contract to do so, don't buy RAM from the OEM (Dell, for instance); just buy RAM according to the CPU and mainboard specification. Intel ARK lists the maximum RAM capacity, which most likely requires more than one DIMM per channel, and the maximum RAM clock, which likely applies to 1 DIMM per channel or LRDIMMs (i.e. it may be lower with 2 DIMMs per channel in the case of plain RDIMMs).
On CPU temperature and heat output:
– If core clocks are high, hot-spot temperatures will be high, almost independently of the size of the heatsink and its intake air temperature.
– Take a look in the BIOS for power limits. They might be set needlessly high by default.
– There is a trend among consumer desktop mainboard BIOSes to apply too high a voltage by default; I hope that's not the case with workstation BIOSes. |
Joined: 27 Mar 21 Posts: 79 Credit: 78,322,658 RAC: 1,085 |
Hardware requirements for current "OpenIFS 43r3 Perturbed Surface" work:
The following items need to be taken into account, in descending order of concern:
1. Upload bandwidth of your internet link.
2. Disk space.
3. RAM capacity.
99. CPU. This one doesn't really matter, except of course that CPU core count influences how many tasks you may want to run in parallel, and that core count × core speed influences how many tasks you can complete per day at most. One or both of these factors (concurrent tasks, average rate of task completions) influence the sizing of items 1…3 in this list.
What I mean to say: while CPU-performance-related questions like "Should or shouldn't I use HT a.k.a. SMT?" and "How much processor cache should there be per task to avoid memory access bottlenecks?" are arguably interesting, they are unlikely to be the deciding questions regarding possible OIFS throughput on your equipment. (From here on, I am using "OIFS" as a shorthand for "current OpenIFS 43r3 Perturbed Surface work".)

1. Sizing of the upload bandwidth of your internet link
– Each OIFS task produces 1.72 GiB of compressed result data files which need to be uploaded.
– Take the sum of the average task completions per day of all of your computers behind a given internet link, multiply by 1.72 GiB per result, and you have the minimum uplink bandwidth required to sustain those average task completions per day.
– Now make an estimate of internet link downtimes and CPDN upload server downtimes, as a percentage over a longer time frame, and increase the figure for uplink bandwidth accordingly. (Problem: we can't predict these downtimes. But we can set a figure for a downtime which we aim to bridge without interruption of computation.) Also increase the figure for anything else you want to use the uplink for.
Most of us won't resize our internet link based on what we want to compute, so the consideration usually goes the other way around: take your uplink bandwidth, divide by 1.72 GiB (1.85 GB) per result, multiply by the estimated task duration, and you know how many tasks you can run concurrently in steady state in the optimum case. Reduce this figure for downtimes and other overhead.
Example:
– Let's say your uplink bandwidth is 7 Mb/s = 0.875 MB/s, and your average task duration is 15 h = 54,000 s.
– Then your uplink can sustain 0.875 MB/s / 1850 MB × 54,000 s ≈ 25 tasks running in parallel as long as there is no downtime.
– Let's say the CPDN upload server was down from a Friday night until the following Monday morning = 2.5 days, and you would like to clear the resulting upload backlog within 1.5 days after the outage. That would only work if you ran no more than 25 × 1.5/(2.5+1.5) ≈ 9 tasks in parallel (on average during the period from the beginning of the outage to the end of clearing your upload backlog).

2. Sizing of disk space
– On the BOINC server, the workunit property rsc_disk_bound is set to 7.0 GiB.
– The server will eventually stop assigning new tasks to a host, based on rsc_disk_bound and on what the client reports about disk capacity to the server. Unfortunately I'm unclear about the details: possible versus actual disk utilization; number of new tasks or number of the host's tasks in progress… Perhaps somebody can set us straight on how this works from the BOINC server's point of view.
– Of lesser concern for disk sizing: the BOINC client should stop starting tasks once the remaining allowed disk space becomes less than rsc_disk_bound.
– On a side note, the BOINC client will kill a running task if it notices that the files identifiable by the client as belonging to this task exceed rsc_disk_bound.
– Furthermore, there are the mentioned 1.72 GiB of result data. During longer outages of your internet link or of CPDN's upload server — or more generally, during periods in which you compute faster than you can upload — you need correspondingly more disk space to store these pending uploads.

3. Sizing of RAM
– You want RAM for the running tasks, some RAM for disk cache, and RAM for everything else that goes on on the computer.
– On the server, the workunit property rsc_memory_bound is set to ≈5.6 GiB.
– In my observation, on a stripped-down compute server which runs nothing but OIFS, the time-averaged resident memory size of OIFS tasks is on the order of 3.5 GiB, and the time-averaged disk cache footprint is on the order of 1.5 GiB per task, roughly. (So, 5 GiB together, per running task.)
– In my observation, via spot checks of completed tasks, the peak resident memory size of OIFS tasks was ≈4.8 GiB on dual-socket computers running a distributor's kernel, and ≈4.4 GiB on a single-socket desktop computer running a somewhat stripped-down self-compiled kernel. The OIFS executable is statically linked, so the difference in peak memory consumption can't be related to differences in installed libraries. Also, these peak sizes were consistent across a good number of completed tasks, and the different computers fetched work in the same time frames, so the difference is unlikely to come from workunit differences. I suspect the difference in peak resident memory sizes is caused by kernel and machine configuration, e.g. via per-CPU memory allocations.
– The BOINC client will not start new tasks / will put tasks into a waiting state when the actual used memory of all running tasks reaches the limit of memory BOINC is allowed to use.
– If there is swap space, the kernel will begin to swap from RAM to swap space when the resident memory of all processes on the system reaches available physical memory.
– If the kernel needs to swap frequently, the machine will basically become unresponsive.
– The kernel will start to kill random processes (memory hogs first) when the resident memory of all processes reaches the sum of available physical memory and swap space. But the machine will already have become unresponsive before that.
– On a side note, the BOINC client will kill a science task if its resident memory exceeds rsc_memory_bound, as far as I know. This should never happen, since rsc_memory_bound is supposed to be set big enough by the project administrator.
To summarize: supply about 5 GiB of RAM for each OIFS task you want to run concurrently, plus enough RAM for everything else that happens on the computer. The latter is equally important to get right, but not as straightforward. If you want to avoid task failures because of out-of-memory situations, supply enough swap space on top of enough RAM. In contrast, if you want to avoid the computer ever becoming unresponsive, supply plenty of RAM for all processes plus the system's filesystem caches, and maybe disable swap devices. |
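The uplink and RAM sizing arithmetic above can be sketched in one place. The numbers are the ones from the post (7 Mb/s uplink, 15 h tasks, ~1.85 GB per upload, ~5 GiB per running task); substitute your own link speed and task duration:

```shell
# Steady-state OIFS capacity estimate, no-downtime optimum case
uplink_MBps=0.875     # 7 Mb/s uplink
result_MB=1850        # ~1.72 GiB (1.85 GB) of uploads per task
task_s=54000          # ~15 h task duration
ram_per_task_GiB=5    # ~3.5 GiB resident + ~1.5 GiB disk cache

awk -v u="$uplink_MBps" -v r="$result_MB" -v t="$task_s" -v m="$ram_per_task_GiB" '
BEGIN {
  n = u / r * t                                       # tasks the uplink sustains
  printf "max concurrent tasks: %d\n", int(n)
  printf "RAM for those tasks:  %d GiB\n", int(n * m) # plus RAM for everything else
}'
```

With these inputs it prints 25 tasks, matching the ≈25 in the example; scale the task count down for outages as described above.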
Joined: 5 Aug 04 Posts: 1120 Credit: 17,202,915 RAC: 2,154 |
Even so, about 100W is nothing. A fridge gives that off. Open the window.
My machine is drawing 265 watts at the moment, running 12 Boinc tasks and the Firefox browser where I am typing this. It is wintertime, so I do not need to open a window. In the summertime I have a double window fan blowing outside air inside and windows open elsewhere, but when it is 90F outside, it is tough to keep the computer box cool enough to keep the processor cool enough unless I run the fans so fast as to drive me crazy. |
Joined: 29 Oct 17 Posts: 1050 Credit: 16,614,827 RAC: 12,088 |
The following items need to be taken into account, in descending order of concern:
As the developer of OpenIFS, I can say that order is not correct. RAM should be top, closely followed by CPU and upload capacity. Why? Because the current configurations used by CPDN are very low resolution and bordering on scientifically unimportant. The plan is to go to higher resolutions, which will require much more RAM in order to fit the model comfortably. And CPU, especially multiple cores, because the higher resolutions take longer to run. To give you an idea, the next resolution configuration up from the one currently being run will need ~12 GB, the one after that ~22 GB. As the resolution goes higher, the model timestep has to decrease, so runtimes get longer. Disk space: lowest priority, and most machines will have 100s of GB if not more available. Storage is cheap. In any system, one of these will be a bottleneck to throughput; I don't think we can generalize too much here, as it will depend on what individuals have. |
Joined: 14 Sep 08 Posts: 127 Credit: 42,959,629 RAC: 78,322 |
To give you an idea, the next resolution configuration up from the one currently being run will need ~12 GB, the one after that ~22 GB. As the resolution goes higher, the model timestep has to decrease so runtimes get longer.
This is very helpful, thanks. Guess I need to get more aggressive with memory upgrades now...
In any system, one of these will be a bottleneck to throughput, I don't think we can generalize here too much as it will depend on what individuals have.
Agreed. I can see @xii5ku probably has the same unfortunate copper-based "broadband" as I do, where I get pitiful upload bandwidth relative to download. The down:up ratio is like 50-100:1 here. :-( With the current oifs3 tasks, my upload link indeed saturates first, before I run out of memory, though that will likely change with the next resolutions, and that's good news for me. |
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918 |
Even so, about 100W is nothing. A fridge gives that off. Open the window.
Are you sure? An E-rated American-style fridge-freezer is about 350 kWh a year, that's 40W. A B-rated fridge is about 137 kWh a year, or 15W. Our i7-based PCs pull over 100W without the monitors, about 900 kWh a year. The waste heat from two PCs is warming our home office today, very slightly.
It gives off 100W while running; it depends how long it runs for. In hot weather it will be a lot. Dunno what an E rating is, but my fridge-freezer runs most of the time when the room is 25C. |
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918 |
A fridge is much bigger than a CPU, so the energy density is lower.
Irrelevant, the room the heat is going into is the same size. |
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918 |
My heat sink is about 5 inches high, 5 inches deep, and 4 inches wide with half the RAM chips on each side. No room for a bigger fan. There is a temperature-controlled fan built onto one end of that heat sink. There are three fans at the front of the computer blowing air in, two fans in the power supply.
I have two machines with a 6x6x6 inch cooler with two 6-inch fans on it, on Ryzen 9 3900XT CPUs. Can't hear the fans. If it won't fit, get a bigger case, or do as I do and just leave the side off. |
©2024 cpdn.org