Message boards : Number crunching : New work Discussion
Joined: 9 Oct 04 Posts: 81 Credit: 69,642,427 RAC: 6,660
Dear Felix, I know I shouldn't write anything, but you have 227 tasks in progress: 73 N144 tasks and 154 N216 tasks (quick calculation). Your computer finishes an N144 task in about 300,000 s (73 × 300,000 s = 21,900,000 s) and an N216 task in about 950,000 s (154 × 950,000 s = 146,300,000 s). Your VM has 4 processors, so it will finish all of these tasks in about 487 days, well past the 365-day deadline: about 122 days over, meaning roughly 33% of your tasks will not finish in time. Would you mind releasing about 33% of your tasks, preferably the N216 ones? Just kill them, so that other computers (currently idle on CPDN work) can take them and the batches are finished in a time frame that is useful to the researchers! Thanks a lot, klepel
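For reference, here is a minimal sketch of the arithmetic in the post above. All inputs are the figures quoted in the post (task counts, per-task runtimes, 4 cores, 365-day deadline); they are assumptions taken from the post itself, not measurements.

```python
# Rough completion-time estimate for the queue described above.
# All inputs are the figures quoted in the post, not measured values.

SECONDS_PER_DAY = 86_400

n144_tasks, n144_runtime_s = 73, 300_000    # ~300,000 s per N144 task (quoted)
n216_tasks, n216_runtime_s = 154, 950_000   # ~950,000 s per N216 task (quoted)
cores = 4                                    # the VM reports 4 processors
deadline_days = 365

total_cpu_seconds = n144_tasks * n144_runtime_s + n216_tasks * n216_runtime_s
wall_clock_days = total_cpu_seconds / cores / SECONDS_PER_DAY

print(f"Total CPU time needed : {total_cpu_seconds:,} s")
print(f"Wall-clock estimate   : {wall_clock_days:.0f} days on {cores} cores")
print(f"Days past deadline    : {wall_clock_days - deadline_days:.0f}")
```

Running this reproduces the numbers in the post: roughly 487 days of wall-clock time, about 122 days past the deadline.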
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
"Your VM has 4 processors, so it will finish all of these tasks in about 487 days, well past the 365-day deadline: about 122 days over, meaning roughly 33% of your tasks will not finish in time."

That simply shouldn't happen. Can't we have a decent scheduler?
Joined: 15 May 09 Posts: 4472 Credit: 18,448,326 RAC: 22,385
"Your VM has 4 processors, so it will finish all of these tasks in about 487 days, well past the 365-day deadline: about 122 days over, meaning roughly 33% of your tasks will not finish in time."

"That simply shouldn't happen. Can't we have a decent scheduler?"

There is something about running the client in a VM or under WINE that seems to mess with the prediction of time taken, unless one manually runs CPU benchmarks from the Tools menu before downloading tasks. It needs someone who is bothered enough by it to put in a feature request on GitHub asking for the benchmarks to be run by the client every time it starts up. (Unless Richard is going to correct me on this with more information.)
Joined: 1 Jan 07 Posts: 1033 Credit: 36,250,502 RAC: 10,950
"(Unless Richard is going to correct me on this with more information.)"

No, I'll give VMs a miss - I stick to native silicon. But I have noticed that under native Linux, the CPDN server has a tendency to allocate a task for every CPU core reported by the host. That's probably because of the project's 'feast or famine' batch workload: I'm usually fetching work when I have zero CPDN work on hand, and I've seen Dave's report that a new batch has become available. The CPDN server seems to disregard the ongoing work from other projects. It doesn't worry me, because none of the recent batches have taken more than about 10 days to complete: I just plod through them, one or two at a time - never a deadline problem.
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
"There is something about running the client in a VM or under WINE that seems to mess with the prediction of time taken, unless one manually runs CPU benchmarks from the Tools menu before downloading tasks. It needs someone who is bothered enough by it to put in a feature request on GitHub asking for the benchmarks to be run by the client every time it starts up. (Unless Richard is going to correct me on this with more information.)"

Unless you've changed how many cores the VM gets, why would the last benchmark be invalid?
Joined: 15 May 09 Posts: 4472 Credit: 18,448,326 RAC: 22,385
I don't know, but it often is. One of the problems I know of is the mix of task types, occasionally with inaccurate estimates from the project of the computing power a task needs, though that usually gets spotted on the project's testing branch. I just know that if a particular BOINC installation, whether the Windows or the Linux client, has not been active for some time, the estimates are often way out, though the current Windows tasks I am running look like they were spot on. I'm not quite sure why the errors are orders of magnitude; I just know that forcing CPU benchmarks before downloading work stops the problem.
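For anyone who would rather script the workaround described above than click through the Manager's Tools menu, a minimal sketch is below. It assumes the standard boinccmd command-line tool is on the PATH and the client is already attached to CPDN; the project URL and the two-minute wait are illustrative assumptions, not project guidance.

```python
# Sketch: refresh CPU benchmarks before asking CPDN for work, so duration
# estimates aren't based on stale numbers. Assumes boinccmd is installed and
# the BOINC client is running; the URL below is an example - use whatever URL
# your client shows for the project.
import subprocess
import time

CPDN_URL = "https://www.cpdn.org/"  # illustrative; adjust to your client's attached URL

# Re-run the client's CPU benchmarks.
subprocess.run(["boinccmd", "--run_benchmarks"], check=True)

# Give the benchmarks a couple of minutes to finish (illustrative delay).
time.sleep(120)

# Then ask the client to contact the CPDN scheduler and fetch work.
subprocess.run(["boinccmd", "--project", CPDN_URL, "update"], check=True)
```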
Joined: 15 May 09 Posts: 4472 Credit: 18,448,326 RAC: 22,385
It is hoped that the NZ area (Windows) tasks will appear by the end of the week. The plan is not to wait until the three testing tasks complete: they have all gone past the halfway point and are returning zips as expected, which is enough to confirm the ancillary files are OK.
Joined: 31 Dec 07 Posts: 1152 Credit: 22,302,272 RAC: 5,703
GOOD!!! Maybe I can grab some. It's been so long that I have just about given up on CPDN.
Joined: 15 May 09 Posts: 4472 Credit: 18,448,326 RAC: 22,385
"GOOD!!! Maybe I can grab some. It's been so long that I have just about given up on CPDN."

When they arrive, I doubt you will have more than a few hours to get in there before they are all gone. Also, with the last lot of Windows work some people got transient HTTP errors because of how hard the servers were hammered. However, the tasks still downloaded eventually once they had been assigned to those computers.
Joined: 12 Apr 21 Posts: 293 Credit: 14,195,741 RAC: 15,922
You can actually get an extremely large number of tasks regardless of benchmarks, if you want. I've done it on macOS and Linux, and I'm pretty sure you can do it on Windows. I'm finally finished with the N216s and N144s but still have about a month left to finish processing the HadCM3s from a few months ago. I won't do it again because I realized I got too many. It's a temptation sometimes, given the sporadic nature of CPDN tasks, but if you're going to do it you've got to do it sensibly. A new batch does seem to get released once a month or so, so loading up on 6 to 8 weeks of work should keep one crunching continuously.

The solution to overloading needs to be server side, such as a maximum number of tasks in progress per PC, which I believe some projects have. This project would be much better if it cut deadlines in half, put a limit on the maximum tasks in progress, and severely limited the number of tasks that can be assigned to a PC with more than a few consecutive errors.
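The kind of per-host check a cap like that implies is simple to express. Below is a hypothetical sketch, not the real BOINC scheduler logic: the class, function, runtimes, and thresholds are all illustrative assumptions, using the per-task figures quoted earlier in the thread.

```python
# Hypothetical sketch of a per-host feasibility check, in the spirit of the
# server-side limit suggested above. This is NOT the actual BOINC scheduler;
# names and numbers are illustrative only.
from dataclasses import dataclass

SECONDS_PER_DAY = 86_400

@dataclass
class Task:
    est_runtime_s: float   # estimated CPU seconds for this task
    deadline_days: float   # days until this task is due

def host_can_take(queued: list[Task], candidate: Task, cores: int) -> bool:
    """Return True if the candidate task can plausibly finish before its
    deadline, assuming queued work is spread evenly across the host's cores."""
    backlog_s = sum(t.est_runtime_s for t in queued) / cores
    finish_days = (backlog_s + candidate.est_runtime_s / cores) / SECONDS_PER_DAY
    return finish_days <= candidate.deadline_days

# Example with the N216 runtime quoted earlier in the thread:
queue = [Task(950_000, 365)] * 150                          # 150 N216-sized tasks already assigned
print(host_can_take(queue, Task(950_000, 365), cores=4))    # False: the host would blow the deadline
```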
Joined: 15 May 09 Posts: 4472 Credit: 18,448,326 RAC: 22,385
And there is still more Mac work in testing, though as fewer and fewer Macs support 32-bit computing this seems a bit strange to me. I wonder if there is a Mac equivalent of installing the needed 32-bit libraries on a Linux installation, something that would let 64-bit Macs run CPDN work?
Joined: 7 Sep 16 Posts: 262 Credit: 34,400,503 RAC: 15,604
"And there is still more Mac work in testing, though as fewer and fewer Macs support 32-bit computing this seems a bit strange to me. I wonder if there is a Mac equivalent of installing the needed 32-bit libraries on a Linux installation, something that would let 64-bit Macs run CPDN work?"

To the best of my knowledge, there's no way to enable 32-bit binary support on the "we've removed 32-bit support" versions of macOS. And Rosetta, the x86-to-ARM translation layer for the Apple Silicon machines, definitely doesn't support 32-bit. They're relying heavily on VMs and a handful of ancient machines, and I don't think it makes any sense either. There aren't that many of us who run old macOS VMs!
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
"You can actually get an extremely large number of tasks regardless of benchmarks, if you want. I've done it on macOS and Linux, and I'm pretty sure you can do it on Windows. ... A new batch does seem to get released once a month or so, so loading up on 6 to 8 weeks of work should keep one crunching continuously."

I only set a 2-day buffer, so I'll get one task per core. I don't like having enormous queues on my machine; it seems wasteful. But I do "tickle" the server every hour and 5 minutes (an hour is the server's limit).

"The solution to overloading needs to be server side, such as a maximum number of tasks in progress per PC, which I believe some projects have."

I can't believe a server with so few tasks can ever get overloaded. There's virtually nothing getting done here compared with the big projects.

"This project would be much better if it cut deadlines in half, put a limit on the maximum tasks in progress, and severely limited the number of tasks that can be assigned to a PC with more than a few consecutive errors."

The deadlines especially. It's insanity to tell my computer to get it done in a year when we all know they need them in a month. BOINC just leaves them sitting there and does other stuff!
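For anyone curious how the "tickle" above might be automated, a minimal sketch is below. The 65-minute interval comes from the post (an hour is said to be the server's limit); the project URL is an illustrative assumption, and boinccmd is assumed to be installed alongside a running client.

```python
# Sketch: nudge the CPDN scheduler every 65 minutes so the client asks for new
# work as soon as a batch appears. Assumes boinccmd is on PATH and the client
# is attached to CPDN; the URL is an example - use the one your client reports.
import subprocess
import time

CPDN_URL = "https://www.cpdn.org/"   # illustrative project URL
INTERVAL_S = 65 * 60                 # an hour is the server's back-off limit, so wait a bit longer

while True:
    # Ask the local client to contact the project's scheduler now.
    subprocess.run(["boinccmd", "--project", CPDN_URL, "update"], check=False)
    time.sleep(INTERVAL_S)
```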
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
"And there is still more Mac work in testing, though as fewer and fewer Macs support 32-bit computing this seems a bit strange to me. I wonder if there is a Mac equivalent of installing the needed 32-bit libraries on a Linux installation, something that would let 64-bit Macs run CPDN work?"

Surely a 64-bit Mac will run a 32-bit program? Windows has no problem with this at all. Apple aren't that stupid, are they?
Joined: 7 Sep 16 Posts: 262 Credit: 34,400,503 RAC: 15,604
"Surely a 64-bit Mac will run a 32-bit program? Windows has no problem with this at all. Apple aren't that stupid, are they?"

Not anymore. Apple removed 32-bit support from their x86 OS a few versions back, and the Apple Silicon stuff is pure 64-bit, AArch64; the chips don't even implement AArch32 modes, as far as I know. Apple is 64-bit pure and has been for quite a few years now. You're running something 4-5 years old in order to still have 32-bit support. I've got a bunch of VMs of it... but I can't imagine much work is getting done on those tasks on actual "MacOS X on the iron."
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
"Not anymore. Apple removed 32-bit support from their x86 OS a few versions back, and the Apple Silicon stuff is pure 64-bit, AArch64; the chips don't even implement AArch32 modes, as far as I know. Apple is 64-bit pure and has been for quite a few years now."

Apple has always been insane like this, cutting off old stuff straight away. Let's just make the customer buy new everything; they'll fall for it. Windows just works, with any program. We can use old programs we bought ages ago that are no longer made. Apple steals this from you: they stop you using the program you paid for. A lot of programs are still being written in 32-bit, because they just don't need 64-bit. Why waste processor cycles?
Joined: 7 Sep 16 Posts: 262 Credit: 34,400,503 RAC: 15,604
"Apple has always been insane like this, cutting off old stuff straight away. Let's just make the customer buy new everything; they'll fall for it. Windows just works, with any program. We can use old programs we bought ages ago that are no longer made. Apple steals this from you: they stop you using the program you paid for. A lot of programs are still being written in 32-bit, because they just don't need 64-bit. Why waste processor cycles?"

And Apple very explicitly doesn't care about the old stuff that's no longer maintained. They're clear on this; they warned users about 32-bit stuff stopping working several years before turning off support. One can always virtualize older versions, but it means Apple isn't carrying around as much cruft and as many ancient, uninspected corners of the OS.

As for performance, 64-bit mode gives you a lot more registers, so performance tends to be better. If the reduction in cache density (64-bit vs 32-bit pointers) is a problem, you can always use the x32 ABI - operating with 64-bit registers but 32-bit pointers. A lot of HPC is done like that, though most desktop OSes don't have the x32 libraries lying around, only the 32-bit and 64-bit options.

And your claiming that this is yet another reason you'll never, ever own Apple products is entirely beside the point of the discussion, which is that researchers are still releasing 32-bit macOS binaries that an increasingly small number of computers can run - outside the few of us running VMs on Windows or Linux to get some compute done on those tasks without erroring out. But it would be very, very useful to be able to detect computers that fail every task of a given type and stop sending them tasks. It's just wasting everyone's time and resources.
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
Why should you not be able to use unmaintained software? Somebody writes a really nice program you use, it works, it doesn't need fixing, and it's no longer made 10 years later. Why should you not be able to keep using it? How often do you replace your car? I have a 20-year-old car, I have Windows machines 12 years old, and guess what, they all run the latest Windows 11, running 32-bit and 64-bit programs.

You've fallen for the throwaway society hook, line and sinker. And thanks for helping me with my point: you say there are people still releasing 32-bit macOS binaries, and Apple don't care. And yet you buy stuff from these criminals.
Joined: 7 Sep 16 Posts: 262 Credit: 34,400,503 RAC: 15,604
"You've fallen for the throwaway society hook, line and sinker. And thanks for helping me with my point: you say there are people still releasing 32-bit macOS binaries, and Apple don't care. And yet you buy stuff from these criminals."

I run Linux as my daily driver (more Qubes lately, but not yet on everything; ARM support doesn't exist yet), with macOS VIRTUAL MACHINES on my Linux compute nodes to run tasks that have particularly weird requirements. I don't run obsolete, no-longer-patched OSes as my daily drivers by any means.

Your axe to grind with Apple is well noted, and is increasingly irrelevant here. Are you going to rewrite the 32-bit macOS stuff for 64-bit and ARM? No? Then quit complaining about how a company you don't buy anything from is doing things you don't like. Bitching about it here isn't going to change a single thing about either the tasks we run or Apple's practices.
Joined: 9 Oct 20 Posts: 690 Credit: 4,391,754 RAC: 6,918
"Your axe to grind with Apple is well noted, and is increasingly irrelevant here. Are you going to rewrite the 32-bit macOS stuff for 64-bit and ARM? No? Then quit complaining about how a company you don't buy anything from is doing things you don't like. Bitching about it here isn't going to change a single thing about either the tasks we run or Apple's practices."

Might as well point out to everyone using Apple products that they shouldn't be. Selling things that become obsolete after a short time is unethical, criminal, and environmentally unfriendly. I run Windows and only Windows, because it's the easiest to use, the most widely supported for programs, and it doesn't ditch its users. I see no need to use anything else. Windows 11 on 7 PCs here.