climateprediction.net home page
Posts by fortran


1) Questions and Answers : Unix/Linux : New work for new machine? (Message 55927)
Posted 17 Mar 2017 by fortran
Post:
I thought I was reading off a post from 2017.

What is causing the OS dependence on results? I am guessing your models aren't in FORTRAN, like the ancient weather models.

In terms of numbers, does uptime also come into play? The average number of shutdowns my machines see is something like 2 per year (the power goes out here for longer than my batteries can last). Are Windows people also running 24/7/365?

I have seen some people working on using LLVM on Windows. And compiling Linux with LLVM still seems to be unusual. When I go to port this stuff to my GPUs, should I try both gcc and LLVM?

Oh, in my inventory, 4 of those AMD64 cores are an APU, which also has R7 graphics. So, effectively I have 2 R7-250s, not 1. And if needed, I may set up a compute server. A Ryzen 5 1600X with multiple RX460s looks like a nice platform, and shouldn't use too much electricity.
2) Questions and Answers : Unix/Linux : New work for new machine? (Message 55922)
Posted 17 Mar 2017 by fortran
Post:
I pulled the RX460 from one of my computers, as it was basically performing worse than an HD6450 card. I then looked at what was happening in Debian (in the vicinity of a code freeze). Amdgpu is nominally in the kernel, and support depends on what kernel is available. I am mostly running Debian/stable, but also keeping in mind that I want to jump to Devuan at a convenient point.

Libclc was the obvious culprit, but not the only one: libclc works in conjunction with Mesa3D, and support for Polaris was mostly coming from LLVM.

A couple of weeks ago, Mesa3D came out with 17.0.0 (and the 17.0.1 point release after that). As I was tracking the development of LLVM-4.0.0, I didn't spend time trying to compile Mesa3D. But a week or so ago, LLVM-4.0.0 was released, and consequently I have a copy of the LLVM family sitting in /usr/local. Trying to compile Mesa3D with LLVM-4, I ran across a small problem (which could be because I am not familiar with these libraries and tools). But an email message on the -dev mailing lists says there is a bug in LLVM-4, which seems likely to affect GPU work. It appears that in the near term Mesa3D will try to work around the problem, while hoping that LLVM fixes the bug "soon".

At some point, I will be able to have a Mesa3D set of functionality in /usr/local that is significantly newer than what is in Jessie or Jessie-backports. At that point, it would seem to make sense to go back to libclc and see if I can get anything at all working with the RX460 card.

There also seem to be reasons to hope for help when the Linux 4.10 kernel makes its way into backports (4.9 is there at the moment).


Lately I started to get a little dribble of work from ClimatePrediction, and then I read that you are going to stop producing every model for all of Windows/Mac/Linux, and instead assign models to an architecture based on the perceived universe of machines available.

Which bothers me, as I would have hoped that your results would be architecture neutral (sort of, maybe).

I am interested in local weather and climate change (I am at 56N and 120W). Among other things, I want to start modeling surface winds using WindNinja and another package whose name I haven't memorised, based on likely weather scenarios. So I asked Environment Canada about this, and that got me looking into statistical downscaling software. You read about some package that looks useful, and after joining, you find out it is binary only (Windows only). Or you find out there is source code, but you need to take approved classes before you are allowed to see it. Or ....

I've been doing numerical methods since 1980. I happen to live downwind of a 100+ MW windfarm, and I can get data from the windfarm to check my work against. The minor in my M.Eng. is statistical mechanics. I don't need to take some dumb class.

But sensitivity of model output to the operating system is going to influence how I try to get WindNinja and this other source code working with the AMD GPUs I have (1 HD5450, 2 HD6450s, 1 R7-250 and 1 RX460) along with 16 AMD64 CPU cores. I may be up to 22 CPU cores before I get the RX460 actually working.

If this isn't sufficient horsepower for a patient researcher (I am used to waiting days for a model to run), I will look at using BOINC and getting people in my region to devote cycles.

Apparently in the winter (winter is "ending" about now), most of our weather comes from the Gulf of Alaska. Bogoslof has put up two ash clouds in the last few weeks that I have seen in the news. Both times, the Alaska Volcano Observatory showed the clouds heading towards Japan, and not here.
3) Message boards : Number crunching : Shelterbelt design on farms (Message 54957)
Posted 18 Oct 2016 by fortran
Post:
I am misunderstanding number crunching then. If people know of other models with available source code that I should look at, I was hoping for suggestions. Or maybe they know FEniCS or WindNinja won't do what I want? Even if I find source code, if some of this needs to run on GPUs, that means I have to write code (probably OpenCL) for that part.

But, if someone wants to move this thread, okay.
4) Message boards : Number crunching : Shelterbelt design on farms (Message 54955)
Posted 18 Oct 2016 by fortran
Post:
No bites, eh?

I have a response from the wind farm, they want to know more about what I am doing. I still don't know what I am doing with certainty.

A friend brought up carbon sequestration.

There is an argument that everything growing on a farm should be planned. Which can mean planning where a tree gets planted, what kind of tree, how long we let it live, and how we manage its growth. One person in Manitoba is raising willows which are periodically "harvested" (coppicing or pollarding). In his instance, I believe the plan is to burn the willow produced.

In permaculture, a person might bury the wood (hügelkultur). Hügelkultur probably has more immediate use in terms of carbon sequestration.

Growing a raspberry bush isn't going to influence microclimate significantly. Growing a bur oak should influence microclimate a lot. Some places may be able to grow fast-growing species that need ongoing management, such as Paulownia (in an optimal climate, I believe it can grow to 20 feet in its first or second year).
5) Message boards : Number crunching : Trying to get Linux work (Message 54949)
Posted 17 Oct 2016 by fortran
Post:
And checking the BOINC manager, I see that 3 instances are off and running. ETA is only 3.5 days each. :-)
6) Message boards : Number crunching : Trying to get Linux work (Message 54948)
Posted 17 Oct 2016 by fortran
Post:
Ah, I see that linux-gate is a facility provided by the kernel.

http://stackoverflow.com/questions/19981862/what-are-ld-linux-so-2-and-linux-gate-so-1

So, everything appears to be resolved in the zip files I looked at.
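As a footnote, a one-liner makes the point that the vDSO is mapped in by the kernel rather than shipped in any package (Linux-specific; just a sanity check, not anything CPDN requires):

```shell
# linux-gate.so.1 is the vDSO, a small shared object the kernel maps into
# every 32-bit process at startup. It never exists as a file on disk,
# which is why ldd lists it without a path and no Debian package provides it.
# It shows up in any process's memory map:
grep vdso /proc/self/maps
```

On a typical Linux box this prints one `[vdso]` mapping line for the grep process itself.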

7) Message boards : Number crunching : Trying to get Linux work (Message 54947)
Posted 17 Oct 2016 by fortran
Post:
Well, something was downloaded. Now I can see if things will work, or what else is missing.

191 MB of wah2*zip and gz files.

One wah2*zip file has "data" in the name; it contains datain and jobs directories full of various files.

Three wah2_eu*zip files. One of those files has namelists, stashc and cpdc files. I presume the other 2 are similar.

One wah2_se*zip file which contains a shared object library. Or perhaps a person should say it is a wah2*8.12_i686-pc-linux-gnu.zip file, with se in the name.

There are 2 other i686 zips, with am and rm in the filename. They contain a single file, nominally the same as the name of the archive (minus the zip extension).

Running ldd against the shared object library, it lists 8 objects. The 6 in the middle are of the form
name => path
where the path involves a 32-bit designation, and does exist on this computer. The last object is just /lib/ld-linux.so.2 (no 32-bit designation in the path, but a 32-bit ld-linux does exist on the machine). The first object is
linux-gate.so.1
which does not exist on this computer, and is not listed in any Debian package. I presume this is part of ClimatePrediction?

Running ldd against the wah2am file, it sees 6 objects; the middle 4 are found with 32-bit indications in the path, and the first and last are the same as for the shared object.

And the wah2rm file is nominally the same as the wah2am file.
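For anyone repeating this check, a small filter over ldd's output prints only the unresolved objects, which is the main thing to verify before expecting the wah2 binaries to run. The library names below are made up for illustration:

```shell
# Sample ldd output (hypothetical library names), fed through a filter
# that prints only unresolved objects. An empty result means every
# dependency was found; in practice you would run:  ldd ./wah2am | awk ...
ldd_output='linux-gate.so.1 (0xb7f3c000)
libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb7e5d000)
libmissing.so.1 => not found'
printf '%s\n' "$ldd_output" | awk '/not found/ {print $1}'
# prints: libmissing.so.1
```

Each "not found" line names a 32-bit library that still needs installing.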
8) Message boards : Number crunching : Trying to get Linux work (Message 54946)
Posted 17 Oct 2016 by fortran
Post:
I just woke up, and put in my request for a unit or two.
9) Message boards : Number crunching : Shelterbelt design on farms (Message 54942)
Posted 17 Oct 2016 by fortran
Post:
Going back to the 1920s and 1930s, shelterbelts were important. As near as I can tell, all of the findings assume places where there is little or no elevation change, and the winds are never that strong anyway.

I live in the foothills of the Rocky Mountains, and about 5 miles upwind from me is a 100+ MW wind farm. And in the 40 or so years since I went to high school and seeded the pasture here, my upwind neighbours have been cutting down the aspen/poplar that nominally form the typical farm woodlot here.

I want to model what the wind direction/speed is likely to be here, and then design a shelterbelt for my dugout to minimize summer evaporation and maximize winter snow capture.

Applied Mathematics: Body and Soul likes FEniCS, and they seem to think it applicable to microweather problems. I have run across WindNinja, and started to build it. I have a bunch of data to build a DEM, but that isn't finished yet.

Are there other models I should be looking at?

At present, I have 16 CPU cores (all AMD64) and 4 GPUs.

Thanks.
10) Message boards : Number crunching : Trying to get Linux work (Message 54940)
Posted 17 Oct 2016 by fortran
Post:
Snow is falling here, which screwed up my plans (for working outside). So I have time to do other things, like getting ClimatePrediction working again after moving and hardware upgrades.

A year ago, I updated hardware, which seemed to make restarting ClimatePrediction an unusual case, at least for the people writing the BOINC modules. So nothing ever got downloaded. Les Bayliss has mentioned 32-bit requirements for AMD64 machines. I am running Debian, and multiarchitecture support is present here, and has been for quite a while.

Since the hardware change 6 months or so ago, the project directory in /var has been empty. Not surprisingly, this means no ClimatePrediction jobs are running.

With the recent (much earlier than expected) snow, I have had time to add various 32-bit libraries to my amd64 machine. And ClimatePrediction has still not started any jobs, or added any content to .../projects/..., so that I could at least run ldd to look for more missing 32-bit libraries.
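For anyone following along on Debian amd64, this is the sort of recipe I mean. The commented commands need root, and the package list is only a guess at a typical minimum, not what CPDN officially requires:

```shell
# Enable i386 multiarch and install common 32-bit runtime libraries
# (requires root; the package list is an illustrative guess):
#   dpkg --add-architecture i386
#   apt-get update
#   apt-get install libc6:i386 libstdc++6:i386 zlib1g:i386
#
# Check what is already in place without changing anything:
dpkg --print-foreign-architectures 2>/dev/null | grep -qx i386 \
  && echo "i386 multiarch: enabled" || echo "i386 multiarch: not enabled"
test -e /lib/ld-linux.so.2 \
  && echo "32-bit loader: present" || echo "32-bit loader: missing"
```

The second half is safe to run anywhere; it only reports the current state of the machine.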

Maybe benchmarks are needed. So, I had BOINC Manager redo benchmarks. Still no change (no BOINC jobs, no messages in the log to indicate why).

Today, I told the BOINC Manager to remove ClimatePrediction, and then I added it back in. Just the same as before: no content in the .../projects/... directory, no jobs listed, no storage taken, nothing noted in the BOINC log.

My foo has run out. I don't know what else is needed to find out what is missing to get jobs (if they are available) to download.

What am I doing wrong?

I recently posted something vaguely similar, but a search here produced nothing. Not even what I posted.
11) Questions and Answers : Unix/Linux : New work for new machine? (Message 54708)
Posted 25 Aug 2016 by fortran
Post:
I looked (a little) in the top thread. Obviously not far enough.

Debian is multiarch by default, at least for Intel/AMD type stuff. I haven't done anything to restrict 32 bit from this machine.

I have played a couple of times with the virtual machines Devuan seems to prefer, which come from Hashicorp/Atlas. Does a person need to set up something that way? Or just wait for 64 bit stuff to come along?

My plan was to run for something like a week with just a single core doing BOINC, then add one core, and so on. As this is an 8-core machine with only 4 floating point units, I wouldn't be surprised to see something happen at the 4/5 boundary. I think the most you can go to is 7 cores, as the last core needs to spend time feeding the GPU and doing other things.

I can easily forget about ClimatePrediction until I get through this startup period, as I can do SETI and World Community Grid jobs until then (SETI is sending GPU jobs as well).
12) Questions and Answers : Unix/Linux : New work for new machine? (Message 54706)
Posted 24 Aug 2016 by fortran
Post:
My server had been running Gentoo, and in trying to go back to Debian (and soon to Devuan), problems came up which seemed better handled by retiring this old machine (which had 8 GB of RAM on a dual-core AMD 4800+ CPU).

The new machine (well, motherboard and CPU) is now an 8-core FX-8320E with 16 GB of RAM, and I set aside 20 GB of disk for BOINC. This new machine also has an AMD graphics board with single precision GPU (OpenCL) support.

BOINC didn't seem to want to start with the same stuff that was here prior to the upgrades, so I had to start anew. In connecting to ClimatePrediction.net, I got a message that the project might not have any work for this kind of machine.

Yes, at the moment I am only allowing a single CPU to spend all day, every day doing BOINC.

But what is it that ClimatePrediction is looking for, if a 16 GB machine with a recent 8-core processor and 20 GB of disk isn't useful?
13) Message boards : Number crunching : Total Credit (Message 53826)
Posted 26 Mar 2016 by fortran
Post:
Just a small fish compared to some. I lost a minor amount of credit a few days ago, and then quite a bunch yesterday or the day before. Maybe not quite 50% in total.

Just providing data, not complaining.

Someone had noticed losses to Seti, I looked and I don't see anything similar. It's just ClimatePrediction.

Do any of you need the heat in the winter? I live where the Alaska Highway begins; we get a real winter. Well, not this year: we had maybe 3 snowfalls all winter, and 2 of those were in the last month or so. I think we are going to be rationing water big time this summer.

The computers crunching on BOINC are in my bedroom. The heat isn't bad, but one of the fans is going to die at some point. I'm starting to get used to the noisy fan. I'll build a fan controller with a fancy Maxim chip to fix that machine soonish, and put in a couple of microphones to monitor sound as well as temperature.
14) Questions and Answers : Unix/Linux : Multiple CP task management (Message 51423)
Posted 16 Feb 2015 by fortran
Post:
I am not complaining about how long it takes. It takes what it takes. I have been doing numerical methods for a long time, I don't have a problem with long run times.

I just noticed that this 1000-hour job was taking cycles in preference to a "normal" job, which runs around 1 or 2 days.

That normal job finished, and ClimatePrediction hasn't downloaded any more jobs. So I still have this one long job running (along with jobs from 2 other projects). Which is fine.

I just thought that if this sort of situation is common, it might be better if the short job finished earlier. But from the early responses in this thread, that ability is not present in BOINC, or in ClimatePrediction's use of BOINC. Which is fine; I can put a manual suspend on the long job to get the short job through.
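If anyone wants to script the suspend instead of clicking in the Manager, boinccmd can do it per task. This is only a sketch: the task name is a placeholder (get the real one from `boinccmd --get_tasks`), and it assumes a running local BOINC client:

```shell
# Small wrapper to suspend/resume/abort one CPDN task by name via boinccmd.
# The task name argument is whatever `boinccmd --get_tasks` reports;
# e.g.:  cpdn_task hadam3p_eu_xxxx suspend   (placeholder name)
cpdn_task() {
  # usage: cpdn_task <task_name> <suspend|resume|abort>
  boinccmd --task http://climateprediction.net/ "$1" "$2"
}
```

Resuming later is the same call with `resume`, so a cron job could park the long model during working hours.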

I suppose some people run these models to see the pictures. I don't even know if pictures are available. I just run the models.

Occasionally I do some Monte Carlo stuff, and can chew up a few hours of CPU time with that, and BOINC waits, which is what it should do. But BOINC is the biggest consistent load my computer sees.
15) Questions and Answers : Unix/Linux : Multiple CP task management (Message 51420)
Posted 15 Feb 2015 by fortran
Post:
This long job is a hadam3p. It is 192 hours in, 710 hours to go. So, inaccurate time estimation isn't the problem.

An AMD 64 X2 4800+ dual core. Not a new CPU, but it beats the heck out of the VAX 11/785 I did my M.Eng. on in the mid 1980s.

I am going to put in a new machine that should be about 30% faster, which, against 900 to 1000 hours on my old machine, still leaves a long job.
16) Questions and Answers : Unix/Linux : Multiple CP task management (Message 51414)
Posted 14 Feb 2015 by fortran
Post:
Computer runs all the time, and BOINC can use all the CPU if there are no tasks running in the foreground, so to speak.
17) Questions and Answers : Unix/Linux : Multiple CP task management (Message 51407)
Posted 13 Feb 2015 by fortran
Post:
I had juggled those settings in the past. I'll just go with the manual suspend to see how that works for now. Thanks.
18) Questions and Answers : Unix/Linux : Multiple CP task management (Message 51405)
Posted 13 Feb 2015 by fortran
Post:
Sorry if this is already asked, nothing jumped out looking at topics.

Not too long ago, I started seeing some tasks with estimated running times on the order of 1000 hours. I had mostly been seeing tasks of about 1 day. My machine has 2 cores, and hence if BOINC tasks are running, I have at most 2 tasks running.

If there is one CP task running and one other task running, at the current time the running CP task is this 1000-hour one. Which means the other task (about half done) just sits there. I have manually suspended the big task to get the other task close to completion, and then I will release the hold in the hope that it will manage to get some time "accidentally" (luck of the round-robin draw, so to speak).

But is there something else I should do to keep this small task from being stalled by the long task?
19) Questions and Answers : Unix/Linux : GUI RPC bind failed: 98 (Message 31415)
Posted 16 Nov 2007 by fortran
Post:

Sounds like it's another example of BOINC's network sensitivity. There are a bunch of similar issues logged (some are duplicates), could you scan through them and document your problem in the one which is closest to your experience?

http://boinc.berkeley.edu/trac/ticket/171
http://boinc.berkeley.edu/trac/ticket/113
http://boinc.berkeley.edu/trac/ticket/282
http://boinc.berkeley.edu/trac/ticket/286
http://boinc.berkeley.edu/trac/ticket/206


It looks like 113. 171 is a duplicate of 113.

It is a DNS (and other?) related problem. It showed up in 5.8.15 (I think), and is noted to still happen with 5.10.28 (I have 5.10.27 here). It sounds complicated enough that nobody is going to try to fix it specifically, but it might go away as some of the component bugs contributing to it slowly get squashed.
20) Questions and Answers : Unix/Linux : GUI RPC bind failed: 98 (Message 31403)
Posted 16 Nov 2007 by fortran
Post:
Yesterday in the late afternoon or early evening, my network connection went down. I had 2 ClimatePrediction models running here (dual core) and some SETI ones. In the course of trying to get my connection going again, I unplugged my firewall/router and ADSL modems a number of times. Since on my LAN I get my IP from the firewall/router, the computer running these models noticed there was a network problem. And for whatever reason, if the connection goes down, the NIC is taken down by the OS (Debian unstable). I suspect this is what led to the GUI RPC bind error, which is probably why all of the models I had (ClimatePrediction and SETI) died with errors. Which is kind of sad, as the 2 ClimatePrediction models would probably have run to completion by the end of the month.

Am I correct in guessing as to what happened? Was there anything I should have done to keep this from happening?




©2021 climateprediction.net