Using GPUs for number crunching

Profile Iain Inglis
Volunteer moderator

Joined: 16 Jan 10
Posts: 1079
Credit: 6,904,049
RAC: 6,657
Message 48741 - Posted: 8 Apr 2014, 15:36:35 UTC

Can 'efficiency' be used to choose between a climate model and, for example, testing a number to see whether it's prime? If knowledge is proportional to work then it might be amusing to attend a seminar on "Entropy and Knowledge" given by a philosopher and a physicist. In fact, if I remember correctly, the CPDN project leader Myles Allen is both - he could do it!

Or perhaps the BOINC credit unit, the cobblestone, is the measure of knowledge and we should simply devote our efforts to the project that produces the most credits.

I suspect that if minimising energy consumption is the objective then the best thing to do is to avoid distributed computing altogether.
ID: 48741
Profile Ananas
Volunteer moderator

Joined: 31 Oct 04
Posts: 336
Credit: 3,316,482
RAC: 0
Message 48743 - Posted: 9 Apr 2014, 21:17:38 UTC

The developer of Infernal (used by RNA-World) tested a GPU version, and of course the raw calculation was faster - but more time was lost loading the data into, and retrieving the results from, GPU memory than was saved by crunching on the GPU.

Even if someone took on this job for CPDN, my guess is that CPDN would have an even worse savings (GPU) to expense (bus) ratio.
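
A back-of-the-envelope model of that trade-off, in C - all the numbers (working set, PCIe bandwidth, per-step FLOP count, CPU/GPU rates) are illustrative assumptions, not CPDN measurements:

    /* Back-of-envelope model: the GPU kernel itself is faster, but the
       data must cross the PCIe bus twice (in and out).  All numbers are
       illustrative assumptions, not CPDN measurements. */
    #include <stdio.h>

    int main(void)
    {
        double bytes    = 512e6;   /* assumed working set per step: 512 MB */
        double pcie_bw  = 8e9;     /* assumed PCIe 2.0 x16: ~8 GB/s        */
        double flops    = 10e9;    /* assumed work per step: 10 GFLOP      */
        double cpu_rate = 100e9;   /* assumed CPU: 100 GFLOPS (DP)         */
        double gpu_rate = 210e9;   /* assumed GPU: 210 GFLOPS (DP)         */

        double cpu_time = flops / cpu_rate;
        double gpu_time = flops / gpu_rate + 2.0 * bytes / pcie_bw;

        printf("CPU: %.3f s   GPU incl. transfers: %.3f s\n",
               cpu_time, gpu_time);
        /* 0.100 s vs 0.176 s: the 2x-faster GPU loses, because the
           transfer term dominates when each step does modest work. */
        return 0;
    }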

Btw, I don't think Fortran is a fundamental obstacle: Fortran can certainly call functions in .so or .dll files compiled from C sources.
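
A minimal sketch of that mechanism: a C routine compiled into a shared library, callable from Fortran through ISO_C_BINDING. The name gpu_saxpy_stub and the build line are hypothetical; the Fortran side is shown in the comment.

    /* Sketch: a C routine compiled into a shared library
       (gcc -shared -fPIC -o libgpuwork.so gpuwork.c) that Fortran can
       call through ISO_C_BINDING.  The name gpu_saxpy_stub is invented.

       ! Fortran side (sketch):
       ! interface
       !   subroutine gpu_saxpy_stub(n, a, x, y) bind(c)
       !     use iso_c_binding
       !     integer(c_int), value :: n
       !     real(c_double), value :: a
       !     real(c_double) :: x(*), y(*)
       !   end subroutine
       ! end interface                                                 */
    void gpu_saxpy_stub(int n, double a, const double *x, double *y)
    {
        /* A real version would launch a GPU kernel here; the stub does
           the work on the CPU so the interface can be tested first. */
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }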
ID: 48743
old_user714979

Joined: 28 Mar 14
Posts: 7
Credit: 47,798
RAC: 0
Message 48745 - Posted: 10 Apr 2014, 1:46:48 UTC - in response to Message 48739.  

Well, AFAIK all currently active climate models use SSE2 optimizations, and my guess is this means they're using double precision. The Fortran compiler linked a few posts back targets CUDA, and Nvidia cards have abysmally poor double-precision speed - only 1/24 of single-precision performance unless you pay $$$$ for the professional cards - so even a top-end Nvidia GTX 780 Ti only manages 210 GFLOPS at most. A quad-core (8 threads with HT) CPU, on the other hand, is around 100 GFLOPS. Meaning even best-case the Nvidia GPU will only be 2x faster than the CPU. In reality even 50% of peak performance on the GPU can be optimistic, meaning your "slow" CPU may outperform your "fast" GPU.
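
The 210 GFLOPS figure follows directly from the card's published specs; here is the arithmetic as a C snippet (shader count, FLOPs per cycle and clock are assumptions taken from spec sheets):

    /* Reproducing the arithmetic behind "only manages 210 GFLOPS":
       2880 shaders x 2 SP FLOPs/cycle at ~0.875 GHz, with double
       precision capped at 1/24 of the single-precision rate. */
    #include <stdio.h>

    int main(void)
    {
        double shaders = 2880.0, flops_per_cycle = 2.0, clock_ghz = 0.875;
        double sp = shaders * flops_per_cycle * clock_ghz;  /* GFLOPS, SP */
        double dp = sp / 24.0;                              /* GFLOPS, DP */
        printf("SP: %.0f GFLOPS   DP: %.0f GFLOPS\n", sp, dp);
        /* Prints SP: 5040, DP: 210 - only ~2x the ~100 GFLOPS assumed
           above for a fast quad-core CPU. */
        return 0;
    }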

So, unless most of the calculations can use single precision, a CUDA version of CPDN is a waste of development time.

Instead of CUDA, an OpenCL compiler would be more interesting, since OpenCL also works with the much faster (in double precision) AMD GPUs. But even with this additional speed, it's still unlikely that a climate model would run faster on a GPU than on a CPU.

I'm actually moving on from the Nvidia card and, once prices settle, will probably try for an R9 280X as a compromise between my wish list and what I can afford. I hope to pair an E3-1275 v3 with my new Asus P9D-WS motherboard (which in theory supports 4x CrossFire, ECC, RAID, etc.). In the distant future, when the price of these cards crashes on eBay, I'll probably move to 2x or 3x CrossFire to stretch the life of the system, or sooner if I need a performance upgrade. This is practically trailing-edge hardware for a gamer, but as an older gamer I'm looking for reliability over performance and have more requirements than pure gaming. Although I'm not into coin mining.

The R9 280X does double precision at 1/4 of its single-precision rate. The AMD trade-off is heat for performance. This is the point where the efficiency of a ~90W quad-core Xeon running 8 work units against a GPU could begin with a comparison of the performance/power ratio. Will any GPUs be more efficient than the CPU? As I've yet to buy the graphics card there is some flexibility, but 1 GByte cards are not going to be acceptable.
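
A sketch of that performance/power comparison in C, using assumed paper figures (~90 W and ~100 GFLOPS DP for the Xeon; ~250 W for the R9 280X, with DP at 1/4 of an assumed ~4096 GFLOPS SP rate):

    /* Performance-per-watt comparison with assumed paper figures. */
    #include <stdio.h>

    int main(void)
    {
        double xeon_gflops  = 100.0, xeon_watts = 90.0;
        double r9_sp_gflops = 4096.0;
        double r9_dp_gflops = r9_sp_gflops / 4.0;   /* 1/4-rate DP */
        double r9_watts     = 250.0;

        printf("Xeon:    %.1f DP GFLOPS/W\n", xeon_gflops / xeon_watts);
        printf("R9 280X: %.1f DP GFLOPS/W\n", r9_dp_gflops / r9_watts);
        /* ~1.1 vs ~4.1 GFLOPS/W on paper - but only if the code can
           keep the GPU busy, which is exactly what is in question. */
        return 0;
    }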

I was aware of the performance problems of the Nvidia line, but it gave me some crude numbers to show CPU vs GPU performance. Plus there was a commercial FORTRAN compiler available. Some comments in these GPU-related threads seemed concerned that FORTRAN was not available to support GPU hardware.
ID: 48745
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 48746 - Posted: 10 Apr 2014, 2:55:47 UTC - in response to Message 48745.  

FORTRAN has been able to support GPUs for several years - just not at the capabilities required by these professional climate models (either double precision, or 80 bits wide).
And climate models are mostly about serial processing, not parallel: you can't calculate tomorrow's weather before first calculating today's. And on and on.
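
A toy C sketch of that serial dependence (the diffusion update is a stand-in for the real physics): the outer timestep loop is inherently sequential, and only the work inside one step can be spread across cores or a GPU.

    /* Toy time-stepping loop.  The outer loop over t cannot be
       parallelised; only the per-cell work inside one step can. */
    #include <stdio.h>
    #define NCELLS 1024

    static void step(double next[], const double prev[])
    {
        next[0] = prev[0];
        next[NCELLS - 1] = prev[NCELLS - 1];
        for (int i = 1; i < NCELLS - 1; ++i)   /* parallelisable per cell */
            next[i] = prev[i] + 0.1 * (prev[i-1] - 2.0*prev[i] + prev[i+1]);
    }

    int main(void)
    {
        static double state[NCELLS], buf[NCELLS];
        state[NCELLS / 2] = 1.0;                /* a single spike of heat */

        for (int t = 0; t < 1000; ++t) {        /* inherently sequential: */
            step(buf, state);                   /* step t+1 needs step    */
            for (int i = 0; i < NCELLS; ++i)    /* t's completed state    */
                state[i] = buf[i];
        }
        printf("centre value after 1000 steps: %g\n", state[NCELLS / 2]);
        return 0;
    }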

ID: 48746
Profile JIM

Joined: 31 Dec 07
Posts: 1152
Credit: 22,053,321
RAC: 4,417
Message 48747 - Posted: 10 Apr 2014, 3:39:09 UTC

This issue just won't stay dead. Every time you think it has been safely staked through the heart, it rises from the grave like Dracula in one of those old Hammer movies.

ID: 48747
old_user714979

Joined: 28 Mar 14
Posts: 7
Credit: 47,798
RAC: 0
Message 48748 - Posted: 10 Apr 2014, 5:27:48 UTC - in response to Message 48738.  

Hi Volvo

The CPDN models come from the UK Met Office, where they consist of a version of the Unified Model. CPDN then adapts these models for its own use, for example deciding on the precise parameter values for particular experiments and compiling the models for the three platforms: Windows, Linux and Mac. Further CPDN adaptations can include time-slicing long models so that different computers take on different sections, which then all have to be stitched together.

But they all still consist of the Unified Model, which the Met Office has adapted and developed continuously for years. The Met Office has a team of developers working on this, as do the small number of other institutions that have developed models. I've seen a list of the names of one of these teams; it filled a computer screen. I also know that these organisations employ ace programmers.

Using GPUs for weather modelling is not new. Here are some random samples from Google:
http://www.mmm.ucar.edu/wrf/WG2/michalakes_lspp.pdf
http://www.nvidia.com/content/PDF/sc_2010/theater/Govett_SC10.pdf
http://data1.gfdl.noaa.gov/multi-core/2011/presentations/Govett_Successes%20for%20Challenges%20Using%20GPUs%20for%20Weather%20and%20Climate%20Models.pdf
Being an ace FORTRAN programmer does not instantly grant detailed knowledge of programming GPU hardware. If you said there was a dedicated team researching GPU solutions, then I would concede they have the knowledge. Finding somebody who has played with GPUs at home does not mean that person has any influence over the future direction of a corporate programming project.

To my knowledge these organisations all run their models on CPUs, in some cases on supercomputers. For example, a supercomputer in Tokyo is used for this purpose. If using GPUs were possible for the type of calculations required for climate models I'm pretty sure that all these model programmers in several institutions would already have harnessed this possibility. They have every motivation to complete model runs as quickly as possible because similar models based on the UM are used for weather prediction, for which they also run ensembles, albeit much smaller than ours at CPDN.

Supercomputers can be built with GPUs, e.g. http://en.wikipedia.org/wiki/Titan_(supercomputer) or http://en.wikipedia.org/wiki/Tianhe-I, and used for climate modelling.

CPDN has two programmers, who do not design the UM, which runs on CPUs.

I wouldn't expect these two programmers to design the model, but I expect they would be considered experts in implementing stable code in the BOINC environment. If they have researched GPU processing and say it can never be done due to design limitations of the hardware platform (e.g. rounding errors), then end of story. If the BOINC programming team hasn't made that evaluation, or extra programming staff would be required to implement GPU processing, then it is a financial problem, not a technical one. Maybe the only option is to ask for donations specifically to investigate GPU processing.
We are all aware that running research tasks on computers uses electricity and that we need to ensure that our computers run as efficiently as possible. One way we can reduce the carbon footprint is by ensuring that as few models crash as possible.

As a Problem Manager (ITIL) in a large Telco, I've seen a few train wrecks by teams and their programmers. I must admit the spectacular crashes do get attention. I tend to take notice when one of the earlier posts here said:
The project's servers are already struggling to cope with the huge amounts of data being returned. Why do you want to increase this so drastically?
ID: 48748
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 48749 - Posted: 10 Apr 2014, 6:09:02 UTC - in response to Message 48748.  

You STILL haven't got the point.

The Met Office models are a global standard.
They're run by researchers in a lot of places around the globe, who know that these models are stable, and that they can compare results with other people using them.

Every researcher currently using these models would have to switch to GPU programs and start again with their testing to see if they get consistent results.

Why bother when the current system works? And if they don't change, then we don't.

And have you also missed the point from earlier, that they're OWNED by the Met Office, who doesn't provide anyone with the source code?

ID: 48749
old_user714979

Joined: 28 Mar 14
Posts: 7
Credit: 47,798
RAC: 0
Message 48750 - Posted: 10 Apr 2014, 6:44:34 UTC - in response to Message 48747.  

This issue just won't stay dead. Every time you think it has been safely staked through the heart, it rises from the grave like Dracula in one of those old Hammer movies.

I haven't seen it staked through the heart - not even a flesh wound - so I expect it will keep coming back each year as a wish list item.

Floating-point double precision didn't exist in hardware on the early GPUs, and there must have been a time when there was no FORTRAN compiler support for GPU hardware. Postings saying that single precision is not adequate, or that FORTRAN compilers supporting GPU hardware don't exist, don't inflict even a flesh wound.

There are real issues with GPU hardware rounding, FORTRAN compilers and their extensions, and so on. Yet it appears from these wish list threads that people give reasons not to investigate GPU processing while, it seems, nobody has actually attempted a BOINC GPU port and failed because of a GPU hardware limitation.
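
One concrete face of the rounding issue, in C: floating-point addition is not associative, so a port that reorders sums (as GPU parallel reductions do) can change the low-order bits, and a chaotic model amplifies such differences over a long run.

    /* Floating-point addition is not associative: reordering a sum,
       as a GPU parallel reduction would, can change the result. */
    #include <stdio.h>

    int main(void)
    {
        double a = 1e16, b = -1e16, c = 1.0;
        printf("(a + b) + c = %.1f\n", (a + b) + c);  /* prints 1.0 */
        printf("a + (b + c) = %.1f\n", a + (b + c));  /* prints 0.0 */
        return 0;
    }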

My guess is that the expense of optimizing and then testing the code for parallel GPU processing is the killer: impossible with the available resources, and unlikely in the foreseeable future. The aim has to be the same results as the existing BOINC work units, not a new model. Throw a lot of money - probably a vast pile of money - and programmers at the problem, and BOINC GPU processing is possible. That is a flesh wound that won't stop people revisiting this as a wish list item.
ID: 48750
old_user714979

Joined: 28 Mar 14
Posts: 7
Credit: 47,798
RAC: 0
Message 48751 - Posted: 10 Apr 2014, 8:52:27 UTC - in response to Message 48749.  
Last modified: 10 Apr 2014, 9:07:24 UTC

You STILL haven't got the point.

The Met Office models are a global standard.
They're run by researchers in a lot of places around the globe, who know that these models are stable, and that they can compare results with other people using them.

The wish list request here is not to change the model but to optimize code running on particular hardware. The Unified Model does not run on the same supercomputer hardware at all the different sites around the world, and the FORTRAN compilers already introduce low-level optimization choices appropriate to each machine. Hopefully these optimizations do not affect the accuracy of the results, even when they change, for example, the execution order of instructions. I seem to remember that even in my CDC 6400/6600 SCOPE/KRONOS days, FORTRAN optimizations for particular hardware could exist without invalidating results.

I'd expect the Met Office to see little practical benefit in resourcing a team to produce code optimized for execution on GPUs. In effect they control the purse strings, and it is their cost/benefit considerations that determine priority. The Unified Model code is not static - improvements are incorporated over time - but if the cost of developing or maintaining GPU code exceeds any benefit to them, then it is pointless.

Every researcher currently using these models would have to switch to GPU programs and start again with their testing to see if they get consistent results.

A request to optimize code is not a request to change the model, so there is no requirement for researchers to switch programs. Of course, if the new code were faster than the existing code, there might be an incentive to upgrade.

Why bother when the current system works? And if they don't change, then we don't.

Viewpoints differ. On one side there is access to a resource of wasted CPU instruction cycles that could be utilized for climate studies; on the other side there are people willing to donate their wasted CPU and GPU cycles to a good purpose. As a person donating CPU and GPU cycles, I wish to donate all those cycles, not just a fraction. Other BOINC projects will fully utilize spare CPU and GPU processor cycles, so I perceive a higher benefit in donating my cycles to one of those projects.

I'm guessing people will keep revisiting this wish list topic.
ID: 48751
Profile Dave Jackson
Volunteer moderator

Joined: 15 May 09
Posts: 4314
Credit: 16,378,503
RAC: 3,632
Message 48752 - Posted: 10 Apr 2014, 9:18:07 UTC

I am sure you are right that this topic will keep being revisited, but until the ability to cope with 80-bit numbers appears on GPUs, my understanding is that it just isn't worth starting on. Once that is widely available it may well be worth working on, but only if someone comes up with enough money to fund the work.
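
For reference, the "80-bit" capability in question is x87 extended precision, which most x86 C compilers expose as long double (MSVC does not); GPUs have no equivalent. A small, platform-dependent check of what the CPU carries:

    /* Compare the precision of double and x87 long double. */
    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        printf("double:      %d significant decimal digits\n", DBL_DIG);
        printf("long double: %d significant decimal digits\n", LDBL_DIG);
        /* Typically 15 vs 18 on x86 Linux.  A model validated with
           80-bit intermediates cannot be reproduced bit-for-bit on
           hardware that lacks them. */
        return 0;
    }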

You have suggested having donations specifically for this, but as these would almost certainly take away from the donations the project currently receives, I think this is unlikely. The only ways I can see it happening are if a group of volunteer programmers with the skills, time and hardware resources wish to take it on, or if someone takes it on as a master's/PhD project.

Any takers?
ID: 48752
old_user714979

Joined: 28 Mar 14
Posts: 7
Credit: 47,798
RAC: 0
Message 48769 - Posted: 11 Apr 2014, 2:31:33 UTC - in response to Message 48752.  

I am sure you are right that this topic will keep being revisited but until the ability to cope with 80bit numbers appears on GPUs, my understanding is that it just isn't worth starting on. Once it is widely available, then it may well be worth working on but only if someone comes up with enough money to fund the work.

Double precision started appearing in AMD GPUs in 2008, and they all stress IEEE 754 compliance in their marketing hype. Now double precision is in every GPU, and the discussion has moved on to performance differences between Nvidia and AMD, and to the cost/power/performance trade-offs between different GPUs.

Programmers still have to know the hardware:
https://developer.nvidia.com/sites/default/files/akamai/cuda/files/NVIDIA-CUDA-Floating-Point.pdf
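
Much of that whitepaper comes down to effects like the following, which can be seen on any CPU with C99's fma(): a fused multiply-add rounds once where plain code rounds twice, so two IEEE 754-compliant machines can still disagree in the last bit.

    /* FMA (one rounding, as GPUs do) vs plain multiply-add (two
       roundings).  Compile with -ffp-contract=off so the compiler
       doesn't fuse the plain line itself. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double a = 1.0 + 1e-8, b = 1.0 - 1e-8, c = -1.0;
        double two_roundings = a * b + c;     /* round a*b, then the sum */
        double one_rounding  = fma(a, b, c);  /* single rounding         */
        printf("a*b + c    = %.17e\n", two_roundings);
        printf("fma(a,b,c) = %.17e\n", one_rounding);
        return 0;
    }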
You have suggested having donations specifically for this but as these would almost certainly take away from the donations the project currently receives I think this is unlikely. The only ways I can see it happening is if a group of volunteer programmers with the skills and time plus hardware resources wish to take it on or someone taking it on as a masters/PHD project.

I think there was a comment in this thread from a moderator that the Met Office doesn't release its code, which would rule out volunteer programmer teams. I cannot know whether the task is too large or too small for a single Piled Higher and Deeper student, or whether a code optimization for a different processor is a suitable thesis topic.
ID: 48769
old_user714979

Joined: 28 Mar 14
Posts: 7
Credit: 47,798
RAC: 0
Message 48770 - Posted: 11 Apr 2014, 5:27:15 UTC

Last work unit completed. I'm moving to another BOINC project and won't be following this thread. CYA..
ID: 48770
old_user680925

Joined: 20 Jun 12
Posts: 1
Credit: 76,526
RAC: 0
Message 49421 - Posted: 25 Jun 2014, 20:59:49 UTC

Just chalk it up to inefficient code and an archaic mainframe.

Eventually to get timely results, the powers that be will realize they need to change.

Until then, you might as well talk to the wall.
ID: 49421
Lockleys

Joined: 13 Jan 07
Posts: 195
Credit: 10,581,566
RAC: 0
Message 49422 - Posted: 25 Jun 2014, 21:39:19 UTC - in response to Message 49421.  

A CPDN model is of the order of 1 million lines of FORTRAN code. Who fancies taking that on as a project to convert to GPU? (Even if the UK Met Office, which owns the code, were to agree to release the source to make it feasible.)
ID: 49422
Profile astroWX
Volunteer moderator

Joined: 5 Aug 04
Posts: 1496
Credit: 95,522,203
RAC: 0
Message 49432 - Posted: 27 Jun 2014, 4:38:03 UTC - in response to Message 49421.  

Methinks you know not of which you write. Lockleys calls it correctly.

The UK Met Office Model was developed over decades by PhD physicists and meteorologists. Are you qualified to use the adjectives you used against that development and its continuing maintenance? (I doubt it.)

If your argument is against FORTRAN: well, yes, it is old. It is, however, a language developed specifically for scientific computing. You are on shaky ground calling it inefficient - or calling CRAY supercomputers 'archaic.'

Do you have a serious point or are you merely another troll? (If you can do a better job, why not form a company and do it?)

"We have met the enemy and he is us." -- Pogo
Greetings from coastal Washington state, the scenic US Pacific Northwest.
ID: 49432
Profile Dave Jackson
Volunteer moderator

Joined: 15 May 09
Posts: 4314
Credit: 16,378,503
RAC: 3,632
Message 49435 - Posted: 27 Jun 2014, 11:47:51 UTC

A case of, "If I wanted to get there, then I wouldn't start from here."
ID: 49435
steveyos

Joined: 22 Apr 15
Posts: 1
Credit: 10,139
RAC: 0
Message 52173 - Posted: 7 Jul 2015, 3:01:27 UTC

titan owner here reporting in

60+ hours on my i7 3770k

I'm not sure how big this is compared to other projects, but http://allprojectstats.com/showuser.php?id=3492102 - I'm on my way to being ranked #1 most handsome boincer in the universe, and it's 100% due to my Titan. I didn't read this thread, so I'm not sure if they have a good reason, but you'd get a lot more work done putting this on graphics cards instead of CPUs, as would every BOINC project. I would love to find out how many hours would get shaved off, and I'm willing to beta test and everything, as long as I get the weather forecast before everyone else and get to use it as part of my magic show.
ID: 52173
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 52174 - Posted: 7 Jul 2015, 4:47:28 UTC - in response to Message 52173.  

Perhaps you SHOULD have read the thread before posting.
Then you would have found out that all of the modelling programs used here are owned and developed by the UK Met Office, for use on their supercomputers.
The versions that we run are the desktop versions supplied by the Met Office for use by professional climate scientists around the planet.

So, no GPUs now or in the future.

ID: 52174
Profile Dave Jackson
Volunteer moderator

Joined: 15 May 09
Posts: 4314
Credit: 16,378,503
RAC: 3,632
Message 52175 - Posted: 7 Jul 2015, 6:21:58 UTC

Those of us who have followed this thread since the start know about the million lines of Fortran code involved. Out of interest, how much new code is there for each new model type? I imagine it is at least an order of magnitude less in order for new models to come out looking at specific extreme weather events?

I also wonder if there are other sets of code being used for climate modelling apart from the Met Office programs and assuming there are, why do we not hear of them?
ID: 52175
Profile Iain Inglis
Volunteer moderator

Joined: 16 Jan 10
Posts: 1079
Credit: 6,904,049
RAC: 6,657
Message 52176 - Posted: 7 Jul 2015, 10:11:55 UTC - in response to Message 52175.  

[Dave Jackson wrote:]Those of us who have followed this thread since the start know about the million lines of Fortran code involved. Out of interest, how much new code is there for each new model type? I imagine it is at least an order of magnitude less in order for new models to come out looking at specific extreme weather events? ...

... my impression is that the representation of the physics changes relatively slowly, though there are new land, carbon cycle and other models added from time to time. The different experiments are principally different sets of input parameters, which include data (e.g. oceans, number of years), switch settings (i.e. turn this or that model component on or off) and output selections.
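
Purely as an illustration (every field name below is invented), an "experiment" in that sense is little more than a record of data values and on/off switches fed to the same model code - something like this C sketch:

    /* Hypothetical sketch: same model code, different experiment
       records.  All names and values are invented for illustration. */
    struct experiment {
        double ocean_heat_flux;    /* data parameter (hypothetical)  */
        int    n_years;            /* run length                     */
        int    carbon_cycle_on;    /* switch: component on or off    */
        int    sulphur_cycle_on;   /* switch: component on or off    */
        const char *outputs;       /* which diagnostics to write     */
    };

    static const struct experiment example_run = {
        .ocean_heat_flux  = 0.7,
        .n_years          = 40,
        .carbon_cycle_on  = 0,
        .sulphur_cycle_on = 1,
        .outputs          = "monthly_means",
    };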

There is then a rather problematic BOINC layer on top, which tries to fit the industrial Met Office code into the BOINC framework - and to adapt to changes at both the client and server ends of that software. At least that's not FORTRAN!
ID: 52176