Harlan is a high level language for general purpose GPU computing
Message boards : Number crunching : Harlan is a high level language for general purpose GPU computing

Zarck

Joined: 9 Aug 04
Posts: 9
Credit: 8,678
RAC: 0
Message 46604 - Posted: 8 Jul 2013, 23:30:27 UTC

https://github.com/eholk/harlan

Could it be used for climate modelling?

@+
*_*
ID: 46604
Les Bayliss
Volunteer moderator

Joined: 5 Sep 04
Posts: 7629
Credit: 24,240,330
RAC: 0
Message 46605 - Posted: 9 Jul 2013, 1:53:09 UTC - in response to Message 46604.  

These climate models from the UK Met Office are written in Fortran.
The Met Office has no plans to rewrite them for GPUs.

ID: 46605
Eirik Redd

Joined: 31 Aug 04
Posts: 391
Credit: 219,888,554
RAC: 1,481,373
Message 46607 - Posted: 9 Jul 2013, 4:23:53 UTC - in response to Message 46605.  
Last modified: 9 Jul 2013, 4:25:07 UTC

These climate models from the UK Met Office are written in Fortran.
The Met Office has no plans to rewrite them for GPUs.



Partly, I think, because the Fortran climate models have been used, tested, verified and cross-checked for many years.
Partly because the climate models are not particularly suited to the GPU paradigm.
Partly because rewriting, testing, and cross-verifying the code would cost a great deal.
And the current GPUs and their software are not well enough defined and tested to justify spending at least a few lifetimes of skilled labour for a merely probable gain.
ID: 46607
tullio

Joined: 6 Aug 04
Posts: 264
Credit: 965,476
RAC: 0
Message 46609 - Posted: 9 Jul 2013, 18:25:02 UTC

Both SETI@home and Einstein@home use GPUs, which can give a tenfold increase in speed. But there are a number of caveats. You need one application if you use an nVidia board and another if you use an ATI/AMD board. These applications are produced by volunteer programmers, at least in the SETI case, and you must have an app_info.xml file in your project directory to download them. You must also have the correct drivers for your board, which are updated very frequently by their vendors. If you don't, you will produce a lot of invalid results, and the workunits will have to be resent to other volunteers, which puts a strain on the servers. So GPUs are a mixed blessing. I don't use them.
Tullio
ID: 46609
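
For context: the app_info.xml mentioned above is BOINC's "anonymous platform" mechanism. The file sits in the project directory and tells the client which locally supplied applications to run. A minimal sketch of such a file, with an invented application name (example_gpu_app) and file name, and with entries that differ from project to project, might look like this:

<app_info>
    <!-- names below are placeholders, not a real project's application -->
    <app>
        <name>example_gpu_app</name>
    </app>
    <file_info>
        <name>example_gpu_app_cuda.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>example_gpu_app</app_name>
        <version_num>608</version_num>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
        <file_ref>
            <file_name>example_gpu_app_cuda.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>

With a file like this in place (and matching GPU drivers), the client requests work for the listed application version and runs it with the locally supplied executable, which is why a mismatch between drivers and application tends to show up as the invalid results described above.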
MikeMarsUK
Volunteer moderator

Joined: 13 Jan 06
Posts: 1498
Credit: 15,613,038
RAC: 0
Message 46618 - Posted: 10 Jul 2013, 15:57:31 UTC - in response to Message 46609.  

... These applications are produced by volunteer programmers, ...


I gather that the generation of models we're running now is roughly a million lines of Fortran each, and the new models (HadGEM), which have been intermittently discussed, are around 10 million lines each. That is probably too much code to reasonably expect volunteers to migrate... (and in any case, the source code is strictly controlled by the Met Office's Hadley Centre). Can a GPU cope with a task of that complexity?


I'm a volunteer and my views are my own.
ID: 46618
tullio

Joined: 6 Aug 04
Posts: 264
Credit: 965,476
RAC: 0
Message 46619 - Posted: 10 Jul 2013, 18:06:50 UTC - in response to Message 46618.  
Last modified: 10 Jul 2013, 18:08:32 UTC

... These applications are produced by volunteer programmers, ...


I gather that the generation of models we're running now is roughly a million lines of Fortran each, and the new models (HadGEM), which have been intermittently discussed, are around 10 million lines each. That is probably too much code to reasonably expect volunteers to migrate... (and in any case, the source code is strictly controlled by the Met Office's Hadley Centre). Can a GPU cope with a task of that complexity?


The Titan supercomputer, which was the top machine on the TOP500 list last November and has now been surpassed by Tianhe-2, uses 16,000 nVidia Tesla boards as coprocessors.
It runs Fortran because the Linpack benchmark was written in parallel Fortran. I suspect that nVidia is using both SETI@home and Einstein@home as a testing ground for its GPUs. ATI/AMD is giving less support to the OpenCL programming environment, while nVidia uses its proprietary CUDA.
Tullio
ID: 46619
geophi
Volunteer moderator

Joined: 7 Aug 04
Posts: 2168
Credit: 64,543,482
RAC: 6,686
Message 46622 - Posted: 10 Jul 2013, 23:19:40 UTC - in response to Message 46618.  

Can a GPU cope with a task of that complexity?


The research I've read on utilizing GPUs in weather/climate models seems to see them being used for specific types of calculation, such as radiation computations. They can greatly speed up certain specific calculations, but they are not, at this time, suited to the total complexity of these huge models. That may change in the future, but even so, the time frame for the Met Office to make such changes, and then to adapt them for CPDN, would be very long.
ID: 46622
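
To illustrate the kind of "specific calculation" that fits the GPU paradigm: each atmospheric column's radiative heating can be computed independently of every other column, so one GPU thread can handle one column. The sketch below is a toy CUDA example only, with a made-up column_radiation kernel and a trivial per-level sum standing in for a real radiative-transfer scheme; it is not code from any Met Office model.

// Toy CUDA sketch: one thread per atmospheric column.
// The per-level sum stands in for a real radiative-transfer calculation.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void column_radiation(const float* temp,  // ncols * nlev temperatures
                                 float* heating,     // ncols heating rates (output)
                                 int ncols, int nlev)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (col >= ncols) return;

    float sum = 0.0f;
    for (int k = 0; k < nlev; ++k) {
        // placeholder physics: weight each level's temperature
        sum += 0.001f * temp[col * nlev + k];
    }
    heating[col] = sum;
}

int main()
{
    const int ncols = 1 << 16;   // number of model columns
    const int nlev  = 70;        // vertical levels per column

    float *d_temp, *d_heat;
    cudaMalloc((void**)&d_temp, (size_t)ncols * nlev * sizeof(float));
    cudaMalloc((void**)&d_heat, (size_t)ncols * sizeof(float));
    cudaMemset(d_temp, 0, (size_t)ncols * nlev * sizeof(float));

    // launch enough threads to cover every column
    int threads = 256;
    int blocks  = (ncols + threads - 1) / threads;
    column_radiation<<<blocks, threads>>>(d_temp, d_heat, ncols, nlev);
    cudaDeviceSynchronize();

    float h0 = 0.0f;
    cudaMemcpy(&h0, d_heat, sizeof(float), cudaMemcpyDeviceToHost);
    printf("heating[0] = %f\n", h0);

    cudaFree(d_temp);
    cudaFree(d_heat);
    return 0;
}

Only that independently parallel piece moves to the GPU; the rest of a million-line model, with all its data dependencies and communication, stays on the CPU, which is why the speed-up applies to individual schemes rather than to the model as a whole.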
