Message boards : Number crunching : HadAM3 Models Discussion
Joined: 27 Jan 07 Posts: 300 Credit: 3,288,263 RAC: 26,370
OK, I downloaded a new HadAM3 model the other day. Unlike the sulfur and coupled models, the attrition models keep doing a malloc() and free() every few seconds, back and forth, as evidenced by my memory usage rising and falling sinusoidally. As any programmer knows, getting memory from the OS costs significant CPU time in the kernel. Has anyone else on Linux noticed a HadAM3 model doing this? (I do not recall the Seasonal Attrition Project models doing this, but it's been a long time since I ran one of those.) Also, the model is crawling along in progress: BOINC tells me my current sulfur model will take 520 hours, but the attrition model will take 800 hours to finish.
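For anyone who wants to watch this on their own machine, here is a minimal sketch of one way to do it on Linux: poll the model process's resident set size from /proc/<pid>/statm (field 2 is resident pages). The once-a-second interval is just a choice for this sketch; find the PID with something like pidof.

#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    char path[64];
    snprintf(path, sizeof path, "/proc/%s/statm", argv[1]);
    long page_kb = sysconf(_SC_PAGESIZE) / 1024;   /* kB per page */

    for (;;) {
        FILE *f = fopen(path, "r");
        if (!f)
            break;                      /* process has exited */
        long size, resident;            /* statm fields 1 and 2, in pages */
        if (fscanf(f, "%ld %ld", &size, &resident) == 2)
            printf("resident: %ld kB\n", resident * page_kb);
        fclose(f);
        sleep(1);
    }
    return 0;
}

Log the output over a few minutes and the sawtooth (if it's there) is obvious.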
Joined: 7 Aug 04 Posts: 2184 Credit: 64,822,615 RAC: 5,275
I noted the memory usage pattern on the Seasonal project, and it continues on this new version. From a post I made to the beta site's forum on this version of hadam3... I downloaded a beta Linux hadam3 model and tracked the memory usage as a function of timestep. In the terminal window, beta Linux hadam3 logs every timestep. Like the hadsm3 and hadcm3 models, the radiative timestep occurs every three model hours. It appears to occur in hadam3 between hh10 and hh20, and takes about 9 times as long as a regular timestep. Memory usage of hadam3_um observed:
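To put a rough number on what that radiative step costs overall (the 30-model-minute regular timestep here is my assumption, not something I've confirmed for hadam3): a radiative step every 3 model hours would mean 1 step in 6 is radiative; if that step costs 9 units against 1 unit for each of the other 5, radiation alone accounts for about 9/(5+9), roughly 64%, of the compute time.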
Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0
... my current sulfur model ... Are you sure about that? The sulphur models became obsolete in February 2006.
Joined: 27 Jan 07 Posts: 300 Credit: 3,288,263 RAC: 26,370
Do you have any information from the science team about the future of the HadAM3 models? Obviously, there is still data to crunch, but if they intend to keep using these models in research, I would highly recommend putting a little development effort into the code. It's probably moot to argue for a code change, but in this case a little development effort would go a long way in terms of performance. Reusing a block of memory between loop iterations is really easy to implement; the trick will be finding a few grad students who know Fortran to do it. :) If it is taking 9x longer per timestep than the other models, then any per-timestep saving is multiplied many times over, and a lot more models could be completed. Again, the development effort makes sense only if the HadAM models will continue to be used in research.
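To be concrete about the kind of change I mean, here is a minimal sketch (illustrative only: the buffer name, sizes, and step() routine are made up, and the real model code is Fortran rather than C, but the pattern is the same):

#include <stdlib.h>

#define NPOINTS (1L << 20)   /* made-up grid size */
#define NSTEPS  1000         /* made-up number of timesteps */

/* Stand-in for one timestep's worth of physics. */
static void step(double *work, long n)
{
    for (long i = 0; i < n; i++)
        work[i] = i * 0.5;
}

int main(void)
{
    /* Before (the pattern I suspect): allocate and free inside the
     * loop, paying a malloc()/free() round trip on every timestep.
     *
     * for (int t = 0; t < NSTEPS; t++) {
     *     double *work = malloc(NPOINTS * sizeof *work);
     *     step(work, NPOINTS);
     *     free(work);
     * }
     */

    /* After: allocate once, reuse the same block every timestep. */
    double *work = malloc(NPOINTS * sizeof *work);
    if (!work)
        return 1;
    for (int t = 0; t < NSTEPS; t++)
        step(work, NPOINTS);
    free(work);
    return 0;
}

The memory footprint also stops oscillating, since the block lives for the whole run.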
Joined: 27 Jan 07 Posts: 300 Credit: 3,288,263 RAC: 26,370
What are the HadSM3 models? That's what I have. hadsm3fub_00ni_005917257_3
Joined: 7 Aug 04 Posts: 2184 Credit: 64,822,615 RAC: 5,275
I'm not absolutely sure that some enhancement couldn't be made to the memory usage, but the primary reason the memory requirements are higher, and the timesteps take over 10 times as long as the slab model's, is that it is a much higher-resolution model than hadsm3 or hadcm3. More grid points horizontally and vertically mean much more computation time per timestep.
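For a rough sense of how resolution drives cost (illustrative numbers, not the actual model grids): halving the horizontal grid spacing in both directions quadruples the number of grid columns, and numerical stability (the CFL condition) then typically requires a timestep about half as long, so roughly 4 x 2 = 8 times the computation per simulated hour.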
Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0
HadSM3 = slab (ocean) models (the SM part). The HadAM3 models were tested for some time on the beta site before being released here, so I would think that they were made as good as possible. They WILL be around for a while, and used for a variety of research, not just the UK floods of 2000, which was the focus of the SAP project.
Joined: 5 Aug 04 Posts: 1496 Credit: 95,522,203 RAC: 0
Last I saw, when the England flood study is concluded, there will be regional studies for South Africa and for regional snow accumulation and melt rate for the US Pacific Northwest. From there? Who knows? Edit: Heh! You must have posted at the time I began to write, Les. "We have met the enemy and he is us." -- Pogo Greetings from coastal Washington state, the scenic US Pacific Northwest.
Joined: 27 Jan 07 Posts: 300 Credit: 3,288,263 RAC: 26,370
I'm not absolutely sure that some enhancement couldn't be made to the memory usage, but the primary reason the memory requirements are higher, and the timesteps take over 10 times as long as the slab model's, is that it is a much higher-resolution model than hadsm3 or hadcm3. More grid points horizontally and vertically mean much more computation time per timestep. Since the increased compute time is due to the higher resolution, it's hard to say how much of an improvement smoothing out the up-and-down memory usage would bring. I can promise you there would be some performance benefit; I just don't know how much. I guess when the time comes to apply the models to the South African or US Pacific Northwest studies, the team should look at revising the code in this way. If I weren't already busy with my graduate studies, I'd try to help, as I've always wanted to work with weather models as a career.
Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0
If I weren't already busy with my graduate studies, I'd try to help, as I've always wanted to work with weather models as a career. Hate to be a wet blanket, but you'd need to work for the project, as part of Oxford University's Atmospheric, Oceanic & Planetary Physics department (although I think that the programmers were moved to the Comms lab section last year), or for the UK Met Office's Hadley Centre in Exeter (they own the code). But if that's not possible, you can always join the beta testers when the next models come up for testing.
Joined: 13 Jan 06 Posts: 1498 Credit: 15,613,038 RAC: 0
The SAP models were identical to these, except these do 500 times fewer disk reads (at the cost of a slightly higher memory footprint). The radiation timesteps are fairly infrequent. The downside of having it maintain a steady memory footprint is that, since it currently only peaks briefly, you'd be less able to run two or four together on a memory-restricted machine. I can (just) run two simultaneously on a cut-down 1GB XP machine (not recommended). If they both maintained a steady 450MB each, that'd be impossible - a lot of memory would be sitting around unused for extended periods of time. I've been running one at a time on a 512MB laptop (also not recommended!!) without any major issues (the laptop only gets rebooted once every couple of months, so any memory fragmentation is being dealt with successfully by the operating system). I'm a volunteer and my views are my own. News and Announcements and FAQ
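To put numbers on that trade-off: two models holding a steady 450MB each would want 900MB between them, leaving under 100MB of a 1GB machine for XP and everything else; because the current footprint only peaks briefly, the two peaks rarely coincide and the machine (just) copes.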
Joined: 27 Jan 07 Posts: 300 Credit: 3,288,263 RAC: 26,370
Hate to be a wet blanket, but you'd need to work for the project, as part of Oxford University's Atmospheric, Oceanic & Planetary Physics department. Well, studying overseas is a very remote possibility, but not completely zero. Studying weather overseas sounds like a cool idea for a start to my career. It all depends on how things go with my first grad degree and job prospects here in the USA. I suspect that "life" will change for me a lot after graduation, so perhaps I'll just be a cruncher for now. :)
Joined: 13 Jan 06 Posts: 1498 Credit: 15,613,038 RAC: 0
Optimising the memory usage may also be a time-consuming task: the current generation of models is one million lines of Fortran, developed over thirty years. The next generation of models (HadGEM etc.) is ten million lines ... I'm a volunteer and my views are my own. News and Announcements and FAQ
Joined: 4 Sep 06 Posts: 79 Credit: 5,583,517 RAC: 0
Ouch! When do they come? And what will the requirements be to run these models? I can't imagine running four of them simultaneously...
Joined: 5 Sep 04 Posts: 7629 Credit: 24,240,330 RAC: 0
These big models were/are from a group at a different Uni, and, as Carl (who was working on them) has now left, there's no feedback. But Carl was working on a 64-bit model requiring 4.7 gigs of RAM, with a possibility of smaller, simpler models to get them under 4 gigs. There was also talk of multithreading these big models across several CPU cores, but this first of all requires that BOINC be able to do this, and the last I heard, this was some time off. This is some of what Carl wrote privately back in June 2007: I ran some numbers on all cpdn/boinc machines that have trickled in the past 2 weeks: Whatever, they are a long way off yet.