Regional attribution models - ensemble type?

old_user98682

Joined: 17 Sep 05
Posts: 5
Credit: 99,187
RAC: 0
Message 48783 - Posted: 13 Apr 2014, 3:30:25 UTC
Last modified: 13 Apr 2014, 3:46:01 UTC

Greetings all,

I assume that the regional attribution model used in the UK winter floods and ANZ heatwave & drought experiments is deterministic (i.e. two simulations started with exactly the same initial conditions will follow the same trajectory)?

If so, what type of ensemble is being used to estimate the probability of extreme events? Is it an initial conditions ensemble or are model parameters being perturbed?

Thank you for any clarification, as I couldn't find an answer to this question in the experiment webpages.
ID: 48783
Iain Inglis
Volunteer moderator

Joined: 16 Jan 10
Posts: 1079
Credit: 6,837,697
RAC: 6,537
Message 48786 - Posted: 13 Apr 2014, 10:36:41 UTC - in response to Message 48783.  
Last modified: 13 Apr 2014, 10:41:16 UTC

The second part of your question may have to wait for a member of the project team to pass by (though ensembles usually involve perturbed parameters). I can, however, give a partial answer to the first part.

If a model were run twice on the same machine it would, other things being equal, produce the same results; there is, for example, no random number generation seeded by some local factor. However, two different machines, particularly ones with different operating systems or different versions of the same operating system, may not produce the same results. The chief causes of this are processor variety (e.g. Intel vs AMD) and differences in the mathematical run-time libraries on different platforms, which give different results. Based on my investigations of an earlier model type (the HADSM3 'slab' model), I would say that Windows/Intel machines generally agree with each other, as do Windows/AMD machines, as do Macs, but Linux machines rarely agree with each other (because of the wide variety of distributions).
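
To make that concrete, here is a minimal sketch in Python (not project code; toy_model is invented for the example) of a purely deterministic calculation: repeated runs on one machine are bit-identical, while a different maths library can legitimately round the last bit differently.

```python
import math

def toy_model(n_steps=100_000, x0=0.5):
    """Stand-in for a deterministic model run: no random number generation,
    so the trajectory depends only on the inputs and on how the platform's
    maths library evaluates sin()."""
    x = x0
    for _ in range(n_steps):
        x = 0.99 * math.sin(math.pi * x)   # the 'sine map': chaotic, uses libm
    return x

# On one machine, repeated runs are bit-for-bit identical ...
run_a, run_b = toy_model(), toy_model()
assert run_a == run_b                      # exact equality, not "approximately"
print(f"result on this platform: {run_a!r}")
# ... but another OS, libm version or CPU may round sin() differently in the
# last bit, and a chaotic calculation amplifies that into a visibly different
# trajectory, which is why cross-platform results need not match.
```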

Since the ensemble analysis is purely statistical, this variation doesn't matter. The project has run extensive tests to check whether platform variations and incomplete ensembles introduce a systematic bias; my understanding is that they do not.
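
As a rough illustration of the kind of check described above, with entirely made-up numbers, the sketch below groups ensemble results by platform and asks whether any subgroup's mean drifts from the pooled mean by more than its sampling error. This is only the idea, not the project's actual analysis.

```python
import random
import statistics

# Hypothetical ensemble output (say, a seasonal-mean temperature anomaly per
# workunit), grouped by the platform that produced it -- made-up numbers.
random.seed(0)
results = {
    "windows": [random.gauss(1.0, 0.3) for _ in range(400)],
    "linux":   [random.gauss(1.0, 0.3) for _ in range(400)],
    "mac":     [random.gauss(1.0, 0.3) for _ in range(200)],
}

# Crude bias check: does any platform's mean sit far from the pooled mean,
# relative to that subgroup's standard error?
pooled_mean = statistics.mean(x for xs in results.values() for x in xs)
for platform, xs in results.items():
    offset = statistics.mean(xs) - pooled_mean
    stderr = statistics.stdev(xs) / len(xs) ** 0.5
    print(f"{platform:8s} offset {offset:+.3f}  ({offset / stderr:+.1f} standard errors)")
```
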
ID: 48786
old_user98682

Joined: 17 Sep 05
Posts: 5
Credit: 99,187
RAC: 0
Message 48791 - Posted: 14 Apr 2014, 10:39:37 UTC - in response to Message 48786.  
Last modified: 14 Apr 2014, 10:40:54 UTC

Thanks Iain for the explanation regarding my question about whether the Regional Attribution model is deterministic - the variation across CPU and OS makes sense given the different run-time libraries used.

As for the second part of my question: the reason I asked what type of ensemble is used in the attribution experiments is that, for a deterministic model, using exactly the same model parameters and initial conditions will produce precisely the same outcome (subject to the CPU and OS variation you've explained), which would hardly be useful for estimating the likelihood of extreme events! My guess is that an initial-condition ensemble is used, with small perturbations reflecting the uncertainty in ocean and atmospheric observations, since altering model parameters would change the representation of the physics; I'm curious whether this is indeed the case.
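
To illustrate what an initial-condition ensemble of that kind might look like, here is a minimal sketch (make_ic_ensemble and all the numbers are invented for the example): the same base sea-surface-temperature field plus very small random offsets for each member.

```python
import random

def make_ic_ensemble(base_sst, n_members, sigma=0.01, seed=42):
    """Build an initial-condition ensemble: each member is the same base
    sea-surface-temperature field plus tiny Gaussian perturbations (~0.01 K),
    reflecting observational uncertainty rather than any change to the physics."""
    rng = random.Random(seed)
    return [[t + rng.gauss(0.0, sigma) for t in base_sst]
            for _ in range(n_members)]

base_sst = [288.0, 290.5, 293.2, 295.0]      # toy 4-gridpoint "field" in kelvin
for i, member in enumerate(make_ic_ensemble(base_sst, n_members=5)):
    print(f"member {i}: {[round(t, 3) for t in member]}")
```
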
ID: 48791
Iain Inglis
Volunteer moderator

Joined: 16 Jan 10
Posts: 1079
Credit: 6,837,697
RAC: 6,537
Message 48796 - Posted: 14 Apr 2014, 14:07:18 UTC

For this attribution experiment we do have one extra piece of information with which to reason, which is that the results chart (here) is constructed on the basis that each outcome is equally probable. So if it isn't an initial conditions ensemble then any perturbed physics has been made to look like one.
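
For what it's worth, here is a minimal sketch of what equal weighting means in practice, using made-up rainfall numbers rather than real model output: the probability of an extreme is simply the fraction of ensemble members that reach it.

```python
import random

def exceedance_probability(outcomes, threshold):
    """Equal-weight estimate: every ensemble member counts the same, so the
    probability of an extreme is simply the fraction of members reaching it."""
    return sum(1 for x in outcomes if x >= threshold) / len(outcomes)

# Made-up numbers: 10,000 simulated seasonal rainfall totals (mm), compared
# with a hypothetical observed extreme of 350 mm.
random.seed(1)
rainfall = [random.gauss(250.0, 40.0) for _ in range(10_000)]
p = exceedance_probability(rainfall, 350.0)
if p > 0:
    print(f"P(rainfall >= 350 mm) ~ {p:.4f}, i.e. a ~1-in-{1 / p:.0f} season event")
else:
    print("no ensemble member reached the threshold")
```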

In past experiments the project scientists have used complicated parameter selection schemes in situations where they don't actually know the probability distribution of the real physics and have then filtered the resulting ensemble according to observations. I once asked how this worked and got an answer of such delphic incomprehensibility that I have given up trying to understand what they are up to and just run the models instead.
ID: 48796
geophi
Volunteer moderator

Joined: 7 Aug 04
Posts: 2167
Credit: 64,403,322
RAC: 5,085
Message 48797 - Posted: 14 Apr 2014, 15:22:58 UTC - in response to Message 48796.  

delphic incomprehensibility


LOL
ID: 48797
old_user98682

Joined: 17 Sep 05
Posts: 5
Credit: 99,187
RAC: 0
Message 48813 - Posted: 16 Apr 2014, 11:30:12 UTC - in response to Message 48796.  

Good observation, Iain: the equal-probability basis suggests that an initial-condition ensemble is being used, with the small perturbations reflecting the uncertainty in measuring the state of the atmosphere and ocean. Chaotic dynamics then produce a much broader spread of final outcomes than the small spread of initial conditions would suggest.
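
As a toy demonstration of that sensitivity, the sketch below uses the standard Lorenz-63 system (not the CPDN model) and shows two runs that start 1e-8 apart ending up completely different.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz-63 system, the classic toy model of
    chaotic, weather-like dynamics (not the CPDN model itself)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# Two runs whose initial conditions differ by only 1e-8.
a, b = (1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0)
for step in range(2001):
    if step % 500 == 0:
        separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"step {step:4d}  separation {separation:.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
# The separation grows from 1e-8 to order 1: an imperceptible difference in
# the initial state ends up as a completely different final state.
```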

I too ran simulations for the original CPDN slab model experiment to generate a probability distribution of temperature, which involved both initial condition and model parameter ensembles, and also recall not really understanding how the parameter selection scheme worked, so you weren't alone in being in the dark!
ID: 48813
old_user702252

Joined: 29 Aug 13
Posts: 2
Credit: 0
RAC: 0
Message 49022 - Posted: 2 May 2014, 15:02:01 UTC

Dear all,
I saw your discussion and as a project scientist, I think I can provide a reply.
The UK floods and ANZ heatwave projects are initial condition ensembles (we slightly perturb the sea surface temperatures in each simulation, i.e. each of them is different). There are other projects where we perturb parameters, but not in these ones.
Deterministic projections usually refer to a single projection, which is not what we do here: we have thousands of simulations that are almost, but not quite, the same, so the approach is probabilistic.
I think how we perturb the parameters is relatively easy to understand. Every process that occurs on a scale smaller than the grid (50 km x 50 km for the European region at the moment) has to be parameterised because it cannot be calculated explicitly, but maybe you already know this. Of course, we do not know the exact value of these parameters, for example how fast snow falls. Snowfall speed depends on the shape, size and weight of each snowflake, the air temperature, how the wind blows, and so on, so each snowflake falls at a different speed; we cannot go to that level of detail, so experts on snowfall velocity give their best guess of the average velocity. Perturbing the parameters then "simply" involves changing this best-guess value slightly to see how much it actually influences the climate. Often 2-3 alternative values are selected for each parameter (there may be a dozen parameters, so this makes a big ensemble in the end, especially if initial-condition perturbation is performed on top of it). Perturbing some parameters has hardly any effect on the climate system, while perturbing others may have a big impact, so a lot of effort goes into determining the exact values of those parameters more precisely (for example through lab experiments of snow falling in a known environment). I hope this made sense!
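
To illustrate how such an ensemble might be assembled, here is a minimal sketch; the parameter names and values are invented for the example, not the ones the project uses. A few alternative values per parameter are combined, and several initial-condition members are layered on each combination.

```python
import itertools
import random

# Invented parameter names and values -- each uncertain parameter gets the
# expert best guess plus a couple of alternatives.
parameter_values = {
    "ice_fall_speed":       [0.5, 1.0, 2.0],   # e.g. how fast snow falls (scaled)
    "entrainment_rate":     [0.6, 1.0, 1.5],
    "cloud_droplet_number": [0.5, 1.0, 2.0],
}

# Every combination of values forms the perturbed-physics part of the ensemble.
perturbed_physics = [dict(zip(parameter_values, combo))
                     for combo in itertools.product(*parameter_values.values())]
print(len(perturbed_physics), "parameter combinations")        # 3**3 = 27

# Initial-condition perturbations can then be layered on top of each set.
rng = random.Random(0)
workunits = [{"params": params, "ic_seed": rng.randrange(10**6)}
             for params in perturbed_physics
             for _ in range(4)]                                 # 4 IC members each
print(len(workunits), "workunits in total")                     # 27 * 4 = 108
```
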
ID: 49022
