I am seriously considering switching to Python as my primary language. There are several things holding me back.
- Multiple concurrent versions of official Python are a symptom of something having gone terribly wrong. Today the following versions were all available at python.org: 2.7.5, 3.2.5, and 3.3.2. This is because some libraries etc. are incompatible with the latest version, and that sucks. However, library maintainers are never going to catch up with the current version as long as this practice continues. Please keep only one current Python version on the official page. Python 3 was released in 2008! If there is a need for old versions/branches, then let somebody fork and maintain them elsewhere! I don't care how much this breaks, because those things will break eventually anyway.
- That some scientific packages are only compatible with old versions (this has partially been addressed since I last checked). I want to use the new Python versions because I do not want to build a large library of custom functions that are only compatible with old Python.
- A nice IDE, preferably with scientific plotting included (and an awesome debugger). I know this exists, but again I see multiple options and no de facto standard. I would like something like RStudio for R. (This may be unfair because I have not fully researched it. Anyway, it is a major concern.)
- Interactive plotting and 3D plotting. I think this is actually pretty good now, but I would have to do some research to see if it is satisfactory for my typical use cases. (I also need precise pdf+png(+svg+tiff) exports where point sizes, margins, font sizes and dpi are easy to control; see the sketch below this list. I guess Python is better than Matlab at this, though.)
- I have invested a large effort in Matlab and I am extremely proficient in it. But then, I am an extremely great programmer and I know I would adapt quickly (and I already have some experience with Python).
- Python syntax. There is lots to like about the Python language, but parts of the syntax are just not pretty to behold (I do not care if religious Python fanatics claim otherwise). I do not like the use of single and double underscores. I prefer explicit terminations of code blocks with e.g. "}" over the use of indentation. Yes, indentation imposes a uniform style, but my IDE could auto-indent the code anyway and present it to me in a nice way. There is a reason why virtually every other language uses explicit endings. Using indentation, for example, causes problems when you want to post code snippets on a blog.
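On the export requirements in the plotting point above: as far as I can tell, matplotlib exposes exactly this kind of control. A minimal sketch (standard matplotlib calls, but untested against my full requirements; the plotted data is just a placeholder):

```python
import matplotlib
import matplotlib.pyplot as plt

# font size and raster dpi are set explicitly and apply to every export
matplotlib.rcParams.update({"font.size": 9, "savefig.dpi": 300})

fig, ax = plt.subplots(figsize=(3.5, 2.5))           # physical size in inches
ax.plot([0, 1], [0, 1], marker="o", markersize=3)    # marker size in points
fig.subplots_adjust(left=0.15, right=0.97,           # margins as figure fractions
                    top=0.95, bottom=0.15)

for ext in ("pdf", "png", "svg"):                    # same figure, several formats
    fig.savefig("figure." + ext)
```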
I have also looked at R, which looks great for the tasks I usually do. But I think I will not use it because of the syntax, the lack of interactive plotting, and because I think Python has greater momentum from a much larger community. R has other advantages, though. For example, it seems much less likely to have version conflicts in R than in Python, and there are some really excellent statistics toolboxes available.
Here I demonstrate that a very simple semi-empirical model (SEM) can approximate the historical contribution to sea level rise from Greenland as estimated by Box and Colgan (2013). I start with an SEM in which ice wastage is proportional to Greenland temperatures from Vinther et al.
This demonstrates that GrIS mass loss can be approximated with a convenient semi-empirical model using a locally relevant forcing. Here, I used annual mean temperatures from SW Greenland from Vinther et al. Had I used the Greenland average temperatures used by Box and Colgan (2013), or restricted the record to summer, I am sure I would get an even better correspondence.
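A minimal sketch of such an SEM (the variable names are placeholders: T would be the annual SW Greenland temperature series, and k and T0 are fitted against the Box and Colgan mass-loss series):

```python
import numpy as np

def sem_sea_level(T, k, T0):
    """Cumulative sea level contribution from dS/dt = k*(T - T0),
    i.e. ice wastage proportional to temperature above a balance value T0."""
    return np.cumsum(k * (T - T0))

# k and T0 can be fitted by least squares against the target series, e.g.:
# from scipy.optimize import curve_fit
# (k, T0), _ = curve_fit(sem_sea_level, T, box_colgan_sea_level)
```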
This convenient model can capture Greenland mass loss. It will not be able to capture sub-components of the Greenland total, such as the retreat and speed-up of Jakobshavn. I don't think anybody would expect this simple model to capture detailed processes like that. Yet this is exactly the type of critique that is often levelled at SEMs of global mean sea level rise.
In SEMs of global sea level rise I have used global mean temperature or global radiative forcing. A model using such global-scale predictors will probably have a hard time following the shape of the GrIS curve. However, SEMs of global SLR do not attempt to model each contributor to SLR individually, and it is unreasonable to expect that an SEM can capture all the individual processes that go into the total.
Models, observations and paleoclimate records show that Greenland temperatures are correlated with global temperatures, and we talk about concepts such as polar amplification. However, it is also clear that there are deviations where regional temperatures do not follow the same warming pattern as the global average. Sometimes the GrIS wastage will be faster, and at other times slower, than what you would expect from global mean temperature alone. However, it would be quite a coincidence if these regional deviations were synchronous across both ice sheets and all the hundreds of thousands of small glaciers. The relevance of the local deviations is therefore greatly diminished when we model global mean sea level rise.
Remark: Box and Colgan link ice discharge and SMB through a semi-empirical model (similar to what Rignot has done).
- Box, J.E., L. Yang, D.H. Bromwich, and L.-S. Bai, 2009: Greenland ice sheet surface air temperature variability: 1840-2007. J. Climate, 22(14), 4029-4049, doi:10.1175/2009jcli2816.1.
- Box, J.E., 2013: Greenland ice sheet mass balance reconstruction. Part II: Surface mass balance (1840-2010). J. Climate, doi:10.1175/JCLI-D-12-00518.1.
- Box, J.E., and W. Colgan, 2013: Greenland ice sheet mass balance reconstruction. Part III: Marine ice loss and total mass balance (1840-2010). J. Climate, doi:10.1175/JCLI-D-12-00546.1.
- Vinther, B.M., K.K. Andersen, P.D. Jones, K.R. Briffa, and J. Cappelen, 2006: Extending Greenland temperature records into the late eighteenth century. J. Geophys. Res., 111, D11105, doi:10.1029/2005JD006810. Data: http://www.cru.uea.ac.uk/cru/data/greenland/swgreenlandave.dat
It looks pretty dramatic. While there are caveats with this type of data (see the paper), I believe it is clear that there is a real trend in these data. I hope to look closer at this study and this region when I get back from EGU 2013.
Perhaps this can be linked to other records, e.g. this 160-year coral climate reconstruction: here
We show in our paper that the surge index correlates with major landfalling hurricanes with a correlation coefficient of 0.6. As expected, this correlation improves with smoothing (see the paper or the table below). We show that the vast majority of the 50 greatest surge events can be related to known landfalling hurricanes. Yet Pielke Jr. still objects to our interpretation that the surge index is primarily a record of hurricane surge threat. He arrives at this conclusion by picking a pre-processing choice that results in a low correlation: he restricts the record to the 150 greatest events and observes that the number of extreme surges has a relatively small correlation coefficient with major landfalling hurricane counts. It is unreasonable to expect that surge magnitude can be fully explained by wind speed alone, and thus there is no surge threshold that corresponds exactly to landfalling hurricanes and nothing else. Similarly, you would not expect hurricane damage to be fully explained by wind speed alone.
Correlations between HURDAT, NHD and Surge Index.
In the comment thread Roger Pielke Jr. bends the truth when he states: "The answer is that normalized damage systematically outperforms the surge index. The fact that the surge index is not well correlated with other measures of hurricane activity should be a first tip-off that it is not adding much value". To demonstrate just how wrong this claim is, we can simply test which series correlates best with standard measures of hurricane activity: NHD or the surge index. I do that in the table below (following the approach here). Green indicates that the surge index correlates better, and red indicates that NHD correlates better.
Note: the correlation coefficients are slightly different from those in our paper here because the interval has been restricted to 1923-2005.
Pooling events does not improve NHD correlations
We would not expect a perfect correlation between damage and wind speeds, but we would expect closer agreement as we average over more events. One very curious thing about the NHD is that the correlation coefficients become worse (even decidedly poor) when you apply a 5-year moving average to the series (called low-freq in the table above). This is extremely hard to explain unless the NHD is influenced by a strong non-climatic residual trend.
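The mechanism is easy to illustrate with synthetic series (made-up data, not the real records): smoothing strengthens the agreement when the disagreement is random event-to-event scatter, while a non-climatic residual trend ruins the low-frequency agreement.

```python
import numpy as np

def smooth(x, w=5):
    # centred w-year moving average (the series is shortened at the ends)
    return np.convolve(x, np.ones(w) / w, mode="valid")

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
n = 83                                           # e.g. 1923-2005
climate = np.sin(2 * np.pi * np.arange(n) / 25)  # slow common climate signal
activity = climate + 0.5 * rng.standard_normal(n)
surge = climate + rng.standard_normal(n)         # signal + random scatter
nhd = climate + rng.standard_normal(n) + 0.08 * np.arange(n)  # + residual trend

for name, s in [("surge-like", surge), ("NHD-like", nhd)]:
    print(name,
          "raw:", round(corr(s, activity), 2),
          "low-freq:", round(corr(smooth(s), smooth(activity)), 2))
```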
I suggest that Pielke Jr. follows his own advice: "if you are looking for climate signals in extreme events, look first at climate data" - do not use the loss records. Yet he continuously trots out the normalized damage record in the media to imply that there is no climatic trend in hurricane activity.
I find it interesting to plot the frequency of extreme normalized damage events. I have chosen to define extreme damage so that it corresponds to roughly the same number of events as UScat1-5.
Surprisingly, there is a strong trend. Let me emphasize that the same clear trend is not present in total normalized damage per year. It clearly shows that the distribution is not stationary. I find Pielke's explanation that this is partly due to changes in reporting plausible and even probable. However, the point is that this suggests the normalization has not removed all non-climatic influence, and that with this normalization damages are not stationary. I also note that there is a positive trend in the frequency of top150 and top50 damage events.
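For reference, this is all the counting amounts to (a sketch with hypothetical array names; years and damages would come from the normalized damage event list):

```python
import numpy as np

def extreme_event_counts(years, damages, n_extremes):
    """Count events per year whose normalized damage exceeds the threshold
    chosen so that exactly n_extremes events qualify in total (picked to
    roughly match the number of UScat1-5 landfalls)."""
    threshold = np.sort(damages)[-n_extremes]
    extreme_years = years[damages >= threshold]
    all_years = np.arange(years.min(), years.max() + 1)
    counts = np.array([(extreme_years == y).sum() for y in all_years])
    trend = np.polyfit(all_years, counts, 1)[0]  # least-squares slope
    return all_years, counts, trend
```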
I am not trying to bash the NHD or the normalization scheme that Pielke has devised (even if I think it only corrects for the dominant effects). It is important work, and I think he is right to attack the use of non-normalized records. I merely want to say that there is a limit to what we can hope to get out of the damage records. It is not possible to fully correct for the huge societal changes over the 20th century to the point where we can extract the climatic trend in damages directly. Instead, I propose that we look at individual events and the high-frequency signals if we want to tease out the sensitivity of damages to changes in the environment (e.g. warming / hurricane statistics / surge statistics).
I guess there is no reason to look at other data when you have already found bias-adjusted data that tells you what you want to hear. Or as Pielke Jr. says: "If you want to look at trends in hurricanes, there is absolutely no need to construct abstract indices as there is actually good data on hurricanes themselves." - Except, of course, that pretty much everybody agrees that this "good data" needs to be corrected for a significant bias in the pre-satellite era. I invite anybody who prefers it to look directly at the surges recorded in the tide records themselves rather than at my abstract index.
Disclaimer: This is not a blog but my personal homepage, and I do not invite comments on this page. I will edit this page as I see fit. Consider this a public draft. Mail or tweet me if you have any comments.
I have recently written a paper where I estimate global glacier volume to be 35 cm of sea level equivalent using a statistical approach. The approach is a refinement of volume-area scaling where I include additional predictors of volume: elevation range (R), glacier length (L), and continentality (C). Concretely, I fit a multiple linear regression in log space, i.e. relationships of the form: V = k * A^γ1 * R^γ2 * L^γ3 * C^γ4
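In code, this fit is just ordinary least squares on logarithms. A sketch (the arrays V, A, R, L, C are placeholders for the measured-glacier data):

```python
import numpy as np

def fit_scaling(V, A, R, L, C):
    """Fit log V = log k + g1*log A + g2*log R + g3*log L + g4*log C."""
    X = np.column_stack([np.ones_like(V), np.log(A),
                         np.log(R), np.log(L), np.log(C)])
    beta, *_ = np.linalg.lstsq(X, np.log(V), rcond=None)
    return np.exp(beta[0]), beta[1:]        # scaling constant k, exponents g

def predict_volume(k, g, A, R, L, C):
    return k * A**g[0] * R**g[1] * L**g[2] * C**g[3]
```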
Recent estimates of global glacier volume:
| Study || Estimate (sea level equivalent) |
| Grinsted 2013 || 35 cm +/- 7 cm |
| Huss & Farinotti 2012 || 43 cm +/- 6 cm |
| Radic & Hock 2010 || 60 cm +/- 7 cm |
| Radic & Hock 2010 applied to RGIv2* || ~50 cm |
A more complete history is given in Cogley 2012.
(*) I have seen two different updated RH10 estimates in presentations by Hock and by Huss; both were around 50 cm, but they differed by about 5 cm.
My estimate of 35 cm is below Radic & Hock's 2010 estimate, even when RH10 is updated to use the new RGIv2 inventory. Both estimates are scaling based, so the reason for the difference must be found in the applied volume-area scaling relationships. The figure below shows that there is a clear positive bias in the Radic & Hock 2010 ice cap relationship. It is less obvious for the glacier relationship. However, when the Radic and Hock 2010 relationships are applied to the 220 glaciers and 34 ice caps in this sample, both result in a 40-50% too great total volume. One key difference between my study and Radic and Hock (2010) is that I calibrate the scaling law myself, whereas they use published scaling relationships. These published relationships were calculated using a smaller subset of the glacier volume data than was available to me. It will be interesting to see what happens as the database of glacier volume estimates grows.
Radic & Hock (2010) use Chen & Ohmura's (1990) glacier relationship V = 0.2055 * A^1.36, but replace the exponent with 1.375 as theoretically argued by Bahr et al. (1997) (here A is in m^2 and V in m^3). However, the units of the scaling constant depend on the exponent, so the Chen and Ohmura constant simply does not apply for an exponent of 1.375. To illustrate the problem, first express Chen and Ohmura in km^2 and km^3 units: V = 0.029704 * A^1.36. Then follow RH2010 and replace the exponent with 1.375. If we apply the resulting scaling law to a large 500 km^2 glacier, we get V = 153 km^3 when we start from km-units, but V = 188 km^3 if we start from m-units as in RH10 - and we would get V = 139 km^3 if we had kept Chen & Ohmura's γ = 1.36. I believe this to be the origin of the positive bias in Radic and Hock (2010). I discuss this further in my paper, and also show how well other scaling relationships match the total volume in the volume database (see fig. 3c&d, reproduced here).
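The arithmetic is easy to check (a quick sketch; the constants come from the text above):

```python
gamma_CO, gamma_RH = 1.36, 1.375
k_m = 0.2055                             # Chen & Ohmura: V in m^3, A in m^2

# A[m^2] = 1e6 * A[km^2] and V[m^3] = 1e9 * V[km^3], so converting the
# relationship to km-units rescales the constant (valid only for gamma=1.36):
k_km = k_m * 10**(6 * gamma_CO - 9)      # ~0.029704

A = 500.0                                # km^2, a large glacier
print(k_km * A**gamma_RH)                # ~153 km^3: km-units, swapped exponent
print(k_m * (A * 1e6)**gamma_RH / 1e9)   # ~188 km^3: m-units, swapped exponent
print(k_km * A**gamma_CO)                # ~139 km^3: Chen & Ohmura's exponent
```

The km-start and m-start answers disagree precisely because the constant was never valid for the 1.375 exponent.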
The solid black curves show the combinations of exponents and constants that result in exactly the same total volume as in the volume database. We see that RH10 falls a good distance above the line, indicating that it results in too large a total volume (total volume scales 1:1 with k for a given γ). The original Chen & Ohmura relationship (k = 0.029, γ = 1.36), on the other hand, falls very close to the line.
New physical methods
Huss & Farinotti have also very recently estimated global glacier volume (~43 cm) using a truly innovative flux-conservation approach, in which it is necessary to assume both a surface mass balance and a calving flux. This approach has the great advantage that it results in ice thickness grids for every glacier in the world, and thus allows for a much more detailed validation of the results. It can also be used as the starting point for more physical models of glacier mass loss. The individual volume estimates are probably mostly better than scaling-based estimates for individual glaciers. However, that does not mean that the total volume will also be better if there is a bias, and that is a real concern because the parameterisations have been calibrated on rather small empirical datasets. I am also concerned about the potential for bias for the largest glaciers and ice caps. That is why I think more statistical approaches like mine remain important.
A good overview of different methods for volume estimation can be found in the introduction to Linsbauer et al. 2012.
Area-Volume scaling and slope.
I have since become aware that volume-area scaling apparently is controversial, and the main arguments against it seem to be three-fold:
- Please note that my statistical model actually does allow for slope to be taken into account, but only does so if it leads to an actual improvement in a validation against withheld data. This is possible because R, L and A are predictors, and mean slope can be calculated as R/L or alternatively as R/sqrt(A); in log space these are just linear combinations of the existing predictors, so the regression is free to pick up a slope dependence. From ice-physical considerations you have an expectation for which exponent the slope should have in the thickness (and hence volume) relationship. However, there are many reasons why nature might deviate from the theoretically derived relationship. One reason could be that the thermal regime, and thus the mechanical properties of the ice, are related to vertical range (think of the temperature lapse rate). Indeed, Haeberli and Hoelzle (1995) use vertical range in their parameterisation of basal shear stress. Personally, I believe that continentality also plays a strong role in the thermal regime and should be included in the basal shear stress parameterisation. For that reason I think you have to be careful if you are going to extrapolate a basal shear stress parameterisation calibrated on continental glaciers to maritime environments. Neglecting this effect could lead to a bias in the total volume. I am not arguing that my own estimate is without bias; rather, I am arguing against the attitude that there is only one way of doing this and that all other methods are wrong.
- Volume is obviously not independent of area, since V = A * H, so part of the correlation between A and V is trivial. It is therefore argued that the correlation appears more impressive than it really is. This has led to a mantra that it is somehow wrong to plot area against volume. In my opinion it is not. Such plots show that volume and area correlate - it simply does not matter whether the correlation is trivial or not. You can clearly predict volume from area, and that is what such a plot shows. Of course you should not be overly impressed by a high R^2 value, and you should in general be careful how you do the statistics in such a case. In the case of my own paper, it would make absolutely no difference to the total volume estimate, nor to the percentage deviation from the fitted line, whether I plotted thickness or volume on the y-axis in figure 2.
- There are no measurements of volume, only measurements of thickness which are then inter-/extrapolated to the entire glacier. I agree. However, the scatter in area-volume plots (the deviation from a straight line) is much greater than the plausible level of uncertainty in the volume estimates. The dominant source of the scatter is simply that area and volume are never perfectly correlated. The uncertainty in the scaling regressions comes primarily from this real scatter in the data, to the point where the measurement uncertainty has a negligible influence. Even with perfect measurements of area and volume the scatter would still be there. So, in short, I think this "no volume measurements" critique is misplaced, as its effect on the total volume estimate will be very minor.
- Ice physics dictates that ice velocities increase as the slope increases. Thus a thinner glacier can accommodate the same mass flux when the slope is steeper (see e.g. Haeberli and Hoelzle 1995, or textbooks such as Paterson). We thus expect slope to be important for the thickness, and hence the volume. This has led some people to conclude that "slope must be included in volume estimates". It is true that this will lead to better estimates of the volume of individual glaciers, but that does not mean that you need to include it, or even that it leads to better estimates, when you estimate the total volume of hundreds of thousands of glaciers. Indeed, you could even choose to estimate global glacier volume statistically as V_tot = average thickness * total area, even though that is poor at capturing the volumes of individual glaciers - the only problem being how to estimate the average thickness of all the glaciers in the world. The best argument I have against the "slope must be included" position is a test where I show that plain volume-area scaling performs well against global model data in which slope effects are included. This demonstrates that, to the best of our physical understanding, there is no reason to expect that volume-area scaling would not work in the real world.
Please email me any comments on this page - especially if you disagree. This is apparently much more controversial than I thought when I wrote the paper.
Here are some links to different software packages that I plan to try:
I might also look into sift++ for feature tracking in time-lapse photography.
Here's what I did (so far):
Install Linux in a VirtualBox. This is where I want to have my GAMIT installation.
Inside Ubuntu, do the following:
- Get the GAMIT sources from MIT (you have to check here for how to get access)
- Install dependencies
- sudo apt-get -y update
- sudo apt-get -y upgrade
- sudo apt-get -y install build-essential # installs gcc, g++ and GNU make
- sudo apt-get -y install gfortran
- sudo apt-get -y install csh
- sudo apt-get -y install tcsh
- sudo apt-get -y install libx11-dev
- Go to the directory where the source files are located (in my case ~/gg) and execute:
- chmod +x install_software
- ./install_software
- This extracted all the tar files, but eventually failed for me with an error referring to OS_ID.
- I then edited ~/gg/gamit/Makefile.config:
- I changed the line "OS_ID Linux 0001 3000" to "OS_ID Linux 0001 4000".
- If your OS is newer than mine, you might have to make this number even higher.
- I also modified it to say "X11LIBPATH /usr/lib/i386-linux-gnu" near the top.
- I then executed ./install_software again, and it seems to be working: it is currently compiling lots of stuff.
Success. And finally the install script gave these instructions: ~/gg needs to be linked to the gamit/globk version you just installed (the script prompts: "Create the gg link in your home directory to the version of gamit/globk you just installed? (y/n)"), and you also need to set up the paths in your .cshrc & .bashrc files.
Add this to .cshrc (but replace XYZ with a 3-letter abbreviation for your institute):
setenv HELP_DIR ~/gg/help/
setenv INSTITUTE XYZ
set path = ($path ~/gg/gamit/bin ~/gg/kf/bin ~/gg/com)
And the equivalent for .bashrc (my own translation of the setenv lines to bash syntax):
export HELP_DIR=~/gg/help/
export INSTITUTE=XYZ
export PATH=$PATH:~/gg/gamit/bin:~/gg/kf/bin:~/gg/com
Track still does not work for me. It just says "KILLED". My current theory is that there was not enough memory at compile time. I will try to recompile with more memory.
UPDATE: track works after a recompile with more memory assigned to the virtual machine.
Greenhouse gases are warming the world, and all else being equal this will lead to more frequent heat waves. You might naively think that this increase in extreme warm spells will be compensated by a similar decrease in the number of cold spells, and that the damage to society will stay the same. However, that is simply not true statistically, even if you assume that the damage from a cold spell is the same as the damage from a warm spell.
Let me demonstrate with a simple experiment.
Let's say that the "pre-industrial" temperature distribution can be represented by a normal distribution, and that events more than 2 standard deviations away from the mean are classified as either heat waves or cold waves. We generate 10,000 events of pre-industrial temperatures and count the number of warm and cold events. We then add a mean warming signal and count the number of warm and cold events in this warmed climate. Here are the results:
| Climate || Heat waves || Cold waves || # extremes |
| "Pre-industrial" || 214 || 223 || 437 |
| "Warmed" || 1576 || 10 || 1586 |
In the warmer climate we got significantly fewer cold waves (10 compared to 223). However, this decrease was more than compensated by a large increase in the number of heat waves (214 to 1576). The total number of extreme events increased by a factor of 3.6, and this increase will be greater the further out in the tail you look. It is intuitively clear that it must be so if you imagine the case of extreme warming: you cannot decrease the number of cold waves below zero, but a heat wave will happen virtually every time you roll the dice if you apply sufficient warming. Rahmstorf makes the same argument in the reply to Hoerling on Dot Earth.
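A minimal sketch of the experiment (the warming is set to one standard deviation here, an assumption that roughly reproduces the counts above):

```python
import numpy as np

rng = np.random.default_rng(42)
T = rng.standard_normal(10000)            # "pre-industrial" temperatures

for label, warming in [("pre-industrial", 0.0), ("warmed", 1.0)]:
    Tw = T + warming                      # add the mean warming signal
    heat, cold = (Tw > 2).sum(), (Tw < -2).sum()
    print(label, heat, "heat waves,", cold, "cold waves,", heat + cold, "extremes")
```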
This post was provoked by some statements by Martin Hoerling in the Dot Earth commentary on Coumou and Rahmstorf's recent work on climate extremes. In it he said: "Note that the increase in heat waves was largely balanced by a decrease in cold waves".
No evidence is presented for this statement (it is, after all, just an interview comment). I can only assume that they observe this for one of the following reasons:
- The thresholds for warm and cold extremes have been chosen much closer to the mean value.
- They look at changes over a short time interval where the warming signal is very small compared to weather variability.
- (... or the shape of the distribution has changed dramatically)
These are probably necessary 'evils' in order to obtain a sufficient number of extreme events to do credible statistics, as the observational period is short. But these processing choices would also obscure any possibility of detecting the distribution changes we are talking about. Therefore, this line of evidence cannot be used to claim that Coumou and Rahmstorf are using "exaggerated language".
Here is a finite element simulation of strong winds in the Scharffenbergbotnen blue ice area.
Simulations made by Torsten Malm (Aalto Univ.) and Thomas Zwinger (CSC - IT Center for Science) using Elmer, animation made by Jyrky Hokkanen (CSC). [Thesis]
It so happens that I have some validation data. But first, let's take in the view of the valley from one of the northernmost nunataks (north is towards the left side of the YouTube video).
Photo taken from one of the peaks on the left side of the video (on a nice day).
The camp was on the moraine to the left of the icefall in the back of the valley.
My validation data:
The strong winds at the deep end of the valley occur roughly where we had a camp in 2006/7.
After a particularly violent storm we came back to our camp and found:
- The pyramid tent gone (it used to be where all the boxes are scattered). We found it later a kilometre away.
- Food scattered tens of kilometres away from our camp site (found by SWEDARP people).
- Our nice Hilleberg tent completely broken.
- The huge sledge flipped upside down.
- A huge mallet blown ~100 metres away.
- We lost lots of sample bottles etc. in this storm, but we managed to collect samples anyway.
So yep ... the model is right!
There are definitely very strong winds where it predicts they should be.