Frequently Asked Questions
If the graphics do not display correctly, try switching the figure renderer:

set(gcf,'renderer','painters')
set(gcf,'renderer','zbuffer')
set(gcf,'renderer','opengl')
set(findobj(gca,'type','patch'),'alphadatamap','none','facealpha',1)

Additionally, make sure you have the latest OpenGL drivers for your graphics card.
figure('color',[1 1 1])
t=(0:1e-8:500e-8)';
X=sin(t*2*pi*5e6)+randn(size(t))*.1;
Y=sin(t*2*pi*5e6+.4)+randn(size(t))*.1;
wtc([t X],[t Y])
freq=[128 64 32 16 8 4 2 1]*1e6;
set(gca,'ytick',log2(1./freq),'yticklabel',freq/1e6)
ylabel('Frequency (MHz)')
figure('color',[1 1 1])
t=(0:1:500)';
X=sin(t*2*pi/11)+randn(size(t))*.1;
Y=sin(t*2*pi/11+.4)+randn(size(t))*.1;
wtc([t X],[t Y],'mcc',0); % mcc: MonteCarloCount

Note that the significance contour cannot be trusted without running the Monte Carlo test.
Here is an example that does just that:

t=(0:1:500)';
X=sin(t*2*pi/11)+randn(size(t))*.1;
[Wx,period,scale,coi,sig95]=wt([t X]);
incoi=period(:)*(1./coi)>1;
p=[100 64; 100 10; 50 64]; % are these points inside the COI?
ispointincoi=interp2(t,period,incoi,p(:,1),p(:,2),'nearest')
You can use anglemean.m provided with the package. Here is a small example that calculates the mean angle at the period closest to 11:
t=(0:1:500)';
X=sin(t*2*pi/11)+randn(size(t))*.1;
Y=sin(t*2*pi/11+.4)+randn(size(t))*.1;
[Wxy,period,scale,coi,sig95]=xwt([t X],[t Y]);
[mn,rowix]=min(abs(period-11)); % row with period closest to 11
ChosenPeriod=period(rowix)
[meantheta,anglestrength,sigma]=anglemean(angle(Wxy(rowix,:)))

If you want to restrict the mean to be calculated over significant regions outside the COI, then you can do it like this:

incoi=(period(:)*(1./coi)>1);
issig=(sig95>=1);
angles=angle(Wxy(rowix,issig(rowix,:)&~incoi(rowix,:)));
[meantheta,anglestrength,sigma]=anglemean(angles)
This cannot always be done, and when it can, it should be done with care. A 90 degree lead might equally well be a 90 degree lag relative to anti-phase, so the conversion suffers from a non-uniqueness problem. Furthermore, a phase angle can only be converted to a time lag for a specific wavelength. The conversion works best for determining the time lag when the series are near in-phase.
wavelength=11;
phaseangle=20*pi/180;
timelag=phaseangle*wavelength/(2*pi)

A visual inspection of the time series at the wavelength in question should make it clear if the time lag is right. I also recommend calculating the time lag with other methods for support.
The phase arrows show the relative phasing of the two time series in question. This can also be interpreted as a lead/lag. How it should be interpreted is best illustrated by an example:

figure('color',[1 1 1])
t=(1:200)';
X=sin(t);
Y=sin(t-1); % X leads Y
xwt([t X],[t Y]); % the phase arrows point south-east
You have to be very careful interpreting XWT peaks. If you take the XWT of a signal with pure white noise, then the XWT will look very similar to the WT of the signal. The same problem exists in 'normal' power spectral analysis: if you calculate the cross power spectral density of a periodic signal with a white-noise signal, you will get a peak. A peak alone does not mean that the series have any kind of connection. I recommend examining the WTC and the phase arrows. If there is a real connection, then you would expect the phenomena to be phase-locked, i.e. that the phase arrows point in only one direction for a given wavelength. So, if the arrows vary between in-phase and anti-phase, it is a clue that the series probably are not linked.
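To see the pitfall for yourself, here is a minimal sketch (using the package's xwt function) that crosses a periodic signal with a completely unrelated white-noise series:

```matlab
figure('color',[1 1 1])
t=(0:1:500)';
X=sin(t*2*pi/11)+randn(size(t))*.1; % periodic signal
Y=randn(size(t));                   % pure white noise, independent of X
xwt([t X],[t Y])
```

Even though the two series are independent, the XWT shows high common power near period 11. The giveaway is the phase arrows, which should vary erratically rather than point consistently in one direction.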
The definition of wavelet coherence (WTC) effectively normalizes by the local power in time-frequency space. WTC is therefore very insensitive to the noise colour used in the null hypothesis (see Grinsted et al. 2004). This can easily be demonstrated by example:
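For instance, one can rerun the WTC significance test with a deliberately wrong noise colour and compare the results. This sketch assumes your version of wtc accepts an 'ar1' option for overriding the estimated AR1 coefficients (check the help text of wtc.m):

```matlab
t=(0:1:500)';
X=sin(t*2*pi/11)+randn(size(t))*.1;
Y=sin(t*2*pi/11+.4)+randn(size(t))*.1;
figure('color',[1 1 1])
subplot(2,1,1)
wtc([t X],[t Y])             % AR1 coefficients estimated from the data (default)
subplot(2,1,2)
wtc([t X],[t Y],'ar1',[0 0]) % force a white-noise null hypothesis
```

The significance contours in the two panels should be very similar, illustrating that the WTC significance test is robust to the assumed noise colour.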
The null hypothesis in the significance tests for WT, XWT and WTC is normally distributed AR1 noise, with the AR1 coefficient and process variance chosen to best fit the observed data. It is therefore quite important that the data are close to normal and reasonably well modeled by a Gaussian AR1 process. Otherwise the null hypothesis can be rejected trivially, and the significance level calculated by the program is not appropriate. However, the Central Limit Theorem tells us that the distribution tends towards normality as we convolve with longer and longer wavelets (in the absence of long-range persistence). This means that the data distribution is only really important on the shortest scales, so if we are primarily looking at longer scales we do not need to worry so much about it. For the WT and XWT, however, the colour of the noise is very important, and a very non-normal distribution will affect the performance of the AR1 estimators (ar1.m & ar1nv.m). The WTC is relatively insensitive to the colour of the noise in the significance test (see Grinsted et al. 2004).
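If you are worried about the estimators, a quick sanity check is to run them on a synthetic AR1 series with a known coefficient. This sketch assumes ar1nv returns the lag-1 coefficient followed by the noise variance (check the help text of ar1nv.m):

```matlab
g_true=0.7;      % known AR1 coefficient
x=zeros(1000,1);
for ii=2:numel(x)
    x(ii)=g_true*x(ii-1)+randn;
end
[g,a]=ar1nv(x)   % g should come out close to 0.7
```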