
Science & pseudoscience of quantitative techniques

Eric recently wrote a fascinating piece on the ‘volatility virus’ that has contaminated the professional investment community. This blog post builds on that piece by considering the use – more specifically, the misuse – of quantitative ‘analysis’ more generally.

It is important to point out that I am not ‘anti-quant’. Quite the contrary, in fact; I am all in favour of scientific analysis, interpreted correctly and with integrity. What I warn against is the false confidence so frequently taken from quantitative results: pseudoscientific conjectures masquerading as science, asserted with the utmost conviction.

When it comes to economics and finance, we should recognise that we are all vulnerable to the pseudoscience trap. There are two principal reasons for this:

  1. First, it is uncomfortable to admit the chaotic and unpredictable nature of economies and financial markets. It is much more reassuring to convince ourselves that we know this or that with some degree of certainty. We place excessive confidence in the figures and charts generated by various studies and ignore the wide confidence intervals around their results. We want to believe there is an ‘answer’, and we are impressed by those who provide one, especially when ‘considerable’ statistical evidence is offered in support. Each of us wants to impress too: admitting we know little with any degree of certainty doesn’t feel impressive.
  2. The second vulnerability is a result of selection biases in the industry. Many people in finance have a predisposition towards numbers, and this is where their cleverness lies. They are happier crunching numbers or plotting graphs than questioning the often dubious premises on which the analysis is built. Growing computing power and the institutional desire to appear at the cutting edge reinforce the trend.


The other selection bias is a result of the highly competitive and financially lucrative nature of the industry. The most prestigious organisations are full of confident high achievers, used to winning and getting things right. These people tend to be both convinced and convincing, and we are likely to be taken in by them. They are also the people we want to hear from (or so we fool ourselves).

A few concrete examples of the powerful attraction of pseudoscience are illustrative:


The level of the neutral real interest rate or R*

We have previously written questioning the concept of R*, but even assuming the concept remains valid, you rarely hear about how uncertain estimates of R* really are.

Even Fed officials have fallen into this trap – at least, that is what comes across in public:

The work by Laubach and Williams has become the standard model for estimating R*, which is widely assumed to be around 0%. What is ignored when commentators quote this research is that the 95% confidence interval around this estimate ranges from roughly -5% to +5% (the issue of uncertainty around these estimates is explicitly discussed here) – hardly a basis for arguing about small changes in the appropriate level of the Fed Funds rate. And what about other models?
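
To see how little a point estimate pins down when the band around it is that wide, consider a minimal back-of-envelope sketch in Python. To be clear, this is not the Laubach-Williams model itself: the 0% estimate and the roughly ±5% band come from the text above, and the implied standard error is an assumption reverse-engineered from that band.

    import math

    def norm_cdf(x: float) -> float:
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    point_estimate = 0.0              # widely quoted R* estimate, in percent
    ci_half_width = 5.0               # 95% band of roughly -5% to +5%
    std_error = ci_half_width / 1.96  # implied standard error, ~2.55pp (assumption)

    for candidate in (-4.0, -2.0, 0.0, 2.0, 4.0):
        z = (candidate - point_estimate) / std_error
        p_value = 2.0 * (1.0 - norm_cdf(abs(z)))  # two-sided p-value
        verdict = "indistinguishable" if p_value > 0.05 else "distinguishable"
        print(f"R* = {candidate:+.1f}%: p = {p_value:.2f} ({verdict} from 0%)")

On these numbers, any neutral rate between about -4% and +4% is statistically indistinguishable from the quoted 0% – which is the whole point.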


Economic forecasts (e.g. around Brexit)

In the run-up to the Brexit referendum Michael Gove, the former education secretary, famously said, “I think we’ve had enough of experts”, for which he was widely criticised. Had his comment been a general dismissal of expertise, the criticism would have been fair enough.

But he had a point, because the experts in question were economic forecasters. The problem, again, is not that the economic analysis around Brexit was careless or deceitful, but rather the way such analysis is projected to the wider public in the media. Economic forecasts come with wide margins of error and considerable uncertainty – lots of unexpected things can happen. Nuanced analysis doesn’t go down well on TV or social media. Emotionally charged and ideological debates – Brexit, Trump policies, Keynesian vs Classical economics – quickly depart from honest and objective analysis.

The lesson is that economic forecasts always need to be taken with a pinch of salt. There will usually be surprises. But that is not how economic statistics or forecasts will be portrayed on the news.
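
To make the width of those margins concrete, here is a minimal sketch in Python. All of the numbers are hypothetical assumptions chosen for illustration – a 1.5% point forecast for one-year-ahead GDP growth and a 1 percentage point root-mean-square error, roughly in line with typical historical one-year-ahead forecast errors – not figures from any actual Brexit study.

    import random

    random.seed(42)

    point_forecast = 1.5    # hypothetical one-year-ahead GDP growth forecast, %
    historical_rmse = 1.0   # assumed RMSE of past one-year-ahead forecasts, pp

    # Draw outcomes consistent with that track record of forecast errors
    outcomes = sorted(random.gauss(point_forecast, historical_rmse)
                      for _ in range(100_000))

    lo = outcomes[int(0.025 * len(outcomes))]
    hi = outcomes[int(0.975 * len(outcomes))]
    prob_contraction = sum(o < 0.0 for o in outcomes) / len(outcomes)

    print(f"Point forecast:                  {point_forecast:.1f}%")
    print(f"95% range given past errors:     {lo:.1f}% to {hi:.1f}%")
    print(f"Chance of outright contraction:  {prob_contraction:.0%}")

A headline number of 1.5% turns out to be consistent with anything from a mild contraction to growth above 3% – which is rarely how it is reported.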


Analysis of investment portfolios

The statistical analysis conducted on fund managers’ portfolios has never been more detailed: factor exposures, risk contribution by position, hit ratios and correlations are all dissected as never before. In many places, much of this has been internalised; ‘statistical arbitrage’ is a fund category in itself.

Is all of this helpful? Done in the right way, probably. But the hurdle must be high. Is the sample period representative? Is there sufficient history? How stable are the correlations? Is this noise or am I on to something? Should I be levering up gross long/short positions on the basis of statistical correlations or might those correlations change?
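
On the stability of correlations in particular, a minimal sketch in Python (with simulated data – an illustration of sampling noise, not of any fund’s actual process) shows how widely a rolling correlation can swing even when two return series are, by construction, completely unrelated:

    import math
    import random
    import statistics

    random.seed(7)

    # Two independent daily return series: the true correlation is zero
    n, window = 1_000, 60
    a = [random.gauss(0.0, 0.01) for _ in range(n)]
    b = [random.gauss(0.0, 0.01) for _ in range(n)]

    def correlation(xs, ys):
        """Pearson correlation of two equal-length series."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / math.sqrt(var_x * var_y)

    rolling = [correlation(a[i:i + window], b[i:i + window])
               for i in range(n - window + 1)]

    print("True correlation: 0.00")
    print(f"Rolling {window}-day estimates range from "
          f"{min(rolling):+.2f} to {max(rolling):+.2f}")

With windows this short, swings of around ±0.3 are unremarkable from pure noise – thin evidence on which to lever up a long/short book.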


Conclusion

Uncomfortable as it is to recognise, in the words of my colleague Dave Fishwick, “you can’t delegate thought”. More than ever, we are today bombarded by statistics and quantitative analysis. As clever and scientific as much of it seems, in many cases it is anything but. The scientific method is one of questioning and objectivity (as best as can be achieved). It is more scientific to reject pseudoscience, admit ignorance and act accordingly than to accept faulty analysis. In this vein, the investment approach of Warren Buffett, which ignores volatility, correlation and economic forecasts, is scientific – they are empirically irrelevant to long-term returns.

The point is not that quantitative analysis is a bad thing, just that we have to think. We should take a leaf out of the great physicist Richard Feynman’s book: “Since then I never pay any attention to anything by ‘experts’. I calculate everything myself”.


The value of investments will fluctuate, which will cause prices to fall as well as rise and you may not get back the original amount you invested. Past performance is not a guide to future performance.