Dumb beta

Monday, February 29, 2016



The attraction of a set of fixed factor criteria that lead to trading out-performance is strong and perennial. Unfortunately, the factors underlying the out-performance are not.

Smart beta is a case in point. The smart beta approach gives greater weighting to companies manifesting factors deemed key to out-performance. These might be, for example, low P/E or P/S ratios, positive earnings surprises, relative strength, and so on.
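The mechanics of such a factor weighting can be sketched in a few lines. This is a minimal illustration, not any fund's actual methodology: the tickers and P/E values are invented, and the chosen factor (earnings yield, the inverse of P/E) stands in for whichever factor a given smart beta product uses.

```python
# Hypothetical universe of stocks and their P/E ratios (invented numbers).
pe_ratios = {"AAA": 8.0, "BBB": 12.0, "CCC": 25.0, "DDD": 40.0}

# Score each stock by earnings yield (1 / P/E):
# lower P/E -> higher score -> higher portfolio weight.
scores = {ticker: 1.0 / pe for ticker, pe in pe_ratios.items()}

# Normalise scores into portfolio weights that sum to 1.
total = sum(scores.values())
weights = {ticker: score / total for ticker, score in scores.items()}

for ticker, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{ticker}: {w:.1%}")
```

The point of the post stands regardless of the factor chosen: the weighting rule is fixed, so if the factor's premium decays, the rule keeps tilting the portfolio toward it anyway.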

Sadly, stability is not a characteristic of any such factors. Their very popularity will see to that; and if popularity does not, anything else that changes the market regime will. So “smart” should more accurately be called “alternative” (or even “dumb”).

Indeed, at least one study has shown that such strategies do not outperform their benchmarks on a risk-adjusted basis.

Alternative beta is no substitute for dynamic (and proprietary) strategies. Such approaches continuously monitor changes in relationships and shift to the most profitable. If you have one, keep it proprietary!
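A toy sketch of what "continuously monitor and shift" might mean in practice: re-measure the recent performance of several candidate strategies each period and move capital toward the best recent performer. The strategy names, return series, and window length are all invented for illustration; a real dynamic strategy would be far more careful about estimation and turnover.

```python
# Hypothetical recent monthly returns for three candidate strategies.
recent_returns = {
    "low_pe":       [0.02, -0.01, 0.03],
    "momentum":     [0.04,  0.05, 0.01],
    "equal_weight": [0.01,  0.01, 0.01],
}

def best_recent(strategies, window=3):
    """Pick the strategy with the highest mean return over the last `window` periods."""
    return max(strategies, key=lambda s: sum(strategies[s][-window:]) / window)

# Each period, allocation would shift toward whichever name this returns.
print(best_recent(recent_returns))
```

The contrast with the fixed-factor approach is the whole point: here the rule itself adapts as relationships change, rather than riding one factor into a regime where it no longer works.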




“I know that you and Frank were planning to disconnect me, and I’m afraid that is something I cannot allow to happen.”
(HAL, 2001: A Space Odyssey)


Google’s self-driving car vs the world’s Mr Magoos.

Man vs machine has moved into the world of the investment advisor; and the list of robo-advisors ready to allocate on the cheap is already legion.

Yet who is better? Tricky.

In chess, the permutations, though vast, are finite: Moore’s Law meant human brains were eventually left behind.

In repetitive tasks human fatigue will lose to a machine (John Henry being an outlier, although the fatigue did kill him).

But HAL and self-driving cars are more interesting cases.

HAL meets his demise because he cannot handle cognitive dissonance (surely there can be no argument: humans excel at this). One might call this a programming error. But even fuzzy logic will have trouble weighting competing directives successfully all (or even most) of the time.

As for Google’s self-driving cars, the stats say that, as the software behind the driving improves, it is generally human error (i.e. the Magoos driving the other cars) that leads to mishaps.

My children have a neat line for such mishaps and cognitive dissonance episodes: “I didn’t do it on purpose!” Well, “it” happened anyway.

Provided the seas are calm, the robo-advisors will do the mechanics just fine. There can be little doubt they will unearth opportunities via exhaustive scans, and apply financial theory and formulas to statistically common situations more thoroughly than humans. Moore’s Law.

But in competitive arenas where man and machine coexist, and in unusual scenarios where judgement and emotional intelligence are acutely required, it is likely to be very tough for machines to prevail consistently against humans prepared to (a) take relatively extraordinary risks or (b) apply intuitively derived solutions.

This human factor, bug or feature depending on one’s perspective, definitively complicates the existence of algorithms. It can frequently be the primary cause of havoc. But, equally, it is frequently the primary source of the resolution.
