Daniel Kahneman’s seminal book Thinking, Fast and Slow provides the foundation for much of our current understanding of behavioural finance. It introduces the idea that the human brain processes information in two ways: automatically (thinking fast) and deliberatively (thinking slow)1.
These automatic thinking processes can be hugely valuable. After adequate training they can lead us to quick and skilled responses or intuitions. Combined with associative memory, they can rapidly produce a coherent pattern of activated ideas that allows us to significantly shorten problem-solving tasks.
The problem with automatic thinking, however, is that it can just as rapidly infer and invent causes and intentions. Rapid processing can cause us to focus on existing evidence, while ignoring absent evidence. It occasionally reads more into information it is presented with than is warranted. It overweights low probabilities. It is more sensitive to changes than to states. It frames decision problems narrowly, when a broader picture would lead to a different conclusion2.
To some extent this is a function of the way we are hardwired: we simply couldn’t apply deliberative thinking to the tsunami of information and decisions that we are flooded with each day. But we should not underestimate its importance. The impressions and feelings generated by our automatic systems inform the explicit beliefs and reflective choices of the deliberative system – not the other way around. Still, we simply cannot ignore the potential for our automatic thinking system to produce sub-optimal financial decisions.
The Law of Small Numbers dominates the marketing machinery of our financial services industry. It’s our ultimate automatic thinking quick-sort tool. No matter how many times we are reminded (and remind others in turn) about how hugely destructive our adherence to it can be, our industry continually reverts to it when trying to ‘sell a point’.
So, what is this Law of Small Numbers and why is it so problematic? Kahneman uses an example from statisticians Howard Wainer and Harris Zwerling of a Gates Foundation project to illustrate the problem.
The foundation was interested in funding schools that they believed had the highest potential for success. Researchers applying for funding produced data suggesting that successful schools needed to be small. In their study, of the 1 662 schools in Pennsylvania that they analysed, they classified 6 of the top 50 schools as small. On the surface that seemed to be fairly compelling evidence. It meant that small schools out-represented large schools in the success pool by a factor of 4. As Kahneman noted: “It was easy to construct a causal story that explained how small schools are able to provide superior education and thus produce high-achieving scholars by giving them more personal attention and encouragement than they could get in larger schools3.”
Unfortunately, in this case, the inference was wrong – although this wasn’t fully appreciated until after the Gates Foundation had made a considerable investment in the project. Had the researchers asked instead about the characteristics of the schools with the worst success rates, the answer would have been that small schools also had the worst success rates. How could that be?
The issue is not about the size of the school at all, but about what happens when we work with small sample sizes. Small samples are prone to producing outcomes that are significantly more variable, and potentially more extreme. In a large sample, random variation largely averages out; in a small sample, a handful of chance results can easily skew the outcome. Simply put, we are far more likely to see examples of extraordinary and extreme outcomes in a small data set than in a large one.
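The school example can be reproduced with a minimal simulation (the numbers below are purely illustrative assumptions, not the Pennsylvania data): every student is drawn from the same score distribution regardless of school size, yet the averages of small schools spread out far more widely, so small schools dominate both the top and the bottom of any ranking.

```python
import random
import statistics

random.seed(42)

def school_means(n_schools, students_per_school):
    """Average test score per school. Every student is drawn from the
    SAME distribution (mean 50, sd 10) - school size is the only
    difference between the two groups."""
    return [
        statistics.mean(random.gauss(50, 10) for _ in range(students_per_school))
        for _ in range(n_schools)
    ]

small = school_means(500, 20)    # small schools: 20 students each
large = school_means(500, 500)   # large schools: 500 students each

# The spread of school averages shrinks with school size
# (roughly 10/sqrt(n)), so small schools over-populate BOTH extremes.
print("sd of small-school averages:", round(statistics.stdev(small), 2))
print("sd of large-school averages:", round(statistics.stdev(large), 2))
```

Ranking these simulated schools by average score would put mostly small schools in the top 50 – and mostly small schools in the bottom 50 as well, exactly the pattern Wainer and Zwerling describe.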
In financial services, the application of The Law of Small Numbers is most prevalent when we provide performance numbers to investors. How often do we walk past airport billboards that shout the praises of a top performing fund or investment manager? And yet a ten-year performance record of excellence is statistically meaningless as an indicator of skill in the broader scheme of things. We are using very limited and very noisy data sets to make sweeping claims about what is happening and why those outcomes have been achieved. Still, research from BNP Paribas on fund flows in the South African unit trust industry provides evidence that money flows towards strong historical performance records4.
Even when we believe that our sample set is easily large enough to justify our conclusions, our misunderstanding about probability distributions can lead us to jump to conclusions that defy substantiation. As Nassim Taleb famously pointed out in relation to Warren Buffett’s extraordinary 30-year performance history, if one considers that in this case the sample size represents the several hundred million other investors in the world (and that’s probably a low estimate), the odds are extremely high that someone could have achieved Warren Buffett’s performance results by sheer chance – and not by skill at all5. In other words, a random distribution of the performances from several hundred million investors would have yielded at least one data point, if not dozens of data points with outcomes of that order. So…is Buffett skilful? Here is where the statistics defeat us again. We simply do not have a large enough data sample representing Buffett’s range of investment decisions to say with statistical certainty that Buffett is skilful.
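Taleb’s argument can be sketched numerically. The definition of a “Buffett-like record” below is our own assumption for illustration (beating the market in at least 25 of 30 years, with a 50/50 chance of beating it in any year by pure luck); the point is only that, multiplied across hundreds of millions of investors, even a tiny per-investor probability yields many lucky “Buffetts”.

```python
import math

# Illustrative assumptions (not actual market data):
years = 30        # length of the track record
wins_needed = 25  # years of outperformance that count as "Buffett-like"
investors = 300_000_000  # "several hundred million investors"

# Probability that pure coin-flip luck produces such a record:
# P(at least 25 wins out of 30 fair coin flips).
p_record = sum(math.comb(years, k)
               for k in range(wins_needed, years + 1)) / 2**years

# Expected number of investors achieving the record by chance alone.
expected_lucky = investors * p_record

print(f"P(record by luck alone) ~ {p_record:.2e}")
print(f"Expected lucky 'Buffetts': {expected_lucky:,.0f}")
```

Under these assumptions the per-investor probability is tiny (on the order of 1 in 6 000), yet the expected number of purely lucky record-holders runs into the tens of thousands – which is why one spectacular track record, on its own, cannot statistically establish skill.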
When it comes to our financial decision-making, it’s the reflexive, fast-thinking part of our processing, the automatic mode, that plays the greatest role in our everyday processing. Cynically put, marketing departments count on the fact that consumers aren’t thinking very deliberatively when they make financial decisions.
Consider these examples:
When cash-strapped consumers consider such ‘grudge’ purchases as short-term or medical coverage, cost will often be a deciding factor. Marketing to these customers will often be deliberately framed to highlight this point. In reality, a tantalisingly low premium may mask the fact that there are high embedded excesses or copayments involved, so the promised protections afforded by the policy may well turn out to be significantly more expensive than initially understood. A sub-optimal fast-thinking financial decision will invariably nudge out a better ‘value-for-money’ decision if the consumer is given no basis on which to deliberate.
What about this perennial problem? Payday comes and it’s clear an employee is not going to be able to make that required down-payment for the living-room set he has promised his family. But how convenient is the offering of a quick R300 payday loan to address this ‘consumption emergency’? For only R15 per R100 for every two weeks that the loan is outstanding, the individual’s problem is solved. What’s missing, of course, is that critical deliberation insight that helps consumers appreciate that after a mere two months, the charges on that R300 loan are now R180.
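The payday-loan arithmetic above can be made explicit. This is a minimal sketch (the function name is ours, for illustration) of a flat fee of R15 per R100 borrowed, charged for each two-week period the loan stays outstanding:

```python
def payday_fees(principal, fee_per_100, periods):
    """Total fees on a payday loan: a flat fee of `fee_per_100` rand
    per R100 borrowed, charged for each two-week period the loan
    remains outstanding (fees accumulate; no compounding)."""
    return principal / 100 * fee_per_100 * periods

# R300 loan at R15 per R100 per two weeks, outstanding for two months
# (four two-week periods):
print(payday_fees(300, 15, 4))  # -> 180.0 rand in fees on a R300 loan

# The same fee structure over a full year (26 two-week periods) amounts
# to a simple annualised rate of 15% x 26 = 390% of the principal.
```

Seen as an annualised rate rather than a small per-period fee, the true cost of the ‘consumption emergency’ loan becomes much harder to ignore – which is precisely the deliberation the quick-sale framing suppresses.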
Much of financial education and advice presumes that financial decision-making falls into the deliberative thinking camp for individuals. Consumers may have been properly coached in the powers of compounding, or the wisdom of long-term investing and diversification, for example, but even this knowledge foundation can be undermined when framing or anchoring end up short-circuiting a considered outcome.
While the academic world now appears to fully grasp that ‘economic man’ (a rational decision-maker) is probably a fiction, our policies and financial advice frameworks simply don’t go far enough in terms of acknowledging this reality. Financial education starts first with trying to teach individuals basic principles. Implicitly this means that we believe we can get results by simply appealing to the deliberative side of human decision-making. To date, the outcomes of such thinking suggest this is not a useful assumption. As we pointed out in Benefits Barometer 2014 in the article 'Failure to launch', a 2014 study on financial literacy, financial education and downstream financial behaviours6 concluded that interventions to improve financial literacy explain a mere 0.1% of the variance in financial behaviour.
Counterintuitive as this may sound, an individual’s goal-setting and prioritisation is far more likely to reside in the automatic than the deliberative thinking space. Just think how many bad choices we human beings make around the deployment of our financial resources. For that reason we need a major rethink of how we could use our insights about automatic thinking to create outcomes that speak directly to greater financial capability.
We know, for example, that automatic thinking is shaped by the accessibility of different features of the situation. Seemingly unimportant features in decision-making, such as how many choices a person has to make, can completely unhinge effective decision-making. When people think automatically, the way choices are presented and the context under which decisions are made is critical.
This means that if we are going to push back the tsunami of influence from automatic thinking that can produce a horrific undertow of sub-optimal decision-making, we need to pay more attention to how choices are presented and the context in which decisions are made.
This is our starting point. We can also provide little educational ‘nudges’ to get people to think through problems more deliberatively. Or we may need to completely rethink how we provide advice.
In its 2015 World Development Report, entitled Mind, Society, and Behavior, the World Bank introduced the following example of how to quietly introduce financial education at a critical point in an individual’s financial decision-making. In this field experiment, individuals received payslip envelopes illustrating on the back how the cost of borrowing from a payday lender (a short-term loan or loan shark) compares with borrowing the same amount on a credit card. Providing a consistently reinforced message with every pay cheque created an important level of anchoring around the concepts of ‘good’ debt and ‘bad’ debt.
Now let’s tackle the bigger mind-shift. How could we completely rethink the way we provide financial advice to individuals? Consider this challenge: consumers are flooded with information about what the financial services industry believes they need. Financial products are often sold on the basis of why they would be critical to the individual. Given that we are trying to stimulate financial empowerment and financial well-being, would the framing not be far more effective if we told individuals what they don’t need? We take exactly this approach in 'Insuring what matters' when we discuss decision-making around short-term insurance and medical aid.
In a field experiment, randomly chosen borrowers received envelopes that showed the dollar fees they would accumulate when a payday loan is outstanding for three months, compared to the fees to borrow the same amount with a credit card.
Borrowers who received the envelope with the costs of the loans expressed in dollar amounts were 11 percent less likely to borrow in the next four months compared to the group that received the standard envelope. Payday borrowing decreased when consumers could think more broadly about the true costs of the loan.
What’s required in all these examples is a sharper focus on decision-making by both individuals and their advisers, such that choice architecture dramatically simplifies the trade-offs being made with each decision and helps individuals quantify the outcomes of these decisions.
In Benefits Barometer 2014, in the article 'Understanding the employee-employer contract', we provided one example of such a tool when we described a payroll application that could help members derive a better understanding of how their employment contracts and benefit structures provide such present, future and future perfect solutions. We also showed how better choice architecture in an investment framework could improve outcomes for umbrella fund members.
In this edition we provide a framework for decision-making around medical and short-term insurance coverage decisions that starts with the question: what if I did nothing?
We would get better results if we focused on how to protect individuals from the reality of their automatic thought processes. Alternatively, we could consider ways to use the natural process of automatic thinking to get individuals to make more of the ‘right’ decisions to address their well-being objectives.
Here is the conflict: we cannot increase deliberative thinking simply by increasing the information we give to individuals. The more information we flood individuals with, the less likely we are to see the right decision – much less any decision at all. We need to change that dynamic.
Our starting point is to determine whether the financial decision in question is more likely to be addressed by the individual with an automatic or a deliberative process. Only when we know the answer to that question will our interventions be effective. This is what should dictate how we formulate our advice process and our collateral marketing documents. (The forests of our planet will thank us for taking that little extra step.)
The financial well-being challenge is in finding the most effective way to walk that thin line between presuming or dictating what would be in the best interests of an individual’s well-being, and recognising that most individuals simply do not have the wherewithal to know that answer – at least not if it involves projecting out hugely complex and variable decision trade-offs into the future.
1 Kahneman (2011)
2 World Bank (2015)
3 Kahneman (2011)
4 Gopi (2015)
5 Taleb (2010)
6 Fernandes, Lynch & Netemeyer (2014)