Why do we act in the ways that we do, and how can we apply our knowledge of human behaviour to the world of business?
We will explore behavioural economics and what we know about how people make decisions. Additionally, you will learn how vital it is to choose the right messenger, and about the influence of those who communicate information. You will understand what norms are, and how and why people follow them. With this knowledge, you will learn how to apply and harness positive norms for specific audiences.
Identify the ways in which you can harness ego to create desired behaviours with:
- Decision Making
- High-Risk Consequences
- Interpersonal Issues
- Learn how to utilise effective messengers, understand the influence of socio-cultural norms, and recognise the role of the ego to better serve the needs of your employees and your customers.
- Discover nudging and the rational model
- You’ll explore behavioural economics and what we know about how people make decisions.
- This will help you identify the shortcomings of the rational model of human behaviour and give you an understanding of heuristics and biases.
- You’ll learn how vital it is to choose the right messenger, and about the influence of those who communicate information.
- Exploring the main characteristics and differences between hard and soft messengers, you’ll learn how to choose the right one to communicate an effective message.
- Learn how to harness norms and ego
- You’ll understand the definition of norms, and how and why people follow them. With this knowledge, you’ll learn how to apply and harness positive norms to specific audiences.
- You’ll also identify the ways you can harness ego to create desired behaviours.
- Demonstrate how to leverage effective behaviour change using different social psychological factors.
Behavioural science is the science of human behaviour. It combines insights from many fields but principally sits at the intersection between the two disciplines of economics and psychology. This learning activity will introduce the economic approach to human behaviour. It’s often said that economists like to explain 90% of the behaviour with 10% of the information, relying on a bare-bones model of behaviour which hopes to predict rather than describe. Traditionally, economists have been interested in how people respond to incentives: if we change the costs and benefits of different courses of action, then people will respond in predictable ways, without us needing to know exactly what is happening inside their heads.
The canonical model, sometimes referred to as homo economicus, depicts humans as self-interested and hyper-rational creatures. Like Star Trek’s Mr Spock, a character who operates solely within the realms of pure logic, humans have all the information about their decisions, know what they want, and have unlimited willpower and cognitive capacity. They think meticulously through every consequence of their decisions and act accordingly. They not only have access to all available information—they are able to process that information perfectly rationally.
Although few economists truly believe this is how humans behave, they believe it is a good starting point for predicting human behaviour, on average. Through modelling people as having individual preferences, and acting consistently based on those preferences, we can know how they will respond to changes in the costs and benefits of their decisions.
Many criticisms of this rational model have focused on its unrealistic assumptions. People rarely have all the information they need to make a decision. People are not just self-interested but are social, behaving altruistically towards others and hoping that their behaviour will be reciprocated. And people use a variety of heuristics, or mental shortcuts, which help them to make day-to-day decisions.
An important insight of the economic approach is that these nuances of human behaviour may not matter for prediction. The notion that ‘price goes up, demand goes down’ is a cornerstone of the economic approach. When economists model this, they assume that people respond to prices like Spock, paying attention to everything they could buy and responding immediately to any changes in price so that they get the best deal. As a result, if the price of a commodity increases, people will buy less of it—and vice-versa.
Although the model isn’t strictly true, several authors have shown that you can instead assume that people are irrational (that is, they choose what they buy completely at random) and still predict that a higher price means less demand. On average, the rational theory does give us sensible predictions about consumer behaviour: namely, that people will buy more of something when it’s cheaper, and less when it’s more expensive.
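The random-choosers result can be illustrated with a small simulation. This is a hypothetical sketch, not taken from the authors mentioned above: each consumer spends a uniformly random fraction of a fixed budget on the good, with no rational calculation at all, yet the average quantity demanded still falls as the price rises.

```python
import random

def average_demand(price, budget=100.0, n_consumers=10_000, seed=1):
    """Average quantity bought when each consumer spends a uniformly random
    fraction of their budget on the good -- no rationality assumed."""
    rng = random.Random(seed)
    total_quantity = sum(rng.random() * budget / price for _ in range(n_consumers))
    return total_quantity / n_consumers

demand_cheap = average_demand(price=2.0)   # average spend ~50, so ~25 units
demand_dear = average_demand(price=4.0)    # same spend buys only ~12.5 units
print(demand_cheap, demand_dear)
```

Even though no individual is optimising anything, the budget constraint alone produces the downward-sloping relationship between price and quantity that the rational model predicts.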
One consequence (or cause) of economists’ approach is how relaxed they are about individual preferences. They generally view choice as a good thing and think individuals should not be judged, or at least not interfered with by politicians for the decisions they make. In the model of homo economicus it doesn’t matter what preferences are for—they could be for unhealthy foods or a gym membership—because people will still respond in the same way to changes in prices, income, and other characteristics of the goods. To an economist, all that really matters for individuals to be rational is that they behave consistently: if they prefer bananas to apples, and apples to oranges, they shouldn’t prefer oranges to bananas. There is no judgment over which is the better fruit to choose.
Another reason for the longevity of this standard economic model is its versatility. Many of the quirks of decision-making can be readily incorporated into the model. Economists can therefore see precisely how departures from reality lead to behaviour that deviates from the rational model. This is why behavioural economics, which accounts for these deviations, has become so widely accepted into the mainstream.
How does an economist’s approach differ from that of a psychologist?
In this article, we will look at the information deficit model, an approach used by both disciplines, and examine where it falls short.
Psychologists have typically been interested not just in our behaviour but in our attitudes, perceptions, moods, and beliefs. They are interested in describing our behaviour as well as predicting it.
According to psychologists (and other social sciences), we only have access to limited information in order to make our decisions. We have limited ability to process this information and may disagree on what it means. We have to choose which information to pay attention to, and how to incorporate it into our behaviour. Additionally, this process is affected by factors in our environment outside of our control, such as the people around us or even the weather.
Psychologists spend a lot of time on the following descriptions of behaviour:
- How goals and values are formed.
- How individuals perceive reality.
- How people reason their way towards a decision.
- How irrational processes (like emotions or external stimuli) affect behaviour.
These generally require direct studies of individuals. For this reason, many psychologists’ empirical studies are laboratory experiments that look in-depth at a limited number of people, allowing them to investigate behaviour at a high level of detail. For example, a widely used approach in psychology tracks eye movement to see what people are paying attention to. Individuals who have a choice between different foods—crisps, bars, sweets, drinks, sandwiches—may only look at a limited number of them, forgetting about sandwiches and drinks as they make their choice. They will also look at the crisps more if they are going to choose them.
Another prominent idea in psychology is the intention-behaviour gap. We frequently pledge to eat more healthily, drink or smoke less, to exercise, to be more patient, kinder, and more productive, but we frequently fall short. If choices simply reflect preferences, then there’s no room for a gap between what we want to do and what we end up doing. But this is simplistic and leaves out the all-important question of how we bring our behaviour into line with our intentions, which may be good for our well-being.
Ironically, although psychologists have a radically different approach to economists, they have historically converged on a similar model of how to change behaviour, known as the information deficit model. This holds that improving decisions is a matter of providing people with the right information: a pamphlet, a website, or a class. For psychologists, if our behaviour is determined by our attitudes, beliefs, and goals, then by changing the way we think we can change our behaviour. For economists, information can help us to better approximate the rational model and understand the costs and benefits of our decisions.
As we will see throughout this article, the information deficit model leaves a lot to be desired. A major failing is its inability to explain why more and better information does not always lead to better decisions. Sometimes information will have no effect at all on behaviour, and at other times it might lead people to actively ignore it. There continues to be an anti-science sentiment—vaccine hesitancy, conspiracy theories, creationism, and climate change denial—even among educated people and in an age of freely available information.
We have seen that economists have traditionally focused on predicting behaviour, using parsimonious theories to make testable statements about people’s decision making.
We have also seen that psychologists have traditionally focused on describing behaviour, investigating the nuances and complexities of individuals and how they make decisions.
How do psychologists view risk?
We all have attitudes to risk in our day-to-day lives, and all look at risk differently: we are risk-averse, risk-neutral, or risk-loving. Some of us might invest in stocks while others choose a savings account at the bank. Some of us may participate in extreme sports while others exercise in their back garden. Some of us may completely change our career paths mid-life, while others prefer to stay the course in their current job.
Economists typically approach risk through a framework known as expected utility. As you might guess, this involves the diligent calculation of the costs and benefits of a decision, weighted by the probabilities of each outcome. For example, suppose you had a choice between a bet where you toss a coin and win $100 if it comes up heads and nothing if it comes up tails, or taking $50 for certain. The expected value of the gamble (0.5 × $100 = $50) is the same as the certain amount.
Economists classify people as risk-averse if they turn down the gamble and take the certain amount (as most people would), as risk-loving if they accept the coin toss, and as risk-neutral if they do not mind either way. Economists tend to observe people’s choices when faced with gambles like these (and many more complex ones) to measure their risk attitudes and test their theories of behaviour.
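The coin-toss example can be worked through in a few lines of code. This is a minimal sketch of the standard expected-utility calculation: the square-root utility function below is an illustrative assumption, chosen simply because any concave function captures risk aversion.

```python
import math

def expected_value(gamble):
    """Expected monetary value: each payoff weighted by its probability."""
    return sum(prob * payoff for payoff, prob in gamble)

def expected_utility(gamble, utility):
    """Expected utility: the utility of each payoff weighted by its probability."""
    return sum(prob * utility(payoff) for payoff, prob in gamble)

coin_toss = [(100.0, 0.5), (0.0, 0.5)]   # $100 on heads, $0 on tails
certain = 50.0

print(expected_value(coin_toss))          # 50.0 -- same as the certain amount

# A risk-averse person has a concave utility function, e.g. the square root.
# The gamble's expected utility (5.0) is below the utility of the sure $50
# (about 7.07), so a risk-averse person takes the certain money even though
# the two options have equal expected value.
print(expected_utility(coin_toss, math.sqrt), math.sqrt(certain))
```

A risk-loving person would have a convex utility function (for example, squaring the payoff), which reverses the comparison and makes the gamble more attractive than the certain amount.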
As a takeaway task, we would like you to consider instead how psychologists might approach risk attitudes.
- How would we describe, as opposed to predict, attitudes to risk, remembering the key differences between how psychologists and economists approach things?
- Which influences might there be on people’s risk behaviour that aren’t accounted for in the above example?
- Would people have the same risk attitudes over time and across contexts?
- Is behaviour rational?
As we have found so far in the articles this week, there seems to be a clear affinity between the economic and psychological approaches, despite the differences we have seen.
By bringing a richer and more realistic account of how human beings make decisions, psychologists can help economists make accurate predictions and better understand the world. By bringing rigour at the theoretical and empirical levels, economists can help psychologists to discipline their approach rather than having a range of conflicting experimental results.
It was two psychologists, Daniel Kahneman and Amos Tversky, who revolutionised the field of behavioural economics when they applied psychological insights to economic models. Kahneman and Tversky were fascinated by the quirks of human decision-making and identified a number of heuristics and biases that we still refer to today (Tversky and Kahneman, 1974).
Heuristics are the mental shortcuts we use to make decisions. Biases result when a heuristic leads to faulty decision making. As Kahneman and Tversky are keen to emphasise, heuristics are extremely useful, but they can misfire, and when they do, they lead to biases.
A famous example of theirs was of a sergeant in the Israeli Air Force who praised his pilots for landing well and admonished them for landing badly. He noticed that the ones he praised went on to perform disappointingly in the next round, while those he admonished went on to perform better. He concluded that praise was counterproductive, while criticism was effective: the most obvious explanation based on the information available to him.
The sergeant was unaware of a widely observed statistical phenomenon known as regression to the mean. Tall parents tend to have children shorter than themselves; students who perform well in one test tend to do worse in the next; pilots who land smoothly tend to mess up next time, while those who land badly tend to do better. It is likely that neither the sergeant’s praise nor his criticism was having much effect.
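The sergeant’s mistake can be reproduced with a short simulation. This is an illustrative sketch under a simple assumption: each landing score is a pilot’s stable skill plus independent luck on the day, with no feedback effects at all.

```python
import random

rng = random.Random(42)
n = 10_000

# Each landing score = stable skill + independent luck on the day.
skill = [rng.gauss(0, 1) for _ in range(n)]
first = [s + rng.gauss(0, 1) for s in skill]
second = [s + rng.gauss(0, 1) for s in skill]

def mean(xs):
    return sum(xs) / len(xs)

# Rank pilots by their first landing; take the best and worst deciles.
order = sorted(range(n), key=lambda i: first[i])
worst, best = order[: n // 10], order[-(n // 10):]

# The "praised" group falls back towards the average and the "admonished"
# group improves -- even though no praise or criticism exists in the model.
print(mean([first[i] for i in best]), "->", mean([second[i] for i in best]))
print(mean([first[i] for i in worst]), "->", mean([second[i] for i in worst]))
```

Because an extreme first score partly reflects extreme luck, and luck does not repeat, the second score drifts back towards each pilot’s underlying skill regardless of any intervention.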
Kahneman and Tversky also popularised the notion that people compare themselves to reference points, which they explore in their widely cited paper Prospect Theory: An Analysis of Decision Under Risk. Reference points are points of comparison: individuals might notice changes in their wealth relative to their existing level, rather than the absolute level. So, if your salary starts at a low level and increases each year, you might feel better than if you started with a much higher salary that gave you more money in total over the years but that never increased. We will look at reference points much more later in the course.
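The idea of reference points can be sketched as a simple value function. The shape below, where losses loom larger than gains, comes from prospect theory; the specific parameter values (curvature 0.88, loss aversion 2.25) are commonly cited estimates from Tversky and Kahneman’s later work, used here purely for illustration.

```python
def value(outcome, reference=0.0, alpha=0.88, loss_aversion=2.25):
    """Reference-dependent value: outcomes are coded as gains or losses
    relative to the reference point, and losses loom larger than gains."""
    change = outcome - reference
    if change >= 0:
        return change ** alpha
    return -loss_aversion * ((-change) ** alpha)

# Against a reference salary of 50,000, a 1,000 raise feels good...
gain = value(51_000, reference=50_000)
# ...but a 1,000 cut feels considerably worse than the raise feels good.
loss = value(49_000, reference=50_000)
print(gain, abs(loss))   # the loss weighs about 2.25 times the gain
```

This is why the steadily rising salary in the example above can feel better than a flat, higher one: each year’s pay is evaluated as a gain relative to last year’s reference point, rather than by its absolute level.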
Behavioural science has followed on from the examples of Kahneman and Tversky, as well as other pioneers like Richard Thaler and Cass Sunstein, to identify the key drivers of human behaviour. It involves the use of mathematical models which are not strictly true, but which predict behaviour well in the lab or the field. It also involves some looking ‘under the hood’ of human behaviour and measuring brain activity. Behavioural science is inherently pluralist in its approach, seeking to use the best methods to understand human behaviour.
A prominent example is the availability heuristic, in which we judge how likely something is by how vivid and readily available it is in memory. Availability explains why people tend to fear statistically improbable accidents such as terrorism or lightning strikes, which are dramatic and often reported by the media, more than probable ones like car crashes or falling down the stairs, which are more mundane but more likely and just as deadly.
How does availability bias affect you? What type of things do you worry about in terms of risks of going out or risk of death? Are you more likely to be scared of terrorism than a fall down the stairs?
As the discipline of behavioural science has emerged, some key ideas have become central to how we think about behaviour. Perhaps the most prominent of these is the idea that there are two systems of thinking.
The Dual Processing model
Behavioural scientists often use the Dual Processing model of human behaviour, popularised by Daniel Kahneman in his bestseller Thinking, Fast and Slow (1). The model splits human decision making into System 1 and System 2, which can be thought of as processes in the brain.
System 1 is fast, effortless, and automatic—it’s how you know that 2 + 2 = 4. It is considered more basic and is shared by animals and humans. Sometimes it might happen 100% automatically, like being automatically repelled by an insect or salivating when seeing an appetising dish.
System 2 is slow, effortful, and reflective—you would use it to work out 37 x 59. It is slow and sequential, allowing for the abstract and analytical thinking that characterises the human mind. System 2 is more closely related to what we would normally consider intelligence.
We will learn much more about Dual Processing Theory (DPT) as the course goes on, but for now, two warnings are worth bearing in mind. Firstly, DPT does not claim, and it is not considered true, that there are literally two systems in the brain. There are many areas in the brain, and virtually any thought process will engage multiple areas at once. DPT is a model used by behavioural scientists to understand behaviour, rather than a literal description of the brain.
Secondly, it is tempting but inaccurate to think of System 1 as irrational and System 2 as rational. Although System 2 is generally more reflective and would be used for making big, complicated decisions, System 1 can be extremely fast and effective, even with limited information.
The German psychologist Gerd Gigerenzer has emphasised the benefits of intuition and of using System 1 for some decisions: doctors regularly make quick, life-saving decisions (2). This is a good example of how extensive training and experience using System 2 allows decisions to move into System 1 and, on the face of it, be made without much effort. Think also of how sports players train hard and then play as if they are not thinking about their shots.
Reflection on two systems
How does the two-system model make you reflect on your own behaviour? Can you think of everyday examples where you use System 1 and System 2?
So far, we have seen that behavioural science sits at the intersection of the two disciplines of psychology and economics, and has evolved to incorporate the best elements of both to become the science of human behaviour.
While psychologists have been primarily interested in describing behaviour, economists have primarily been interested in predicting behaviour. Drawing their insights together has led to rich and realistic models of behaviour that can be used to predict behaviour change.
Behavioural scientists have identified the heuristics and biases that characterise human behaviour. Heuristics are the mental shortcuts we use to make decisions quickly and with minimal effort, while biases result when these heuristics misfire.
A prominent example is the availability heuristic, in which we judge how likely something is by how vivid and readily available it is in memory. Availability explains why people tend to fear statistically improbable accidents such as terrorism or lightning strikes, which are dramatic and often reported by the media, more than probable ones like car crashes or falling down the stairs, which are more mundane but more likely and just as deadly.
Behavioural scientists most commonly draw from a model known as Dual Process Theory (DPT), popularised by Kahneman, which splits the human mind into two systems. System 1 is fast, effortless, and automatic; System 2 is slow, effortful, and reflective. System 1 knows how to walk without thinking too much about it; System 2 is used to decide which direction you need to walk in to get to the restaurant. System 2 is more often associated with what we think of as intelligence, but that doesn’t mean it is always right or morally superior; functioning without System 1 would be nigh-on impossible.
The most widespread application of behavioural science has been nudge, popularised by Thaler and Sunstein. Nudge harnesses DPT to argue that we can change behaviour by changing the environment in which people operate. Traditionally, the Information Deficit Model (IDM) held that changing behaviour was just a matter of providing people with the right information so they could make informed decisions—in other words, engaging System 2 by changing their minds. However, this has proved ineffective, as changing people’s minds is extremely difficult.
System 1 is strongly affected by the decision-making environment: people will go for the easiest choice, the most attention-grabbing choice, the choice others are making, or the expert-endorsed choice. By manipulating the environment, we can exert reliable influences on behaviour without having to change minds. Crucially, nudge relies on a libertarian paternalist approach: while choices are affected, none are taken away. People can still make the choice they want to make but are nudged towards ones that are judged to be good for them and for society at large. The use of nudge has proliferated across the globe, with many successful examples in areas ranging from health to crime to the environment.
We have synthesised existing behavioural science to come up with the MINDSPACE framework, a mnemonic that captures nine reliable influences on behaviour. MINDSPACE stands for Messenger, Incentives, Norms, Defaults, Salience, Priming, Affect, Commitment, and Ego. It is a recipe book from which you, as future behavioural scientists, are encouraged to draw in order to design ways to change behaviour pragmatically and carefully. In the rest of this course, we will explore each element of the MINDSPACE framework in much more detail.
Have you ever gone back to check if you turned the oven off or if you locked the house? If you have, it’s because you’re having to think about something that you’ve automated. Your System 1 has made it easy for you to turn the oven off and lock the house up without you having to think about it. So when you think about whether you’ve done it, you’re not sure. Dual processing models have really been the biggest advance we’ve seen in behavioural science over the last couple of decades. So much of what we do simply comes about rather than being thought about.
Now, it is worth saying that there aren’t really two systems in the brain, but it’s a nice characterisation of how we make our decisions, because for a very long time we thought of System 2, your effortful, conscious, thoughtful self, as the star of the show. If you think about something, you act upon it. Actually, it turns out that, as I say, most of what we do simply comes about rather than being thought about. System 1, your automatic, unconscious, effortless brain, is making most of your decisions for you. And most of the time, that’s fantastic, right? Because it makes life easy for you. It turns the oven off, it locks the house.
This is all very exciting because what it means is that we can think about changing behaviour without having to change people’s minds. Because, actually, changing your mind is really effortful, isn’t it? And actually, we don’t do it very often, right? Think about the last time you actually really changed your mind about something important. Probably quite a long time ago, if ever. Whereas changing environments or context within which people act can have very powerful effects on what we do.
So this dual processing model is actually really exciting if we want to change behaviour, because we don’t have to rely on changing people’s minds. We don’t need to change the way they think about things, but change the environments within which they act. So, for example, if you want to eat more healthy food, or you want your staff to eat more healthy food, then put the healthy food towards the front of the canteen rather than the back. You’re making it easier for System 1 to do the right thing without having to properly engage System 2 to think about whether to do the right thing or not.
This provides us with so many opportunities to not only change behaviours in individuals, but in large populations relatively cheaply, easily, and quickly.
We would like you to reflect on the usefulness of nudging, based on your research from the previous step. Nudges have been tried in a wide variety of settings: in some they have proved useful, while in others they may not have worked as well. In some settings nudges may be enough, but some behavioural scientists now argue for stronger policies that move beyond nudge to change behaviour. There are also those who argue against nudges on ethical grounds, an issue that will come up later in the course.
Use the following poll to tell us your opinion of nudge, and the comments section to discuss with other learners the nudge(s) you investigated and why they led you to the conclusion they did.
Based on your research from the previous step, do you think nudging is an effective way to change behaviour?
Yes, nudging is effective
No, nudging is not effective
It wasn’t that long ago that many more planes crashed than is the case now. You know, one of the main reasons for that is that planes were taking off without a co-pilot. Now, that seems a very obvious thing, doesn’t it, to have a co-pilot present? It’s so obvious that it’s often overlooked. When a pilot is sitting in the cockpit, he or she is paying attention to what’s in front of them. Are the engines running, and so on? But not paying attention to what fundamentally matters, for example, whether there’s a co-pilot there. This is called situational blindness. We pay attention to what’s in front of us, and not necessarily to what really matters.
One of the most powerful ways to overcome situational blindness, and this has led to fewer planes crashing, is to have very simple checklists. The pilot would have a checklist that contains the item: is there a co-pilot? Tick, done, the plane takes off safely. The power of checklists has been shown in so many other environments, too. For example, surgical operating theatres, where, again, really simple items on the checklist have a really significant effect on outcomes. Are we operating on the right patient? I mean, really, really obvious things that sometimes get overlooked. I can’t emphasise enough to you how much the obvious gets overlooked. And we can think about using checklists in behavioural science, too.
Let’s draw out the main mechanisms that influence us, largely through System 1 processes, so that policymakers and decision makers have those in front of them when they’re designing behavioural interventions.