What Behavioral Economics Can Teach Privacy Officers

A common refrain WireWheel hears from its customers is how difficult privacy officers find it to socialize privacy risk and embed it into organizational processes. Privacy risks, while opaque and obscure to many people, are quite tangible to an organization handling consumer data in terms of regulatory compliance, information security, product design, and consumer trust.

Behavioral economics has studied how people assess risks, and why they often simply ignore them. Perhaps privacy officers can look to behavioral economics to uncover ways to more effectively nudge people in the organization to take the steps necessary to avoid potential privacy threats and gain that often-elusive buy-in.

WireWheel’s VP, Product Marketing, Judy Gordon met with behavioral sciences expert Michael Hallsworth, Ph.D. to elicit his insights at the Fall Spokes Privacy Technology Conference. Hallsworth is Managing Director, Americas, at The Behavioural Insights Team. He previously held positions at Columbia University and Imperial College London, and is the author of Behavioral Insights (MIT Press, 2020). You can view their conversation “What Behavioral Economics Can Teach Privacy Officers” here.[1]

Mental shortcuts

The core insight here is that our behavior tends to be more unthinking than we assume, shaped by our immediate context. We tend to believe that when we make decisions we weigh costs and benefits, pay attention to all the available information, and then make a considered decision on the basis of that information.

While people will weigh cost and benefit, Hallsworth notes that this is effortful. It takes time and we often simply don’t like doing it. And importantly, in many circumstances, that process is simply too slow to be useful, so we deploy mental shortcuts which “operate quickly, intuitively, and without our awareness.”

One example: Do what other people are doing. “And this mental shortcut [or heuristic] in response to our environment is necessary to navigate the world. We couldn’t be weighing up all the decisions we make all the time. So, we have habits. We have a usual way of dealing with situations.” But sometimes those mental shortcuts can also create problems for us.

Ultimately, behavioral science attempts to “understand how we really make decisions in context and try to design products, policies, or services to take that into account.”

What information gets through our filters?

“One thing we know is really important is this idea of salience,” says Hallsworth. “What information do people pay attention to? And the way we allocate our attention follows some predictable paths.”

We need to understand how we filter information. We’re bombarded by information all the time in our day-to-day lives, and certain things get through those filters because they attract our attention.

And those things are generally quite salient. They’re easy to understand and they create a vivid picture in our minds.

Consider, for example, how much television news time is allocated to different kinds of natural disasters.

Researchers found that “things like famines and droughts, which are based on things not happening, require thousands of times more deaths to get the same attention as one death from a volcano,” which is spectacular and draws attention. It gets through the kinds of filters we put up.

“The same thing happens with risk perceptions.” Think here of the salience of plane crashes versus road travel, which is in fact far riskier. And “salience can also be about how we see what others are doing.”

The salience of risk and information aversion

We talk about social norms quite a lot, as we often infer the right thing to do from others. Consequently, one very obvious thing to look at is what other people are doing. And if no one else has any visible risk mitigations in place, well, we conclude we should be okay.

Although we know in theory there’s a risk, the reasoning goes, if no one else is doing anything, I’m probably okay.

“Information aversion” is another factor. Even though we understand that there is a risk, it makes us uncomfortable, so we “set it aside.” “One way of resolving that discomfort is to actually take mitigation procedures. An easier way might be to explain it away. To rationalize it and avoid information that brings it to mind.”

“What’s the point just getting worried about it? You know, it’s safer, easier, and more pleasant, not to know, really.” These are just a couple of the ways people think about risk without always taking the mitigation steps that lead to good practical outcomes.

Privacy risk and the challenge of salience

But companies need to be practical. “Information aversion” will likely not fly as a rationale in response to regulators in the event of compliance breaches. And Gordon pointedly asks, “What is it that most leaders and organizations may be doing wrong when they try to get people or their companies to protect themselves against privacy risk?”

“It’s a really difficult challenge,” acknowledges Hallsworth.

Part of the problem is one of salience. Privacy online, or privacy issues in general, can be quite abstract [and] theoretical rather than provoking the emotional reaction that may lead to action.

The really difficult thing here is striking a balance: making these kinds of ideas salient enough to prompt action, to make it real for people, without creating an unproductive kind of anxiety. Because what can happen is that the effort goes into managing the anxiety (the information aversion) and fear becomes the result.

“Instead, what you really need to do is to create a need for action, but connected to practical, specific actions” that manage the potential anxiety. “We’re not that good at connecting the general idea to the specific actions that people can take. Identifying those actions, putting them in the right order, and then communicating them at the right time in ways that people can work on.”

One underlying problem here might be that if you are, say, a Chief Privacy Officer, privacy risk is of course very real to you. You are deep in the detail. And we know from behavioral science that if you’re really familiar with something, you tend to think others are just as familiar with it and see it as being as important as you do: the ‘illusion of similarity.’

Organizational privacy as heuristic

“Behavior is strongly shaped by environment and immediate context. This is the key insight from a lot of the behavioral science literature. This means that even if you know the right thing to do…it’s that practical constraint of what it’s like to work in an organization that will actually be what determines behavior more powerfully than beliefs or intentions.”

So, what you really have to do, then, is think about how to build the privacy actions into institutional processes.

The way that you can ensure a behavior endures is to make it not effortful. You want it to become habitual, and the way to make it habitual is to build it into institutional processes.

Privacy professionals seeking to systematize privacy would benefit from leveraging heuristics: “thinking about how to help people create their own mental shortcuts.” Perhaps, offers Hallsworth, rather than creating big lists of complex things people have to do that compete with their daily challenges, look at establishing a few general rules of action. This may be more effective than a complex list of things “that people have to bear in mind and consciously remember.”

Why consumer actions are at odds with their privacy values

While acknowledging “the jury’s still out,” Hallsworth offers that “intentions and views don’t always play out in practice because of the contextual factors in the moment.”

Again, the risk becomes abstract, and the immediate gain is in front of you. It’s just easier to go ahead. The mental barriers we have about why we shouldn’t do that? We are pretty good at finding ways of explaining those away in the moment if we want to. We may also be overly optimistic about what the consequences will be. These are the kinds of ways we rationalize.

“The question then becomes, how much of a prompt would you need in the moment to [alter that behavior] and make people step away? And we know it seems that putting up prompts doesn’t really make much of a difference because people just ignore them. So, the real question? What would make a difference in that moment, because in that moment, the concerns are abstract, and the gain is quite concrete.”

Is the Data Subject Access Request a dark pattern?!

Gordon asks a very interesting and important question. “Is this idea that I have to go and make a request about my information counter to what a behavioral scientist would say is the best way to manage your privacy?”

If you were to design a process that people would not use, it would look something like that.

It’s like when companies don’t want you to cancel a subscription and you have to go through numerous [often difficult] steps.

“Also, I think the gain from [exercising your DSAR rights] is quite unclear, because you don’t actually know what the data is. If you knew that there was something important or meaningful in that data, you might be more motivated to find it out, but if it’s only the possibility of something important, that again takes away motivation. People are unlikely to go through many steps for unclear benefit.”

And nudging?

“A nudge is the idea that people have a set of choices, and the choices are presented in such a way that given the mental shortcuts people use, it is more likely the choice is made for the option that benefits [the person making it.]”

As an example, Hallsworth points to Apple’s “nudge”: the change in privacy default settings from data being shared by default to requiring the user to affirm, to make an effort to share, their data. Something he says (and the adtech world knows) has a very big impact on behavior.
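
To make the mechanics concrete, here is a minimal sketch of a default flip in code. It is purely illustrative: the TrackingConsent type, the opt-in/opt-out values, and the mayTrack helper are hypothetical names, not Apple’s actual implementation.

```typescript
// Hypothetical sketch of a default setting acting as a nudge: the user can
// always override it, but the value that applies when they do nothing is
// what does the nudging.
interface TrackingConsent {
  allowTracking: boolean;
}

// Opt-out model: data is shared unless the user takes action.
const optOutDefault: TrackingConsent = { allowTracking: true };

// Opt-in model: sharing requires an affirmative step by the user.
const optInDefault: TrackingConsent = { allowTracking: false };

// The default applies only when the user has made no explicit choice,
// so freedom of choice is preserved under either model.
function mayTrack(userChoice: boolean | undefined, defaults: TrackingConsent): boolean {
  return userChoice ?? defaults.allowTracking;
}

console.log(mayTrack(undefined, optOutDefault)); // true: inertia favors sharing
console.log(mayTrack(undefined, optInDefault));  // false: inertia favors privacy
console.log(mayTrack(true, optInDefault));       // true: an explicit choice always wins
```

Note that nothing about the set of choices changes between the two models; only the value that inertia selects does, which is exactly what makes it a nudge rather than a mandate.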

One of the cool things about nudges is that, although you set them up so that people are more likely to choose option ‘A’ as opposed to option ‘B’ or ‘C,’ people can still choose. They still have freedom of choice if they want to override your nudge toward option ‘A.’

And that’s why it’s different from regulation, which requires us to do certain things and preempts that freedom of choice.

“This is one subset of how you can apply behavioral science, but it’s a powerful way of doing it, and that’s why it has gotten a lot of attention.

“This is why you need to think carefully about regulation. It’s not just about requiring people [and organizations] to do certain things, it’s how. I don’t want to say that you should go and regulate that [organizations] have to disclose information a particular way: that can also backfire.” Better, he suggests, to create an incentive: “a market or system so companies are incentivized to do ‘the right thing’.”