
TDL Perspective: The Future of Preferences

Dec 13, 2020

“I think the biggest questions in neuroscience that haven’t been answered yet are going to be huge drivers of what we decide to do as a species with technology.”

Foreword

The TDL Perspectives project is an ongoing series of interviews with thought leaders who are involved in our mission of democratizing behavioral science. We pick out specific insights that are at the frontier of current events in behavioral science, whether that means applications that plug our insights into various industries or theoretical discussions about the contentious frontier of current research. If you have thoughts about these discussions, have expertise you’d like to share, or want to contribute in some way, feel free to reach out to Nathan at nathan@thedecisionlab.com.

Introduction

Today, Sekoul Krastev, a Managing Director at The Decision Lab, sits down with Nathan to discuss artificial intelligence and the future of human choices. We zoom in on the intersection between AI and behavioral science to understand how the decision-making landscape undergoes periodic transitions and what can be done to make the world a better place in this context. We deconstruct the various ways that people in relevant fields think about human and machine cognition. Then, we look to the future of technology to understand how these different understandings of decision-making inform potential solutions to current problems. 

Key takeaways

  • Even if a decision seems to happen in an instant, the process behind it is spread out over time. The same is not always true for machines, and that changes how they are designed.
  • The biggest difference between people and AI may lie in how each is set up, rather than in how information is processed once the outcome is clearly defined.
  • A finite, well-defined set of information is essential for a well-functioning AI system.
  • Value-based choice is still an open question, one currently beyond the reach of automation.
  • Technology amplifies each individual’s influence on the world, but that amplification comes at a price.
  • Society adopts new technology perpetually faster than regulatory norms can adapt, so significant decisions about how we go about our lives often fall to people inside tech companies.
  • Behavioral science may be a key factor in changing the race between technological development and ethical frameworks.

Discussion

Nathan: I have Sekoul with me today and we’re going to talk about AI and behavioral science. Let’s jump right in. People often look at AI as an alternative to human decision-making, proposing that artificial intelligence can replace human judgment in a number of contexts, especially when we recognize that our decision-making is flawed and that we’re making avoidable mistakes. Do you see artificial intelligence as an alternative to human decision-making?

Sekoul: I think in some contexts it can be. Artificial intelligence is a pretty broad term. It ranges all the way from fairly simple statistics to black box algorithms that solve complex problems. So depending on the decision you’re trying to automate, I think you have different degrees of success.

Sekoul: In a very simple scenario where you’re trying to determine, for example, whether an image shows a cancerous or non-cancerous cell, that’s a decision that has historically been made by professionals trained for it. And we know that AI is now better than humans at making that diagnosis.

Nathan: What do you think AI is doing there that we’re not? Is it a question of being able to input information better? Or just selecting the right sort of information? What do you think the difference is?

“It’s been shown in experiments that humans can [intuit] implicitly, but cannot actually describe many of those features explicitly. So the AI is able to do a lot of, what we call, intuition, which is essentially processing large amounts of data to come up with a very simple outcome.”

Sekoul: I think it’s able to pick up on information more completely. Over the course of a career, a professional might learn to intuitively recognize features of the image that predict one outcome or the other. The AI can do the same thing much more quickly. The reason is that you have a very clear outcome, so you’re able to give feedback to the AI and tell it when it is correct and when it is incorrect. When you do that, it learns which features are predictive of an outcome and which aren’t.

Sekoul: It’s been shown in experiments that humans can do that implicitly, but cannot actually describe many of those features explicitly. So the AI is able to do a lot of, what we call, intuition, which is essentially processing large amounts of data to come up with a very simple outcome.
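[Editor’s note: what Sekoul describes is, in machine-learning terms, supervised learning: show the system labeled examples, tell it when it is right or wrong, and let the feature weights drift toward whatever actually predicts the outcome. A minimal sketch of that feedback loop follows; the features, data, and parameters are invented purely for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row summarizes one cell image with
# two features (say, cell size and boundary irregularity); the label is
# the known diagnosis (1 = cancerous, 0 = benign) used as feedback.
X = rng.normal(size=(500, 2))
hidden_w = np.array([1.5, -2.0])                  # unknown true relationship
y = (X @ hidden_w + rng.normal(0, 0.5, size=500) > 0).astype(float)

# Logistic regression trained by gradient descent: on each pass the model
# is "told when it is correct and when it is incorrect" via the error
# term (p - y), and its weights shift toward the predictive features.
w = np.zeros(2)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))            # current predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)           # learn from feedback

print("learned feature weights:", w.round(2))
```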

Nathan: Let’s talk about that a bit more. What do you think intuition is made of? Because I think that’s something that’s often misunderstood in behavioral science: the idea that we have processing power we aren’t necessarily aware of.

Nathan: Daniel Kahneman, in his famous book Thinking, Fast and Slow, talks about how expert judgments are made in the blink of an eye, in a way we can’t really recognize as a thorough, deliberative, precise choice; they’re achieved without any conscious processing. So do you think there are unconscious systems at work there that are similar to computational machine learning systems? Or do we have certain ways of processing information that our current AI systems haven’t caught up with?

Sekoul: We don’t know enough about how the brain processes information to really say. It’s very likely that it does so in a way that we haven’t been able to replicate with AI yet. And if you think about the philosophy of neuroscience or cognitive science, there’s certainly an experience of making a decision that AI probably isn’t creating: the qualia, the felt experience of making the choice.

Sekoul: In terms of pure information processing, intuition is something that takes in large amounts of data, and because our attention cannot possibly attend to each piece of information, it points the spotlight at a very small part of it, or maybe even none of it. You just have a feeling that something is correct or not.

Sekoul: So just because something doesn’t enter your conscious awareness doesn’t necessarily mean you’re using a different system to make that decision. I think that’s a misconception. There’s actually a lot of research showing that, even for decisions we consider conscious, much of the processing happens before we’re consciously aware of the decision. There’s even research from a couple of years ago showing that when people are asked to reach for an item, the motor command to reach for the item actually comes before the conscious awareness of having made that decision. People use that to argue that there’s no free will.

Sekoul: Interestingly, the reverse of that is that there is a “free won’t,” meaning you can cancel the action as you’re reaching toward the item, up to the very last moment. So you have conscious control over aborting the action. But in terms of choosing to do it, it seems like conscious awareness is a little detached from the information processing. Which is to say that, for sure, there are decisions we make more deliberately and less deliberately, but in both cases we’re processing information using the same systems, and we’re essentially producing an outcome based on techniques somewhat similar to what AI is doing.

Nathan: It’s funny: when I think about the process of decision-making, or the experience of decision-making as you were describing it, especially in an academic context, I find that a lot of my preconceptions about how it works fall apart quite quickly.

[read: Taking a hard look at democracy]

Nathan: One recent example is how a voter makes their voting choice, and at what point in time that choice actually happens. In this example, there’s information processing going on for weeks before an election. People collect information from friends, from advertisements, from political figures, from watching speeches and debates. But there’s no clear point of decision. The experience of making the decision is actually distributed over quite a long stretch of time.

Nathan: And I wonder whether our machines are given that same ability to process information over time, because usually we expect an output spat out right when we want it. And I guess that’s the same for humans too, where we call for an output once you’re in the voting booth or at the doctor’s office or wherever it is.

Sekoul: That’s interesting because, again, your experience of your own opinion might need to crystallize at a different point. If at any point you ask a person who they will vote for, the opinion will crystallize in a particular way, depending on how they feel at that moment. And it’s the same thing with an AI: at any given point it’s keeping running averages for different outcomes. There are obviously different ways to get an outcome. You can have accumulators that race each other to a finish line. You can have signals going in different directions, getting pulled up and then starting to go down, and as soon as you reach a threshold on the upside or the downside, you get a decision.

Sekoul: There are different ways to do it. But ultimately, at any given point, we can rush that decision and force some sort of system check. That’s true for AI, and it’s true for humans as well.
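[Editor’s note: the picture Sekoul sketches, options racing to a finish line with a decision firing at a threshold, resembles the accumulator models used in decision neuroscience. Below is a minimal simulation of that idea; the drift rates, noise level, and threshold are invented for illustration.]

```python
import numpy as np

rng = np.random.default_rng(1)

def race_decision(drifts, threshold=5.0, deadline=None):
    """Noisy accumulators race toward a threshold; the first to cross
    wins. If a deadline forces an early readout (a 'rushed' decision),
    we simply return whichever accumulator currently leads."""
    evidence = np.zeros(len(drifts))
    step = 0
    while True:
        step += 1
        evidence += drifts + rng.normal(0.0, 1.0, size=len(drifts))
        if evidence.max() >= threshold:
            return int(evidence.argmax()), step      # threshold crossed
        if deadline is not None and step >= deadline:
            return int(evidence.argmax()), step      # rushed readout

# Option 0 receives slightly stronger evidence than option 1.
print(race_decision(drifts=np.array([0.05, 0.02])))
print(race_decision(drifts=np.array([0.05, 0.02]), deadline=10))
```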

Can we manage uncertainty with cognitive shortcuts?

“Algorithms are just typically designed more deliberately. As opposed to, for example, a voter who may not have access to all the information that pertains to the decision that they’re trying to make. And that’s where I think algorithms are more powerful. It’s not so much in the execution, it’s more in the setup.”

Nathan: One idea that ties a few of these strands together is uncertainty management. These days everyone talks about how uncertain the times are, right? To me, this amounts to the feeling that we don’t have enough information to make the decisions we are faced with. If we think of a person as a system that’s constantly taking in information and is called on to make decisions at somewhat unpredictable moments, there may be interventions that mediate the way we’re receiving information, right?

Nathan: I wonder if you think AI can help with these sorts of uncertainty management problems, where policies can’t be fully constructed because we don’t have enough information to achieve what we decide are ideal outcomes. In a pandemic response, for example, you don’t know exactly how well people will cooperate with whatever policy you implement. And that introduces a forecasting problem.

Nathan: People talk about how we use certain heuristics: shortcuts and cheap ways of processing data that let us come to conclusions even when we don’t have all the relevant information. Do you think that way of processing is something machines can adopt? Or do you think there are benefits in machines finding other ways of making those decisions, without the shortcuts?

Sekoul: I think ultimately we take shortcuts for the same reasons machines need to. There are finite computational resources in the human brain, just as there are in a computer. In fact, a computer’s resources are even more limited; it has less processing power than the brain. So if anything, machines need to simplify the data and the decision even more. That said, the timeline they deal with typically isn’t the same one we deal with. It’s pretty rare that you ask an algorithm to make a complex decision like the one you just described extremely quickly, whereas a human might be asked to have an opinion about something like that very, very quickly.
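[Editor’s note: one well-studied version of the shortcuts Sekoul mentions is the “take-the-best” heuristic from the fast-and-frugal heuristics literature: rather than weighing every attribute, compare two options on cues in order of validity and decide on the first cue that discriminates. A toy sketch; the cue names and values are invented.]

```python
# "Take-the-best": compare two options on cues ordered from most to
# least valid, and decide on the first cue that discriminates,
# ignoring everything else. Cue names and values are hypothetical.
CUES = [
    ("name_recognition", {"city_a": 1, "city_b": 1}),   # tie: move on
    ("has_airport",      {"city_a": 1, "city_b": 0}),   # discriminates
    ("has_university",   {"city_a": 0, "city_b": 1}),   # never consulted
]

def take_the_best(option_x, option_y):
    for cue_name, values in CUES:
        if values[option_x] != values[option_y]:
            winner = option_x if values[option_x] > values[option_y] else option_y
            return winner, cue_name                     # stop at first useful cue
    return None, None                                   # no cue discriminates: guess

print(take_the_best("city_a", "city_b"))   # -> ('city_a', 'has_airport')
```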

Sekoul: So I think algorithms are just typically designed more deliberately. As opposed to, for example, a voter who may not have access to all the information that pertains to the decision they’re trying to make. And that’s where I think algorithms are more powerful. It’s not so much in the execution, it’s more in the setup.

Sekoul: Now, suppose you took a human being and trained them to understand different topics, and the relationships between those topics and an outcome, and so on. If you could somehow get past all of their prior training and experience, and convince them to look at the data dispassionately and think purely in terms of, okay, this is the outcome, and these are the policies likely to lead to it with X percent likelihood. If you could do that, I think a human would be better than an AI at making decisions.

Is value-based choice a solely biological process?

Nathan: Well, there’s a whole other question of value in those decisions, of assigning value to different outcomes. In a purely mechanistic sense, as long as your goals are completely defined, like we were talking about before, assigning value is not actually that difficult, because you can compare how close a certain step gets you to your final goal.

Nathan: But with political decision-making or moral decision-making, you suddenly have the problem of value being contested. So that probably poses quite a challenge for machines trying to make these sorts of decisions.

“Is it preferable to use a purely evidence-based way of making decisions? As individuals, sometimes maybe. As groups, probably not, because people have preferences. So ultimately, it’s very difficult to understand what a preference is composed of. I think people assume that preferences are composed of purely an outcome, which science is very good at predicting in some cases. But I think preferences are more complex than that.”

Sekoul: I think that’s where it gets a little bit messy. Value-based choice is a relatively new field in neuroscience and psychology, and we don’t understand value-based choices that well. We know that a lot of it is driven by the prefrontal cortex, so it seems like we’re fairly deliberate about those kinds of choices. But we also know that, depending on the situation, the emotional centers of the brain exert different degrees of influence that can override that deliberate choice.

Sekoul: There’s a dynamic in how that decision is made in the brain, which makes it very difficult to understand to what extent the outcome is affected by different pieces of information. Especially when you consider that much of the emotional response we might see is driven by experience that has taken an entire lifetime to form. That’s the part, I think, that is really difficult to operationalize in an algorithm.

Sekoul: So you might operationalize the prefrontal cortex. You might say, I’m trying to get from point A to point B, and this policy will help me get there. From a purely prefrontal-cortex perspective, all you need to do is make a plan and draw the shortest route between the two points, and that’s your optimal solution. An algorithm can do that, again assuming you have finite information and you give the same information to the person and the AI.
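[Editor’s note: that “shortest route from A to B” framing is exactly what classical graph search solves. Below is a minimal Dijkstra sketch over a hypothetical map, purely for illustration.]

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: the 'purely prefrontal' planner that, given
    complete information about the map, returns the optimal route."""
    frontier = [(0, start, [start])]          # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step_cost in graph.get(node, []):
            heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return float("inf"), []                   # goal unreachable

# Hypothetical map: each edge is (neighbor, cost).
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
}
print(shortest_path(graph, "A", "D"))   # -> (3, ['A', 'B', 'C', 'D'])
```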

Sekoul: A purely prefrontal view of how value-based choice is made might be fairly similar between an algorithm and the brain. But as soon as you involve other brain centers (and of course it’s not that simple, I’m reducing it here), there’s definitely a mystery around how emotions, past experience, memories, et cetera, will pull that decision in different directions. And that’s something an algorithm can’t simulate as easily, simply because we don’t understand exactly how that effect is created.

Nathan: Right. That makes a lot of sense. Are there ways an algorithm could take some of the cognitive load of decision-making off of us? Could we take the parts of our processing that we do understand and chop them up into pieces that could be assisted through technology? Could we use AI to simplify the domain of choices we have to make?

Sekoul: I wouldn’t say it’s AI that we would use in that case. I mean, the answer is definitely yes, but to an extent, it’s science that does that. Science does that for us as a society: we look at the best scientific consensus we can get on a topic, and we consider that a data point. But I don’t think anyone uses that alone as the driver for decisions about anything in their life.

Sekoul: So is it preferable to use a purely evidence-based way of making decisions? As individuals, sometimes, maybe. As groups, probably not, to be honest, because people have preferences. And ultimately, it’s very difficult to understand what a preference is composed of. I think people assume that preferences are composed purely of an outcome, which science is very good at predicting in some cases. But I think preferences are more complex than that. How do you get to the outcome? And what’s the feeling you had while getting there?

Nathan: It’s interesting that you mention science or technology as a way of facilitating decision-making, because I think there’s a really complex relationship there. Technology hypothetically improves our lives, makes our choices simpler, and gets us to better outcomes more quickly. But I think a lot of people also see science as something that makes the world more complex: it suddenly gives us a whole lot more options, opens up new frontiers of decision-making, but also makes our environment a lot more stressful for the same cognitive apparatus to process.

How do humans handle advanced technology?

Nathan: Do you think there’s any value to the concern that advanced technologies the user doesn’t fully understand challenge their ability to make their way through the world?

Sekoul: I think technology definitely opens more doors, and because those doors allow for more actions and decisions, it creates complexity, it creates cognitive load, it makes our lives more difficult. It also makes us more productive. I think the average person today is probably orders of magnitude more productive, just as an individual, and their effect on the world is more profound, compared to somebody a hundred or a thousand years ago. I think technology has that amplifying effect. And by virtue of amplifying our effect on the world, it necessarily brings in this increased complexity because we’re basically affecting reality in a more significant way.

Nathan: I think one interesting place to go from here is that we not only have more control over our world thanks to technology, but we also have control over that technology itself, especially the people who are designing it. There’s a key role there for the people designing the technology that facilitates our interaction with the world. Do you think there are certain ways of designing that technology that are beneficial? Perhaps bringing in behavioral science to make that technology better? Do you think that’s a valuable use for behavioral science?

Sekoul: Yeah, definitely. There are different ways to do it. User experience design has been around for a long time, creating interfaces that lend themselves better to people expressing their preferences and opinions and so on. I think that’s something that’s really powerful.

Sekoul: I think creating a shorter distance between humans and the control over the technology is really important. That’s, for example, what Elon Musk is doing with Neuralink; obviously there’s a lot of criticism around that for various reasons. But ultimately the idea of bridging the gap between user and interface is a really powerful one. That’s, for sure, going to be a big topic over the next 30 years.

Sekoul: At the same time, I think understanding what people want when they’re using technology is really difficult. So as much as you can bridge a gap, increase engagement, increase the speed at which people engage with the technology, et cetera, actually understanding what a user really, fundamentally wants out of that interaction is quite difficult.

Sekoul: The reason for that is that there’s the short-term wants and the long-term wants. And in the short-term you might think, okay, well, this user is driven to more interaction when I put bright colors and give them lots of likes and comments and whatever. That’s great, but that just creates an ecosystem of dopamine hedonism or whatever. It basically creates a hedonic treadmill that people will engage with and get addicted to.

Sekoul: But ultimately in the long-term, understanding what creates actual value, from a humanistic perspective, in people’s lives is something that user experience design is very unlikely to ever get to. So I think that’s where behavioral science can come in, understanding the long term perspective, asking ourselves more existential questions about what our relationship with technology should be.

Sekoul: The problem is, you can talk about that philosophically, but how do you operationalize something that we’ve spent thousands of years trying to understand? That’s really difficult. And I think that’s something that companies like Facebook and Apple and Google are struggling with more and more.

Nathan: Have you seen that shift in the field at all, toward delivering those long-term, valuable outcomes as opposed to preying on our tendencies toward certain kinds of salient products?

Sekoul: Yeah, I do think that they’ve shifted from delivering very short-term value to medium-term value. But I think the long-term value, at a personal and societal level, is just a huge challenge. How do you decide what long-term value for society looks like? 

Nathan: It is hard. I think an extension of that is that big companies, the people with a lot of influence over the environment in which we make our decisions, actually have influence over what that long-term value is. We know that our extended preferences about the world are variable and subject to certain influences. And especially when we have certain people at the helm of a place like Facebook, where people are engaging every day, often for multiple hours, those people probably have some control over what our preferences are.

Who oversees the ethics of rapid technological change?

Sekoul: I think it’s interesting that people have been talking more and more about how some of these social media companies might have malicious intent, or at least a responsibility that they don’t fully realize.

Sekoul: I don’t know to what extent that’s true. What I do know is that technological advances come, paradigm changes happen, and as they do, there’s always a struggle to catch up. And the most recent one basically connected everyone in the world in the span of a decade or less. I don’t think any company or individual or group of people could have handled that in a good way. I don’t think it’s possible to do that slowly and deliberately, just because we don’t fundamentally understand what that means. We don’t understand how the brain treats that kind of environment. We’re basically built to interact with 50 people in our entire lifetime. So when you expose us to 500, or 5,000, or 5 million, that becomes really confusing. And nobody can really know what that will look like, especially because it’s not happening to one person, it’s happening to everyone at the same time. So it’s a crazy complex system.

Nathan: Yeah. There’s no control.

Sekoul: Rather than just criticizing those companies, and of course they should be criticized for lots of things, I think from an existential perspective we, as a society, have to think more about what value we want from those technologies. And it comes back to AI: I think understanding the problem we’re trying to solve is the most important part of all of this.

Sekoul: People use AI as if it’s a tool that can help us solve many problems, but they don’t emphasize understanding the problems enough. They think of AI as the solution, but it’s only a solution to problems that are extremely well-defined. And I think we have to start defining problems better.

Nathan: And whose job is it to define those problems properly? Is it whoever’s tasked with trying to make people’s lives better through this technology? Or is there an antecedent, political question of who gets assigned to it? Or is it just whoever’s there in the moment, whether you’re at the helm of a tech company as we explode into this digital era? All of a sudden, it’s your problem just because you’re the one able to solve it.

Sekoul: I think people are literally in charge of solving those problems. There are people whose job it is to solve them, and I don’t think they’re digging deep enough. If you’re designing a new interface for the iPhone, for example, it’s literally your job to think about that problem. But you’ve probably taken on a more short-term view. You’re thinking, how do I make this interaction faster? How do I make this more effective, efficient, pleasant for the user? How do I sell more phones, et cetera?

Sekoul: So ultimately the economic drivers will rush the decision, and I don’t think that’s those people’s fault. If you follow that logic, you could say those economic drivers are themselves driven by consumers and the policies around those things. So there’s definitely a place for policies to slow down those decisions and make them a little more deliberate. I think we don’t fully understand how technology, how AI, how those things will affect us on a societal level, and it’s okay to sometimes slow down, take your time, and understand things before you fully leap into them. I don’t think that’s going to happen, though. It’s more of a hypothetical: it would be nice, but there are a lot of reasons it can’t happen just like that.

Nathan: Maybe we can end with a case study: the almost instant reaction to COVID-19 of moving most of the world, most of our social interaction, online. There was no one point where we could stop and say, wait, let’s all have a big group discussion about how to do this properly. Whether we’re going to use Zoom. What the potential effects are of taking six hours of class a day online. Coming back to what we were saying at the beginning about the point of decision, there was no one place where you could stop and say, hold on, this is exactly what needs to happen.

Nathan: And so when we think about technology, especially artificial intelligence, as something you can only apply once that decision has crystallized and we know exactly what outputs we want, we get into a tricky situation. So what do you think behavioral science can do to improve that process? Whether it’s slowing it down or just working in the moment, as fast as we can, to redirect some of the flows, especially at the highest levels of design, governance, and business. What can behavioral science do in that moment?

Sekoul: COVID-19 is a very good case study for this, because there was a rush online, at least in the Western world. And I think you have to qualify that, because most of the world didn’t move to Zoom classes. Most of the world kept going much the way it was going before, because people are under the poverty line or close to it and had no choice. But for the part of the world that we’re in, a lot of the changes we saw happened extremely quickly.

Sekoul: And I think to a large extent, a lot of what technology offers us in a situation like that is a tool. And how we choose to use it just reflects the kind of immediate problem we’re trying to solve. In this case, we couldn’t see each other physically, so we moved classes online. That’s great.

Sekoul: I think what behavioral science can do in that situation is not necessarily block that from happening; I don’t think that’s realistic. But it can help us understand the effects: what types of questions should be asked as you make the shift, what problems are being created, and how this might affect people. Basically, experimenting around the shift, and moving toward a place where you can make those decisions more deliberately and make small adjustments around them.

Sekoul: So for example, let’s say you did this completely without caring about people’s psychology: you just move everyone online and say, okay, kids, spend six hours a day on Zoom, that’s it. That’s one scenario, and you might end up with a good situation or not.

Sekoul: But another scenario is one where you move everything online and you try different things. You have classrooms where, for example, there’s a lunch break and kids are allowed to hang out on Zoom, and other classrooms where you don’t do that. You have classrooms with one type of Zoom interaction, maybe little breakout groups where kids interact in small teams, and others where it’s always one big group, et cetera.

Sekoul: And with time, I think you answer questions about what types of interactions are most positive. That, I think, is the value behavioral science will bring. Ultimately, it will give you more data about what drives positive interactions and positive feelings.
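[Editor’s note: as a concrete illustration of that kind of experimentation, here is a minimal sketch comparing simulated wellbeing ratings from two hypothetical classroom formats with a standard two-sample test. All of the numbers, group labels, and effect sizes are invented.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical daily wellbeing ratings (1-10) from students randomly
# assigned to two Zoom formats: small breakout groups vs. one big group.
breakout_groups = rng.normal(6.8, 1.2, 200).clip(1, 10)
big_group       = rng.normal(6.2, 1.2, 200).clip(1, 10)

# Welch's t-test: did the classroom format make a detectable difference?
t, p = stats.ttest_ind(breakout_groups, big_group, equal_var=False)
print(f"mean difference: {breakout_groups.mean() - big_group.mean():.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```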

Sekoul: Again, though, I think the bigger questions are about what happens if you were to do this for decades, for a very long time. Hopefully that’s not the case here; I think we’re only months away from this no longer being our reality.

Sekoul: But for a lot of what technology offers us, it is the case. We’re heading toward a world where we can’t live without it. And that’s where behavioral science needs to ask more fundamental questions. That’s where fundamental behavioral science research comes in. Not just research done inside a company, but the research done at universities around questions like: What is it to be human? What ultimately fulfills us? How do we process information?

Sekoul: I think the biggest questions in neuroscience that haven’t been answered yet are going to be huge drivers of what we decide to do as a species with technology.

Nathan: All right. That’s an excellent place to end it. Thank you for joining me.
