Behavioural Science is a rapidly expanding field, and every day new research is developed in academia, then tested and implemented by practitioners in financial organisations, development agencies, government ‘nudge’ units and more. This interview is part of a series featuring prominent people in the field, and in today's interview the answers are provided by Linnea Gandhi.
Linnea is a Lecturer and PhD candidate at Wharton, where she studies meta-science, research infrastructure, and behavior change. Prior to Wharton, she taught as an adjunct professor at Chicago Booth and ran a consulting firm specializing in applied psychology and experimentation.
Who or what got you into Behavioural Science?
Two pivotal moments come to mind. First, during the first semester of my MBA at Booth, I took a course on applied behavioral science (“Managing in Organizations”) from Professor Ayelet Fishbach. From the first class session onwards, I was entranced: Finally, a ‘science’ behind the otherwise touchy-feely, soft concepts of teams, relationships, and organizations that I had been drawn to in my prior years of consulting. I looked into several of the academic citations at the bottom of the slides, and this led me to start taking PhD classes in behavioral science as my electives during the MBA.
Second, the next semester, I considered taking Professor Richard Thaler’s MBA class on “Managerial Decision Making”. He was one of several professors who taught it throughout the year. However, his syllabus said that many of his examples came from sports – not my forte (e.g., I once thought there was an “end zone” in baseball) – so I decided I would wait for another semester, with another professor. My husband – who had studied economics in college – persuaded me to reconsider. He said “This guy is a big deal! He’s one of the fathers of behavioral economics. You have to take his class.” Fortunately, I took his advice. Taking Richard’s class and getting to know him that semester was arguably the game-changing decision of my professional career.
What is the accomplishment you are proudest of as a behavioural scientist?
Strangely, I’m not particularly proud of any past behavioral science project or paper. I see the flaws in all of them, particularly when it comes to the rigor of evaluating whether a behavioral intervention “worked”.
In my time consulting with businesses, running a clean, well-powered experiment was nearly impossible. Much of the work – too much – came down to best-guess recommendations with a wide confidence interval. I thought this would change by switching to academia, where running experiments is a core part of the job. I was wrong! It just evolved to a different sort of doubt.
Now, I feel acutely aware of every degree of freedom I exercise in the research process. From the sampled population, to the behavioral measure, to the intervention and experience design – I have to make countless decisions that seem harmless. But I question each one. In the multiverse of experiments I could have run, was this the best for the question I’m trying to answer? The only Band-Aid I’ve found is to document these decisions, but it’s imperfect and unsatisfying at best.
As much as this “garden of forking paths” (as statistician Andrew Gelman calls it) keeps me up at night, I don’t think I’d have it any other way. Humility and skepticism, hand-in-hand, are integral to the scientific method. We produce knowledge (or, at least, reduce uncertainty about what we know) far more slowly without them.
So I suppose what I’m proudest of – across my experience consulting, doing research, and teaching – is that I’m getting increasingly comfortable admitting and embracing this uncertainty. Above all else these days, I seek to practice and spread science, not pseudoscience. Richard Feynman’s words capture my feeling well: “There is one feature I notice that is generally missing in cargo cult science… It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty — a kind of leaning over backwards.” That leaning over backwards – nearly falling – just feels right to me.
If you weren’t a behavioural scientist, what would you be doing?
I don’t know if there is a job out there that fits this description, but I really enjoy coaching others who need to give some sort of presentation. It could be a startup pitch, a conference talk, or a class lecture. There’s nothing like empowering someone with a message to deliver it to its fullest impact, especially when there are opportunities to improvise and engage with the audience.
I got lucky to do a lot of performing in my teens and twenties, including improvisational theater, and have had great coaches myself. This seems to help a lot in my own presentations, and those I’ve coached on the side seem to value the help. But could it be a real job? Who knows. I suppose if I don’t get an academic position in a couple of years, we’ll find out!
How do you apply behavioural science in your personal life?
My husband again deserves credit for this. He’s incredibly rational, so any of the tools I’ve learned to help debias our decision processes are fair game for us to use in any big decisions.
For instance, a week before everything shut down at the start of the COVID-19 pandemic, we had to decide whether I would stay in Chicago (where we lived) or move to New York City (where he had to be for work during the week). We used a multi-attribute decision process to help us make the decision, pulling in objective reference points and independently completing it prior to discussing.
One of the things engaging in behavioral science has taught me is that the cleverness of your intervention isn’t nearly as important as actually being able to measure whether it works or not. So I’ve built a few processes into my routine to collect data and measure changes in my own behaviors. (Of course, there was no way to create a good counterfactual!) Most notably, I spend a few minutes every day tracking my time across a pre-set list of codes. I’ve done this since 2017, when I started my own firm, and have carried it through into my PhD. It’s helped me monitor how long certain activities actually take (combating my natural planning fallacy and myopia when saying “yes” to a new project) and flagged when my relative hours across activities don’t match how important those activities are to me (an assessment that would otherwise fall prey to ease of recall and motivated reasoning).
As one of my Booth professors, Linda Ginzel, constantly reminded us in her class: “If you don’t write it down, it doesn’t exist.” Behavioral science-inspired interventions aren’t worth much without reliable data to evaluate whether they work or are just “BS”.
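For readers who want to try something similar, here is a minimal, hypothetical sketch of that kind of time log, assuming a simple CSV with date, activity code, and minutes columns; the activity codes, column names, and file name are illustrative placeholders rather than anything described in the interview.

```python
# Hypothetical sketch of a daily time log: each row records a date,
# an activity code from a pre-set list, and the minutes spent.
import csv
from collections import defaultdict

CODES = {"teaching", "research", "admin", "side_projects"}  # illustrative pre-set list

def minutes_by_code(path):
    """Sum minutes per activity code from a CSV with columns: date, code, minutes."""
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["code"] in CODES:  # ignore typos or retired codes
                totals[row["code"]] += int(row["minutes"])
    return dict(totals)

# Comparing these totals against how important each activity feels is the
# check on planning fallacy and motivated reasoning described above.
print(minutes_by_code("time_log.csv"))
```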
With all your experience, what skills would you say are needed to be a behavioural scientist? Are there any recommendations you would make?
I’ll begin with a caveat: there are many varied behavioral science journeys out there. The recommendations below are very much biased by my predilection for focusing on the later stages of measurement and intervention evaluation.
If I could go back in time, to college even, I would more deeply invest in two sets of skills: statistical intuition and relationship building.
Statistical intuition: This goes beyond taking a statistics class. You want to truly understand the assumptions behind any statistical tools that you are using when you produce an inference about how well a behavioral intervention does or doesn’t work. Learning a bit about the philosophy of science, especially the evolution of null hypothesis testing (its benefits and flaws) and alternative approaches (such as Bayesian methods), can open your eyes to the uncertainty around the claims you read in papers or make in your own work. Some specific tips:
Embrace and understand that (at least with null hypothesis tests) you are in the business of disproving, not proving. Get comfortable with every measure having bounds of uncertainty.
Learn how to build a falsifiable hypothesis, ideally one that can be tested as severely as possible. Any theory you have should be able to be put to the test. What evidence would cause you to give up your theory? Now go try to find it.
Do NOT just let your data scientists run your power or sample size calculations. The parameters of those calculations are all business assumptions or decisions: What is your tolerance for false positives or false negatives? What is the smallest effect that would still change your decision if the experiment detected it? Learn about the math, not necessarily to do the math yourself, but to make sure its output is sensible. (A minimal sketch of this kind of calculation appears just below.)
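To make the point about inputs concrete, here is a minimal sketch of the standard normal-approximation sample-size formula for a two-arm experiment; the function name and default values are illustrative, and a real design would still warrant a proper power analysis tool.

```python
# Rough sketch of a two-arm sample-size calculation (normal approximation).
# Every input is a business decision, not a statistical given:
#   alpha - tolerance for false positives
#   power - 1 minus the tolerance for false negatives
#   mde   - smallest effect (in standard-deviation units) that would change your decision
from scipy.stats import norm

def n_per_arm(mde: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for a two-sided test of a difference in means."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value tied to the false-positive rate
    z_beta = norm.ppf(power)           # critical value tied to the desired power
    return int(round(2 * ((z_alpha + z_beta) / mde) ** 2))

# A "small" nudge effect of 0.1 SD needs roughly 1,570 people per arm;
# a 0.3 SD effect needs about 174.
print(n_per_arm(0.1), n_per_arm(0.3))
```

The arithmetic is the easy part; the value comes from arguing about the inputs. If the minimum detectable effect or the error tolerances are wrong for the decision at hand, no amount of careful analysis afterwards will fix it.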
Relationship building: Behavioral science can rarely be done alone, especially if you’re looking to run experiments in field settings. You need to know how to make friends, add value, and integrate your ideas and methods into pre-existing organizational infrastructure. Some specific tips:
Planning an experiment can be as complicated as planning a wedding. Build a disciplined process, hire a project manager, and expect at least one thing to go wrong.
Build a friendly sandbox. If you’re just starting out, or are introducing these ideas to an organization that is new to them, identify the spaces with less red tape and fewer approvals and review cycles. Where are the stakes low and the feedback loops fast?
Find the ugly duckling. Sometimes products or processes that are doing well are tough to experiment with; stakeholders in the domain of gains are averse to losses. Look instead for areas of the company that have been given up on, such as those losing money or otherwise written off. Their appetite for risk is often higher.
Everyone is the hero of their own story. I picked this up in improvisational theater, from the great Susan Messing. But it applies just as well to business. No one thinks they’re the villain; everyone thinks they’re the hero. Frame your behavioral experiment to be consistent with that narrative. And try to avoid champion-challenger setups. If your intervention is competing against someone else’s, there’s likely to be a loser. Try to broaden the frame by co-creating interventions or running multiple experiments as a portfolio, reducing the focus on any single one.
How do you think behavioural economics will develop (in the next 10 years)?
I’ll answer for behavioral science more generally, as I’m not an economist.
I expect, and hope, that those studying and practicing behavioral science will invest more in science. The replication crisis – which I believe is more about a crisis in generalizability – makes clear how little we reliably know. Small differences in experimental designs and behavioral interventions matter. This means that a practitioner reading a prior study cannot just copy-paste the finding; they need to do science, run their own experiments, and form their own inferences for their own contexts.
Over the next decade, I hope and expect practitioners to invest in science, practicing – as best they can – the experimental method. They will spend more of their time and resources on measurement, randomization, and causal methods, and less of their time cobbling together a compelling story of what might work from past research.
In turn, I hope academics invest in evolving the production function of science. Currently, we produce papers: one-shot narratives meant to capture the imagination of an editor, peer, or practitioner. This unstructured prose makes reconciling claims across papers – in behavioral science or more broadly – challenging if not impossible. (At least, that’s how I feel!) Instead, I hope we start to produce products.
Imagine all behavioral science experiments were continuously coded up into a living, structured database, tagged across elements of their contexts, populations, interventions, and measures. We would be able to collectively navigate them, add evidence where we found gaps or inconsistencies, put our theories of nudging to the test out of sample, and build a more nuanced model of behavior change.
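As a purely hypothetical illustration, a single record in such a database might look something like the sketch below; the field names and example values are assumptions, not an existing schema.

```python
# Purely hypothetical record structure for a living database of experiments;
# field names and example values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExperimentRecord:
    context: str           # e.g., "retirement savings enrollment at a US employer"
    population: str        # e.g., "new hires, ages 22-65"
    intervention: str      # e.g., "default enrollment at a 6% contribution rate"
    outcome_measure: str   # e.g., "enrollment rate at 90 days"
    effect_size: float     # standardized effect estimate
    ci_low: float          # lower bound of the uncertainty interval
    ci_high: float         # upper bound of the uncertainty interval
    tags: List[str] = field(default_factory=list)  # free-form tags for navigation

# With structured records like this, gaps and inconsistencies across studies
# become queryable rather than buried in prose.
```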
That’s my current vision, at least. And it’s why I left business for academia, to see if I could help shift the production function of behavioral science (or any social science) to be more structured, navigable, evolving, and – as a result – useful to the policymakers and practitioners who look to us for insight. (And, directly or indirectly, pay for our work!)
Readers interested in this angle on social science might enjoy the work of Duncan Watts (my advisor) and the projects we’re pursuing at Wharton’s CSS Lab.
What advice would you give to young behavioural scientists or those looking to progress into the field?
You don’t need a certificate, license, or credentials to apply these ideas. Behavioral concepts and the scientific method are just tools. Do you need permission to use Microsoft Excel? No? Then you don’t need permission to use these tools either.
The lowest-hanging fruit is often the processes and products you already own. Writing an email? How might you rewrite it in a way that is more behaviorally-informed? Choosing amongst candidates for a new job? How might you design that process to reduce bias and noise? (With all the caveats, of course, around trying to test these when you can!)
It is tempting to spend all your time learning about behavioral concepts and biases in books, blogs, classes, etc. They will capture your imagination and make for great cocktail party stories. But they will not yield reliable value without being married to the scientific process – or at least principles of inference inspired by it. Invest in science first, especially basic statistics and experimentation. Then invest in memorizing all those fascinating concepts.
The better you are at critically consuming behavioral science research – understanding and questioning the theories, designs, and analyses involved – the better you will be at translating, applying, and producing it yourself.
Researchers are human too! Don’t forget to apply behavioral science to reduce bias and noise in the process of doing behavioral science.
Which other behavioural scientists/economists would you love to read an interview by?
Am I allowed to name more than one?
If so, I’m not sure all of these individuals would consider themselves pure behavioral scientists/economists, but they have fundamentally shaped how I think about and apply behavioral science. In no particular order: Shannon White (Meta), Amy Jansen (The Hague), Mia Jovanova (UPenn), Julian Parris (JMP), Uri Simonsohn (DataColada; ESADE), Sendhil Mullainathan (Chicago Booth), Angela Duckworth (Wharton), Justin Landy (NSU), Brian Nosek (OSF).
I gain a few IQ points each time I interact with their work or teaching.
(If I’ve forgotten anyone influential in my intellectual journey who has not yet been interviewed here - apologies! Blame ease of recall.)
What are the greatest challenges being faced by behavioural science, right now?
Credibility. Much of the debate in 2022 around “Do nudges work?” raises questions about how well we really know which nudges work, for whom, under which conditions.
Humility. I think we know a lot less than we thought we did, or a lot less than we sell in our popular writing. (Myself included.) And that’s a hard pill to swallow.
Incommensurability. I don’t think that is any individual’s fault, but rather a challenge with how social science produces research. Our focus on producing papers means we tend to prioritize eye-catching, one-off narratives rather than incremental contributions to a well-organized evidence base of quantifiably comparable findings. We don’t have system-wide tools or incentives to do this (yet). And until we do, I think it is nearly impossible for us to know what we do and do not yet know as a field.
What is your biggest frustration with the field as it stands?
Our field – somewhat like biomedicine or nutrition science, I’d guess – has a bleeding edge between academia and practice. Findings published one day are applied the next by practitioners in government, business, or non-profits. Given that, I believe we (in academia, where I now reside) have a responsibility to evolve how we produce research.
Part of that means shifting away from holding basic research up on a pedestal and wrinkling our noses at applied research. I found this viewpoint (which is not held by everyone, but which permeates the ether of academia) frustrating when I first got into behavioral science a decade ago at Chicago Booth, and I continue to find it frustrating today. I know I’m biased; I’ve spent over a decade in industry, focused on applications. But I can’t help but think that our field – academics and practitioners alike – would be better off investing in more “use-inspired basic research”, or Pasteur’s Quadrant (per Donald Stokes). What I mean by that is embedding our research in the practical problems we imagine our theories might solve, drawing on a variety of techniques and disciplines to solve them, and, in doing so, better evolving our theories. (Duncan Watts articulates this far better than I do in his article “Should Science Be More Solution Oriented?”).
If academia were closer to practice, and practice were closer to academia – and neither of us rolled our eyes so much at the “impracticality” of the former and the “derivative nature” of the latter – we might come up with a better way to produce knowledge that is both novel and useful, credible and generalizable.
Thank you so much for taking the time to answer my questions Linnea!
As I said before, this interview is part of a larger series which can also be found here on the blog. Make sure you don't miss any of those, nor any of the upcoming interviews!
Keep your eye on Money on the Mind!