
How many (and whose) lives would you bet on your theory?


The following is a guest post by Neil Lewis, Jr.

Disclaimer: Before I begin, I want to start by acknowledging a major constraint on the generalizability of what I am about to say. In this post I will use the term “psychologist” periodically for the sake of brevity, instead of explicitly naming every sub-field when discussing points I think apply to multiple sub-fields of the discipline. When I say “psychologist,” however, it should be clear that I am not referring to the clinicians and other therapists whom most people in the general public think of when hearing that term. The work of clinicians is essential at this moment; it is urgently needed to get us through this pandemic. Clinicians, and other essential workers, I thank you for your service. This post is about, and for, the rest of us.

I have found myself in a strange situation lately. For the past month and a half or so, I have been arguing somewhat vehemently against something that I typically encourage my fellow scientists to do: to disseminate our research broadly and expeditiously. I was trained as a social psychologist by intervention scientists who instilled in me the Lewinian values of conducting action research that simultaneously advances theory and produces practical knowledge. Lewin’s quotes, “there is nothing so practical as a good theory” and “research that produces nothing but books will not suffice,” guide the work that we do in my lab.

Because I’m such a Lewin stan, and by extension, a strong advocate for doing and disseminating socially relevant research, some colleagues in the field have been surprised that I have been discouraging the rapid dissemination of social scientific – and in particular, (social) psychological – research on the COVID-19 pandemic. “Aren’t you the one who’s always getting on a soapbox about the need for more applied research in the field?” Yes, I am. I want psychologists to do socially meaningful research; I want all social scientists to do that – to get rid of the stupid line we draw in the sand between basic and applied research, and advance knowledge and improve the world at the same time.

But doing that…doing that well doesn’t happen overnight. It takes a lot of time, patience, and humility to do the kind of action research that is necessary to understand a social problem deeply enough to develop an effective intervention to address it. This is one of the biggest lessons I’ve learned over the years as I’ve worked on education and health interventions in the United States.

Good Intentions With Unforeseen Consequences: An Early Lesson From My Career

When I first started doing intervention research, I had grand aspirations for building on previous findings in the field and using them to change the world—my primary focus at that time was reducing education inequities. I’ll never forget my first intervention study as a graduate student. I spent months reading the literature, pulling together relevant stimuli and measures, outlining the research design and thinking through which statistical model we would use to evaluate its effectiveness, and preparing the documents and presentation to make the pitch to the stakeholders. I was pumped! We had, I thought then, the perfect study to test the idea, and we were going to help those kids succeed in school. Then we went to the first “all-hands” meeting with the research team and school district partners to outline the plan of action. The superintendent smiled and nodded while we presented. Then when we were done she told us all of the reasons why it wasn’t going to happen—at least not in the way we envisioned. That response was a bit surprising. The lab had made other successful pitches like this, and had run other successful intervention studies in similar districts; it wasn’t apparent why “basic” ideas like the need to fully randomize were suddenly controversial. But we listened and learned some important lessons.

There were some historical pieces of information that we didn’t fully understand about that particular district until we were all sitting down in conversation that day. Some tense politics, at multiple levels, influenced what we could and couldn’t do. Long story short, we—psychologists, educators, social workers, and government officials—worked together to come up with an intervention that made sense for that context. But it’s worth noting that the study we ended up with looked nothing like the study we originally planned. And although it was frustrating at the time—because I was confident in our theory and the methods we had planned—looking back, it was the right thing to do. At that point, our theory wasn’t specified well enough to account for those politics.

That intervention story was actually the story of a relatively easy case of intervening—a case in which we had well-validated measures and stimuli that were effective in similar schools in a different part of that same state. In other words, the problem was well-defined and we had a solid body of research in similar situations to work with…yet we were still foiled by local context: structural, cultural, and political features that mattered greatly for the psychological experiences we were trying to produce in the intervention. If we had run the study with the randomization plan we initially envisioned (where some students would have ended up in a wait-list control), for instance, it probably would have created more resentment between parents and district officials in a community in which citizens (for some good historical and contemporary reasons) did not have the greatest trust in their government.

Transferring Lessons to the COVID-19 Pandemic

What does this have to do with COVID-19? The story I just shared is the story of a case in which I thought I had strong priors for how to apply a psychological theory to a new context that, again, I thought I knew a lot about. Even with that much (perceived) knowledge going into the situation, the theory was not directly applicable without modifications—modifications that I could not have anticipated without the on-the-ground knowledge of a variety of experts in that community. COVID-19 is nothing like that situation. The pandemic takes the most complex intervention scenarios I’ve ever worked on and makes them look like child’s play.

For psychologists to intervene or otherwise apply our theories to the pandemic requires understanding the psychological experiences of a much more diverse group of people than the WEIRD students whom we typically study (Henrich, Heine, & Norenzayan, 2010). It requires deep knowledge about more situations than the laboratories in which we study those students (Baumeister, Vohs, & Funder, 2007). It requires knowing whether the measures we’re using are valid indicators of our constructs of interest (Maul, 2017) and having a sense of confidence about whether the processes we’re measuring actually translate to behaviors or other policy-relevant outcomes (Webb & Sheeran, 2006)—something that social and personality psychologists, at least, have limited information about (Flake, Pek, & Hehman, 2017). It requires knowing the constraints on generalizability of our findings (Simons, Shoda, & Lindsay, 2017), and thus when and whether it is appropriate to try to apply them to different situations, and what the most effective strategies would be to implement them successfully (Earl & Lewis, 2019; Goroff, Lewis, Scheel, Scherer, & Tucker, 2018). Without those things, my guess is that, at best, the application of psychological theories to the COVID-19 pandemic will be like most other interventions in the field: they will not have high enough fidelity to be practically meaningful (DeAngelis, 2010). To use the language of Yeager and colleagues (2016): “although promising, self-administered psychological interventions have not often been tested in ways that are sufficiently relevant for policy and practice” (p. 375).

Caution Is Warranted: Lives Are at Stake

These are some of the concerns that have been on my mind and that have led me to trade in my “go forth and disseminate” bullhorn for a “slow down and be cautious” traffic sign. In normal times, I’m quite forgiving of the field’s flaws, and I persist in the optimism that we will progressively get better. But right now, I fear that our optimism and good intentions are not enough; they may even be dangerous. People are dying from this, my relatives among them. In times like these, when lives are hanging in the balance, I think we have to hold ourselves to a higher standard than our status quo.

Earlier this week, some colleagues who share these concerns and I released a new preprint making that very argument (IJzerman et al., 2020). The response on social media has been…eye-opening. I haven’t engaged; I’ve just been watching it unfold. We anticipated some pushback. In fact, we finalized the writing the week before, but at my request my co-authors agreed to wait until I had the time, energy, and patience to fully listen to the field’s response; there was too much going on that week (i.e., illness and death), so I did not trust myself to have the patience and self-control required to quietly absorb (defensive) responses. This week I was able to listen, and I learned a lot. Some of the responses we received make sense to me (even if I disagree with them), while for some others I still need to figure out where I stand. But there is a third category that I find deeply disturbing. That third category makes me think there are some in our field who (a) do not fully appreciate the gravity of the situation that we are in, (b) do not understand or appreciate that there is a vast space between abstract theory and pilot interventions, and an even larger space between pilot interventions and dissemination at scale, and (c) do not realize that there are serious and tremendous opportunity costs associated with the recommendations they so confidently make.

There is a reason there is an entire field called implementation science devoted to studying how to put research into practice (and cataloguing the fallout of well-meaning interventions that went awry). Applying findings is no small feat, and though you may think your recommendations are cheap and easy to implement, and would surely provide benefits without incurring any costs, let me adapt one of economists’ favorite sayings to the current context and make something abundantly clear: there’s no such thing as a free intervention.

Opportunity Costs You May Not Have Considered

In psychology, and I suspect at least some of the other social sciences, we typically do research, write up the results and include a paragraph in the discussion section about how it can be used to change the world, and publish it in a paywalled journal that very few people read. If someone does actually read it and wants to implement it, that’s their problem to figure out—we’ve moved on to the next grant and paper. One of the things implementation experts have to figure out is the opportunity cost associated with using your intervention vs. something else vs. nothing. Here is a concrete example that actually changed the way I think about my own research and its utility for application. A few years ago a colleague (an economist who works on education policy) got very excited about psychological interventions in education—he read some of the best-selling books from psychologists who have become famous for small interventions that ostensibly produce large effects. We met to talk about them because he was considering incorporating some in his future work. In that discussion he asked me a question that stumped me, and that all these years later I still don’t have a good answer to. It’s an obvious question from an economist’s perspective, but the kind of question that rarely gets asked in psychology: If you were a policy-maker with $100 million to reduce education disparities [our topic of mutual interest], how much would you allocate to psychological interventions (pick whichever one you think is most effective), and how much would you allocate to something else like free lunch programs?

He asked that question because in the intervention world, there is a practical reality that we must contend with. Resources. Are. Constrained. Using your theory to intervene almost always means not trying something else, or trying something else with fewer of the resources it otherwise might have had to be effective. In the COVID-19 pandemic, we have both a health crisis and an economic crisis. There is a lot less money to go around, which means there is more competition for every dollar. We need to spend money on testing. We need to spend money on protective gear for healthcare professionals and other essential workers who are on the front lines every day. We need to spend money on more ventilators in hospitals. We need to spend money on antibody tests. We need to spend money to develop vaccines. We need to spend money to feed people who are at home struggling to make ends meet because they lost their jobs, but still have children at home to feed.

Do we really need to spend money implementing your theory? Is it really that urgent? If so, which of those buckets should we take the money from? Those categories listed above—food, protective gear, vaccines—are ones I’m willing to bet a lot of money will save lives. Taking resources from them to reallocate to implementing our theories might save some lives, but probably at the cost of other lives that could be saved if the money spent on our theories was spent elsewhere. How is implementing your theory going to reduce morbidity and mortality rates? And what is the relative impact of that vs. other strategies with more pandemic-tested evidence?

If you are confident that your theory should be used in the pandemic response in the way that we’re using epidemiology and public health theories, please do me a favor. Look at your spouse. Or your child. Or your parent. Or your grandparent. Or whoever it is that you love most in this world. If they are not physically with you, take out a photo and look at them. And ask yourself:

Would you bet their life on your theory?

That is what you’re asking the public to do.

Personally, I am not willing to bet any of my loved ones’, or anyone else’s, lives on our theories during this pandemic, and I say that even as someone who does health communication intervention research in health clinics. As much as I would love to think the papers I’ve written contain the insights necessary to address this crisis, even I have to admit that my work is not ready for application in this pandemic context, and my work falls on probably the most applied end of the basic-to-applied continuum in the field. I just don’t see it as a worthwhile bet, particularly when we have much more relevant research from fields like epidemiology and public health, which have spent decades preparing for moments like this. Physical distancing and contact tracing aren’t abstract, vaguely specified ideas. They’re robust strategies that have saved lives in previous pandemics. We may come up with our own crisis-ready interventions before the next existential threat if we start working together (with other disciplines) now, but at the moment, in my humble opinion, we’re just not there.

So What Can We Do With Our Desire to Help?

If our theories aren’t ready for pandemic application, does that mean there’s nothing we can do to help? Not at all. Our value in academia is often built on the fame of our theories and on metrics like how often they are cited. However, we have many other skills that are valuable outside of our ivory towers, and that can be applied to the pandemic response right now. Here are just a few ways that we might help.

One of the things that I have been spending a lot of time on recently, and that I’ve come to realize is more helpful than anticipated, is simply helping (local) organizations interpret data. I teach PhD students quantitative research methods, and I teach undergraduates courses on persuasion and social influence and on intervention science—the types of classes that walk students through how data is constructed and curated, and what that means for the inferences we can make from it. The same lessons I teach to my students turn out to transfer well to helping organizations make sense of the firehose of new data that is coming in every day. There are some journalists, for instance, who now check in with me to make sure they have a good understanding of new data before they report on it in newscasts. Even if it’s just background research, having an extra set of eyes that knows to look for things like the pattern of missingness, and to ask whether there are systematic biases in how the data was collected, helps to minimize the likelihood that my local and regional neighbors are misinformed.
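To make that kind of check concrete, here is a minimal sketch in Python with pandas of how one might inspect the pattern of missingness in a new dataset before reporting on it; the file name and the column names ("county", "test_result") are hypothetical placeholders, not anything from the original post.

```python
# A minimal sketch, not a prescription: before interpreting a new dataset,
# look at whether missing values are spread evenly or concentrated in
# particular groups. File and column names here are hypothetical.
import pandas as pd

df = pd.read_csv("new_local_data.csv")

# Share of missing values in each column, largest first.
print(df.isna().mean().sort_values(ascending=False))

# Does missingness vary systematically by group? If some counties report far
# fewer test results, county-level comparisons may reflect reporting
# differences rather than real differences in the outcome.
missing_by_group = (
    df.assign(result_missing=df["test_result"].isna())
      .groupby("county")["result_missing"]
      .mean()
      .sort_values(ascending=False)
)
print(missing_by_group)
```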

In addition to interpreting data, some local organizations now find themselves in situations where they have to figure out how to collect and manage new types of data that they have never worked with before so that they can make decisions for the future. The data collection and management lessons we teach to new graduate students for building a research workflow turn out to be valuable as well, especially for smaller organizations that do not (yet) have in-house data scientists. Walking them through the logic that the kinds of decisions they want or need to make determine the kinds of data they need to collect (i.e., the models they can estimate) can be quite helpful.

These are just a few things that we can do to use our skills to help during this time. We don’t necessarily need to spend limited resources trying out a new message based on our theory; we can help our local journalists, soup kitchens, school districts, health departments, etc., with more pressing needs. The key thing is to avoid the trap of a top-down mentality: I am the expert here to tell you why my theory and methods are suitable for helping address this pandemic. A more effective approach might be to do the exact opposite: to go to those working tirelessly on the front lines and ask a simple question: How can I help?

Acknowledgments

I would like to thank Sonya Dal Cin, Lisa DeBruine, Berna Devezer, Patrick Forscher, Hans IJzerman, Andrew Przybylski, John Sakaluk, Anne Scheel, and Sanjay Srivastava for their very thoughtful feedback on an earlier draft of this post.

References

Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2(4), 396-403.

DeAngelis, T. (2010, November). Getting research into the real world. Monitor on Psychology, 41(10). Retrieved from: https://www.apa.org/monitor/2010/11/dissemination

Earl, A., & Lewis, N. A., Jr. (2019). Health in context: New perspectives on healthy thinking and healthy living. Journal of Experimental Social Psychology, 81(3), 1-4.

Flake, J. K., Pek, J., & Hehman, E. (2017). Construct validation in social and personality research: Current practice and recommendations. Social Psychological and Personality Science, 8(4), 370-378.

Goroff, D. L., Lewis, N. A., Jr., Scheel, A. M., Scherer, L. D., & Tucker, J. (2018, November 1). The inference engine: A grand challenge to address the context sensitivity problem in social science research. https://doi.org/10.31234/osf.io/j8b9a

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83.

IJzerman, H., Lewis, N. A., Jr., Weinstein, N., DeBruine, L. M., Ritchie, S. J., Vazire, S., … & Przybylski, A. K. (2020, April 27). Psychological science is not yet a crisis-ready discipline. https://doi.org/10.31234/osf.io/whds4

Maul, A. (2017). Rethinking traditional methods of survey validation. Measurement: Interdisciplinary Research and Perspectives, 15(2), 51-69.

Simons, D. J., Shoda, Y., & Lindsay, D. S. (2017). Constraints on generality (COG): A proposed addition to all empirical papers. Perspectives on Psychological Science, 12(6), 1123-1128.

Webb, T. L., & Sheeran, P. (2006). Does changing behavioral intentions engender behavior change? A meta-analysis of experimental evidence. Psychological Bulletin, 132(2), 249-268.

Yeager, D. S., Romero, C., Paunesku, D., Hulleman, C. S., Schneider, B., Hinojosa, C., … & Dweck, C. S. (2016). Using design thinking to improve psychological interventions: The case of the growth mindset during the transition to high school. Journal of Educational Psychology, 108(3), 374-391.

