As a follow-up to my previous linkpost to the New Yorker article covering the Ariely and Gino scandals, I'm linking this statement by Zoe Ziani, the grad student who first discovered inconsistencies in Gino's papers. Here she gives a blow-by-blow account of her experience trying to uncover the fraud, as well as a fairly harrowing story of how higher-ups in her organization attempted to silence her.

I find this story instructive both on the object level and as a case study of a) how informal corrupt channels try to cover up fraud and corruption, and b) how active participation is needed to make the long arc of history bend towards truth.

In her own words:

____

Disclaimer: None of the opinions expressed in this letter should be construed as statements of fact. They only reflect my experience with the research process, and my opinion regarding Francesca Gino’s work. I am also not claiming that Francesca Gino committed fraud: Only that there is overwhelming evidence of data fabrication in multiple papers for which she was responsible for the data.

On September 30th, 2023, the New Yorker published a long piece on “L’affaire Ariely/Gino”, and the role I played in it. I am grateful for the messages of support I received over the past few weeks. In this post, I wanted to share more about how I came to discover the anomalies in Francesca Gino’s work, and what I think we can learn from this unfortunate story.

What is The Story?

How it all began

I started having doubts about one of Francesca Gino’s papers (Casciaro, Gino, and Kouchaki, “The Contaminating Effect of Building Instrumental Ties: How Networking Can Make Us Feel Dirty”, ASQ, 2014; hereafter abbreviated as “CGK 2014”) during my PhD. At the time, I was working on the topic of networking behaviors, and this paper is a cornerstone of the literature.

I formed the opinion that I shouldn’t use this paper as a building block in my research. Indeed, the idea that people would feel “physically dirty” when networking did not seem very plausible, and I knew that many results in Management and Psychology published around this time had been obtained through researchers' degrees of freedom. However, my advisor had a different view: The paper had been published in a top management journal by three prominent scholars… To her, it was inconceivable to simply disregard this paper.

I felt trapped: She kept insisting, for more than a year, that I had to build upon the paper… but I had serious doubts about the trustworthiness of the results. I didn’t suspect fraud: I simply thought that the results had been “cherry-picked”. At the end of my third year in the program (i.e., in 2018), I finally decided to openly share with her my concerns about the paper. I also insisted that, given how little we knew about networking discomfort, and given my doubts about the soundness of CGK 2014, it would be better to start from scratch and launch an exploratory study on the topic.

Her reaction was to vehemently dismiss my concerns, and to imply that I was making very serious accusations. I was stunned: Either she was unaware of the “replication crisis” in psychology (showing how easy it is to obtain false-positive results from questionable research practices), or she was aware of it but decided to ignore it. In both cases, it was a clear signal that it was time for me to distance myself from this supervisor.

I kept digging into the paper, and arrived at three conclusions:

1. The paper presents serious methodological and theoretical issues, the most severe being that it is based on a psychological mechanism (the “Macbeth Effect”) that has repeatedly failed to replicate.

2. The strength of evidence against the null presented in Study 1 of the paper made it extremely unlikely that the result was merely p-hacked: It is statistically implausible to obtain such a low p-value under the null, even when using researchers' degrees of freedom.

3. Francesca Gino had many other papers that appeared equally implausible (i.e., untrustworthy psychological mechanisms leading to large effects with very low p-values).

It was at this point that I started suspecting that part of the evidence presented in CGK 2014 was not just p-hacked but based on fabricated data[...] 
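
To make the statistical point in conclusion 2 concrete, here is a minimal simulation sketch (my illustration, not part of Ziani's statement; all parameters are arbitrary): give an analyst several researcher degrees of freedom on pure-noise data, let them keep only the best result, and check how often that "best" p-value is extremely small.

```python
# Minimal p-hacking simulation (illustrative parameters only):
# generate pure-noise "studies", give the analyst several researcher degrees of
# freedom (multiple outcome measures, optional stopping), keep only the smallest
# p-value found, and see how often that best p-value is extremely small.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def best_p_under_null(n_per_group=50, n_dvs=5, peeks=(30, 40, 50)):
    """One simulated 'study' with no true effect anywhere."""
    group_a = rng.normal(size=(n_per_group, n_dvs))
    group_b = rng.normal(size=(n_per_group, n_dvs))
    best_p = 1.0
    for dv in range(n_dvs):                  # try every outcome measure
        for n_used in peeks:                 # peek at the data as it accrues
            p = stats.ttest_ind(group_a[:n_used, dv], group_b[:n_used, dv]).pvalue
            best_p = min(best_p, p)
    return best_p

best_ps = np.array([best_p_under_null() for _ in range(10_000)])
print("share of null studies hacked to p < .05  :", (best_ps < 0.05).mean())
print("share of null studies hacked to p < .0001:", (best_ps < 1e-4).mean())
```

With settings like these, the share of null studies hacked below .05 typically comes out far above the nominal 5%, while the share below .0001 stays tiny, which is roughly the shape of the argument: researcher degrees of freedom inflate false positives, but they do not plausibly manufacture p-values as low as the one Ziani describes in Study 1 of CGK 2014.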

See more at: https://www.theorgplumber.com/posts/statement/ 

7 comments

Wow, I thought I was already pretty cynical about this stuff, but this updated me another notch (emphasis mine):

Second, what about Gino’s co-authors and reviewers? While we might never know how many of her papers contained fabricated data, she has published 130+ papers over the course of 17 years, with approximately as many co-authors. Each of these papers was presumably assigned to two or three reviewers, an editor, and likely an associate editor as well, for multiple rounds of reviews. What should we make of the fact that no one saw (or said) anything?

Some of Gino’s co-authors worked extensively with her. A dozen of them were regular collaborators, who wrote 5+ papers with her. Some of them are junior scholars, and are most likely under intense stress, dealing with the fallout of having a regular co-author suspected of data fabrication. We cannot fault them for staying quiet and dealing with these issues first. However, I am very disappointed with Gino’s more senior co-authors. It is shocking to me that, Juliana Schroeder excepted (who has sent a clear signal to the scientific community that she is taking this scandal seriously, and shared transparent guidelines on how she is evaluating the papers she has co-authored with Gino), none of these tenured co-authors have made any sort of public statement. Do they not think that the scientific community deserves more transparency? Do they not feel the need to signal whether and why people should still trust their work? I still have some hope that the repeatedly delayed Many Co-Authors project will shed some light on these questions… but I’m not holding my breath.

Finally, the journals and HBS deserve to be called out. The former for publishing Gino’s research, the latter for providing her with the resources that helped her become the superstar she was before her downfall. Now that it appears that some of Gino’s research was based on fabricated data, why aren’t they actively investigating all her papers? Why aren’t they helping the co-authors, and the rest of the scientific community, figure out the truth? After Diederik Stapel was terminated for research misconduct, Tilburg University conducted a comprehensive investigation into the entirety of his research output, and made the findings public. Why won’t HBS and the journals do the same thing? The silver lining is that HBS at least conducted an investigation and acted decisively, unlike Duke University after Data Colada [98] came out…

Is this a retroactive grant situation?

This is the type of thing that speaks to me aesthetically, but my guess is that it wouldn't pencil out, though I haven't done the math myself (nor do I have a good sense of how to model it well). Improving business psychology is just not a very leveraged way to improve the long-term future compared to the $X00M/year devoted to x-risk, so the flow-through effects would have to be massive in order to beat marginal longtermist grants. (If I were making grants in metascience, this is definitely the type of thing I'd consider.)

I'm very open to being wrong though; if other people have good/well-justified Fermi estimates for a pretty large effect (better than the marginal AI interpretability grant for example) I'd be very happy to reconsider.
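
For what it's worth, here is the bare skeleton of the napkin math I have in mind, with every number a placeholder assumption rather than something I'd defend:

```python
# Bare skeleton of a cost-effectiveness napkin for a retroactive whistleblower
# grant. Every number below is a placeholder assumption, not an estimate.
GRANT_COST = 100_000                          # USD, hypothetical retroactive grant

# Hypothetical causal pathway: the grant makes whistleblowing look slightly less
# like career suicide to early-career researchers, a few more frauds get exposed,
# and less research money flows into literatures built on fabricated results.
researchers_who_notice = 5_000                # placeholder
marginal_whistleblows_per_researcher = 1e-3   # placeholder
wasted_research_avoided_per_case = 3_000_000  # USD, placeholder

expected_benefit = (researchers_who_notice
                    * marginal_whistleblows_per_researcher
                    * wasted_research_avoided_per_case)

print(f"expected benefit: ${expected_benefit:,.0f}")
print(f"benefit / cost:   {expected_benefit / GRANT_COST:.0f}x")
# The real question is whether a multiple like this, discounted for how little
# business psychology matters to the long-term future, beats the marginal
# longtermist grant, which is exactly the comparison I haven't done.
```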

(Sorry for pontificating when you asked for an actual envelope or napkin.) The upside is an externality: Ziani incidentally benefits, but the signal to other young grad students that risking career suicide to expose fraud is slightly more viable seems like the real source of impact. Agree that this subfield isn't super important, but we should look for related opportunities in subfields we care more about.

I don't know if designing a whistleblower prize is a good nerd-snipe / econ puzzle, in that it may be a really bad goose chase (since generating false positives through incentives imposes name-clearing costs on innocent people, and either you can design your way out of this problem or you can't).

My guess is still that this is below the LTFF bar (which imo is quite high) but I've forwarded some thoughts to some metascience funders I know. I might spend some more free time trying to push this through later. Thanks for the suggestion!