Who do EAs Feel Comfortable Critiquing?
Many effective altruists seem to find it scary to critique each other
I think there’s useful content in this piece, but also, I think this version is less well-structured than I’d like.
I plan on posting this to the EA Forum eventually but want to wait to see if the heat dies down a bit first.
This post was written with long-standing issues in mind. Recent heated topics emphasize some specific things, but that’s not my main focus.
I think EA is made up of a bunch of different parties, many of whom find it uncomfortable to criticize or honestly evaluate each other for a wide variety of reasons.
I think that this is a very standard challenge that most organizations and movements have. As I would also recommend to other parties, I think that investigation and improvement here could be very valuable to EA.
In this post, I rely mostly on anecdotal experiences, introspection, and recommendations from various management books. This comes from me working on QURI (trying to pursue better longtermist evaluation), being an employee and manager in multiple (EA and not EA) organizations, and hearing a whole lot of various rants and frustrations from EAs. I’d like to see further work to better understand where bottlenecks to valuable communication are most restrictive and then design and test solutions.
I personally think that “people being uncomfortable openly giving people candid feedback” is a significant bottleneck, but I realize others don’t. In this post, I aim to lay out what I see the issue looking like. I don’t give much evidence or statistics. I think it would be very interesting to get more information here, but that would also be significantly more work.
There’s a massive difference between a group saying it’s open to criticism, and a group that people actually feel comfortable criticizing.
I think that many EA individuals and organizations advocate and promote feedback in ways unusual for the industry. However, I think there’s also a lot of work to do.
In most communities, it takes a lot of iteration and trust-building to find ways for people to routinely and usefully give feedback to each other. Building up trust, and the environments necessary for candid communication, is really hard work.
In companies, for example, employees often don’t have much to personally gain by voicing their critiques to management, and a lot to potentially lose. Even if the management seems really nice, is an honest critique really worth the ~3% chance of resentment? Often you won’t ever know — management could just keep their dislike of you to themselves, and later take action accordingly. On the other side, it’s often uncomfortable for managers to convey candid feedback to their reports privately, let alone discuss department or employee failures to people throughout the organization.
Evaluation between different EA clusters
Evaluation of Global Welfare Organizations
Global poverty charity evaluation and criticism seem like fair game. When GiveWell started, they weren’t friends with the leaders of the organizations they were evaluating. I think this has continued to be a culture where it’s understood that new global health organizations will be publicly evaluated on similar grounds.
Evaluation of Longtermism Organizations
Around longtermism, there doesn’t seem to be much public organization evaluation or criticism. I think one issue is that many of the potential evaluators are social peers of the people they would be evaluating. They all go to the same EAGs (me included!). Larks did yearly reviews that I thought were strong, but those didn’t really give much negative feedback — they focused more on highlighting the positive organization examples.
Around AI Safety, there are many candid discussions on certain strategies and research directions.1 I think this is great, and probably a good place to begin, but I also think there’s quite a bit more work to be done. We still seem far from having a GiveWell equivalent for Longtermism.
I think there are a bunch of bottlenecks here other than discomfort (evaluation is expensive, and arguably we don’t have many choices anyway). However, I think discomfort is one significant bottleneck.
Evaluation of EA Funders/Leaders
In conversation, people seem particularly nervous around the funders and top leaders. There are only a few EA funders, and their opinions seem highly correlated with one another. It might stay that way for the next 20-50 years. For those without a deep understanding of these funders, upsetting one can feel like a lifetime ban from almost all high-level effective altruist positions.
I’ve previously done some funding work, and know many of the funders personally, but they still make me very nervous. (To be clear, I have a lot of respect for most of the funders, and I blame most of the issue on the situation.)
Recently there have been several posts complaining about “EA Leadership”. These posts mostly come from new or adjacent members, and/or are anonymous (which itself suggests discomfort). They also generally don’t give much direct evaluation of, or feedback on, what I see as the main power hierarchies.
I think the people best positioned to give great critiques generally stay quiet. The loudest voices tend to be those with the least to lose by speaking up, which correlates with them being less able to provide well-targeted feedback.
Evaluation by Funders/Leaders
While it’s awkward to criticize those with power over you, it can be even more awkward to publicly criticize those who you have power over.
I’ve very rarely seen EA funders publicly say bad things about organizations.2 Their primary signal of a bad project is simply not funding it, but that’s a very weak signal.
Relatedly, there’s far more public criticism from Google employees about their management than from their management about their employees. This dynamic plays out on a lot of levels.
It can be really difficult for those in power to respond to criticisms they think are bad. Those who publicly “punch up” are often given much more leeway, and face less potential downside, than those who “punch down”. Leaders are typically heavily outnumbered, and their time is particularly limited. I’d guess some don’t feel safe engaging much with the broader community, especially if they lack the time to adequately respond to comments or deal with problems that might arise.
Of course, those with power can still take action behind the scenes. This combination (trouble responding publicly, but able to respond secretly in powerful ways) is catastrophic for trust-building.
Aside: I think it’s generally fair to say that power is much more complicated than “leaders have power, others don’t”. Managers and funders are heavily outnumbered, have access to a restricted set of information, and are sometimes at a comparative disadvantage in community discussion (relative to what their status would imply). When managers reveal information to their communities, they have to consider how those communities might use that information against them.
So trust works both ways. Communities will share key feedback with leadership in proportion to how much they trust leadership to make use of that feedback. Leaders will share useful information and feedback insofar as they trust their communities.
Evaluation of EA, by Others
I think right now other groups feel perhaps too comfortable critiquing effective altruism. The last few months have felt intense.
However, I’m personally uncomfortable critiquing many groups online (or even saying things I know some groups disagree with), particularly on Twitter, because I’m afraid of aggressive backlash and trolls. There seem to be a lot of combative people online from all sides of the political aisle. I’m sure some critics of EA feel similarly about the EA community.
I think there are a lot of bullying efforts, by Twitter users specifically, to push specific ideological narratives. I’m sure this is somewhat effective, but I really don’t want to live in an intellectual environment where that’s the norm.
So, I really hope that critics of EA can expect not to be harassed or attacked. I guess this is at least somewhat of an issue now, and I’m sure it could get a lot worse.
Another issue is that as EA funding expands, I’m sure that many other groups would be hesitant to be honest about EA, for fear of eventually losing funding due to it. Even if EA funders were actually very open to this, these critics would likely not know, and would default to being conservative. (This is the same concern as “Evaluation of EA Funders”.)
Grab Bag of Semi-Related Examples/Info
I oversaw one manager who continuously gave positive reports about the work they were managing. Eventually, these reports said things like “overcame disaster X successfully”. I asked if there had been early indicators of said disaster, and of course there were; this person just hadn’t thought them worth mentioning earlier. I learned that this manager was very optimistic and had low neuroticism about things I’d normally be worried about. I wouldn’t be surprised if much of the issue was that they were uncomfortable sharing bad news with me. This made me much more paranoid about distortions from those I oversee more broadly.
I’ve been around several group houses that had official policies asking victims of sexual abuse to speak up. Few did, so many of these houses assumed that things were totally fine. I later learned of some serious incidents. One problem was that the victims didn’t trust the house management (they assumed they either wouldn’t be believed, or might be attacked for raising issues), and assumed the “house policies” were just there for signaling.
For a while, I had one boss in particular who I had a lot of problems with. I cared about our relationship and the organization, and I really didn’t want to be fired, so I thought it best to keep much of my dislike of the environment to myself. Now, I often wonder whether I’m that boss.
I see a big part of my job as a board member as trying to collect important information that employees don’t feel comfortable telling the executive directors directly. This has happened several times so far, and I’m sure I’ve missed a whole lot.
I’ve probably spent 30 solid hours trying to explain and steelman the position of senior EA leadership (1-3 levels above me) to people. Many of these conversations get pretty heated, even though I’m a few steps removed from the actual decisions. I think there are a bunch of EAs with very strong feelings, but very poor models of how EA leadership actually works.
I really liked the book Radical Candor. It presents several examples of bosses who mess up at giving feedback, either by being too harsh or by not giving enough.
What should be done about this?
Perhaps the most straightforward step is to find management experts or consultants who have fixed similar problems elsewhere. The broad problems seem common in business and therapy. However, it can be quite tough to find someone who’s actually strong and who can mesh with this community, and then to build buy-in for that person.
Another strategy is that in situations where direct feedback is uncomfortable, we go up one meta-level and get “feedback about the magnitude of the blockers of giving feedback”.
Some questions to ask include:
1. How much evaluation and candid information do we have about which groups?
2. How valuable is the information in (1)? Are there significant gaps?
3. If there are gaps, do potential critics feel disincentivized to provide such critique? Are there any fixes that could change these incentives?
4. How much do different groups trust each other to be reasonable, not antagonistic, and not vindictive? Are there any improvements we could make to build communication-enhancing trust?
I have some thoughts on how to improve incentives and build trust, but I think the number one problem is just understanding that there is an issue in the first place.
Quick Example Surveys
I ran a few quick polls on Twitter about some of these issues. These obviously capture only “discomforts that Twitter users have with specific groups”. I’d of course be curious to see more and better work here.
Thanks to Nuño Sempere, Nics Olayres, Misha Yagudin, Ben Goldhaber, and Ben West for their comments.
One of the commenters recommended these posts on candid AI critique: