Uncertainty about what motivates "senders" of public messages leads "receivers" to "read between the lines" to discern the sender's deepest commitments. Anticipating this, senders "write between the lines," editing their expressions so as to further their own ends. I examine how this interactive process of inference and deceit affects the quality and extent of public deliberations on sensitive issues. A principal conclusion is that genuine moral discourse on difficult social issues can become impossible when the risks of upsetting some portion of one's audience are too great. Reliance on euphemism and platitude should be expected in this strategic climate. Groups may embark on a tragic course of action, believed by many at the outset to be ill-conceived, but that has become impossible to criticize.
https://www.brown.edu/Departments/E...mepage/papers/Loury_Political_Correctness.pdf
The paper is by Glenn Loury, an economist at Brown University. I disagree with a number of the conclusions he has reached over the years, but on this issue I think he identifies a phenomenon that Julia Galef, speaking with Ezra Klein in an interview about online discourse, rightly called one of the great schisms that harm and degrade online discussions and communities. And whether you take his paper seriously or not, the underlying issue is pretty hard to deny if you visit any message board on the internet.
We are a species of evolutionarily derived shortcuts, including mental shortcuts. One of those shortcuts is a specific type of filtering bias, and you often see this dynamic play out on message boards. A person comes into a discussion, say about affirmative action, and says something that appears on the surface to mirror what a conservative, a victim-blaming troll, or a right-wing shill might say about that debate. People reading it inevitably update their internal probability calculus toward the conclusion that the poster belongs to one of any number of negatively perceived groups, and they often become dismissive and hostile based on that assumption. That isn't exactly irrational; it's a shorthand that is often correct. But often is not always. And once we have that perception, we inevitably shorten the leash of tolerance we extend to people we consider bad-faith actors.
On the other side of that phenomenon, people become consciously and subconsciously attuned to this, so they hesitate to offer opinions that may be perceived as ideologically wrong by the dominant group. They either feel the need to over-preface their statements so as not to be read that way, or, as is more often the case, they refrain from engaging altogether. Over the long run this creates a vicious cycle. Perceptive people who worry about being pigeon-holed as undesirables or as committing normative sins stop engaging, while the now fewer and less perceptive dissenters (often less experienced at articulating themselves), or the actual bad-faith actors, are the only ones who continue to voice dissent in some form. That strengthens the association calculus, which leads the group to perceive dissent even more readily as coming from bad-faith actors. And as dissent shrinks while the internal association calculus persists, the threshold of deviation from the norm at which that calculus gets applied shrinks too, so ever smaller marginal dissent is pilloried by an ever larger number of people. Then boom, you have an issue echo chamber.
As an example, I mentioned my disagreement with Loury, and my references to Julia Galef and Ezra Klein, partly because it is true, but also because I am implicitly aware that if you Google Glenn Loury you will come across many of his often unorthodox opinions. If I hadn't prefaced this post with those qualifiers, I suspect I would be subject to scrutiny and a shortened leash under that same phenomenon. Which also made me hesitant to make this post at all.
All of this is not to say that the trait is inherently bad. It survived for a reason. If we didn't have it, people would get taken in by trolls and malcontents all day, every day. If we didn't apply that judgment at all, message boards would almost all look like 4chan, and we would be paralyzed in our ability to make decisions. It serves a function in helping weed out bad actors in our lives, but it can also create echo chambers, encourage dog piles, create disproportionate moderation standards based on perceived group identity, and ultimately stifle healthy conversation when over-applied past its usefulness. I'm not here to say when or where that threshold gets passed and the trait becomes more harmful than helpful, though I do think there are a number of issues discussed around here where it would be helpful to leave more room for conversations to breathe. But I think being aware of that mental process can at least (hopefully) help people recognize the dynamic at play in everyone's decision-making. That matters for the health of a community, but also for us as individuals, since every one of us is subject to the human condition and the strengths and weaknesses it contains, from conservatives to liberals, posters to moderators.