These are some additional thoughts after my post yesterday, which I drafted as a response to a note to the listserv and just mailed privately. When I stand back, the vigor of my interest in this situation makes me wonder if I'm losing the ability to distinguish tempests in teapots from actual storms. On the other hand, I'm genuinely interested in the issue of community stewardship and curious about the forms of leadership that will have to emerge in a new era of local and hyperlocal society, if I've discerned correctly the real storm a-brewin'.
1. Let's call the traditional libertarian solution to online spats the "suck it up and delete" model. Among its other faults, this model dismisses the substance of people's grievances, and automatically judges any criticism as intolerance or an attempt at censorship. In the case of Hector and the Spammers, we see how the "suck it up and delete" model -- as well as a lack of transparency on the part of the moderator -- put people in the position where they had to act as vigilantes. I don't think vigilantism is desirable. I'm not saying that one should always opt out of the system for adjudicating grievances. I'm saying, this system is way too short, and people run out of options way too fast, for it to be satisfying. Or sustainable. When people become vigilantes, it's time to examine the system.
2. We should not insulate ourselves in our communities, whether face to face or online/virtual, and we should absolutely not pretend that there aren't differences, whether political or personal. And I agree that we are enriched by different opinions and voices, and I acknowledge that dissent is an American tradition worth preserving at all costs.
That said, there's a richer array of community-based, collaborative, and shared solutions to trolls and provocateurs than what the "suck it up and delete" approach offers. These balance principles with actual practice, and I think are more defensible and sustainable.
Here's an example: Create an off-list blog where moderators quarantine questionable posts (not posters), which will not be released until a reasonable threshold of list members votes for their release. This gives you 1) universal access to questionable posts and 2) community involvement in maintaining certain standards while you 3) preserve respect for the First Amendment, the value of dissent and diversity, and the importance of public discourse and 4) retain the symbolic value of posting to the entire list as achievable and potentially available to everyone. Those are the public values you want to preserve. Additionally, the process would be entirely transparent: you always see the decisions that your moderator is making, in real time.
On the individual/private side, this solution means that 1) no individual ends up with a polluted inbox (by an admittedly human definition of "pollution," but one that is dynamic, transparently created, and under discussion) and 2) everyone continues to have access to diverse opinions, because she can openly read the quarantine. If any individual believes that a quarantined post deserves the symbolic privilege of being posted, then she can vote for its release and lobby friends and family to vote for it as well. In this way, you create a social feedback system in which subjective decisions are checked broadly against collective values, and in which the decisions at all levels are visible to everyone in the decision-making chain. It also creates two layers of moderation. At the end of the day, the real moderators would be the online community itself.
There's value for the poster, too: 1) it means that posts, not posters, are flagged for consideration; 2) it means that posters can continue to argue for the legitimacy of their ideas, and give the community access to those arguments; and 3) it means that they can learn what is broadly permissible and what isn't in the spectrum of human discourse.
Most of this could be automated; the only human decision would be by the moderator, who would be putting items in the quarantine blog.
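To make the automation concrete, here is a minimal sketch of the quarantine-and-release workflow described above. Everything in it is hypothetical illustration: the class name, the 10% release threshold, and the member names are my assumptions, not features of any existing listserv software.

```python
# A toy sketch of the quarantine-and-release workflow.
# QuarantineBlog and release_threshold are hypothetical names/values.

class QuarantineBlog:
    def __init__(self, member_count, release_threshold=0.10):
        # release_threshold: fraction of list members whose votes free a post
        self.member_count = member_count
        self.release_threshold = release_threshold
        self.quarantined = {}   # post_id -> set of member ids who voted
        self.released = []      # post_ids released back to the full list

    def quarantine(self, post_id):
        """The only human (moderator) decision: place a post in quarantine."""
        self.quarantined.setdefault(post_id, set())

    def vote_release(self, post_id, member_id):
        """Any member may read the quarantine and vote to release a post.
        Returns True when the vote pushes the post over the threshold."""
        votes = self.quarantined.get(post_id)
        if votes is None:
            return False  # not quarantined, or already released
        votes.add(member_id)  # a set, so duplicate votes don't count twice
        if len(votes) >= self.member_count * self.release_threshold:
            del self.quarantined[post_id]
            self.released.append(post_id)
            return True
        return False

blog = QuarantineBlog(member_count=50, release_threshold=0.10)
blog.quarantine("post-17")
for member in ["ann", "bo", "cy", "di", "ed"]:
    blog.vote_release("post-17", member)  # 5 votes = 10% of 50 members
print(blog.released)  # ['post-17']
```

Note that the moderator's only lever is `quarantine()`; release is entirely in the community's hands, which is the transparency point above: every decision in the chain is visible and reversible by collective vote.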
Another example: Create a panel of moderators who rotate responsibilities on a regular basis. Or create a blog for the moderator to discuss moderation decisions/situations, with comments enabled for input/discussion.
3. If you're interested in a grand philosophical flourish of an ending, read on; otherwise skip.
I'm not a legal scholar, but I am invoking my American right to propound on legal and cultural matters with an amateur's enthusiasm that shouldn't be mistaken for a claim to authority:
I think it's erroneous to immediately and automatically equate online community stewardship with censorship, just as it's a logical fallacy to say that we must meet the challenge of tolerance in our neighborhood listservs because someday we'll have to stomach Nazi parades in the streets. There are very fine tissues of experience, tradition, and culture between the two poles of tolerance and censorship, and discussing what is there and, importantly, how to enact and live it shouldn't just be a task for lawyers and the legal-minded. There's a culture of practice which -- sometimes -- overlaps with the culture of law. (Even the culture of "law" doesn't always overlap with the law.) We all live between these two cultures. Some of us bridge the distance between culture and law by using values that are spiritual, religious, or scriptural. Some of us bring other cultural values to bear. I don't know what sort of term to use for how I do it, except to say that I rely on my humanistic training and what little I know about the principles of user-centered design. We may never create the perfect utopia, but we can at least accept that we are the creators of the systems we inhabit, not their servants, and the best system is the one that serves the most creators, as inclusively and sustainably as possible.
Now back to your regular programming.
Sunday, March 1, 2009
6 comments:
Interesting posts. Although I profoundly disagree with you, it is interesting to follow your thinking.
I think you misunderstand the "logical fallacy" you mention about allowing Nazi parades in our streets. The logic of free speech is not that we must listen to the fringe because they have something relevant to say. The logic is that by trying to limit or curb the free speech of the fringe, we inevitably will curtail much more speech than just the speech we target. History has clearly shown us this.
The WPNA is a quasi-governmental organization, so the first amendment concerns are real, not implied. This is not a private club where you can exclude the cranks and ne'er-do-wells as you wish, this is (ideally) a representative body that has to be open to all voices in the neighborhood - and lots of those voices are going to be full-throated assholes. That's life in the public sphere.
In my post I tried to show how online community stewardship is entirely compatible with the protection of free speech. But in your post you come to the very conclusion I criticized. For you, any stewardship is censorship, plain and simple. But this is simplistic. Can you respond to the substance of my post?
You also argue that WPNA is "quasi-governmental." Well, a public school is also a governmental entity, and one cannot go into a school and say whatever one wants. (I've never tried it, but I bet this is true.) Same thing for a police station, which is also a governmental entity. Try standing in front of the fire station; they will move you along. There are operational, legal, and financial constraints on free speech in those environments; why doesn't a neighborhood listserv have access to those same sorts of constraints?
Or let me put it another way: let's acknowledge that "suck it up and delete" isn't tenable, or for the sake of argument, let's put it off the table. What are YOUR solutions for preserving community spaces according to community standards AND that preserve public spaces according to principles of free speech, free assembly, and due process?
Also, El Longhorn, you're forgetting one crucial aspect of this situation: Hector is a pseudonym. Surely you're not going to argue that unreal persons deserve free speech protections, are you?
I don't know if I can respond in the way that you want since I don't agree that there is a problem requiring stewardship. Your recommendations are essentially procedural - we can create a committee with certain standards to flag posts and then the community can vote up or down on whether those "controversial" posts should actually be allowed to go out to the listserv. Technology can help us do this.
Such a system will inevitably catch both good and bad posts, as people's subjective opinions of what counts as offensive or inappropriate vary greatly. For me, the minor annoyance of folks like Hector is not worth all of this. For the people whose posts get flagged, will they spend the time and effort to convince the "community" that there is value to what they are saying or will they just go away and stop posting? I bet they just stop. Most people do and then you get less diverse, more homogenized speech.
Also, I am inherently skeptical of the "community values" that are always bandied about - usually they are just individual values that the person is trying to impose on the community.
Regarding the pseudonym, there was a debate about requiring people to post their addresses at the end of their message - few agreed with that. Seems like the "community" wants it both ways - they want to remain anonymous themselves but want to make sure the troublemakers are who they say they are. As for my solutions, I think the current process was fine - no personal insults, no cursing or obscene language, etc. I like objective standards (time, space, and manner restrictions in first amendment law terminology). Regulating speech in a certain time, place and manner is OK (public schools, police stations, no door to door stuff at 2 a.m.), it is when we get into regulating content that you run into problems. And that is essentially what you want to regulate - content.
I can think of one post that was inappropriate by Hector - the one where he talked about "colored kids" as being animals (although that may not be too far off from what many in our community really think). Other than that, he is snide, misleading, condescending and generally disruptive, but he really hasn't done much. Which posts of his would you have censored (or "moderated")?
A couple of responses:
1. I agree that it's contradictory for people who don't want to post their own addresses to now demand that Hector do so.
2. You wrote, "For the people whose posts get flagged, will they spend the time and effort to convince the "community" that there is value to what they are saying or will they just go away and stop posting? I bet they just stop. Most people do and then you get less diverse, more homogenized speech." I don't know if those people will quit or not, but it doesn't matter -- the stuff will still be posted and publicly available, so they haven't lost access to the public square. I also know the listserv has lost people who could no longer stomach the tone and content of posts. To paraphrase one of your sentences, by NOT curbing the pollution of email inboxes, you curtail political involvement. Is that a desirable outcome?
3. I would have also put Miguel's post (which I eventually posted in an edited form) in the quarantine, as well as Hector's posts. I also think that Steve Spiers' posts would have ended up there.
4. I think "objective" standards, as you define them, are insufficient when it comes to matters like this. In any case, there are multiple ways to define the "objective." As I said in my earlier note, one kind of objectivity is the procedural. Your objectivity is called "instrumental objectivity" -- if the clock says 9:01, the clock doesn't lie, and everyone has to leave.
But another kind of objectivity is the multi-subjective. This is utilized in science all the time: by summing or incorporating multiple subjective viewpoints, you can distribute bias and neutralize extreme views. That's what I've tried to offer here.
To emphasize: my solution isn't curtailing speech. It's simply creating another place for problematic speech to go, as well as a mechanism for it to get out of there. The list itself is curated.
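The "multi-subjective" objectivity mentioned above can be sketched as a small aggregation exercise. The scenario is entirely my own illustration, not anything proposed in the thread: members rate how problematic a flagged post is, and a trimmed mean discards the most extreme views on each end before averaging, so no single outlier opinion dominates. The 0-10 scale and the 20% trim fraction are assumptions.

```python
# A toy illustration of "multi-subjective" objectivity: pool many subjective
# ratings, then trim the extremes so outlier views cannot dominate.
# The rating scale (0-10) and trim fraction (20%) are assumptions.

def trimmed_mean(ratings, trim_fraction=0.2):
    """Average the ratings after discarding the most extreme views
    on each end of the sorted list."""
    ordered = sorted(ratings)
    k = int(len(ordered) * trim_fraction)  # how many to drop from each end
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# Ten members rate how problematic a flagged post is, 0 (fine) to 10
# (unacceptable). One extreme vote sits at each end of the spectrum.
ratings = [2, 3, 3, 4, 3, 2, 10, 3, 4, 0]
print(trimmed_mean(ratings))  # 3.0
```

The point of the design is visible in the numbers: the lone 10 and the lone 0 are dropped before averaging, so the result reflects the broad middle of community opinion rather than its loudest extremes.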
Post a Comment