Wednesday, March 4, 2009
Hector is Gone
My interview with Mark Boyden will be posted soon, but one thing he told me is that Hector unsubscribed from the list (though he might have resubscribed under a different name).
Sunday, March 1, 2009
More Thoughts
These are some additional thoughts after my post yesterday, which I drafted as a response to a note to the listserv and just mailed privately. When I stand back, the vigor of my interest in this situation makes me wonder if I'm losing the ability to discern tempests in teapots from actual storms. On the other hand, I'm genuinely interested in the issue of community stewardship and curious about the forms of leadership that will have to emerge in a new era of local and hyperlocal society, if I've discerned correctly the real storm a-brewin'.
1. Let's call the traditional libertarian solution to online spats the "suck it up and delete" model. Among its other faults, this model dismisses the substance of people's grievances, and automatically judges any criticism as intolerance or an attempt at censorship. In the case of Hector and the Spammers, we see how the "suck it up and delete" model -- as well as a lack of transparency on the part of the moderator -- put people in the position where they had to act as vigilantes. I don't think vigilantism is desirable. I'm not saying that one should always opt out of the system for adjudicating grievances. I'm saying that the path through this system is way too short, and people run out of options way too fast, for it to be satisfying. Or sustainable. When people become vigilantes, it's time to examine the system.
2. We should not insulate ourselves in our communities, either face to face or online/virtual, and we should absolutely not pretend that there aren't differences, either political or personal. And I agree that we are enriched by different opinions and voices, and I acknowledge that dissent is an American tradition worth preserving at all costs.
That said, there's a richer array of community-based, collaborative, and shared solutions to trolls and provocateurs than what the "suck it up and delete" approach offers. These balance principles with actual practice, and I think they are more defensible and sustainable.
Here's an example: Create an off-list blog where moderators quarantine questionable posts (not posters); a quarantined post is not released to the full list until a reasonable threshold of list members votes to release it. This gives you 1) universal access to questionable posts and 2) community involvement in maintaining certain standards, while you 3) preserve respect for the First Amendment, the value of dissent and diversity, and the importance of public discourse and 4) retain the symbolic value of posting to the entire list as achievable and potentially available to everyone. Those are the public values you want to preserve. Additionally, the process would be entirely transparent: you always see the decisions your moderator is making, in real time.
On the individual/private side, this solution means that 1) no individual ends up with a polluted inbox (by an admittedly human definition of "pollution," but one that is dynamic, transparently created, and under discussion) and 2) everyone continues to have access to diverse opinions, because anyone can openly read the quarantine. If an individual believes that a quarantined post deserves the symbolic privilege of being posted, then she can vote for its release and lobby friends and family to vote for it as well. In this way, you create a social feedback system in which subjective decisions are checked broadly against collective values, and in which the decisions at all levels are visible to everyone in the decision-making chain. It also creates two layers of moderation. At the end of the day, the real moderators would be the online community itself.
There's value for the poster, too: 1) it means that posts, not posters, are flagged for consideration; 2) it means that a poster can continue to argue for the legitimacy of her ideas, and give the community access to those arguments; and 3) it means that she can learn what is broadly permissible and what isn't in the spectrum of human discourse.
Most of this could be automated; the only human decision would be the moderator's, in placing items in the quarantine blog.
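For anyone who wants to see the mechanics, here is a minimal sketch, in Python, of how the quarantine-and-vote workflow could be automated. Everything in it is hypothetical -- the Quarantine class, the release_fraction threshold, and the member names are mine, not features of any existing list software. It only shows the shape of the system: a moderator holds a post, any member can read the quarantine and vote, and a post is released mechanically once enough votes come in.

# A minimal, hypothetical sketch of the quarantine-and-vote idea described above.
# None of these names come from real list software; they only illustrate the workflow.
from dataclasses import dataclass, field

@dataclass
class QuarantinedPost:
    post_id: str
    author: str
    body: str
    votes: set = field(default_factory=set)   # members who have voted to release this post

class Quarantine:
    def __init__(self, list_members, release_fraction=0.1):
        # release_fraction is the "reasonable threshold": the share of list
        # members whose votes send a post back to the full list.
        self.list_members = set(list_members)
        self.release_fraction = release_fraction
        self.held = {}        # post_id -> QuarantinedPost, readable by every member
        self.released = []    # posts the community has voted back onto the list

    def hold(self, post_id, author, body):
        # The one human decision: a moderator places a post (not a poster) in quarantine.
        self.held[post_id] = QuarantinedPost(post_id, author, body)

    def vote_to_release(self, post_id, member):
        # Any list member may read the quarantine and vote for a post's release.
        if member not in self.list_members or post_id not in self.held:
            return False
        post = self.held[post_id]
        post.votes.add(member)
        needed = max(1, int(len(self.list_members) * self.release_fraction))
        if len(post.votes) >= needed:
            # Threshold reached: the community, not the moderator, releases the post.
            self.released.append(self.held.pop(post_id))
        return True

# Example: a 20-member list where votes from 10% of members release a post.
q = Quarantine(list_members=[f"member{i}" for i in range(20)], release_fraction=0.1)
q.hold("p1", "some poster", "A questionable post")
q.vote_to_release("p1", "member3")
q.vote_to_release("p1", "member7")
print([p.post_id for p in q.released])   # prints ['p1'] once the threshold is met

The design point worth noticing is that the moderator's only power here is to hold; release is purely mechanical and visible to everyone, which is what keeps the moderator's subjective judgment accountable to the community.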
Another example: Create a panel of moderators who rotate responsibilities on a regular basis. Or create a blog for the moderator to discuss moderation decisions/situations, with comments enabled for input/discussion.
3. If you're interested in a grand philosophical flourish of an ending, read on; otherwise skip.
I'm not a legal scholar, but I am invoking my American right to propound on legal and cultural matters with an amateur's enthusiasm that shouldn't be mistaken for a claim to authority:
I think it's erroneous to immediately and automatically equate online community stewardship with censorship, just as it's a logical fallacy to say that we must meet the challenge of tolerance in our neighborhood listservs because someday we'll have to stomach Nazi parades in the streets. There are very fine tissues of experience, tradition, and culture between the two poles of tolerance and censorship, and discussing what is there and, importantly, how to enact and live it shouldn't just be a task for lawyers and the legal-minded. There's a culture of practice which -- sometimes -- overlaps with the culture of law. (Even the culture of "law" doesn't always overlap with the law.) We all live between these two cultures. Some of us bridge the distance between culture and law by using values that are spiritual, religious, or scriptural. Some of us bring other cultural values to bear. I don't know what sort of term to use for how I do it, except to say that I rely on my humanistic training and what little I know about the principles of user-centered design. We may never create the perfect utopia, but we can at least accept that we are the creators of the systems we inhabit, not their servants, and the best system is the one that serves the most creators, as inclusively and sustainably as possible.
Now back to your regular programming.