Yesterday, a friend was complaining about Internet trolls, those annoying occasional inhabitants of message boards, blog comments, and so forth, who exist merely to incite and inflame rather than participate or inform.

The Irony

Oddly enough, it’s quite possible trolls will hasten the demise of some venues for anonymous speech, becoming not only their own undoing but also the impetus for new segments of various online communities. For most day-to-day blog, message forum, or other activities, nothing approaching full anonymity is required. There’s rarely anything so deeply controversial going on. As a result, the primary reason to use a pseudo-anonymous screen name is simple prudence: keeping your real-life identity somewhat private, even if the risk of identity theft and the like from such places is relatively low. Being pseudo-anonymous, however, does imply some degree of real-world tie-in at some level. Just what form this may take is difficult to say.

Possibly digital signatures verified by trusted agents; possibly something else entirely. The end result would be that certain blog comment features, social sites, forums, and more would have recourse for truly blocking users. The issue today is that we are in a netherworld of transition: the full ecosystem that would allow users to easily acquire and use such tools does not yet exist.
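To make the trusted-agent idea concrete, here is a minimal sketch of what such a scheme might look like. All names here are hypothetical, and HMAC is used as a simplified stand-in for a real public-key signature (a production scheme would use something like Ed25519 so that sites could verify tokens without sharing any secret with the agent):

```python
import hmac
import hashlib

# Hypothetical secret held by a trusted verification agent.
# Real systems would use public-key signatures instead, so that
# communities could verify tokens without holding this secret.
AGENT_KEY = b"demo-agent-secret"

def issue_token(screen_name: str, identity_verified: bool) -> str:
    """The agent signs a claim that this pseudonym maps to a verified person."""
    claim = f"{screen_name}:{'verified' if identity_verified else 'unverified'}"
    sig = hmac.new(AGENT_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}:{sig}"

def check_token(token: str) -> tuple[str, bool]:
    """A community site checks the agent's signature before trusting the claim."""
    claim, _, sig = token.rpartition(":")
    expected = hmac.new(AGENT_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("forged or corrupted token")
    screen_name, _, status = claim.partition(":")
    return screen_name, status == "verified"

token = issue_token("night_owl_42", True)
print(check_token(token))  # ('night_owl_42', True)
```

The point is that the pseudonym stays public while the real-world identity stays with the agent; the community only learns that *somebody* vouched for it, which is exactly the pseudo-anonymous middle ground described above.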

“Trusted” Members

Once easy-to-use verification tools exist that allow for both authentication and pseudo-anonymity, two concomitant pieces could result in more tightly managed communities. The first is add-ons to community software that verify members’ identities. The second is the ability to tier software filters so that users may choose their level of participation. In this way, blog owners, forum owners, and others could decide what participation levels they’ll allow for different classes of user. (This exists now to some degree, but in most cases it’s based on the most simplistic notions of identity.) Once there’s some kind of real identity tag, administrators could decide whether fully anonymous or blocked users get any participation at all. Or they could simply let such users do as they wish, since verified members would have a choice about what they view. Some verified participants may look at everything. Others may look poorly upon those who don’t publicly own their words and simply ignore them. This way, one can have a fully uncensored product if desired, while still allowing users to block out the noise, which is most likely to come from non-verified accounts.
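The tiering described above can be sketched in a few lines. The tier names and the two-level policy (a site-wide floor that each reader may raise) are my assumptions; the post only distinguishes verified, anonymous, and blocked users:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical tiers; the essay sketches only verified / anonymous / blocked.
class Tier(Enum):
    BLOCKED = 0
    ANONYMOUS = 1
    VERIFIED = 2

@dataclass
class Comment:
    author: str
    tier: Tier
    text: str

def visible_comments(comments, reader_minimum: Tier, site_minimum: Tier):
    """Site policy sets a floor; each reader may raise it further for themselves."""
    floor = max(reader_minimum.value, site_minimum.value)
    return [c for c in comments if c.tier.value >= floor]

thread = [
    Comment("anon123", Tier.ANONYMOUS, "drive-by flame"),
    Comment("jane_doe", Tier.VERIFIED, "thoughtful reply"),
    Comment("banned_guy", Tier.BLOCKED, "spam"),
]

# A reader who only wants verified voices, on a site that allows anonymous posts:
print([c.author for c in visible_comments(thread, Tier.VERIFIED, Tier.ANONYMOUS)])
# ['jane_doe']
```

Nothing is deleted in this model: the anonymous comment still exists for readers who opt in to seeing it, which is the uncensored-but-filterable balance the paragraph argues for.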

Online Community Evolution: A Quick Look Back

Going back in online history to the early major consumer services (CompuServe, Prodigy, and the still-in-the-mix America Online), we see they were all overrun by the amazing benefits of the open standards various Internet technologies offered. But as much as early and current Internet communities provide in terms of features and benefits, something was also lost.

These companies had a direct billing relationship with their customers. By virtue of this, they all had a reasonably high degree of assurance about whom they were dealing with. Or minimally, they and their users had recourse against those whose behavior crossed the line. As a result, whether some liked it or not, agreed or disagreed with the policies, their communities were inherently more manageable than what’s possible now.

And What of the Purists?

There are those who would say that practically any verification, identification, or classification of speech based on identity is bad. That it squelches. That it risks stifling voices. They may be right. Unfortunately, they’re also wrong. Without any controls at all, there is a tendency toward the tragedy of the commons. The still-existing USENET newsgroup system is a perfect example: a spam-ridden wasteland where the signal-to-noise ratio is so low the place is effectively worthless for any discourse. Hardly a place amenable to speech.

And what of reputation? Collectively, we’ve always made judgments about the veracity and value of content to some degree based on its source.

Going Forward

As we go forward and online community becomes even more a part of all things, it’s more likely than not that moderation tools will be needed. The cliché about the lone bad apple spoiling the bunch is perfectly apropos here. Just a little bit of trouble can cost site owners and moderators inordinate amounts of time. There are certainly free speech purists who would look upon some of what’s suggested here with great concern. And they’d perhaps be partially right. But any right to speech does not guarantee the right to be heard, and certainly not an obligation on others to listen. In any case, the tools described here would allow for choice among the listeners, which is as important as the voices of the speakers. It’s a balance. It’s easy enough to see this should one happen to join an online community with a fully verified user base. For example, various professional organizations run online forums whose users actually pay subscription charges. Such forums rarely seem to suffer the same level of difficulty as more open forums, even those requiring some form of registration. (Which is most communities, at least for doing anything more than reading.)

So while one-off communities may exercise greater controls than more open spaces, more generic, across-the-’net tools don’t fully exist yet. Individual communities may choose to put up certain walls or attempt to use billing mechanisms, but this won’t quite work for everyone. It’s too cumbersome for users (who may just be passing through a community for a time) and too expensive for some community producers. Some of the pieces exist now and more are forming. It’s just a matter of time before features become sophisticated enough to allow for the kind of granularity I’ve discussed here. Just for fun, I’ll throw a dart and say such tools will begin gaining traction sometime from late 2008 through 2009. That will be enough time (over the course of 2007 through mid-2008) for enough new online communication products to have clogged up some common areas with crap, to the point that users will demand controls or abandon such products.