Wednesday, December 19th, 2012

The Internet's Vigilante Shame Army

In 2012, more than ever, the internet became the way we shame people now. Here to talk about online justice and public shaming are academic internet experts Whitney Phillips and Kate Miltner. (Whitney recently completed her PhD dissertation on trolls; Kate wrote her master's thesis on LOLCats—yep!) Up for discussion today: Violentacrez, hate blogs, racist teens on Twitter, poor Lindsey Stone, Hunter Moore, and last Friday's misidentification of the Sandy Hook shooter.

Whitney: Contrary to Nathan Heller's Onion-worthy New York Magazine article lamenting the loss of the "hostile, predatory, somewhat haunted" feel of the early web, the internet of 2012 is not always a warm and fuzzy place. In fact it can be pretty terrible, particularly for women, a point Katie J.M. Baker raises in her pointed response to Heller's article. The internet is so far from a utopian paradise, in fact, that lawmakers in the US, UK, and Australia are scrambling to do something, anything, to combat online aggression and abuse.

Not everyone supports legal intervention, of course. Academics like Jonathan Zittrain readily concede that online harassment is a major concern, but they argue that the laws designed to counter these behaviors risk gutting the First Amendment. A better solution, Zittrain maintains, would be to innovate and implement on-site features that allow people to undo damage to someone's reputation, livelihood, and/or peace of mind. As an example, during an interview with NPR, Zittrain suggested that Twitter users could be given the option to update or retract "bad" information, which would then ping everyone who interacted with the original tweet. Existing damage would thus be mitigated, and draconian censorship measures rendered unnecessary.
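The retraction mechanism Zittrain describes can be sketched in a few lines. To be clear, everything below is hypothetical: the class, the method names, and the notification hook are illustrative stand-ins for the idea of "ping everyone who interacted with the original tweet," not any real Twitter feature or API.

```python
# A minimal sketch of Zittrain's proposed retraction feature: record every
# user who interacts with a tweet (retweet, reply, favorite), and when the
# author attaches a correction, notify all of them. All names are invented
# for illustration.

class Tweet:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.interactions = set()  # users who retweeted/replied/favorited
        self.correction = None

    def record_interaction(self, user):
        self.interactions.add(user)

    def retract(self, correction_text, notify):
        """Attach a correction and ping everyone who touched the tweet."""
        self.correction = correction_text
        for user in self.interactions:
            notify(user, f"Correction to @{self.author}: {correction_text}")


# Example: a false report spreads, then is corrected.
pings = []
t = Tweet("newsdesk", "Shooter identified as Ryan Lanza")
for user in ("alice", "bob", "carol"):
    t.record_interaction(user)
t.retract("Earlier report was wrong; we regret the error.",
          notify=lambda user, msg: pings.append((user, msg)))
```

The design choice worth noting is that the correction travels along the same graph of interactions that spread the original claim, which is exactly the "undo the damage where it happened" logic Zittrain is gesturing at.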

Regardless of the impact that either type of intervention might have, the fact is that today, this very second, there is often little recourse against behaviors that might be deeply upsetting, but aren't quite illegal either. In those cases, what should be done? What can be done?

If recent high-profile controversies surrounding Violentacrez, Comfortably Smug, racist teens on Twitter, Lindsey Stone and Hunter Moore are any indication, it would seem that many people, members of the media very much included, are increasingly willing to take online justice into their own hands. Because these behaviors attempt to route around the existing chain of command (within mainstream media circles, the legal system, even on-site moderation policies), I've taken to describing them as a broad kind of online vigilantism. It might not be vigilantism in the Dog the Bounty Hunter sense, but it does—at least, it is meant to—call attention to and push back against some real or perceived offense.

Kate: The first thing to point out is that a lot of this online vigilantism takes the form of shaming—and surprise, surprise, this sort of behavior is not unique to the internet (Channel 2's Shame On You, anyone?). As a cultural practice, shaming goes back centuries, probably millennia. It's one method of enforcing norms—those unwritten, extra-legal, societal rules that dictate (or at least influence) how we behave. Only now, instead of making norm violators run around wearing big red As on their chests, we make Facebook pages that exhort people to "LIKE THIS IF YOU THINK HESTER PRYNNE IS A DIRTY SKANK! >:-O"

Shaming is a tool that people use for all sorts of reasons—not only to enforce norms, but to feel superior, exact revenge, make a joke, etc. Public humiliation and shaming were used for centuries as judicial punishments—there's a reason we use the word "pilloried" in shaming contexts. According to law professor Daniel Solove, humiliation-based punishments subsided as the prison system developed; they were eventually outlawed on the grounds that they were cruel and unusual forms of punishment. Despite this categorization, Solove contends that certain types of shaming can be useful in that they give people a chance to fight back and can deter potential offenders from engaging in unacceptable behavior. (He offers up customer service forums and sites that combat street harassment, such as Hollaback, as examples of that sort of 'beneficial' shaming).

In the case of Violentacrez and Hunter Moore, shaming was used to fill a legislative void and provide recourse in the absence of a meaningful legal solution; one of the reasons people engage in this type of behavior is that they feel it is their only option for achieving justice. Moore, "The Man Who Makes Money Publishing Your Nude Pics" and proprietor of the now-defunct "revenge porn" site Is Anyone Up?, was doxxed by Anonymous for his remorseless victimization of women. Violentacrez, "the creepy uncle of Reddit" and creator/moderator of several exploitative subreddits (including /r/jailbait and /r/creepshots), lost his job and became known as "The Biggest Troll On The Web" after Gawker's Adrian Chen revealed his identity: Violentacrez was a 49-year-old guy from Texas named Michael Brutsch.

The things that Brutsch and Moore did aren't technically illegal, but clearly cross many moral boundaries. So, because there wasn't much that could be done from a legal perspective, people took matters into their own hands and meted out their own form of punishment.

Another notable example of this is the case of Megan Meier, the teenage girl who committed suicide in 2006 after being cyber-bullied on MySpace by the mother of a friend. As a result of the outcry that followed Meier's death, legislation ended up being passed that expanded existing harassment laws. So, yes, I suppose Solove has a point when he says that shaming isn't necessarily all bad, but also, eesh—what an incredibly slippery slope.

Whitney: Yes, just because it's an option, doesn't mean it's a good option. In the end, online public shaming may create its own problems. So before anybody throws themselves a vigilante-themed cosplay party, it makes sense to slow down and consider how public shaming works (or doesn't work) online.

First, and this is a critical point, shaming often involves crowds. "The hivemind," in internet parlance. Sometimes the hivemind acquires its target in grassroots fashion, for example in the Lindsey Stone case. The outrage many people experienced upon seeing/hearing about one of Stone's Facebook photos—which showed Stone mugging disrespectfully in front of a "silence and respect" sign at Arlington National Cemetery—prompted someone to create a "Fire Lindsey Stone" Facebook page, which quickly amassed several thousand followers and ultimately resulted in Stone being fired from her job.


In other cases, the hivemind is spurred to action by external amplifiers, which is where questions of accuracy become particularly important. We saw this last Friday with the tragic shootings at Sandy Hook Elementary. A number of media outlets—all scrambling to piece together information as quickly as law enforcement provided it—identified the shooter as one Ryan Lanza of New Jersey, a report that originated with police and proved false. Adam Lanza was the actual shooter; Ryan Lanza was his brother. Adam had been carrying Ryan's ID, hence the misidentification. Not only were these initial reports confusing, they triggered a whole slew of nasty mob behaviors, including the airing of speculative dirty laundry (home addresses included) as well as outright harassment of the Ryan Lanza who turned out to be the shooter's brother—and of all the other "Ryan Lanzas" who had the misfortune of having searchable social media profiles. Even the friends and followers of the Ryans/Adams in question were subject to abuse (see this article by Matt Bors, who was inundated with hateful messages simply for being Facebook friends with the "real" Ryan Lanza).

Kate: I think that CUNY professor Angus Johnston put it quite well:

The other issue that Johnston's tweet highlights is that, in our era of "citizen journalism," it's not just media outlets who are involved in this process, but anyone with a social media account—and all of them seem to get caught up in an unfortunate game of FIRST!!11!

The super-reactive and impatient nature of crowds, particularly when it comes to very emotional events like Sandy Hook, means that people will circulate (and in Ryan Lanza’s case, act upon) anything that looks like news, even if it hasn't been verified. This is why people were so angry when they found out that Comfortably Smug was intentionally posting misinformation on Twitter during Hurricane Sandy. He manipulated an urgent need for news during an emergency to basically mess with people. He knew very well that what he was putting out there would spread fast and far, because that's what happens in these sorts of situations.

People seem to be getting more sensitive to the accuracy of information that is spread around social networks (particularly Twitter) during crisis events. Xeni Jardin—along with dozens of others—took some heat for circulating incorrect details about the identity of the Sandy Hook gunman and defended herself by arguing that this sort of rapid information-sharing is how we collaboratively process these sorts of things.

Now, to be fair to Jardin, she was only reporting what the police had confirmed—that was their mistake, not hers. And yes, Twitter does allow us to collectively report and process information. However, the problem with the “Internet Truth Machine” is that, while eventually we get the story straight, the messy process of confirming actual, verified facts no longer takes place behind the doors of a newsroom, but publicly and in real time. This means more transparency, but it also means that any person can pounce on a potentially incorrect piece of information and run with it, whether that means a retweet or something less innocuous. This is why journalists and other amplifiers/influencers have a particular responsibility to ensure that the information they’re putting out there is correct. Which, in practical terms, means waiting before they tweet—even at the risk of being late with the news.

Whitney: Of course, the idea that journalists should wait any amount of time runs up against their job description, which is to report what happens when it happens. Problems arise when the "facts" they do report are, in fact, not factual, or when they don't bother to strongly qualify "not sure if legit" retweets (an admittedly difficult task in 140 characters), or when what they're passing along is no more than sensationalist hearsay—all of which risks inciting the angry online mob before the full details of the story have a chance to shake out. Mashable's Lance Ulanoff has a few harsh words for precisely this impulse:

No one waits for the facts anymore, least of all online media. It is "find and run with it." All apologies can come later. It's just media, after all. Just digital words and images, easily changed with the stroke of a key. People will read the update or the updated post. So it's all good, right?

Except it's not.

It matters because journalistic carelessness is damaging. It represents a basic disregard for what is true in favor of what is first, fast, and clickable, an attitude that, as mentioned earlier, risks encouraging knee-jerk and often irresponsible vigilante behavior—or as this Telegraph article describes, crazed amateur detective work.

But stirring the hivemind isn't the only problem with the "report now, ask questions later" attitude. This frenzied clamor for information, even for false or partial information (both on the part of journalists and the public), only amplifies the powder-keg relationship between breathless media coverage of mass shootings/suicides and future mass shootings, a point Charlie Brooker discusses in this must-watch clip, and which feminist media scholar Carol Stabile addresses in a recent blog post. Given this relationship, I would argue that the incomplete, speculative or otherwise sensationalist reports to emerge in the wake of Sandy Hook are every bit as problematic as, if not more problematic than, the handful of trolls who have created offensive parody accounts, and against whom law enforcement in Newtown is now threatening to file charges.

Kate: Agreed, and ugh. Of course, the Sandy Hook case is an extreme (and extremely upsetting) example. It's important to point out that not all cases of shaming/online vigilantism are equal, or provoke the same level of outrage.

In the case of Lindsey Stone, it seemed that things started off with finger-wagging and then ended up escalating to a call for her head/job—a reaction that some felt to be incredibly unfair and disproportionate. This illustrates one of the major problems with online shaming, which is that the outcome often ends up being different from the original intent: what starts off as a "tsk-tsk-tsk" ends up being a permanent, often life-damaging record of the shaming targets' mistakes and/or transgressions. The internet never forgets, which is why some people, including philosopher/law professor Martha Nussbaum and researcher Brené Brown, think that shaming is inappropriate no matter what the situation, because the punishment ends up not fitting the crime (i.e., being fired over a silly picture).

Nussbaum and Brown both argue that shaming is too damaging to be used in a civilized society, because it targets the person ("you are innately bad") rather than the person's acts ("you did a bad thing"). Nussbaum points out that the point of shaming is to alienate the "defective" person from society by marking them with a degraded identity (in this case, a nasty Google history). Short of a name change, these people are saddled with this stuff for life. Which, I suppose, can be a good or a bad thing depending on your views on rehabilitation and the ability of human beings to change. Personally, I don't think that branding people with their worst decisions and keeping them excluded from mainstream society is the best way to deter them from behaving badly—there are lots of maxims about people with nothing to lose, and they exist for a reason.

Whitney: Yes, but branding people with their worst decisions and keeping them excluded from mainstream society isn't just about deterring the guilty party. These kinds of deterrents—capital punishment being the most obvious example—are as much about scaring everyone else straight as they are about discouraging recidivism. So it's not just "I don't want to be executed so I shouldn't commit any more crime," but also, and perhaps more importantly, "I better not start committing crime because I don't want to end up like that guy they just executed." These types of deterrents are, in other words, fundamentally pedagogical. They teach us what we can and cannot get away with. The same goes for acts of online vigilantism. Like more traditional offline deterrents (if you kill people, we will kill you, so don't kill people), online shaming allows certain individuals or groups to model what is and what is not acceptable within a specific (sub)cultural context. But not just model—the second (and simultaneous) half of that equation is punishing, or threatening to punish, anyone who deviates from whatever established norm.

Take the Jezebel racist teens piece, which publicized the names, social media profiles and schools of the accused teenagers. One can criticize (and I have criticized) Tracie Egan Morrissey for this approach, particularly the fact that she unfairly implicated an innocent 15-year-old in her anti-racist dragnet (not to mention the fact that she inadvertently incentivized anonymous racist expression). But in order to fully engage with the story, it's critical to remember that she wasn't just focused on these specific teenagers. She had much bigger fish to fry, namely all those people who haven't been called out for their racism. Morrissey was sending them the unequivocal message that YOU SIMPLY CANNOT SAY THIS SORT OF SHIT IN PUBLIC, AND IF YOU DO, YOU'LL BE SORRY, a point echoed by many of the article's commenters. In other words, if you step out of line, you will be punished—because you deserve to be punished.

Kate: Right, but the problem is that I don't think that "THIS COULD HAPPEN TO YOU" is particularly effective. (If this Deadspin piece on racist reactions to Obama's Newtown address is any indication, Egan Morrissey might as well have been shouting at a brick wall.)

History is filled with cases where making an example of someone didn't have the intended effect, and even crossed over into backfire territory. This is particularly true when the targets are poorly chosen. In the realm of recent history, the RIAA tried the "selectively sue" tactic with illegal downloaders (Napster! KaZaA!) in an attempt to scare all of those durn teenagers who were robbing multinational conglomerates of their chimney-sweeping wages (I don't mean to make light of piracy, because artists' rights and compensation matter, but I don't think that these suits were really about artists losing money). What ended up happening is that a few of the "worst" offenders went to court, the RIAA looked pretty stupid, and everybody else kept downloading whatever the hell they wanted.

You brought up a key point about shaming targets "deserving" to be punished. One thing that I've noticed in quite a few of these cases is that a rhetoric of blame comes into play. And the blame often centers on the public nature of the activity as well as expectations of digital literacy—i.e., the shaming behavior is not only justified because the target did something "wrong," but also because they were stupid enough to do it on the internet. I mean, the internet has been around long enough at this point, MORONS—what else can they expect when they post (X Y Z type of content) online? It's their own fault, they totally should have known better. But that rhetoric comes scarily close to other types of victim blaming that most people no longer consider acceptable. For example, "What else can you expect when you go out in public after midnight wearing a short skirt? It's her own fault, she should have known better."

And in many instances, the line between the two verges on non-existent. Misogyny is a major theme when it comes to online shaming—to quote Adrian Chen, "nothing gets the online hive mind's psychotic rage juices flowing better than a young woman who it feels is out of line." Both Kiki Kannibal and Amanda Todd were excoriated for being "sluts" online. Todd was pressured (and then blackmailed) into baring her breasts to a man she met in an internet chat room. After a topless photo of her was released online, Todd experienced harassment so severe that she committed suicide. Kannibal was unfairly blamed for the death of her ex-boyfriend (who had also raped her); after she received death threats, her family had to relocate. But hey, they totally asked for it, right? Disgusting.


Whitney: That's not the only problem with "asking for it" rhetoric. This same sort of victim blaming is pervasive within trolling circles, and provides an interesting analogue to social justice vigilantism. (Is that a phrase? It is now.) To trolls, the fact that someone has been trolled is proof that they should have been trolled. The target's anger or frustration or distress thus functions as both the goal of and the justification for the trolls' actions. And with good (well, maybe not good, but "valid," i.e. the conclusions follow logically from the premises) reason, since according to trolls, nothing on the internet should be taken seriously—an imperative that doesn't just allow for but in fact encourages pushback against sentimentality and emotional attachment. Of course, "nothing on the internet should be taken seriously" is a universalizing (not to mention self-contradictory) assumption, one that is very easy to make when you have the option—which is to say, the privilege—of remaining anonymous. But for those who accept this premise, it is extremely easy to justify punishing those who fail to adhere to how people on the internet "should" behave.

Significantly, the same vigilante rhetoric used by trolls is almost interchangeable with the rhetoric used by those engaged in other forms of online shaming (danah boyd discusses the behavioral mirroring between trolls and those who would unmask them here). Some universalizing assumption—nothing on the internet should be taken seriously, everything you have ever said on the internet can and should be held against you forever, people have the right to say and do anything on the internet with absolute impunity because free speech—is advanced, simultaneously establishing the boundary of what is and what is not acceptable, and reifying (what is presumed to be) a clear-cut distinction between the "us" who punish and the "them" who (ostensibly) deserve punishment. And this is where the specific details of a given case—specifically, what borders are being policed by whom—become critical to assessing the ethics, or lack thereof, of vigilante interventions.

Kate: Right, and this is complicated by the fact that the boundaries between specific pockets of internet are collapsing and blurring. So where once you (mostly) had distinct, separate communities with norms that were policed within those communities, we now have more of a mish-mosh where people with conflicting values end up policing each other, which makes things very messy. For example, what is considered to be acceptable (or at least, not particularly problematic) on certain parts of Reddit can be very different from what is considered to be acceptable in "mainstream" internet circles. That wasn't an issue in the early days of the site when it was mostly self-contained, but now that Reddit provides endless fodder for mainstream news outlets, the libertarian ethos that errs on the side of "freedom to" vs. "freedom from"* (i.e., "freedom to post upskirt shots of teenagers" vs. "freedom from exposure to content that most people would find disturbing") is more problematic. For non-redditors, it may be totes cool that Obama did an AMA and Advice Memes are being used to sell Kia Sorentos, but not as cool that /r/deadjailbait exists.

This clash is not exclusive to Reddit, either—you can also see it with certain hate-blogs. The hate-bloggers (which is a problematic term for me, but that's another post) will stumble upon a blog/blogger whose modes and ways of being are completely nauseating/repulsive to them, and react by posting snarky takedowns ("they put this drivel in the public arena, I am within my rights to publicly point out that they are a-holes"). And to be honest, sometimes I'm totally reading along in smug mode, particularly when the targets of their ire are misogynist/classist/otherwise offensively ignorant. The targeted bloggers see it as harassment, the hate-bloggers and their audiences see it as justified criticism, and we're back to square one.

Whitney: But maybe that's not a terrible place to end up, because it means we can't easily rely on any facile, foregone conclusions. My personal take (and in the spirit of back-to-square-one-ing) is that the question of whether or not public shaming is GOOD (effective, humane, passes utilitarian muster, etc) is contingent on a number of factors. First, which nouns and verbs are scribbled into the Mad Libs template ([PERSON] violated the maxim that [UNIVERSALIZING ASSUMPTION] and therefore deserves to be taught a lesson by [GROUP])?

Furthermore, what assumption is being universalized? In the case of Hunter Moore, for example, I accept the basic assumption that one should not be a woman-hating dickbag, and so have no problem when the internet decides to give him a healthy dose of his own medicine. I do not, however, accept the premise that posting a stupid, ultimately juvenile photograph to Facebook (I have been the subject of many such photographs, as has everyone) justifies being fired, and so think that what happened to Lindsey Stone sucks.

Finally, the efficacy/ethics of vigilante tactics depends on the accuracy of the information provided. Any vigilante intervention based on false or misleading information is something from which we should back away slowly. The problem is that it's often difficult to know what information is true and what information is false, particularly if we're watching the story unfold on Twitter. One possible solution is to stop blindly accepting and amplifying information we see on Twitter—after all, and to echo the Lance Ulanoff piece mentioned earlier, it is better to be right than to be first, particularly when you consider the potentially devastating consequences of getting things wrong.

In the end, then, how I feel about public online shaming has less to do with how I feel about the method itself, and more to do with the details that animate each particular case.

Because the truth is, there's an awful lot of unconscionable shit on the internet. Merely shrugging our shoulders because "that's just how people are online" is akin to apologizing for incidents of sexual assault on the grounds that "boys will be boys." That position isn't just defeatist; it reifies existing systems of power and almost guarantees that the behaviors in question will continue. And as for waiting for the proper authorities to intervene—and not just intervene, but intervene reasonably and effectively—I wouldn't hold my breath. Given all that, yes, sometimes people do and will continue to need to take matters into their own hands. Which isn't a ringing endorsement for anything, but rather functions as a neon CAUTION sign, flashing in all directions.

Kate: This reminds me quite a bit of the whole free speech conundrum. Free speech is really great when it is being used the "right" way, and terrible when it is being used the "wrong" way. For me, it is great when free speech is used to defend against abuses of power and other such things; it is terrible when, for example, white supremacist groups get to spew their vile vitriol with impunity. If you are the Westboro Baptist Church, it is great that you get to picket military funerals, and terrible that you can't stop Those Awful Liberal Freedom-Killers from telling people that yes, it is perfectly acceptable to be gay, and that gay people deserve equal rights. Conflicting value systems! They make everything so complicated.

Whitney: It's interesting you mention the WBC, particularly their mind-bogglingly inhumane decision to picket Sandy Hook Elementary. This is one of those cases where vigilantism strikes me as the best, and perhaps the only, means of pushback. For some reason, this hate group (which isn't even legally recognized as such; rather, it is classified as a legitimate religious organization and currently enjoys tax-exempt status) is free to pollute the airwaves and street corners with a seemingly endless supply of insane, toxic, incendiary bigotry, a fact I have never fully understood, particularly in this case (if ever there were a time to play the incitement-to-violence/fighting-words card, picketing the funerals of twenty murdered 6-year-olds on the grounds that gay people exist would be it—though I suppose that's a tall order for a government that doesn't recognize these same gay people as equal under the law). No one should have to see or hear that sort of thing, least of all those personally affected by the tragedy. So when I heard that Anonymous had declared war on the WBC and that the Jester, a well-known activist hacker, had taken down an obnoxious troll account mocking the Sandy Hook victims, I tipped my hat to them. Because good.

Kate: I agree that we shouldn't just stand idly by while people are victimized, but the problem with the nature of online shaming as it currently stands is that once the hounds are unleashed, they go for broke, no matter what the original crime was. For that reason, along with the other tricky points we've discussed, online vigilantism just makes me uncomfortable—even though I may cheer at some of the outcomes (WBC, etc).

In the end, I think that a lot of this comes down to a certain sense of moderation and reason. I don't think this behavior is going away anytime soon, so ideally, we'll develop some sort of normative standard around proportionality—i.e., don't ruin someone's life for doing something that any person could have done in a moment when his or her judgment was lacking. Compassion and reason are (kind of) built into the legal system; the same should apply to the internet. We no longer throw people into jail for life for stealing a loaf of bread; people's lives shouldn't be ruined because they posted something stupid on Facebook. We should be able to say "this is not okay" without resorting to catastrophic punishments.

However, until schadenfreude and moral panics stop being good for ratings/pageviews/upvotes/likes/retweet counts, I worry that won't be the case. Using other people's humiliation and/or "just deserts" as entertainment (dramatic or humorous) is as timeless as shaming itself—moderation and reason aren't fun to watch. We are not dealing just with individual instances of specific behavior, we are dealing with a system in which we are all complicit, and that is a much, much harder thing to change.

* Thanks to Mary L. Gray for this framing.

Previously: The Meme Election: Clicktivism, The Buzzfeed Effect And Corporate Meme-Jacking

In the summer of 2012, Whitney Phillips received her PhD in English (Folklore/Digital Culture emphasis) from the University of Oregon. Her dissertation, titled "This is Why We Can't Have Nice Things: The Origins, Evolution and Cultural Embeddedness of Online Trolling," pulls from cultural, media and internet studies, and approaches the subject of trolling ethnographically. She writes about internet/culture here and here and is currently a lecturer at NYU.

Kate Miltner is the Research Assistant for the Social Media Collective at Microsoft Research New England. She received her MSc from the London School of Economics after writing her dissertation on LOLCats, something for which she has been mocked mercilessly in the comments sections of Gawker, The Huffington Post, Mashable, Time Magazine, and the Independent. She has also written about internet culture for The Guardian and The Atlantic. You can find out more about her at

The authors would like to thank Chris Menning at ModPrimate and the Social Media Collective at Microsoft Research for their input and support.

16 Comments

Tully Mills (#6,486)

This is terrific.

Spencer Lund (#2,331)

@Tully Mills Agreed

petejayhawk (#1,249)

It's interesting you mention the WBC, particularly their mind-bogglingly inhumane decision to picket Sandy Hook Elementary. This is one of those cases where vigilantism strikes me as the best, and perhaps the only, means of pushback.

I take issue with this in an otherwise wonderful piece. None of the methods of vigilantism in this case are remotely new – Anonymous/4chan/whatever have knocked WBC sites offline in the past. The personal information released was all more or less publicly available. But the biggest problem with this particular instance of targeting WBC is that innocent people have been implicated in Anonymous' dossier, one of whom is a (very casual, distant) acquaintance of mine. The general attitude around the internet is that THIS ONE CASE is when all of Anonymous' legal/semilegal/illegal tactics are OK, purely because it's Westboro, the one organization that pretty much every person on the planet is united in loathing. And if some innocent eggs are cracked along the way, suddenly that's OK.

I hate to be the guy who links to his own ramblings here, but as a former Kansan, the assholes of WBC are a subject somewhat near (certainly NOT dear) to my heart:

@petejayhawk Ah, see, I did not know that innocent people had been implicated — which then kicks my response up to the section about accuracy, and how important it is to get these sorts of things right (if one chooses to do them at all). I certainly do not pretend that any of this stuff is clear cut; I certainly do not pretend that I'm not sometimes swept up in the emotion of it all too, which in the end is what fuels so much of these sorts of behaviors, for better or worse (usually worse). I'm not 100% sure how I feel about any of it, honestly. But — such is the world, I suppose. Thanks for the comment, it gave me something to think about…

deepomega (#1,720)

One, this is great. But two, I have a hard time not approaching this from a pragmatic/behavioral standpoint. For instance – if there is "good" free speech and "bad" free speech, fine, but what is the most effective way to promote the former and eliminate the latter? It might not be a law! It might be creating a culture where the bad free speech is not given root, which may mean allowing it to poison the community it's in. Etc.

Seems pertinent to add this, which is happening right now:

I was peripherally involved in the creation of the New York mag piece & would like to note the irony of starting out a piece about the problem of online rudeness by calling someone else stupid.

melis (#1,854)

@Ben Mathis-Lilley@twitter I'm trying to find a generous reading of your comment. Are you really suggesting that critiquing the argument of a published article is equivalent to posting creepshots or sending rape threats to female writers?

No, I'm not suggesting that — sexist/violent threats against women are worse than smart-assed comments about a writer. But the New York piece does not condone or ignore the continuing existence of horrible aggressive behavior online; the Violentacrez/Gawker story is mentioned early on as a case in point of the backlash against open awfulness, as is the Karen Klein saga. The New York piece is, quite obviously, about a change in manners that's taken hold in online attitudes among people who aren't seeking to hide their identities. You might say it's about the fact that a website run by Choire and Alex in 2012 is printing pieces by PhDs about how to most ethically and efficiently scrub online discourse of anonymity-enabled maliciousness, vis a vis what their website might have covered in, say, 2005. Does such a piece suggest that no one online threatens women in terrible ways? It EXPLICITLY states otherwise. It talks about efforts to shame people who do so. (Or their own alleged self-shaming, in Perez Hilton's case.) So why declare that it ignores the problem? I don't know — I actually don't know what the point of saying that is.

I would also bet that as a writer on the Internet with a vaguely Jewish-sounding surname, Nathan Heller is not unfamiliar with disturbing hate mail. But on that I'm really just guessing.

Belinda@twitter (#240,260)

Great article-very interesting and thought-provoking.

r&rkd (#1,719)

Putting aside hopes of somehow changing people to make them want to be nicer, and instead assuming that we'll have to rely on deterrence to a large extent, I think a major barrier to modifying the negative behavior of many internet users is that the odds of retribution are slim, and humans are more motivated by the likelihood of punishment, less by its severity. A perpetrator in the Megan Meier case had her life ruined by a criminal prosecution, and that could conceivably happen to almost any troll, but of course it won't happen to the vast majority of them because the resources aren't there.

Perhaps it's also worthwhile to note that the sort of "name and shame" behavior discussed is far from limited to the "high profile" cases listed within. In many corners of the internet, especially in those populated by individuals who exist within a political/social justice sphere, "name and shame" has become a rather commonplace and normalized behavior. The case of Lindsey Stone, in particular, is an example of the absurdity that occurs when we as a culture value outrage over understanding and insist that punishment is more important than rehabilitation. The entire process with Stone was beyond ridiculous, and the message it sends is clear: denizens of the internet are perfect and flawless, while those who are targeted are both guilty without benefit of rational exploration and "asking for it".

I, for one, am more concerned with how "internet vigilantism" is based on a shallow and unexamined sense of moral superiority and subjectively determined "social norms" which may or may not actually address real concerns. For every high-profile case, we have multitudes of smaller-scale examples occurring constantly. I'm no head doc, but the interaction between engaging in online "name and shame" and the resulting confirmation of (perceived) moral superiority is distressing.

In a large way, we've lost the social concept of "I'd rather see ten guilty men set free than the imprisonment of a single innocent". Those who engage in this sort of behavior generally have little to no concern for innocents who may be wrongfully caught in the net, because it is, at the core, an "ends justify the means" approach to justice.

stuffisthings (#1,352)

Well I can see one thing we've totally given up on policing: the correct usage of the word "troll."

Two hundred years ago, if you were a bully, or made cruel comments, there was no doubt who was responsible. I don't think we have any right to anonymously hurt others. Part of our problem is that the rise of anonymity has encouraged bad behaviour. I'm all for everyone taking responsibility for what they say and do.
