Everyday Victim Blaming

challenging institutional disbelief around domestic & sexual violence and abuse

Concerns About the Report Abuse Petition

We are hearing concerns about the petition for a 'Report Abuse' button on Twitter.

We share some of these concerns.  So, what's the solution?

Marginalised groups being 'rude' or 'offensive' to oppressors isn't abuse.  It is anger.  It is understandable, and powerful - it should not be silenced. Being 'nice' or 'polite' does not mean that you are likely to be listened to.  The issue of oppression will not be solved by 'asking nicely'.

MRAs already use the 'report spam' button to try to silence women - note how many suspensions @misogyny_online (we can't link to them; they are currently suspended!) has had already.

Let's come up with some practical ideas that will work towards a solution - either tweet us @EVB_Now and we'll Storify the responses, or add a comment here.

We'll then write a post with all the suggestions and send it to the powerful men (as they are mainly men, we've checked!) at the head of Twitter.



56 thoughts on “Concerns About the Report Abuse Petition”

  • Cel West says:

    Yes – very concerned that an angry woman’s comment to a prominent feminist will be seen as abuse, whereas the continued, purposeful, personal misgendering or outing of trans people will not be.

    • Admin says:

      We agree – thank you for adding the first comment!

    • staringatclouds says:

      The reports will need to be assessed by people, not by some kind of expert system; a rule of “x reports gets an account suspended/tweet removed” is not sufficient. We need moderators.

      These people will need appropriate training, and will need to be sensitive to what constitutes a heated argument and what constitutes abuse and harassment.

      They will need to be able to assess the timelines of both the reporter and the reported account, taking into account any location data & IPs to allow for a false accusation made by an individual with multiple accounts.

      In the case of false accusations, consideration will need to be given for suspending the user making the report.

      And they will need to be sensitive to LGBT, race, religion and ability issues as well as gender.

      In extreme cases, such as threats of violence or rape, or persistent stalking and harassment, Twitter, not the victim, should notify the relevant authorities and take steps to de-escalate the situation, such as suspending an abuser’s account or blocking the abuser and victim from each other.

      Under no circumstances should the victim’s personal details be released. The abuser will have left sufficient evidence in the content of their tweets for the relevant authorities to decide whether an arrest and prosecution is warranted; if someone is making death or rape threats, it is irrelevant whom they are made to, because the fact that they are published online is all the evidence required. If the abuser tries ‘the right to confront my accuser’ to obtain the victim’s identity, then the accuser would be Twitter and no one else.

      Needless to say these people should be well paid for their efforts.

      So now it’s up to Twitter to defend its vulnerable users from its predatory users. All yours, Twitter.

      Apologies for any spelling mistakes & long-windedness; I’m on my phone and I’ve been thinking about this a bit too long.

  • Catherine says:

    This is a great idea. I also have major concerns about the current proposals; without a doubt, a ‘report abuse’ button would be used by those who have personal issues with a tweeter, or by MRAs. Not to mention those who seek to shut women up by any means.

    Essentially, whilst there are massive problems with Twitter’s reporting processes, the problem lies with the police. Rape threats, abuse and online harassment all need to be taken seriously by them. If you have screencaps of the abuse you’ve received, then the police should pursue it.
    Twitter need to make a commitment to working with the police to deal with these men.
    I don’t blame Twitter too much, although I do feel that their presence & responses on their own social media site are extremely poor.
    No, the police need to take action on this. I know so many women who have received horrific abuse: not just because they are women, but because they are WoC or have a disability. They’ve reported it to the police and they are told to take more responsibility for protecting themselves – victim blaming much?!
    I’m really pleased that so many high profile people are expressing their disgust at the abuse Caroline has received but I suspect that this is because, to many of them, Caroline is ‘one of them’ and not an angry black woman challenging racism.
    Anyway, sorry for the rant and thanks for the opportunity to explore some solutions.
    @planetcath

  • I am in complete support of this button. It needs to be accompanied by consultation, training and implementation by Twitter staff so that it isn’t abused by MRAs, who, let’s be honest, are the ones responsible for all the abuse.

    We can’t just sit back and do nothing because a few people might abuse the report button. It will be the same people who abuse the spam button [which for the longest time I thought functioned as a report button for abuse]. We can’t allow more women to be threatened with violence and abused. It happens far too often on Twitter and it needs to stop.

    • Cel West says:

      Often radical feminists call trans women “abusive MRAs”. We need to be careful here.

      • Anyone who sends rape or death threats is an abuser who needs to stop. I don’t care who you are or why you think you have the right to send such messages; it is a crime and should be treated as such.

        This goes as much for the het white women on twitter who like to insult women journalists for the crime of having a public platform as it does for anyone else.

        No one has the right to threaten another person with physical or sexual violence.

        • Cel West says:

          I agree. However, jokes by other women are not death threats. These measures, harnessed to privilege, will silence and harm women.

          • I’m not sure I agree about jokes. Humour has been used as an excuse for racist behaviour. I don’t think it is possible to make jokes about rape or death threats that aren’t abusive or triggering to other women. I don’t think this needs to be dealt with by twitter. I would like to think that jokes about rape or death threats were just considered offensive and no one did it.

        • Cel West says:

          Also insults aren’t threats, and an insult is in the eye of the beholder – I’ve had certain feminists call my mentioning their previous work “insulting”.

          Censorship by patriarchy will harm women and promote VAW.

    • I couldn’t put it better myself: an abuse button with an appropriate number of well-trained staff.

  • umlolidunno says:

    It’s something of a problem to rest the definition of what is and is not ‘abuse’ on the identity of the recipient; it is still abusive when I tell a sexist man to f**k off and call him a s**tbag, it’s just provoked and justifiable. Abuse is abuse in the same way that a crime is a crime.

    From what you’ve written above, it seems that this campaign is not in fact for an ‘abuse’ button, but an ‘oppressive abuse’ button (analogous to the difference between ‘crime’ and ‘hate crime’). The latter in both cases is a much harder thing to target.

    Implementing a ‘report abuse’ button that works on the basis of user identity strikes me as almost unfeasible; not least because oppressed people can throw this abuse horizontally (like the women joining in misogynistic rape threats to Criado-Perez) thus bypassing this definition of abuse entirely. Twitter would also have to somehow create a hierarchy of user identities (and implement some way of recording them in the first place, which I doubt people would appreciate), and of course new accounts can simply lie upon registration.

    A (imo) more sensible “hate crime” approach would be, just for example, a filter for racist, misogynist etc language as one of the necessary criteria for suspending an account. There are many such ideas we can toss around for these criteria, such as: how many reports an account receives after 1st interactions with other users, and whether those 1st interactions contain blacklisted language. These can be weighted accordingly to prevent misuse of the button. Another possibility is to concentrate on streamlining the existing process of reporting abuse, by having an ‘abuse’ button alongside ‘favourite’ and ‘retweet’ on each tweet, or by getting Twitter to collate all tweets sent from a reported account to the reporter’s account and screening them that way instead of requiring the user to fish out the URLs of each example.
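
    To make the weighting idea concrete, here is a very rough sketch in Python. Everything in it – the field names, the weights, the threshold and the placeholder blacklist – is invented purely for illustration; it is not anything Twitter actually runs.

      BLACKLIST = {"exampleslur", "examplethreat"}   # placeholder terms only

      def contains_blacklisted(text):
          return bool(set(text.lower().split()) & BLACKLIST)

      def report_weight(report):
          # Each report counts once; a report arising from a 1st interaction
          # counts for more, and more again if that 1st tweet used
          # blacklisted language.
          weight = 1.0
          if report["is_first_interaction"]:
              weight += 1.0
              if contains_blacklisted(report["tweet_text"]):
                  weight += 2.0
          return weight

      def should_escalate(reports, threshold=10.0):
          # Suspension is never triggered by a raw report count alone.
          return sum(report_weight(r) for r in reports) >= threshold

    The point of weighting rather than counting is that a coordinated flood of bad-faith reports scores low, while a handful of reports about blacklisted 1st-interaction tweets scores high.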

    There are tons of ideas I’m sure that could be put forward, but we really need to establish whether it’s the conduct itself you’re objecting to, or just who it is aimed at.

  • Jessica says:

    I think it is a good idea, but, just as in my petition to Facebook, a group of global advocates should be employed to ultimately oversee all reports. This means only real cases of cyberbullying and graphic violent images or videos posted for humorous pleasure are removed, and not things that are posted to voice real-world issues, which provide education and a voice to those people silenced.

    • umlolidunno says:

      “A group of global advocates should be employed to ultimately oversee all reports.”

      This is just impossible. Consider the amount of abuse that just one woman has received. In order to implement this, Twitter would have to use MTurk-style ‘digital sweatshops’ to vet the hundreds upon hundreds of cases (including child sex abuse images) the way that Facebook does. It’s clunky, inefficient, and pretty unethical. I don’t think demanding enough human employees to manually oversee every report of abuse is a good solution, or a likely outcome.

  • Arvid Skimmeland says:

    Between the radical feminists and the anti-feminists there seems to be a verbal war, a war no one will win and that few regular people think too much about. I would say let them carry on.

    But the serious problem with abuse and harassment, sexual and violent, is that it hinders citizens from using their freedom of speech and expression; if a woman is threatened into silence by one or more people, then we have a serious democratic problem.

    Radical feminist or glamour model, you all have the right and freedom to express yourself and be free from sexual and violent threats. And that is why I think an abuse button will be necessary even if, ironically, it can be abused.

  • Anna Robertson says:

    This morning I was thoroughly in favour of the ‘report abuse’ button. However, on reflection I can see that’s a reflection of my white, cis status. I’m not subject to a lot of abuse, so it’s important for me to listen to the experiences of those who are, day in, day out. I am thinking of the way that the report spam button is abused already; how similar functions are abused on YouTube (I forget her name, but the woman who did the series on tropes in video games had a video taken down after an MRA campaign); and just Facebook in general (urgh). I also don’t have a huge amount of faith that Twitter would moderate fairly; coming back to FB, I reported a ‘funny’ video that portrayed VAW after their policy change and they didn’t take it down.

    I think the following would be required for a report abuse button to work:
    - clear guidelines about what constitutes abuse. I know this is what scares some women. Who makes these calls? Will transphobia be overlooked, yet again? But I’m not sure how to prevent abuse of the function without it being clear that ‘hey dude, don’t be racist’/‘I’m a vocal feminist/anythingist’ isn’t grounds for removal from Twitter.
    - staff devoted to moderating this. They’d need to be specially trained and DIVERSE. They’d need to understand the nature of inequalities. At the very least they’d need to come from a basic position that racism, misogyny, ableism, homo & transphobia, and other oppressions exist. That they are a serious problem and need to be fought. This is counter to what the kyriarchy tells us, so it’s a pretty tall order.

    The problem is that I don’t have the faith in Twitter that they will implement these things with the understanding needed. And I know that if we get something imperfect we will continue to fight, but what if we get the bare minimum: a report abuse button and nothing more, no change in policy, no devoted, specially trained staff? Will the high profile campaign run out of steam? Will anything change for the victims?

    I have a wider sense of unease about the lack of action on the rape culture that saturates society, and the grudging, tokenistic gestures being made to counter it, but I don’t need to explain them to you. I’m sure you’re more than aware.

    I’m so sorry I haven’t been able to suggest anything constructive. I’d find it frustrating to receive a message that was so pessimistic. I’ve found today very, very grinding and dispiriting for many reasons and I think that’s partly why. Thank you for giving people a forum to discuss their concerns and I hope that we are able to come up with something together. And thank you for all the work that you do in general.

    • The lack of adequate training is definitely a problem with the FB campaign. There definitely needs to be a change in education, and not just for the staff of Twitter.

  • Frothy Dragon says:

    Do people think Twitter would be incapable of working out what constitutes abuse and what doesn’t? Do they assume it’s four year olds running the site?

    From what I remember, Kim asked for a clarification on what constituted abuse in the terms and conditions. This can be applied.

    Yes, there are going to be issues with the system. But is the status quo really preferable? I’d rather know that if I was being inundated with rape threats again, I could report those tweets easily. We currently have a sub-par reporting system on Twitter, and that needs to change.

    • Admin says:

      Starting this post was in no way intended to detract from Kim’s efforts – the petition has been widely shared and covered by the national media.
      We wanted to counter the issue of inertia – if we don’t think an abuse button would work, what are the alternatives? Can we come up with practical suggestions to help make it work?
      Or, do we shrug our shoulders and say ‘nothing will change’?
      Apologies if it felt like a criticism, that was not our intention at all. Our intention was to find practical suggestions that will minimise the risk of women being penalised for being vocal about sexism, feminism, domestic and sexual violence, and misogyny.

  • Gregor Pattinson says:

    It is imperative Twitter protects its users. Not doing so is a complete abdication of responsibility. I’ve read apologists arguing that many of the comments towards Caroline Criado-Perez were sexist but not direct violent threats. So wrong. The violence is in the words and how they are directed. The victim has, in a very real sense, every right to feel they’ve been assaulted.

    Targeted violence, whether verbal or physical, is not generally tolerated in our society. If a shopper was harangued and threatened by a group of other shoppers in a major supermarket, the company concerned would be compelled to act. It’s likely they would do so with the support of the Police. Twitter is no different: it is a commercial enterprise, not a shooting gallery for pathetic individuals to victimise innocent people. As such, Twitter has an obligation to its users.

    I’d like to see some advertisers get involved as well; sad though it is, this would be very effective. Money really is a universal language in multi-national commercial circles.

    It can’t be rocket science to provide a mechanism that protects users from abuse. We’re dealing with some of the most tech savvy people in the world for crying out loud.

  • Cel West says:

    Already above we see the old “trans women are men’s rights activists!” trope being trotted out.

    This measure will be a license for transphobia and another open season on trans women, who are increasingly being sued by cis feminists for having dissenting opinions.

    This will in the end serve hate and will isolate and kill.

    I hope it helps many women too.

    But the dual nature of this disgusts me.

    • I’ve read through the comments, but I’m not sure who you are referring to. Is it my comment about MRAs? Because I was thinking of the member of Fathers4Justice who threatened to shoot me in the face, the man who threatened to have me gangraped, and A Voice for Men, who write hateful things about women. I used MRA to mean misogynist rights activists who believe that women need to be silenced and punished for any attempt to speak out.

      • Cel West says:

        It was in response to your reply to my comment, where you took the side of radical feminists calling trans women MRAs.

        Trans women are not MRAs and criticism or a bad taste joke – not a rape or death threat joke, but a bad taste joke – is not a death threat.

        I loathe MRAs and I’m sorry you’re being abused by them – one is stalking my group of friends online. But using their existence to oppress others isn’t on.

        And those who claim trans women are all MRAs will be taken at their word by any “twitter police”. It makes me deeply sad.

        • Admin says:

          Hi both,

          Could you take this individual discussion offline? My aim was to send the whole thread to Twitter’s Management Team, so it needs to contain practical suggestions.

          Thanks for your support.

          • Cel West says:

            One practical suggestion is that twitter police differentiate between trans women and MRAs.

          • Admin says:

            Also I’m trying to manage this by moderating with my iPhone as my laptop has died, and as I’m not known for my graceful thumbs, it’s not going well!
            :)

          • Cel West says:

            Another suggestion is that discussions, even angry ones, between women around feminism – which is a hugely broad area – should not be moderated.

            I know these are impossible. But, as you see, or perhaps not, nothing else will stop this being used to silence women.

          • Admin says:

            Some discussions look particularly frightening, especially to new users, those new to feminism and all of us who are still learning.
            We want to keep this a space where women aren’t frightened, which means it probably needs to look different to Twitter!

        • I absolutely did no such thing. You are reading into my post something which is not there.

          I am specifically talking about rape and death threats which are not acceptable. Ever. Not as a joke. Not as satire. I would report anyone who did so regardless of who it is aimed at because NO ONE deserves to be sent rape or death threats, even ones dressed up as “jokes”.

          • Cel West says:

            There’s a long history of radfem abuse that you don’t know of, and apparently don’t care about.

            The trope is real and your cissexism is real whether you care about trans experiences or not.

            No-one deserves to be branded a man and an MRA, forcibly outed, sued, kicked out of their school and job. No-one. And all of this will still be fine under the measures that are proposed.

            The incident in this case is a foolish woman who photoshopped some insect killer to say “radfem protector”, after being abused online for a sustained period. She was a fool and she shouldn’t have done it, but it wasn’t a death threat.

          • Admin says:

            I asked nicely… Comments are now set to require approval by a moderator, and approvals will be done tomorrow.

          • Sister Trinity says:

            This whole exchange just highlights that whatever measures are applied need to focus on factors that make a user’s behavior identifiable as abuse, irrespective of their (supposed; people can pretend any number of things online) identity.

  • As umlolidunno mentions above, employing manual overseers is both infeasible and likely unethical. It may be possible to employ people for an appeals process, but not for the initial evaluation of abuse. And, as she says, a hierarchy of who is allowed to abuse who is fraught with similar problems. I’d like to expand on her comment to clarify how it might work.

    If this is going to be practical, we need to examine how abuse happens on Twitter and how we can prevent false positives. The approach has to be well defined so that it can be practically implemented, and so that it addresses people’s concerns about unnecessarily suppressing people’s voices.

    A simple initial proposal is to restrict use of this ‘report abuse’ functionality to tweets that the user is likely to see and that the user considers abusive. This means @-replies or mentions, perhaps extended to a hashtag that the user has contributed to. If someone tweets at you with abusive language, you should be able to report that user for abuse. Reports can be weighted according to how recent the first interaction with that person was. In order for someone to abuse that button, they’d need to entice you somehow to tweet at them *first* with abusive language. This seems implausible to me.
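
    As a minimal sketch of that restriction (the field names and weights are invented for illustration only):

      from datetime import datetime, timedelta

      def is_reportable(tweet, reporter_handle):
          # Only tweets actually directed at the reporter can be reported.
          return ("@" + reporter_handle.lower()) in tweet["text"].lower()

      def report_weight(first_interaction_at, now=None):
          # A report weighs more when the sender only started tweeting at
          # the reporter very recently (the drive-by pattern).
          now = now or datetime.utcnow()
          age = now - first_interaction_at
          if age < timedelta(days=1):
              return 2.0
          if age < timedelta(days=30):
              return 1.0
          return 0.5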

    This would of course mean that the angry comments Cel West mentioned above would also fall under this notion of abuse. I see no problem with this. What we should be working towards is freedom from abuse, not the freedom *to* abuse.

    • Cel West says:

      Yes, and white people will think anything a person of colour says is “angry”, and get them banned.

      https://twitter.com/Hijabinist has much more on how any polite criticism on race is taken as abuse.

      It will silence anyone saying anything critical, and the further they are from being a rich white person, the more it will silence them.

      • How so? Under this proposal you can say whatever you want, as long as you don’t say it *at* someone. People have a right to set their own boundaries.

        • Cel West says:

          You’re calling for people not to be able to politely @ someone and call them out on racism.

          Get out. Go away. Leave.

          • This is why additional criteria such as blacklisted terms would come in. Of course I’m not saying a person politely telling someone about their racism should be banned. These are just initial thoughts on how this problem might be solved.

            If you recognise that what happened to @CCriadoPerez is a problem and this is not the solution, then what do you think might be? That’s what this forum is about, I thought.

            If you don’t recognise that what happened to @CCriadoPerez is a problem, or you think it’s just collateral damage, then I don’t know what to say to you.

          • Cel West says:

            I think we need to campaign massively against misogyny, not just force it underground.

            I feel for her and I desperately want it to stop. Sustained abuse has forced me away from Twitter more than once. But we need to be careful.

            I think there need to be massive safeguards on this. Making it hard to ban someone who doesn’t use blacklisted terms will be a start.

            And a lot of feminists will campaign to have “cis” blacklisted. We need to be very careful about what we blacklist.

          • This isn’t trying to force misogyny underground; it’s trying to think of ways to give its targets a space free of it. It also doesn’t preclude any other campaign against misogyny.

      • umlolidunno says:

        The idea is to come up with constructive suggestions about how to troubleshoot this kind of problem: things like a learning algorithm that distinguishes what constitutes abuse from reports confirmed as genuine, or conditioning the suspension of accounts on other criteria like blacklisted terms or reports of abuse on 1st interactions.

        You’ve already made your recommendation that we should simply do nothing. If you’re not interested in any other contribution, this might not be the conversation for you.

        • staringatclouds says:

          If you have an automated blacklist to detect threats and insults, then people will simply use deliberate misspellings, other languages, special symbols in place of letters and, at the end of the day, completely innocuous words to replace the offending ones.

          They will still be threats and insults; a learning system would eventually blacklist every word in every language.

          I’m a moderator on a largeish forum, I’ve seen how ingenious people can get when they want to.

          As infeasible and no doubt expensive as it may be, somewhere in the loop you will need a person with training making an assessment of what is going on, automatic systems will not suffice.
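
          To illustrate how shallow that arms race is, here is a toy sketch in Python – the substitution table and the ‘blacklisted’ word are made up. A naive normaliser catches trivial leetspeak and almost nothing else:

            LEET = str.maketrans({"0": "o", "1": "i", "3": "e",
                                  "4": "a", "5": "s", "@": "a", "$": "s"})
            BLACKLIST = {"threatword"}   # placeholder term

            def is_flagged(text):
                # Undo the most common symbol-for-letter substitutions.
                normalised = text.lower().translate(LEET)
                return any(word in BLACKLIST for word in normalised.split())

            print(is_flagged("THR34TWORD"))              # True: simple leetspeak is caught
            print(is_flagged("thr eatword"))             # False: one space defeats it
            print(is_flagged("completely new codeword")) # False: coded language passes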

  • Admin says:

    Can we use this space to comment on the questions in the post, and find practical suggestions?

    I’d rather not have to spend my Saturday evening moderating :)

    This space is to offer practical suggestions to Twitter, let’s make it full of them so they can’t ignore us!

  • Catherine says:

    For me, it comes back again to education and training. I don’t assume that Twitter is run by four year olds, but I do think it’s probably run by men who don’t recognise (or care about) misogynist abuse.
    I think this is a real issue. If an online social networking site or micro-blogging site isn’t prepared to listen to the concerns of its users, then what do we do? Boycott? I don’t think that will do any good at all. In fact, it will just give those women who experience huge amounts of abuse a day off.
    So, my suggestions are these;
    1) put pressure on twitter to provide their policy & procedures on what constitutes abuse
    2) use a campaigning group such as EVAW coalition to support twitter to improve their understanding of violence against women
    3) put pressure on the police to take these threats seriously. If a woman is being harassed and targeted online then there is NO evidence to suggest that this won’t crossover into real life.
    4) support and believe women when they say they are being abused and threatened. Women rarely make these things up and we need to come from a standpoint of ‘I believe you’
    5) put pressure on all of these organisations to ensure that no woman is taken more seriously than another. The amount of racism, homophobia etc experienced online is horrific. If a black woman says she is being threatened then TAKE IT SERIOUSLY

    That’s it for now I think.
    @planetcath

  • umlolidunno says:

    Max Wall 2000th and I had a little chat away from this page. We thought it was a good idea to take a step back and really define the problem that we’re calling upon Twitter to handle before suggesting how it could be done.

    The terrible abuse @CCriadoPerez received differed from what others (many of us) have experienced in terms of degree, rather than kind. That is to say, women I know personally have received grisly rape threats in response to their feminist views, but they didn’t get a few hundred such in the space of a day. So the problem is in two parts – 1)the type of things being said, and 2)the number of people saying them.

    Short of a Twitter-wide filter on language, there are few new options to really be had for the first part. Users can already block others who harass them, and perhaps the issue here is streamlining the existing reporting process as I mentioned above, by either adding a ‘report abuse’ button to each tweet or by having Twitter collate tweets from a reported account to the reporter’s account – both of these options are considerably easier than having a reporter collect each URL of abusive tweets. We should spitball more ideas on this. The block functionality itself could do with the added modification of applying to Twitter searches, which would also deal with seeing blocked people on hashtags.

    The other problem is the coordinated/snowballed trolling that happened the other day. Perhaps the way to deal with this is to have some algorithm monitoring a spike in the number of incoming mentions (relative to what is average for that account – of course this is flexible with an account’s growing or waning activity), such that any reports made during an unusually large spike would be funnelled to a real-life person to review. We could spitball ideas for what the next step would be: collecting incoming messages on the recipient’s behalf, for example. The recipient can then choose to take those to the police.
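
    Purely as a sketch of what that monitoring might look like (the window and threshold below are invented, not recommendations):

      from collections import deque

      class MentionSpikeMonitor:
          # Flags an hour in which mentions run far above the account's own
          # rolling baseline, so the check adapts to growing or waning activity.
          def __init__(self, window_hours=24, spike_factor=10.0):
              self.history = deque(maxlen=window_hours)
              self.spike_factor = spike_factor

          def observe_hour(self, mention_count):
              baseline = sum(self.history) / len(self.history) if self.history else 0.0
              self.history.append(mention_count)
              return baseline > 0 and mention_count > baseline * self.spike_factor

      monitor = MentionSpikeMonitor()
      for count in [5, 7, 6, 4, 600]:   # a pile-on arrives in the final hour
          if monitor.observe_hour(count):
              print("Spike: route this hour's reports to a human reviewer")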

  • Admin says:

    This comment was sent to us by email, thanks to @JaneFae for her contribution:

    Twitter actually have a relatively good process for reporting one-on-one abuse, which can be used. The problems with Twitter are:

    – what Twitter mods perceive as abuse is not necessarily what you or I would perceive as the same

    – they do not recognise generalised misogyny as abuse… or rather, it’s not abusive enough to do anything about, because Twitter is US-based and will deploy the same argument as FB did in response to the #FBrape campaign. Basically: some people want FBrape images banned, some don’t, because “free speech”. This is a difficult issue: look at us, we’re the good guys, stuck in the middle.

    The only real solution is an enforcement of existing laws – and that got ruled out by the DPP about 6 weeks ago.

  • Admin says:

    This piece was circulating on Twitter – it’s by @BryceElder. It’s probably the best suggestion we’ve seen so far.

    Dear @twitter. Here’s my proposal to counter trolling that doesn’t involve petitions, newspaper columns and generalised outrage.

    Problems with a “report abuse” button:
    * it’s after the event. The offence has already happened.
    * it requires human involvement to screen for spurious reports. This slows down the process as well as costing lots of money. And, last I checked, Twitter is neither profitable nor a charity.

    A better way would be to automate the troll catching. This would be relatively simple.

    The average troll:
    * has created the account within the last month or so.
    * has fewer than 50 followers.
    * sends a disproportionate number of messages to people with blue ticks.
    * uses a few trigger words repeatedly in their messages. You know which ones.

    Using these criteria, it’d be simple to write an algorithm that screened and flagged suspicious behaviour without any need for human censorship.

    Users could then be offered a “safe Twitter” option, which automatically muted any user who’d been flagged as a potential troll. An opt-in filter, basically.

    There would be false positives, of course, as well as stuff the algorithm wouldn’t catch. But so what? No one’s account has been suspended. No one’s freedom of expression has been infringed. The only effect would be that potentially trollish messages may not reach their target. Would anyone troll in those circumstances? Fewer would, certainly.

    Of course, it’s questionable whether many people would opt in to such a filter. It’s human nature to want to hear /everything/ being said about you, no matter how shitty. But human nature really isn’t Twitter’s problem to solve. It’s just a web site.
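
    To make the proposal concrete, here is a rough sketch of how such a flag-and-mute filter might look. All the field names, thresholds and placeholder words below are our invention for illustration; this is not anything Twitter actually runs.

      from datetime import datetime, timedelta

      TRIGGER_WORDS = {"exampletrigger"}   # placeholder list

      def looks_trollish(account, recent_tweets, now=None):
          now = now or datetime.utcnow()
          score = 0
          if now - account["created_at"] < timedelta(days=30):
              score += 1   # account created within the last month or so
          if account["followers"] < 50:
              score += 1   # fewer than 50 followers
          at_verified = [t for t in recent_tweets if t["mentions_verified_user"]]
          if recent_tweets and len(at_verified) / len(recent_tweets) > 0.5:
              score += 1   # a disproportionate number of messages to blue ticks
          if any(w in t["text"].lower() for t in recent_tweets for w in TRIGGER_WORDS):
              score += 1   # repeated use of trigger words
          return score >= 3

      def safe_timeline(incoming, account_of, tweets_of):
          # The opt-in "safe Twitter" view: flagged senders are muted, never
          # suspended, so no account is touched and no speech is removed.
          return [t for t in incoming
                  if not looks_trollish(account_of[t["sender"]], tweets_of[t["sender"]])]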

    • umlolidunno says:

      I really like this approach, but the criteria themselves certainly need hashing out:
      * has created the account within the last month or so.
      * has fewer than 50 followers.
      * sends a disproportionate number of messages to people with blue ticks.

      None of the above were true of any of the trolls I was engaging with during CCriadoPerez’s pile-on. These criteria are more characteristic of spam accounts than troll accounts. Certainly I think something approximating the above points is already a factor in spam reporting. The only qualitatively different suggestion is the language blacklist, which is probably worth thinking about further, since it keeps cropping up.

  • Natslayer says:

    I advocate that accounts can only be set up with verified email addresses, i.e. registered against your home address, and with a ticked T&C agreement not to engage in abuse, on the understanding that illegal communication will be passed to the police.

    • That could cause some class-connected access issues, as well as issues for those who use pseudonymity to protect themselves. There are a lot of activists on Twitter who would be hit by one or both.

  • Alison says:

    I am conflicted. Initially I thought it a great idea, then changed my mind because of concerns about it being abused by the MRAs and their supporters. However, if anyone found abusing the function were automatically suspended, it could work. This should also be done in conjunction with an account prepared to call them out too, such as @misogyny_online or @NoMoreAbuse3 et al. People need to be made aware of the extent of online misogynistic abuse, as @EverydaySexism has raised awareness of offline harassment and sexism, and of how it does not just affect those with huge platforms. That way, if a woman wants back-up & doesn’t have many followers, there should be a group of us willing to expose, confront and challenge the misogynists & support her.

  • TNT666 says:

    There’s a world of difference between perceived “offense” and threats of rape and death. Yes, Facebook is successfully lobbied by MRAs and shuts down women’s rights sites while leaving rape threats up… That’s not the “button’s” fault; it’s the humans behind the button managing the deletions.
    We need better. Personal threats of violence have no business in social media. That’s what the button should be used for – not just perceived offensiveness. Any person can feel offended by any opinion; they’re all up for grabs.

  • Ophelia says:

    On the one hand, I think Twitter users need a way of drawing attention to any abuse they receive, and for the sender of said abuse to be punished. However, I also have concerns about censorship potential (e.g. automated filtering and blocking of some accounts).

    What we seem to be aiming for is this:
    1) Clear definitions of what is / is not abuse
    2) A means of drawing attention to abuse
    3) A means of confirming that it is genuine abuse (not just an attempt to silence legitimate criticism)
    4) For confirmed abuse to be taken seriously.
    5) For severe consequences to be dealt to abusers.

    I politely suggest the following measures:
    1) Twitter has a clear set of definitions of abuse, including but perhaps not limited to threats of physical harm (direct or indirect), racism, misogyny, homophobia, transphobia, and ableism.
    2) These definitions should be laid out in no uncertain terms and new users must tick a box to agree to follow a Code of Conduct before they can make their first post.
    3) Public Tweets (i.e. not sent directly to another user) containing certain trigger words should automatically be flagged as “potentially abusive” using algorithms.
    4) Any public Tweet categorised as “potentially abusive”, which is subsequently reported by another user is then moderated. If it is not reported, no action will be taken.
    5) Any Tweet specifically directed at a user using “@” can be reported using a “Report Abuse” button. This button will save the Tweet in its entirety to a database, so that it can be moderated even if the original Tweet is deleted by the user.
    5a) If the “Report Abuse” button is used, and the abusive Tweet is found by the filter algorithm to contain any trigger word, the account from which it was sent will be temporarily suspended. (The user will be unable to send Tweets from the account, which will display an automated warning message, e.g. “*User*’s account has been suspended for *e.g. racist* abuse towards another user. This is in violation of our Conduct Policy (*link to T&Cs*)”, for, say, 24 hours.) This message will be visible to the suspended user, to all of their followers and to anyone else who can view their feed.
    5ai) If the Tweet contains trigger words that identify it as a threat of physical harm, then it will be saved, removed from view and sent to a designated supervisor for Further Action. (see 7c)
    5aii) If the reported Tweet does not contain any trigger words, it will be passed on to moderators for confirmation. The sender’s account will not be affected unless the Tweet is confirmed as abusive.
    6) Repeat offenders (i.e. any user who has tweeted 3 separate confirmed cases of abuse) will have their account deleted automatically.
    7) All human moderators will be specifically trained in recognising forms of racism, sexism, homophobia etc and what does and does not constitute legitimate debate.
    7a) Moderators will be responsible to a supervisor, who will be sufficiently skilled to advise on ambiguous cases.
    7b) The role of supervisor may be fulfilled by / advised by volunteer affiliates of relevant campaign groups (e.g. EVAW)
    7c) The supervisor will be responsible for ensuring Tweets which require Further Action (e.g. Police involvement) are handled appropriately.
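
    Purely to illustrate measures 5) to 5aii), here is a rough sketch of the core routing logic (all names are invented; this is obviously nothing like a real implementation):

      def handle_report(tweet, trigger_words, threat_words):
          # Route a reported Tweet according to measures 5) to 5aii) above.
          archive(tweet)   # 5) preserve the Tweet even if it is later deleted
          text = tweet["text"].lower()
          if any(w in text for w in threat_words):
              return "remove and escalate to supervisor"    # 5ai)
          if any(w in text for w in trigger_words):
              return "suspend sender for 24h with notice"   # 5a)
          return "queue for human moderator"                # 5aii)

      def archive(tweet):
          pass   # placeholder: save to a moderation database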

    What I have outlined above is doubtless an imperfect solution, but it aims to set clear standards of user behaviour and to prevent abuse of the Report function by only blocking Tweets that are confirmed to be abusive. As has already been mentioned, the manual moderation of Twitter may be too inefficient/costly to be practical, so my suggestions might be tweaked to increase automatic filtering and limit human moderation just to police intervention. However, as personal experience with Facebook has shown me, algorithms alone can prove too unreliable.

    It might also be worth having some kind of extra support for targets of multiple abusive Tweets, in which abuse could be recorded and documented. There could even be a Liaison Officer-type role involved – someone to act as a point of contact for targeted users, to update them on how their reports are being dealt with and to listen to feedback on how well Twitter is being moderated.

    Hope my long-winded reply helps in some way.


  • abbygirlinuk says:

    I think offline education needs to happen as well. As many posters have pointed out, it’s too big to handle only online. Offline education in schools, colleges, workplaces. People need to talk in real life, face to face, about what is right and wrong online, and why. It’s about people’s attitudes and preconceptions or misconceptions about situations and about people of all ethnicities, religions, gender identities, physical abilities, learning disabilities, mental health states, relationships, all of it. The first thing is that we are all human. We think, we feel, we love, we eat, drink, play, laugh, die. Those are the basis of all human existence. If we go back to that, and teach children, and each other, about what makes us even more similar beyond the basics, we can find common ground, rather than differences and animosities. Just my thoughts. Do with them what you will.

  • staringatclouds says:

    One concern I have is there seems to be a required field on the report form asking for your full name, along with a note stating that this may be shared with third parties including the person being reported.

    If this is true then I for one will never use the feature regardless of any abuse as my name is sufficiently unique to identify me, my family and where I live.

    Even if you don’t know the city I live in a search engine only yields a handful of results for my name, easily small enough for a determined abuser to go after in real life.

    I will give my name only if it is kept confidential between me, twitter and the police.