Hate speech / cyber-bullying. Avoiding bad PR

DISCLAIMER: I do not want to police diaspora. I understand if podmins want to run their private 4chan, and I understand why this is, in concept, a great idea.

So… I ran into another holocaust apologist today.

This raises the issue of how diaspora handles free speech and the reporting of users.

Supposedly it is up to podmins to “police” their userbase (Geraspora, for example, seems to delete/ban users who post fascist propaganda).

However, the current structure of diaspora seems (as far as I can understand) to enable, by design, cross-posting of content to other linked pods without any filters applied.

This creates several challenges:

  1. Some countries have legal ramifications for publishing certain material (be it copyright infringement, cyber-bullying, hate speech, etc.)

  2. This could very quickly turn into bad PR for diaspora.

  3. Witch-hunts, flame-fests and all the usual goodness (data-trash).

One can of course argue that the “ignore user” feature is enough. However, what if diaspora grows to a degree where this becomes exceedingly difficult? What if podmins get sucked into all sorts of legal trouble? What if some tabloid journalist decides to go on a crusade, nailing diaspora as a platform for terrorists/pedophiles/nazis, etc.?

What control mechanisms are in place? Are they transparent? Are they easy to use?

This is something which should be handled delicately and early, before such problems arise (and believe me, they will, when nobody expects them and/or is ready to handle them).

What do you guys think? Am I just paranoid?


Note: This discussion was imported from Loomio. Click here to view the original discussion.

Here are some potential ideas which might help:

1: A pod-related manifesto or TOS which can be modified by the podmin to hold legal and ideological disclaimers, and which is visible to users when they sign up and on request afterwards.

2: A “report user” feature (the grumpy button) which reports a post/user to the podmin.

3: User “reputation”, be it public or private, which can be transferred between pods. A tag/score system comes to mind.

4: A sharable ignore list for users.

5: A diaspora manifesto/disclaimer on the wiki which offers immediate insight for people running amok trying to get user XY banned.

Ideally, the feature should work in a way that:

A: a disgruntled user gets the sense that he is potentially heard,

B: the podmin has a chance to act, with enough information

C: the user being reported is notified (without showing him the reporter) so he knows what’s up and can take measures if everything was just a misunderstanding.

D: and all that in a way that the podmin cannot later be legally nailed down on the fact that he knew about it.

Quite a challenge… did I mention I am willing to code this if this discussion gets lively enough and turns into enough of a rough draft? (A rough sketch follows below.)

A few thoughts:

  1. We had an idea a while back about implementing a sort of drop-in “Terms of Service”, in which a podmin could update the TOS, and everyone would be notified of a new TOS to agree to the next time they refreshed the stream or something.

You could just have a boolean field to check whether or not a user has accepted the new TOS, and this field simply gets reset every time a new TOS comes up (see the sketch after this list).

  2. I think a system for reporting users is fundamentally important when it’s on community hosting. There could be some kind of function in which you flag a user, add a comment on why you’re flagging them, and a notification of some sort goes to the podmin.

  3. User reputation may be a bit of a sticky subject. Some users will say things that are unpopular, but the problem is, those scores can be manipulated simply by communicating with like-minded people. If you see a Neo-Nazi posting, sure, you could give him some negative feedback, but he could just as easily offset it with popular feedback from his Neo-Nazi friends. Besides which, we’re not Klout.

I agree that a public reputation system is kind of a no-no, since it creates competition and data-trash. However, it would help podmins to know how often a certain user has received negative attention, and when, in order to make an educated decision.

Legally, it might be the best idea to have a way to forward any posts in question (a version of share?) to a pod storage file where they could (or, at least, it could be said they could) be reviewed. How this is best done, UI-wise, might well determine how spammy and trashy it becomes. That is to say, there should be adequate hurdles for the “hate speech reporter” in place, so that reporting on a whim is kept to a minimum and is well documented (a pop-up with a list of questions to be answered tends to take the fun out of it).

Has anyone ever tried a “private reputation system”? That is, you can give people karma and up/down vote posts, but none of that data is visible to the end user. It would certainly be an interesting experiment…to see how people change their ideas, beliefs and especially how they say things when they are criticized or praised by their peers.

However, I don’t necessarily believe this whole idea is a good thing. If you don’t like the way someone is behaving on your pod, you as a podmin have the right to revoke his/her account. If you’re a user, you can just not follow that person or perhaps block them. I don’t see why we need a censorship system in any way on a distributed social network.

Proposal: Diaspora community real-name search

My biggest complaint about Diaspora is finding people.

Both Facebook and LinkedIn have policies that either require or strongly encourage people to use a name that could identify them in real life, rather than a pseudonym.

I would like to see a very clear ‘publish my real name, and encourage search engines to find it’ as a default option in Diaspora.

I’d also like to see a mock-up ‘help’ webpage that lays out very clearly to regular people what this means, and what they are gaining or losing by choosing to either publish their name, or not to publish.

I also think this would dramatically impact the potential issue of hate speech/cyber-bullying. If someone is bullying/hate-speeching, or whatever, I have no desire to censor them. I just wish to be able to clearly point out where they physically are, so that anyone who chooses to can confront them in person. This, in my mind, avoids any legal complications.

I’d also like to be able to block anyone (or at least filter) who does not use a real name from my activity feed.


Outcome: Motion blocked. General consensus is that there are privacy implications in implementing real names for search.

Votes:

  • Yes: 2
  • Abstain: 1
  • No: 1
  • Block: 10

Note: This proposal was imported from Loomio. Vote details, some comments and metadata were not imported. Click here to view the proposal with all details on Loomio.

@troybenjegerdes To be honest, it’s difficult to vote on a proposal where the end result of the proposal doesn’t really achieve anything. I guess this is most closely related to being a feature, since you are proposing ways of publishing one’s name and several other things.

The problem with feature proposals is that they need to either be very exact (blueprint style) or have some code to show for them. Voting on vague proposals about what could be done will not actually get it done. And if it is not a properly documented blueprint of a set of features that everyone can properly vote on, it will have little impact on code that will be written in the future.

Maybe you could document your proposed changes in more details in the wiki, add a link here and then reset the proposal ending time once done to allow more discussion?

Blueprints can be written for example here: http://wiki.diaspora-project.org/wiki/Category:Proposals

This is, yet again, another feature that should be in a fork. You can absolutely expose information on YOUR fork of Diaspora, but it’s never going to be included in diaspora/diaspora because of the wide-reaching security implications. In fact, many people choose to share data with Diaspora in order to remain anonymous to search engines… most of the points you made run exactly opposite to the intentions of many Diaspora users.

Can anyone propose better wording?

What I am asking for is an end-user experience where I can tell my parents ‘just search for my name’ because if I try to tell them to go to a specific HTTP URL, they will just end up typing into whatever search box pops up first.

Why is this a privacy violation to let the end user choose if they want to make their real name public, or not?

The direct connection to hate speech and bullying is that it’s pretty damn trivial for a technically-savvy bully/hate-speech antagonist to continually create new anonymous accounts.

I find that discussions are more civil and accountable if people use real names that might let me find that person in real life.

Flaburgan: I don’t really need to know your real name, but if flaburgan@some_random_pod starts posting about how the holocaust was the greatest thing ever and generally being abusive, how am I to tell if it’s you, or if it’s someone else?

What I am asking for is an end-user experience where I can tell my parents ‘just search for my name’ because if I try to tell them to go to a specific HTTP URL, they will just end up typing into whatever search box pops up first.

Of course you can already do this - just put your real name in the ‘name’ field on your profile, and check the box which says ‘Allow for people to search for you within Diaspora’, and they should be able to find you by your real name - from within Diaspora, but not from a search engine.

I’ve just tried searching for your name within Diaspora, and no result came up. This means either you haven’t used your real name, you haven’t checked the ‘make me searchable’ box, or you’re on another pod from me and the federation of data between pods isn’t working.

That’s the biggest problem in finding people, that pods do not reliably share information with each other. This is the top priority for the community developers, but it is an enormously complex problem in a large decentralised network, and has not been solved yet.

I can’t see why anything needs to change to the profile to achieve what you want: if people want to be found by their real name, they just put their real name into the name field, make themselves searchable, and Bob’s your uncle - once the federation problems have been solved. This leaves each user in full control of what happens to their data, and of the level of privacy they want.

All your proposals amount to dangerous privacy leaks, the existence of which in networks such as the ones you name is largely why Diaspora was set up in the first place. I think bringing hate speech into it as a reason for introducing these privacy leaks is a red herring - enabling the leaking of privacy won’t help prevent hate speech, and could actually help cyber-bullies as they could find out personal information about people using the privacy leaks.

It sounds as though you want a network which is completely different in outlook and ethos from Diaspora, and that privacy is not an important thing to you; perhaps Facebook or another network would suit you better.

I think there are a lot of red herrings in this proposal; it really doesn’t belong in a thread on hate speech/cyber-bullying.

  1. Opt-out (i.e., checked by default) is entirely against the diaspora philosophy of private by default. On Facebook, I absolutely expect the privacy policy to frequently change and expose things by default (super annoying); I have to go and carefully find the options and uncheck them. In diaspora, with the default options, my content is private and nothing terrible happens. When I have time or the desire, I can go and find the (hopefully better designed) privacy controls and start sharing exactly as much content as I want to.

  2. I do not think this would impact hate speech at all. This kind of policy quite obviously impacts the “casual” user without having any effect on someone truly malicious (who can easily take the time to un-check a box if they care). It’s like claiming DRM is an effective way to fight piracy: you only ever affect the casual user (why does the DVD I paid for provide a lower-quality service (enforced advertisements) than the torrent?), while the “competent” user can easily circumvent any deterrents put in place.

By the way, @Troy, your Godwin point is not a good argument here. We need something to block unwanted content. Posting it with your real name or a nickname will not change anything about that.

Flaburgan: I don’t really need to know your real name, but if flaburgan@some_random_pod starts posting about how the holocaust was the greatest thing ever and generally being abusive, how am I to tell if it’s you, or if it’s someone else?

It would be just as easy for a malicious poster to set up an account called troybenjegerdes@random_pod and start posting hate speech. Forcing people to use a screen name that resembles a ‘real’ name changes nothing in terms of one person impersonating another - unless you propose forcing people to upload a scan of their birth certificate when registering - but it does impact on the privacy of many other users in the process, as David McMullin points out.

We can’t solve the issue of identifying people trying to imitate another user - so let’s not even try? :slight_smile: All we can do is make sure pod security is tight and what is outside is outside. And a “report to podmin” link on content and person profiles would be nice, of course.

I like the idea of adding a “report to podmin” feature for content and accounts. I’m unsure whether it’s currently feasible, but perhaps we could also give podmins the ability to blacklist pods from federating content that may cause legal problems or is in violation of that specific pod’s ToS? So if some nut out there starts up holocaustdeniaspora.com, other pods that want nothing to do with content of that nature have a way to distance themselves.

I would recommend instead of implementing a real-name search, the following:

Use the contact APIs of the different social services that a user has authenticated with their accounts. A user opts in to being findable through their connected social accounts, and they can only be found by people they’re already friends with on those other social networks. Therefore, random people can’t just look someone up through their social account alone, but if you’re already friends, you can find them on Diaspora easily.

What about telling people, with a message like “Warning: maybe john@joindiaspora.com is not the same person as john@diasp.eu.”, when they ignore someone (john@joindiaspora.com in this example)?

Of course, we have to bear in mind that both Johns have to be reachable by the user, or that john@diasp.eu has checked the searchable box.

But please, whatever you do, do not create a general blacklist, neither for users nor for content; this is illegal in some places and is totally unnecessary (he/she can just create a new account!). I think you know it, but I want to make it very clear.

Remember: banning is a different thing; banning is not a general blacklist, because it is applied only on that pod.

I find this conversation moot, as I would never use a pod that would censor in any way, shape or form. I find the whole notion of censorship on Diaspora to be repugnant and against the spirit of the whole project. If a podmin is having issues, then they should put up a disclaimer that new users would have to agree to, which basically states that a user takes responsibility for their own data, content and views when they join. Essentially, the podmin is not responsible if the user posts something illegal or offensive.

However, I still find this whole conversation to be just the first step along the slippery slope of turning Diaspora into Facebook. Why was Diaspora founded? To promote privacy and freedom, and here we are talking about censorship? Free speech isn’t just for the topics you want advertised. It’s for the taboo topics too. It’s for the “terrorists” (also consider that under the NDAA pretty much anyone that challenges the gov’t is a “terrorist”) and the pedophiles and holocaust apologists and the Nazis and whoever else wants to express their views.

What one could do is create personal filter lists to block terms they don’t want to show up in their stream. Instead of having the podmin censor users, have each user simply censor out posts they themselves don’t like. You don’t like harsh language? Then block posts that have bad words in them. You don’t like pedophiles? Then block pedophile content from your stream. You don’t like Nazis? Block them from your stream. This could have other benefits as well, by thinning one’s stream and eliminating excess posts.

But ultimately this comes down to an important question of intention: are we trying to make the user more comfortable by getting rid of offensive content for them, or are we trying to go on an offensive moral crusade for them?